muta...@gmail.com wrote:   
   > On Tuesday, July 20, 2021 at 11:31:52 AM UTC+10, anti...@math.uni.wroc.pl   
   wrote:   
   >   
   > > > > > > You needed   
   > > > > > > better model to get more memory. AFAIK all other models   
   > > > > > > had 32-bit registers (but there were rather severe restrictions   
   > > > > > > on max supported memory). But if you think that 32-bit   
   > > > > > > registers are too expensive and memory is cheap enough to   
   > > > > > > have more of it you could create fictional 360/27 having   
   > > > > > > say 2 or 3 register extended to 20 bits.   
   > > > > >   
   > > > > > That exceeds the 16-bit registers. That's not achieving   
   > > > > > what the 8086 achieved.   
   > > > >   
   > > > > That is better thing: transparent compatibility to bigger   
   > > > > machines that seem to be so dear to you. Note that   
   > > > > beyond tiny model, what 8086 does exceeds the 16-bit registers:   
   > > > > you need 32 bit addresses (segment + offset) and pair   
   > > > > of register emulating bigger one. Explicit logically 32-bit,   
   > > > > physically 20-bit register is cleaner.
   > > >   
   > > > It may be cleaner/easier, but it defeats the purpose for   
   > > > what segmentation was created for. Intel didn't come   
   > > > up with a segmented processor for fun, and they didn't   
   > > > use a 4-bit shift because they'd been smoking wacky   
   > > > weed. It was the absolute correct engineering solution.   
   >   
   > > I wonder what, in your opinion, was the purpose of segmentation?
   >   
   > To allow multiple tiny-mode applications to run, without   
   > requiring alignment on a 64k address boundary, thus   
   > wasting space.   
      
   That has at least three different solutions: a relocation (base)
   register, paging, and position-independent code. Note that the
   relocation register is set up by the operating system, so you can
   enlarge it without changing the application (of course, the operating
   system must know, and possibly compute, the size of the relocation
   register). For many years paging has been the preferred method of
   running multiple applications.
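   The base-register scheme above can be sketched as follows (an
   illustrative Python model; the function name, base addresses and limit
   are hypothetical, chosen only to show that programs need no 64K
   alignment):

```python
# Illustrative model of base-register relocation: the OS picks the base,
# the program uses only logical addresses, so the OS can place (or move)
# the program anywhere without changing its code.
def relocate(logical_addr, base, limit):
    """Translate a logical address; trap if it exceeds the allocation."""
    if logical_addr >= limit:
        raise MemoryError("address beyond program's allocation")
    return base + logical_addr

# Two copies of the same program, loaded at arbitrary addresses
# (neither is 64K-aligned):
print(hex(relocate(0x0100, base=0x12340, limit=0x8000)))  # 0x12440
print(hex(relocate(0x0100, base=0x2A000, limit=0x8000)))  # 0x2a100
```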
      
   Position-independent code runs at whatever address you load
   it, so you can load multiple programs into different areas
   of memory. Of course, it depends on the compiler producing
   special code, and to make it efficient you need appropriate
   hardware support (in particular, PC-relative addressing is
   desirable).
      
   > > 4-bit shift was actually kind of reasonable, certainly better   
   > > than 16 (or 13) bit shift that you advocate.   
   >   
   > I advocate 4-bit shift on the 8086, and 16-bit shift if you   
   > have a machine capable of addressing 4 GiB and that   
   > amount of memory actually installed, and an effective   
   > 13-bit shift on the 80386 where design limitations limit   
   > you to 512 MiB.   
   >   
   > As opposed to what alternative for large memory model   
   > 8086 programs? There's some advantage to restricting   
   > them to 1 MiB instead of letting them fly on an 80386?   
      
   The simple fact is that a program which has more than 1M of code
   will not run on a 1M machine. More generally, if a program
   really needs more than 1M of data, it will not run on a 1M
   machine. In such cases, why bother with segments?
   What remain are programs that are small enough to fit
   in 1M and do some useful work, and which can also do
   useful work on larger datasets. IME a significant percentage
   of such programs involved a largish array; they would fail
   with large data in the large memory model. At best you could
   use the huge model, at the cost of a significant slowdown.
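   The large-model failure and the huge-model cost can both be seen in a
   toy model of 8086 pointer arithmetic (illustrative Python; the function
   names are mine, and this is a sketch of the idea, not real compiler
   output):

```python
# Sketch of 8086 far vs. huge pointer arithmetic. In the large model
# only the 16-bit offset participates in arithmetic, so stepping past
# 64K silently wraps; the huge model renormalizes segment:offset after
# every step, which is where the slowdown comes from.
SHIFT = 4  # the 8086's 4-bit segment shift

def phys(seg, off):
    return ((seg << SHIFT) + off) & 0xFFFFF   # 20-bit physical address

def large_add(seg, off, n):
    return seg, (off + n) & 0xFFFF            # offset wraps; segment untouched

def huge_add(seg, off, n):
    p = phys(seg, off) + n                    # full 20-bit arithmetic...
    return (p >> SHIFT) & 0xFFFF, p & 0xF     # ...then renormalize (extra work)

# Step 64 KiB past segment 0x1000:
print(large_add(0x1000, 0x0000, 0x10000))     # wraps back: (0x1000, 0x0000)
print(huge_add(0x1000, 0x0000, 0x10000))      # advances:   (0x2000, 0x0000)
```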
      
   So you are dealing with a small class of programs. And apparently
   you did not realize that if you _have to_ deal with
   segments (say, for compatibility with the 8086), then what
   the 286 and 386 did is much better.
      
   > > 8086 was "correct" in sense that it was good enough to make   
   > > a lot of money for Intel (unlike failures like 432 or   
   > > Itanium). But they could make better processor.   
   >   
   > IBM selected it for a reason too. IBM wasn't on wacky   
   > weed either. It was the right processor for what IBM   
   > had in mind - CP/M compatibility.
      
   IBM did not have much choice. They started with a classic
   8-bit design and in the middle realized that they needed
   more. IIUC IBM did not want the Motorola 68000 because it
   was more expensive and had a 16-bit bus; for cost reasons
   IBM wanted to keep an 8-bit bus. Motorola later introduced
   a version with an 8-bit bus, but it was too late. I am not
   sure if IBM seriously looked at the Z8000, but it had a
   16-bit segment shift, which is worse than what Intel did.
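   A quick way to see why the shift width matters (illustrative Python;
   the sample segment values are arbitrary): the shift sets both how far
   segments can reach and how finely programs can be placed, which is the
   64K-alignment waste mentioned earlier in the thread.

```python
# The segment shift determines placement granularity and total reach.
def segment_base(seg, shift):
    return seg << shift

# 4-bit shift (8086): bases every 16 bytes, reach 1 MiB.
print(hex(segment_base(0x1234, 4)))    # 0x12340 -- 16-byte granularity
print((1 << 16) << 4)                  # 1048576 bytes = 1 MiB reach

# 16-bit shift (Z8000-style): bases only at 64 KiB boundaries.
print(hex(segment_base(0x0003, 16)))   # 0x30000 -- 64 KiB granularity
```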
      
   At that time the personal computer market was growing fast,
   and IBM felt that they needed to react quickly, otherwise
   the competition would become too strong.
      
   > > >   
   > > > No problem with that either. It is up to individual programmers   
   > > > to clear the register and do an ICM B'0111'.   
   >   
   > > OK, "individual programmers" should get crystall ball in 1965   
   > > and use it to find out what IBM will do around 1978, and   
   > > code apropriately.   
   >   
   > Yes, and that is basically what I am doing. I am wondering:   
   >   
   > 1. What SHOULD have been done.   
   >   
   > 2. Whether it was POSSIBLE to figure this out from first   
   > principles, possibly by Babbage. Maybe even pre-Babbage.   
      
   Well, a significant reason for the 360 series was that earlier
   machines had run out of address bits. So it was predictable
   that 24 bits would run out at some time in the future.
   
   In the instruction set, IBM missed PC-relative instructions
   and the usefulness of larger immediates. In software conventions
   IBM underestimated or overlooked the usefulness of a stack.
      
   OTOH, when the 360 was designed, it was not clear if the concept
   of a series of compatible machines was feasible. I mean,
   it sounds nice, but people make errors, and it was not
   clear that the inevitable errors would not destroy compatibility.
   There was also the question of economic cost. At the hardware
   level, the 360/30 is a machine with 16-bit registers and
   8-bit memory with a 1.5 us cycle. 360 instructions are
   interpreted by microcode, which runs with a 750 ns
   cycle from ROM. With different microcode it would be a
   16-bit mini and could probably execute 3-5 times
   more instructions than as a 360. So there was a nontrivial
   cost to compatibility.
      
   With hindsight, the concept of CKD discs was a non-starter:
   CKD support led to complicated software, and consequently
   discs were hard to use on small machines (the disc access
   routines did not fit in RAM). The initial idea of CKD was to
   move some work from the CPU to the disc controller, but with
   growing CPU speeds the result was an overall slowdown.
      
   You may pretend to derive something from first principles,
   but the fact is that many early predictions were wrong.
   It seems that most pioneers underestimated the complexity
   of software, its size, and the effort needed to create it.
   Also, at all times the specific properties/restrictions of
   hardware play a role. Do you think that anybody would
   bother with the complexity of programming an 8-core processor
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   