
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.arch      Apparently more than just beeps & boops      131,241 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 129,885 of 131,241   
   Anton Ertl to John Savard   
   Re: Linus Torvalds on bad architectural    
   11 Oct 25 07:18:16   
   
   From: anton@mips.complang.tuwien.ac.at   
      
   John Savard  writes:   
   >On Fri, 03 Oct 2025 08:58:32 +0000, Anton Ertl quoted:   
   >> |If somebody really wants to create bad hardware in this day and age,   
   >> |please do make it big-endian, and also add the following very   
   >> |traditional features for sh*t-for-brains hardware:   
   >   
   >I think that for a computer to be big-endian is a good thing.   
      
   Whatever the technical merits of different byte orders may be (and
   the names "big-endian" and "little-endian" already indicate that far
   more discussion has been expended on the topic than these merits
   justify), little-endian has won, and that's its major merit, and
   big-endian's major demerit.
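
   [Editor's note: a concrete sketch, not from the original post, of
   what the two orders mean for the same 32-bit value, using Python's
   struct module:]

   ```python
   import struct

   value = 0x0A0B0C0D

   # Little-endian: least significant byte at the lowest address.
   little = struct.pack('<I', value)   # b'\x0d\x0c\x0b\x0a'

   # Big-endian: most significant byte at the lowest address.
   big = struct.pack('>I', value)      # b'\x0a\x0b\x0c\x0d'
   ```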
      
   Any big-endian architecture will suffer from less software support,   
   and conversely, if software wants to include support for this   
   hardware, that results in extra development effort, i.e., extra cost   
   (not for all software, but for some).  And Linus Torvalds is not   
   willing to expend this effort, not even if the initial patches for   
   supporting such an architecture come for free, because the additional   
   effort would be ongoing.   
      
   IBM has recognized the signs of the times, and added full-blown   
   little-endian support to Power (including unaligned accesses), and in   
   their Linux efforts retracted their support for the big-endian Power   
   and threw their weight behind little-endian Power.   
      
   Standardization has lots of merits, and deviating from an established   
   standard is a step one should not take lightly.   
      
   >But more importantly, it means that binary integers are ordered the same   
   >way as packed decimal integers, which are ordered the same way as integers   
   >in character text form.   
      
   Says who?  In a course we were a group of five who had to write some   
   program dealing with BCD numbers in 80286 assembly language.  We   
   divided the work up, with each one writing some routines.  Eventually,   
   on integration testing, we found that half of the group had   
   interpreted the numbers to be represented in little-endian order   
   (because the CPU was little-endian), and the other half had   
   interpreted them to be represented in big-endian order (because that   
   results in more readable memory dumps); and none of us thought that   
   any of the others would implement the other byte order.  So no, the   
   byte order of BCD numbers is not obvious.   
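
   [Editor's note: a hypothetical sketch, not the course code, of the
   two readings the group arrived at -- the same packed-BCD bytes in
   memory decode to different numbers depending on the assumed order:]

   ```python
   def bcd_to_int(data, big_endian):
       """Decode packed BCD: two decimal digits per byte.
       big_endian selects which end of the byte string holds
       the most significant digit pair."""
       seq = data if big_endian else bytes(reversed(data))
       n = 0
       for byte in seq:
           n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
       return n

   digits = bytes([0x12, 0x34])            # the same bytes in memory
   # bcd_to_int(digits, big_endian=True)  -> 1234
   # bcd_to_int(digits, big_endian=False) -> 3412
   ```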
      
   >> | - only do aligned memory accesses   
   >   
   >Nearly all memory access are, or could be, aligned. Performance is   
   >improved if they are. As long as there's some provision to handle   
   >unaligned data, such as a move characters instruction, data structures can   
   >be dealt with for things like communications formats.   
   >I'm not saying it isn't bad, just that it was excusable before we had as   
   >many transistors available as we do now.   
      
   Again, the merit of supporting unaligned accesses in this day and age   
   is that more software will run on your hardware, and the demerit of   
   not doing it is that extra software effort is required for some   
   software to support it, as you outline.   
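
   [Editor's note: that extra software effort typically means composing
   an unaligned load out of single-byte loads.  A minimal sketch; the
   helper name is made up:]

   ```python
   import struct

   def load_u32_le_bytewise(buf, off):
       """Compose an unaligned little-endian 32-bit load from four
       single-byte loads, as software must do on hardware that only
       performs aligned accesses."""
       return (buf[off]
               | buf[off + 1] << 8
               | buf[off + 2] << 16
               | buf[off + 3] << 24)

   buf = bytes(range(16))
   # At an odd offset this agrees with a direct unaligned read:
   # load_u32_le_bytewise(buf, 3) == struct.unpack_from('<I', buf, 3)[0]
   ```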
      
   >Failing to support the entire IEEE 754 floating-point standard just needs   
   >to be documented. Expecting software to fake it being implemented is not   
   >reasonable: as long as denormals instead produce zero as the result, one   
   >just has an inferior floating-point format, not a computer that doesn't   
   >work.   
      
   Software that expects a-b == 0.0 to give the same result as a == b   
   (as guaranteed by IEEE 754 40 years ago) won't work.  What do you   
   mean by "not a computer that doesn't work" if the computer does not   
   run software with the intended results?   
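
   [Editor's note: to make the breakage concrete, a simulation of a
   flush-to-zero machine -- ordinary IEEE arithmetic with subnormal
   results flushed by hand.  Two distinct doubles whose difference is
   subnormal compare unequal, yet their FTZ difference is 0.0:]

   ```python
   import math
   import sys

   SMALLEST_NORMAL = sys.float_info.min    # 2**-1022 for IEEE double

   def sub_ftz(a, b):
       """Subtraction on a hypothetical flush-to-zero machine:
       subnormal results are flushed to (signed) zero."""
       r = a - b
       if r != 0.0 and abs(r) < SMALLEST_NORMAL:
           return math.copysign(0.0, r)
       return r

   a = 1.5 * SMALLEST_NORMAL               # a normal number
   b = 1.0 * SMALLEST_NORMAL               # a different normal number
   # IEEE 754: a - b is a subnormal, so a - b != 0.0, matching a != b.
   # FTZ: sub_ftz(a, b) == 0.0 even though a != b.
   ```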
      
   I take pride in the portability of my software, but for things that   
   have been settled in the mainstream (byte order, alignment, IEEE FP,   
   among other things), there must be a very good reason to support   
   deviants.  E.g., RWX mappings have worked on every OS since the   
   beginning of mmap(), and are necessary for JITs.  Trying to mmap RWX   
   fails on MacOS on Apple Silicon (it works on the same MacOS version on   
   Intel hardware, and it works on the same Apple Silicon under Linux, so   
   this is a voluntary removal of a capability by Apple).  As a result,   
   the development version of Gforth did not work on MacOS on Apple   
   Silicon for several years.   
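
   [Editor's note: the probe that fails on MacOS on Apple Silicon can
   be sketched with Python's mmap module, as a stand-in for the C
   mmap() call a JIT would make; on Linux the mapping is granted:]

   ```python
   import mmap

   def try_rwx_mapping(size=4096):
       """Attempt an anonymous read-write-execute mapping, as a JIT
       compiler would.  Returns True if the OS grants it, False if
       the OS refuses (as MacOS on Apple Silicon does)."""
       try:
           m = mmap.mmap(-1, size,
                         prot=mmap.PROT_READ | mmap.PROT_WRITE
                              | mmap.PROT_EXEC)
           m.close()
           return True
       except (OSError, ValueError):
           return False
   ```

   [The hoops Anton mentions below are Apple's separate JIT path
   (MAP_JIT mappings with write/execute toggling) rather than a plain
   RWX mapping.]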
      
   My plan for fixing that was to just disable the JIT compiler and fall   
   back to the threaded code interpreter on that OS, but Bernd Paysan   
   actually decided to jump through the hoops that Apple sets up for   
   people writing JIT compilers.  The result is a speedup by a factor of   
   2-3   
   (times are run-times in seconds):   
      
    sieve bubble matrix   fib   fft   
    0.108  0.107  0.071 0.119 0.057 threaded code on Mac Mini M1 MacOS   
    0.052  0.041  0.027 0.038 0.018 JIT compiler on Mac Mini M1 MacOS   
    0.029  0.034  0.015 0.044 0.015 JIT compiler on Core i5-1135G7 Linux   
      
   For comparison, I also provided numbers for laptop hardware   
   contemporary with the M1.   
      
   - anton   
   --   
   'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'   
     Mitch Alsup,    
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca