From: terje.mathisen@tmsw.no   
      
   Thomas Koenig wrote:   
   > John Levine schrieb:   
   >> According to Thomas Koenig :   
   >>> John Levine schrieb:   
   >>>> There was never any sort of type punning in FORTRAN.   
   >>>   
   >>> [Interesting history snipped]   
   >>>   
   >>>> Fortran ran on many different machines with different floating point
   >>>> formats and you could not make any assumptions about similarities in
   >>>> single and double float formats.
   >>>   
   >>> Unfortunately, this did not keep people from using a feature   
   >>> that was officially prohibited by the standard, see for example   
   >>> https://netlib.org/slatec/src/d1mach.f . This file fulfilled a   
   >>> clear need (having certain floating point constants ...   
   >>   
   >> Wow, that's gross but I see the need. If you wanted to do extremely   
   >> machine specific stuff in Fortran, it didn't try to stop you.   
   >>   
   >>> Fortunately, these days it's all IEEE; I think nobody uses IBM's   
   >>> base-16 FP numbers for anything serious any more.   
   >>   
   >> Agreed, except IEEE has both binary and decimal flavors.   
   >   
   > And two decimal flavors, as well, with binary and densely packed   
   > decimal encoding of the significand... it's a bit of a mess.   
   >   
   > IBM does the arithmetic in hardware, their decimal arithmetic probably   
   > goes back to their adding and multiplying punches, far before computers.   
   >   
   >> It's never been   
   >> clear to me how much people use decimal FP. The use case is clear enough,   
   >> it lets you control normalization so you can control the decimal precision   
   >> of calculations, which is important for financial calculations like bond   
   >> prices. On the other hand, while it is somewhat painful to get correct   
   >> decimal rounded results in binary FP, it's not all that hard -- forty   
   >> years ago I wrote all the bond price functions for an MS DOS application   
   >> using the 286's FP.   
   >   
   > Speed? At least when using 128-bit densely packed decimal encoding   
   > on IBM machines, it is possible to get speed which is probably   
   > not attainable by software (but certainly not very good compared   
   > to what an optimized version could do, and POWER's 128-bit unit   
   > is also quite slow as a result).   
   >   
   > And people using other processors don't want to develop hardware,   
   > but still want to do the same applications. IIRC, everybody but   
   > IBM uses binary encoding of the significand and software, probably   
   > doing something similar to what you did.   
   >   
   Using binary mantissa encoding makes everything you do in DFP quite
   easy, with the exception of (re-)normalization. Our own Michael S has
   come up with some really nice ideas here that make it significantly
   less painful.
      
   That said, for the classic AT&T phone bill benchmark you pretty much   
   never do any normalization in the main processing loop, so software   
   emulation with binary mantissa is perfectly OK.   
      
   A somewhat similar problem is the 1brc (One Billion Row Challenge),
   where you aggregate records of signed temperatures, each with at most
   two integer digits and exactly one decimal digit.
      
   Here the fastest approach is to simply do everything with int16_t
   (i16) min/max values and an i64 accumulator. No DFP needed!
      
   Terje   
   PS. In reality, the 1brc is 90%+ determined by the speed of your chosen   
   hash table implementation.   
      
   --   
   "almost all programming can be viewed as an exercise in caching"   
      