

   comp.arch      Apparently more than just beeps & boops      131,241 messages   


   Message 131,077 of 131,241   
   Waldek Hebisch to quadi   
   Re: Combining Practicality with Perfecti   
   14 Feb 26 16:33:41   
   
   From: antispam@fricas.org   
      
   quadi  wrote:   
   >   
   > I remember having read one article in a computer magazine where someone   
   > mentioned that an unfortunate result of the transition from the IBM 7090   
   > to the IBM System/360 was that a lot of FORTRAN programs that were able to   
   > use ordinary real numbers had to be switched over to double precision to
   > yield acceptable results.   
      
   Note that the IBM floating point format effectively lost about 3 bits of
   accuracy compared to the modern 32-bit format.  I am not sure how much it
   lost compared to the IBM 7090, but it looks like it was at least 5 bits.
   Assuming that accuracy requirements are uniformly distributed between
   20 and say 60 bits, we can estimate that a loss of 5 bits affected about
   25% (or more) of the applications that could run using 36 bits.  That is
   "a lot" of programs.
      
   But it does not mean that 36 bits are somehow magical.  Simply, given
   a 36-bit machine, the original author had extra motivation to make sure
   that the program ran in 36-bit floating point.
      
   > And I noticed that a lot of mathematical tables from the old days went up   
   > to 10 digit accuracy, and scientific calculators had 10 digit displays,   
   > calculating internally to a slightly higher precision.   
      
   There were various accuracies.  I have (or maybe had) 100-digit
   logarithm tables.  The trouble with high accuracy tables is that with
   naive use a k-digit table needs 10^k entries, so it quickly becomes
   unmanageably big.  The usual way is to have a main table plus a table
   of correction coefficients to allow easy interpolation.  That means
   that a 2k-digit table needs only 10^k entries.  With k=5 we get a
   reasonably sized table.  So 10-digit tables are the largest ones that
   still have reasonable size and are easy to use.
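   The main-table-plus-correction scheme can be sketched in a few lines.
   This is an illustration under my own assumptions (the "table" here is
   generated with math.log10 rather than printed on paper): a 5-digit
   argument spacing plus a first-derivative correction coefficient
   recovers roughly 10 correct digits.

```python
import math

STEP = 1e-5  # spacing of a 5-digit argument table covering [1, 2)

def table_log10(x):
    """log10(x) for 1 <= x < 2 as a main-table value plus a
    correction-coefficient term, as printed tables arranged it."""
    x0 = math.floor(x / STEP) * STEP     # nearest tabulated argument below x
    main = math.log10(x0)                # entry from the main table
    corr = 1.0 / (x0 * math.log(10))     # correction coefficient: d/dx log10 at x0
    return main + (x - x0) * corr        # easy interpolation

# Residual error is about STEP^2/2 * |f''| < 3e-11, i.e. ~10 digits
# from a table of only 10^5 entries.
```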
      
   Concerning calculators, in one case internal accuracy was about 14
   digits, which agrees with using an 8-byte BCD format (with a 2-digit
   exponent).
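   The nibble accounting works out as follows (the exact register layout
   is my assumption, not something documented here): packing two BCD
   digits per byte, 14 mantissa digits plus a 2-digit exponent fill
   exactly 8 bytes.

```python
def pack_bcd(digits):
    """Pack decimal digits two per byte (packed BCD)."""
    if len(digits) % 2:
        digits = [0] + digits            # pad to an even count of nibbles
    return bytes((hi << 4) | lo
                 for hi, lo in zip(digits[0::2], digits[1::2]))

# 14 mantissa digits + 2 exponent digits = 16 nibbles = 8 bytes
word = pack_bcd([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7] + [0, 2])
```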
      
   Early machines that I know about had really varying accuracies,
   ranging from 23 bits through 43 bits.
      
   AFAICS the main consumer of FLOPS now is graphics.  Most of graphics
   would be happy with lower accuracy, but 32-bit is what is available.
   There is one important thing that needs higher accuracy: determining
   the orientation of a triangle requires computing the sign of the
   determinant of a 3 by 3 matrix.  Assuming input data with 16
   significant bits (adequate for most graphic uses) the determinant
   needs about 48 significant bits, so it works with 64-bit doubles and
   would not work with 36-bit or 32-bit floating point numbers.
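   A small demonstration of the failure mode, with coordinates I chose
   to be nearly collinear: when every intermediate is rounded to IEEE
   single precision, the determinant's sign is lost, while exact integer
   arithmetic (standing in for a wide double here) keeps it.

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def orient_exact(ax, ay, bx, by, cx, cy):
    """Sign of the 3x3 homogeneous determinant, reduced to 2x2 form."""
    d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (d > 0) - (d < 0)

def orient_single(ax, ay, bx, by, cx, cy):
    """Same computation with every intermediate rounded to 32-bit float."""
    p = f32(f32(bx - ax) * f32(cy - ay))
    q = f32(f32(by - ay) * f32(cx - ax))
    d = f32(p - q)
    return (d > 0) - (d < 0)

# Nearly collinear triangle with 16-bit coordinates: the two products
# (46341*46341 and 46342*46340) differ by exactly 1, which a 24-bit
# significand cannot resolve.
a, b, c = (0, 0), (46341, 46342), (46340, 46341)
```

   With these points orient_exact reports a positive turn while
   orient_single collapses to zero, i.e. it cannot tell which side the
   triangle faces.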
      
   The second big user is audio.  Some audio fans want high accuracy;
   32-bit integer probably is good enough for them, 36-bit floating
   point is probably kind of borderline, and 32-bit float is not
   good enough.  But it seems that most audio uses lower accuracy,
   in particular 32-bit floats.
      
   So, the biggest users of floating point probably have no demand   
   for 36-bit floats.   
      
   If you look at other uses, then note that GPU makers used to
   offer single-precision-only GPUs as the main option and fast double
   precision as an expensive extra.  So they understand the need
   for better precision but are reluctant to offer it as a
   standard feature.  36-bit clearly is much more exotic than that.
      
   BTW. You seem to care about floating point.  My personal interests
   go mainly toward integer calculations.  More precisely, I am
   interested in exact results, that is symbolic computation.
   In symbolic computation the main way to speed up computation is to
   compute modulo a prime number.  For this, 61-bit self-complement
   arithmetic would be nice (self-complement means arithmetic modulo
   2^n - 1, and 2^61 - 1 is a prime).  64-bit self-complement would
   be of some use, but not so good because 2^64 - 1 has several
   small factors.  It would be good to have 3-argument arithmetic
   operations of the form
      
      (x op y) mod p   
      
   p could be of restricted form, say 2^64 - a where a could have
   a restricted range, for example 10-bit or 16-bit.  IIUC for
   addition and subtraction the circuit cost of such an operation could
   be quite low; the main cost would be the data path to provide a and
   the encoding space.  For multiplication the cost would be higher, as
   one would need to compute the high bits of the product, shift them
   down, multiply by a and subtract, then perform a final correction
   as in the case of addition.  Still, that would probably add about
   20% of cost compared to normal multiplication, so not so high a
   cost if you need the operation.  Considering that the alternative
   uses several instructions, some of them expensive ones
   (either division or extra multiplication), such an instruction
   could be an attractive one.
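   The multiplication recipe above (take the high bits of the product,
   shift them down, multiply by a, then apply a final correction) is
   especially simple in the Mersenne case a = 1.  A software sketch of
   (x * y) mod (2^61 - 1), written by me to mirror that recipe:

```python
P61 = (1 << 61) - 1   # 2^61 - 1, a Mersenne prime

def mulmod_m61(x, y):
    """(x * y) mod (2^61 - 1) for reduced inputs 0 <= x, y < P61:
    fold the high bits of the product down and add them back
    (the 'a' multiplier is just 1), then the final correction."""
    t = x * y                        # full 122-bit product
    t = (t & P61) + (t >> 61)        # fold high bits down (may carry)
    t = (t & P61) + (t >> 61)        # second fold absorbs the carry
    return t - P61 if t >= P61 else t
```

   The alternative without such folding is a 128-bit division or an
   extra multiplication per reduction, which is exactly the cost the
   proposed 3-argument instruction would avoid.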
      
   --   
                                 Waldek Hebisch   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca