
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.arch      Apparently more than just beeps & boops      131,241 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 130,866 of 131,241   
   MitchAlsup to All   
   Re: floating point history, word order a   
   12 Jan 26 02:05:15   
   
   From: user5857@newsgrouper.org.invalid   
      
   antispam@fricas.org (Waldek Hebisch) posted:   
      
   > MitchAlsup  wrote:   
   > >   
   > > antispam@fricas.org (Waldek Hebisch) posted:   
   > >   
   > >> Thomas Koenig  wrote:   
   > >> > Waldek Hebisch  schrieb:   
   > >> >   
   > >> >> I use a computer algebra system every day.  One possibility here
   > >> >> is to use arbitrary precision fractions.  Observations:
   > >> >> - once the computation gets more complex, numerators and denominators
   > >> >>   tend to be pretty big
   > >> >> - a lot of time is spent computing gcds, but if you try to save on
   > >> >>   gcds and work with unsimplified fractions you may get tremendous
   > >> >>   blowup (like million-digit numbers in what should be a reasonably
   > >> >>   small computation).
   > >> >> - in general, if one needs a numeric approximation, then arbitrary
   > >> >>   precision (software) floating point is much faster
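   [The blowup from skipping gcd reduction is easy to reproduce; a minimal
   sketch (mine, not from the post), comparing unreduced fraction addition
   against Python's auto-reducing fractions.Fraction:]

```python
from fractions import Fraction

# Unreduced fraction addition: (p/q) + (r/s) = (p*s + r*q)/(q*s), no gcd.
def add_raw(a, b):
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

raw = (0, 1)          # running sum, never simplified
red = Fraction(0)     # running sum, reduced by gcd at every step

# Sum 1/k^2 for k = 1..39; even this mild computation separates the two.
for k in range(1, 40):
    raw = add_raw(raw, (1, k * k))
    red += Fraction(1, k * k)

print(raw[1].bit_length())            # unreduced denominator: (39!)^2, ~300 bits
print(red.denominator.bit_length())   # reduced denominator: far smaller
```

   [With longer chains of mixed additions and multiplications the
   unreduced sizes grow much faster still, matching the "tremendous
   blowup" described above.]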
   > >> >   
   > >> > It is also not possible to express irrational numbers,
   > >> > transcendental functions etc.  When I use a computer algebra
   > >> > system, I tend to use such functions, so solutions are usually
   > >> > not rational numbers.
   > >>   
   > >> Support for algebraic irrationalities is by now a standard
   > >> feature in computer algebra.  So is dealing with transcendental
   > >> elementary functions.  Support for special functions
   > >> is weaker, but that is possible too.  Deciding if a number
   > >> is transcendental or not is theoretically tricky, but for
   > >> elementary numbers there is the Schanuel conjecture, which,
   > >> while unproven, tends to work well in practice.
   > >>   
   > >> What troubles many folks is the fact that for many practical
   > >> problems the answers are implicit, so if you want numbers at the
   > >> end you need to do numeric computation anyway.
   > >>   
   > >> Anyway, my point was that exact computations tend to need   
   > >> large accuracy at intermediate steps,   
   > >   
   > > Between 2×n+6 and 2×n+13, which is 3 to 9 bits larger than the
   > > next higher precision.
   >   
   > You are talking about a specific, rather special problem.   
      
   Yes, exactly, where the exact result is either known or computable
   with currently known methods/means. And it's all in the Muller book.
   All I did was to take the typical elementary functions and make their
   evaluation similar, in clock cycles, to FDIV of the same
   operand size; and for this gain in performance, I am willing to
   sacrifice the merest loss in precision:: 1 rounding error every
   "quite large number of calculations"
      
   > A reasonably typical task in exact computations is to compute the
   > determinant of an n by n matrix with k-bit integer entries.
   > Sometimes k is large, but k <= 10 is frequent.  Using
   > reasonably normal arithmetic operations you need slightly
   > more than n*k bits at intermediate steps.  For a similar
   > matrix with rational entries the needed number of bits may
   > be as large as n^2*k.  If you skip simplification of
   > fractions at intermediate steps your numbers may grow
   > exponentially with n.  In the root finding problem that
   > I mentioned below, to get k bits of accuracy you need
   > to evaluate the polynomial at a k-bit number.  If you do
   > the evaluation in exact arithmetic, then at intermediate
   > steps you get n*k bit numbers, where n is the degree of the
   > polynomial.  OTOH in numeric computation you can get a
   > good result with a much smaller number of bits (though
   > the analysis and its result are complex), but still growing
   > with n.
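   [The "slightly more than n*k bits" claim can be checked empirically; a
   sketch (mine, not the poster's) using Bareiss fraction-free
   elimination, where every intermediate value is a minor of the input
   matrix and so obeys a Hadamard-style bound of roughly n*(k + log2 n)
   bits:]

```python
import random

# Bareiss fraction-free elimination: all intermediate values are integers
# (minors of the input matrix), and the exact division by the previous
# pivot keeps their size near the Hadamard bound, ~ n*(k + log2(n)/2) bits.
def bareiss_det(m):
    n = len(m)
    a = [row[:] for row in m]
    sign, prev, max_bits = 1, 1, 0
    for i in range(n - 1):
        if a[i][i] == 0:                    # pivot on a nonzero entry
            for r in range(i + 1, n):
                if a[r][i] != 0:
                    a[i], a[r] = a[r], a[i]
                    sign = -sign            # row swap flips the sign
                    break
            else:
                return 0, max_bits          # column is zero: singular
        for r in range(i + 1, n):
            for c in range(i + 1, n):
                # The // division is exact by construction (Bareiss).
                a[r][c] = (a[r][c] * a[i][i] - a[r][i] * a[i][c]) // prev
                max_bits = max(max_bits, a[r][c].bit_length())
        prev = a[i][i]
    return sign * a[n - 1][n - 1], max_bits

random.seed(1)
n, k = 8, 10
m = [[random.randrange(-(1 << k), 1 << k) for _ in range(n)] for _ in range(n)]
det, bits = bareiss_det(m)
print(det, bits)   # intermediate sizes stay near n*k bits, not n^2*k
```

   [Naive Gaussian elimination over the rationals on the same matrix
   produces the much larger fractions the post describes; Bareiss is the
   standard way to stay within the n*k-bit regime.]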
   >   
   > >>                                       so computational
   > >> cost is much higher than numerics (even arbitrary precision
   > >> numerics tend to be much faster).  As a little example
   > >> in a slightly different spirit, one can try to run an approximate
   > >> root finding procedure for polynomials in rational
   > >> arithmetic.  This solves the problem of potential numerical
   > >> instability, but leads to large fractions which are
   > >> much more costly than arbitrary precision floating point
   > >> (which with some care deals with instability too).
   > >>   
   >   
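   [The fraction growth in exact root finding is easy to see; a minimal
   sketch (my example, not from the thread) running Newton's iteration
   for the positive root of x^2 - 2 in exact rational arithmetic:]

```python
from fractions import Fraction

# Newton's iteration x -> (x + 2/x)/2 for sqrt(2), in exact rationals.
# Correct bits double per step -- but so does the size of the fraction,
# even though Fraction reduces by gcd at every operation, so late
# iterations drag around long bignums.
x = Fraction(3, 2)
denom_bits = []
for _ in range(6):
    x = (x + 2 / x) / 2
    denom_bits.append(x.denominator.bit_length())

print(denom_bits)   # roughly doubles each iteration
```

   [Arbitrary precision floating point at a fixed working precision per
   step avoids this growth, which is the cost comparison made above.]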
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca