From: no.email@nospam.invalid   
      
   minforth writes:   
   > I don't do parallelization, but I was still surprised by the good   
   > results using FMA. In other words, increasing floating-point number   
   > size is not always the way to go.   
      
   Kahan was an expert in clever numerical algorithms that avoid roundoff   
   errors, Kahan summation being one such algorithm. But he realized that   
   most programmers don't have the numerics expertise to come up with   
   schemes like that. A simpler and usually effective way to keep
   roundoff error from swamping the result is simply to use double or
   extended precision. So that is what he often suggested.
      
   Here's an example of a FEM calculation that works well with 80 bit but   
   poorly with 64 bit FP:   
      
   https://people.eecs.berkeley.edu/~wkahan/Cantilever.pdf   
      
   > Anyhow, first step is to select the best fp rounding method ....   
      
   Kahan advised compiling the program three times, once for each IEEE   
   rounding mode. Run all three programs and see if the outputs differ by   
   enough to care about. If they do, you have some precision loss to deal
   with somehow, possibly by using wider floats.
      
   https://people.eecs.berkeley.edu/~wkahan/Mindless.pdf   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   