
Forums before death by AOL, social media and spammers... "We can't have nice things"

   sci.math.symbolic      Symbolic algebra discussion      10,432 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 8,845 of 10,432   
   Richard Fateman to Waldek Hebisch   
   Re: on sin(very large number)   
   04 Aug 15 23:08:50   
   
   From: fateman@cs.berkeley.edu   
      
   On 8/4/2015 5:12 AM, Waldek Hebisch wrote:   
   > Nasser M. Abbasi  wrote:   
   >> I was answering someone's question on a Matlab forum, and noticed   
   >> something interesting.   
   >>   
   >> Maple:   
   >> ========   
   >> Digits:=60: sin(2*10^30): evalf(%);   
   >> 0.17950046751493908795061771231643809505098047699633484280836744\   
   >>    698514457349325219   
   >>   
   >> Mathematica:   
   >> ===========   
   >> In[21]:= N[Sin[2*10^30], 60]   
   >> Out[21]= 0.179500467514939087950617712316438095050980476996334842808367   
   >>   
   >> Mupad:   
   >> ======   
   >> Digits:=60:   
   >> simplify(sin(2.0*10^30));   
   >> -0.6054240282319655434839500429688996518962085247039794921875   
   >>   
   >> But when calling mupad from Matlab, it now gives a different answer   
   >> (same as Maple and Mathematica)   
   >>   
   >>>> evalin(symengine,'DIGITS := 60: simplify(sin(2.0*10^30))')   
   >> 0.17950046751493908795061771231643809505098047699633484280836\   
   >>    44698514564171539337   
   >>   
   >> But Matlab gives a different answer   
   >> =================================   
   >>>> sin(2*10^30)   
   >>    -0.018662125294758   
   >>   
   >> Which is the correct result, and what is the algorithm used for such   
   >> evaluations when the argument of trig is large?   
   >   
   > "correct" depends of definition.  Of course the '0.1795...' approximation   
   > has much smaller error than the other one.  However, if you   
   > perform calculations in strightforward way using machine arithmetic   
   > you get essentially random result.   
      
   Not really.   
   The essence of (some of) the confusion above is that if you compute, in   
   IEEE double-float,   2.0d0 * (10^30)  then you have a number   
   that is approximately   
      
   2.000000000000000039769249677312e30   
      
   and if you compute the number 2*10^30  exactly  you have a different   
   number.   
      
     So it would be quite a coincidence if they had the same sin().   
   There is nothing random about this.   
   (Maybe Mupad's 2.0  is  SINGLE  precision, for yet a third number)   
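This is easy to check in any language whose floats are IEEE doubles; here is a short Python sketch (Python chosen only for illustration, the same holds in C or Matlab):

```python
import math

# The literal 2e30 is the IEEE double nearest 2*10^30; its exact
# integer value is the one quoted above, not 2*10^30 itself.
exact_double = int(2e30)
print(exact_double)               # 2000000000000000039769249677312
print(exact_double - 2 * 10**30)  # 39769249677312

# A libm that reduces the argument correctly returns the sine of
# that exact double, matching the Matlab value quoted above:
print(math.sin(2e30))             # approx -0.018662125294758
```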
      
   So, the question is if   
   > 'sin' should do heroic effort to get smaller error.   
   I think the answer is yes because  (a) it is not so hard and (b) it   
   would be wrong to provide a bad answer.   
      
   There   
   > is one school, which says yes.  In more extreme form they require   
   > error smaller than half of machine precision, which means that   
   > to deliver "correct" machine precision result one has to   
   > use multiple precision calculations.   
   Not much.  All you need is to have a very accurate value of pi to   
   reduce the argument into a standard interval (say 0 to pi/8) as a   
   double-float.   
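As a sketch of that recipe in Python (a toy version, assuming 100 digits of pi are enough for arguments of this size; production libraries use the Payne-Hanek scheme with a fixed-point 2/pi rather than rational arithmetic, and reduce further into octants):

```python
from fractions import Fraction
import math

# pi to 100 decimal digits, stored as an exact rational.
PI_DIGITS = ("1415926535" "8979323846" "2643383279" "5028841971"
             "6939937510" "5820974944" "5923078164" "0628620899"
             "8628034825" "3421170679")
PI = Fraction(int("3" + PI_DIGITS), 10**100)

def sin_big(x: float) -> float:
    """sin(x) for a double x, reducing the argument with 100-digit pi."""
    xr = Fraction(x)           # the double's exact rational value
    two_pi = 2 * PI
    k = round(xr / two_pi)     # nearest integer multiple of 2*pi
    r = xr - k * two_pi        # remainder in [-pi, pi], error ~ k * 1e-100
    return math.sin(float(r))  # remainder is small; plain sin is accurate

print(sin_big(2e30))           # agrees with a correctly reducing math.sin
```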
      
      The other school says   
   > that extra effort is normally wasted: when argument is approximate   
   > then no heroics in 'sin' can salvage precision.   
      
   This is quite wrong, in my opinion.  There is usually no reason to think   
   that any input to a library routine is anything but the floating-point   
   representation of an exact rational number, unless you are explicitly   
   doing interval arithmetic, or have an extra "error" argument, or have   
   reason to believe (e.g. in graphics) that the answer need be right only   
   to the nearest pixel.  And even if you are doing interval arithmetic,   
   the endpoints should be viewed as exact rational numbers too.   
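Python's fractions module makes the "exact rational" view concrete; for instance, the double written as 0.1 is exactly a dyadic rational, not 1/10 (a small sketch):

```python
from fractions import Fraction

r = Fraction(0.1)               # the exact rational value of the double 0.1
print(r)                        # 3602879701896397/36028797018963968
print(r.denominator == 2**55)   # True: a dyadic rational
print(r - Fraction(1, 10))      # the exact (tiny, positive) representation error
```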
      
     And for   
   > precise arguments one can request evaluation of 'sin' in   
   > better precision.   
      
   I think that methods returning nearly full precision in machine floats   
   for the elementary functions are well documented, for example in papers   
   by Ping Tak Peter Tang in ACM TOMS and elsewhere.   
      
      
   >   
   > Note that for 'sin' there are tricks to substantially reduce   
   > cost of multiple precision calculations.  But no such tricks   
   > are known for nonelementary functions (in particular for   
   > multivariate library functions).   
      
   I don't understand this.  There are particular tricks for particular   
   functions.  Mathematica claims to have arbitrary-precision algorithms   
   for everything it has built in.  Maybe some other CASes make similar   
   claims.  MPFR has quite a few functions too.   
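None of those systems publish their exact internals, but for sin in particular nothing exotic is required: reduce the (exact) argument modulo 2*pi computed to sufficient precision, then sum the Taylor series. A self-contained Python sketch over exact rationals (all function names here are made up for illustration) reproduces the 60-digit value quoted at the top of the thread:

```python
from fractions import Fraction

def atan_inv(n: int, eps: Fraction) -> Fraction:
    """arctan(1/n) by its Taylor series, truncated below eps (exact rationals)."""
    power = Fraction(1, n)       # (1/n)^k for odd k
    x2 = power * power
    total, k, sign = Fraction(0), 1, 1
    while power / k > eps:       # alternating series: error < first omitted term
        total += sign * power / k
        power *= x2
        k += 2
        sign = -sign
    return total

def pi_rational(eps: Fraction) -> Fraction:
    """pi via Machin's formula, accurate to within eps."""
    return 16 * atan_inv(5, eps / 32) - 4 * atan_inv(239, eps / 8)

def sin_to_digits(x: Fraction, digits: int) -> Fraction:
    """sin(x) for exact rational x (|x| well below 10^50), good to ~digits digits."""
    eps = Fraction(1, 10**(digits + 10))
    # extra pi precision absorbs the magnification by k ~ x/(2*pi) below
    pi = pi_rational(Fraction(1, 10**(digits + 60)))
    two_pi = 2 * pi
    k = round(x / two_pi)
    r = x - k * two_pi           # remainder in [-pi, pi], still exact
    term, total, m = r, Fraction(0), 1
    while abs(term) > eps:       # Taylor series of sin at 0
        total += term
        m += 2
        term *= -r * r / (m * (m - 1))
    return total

# First 60 digits of sin(2*10^30), matching Maple/Mathematica above:
v = sin_to_digits(Fraction(2 * 10**30), 60)
print("0." + str(int(v * 10**60)))
```

Note the cost: the rationals involved grow to thousands of digits, which is exactly the "pay for it in time" point below.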
      
      If you want very high precision you should   
   expect to pay for it in time.   
      
   >   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca