

   comp.arch      Apparently more than just beeps & boops      131,241 messages   


   Message 130,412 of 131,241   
   Anton Ertl to Thomas Koenig   
   Re: Multi-precision addition and archite   
   30 Nov 25 14:14:16   
   
   From: anton@mips.complang.tuwien.ac.at   
      
   Thomas Koenig  writes:   
   >Anton Ertl  schrieb:   
   >Both our guesses were wrong, and Scott (I think) was on the right   
   >track - this is a signed / unsigned issue.  A reduced test case is   
   >   
   >void bar(unsigned long, long);   
   >   
   >void foo(unsigned long u1)   
   >{   
   >  long u3;   
   >  u1 = u1 / 10;   
   >  u3 = u1 % 10;   
   >  bar(u1,u3);   
   >}   
      
   Assigning to u1 changed the meaning, as Andrew Pinski noted; so the   
   jury is still out on what the actual problem is.   
      
   >This is now https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122911 .   
      
   and a revised one at   
      
      
   (The announced attachment is not there yet.)   
      
The latter case is interesting: real_ca and spc became global, symbols[] is still local, and no assignment to real_ca happens inside foo().
      
   So one way the compiler could interpret this code might be that   
   real_ca gets one of the labels whose address is taken in some way   
unknown to the compiler; then it has to preserve all the code reachable
   through the labels.   
      
   Another way to interpret this code would be that symbols is not used,   
   so it is dead and can be optimized away.  Consequently, none of the   
   addresses of any of the labels is ever taken, and the labels are not   
   used by direct jumps, either, so all the code reachable only by   
   jumping to the labels is unreachable and can be optimized away.   
      
   Apparently gcc takes the latter attitude if there are <=100 labels in   
   symbols, but maybe something like the former attitude if there are   
   >100 labels in symbols.  This may appear strange, but gcc generally   
   tends to produce good code in relatively short time for Gforth (while   
   clang generates horribly slow code and takes extremely long in doing   
   so), and my guess is that having such a cutoff on doing the usual   
   analysis has something to do with gcc's superior performance.   
      
I guess that if you treat symbols as in the original code (i.e.,
return it in one case), you can reduce the labels further without the
compiler optimizing everything away.  I don't dare predict when the
compiler will stop generating the inefficient variant.  Maybe it has
to do with the cutoff.
      
   - anton   
   --   
   'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'   
     Mitch Alsup,    
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca