From: no.email@nospam.invalid   
      
   dxf writes:   
   >> That 30% difference was because VFX doesn't attempt to optimize locals.   
   > What's the evidence? My observation is compilers do not generate   
   > native code independently of the language. Parameter passing   
   > strategies differ between C and Forth and this necessarily affects the   
   > code compilers lay down.   
      
   Two things: 1) comparisons between VFX and other compilers such as
   iForth, and 2) the observation that there is any difference at all
   between the generated code for the two versions of EMITS under VFX.
      
   This isn't a question of C vs Forth. It's two equivalent pieces of
   Forth code being compiled by the same optimizing Forth compiler,
   with one version coming out worse where the two should compile
   identically.
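
   For concreteness, here is the kind of pair I mean.  The thread's
   actual definitions aren't quoted here, so take these as
   illustrative; I'm assuming EMITS means "emit CHAR a total of N
   times":

     \ Stack version
     : EMITS ( n char -- )  SWAP 0 ?DO  DUP EMIT  LOOP  DROP ;

     \ Locals version, same meaning (Forth-2012 locals)
     : EMITS ( n char -- )  {: n char :}  n 0 ?DO  char EMIT  LOOP ;

   An optimizer that handled locals well would compile both to the
   same machine code.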
      
   > For me it comes down why have I chosen to use Forth. The philosophy   
   > of it appeals to me in a way other languages don't. There's the   
   > question which forth - because forth has essentially split down two   
   > paths with rather incompatible motivations.   
      
   I gather that one path is industrial users who want there to be a   
   standard with well-supported commercial implementations, and who want to   
   run development projects with large teams of programmers (the Saudi   
   airport being the classic example).   
      
   I guess the other path is something like solo practitioners who don't   
   really care about standardization, perhaps because they just want the   
   most direct way to an end result. Philosophical appeal is another such   
   motivation. That's fine too, but partly a matter of personal taste.   
      
   What I'm unclear about is what the philosophical purist path has to say   
   about optimizing compilers. I think anyone wanting to reject locals for   
   reasons of code efficiency probably should be using a VFX-style
   compiler. My own idea of purity says to use a simple interpreter and   
   accept the speed penalty, using CODE when needed.   
      
   FWIW, most of the code I write these days doesn't spend much time on   
   computation. It might spend 100ms retrieving something over the   
   network, and then 1ms computing. So if the computing part somehow sped   
   up by 1000x, I wouldn't notice or care about the difference.   
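
   A quick sanity check on that arithmetic, using the hypothetical
   numbers above (times in microseconds, so plain integers work in
   any standard Forth):

     100000 1000 +        .  \ network + compute today:   101000 us
     100000 1000 1000 / + .  \ compute sped up 1000x:     100001 us

   That is an overall difference of about 1%, well under the jitter
   of a network round trip.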
      
   FWIW 2, I suspect most of the world's compute cycles right now are
   spent in GPU kernels and large parallel batch jobs, rather than in
   ordinary single-CPU programs.
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   