From: arne@vajhoej.dk   
      
   On 12/16/2025 9:05 AM, Dan Cross wrote:   
   > In article <10h3upk$3f20l$1@dont-email.me>,   
   > Craig A. Berry wrote:   
   >> [...] But with people using scripting   
   >> languages to process massive vectors to train LLM models, it all seems   
   >> pretty puny.   
   >   
   > This is a good point, but, I sometimes wonder if, perhaps, we   
   > need to recalibrate what we mean when we say, "scripting   
   > language." I imagine that you are referring to Python here, as   
   > that seems to be the thing that the kids are all hip on these   
   > days when it comes to model training and such-like, but I think   
   > it's fair to say that that language has grown far beyond   
   > traditional "scripting" use.   
      
   ML, data processing, and web have certainly surpassed admin
   scripting in usage.
      
   > Python is interpreted, yes, but people who are using it to do   
   > numerical analysis are often using the jit-compiled variant,   
      
   Most still use CPython.   
      
   None of the JIT implementations (PyPy, GraalPy, Codon, etc.) has
   really gained traction.
      
   (CPython 3.13+ actually ships with an experimental JIT, but it does
   not provide the same speedup that PyPy and GraalPy can for those
   CPU-intensive cases that should never be done in pure Python anyway.)
      
   The reason: fear of compatibility issues, combined with the fact that
   a JIT usually does not matter.
      
   Because:   
      
   > and   
   > more often the actual heavy computational lifting is being done   
   > in a library that's exposed to Python via an FFI; so the actual   
   > training code is in Fortran or C or some more traditional   
   > compiled language.   
      
   If CPython interpretation uses 0.1-1.0% of total CPU time
   and native library execution uses 99.0-99.9% of total CPU time,
   then ...
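   ... the possible gain is tiny. To put hypothetical numbers on that
   (a sketch of Amdahl's law, assuming a generous 10x JIT speedup on
   the interpreted fraction; the figures are illustrative, not
   measurements):

   ```python
   # Illustrative sketch (Amdahl's law): speeding up only the
   # interpreted fraction barely moves total runtime when native
   # libraries do almost all the work.

   def overall_speedup(interp_fraction, jit_speedup):
       """Total speedup when only the interpreted fraction of the
       workload is accelerated by jit_speedup; the native fraction
       (1 - interp_fraction) is unchanged."""
       return 1.0 / ((1.0 - interp_fraction) + interp_fraction / jit_speedup)

   # Interpreter accounts for 1% of CPU time, JIT makes it 10x faster:
   print(overall_speedup(0.01, 10.0))  # ~1.009, i.e. under 1% faster overall
   ```

   Even a 10x faster interpreter buys less than 1% end to end.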
      
   Arne   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   