Richard Fateman wrote:   
   > On   
   > Again, apologies for a delayed response, but thanks for your long note!!!   
   > My comments are interspersed.   
   > rjf   
   >   
   > Monday, February 7, 2022 at 6:32:30 PM UTC-8, anti...@math.uni.wroc.pl   
   wrote:   
   > > Richard Fateman wrote:   
   > > > On Sunday, January 2, 2022 at 6:40:15 AM UTC-8, anti...@math.uni.wroc.pl   
   wrote:   
   > > There is also another possible problem: if code is written as   
   > > generic Lisp arithmetic, then for double precision it will be   
   > > much slower than a specialised one. So you really want several   
   > > versions of the code: real double precision, complex double precision,   
   > > and separate arbitrary precision version(s) (you can probably use a   
   > > single version for real and complex arbitrary precision, but   
   > > it may be simpler to have 2 arbitrary precision versions), and   
   > > possibly also a single precision version (ATM in FriCAS there is   
   > > no support for single precision).   
   >   
   > This is certainly true, and I think it is a basis for the Julia fans to say   
   they can   
   > make programs run faster by compiling -- on the fly -- specialized code for   
   particular   
   > types. In lisp, there is certainly the option to compile multiple versions   
   of the same   
   > procedure but with different type declarations. So I don't see that as a   
   problem;   
   > more like an opportunity.   
      
   Well, the point is that to get good performance you need to do more   
   work than mechanical translation.   
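   As a rough illustration of the trade-off being discussed (in Python   
   rather than Lisp, and with illustrative names only, not any system's   
   actual code): a single generic routine can serve every numeric type,   
   which is convenient, but it pays a dynamic-dispatch cost on each   
   operation; a Lisp compiler given type declarations, or Julia's JIT,   
   can instead emit one specialised version per type.   

```python
# Sketch: one generic polynomial-evaluation routine serves several
# numeric types.  Every arithmetic operation here goes through generic
# dispatch; a type-specialised compiled version of the same loop
# (e.g. Lisp with declarations, or Julia's per-type compilation)
# avoids that overhead.  All names are illustrative.
from fractions import Fraction

def horner(coeffs, x):
    """Evaluate a polynomial (highest degree first) at x, generically."""
    acc = coeffs[0]
    for c in coeffs[1:]:
        acc = acc * x + c
    return acc

# The same code path handles double precision, complex double
# precision, and exact rationals:
p = [1, 0, -2]                        # x^2 - 2
print(horner(p, 1.5))                 # float arithmetic
print(horner(p, 1.5 + 0.5j))          # complex arithmetic
print(horner(p, Fraction(3, 2)))      # exact rational arithmetic
```

   The point in the thread is that a mechanical translation gives you only   
   the generic version; getting the fast specialised versions is extra work.   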
      
   > > > Other code is potentially called as foreign function libraries (e.g. Gnu   
   MPFR). I suppose that   
   > > > any of the packages linked to (say) Sage or Julia could be called from   
   Lisp, since there are   
   > > > existing interfaces to C and Python. I don't know if anyone has done   
   this, but again to be part   
   > > > of the Maxima distribution it would have to be platform (hardware,   
   software) agnostic.   
   > > >   
   > > > So are these "special"? I don't know for sure, but I think they are   
   certainly not dated.   
   > > Well, what you mention above are things outside the core. If you think   
   > > that they are important, then you should really go with "new" systems;   
   > > AFAICS they have this part much better covered than Maxima.   
   >   
   > I doubt it.   
      
   What do you mean by "it"? That "new" systems have better non-core   
   parts?   
      
   > If all you want to do is write a C or Python program to do bignum   
   arithmetic, and you   
   > are proficient in those languages, that might be better than using Maxima,   
   especially if you   
   > are unwilling to read the documentation, or download the binary code.    
   (Better to spend   
   > a week in the computer lab than an hour in the library.)   
      
   I am not sure what downloading binary code changes here.   
      
   > > > 3. There is a continuing effort by a host of people who provide fixes,   
   enhancements, and applications   
   > > > in their own library public repositories. There are educational   
   projects, and wholesale adoption of   
   > > > Maxima in schools and school districts. There is an active Maxima   
   mailing list.   
   > > You basically say that there is inertia: in the short term, for Maxima   
   > > folks it is easier to continue to use Maxima than to switch to   
   > > something else. True, inertia is a powerful force. But if you   
   > > look at the number of people behind various systems, Maxima would   
   > > look better than FriCAS, but worse than several "new" systems.   
   >   
   > How many people are using Maxima? Between 11/1/2021 and 2/1/2022 76,209   
   people downloaded   
   > the Windows version of the binary. It is hard to know how many Unix   
   versions, since they are included   
   > in distributions, not obtained directly from sourceforge.   
      
   If you look at improvements, what matters is the people who engage in   
   development. AFAICS Sympy has more developers than Maxima. Sage definitely   
   has _many_ more developers. Even Julia symbolics, which up to now is fairly   
   incomplete, seems to have several developers.   
      
   > > > 4. If there were a set of standard benchmarks for "core"   
   > > > functionalities, there might be a basis for   
   > > > testing if Maxima's facilities were dated.   
   > > In arxiv preprint 1611.02569 Parisse gives an example of polynomial   
   > > factorization and proposes a special method to solve it. ATM FriCAS   
   > > does not do very well on this example (2500s on my machine), but   
   > > certainly better than systems that can not do it at all. However,   
   > > my estimate is that using up-to-date general methods it should take   
   > > a fraction of a second (Magma was reported to need 3s, Parisse's   
   > > special method 2s). So, I would say that possibly all systems have   
   > > some work to do.   
   >   
   > Parisse states in his conclusion that he hopes that other open-source CAS   
   will implement   
   > this method. It does not look too difficult to do in Maxima.   
      
   My advice is to ignore his method. Instead use state-of-the-art general   
   Hensel lifting. This example is very sparse, but general Hensel lifting   
   should work well also on less sparse examples or even completely dense   
   ones (of course cost grows with the number of terms). AFAICS the   
   potentially problematic parts of factorization are factor recombination   
   and the leading coefficient problem. In Parisse's approach you need to   
   solve these several times. In standard Hensel lifting one can   
   use an old Kaltofen trick to limit both problems to essentially a   
   single stage (there is some work in each stage, but it gets   
   much easier after the first stage).   
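   For readers unfamiliar with the idea, the core of Hensel lifting can be   
   sketched on the simplest possible case: lifting a root of a polynomial   
   from mod p to mod p^2 via a Newton step. This is only a minimal   
   illustration of the underlying principle, not the full factorization   
   machinery with recombination and leading-coefficient handling that the   
   paragraph above is about; all names are illustrative.   

```python
# Minimal sketch of the Hensel lifting idea: a simple root r of f mod p
# lifts to a root mod p**2 via the Newton step  r' = r - f(r)*f'(r)^(-1).
# Real factorization code lifts a whole factorization f = g*h mod p and
# then deals with recombination; this shows only the scalar core.

def poly_eval(coeffs, x, m):
    """Horner evaluation of coeffs (highest degree first) modulo m."""
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % m
    return acc

def poly_deriv(coeffs):
    """Derivative, coefficients highest degree first."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def hensel_lift_root(coeffs, r, p):
    """Lift a simple root r of f mod p to a root mod p**2."""
    fp = poly_eval(poly_deriv(coeffs), r, p)
    inv = pow(fp, -1, p)          # requires f'(r) to be a unit mod p
    return (r - poly_eval(coeffs, r, p * p) * inv) % (p * p)

# Example: f(x) = x^2 - 2 has root 3 mod 7; lift it to a root mod 49.
f = [1, 0, -2]
r = hensel_lift_root(f, 3, 7)
print(r, (r * r - 2) % 49)        # r squares to 2 modulo 49
```

   Iterating the same step doubles the precision each time (p, p^2, p^4,   
   ...), which is why lifting itself is cheap; the expensive parts are the   
   recombination and leading-coefficient issues mentioned above.   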
      
   > Factorization time has not   
   > been raised as a limiting issue in any computation of interest that I am   
   aware of. The particular   
   > test example is large.   
      
   I do not think the example is large. A similar thing may appear in a   
   rather small problem due to an unfortunate choice of parametrization   
   (the point is that _after_ factoring it may be clear how to   
   change the parametrization to make the problem smaller; before, it may   
   not be so clear).   
      
   Concerning the "limiting issue": existing codes in many cases avoided   
   factorization due to the old myth of "expensive factorization". With   
   faster factorization you can apply it in more cases. As for   
   more concrete examples, general testing for algebraic dependence   
   (in the presence of nested roots) depends on factorization, and   
   AFAICS factorization is what limits the ability to handle algebraic   
   dependencies. In a similar spirit, computation of Galois groups   
   needs factorization. In both cases you need algebraic   
   coefficients, which tend to make factorization much more   
   costly. If you use the Trager method for handling algebraic   
   coefficients you get a _much_ larger polynomial as the norm.   
      
   > > > I am aware of the testing of indefinite integration of   
   > > > functions of a single variable, comparing Rubi to various other systems.   
   I have some doubts about   
   > > > measurements of Maxima, since they are done through the partly-blinded   
   eyes of Sage. I have run   
   > > > some of the "failed" Maxima tests through Maxima and found they succeed,   
   and indeed find answers   
   > > > that are simpler and smaller than some of the competition. So I would   
   not judge from this.   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   