
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.lang.c      Meh, in C you gotta define EVERYTHING      243,242 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 241,456 of 243,242   
   Janis Papanagnou to Kaz Kylheku   
   Re: Optimization of cascading recursions   
   24 Oct 25 20:31:27   
   
   From: janis_papanagnou+ng@hotmail.com   
      
   On 24.10.2025 19:14, Kaz Kylheku wrote:   
   > On 2025-10-24, Janis Papanagnou  wrote:   
   >> On 24.10.2025 16:35, Michael S wrote:   
   >>> On Fri, 24 Oct 2025 15:36:58 +0200   
   >>> Janis Papanagnou  wrote:   
   >>>   
   >>>> On 24.10.2025 09:53, Rosario19 wrote:   
   >>>>>   
   >>>>>   
   >>>>>> into the above pair of functions, or anything similar.   
   >>>>>>   
   >>>>>> That's not ordinary optimization of the type which improves the   
   >>>>>> code generated for the user's algorithm; that's recognizing and   
   >>>>>> redesigning the algorithm.   
   >>>>>>   
   >>>>>> The line between those is blurred, but not /that/ blurred.   
   >>>>>   
   >>>>> int fib1(int n){return fib_tail(0, 1, n);}   
   >>>>>   
   >>>>> is different from   
   >>>>>   
   >>>>> int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}   
   >>>>>   
   >>>>>   
   >>>>> both these functions calculate the result starting from the pair   
   >>>>> 0, 1, but fib() starts at n, goes down until n=0, then goes back   
   >>>>> up until n=input, whereas fib1() only goes down from n. Looking   
   >>>>> at the calls (fib makes 2 calls per call... whereas fib_tail()   
   >>>>> makes just 1), I think fib() is O(n^2) whereas fib1 is O(n).   
   >>>>   
   >>>> It's worse; it's O(2^N).   
   >>>>   
   >>>> Janis   
   >>>>   
   >>>   
   >>> Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.   
   >>> So, O(1.618**n) - a little less bad than your suggestion.   
   >>   
   >> It's new to me that O-complexities would be written with decimals.   
   >>   
   >> You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).   
   >   
   > I think, whereas constant factors and constant terms are understood not   
   > to matter in asymptotic complexity, the choice of exponential base in   
   > exponential complexity does. Similarly to the choice of exponent   
   > in the category of polynomial time: e.g. cubic is slower than quadratic.   
      
   I don't know about you, but my (40-year-old) knowledge is reflected   
   by the table you find in https://de.wikipedia.org/wiki/Landau-Symbole.   
   We have differentiated O(1), O(log log N), O(log N), O(N), O(N log N),   
   O(N^2), O(N^3) - other O(N^c) for c>3 were practically "irrelevant";   
   no need to show me an example of O(N^4), or O(N^3 log N), I know there   
   are examples - plus O(2^N) and O(N!). (There are two more listed on Wiki.)   
   (Just note that all complexity classes listed here(!) are functions of   
   N alone, where "1" is "N^0" and "N^M" is just an abbreviated notation   
   to avoid listing the individual, mostly irrelevant anyway, exponents.)   
      
   I think that if we compare, as done here, different complexity classes,   
   O(N) vs. O(2^N), the decimals are irrelevant anyway! If we're comparing   
   functions within the same class O(2^N) it makes sense to differentiate   
   an O(2^N) complexity with concrete numbers, 1.618**n vs. 2**n.   
   But we've never said that these two were different complexity classes.   
      
   The class O(a^N) (for a>1) is anyway just called exponential complexity.   
   And since O defines an _order_ (with a specific characteristic) there's   
   no point in differentiating a class O(1.618^N) from a class O(1.617^N),   
   to accentuate the point. Whether those classes are differentiated   
   elsewhere - say, in the US domain? - I can't tell, okay. I just think   
   it makes no sense if we're speaking about complexity classes and orders.   
      
   It could be that there are different schools of teaching, if you compare   
   e.g. the slight notation variances in the complexity class tables in   
   https://de.wikipedia.org/wiki/Landau-Symbole vs.   
   https://en.wikipedia.org/wiki/Big_O_notation   
   or the definitions of the O metrics, but I'm not sure about that since   
   the definitions are at least equal; limsup_{x->a} |f(x)|/|g(x)| < inf.   
      
   (The reasoning given below is not convincing to me.)   
      
   Relevant for this thread is: fib() is of exponential complexity, and   
   the linearized form is of linear complexity.   
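   [Editor's sketch: the two variants under discussion, with fib_tail()
   reconstructed as the usual accumulator-passing helper (its exact body
   never appears in the thread, so this is an assumption), plus call
   counters to make the exponential-vs-linear growth visible. The naive
   version's call count also matches Michael S's formula 2*FIB(N)-1.]

   ```c
   #include <stdio.h>

   static long calls_naive, calls_tail;

   /* naive doubly recursive version: call count grows like 1.618^n */
   static int fib(int n) {
       calls_naive++;
       if (n < 2) return 1;
       return fib(n - 1) + fib(n - 2);
   }

   /* reconstructed accumulator-passing helper: one call per step, O(n) */
   static int fib_tail(int a, int b, int n) {
       calls_tail++;
       if (n == 0) return b;
       return fib_tail(b, a + b, n - 1);
   }

   static int fib1(int n) { return fib_tail(0, 1, n); }

   int main(void) {
       for (int n = 10; n <= 30; n += 10) {
           calls_naive = calls_tail = 0;
           int a = fib(n), b = fib1(n);
           printf("n=%2d  fib=%d fib1=%d  calls: naive=%ld (2*fib(n)-1=%ld) tail=%ld\n",
                  n, a, b, calls_naive, 2L * a - 1, calls_tail);
       }
       return 0;
   }
   ```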
      
   Janis   
      
   >   
   > If we have 1 < a < b  then O(b^n) is a faster growth than O(a^n).   
   >   
   > 2^n doubles with every increment in n.   
   >   
   > Whereas 1.02^n doubles --- here we can use the 'rule of 72' from   
   > finance --- for a +36 increase in n.   
   >   
   > If spending twice the computing time buys us only a +1 increase in   
   > the input parameter n, versus +36, that's a big deal.   
   >   
   > A term added to n in 2^n wouldn't matter because 2^(n+c)   
   > is (2^c)(2^n) translating to a constant factor. But a coefficient   
   > on n would matter: 2^(kn) = (2^k)^n. It changes the base.   
   >   
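   [Editor's sketch, not from the original thread: Kaz's doubling argument
   can be checked numerically. For base a, the cost a^n doubles whenever n
   grows by log 2 / log a; for a = 1.02 that is about 35, close to the
   'rule of 72' estimate 72/2 = 36.]

   ```c
   #include <stdio.h>
   #include <math.h>

   int main(void) {
       /* how much must n grow before base^n doubles?  delta = log 2 / log base */
       double bases[] = { 2.0, 1.618, 1.02 };
       for (int i = 0; i < 3; i++) {
           double delta = log(2.0) / log(bases[i]);
           printf("base %.3f: cost doubles every +%.1f in n\n", bases[i], delta);
       }
       return 0;
   }
   ```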
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca