home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.lang.c      Meh, in C you gotta define EVERYTHING      243,242 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 241,465 of 243,242   
   Kaz Kylheku to Janis Papanagnou   
   Re: Optimization of cascading recursions   
   24 Oct 25 23:59:33   
   
   From: 643-408-1753@kylheku.com   
      
   On 2025-10-24, Janis Papanagnou  wrote:   
   > On 24.10.2025 19:14, Kaz Kylheku wrote:   
   >> On 2025-10-24, Janis Papanagnou  wrote:   
   >>> On 24.10.2025 16:35, Michael S wrote:   
   >>>> On Fri, 24 Oct 2025 15:36:58 +0200   
   >>>> Janis Papanagnou  wrote:   
   >>>>   
   >>>>> On 24.10.2025 09:53, Rosario19 wrote:   
   >>>>>>   
   >>>>>>   
   >>>>>>> into the above pair of functions, or anything similar.   
   >>>>>>>   
   >>>>>>> That's not ordinary optimization of the type which improves the   
   >>>>>>> code generated for the user's algorithm; that's recognizing and   
   >>>>>>> redesigning the algorithm.   
   >>>>>>>   
   >>>>>>> The line between those is blurred, but not /that/ blurred.   
   >>>>>>   
   >>>>>> int fib1(int n){return fib_tail(0, 1, n);}   
   >>>>>>   
   >>>>>> is different from   
   >>>>>>   
   >>>>>> int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}   
   >>>>>>   
   >>>>>>   
   >>>>>> both of these functions compute the result starting from 0 and 1,   
   >>>>>> but fib() starts at n, recurses down to n=0, then combines the   
   >>>>>> results back up to n=input, whereas fib1() only counts down from   
   >>>>>> n. Looking at the calls (fib makes 2 calls per call, while   
   >>>>>> fib_tail() makes just 1), I think fib() is O(n^2) whereas fib1   
   >>>>>> is O(n).   
   >>>>>   
   >>>>> It's worse; it's O(2^N).   
   >>>>>   
   >>>>> Janis   
   >>>>>   
   >>>>   
   >>>> Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.   
   >>>> So, O(1.618**n) - a little less bad than your suggestion.   
   >>>   
   >>> It's new to me that O-complexities would be written with decimals.   
   >>>   
   >>> You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).   
   >>   
   >> I think, whereas constant factors and constant terms are understood not   
   >> to matter in asymptotic complexity, the choice of exponential base in   
   >> exponential complexity does. Similarly to the choice of exponent   
   >> in the category of polynomial time: e.g. cubic is slower than quadratic.   
   >   
   > I don't know about you, but my (40 years old) knowledge is reflected   
   > by the table you find in https://de.wikipedia.org/wiki/Landau-Symbole.   
   > We have differentiated O(1), O(log log N), O(log N), O(N), O(N log N),   
   > O(N^2), O(N^3) - other O(c^N) for c>3 were practically "irrelevant";   
   > no need to show me an example of O(N^4), or O(N^3 log N), I know there   
   > are examples - O(2^N), and O(n!). (There are two more listed on Wiki.)   
   > (Just note that all listed complexity classes are here(!) functions on   
   > N alone, where "1" is "N^0" and the "N^M" just an abbreviated notation   
   > to not list the individual mostly anyway irrelevant exponents.)   
   >   
   > I think that if we compare, as done here, different complexity classes,   
   > O(N) vs. O(2^N) the decimals are anyway irrelevant! If we're comparing   
   > functions within the same class O(2^N) it makes sense to differentiate   
   > an O(2^N) complexity with concrete numbers of 1.618**n vs. 2**n.   
   > But we've never said that these two were different complexity classes.   
   >   
   > The class O(a^N) (for a>1) is anyway just called exponential complexity.   
      
   But by that argument, "polynomial time" is likewise just one class,   
   lumping together quadratic, cubic, quartic, ...   
      
   > And since O defines an _order_ (with a specific characteristic) there's   
   > no point in differentiating a class O(1.618^N) from a class O(1.617^N),   
      
   But these orders are different.   
      
   The complexity upper bound O(1.618^n) includes faster function growth than   
   the complexity upper bound O(1.617^n).   
      
   If we know something runs in K * 1.618^n time, then it isn't   
   O(1.617^n); it blows past that tighter upper bound.   
      
   For any positive constant factor K, given a sufficiently large n:   
      
      1.617^n < K * 1.618^n   
      
   No matter how small you make K, after a certain n value, the right side   
   will permanently overtake the left.   
      
   Both complexities are in the exponential class, but they are not   
   of the same order.   
      
   Complexity classes are coarse-grained containers of complexity orders:   
      
   https://en.wikipedia.org/wiki/Complexity_class   
      
   For instance everything polynomial is lumped into the complexity   
   class P, even though an algorithm's complexity improves significantly   
   if you can reduce it from cubic to quadratic.   
      
   The exponential time complexity class contains a recognized subclass   
   called E which is O(2^l(n)) where l(n) is a linear polynomial of n.   
      
   Both O(2^n) and O(1.618^n) are found within E, yet are not of the   
   same order.   
      
   1.618^n can be rewritten as approximately 2^(0.6942n), showing   
   that it belongs to class E.   
      
   But that 0.6942 coefficient on n cannot be ignored when it sits   
   inside the exponent; it leads to slower growth, of a lower order.   
      
   --   
   TXR Programming Language: http://nongnu.org/txr   
   Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal   
   Mastodon: @Kazinator@mstdn.ca   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca