[   home   |   bbs   |   files   |   messages   ]

Forums before their death by AOL, social media and spammers... "We can't have nice things."

   comp.lang.c      Meh, in C you gotta define EVERYTHING      243,242 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 241,521 of 243,242   
   Janis Papanagnou to James Kuyper   
   Re: Optimization of cascading recursions   
   27 Oct 25 12:52:03   
   
   From: janis_papanagnou+ng@hotmail.com   
      
   On 27.10.2025 10:48, James Kuyper wrote:   
   > On 2025-10-24 11:22, Janis Papanagnou wrote:   
   >> On 24.10.2025 16:35, Michael S wrote:   
   > ...   
   >>> Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.   
   >>> So, O(1.618**n) - a little less bad than your suggestion.   
   >>   
   >> It's new to me that O-complexities would be written with decimals.   
   >   
   > Complexities are determined by the highest power of the relevant   
   > parameter; that power needn't be integral, though it usually is. I'd be   
   > interested in the derivation of that result.   
   >   
   >> You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).   
   >   
   > While you're correct, I wonder why you said that? I don't see any   
   > examples of anyone saying anything like that above.   
      
   (I haven't said or implied that anyone said that.)   
      
   I tried to anticipate, to avoid unnecessary follow-ups. To   
   expand on that...   
      
   My point was that we used O-notation to classify and compare   
   algorithms, with the goal of improving efficiency. For that   
   it was irrelevant whether a constant-time algorithm was   
   characterized by a constant runtime factor of 20000 or of 4;   
   with increasing N it was still constant. Similarly for linear   
   algorithms; one may have a slower runtime of 3*N, the other   
   of 2*N; both are O(N). The example above was just randomly   
   chosen, since sorting appeared to me to be a commonly known   
   task with commonly known algorithms. Typical O(N**2) sorting   
   algorithms have a runtime of c*N*(N-1)/2. Some others, like   
   Quicksort in its elementary form, cannot guarantee   
   O(N*log N); their worst case is O(N**2). For exponential   
   bounds we said that these are of order O(2**N); and that's   
   where we were differing in our discussion. (I'm not keen to   
   dispute whether one of the doctrines is the only right view.   
   That's, BTW, one reason why I also didn't continue after   
   Kaz's latest reply; another reason was that he seemed to   
   have mostly just repeated his previous post, only adding   
   another complexity measure that IMO doesn't clear up the   
   O-notation question but mixes different complexity areas.)   
      
   For the initially mentioned goal - classifying, comparing   
   and modifying algorithms to get a better complexity - the   
   aim in my application areas (and also back in my university   
   time) had been to get better algorithms for specific tasks.   
   We've done O(N)->O(1), O(N**2)->O(N*log N), or similar with   
   higher-order functions. (In practice the O(N) variant might   
   even have a better runtime if the constant factor of the   
   O(1) variant is huge and the value of N is practically   
   bounded by some side conditions. But determining real-time   
   runtimes of algorithms is not the point of the O-notation.)   
      
   The example in this thread was especially noteworthy since   
   it reduced an exponential complexity even to a linear order   
   by a sensible transformation! It's irrelevant whether the   
   original exponential complexity has a runtime of 2**N or of   
   1.6**N; we called it O(2**N), and others obviously call it   
   O(c**N), with the base as a second parameter. Either way it   
   is irrelevant for the order we are talking about. But if   
   others think it's important, I'm fine with that.   
      
   Janis   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca