
   soc.culture.quebec      More than just pale imitations of France      108,435 messages   


   Message 106,689 of 108,435   
   Wisdom90 to All   
   Yet more rigorous about computational complexity   
   11 Jan 20 14:58:39   
   
   From: d@d.d   
      
   Hello...   
      
      
   Yet more rigorous thoughts about computational complexity..   
      
   I said previously (read below) that a time complexity such as   
   n*log(n) or n is fuzzy: we can say that n*log(n) is an average   
   resistance (read below to understand the analogy with material   
   resistance), or that n*log(n) is faster than a quadratic or an   
   exponential complexity, but a time complexity of n or n*log(n)   
   alone cannot tell us exactly how fast the algorithm runs for a   
   given input size n. Since it is not an exact prediction, it is   
   fuzzy. But this level of fuzziness, like in the example below of   
   the obese person, still lets us predict important things about   
   reality, and this level of fuzziness of computational complexity   
   is also science, because it is like probability calculations that   
   let us predict.   
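   To make this fuzziness concrete, here is a small sketch (my own
   illustration with made-up constant factors, not anything from the
   original post). Knowing only that two algorithms are O(n*log(n))
   cannot tell us their exact step counts, because the hidden
   constants differ, yet it correctly predicts that both eventually
   beat a quadratic algorithm:

```python
import math

# Hypothetical step-count models: two O(n*log(n)) algorithms with
# different (made-up) constant factors, and one quadratic algorithm.
def cost_a(n):
    return 5 * n * math.log2(n)

def cost_b(n):
    return 40 * n * math.log2(n)

def cost_quadratic(n):
    return n * n

for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9,}  a={cost_a(n):>14,.0f}  "
          f"b={cost_b(n):>14,.0f}  n^2={cost_quadratic(n):>16,.0f}")

# The two n*log(n) costs differ by a factor of 8 (the hidden
# constants), so the exact prediction is fuzzy.  But for large n the
# quadratic cost overtakes both of them.  Note also that for the tiny
# input n=10 the quadratic model is actually the cheapest: that is
# the break-even effect discussed further below.
```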
      
   Read my previous thoughts to understand:   
      
   What is science? And is computational complexity science?   
      
   You have just seen me talking about computational complexity, but   
   we need to answer two questions: What is science? And is   
   computational complexity science?   
      
   I think we have to be smarter about this, because there are   
   higher-level abstractions in science, and at some of those levels   
   we have the exact precision of science, but we can also have   
   fuzzier precision that is useful and that is also science. To help   
   you understand me, let me give you an example:   
      
   If I say that a person is obese, then I can say he has a high risk   
   of getting a disease because he is obese.   
      
   Now you understand better that with this abstraction we do not   
   have exact precision, we are fuzzier, but this fuzziness is useful   
   and its level of precision is also useful. But is it science? I   
   think these probabilistic calculations are also science, because   
   they let us predict that the obese person has a high risk of   
   getting a disease. And these probabilistic calculations are like   
   higher-level abstractions that lack exact precision but are still   
   useful precision. This is what computational complexity and its   
   higher-level abstractions look like: you immediately understand   
   that a time complexity of O(n*log(n)) or O(n) is like an average   
   level of resistance (read below to know why I call it resistance,   
   by analogy with material resistance) when n grows large; you   
   immediately notice that an exponential time complexity is a low   
   level of resistance when n grows large; and you immediately notice   
   that a log(n) time complexity is a high level of resistance when n   
   grows large. So those time complexities are like higher-level   
   abstractions that are fuzzy, but their fuzziness, like in the   
   example above of the obese person, lets us predict important   
   things about reality, and this level of fuzziness of computational   
   complexity is also science, because it is like probability   
   calculations that let us predict.   
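   The "resistance" analogy can be put into numbers with a short
   sketch (my own illustration, not from the post): grow the input
   1000-fold and see how much each cost function grows in response.

```python
import math

# When the input grows from n=1,000 to n=1,000,000 (a 1000x
# increase), how much does each cost function grow?  High
# "resistance" means the cost barely moves; low "resistance" means
# the cost explodes.
n1, n2 = 1_000, 1_000_000

growth = {
    "log(n)":   math.log2(n2) / math.log2(n1),                # 2x
    "n":        n2 / n1,                                       # 1,000x
    "n*log(n)": (n2 * math.log2(n2)) / (n1 * math.log2(n1)),  # 2,000x
    "n^2":      (n2 ** 2) / (n1 ** 2),                         # 1,000,000x
}

for name, factor in growth.items():
    print(f"{name:>8s}: cost grows about {factor:,.0f}x")
```

   The ordering of these growth factors is exact even though the
   absolute running times are not, which is precisely the
   fuzzy-but-useful kind of prediction described above.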
      
   Read the rest of my previous thoughts to understand better:   
      
   The why of computational complexity..   
      
      
   Here is my previous answer about computational complexity; the   
   rest of my current answer is below:   
      
      
   =====================================================================   
   Horand Gassmann wrote:   
      
   "Where your argument becomes impractical is in the statement "n becomes   
   large". This is simply not precise enough for practical use. There is a   
   break-even point, call it n_0, but it cannot be computed from the Big-O   
   alone. And even if you can compute n_0, what if it turns out that the   
   breakeven point is larger than a googolplex? That would be interesting   
   theoretically, but practically --- not so much."   
      
      
   I don't agree. Look below at how I computed the time complexity of   
   binary search: it is a divide-and-conquer algorithm, and it is   
   log(n). We can notice that log(n) is good when n becomes large, so   
   this information is practical, because a log(n) time complexity is   
   excellent in practice when n becomes large. And when you look at   
   insertion sort, you will notice that it has a quadratic time   
   complexity of n^2; here again, this is practical information,   
   because a quadratic time complexity is not so good when n becomes   
   large, so you can say that n^2 is not so good in practice when n   
   becomes large. So, as you notice, knowing that time complexities   
   are log(n) or n^2 is useful in practice, and for the rest of the   
   time complexities you can also benchmark the algorithm in the real   
   world to get an idea of how it performs.   
   =================================================================   
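   The claims above about binary search and insertion sort can be
   checked by counting actual steps. Here is a self-contained sketch
   (my own code, not from the original post): binary search over
   1,024 sorted items finishes in about log2(1024) = 10 halving
   steps, while insertion sort on a reversed list of 100 items does
   on the order of n^2 comparisons.

```python
def binary_search_steps(sorted_items, target):
    """Return (found, number_of_halving_steps) -- log(n) behaviour."""
    lo, hi, steps = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True, steps
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, steps

def insertion_sort_comparisons(items):
    """Sort a copy of items and return (sorted_list, comparisons);
    quadratic behaviour on adversarial (reversed) input."""
    a, comparisons = list(items), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1            # one comparison per inner step
            if a[j] <= key:
                break
            a[j + 1] = a[j]             # shift larger element right
            j -= 1
        a[j + 1] = key
    return a, comparisons

# Binary search: 1,024 sorted items, worst case (absent target).
found, steps = binary_search_steps(list(range(1024)), -1)
print(f"binary search: found={found}, steps={steps} (log2(1024)=10)")

# Insertion sort: 100 items in reverse order, the worst case.
result, comps = insertion_sort_comparisons(list(range(100, 0, -1)))
print(f"insertion sort: {comps} comparisons (100*99/2 = 4950)")
```

   The step counts match the log2(n) and roughly n^2/2 shapes on
   these worst cases; real running times also depend on constants and
   hardware, which is where the benchmarking mentioned above comes in.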
      
      
      
   I think I now understand Lemire and Horand Gassmann better: they   
   say that if something does not have the exact precision needed in   
   practice, then it is not science or engineering. But I don't agree   
   with this, because science and engineering can also work with   
   higher-level abstractions that do not carry out the exact   
   calculations needed for full practical precision, yet still give   
   precision that is useful in practice; it is like a fuzzy precision   
   that is useful. This is why I think probabilistic calculations are   
   also scientific: they are useful in practice because they give us   
   important information about reality that can also be practical.   
   And this is why computational complexity is also useful in   
   practice: it is like a higher-level abstraction that does not give   
   all the needed practical precision, but it gives precision that is   
   still useful in practice. This is why, like probabilistic   
   calculations, I think computational complexity is also science.   
      
      
   Read the rest of my previous thoughts to understand better:   
      
      
   More on computational complexity..   
      
   Notice how Horand Gassmann answered in the sci.math newsgroup:   
      
   Horand Gassmann wrote the following:   
      
   "You are right, of course, on one level. An O(log n)   
   algorithm is better than an O(n) algorithm *for   
   large enough inputs*. Lemire understands that, and he   
   addresses it in his blog. The important consideration   
   is that _theoretical_ performance is a long way from   
   _practical_ performance."   
      
      
   And notice what Lemire wrote about computational complexity:   
      
   "But it gets worse: these are not scientific models. A scientific model   
   would predict the running time of the algorithm given some   
   implementation, within some error margin. However, these models do   
   nothing of the sort. They are purely mathematical. They are not   
   falsifiable. As long as they are mathematically correct, then they are   
   always true. To be fair, some researchers like Knuth came up with models   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   


