

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   


   Message 57,923 of 59,235   
   Kaz Kylheku to olcott   
   Re: I corrected the very subtle error in   
   26 Sep 25 20:35:02   
   
   XPost: comp.theory, comp.lang.c++, comp.lang.c   
   From: 643-408-1753@kylheku.com   
      
   On 2025-09-26, olcott  wrote:   
   > On 9/26/2025 2:28 PM, Kaz Kylheku wrote:   
   >> On 2025-09-26, olcott  wrote:   
   >>> On 9/26/2025 12:05 PM, Richard Heathfield wrote:   
   >>>> On 26/09/2025 16:56, olcott wrote:   
   >>>>   
   >>>>> Two other PhD computer scientists agree with me.   
   >>>>   
   >>>> That's an attempt at an appeal to authority, but it isn't a convincing   
   >>>> argument. There must be many /thousands/ of Comp Sci PhDs who've studied   
   >>>> the Halting Problem (for the 10 minutes it takes to drink a cup of   
   >>>> coffee while they run the proof through their minds) and who have no   
   >>>> problem with it whatsoever.   
   >>>>   
   >>>   
   >>> And of course you can dismiss whatever they say   
   >>> without looking at a single word because majority   
   >>> consensus have never been shown to be less than   
   >>> totally infallible.   
   >>   
   >> Consensus in mathematics /is/ pretty much infallible.   
   >>   
   >   
   > That is like pretty much sterile.   
      
   Sometimes things are sterile, and that is good.  Like your surgeon's   
   gloves, or the interior of your next can of beans, and such.   
      
   > Generally very reliable seems apt.   
      
   You don't even know the beginning of it.   
      
   > Math and logic people will hold to views that   
   > are philosophically primarily because they view   
   > knowledge in their field to be pretty much infallible.   
      
   Formal systems are artificial inventions evolving from their axioms.   
   While we can't say that we know everything about a system just   
   because we invented its axioms, we know when we have captured an   
   air-tight truth.   
      
   It is not a situation in which we are relying on hypotheses,   
   observations and measurements, which are saddled with conditions.   
      
   You're not going to end up with a classical mechanics theory   
   of Turing Machine halting, distinct from a quantum and relativistic one,   
   in which they can't decide between loops and strings ...   
      
   The subject matter admits iron-clad conclusions that get permanently   
   laid to rest.   
      
   > The big mistake of logic is that it does not retain   
   > semantics as fully integrated into its formal expressions.   
   > That is how we get nutty things like the Principle of Explosion.   
   > https://en.wikipedia.org/wiki/Principle_of_explosion   
      
   The POE is utterly sane.   
      
   What is nutty is doing what it describes: going around assuming   
   falsehoods to be true and then deriving nonsense from them, with the   
   intent of adopting a belief in all those falsehoods and the nonsense   
   that follows.   
      
   But that, ironically, perfectly describes your own research programme,   
   right down to the acronym:   
      
   Principle of Explosion -> POE -> Peter Olcott Experiment   
      
   A contradiction is a piece of foreign material in a formal system.   
   It is nonsensical to bring one in and assert it as a truth.  Once   
   you do, it creates contagion.   
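   That contagion is exactly what the principle states. As a sketch (my   
   own illustration, not anything from the thread), ex falso quodlibet   
   is a two-line theorem in Lean 4:   
   
   ```lean
   -- Principle of Explosion (ex falso quodlibet):
   -- from a contradiction P ∧ ¬P, any proposition Q whatsoever follows.
   theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
     absurd h.left h.right
   ```
   
   Nothing constrains Q here; once the contradictory hypothesis is   
   admitted, every proposition in the system becomes derivable.   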
      
   I believe that POE is closely linked to the principle we know   
   in the systems side of computer science: "one bad bit stops the show".   
   If you interfere with a correct calculation program by flipping a bit,   
   all bets are off.   
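   A minimal sketch of "one bad bit stops the show" (the function and   
   data here are my own invention, purely illustrative): flip a single   
   bit in the input of a correct calculation and the answer is silently   
   wrong.   
   
   ```python
   # A correct little calculation: average price in cents.
   def average_cents(prices):
       return sum(prices) // len(prices)
   
   prices = [199, 250, 301]           # inputs, in cents
   good = average_cents(prices)       # correct result: 250
   
   # Simulate a single-bit fault: flip bit 7 of the first value
   # (199 has bit 7 set, so it silently becomes 71).
   corrupted = prices.copy()
   corrupted[0] ^= 1 << 7
   bad = average_cents(corrupted)     # wrong result, no error raised
   ```
   
   The program still runs to completion; nothing signals that the   
   output is garbage, which is the point.   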
      
   Another face of POE in computing is GIGO: garbage in, garbage out.   
   Assuming a falsehood to be true is garbage in; the bogus things   
   you can then prove are garbage out.   
      
   The /reductio ad absurdum/ technique usefully makes a controlled use of   
   a contradiction.  We introduce a contradiction and then derive from it   
   some other contradictions using the same logical tools that we normally   
   use for deriving truths from truths. We do that with the specific goal   
   of arriving at a proposition that we otherwise already know to be false.   
   At that point we stop regarding the entire chain as true, all the   
   way back to the initial wrong assumption.   
      
   The benefit is that the contradiction being initially assumed is not   
   /obviously/ a contradiction, but when we show that it derives an   
   /obvious/ contradiction, we readily see that it is so.   
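   The textbook instance (my example, not from the thread): assume the   
   negation of what you want, and push it to an obvious absurdity.   
   
   ```latex
   % Reductio ad absurdum: the irrationality of sqrt(2).
   \text{Assume } \sqrt{2} = \tfrac{p}{q} \text{ with } \gcd(p,q) = 1. \\
   \text{Then } p^2 = 2q^2, \text{ so } p \text{ is even: } p = 2k. \\
   \text{Then } 4k^2 = 2q^2 \implies q^2 = 2k^2, \text{ so } q \text{ is even too.} \\
   \text{But then } 2 \mid \gcd(p,q) = 1, \text{ an obvious contradiction;} \\
   \text{hence } \sqrt{2} \text{ is irrational.}
   ```
   
   The assumption was not obviously contradictory; the derivation makes   
   the contradiction obvious, and the whole chain is then discarded.   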
      
   (Note that the diagonal halting proofs do not rely on /reductio ad   
   absurdum/ whatsoever. They directly show that no decider can be total,   
   without assuming anything about it.)   
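   The diagonal construction alluded to here can be sketched in a few   
   lines of Python (the names halts, make_diagonal, never_halts are my   
   own): given any candidate decider whatsoever, we can build the one   
   program it necessarily misjudges, with no assumption that the   
   candidate is correct.   
   
   ```python
   # Given ANY candidate halts(f, x) claiming to return True iff f(x)
   # halts, construct the diagonal program D that defeats it on input D.
   def make_diagonal(halts):
       def D(f):
           if halts(f, f):      # if the candidate says f(f) halts...
               while True:       # ...run forever;
                   pass
           # ...otherwise, halt immediately (return None).
       return D
   
   # Demonstration with one candidate: a "decider" answering False
   # for everything.
   def never_halts(f, x):
       return False
   
   D = make_diagonal(never_halts)
   D(D)                          # D(D) halts and returns...
   verdict = never_halts(D, D)   # ...yet the candidate said it doesn't.
   ```
   
   Whatever the candidate answers about (D, D), D is built to do the   
   opposite, so no candidate can be a total, correct decider.   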
      
   --   
   TXR Programming Language: http://nongnu.org/txr   
   Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal   
   Mastodon: @Kazinator@mstdn.ca   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca