XPost: comp.theory, comp.lang.c++, comp.lang.c   
   From: polcott333@gmail.com   
      
   On 9/26/2025 3:35 PM, Kaz Kylheku wrote:   
   > On 2025-09-26, olcott wrote:   
   >> On 9/26/2025 2:28 PM, Kaz Kylheku wrote:   
   >>> On 2025-09-26, olcott wrote:   
   >>>> On 9/26/2025 12:05 PM, Richard Heathfield wrote:   
   >>>>> On 26/09/2025 16:56, olcott wrote:   
   >>>>>   
   >>>>>    
   >>>>>   
   >>>>>> Two other PhD computer scientists agree with me.   
   >>>>>   
   >>>>> That's an attempt at an appeal to authority, but it isn't a convincing   
   >>>>> argument. There must be many /thousands/ of Comp Sci PhDs who've studied   
   >>>>> the Halting Problem (for the 10 minutes it takes to drink a cup of   
   >>>>> coffee while they run the proof through their minds) and who have no   
   >>>>> problem with it whatsoever.   
   >>>>>   
   >>>>   
   >>>> And of course you can dismiss whatever they say
   >>>> without looking at a single word, because majority
   >>>> consensus has never been shown to be less than
   >>>> totally infallible.
   >>>   
   >>> Consensus in mathematics /is/ pretty much infallible.   
   >>>   
   >>   
   >> That is like saying something is pretty much sterile.
   >   
   > Sometimes things are sterile and that is good. Like your surgeon's
   > gloves, or the interior of your next can of beans, and such.   
   >   
      
   Pretty much infallible is like pretty much the   
   one and only creator of the Heavens and Earth.   
      
   >> Generally very reliable seems apt.   
   >   
   > You don't even know the beginning of it.   
   >   
      
   That I start from a philosophical foundation
   different from the rules that you learned
   by rote does not mean that I am incorrect.
      
   >> Math and logic people will hold to views that
   >> are philosophically unjustified, primarily because
   >> they view knowledge in their field as pretty much infallible.
   >   
   > Formal systems are artificial inventions evolving from their axioms.   
   > While we can't say that we know everything about a system just   
   > because we invented its axioms, we know when we have captured an   
   > air-tight truth.   
   >   
      
   That is sometimes not airtight at all.   
      
   > It is not a situation in which we are relying on hypotheses,   
   > observations and measurements, which are saddled with conditions.   
   >   
      
   Computer science guys do not tend to exhaustively check
   every detail of every nuance of everything that they were
   taught, over and over, looking for the tiniest inconsistency.
      
   Philosophers of computer science do this.   
      
   > You're not going to end up with a classical mechanics theory   
   > of Turing Machine halting, distinct from a quantum and relativistic one,   
   > in which they can't decide between loops and strings ...   
   >   
   > The subject matter admits iron-clad conclusions that get permanently   
   > laid to rest.   
   >   
   >> The big mistake of logic is that it does not retain   
   >> semantics as fully integrated into its formal expressions.   
   >> That is how we get nutty things like the Principle of Explosion.   
   >> https://en.wikipedia.org/wiki/Principle_of_explosion   
   >   
   > The POE is utterly sane.   
   >   
      
   That is just your indoctrination talking.   
   Try the same thing in relevance logic.
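As a concrete illustration of the claim being argued over: the Principle of Explosion is a one-line derivation in any proof assistant. A minimal Lean 4 sketch (the theorem name is mine):

```lean
-- Principle of Explosion: from P and ¬P together, any Q follows.
theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right
```

In relevance logic this derivation is blocked, because Q shares no content with the premises; that is exactly the point of contention here.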
      
   > What is nutty is doing what it describes: going around assuming falsehoods
   > to be true and then deriving nonsense from them with the intent of
   > adopting a belief in all those falsehoods and the nonsense that follows.
   >   
   > But that, ironically, perfectly describes your own research programme,   
   > right down to the acronym:   
   >   
   > Principle of Explosion -> POE -> Peter Olcott Experiment   
   >   
   > A contradiction is a piece of foreign material in a formal system. It   
   > is nonsensical to bring it in, and assert it as a truth; it makes no   
   > sense to do so. Once you do, it creates contagion.   
   >   
      
   Like dog shit in a birthday cake.   
      
   > I believe that POE is closely linked to the principle we know   
   > in the systems side of computer science: "one bad bit stops the show".   
   > If you interfere with a correct calculation program by flipping a bit,   
   > all bets are off.   
   >   
   > Another face of POE in computing is GIGO: garbage in, garbage out.   
   > Assuming a falsehood to be true is garbage in; the bogus things   
   > you can then prove are garbage out.
   >   
      
   Far far better to not let garbage in.   
      
   > The /reductio ad absurdum/ technique usefully makes controlled use of
   > a contradiction. We introduce a contradiction and then derive from it
   > some other contradictions using the same logical tools that we normally
   > use for deriving truths from truths. We do that with the specific goal
   > of arriving at a proposition that we otherwise already know to be false.
   > At that point we stop regarding the entire chain as true, all the way
   > back to the initial wrong assumption.
   >
   > The benefit is that the initially assumed contradiction is not
   > /obviously/ a contradiction, but when we show that it derives an
   > /obvious/ contradiction, we readily see that it is so.
   >   
   > (Note that the diagonal halting proofs do not rely on /reductio ad
   > absurdum/ whatsoever. They directly show that no decider can be total,
   > without assuming anything about it.)   
   >   
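The shape of the technique described above can be shown in a proof assistant. A minimal Lean 4 sketch (the theorem name is mine) that proves excluded middle by reductio: assume the negation, derive False, conclude the goal.

```lean
open Classical

-- Reductio: to prove P ∨ ¬P, assume its negation and derive False.
theorem em_by_reductio (P : Prop) : P ∨ ¬P :=
  byContradiction fun h =>
    h (Or.inr fun hp => h (Or.inl hp))
```

Here the assumption ¬(P ∨ ¬P) is not obviously contradictory, but it directly refutes both disjuncts, which yields the obvious contradiction.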
      
   *The conventional halting problem proof question is this*
   For a halt decider H, what correct halt status can
   be returned for an input D that does the opposite
   of whatever value H returns?
      
      
   --   
   Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius   
   hits a target no one else can see." Arthur Schopenhauer   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   