

   sci.logic      Logic -- math, philosophy & computational aspects      262,912 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 261,527 of 262,912   
   olcott to Kaz Kylheku   
   Re: Done with Olcott. --- Kaz cannot thi   
   29 Nov 25 14:56:46   
   
   XPost: comp.theory, sci.math   
   From: polcott333@gmail.com   
      
   On 11/29/2025 2:39 PM, Kaz Kylheku wrote:   
   > On 2025-11-29, olcott  wrote:   
   >> On 11/28/2025 11:52 PM, Kaz Kylheku wrote:   
   >>>> I have shown that the original assumptions are   
   >>>> incoherent just like   
   >>>   
   >>> You have not. Only that your understanding is incoherent.   
   >>>   
   >>   
   >> The halting problem instance is merely the Liar Paradox in disguise.   
   >   
   > No, it isn't.   
   >   
   > You're not even smart enough to keep straight in your head   
   > what is part of the /problem/ and what is a /result/   
   > of investigating the problem.   
   >   
   > The Halting Problem is literally a sentence of the form: "can   
   > a Turing machine calculate whether any Turing Machine halts".   
   >   
      
   When the halting problem instance defines an input
   that does the opposite of whatever its decider reports,
   this is structurally the same as the set of sentences
   that are true only when they are false and false only
   when they are true.
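The construction being described can be sketched directly. In the sketch below, `make_diagonal` and the candidate decider are illustrative names, not anything from the thread; a genuinely correct halting decider is exactly what cannot exist, so the candidate here is a stand-in:

```python
# Sketch of the diagonal input D: it asks the candidate decider about
# itself and then does the opposite of whatever the decider reports.
# `halts(f, x)` is assumed to return True iff it claims f(x) halts.

def make_diagonal(halts):
    def d():
        if halts(d, None):      # decider reports "d halts" ...
            while True:         # ... so d loops forever
                pass
        return None             # decider reports "d loops", so d halts
    return d

# A candidate that always answers "does not halt" is refuted by running d:
d = make_diagonal(lambda f, x: False)
print(d())  # d halts (returns None), contradicting the report
```

The "true only when false" structure is visible in the branch: whatever the decider reports is made false by the very behavior that report triggers.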
      
   ChatGPT didn't initially think that I properly understood   
   isomorphism until I gave it that example.   
      
   > It's a valid, and very good question. The subject, the Turing Machine,   
   > has a rigid definition.   
   >   
   > The self-referential instance arises when we explore the /answer/ to the   
   > question. We realize that such a deciding program could be given a   
   > Turing Machine input which contains an embedding of that program itself,   
   > and this leads us to discover certain situations that force us to   
   > conclude that a TM cannot calculate the halting of all TMs.   
   >   
   > That's not the Halting Problem; all of that is /findings/ from   
   > the investigation of the Halting Problem: that simple question.   
   >   
   > The findings are air-tight.   
   >   
   > The test cases involving self-reference can actually be built; a   
   > criticism from a constructivist angle is hardly possible.   
   >   
   > If you don't like self-referential programs, you need to propose   
   > a model of computation in which they are /provably/ impossible to   
   > write, and convince the world of that.   
   >   
   >> The halting problem instance is even screwier.   
   >   
   > The self-referential instance used in halting problem proofs   
   > contains no self-contradiction. No part of it is a logical   
   > proposition which claims that something is false that is   
   > elsewhere in that instance claimed to be true.   
   >   
   > H(D) may be regarded as a proposition, which claims something to be   
   > false. But nowhere in the D case or in the H program is it claimed to be   
   > true.   
   >   
   > Philosophically, the case has an /observer/: an agent not part of   
   > the case who is evaluating what is going on inside it.   
   >   
   > The observer sees that H(D) returns zero and interprets that   
   > as an assertion "D does not halt".   
   >   
   > The observer sees that D is reacting to the value and halting.   
   >   
   > The observer puts these two together and concludes that H(D)   
   > is incorrect.   
   >   
   > The observer is not in the loop; it is not part of H or D.   
   >   
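The observer's three steps can be modeled without actually hanging the machine. In this sketch (an illustration, not anything from the thread) D returns a token naming its behavior instead of really looping, so the observer can compare H's claim against D's behavior:

```python
# Toy model of the observer: D reports its behavior as a token instead
# of actually looping, so the comparison below always terminates.

def make_d(h):
    def d():
        # do the opposite of whatever h reports about d
        return "LOOPS" if h(d) else "HALTS"
    return d

def observer_verdict(h):
    d = make_d(h)
    claim = "HALTS" if h(d) else "LOOPS"    # step 1: read H(D) as an assertion
    behavior = d()                          # step 2: watch what D does
    return claim == behavior                # step 3: was H(D) correct?

# Whatever a total H answers on its own diagonal input, the observer
# finds that the claim and the behavior disagree:
for h in (lambda f: True, lambda f: False):
    assert observer_verdict(h) is False
```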
   >> I won't get into that until after you prove   
   >> that you understand the Liar Paradox   
   >>   
   >>>> the set of all sets that are not members of themselves   
   >>>> is isomorphic to a can of soup that contains itself   
   >>>> so completely that it has no outside surface.   
   >>>   
   >>> Unrelated to halting.   
   >>   
   >> It is all incoherence of self-reference.   
   >   
   > Not all self-reference is incoherent. "I'm hungry" or "I'm hurt   
   > and I need help" are beneficial, useful self-references.   
   >   
   > "This sentence has five words" is self-reference and truth-bearing.   
   > (You have agreed with this.)   
   >   
   > Self-reference is real and cannot be removed from the world.   
   >   
   > Long after you are removed from the world, self-reference will   
   > live on.   
   >   
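The "five words" sentence really is truth-bearing self-reference, and its claim about itself can even be checked mechanically; a one-line sketch:

```python
# The sentence makes a claim about its own length; the claim holds.
sentence = "This sentence has five words"
assert len(sentence.split()) == 5
```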
   >> I won't get into that until after you prove   
   >> that you understand how the Liar Paradox is   
   >> incorrect. I have no patience for people   
   >> playing perpetual head games.   
   >   
   > No, you have no patience for people who can use their heads.   
   >   
   > (Or so you say, but you constantly reply to them.)   
   >   
   >>> Nonsense. It's just a general technique involving a two-dimensional   
   >>> table in which something interesting develops involving the   
   >>> diagonal trace.   
   >>   
   >> The diagonal trace cheats because it hides the incoherence   
   >> of the underlying semantic inference steps. If you leap to   
   >> a conclusion without showing your work people might guess   
   >> that you are correct never seeing the mistake.   
   >   
   > No; HHH cheats by receiving different values of Root from Init_Halts_HHH,   
   > and behaving like two different deciders.   
   >   
   > All the cheats are yours; Turing never cheated whatsoever.   
   >   
   > Disagreeing with the diagonalization technique is like ranting   
   > against the chain rule or integration by parts in calculus.   
   >   
   > It's just a tool; a useful one for the right job.   
   >   
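The table-and-diagonal technique itself is easy to demonstrate on a finite truncation. In the sketch below (Cantor-style diagonalization in general, not anything specific to this thread), flipping the diagonal of any listing of binary rows yields a row that disagrees with every listed row:

```python
# Cantor-style diagonalization on a finite n x n truncation of a table
# of binary rows: flip entry (i, i) of each row to build a row that
# differs from row i in column i, hence appears nowhere in the table.

def diagonal_complement(table):
    return [1 - table[i][i] for i in range(len(table))]

table = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]
anti = diagonal_complement(table)           # [1, 0, 1, 0]
assert all(anti[i] != row[i] for i, row in enumerate(table))
assert anti not in table                    # absent from the listing
```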
   >>>>> The paper you reference makes it clear that this is a required   
   >>>>> ingredient.  You can't just use the pronoun "this sentence"; that's a   
   >>>>> self-reference, but not achieved via diagonalization.   
   >>>>>   
   >>>>   
   >>>> Whut ???   
   >>>>   
   >>>>       In formal languages, self-reference is also very   
   >>>>       easy to come by. Any language capable of expressing   
   >>>>       some basic syntax can generate self-referential   
   >>>>       sentences via so-called diagonalization   
   >>>>       https://plato.stanford.edu/entries/liar-paradox/#ExisLiarLikeSent   
   >>>   
   >>> For all the accusations that others are not "paying attention",   
   >>> apparently you do not see the tiny superscripts that indicate   
   >>> footnotes:   
   >>>   
   >>>     "The situation with formal languages is actually somewhat subtler than   
   >>>     our brief discussion indicates. In most cases, corner quotes really   
   >>>     indicate formal terms for Gödel numbers of sentences, and are not   
   >>>     genuine quotation marks in the usual sense (e.g., denoting the   
   >>>     expression ‘inside’ them). Hence, the sense in which such languages   
   >>>     have reference to sentences is delicate. Yet with very minimal   
   >>>     resources, syntax can be represented and diagonal sentences   
   >>>     constructed. Hence, there is a sense, albeit subtle, in which such   
   >>>     languages can express self-reference. See the entries on provability   
   >>>     logic and Gödel (the section on the incompleteness theorems), as well   
   >>>     as Heck (2007)."   
   >>>   
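The footnote's point, that minimal syntactic resources suffice to construct diagonal (self-referential) sentences, has a familiar programming analogue: a quine, a program that contains a quoted template of itself and substitutes that quotation into the same template. A minimal sketch:

```python
# A quine: the template quotes itself (%r) and is substituted into
# itself, so the program prints its own source text.
s = 's = %r\nprint(s %% s)'
print(s % s)
```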
   >>   
   >> I created Olcott's Minimal Type Theory to get around   
   >   
   > It is three pages of pure rubbish, not suitable to be handed in   
   > as PHIL 100 homework.   
   >   
   >>> Nowhere does your paper say that "This sentence is false" is   
   >>> diagonal by use of the pronoun.  You need quoting!   
   >>   
   >> You must pay attention to the MTT language specification   
   >> provided above, especially the "defined as" operator: :=   
   >   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca