Message 58,017 of 59,235   
   olcott to dbush   
   Re: Updated input to LLM systems proving   
   13 Oct 25 11:26:49   
   
   XPost: comp.theory, comp.lang.c++, comp.lang.c   
   From: polcott333@gmail.com   
      
   On 10/12/2025 11:22 PM, dbush wrote:   
   > On 10/13/2025 12:12 AM, olcott wrote:   
   >> On 10/12/2025 10:49 PM, dbush wrote:   
   >>> On 10/12/2025 11:43 PM, olcott wrote:   
   >>>> On 10/12/2025 9:59 PM, dbush wrote:   
   >>>>> On 10/12/2025 10:57 PM, olcott wrote:   
   >>>>>> On 10/12/2025 9:40 PM, dbush wrote:   
   >>>>>>> On 10/12/2025 10:34 PM, olcott wrote:   
   >>>>>>>> On 10/12/2025 9:29 PM, dbush wrote:   
   >>>>>>>>> On 10/12/2025 10:20 PM, olcott wrote:   
   >>>>>>>>>> On 10/12/2025 9:15 PM, dbush wrote:   
   >>>>>>>>>>> On 10/12/2025 9:56 PM, olcott wrote:   
   >>>>>>>>>>>> On 10/12/2025 8:22 PM, dbush wrote:   
   >>>>>>>>>>>>> On 10/12/2025 9:20 PM, olcott wrote:   
   >>>>>>>>>>>>>> On 10/12/2025 3:11 PM, dbush wrote:   
   >>>>>>>>>>>>>>> On 10/12/2025 11:47 AM, olcott wrote:   
   >>>>>>>>>>>>>>>> On 10/12/2025 9:19 AM, dbush wrote:   
   >>>>>>>>>>>>>>>>> On 10/12/2025 9:50 AM, olcott wrote:   
   >>>>>>>>>>>>>>>>>> Please think this all the way through without making   
   >>>>>>>>>>>>>>>>>> any guesses   
   >>>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>>>    
   >>>>>>>>>>>>>>>>>> Simulating Termination Analyzer HHH correctly   
   >>>>>>>>>>>>>>>>>> simulates its input until:   
   >>>>>>>>>>>>>>>>>> (a) Detects a non-terminating behavior pattern:   
   >>>>>>>>>>>>>>>>>> abort simulation and return 0.   
   >>>>>>>>>>>>>>>>>> (b) Simulated input reaches its simulated "return"   
   >>>>>>>>>>>>>>>>>> statement:   
   >>>>>>>>>>>>>>>>>> return 1.   
    >>>>>>>>>>>>>>>>>> (c) If HHH must abort its simulation to prevent its
    >>>>>>>>>>>>>>>>>> own non-termination
   >>>>>>>>>>>>>>>>>> then HHH is correct to abort this simulation and   
   >>>>>>>>>>>>>>>>>> return 0.   
   >>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>> These conditions make HHH not a halt decider because   
   >>>>>>>>>>>>>>>>> they are incompatible with the requirements:   
   >>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>> It is perfectly compatible with those requirements   
   >>>>>>>>>>>>>>>> except in the case where the input calls its own   
   >>>>>>>>>>>>>>>> simulating halt decider.   
   >>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>> In other words, not compatible. No "except".   
   >>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>>   
    >>>>>>>>>>>>>>>>> Given any algorithm (i.e. a fixed immutable sequence of
    >>>>>>>>>>>>>>>>> instructions) X described as ⟨X⟩ with input Y:
   >>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>> A solution to the halting problem is an algorithm H   
   >>>>>>>>>>>>>>>>> that computes the following mapping:   
   >>>>>>>>>>>>>>>>>   
    >>>>>>>>>>>>>>>>> (⟨X⟩,Y) maps to 1 if and only if X(Y) halts when
    >>>>>>>>>>>>>>>>> executed directly
    >>>>>>>>>>>>>>>>> (⟨X⟩,Y) maps to 0 if and only if X(Y) does not halt
    >>>>>>>>>>>>>>>>> when executed directly
   >>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>>   
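[Editor's note: the requirement quoted above can be restated compactly as a sketch in standard notation, where ⟨X⟩ denotes the finite-string description of algorithm X:]

\[
H(\langle X\rangle, Y) =
\begin{cases}
1 & \text{if } X(Y) \text{ halts when executed directly,}\\
0 & \text{if } X(Y) \text{ does not halt when executed directly.}
\end{cases}
\]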
   >>>>>>>>>>>>>>>>>>   
    >>>>>>>>>>>>>>>>>> typedef int (*ptr)();
    >>>>>>>>>>>>>>>>>> int HHH(ptr P);
    >>>>>>>>>>>>>>>>>>
    >>>>>>>>>>>>>>>>>> int DD()
    >>>>>>>>>>>>>>>>>> {
    >>>>>>>>>>>>>>>>>>   int Halt_Status = HHH(DD);
    >>>>>>>>>>>>>>>>>>   if (Halt_Status)
    >>>>>>>>>>>>>>>>>>     HERE: goto HERE;
    >>>>>>>>>>>>>>>>>>   return Halt_Status;
    >>>>>>>>>>>>>>>>>> }
    >>>>>>>>>>>>>>>>>>
    >>>>>>>>>>>>>>>>>> int main()
    >>>>>>>>>>>>>>>>>> {
    >>>>>>>>>>>>>>>>>>   HHH(DD);
    >>>>>>>>>>>>>>>>>> }
   >>>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>>> What value should HHH(DD) correctly return?   
   >>>>>>>>>>>>>>>>>>    
   >>>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>> Error: assumes it's possible to design HHH to get a   
   >>>>>>>>>>>>>>>>> correct answer.   
   >>>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>> HHH(DD) gets the correct answer within its set   
   >>>>>>>>>>>>>>>> of assumptions / premises   
   >>>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>> Which is incompatible with the requirements for a halt   
   >>>>>>>>>>>>>>> decider:   
   >>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>>   
   >>>>>>>>>>>>>>   
   >>>>>>>>>>>>>> Yes, but the requirements for a halt decider are inconsistent   
   >>>>>>>>>>>>>> with reality.   
   >>>>>>>>>>>>>>   
   >>>>>>>>>>>>>   
   >>>>>>>>>>>>> In other words, you agree with Turing and Linz that the   
   >>>>>>>>>>>>> following requirements cannot be satisfied:   
   >>>>>>>>>>>>>   
   >>>>>>>>>>>>>   
   >>>>>>>>>>>>   
   >>>>>>>>>>>> Sure and likewise no Turing machine can   
   >>>>>>>>>>>> give birth to a real live fifteen story   
   >>>>>>>>>>>> office building. All logical impossibilities   
    >>>>>>>>>>>> are exactly equally logically impossible.
   >>>>>>>>>>>>   
   >>>>>>>>>>>   
   >>>>>>>>>>> So we're in agreement: no algorithm exists that can tell us   
   >>>>>>>>>>> if any arbitrary algorithm X with input Y will halt when   
    >>>>>>>>>>> executed directly, as proven by Turing and Linz.
   >>>>>>>>>>   
   >>>>>>>>>> In exactly the same way that: "this sentence is not true"   
   >>>>>>>>>> cannot be proven true or false. It is a bogus decision   
   >>>>>>>>>> problem anchored in   
   >>>>>>>>   
   >>>>>>>> a fundamentally incorrect notion of truth.   
   >>>>>>>>> The false assumption that such an algorithm *does* exist.   
   >>>>>>>>   
   >>>>>>>> Can we correctly say that the color of your car is fifteen feet   
   >>>>>>>> long?   
    >>>>>>>> For the body of analytical truth, coherence is the key,
    >>>>>>>> and incoherence rules out truth.
   >>>>>>>>   
   >>>>>>>   
   >>>>>>> There is nothing incoherent about wanting to know if any   
   >>>>>>> arbitrary algorithm X with input Y will halt when executed directly.   
   >>>>>>>   
   >>>>>>   
   >>>>>> Tarski stupidly thought this exact same sort of thing.   
    >>>>>> If a truth predicate existed then it could tell whether
    >>>>>> the Liar Paradox is true or false. Since it cannot,
    >>>>>> there must be no truth predicate.
   >>>>>   
   >>>>> Correct. If you understood proof by contradiction you wouldn't be   
   >>>>> questioning that.   
   >>>>>   
   >>>>   
   >>>> It looks like ChatGPT 5.0 is the winner here.   
   >>>> It understood that requiring HHH to report on   
   >>>> the behavior of the direct execution of DD()   
   >>>> is requiring a function to report on something   
   >>>> outside of its domain.   
   >>>   
   >>> False. It is proven true by the meaning of the words that a finite   
   >>> string description of a Turing machine specifies all semantic   
   >>> properties of the machine it describes, including whether that   
   >>> machine halts when executed directly.   
   >>>   
   >>   
    >> ChatGPT 5.0 was the first LLM able to prove
    >> that claim counter-factual.
   >   
   > Ah, so you don't believe in semantic tautologies?   
   >   
      
   *They are the foundation of this whole system*   
   Any system of reasoning that begins with a consistent   
   system of stipulated truths and only applies the truth   
   preserving operation of semantic logical entailment to   
   this finite set of basic facts inherently derives a   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)