   olcott to Richard Damon   
   Re: Respect [was: The halting problem as   
   20 Jul 25 09:30:08   
   
   XPost: comp.theory, sci.logic   
   From: polcott333@gmail.com   
      
   On 7/20/2025 6:13 AM, Richard Damon wrote:   
   > On 7/19/25 11:20 PM, olcott wrote:   
   >> On 7/19/2025 9:12 PM, Richard Damon wrote:   
   >>> On 7/19/25 5:18 PM, olcott wrote:   
   >>>> On 7/19/2025 4:00 PM, Alan Mackenzie wrote:   
   >>>>> Mike Terry wrote:   
   >>>>>   
   >>>>> [ .... ]   
   >>>>>   
   >>>>>> ps. learn to post more respectfully.   
   >>>>>   
   >>>>> You've hit the nail on the head, there. Peter Olcott doesn't show   
   >>>>> respect here for anybody. Because of this he isn't shown any respect   
   >>>>> back - he hasn't earned any. I don't think he understands the concept   
   >>>>> of respect any more than he understands the concept of truth.   
   >>>>>   
>>>>> If he were to show respect, he'd respect knowledge, truth, and learning,
   >>>>> and strive to acquire these qualities. Instead he displays   
   >>>>> contempt for   
   >>>>> them. This is a large part of what makes him a crank. It is   
   >>>>> a large part of what makes it such a waste of time trying to correct   
   >>>>> him, something that you've sensibly given up.   
   >>>>>   
   >>>>   
>>>> Now that chat bots have proven that they understand
>>>> what I am saying, I can rephrase my words to be
>>>> clearer.
   >>>>   
   >>>   
>>> They have done no such thing, because they can't.
>>>
>>> Since you feed them lies, all you have done is shown that you think
>>> lies are valid logic.
   >>>   
>>>> I have been rude because I cannot interpret the
>>>> rebuttal to this statement as anything besides
>>>> a despicable lie told for the sole purpose of the
>>>> sadistic pleasure of gaslighting:
   >>>   
   >>> Because you are just too stupid.   
   >>>   
>>> How is the "pattern" that HHH detects a non-halting pattern, when
>>> non-halting is DEFINED by the behavior of the directly executed
>>> machine, and the pattern you are thinking of exists in the execution
>>> of the DDD that halts because it was built on the same HHH you claim
>>> is correct to return 0?
>>>
>>> Thus, your claim *IS* just a lie, and you show your ignorance by
>>> saying you can't understand how it is one.
   >>>   
   >>>>   
   >>>>    
>>>> typedef void (*ptr)();
>>>> int HHH(ptr P);
>>>>
>>>> void DDD()
>>>> {
>>>>   HHH(DDD);
>>>>   return;
>>>> }
>>>>
>>>> int main()
>>>> {
>>>>   HHH(DDD);
>>>>   DDD();
>>>> }
   >>>>   
   >>>> Termination Analyzer HHH simulates its input until   
   >>>> it detects a non-terminating behavior pattern. When   
   >>>> HHH detects such a pattern it aborts its simulation   
   >>>> and returns 0.   
   >>>>    
   >>>>   
   >>>> Every chatbot figures out on its own that HHH   
   >>>> correctly rejects DDD as non-terminating because   
   >>>> the input to HHH(DDD) specifies recursive simulation.   
   >>>>   
   >>>   
>>> BECAUSE YOU LIE TO THEM, and a prime training parameter is to give an
>>> answer the user is apt to like, and thus they will tend to just accept
>>> the lies and errors provided.
   >>>   
   >>   
   >> I only defined the hypothetical possibility of a simulating   
   >> termination analyzer. This cannot possibly be a lie. They   
   >> figured out all the rest on their own.   
   >   
   > No, you stated that it DOES something that it doesn't.   
   >   
      
Unlike a halt decider, which must be correct
on every input, a simulating termination analyzer
only needs to be correct on at least one input.
      
   void Infinite_Recursion()   
   {   
    Infinite_Recursion();   
   }   
      
   void Infinite_Loop()   
   {   
    HERE: goto HERE;   
    return;   
   }   
      
   void Infinite_Loop2()   
   {   
   L1: goto L3;   
   L2: goto L1;   
   L3: goto L2;   
   }   
      
   HHH correctly determines the halt status of   
   the above three functions.   
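
As a rough illustration of how a simulator can notice such
patterns (a minimal sketch, not the HHH in Halt7.obj): for a
function that takes no input and touches no data, revisiting
the same control state during simulation proves that it can
never halt. The arrays below are hypothetical encodings of
the three functions as tiny control-flow graphs.

#include <stdio.h>

#define MAX_STATES 8
#define HALT (-1)

/* next[s] = successor of control state s; HALT means "return" */
static int Detects_Nontermination(const int next[], int start)
{
  int seen[MAX_STATES] = {0};
  int s = start;
  while (s != HALT)
  {
    if (seen[s]) return 1;  /* same state seen twice: can never halt */
    seen[s] = 1;
    s = next[s];
  }
  return 0;                 /* the simulation reached a return */
}

int main()
{
  int loop1[] = { 0 };        /* HERE: goto HERE;       */
  int loop2[] = { 2, 0, 1 };  /* L1->L3, L3->L2, L2->L1 */
  int recur[] = { 0 };        /* a self-call with no conditional
                                 escape looks the same here */
  printf("%d %d %d\n",        /* prints 1 1 1 */
    Detects_Nontermination(loop1, 0),
    Detects_Nontermination(loop2, 0),
    Detects_Nontermination(recur, 0));
  return 0;
}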
      
   > Also, you imply that your "input" isn't the input that actually needs to   
   > be given, as without the code of the specific HHH that this DDD calls,   
   > no Simulating Halt Decider could do the simulation that you talk about.   
   >   
      
Your brain damage causes you to keep forgetting
that DDD has access to all of the machine code
in Halt7.obj. I have told you this dozens of times
and you have already forgotten by the time you reply.
      
> It should be noted that it is a well-known property of Artificial
> Intelligence, and in particular of Large Language Models, that they are
> built not to give a "correct" answer, but an answer the user will like.
> And thus they will pick up on the subtle clues of how things are worded
> to give the response that seems to be desired, even if it is just wrong.
   >   
      
That is an incorrect assessment of how LLM systems work,
and you can't show otherwise because you are wrong.
      
> When you add to the input the actual definition of "Non-Halting", as
> being that the execution of the program or its complete simulation will
> NEVER halt, even if carried out to an unbounded number of steps, they
> will give a different answer.
   >   
      
   This is a whole other issue that I have addressed.   
      
They figured out on their own that if DDD were correctly
simulated by HHH for an infinite number of steps, then
DDD would never stop running.
      
   > If you disagree with that definition, then you are admitting that you   
   > don't know the meaning of the terms-of-art of the system, but are just   
   > admitting to being the lying bastard that you are.   
   >   
      
   Two different LLM systems both agree that the halting   
   problem definition is wrong.   
      
      
    The standard proof assumes a decider   
    H(M,x) that determines whether machine   
    M halts on input x.   
      
 But this formulation is flawed, because
 Turing machines can only process finite
 encodings (e.g. ⟨M⟩), not executable entities
 like M.
      
    So the valid formulation must be   
    H(⟨M⟩,x), where ⟨M⟩ is a string.   
      
      
   You cannot point out any error with that because   
   it is correct.   
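
For reference, the standard textbook statement of the problem
is given over exactly such encodings; in the usual notation:

  \[
  \mathrm{HALT}_{\mathrm{TM}} =
    \{\, \langle M, x \rangle \mid
       M \text{ is a Turing machine and } M \text{ halts on input } x \,\}
  \]

so a decider H for this set receives the finite string ⟨M,x⟩
as its input, never the executable machine M itself.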
      
   >   
   >>   
>>> All you are doing is showing you don't understand how Artificial
>>> Intelligence actually works, showing your Natural Stupidity.
   >>   
   >> That they provided all of the reasoning why DDD correctly   
   >> simulated by HHH does not halt proves that they do have   
   >> the functional equivalent of human understanding.   
   >   
   > But the problem is that your HHH that answers doesn't do a correct   
   > simulation.   
   >   
      
All simulating termination analyzers only predict
what would happen if they did a complete simulation
of non-terminating inputs.

It has always been completely nuts to require a
non-terminating input to be simulated until its
non-existent completion.
      
   > Yes, if *THE* HHH is one that correctly simulates the input (that has   
   > been fixed to include the code of HHH) then that simulation will not   
   > halt and be non-halting, but that HHH never answers.   
   >   
      
   Correctly *predict* the behavior of unlimited simulation,   
   not actually do an infinite simulation.   
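
A toy sketch of that predict-then-abort idea (the names
Analyzer and MAX_NESTING are placeholders, not the code in
Halt7.obj, and a direct call stands in for instruction-level
simulation): the analyzer counts how deeply its input
re-invokes it and aborts at a fixed nesting bound instead of
simulating forever.

#include <stdio.h>

typedef void (*ptr)();

#define MAX_NESTING 2    /* hypothetical abort threshold */

static int nesting = 0;  /* depth of re-invocations      */
static int aborted = 0;  /* set once the pattern is seen */

/* Returns 0 when the recursive-simulation pattern is seen,
   1 when the simulated input reaches its own return. */
int Analyzer(ptr P)
{
  if (++nesting >= MAX_NESTING)
    aborted = 1;  /* pattern seen: abort, predict non-halting */
  else
    P();          /* direct call standing in for simulation   */
  nesting--;
  return aborted ? 0 : 1;
}

void DDD()
{
  Analyzer(DDD);
  return;
}

int main()
{
  printf("Analyzer(DDD) = %d\n", Analyzer(DDD)); /* prints 0 */
  return 0;
}

Note that once this toy analyzer aborts, the program itself
halts; whether the returned 0 is then a correct prediction is
exactly the question the two sides here read differently.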
      
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)