Message 57,589 of 59,235   
   Richard Damon to olcott   
   Re: Respect [was: The halting problem as   
   20 Jul 25 19:28:43   
   
   XPost: comp.theory, sci.logic   
   From: richard@damon-family.org   
      
   On 7/20/25 10:30 AM, olcott wrote:   
   > On 7/20/2025 6:13 AM, Richard Damon wrote:   
   >> On 7/19/25 11:20 PM, olcott wrote:   
   >>> On 7/19/2025 9:12 PM, Richard Damon wrote:   
   >>>> On 7/19/25 5:18 PM, olcott wrote:   
   >>>>> On 7/19/2025 4:00 PM, Alan Mackenzie wrote:   
   >>>>>> Mike Terry wrote:   
   >>>>>>   
   >>>>>> [ .... ]   
   >>>>>>   
   >>>>>>> ps. learn to post more respectfully.   
   >>>>>>   
   >>>>>> You've hit the nail on the head, there. Peter Olcott doesn't show   
   >>>>>> respect here for anybody. Because of this he isn't shown any respect   
   >>>>>> back - he hasn't earned any. I don't think he understands the   
   >>>>>> concept   
   >>>>>> of respect any more than he understands the concept of truth.   
   >>>>>>   
>>>>>> If he were to show respect, he'd respect knowledge, truth, and
   >>>>>> learning,   
   >>>>>> and strive to acquire these qualities. Instead he displays   
   >>>>>> contempt for   
   >>>>>> them. This is a large part of what makes him a crank. It is   
   >>>>>> a large part of what makes it such a waste of time trying to correct   
   >>>>>> him, something that you've sensibly given up.   
   >>>>>>   
   >>>>>   
   >>>>> Now that chat bots have proven that they understand   
   >>>>> what I am saying I can rephrase my words to be more   
   >>>>> clear.   
   >>>>>   
   >>>>   
   >>>> They have done no such thing, because they can't   
   >>>>   
>>>> Since you feed them lies, all you have done is shown that you think
   >>>> lies are valid logic.   
   >>>>   
   >>>>> I have been rude because I cannot interpret the   
   >>>>> rebuttal to this statement as anything besides   
   >>>>> a despicable lie for the sole purpose of sadistic   
   >>>>> pleasure of gaslighting:   
   >>>>   
   >>>> Because you are just too stupid.   
   >>>>   
>>>> How is the "pattern" that HHH detects a non-halting pattern, when
>>>> non-halting is DEFINED by the behavior of the directly executed
>>>> machine, and the pattern you are thinking of exists in the execution
>>>> of the DDD that halts, because it was built on the same HHH you claim
>>>> is correct to return 0?
   >>>>   
>>>> Thus, your claim *IS* just a lie, and you show your ignorance by
>>>> saying you can't understand how it is one.
   >>>>   
   >>>>>   
   >>>>>    
   >>>>> typedef void (*ptr)();   
   >>>>> int HHH(ptr P);   
   >>>>>   
>>>>> void DDD()
>>>>> {
>>>>>   HHH(DDD);
>>>>>   return;
>>>>> }
>>>>>
>>>>> int main()
>>>>> {
>>>>>   HHH(DDD);
>>>>>   DDD();
>>>>> }
   >>>>>   
   >>>>> Termination Analyzer HHH simulates its input until   
   >>>>> it detects a non-terminating behavior pattern. When   
   >>>>> HHH detects such a pattern it aborts its simulation   
   >>>>> and returns 0.   
   >>>>>    
   >>>>>   
   >>>>> Every chatbot figures out on its own that HHH   
   >>>>> correctly rejects DDD as non-terminating because   
   >>>>> the input to HHH(DDD) specifies recursive simulation.   
   >>>>>   
   >>>>   
>>>> BECAUSE YOU LIE TO THEM, and a prime training objective is to give
>>>> an answer the user is apt to like, so they will tend to just accept
>>>> the lies and errors they are provided.
   >>>>   
   >>>   
   >>> I only defined the hypothetical possibility of a simulating   
   >>> termination analyzer. This cannot possibly be a lie. They   
   >>> figured out all the rest on their own.   
   >>   
   >> No, you stated that it DOES something that it doesn't.   
   >>   
   >   
> Unlike a halt decider that must be correct
> on every input, a simulating termination analyzer
> only needs to be correct on at least one input.
      
   WRONG. (Full details in another post)   
      
That you can't show a reliable source for your definition just proves
that you make things up, because you are a pathological liar.
      
   You may have THOUGHT you read that somewhere, but that was probably   
   because you misunderstood what you were reading or took something out of   
   context.   
      
   >   
> void Infinite_Recursion()
> {
>   Infinite_Recursion();
> }
>
> void Infinite_Loop()
> {
> HERE: goto HERE;
>   return;
> }
>
> void Infinite_Loop2()
> {
> L1: goto L3;
> L2: goto L1;
> L3: goto L2;
> }
   >   
   > HHH correctly determines the halt status of   
   > the above three functions.   
      
   So?   
      
   Proof by example of a universal condition is just a logical fallacy.   
      
   Shows you don't understand how logic works.   
      
   >   
   >> Also, you imply that your "input" isn't the input that actually needs   
   >> to be given, as without the code of the specific HHH that this DDD   
   >> calls, no Simulating Halt Decider could do the simulation that you   
   >> talk about.   
   >>   
   >   
> Your brain damage causes you to keep forgetting
> that DDD has access to all of the machine code
> in Halt7.obj. I told you this dozens of times
> and you have already forgotten by the time you reply.
      
And thus you are admitting that all of that is "part of the input",
since, in computability theory, programs can only access things that
are part of their input.
      
It seems you just don't understand that.
      
   >   
>> It should be noted that it is a well known property of Artificial
>> Intelligence systems, and in particular Large Language Models, that
>> they are built not to give a "correct" answer, but an answer the user
>> will like. And thus they will pick up on the subtle clues of how
>> things are worded to give the response that seems to be desired, even
>> if it is just wrong.
   >>   
   >   
   > That is an incorrect assessment of how LLM systems work   
   > and you can't show otherwise because you are wrong.   
      
Look at the training regimen and the trainer instructions.
      
   The people "grading" the results were often specifically told not to   
   worry about if the details were factually correct, just if the answer   
   seemed plausible.   
      
Maybe you just never looked into the full process of building an AI. The
issue is that it is a very abstract process, and many of the mechanisms
are not well understood.
      
   >   
>> When you add to the input the actual definition of "Non-Halting", as
>> being that the execution of the program or its complete simulation
>> will NEVER halt, even if carried out to an unbounded number of steps,
>> they will give a different answer.
   >>   
   >   
   > This is a whole other issue that I have addressed.   
   >   
   > They figured out on their own that if DDD was correctly   
   > simulated by HHH for an infinite number of steps that   
   > DDD would never stop running.   
      
   But that is a different input.   
      
The problem is you LIED about the input, because you omitted that DDD
and HHH have access to a fixed Halt7.c file, which actually means that
there is no HHH that simulates for an infinite number of steps: the
definition of HHH was fixed by your problem specification, as it is in
that fixed Halt7.c.
      
   >   
>> If you disagree with that definition, then you are admitting that you
>> don't know the meaning of the terms-of-art of the system, and are just
>> the lying bastard that you are.
   >>   
   >   
   > Two different LLM systems both agree that the halting   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)