home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 57,586 of 59,235   
   Richard Damon to olcott   
   Re: Respect [was: The halting problem as   
   20 Jul 25 19:13:10   
   
   XPost: comp.theory, sci.logic   
   From: richard@damon-family.org   
      
   On 7/20/25 10:08 AM, olcott wrote:   
   > On 7/20/2025 2:38 AM, Fred. Zwarts wrote:   
   >> Op 20.jul.2025 om 05:20 schreef olcott:   
   >>> On 7/19/2025 9:12 PM, Richard Damon wrote:   
   >>>> On 7/19/25 5:18 PM, olcott wrote:   
   >>>>> On 7/19/2025 4:00 PM, Alan Mackenzie wrote:   
   >>>>>> Mike Terry  wrote:   
   >>>>>>   
   >>>>>> [ .... ]   
   >>>>>>   
   >>>>>>> ps. learn to post more respectfully.   
   >>>>>>   
   >>>>>> You've hit the nail on the head, there.  Peter Olcott doesn't show   
   >>>>>> respect here for anybody.  Because of this he isn't shown any respect   
   >>>>>> back - he hasn't earned any.  I don't think he understands the   
   >>>>>> concept   
   >>>>>> of respect any more than he understands the concept of truth.   
   >>>>>>   
   >>>>>> If he were to show respect, he'd respect knowledge, truth, and   
   >>>>>> learning,   
   >>>>>> and strive to acquire these qualities.  Instead he displays   
   >>>>>> contempt for   
   >>>>>> them.  This is a large part of what makes him a crank.  It is   
   >>>>>> a large part of what makes it such a waste of time trying to correct   
   >>>>>> him, something that you've sensibly given up.   
   >>>>>>   
   >>>>>   
   >>>>> Now that chat bots have proven that they understand   
   >>>>> what I am saying I can rephrase my words to be more   
   >>>>> clear.   
   >>>>>   
   >>>>   
   >>>> They have done no such thing, because they can't   
   >>>>   
   >>>> Since you feed them lies, all you have done is shown that you think   
   >>>> lies are valid logic.   
   >>>>   
   >>>>> I have been rude because I cannot interpret the   
   >>>>> rebuttal to this statement as anything besides   
   >>>>> a despicable lie for the sole purpose of sadistic   
   >>>>> pleasure of gaslighting:   
   >>>>   
   >>>> Because you are just too stupid.   
   >>>>   
   >>>> How is the "pattern" that HHH detects a non-halting pattern, when   
   >>>> non-halting is DEFINED by the behavior of the directly executed   
   >>>> machine, and the pattern you are thinking of exists in the execution   
   >>>> of the DDD that halts because it was built on the same HHH you claim   
   >>>> is correct to return 0?   
   >>>>   
   >>>> Thus, your claim *IS* just a lie, and you show your ignorance by   
   >>>> saying you can't understand how it is one.   
   >>>>   
   >>>>>   
   >>>>>    
   >>>>> typedef void (*ptr)();   
   >>>>> int HHH(ptr P);   
   >>>>>   
   >>>>> void DDD()   
   >>>>> {   
   >>>>>    HHH(DDD);   
   >>>>>    return;   
   >>>>> }   
   >>>>>   
   >>>>> int main()   
   >>>>> {   
   >>>>>    HHH(DDD);   
   >>>>>    DDD();   
   >>>>> }   
   >>>>>   
   >>>>> Termination Analyzer HHH simulates its input until   
   >>>>> it detects a non-terminating behavior pattern. When   
   >>>>> HHH detects such a pattern it aborts its simulation   
   >>>>> and returns 0.   
   >>>>>    
   >>>>>   
   >>>>> Every chatbot figures out on its own that HHH   
   >>>>> correctly rejects DDD as non-terminating because   
   >>>>> the input to HHH(DDD) specifies recursive simulation.   
   >>>>>   
   >>>>   
   >>>> BECAUSE YOU LIE TO THEM, and a prime training parameter is to give   
   >>>> an answer the user is apt to like, and thus they will tend to just   
   >>>> accept the lies and errors provided.   
   >>>>   
   >>>   
   >>> I only defined the hypothetical possibility of a simulating   
   >>> termination analyzer. This cannot possibly be a lie. They   
   >>> figured out all the rest on their own.   
   >>   
   >> No, you told it that a correct simulating termination analyser could be   
   >> presumed. Which is an invalid presumption, because it has been proven   
   >> that one cannot exist.   
   >>   
   >   
   > Unlike a halt decider, which must be correct   
   > on every input, a simulating termination analyzer   
   > only needs to be correct on at least one input.   
      
   Nope. Got a source for that definition?   
      
   Per your favorite source:   
   https://en.wikipedia.org/wiki/Termination_analysis   
      
   The difference between a Halt Decider and a Termination Analyzer is:   
      
      
   In computer science, termination analysis is program analysis which   
   attempts to determine whether the evaluation of a given program halts   
   for each input. This means to determine whether the input program   
   computes a total function.   
      
   And it is further described with:   
      
   It is closely related to the halting problem, which is to determine   
   whether a given program halts for a given input and which is   
   undecidable. The termination analysis is even more difficult than the   
   halting problem: the termination analysis in the model of Turing   
   machines as the model of programs implementing computable functions   
   would have the goal of deciding whether a given Turing machine is a   
   total Turing machine, and this problem is at level  of the arithmetical hierarchy and thus is strictly   
   more difficult than the halting problem.Note,   
   So, there is NOTHING about the base term being allowed to be just a   
   partial decider. It is a fact (as explained somewhat in the article)   
   that the field understands that a complete analysis isn't possible, and   
   talks about how well you can do.   
      
   >   
   > void Infinite_Recursion()   
   > {   
   >    Infinite_Recursion();   
   > }   
   >   
   > void Infinite_Loop()   
   > {   
   >    HERE: goto HERE;   
   >    return;   
   > }   
   >   
   > void Infinite_Loop2()   
   > {   
   > L1: goto L3;   
   > L2: goto L1;   
   > L3: goto L2;   
   > }   
   >   
   > HHH correctly determines the halt status of   
   > the above three functions.   
   >   
      
   So?   
      
   It seems you don't understand the error of proof by example. Just   
   because you get a simple problem right doesn't mean you are right on a   
   more complicated one.   
      
   >>>   
   >>>> All you are doing is showing you don't understand how Artificial   
   >>>> Intelligence actually works, showing your Natural Stupidity.   
   >>>   
   >>> That they provided all of the reasoning why DDD correctly   
   >>> simulated by HHH does not halt proves that they do have   
   >>> the functional equivalent of human understanding.   
   >>   
   >> The other error is the presumption that a simulation that does not   
   >> reach the end of the simulation is evidence for non-termination.   
   >   
   > *Incorrect paraphrase*   
   > Halting is defined as reaching a final halt state.   
      
   *INCORRECT PARAPHRASE*   
      
   Halting is defined as the PROGRAM / MACHINE reaching a final state.   
      
   >   
   > if the state is final, the machine just stops and continues no more.   
   > https://cs.stackexchange.com/questions/38228/what-is-halting   
   >   
   > When it is correctly predicted that   
   > an infinite simulation of the input   
   > cannot possibly reach its own "return"   
   > statement final halt state then the   
   > input is non-halting.   
      
   But it isn't correctly predicted, as Halting is about THE PROGRAM as   
   DIRECTLY EXECUTED, and you admit that this halts.   
      
   That the decider can't simulate the input has nothing to do with the   
   definition, showing that you just don't understand what you are talking   
   about.   
      
   >   
   >> It is not. An incomplete simulation is at best an indication that   
   >> other tools are needed to determine non-halting behaviour.   
   >>   
   >   
   > You cannot possibly coherently explain the details of this   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca