Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 57,476 of 59,235    |
|    Richard Damon to olcott    |
|    Re: ChatGPT agrees that I have refuted t    |
|    23 Jun 25 19:43:37    |
   
   XPost: comp.theory, sci.logic, sci.math   
   From: richard@damon-family.org   
      
   On 6/23/25 10:30 AM, olcott wrote:   
   > On 6/23/2025 6:02 AM, Richard Damon wrote:   
   >> On 6/22/25 11:05 PM, olcott wrote:   
   >>> On 6/22/2025 9:11 PM, Richard Damon wrote:   
   >>>> On 6/22/25 10:05 PM, olcott wrote:   
   >>>>> A year ago ChatGPT increased its context limit   
   >>>>> from 4,000 to 128,000 tokens, so it now "understands"   
   >>>>> the complete proof of the DD example shown below.   
   >>>>>   
   >>>>> int DD()   
   >>>>> {   
   >>>>>   int Halt_Status = HHH(DD);   
   >>>>>   if (Halt_Status)   
   >>>>>     HERE: goto HERE;   
   >>>>>   return Halt_Status;   
   >>>>> }   
   >>>>>   
   >>>>> *This seems to be the complete HHH(DD) that includes HHH(DDD)*   
   >>>>> https://chatgpt.com/share/6857286e-6b48-8011-91a9-9f6e8152809f   
   >>>>>   
   >>>>> ChatGPT agrees that I have correctly refuted every halting   
   >>>>> problem proof technique that relies on the above pattern.   
   >>>>>   
   >>>>>   
   >>>>   
   >>>> Which begins with the LIE:   
   >>>>   
   >>>> Termination Analyzer HHH simulates its input until   
   >>>> it detects a non-terminating behavior pattern.   
   >>>>   
   >>>>   
   >>>   
   >>> ChatGPT does not know anything about my work besides   
   >>> what I told it on those 38 pages.   
   >>>   
   >>> Since I am stipulating the definition of a simulating   
   >>> termination analyzer, and this definition is coherent,   
   >>> the definition cannot possibly be incorrect.   
   >>>   
   >>   
   >> Right, so since you began with a LIE, its results are not based on FACTS.   
   >>   
   >   
   > Not at all. ChatGPT understands that a correct   
   > simulation does not mean a complete simulation   
   > of a non-terminating input. If you read the 38   
   > pages you will see this.   
   >   
   >> By "Stipulating" your definition, you are just declaring that your   
   >> work has nothing to do with the actual Halting Problem, because your   
   >> "definition" is inconsistent and based on a LIE.   
   >>   
   >   
   > ChatGPT immediately recognizes that DD is the halting   
   > problem proof counter-example without even being told.   
   >   
   >> Of course it is incoherent and incorrect, as it is based on the   
   >> improper presumption that there DOES exist a set of patterns that can   
   >> correctly determine if a program will never halt.   
   >>   
   >   
   > If you knew as much as a CS grad you would be able to   
   > figure out what the pattern is yourself and see that   
   > it exists.   
   >   
   >> In particular, the pattern you are claiming to use is part of the   
   >> Halting Problem programs D, DD, and DDD, so it is BY DEFINITION incorrect.   
   >>   
   >   
   > If you read the 38 pages you will see how this is incorrect.   
   > ChatGPT "understands" that any program that must be aborted   
   > at some point to prevent its infinite execution is not a   
   > halting program.   
      
   LLMs do not "understand" anything. They are just Markov-style chains   
   predicting the most likely next symbol from the preceding context.   
      
   >   
   > int main()   
   > {   
   >   DD(); // calls HHH(DD) that must abort its simulation   
   > }       // or the directly executed DD() will never stop running.   
      
   Right, and since HHH(DD) aborts its processing and returns 0 to DD,   
   that DD will halt, and thus the answer is wrong.   
      
   You stupidly confuse the hypothetical HHH that you are thinking of, with   
   the actual HHH that is there.   
      
   Yes, a DD' that calls an HHH'(DD') that doesn't abort will be   
   non-halting, but the real HHH isn't given DD'; it is given DD.   
      
   >   
   >> Sorry, your problem is you are so stupid and brain damaged that you   
   >> are believing your own lies.   
   >>   
   >   
   > *If that was true then you could convince ChatGPT of that*   
   > ChatGPT analysis of HHH(DDD) only 12 pages long   
   > https://chatgpt.com/share/67158ec6-3398-8011-98d1-41198baa29f2   
      
   I did that once. I took one of your sessions, added a couple of   
   sentences, and it admitted that HHH was wrong.   
      
   >   
   >> It seems you don't even understand the ground rules for how logic works.   
   >   
   > Only scatterbrained nonsense believes that a non-terminating   
   > input must be simulated until it terminates.   
   >   
      
   Since that is the CONSEQUENCE of non-halting, we see who is the   
   scatterbrain.   
      
   Note, I never said the DECIDER needed to do that, only the correct   
   simulation of the input that the decider is reporting on.   
      
   It seems that concept is just too abstract for your wittle brain.   
      
   --- SoupGate-DOS v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   
|
(c) 1994, bbs@darkrealms.ca