[   home   |   bbs   |   files   |   messages   ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 57,678 of 59,235   
   olcott to Richard Damon   
   Re: I have just proven the error of all    
   27 Jul 25 16:46:24   
   
   XPost: comp.theory, sci.logic   
   From: polcott333@gmail.com   
      
   On 7/27/2025 4:31 PM, Richard Damon wrote:   
   > On 7/27/25 4:28 PM, olcott wrote:   
   >> On 7/27/2025 2:58 PM, Richard Damon wrote:   
   >>> On 7/27/25 9:50 AM, olcott wrote:   
   >>>> On 7/27/2025 6:11 AM, Richard Damon wrote:   
   >>>>> On 7/26/25 10:43 PM, olcott wrote:   
   >>>>>> When HHH(DDD) simulates DDD it also simulates itself   
   >>>>>> simulating DDD because DDD calls HHH(DDD).   
   >>>>>   
   >>>>> But can only do that if HHH is part of its input, or it is not   
   >>>>> simulating its input.   
   >>>>>   
   >>>>> And, it FAILS at simulating itself, as it concludes that HHH(DDD)   
   >>>>> will never return, when it does.   
   >>>>>   
   >>>>   
   >>>> This ChatGPT analysis of its input below   
   >>>> correctly derives both of our views. I did   
   >>>> not bias this analysis by telling ChatGPT   
   >>>> what I expect to see.   
   >>>>   
   >>>> typedef void (*ptr)();   
   >>>> int HHH(ptr P);   
   >>>>   
   >>>> void DDD()   
   >>>> {   
   >>>>    HHH(DDD);   
   >>>>    return;   
   >>>> }   
   >>>>   
   >>>> int main()   
   >>>> {   
   >>>>    HHH(DDD);   
   >>>>    DDD();   
   >>>> }   
   >>>>   
   >>>> Simulating Termination Analyzer HHH correctly simulates its input   
   >>>> until:   
   >>>> (a) it detects a non-terminating behavior pattern, in which case   
   >>>> it aborts its simulation and returns 0, or   
   >>>> (b) its simulated input reaches its simulated "return" statement,   
   >>>> in which case it returns 1.   
   >>>>   
   >>>> https://chatgpt.com/share/688521d8-e5fc-8011-9d7c-0d77ac83706c   
   >>>>   
   >>>   
   >>> That just proves that you have contaminated its learning with   
   >>> false ideas about programs.   
   >>>   
   >>   
   >> I made sure that ChatGPT isolates this conversation   
   >> from everything else that I ever said. Aside from telling   
   >> ChatGPT about the possibility of a simulating termination   
   >> analyzer (which I have proved works on some inputs),   
   >> it figured out all the rest on its own, without any   
   >> prompting from me.   
   >>   
   >   
   > You CAN'T totally isolate it. You can tell it not to use what you have   
   > told it previously (which you did not do),   
      
   ChatGPT's setting to remember prior   
   conversations is turned off:   
      
   My Account > Settings > Personalization > Memory > Reference saved memories   
   This is important because I need to know the   
   minimum basis it needs in order to understand   
   what I said, so that I can confirm that I have   
   no gaps in my reasoning.   
      
   > but anything said to the AI   
   > has a chance of being recorded and used for future training.   
   >   
      
   During periodic updates.   
      
   > Just think: you might be the one responsible for providing the lies that   
   > future AIs have decided to accept, ruining the chance of some future   
   > breakthrough.   
      
   The above input that I provided has zero falsehoods.   
   ChatGPT figured out all of the reasoning from that   
   basis.   
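   
   [Editor's sketch] The two-branch HHH specification quoted above can
   be approximated as a toy C program. This is an illustration under
   heavy assumptions, NOT the actual x86-simulating HHH: direct
   execution stands in for the simulation, a nesting counter stands in
   for non-termination pattern detection, and longjmp stands in for
   aborting the simulation mid-run.
   
   ```c
   /* Toy sketch of the described HHH behavior (editor's assumptions,
      not olcott's actual implementation). */
   #include <setjmp.h>
   #include <stdio.h>
   
   typedef void (*ptr)(void);
   
   static jmp_buf abort_env;   /* where an aborted "simulation" unwinds to */
   static int     depth = 0;   /* nesting level of HHH invocations */
   
   int HHH(ptr P)
   {
       if (depth > 0)              /* P has called HHH on itself again:  */
           longjmp(abort_env, 1);  /* treat as the non-terminating       */
                                   /* pattern; abort the outer run       */
       depth++;
       if (setjmp(abort_env) != 0) {
           depth = 0;
           return 0;               /* (a) pattern detected: return 0     */
       }
       P();                        /* "simulate" P by running it         */
       depth--;
       return 1;                   /* (b) P reached its return statement */
   }
   
   void DDD(void)
   {
       HHH(DDD);
       return;
   }
   
   int main(void)
   {
       printf("HHH(DDD) = %d\n", HHH(DDD));  /* prints 0: HHH's verdict  */
       DDD();                                /* yet DDD itself returns   */
       printf("DDD() halted\n");
       return 0;
   }
   ```
   
   Run directly, this toy prints "HHH(DDD) = 0" and then "DDD() halted",
   reproducing both views in the thread: HHH reports non-termination,
   while a direct call to DDD nonetheless halts.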
      
   --   
   Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius   
   hits a target no one else can see." Arthur Schopenhauer   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca