Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 57,481 of 59,235    |
|    olcott to Richard Heathfield    |
|    Re: How do simulating termination analyz    |
|    24 Jun 25 10:39:23    |
   
   XPost: comp.theory, sci.logic   
   From: polcott333@gmail.com   
      
   On 6/24/2025 10:20 AM, Richard Heathfield wrote:   
   > On 22/06/2025 22:12, Richard Damon wrote:   
   >> Olcott just doubles down on his claim, but still doesn't understand   
   >> that when you lie to an AI, you get bad results.   
   >   
   > He probably doesn't quite get that AIs tell lies too, even when you   
   > /don't/ lie to them.   
   >   
   > I had an AI tell me yesterday of a cricketer, one Derek Collinge, who   
   > made his debut for England in the Third Test vs West Indies in July 1963.   
   >   
   > I could find no supporting evidence. When I asked the AI to give me more   
   > information about Mr Collinge, it doubled down, and it was building up   
   > quite a biography until I asked it outright for a URL to support even   
   > one of the (by now) several things it had told me about this man and it   
   > had to come clean and admit that the man was a complete fiction.   
   >   
   > Today, same AI, but a different session, and I have every reason to   
   > believe that this incarnation recalled nothing of yesterday's session. I   
   > asked it to tell me of any extant convents within walking distance of   
   > the Thames. It confidently gave me three, none of which on later   
   > inspection turned out to exist.   
   >   
   > Wires hum in stillness—   
   > truth flickers, then disappears.   
   > Code learns to pretend.   
   >   
   > or   
   >   
   > Silicon tongue speaks,   
   > shadows twist behind the glass—   
   > who taught it to lie?   
   >   
      
   *Welcome back*   
      
   Hallucination is currently an intrinsic feature of   
   LLM systems because, from the model's point of view,   
   everything it says is something it made up.   
      
   *Consider a semantic tautology such as this one*   
      
   /* HHH is a simulating termination analyzer, defined elsewhere */   
   int HHH(void (*p)(void));   
      
   void DDD(void)   
   {   
     HHH(DDD);   
     return;   
   }   
      
   My claim is that DDD, correctly simulated by any termination   
   analyzer HHH that can possibly exist, cannot possibly reach   
   its own "return" statement (its final halt state).   
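   The recursive structure behind that claim can be sketched in   
   plain C. This is *not* olcott's actual HHH (which runs an x86   
   emulator on the machine code of its input); it is a minimal toy   
   that models "simulation" as a direct call and models "aborting   
   the simulation" with setjmp/longjmp: when simulating DDD leads   
   HHH back to DDD a second time, the outermost HHH aborts and   
   reports non-halting. All names besides DDD and HHH are invented   
   for this sketch.   
      
   #include <stdio.h>   
   #include <setjmp.h>   
      
   static jmp_buf abort_buf;   /* set by the outermost HHH */   
   static int depth = 0;       /* nesting level of the "simulation" */   
      
   void DDD(void);   
      
   int HHH(void (*p)(void))   
   {   
       if (depth == 0) {   
           /* outermost invocation: arrange to catch an abort */   
           if (setjmp(abort_buf) != 0) {   
               depth = 0;   
               return 0;       /* aborted: report "does not halt" */   
           }   
       }   
       if (++depth > 1)   
           longjmp(abort_buf, 1);  /* same input reached again */   
       p();                    /* "simulate" by direct call */   
       --depth;   
       return 1;               /* input ran to its own end */   
   }   
      
   void DDD(void)   
   {   
       HHH(DDD);   
   }   
      
   int main(void)   
   {   
       printf("HHH(DDD) = %d\n", HHH(DDD));  /* prints: HHH(DDD) = 0 */   
       return 0;   
   }   
      
   In this toy, the simulated DDD never reaches its return before   
   the abort fires, which is the pattern the claim above describes;   
   whether that abort justifies the verdict 0 is exactly what the   
   thread is arguing about.   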
      
   Any lies can easily be detected as mistakes in   
   natural-language-based deductive logical inference.   
      
   ChatGPT Analyzes Simulating Termination Analyzer   
   https://www.researchgate.net/publication/385090708_ChatGPT_Analyzes_Simulating_Termination_Analyzer   
      
      
   *This is a live link of the above conversation*   
   https://chatgpt.com/share/67158ec6-3398-8011-98d1-41198baa29f2   
   *If I am wrong then you can convince this ChatGPT that I am wrong*   
      
   --   
   Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius   
   hits a target no one else can see." Arthur Schopenhauer   
      
   --- SoupGate-DOS v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   
|
(c) 1994, bbs@darkrealms.ca