
Forums before their death by AOL, social media, and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 58,015 of 59,235   
   olcott to Mike Terry   
   Re: Updated input to LLM systems proving   
   12 Oct 25 20:49:43   
   
   XPost: comp.theory   
   From: polcott333@gmail.com   
      
   On 10/12/2025 12:06 PM, Mike Terry wrote:   
   > On 12/10/2025 16:53, Bonita Montero wrote:   
   >> Sorry, that's silly. You spend half your life discussing the   
   >> same problem over and over again and never get to the end.   
   >   
   > This gives PO a narrative he can hold on to which gives his life a   
   > meaning:  he is the heroic world-saving unrecognised genius, constantly   
   > struggling against "the system" right up to his final breath!  If he   
   > were to suddenly realise he was just a deluded dumbo who had wasted most   
   > of his life arguing over a succession of mistakes and misunderstandings   
   > on his part, and had never contributed a single idea of any academic   
   > value, would his life be better?  I think not.   
   >   
   > Thankfully he has recently discovered chatbots who can give him the   
   > uncritical approval he craves,   
      
   Clearly you have not kept up with the current state   
   of the technology.   
      
   LLM systems have become 67-fold more powerful in that   
   their context window has increased from 3,000 words to   
   200,000 words in the past year.   
      
   They seem to be very reliable at applying semantic   
   logical entailment to a set of premises. This seems   
   to totally prevent any hallucination.   
      
   It is like talking to a guy with a 160 IQ who knows   
   the subject of computer theory and practice like a PhD.   
      
   It went from barely understanding my most basic proof   
   to being able to accurately critique all of my work on   
   how I apply an extension of Kripke   
      
   https://files.commons.gc.cuny.edu/wp-content/blogs.dir/1358/file/2019/04/Outline-of-a-Theory-of-Truth.pdf   
      
   to Gödel, Tarski, the Liar Paradox and the Halting   
   problem in a single conversation. I now have Kripke   
   as the anchor of my ideas.   
      
   > so there is next to no chance of that   
   > happening now.  [Assuming they don't suddenly get better, to the point   
   > where they can genuinely analyse and criticise his claims in the way we   
   > do...  Given how they currently work, I don't see that happening any   
   > time soon.]   
   >   
   > Would the lives of other posters here be better?  That's a trickier   
   > question.   
   >   
   >   
   > Mike.   
   >   
   >>   
   >> Am 12.10.2025 um 15:50 schrieb olcott:   
   >>> Please think this all the way through without making any guesses   
   >>>   
   >>>    
   >>> Simulating Termination Analyzer HHH correctly simulates its input until:   
   >>> (a) Detects a non-terminating behavior pattern:   
   >>>      abort simulation and return 0.   
   >>> (b) Simulated input reaches its simulated "return" statement:   
   >>>      return 1.   
   >>> (c) If HHH must abort its simulation to prevent its own non-termination   
   >>>      then HHH is correct to abort this simulation and return 0.   
   >>>   
   >>> typedef int (*ptr)();   
   >>> int HHH(ptr P);   
   >>>   
   >>> int DD()   
   >>> {   
   >>>    int Halt_Status = HHH(DD);   
   >>>    if (Halt_Status)   
   >>>      HERE: goto HERE;   
   >>>    return Halt_Status;   
   >>> }   
   >>>   
   >>> int main()   
   >>> {   
   >>>    HHH(DD);   
   >>> }   
   >>>   
   >>> What value should HHH(DD) correctly return?   
   >>>    
   >>>   
   >>   
      
      
   --   
   Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius   
   hits a target no one else can see." Arthur Schopenhauer   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca