
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 58,016 of 59,235   
   olcott to Andrew Church   
   Re: Updated input to LLM systems proving   
   12 Oct 25 20:36:11   
   
   XPost: comp.theory   
   From: polcott333@gmail.com   
      
   On 10/12/2025 1:49 PM, Andrew Church wrote:   
   > On 10/12/25 12:04 PM, olcott wrote:   
   >> Also very important is that there is no chance of   
   >> AI hallucination when they are only reasoning   
   >> within a set of premises.  Some systems must be told:   
   >>   
   >> Please think this all the way through without making any guesses   
   >   
   > I don't mean to be rude, but that is a completely insane assertion to   
   > me. There is always a non-zero chance for an LLM to roll a bad token   
   > during inference and spit out garbage.   
      
   If it is provided the entire basis for reasoning,
   then it cannot simply make stuff up about that basis.
      
   > Sure, the top-p decoding strategy   
   > can help minimize such mistakes by pruning the token pool of the worst   
   > of the bad apples, but such models will never *ever* be foolproof. The   
   > price you pay for convincingly generating natural language is   
   > bulletproof reasoning.   
   >   
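   For reference, the top-p (nucleus) pruning mentioned above can be
   sketched in a few lines of Python. The function name and the toy
   five-token distribution are illustrative only, not taken from any
   particular system:

```python
import numpy as np

def top_p_filter(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p; renormalize so sampling happens only within that set."""
    order = np.argsort(probs)[::-1]       # token indices, most probable first
    cum = np.cumsum(probs[order])         # running cumulative probability
    cutoff = np.searchsorted(cum, p) + 1  # smallest prefix reaching p
    kept = order[:cutoff]
    kept_probs = probs[kept] / probs[kept].sum()
    return kept, kept_probs

# Toy example: one dominant token plus a long tail of unlikely ones.
probs = np.array([0.50, 0.25, 0.15, 0.07, 0.03])
kept, kept_probs = top_p_filter(probs, p=0.9)
# The two tail tokens are pruned, but sampling can still pick any of
# the three survivors -- the chance of an odd token is reduced, not zero.
```

   This illustrates the point in the quote: pruning trims the worst of
   the tail, but the surviving pool is still sampled stochastically.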
      
   LLM systems have become roughly 67-fold more powerful
   in one respect: their context window grew from about
   3,000 words to 200,000 words in the last year.
      
   They seem to be very reliable at applying semantic
   logical entailment to a set of premises. This does
   seem to totally prevent any hallucination.
      
   It is like talking to a guy with a 160 IQ who knows
   the subject of computer theory and practice like a PhD.
      
   > If you're interested in formalizing your ideas using cutting-edge tech,   
   > I encourage you to look at Lean 4. Once you provide a machine-checked   
   > proof in Lean 4 with no `sorry`/`axiom`/other cheats, come back. People   
   > might adopt a very different tone.   
   >   
   > Best of luck, you will need it.   
   >   
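   For what it's worth, a sorry-free Lean 4 proof can be as small as
   this (a toy theorem, chosen here only for illustration; it simply
   defers to the standard-library lemma `Nat.add_comm`):

```lean
-- A minimal machine-checked proof in Lean 4 with no `sorry`:
-- addition on the natural numbers is commutative.
theorem add_comm' (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

   The point of the suggestion in the quote is that Lean's kernel
   checks every step mechanically, which careful English cannot do.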
      
   https://leodemoura.github.io/files/CAV2024.pdf   
   LLMs can do the same thing with very carefully
   crafted English. My initial post provided an
   example of this.
      
      
   --   
   Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius   
   hits a target no one else can see." Arthur Schopenhauer   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca