
Forums, before their death by AOL, social media, and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 57,464 of 59,235   
   Doc O'Leary, to Richmond   
   Re: A conversation with ChatGPT's brain.   
   28 Apr 25 16:55:56   
   
   From: droleary.usenet@2023.impossiblystupid.com   
      
   For your reference, records indicate that   
   Richmond wrote:   
      
   > Doc O'Leary writes:   
   >   
   > > Again, *all* the output is hallucinations, whether you realize/notice it   
   > > or not.  There is no mechanism for “thought” that allows it to   
   > > distinguish truth from fiction.   
   >   
   > Ah, so you have redefined hallucination to mean all output from an   
   > LLM. It's rather meaningless to use the word then.   
      
   Ha!  Blame the AI hype machine for making hallucination a “meaningless”   
   word.  Call it whatever you like, but the fact remains that these programs   
   give *incorrect answers* as part of their regular operation.  It’s not a   
   “bug” that occurs in certain conditions; it really *is* “all output” that   
   can be right or wrong, given with equal confidence.   
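   The "equal confidence" point can be sketched in a few lines of Python: a
   toy softmax over made-up next-token logits (the values and the prompt are
   hypothetical, chosen only for illustration). A factually true continuation
   and a false one are scored by exactly the same mechanism, and nothing in
   the resulting distribution marks either as correct.

   ```python
   import math
   import random

   def softmax(logits):
       """Convert raw scores into a probability distribution."""
       m = max(logits.values())  # subtract max for numerical stability
       exps = {tok: math.exp(v - m) for tok, v in logits.items()}
       total = sum(exps.values())
       return {tok: e / total for tok, e in exps.items()}

   # Hypothetical logits for the token after "The capital of Australia is".
   # Note there is no field anywhere saying which answer is true.
   logits = {"Canberra": 2.1, "Sydney": 2.0, "Melbourne": 0.5}
   probs = softmax(logits)

   # Sampling treats the true and false answers identically; a wrong
   # answer is emitted with nearly the same "confidence" as the right one.
   choice = random.choices(list(probs), weights=list(probs.values()))[0]
   ```

   The sketch is deliberately tiny, but the structure is the same in a real
   model: the output layer produces scores, softmax turns them into
   probabilities, and truth never enters the computation.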
      
   Don’t fool yourself into thinking chatbots are thinking.  If it isn’t   
   obvious that the people you talk to are thinking more than machines, start   
   hanging around smarter people.  They may challenge you to do more   
   thinking, too.  Win-win in my book.   
      
   --   
   "Also . . . I can kill you with my brain."   
   River Tam, Trash, Firefly   
      
   --- SoupGate-DOS v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca