From: droleary.usenet@2023.impossiblystupid.com   
      
   For your reference, records indicate that   
   Richmond wrote:   
      
   > They use the term 'hallucination' for a particular circumstance.   
      
Then you’re going to have to share what that “particular circumstance”
is, because I’m not seeing it. You input text, it outputs text. That’s it.
As part of generating the response, it will just make things up (toxic
pizza toppings, fake legal cases, non-existent software libraries, etc.),
leaving you to sort out the mess.
      
   > And   
   > anyway, human beings give incorrect answers as part of their normal   
   > operation too.   
      
So what? Just because humans can be wrong doesn’t mean LLMs get a pass for
the mistakes they make. More importantly, the *types* of errors made are
very different. That was obviously a problem as far back as Watson’s
appearance on Jeopardy.
      
   > The part that I disagree with is 'equal   
   > confidence'.   
      
And yet you offer up no evidence to the contrary. You’re welcome to point
me to your favorite chatbot and it’ll probably take me all of 5 minutes to
get it to try to pass off an *obvious* lie as the truth.
      
   > Searching the internet can give you wrong answers, and   
   > takes much longer to do it, especially if you end up on Quora.   
      
That’s incoherent. Are you just using a chatbot to try to refute my
points? Regular searching *makes no claims of intelligence*, but what it
*does* do is accurately give you what it finds, possibly including
nothing. It’s plenty fast, too. Again, stop trying to push this onto a
tangent about search; it’s about chatbots still not actually being good AI.
      
   > I am not fooling myself into thinking it is thinking.   
      
   You’re the one who started this thread by claiming that a chatbot “brain”   
   was outperforming humans. You still don’t seem willing to acknowledge the   
   *massive* shortcomings such tools have.   
      
   > It is spewing out something it read somewhere. But   
   > what's the difference? Do you know where your thoughts come from? Do you   
   > ever have intuition and wonder how you knew?   
      
   The question isn’t how I know what I know. It’s what real value there is   
   in a chatbot that *cannot* know what it knows. Just spewing out shit is   
   not a welcome interaction in my book, done by man *or* machine.   
      
   > Try asking ChatGPT: "How do I tell the difference between consciousness   
   > and simulated consciousness?", then ask a human being, who will probably   
   > say "Huh?"   
      
   Again, find better humans to engage with if that’s your experience.   
      
   --   
   "Also . . . I can kill you with my brain."   
   River Tam, Trash, Firefly   
      
   --- SoupGate-DOS v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   