
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 57,465 of 59,235   
   Richmond to droleary.usenet@2023.impossiblystup   
   Re: A conversation with ChatGPT's brain.   
   28 Apr 25 19:11:16   
   
   From: dnomhcir@gmx.com   
      
   Doc O'Leary writes:   
      
   > Ha!  Blame the AI hype machine for making hallucination a   
   > “meaningless” word.  Call it whatever you like, but the fact remains   
   > that these programs give *incorrect answers* as part of their regular   
   > operation.  It’s not a “bug” that occurs in certain conditions; it   
   > really *is* “all output” that can be right or wrong, given with equal   
   > confidence.   
      
   They use the term 'hallucination' for a particular circumstance, not   
   just any circumstance where the program gives a wrong answer. And   
   anyway, human beings give incorrect answers as part of their normal   
   operation too. The part I disagree with is 'equal confidence'.   
   Searching the internet can give you wrong answers as well, and takes   
   much longer to do it, especially if you end up on Quora.   
      
   >   
   > Don’t fool yourself into thinking chatbots are thinking.  If it isn’t   
   > obvious that the people you talk to are thinking more than machines,   
   > start hanging around smarter people.  They may challenge you to do   
   > more thinking, too.  Win-win in my book.   
      
   I am not fooling myself into thinking it is thinking. And anyway, it   
   says it is not thinking. It describes how it operates: it looks up in   
   its database how LLMs work, and spews that out. It has no understanding   
   of what it is saying; it is spewing out something it read somewhere.   
   But what's the difference? Do you know where your own thoughts come   
   from? Do you ever have an intuition and wonder how you knew?   
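   
   For what it's worth, the "spewing out" is mechanically just   
   next-token prediction: the model repeatedly picks a likely next   
   token given the text so far. A toy sketch of that sampling loop   
   (a hand-made bigram table standing in for the neural network; the   
   vocabulary and probabilities here are invented for illustration):   
   
   ```python
   import random
   
   # Toy "model": probability of the next word given the current word.
   # A real LLM conditions on the whole context with a neural network;
   # this hand-built bigram table only illustrates the generation loop.
   BIGRAMS = {
       "the": [("cat", 0.6), ("dog", 0.4)],
       "cat": [("sat", 0.7), ("ran", 0.3)],
       "dog": [("ran", 1.0)],
       "sat": [("down", 1.0)],
       "ran": [("away", 1.0)],
   }
   
   def generate(start, max_words=5, seed=0):
       random.seed(seed)
       words = [start]
       for _ in range(max_words):
           options = BIGRAMS.get(words[-1])
           if not options:                 # no known continuation: stop
               break
           tokens, probs = zip(*options)
           words.append(random.choices(tokens, weights=probs)[0])
       return " ".join(words)
   
   print(generate("dog"))
   ```
   
   Starting from "dog" the continuations are all forced, so it always   
   prints "dog ran away"; starting from "the" the output varies with   
   the random seed, which is the toy analogue of an LLM's sampling   
   temperature.   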
      
   I've watched this video by Andrej Karpathy:   
      
   https://www.youtube.com/watch?v=7xTGNNLPyMI   
      
   But the end result is still amazing. I've used it to solve DIY problems   
   and to write bits of code.   
      
   Try asking ChatGPT: "How do I tell the difference between consciousness   
   and simulated consciousness?", then ask a human being, who will probably   
   say "Huh?"   
      
   --- SoupGate-DOS v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca