From: droleary.usenet@2023.impossiblystupid.com   
      
   For your reference, records indicate that   
   Richmond wrote:   
      
   > In fact ChatGPT even confirmed that I   
   > was projecting   
      
   No, it didn’t. It just continued its confidence game.   
      
   > But I   
   > didn't notice hallucinations.   
      
   Again, *all* of the output is hallucination, whether you notice it or
   not. There is no mechanism for “thought” that allows it to distinguish
   truth from fiction. You just get a statistical mashup of the training
   data, which you are left to sort out for yourself.
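
   If you want to see what a “mashup of the training data” looks like
   with the magic stripped away, here is a toy sketch in Python. It is
   emphatically not how any real model is implemented (real LLMs are
   neural networks over billions of parameters, not a lookup table), but
   the generation loop has the same basic shape: sample the next token
   from learned statistics. Notice that truth never enters into it:

   # Toy sketch of autoregressive generation. Nothing in this
   # loop checks whether the output is true; it only asks what
   # tends to follow what in the training data.
   import random

   # Made-up "training statistics": P(next word | current word).
   bigram_probs = {
       "the":  {"moon": 0.5, "sun": 0.5},
       "moon": {"is": 1.0},
       "sun":  {"is": 1.0},
       "is":   {"made": 0.5, "bright": 0.5},
       "made": {"of": 1.0},
       "of":   {"cheese": 0.5, "rock": 0.5},  # fact and fiction, weighted alike
   }

   def generate(start, max_tokens=6):
       """Sample a continuation; plausibility is all that is computed."""
       out = [start]
       word = start
       for _ in range(max_tokens):
           nxt = bigram_probs.get(word)
           if not nxt:
               break
           words, probs = zip(*nxt.items())
           word = random.choices(words, weights=probs)[0]
           out.append(word)
       return " ".join(out)

   print(generate("the"))  # e.g. "the moon is made of cheese"

   Half the time that prints something true, half the time something
   false, and the loop itself cannot tell the difference. That sorting
   job is yours.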
      
   > clearly knew what it was doing,   
      
   No, it didn’t!   
      
   > It also denies emotions   
   > but then expresses them.   
      
   Just empty words.   
      
   > It was quite eerie.   
      
   It shouldn’t be. As I said, I find it quite disappointing how bad these   
   chatbots still are given the sheer scale of resources that get shoveled   
   into them.   
      
   > But then I start wondering how I know anyone is conscious, or how I know   
   > I am. I could be projecting consciousness onto people too.   
      
   There certainly are some root epistemological questions we all need to   
   grapple with. But looking to chatbots for help with that is barking up   
   the wrong tree.   
      
   --   
   "Also . . . I can kill you with my brain."   
   River Tam, Trash, Firefly   
      
   --- SoupGate-DOS v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   