
Forums before death by AOL, social media and spammers... "We can't have nice things"

   sci.physics.research      Current physics research. (Moderated)      17,516 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 17,219 of 17,516   
   Sylvia Else to Richard Livingston   
   Re: Chat GPT =&D Example Verbatim   
   16 Feb 23 22:30:10   
   
   f9ed3167   
   From: sylvia@email.invalid   
      
   On 03-Feb-23 3:52 am, Richard Livingston wrote:   
   > On Wednesday, February 1, 2023 at 2:53:43 AM UTC-6, Douglas Dana Edward^2   
   Parker-Goncz (fully) wrote:   
   >>>> long ChatGPT text delete, see previous post<<<   
   >   
   > That ChatGPT transcript was an interesting combination of partial   
   > understanding, errors, wide range of knowledge, probably stock   
   > boiler plate text, and occasional insights.  While I would not be   
   > impressed with this result from a competent human engineer, compared   
   > to what any AI could do a decade ago I think this is very, very,   
   > impressive.  In another decade or two I would not be surprised if   
   > these programs could compete or exceed a human engineer.   
   >   
   > And then what will people do?   
   >   
   > Rich L.   
      
   Although ChatGPT is impressive, it is not a general-purpose AI by any
   means (nor do its creators claim it is). On my current understanding,
   it constructs its output by taking the preceding text in the session
   (both the user's input and its own output, including its response so
   far), combining that with its training on text, and determining the
   most likely word to follow.
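   (ChatGPT's model is vastly more elaborate, but the basic "most likely
   word to follow" idea can be caricatured with a simple bigram table.
   Everything below, names included, is my own sketch, not how ChatGPT
   actually works.)

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count, for each word, which words were seen to follow it.
    following = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def most_likely_next(following, word):
    # The single most frequent continuation seen in training, if any.
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]
```

   On text resembling its training corpus the table's guesses can look
   fluent; off that text they fall apart. No understanding of the words
   is involved at any point.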
      
   People post examples of asking it to write programs, and for simpler   
   ones it often does a good job. I suspect that's because the kind of   
   examples people ask for are sufficiently similar to example programs   
   that can be found on the 'net. Once people start trying to use it to   
   solve programming problems they actually have, I expect the experience   
   will be different.   
      
   ChatGPT can seem quite clever, until you unknowingly step outside
   what it's been trained on. As an example of how quickly it can go
   astray, try the following input:
      
   Define a WORD as a sequence of alphabetic characters   
   Define rule C: Repeatedly remove from text all WORDs of the same length,   
   until there are no more changes.   
      
   ChatGPT will typically[*] then describe in considerable detail how to   
   apply Rule C (which is often correct), and then provide an example,   
   which it often gets wrong.   
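   For comparison, "all WORDs of the same length" is ambiguous, and that
   ambiguity is part of the trap. Here is one plausible reading -- delete
   every WORD whose length is shared with at least one other WORD,
   repeating until nothing changes -- sketched in Python (the function
   name and that disambiguation are my choices, not part of the rule):

```python
import re
from collections import Counter

def rule_c(text):
    # One reading of Rule C: a WORD is a maximal run of alphabetic
    # characters; repeatedly delete every WORD whose length is shared
    # with at least one other WORD, until a pass changes nothing.
    while True:
        words = re.findall(r"[A-Za-z]+", text)
        counts = Counter(len(w) for w in words)
        doomed = {w for w in words if counts[len(w)] > 1}
        if not doomed:
            return text
        text = re.sub(r"[A-Za-z]+",
                      lambda m: "" if m.group(0) in doomed else m.group(0),
                      text)
```

   For instance, rule_c("the cat sat on a mat") leaves only "on a": the
   four length-3 words are removed in the first pass, and the survivors
   then all have distinct lengths, so the next pass changes nothing.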
      
   Trying to correct its errors is then an exercise in going down a rabbit   
   hole. It can seem that it 'understands' its mistake, but further   
   examples will show that it doesn't.   
      
   Sylvia.   
      
   [*] Because there is variation in the way it responds, even to the same   
   stimulus.   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca