XPost: rec.arts.sf.written, rec.arts.sf.science   
   From: YourName@YourISP.com   
      
   In article , Ryk E. Spoor wrote:
   > On 1/24/14 3:13 PM, Gutless Umbrella Carrying Sissy wrote:   
   >   
   > > The point Greg made, that has not been disputed in any way, is that   
   > > the definition used to be "do only one thing well," until computers   
   > > could do only one thing really well, and now the definition has   
   > > changed.   
   >   
   > Oh, I would DEFINITELY dispute that. AI was always "A machine that   
   > thinks like a human, only maybe better", and the Turing Test (as a   
   > general concept -- making one that really works is harder) was always   
   > the general idea of how to really measure it. Can it pass for human in   
   > realistic circumstances?   
   >   
   > The fact that the concept is as foggy as our understanding of what   
   > intelligence IS is what causes the confusion.   
   >   
   > The definition of "thinks like" has been refined through the years,   
   > yes. And people -- usually laymen -- would put up examples of tasks that   
   > "only a true AI could solve!", like chess, but anyone with any skin in   
   > the game knew that this wasn't true; enough brute-force would beat any   
   > human without any actual intelligence involved. It WAS thought that   
   > computers would never HAVE such brute force available and that,   
   > therefore, any computer that could do grandmaster chess must be doing   
   > something intelligent, but Moore's Law changed that.   
      
   "Artificial intelligence" has never been defined as the ability to do   
   just one particular thing well, no matter how complex that thing may   
   seem to be.   
      
   Even setting Turing aside, a true test of "artificial intelligence" would
   require the ability to do many, many different things, and to be able to
   learn to do new ones.
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   