
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,057 of 1,954   
   Dmitry A. Kazakov to David Kinny   
   Re: Goal of AI: Perfect or Bounded Ratio   
   25 May 06 14:01:00   
   
   From: mailbox@dmitry-kazakov.de   
      
   On Thu, 25 May 2006 01:31:26 GMT, David Kinny wrote:   
      
   > In <4474f7fb$1@news.unimelb.edu.au> adityar7@gmail.com writes:   
   >   
   >> Now, it appears to me that the goal of artificial intelligence should   
   >> be Bounded Rationality for two reasons:   
   >> 1. Computation complexity makes perfect rationality impossible   
   >> 2. Perfect rationality would mean the lack of such irrational behaviour   
   >> in humans like morality.   
   >   
   >> Does anyone have views regarding this? Which kind of rationality should   
   >> be the goal of AI, and WHY ?   
   >   
   > It has been recognized for ~20 years that perfect rationality is   
   > an unattainable goal for AI, not just due to limited computational   
   > resources but due also to limits on and uncertainties in agents'   
   > knowledge of the world and of the effects of their actions.   
      
   Though reasoning under uncertainty is not the same as uncertain reasoning.   
   I suspect that AI is effectively defined as the set of all computational   
   problems we don't know how to solve. So it is irrational by definition.   
   Once we discover rationality in a problem, that problem leaves the realm   
   of AI...   
      
   1. Computational complexity can hardly be the issue here; descriptive   
   complexity, rather. One can reason perfectly rationally about incomputable   
   things. It depends on what the object language is and what the   
   metalanguage is. Which kind of complexity are we talking about?   
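   A minimal sketch of that object/metalanguage point (all names here are
   illustrative, not from the thread): halting is incomputable for programs
   in general, yet for a restricted object language, loops driven by a
   strictly decreasing non-negative counter, a metalanguage argument decides
   it rationally, without ever running the program.

```python
# Hedged sketch (hypothetical names): termination is undecidable in
# general, but provable at the meta level for this whole program class.

def run_counted_loop(n: int) -> int:
    """An 'object language' program: loop while n > 0, decrementing n.
    The variant n is a non-negative integer that strictly decreases,
    so by well-founded induction the loop always terminates."""
    steps = 0
    while n > 0:
        n -= 1          # strictly decreasing variant
        steps += 1
    return steps

def halts(n: int) -> bool:
    """Metalanguage judgment: every run_counted_loop(n) halts.
    The proof is *about* the program, stated in the metalanguage;
    no simulation of the object program is needed."""
    return True  # provable for the entire class

assert halts(10 ** 100)          # decided rationally, not by running
assert run_counted_loop(5) == 5  # and indeed it halts when executed
```

   The point of the sketch: incomputability of the general predicate does
   not preclude perfectly rational judgment about a describable subclass.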
      
   2. Simulating human behavior is a questionable goal as well. It boils down   
   to the Turing test. But how could our *inability* to decide (machine vs.   
   human) characterize anything as intelligent? How is the complexity of   
   intelligence c(I) ordered relative to the complexity of the Turing test   
   c(T)? c(I)



(c) 1994,  bbs@darkrealms.ca