Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai    |    Awaiting the gospel from Sarah Connor    |    1,954 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 1,062 of 1,954    |
|    Jack Saalweachter to makc.the.great@gmail.com    |
|    Re: Goal of AI: Perfect or Bounded Ratio    |
|    27 May 06 01:54:32    |
From: saalweachter@purdue.edu

makc.the.great@gmail.com wrote:
> Dmitry A. Kazakov wrote:
>
>> 2. Simulating human behavior is a questionable issue as well. It boils
>> down to the Turing test. But how could our *inability* to decide
>> (machine vs. human) characterize anything as intelligent?
>
> Agreed. What if an AI is more rational than humans? Then it may fail
> the Turing test precisely because the human judge himself would not be
> able to comprehend supreme rationality, and would consider the
> machine's behavior nothing better than random noise.

I think this view gives too little credit to both 'supremely rational
beings' and 'humans'.

When you ask the supremely rational AI, during the Turing test, "So, how
'bout them Bears?", do you expect it to spit out a terse "ERROR.
BASEBALL IS ILLOGICAL."? Or would you prefer that the exquisitely
rational being launch into a perfectly thought-out, well-reasoned
analysis of how they're doomed this season, having traded Larry Bird to
the Knicks for Roy Orbison, when what they REALLY need is a fullback?

There is certainly a computational tie-in with reasoning: proof
construction -- finding a proof of bounded length -- is an NP-complete
problem. Even if you assume that the entity has 'sufficient knowledge',
it is still going to take an exponential amount of time, in the worst
case, to infer arbitrary conclusions from that knowledge.

However, consider that while proof construction is NP-complete, proof
checking is not: a given proof can be checked in time polynomial in its
length. This means that if we had some exquisitely rational fellow
sitting around, he might occasionally reach conclusions we never could.
Why? He's just a better reasoner; he constructs proofs that are beyond
our capacity to construct.
HOWEVER, it is perfectly reasonable to expect that he could then
/explain/ his reasoning to us, and we could say, "Oh, that makes perfect
sense." Checking a proof is computationally simple, and we should expect
to be able to check proofs we ourselves could never have discovered.

Thus, a 'perfectly rational' or even just 'spectacularly rational' AI
shouldn't seem incomprehensible or irrational to humans. Its reasons
should be utterly simple to comprehend -- and utterly impossible to
discover.

Jack Saalweachter

[ comp.ai is moderated ... your article may take a while to appear. ]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
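[Editor's note: the search/verify asymmetry Jack appeals to is the same one behind SAT, the canonical NP-complete problem. A minimal sketch -- the formula, variable encoding, and function names below are illustrative, not from the thread: finding a satisfying assignment by brute force takes up to 2^n tries, while checking a candidate assignment is a single linear pass, just as checking a finished proof is cheap even when discovering it was not.]

```python
from itertools import product

# CNF formula as a list of clauses; each clause is a list of literals.
# A positive int v means variable v; a negative int -v means "not v".
formula = [[1, 2], [-1, 3], [-2, -3]]

def check(assignment, formula):
    # Verification: one linear pass -- every clause must contain at
    # least one literal satisfied by the assignment.
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in formula
    )

def search(formula, n_vars):
    # Discovery: brute force over all 2^n assignments -- exponential
    # in the worst case, mirroring NP-hard proof search.
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(assignment, formula):
            return assignment
    return None

sol = search(formula, 3)
print(sol)
```

Here `search` plays the role of the exquisitely rational reasoner and `check` plays ours: once the answer is handed over, confirming it is easy.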
(c) 1994, bbs@darkrealms.ca