Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai    |    Awaiting the gospel from Sarah Connor    |    1,954 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 1,527 of 1,954    |
|    David Kinny to Tero Hakala    |
|    Re: Exploiting limitations of Turing mac    |
|    27 Sep 07 10:03:33    |
XPost: comp.theory
From: dnk@OMIT.csse.unimelb.edu.au

Tero Hakala writes:

> This may be a well known question or just a result of my misconceptions,
> but so far I haven't got any definite answer. So maybe someone
> here could help or clarify things for me.

> ..

> I was recently contemplating Turing tests and Turing machines (TM) and
> was wondering if the fundamental limitations of a TM can be exploited
> to discover whether the conversation partner in a Turing test is a
> digital computer AI or a real person.

> As far as I have understood the issue, we have the following points:

> 1) Any digital computer + software can in principle be reduced to
> some kind of TM, so the computer cannot exceed the computational
> capabilities of a TM.

> 2) There are problems that a universal TM can't decide, e.g.
> the halting problem: given TM b and input c, does the machine stop
> at some point?

So far, so good.

> Now, suppose that we come up with a simple TM with input that does
> not stop, e.g. it produces an endless string of aaa..'s. A human
> with sufficient knowledge should be able to see that this
> machine never stops.

> Let's say that we pose this question to our human/AI partner,
> i.e. we describe our never-stopping TM and ask: does it
> stop?

> Now a real human could provide us with a definite answer. However, any
> digital computer is subject to the limitations of a TM and
> therefore cannot say for sure whether our machine stops or not.

Here's where you make an error. There are indeed TMs that can detect
non-terminating TMs, as people can.
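That claim can be made concrete. Below is a minimal sketch (all names are illustrative, not from the thread): a TM is a transition table, and the detector uses a sound-but-incomplete test, namely that if no halting state is even reachable in the state graph (ignoring the tape entirely), the machine provably never stops. The "endless string of a's" machine from the post is caught exactly this way.

```python
# Sketch: a TM as a dict mapping (state, symbol) -> (next_state, write, move).
# The representation and the name HALT are assumptions for illustration.
HALT = "halt"

def provably_never_halts(tm, start):
    """Sound but incomplete non-termination test: walk the state graph,
    ignoring tape contents. If HALT is unreachable from the start state,
    the machine cannot stop on any input."""
    seen, todo = {start}, [start]
    while todo:
        s = todo.pop()
        for (state, _symbol), (nxt, _write, _move) in tm.items():
            if state == s and nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return HALT not in seen

# The machine from the post: forever writes 'a' and moves right.
endless_a = {("q0", "_"): ("q0", "a", "R")}
print(provably_never_halts(endless_a, "q0"))  # True: provably never stops
```

The test is sound (a True answer really does mean non-termination) but incomplete: a machine whose graph can reach HALT may still loop forever on a particular input, so this detector only settles some cases, which is all the argument needs.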
Point 2) above doesn't say that no TM can detect non-termination of
*any* TM; it says something much more specific: that no TM can correctly
determine termination of *every* TM/input pair. Equally, no person can
reliably determine this, since some cases would be more complex than
they could possibly understand.

Indeed, if a person can do it reliably for some case, then so can a
specific TM (but that TM will fail on other cases, as the human would).
And TMs can solve termination problems that humans never could.

> So my question is, can we use this kind of scheme to discover whether we
> are speaking with an AI implemented on a digital computer or with
> a genuine human?

The short answer is no. For a long-winded one, try comp.ai.philosophy.

> (of course, an AI mimicking human behaviour would probably say
> something like "get a life, smart ass, don't bore me to death", in
> which case we couldn't tell :) )

> - T.H

[ comp.ai is moderated ... your article may take a while to appear. ]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
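The "that TM will fail on other cases" point in the reply above can also be sketched. Assume a toy decider (the name `guess_halts` and the step budget are inventions for illustration) that simply runs a program under a fixed budget of trace events and guesses "never halts" on timeout. It answers the easy cases correctly, yet any program that halts just past its budget fools it, so no fixed budget, and by diagonalization no fixed TM at all, is right on every case.

```python
# Sketch of a toy halting guesser (assumed, not a real decider): run the
# program under a step budget; on timeout, guess that it never halts.
import sys

BUDGET = 10_000  # arbitrary cutoff, chosen for illustration

def guess_halts(src):
    """Trace-execute src and give up after BUDGET trace events."""
    steps = 0
    def tracer(frame, event, arg):
        nonlocal steps
        steps += 1
        if steps > BUDGET:
            raise TimeoutError  # propagates out of the traced code
        return tracer
    sys.settrace(tracer)
    try:
        exec(src, {})
        return True            # finished inside the budget
    except TimeoutError:
        return False           # guess: never halts (may be wrong!)
    finally:
        sys.settrace(None)

print(guess_halts("x = 1 + 1"))        # True  (correct: it halts)
print(guess_halts("while True: pass")) # False (correct: it never halts)
print(guess_halts("for i in range(10**6): pass"))  # False, yet it DOES halt
```

The last call is the instructive one: the program terminates, but only after the budget is exhausted, so this particular guesser gets it wrong, exactly the "fails on other cases" situation the reply describes for any single TM or any single person.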
(c) 1994, bbs@darkrealms.ca