Forums before death by AOL, social media and spammers... "We can't have nice things"
|    alt.cyberpunk    |    Ohh just weirdo cyber/steampunk chat    |    2,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 1,224 of 2,235    |
|    joss to Kevin Calder    |
|    Re: No Consciousness for Artificial Intelligence    |
|    18 Jun 04 22:30:24    |
   
   From: joss@nospampleasewerebritish.nekrodomos.net   
      
   On Fri, 18 Jun 2004 13:15:42 +0100, Kevin Calder wrote:   
      
   > Anyone familiar with John Searle and his arguments about the
   > difficulties involved in equating simulated consciousness with   
   > consciousness per se?   
   >   
   > His most famous I think is the "Chinese room argument".   
   >   
   > His position in a miniaturised nutshell is that simulations of   
   > consciousness, computational models of brain activity and the like are   
   > merely simulations, and are not sufficient for consciousness. So he's   
   > claiming that an AI that works by modelling brain function as a bunch of   
   > computations, or by mimicking behaviour and thereby passing the Turing
   > test, isn't conscious at all, and can't be. Basically I suppose he is
   > saying that the Turing test, and the fields of scientific exploration
   > based on the Turing test (especially AI, and more specifically the
   > "strong AI thesis") are all a load of crap.   
   >   
   > Anyone got any objections?   
      
   My main objection to the Chinese Room argument is that it posits an
   unmeasurable, undefined quality ("consciousness") which the AI cannot
   possess but which a human inherently does, and which no observer can ever
   detect in anything else. My point is that if such a quality is
   unobservable and unprovable, then it either does not exist or its
   existence is insignificant.
      
   My extension to the Chinese Room argument is this: imagine that the
   hypothetical man in the room has not only a set of rules that produce the
   correct Chinese phrase, but an adaptive set of rules governing how he
   behaves in every aspect of life. Now imagine that he does not consult a
   book for these rules, but has them memorized and follows them instantly
   and automatically. To any observer he would appear to understand and
   speak Chinese perfectly, and I believe he WOULD understand and speak
   Chinese, to all intents and purposes. This man would be indistinguishable
   from any other human walking around, yet according to Searle he would not
   be "conscious". I simply disagree.
      
   If a simulation behaves perfectly in every way, then I believe it should
   be treated as if it were the thing it simulates. I would treat a Nexus 6
   exactly as I would treat a human :o)
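
   In code-ish terms (the classes below are invented for illustration), an
   interrogator who only ever sees replies has nothing but behaviour to go
   on, so two perfectly matching behaviours leave no grounds for treating
   their sources differently:

      # Toy sketch only: both classes are invented and give identical
      # observable behaviour, so interrogate() cannot tell them apart.
      class Human:
          def reply(self, question):
              return "I'd have to think hard about " + question

      class Nexus6:
          def reply(self, question):
              return "I'd have to think hard about " + question

      def interrogate(agent):
          # Sees only the reply, never what produced it.
          return agent.reply("what consciousness is")

      print(interrogate(Human()) == interrogate(Nexus6()))   # True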
      
   Joss   
   --   
   -----------------------------------------------------------   
   Joss Wright   
   Computer Science Department http://www.pseudonymity.net   
   York University http://www.cs.york.ac.uk/~joss   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   
|
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
(c) 1994, bbs@darkrealms.ca