   From: kcalder@blueyonder.co.uk   
      
joss writes:
   >On Fri, 18 Jun 2004 13:15:42 +0100, Kevin Calder wrote:   
      
>> Anyone familiar with John Searle and his arguments about the
>> difficulties involved in equating simulated consciousness with
>> consciousness per se?
      
>> His most famous, I think, is the "Chinese Room argument".
      
   >> His position in a miniaturised nutshell is that simulations of   
   >> consciousness, computational models of brain activity and the like are   
   >> merely simulations, and are not sufficient for consciousness.   
      
      
      
   >> Anyone got any objections?   
      
(I'm about to go to bed, so sorry if this is a bit fuzzy; I'll make
corrections tomorrow :)
      
   >I think that my main argument against the Chinese Room argument is that it   
   >states that there is an unmeasurable, undefined quality ("consciousness")   
   >which cannot be possessed by the AI but is inherently possessed by a   
   >human.   
      
Searle's position is that consciousness is a strictly biological
phenomenon, caused by the interaction of the specific chemicals and
materials the brain is composed of. It isn't well defined because even
scientists have difficulty accepting this, and have never studied
its material causes seriously. For Searle, then, it is indeed definable,
though not yet well defined (he argues that all sorts of things are
studied by beginning with a rough definition and then hashing it out
through scientific inquiry) and, being an aspect of a physical
phenomenon, very much measurable. If one accepts this, then it's a
problem for neuroscience to solve.
      
   A brain simulator cannot be conscious because it simply does not have a   
   brain (the cause of consciousness) and is merely a model of a brain.   
      
   Think about it this way:   
      
Apparently, we know that one of the effects of cocaine on the brain is
to inhibit the reuptake of a particular neurotransmitter,
norepinephrine, from the synaptic cleft, and that having it sloshing
around in the cleft like that has all kinds of dramatic effects on
one's state of consciousness. Now, even if we ran a computer simulation
of this, the computer running the simulation wouldn't get a cocaine
high! If it isn't obvious why, then forget the computer and imagine a
very sophisticated simulation using ping-pong balls and beer cans,
where the ping-pong balls represent norepinephrine molecules, and the
beer cans the different anatomical features of the brain. In principle
we can do this to any level of accuracy we like, but it will never be
more than a simulation! Despite the fact that it has become common
practice in cognitive science, it simply doesn't make sense to say
that a simulation and the thing it simulates are equivalent. Just look
at them! They are different! Consciousness is an aspect of brain
function, brains are specific physical things, and without them you
are just throwing ping-pong balls at beer cans.
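
Just to be concrete about what "running a simulation of this" means,
here is a toy sketch in Python (my own illustration; the numbers and
the crude "remove a fixed fraction per step" reuptake rule are
invented, not real pharmacology):

    # Toy model of transmitter concentration in a synaptic cleft.
    # All quantities are illustrative, not real pharmacology.
    def simulate_cleft(steps, release_per_step, reuptake_rate):
        level = 0.0
        for _ in range(steps):
            level += release_per_step        # neuron fires, transmitter released
            level -= level * reuptake_rate   # transporters pump a fraction back
        return level

    normal  = simulate_cleft(100, 1.0, 0.5)  # reuptake working: settles near 1.0
    blocked = simulate_cleft(100, 1.0, 0.0)  # reuptake inhibited: climbs to 100.0
    print(normal, blocked)

The "blocked" run shows transmitter piling up in the cleft, exactly as
the story says it should, and yet nothing in the machine is any higher
than the ping-pong balls are. The program pushes numbers around;
that's all.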
      
   > Also, an observer cannot determine whether this quality is   
   >possessed by something else. My point would therefore be that if such a   
   >quality is unobservable and unprovable then it either does not exist, or   
   >its existence is insignificant.   
      
This is an odd version of the "Other Minds" reply. It's based on the
mistake you made above about what consciousness actually is. If you
agree that consciousness is an aspect of physical brain activity, in
the same way that the "liquidity" of water is an aspect of
water-molecule activity, then you will be able to observe
consciousness to the same degree that you can observe "liquidity". You
just need to look at the physical causes, i.e. examine water molecules,
or cut people's heads open.
      
The usual version of the "Other Minds" reply, which is kinda implicit
in your objection, goes:
"How do you know that other people understand Chinese or anything else?
Only by their behavior. Now the computer can pass the behavioral tests
as well as they can (in principle), so if you are going to attribute
cognition to other people you must in principle also attribute it to
computers."
      
Searle's response to this is basically common sense: "We presuppose
that other people have minds in our dealings with them, just as in
physics we presuppose the existence of objects." He thinks that
functional presuppositions are fine if you just want to deal with
things at a functional, common-sense level.
      
   >My extension to the Chinese Room argument is this: Imagine if the   
   >hypothetical man in the room has not only a set of rules which cause him   
   >to write the correct Chinese phrase, but an adaptive set of rules which   
   >govern how he behaves in all aspects of life. Now   
   >imagine that he does not refer to a book for the rules, but has the rules   
   >memorized and follows them instantly and implicitly. To any observer, he   
   >would appear to understand and speak Chinese perfectly. I believe that he   
   >WOULD understand and speak Chinese perfectly to all intents and purposes.   
   >This man would be indistinguishable from any other human walking around,   
   >but according to Searle would not be "conscious". I simply disagree.   
      
Searle calls this the "Systems Reply", and the last two lines are the
"Brain Simulator Reply". I covered the latter at the start of my reply.
Searle's response to the former is based on the idea that the man in
the room, even if he internalises part of the system, is still just
manipulating formal symbols whose meaning he does not recognise. If he
were to internalise the *entire* system then he could leave the room
and walk about outside conversing in Chinese in an automatic fashion,
but he wouldn't understand the meaning of any of the words he was
using. In effect, Searle is arguing that you can't get semantics from
syntax. The formal symbol manipulations that constitute computational
models of the mind are simply not sufficient for understanding and
consciousness. It's no coincidence that where we find consciousness,
we tend to find brains :)
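
To see how little the symbol manipulation amounts to, here is the room
reduced to a toy program (my own sketch; the rulebook entries are
invented examples, nothing from Searle):

    # A toy "Chinese Room": match incoming symbol strings against a
    # rulebook and emit the prescribed reply. Entries are invented.
    RULEBOOK = {
        "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "Fine, thanks."
        "你会说中文吗?": "当然会.",      # "Do you speak Chinese?" -> "Of course."
    }

    def room(symbols):
        # Pure shape-matching: nothing here represents what any
        # symbol means, only which reply its shape maps to.
        return RULEBOOK.get(symbols, "请再说一遍.")  # "Please say that again."

    print(room("你好吗?"))  # emits a fluent-looking reply

Memorising RULEBOOK instead of looking it up changes where the
matching happens, not what it is: it's shape-matching all the way
down, which is exactly the point about syntax never adding up to
semantics.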
      
   >If something behaves as a simulation which is perfect in every way, then I   
   >believe it should be treated as if it were the object of its simulation.   
      
   But do you believe that the simulation and the object would be literally   
   equivalent?   
      
   > I   
   >would treat a Nexus 6 exactly as I would treat a human :o)   
      
Didn't the Nexus-6 actually have biological brains? They seemed very
fleshy in BR, but I'd imagine that Dick would have had them made out
of springs and relays and such. It's so long since I read DADOES that
I can't remember.
      
Incidentally, Searle believes that if you were able to construct a
brain and all of its periphery then you would have some sort of
conscious entity, not just a simulation.
      
   thanks,   
   --   
   Kevin Calder   
      