   From: kcalder@blueyonder.co.uk   
      
In message , ghost writes
   >In article ,   
   > Kevin Calder wrote:   
      
>> In message , ghost writes
      
   >> >the short version:   
      
>> >If a simulation is in every way indistinguishable from the real
>> >thing, what's the difference in the end results?
      
   >> Fair enough, but, do you really think that a brain and a computational   
   >> simulation of a brain are indistinguishable?   
      
>> If I remove the top of your head with a buzz saw, what do you expect to
>> pop out? :)
      
   >> I'd expect some grey matter that is easily distinguishable from some   
   >> abstract, invented, computational system that doesn't actually exist in   
   >> nature of its own accord.   
      
   >> But I wouldn't bet any money on it.   
      
   >Basically the crux of your, and Searle's,   
      
   Really just Searle's, I didn't come up with any of these arguments, and   
   I don't feel any particular allegiance to them BTW. I agree with Searle   
   on a few points though.   
      
> argument is that a
   >simulation, no matter how accurate even to the point of being completely   
   >indistinguishable, can never be treated as the same as the original   
   >simply because it is not organic in nature?   
      
   >Seems like a form of prejudice to me honestly.   
      
   This isn't exactly the case.   
      
Think about it sensibly. If you have a simulation of a brain that is
identical to a brain (*1), then you don't have a simulation any more,
you have a brain!
      
And this business of "organic" and "not organic" isn't really
relevant. It's just a throwback from science fiction; Searle isn't
interested in organicity per se. What is relevant is this business of
"brain" or "not brain".
      
*1: I'll head off a whole class of distracting objections by pointing
out that by this I mean "identical to a brain in every way, /as far
as we can tell/", which is, as far as we are concerned, as good as it
gets.
      
   >If I were to manage to put forth an object that would in every way act   
   >like a person, pass every possible test you could throw at it   
      
OK, assuming that consciousness is a feature of brain chemistry (give
me a reason not to; it seems reasonable enough to me (*2)), I am
going to want to cut your person-object's head open and check for a
brain. If I find a bunch of chips in there I'm not logically obliged
to assume that they are conscious, even if they are replicating brain
function. See below.
      
>Note - I did not say we had reached that point, or are even close to
>reaching that point.
      
   OK. Searle isn't saying that we could never reach this point either, he   
   is just arguing that even if we did, it wouldn't be significant in the   
   way that "Strong AI" claims it would be.   
      
>This is all hypothetical. But if we did produce an intelligence that
>in every single way acted exactly like a human being, why couldn't
>it possibly have "consciousness" - and have we even defined
>"consciousness" within this context?
      
OK, if we accept that consciousness exists (refuting consciousness is
a whole other can of worms), then, given the state of our current
scientific knowledge about the functioning of the brain as a physical
thing and how its physical states relate to conscious states (go drop
some E's to see what I mean), doesn't it seem to make sense to say
that brains cause consciousness? I have to admit that, to me, it
seems very sensible to tentatively conclude that consciousness is a
physical phenomenon, rooted in the brain and its periphery.
      
(*2): What is it that you think it is? If you believe that abstract
computational simulations are capable of possessing consciousness
then presumably you don't think that consciousness is a biological
phenomenon, which sounds pretty much like a wishy-washy "soul" to me.
It could be that you believe that consciousness is a feature of one
of the things that brains and simulations of brains have in common.
Usually people cite "complexity" as the feature in question, but
isn't this business of things becoming conscious as a result of their
complexity also a little bit transcendentalist? IMHO "consciousness
from complexity" is every bit as unfounded as the wishy-washy,
hardware-independent "soul".
      
   Doesn't sound very scientific to me.   
      
However, studying consciousness as a feature of brain chemistry does
sound pretty scientific to me, and computational simulations are an
important way of doing this - just so long as we keep in mind that,
just because the thing we are simulating exhibits certain
characteristics, we can't claim that the simulation will
*necessarily* exhibit those same characteristics. And this is what
Strong AI does.
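
To make that distinction concrete, here's a minimal sketch (my own
illustration, nothing from Searle; the function name and parameters
are invented): a leaky integrate-and-fire neuron model, a standard
tool in computational neuroscience. It reproduces a *behavioural*
characteristic of real neurons - regular spike timing under constant
input - while sharing none of their physical characteristics: no
membrane, no ion flow, no chemistry, just arithmetic on a float.

```python
def simulate_lif(current, steps, dt=1.0, tau=10.0, threshold=1.0):
    """Return the time steps at which the simulated neuron 'spikes'."""
    v = 0.0            # stand-in for membrane potential (just a number)
    spikes = []
    for t in range(steps):
        # Leaky integration: v decays toward zero while being driven
        # by the constant input current.
        v += dt * (-v / tau + current)
        if v >= threshold:
            spikes.append(t)
            v = 0.0    # reset after a 'spike'
    return spikes

# Regular spiking, as a real neuron would show under steady input.
print(simulate_lif(0.2, 50))
```

This is exactly the scientifically useful kind of simulation I mean:
it predicts spike timing well enough to be tested against data. The
Strong AI move would be the further claim that the float *is* a
membrane potential.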
      
And, once and for all, saying "well, imagine a simulation so perfect
that it is indistinguishable from the object" doesn't help, because
the only thing that is indistinguishable from a brain *is* a brain.
Otherwise you are opening the door to all sorts of anti-scientific,
anti-intellectual taxonomical skepticisms. It's almost
deconstruction!
      
   Geez you guys are fickle. When I'm talking about postmodernism you want   
hard, rigorous scientific fact, and when I'm talking about hard,
   rigorous scientific fact, you want postmodernism! ;)   
      
   >Searle sounds like a machine-bigot. Sorry, but he does.   
      
   No need to apologise. Feel free to join the abusive, witch burning   
   masses. Searle is not well received by many. You won't be lonely.   
      
   > Utterly denying   
>the possibility of something that we haven't even really begun to explore
   >shuts doors that may need to be opened to understand the concept fully.   
      
What shuts doors, IMHO, is adherence to the Strong AI thesis, even
though it is totally unfounded and doesn't really make much sense.
   AI is an important field, it doesn't need wobbly deus ex machina   
   speculation.   
      
   danke,   
   --   
   Kevin Calder   
      