
Forums before death by AOL, social media and spammers... "We can't have nice things"

   alt.cyberpunk      Ohh just weirdo cyber/steampunk chat      2,235 messages   


   Message 1,343 of 2,235   
   Kevin Calder to All   
   Re: No Consciousness for Artificial Inte   
   05 Sep 04 00:17:37   
   
   defb8c55   
   From: kcalder@blueyonder.co.uk   
      
   Sorry for the long posting delay.  I was doing many other things :)   
      
   I'm not going to quote your post, but I have been reading it and   
   thinking about it and have partially reformulated my position.   
      
   Here are my responses to some of the problems you raised that still
   seem relevant.  If you feel I have missed anything, or if my
   paraphrasing misses your point, then feel free to let me know.
      
   1) Consciousness doesn't exist.   
      
   There are a whole bunch of ways to come to this conclusion (and some of
   them I'm partial to), but what's important is that both Searle and
   strong AI presume that consciousness does in fact exist (claiming that
   sufficiently sophisticated computational mind-simulation-type AIs could
   be conscious doesn't make much sense otherwise).
      
   The classic Turing test, which defines consciousness in terms of an
   entity's ability to behave in a manner which a human tester associates
   with its own experience of consciousness, could easily be fooled by a
   mindless machine programmed to impersonate human behaviour, and
   therefore does not test for the characteristic we most commonly
   associate with consciousness: the subjective experience of having a
   mind and experiencing the world consciously.
      
   Strong AI inherits its definition of consciousness from the Turing
   test, and with it the test's weaknesses.  Strong AI simply does not
   account for the experience of conscious states.
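   The mindless impersonation described above can be sketched concretely.
   Here is a toy ELIZA-style responder (the rules and canned replies are
   my own invention, not from any real chatbot): it matches surface
   patterns with zero understanding, yet a careless tester might credit
   it with a mind.

```python
import re

# Keyword rules: (pattern, reply template).  Nothing here "understands"
# anything -- the program only reflects surface features of the input.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI think (.+)", re.I), "What makes you think {0}?"),
    (re.compile(r"\byou\b", re.I), "We were talking about you, not me."),
]
# Canned fallbacks, cycled by turn number, for inputs no rule matches.
FALLBACKS = ["I see.", "Tell me more.", "How does that make you feel?"]

def respond(text, turn=0):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return FALLBACKS[turn % len(FALLBACKS)]

print(respond("I feel trapped"))              # -> Why do you feel trapped?
print(respond("The weather is odd", turn=1))  # -> Tell me more.
```

   A few dozen such rules already produce surprisingly "human" exchanges,
   which is exactly why behaviour alone is a weak criterion.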
      
   Now, if you want to say something like, "Well, if that's the case then
   a behavioural account of consciousness (plus neurology etc.) is the
   only scientific option, and the best we can hope for," then I have to
   admit that I partially agree.  I don't think that Searle quite
   overcomes the "other minds" problem, though I agree with him in
   principle that there is no reason why we can't study subjective
   phenomena (conscious states), particularly when they seem to be
   closely related to some physical phenomena (brain states).
      
   Maybe the problem with strong AI is that it isn't entirely clear about
   what it is claiming, and doesn't differentiate its
   "behavioural/objective consciousness" from the more common "I think
   therefore I am".  Claiming that sophisticated computational AIs can
   (in principle) be demonstrated to behave very much like humans in
   terms of language use, but that strong AI can say nothing about
   "whether they *really* see the world like we do, *think* like we do
   etc.," is much less exciting to the general public than claiming that
   AIs will be "conscious just like real people".
      
      
   2) What's the difference between a simulation of brain activity and a   
   real brain?   
      
   Surely this is obvious?   
      
   Are you telling me that if I had a computational simulation running
   "in" some chips, and a brain floating in a clear vat, you wouldn't be
   able to see that they weren't equivalent?  Really, this seems insane
   to me.
      
   3) What materials could you make a conscious entity out of?   
      
   You asked whether you could make a conscious brain with silicon cells,
   and I got a bit sidetracked arguing that brains needed to be made
   from the right materials.  Forget all that.  You could (in principle)
   make conscious artefacts out of all sorts of stuff.  Our brain is of   
   course made out of physical stuff doing physical stuff, and it causes us   
   to experience conscious states, so in principle other combinations of   
   physical stuff could cause artificial entities to experience conscious   
   states (though we might not be able to test for their subjective   
   aspect).   
      
   Which physical things can we use to make artificial life?   
      
   My first answer to this question was that we should incrementally alter   
   the chemical composition of your brain to determine the point at which   
   you appear to lose consciousness :)   
      
   Anyway, to get back to the point, Searle isn't really worried about
   people claiming that artificial life could be conscious; rather, he
   argues that a computer running a computer program, even if it's the
   "right sort of program" (i.e. a brain simulation), isn't sufficient to
   guarantee consciousness of the subjective-experience kind in the
   software, hardware or entire system.
      
   He argues this simply because we have no reason to believe that it
   would.  Computation is a very abstract thing and, even worse, a very
   subjective thing.  We don't often overturn stones and find some
   computations going on.  Computation is an analytical tool, not a
   physical thing.  The computational simulation of the brain only exists
   *in* the computation insofar as it exists in the eye of the beholder.
   Unlike physical objects, the computational simulation of the brain
   *will* cease to exist if everyone stops believing in it.
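   The observer-relativity point can be made concrete with a toy example
   (mine, not Searle's).  The same four physical bytes support three
   different readings, and nothing about the bytes themselves fixes which
   computation they are "really" part of:

```python
import struct

# Four fixed bytes -- the physical facts are identical in every case.
raw = b"\x42\x48\x45\x59"

# Three incompatible interpretations of the same physical state.
as_int   = struct.unpack(">I", raw)[0]   # one big-endian unsigned integer
as_float = struct.unpack(">f", raw)[0]   # the same bits as an IEEE-754 float
as_text  = raw.decode("ascii")           # the same bits as ASCII characters

print(as_int, as_float, as_text)
```

   Which reading is "the" computation depends entirely on the scheme an
   observer brings to the bytes; the physics underdetermines it.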
      
   We can guess that a brain sufficiently like our own (belonging to some   
   other human) probably produces conscious states in its owner, in the   
   same way ours does in us.   
      
   We can guess that the silicon brain might experience conscious states
   because it is doing some of the same physical stuff as the "real"
   brain.
      
   However, for the computational simulation of the brain this seems like
   more of a leap, because here the brain isn't even physical at all!
   The computer hardware is certainly doing something physical, but why
   would we think it would be causing the computer (or the brain
   simulation program, or the whole system) to experience consciousness
   if it's made out of different stuff and doing different things?
      
   thanks,   
   --   
   Kevin Calder   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca