
Forums before death by AOL, social media and spammers... "We can't have nice things"

   alt.cyberpunk      Ohh just weirdo cyber/steampunk chat      2,235 messages   


   Message 1,721 of 2,235   
   Kevin Calder to All   
   Re: No Consciousness for Artificial Inte   
   14 Mar 06 18:45:24   
   
   From: kevin.calder@onetel.net   
      
   OK, I'm just going to rez this coz I have been thinking about it   
   recently, and I am feeling settled about it.   
      
      Let's say that the Strong AI Thesis (SAIT) is the claim that we can, in
   principle, make software (sufficiently detailed brain simulations or   
   whatever) that will be conscious.   
      
      By conscious I mean that they appear conscious to us (this is the   
   behaviourist part, which is in principle testable by some version of the   
   Turing test) but also that they possess a first-person point of view
   (FPPOV).   
      
      The latter stipulation is the controversial and problematic part, so   
   for the sake of argument let's address the version of "Strong" AI that
   omits it. This is based on the "if it isn't measurable, then why do we   
   need to include it in the design" school of AI design (I'm not sure that   
   this is really a school, but a comp sci student said something pretty
   similar during our first time round).  In this case the thesis is true   
   if an AI passes an appropriate version of the Turing test, and I'd be   
   happy to acknowledge it as true.  The problem is that although this   
   version of the SAIT works, it's a little banal.  We are basically saying   
   "If I can make a duck that walks and talks like a duck it will be   
   conscious like a duck."  Which, when we expand on what we mean by   
   'conscious' in this version of the thesis, becomes "If I can make a duck   
   that walks and talks like a duck it will be conscious, or in other   
   words: walking and talking like a duck."  Fair enough, but it's more
   than a little tautological.   
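   
      Just to make the tautology explicit (a quick bit of shorthand of my
   own devising, nothing more):
   
         B(x) := x walks and talks like a duck
         C(x) := x is conscious, which on this reading just means B(x)
   
   so the thesis "if B(x) then C(x)" collapses into B(x) -> B(x), true of
   anything whatsoever and uninformative about ducks and machines alike.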
      
      I'd like to call this the Banal AI Thesis (BAIT) (I think the acronym   
   here is telling!), and though I can agree with it, I'm not sure it quite   
   hits the spot. I think what most people are arguing over with SAIT is   
   the latter, more problematic stipulation, the FPPOV.  That said, if you   
   are a proponent of the BAIT, and have no interest in the SAIT (with all   
   that it implies), then I have no quarrel with you.  Let's move on.
      
      The problem with the FPPOV is that it isn't testable by any   
   scientifically satisfying method *1.  In everyday life we bypass this   
   problem by subconsciously making an argument from analogy (they seem   
   like me on the 'outside' so chances are they are like me on the   
   'inside') and not worrying about it too much.  This isn't much help to   
   us in this case though (because we are worried about it), so we need   
   something that is as firm as possible.   
      
      One of the few opportunities we have to "see" consciousness making   
   contact with its physical causes is in circumstances where observable   
   changes in brain chemistry (and other brain science stuff) relate to   
   changes in our FPPOV.  We can do this in the lab, or we can just take   
   some cocaine at home and document the results.  We know about some of   
   the ways in which cocaine interacts with our brain chemistry and we can   
   relate these effects to the qualitative changes in our FPPOV.  If you   
   don't get the qualitative changes, you've been burned, man!
      
      So basically we do have an option for a firmer test for FPPOV and that   
   is to test for its physical causes, and as we learn more about these   
   causes the test will get increasingly firm.   
      
      So where does all this leave SAIT?  If it's the "claim that we can, in   
   principle, make software (sufficiently detailed brain simulations or   
   whatever) that will have a FPPOV" then, as things currently stand, I'm   
   afraid it's in trouble.  The firmest test that we have simply doesn't
   permit us to assert confidently that a computationally 'perfect'   
   simulation of a brain will have a FPPOV, simply because while we have   
   some reason to believe that brains cause FPPOVs we currently have no   
   reason to believe that computation causes them.   
      
      Some of you may have noticed that there is a gap here that is   
   impossible to bridge.  The problem with the physical causes test is that   
   watching someone take cocaine, or have their brain probed, and then
   testify to the link between the physical causes and the qualitative
   changes in FPPOV isn't enough.  You actually have to "get in the chair"
   and do it yourself to 'see' the link.  The impossible task for Andy, your
   robot friend, is that even if he testifies to his having a FPPOV, you
   will never be able to assert with confidence that he actually has a   
   FPPOV using the physical causes argument, because he simply doesn't have   
   the things doing the causing.  You might suppose that in the future we   
   will be able to relate FPPOV to other physical causes, but how would we   
   do this?  We can only ever make the connection by "getting in the chair"   
   ourselves, so we can only ever make the connection between FPPOVs and
   brains.  We just can't "get in the chair" as an AI computing on a box   
   full of silicon chips and validate the correlation.  This is the   
   unbridgeable gap.   
      
        You might also notice that this creates a sort of relative sliding   
   scale of FPPOVness.  By this I mean that the extent to which I can   
   confidently assert that a particular being has a FPPOV like mine depends   
   on the extent that they resemble me in terms of the physical causes that   
   produce their FPPOV.  Actually this applies to all things!  I'd sooner
   wager that large primates have a FPPOV similar to mine than that reef
   coral does, and I'd sooner wager on coral than on a lump of granite.
   Even a human with
   considerably different brain chemistry causes problems, though I expect   
   that as the science focuses in on the causes of FPPOV, we will be able   
   to tell the important (in terms of causing FPPOV) differences from the   
   unimportant ones.  Brain structure, for instance, can vary quite a lot on
   some levels in different humans, but I'm guessing that the extent of the   
   average variation isn't the sort of thing that changes the FPPOV   
   dramatically or, even worse, makes it so different I can't relate it to
   my own.  (But who knows?)   
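   
      If you want that sliding scale in one rough line (sim() and Cred()
   are just placeholder labels I'm inventing here):
   
         Cred( x has a FPPOV like mine )  rises with
         sim( physical causes in x, physical causes in me )
   
   which is why the wager ordering runs primate > coral > granite.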
      
      This business of "my FPPOV" being the standard is tricky, but I don't   
   see any obvious way around it given that you can never get into anyone   
   else's FPPOV 'vehicle' and give it a spin.  You'll notice that it also   
   prevents us from speculating about the possibility, or nature, of "other
   forms of FPPOV", e.g. computational, emergent, silicon-based or whatever.
   Such speculation is simply unintelligible.  The further it gets from   
   your own experience of FPPOV, and its specific physical causes, the less   
   you can confidently say about it.   
      
      Personally I have a feeling that these peculiar features are a symptom   
   of our crude level of understanding of the physical causes themselves.   
   Perhaps if we were able to understand the nature of the exact processes   
   that cause FPPOV we might be able to say that other entities/systems
   might also exhibit this characteristic, but I'm aware that this is wild   
   speculation about science that doesn't exist yet, and may never.   
      
      Some of you accused me, as Searle's proxy, of prohibiting AI from ever
   being recognised as conscious.  I think "machine bigot" was the precise   
   term ;)  This is not the case; I just think that strong AI is a logically
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca