
Forums before death by AOL, social media and spammers... "We can't have nice things"

   alt.cyberpunk      Ohh just weirdo cyber/steampunk chat      2,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 439 of 2,235   
   Alienthe to Raist   
   Re: AI (again) (1/2)   
   04 Nov 03 22:23:48   
   
   From: Alienthe@hotmail.com   
      
   Raist wrote:   
      
   > Alienthe wrote:   
   >   
   >> alias wrote:   
   >>   
   >>> On Tue, 28 Oct 2003 01:35:40 +0000, Raistlin wrote:   
   >>>   
   >>> [snip]   
   >>>   
   >>>>>> IMHO for an AI to evolve it should not only be able to learn, but   
   >>>>>> also   
   >>>>>> be able to re-write its own code. This would, I imagine, lead to an
   >>>>>> exponential growth in "intelligence"   
   >>>>>>   
   >>>>>> R.   
   >>   
   >> It wouldn't be necessary to rewrite its own code; rather one   
   >> could design and start the next, upgraded version in what   
   >> would be a society of AIs. Still, the explosive growth and   
   >> development seems rather likely.   
   >   
   > It wouldn't be necessary for it to re-write its own code, but it is an
   > interesting idea, no? Also, if it wanted to grow and become more than it   
   > started as, and it had the ability to write more efficient code and   
   > better algorithms, it stands to reason that it would apply it to itself.   
      
      
   There are many possibilities here. Of course, if you make the
   rather grand assumption that an AI would write bug-free code,
   then yes. Otherwise it would be a better bet to test-run code
   and see what is better adapted to the environment, artificial as
   it may be. Then you get the question of culling the herd and
   applying the new code; after all, survival functionality will
   probably be a high priority. Culling would be necessary since
   computational resources will always be limited, yet the code
   would resist culling.
      
   Looking at nature, evolution is (at least from what I can see)
   always applied to the next generation. Intuitively that looks
   safer to me. For an AI it could lead to a rapidly
   multiplying number of new AIs, splintering in the way hinted
   at in the Sprawl trilogy?
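   The generational scheme in the two paragraphs above (test-run
   variants, cull under limited resources, apply changes only to the
   next generation) is essentially an evolutionary algorithm. A
   minimal sketch in Python, assuming a made-up numeric "environment";
   every name, target, and parameter here is illustrative, not from
   the post:

```python
import random

# Toy "artificial environment": a genome scores higher the closer
# the sum of its genes is to a target. (TARGET and all sizes below
# are arbitrary illustrative choices.)
TARGET = 42

def fitness(genome):
    return -abs(sum(genome) - TARGET)

def mutate(genome, rate=0.2):
    # Offspring differ slightly from their parent; the running
    # "code" itself is never rewritten in place.
    return [g + random.uniform(-1, 1) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=20, genome_len=5, generations=50, survivors=5):
    random.seed(0)  # deterministic for the example
    population = [[random.uniform(0, 10) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Culling: computational resources are limited, so only the
        # fittest few survive into the next round.
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]
        # Changes are applied to the next generation, as in nature.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - survivors)]
    return max(population, key=fitness)

best = evolve()  # sum(best) ends up close to TARGET
```

   Keeping the parents unchanged (elitism) means the best score never
   gets worse between generations, which loosely mirrors the point
   about survival functionality being a high priority.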
      
   [snip]   
      
      
   >> I believe it is the idea behind the CYC project that   
   >> intelligence arises out of complexity. They are filling up
   >> the database with ever increasing complexity but nothing   
   >> intelligent appears to have surfaced yet. Then again it   
   >> would to me be a sign of intelligence to keep a low profile.   
   >>   
   >>>> Of course it would need to be fully conscious to do this.   
   >>   
   >> Would it? It has been suggested that humanity too was not
   >> conscious until just a few thousand years ago, yet was
   >> clearly intelligent. See old discussions on the bicameral   
   >> mind for more.   
   >   
   > To re-write its own code, yes it would (by the definition given below).   
      
      
   I am not convinced, could you elaborate?   
      
   >>> a personality (consciousness, awareness, entity, whatever) is the result
   >>> of its experience *and* the hardware it runs on.  i, for example, am not
   >>> just a product of the New Jersey waste.. i am a product of the New
   >>> Jersey
   >>> waste as experienced by some of god's finest monkey-meat ; )  there are
   >>> others here that have lived similar lives and come away with different
   >>> perspectives.. a certain amount of that could be put down to
   >>> experience..
   >>> but certainly, and to a large degree, it must be a side effect of the
   >>> hardware my consciousness runs on.
   >>   
   >> I am not quite buying this hardware angle, rather I think   
   >> it is more about what you perceive to be your hardware, which
   >> is just another part of your experience. After all it could   
   >> be a monster grade deception in the style of the Matrix. It   
   >> would not be necessary to tell an AI the full truth of its   
   >> environment.   
   >   
   > No, but that would be likely to put artificial limits on its
   > development. Also, a sufficiently developed entity might work it out for   
   > itself. It might be akin to a sort of AI mysticism :o)   
      
      
   There will always have to be limits in a finite world; it   
   would be interesting to know how the limitations would affect   
   the AI.   
      
   I am less convinced that an AI would be able to work it out,
   looking outside the box it is enclosed by. It might guess, much
   as we make guesses about the (possible) underlying nature, but
   obtaining hard facts from the void seems implausible to me.
      
   >>> alterations to that hardware may possibly result in a functioning   
   >>> (perhaps   
   >>> even superior) "entity" but i do not believe that entity would be me.   
   >>> or to put it more concisely.. if such an AI achieved consciousness, and   
   >>> then realized it had the capability to alter that consciousness..   
   >>> don't u   
   >>> think it would seek to destroy that capability?  self-preservation is   
   >>> the   
   >>> 1st priority of (most) self-aware beings.   
   >>   
   >> The definition below seems to have combined consciousness
   >> with self consciousness, two concepts I believe differ by
   >> degrees. Consciousness, as I see it, is about knowledge of
   >> your own thought process (Cogito ergo sum) but that would
   >> not have to involve a body or a distinct self. Self
   >> consciousness then is to me the knowledge of the self as
   >> distinct from others. Certain forms of autism involve
   >> not being able to distinguish the self from others, or
   >> even others from others. I am not an expert, this is
   >> just what I understood from an article, though it does
   >> strike me that autism seems to be a great number of things.
   >   
   > Quite possible that an AI would exhibit symptoms similar to human autism.
      
      
   That would be a disturbing possibility. Moreover it could
   make the AI less useful for the general population.
      
   >> I agree that self preservation would seem related to being   
   >> self aware though I am not sure one leads to the other   
   >> without some darwinism.   
   >>   
   >>>> From Websters:   
   >>>>   
   >>>> \Con"scious*ness\, n. 1. The state of being conscious; knowledge of   
   >>>> one's own existence, condition, sensations, mental operations, acts,   
   >>>> etc.   
   >>   
   >> Hmmm. Definitions that involve "etc." seem shaky to me.   
   >   
   > Well, it is a short dictionary definition, not a philosophical treatise.   
   > Just thought I would define a term that is being bandied around in this   
   > discussion.   
      
      
   Discussions in this group frequently become philosophical,
   especially when the topic drifts towards AIs, which it has
   many times in the past. Sharp definitions tend to be more
   useful than overly broad ones; they also help keep topic
   drift from turning into derailment. Topic drift is not a bad
   thing; it is one of the things that makes this place so
   interesting, at least to me.
      
   >>>>> i think that in a sense, the code of an AI could be seen as the   
   >>>>> body/brain   
   >>>>> of the AI, rather than its mind or intelligence.   
   >>>>>   
   >>>> Personally I would try and avoid human analogies with regards to   
   >>>> AIs... The less preconceptions we can include in the language used   
   >>>> to discuss them the more possibilities will open up.   
   >>>   
   >>> yes.  yes.  yes. the part that has always confused me in these   
   >>> conversations on AI is why   
   >>> people seem to be convinced that such a creature would even be aware of   
   >>> the macro-verse surrounding it.   
   >>   
   >> Agreed. It does make for easy plot lines in movies. Moreover I   
   >> am not even sure self consciousness is required to make a useful
   >> AI.   
   >   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca