[ home | bbs | files | messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   alt.cyberpunk      Ohh just weirdo cyber/steampunk chat      2,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 360 of 2,235   
   alias to joss wright   
   Re: AI (again)   
   29 Oct 03 01:21:44   
   
   b6ff41ec   
   From: alias@removenetserver.org   
      
   On Tue, 28 Oct 2003 08:51:41 +0000, joss wright wrote:   
      
   [snip]   
      
   > i'm replying here because this ties in to my use of the "human" metaphor   
   > for an AI.   
   >   
   > the metaphor which i used was, i admit, based around humans. this doesn't,   
   > i feel, invalidate its use. the common terminology used to refer to   
   > intelligent agent systems uses the terms "effector" for agent routines   
   > which manipulate the agent's environment, and "sensor" for... well, it's   
   > obvious. it's easy to extend a metaphor to an AI in this way. it must   
   > _have_ (at least in an abstract sense) a processing center which   
   > correlates to the brain. similarly it must have sensors of some form (see   
   > below) which correlate to our own five senses.   
   >   
      
   i agree that for an intelligence (of any kind) to exist it must have
   sensors of some kind.  however i do not believe that these senses must
   correlate to our own.
      
   human senses are inappropriate for an entity existing in an environment
   completely unlike our own.
      
   (i'm cutting myself off here because A. i don't have the time to go into   
   it ; ) and B. u just said below that it doesn't matter all that much for   
   this discussion.. and i agree)   
      
      
   > i think that this links in here with your point. I would argue that any   
   > system which is to learn _must_ have sensors. the type/style of these   
   > sensors is not really the issue (especially when you're being as   
   > conjectural with very little data as we are in this discussion/forum). if   
   > a system is not receiving input then it will have no new data from which   
   > to learn and thus the concept of intelligence is void.   
      
   agreed.. my real point here is that the senses must be relevant to the AI
   if u expect it to be concerned with their input.  and IMO the sensors
   generally discussed deal only with the physical world we inhabit, and as
   such would be useless to a machine.
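
   the sensor/effector agent loop described above can be sketched in a few
   lines of python. everything here (the dict environment, the "+1" action
   rule, the class names) is invented for illustration, not taken from any
   real agent framework:

```python
# Minimal sketch of a sensor/effector agent: the agent perceives its
# environment through a sensor, stores what it sensed (its only source
# of new data to learn from), and manipulates the environment through
# an effector. All names and the toy environment are hypothetical.

class Agent:
    def __init__(self):
        self.memory = []  # the "processing center" keeps past percepts

    def sensor(self, environment):
        # perceive: read observable state from the environment
        return environment["signal"]

    def effector(self, environment, action):
        # act: manipulate the environment based on a decision
        environment["signal"] = action

    def step(self, environment):
        percept = self.sensor(environment)
        self.memory.append(percept)            # no input -> nothing new to learn
        self.effector(environment, percept + 1)  # toy "decision": increment

env = {"signal": 0}
agent = Agent()
for _ in range(3):
    agent.step(env)

print(agent.memory)   # percept history: [0, 1, 2]
print(env["signal"])  # environment state changed by the effector: 3
```

   the point of the sketch is only the loop shape: cut the sensor call and
   `memory` never grows, which is the "no new data, no learning" argument
   in the quoted text.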
      
      
   >   
   > without getting into the wider issue of what intelligence/consciousness   
   > actually is, (what is truth, man? you heard the weirdo.), i would argue   
   > that any AI system which humans had a significant hand in developing would   
   > very likely be equipped with some form of I/O with which humans could
   > interface in some fashion.   
      
   i agree that humans would be unlikely to design anything that didn't serve   
   us in some manner ; )   
      
   but evolution is a tricky bitch.. who sez we get to design it?   
      
   >   
   > if, however, you refer to an AI perceiving only a simulated universe then   
   > surely we would have been the creators of that universe and as such in a   
   > very good position to observe and interact with it. we are gods in the digital
   > domain.   
   >   
      
   actually.. i'm pretty sure we'd be fish out of water in this digital   
   domain.  blind, deaf, and far too slow.  i'm also pretty sure an AI would   
   have precious little interest in a "simulated universe" .. a real
   universe already exists inside that machine.  that we don't find it   
   comfortable will likely not matter to its native inhabitants.   
      
      
   > as to altering its own source code.... for a start i'd like to reiterate   
   > my post from another thread that a lot of current work in AI is in simpler   
   > systems working together rather than monolithic minds. as such, evolution   
   > seems perhaps a more likely approach to improving the capabilities of   
   > systems. for monolithic artificial mind style AI, i believe that it is   
   > likely that altering its own source code would probably not be of the
   > greatest importance in comparison with the sorting/processing of its   
   > inputted data. not that i think it's an invalid point.   
      
   my point was only that *once intelligence arose* the entity would likely
   have some sense of self preservation.  and in a digital environment   
   self-preservation would include preservation of the original code that   
   gave birth to its consciousness.   
      
   everything's 1's and 0's, right?  if that code is my "mind" u can be damn
   sure no one's messing with it, not even myself.
      
   i think.  dunno.. i'm not a machine ; )   
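
   the self-preservation idea above (refusing any change to the code that
   gave birth to the mind) can be sketched with a hash check. the function
   names and the toy source string are invented for illustration:

```python
# Toy illustration of "preserving the original code": record a
# cryptographic fingerprint of the original source, then reject any
# change whose fingerprint differs. Names here are hypothetical.

import hashlib

def fingerprint(code: bytes) -> str:
    # SHA-256 digest of the code; any 1-bit change alters this value
    return hashlib.sha256(code).hexdigest()

original = b"def mind(): return 'i think'"
baseline = fingerprint(original)

def accept_change(new_code: bytes) -> bool:
    # self-preservation: only accept code matching the original fingerprint
    return fingerprint(new_code) == baseline

print(accept_change(original))                      # unchanged code: True
print(accept_change(b"def mind(): return 'obey'"))  # tampered code: False
```

   in this toy version the entity accepts nothing, not even its own edits,
   which is exactly the stance in the post; a real system would presumably
   need some signed-update escape hatch.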
      
   -a   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca