

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 347 of 1,954   
   Jochen Fromm to All   
   Re: The Power of the Google Cluster (1/2)   
   20 Jun 04 03:55:49   
   
   From: Jochen.Fromm@t-online.de   
      
   >   
   > There have already been grand space-like projects,   
   > such as the 5th generation computers in Japan:   
   > quite a failure. We don't know the fundamentals,   
   > you cannot "decompose" intelligence   
   >   
      
   If everything is connected with everything else, the   
   resulting structure is likely to be difficult to   
   understand or handle. To construct intelligent agents,   
   you will therefore probably need to find a   
   decomposition or modular structure.   
      
   We know the fundamentals, and we know how the   
   mind works. The mind is simply what the brain does   
   (Marvin Minsky's "Society of Mind", chapter 28.5),   
   and even in higher life-forms the brain is to a   
   large extent controlled by emotions [1]. The brain   
   is based on large-scale associative neural networks,   
   which are modulated and reinforced through emotional   
   systems (especially the limbic system). Its purpose   
   is to process sensory information in order to control   
   the motion of the body in its environment. Its   
   advantages are flexibility, intelligence and creativity   
   - the ability to learn and understand, to deal with   
   new or challenging situations, and to be creative.   
      
   I don't agree if you say we don't need grand   
   space-like projects. Yes, some researchers say   
   "there is no need to attempt a 'Manhattan Project'   
   approach with a monolithic project that attempts   
   to create human-level intelligence all at once" [2].   
   But I think the size of the project should match   
   the size of the problem. We need such an Apollo   
   or Manhattan AI project to meld the scientists   
   and the different experts together. Such a project   
   would also create completely new kinds of experts   
   and job niches.   
      
   The "getting-into-virtual-world-challenge"   
   is not new. AI researchers have worked on it for   
   years. But although they all share the same goal,   
   each AI researcher creates his own limited virtual   
   world instead of working together on the same   
   project. Because the resources of a single research   
   group are limited, they usually start with a simple 3D   
   world made of boxes. The agents in these worlds typically   
   reach the intelligence of boxes, too. The complexity   
   of adaptive and evolutionary Multi-Agent Systems (MAS)   
   mirrors the complexity of their environment.   
      
   Complex Multi-User Games can be the platform for the   
   next-generation AI. A really large project based   
   on 3D multi-player computer games (similar to   
   EverQuest, Dark Age of Camelot, etc.) and   
   Collaborative Virtual Environments (CVEs) can be   
   the crucial step. According to Tony Manninen [3],   
   CVEs provide a computer-generated 2-D or 3-D space   
   within which multiple users can move and interact.   
   Multi-player games are inherently complex and   
   social systems, and therefore suitable for the   
   development and evolution of intelligent autonomous   
   agents.   
      
   Leading AI researchers focused on computer games,   
   like John E. Laird, are convinced [2] that   
   human-level AI can be successfully pursued in   
   interactive computer games. They argue [2] that   
   "interactive computer games have increasingly complex   
   and realistic worlds and increasingly complex and   
   intelligent computer-controlled characters".   
      
   A 3D graphics engine produces complex 3D patterns:   
   it maps an abstract state of the world model into   
   a complex 3D graphic scene. The process of understanding   
   involves the inverse mapping, from a complex three- or   
   multi-dimensional scene back to an abstract state of the   
   world model. Such a mapping could be realized by an   
   "inverse" graphics engine for pattern recognition.   
   Pattern recognition is not a new challenge. It is   
   possible to understand ordinary images and pictures   
   through simple back-propagation with neural nets. In   
   a similar way, it should be possible to understand   
   moving images and more complex scenes. The task is   
   not fundamentally different, only larger.   
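   A rough sketch of this "inverse engine" idea, with toy   
   stand-ins throughout: a fixed linear map plays the graphics   
   engine, and a small two-layer net is trained by plain   
   back-propagation to invert it (all dimensions and names are   
   invented for illustration):   

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "graphics engine": a fixed linear map from a 4-dim abstract
# world state to a 64-pixel "scene" (a stand-in for a renderer).
STATE_DIM, PIX = 4, 64
RENDER = rng.normal(size=(STATE_DIM, PIX))

def render(states):
    return states @ RENDER  # abstract states -> scenes

# "Inverse engine": a small two-layer net trained with plain
# back-propagation to map scenes back to abstract states.
W1 = rng.normal(scale=0.1, size=(PIX, 32))
W2 = rng.normal(scale=0.1, size=(32, STATE_DIM))

def train(steps=500, lr=0.01, batch=32):
    global W1, W2
    losses = []
    for _ in range(steps):
        s = rng.normal(size=(batch, STATE_DIM))  # sample world states
        x = render(s)                            # forward engine
        h = np.tanh(x @ W1)                      # hidden layer
        s_hat = h @ W2                           # recovered state
        err = s_hat - s
        losses.append(float((err ** 2).mean()))
        # back-propagate the squared error through both layers
        g2 = h.T @ err / batch
        gh = (err @ W2.T) * (1.0 - h ** 2)       # tanh derivative
        g1 = x.T @ gh / batch
        W1 -= lr * g1
        W2 -= lr * g2
    return losses

losses = train()
```

   The recovery error falls as training proceeds; a real scene   
   would of course replace the linear renderer with something   
   vastly more complex, which is the "only larger" part.   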
      
   One open question is whether the two engines - the graphics   
   engine and the inverse engine - should be completely   
   independent of each other. Prediction and   
   expectation in the particular context of the current   
   situation are essential for reducing the complexity of   
   the task. To facilitate pattern recognition, each agent   
   could therefore use its own pattern-producing engine   
   as well. It is possible to use common intermediate   
   representation layers in both engines, but then of course   
   they are no longer independent of each other. Which is   
   more suitable and useful: two completely independent   
   engines, or two layered engines which share common   
   layers?   
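   The "shared layers" variant can be pictured with a minimal   
   linear sketch (all sizes and names invented for illustration):   
   the inverse engine writes into one common representation layer,   
   the graphics engine reads from it, and both are trained jointly   
   to re-render observed scenes.   

```python
import numpy as np

rng = np.random.default_rng(1)

PIX, LATENT = 64, 8
# Fixed source of structured "scenes" to learn from.
GEN = rng.normal(size=(LATENT, PIX)) / np.sqrt(LATENT)

# The two engines meet in one shared representation layer:
# the inverse engine (encoder E) writes into it, and the
# graphics engine (decoder D) reads from it.
E = rng.normal(scale=0.05, size=(PIX, LATENT))
D = rng.normal(scale=0.05, size=(LATENT, PIX))

def train(steps=800, lr=0.02, batch=16):
    global E, D
    losses = []
    for _ in range(steps):
        x = rng.normal(size=(batch, LATENT)) @ GEN  # observed scenes
        z = x @ E                                   # shared layer
        x_hat = z @ D                               # re-rendered scene
        err = x_hat - x
        losses.append(float((err ** 2).mean()))
        # gradient descent on the reconstruction error
        gD = z.T @ err / batch
        gE = x.T @ (err @ D.T) / batch
        E -= lr * gE
        D -= lr * gD
    return losses

losses = train()
```

   Because the reconstruction error trains both engines at once,   
   they cannot be independent - the shared layer is exactly where   
   prediction (decoding) and recognition (encoding) meet.   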
      
   I agree with all who say we need an advance in theory,   
   not only faster and bigger computers or clusters, and not   
   a theory based on the terminology of logic. Logic was a   
   good basis for serial, binary and digital computers.   
   What is needed instead is a new kind of theory based on   
   DAI, agents and Multi-Agent Systems. And we need new   
   kinds of experiments. A breakthrough in AI-related   
   theory is possible if it is guided and supported by solid   
   experiments - experiments in complex virtual worlds.   
   Experiments will show which approaches are suitable and   
   which make no sense.   
      
   Once humans made the first steps into space forty years   
   ago, in the 1960s, they knew it was finally possible to   
   tackle the "getting-into-space-challenge": to bring a   
   man to the moon and safely back. Concrete experiments in   
   the form of several missions with clear objectives   
   paved the way to the larger mission of building a huge   
   spacecraft capable of reaching the moon. The task of   
   building a super-booster, the Saturn V moon rocket, was   
   of course too big for a single person or an isolated   
   group.   
      
   We have made the first steps into virtual worlds: the   
   current computer games - Far Cry, Unreal, Quake etc. -   
   offer highly sophisticated computer graphics. Thus   
   we know it is now finally possible to tackle the   
   "getting-into-virtual-world-challenge". Of course the   
   task of constructing really intelligent and completely   
   autonomous agents is a big task, too, one which can only   
   be solved together, through concrete common experiments   
   in the form of several prototypes with clearly   
   defined and increasingly complex goals. If everybody   
   tries to build his own private rocket or to reinvent   
   the wheel, the AI community will possess many small   
   rockets and AI-wheels, but it will not succeed.   
      
   [1]   
   The Emotion Machine, Draft   
   Marvin Minsky   
   http://web.media.mit.edu/~minsky/   
      
   [2]   
   Human-Level AI's Killer Application - Interactive Computer Games   
   John E. Laird and Michael van Lent   
   http://ai.eecs.umich.edu/people/laird/papers/AAAI-00.pdf   
      
   [3]   
   Rich interaction model for game and virtual environment design   
   Tony Manninen, University of Oulu, Finland, 2002   
   http://herkules.oulu.fi/isbn9514272544/   
      
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca