
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 338 of 1,954   
   Jochen Fromm to All   
   Re: The Power of the Google Cluster   
   11 Jun 04 01:15:36   
   
   From: Jochen.Fromm@t-online.de   
      
   >   
   > You can use large clusters for many purposes, one of which is parallel   
   > IR that Google company is most interested in. Evolutionary programming   
   > can be considered a grand challenge; however it must serve some end.   
   > Building a "Matrix" is probably not a sufficient end in itself.   
   >   
      
   I think building a virtual "Matrix" world is a sufficient end in itself.
   This of course includes constructing intelligent agents that are able to
   understand and live in this "Matrix". You need several clusters: one for
   the simulation of the world, and one for every intelligent agent. The two
   tasks of constructing an external and an internal world are closely
   related, if you consider that an intelligent agent needs world knowledge
   large enough to build up a representation of that world.
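   The split described above (one cluster simulating the external world, one
   cluster per agent for its internal world) can be sketched at toy scale as a
   single simulation loop. Every name and policy below is my own illustrative
   invention, not something from the original post:

```python
import random

class World:
    """Toy external world: agents at integer positions on a line."""
    def __init__(self, size=10):
        self.size = size
        self.positions = {}          # agent id -> position

    def perceive(self, agent_id):
        # An agent never touches world state directly, only an observation.
        return {"me": self.positions[agent_id],
                "others": sorted(p for a, p in self.positions.items()
                                 if a != agent_id)}

    def apply(self, agent_id, move):
        self.positions[agent_id] = max(0, min(self.size - 1,
                                              self.positions[agent_id] + move))

class Agent:
    """Internal world: keeps its own model and chooses an action."""
    def __init__(self, agent_id):
        self.id = agent_id
        self.model = None            # last observation = crude world model

    def act(self, observation):
        self.model = observation
        others = observation["others"]
        if not others:
            return random.choice([-1, 1])
        # Toy policy: step away from the nearest other agent.
        nearest = min(others, key=lambda p: abs(p - observation["me"]))
        return 1 if observation["me"] >= nearest else -1

world = World()
agents = [Agent(i) for i in range(3)]
for a in agents:
    world.positions[a.id] = random.randrange(world.size)

for step in range(5):                # one world tick per loop iteration
    for a in agents:
        world.apply(a.id, a.act(world.perceive(a.id)))
```

   At real scale each `World` and `Agent` object would become its own cluster,
   with `perceive`/`apply` as the network protocol between them.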
      
   In an AI Magazine article about thinking and representation
   (Volume 19, Number 1 (1998), 91-110), Randall Davis says that
   "thinking is not simply the decontextualized manipulation   
   of abstract symbols, powerful though that may be. Instead,   
   some significant part of our thinking may be the reuse or   
   simulation of our experiences in the environment [..]   
   Representations allow us to re-present things to ourselves   
   in the absence of the thing, so that we can think about it, not just   
   react to it."   
      
   Intelligent behavior is characterized by prediction and   
   imagination, intentional action and reasoning. Davis continues:   
   "Animal intelligence has a here and now character: With animal calls,   
   for example, there is an immediate link from the perception to the mind   
   state to the action. If a monkey sees a leopard, a certain mind state   
   ensues, and a certain behavior (giving the appropriate call) immediately   
   follows. Human thought, by contrast, has an unlimited spatiotemporal   
   reference, by virtue of several important disconnections. Human thought   
   involves the ability to imagine, the ability to think about something in the   
   absence of perceptual input, and the ability to imagine without reacting."   
      
   Therefore building intelligent agents and creating a virtual "Matrix" world
   are in fact closely related topics. Today there are sophisticated 3D
   graphics engines, especially in new computer games (for example
   Far Cry), which can display very complex worlds. Thus the goal
   of AI seems closer than ever before. More than five years ago,
   James F. Allen wrote in the AI Magazine article "AI Growing Up -
   The Changes and Opportunities" (Volume 19, Number 4 (1998), 13-23):
      
   "We are [in AI] at a similar transition point to the first flight in   
   aviation. The field of aviation was changed dramatically by the   
   development of working prototypes because for the first time,   
   experimental work could be supported [..] I believe that we're at   
   a similar transition point to the first flight because we are now   
   able to construct simple working artifacts which then can be   
   used to support experimental work."   
      
   This experimental work should answer unresolved questions,   
   for example the question Nils J. Nilsson formulated in "Eye on the Prize":   
   "Is general intelligence dependent on just a few weak methods   
   (some still to be discovered) plus lots and lots of commonsense   
   knowledge? Does it depend on perhaps hundreds or thousands   
   of specialized minicompetences in a heterarchical society of   
   mind? No one knows the answers to questions such as these,   
   and only experiments and trials will provide these answers."   
      
   Allen further writes in his article that a critical point in the
   transition is the development of a calculus or a set of laws:
   "By analogy to a mature science such as physics, we are [in AI] at   
   a stage prior to the development of calculus and Newton's laws."   
      
   What calculus could that be? I am convinced it should
   be based on the language of Multi-Agent Systems (see
   Minsky's classic book "The Society of Mind"), both for
   the external and the internal world of the agent. Distributed Artificial
   Intelligence (DAI) has become the most important branch of AI.
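   Minsky's "society" picture can be illustrated, very loosely, in a few
   lines: many mini-competences bid on the current situation, and the
   strongest applicable one acts. The condition/action/priority encoding is a
   hypothetical toy of mine, not a proposed calculus:

```python
# Each "mini-agent" is a (condition, action, priority) triple; the society
# picks, among the agents whose condition matches, the one with the highest
# priority -- a crude heterarchy rather than a fixed hierarchy.
def society(mini_agents):
    def decide(situation):
        candidates = [(prio, act) for cond, act, prio in mini_agents
                      if cond(situation)]
        if not candidates:
            return "idle"
        return max(candidates)[1]    # highest-priority matching agent acts
    return decide

decide = society([
    (lambda s: s.get("predator"), "flee",    3),
    (lambda s: s.get("hungry"),   "eat",     2),
    (lambda s: True,              "explore", 1),
])

decide({"hungry": True})                    # -> "eat"
decide({"hungry": True, "predator": True})  # -> "flee"
```

   The point of the encoding is that competence lives in many small, isolated
   rules, and behavior emerges from their competition rather than from one
   central program.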
      
   Probably Allen is right, and we are at a critical transition point,
   similar to the first plane and the first flight in aviation. The computers,
   graphics engines, machines and clusters seem to be powerful enough.
   Yet I have the impression that, although many people work in AI, too
   many of them want to construct the cockpit (especially the console
   of the cockpit), and each tries to create it alone, whereas not
   enough scientists work _together_ on the engine and the wings, for
   example in a common large-scale project.
      
   The engine and the wings correspond to the software which enables
   the agent to move around in a complex 3D world and which permits
   the agent to understand this complicated world (including a powerful
   and modular 3D graphics engine). Such a distributed application
   should at the very least support an understanding of the prepositions
   at/on/in, before/behind, above/below, over/under, etc. (*)
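   As a purely geometric first approximation of a few of these prepositions
   (ignoring the functional subtleties Herskovits analyzes), one might test
   axis-aligned bounding boxes; all definitions below are illustrative
   simplifications of my own:

```python
# A box is (min_corner, max_corner), each an (x, y, z) triple.
# Convention assumed here: z is up, y points away from the viewer.
def above(a, b):
    return a[0][2] >= b[1][2]        # a's bottom at or over b's top

def below(a, b):
    return above(b, a)

def inside(a, b):
    return all(b[0][i] <= a[0][i] and a[1][i] <= b[1][i] for i in range(3))

def on(a, b):
    # "on" as resting on top: bottom touches b's top, footprints overlap
    touching = abs(a[0][2] - b[1][2]) < 1e-6
    overlap = all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(2))
    return touching and overlap

def behind(a, b):
    return a[0][1] >= b[1][1]        # farther from the viewer along y

cup   = ((1, 1, 2), (2, 2, 3))
table = ((0, 0, 0), (4, 4, 2))
on(cup, table)     # -> True
above(cup, table)  # -> True
```

   Real prepositions are far messier ("in" tolerates partial containment, "on"
   includes walls and ceilings), which is exactly why Herskovits's analysis
   matters; the sketch only shows where a geometric baseline would start.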
      
   Allen's attempt at a one-sentence AI definition is: "AI is the science of
   making machines do tasks that humans can do or try to do."
   Humans live in a complex multi-dimensional environment, and the main
   purpose of the brain is to control the movement of the body in this
   environment. If we are able to construct agents with common sense
   which understand a complex 3D environment, we come close to AI's
   original goal: to produce intelligent programs that are able to use
   general tools and to build systems with humanlike capabilities and
   intelligence.
      
   An intelligent agent with common sense should have at least the
   _common_ abilities of all humans. It should be able to do what every
   human can: every one of us is an expert in speech & language and
   vision & motion, with a highly developed ability to comprehend a
   complex 3D environment.
      
      
   (*) Annette Herskovits, "Language and Spatial Cognition",
   Cambridge University Press, 1986.
      
   [ comp.ai is moderated.  To submit, just post and be patient, or if ]   
   [ that fails mail your article to , and ]   
   [ ask your news administrator to fix the problems with your system. ]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca