XPost: comp.ai.edu   
   From: mailbox@dmitry-kazakov.de   
      
   On Fri, 09 May 2008 13:35:11 GMT, Ondra Zizka wrote:   
      
   > "Dmitry A. Kazakov" writes:   
   >| On Wed, 07 May 2008 10:57:58 GMT, Ondra Zizka wrote:   
   >|   
   >|> Is there some AI theory ( or idea / area of research aiming to create   
   >|> a theory) which would cover most currently known concepts and use   
   >|> them together? What about some fuzzy graph-like database of   
   >|> n-tuples holding all knowledge of an intelligent system, perhaps using   
   >|> neural networks to create the fuzzy relations and to perform   
    >|> transformations of both short-term knowledge (aka. cogitation) and long-term   
   >|> knowledge (learning, memorizing, creating memories) ?   
   >|   
   >| The topology of the graph in effect induces some distance/similarity   
   >| measure in n-dimensional space of tuples, which in turn determines how   
   >| learning works. This implies that there cannot be any universal structure,   
   >| because for any distance we could construct a problem, for which the least   
   >| distance learning will not work. Now, if the structure is to define the   
   >| distance, then that is not universal. If the distance is determined by   
   >| something else, then the structure is not *all* knowledge.   
   >   
   > Sure, the structure would not hold *all* knowledge, just the storable   
   > part of it.   
   [...]   
    > The structure would hold experience (actions done and their effects,   
   > learned techniques), memory (remembered objects, remembered "classes   
   > of objects", social memory), current environmental information (like   
   > "where am I", "what's the time"), current, mid-term and long-term   
   > "goals", etc etc.   
      
   Well, to summarize it - this structure has no idea how to learn.   
      
    That makes your initial question meaningless. A structure without a notion   
    of learning is irrelevant, so long as it can hold all possible states of   
    learning. For that matter, take a single integer number as the structure.   
    It can hold all the information you have described...   
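To make that degenerate "structure" concrete, here is a throwaway Python sketch (the encoding scheme, 16 bits per field, is an arbitrary choice invented for illustration): any record of bounded fields packs into a single integer, which shows that "can hold the information" is far too weak a criterion for a knowledge structure.

```python
def pack(fields):
    """Encode a list of non-negative ints (each < 2**16) into one
    integer: fields in the high bits, the field count in the low 16."""
    code = 0
    for f in fields:
        assert 0 <= f < 2 ** 16
        code = (code << 16) | f
    return (code << 16) | len(fields)

def unpack(code):
    """Invert pack(): read the count, then peel fields off the low end."""
    n = code & 0xFFFF
    code >>= 16
    fields = []
    for _ in range(n):
        fields.append(code & 0xFFFF)
        code >>= 16
    fields.reverse()
    return fields

# Round trip: one integer "holds" the whole record,
# yet says nothing about how to learn from it.
state = [3, 42, 65535]
code = pack(state)
```

The point of the sketch is exactly the one above: storage capacity alone is vacuous; what matters is the learning rule the structure induces.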
      
   > My personal bet is that it will have something of Prolog's inference   
   > mechanism, only the associations will be fuzzy and self-learned,   
   > stored in the structure I've described above, and the rules will be   
   > also subject of inference and storing - and that's the way the   
   > intelligent system will learn:   
      
    This can be disproved experimentally by constructing a problem which the   
    inference system cannot solve, and then presenting it to human respondents,   
    who would be able to solve it (i.e. a Turing-test-style comparison).   
      
    > 1) current state -> accidental actions done -> their effect ->   
    > associations update   
    > 2) current state -> observed environmental changes -> their effect   
    > -> associations update   
    > 3) current state + desired state -> inference -> assumed actions   
    > needed -> actual effect of actions done -> associations update   
   [...]   
   > Not that the newborn is powered by fuzzy prolog, but it could work   
   > this way.   
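A crude Python sketch of the three association-update loops quoted above (everything here is invented for illustration: the string states and actions, the fixed learning rate, and the idea of "fuzzy" strengths as plain numbers in [0, 1]):

```python
from collections import defaultdict

LEARNING_RATE = 0.2  # arbitrary choice for the sketch

# assoc[(state, action)][effect] -> strength in [0, 1]
assoc = defaultdict(lambda: defaultdict(float))

def update(state, action, effect):
    """Loops (1)-(3), update half: strengthen (state, action) -> effect
    and decay the competing effects."""
    strengths = assoc[(state, action)]
    for e in list(strengths):
        strengths[e] *= (1.0 - LEARNING_RATE)
    strengths[effect] += LEARNING_RATE

def infer(state, desired_effect):
    """Loop (3), inference half: pick the action most strongly
    associated with the desired effect (None if nothing is known)."""
    candidates = [(strengths[desired_effect], action)
                  for (s, action), strengths in assoc.items()
                  if s == state and desired_effect in strengths]
    return max(candidates)[1] if candidates else None

# Accidental actions and their observed effects (loop 1):
update("hungry", "cry", "fed")
update("hungry", "sleep", "still hungry")
update("hungry", "cry", "fed")

# Desired state -> assumed action (loop 3):
# infer("hungry", "fed") now returns "cry"
```

This is of course the easy half; the hard half is the next point, where "hungry" and "cry" have to be constructed from raw stimuli in the first place.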
      
    A problem with all this is clustering the observed states (stimuli) and   
    actions taken into generalized/hierarchical structures of lesser   
    cardinality. There is a long, long way from 2040x2048 pixels at a 50 Hz   
    frame rate -> "bread" -> "I am being fed."   
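A toy version of that cardinality-reduction problem (all names and thresholds made up for illustration): raw stimuli, here 2-D points standing in for pixel frames, collapse onto a handful of prototypes by a nearest-prototype rule with a novelty threshold.

```python
import math

THRESHOLD = 1.0   # how far a stimulus may lie from every prototype
                  # before it founds a new category (arbitrary value)
prototypes = []   # learned "linguistic variables", one point each

def categorize(stimulus):
    """Return the index of the nearest prototype, creating a new
    prototype if the stimulus is far from all existing ones."""
    if prototypes:
        d, i = min((math.dist(stimulus, p), i)
                   for i, p in enumerate(prototypes))
        if d <= THRESHOLD:
            return i
    prototypes.append(list(stimulus))
    return len(prototypes) - 1

# Two noisy views of the same thing land in one category,
# a very different stimulus founds a second one:
a = categorize((0.0, 0.0))   # -> 0
b = categorize((0.1, 0.2))   # -> 0
c = categorize((5.0, 5.0))   # -> 1
```

Even this toy makes the difficulty visible: the threshold fixes a distance measure in advance, which is exactly the non-universality objection from the start of the thread.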
      
    The construction of such linguistic variables is itself a part of learning,   
    and a subject of AI (as well as of intelligence).   
      
   > The question now is, whether such infering could solve all problems.   
   > As far as I can imagine, it could solve quite complex tasks. What's   
   > your opinion?   
      
   Certainly this cannot solve all problems. A more interesting question is   
   how close the class of solved problems is to "general intelligence." My   
   impression is that it is quite remote.   
      
    Such systems can be analysed once formalized appropriately. There are more   
    and less obvious conditions for such a system to work: for example, proper   
    identification of stimuli and actions, consistency, continuity, etc.   
      
    P.S. If you get a chance to look at AI publications from the 40s-50s, I   
    think you would be surprised how close their core ideas (the "homeostat"   
    etc.) were to yours. It is enjoyable reading. Unfortunately, that was, and   
    IMO still is, the wrong way. However, the idea is so attractive that I   
    often catch myself thinking this way... Who knows...   
      
   --   
   Regards,   
   Dmitry A. Kazakov   
   http://www.dmitry-kazakov.de   
      