

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   


   Message 14 of 1,954   
   JXStern to exa   
   Re: The AI Theory-of-Everything (Was: Ho   
   27 Jul 03 23:54:18   
   
   From: JXSternChangeX2R@gte.net   
      
   [[MOD: Last item in this thread if the drift into philosophy continues]]   
      
   On Sun, 27 Jul 2003 01:16:26 GMT, erayo@bilkent.edu.tr (Eray Ozkural   
   exa) wrote:   
   >> Now I'd say that AI and simulation are related somehow or other, but a   
   >> careful discussion has to separate issues of reference,   
   >> representation, performance, competence, necessity, convention, and a   
   >> few other things.  Twelve faces is probably not enough!   
   >   
   >Greetings,   
   >   
   >I think Jorn has made a mindful observation that understanding   
   >requires one to be able to simulate the particular   
   >event/object/process in question.   
   >   
   >I presume that by simulation he refers not to the physical level, but   
   >a level of abstraction sufficient for the intelligent agent.   
   >   
   >That is a great way of putting it, for simulation carries with it the   
   >merit of prediction which is a highly valued skill for those creatures   
   >that wish to survive. Indeed, common sense knowledge is utterly   
   >useless otherwise! What good is a gigabyte of logical sentences if you   
   >cannot use it to imagine a fine ceramic cup being filled with   
   >cappuccino by a pretty barkeep?   
   >   
   >We humans seem to treat things as black boxes only when we don't need   
   >to understand how they work or when we can't :) Experts obviously   
   >bring together a lot of knowledge that usually includes theories of   
   >what things are and how they work. I wouldn't be a computer expert if   
   >I didn't know how they worked at several levels of abstraction, with   
   >so many kinds of architectures, OSs, programming languages, etc.   
   >   
   >With all due respect, I would view somebody who thinks of computers as   
   >black-boxes as *not* a computer expert. Likewise, the expertise of any   
   >complex process will necessarily involve an understanding of its   
   >innards, its working principles.   
      
   Well, that's why behaviorism and positivism and the like have   
   generally fallen out of favor.   
      
   However, it's not that easy.  Dealing with computers is the perfect   
   example.  When you choose levelOfAbstraction(i), you are treating the   
   levels n < i as black boxes.  That's fine until the system behaves   
   correctly at levels n >= i but has problems at some level n < i.   
      
   >As you know c.ai.philosophy is concerned with philosophical treatment   
   >of AI, subjects Joshua has been mentioning such as logicism or   
   >functionalism. However, I must point out that only 5% of discussions   
   >on that newsgroup have any value! I don't think this is particularly   
   >because of the people there, but because humanity has developed some   
   >really unproductive approaches about philosophy of mind.   
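   The level-of-abstraction point above can be made concrete with a small
   sketch (illustrative only, not from the original thread): treating
   floating-point numbers as ideal real numbers is exactly a black box at
   level n < i, and it holds up fine until the lower level leaks through.

```python
import math

def naive_total(values):
    """Sum values while treating floats as ideal real numbers (the black box)."""
    total = 0.0
    for v in values:
        total += v
    return total

tenths = [0.1] * 10   # "obviously" sums to 1.0 at the abstract level
print(naive_total(tenths) == 1.0)               # False: binary representation leaks through
print(math.isclose(naive_total(tenths), 1.0))   # True once the lower level is acknowledged
```

   Ten additions of 0.1 do not sum to exactly 1.0 in binary floating
   point; the "real number" abstraction is fine at n >= i but has a
   problem one level down.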
      
   Non-sequitur.   
      
   Only 5% of the discussions on virtually any newsgroup have any value,   
   and Sturgeon's Law holds that's going to be true in other venues as   
   well.  Nothing in that observation should be used to judge the value   
   of the topics.  Probably 99.44% of the discussions on the politics   
   newsgroups are repetitive and/or incoherent (not to mention libelous,   
   obscene and/or irrelevant), yet politics is a valuable and difficult   
   subject.  Not to mention that there are some unproductive approaches   
   to politics (fill in your favorites here!).   
      
   >Joshua's summary of philosophical developments is nice indeed, but if   
   >you've noticed it reads like nothing has been accomplished. Start from   
   >logicism, back to logicism.   
      
   Well, this is my contingent observation, not a necessity.  Those parts   
   of philosophy that could possibly contribute to a philosophy of   
   computation (much less AI!) simply need to get their act together!   
   And perhaps they are, slowly.  As the current generation that has   
   grown up with computers filters into the philosophy departments, we   
   will probably see a lot more of it.   
      
   With all due respect, it may be some leading AI practitioners who   
   hamper the process of having philosophy contribute productively to the   
   field, by accepting bad philosophy as putting limits on AI.  The key   
   example of this is in Winograd and Flores' Understanding Computers and   
   Cognition, where they completely concede the point that only a   
   Heideggerian philosophy, a phenomenology of irreducible intelligence,   
   can capture real intelligence, so that AI may be fun and useful, but   
   can never be real.  To me, it seems far more likely that Heideggerian   
   philosophy is pre- and anti-scientific nonsense, if occasionally   
   attractive as poetry, and that the only hope for capturing real   
   intelligence involves computation, in some way, shape, or form.  In   
   any case, I don't much like it when someone surrenders the point.   
      
   >I agree and disagree with that conclusion. Some thinkers have come up   
   >with useless theories while others have made progress.   
      
   Quite.  About any subject one may imagine.   
      
   >At any rate, to attack problems such as consciousness we need some   
   >form of philosophical foundation. Before you can formulate a problem   
   >like that you must be able to think about the necessary and sufficient   
   >conditions for it to come about. That demands thinking and it is   
   >philosophy indeed.   
      
   Exactly!   
      
   >It doesn't seem likely that a simple formulation can help us with a   
   >solution. In AI, what we have discovered instead is even if you take   
   >some seemingly simple cognitive task such as planning it turns out to   
   >be an incredibly sophisticated problem on its own. Such a level of   
   >technical sophistication implies to me that the underlying philosophy   
   >is irrelevant to a large extent.   
      
   Non-sequitur.   
      
   Are you suggesting philosophy is only relevant for simple subjects?   
   Isn't that like saying that atoms are only good for tiny objects?   
      
   >It is relevant, however, in describing the very basis of research.   
   >Getting back to the question of consciousness, if we have a   
   >philosophical theory that consciousness is compositional and it   
   >requires traits X and Y, this might turn out to be a great idea for an   
   >AI researcher who thinks he might get Y right. Or if we had a good   
   >philosophical theory of learning (which we do not have!) we could   
   >enumerate what kinds of learning there are, and formulate new machine   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca