From: abh2n@cobra.cs.Virginia.EDU   
      
   Ashlie Benjamin Hocking wrote:   
   >> My comment about NNs, very few of which mimic the CNS   
   >> very realistically,   
      
   erayo@bilkent.edu.tr (Eray Ozkural exa) wrote:   
   > I wonder which algorithms (not models, that isn't the real question)   
   > mimic the CNS very realistically. It looks like only Hebbian learning   
   > comes close to biological plausibility (like used in Kohonen networks)   
      
   Well, _none_ of them mimic the entire CNS very realistically (unless   
   you limit yourself to squids, etc.) Levy neural networks do an   
   excellent job (in my very _biased_ opinion) of modelling the CA3   
   region of the mammalian hippocampus.
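   The plain Hebbian rule Eray mentions is simple enough to write down.
   Here's a minimal sketch (the weights, learning rate, and activity
   values are purely illustrative; Kohonen's SOM actually uses a
   competitive variant of this, not the raw rule):

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Pure Hebb (no decay term): dw_ij = lr * pre_i * post_j.
    Connections between co-active units strengthen; others don't move."""
    return [[w + lr * pre[i] * post[j]
             for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

w = [[0.0, 0.0],
     [0.0, 0.0]]
pre  = [1.0, 0.0]   # presynaptic activity
post = [1.0, 1.0]   # postsynaptic activity
w = hebbian_update(w, pre, post)
# only connections from the active presynaptic unit strengthen
print(w)  # [[0.1, 0.1], [0.0, 0.0]]
```

   The biological appeal is that the update is purely local: each
   synapse needs only the activity of the two neurons it connects.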
      
   Ashlie Benjamin Hocking wrote:   
   >> IMO, was meant to refer to the intractability of   
   >> understanding NNs that are actually capable of solving interesting   
   >> problems.   
      
   erayo@bilkent.edu.tr (Eray Ozkural exa) wrote:   
   > You sound as if you think ANN learning algorithms are the only methods   
   > that are actually capable of solving interesting problems, but that is   
   > not the case. There are a variety of methods that are capable of such   
   > feats. In fact, Hans Moravec said he didn't use large-scale ANNs
   > because they didn't fare well for his perception algorithms. I can   
   > understand that because training algorithms are too inefficient to   
   > scale-up.   
      
   I'll admit to sounding that way, and, no, I don't really believe   
   that. My reaction was more an over-reaction to the idea (that I've   
   heard too many times) that NNs are not really AI. My attempt was to   
   turn a weakness of NNs (the extreme difficulty of analyzing them) into   
   a strength (through the Parnas argument).   
      
   erayo@bilkent.edu.tr (Eray Ozkural exa) wrote:   
   > Yours is an intriguing thought. What are those interesting problems? I   
   > don't see neural networks handling high-dimensional large-scale   
   > complex machine learning problems any time soon (at least not the   
   > current algorithms). I suppose that would be your definition of
   > "interesting" in the context of machine learning.   
      
   There are several different classes of interesting problems, but I'm   
   sure we agree on the most interesting of those. (E.g., passing the   
   Turing test - which I'm sure you agree won't happen any time soon with   
   _any_ algorithm. "Soon" means within a couple decades - I won't   
   predict for or against passing the Turing test 20 years from now.) A   
   very, very biased class is understanding the human brain. As mentioned   
   earlier, the Levy NNs definitely are already contributing to this. (By   
   making predictions that can and have been verified through   
   neuroscience.)   
      
   As for other interesting problems solved by neural nets, I enjoy the   
   work of Elman with respect to learning parts of speech without direct   
   input that such things even exist. He uses a multi-layer perceptron   
   with "context neurons" and back-prop. The "context neurons" are what   
   help the network to maintain state, and hence, be aware of chronology,   
   etc.   
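   For the curious, the "context neurons" are just a copy of the previous
   hidden state fed back in as extra input at the next time step. A toy
   forward pass (untrained, hand-picked illustrative weights; this is not
   Elman's actual code, just the shape of the idea):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def elman_step(x, context, w_in, w_ctx, w_out):
    """One step of a simple recurrent (Elman) network: each hidden unit
    sees the current input plus the previous hidden state (the context)."""
    hidden = [sigmoid(w_in[h] * x
                      + sum(w_ctx[h][c] * context[c]
                            for c in range(len(context))))
              for h in range(len(w_in))]
    output = sigmoid(sum(w_out[h] * hidden[h] for h in range(len(hidden))))
    return output, hidden  # hidden becomes the next step's context

w_in  = [1.0, -1.0]                  # input -> hidden
w_ctx = [[0.5, 0.0], [0.0, 0.5]]     # context -> hidden
w_out = [1.0, 1.0]                   # hidden -> output

context = [0.0, 0.0]
outputs = []
for x in [1.0, 0.0, 1.0]:            # a short input sequence
    y, context = elman_step(x, context, w_in, w_ctx, w_out)
    outputs.append(y)
# the identical input (1.0) at steps 1 and 3 yields different outputs,
# because the context differs -- that's the network maintaining state
```

   That history-dependence is exactly what lets the network pick up on
   chronology without being told it exists.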
      
   A more trivial, but still interesting, example is that of handwriting   
   recognition - used in today's Palm Pilots. (Something that would not   
   have been considered trivial at all 10-20 years ago, I believe - just   
   to beat a dead horse.)   
      
   >> Sure, one can talk about Ising spin-glasses and minimizing   
   >> energy states, finding minimal entropies, etc., but the _truly_   
   >> interesting neural networks are not currently strongly amenable to   
   >> such analysis.   
   >   
   > This, I believe is a misleading view of neural networks as it seems to   
   > assign ANNs a special status in theoretical analysis. One should not   
   > forget that a MLFF network is basically a general purpose computer.   
   > Then, MLFF learning (with fixed topology) is a search in a *subset* of   
   > the computation space, ie. a (small) function space. Algorithms such   
   > as error back-propagation learning seek the proper function to model   
   > the I/O.   
      
   My point was to argue pretty much the same. I.e., although certain   
   classes of NNs can be analyzed, by the time you've simplified them   
   enough to analyze, you've removed much of what makes them interesting.   
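   The "search in a function space" view is easy to see in miniature:
   plain gradient descent adjusting a single weight to fit data is the
   same idea back-prop applies to many weights at once. A toy sketch
   (data and learning rate are illustrative):

```python
# Fit the one-parameter "network" y = w * x to data by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # target function: y = 2x

w, lr = 0.0, 0.05
for _ in range(100):
    # d/dw of the squared error (1/2)*(w*x - y)^2 is (w*x - y)*x,
    # summed over the training set
    grad = sum((w * x - y) * x for x, y in data)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

   Back-prop is this, plus the chain rule to push the same error signal
   through hidden layers; the function space searched is fixed by the
   chosen topology.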
      
   > Now, it should not come as a surprise why we cannot "see inside" those   
   > neural networks that are learnt. A 100 lines of computer code has the   
   > same properties. The function it corresponds to can be so complex that   
   > it can avoid analysis for years (especially if that is a high level   
   > programming language).   
   >   
   > Every algorithm is a constructive proof. If you think about it, a lot   
   > of the state-of-the-art algorithms (like say, all-to-all shortest path   
   > algorithms) that require very elaborate mathematical understanding,   
   > are just a few 10s of lines. Then, it is not hard to see why it would   
   > be hard to analyze a large enough ANN. But give me a small enough ANN   
   > with numbers on it, and I will tell you what it does. (It's not   
   > different than giving me a piece of machine code, and asking me what   
   > the algorithm does)   
   >   
   >> I will agree that genetic programming and GAs could   
   >> also be said to fall into such a category.   
   >   
   > In fact, all CS falls into that category as indicated above.   
      
   Touché. In fact, the aforementioned Parnas would make exactly that
   argument, I'm sure. However, there are definitely differences here, if   
   only of scale. Most programmers at least have the illusion that they   
   know what their programs are doing and why. Although people working   
   with NNs can give general arguments about why one NN architecture   
   works better than another to solve a particular class of problems,   
   they'll be hard-pressed to explain exactly why a (large) NN fails and   
   another succeeds when the two have the same architecture.   
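   Eray's point about a "small enough ANN with numbers on it" is easy to
   demonstrate: with three threshold units you can read the function
   straight off the weights. A sketch (weights chosen by hand, purely
   illustrative):

```python
def step(z):
    return 1 if z >= 0 else 0

def net(x1, x2):
    h1 = step(x1 + x2 - 0.5)      # fires iff x1 OR x2
    h2 = step(-x1 - x2 + 1.5)     # fires iff NOT (x1 AND x2)
    return step(h1 + h2 - 1.5)    # fires iff both: exactly XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, net(a, b))    # prints the XOR truth table
```

   At three units the analysis takes a minute; at three thousand trained
   units, nobody reads the function off the weights, which is the
   scale difference at issue.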
      
   >> Consider this: Do you consider DFS and BFS to   
   >> be AI?   
   >   
   > DFS and BFS are graph algorithms. They are mentioned in AIMA as   
   > "uninformed search algorithms", because that's what they do:   
   > systematic searching in graphs. They are there to show the   
   > fundamentals of search algorithms, to give a complete picture of the   
   > subject. Therefore, I would say they are part of AI subject, but they   
   > are not unique to AI since they are two fundamental algorithms in   
   > general algorithms research.   
   >   
   > Do you consider a gradient-descent search in a function space to be   
   > AI?   
      
   I think gradient-descent search would fall into the same   
   category, namely "[it is] part of AI subject, but [it is] not
   unique to AI".   
      
   So, in summary, I rescind many of my previous statements, but I stand   
   by my basic premise: Not only are NNs part of AI, they are a   
   fundamental part of AI and will continue to be for the foreseeable
   future.   
      