[   home   |   bbs   |   files   |   messages   ]

Forums before their death by AOL, social media, and spammers... "We can't have nice things."

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 623 of 1,954   
   Greg Heath to Ted Dunning   
   Re: Functional approximation in higher d   
   24 Feb 05 20:04:59   
   
   XPost: comp.ai.neural-nets, sci.math.num-analysis, sci.math   
   From: heath@alumni.brown.edu   
      
   Ted Dunning wrote:   
   > It doesn't really solve the problem,   
      
   I assume you are referring to *Linear* PCA and PLS. In   
   general, they are definitely not a silver bullet.
   However, they are quick, easy to implement, and   
   relatively easy to understand.   
      
   I always try easy methods (e.g., linear/logistic, ...)
   first.   
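   
   (A minimal sketch of this "easy methods first" step, assuming a
   scikit-learn-style toolkit; the data and every name below are
   illustrative, not from this thread:)
   
       # Quick linear baseline before anything fancy.
       import numpy as np
       from sklearn.linear_model import LogisticRegression
       from sklearn.model_selection import train_test_split
   
       rng = np.random.default_rng(0)
       X = rng.normal(size=(1000, 50))              # 50 raw inputs
       y = (X[:, :3].sum(axis=1) > 0).astype(int)   # truth uses only 3 of them
   
       X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
       baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
       print("logistic baseline accuracy:", baseline.score(X_te, y_te))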
      
   Recently I successfully used Linear PCA in the input
   space (not even the combined input-output space!) for a
   561-input, 158-output classification problem. The result
   was an 8-14-158 MLP which fit the bill. I may have done
   better with combined space PCA, PLS, or nonlinear   
   techniques. However, the current result was sufficient   
   for my purposes.   
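   
   (A sketch of the pipeline described above: linear PCA on the input
   space only, followed by a small MLP. The shapes mirror the post's
   numbers, 561 inputs -> 8 components -> 14 hidden units -> 158
   classes, but the data here is random noise, so the fit itself is
   meaningless. The scikit-learn names are assumptions, not what the
   original work used, and the 158 outputs are treated as multiclass
   although the original may have been multi-label:)
   
       import numpy as np
       from sklearn.pipeline import make_pipeline
       from sklearn.decomposition import PCA
       from sklearn.neural_network import MLPClassifier
   
       rng = np.random.default_rng(0)
       X = rng.normal(size=(2000, 561))      # 561 raw inputs
       y = rng.integers(0, 158, size=2000)   # 158 target classes
   
       # 561 -> 8 principal components, then an 8-14-158 network.
       net = make_pipeline(
           PCA(n_components=8),
           MLPClassifier(hidden_layer_sizes=(14,), max_iter=300),
       )
       net.fit(X, y)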
      
   When time permits, I plan to go back and see what   
   additional insights the more sophisticated methods   
   will reveal.   
      
   Hope this helps.   
      
   Greg   
      
   > but support vector methods can   
   > handle thousands of inputs with a feasible number of training examples.   
   >   
   > Note, however, that this is dependent on the problem actually being a
   > low-dimensional one that just happens to be phrased in high-dimensional
   > terms.  In addition, the low-dimensional nature of the problem has to
   > fit the assumptions of the method.
   >   
   > Bayesian methods can be essentially equivalent to SVM and thus can pull   
   > the same sorts of tricks.   
   >   
   > Essentially all of these combine a presumption about the simplicity of   
   > the desired model with a measure of error.  The presumption of   
   > simplicity is converted into a penalty for complex models and this is   
   > used as a regularizer.  Bayesians think of this penalty as a prior   
   > expectation, SVMers think of it as a performance bound on unseen data.   
   > It works either way.   
   >   
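   
   (A sketch of the quoted point, assuming scikit-learn. A linear SVM
   and an L2-penalized logistic model, which amounts to MAP estimation
   under a Gaussian prior since the L2 penalty is the negative log of
   that prior, both handle a 2,000-input problem from only 200
   examples: the target really is low-dimensional, and the complexity
   penalty does the rest. A linear SVM stands in for kernel methods
   here for brevity; all names and numbers are illustrative:)
   
       import numpy as np
       from sklearn.svm import LinearSVC
       from sklearn.linear_model import LogisticRegression
   
       rng = np.random.default_rng(0)
       X = rng.normal(size=(200, 2000))          # far more inputs than examples
       y = (X[:, 0] + X[:, 1] > 0).astype(int)   # low-dimensional truth
   
       svm = LinearSVC(C=0.1).fit(X, y)             # hinge loss + ||w||^2 penalty
       bayes = LogisticRegression(C=0.1).fit(X, y)  # log loss + same penalty
   
       # Evaluate on unseen data, the bound SVMers care about:
       X_te = rng.normal(size=(500, 2000))
       y_te = (X_te[:, 0] + X_te[:, 1] > 0).astype(int)
       print("SVM held-out accuracy:", svm.score(X_te, y_te))
       print("MAP held-out accuracy:", bayes.score(X_te, y_te))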
      
   [ comp.ai is moderated.  To submit, just post and be patient, or if ]   
   [ that fails mail your article to , and ]   
   [ ask your news administrator to fix the problems with your system. ]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994, bbs@darkrealms.ca