home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,026 of 1,954   
   Ted Dunning to All   
   Re: Transduction with regression   
   03 May 06 01:56:07   
   
   From: ted.dunning@gmail.com   
      
   Actually, the MASS book is about software, but it has a good intro to   
   resistant estimators and I believe it has good links to the literature.
   I may have pointed you to the software companion site, but you can get
   to the book from there.
      
   Also, the BUGS system is all about Bayesian inference of various sorts,   
   but it can definitely be used for regression.  The idea is that you   
   have a model with unknown parameters that have prior distributions and   
   computing the posterior is either inference or regression.  In the   
   simplest case of single variable linear regression, you might have   
      
      y_i = a x_i + b + noise   
      
   where a and b have relatively uninformative priors and the noise is
   normal with zero mean and unknown variance.  The variance of the noise
   would then also have a relatively uninformative prior.  Given
   observations of x_i and y_i, you can find a posterior distribution for
   a, b and the variance of the noise.  This is regression, plain and
   simple.  In this example, knowing the value of some x without a
   corresponding y (or vice versa) doesn't tell you very much, but if
   there were multiple x's on the right-hand side and you were told the
   values of four of them plus the corresponding value of y, then you
   begin to have something.  Likewise, if you assume that the x's are
   sampled from a lower-dimensional sub-space, then having x's without
   y's can inform you about that sub-space and thus constrain the
   regression.
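   The single-variable model above can be sketched numerically without
   BUGS.  What follows is a toy grid approximation in Python (not the
   BUGS model itself, and the simulated data and grid ranges are my own);
   for simplicity the noise variance is assumed known here, whereas the
   full model would give it its own prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from y_i = a*x_i + b + noise with a=2, b=1, sigma=0.5
x = rng.uniform(-1.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, 50)

# Flat ("uninformative") priors on a and b over a grid; sigma held fixed.
a_grid = np.linspace(0.0, 4.0, 201)
b_grid = np.linspace(-1.0, 3.0, 201)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

sigma = 0.5
# Log-likelihood of each (a, b) pair under the normal noise model
resid = y[None, None, :] - (A[..., None] * x[None, None, :] + B[..., None])
loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=-1)

# Unnormalized posterior ∝ likelihood (flat prior), then normalize
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Posterior means land near the true coefficients: regression, plain and simple
a_mean = np.sum(A * post)
b_mean = np.sum(B * post)
print(a_mean, b_mean)
```

   BUGS would instead draw samples from the joint posterior (including
   the variance) by Gibbs sampling, but the grid makes the point that the
   posterior over the coefficients *is* the regression.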
      
   The BUGS sites have good links to the literature on graphical models,
   which are the currently fashionable idiom for expressing these sorts
   of things.
      
   The MacKay site talks a lot about Bayesian inference.  Neural networks
   are simply a form of regression, and he describes some interesting
   approximation approaches for using neural networks that allow one to
   approximate the posterior distribution of the coefficients.
      
   I know that these links don't directly bear on transduction, but I do
   think that they would give you an interesting alternative nomenclature
   that would open up alternative sets of literature (written by all the
   guys who don't use the word transduction).  They would also provide
   you with some opportunities to break new ground in your particular
   problem area.
      
   [ comp.ai is moderated.  To submit, just post and be patient, or if ]   
   [ that fails mail your article to , and ]   
   [ ask your news administrator to fix the problems with your system. ]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca