Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai    |    Awaiting the gospel from Sarah Connor    |    1,954 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 1,800 of 1,954    |
|    Ted Dunning to David    |
|    Re: Minimum Description Length Principle    |
|    29 Aug 08 11:07:48    |
From: ted.dunning@gmail.com

In some sense, the hypothesis was encoded in bits. Think about where you heard about it.

In a stronger sense, however, it is much more useful mathematically to view minimum description length techniques as maximum posterior likelihood estimators. As such, they are regularized versions of maximum likelihood estimators. These have advantages in that certain singular conditions can be avoided (mixtures of multivariate Gaussians are a classic example), but they have the common problems of all methods that produce a single estimate of the model as opposed to estimating the posterior distribution over model parameters (which allows better inference).

On Aug 23, 5:15 am, David |
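[Editor's note: the MDL-as-MAP connection above can be sketched numerically. This is a minimal illustration, not from the original post: it assumes a unit-variance Gaussian likelihood for the data and a zero-mean Gaussian prior N(0, tau^2) on a single location parameter `theta`; all names (`description_length`, `map_est`, `tau2`) are made up for the example. Minimizing total description length (-log likelihood - log prior, in nats) then coincides with the MAP estimate, which is the MLE shrunk toward the prior mean.]

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=20)
tau2 = 1.0  # prior variance; an assumption for illustration


def description_length(theta, data, tau2):
    # Total codelength in nats: -log P(data | theta) - log P(theta),
    # dropping theta-independent constants.
    nll = 0.5 * np.sum((data - theta) ** 2)   # Gaussian likelihood term
    nlp = 0.5 * theta ** 2 / tau2             # Gaussian prior term
    return nll + nlp


mle = data.mean()                             # unregularized maximum likelihood
n = len(data)
map_est = data.sum() / (n + 1.0 / tau2)       # closed-form MAP minimizer

# Brute-force minimization of the description length recovers the MAP
# estimate, which is shrunk toward the prior mean (0) relative to the MLE.
grid = np.linspace(-1.0, 4.0, 2001)
best = grid[np.argmin([description_length(t, data, tau2) for t in grid])]
```

The shrinkage toward the prior is exactly the regularization the message describes: it is what keeps MAP/MDL estimates away from the singular configurations (e.g. a mixture component collapsing onto one data point) that plain maximum likelihood can fall into.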
(c) 1994, bbs@darkrealms.ca