[   home   |   bbs   |   files   |   messages   ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,191 of 1,954   
   Ted Dunning to Michael   
   Re: Graphical Model evaluation   
   29 Sep 06 00:05:22   
   
   From: ted.dunning@gmail.com   
      
   Michael wrote:   
   > Suppose you are building a graphical model (Bayesian network).  After   
   > you have picked a topology and trained the network, you want to revise   
   > the network - make minor changes to the topology by possibly adding a   
   > new variable, deleting an edge, etc.   
   >   
   > What techniques are typically used to determine if a small change is   
   > worthwhile?  I've read some articles that discuss "quality measures";   
   > you accept the change if the quality measure increases.  Intuitively,   
   > it seems that there should be some way to consider the marginal   
   > decrease in entropy or gain in likelihood.   
   >   
   > Could anyone point me in the right direction?   
   >   
   > All the best,   
   > -Michael   
   >   
      
   This is a pretty difficult problem.  The fact that you are using graphical
   models helps somewhat since you can sample from the posterior   
   distribution of all of the parameters of all of the variants of the   
   model.  This allows you to marginalize out everything except the   
   topology of the model and thus you can do a direct comparison in terms   
   of probability.  You can also preserve all of the models and generate   
   predictions based on mixtures of the alternatives.  This is essentially   
   just Bayesian hypothesis testing.  I don't know the current state of   
   the literature very well, but MacKay has some good discussion on model
   selection in his book   
   (http://www.inference.phy.cam.ac.uk/mackay/itprnn/book.html) and I   
   think that Michael Jordan has something on this in his book on learning   
   and graphical models.  I think I remember a very nice introduction to   
   Bayesian inference from somebody at Microsoft Research, but I can't   
   place it.   
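   [A small illustration of the direct comparison Ted describes, added for
   clarity and not part of the original post.  For two binary variables and
   Beta(1,1) priors the marginal likelihood of each topology has a closed
   form, so "accept the edge if the evidence goes up" can be computed
   exactly.  All names below are illustrative.]

```python
# Sketch: comparing two candidate topologies for a pair of binary
# variables by exact marginal likelihood (Beta-Bernoulli conjugacy).
# Model A: X and Y independent (no edge).  Model B: X -> Y.
from math import lgamma

def log_evidence(ones, zeros):
    # log marginal likelihood of Bernoulli counts under a Beta(1,1)
    # prior: log B(ones + 1, zeros + 1), since B(1, 1) = 1
    return lgamma(ones + 1) + lgamma(zeros + 1) - lgamma(ones + zeros + 2)

# toy data: y mostly copies x, so the X -> Y edge should win
data = [(0, 0)] * 9 + [(0, 1)] + [(1, 0)] + [(1, 1)] * 9

x_ones = sum(x for x, _ in data)
x_zeros = len(data) - x_ones
y_ones = sum(y for _, y in data)
y_zeros = len(data) - y_ones

# Model A: P(X) P(Y) -- parameters of X and Y marginalized separately
log_ev_a = log_evidence(x_ones, x_zeros) + log_evidence(y_ones, y_zeros)

# Model B: P(X) P(Y | X) -- one Bernoulli per parent state
log_ev_b = log_evidence(x_ones, x_zeros)
for parent in (0, 1):
    ones = sum(1 for x, y in data if x == parent and y == 1)
    zeros = sum(1 for x, y in data if x == parent and y == 0)
    log_ev_b += log_evidence(ones, zeros)

print("log evidence, no edge :", log_ev_a)
print("log evidence, X -> Y  :", log_ev_b)  # higher => accept the edge
```

   [The difference of the two log evidences is the log Bayes factor; the
   mixture-of-alternatives prediction Ted mentions would weight each model
   by its (normalized) evidence.]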
      
   It quickly becomes intractable to do this in general because the number
   of possible graphical models grows exponentially with the number of
   variables.  MCMC methods are neat and can give you approximate answers
   quickly, but there are still limits.
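   [An illustrative sketch, not from the original post: a Metropolis
   sampler over structures for three binary variables, scored by the same
   Beta(1,1) evidence.  Restricting edges to i -> j with i < j keeps
   every proposal acyclic; real structure MCMC needs a proper DAG check.]

```python
# Sketch: Metropolis sampling over edge sets, accepting a single-edge
# toggle with probability min(1, evidence ratio).
import random
from math import lgamma, exp

def log_evidence(ones, zeros):
    # Beta(1,1) marginal likelihood of Bernoulli counts, in log space
    return lgamma(ones + 1) + lgamma(zeros + 1) - lgamma(ones + zeros + 2)

CANDIDATE_EDGES = [(0, 1), (0, 2), (1, 2)]  # i -> j with i < j: always a DAG

def log_score(edges, data):
    # sum of per-node evidences, one Bernoulli per parent configuration
    total = 0.0
    for node in range(3):
        parents = [i for i, j in edges if j == node]
        counts = {}
        for row in data:
            key = tuple(row[p] for p in parents)
            ones, zeros = counts.get(key, (0, 0))
            counts[key] = (ones + 1, zeros) if row[node] else (ones, zeros + 1)
        total += sum(log_evidence(o, z) for o, z in counts.values())
    return total

random.seed(0)
# variable 1 copies variable 0; variable 2 is independent noise
data = [(i % 2, i % 2, random.randint(0, 1)) for i in range(30)]

edges = frozenset()          # start from the empty graph
samples = []
for step in range(1000):
    flip = random.choice(CANDIDATE_EDGES)
    proposal = edges ^ {flip}          # toggle one edge
    ratio = log_score(proposal, data) - log_score(edges, data)
    if ratio >= 0 or random.random() < exp(ratio):
        edges = proposal
    if step >= 100:                    # discard burn-in
        samples.append(edges)

freq = sum(1 for s in samples if (0, 1) in s) / len(samples)
print("posterior frequency of edge 0 -> 1:", freq)
```

   [The chain spends almost all its time on structures containing the
   true 0 -> 1 edge, which is the "preserve all of the models" posterior
   Ted describes, in miniature.]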
      
   [ comp.ai is moderated ... your article may take a while to appear. ]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca