From: xgeorgiou@pathfinder.gr   
      
   "Xiao Xiong" wrote in news:452373b8$@news.unimelb.edu.au:   
      
   > I have a general question to ask:   
   >   
   > There is a set of supervised training data belonging to two classes,   
   > A and B. Each training sample is a multi-dimensional vector, and the   
   > covariance matrix of the features is not diagonal, i.e. the feature   
   > elements are correlated. Assume the underlying pdf is completely known.   
   > We can use two Gaussian mixture models (GMMs), one per class, and   
   > build a Bayesian classifier.   
   >   
   > My question is:   
   >   
   > Is it possible to use some kind of feature transformation method, such   
   > as Linear Discriminant Analysis (LDA), to project the features into a   
   > new space with better discriminative power?   
   >   
   >   
      
      
   See the KLT, PCA and ICA transformations. By the way, your assumption   
   about using a GMM depends heavily on how "Gaussian-like" your pdf is,   
   and the mixture will most likely need more than two Gaussian components   
   (at least in most cases of real experimental data).   
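   As an illustration of the LDA idea from the original question, here is a   
   minimal numpy-only sketch of Fisher's two-class discriminant on synthetic   
   correlated-Gaussian data. All data, means, and covariances here are made   
   up for the example; real data would of course replace them.   

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data with correlated (non-diagonal covariance) features
cov = np.array([[2.0, 1.2],
                [1.2, 1.0]])
A = rng.multivariate_normal([0.0, 0.0], cov, size=200)
B = rng.multivariate_normal([2.0, 1.0], cov, size=200)

# Fisher's LDA direction for two classes: w = Sw^{-1} (mu_B - mu_A),
# where Sw is the within-class scatter (sum of the class covariances)
mu_A, mu_B = A.mean(axis=0), B.mean(axis=0)
Sw = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)
w = np.linalg.solve(Sw, mu_B - mu_A)

# Project onto the discriminant direction and classify with the
# midpoint of the projected class means as a simple threshold
threshold = 0.5 * ((A @ w).mean() + (B @ w).mean())
correct = ((A @ w) < threshold).sum() + ((B @ w) >= threshold).sum()
accuracy = correct / (len(A) + len(B))
print("training accuracy:", round(accuracy, 2))
```

   The projection collapses the correlated 2-D features onto the single   
   direction that maximizes between-class separation relative to   
   within-class scatter, which is exactly the "better discriminative   
   power" the question asks about.   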
      
      
   --   
   Harris   
      
   [ comp.ai is moderated ... your article may take a while to appear. ]   
      