home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,313 of 1,954   
   Kresimir Delac to All   
   connection between preselecting parts of   
   24 Feb 07 23:08:19   
   
   XPost: sci.image.processing, comp.compression   
   From: kdelac@gmail.com   
      
   hi all,   
      
   i am from the image processing / pattern recognition field and i have   
   stumbled upon an interesting mathematical problem / question. maybe it   
   will be trivial to you guys, but all the better :))   
      
   so, the problem is related to eigenvector decomposition, i.e. the
   Karhunen-Loeve transform, or as we call it - PCA (Principal
   Component Analysis). Application area - Face Recognition. Images
   are rearranged into vectors that represent points in n-dimensional
   space (n being the number of pixels in each image).
      
   The idea is that the eigenvectors of the covariance matrix of the
   set of images (the set of points in n-D space) will decorrelate the
   data. The first "most important" eigenvector (the one associated
   with the largest eigenvalue) captures the direction with the
   largest variance... well, you know the rest. You can then keep
   only a few of the eigenvectors with the largest eigenvalues and
   project all the images onto that new few-D space (let's call it
   the k-D space, with k << n). By keeping the vectors with the
   largest eigenvalues you keep a large portion of the energy of your
   data, or, consequently, most of the information. Eigenvectors are
   linear combinations of the original dimensions, which makes PCA a
   simple rotation/stretch procedure. Similarity of two images (this
   is the basic face recognition idea) can then be determined by
   measuring the distance (e.g. Euclidean) between the two
   projections in the lower dimensional k-D space instead of doing it
   in the high n-D space.
      
   Now for the problem :) :   
      
   In my little experiment I used preprocessed images (so the inputs
   to the covariance matrix and the rest of the PCA are not pixels
   anymore, but some other coefficients - but this is irrelevant). My
   "images", or to be precise, my matrices of coefficients, are of
   size 128 x 128, rearranged into 1 x 16384 vectors (so n = 16384,
   and the original space is n-D). I performed PCA on those "images"
   and then performed a simple face recognition in the resulting
   k-space (k << n).
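   The flattening step described above can be sketched as follows;
   the image count and variable names here are assumptions for
   illustration, not from the post:

   ```python
   import numpy as np

   # stack m coefficient matrices (128 x 128 each) into an
   # m x 16384 data matrix, one flattened "image" per row
   m = 5  # assumed number of images, for illustration
   images = [np.zeros((128, 128)) for _ in range(m)]
   X = np.stack([img.reshape(-1) for img in images])
   ```

   Each row of X is then one point in the n-D space (n = 16384) that
   the PCA operates on.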



(c) 1994,  bbs@darkrealms.ca