From: mailbox@dmitry-kazakov.de   
      
   On Wed, 30 Jul 2003 16:58:51 GMT, "project2501"   
    wrote:   
      
>having done a little quick thinking on this this afternoon, i came up with
>the following:
   >   
   >x1 = {feature1: A1 + B1 + C1,   
   > feature2: D1 + E1}   
   >   
   >x2 = {feature1: A2 + B2 + C2,   
   > feature2: D2 + E2} .... the same as the first post in this thread.   
   >   
>now the simple proposal was to use the distances between the corresponding
>memberships for the clustering:
> d(A1,A2).
>this seems logical and will no doubt give results. however, in order to
>overcome some of the previously discussed "lost ordering information" i
>propose the following:
   >   
>(note, the ordering is A < B < C on the features)
      
   Then you probably have separate features/coordinates for the sets A,   
   B, C.   
      
If you have 1..N coordinates with some distances defined on them, then
you could define a distance for vectors:
      
   d(X,Y)= d(X1,Y1) * W1 + d(X2,Y2) * W2 + d(X3,Y3) * W3 + ...   
      
Here Wi are positive weights. To make the coordinates 1..3 ordered as
1<<2<<3, define their weights as:
      
   W1 = 1;   
W2 = the diameter of the domain set of coordinate 1 (max distance over its elements)
   W3 = W2 * diameter of 2   
   ...   
   Wi+1 = Wi * diameter of i   
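As a sketch of the weighting scheme above (assuming each coordinate comes
with its own distance function and a known domain diameter; the function
names here are illustrative, not from any library):

```python
# Sketch of the weighted vector distance described above.
# Assumes per-coordinate distance functions d_i and known domain
# diameters (max distance over each coordinate's domain).

def make_weights(diameters):
    """W1 = 1; W(i+1) = Wi * diameter of coordinate i."""
    weights = [1.0]
    for diam in diameters[:-1]:
        weights.append(weights[-1] * diam)
    return weights

def vector_distance(x, y, coord_dists, weights):
    """d(X,Y) = d1(X1,Y1)*W1 + d2(X2,Y2)*W2 + ..."""
    return sum(d(a, b) * w
               for d, a, b, w in zip(coord_dists, x, y, weights))
```

For instance, with three coordinates on [0, 10] and absolute difference as
each per-coordinate distance, the weights come out as 1, 10, 100, so
differences in the later coordinates tend to dominate the total.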
      
   ... and this is not yet fuzzy. (:-))   
      
P.S. The distance you define should reflect the application domain, i.e.
the real thing you are studying/modelling.
      
   ---   
   Regards,   
   Dmitry Kazakov   
   www.dmitry-kazakov.de   
      