
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.fuzzy      Fuzzy logic... all warm and fuzzy-like      1,275 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 511 of 1,275   
   Ted Dunning to All   
   Re: Detecting Anomalies of events   
   22 Sep 05 08:53:03   
   
   XPost: comp.ai, comp.ai.neural-nets, comp.databases   
   XPost: sci.math   
   From: ted.dunning@gmail.com   
      
   The essence of the problem is that you have a classification problem   
   with training examples for all but the class of interest.   
      
   Viewed this way, the problem reduces to building probability models for   
   known cases and the unknown case.  Since you have little or no data for   
   the unknown (anomalous) case, you have to make some strong assumptions.   
    Ultimately, you may be able to collect putative data from the   
   anomalous case, but you really can't depend on that being possible.  At   
   most, you only get enough data to slightly constrain the posterior   
   distribution of the anomalous case.   
      
   Take for example the ultimately simple case of normally distributed   
   events with normally distributed anomalies.  You know (in this example)   
   that the probability of an anomalous event is less than 1%.   
      
   Let us assume that from domain knowledge, you know that the means of
   the distributions are probably less than 10 and the standard
   deviations are less than about 10.  It is pretty easy to come up with
   conjugate prior distributions that encode this knowledge.
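   For a Gaussian with unknown mean and variance, the standard conjugate
   prior is Normal-Inverse-Gamma.  A minimal sketch of encoding this kind
   of domain knowledge (the hyperparameter values here are illustrative
   choices, not taken from the post):

   ```python
   # Hypothetical Normal-Inverse-Gamma hyperparameters loosely encoding
   # "mean probably below 10, standard deviation below about 10".
   # mu0: prior guess for the mean; kappa0: pseudo-count of confidence
   # alpha0, beta0: Inverse-Gamma shape/scale for the variance
   MU0, KAPPA0 = 5.0, 1.0
   ALPHA0, BETA0 = 2.0, 50.0  # prior mean of variance = BETA0/(ALPHA0-1) = 50

   def posterior_params(data, mu0, kappa0, alpha0, beta0):
       """Standard conjugate update for a Gaussian with unknown
       mean and variance, given observed data."""
       n = len(data)
       if n == 0:
           return mu0, kappa0, alpha0, beta0
       xbar = sum(data) / n
       ss = sum((x - xbar) ** 2 for x in data)  # sum of squared deviations
       kappa_n = kappa0 + n
       mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
       alpha_n = alpha0 + n / 2
       beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n)
       return mu_n, kappa_n, alpha_n, beta_n
   ```

   With data, the posterior parameters tighten around the observed mean
   and variance; with no data (the anomalous class), they stay at the
   prior values, which is exactly the situation described above.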
      
   If you take some number of training examples, then you can get a pretty   
   good posterior distribution for the non-anomalous case since you know   
   that at most about 1% of the training examples will be anomalies.  In   
   fact, you can train a mixed Gaussian model on your data to get an even   
   better model.  The model for the anomalous case will be largely   
   undetermined by the data and thus will be dominated by the prior   
   distribution.   
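   A tiny EM fit for such a two-component mixture might look like the
   following sketch (the component seeding and the 1% weight are
   assumptions carried over from the example above, not code from the
   post):

   ```python
   import math

   def em_two_gaussians(data, iters=50):
       """Minimal EM for a two-component Gaussian mixture.  Component 0
       is seeded near the bulk ("normal") data, component 1 near the
       largest value as the rare anomalous class."""
       xs = sorted(data)
       mu = [xs[len(xs) // 2], xs[-1]]   # crude init: median vs. extreme
       var = [1.0, 25.0]
       pi = [0.99, 0.01]                  # stated prior: anomalies < 1%
       for _ in range(iters):
           # E-step: responsibility of each component for each point
           resp = []
           for x in data:
               p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
               z = p[0] + p[1]
               resp.append([p[0] / z, p[1] / z])
           # M-step: re-estimate weights, means, variances
           for k in range(2):
               nk = sum(r[k] for r in resp)
               pi[k] = nk / len(data)
               mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
               var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                      for r, x in zip(resp, data)) / nk)
       return pi, mu, var
   ```

   On data that is mostly bulk with a few outliers, the bulk component
   is pinned down well while the rare component's parameters remain
   driven largely by how it was seeded, mirroring the point that the
   anomalous model stays dominated by the prior.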
      
   In this framework, it is pretty easy to get a posterior probability
   that each new data point is an anomaly (especially given that each
   point must belong to one of the two classes) by integrating over all
   possible parameter values.  Obviously, these posterior estimates
   depend pretty critically on the prior distribution of the anomalies.
   The wider you choose the prior to be, the more extraordinary a point
   must be before you consider it an anomaly.  The good news is that
   with a Gaussian the tails drop off so sharply that you will do pretty
   well once you have seen a few anomalies.
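   With point estimates for the two Gaussians in hand, the posterior
   anomaly probability for a new point follows from Bayes' rule.  This
   plug-in version is a simplification: it skips the full integration
   over parameter values described above, and all the parameter values
   shown are illustrative:

   ```python
   import math

   def normal_pdf(x, mu, sigma):
       """Density of a Gaussian with mean mu and std dev sigma at x."""
       return (math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
               / (sigma * math.sqrt(2 * math.pi)))

   def p_anomaly(x, mu_n, sigma_n, mu_a, sigma_a, prior_anomaly=0.01):
       """Posterior probability that x came from the anomalous Gaussian,
       using the stated <1% prior and plug-in parameter estimates."""
       pa = prior_anomaly * normal_pdf(x, mu_a, sigma_a)
       pn = (1 - prior_anomaly) * normal_pdf(x, mu_n, sigma_n)
       return pa / (pa + pn)
   ```

   Widening sigma_a spreads the anomalous density thinner, so a point
   has to sit further into the tail before the anomalous component wins;
   that is the prior sensitivity noted above.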
      
   A previous poster claimed to use SOM's for this sort of problem.
   SOM's may be a mildly interesting way to build complex probability
   estimates, but I have found that actually analyzing your problem more
   carefully generally leads to better solutions.
      
   I would be very interested to hear of specific examples of production   
   systems that actually use SOM's for fraud detection.  None of the
   systems that I have designed for that task, nor any of the ones that
   I am familiar with, actually uses SOM's for fraud detection.  Anomaly
   detection is an important step in these systems because fraud is so   
   commonly under-reported in the real world, but I haven't seen any SOM's   
   used in anger in these systems.   
      
   I would love to hear otherwise.   
      
   [ comp.ai is moderated.  To submit, just post and be patient, or if ]   
   [ that fails mail your article to , and ]   
   [ ask your news administrator to fix the problems with your system. ]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca