
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.fuzzy      Fuzzy logic... all warm and fuzzy-like      1,275 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 634 of 1,275   
   Dmitry A. Kazakov to makc.the.great@gmail.com   
   Re: convergence question   
   23 Jun 06 15:05:19   
   
   From: mailbox@dmitry-kazakov.de   
      
   On 23 Jun 2006 04:01:47 -0700, makc.the.great@gmail.com wrote:   
      
   > in P(t1), Q(t2) => R(t0), what's t0(t1, t2)?   
      
   Time.   
      
   You have P at t1, rather than P for all times. If t0 is the time now, then
   your trust in P(t1) would be inversely proportional to t0-t1. If all facts
   devalue at the same rate (that's another assumption, which might be
   wrong), then you could create a kind of "temporal" measure of truth by
   adding a time dimension to the truth values. For example, by using
   (possibility, necessity, time) instead of just (possibility, necessity).
   You could then try to define lattice operations on such composite objects.
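   A minimal sketch of such a composite value (the names, the exponential
   decay law, and its rate are illustrative assumptions on my part, not
   anything fixed by possibility theory):

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalTruth:
    """A (possibility, necessity, time) triple; invariant 0 <= nec <= pos <= 1."""
    pos: float
    nec: float
    t: float    # when the fact was observed

    def at(self, now, lam=1.0):
        """Devalue the fact to the present moment: as it ages, pos -> 1 and
        nec -> 0, i.e. the value drifts toward total ignorance (1, 0).
        lam is a hypothetical decay rate per unit time."""
        k = math.exp(-lam * max(0.0, now - self.t))   # freshness in (0, 1]
        return TemporalTruth(1.0 - k * (1.0 - self.pos), k * self.nec, now)

# The crisp anchors in this encoding:
TRUE_AT      = lambda t: TemporalTruth(1.0, 1.0, t)   # certainly true
FALSE_AT     = lambda t: TemporalTruth(0.0, 0.0, t)   # certainly false
UNCERTAIN_AT = lambda t: TemporalTruth(1.0, 0.0, t)   # total ignorance
```

   With lam = 1 per year, FALSE_AT(-1.0).at(0.0) comes out with pos ~ 0.63
   and nec = 0: a year-old "false" only rules out P to degree ~ 0.37.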
      
   In particular, composition operations, like consensus (+) and gullibility
   (*). Normally, consensus: true + false = uncertain; gullibility: true *
   false = contradictory. But with the time aspect it could become
      
   (true, now) + (false, year ago) = (almost true and slightly uncertain about   
   false, now)   
      
   (true, now) * (false, year ago) = (almost true and slightly reserved to   
   false, now)   
      
   I would assume that false and true both change to uncertain with time. In
   terms of possibility it would mean that pos(P) ---> 1, and so nec(P) ---> 0.
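   One hypothetical way to realize the two examples above on (pos, nec, t)
   triples: devalue both operands to a common "now" first, then take
   gullibility as the conjunctive combination (min of possibilities, max of
   necessities) and consensus as a freshness-weighted average, so that fresh
   facts outweigh stale ones. Both choices are my own sketch, not a settled
   semantics:

```python
import math

LAM = 1.0   # hypothetical decay rate: how fast facts devalue per unit time

def devalue(v, now):
    """Age a (pos, nec, t) triple toward ignorance: pos -> 1, nec -> 0.
    Also returns the freshness weight k in (0, 1]."""
    pos, nec, t = v
    k = math.exp(-LAM * max(0.0, now - t))
    return 1.0 - k * (1.0 - pos), k * nec, k

def gullibility(a, b, now):
    """'*': conjunctive combination of the devalued values. A result with
    nec > pos signals (partial) contradiction, so the crisp case
    true * false still comes out fully contradictory."""
    pa, na, _ = devalue(a, now)
    pb, nb, _ = devalue(b, now)
    return min(pa, pb), max(na, nb)

def consensus(a, b, now):
    """'+': freshness-weighted average, so fresher facts dominate.
    (Only approximates the crisp rule true + false = uncertain.)"""
    pa, na, ka = devalue(a, now)
    pb, nb, kb = devalue(b, now)
    w = ka + kb
    return (ka * pa + kb * pb) / w, (ka * na + kb * nb) / w

true_now       = (1.0, 1.0,  0.0)   # certainly true, observed now
false_year_ago = (0.0, 0.0, -1.0)   # certainly false, a year old
```

   With these choices, consensus(true_now, false_year_ago, 0.0) gives about
   (0.90, 0.73) -- mostly true, mildly uncertain -- while gullibility gives
   about (0.63, 1.0), i.e. nec > pos: a slight residual contradiction left
   over from the stale false.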
      
   > Dmitry A. Kazakov wrote:   
   >> On 21 Jun 2006 06:46:35 -0700, makc.the.great@gmail.com wrote:   
   >>   
   >>> let's say we have some reasoning program that constantly draws out some   
   >>> conclusions based on results of its own previous conclusions. what   
   >>> choice of functions will guarantee that resulting values will not "flat
   >>> out" to 0 or "squeeze up" to 1 just because they have been put through   
   >>> too many iterations?   
   >>   
   >> This is an interesting problem. I think that there is no such function,
   >> in the following sense: the model is inadequate. The facts from which
   >> the conclusions are inferred are not stable, so each new iteration can
   >> potentially add contradictions to the knowledge base. Inference can
   >> handle contradictions either by consensus or by gullibility operations,
   >> but in the end it will decline to either "dunno" or "rubbish" anyway. I
   >> think that the only way out is to change the model, i.e. to
   >> give a time aspect to the knowledge. So that the inference would deal with   
   >> P(t1), Q(t2) => R(t0) rather than just P, Q => R. The model should then   
   >> describe how, say, delta t = t0 - t1 influences confidence in P. The
   >> goal would be to let fresher facts override old ones when they
   >> contradict each other, and corroborate them otherwise.
      
   --   
   Regards,   
   Dmitry A. Kazakov   
   http://www.dmitry-kazakov.de   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca