home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,918 of 1,954   
   ScottFrye to Raeldor   
   Re: Feed Forward Back Prop Network to So   
   03 Mar 11 02:29:22   
   
   From: scottf3095@aol.com   
      
   On Feb 15, 10:26 pm, Raeldor  wrote:   
   > Hi All,   
   >   
   > I'm trying to get into AI and am reading the AI Techniques for Game   
   > Programming book, which is a great read.  I have built a feed forward   
   > network with 2 inputs, 2 hidden neurons in 1 layer and one output   
   > neuron and am using the back propagation rule in the book (based on   
   > Werbos) to calculate the backprop.  However, I can't get it to   
   > converge for the XOR training data set.   
   >   
   > I plotted out on paper that the XOR can be solved using 2 neurons in a   
   > hidden layer, but I wonder if that assumption was incorrect.  Should   
   > this be solvable using 2 hidden neurons?   
   >   
   > Thanks   
   > Ray   
   >   
      
   I recently implemented an XOR neural net from reading some machine   
   learning book (can't remember off the top of my head which one).  I   
   had problems getting the network to converge as well, and it turned   
   out I didn't have a weight associated with the bias in each node.   
   Later I found a typo where I was accidentally replacing the weights   
   during each backprop pass instead of adjusting them by the required   
   delta.   
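   
   To make those two bugs concrete, here's a rough Python sketch (the   
   function names and numbers are mine, not from any particular book):   
   
```python
import math

# Hypothetical helpers illustrating the two bugs described above.

def neuron_output(weights, bias_weight, inputs):
    # Bug 1 fix: the bias gets its own trainable weight, applied to a
    # constant input of 1.0, rather than being left out entirely.
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias_weight * 1.0
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid

def update_weights(weights, deltas, lr):
    for i, d in enumerate(deltas):
        # weights[i] = lr * d   # <- the typo: REPLACES the weight
        weights[i] += lr * d    # <- correct: nudges it by the delta
```
   
   The replace-instead-of-adjust version can still look like it's   
   "learning" for a step or two, which is what makes it nasty to spot.   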
      
   I found this site VERY useful for debugging the process:   
   http://www.generation5.org/content/2001/xornet.asp.   
      
   James Mathews walks through an entire iteration of the backprop, so   
   if you seed the network with the weights he starts with, you can   
   check whether any of your calculations are going wrong.   
      
   I found the network converged after several thousand iterations in   
   most cases when I assigned random weights to the nodes.  However, on   
   occasion, the random weights would be so far off that the convergence   
   wasn't as complete; that is, it would predict an outcome with less   
   than 95% accuracy.   
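   
   In case it helps, here's a rough sketch of the whole 2-2-1 setup in   
   Python; the learning rate, iteration count, and seed are my own   
   guesses, not taken from the book:   
   
```python
import math
import random

random.seed(1)  # fixed seed so runs are repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# w_hidden[i]: [w_in0, w_in1, bias_weight] for hidden neuron i
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]  # last entry is bias

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = loss()
for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        # deltas use the sigmoid derivative, o * (1 - o)
        d_out = (t - o) * o * (1 - o)
        d_h = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        # adjust (not replace!) each weight by its delta
        w_out[0] += lr * d_out * h[0]
        w_out[1] += lr * d_out * h[1]
        w_out[2] += lr * d_out           # bias weight
        for i in range(2):
            w_hidden[i][0] += lr * d_h[i] * x[0]
            w_hidden[i][1] += lr * d_h[i] * x[1]
            w_hidden[i][2] += lr * d_h[i]  # bias weight

# after training, loss() should be well below `initial` on most seeds
```
   
   With a bad draw of initial weights it can still land in a local   
   minimum, which matches the "less than 95% accuracy" behavior above.   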
      
   Feel free to email me if you want me to look at your code.   
      
   -Scott Frye   
      
   [ comp.ai is moderated ... your article may take a while to appear. ]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca