From: enisbayramoglu@gmail.com   
      
   On Feb 16, 4:26 am, Raeldor wrote:   
   > Hi All,   
   >   
   > I'm trying to get into AI and am reading the AI Techniques for Game   
   > Programming book, which is a great read. I have built a feed forward   
   > network with 2 inputs, 2 hidden neurons in 1 layer and one output   
   > neuron and am using the back propagation rule in the book (based on   
   > Werbos) to compute the weight updates. However, I can't get it to
   > converge on the XOR training data set.
   >   
   > I plotted out on paper that the XOR can be solved using 2 neurons in a   
   > hidden layer, but I wonder if that assumption was incorrect. Should   
   > this be solvable using 2 hidden neurons?   
   >   
   > Thanks   
   > Ray   
   >   
      
   Hi,   
      
   What activation functions do you use at the hidden neurons? If
   they're linear, the whole network collapses into a single linear
   map, and XOR is not linearly separable, so a purely linear network
   can never learn it. Another point: how do you set the initial
   weights? Do you assign them randomly? If the two hidden neurons
   start with identical weights, they receive identical error signals
   during backpropagation, so they can never learn different things.
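
   To answer the question directly: yes, 2 hidden neurons are enough.
   Here is a small sketch (the weight values are ones I picked by hand,
   not anything from the book) of a 2-2-1 sigmoid network that computes
   XOR:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hand-picked weights: h1 acts like OR, h2 like AND, and the output
# computes "h1 AND NOT h2", which is exactly XOR.
def xor_net(x1, x2):
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # ~OR(x1, x2)
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)    # ~AND(x1, x2)
    return sigmoid(20 * h1 - 20 * h2 - 10)  # ~(h1 AND NOT h2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(xor_net(x1, x2)))  # prints the XOR truth table
```

   The large weights just push the sigmoids toward 0/1 so the logic is
   easy to read off; backprop would normally find some smoother
   solution in the same architecture.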
      
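   And here is a quick illustration of the symmetry problem (a toy
   backprop loop of my own, not the book's code): if both hidden
   neurons start with the same weights, they remain exact clones no
   matter how long you train, so the net can't solve XOR.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Both hidden neurons start with IDENTICAL weights -- no symmetry breaking.
w_h = [[0.5, -0.3, 0.1], [0.5, -0.3, 0.1]]  # per neuron: [w1, w2, bias]
w_o = [0.2, 0.2, 0.0]                       # output weights + bias, also symmetric

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

for epoch in range(1000):
    for x, t in data:
        # forward pass
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
        # standard backprop deltas for sigmoid units
        d_o = (t - y) * y * (1 - y)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # weight updates -- identical neurons get identical updates
        for i in range(2):
            w_o[i] += lr * d_o * h[i]
            w_h[i][0] += lr * d_h[i] * x[0]
            w_h[i][1] += lr * d_h[i] * x[1]
            w_h[i][2] += lr * d_h[i]
        w_o[2] += lr * d_o

# The two hidden neurons are still exact clones: they can never learn
# different features, so the network stays stuck on XOR.
print(w_h[0] == w_h[1])  # True
```

   Random initialization breaks this tie, which is why it matters even
   for a network this small.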
      