
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai      Awaiting the gospel from Sarah Connor      1,954 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 1,917 of 1,954   
   Enis BAYRAMOĞLU to Raeldor   
   Re: Feed Forward Back Prop Network to So   
   25 Feb 11 04:11:58   
   
   From: enisbayramoglu@gmail.com   
      
   On Feb 16, 4:26 am, Raeldor  wrote:   
   > Hi All,   
   >   
   > I'm trying to get into AI and am reading the AI Techniques for Game   
   > Programming book, which is a great read.  I have built a feed forward   
   > network with 2 inputs, 2 hidden neurons in 1 layer and one output   
   > neuron and am using the back propagation rule in the book (based on   
   > Werbos) to calculate the backprop.  However, I can't get it to   
   > converge for the XOR training data set.   
   >   
   > I plotted out on paper that the XOR can be solved using 2 neurons in a   
   > hidden layer, but I wonder if that assumption was incorrect.  Should   
   > this be solvable using 2 hidden neurons?   
   >   
   > Thanks   
   > Ray   
   >   
      
   Hi,   
      
   What activation functions do you use at the hidden neurons? If they're   
   linear, the whole network collapses into a single linear map, and no   
   linear map can separate the XOR patterns, no matter how many hidden   
   neurons you have. Another point: what are your initial weights? Do you   
   assign them randomly? If the two hidden neurons start with identical   
   weights, they receive identical gradients, so training can never make   
   them learn different things.   
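   As a sketch of the two points above (nonlinear hidden units plus   
   random, non-identical initial weights), here is a minimal 2-2-1 XOR   
   trainer in Python/NumPy. It is not from the thread: the names,   
   learning rate, iteration count, and seed are all illustrative, and   
   on a network this small plain backprop can still land in a local   
   minimum for some seeds.   

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Nonlinear activation -- with linear activations the two layers
    # would collapse into one linear map, which cannot represent XOR.
    return 1.0 / (1.0 + np.exp(-x))

# The four XOR patterns and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random (non-identical) initial weights: if both hidden neurons
# started identical, their gradients would stay identical too.
W1 = rng.normal(0.0, 1.0, (2, 2))   # input -> hidden
b1 = np.zeros((1, 2))
W2 = rng.normal(0.0, 1.0, (2, 1))   # hidden -> output
b2 = np.zeros((1, 1))

def mse(pred):
    return float(((pred - y) ** 2).mean())

initial_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))

lr = 0.5
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backprop with squared-error loss; sigmoid derivative is s*(1-s).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

final_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))
print("loss:", round(initial_loss, 4), "->", round(final_loss, 4))
```

   With random initial weights the two hidden units break symmetry and   
   the loss drops; so yes, two hidden neurons can be enough for XOR,   
   provided the activations are nonlinear.   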
      
   [ comp.ai is moderated ... your article may take a while to appear. ]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca