Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai    |    Awaiting the gospel from Sarah Connor    |    1,954 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 1,916 of 1,954    |
|    Raeldor to All    |
|    Feed Forward Back Prop Network to Solve     |
|    16 Feb 11 03:26:38    |
From: raeldor@gmail.com

Hi All,

I'm trying to get into AI and am reading the AI Techniques for Game
Programming book, which is a great read. I have built a feed-forward
network with 2 inputs, 2 hidden neurons in 1 layer, and 1 output
neuron, and am using the back-propagation rule from the book (based on
Werbos) to train it. However, I can't get it to converge on the XOR
training data set.

I worked out on paper that XOR can be solved with 2 neurons in a
hidden layer, but I wonder if that assumption was incorrect. Should
this be solvable using 2 hidden neurons?

Thanks
Ray

[ comp.ai is moderated ... your article may take a while to appear. ]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
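For reference, a 2-2-1 sigmoid network can represent XOR exactly, so the paper derivation is right. A minimal sketch with hand-picked weights (chosen by hand, not learned; the weight values and the hidden-unit roles of "OR" and "AND" are illustrative assumptions, and backprop from a random initialisation can still stall in a local minimum even though this solution exists):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(x1, x2):
    # Hidden neuron 1 approximates OR: active when x1 + x2 >= 1.
    h1 = sigmoid(20.0 * x1 + 20.0 * x2 - 10.0)
    # Hidden neuron 2 approximates AND: active only when x1 + x2 >= 2.
    h2 = sigmoid(20.0 * x1 + 20.0 * x2 - 30.0)
    # Output computes "OR and not AND", i.e. XOR.
    return sigmoid(20.0 * h1 - 20.0 * h2 - 10.0)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))  # prints 0, 1, 1, 0 down the column
```

So the architecture is sufficient; if training doesn't converge, the usual suspects are the learning rate, missing bias weights, or an unlucky initialisation.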
(c) 1994, bbs@darkrealms.ca