Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai    |    Awaiting the gospel from Sarah Connor    |    1,954 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 294 of 1,954    |
|    Sandy Hodges to All    |
|    outputs that represent relationships    |
|    23 Apr 04 06:24:33    |
XPost: comp.ai.neural-nets
From: QXUXBTVOTSAO@spammotel.com

I am interested in how nerve nets handle relationships, and am hoping
for some references.

Suppose the task is to look at scenes of a few objects and produce
grammatical descriptions of them, such as: "There is a red square
above a blue circle, and a yellow diamond inside the blue circle."
I understand how you would go about training a nerve net so it would
light a particular output for any scene with a circle, another
output for any scene with a yellow object, etc. But what sort of
output should the nerve net produce to indicate that it recognized
that it was the diamond that was yellow, and the circle that was
blue? Or that it recognized that the square was above the circle,
and not the other way around?

It is not vision in particular that I am interested in, but any
task where the output needs to describe the relationships in the
input.

If you had an output for every combination: yellow square, red circle,
etc., you would need n-squared of them, which may not be practical.

Here's the sort of output scheme I have in mind:
There are outputs, called primary, for red, yellow, blue, black,
circle, diamond, square, etc.
There are outputs that represent classes of objects, such as "a square
or diamond that is either yellow or red."

Thus, if the scene was a red square and a blue circle, the output
could be: red, square, blue, circle, "square or diamond that is
either red or yellow," "circle or oval that is either blue or
yellow." Since the yellow, diamond, and oval outputs are not lit, we
can tell from this output that it is the square that is red, and the
circle that is blue.

I'm not sure this works. And it doesn't feel as if my brain does it
this way.
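For what it's worth, the scheme can at least be simulated directly. Here is a small hypothetical Python sketch (my own hand-built encoding, not a trained network): each output unit is treated as a predicate over (colour, shape) objects, lit whenever some object in the scene satisfies it.

```python
from itertools import combinations

# Hypothetical inventories of primary attributes (names are mine).
COLOURS = ["red", "yellow", "blue", "black"]
SHAPES = ["circle", "diamond", "square", "oval"]

def outputs(scene):
    """Return the set of lit output names for a scene of (colour, shape) pairs."""
    lit = set()
    # Primary outputs: one per colour and one per shape seen in the scene.
    for colour, shape in scene:
        lit.add(colour)
        lit.add(shape)
    # Class outputs: one per (colour-pair, shape-pair) combination, lit
    # when some single object has a colour in the pair AND a shape in it.
    for c1, c2 in combinations(COLOURS, 2):
        for s1, s2 in combinations(SHAPES, 2):
            if any(c in (c1, c2) and s in (s1, s2) for c, s in scene):
                lit.add(f"({c1}|{c2})&({s1}|{s2})")
    return lit

# The example scene from the post: a red square and a blue circle.
lit = outputs([("red", "square"), ("blue", "circle")])

# Decoding the binding: "(red|yellow)&(diamond|square)" is lit, but the
# primary outputs "yellow" and "diamond" are dark, so the red object
# must be the square; likewise for the blue circle.
assert "(red|yellow)&(diamond|square)" in lit
assert "yellow" not in lit and "diamond" not in lit
```

Note that the class outputs already number C(4,2) * C(4,2) = 36 for just four colours and four shapes, so this scheme does not escape the combinatorial growth the post worries about; it only changes the constant.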
But that should give an idea of the kind of thing I'm
looking for. It would be helpful if I even knew the name used to
describe the problem, so I could search on it.

The question of how networks of neurons represent relationships seems
to me quite a fundamental one for understanding how it is that real
brains can hold ideas in short-term memory; thus I have taken the
liberty of cross-posting to comp.ai as well as comp.ai.neural-nets.

thanks,

[ comp.ai is moderated. To submit, just post and be patient, or if ]
[ that fails mail your article to ]
(c) 1994, bbs@darkrealms.ca