Bever's claims

Geoffrey Hinton hinton at ai.toronto.edu
Sat Sep 10 16:58:29 EDT 1988


In a recent message, Bever claims the following:

"We have demonstrated (Lachter and Bever, 1988; Bever, 1988) that existing
connectionist models learn to simulate rule-governed behavior only insofar as
the relevant structures are built into the model or the way it is fed.  What
would be important to show is that such models could achieve the same
performance level and characteristics without structures and feeding schemes
which already embody what is to be learned."

This claim irks me since I have already explained to him that there are
connectionist networks that really do discover representations that are not
built into the initial network.  One example is the family trees network
described in

Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986)
Learning representations by back-propagating errors.
Nature, 323, pages 533-536.
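As a toy illustration of the point (not the family-trees setup from the paper
above): a network with hidden units, trained by back-propagating errors from
small random initial weights, can learn an input-output mapping such as XOR
that no network without hidden units can represent, and the hidden
representations it ends up with are discovered during training rather than
built in.  A minimal sketch in Python follows; the task, hyperparameters, and
variable names are all assumptions made purely for illustration.

    # Minimal back-propagation sketch (illustrative only, not the 1986 setup):
    # one hidden layer of sigmoid units learns XOR, a task no network without
    # hidden units can represent.  Nothing about the task is built into the
    # initial weights, which are small and random.
    import numpy as np

    rng = np.random.default_rng(0)

    # Raw 2-bit input encoding; targets are XOR.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Small random initial weights: no task structure is fed in.
    W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)

    lr = 2.0
    for epoch in range(10000):
        # Forward pass.
        H = sigmoid(X @ W1 + b1)      # hidden-unit activities
        Y = sigmoid(H @ W2 + b2)      # output-unit activity

        # Backward pass: gradients of the squared error, propagated
        # through the output layer and then the hidden layer.
        dY = (Y - T) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)

        # Gradient-descent weight updates.
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

    print("outputs after training (should approach 0, 1, 1, 0):", Y.ravel().round(2))
    print("learned hidden representations of the four inputs:")
    print(sigmoid(X @ W1 + b1).round(2))

Inspecting the learned hidden activities typically shows that the network has
invented internal features (e.g. something like "at least one input on" and
"both inputs on") that were never present in the input encoding.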

I would like Bever to clearly state whether he thinks his "demonstration"
applies to this (existing) network, or whether he is simply criticizing
networks that lack hidden units.

Geoff


