more fuel for the fire
Jim Hendler
hendler at dormouse.cs.umd.edu
Sun Sep 11 11:20:57 EDT 1988
While I come down on the side of the connectionists in the recent debates,
I think some of our critics, and in particular the criticisms from Bever and from Pinker & Prince (P&P),
do focus on an area that is a weakness of most of the distributed models:
it is one thing to learn features/structures/etc., it is another to
apply these things appropriately during cognitive processing. Although,
for example, Geoff's model could be said to have generalized a feature
corresponding to `gender', we would be hard-pressed to claim that
it could somehow make gender-based inferences.
The structured connectionists have gone far beyond the distributed modelers
when it comes to this. Their models, albeit not learned, can make inferences
based on probabilities, classifications, and the like (cf. Shastri etc.).
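To give a flavor of the kind of classification-based inference I mean, here
is a toy sketch in Python. It is my own simplification for illustration, not
Shastri's actual encoding; the concept names, probabilities, and the infer()
helper are all made up:

    # Toy IS-A hierarchy with class-level property probabilities.
    # All names and numbers are illustrative, not from any real system.
    isa = {"Clyde": "elephant", "elephant": "mammal"}
    prop = {("elephant", "gray"): 0.9, ("mammal", "warm-blooded"): 0.99}

    def infer(concept, feature):
        # Walk up the IS-A chain and return the first class-level
        # probability found for the feature, or None if nothing applies.
        c = concept
        while c is not None:
            if (c, feature) in prop:
                return prop[(c, feature)]
            c = isa.get(c)
        return None

    print(infer("Clyde", "gray"))          # 0.9, inherited from elephant
    print(infer("Clyde", "warm-blooded"))  # 0.99, inherited from mammal

A structured network encodes this kind of inheritance directly in its
connections; the point is that the inference step is built in, not merely
latent in learned weights.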
I believe that it is crucial to provide an explanation of how distributed
representations can make similar inferences. One approach, which I
am currently pursuing, is to use the weight spaces learned by distributed
models as if they were structured networks -- spreading activation among
the units and seeing what happens (the results look promising). Other
approaches will surely be suggested and pursued.
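To make that approach concrete, here is a rough sketch of the
spreading-activation idea; the function, the parameters (steps, decay,
threshold), and the update rule are assumptions for illustration, not my
actual implementation:

    import numpy as np

    def spread_activation(W, source_units, steps=5, decay=0.8, threshold=0.01):
        # Treat the learned weight matrix W as a graph: |W[i, j]| is read
        # as the association strength from unit i to unit j, sign ignored
        # (a modeling assumption). Activation is injected at source_units
        # and propagated along the weights for a few steps.
        n = W.shape[0]
        act = np.zeros(n)
        act[source_units] = 1.0
        for _ in range(steps):
            # Each unit passes a decayed share of its activation along
            # its outgoing weights; sources stay clamped at full strength.
            act = decay * (np.abs(W).T @ act)
            act = act / (act.max() + 1e-12)   # keep values bounded
            act[source_units] = 1.0
        return np.where(act < threshold, 0.0, act)  # prune weak activation

    # Toy usage on a random 6-unit "weight space"
    rng = np.random.default_rng(0)
    W = rng.normal(size=(6, 6))
    print(spread_activation(W, source_units=[0]))

Units that remain strongly active after a few steps can then be read off as
the representations the network "associates" with the source concepts.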
Thus, to reiterate my main point -- the fact that a backprop (or other)
model has learned a function doesn't mean diddly until the internal
representations built during that learning can be applied to other
problems, can make appropriate inferences, etc. To be a cognitive model
(and that is what our critics are nay-saying), we must be able not only to
learn, our forte, but also to THINK, a true weakness of many of our current
systems.
-Jim H.