Distributed representations and graceful degradation
Michael Gasser
gasser at bend.UCSD.EDU
Sun Jun 9 00:58:26 EDT 1991
Max Coltheart discusses how damage to real neural networks often
results in degradation that is more clumsy than graceful.
But isn't degradation under increasing task complexity a different
matter? I'm thinking of the processing of increasing levels of
embedding or (possibly also) increasing numbers of arguments in
natural language.
Fixed-length distributed representations of syntactic or semantic
structure (e.g., RAAM, Elman nets) seem to model this behavior quite
well, compared with the usual symbolic approach (where you're no more
likely to fail at 28 levels of embedding than at 2) and with localist
connectionist approaches (where you can handle sentences with 3
arguments, but 4 are out because you run out of units).
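
For concreteness, here is a rough sketch (in present-day Python/NumPy,
purely illustrative) of the fixed-width compression idea. It is not
RAAM or an Elman net, just an untrained random linear encoder/decoder
pair, so the absolute error levels mean little; the qualitative point
is that recovery of the deepest constituent gets gradually worse as
embedding depth grows, with no hard cutoff of the localist sort. All
names and dimensions are assumptions of the sketch.

import numpy as np

rng = np.random.default_rng(0)
D = 32  # width of every representation, terminal or compound

# Random encoder with orthonormal rows; its transpose serves as decoder,
# so one encode/decode round trip is an orthogonal projection of the
# 2*D-dimensional pair onto a D-dimensional subspace -- a lossy,
# fixed-width compression (a trained RAAM would compress far better).
Q, _ = np.linalg.qr(rng.normal(size=(2 * D, D)))
W_enc = Q.T          # (D, 2*D): pair -> compound
W_dec = Q            # (2*D, D): compound -> pair

def encode(left, right):
    return W_enc @ np.concatenate([left, right])

def decode(vec):
    out = W_dec @ vec
    return out[:D], out[D:]

filler = rng.normal(size=D)    # the innermost constituent we care about
filler /= np.linalg.norm(filler)
padding = rng.normal(size=D)   # stands in for the surrounding material
padding /= np.linalg.norm(padding)

for depth in (1, 2, 4, 8, 16, 28):
    vec = filler
    for _ in range(depth):     # embed one level deeper each time
        vec = encode(padding, vec)
    for _ in range(depth):     # unwind back to the innermost filler
        _, vec = decode(vec)
    err = np.linalg.norm(vec - filler)
    print(f"depth {depth:2d}: filler reconstruction error {err:.2f}")

The reconstruction error rises smoothly with depth rather than jumping
from perfect to impossible at some fixed limit, which is the contrast
with the symbolic and localist cases above.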
Mike Gasser