distributed representations

Max Coltheart ps_coltheart at vaxa.mqcc.mq.oz.au
Sat Jun 8 10:51:22 EDT 1991


The original posting about this mentioned the property of graceful degradation
as one of the virtues of systems that use distributed representations. In what
way is this a virtue? For nets that are doing some engineering job such as
character recognition, it would obviously be good if some damage or malfunction
didn't much affect the net's performance. But for nets that are meant to be
models of cognition, the hidden assumption seems to be that brain damage
degrades cognitive processing gracefully, so the fact that nets degrade
gracefully too means they have promise for modelling cognition.
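To pin down the property being discussed, here is a minimal sketch (my own toy
example, not from the original posting): a Hebbian linear associator in which
all the stored associations are superimposed on one weight matrix, so every
weight carries a trace of every pairing. Zeroing a random fraction of the
weights ("lesioning") then lowers recall a little everywhere rather than
wiping out any one association. The pattern counts and lesion fractions are
illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy system with distributed representations: a Hebbian linear
# associator storing 30 input->output pattern pairs in one 100x100
# weight matrix, so every weight carries a trace of every association.
n, p = 100, 30
inputs = rng.choice([-1.0, 1.0], size=(p, n))
targets = rng.choice([-1.0, 1.0], size=(p, n))
W = (targets.T @ inputs) / n  # sum of outer products t x^T, scaled by 1/n

def recall_accuracy(weights):
    # Fraction of output bits recovered correctly, averaged over all pairs.
    preds = np.sign(inputs @ weights.T)
    return float((preds == targets).mean())

# "Lesion" the net by zeroing a random fraction of the weights, and watch
# recall fall off gradually rather than collapsing all at once.
accs = {}
for frac in (0.0, 0.3, 0.6, 0.9):
    mask = rng.random(W.shape) >= frac
    accs[frac] = recall_accuracy(W * mask)
    print(f"{frac:.0%} of weights removed -> recall accuracy {accs[frac]:.2f}")
```

The decline is gradual because signal and crosstalk shrink together as weights
are removed: exactly the engineering virtue claimed for such nets, and exactly
the behaviour the neuropsychological cases below fail to show.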

But where's the evidence that brain damage degrades cognition gracefully? That
is, the person just gets a little bit worse at a lot of things? Very commonly,
exactly the opposite happens - the person remains normal at almost all kinds
of cognitive processing, but some specific cognitive task suffers
catastrophically. No graceful degradation here.

I could give very many examples; I'll just give one (Semenza & Zettin,
Cognitive Neuropsychology, 1988, 5, 711). This patient, after his stroke, had
impaired language, but the impairment was confined to language production
(comprehension was fine) and to the production of just one type of word:
proper nouns. He could understand proper nouns normally, but could produce
almost none, whilst his production of other kinds of nouns was normal. What's
graceful about this degradation of cognition?

If cognition does *not* degrade gracefully, and neural nets do, what does this
say about neural nets as models of cognition?

Max Coltheart

