Recurrent Linguistic Domain Papers?

Jeff Elman elman at amos.ucsd.edu
Fri Apr 27 15:26:46 EDT 1990


I have done work along these lines, using a simple recurrent
network.  Nets have been trained on a variety of stimuli.  Probably
the most interesting simulations (for your purposes) are those in
which the network must discover a way to represent recursive
grammatical structures.  A sketch of the architecture follows.
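
For concreteness, here is a minimal sketch of the forward pass of a
simple recurrent network, written in modern Python/NumPy.  The
vocabulary, dimensions, and weights are illustrative assumptions of
mine, not taken from the TRs, and the net is untrained (in the actual
simulations the weights are learned by next-word prediction), so its
guesses here are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)

    vocab = ["boy", "boys", "who", "chase", "chases", "run", "runs", "."]
    V = len(vocab)   # one-hot input/output width
    H = 16           # hidden ("context") units

    W_xh = rng.normal(0.0, 0.1, (H, V))  # input  -> hidden
    W_hh = rng.normal(0.0, 0.1, (H, H))  # hidden -> hidden (recurrence)
    W_hy = rng.normal(0.0, 0.1, (V, H))  # hidden -> output

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def step(h_prev, x):
        # The new state folds the current word into the previous state...
        h = np.tanh(W_xh @ x + W_hh @ h_prev)
        # ...and the next-word prediction reads the WHOLE state vector.
        y = softmax(W_hy @ h)
        return h, y

    # A center-embedded sentence: "boys who chase boy run ."
    h = np.zeros(H)
    for w in ["boys", "who", "chase", "boy", "run", "."]:
        h, y = step(h, np.eye(V)[vocab.index(w)])
        print(f"{w:6s} -> predicted next: {vocab[int(y.argmax())]}")

Note that there is no separate memory for the embedded clause: the
single state vector h must carry both the outer and the inner
material at once.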

The networks succeed to a limited extent by implementing what I
call "leaky" or "context-sensitive recursion", in which a state
vector does the job normally done by a stack.  Since the entire
state vector is visible to the part of the network that computes the
output function, information from different levels of embedding
leaks across.
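
The leak can be seen in a toy computation (a hypothetical
illustration of mine, not code from the TRs): a stack restores the
outer context exactly when an embedded clause is popped, whereas the
state vector only approximately returns to its earlier value, so the
output layer, which reads all of it, sees residue of the embedding:

    import numpy as np

    rng = np.random.default_rng(1)
    H = 8
    W_hh = rng.normal(0.0, 0.5, (H, H))

    def update(h, x):
        return np.tanh(W_hh @ h + x)

    outer = rng.normal(size=H)   # stands in for the main clause so far
    inner = rng.normal(size=H)   # stands in for an embedded clause

    h = update(np.zeros(H), outer)
    h_outer = h.copy()           # what a true stack would restore on "pop"
    h = update(h, inner)         # descend into the embedding
    h = update(h, -inner)        # the closest the net gets to a "pop"

    # Nonzero: the embedded material has been blended irreversibly
    # into the state.  That residue is the "leak" -- and also what
    # makes the recursion context-sensitive.
    print(np.linalg.norm(h - h_outer))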

I believe this kind of leakage is just what is needed to account
for natural language processing.

For a copy of the TRs reporting this work, send a note to
'yvonne at amos.ucsd.edu' asking for 8801 and 8903.

Jeff Elman
Dept. of Cognitive Science
UCSD


