Recurrent Linguistic Domain Papers?
David.Servan-Schreiber@A.GP.CS.CMU.EDU
Wed May 2 10:58:37 EDT 1990
Tom,
Axel Cleeremans, Jay McClelland and I have also worked on simple
recurrent networks (SRNs) and their ability to discover finite state and
recurrent grammars from exemplars.
We have shown that, during training with exemplars generated from a finite
state grammar, an SRN progressively encodes more and more temporal context.
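For readers who have not seen the architecture, here is a rough sketch of an
Elman-style SRN of the kind we used (modern Python/numpy rather than our actual
simulation code; the alphabet, layer sizes and learning rate are made up for
illustration). The hidden layer receives the current symbol plus a copy of its
own previous state through the context units, and the network is trained to
predict the next symbol of the sequence:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy alphabet; a real experiment would use strings generated
# by a finite-state grammar.
SYMBOLS = ["B", "T", "S", "X", "P", "V", "E"]
IDX = {s: i for i, s in enumerate(SYMBOLS)}

n_in = n_out = len(SYMBOLS)
n_hid = 10

W_ih = rng.normal(scale=0.5, size=(n_hid, n_in))    # input  -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))   # context -> hidden
W_ho = rng.normal(scale=0.5, size=(n_out, n_hid))   # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def one_hot(sym):
    v = np.zeros(n_in)
    v[IDX[sym]] = 1.0
    return v

def train_on_sequence(seq, lr=0.1):
    """One pass over a symbol sequence, predicting each next symbol."""
    global W_ih, W_hh, W_ho
    context = np.zeros(n_hid)                  # context units start at rest
    for cur, nxt in zip(seq[:-1], seq[1:]):
        x, t = one_hot(cur), one_hot(nxt)
        hidden = sigmoid(W_ih @ x + W_hh @ context)
        output = sigmoid(W_ho @ hidden)
        # Squared-error deltas for output and hidden layers
        d_out = (output - t) * output * (1 - output)
        d_hid = (W_ho.T @ d_out) * hidden * (1 - hidden)
        W_ho -= lr * np.outer(d_out, hidden)
        W_ih -= lr * np.outer(d_hid, x)
        W_hh -= lr * np.outer(d_hid, context)
        context = hidden                       # copy hidden state for next step

# Example: repeated training on one (made-up) legal string
for _ in range(500):
    train_on_sequence(["B", "T", "S", "X", "S", "E"])

Because the context units are simply a copy of the previous hidden state, error
is not propagated back in time; the temporal context the network comes to encode
is whatever the hidden layer learns to preserve from one step to the next.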
We also explored the conditions under which the network can carry
information about distant sequential contingencies across intervening
elements. Such information is retained with relative
ease if it is relevant at each intermediate step of a sequence; it tends to
be lost when intervening elements do not depend on it. However, in a more
complex simulation, we showed that long distance sequential contingencies
can be encoded by an SRN even if only subtle statistical properties of
embedded strings depend on the early information. Our interpretation of
this phenomenon is that the network encodes long-distance dependencies by
*shading* internal representations that are responsible for processing
common embeddings in otherwise different sequences.
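To make that setup concrete, here is a toy generator (illustrative only; the
symbols, lengths and probabilities are invented rather than taken from our
simulation) for strings in which the first and last symbols must agree, while
the embedded portion uses the same vocabulary under both heads and differs only
in its subtle symbol statistics:

import random

def embedded_string(head, length=5):
    # Same vocabulary for both heads; only the sampling bias differs,
    # so the head is recoverable only from subtle statistics of the embedding.
    bias = {"T": [0.4, 0.3, 0.3], "P": [0.3, 0.3, 0.4]}[head]
    body = random.choices(["S", "X", "V"], weights=bias, k=length)
    return [head] + body + [head]   # long-distance agreement: last symbol = head

random.seed(1)
for _ in range(3):
    print(" ".join(embedded_string(random.choice(["T", "P"]))))

An SRN trained to predict successive symbols of such strings can only get the
final symbol right if its hidden representations of the embedding remain
slightly shaded by the head symbol throughout.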
This ability to simultaneously represent similarities and differences
between several sequences relies on the graded nature of the representations
used by the network, which contrasts with the finite states of traditional
automata. For this reason, in our more recent work we have started to call
such networks *Graded State Machines*.
Axel and Jay have also shown that learning and processing in such graded
state machines account nicely for the way in which human subjects improve
their performance in an implicit finite-state grammar learning experiment.
Finally, in addition to Jeff Elman's and Jordan Pollack's work, Bob Allen
has also done some interesting experiments with recurrent networks and
discovery of sequential structures. Unfortunately I cannot put my hands
on the appropriate references just now but he can be contacted at
RBA at flash.bellcore.com.
Cleeremans, A. and McClelland, J. (submitted to Cognitive Science). Learning
the Structure of Event Sequences. Available from the first author,
Dept. of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213.

Cleeremans, A., Servan-Schreiber, D. and McClelland, J. (1989). Finite State
Automata and Simple Recurrent Networks. Neural Computation, 1, 372-381.

Servan-Schreiber, D., Cleeremans, A. and McClelland, J. (1988). Encoding
Sequential Structure in Simple Recurrent Networks. Technical Report
CMU-CS-88-183, Carnegie Mellon University (orders taken by copetas at cs.cmu.edu,
no charge).