papers on architectural bias of RNNs
Gary Cottrell
gary at cs.ucsd.edu
Thu Jul 3 13:31:50 EDT 2003
Folks interested in Peter's paper on RNNs and definite memory
machines may also be interested in our papers on TDNNs and
definite memory machines:
Clouse, Daniel S., Giles, C. Lee, Horne, Bill G. and
Cottrell, G. W. (1997) Time-delay neural networks:
Representation and induction of finite state machines. IEEE
Transactions on Neural Networks.
This work attempts to characterize the capabilities of
time-delay neural networks (TDNN), and contrast two
subclasses of TDNN in the area of language induction. The
two subclasses are those with delays limited to the inputs
(IDNN), and those that also include delays on hidden units
(HDNN). Both of these architectures are capable of
representing the same languages, those representable by
definite memory machines (DMM), a subclass of finite state
machines (FSM). Both have a strong representational bias
towards DMMs that can be characterized by little logic. We
demonstrate this by learning a 2048-state DMM using very few
training examples. Even though both architectures are
capable of representing the same class of languages, HDNNs
are biased towards learning languages which are
characterized by shift-invariant behavior on short input
windows in the mapping from recent inputs to the
accept/reject classification. We demonstrate this
difference in learning bias via a set of simulations and
statistical analysis.
http://www.neci.nec.com/%7Egiles/papers/IEEE.TNN.tdnn.as.fsm.ps.Z
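For readers unfamiliar with the architecture, here is a minimal sketch (not the paper's code; all weights and sizes are illustrative) of an input-delay network (IDNN): the most recent k input symbols form a tapped delay line feeding a small feed-forward net, so the accept/reject decision can depend only on a fixed-length input window, which is exactly the "definite memory" restriction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IDNN:
    """Toy input-delay network: an MLP over a fixed window of recent inputs."""

    def __init__(self, window=4, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.window = window
        # Untrained random weights; a real experiment would train these.
        self.W1 = rng.normal(size=(hidden, window))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(size=hidden)
        self.b2 = 0.0

    def classify(self, sequence):
        # Keep only the last `window` symbols (zero-padded if shorter):
        # this is the tapped delay line on the inputs.
        buf = np.zeros(self.window)
        n = min(self.window, len(sequence))
        if n:
            buf[-n:] = sequence[-n:]
        h = sigmoid(self.W1 @ buf + self.b1)
        return float(sigmoid(self.W2 @ h + self.b2))

net = IDNN()
# Two sequences that agree on their last 4 symbols get identical outputs,
# illustrating definite memory: only the recent window matters.
a = net.classify([0, 1, 1, 0, 1, 0, 1])
b = net.classify([1, 0, 0, 0, 1, 0, 1])
print(a == b)  # True
```

An HDNN would additionally place delay lines on the hidden units, which (per the abstract) changes the learning bias but not the class of representable languages.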
Clouse, Daniel S., Giles, C. Lee, Horne, Bill G. and
Cottrell, G. W. (1997) Representation and induction of
finite state machines using time-delay neural networks. In
Michael C. Mozer, Michael I. Jordan, and Thomas Petsche
(eds.) Advances in Neural Information Processing Systems 9,
pp. 403-409. MIT Press: Cambridge, MA, 1997.
(Similar abstract!)
http://nips.djvuzone.org/djvu/nips09/0403.djvu