papers on architectural bias of RNNs

Peter Tino P.Tino at cs.bham.ac.uk
Thu Jul 3 11:27:30 EDT 2003


Dear Connectionists,

A collection of papers dealing with theoretical and practical
aspects of recurrent neural networks before and in the early
stages of training is available on-line.
Preprints can be found at
http://www.cs.bham.ac.uk/~pxt/my.publ.html


B. Hammer, P. Tino:
Recurrent neural networks with small weights implement definite
memory machines.
Neural Computation, accepted, 2003.
- Proves that recurrent networks are architecturally biased
   towards definite memory machines/Markov models.
   Also contains a rigorous learnability analysis of recurrent
   nets in the early stages of learning.
   (A toy illustration of the small-weight regime follows below.)
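
To make the "definite memory" claim concrete: with small weights the
state-transition map is contractive, so the network state depends
essentially only on a bounded suffix of the input. The Python sketch
below is illustrative only (none of the names or parameter values
come from the paper); it drives a randomly initialised, contractive
RNN with two histories that differ only in the distant past:

    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden, n_symbols = 8, 2
    # Small (contractive) recurrent and input weights.
    W_rec = 0.1 * rng.standard_normal((n_hidden, n_hidden))
    W_in = 0.1 * rng.standard_normal((n_hidden, n_symbols))

    def run(seq):
        h = np.zeros(n_hidden)
        for s in seq:
            h = np.tanh(W_rec @ h + W_in @ np.eye(n_symbols)[s])
        return h

    suffix = [0, 1, 1, 0, 1]
    h1 = run([0] * 20 + suffix)  # histories differing far in the past
    h2 = run([1] * 20 + suffix)  # but sharing the same recent suffix
    print(np.linalg.norm(h1 - h2))  # tiny: the distant past is forgotten

The two final states are nearly identical, which is exactly the
behaviour of a definite memory machine.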



P. Tino, M. Cernansky, L. Benuskova:
Markovian architectural bias of recurrent neural networks.
IEEE Transactions on Neural Networks, accepted, 2003.
- A mostly empirical study of the architectural bias phenomenon
   in the context of connectionist modeling of symbolic sequences.
   It is possible to extract (variable memory length) Markov models
   from recurrent networks even prior to any training!
   To assess the amount of useful information extracted during
   training, the networks should be compared with variable memory
   length Markov models. (The extraction idea is sketched below.)
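
A minimal sketch of the extraction idea, assuming an untrained
random RNN and k-means quantisation of its recurrent activations
into abstract states (all names and parameter values below are
illustrative assumptions, not the paper's procedure):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    n_hidden, n_symbols, n_states = 8, 2, 4
    W_rec = 0.1 * rng.standard_normal((n_hidden, n_hidden))
    W_in = 0.1 * rng.standard_normal((n_hidden, n_symbols))

    # Collect recurrent activations of the *untrained* network.
    seq = rng.integers(0, n_symbols, size=2000)
    h, acts = np.zeros(n_hidden), []
    for s in seq:
        h = np.tanh(W_rec @ h + W_in @ np.eye(n_symbols)[s])
        acts.append(h.copy())

    # Quantise activations into states; estimate per-state
    # next-symbol distributions (with Laplace smoothing).
    states = KMeans(n_clusters=n_states, n_init=10).fit_predict(np.array(acts))
    counts = np.ones((n_states, n_symbols))
    for t in range(len(seq) - 1):
        counts[states[t], seq[t + 1]] += 1
    print(counts / counts.sum(axis=1, keepdims=True))

On a random input sequence the extracted distributions are close to
uniform; on structured input the states pick up the sequence's
Markovian statistics, even though the network was never trained.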



P. Tino, B. Hammer:
Architectural Bias in Recurrent Neural Networks - Fractal Analysis.
Neural Computation, accepted, 2003.
- A rigorous fractal analysis of the recurrent activations of
   networks in the early stages of learning. The complexity of
   the input patterns (topological entropy) is directly reflected
   in the complexity of the recurrent activations (fractal
   dimension). (A crude box-counting sketch follows below.)
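
A crude numerical companion to that statement, assuming a two-unit
untrained RNN and a two-scale box-counting estimate of the dimension
of its activation set (everything below is an illustrative
assumption, not the paper's estimator):

    import numpy as np

    rng = np.random.default_rng(2)
    n_symbols = 2
    W_rec = 0.3 * rng.standard_normal((2, 2))  # 2-D state: boxes are easy
    W_in = rng.standard_normal((2, n_symbols))

    h, pts = np.zeros(2), []
    for s in rng.integers(0, n_symbols, size=20000):
        h = np.tanh(W_rec @ h + W_in @ np.eye(n_symbols)[s])
        pts.append(h.copy())
    pts = np.array(pts)

    def boxes(pts, eps):
        # Number of eps-sized boxes occupied by the activation set.
        return len({tuple(np.floor(p / eps).astype(int)) for p in pts})

    e1, e2 = 0.05, 0.025
    dim = np.log(boxes(pts, e2) / boxes(pts, e1)) / np.log(e1 / e2)
    print(dim)  # rough fractal-dimension estimate of the activations

Driving the network with a lower-entropy (more constrained) symbol
source shrinks this estimate, which is the qualitative content of
the result.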



Best wishes,

Peter Tino


-- 
Peter Tino
The University of Birmingham
School of Computer Science
Edgbaston, Birmingham B15 2TT, UK
+44 121 414 8558, fax: 414 4281
http://www.cs.bham.ac.uk/~pxt/





