Connectionist symbol processing

Simon Lucas sml at essex.ac.uk
Mon Aug 24 03:49:17 EDT 1998


 I would suggest that most recurrent neural net architectures
 are not fundamentally more 'neural' than hidden Markov models -
 think of an HMM as a neural net with second-order weights
 and linear activation functions.
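
 To make the analogy concrete, here is a minimal sketch (in Python, with
 made-up toy parameters - not taken from any of the references below) of the
 HMM forward recursion written as a recurrent update whose weights act
 multiplicatively (second-order) and whose activations are purely linear:

import numpy as np

# Toy HMM: 2 hidden states, 3 observation symbols (illustrative values only).
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])          # transition probabilities, rows sum to 1
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])     # emission probabilities per state
pi = np.array([0.6, 0.4])            # initial state distribution

def forward(obs):
    # Forward pass viewed as a recurrent net: each step is a weighted sum
    # with an identity (linear) activation, and the effective weight
    # A[i, j] * B[j, o] is second-order - it multiplies the previous state
    # activation by both a transition term and an emission term.
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()               # likelihood of the observation sequence

print(forward([0, 2, 1]))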

 HMMs are, of course, very much alive and kicking, and are routinely and
 successfully applied to problems in speech recognition and OCR, for
 example.  It might be argued that HMMs tend to employ less distributed
 representations than RNNs, but even if this is true, so what?

 Some interesting work that has explored links between the two:

@ARTICLE{Bridle-alpha-net,
  AUTHOR = "Bridle, J.S.",
  TITLE = "Alpha-nets: a recurrent ``neural'' network architecture with a hidden Markov model interpretation",
  JOURNAL = "Speech Communication",
  YEAR = 1990,
  VOLUME = 9,
  PAGES = "83 -- 92"}

@ARTICLE{Bengio-iohmm,
  AUTHOR = "Bengio, Y. and Frasconi, P.",
  TITLE = "Input-output HMMs for sequence processing",
  JOURNAL = "IEEE Transactions on Neural Networks",
  YEAR = 1996,
  VOLUME = 7,
  PAGES = "1231 -- 1249"}


 Also related to the discussion is the Syntactic Neural Network (SNN) -
 an architecture I developed in my PhD thesis (refs below).

 The SNN is a modular architecture that is able to parse and (in some cases)
 infer context-free grammars (and therefore also regular, linear, etc.).

 The architecture is composed of Local Inference Machines (LIMs) that
 rewrite pairs of symbols.  These are arranged in a matrix-parser formation
 (see Younger1967) to handle general context-free grammars, or the SNN
 macro-structure can be altered to deal specifically with simpler classes
 of grammar such as regular, strictly-hierarchical or linear; the LIM itself
 remains unchanged.
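
 For readers unfamiliar with the matrix-parser formation, here is a minimal
 sketch of a CKY-style recogniser in the spirit of Younger1967, using a
 made-up toy grammar in Chomsky normal form; the rewrite_pair function
 merely stands in for the role a LIM plays and is not the LIM
 implementation from the thesis:

# Minimal CKY-style recogniser (Younger 1967) for a toy grammar in
# Chomsky normal form.  Each cell combination plays the role the text
# assigns to a Local Inference Machine: it rewrites a *pair* of symbol
# sets into the set of nonterminals that can derive them.
binary_rules = {("NP", "VP"): {"S"}, ("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}}
unary_rules  = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"}}

def rewrite_pair(left, right):
    # Rewrite a pair of symbol sets (one LIM-style application).
    return {a for (b, c), parents in binary_rules.items()
              if b in left and c in right for a in parents}

def cky(words, start="S"):
    n = len(words)
    # chart[i][j] holds the nonterminals deriving words[i:j+1]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][i] = set(unary_rules.get(w, ()))
    for span in range(2, n + 1):              # longer spans from shorter ones
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):             # split point: a pair of sub-spans
                chart[i][j] |= rewrite_pair(chart[i][k], chart[k + 1][j])
    return start in chart[0][n - 1]

print(cky("the dog chased the cat".split()))  # -> True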

 In my thesis I only developed a local learning rule for the strictly-hierarchical
 grammar, which was a specialisation of the Inside/Outside algorithm for training
 stochastic context-free grammars.
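
 For reference, the inside probabilities that the Inside/Outside algorithm
 estimates obey the standard recursion for a stochastic context-free grammar
 in Chomsky normal form (the textbook recursion, not the specialised local
 rule from the thesis):

   $ \beta(A,i,i) = P(A \rightarrow w_i), \qquad
     \beta(A,i,j) = \sum_{A \rightarrow BC} \sum_{k=i}^{j-1}
     P(A \rightarrow BC)\,\beta(B,i,k)\,\beta(C,k+1,j) $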

 By constructing the LIMs from forward-backward modules (see Lucas-fb),
 however, any SNN that you construct automatically has an associated
 training algorithm.  I have already shown this to work for regular grammars
 and am now testing some other cases - I'll post the paper to this group
 when it's done.

 refs:

@ARTICLE{Younger1967,
  AUTHOR = "Younger, D.H.",
  TITLE = "Recognition and parsing of context-free languages in time $n^{3}$",
  JOURNAL = "Information and Control",
  VOLUME = 10,
  NUMBER = 2,
  PAGES = "189 -- 208",
  YEAR = 1967}

@ARTICLE{Lucas-snn1,
  AUTHOR = "Lucas, S.M. and Damper, R.I.",
  TITLE = "Syntactic neural networks",
  JOURNAL = "Connection Science",
  YEAR = 1990,
  VOLUME = 2,
  PAGES = "199 -- 225"}

@PHDTHESIS{Lucas-phd,
  AUTHOR = "Lucas, S.M.",
  TITLE = "Connectionist Architectures for Syntactic Pattern Recognition",
  SCHOOL = "University of Southampton",
  YEAR = 1991}


ftp://tarifa.essex.ac.uk/images/sml/reports/fbnet.ps
@INCOLLECTION{Lucas-fb,
  AUTHOR = "Lucas, S.M.",
  TITLE = "Forward-backward building blocks for evolving
       neural networks with intrinsic learning behaviours",
  BOOKTITLE = "Lecture Notes in Computer Science (1240): Biological
    and artificial computation: from neuroscience to technology",
  YEAR = 1997,
  PUBLISHER = "Springer-Verlag",
  PAGES = "723 -- 732",
  ADDRESS = "Berlin"}




------------------------------------------------
Simon Lucas
Department of Electronic Systems Engineering
University of Essex
Colchester CO4 3SQ
United Kingdom

Tel:    (+44) 1206 872935
Fax:    (+44) 1206 872900
Email:  sml at essex.ac.uk
http://esewww.essex.ac.uk/~sml
secretary:  Mrs Wendy Ryder (+44) 1206 872437
-------------------------------------------------



