Computational Power and Limits of ANNs

Bob Hadley hadley at cs.sfu.ca
Wed Sep 8 19:23:56 EDT 1999


   URL:  www.cs.sfu.ca/~hadley/online.html
 
                ~~~~~~~~~   Paper Available ~~~~~~~~



      COGNITION AND THE COMPUTATIONAL POWER OF CONNECTIONIST NETWORKS


                                   by
  
                            Robert F. Hadley
                      School of Computing Science 
                     and Cognitive Science Program 
                        Simon Fraser University 
                     Burnaby, B.C., V5A 1S6   Canada 
                           hadley at cs.sfu.ca 





                           ABSTRACT

  This paper examines certain claims of "cognitive significance"
which (wisely or not) have been based upon the theoretical powers
of three distinct classes of connectionist networks, namely, the
"universal function approximators", recurrent finite-state
simulation networks, and Turing-equivalent networks.  Each class
will be considered with respect to its potential in the realm of
cognitive modeling.  Regarding the first class, I argue that,
contrary to the claims of some influential connectionists,
feed-forward networks do NOT possess the theoretical capacity to
approximate all functions of interest to cognitive scientists.
For example, they cannot approximate many important recursive
(halting) functions which map symbolic strings onto other
symbolic strings.
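
  One structural observation in this vicinity, offered here only as
an illustrative Python sketch and not as the paper's own argument:
a feed-forward network computes a mapping between spaces of fixed
dimension, whereas a halting function on symbolic strings (string
reversal, say) is defined for inputs of every length, so a single
fixed-input network does not even cover the function's domain.

    import numpy as np

    def feedforward(x, W1, b1, W2, b2):
        # One-hidden-layer network: the input dimension is fixed by W1's shape.
        h = np.tanh(W1 @ x + b1)
        return W2 @ h + b2

    def reverse_string(s):
        # A total (halting) symbolic function, defined for strings of any length.
        return s[::-1]

    # A network built for 5-dimensional inputs has no value at all on a
    # 6-symbol input; the mismatch is structural, not a matter of accuracy.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
    W2, b2 = rng.normal(size=(5, 8)), np.zeros(5)
    print(feedforward(np.ones(5), W1, b1, W2, b2).shape)   # (5,)
    print(reverse_string("abcdef"))                        # "fedcba"
    # feedforward(np.ones(6), W1, b1, W2, b2)  # shape mismatch: undefined here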

  By contrast, I argue that a certain class of recurrent networks
(i.e., those which closely approximate deterministic finite
automata, or DFAs) shows considerably greater promise in some
domains.  However, from a cognitive standpoint, difficulties arise
when we consider how the relevant recurrent networks could acquire
the weight vectors needed to support DFA simulations.  These
difficulties are severe in the realm of central high-level
cognitive functions.
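
  For concreteness, here is a minimal sketch (an illustration only,
not the paper's construction) of a recurrent update that exactly
simulates a small, hypothetical DFA: the state is held as a one-hot
vector and each input symbol selects a 0/1 transition matrix.  The
weights are stipulated by hand, which is precisely the acquisition
question raised above.

    import numpy as np

    # Hypothetical two-state DFA over {a, b} that accepts strings containing
    # an even number of 'b's.  Columns index the current state, rows the next.
    T = {
        'a': np.array([[1, 0],
                       [0, 1]]),
        'b': np.array([[0, 1],
                       [1, 0]]),
    }
    ACCEPTING = np.array([1, 0])        # state 0 is the accepting state

    def step(state, symbol):
        # One recurrent step: hard-threshold the product so the state
        # remains an exact one-hot vector.
        return (T[symbol] @ state >= 1).astype(int)

    def accepts(string):
        state = np.array([1, 0])        # start in state 0
        for ch in string:
            state = step(state, ch)
        return bool(ACCEPTING @ state)

    print(accepts("abba"))   # True  (two 'b's)
    print(accepts("ab"))     # False (one 'b')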


  In addition, the class of Turing-equivalent networks is here
examined.  It is argued that the relevance of such networks to
cognitive modeling is seriously undermined by their reliance on
infinite precision in crucial weights and/or node activations.
I also examine what advantages these networks might conceivably
possess over and above classical symbolic algorithms.  For, from a
cognitive standpoint, the Turing-equivalent networks present
difficulties very similar to those of certain classical
algorithms: they appear highly contrived, their structure is
fragile, and they exhibit little or no noise-tolerance.
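
  To illustrate the precision issue (a generic sketch, not the
particular construction examined in the paper): one well-known
device in Turing-equivalence proofs is to pack an unbounded binary
stack into a single activation value as a base-4 fraction, so that
push and pop become simple affine operations.  The scheme works
only if the activation can hold arbitrarily many digits.

    from fractions import Fraction

    def push(x, bit):
        # Prepend a bit: the new top becomes the leading base-4 digit (1 or 3).
        return (x + 2 * bit + 1) / 4

    def pop(x):
        # Read and remove the top bit.
        top = int(4 * x) // 2
        rest = 4 * x - (2 * top + 1)
        return top, rest

    bits = [1, 0, 1, 1, 0, 1] * 10              # a 60-bit stack
    exact, approx = Fraction(0), 0.0
    for b in reversed(bits):
        exact, approx = push(exact, b), push(approx, b)

    exact_out, approx_out = [], []
    for _ in bits:
        b, exact = pop(exact)
        exact_out.append(b)
        b, approx = pop(approx)
        approx_out.append(b)

    print(exact_out == bits)    # True:  unbounded precision recovers every bit
    print(approx_out == bits)   # False: 64-bit floats lose the deeper bits

With exact rational arithmetic the whole stack is recovered; with
64-bit floats the encoding silently degrades once the stack
outgrows the mantissa, which is the noise-intolerance worry in
miniature.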


                     (21 pages  --  1.5 spacing)


   Available by web at:  www.cs.sfu.ca/~hadley/online.html

