Are Neural Nets Turing Equivalent?

CFoster@cogsci.edinburgh.ac.uk
Fri Sep 21 11:14:26 EDT 1990


Given the discussion thus far, and in particular Dave Touretzky and
Peter Cariani's comments on the discrepancy between theoretical
computer science and inherently finite real cognitive and computational
systems, you may be interested in the following.

I have just submitted my Ph.D. thesis 'Algorithms, Abstraction and
Implementation: A Massively Multilevel Theory of Strong Equivalence
of Complex Systems'.  It is a formalisation of a notion of algorithms
that can be used across languages, hardware and architectures (notably
connectionist and classical -- this was my starting point), and that
can ground a stronger equivalence than just input-output (weak)
equivalence.  I spell this out as equivalence in terms of the states
which systems pass through at a given level of description.

The main point is this:  I started with the assumption of finiteness.
Algorithms are defined as finite sets of finite sequences of finite
states.  In trying to relate them to Turing computability, I ended
up characterising the class of functions computed (or at least described)
by them as 'bounded computable functions' or, to put it another way,
as the class of functions computed by 'finite tape machines'.
In contrast to some common usage, a finite tape machine (a Turing machine
with a finite tape) is NOT the same as a finite state machine.  The
latter is generally defined over all integers, and actually gets its
constraints from restrictions on HOW it reads its infinite tape, not
from being limited to finitely many states.  A general Turing machine
has only finitely many control states, of course, so that cannot be
the interesting distinction.
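To make the contrast concrete, here is a minimal sketch (my own illustration, not taken from the thesis) of a finite state machine: it has just two control states, yet it is defined for inputs of ANY length, because its real restriction lies in how it reads, not in how much it can store.

```python
def parity_fsm(bits):
    """A two-state finite state machine computing the parity of its input.

    Despite having only two control states, it is total over inputs of
    any finite length.  Its constraint is in HOW it reads: one symbol
    at a time, left to right, never revisiting a cell.
    """
    state = 0            # the machine's single bit of control state
    for b in bits:       # one left-to-right pass over the input
        state ^= b       # flip on every 1 seen
    return state
```

A finite tape machine, by contrast, may revisit and overwrite its cells freely, but only within a fixed number of them; there it is the workspace, not the reading discipline, that is bounded.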

Somewhat surprisingly, the super-finite bounded computable functions
do not even seem to be a subset of the (total) computable functions,
but possibly of the partially computable functions.  This is because
inputs not defined for the finite tape machine may actually be defined,
and cause the system to halt with a sensible output, at a lower level
of description (depending on the implementation) -- but then again they
may not be defined there either.  We just don't know what happens to
them.  This is quite similar to the case of unexpected inputs to actual
computer systems.
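As an illustration of this partiality (again my own sketch, not from the thesis): a machine that increments a binary number on a tape of a fixed number of cells computes a partial function, because on inputs whose carry runs off the end of the tape its behaviour is simply undefined at this level of description.

```python
def finite_tape_increment(bits, cells):
    """Increment a little-endian binary number on a tape of `cells` cells.

    Returns the new tape contents, or None when the machine runs off
    the end of its finite tape -- i.e. the input lies outside the
    domain of the bounded computable function at this level of
    description.  What a real implementation would do with such an
    input is left open, mirroring the situation in the text.
    """
    if len(bits) > cells:
        return None                  # input does not even fit on the tape
    tape = list(bits) + [0] * (cells - len(bits))
    head = 0
    while head < cells:
        if tape[head] == 1:          # propagate the carry rightwards
            tape[head] = 0
            head += 1
        else:                        # absorb the carry and halt
            tape[head] = 1
            return tape
    return None                      # carry ran off the finite tape: undefined
```

For example, incrementing 5 on a 4-cell tape succeeds, while incrementing 15 (all ones) needs a fifth cell and so falls outside the function's domain.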

C. Foster


More information about the Connectionists mailing list