Neuring Machine
Paul Kube
kube%cs at ucsd.edu
Tue Jan 16 18:44:38 EST 1990
> Date: Tue, 16 Jan 90 12:33:33 -0500
> From: Jordan B Pollack <pollack at cis.ohio-state.edu>
>
> There are two reasons not to assume an unbounded network:
> 1) Each new unit doesn't just add more memory, but also adds
> "control."
Yes, you'd get two different connectionist models in the
two cases (infinite size vs. infinite precision). But they'd
both be connectionist models, more or less equally abstract, so
for arguments about theoretical reduction of TMs to networks they seem
prima facie equally good. That's all I was getting at.
> 2) The Neuron Factory would still be "outside" the system, which is
> the original problem with McPitt's tape.
It seems to me the issue turns on whether scaling the machine up to
handle bigger problems makes the machine of a different class. So,
usually, everybody agrees that Jim's Sun is a universal machine; adding more
memory (and maybe an addressing mode to the 68020) is "trivial".
Adding unbounded memory to a finite-state machine, or another stack to
a PDA, is a nontrivial change. I would think that adding more neurons
to a net keeps it a neural net, though you could put restrictions on
the nets you're interested in to prevent that. Since no natural
computational classification of nets is yet known, how you define your
model class is up to you; but the original question seemed to be about
Turing equivalence of unrestricted connectionist networks.
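To make the "different class" point concrete, here is a minimal sketch
(mine, not from the original exchange; the function name and the bound
parameter are hypothetical) of a recognizer for { a^n b^n }: with an
unbounded counter it behaves like a PDA and accepts the whole language,
while any fixed bound on the counter leaves only a finite-state machine
that fails once n exceeds the bound.

    # Hypothetical illustration, not from the original post: a recognizer
    # for { a^n b^n }.  Unbounded counter = PDA-like, accepts the whole
    # language; a fixed bound on the counter = finitely many states, so
    # strings with n > bound are rejected -- the "nontrivial change of
    # class" discussed above.

    def accepts_anbn(s, bound=None):
        """Accept a^n b^n.  bound=None: unbounded counter (PDA-like).
        bound=k: counter saturates at k, i.e. only finitely many states."""
        count = 0
        i = 0
        while i < len(s) and s[i] == 'a':      # count the leading a's
            count += 1
            i += 1
            if bound is not None and count > bound:
                return False                   # finite memory exhausted
        while i < len(s) and s[i] == 'b':      # cancel them against b's
            count -= 1
            i += 1
            if count < 0:
                return False
        return i == len(s) and count == 0

    print(accepts_anbn('a' * 10 + 'b' * 10))            # True
    print(accepts_anbn('a' * 10 + 'b' * 10, bound=5))   # False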
> Also, there IS a difference between infinite and unbounded (but
> still finite in practice). Various proofs of the "universal
> approximation" of neural networks (such as White et al.)
> depend on an unbounded (but not infinite) number of hidden units.
Isn't it just a matter of how you order the quantifiers? Every TM
computation requires only finite tape, but no finite tape will suffice
for every TM computation. Similarly, in White et al., every
approximation bound on a Borel-measurable function can be satisfied
with a finite net, but no finite net can achieve every approximation
bound on every function.
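Spelled out in quantifier notation (my own formalization of the point,
not from the original post; here $\mathcal{F}$ stands for the class of
Borel-measurable functions and $\|\cdot\|$ for whatever approximation
metric the theorem uses):

    \forall\, \text{halting computation } c \;\exists\, n \in \mathbb{N} :\;
        c \text{ uses at most } n \text{ tape cells},
    \quad\text{but not}\quad
    \exists\, n \in \mathbb{N} \;\forall\, c :\;
        c \text{ uses at most } n \text{ cells}.

    \forall f \in \mathcal{F} \;\forall \varepsilon > 0 \;\exists\, \text{finite net } N :\;
        \|N - f\| < \varepsilon,
    \quad\text{but not}\quad
    \exists\, \text{finite net } N \;\forall f \in \mathcal{F} \;\forall \varepsilon > 0 :\;
        \|N - f\| < \varepsilon.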
> Paul Kube (along with several readers of my thesis)
> seemed to take my theoretical argument (about a sufficient set of
> primitive elements) as a programme of building neural networks in a
> physically impossible and very stilted fashion.
Sorry, I didn't mean to imply that you were trying to do
anything impossible, or even stilted! I think that results on
computational power of abstract models are interesting, and that
questions about their relation to practical machines are useful
to think about. But in the absence of any more concrete results,
maybe we should give it a rest, and get back to discussing
advertising and marketing strategies for our graduate programs. :-)
--Paul
kube at cs.ucsd.edu