No subject

David Wolpert dhw at santafe.edu
Fri Sep 10 01:30:35 EDT 1993


Penio Penev writes of the recently posted abstract from my paper w/
Bruce Maclennan:

>>>
Thresholding is the most non-linear finite function.  In a sense, the
computer I'm writing this lines on is a finite linear machine with an
"IF" function, which is the same as thresholding. If I believe, that
my machine (given infinite disk space) is computationally universal, I
have no problems believing, that adding an infinite processor to it
would be universal also.
>>>

Let me emphasize some points which are made clear in the paper (which 
I suspect Dr. Penev has not yet read).

First, as even the abstract mentions, all that's non-linear in our
system is the *representation*; the dynamics as the signal
propagates through the net is purely linear.

As an analogy, it's like having a conventional neural net, except that
the net doesn't use sigmoids to go from layer to layer; the dynamics
as the signal is propagated through layers is purely linear. Then, when 
the signal has run through, you interpret output by looking at which 
of the output neurons don't have the value 0. That final interpretation
is the *only* place that thresholding arises.
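To make the analogy concrete, here is a minimal sketch of my own (with hypothetical weights, not the construction in the paper): signal propagation is nothing but matrix multiplication, and the sole non-linearity is the final read-out of which output units are non-zero.

```python
def linear_layer(weights, signal):
    """Purely linear layer-to-layer propagation: a matrix-vector
    product, with no sigmoid or other squashing function."""
    return [sum(w * s for w, s in zip(row, signal)) for row in weights]

def threshold_readout(signal):
    """The only non-linear step: report which outputs are non-zero."""
    return [i for i, v in enumerate(signal) if v != 0]

# Hypothetical two-layer weights, chosen purely for illustration.
W1 = [[1, -1], [0, 1]]
W2 = [[1, 0], [1, 1]]

x = linear_layer(W1, [1, 1])   # [0, 1] -- still linear
x = linear_layer(W2, x)        # [0, 1] -- still linear
result = threshold_readout(x)  # [1]: only unit 1 is non-zero
```

The point of the sketch is just that the non-linearity lives entirely in the representation (the final non-zero test), never in the dynamics.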

The important point is that it's the representation which is non-linear, 
not the dynamics. Note that such non-linear representations are
quite common in neural nets; results like those in our paper show that
that non-linearity suffices (in the continuum limit), and the non-linearity
of sigmoids is not needed.

Second, although I'm not sure I understand exactly what Dr. Penev
means by "adding an infinite processor to an infinite disk space machine",
I certainly would agree that, w/ *countably* infinite "memory", and *discrete*
dynamics of the signal through the net, it's trivial to use thresholding to 
get computational universality. In fact, we devote a paragraph to this very 
point early in the paper.

What's quite a bit more difficult is emulating an arbitrary (discrete
space and time) Turing machine using a neural net with an *uncountably* 
infinite "memory", and *continuum* dynamics of the signal through
the net (i.e., signal dynamics governed by differential equations, rather 
than by discrete time steps).
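A toy illustration of the contrast (again my own, not the paper's construction): the same linear propagation can be realized as continuum dynamics, i.e. as the flow of a differential equation dx/dt = M x integrated forward in time, rather than as discrete layer-to-layer steps, with thresholding applied only to the final state.

```python
def flow(M, x, t=1.0, steps=10000):
    """Forward-Euler integration of the linear ODE dx/dt = M x
    from time 0 to time t -- continuum dynamics, no discrete layers."""
    dt = t / steps
    for _ in range(steps):
        x = [xi + dt * sum(m * xj for m, xj in zip(row, x))
             for xi, row in zip(x, M)]
    return x

# Hypothetical generator: with the nilpotent M below, the exact
# time-1 flow is the linear map x -> [[1, 1], [0, 1]] x.
M = [[0.0, 1.0], [0.0, 0.0]]
x = flow(M, [1.0, 2.0])  # approximately [3.0, 2.0]

# Thresholding enters only here, at the very end.
readout = [i for i, v in enumerate(x) if abs(v) > 1e-6]
```

Emulating a full Turing machine this way is of course far harder than this two-dimensional flow; the sketch only shows what "continuum dynamics of the signal" means operationally.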

In addressing this issue, our paper parallels Steve Omohundro's earlier
work showing how to emulate an arbitrary (discrete space and time) cellular
automaton with differential equations.

There are several interesting aspects to such issues. One is the
intrinsic interest of the math. Another is the fact that the universe is
in fact continuous and not discrete. (Note the resultant issue of using
continuum-limit analyses like that of our paper to construct analog
computers.) And perhaps most enticingly, there's the
fact that one example of a system like that described in our paper 
is a wave function evolving according to Schrodinger's equation. 
(The dynamics in quantum mechanics is purely linear, with the 
non-linearity we see in the world around us arising from the "thresholding" 
of the collapse of the wave packet, roughly speaking.) Although
we didn't have space to pursue this issue in our paper, it suggests that
the systems we investigate can be "trained" using the very thoroughly
understood machinery of quantum mechanical scattering theory.

All of this is discussed in our paper.


