analog noise

Pekka Orponen orponen at math.jyu.fi
Sun Dec 1 07:09:22 EST 1996


Dear Connectionists:

Sorry to waste the bandwidth, but I think the facts are still
not straight.

On Sat, 30 Nov 1996, Mike Casey wrote:

> 
> Regarding the RNN+noise implying only finite state machine power result, I
> completely agree that their result is a nice extension of the corollary
> that I proved

The readers are welcome to look up the papers and compare our "nice
extension" to Dr. Casey's proof -- or come see our poster at NIPS.
(We actually obtained our result before seeing Dr. Casey's paper, but
that is beside the point.)

> they failed to mention that I had already
> proved something very similar.

This is true, and an unfortunate omission on our part. While we do
discuss Dr. Casey's paper and point out the limitations of the noise
model he considers, I only now realize that we failed to include an
explicit statement to the effect that "Casey proved an analogous
result in his model".

I can assure you that this omission was not intentional: we found out
about Dr. Casey's work only after completing the first version of our
paper, and so added the technical comparisons to an already existing
text. To us it was so clear _what_ Dr. Casey's result was that we
forgot to state it explicitly. I suppose that one sentence would have
set the record straight and saved us all from this discussion.

> Their "clipped" Gaussian noise is a special
> case of bounded noise with arbitrary distribution (Bowen's pseudo-orbit
> formalism), so there's no sense in which they "relaxed" the definition of
> analog noise.

Well, the "clipped" Gaussian noise is bounded because the state space is.
It is, admittedly, not technically clear that large noise levels could
not be exploited in some devious way in our setting, but this is a
minor issue. Also, if I read Dr. Casey's paper correctly, he assumes
that the noise level is nonzero everywhere within an \epsilon radius
of the correct state, an assumption that we do not need. But these are
minor technical points that could probably also be accommodated in his
proof.
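For concreteness, here is a minimal sketch (in Python with NumPy; the
function name, the linear map W, and the sigma value are my own
illustrative choices, not the construction from either paper) of why a
"clipped" Gaussian update is automatically bounded:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_step(x, W, sigma=0.05):
    """One linear network update with 'clipped' Gaussian noise: an
    unbounded Gaussian perturbation is added, and the state is then
    clipped back into the bounded state space [-1, 1]^n -- so the
    effective noise is bounded simply because the state space is."""
    x = W @ x + sigma * rng.standard_normal(x.shape)
    return np.clip(x, -1.0, 1.0)

# Example: the state never leaves [-1, 1]^n, however large sigma is.
x = noisy_step(np.zeros(4), np.eye(4), sigma=100.0)
```

The point of the example is only that boundedness here comes from the
state space, not from any extra assumption on the noise distribution.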

> wouldn't lead to anything interesting).  Finally, in section 4 of their
> paper where they concretely discuss RNNs performing computations, they
> assume that the noise is bounded and that the computation is done with
> perfect reliability (which were precisely the assumptions that I used
> which they have spent so much time discrediting in other parts of the
> paper).

This is not quite fair, because in this section we do not _assume_
that the computation can be performed with 100% reliability, but
_prove_ that it can be, in the case where one only wants to simulate
finite state automata and the noise is bounded at some sufficiently
low level. (This is actually an embarrassingly simple result, which
we just didn't find in the literature for the basic BSB network
model we consider.) The computational upper bound result, which is
the more significant one, shows that even if we don't assume 100%
reliability, we still cannot do more than finite automata.
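To convey the flavour of that simple positive result, here is a sketch
under my own assumptions (a two-state parity automaton with a
sign-threshold state encoding, not the BSB construction of our paper):
any noise bounded strictly below the separation margin of the encoding
can never flip a transition, so the simulation is perfectly reliable.

```python
import numpy as np

def step(state, bit, noise):
    """One transition of a two-state automaton (state in {+1.0, -1.0})
    that flips on input bit 1. The restorative sign threshold means
    noise with |noise| < 1 (the separation margin) cannot cause an
    error, so the noisy run tracks the noiseless one exactly."""
    assert abs(noise) < 1.0             # bounded-noise assumption
    target = -state if bit else state   # intended next state
    return 1.0 if target + noise > 0 else -1.0

# Run a bit sequence with noise of magnitude up to 0.9 and compare
# against the noiseless automaton.
rng = np.random.default_rng(1)
state, ideal = 1.0, 1.0
for bit in [1, 0, 1, 1, 0, 1]:
    state = step(state, bit, 0.9 * rng.uniform(-1.0, 1.0))
    ideal = -ideal if bit else ideal
```

With unbounded noise (e.g. unclipped Gaussian) no such guarantee is
possible, which is exactly where the upper bound result takes over.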

I will try to stay offline from now on; see you at NIPS.

-- Pekka Orponen


