No subject

Ray White white at teetot.acusd.edu
Wed Nov 6 19:39:03 EST 1991


Yoshio Yamamoto <yoshio at eniac.seas.upenn.edu> wrote:
> ...
> Suppose you have two continuous input units whose data are normalized between 
> 0 and 1, several hidden units, and one continuous output unit.
> Also suppose the input given to input unit A is totally unrelated to the
> output; that input is just a random number in [0,1].  The input to unit B, on
> the other hand, is strongly correlated with the output.

> What we need, therefore, is a trained network that shows no correlation
> between input A and the output.  ...

> Why is this interesting?  This is useful in practical problems.
> Initially you don't know which inputs are correlated with the outputs and
> which are not, so you use all available inputs anyway.  If there is a
> nonsense input, it should be identified as such by the neural network and
> its influence automatically suppressed.

> The best solution we have in mind is that if no correlation is identified,
> then the weights associated with that input should shrink to zero.

> Is there any way to handle this problem? 

> As a training tool we assume backprop.

> Any suggestion will be greatly appreciated.
> - Yoshio Yamamoto
>   General Robotics And Sensory Perception Laboratory (GRASP)
>   University of Pennsylvania

In reading this, I infer that the problem is that training the net with
backprop does not produce the desired behavior. I'm not enough of a
backprop person to know whether that inference is correct. But in any
case, why use backprop, when the desired behavior is a natural outcome of
training the hidden units with an optimizing algorithm similar to Hebbian
learning, but modified so that the learning is correlated with the desired
output function?
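
Something of that general flavour, as a rough toy sketch (Python-style
pseudocode of my own; the function name and the decay term are just
assumptions for illustration, not a rule taken from any particular paper):

    import numpy as np

    # Toy sketch: a Hebbian-style update for one hidden unit, with the
    # usual activity product scaled by the desired output, plus a small
    # decay term.  Weights from an input that never helps predict the
    # target should drift toward zero under the decay.
    def hebbian_update(w, x, target, lr=0.01, decay=0.001):
        h = np.tanh(np.dot(w, x))                  # hidden-unit activation
        return w + lr * target * h * x - decay * w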

An example of such an algorithm is Competitive Hebbian Learning, which
will be published in the first (or maybe the second) 1992 issue of
NEURAL NETWORKS (Ray White, Competitive Hebbian Learning: Algorithm and
Demonstrations). One trains the hidden units to compete with each
other as well as with the inverse of the desired output function. I've
tried it on Boolean functions and it works, though I haven't tried
the precise problem with real-valued inputs. Other optimizing
"soft-competition" algorithms may also work.

One should get the best results for the output layer by training it with
a delta rule (not backprop, since only the output layer still needs
training). Competitive Hebbian Learning may work for the output layer
as well, but one should get better convergence to the desired output
with delta-rule training.
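
For a single linear output unit, the delta rule is just the usual
Widrow-Hoff / LMS update (sketch below; the variable names are mine):

    import numpy as np

    # LMS update for the output weights v, given the hidden activities h
    # and the desired output.
    def delta_rule_step(v, h, target, lr=0.05):
        y = np.dot(v, h)                  # linear output
        return v + lr * (target - y) * h  # move the output toward the target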

Ray White	(white at teetot.acusd.edu)
Depts. of Physics & Computer Science
University of San Diego


