Linear nonseparability

Dan Nabutovsky FEDIMIT at weizmann.weizmann.ac.il
Sun Mar 15 14:11:02 EST 1992


> From: christoph bruno herwig <cherwig%eng.clemson.edu at CARNEGIE.BITNET>


> I was wondering if someone could point me in the right direction
> concerning the following fundamental separability problem:

> Given a binary (-1/1) valued training set consisting of n-dimensional
> input vectors (homogeneous coordinates: n-1 inputs and a 1 for the bias
> term as the n-th dimension) and 1-dimensional target vectors. For this
> 2-class classification problem I wish to prove (non-) linear separability
> solely on the basis of the given training set (hence determine if
> the underlying problem may be solved with a 2-layer feedforward network).

   An algorithm that solves this problem is described in our paper:

     D. Nabutovsky & E. Domany, "Learning the Unlearnable",
           Neural Computation 3(1991), 604.

   We present a perceptron learning rule that finds a separating
plane when the set of patterns is linearly separable, and proves
linear non-separability otherwise.

   Our approach is completely different from those described by
Christoph. The idea is to do perceptron learning while always
keeping track of a constraint on the distance between the current
weight vector and any possible solution. When this constraint
becomes impossible to satisfy, non-separability is proved.

   Using a careful choice of the learning step size, we ensure
that the algorithm always finds a solution or proves its absence
in a finite number of steps.
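   For the rule itself, see the paper. As a self-contained
illustration that the question can be decided exactly from the
training set alone, here is a sketch in Python that tests strict
feasibility of the system y_i (w . x_i) > 0 by exact
Fourier-Motzkin elimination. This is a different, brute-force
technique, not the Nabutovsky-Domany rule, and the function names
are my own:

```python
from fractions import Fraction

def strictly_separable(patterns, labels):
    """Decide whether labelled patterns (given in homogeneous
    coordinates, i.e. with the constant bias input included) admit
    a strict linear separation y_i * (w . x_i) > 0 for some w.
    Exact over the rationals; exponential in the worst case, so
    only suitable for small training sets."""
    rows = [[Fraction(y) * Fraction(x) for x in xs]
            for xs, y in zip(patterns, labels)]
    return _feasible(rows)

def _feasible(rows):
    # Each row r encodes the strict constraint r . w > 0.
    if any(all(c == 0 for c in r) for r in rows):
        return False          # a derived constraint reads 0 > 0
    if not rows:
        return True           # no constraints left: any w works
    # Eliminate the first variable w_0.
    pos = [r for r in rows if r[0] > 0]   # lower bounds on w_0
    neg = [r for r in rows if r[0] < 0]   # upper bounds on w_0
    zero = [r[1:] for r in rows if r[0] == 0]
    # Each (lower, upper) pair must leave a nonempty open interval,
    # which yields one new strict constraint without w_0.
    combined = [[-n[0] * pc + p[0] * nc
                 for pc, nc in zip(p[1:], n[1:])]
                for p in pos for n in neg]
    return _feasible(zero + combined)

# AND on +/-1 inputs (homogeneous coordinates) is separable:
and_x = [(1, 1, 1), (1, -1, 1), (-1, 1, 1), (-1, -1, 1)]
print(strictly_separable(and_x, [1, -1, -1, -1]))   # -> True
# XOR is the classic non-separable case:
print(strictly_separable(and_x, [-1, 1, 1, -1]))    # -> False
```

   Unlike the perceptron-based rule described above, this sketch
gives no separating plane and scales poorly with the number of
patterns; it is only meant to make the decision problem concrete.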

   Dan Nabutovsky (FEDIMIT at WEIZMANN.WEIZMANN.AC.IL)


