linear separability
Christoph Bruno Herwig
cherwig at eng.clemson.edu
Sun Mar 8 03:19:24 EST 1992
I was wondering if someone could point me in the right direction
concerning the following fundamental separability problem:
Given a binary (-1/+1) valued training set consisting of n-dimensional
input vectors (in homogeneous coordinates: n-1 inputs plus a constant 1
for the bias term as the n-th dimension) and 1-dimensional target
vectors, I wish to prove (non-)linear separability of this 2-class
classification problem solely on the basis of the given training set
(and hence determine whether the underlying problem can be solved with
a 2-layer feedforward network, i.e. one with no hidden layer).
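For concreteness, a minimal sketch of this representation (Python/NumPy,
which of course postdates this post; the patterns shown are made-up
example data, an XOR-like set, purely for illustration):

    import numpy as np

    # Raw +/-1 inputs: m patterns, each with n-1 components (made up).
    raw = np.array([[ 1, -1],
                    [-1,  1],
                    [ 1,  1],
                    [-1, -1]])
    targets = np.array([1, 1, -1, -1])  # +/-1 class labels (this is XOR)

    # Homogeneous coordinates: append a constant 1 as the n-th dimension,
    # so the bias becomes an ordinary weight and the separating
    # hyperplane passes through the origin.
    X = np.hstack([raw, np.ones((raw.shape[0], 1), dtype=int)])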
My approach so far:
Denote the input vectors of the first (second) class by H1 (H2). We need
to find a hyperplane in n-dimensional space separating the vectors of H1
from those of H2. Because of the homogeneous coordinates, the hyperplane
passes through the origin, the center of the n-dimensional hypercube.
Negate the vectors of H2, and the problem becomes: find a hyperplane
with all vectors (those of H1 and the negated vectors of H2) strictly on
one side of the plane and none on the other. Pick one of the vectors
(= a vertex of the hypercube) and compute the Hamming distance to all
other vectors. Clearly, there must be a constraint on how many of the
vectors are allowed to be how far "away" in order for a separating
hyperplane to exist. E.g.: if any of the Hamming distances were n, the
training set would not be linearly separable, since a distance of n
means two vectors are antipodal (x and -x), and no hyperplane through
the origin can have both strictly on the same side.
My problem concerns the corresponding constraints for distances of n-1,
n-2, etc.
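To make the negation step concrete, here is a small sketch (Python/NumPy;
the function name is my own choice) that negates H2 and computes the
Hamming distances from one chosen vector to all others. For +/-1 vectors
the Hamming distance is (n - x.y)/2, so a distance of n is exactly the
antipodal case:

    import numpy as np

    def hamming_profile(H1, H2, ref=0):
        """Negate the class-2 vectors, then return the Hamming distances
        from the ref-th vector to every other vector."""
        X = np.vstack([np.asarray(H1), -np.asarray(H2)])  # negation trick
        n = X.shape[1]
        # For +/-1 vectors: Hamming distance = (n - dot product) / 2.
        d = (n - X @ X[ref]) // 2
        return np.delete(d, ref)  # drop the zero self-distance

    # Any distance equal to n flags an antipodal pair, which already
    # rules out linear separability.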
Has anyone taken a similar approach? Are there alternative ways of
deciding linear separability? I tried linear programming, but the
simplex algorithm uses a prohibitive number of variables for a mid-size
problem.
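For what it is worth, the feasibility version of the LP needs only n
variables (one per weight): after negating H2, the set is linearly
separable iff the system X w >= 1 is feasible, since any solution of the
strict homogeneous system X w > 0 can be rescaled. A sketch of this
formulation using SciPy's linprog (a modern solver, used here purely to
illustrate; is_linearly_separable is my own name):

    import numpy as np
    from scipy.optimize import linprog

    def is_linearly_separable(H1, H2):
        """Decide separability via an LP feasibility problem with
        n variables (the weights) and m inequality constraints."""
        X = np.vstack([np.asarray(H1), -np.asarray(H2)])  # negation trick
        m, n = X.shape
        # Find any w with X w >= 1; the objective is irrelevant (c = 0).
        res = linprog(c=np.zeros(n), A_ub=-X, b_ub=-np.ones(m),
                      bounds=[(None, None)] * n, method="highs")
        return res.success  # feasible <=> linearly separable

On a separable set, res.x is a witness weight vector w; on the XOR-like
example above, the LP is infeasible.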
Your help is greatly appreciated,
Christoph.