The Four-Quadrant Problem

Alexis Wieland alexis at marzipan.mitre.org
Wed Aug 31 07:58:43 EDT 1988


Let me try to remove some of the confusion I've caused.

The four-quadrant problem is *my* name for an easily described problem which
*requires* a neural net with three (or more) layers (i.e., 2+ hidden layers).

The only relation of all this to the recent DARPA report is that the report
uses an illustration of this problem in passing as an example of what a
two-layer net can do (which I assert it cannot).

The four-quadrant problem is to use a 2-input/1-output   AAAAAAAAA***BBBBBBBBB
network and, assuming that the inputs represent xy pts   AAAAAAAAA***BBBBBBBBB
on a Cartesian plane, classify all the points in the     AAAAAAAAA***BBBBBBBBB
first and third quadrants as being in one class and all  AAAAAAAAA***BBBBBBBBB
the points in the second and fourth quadrants as being   AAAAAAAAA***BBBBBBBBB
in the other class.  For pragmatic reasons, you can      *********************
allow a "don't care" region along each axis not to       *********************
exceed a fixed width delta.  This is illustrated at      BBBBBBBBB***AAAAAAAAA
left: A's are one class (i.e., one output (or range      BBBBBBBBB***AAAAAAAAA
of outputs)), B's are the other class (i.e.,  another    BBBBBBBBB***AAAAAAAAA
output (or non-overlapping range of outputs)), and *'s   BBBBBBBBB***AAAAAAAAA
are don't cares.  As always with this sort of problem,   BBBBBBBBB***AAAAAAAAA
rotations and translations of the figure can be ignored.
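For concreteness, the target labeling described above can be sketched in a few
lines of Python.  The function name, the choice of delta, and the use of None
for the don't-care region are my own illustrative choices, not part of the
original problem statement:

```python
def four_quadrant_label(x, y, delta=0.1):
    """Target function for the four-quadrant problem.

    Returns 'A' for points in quadrants I and III, 'B' for points in
    quadrants II and IV, and None for points inside the don't-care
    band of width delta centered on either axis.
    (delta=0.1 is an arbitrary illustrative choice.)
    """
    # Don't-care region: within delta/2 of the x-axis or the y-axis.
    if abs(x) <= delta / 2 or abs(y) <= delta / 2:
        return None
    # x*y > 0 exactly when x and y share a sign (quadrants I and III).
    return 'A' if x * y > 0 else 'B'
```

A net solves the problem if, away from the don't-care band, its output falls
in one range for the 'A' points and a non-overlapping range for the 'B'
points.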

Alexis Wieland
alexis%yummy at gateway.mitre.org

