Connectionist symbol processing: any progress?
Lev Goldfarb
goldfarb at unb.ca
Wed Aug 12 20:52:09 EDT 1998
On Tue, 11 Aug 1998 Dave_Touretzky at cs.cmu.edu wrote:
> I'd like to start a debate on the current state of connectionist
> symbol processing. Is it dead? Or does progress continue?
> .................................................................
> So I concluded that connectionist symbol processing had reached a
> plateau, and further progress would have to await some revolutionary
> new insight about representations.
> ....................................................................
> Today, Michael Arbib is working on the second edition of his handbook,
> and I've been asked to update my article on connectionist symbol
> processing. Is it time to write an obituary for a research path that
> expired because the problems were too hard for the tools available?
> Or are there important new developments to report?
>
> I'd love to hear some good news.
David,
I'm afraid I haven't got the "good news", but, who knows, some good may
still come out of this discussion.
About 8-9 years ago, soon after the birth of the connectionists mailing
list, there was a discussion somewhat related to the present one. I recall
stating, in essence, that it doesn't make sense to talk about
connectionist symbol processing simply because the connectionist
representation space--the vector space over the reals--by its very
definition (recall the several axioms that define it) doesn't allow one to
"see" practically any symbolic operations, and therefore one cannot
construct, or learn, in it (without cheating) the corresponding inductive
class representation. I have been reluctant to put a substantial effort
into a formal proof of this statement since I believe (after so many years
of working with symbolic data) that it is, in some sense, quite
obvious (see also [1-3]).
Let me try, again, to clarify the above. Hacking aside, the INPUT SPACE of
a learning machine must be defined axiomatically, as is the now universal
practice in mathematics. These axioms define the BASIC OPERATIONAL BIAS of
the learning machine, i.e. the bias related to the class of permitted
object operations (compare with the central CS concept of abstract data
type). There could, of course, be other, additional biases related to
different classes of learning algorithms, each operating, however, in the
SAME input space (compare, for example, with Chomsky's overall framework
for languages and its various subclasses of languages).
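To make the analogy with abstract data types concrete, here is a minimal
Python sketch (my own illustration, not anything from the post or the
literature; all class and method names are hypothetical). The interface
declares the permitted object operations, and choosing a "numeric" versus
a "symbolic" implementation of it is precisely a choice of basic
operational bias:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
from abc import ABC, abstractmethod

class InputSpace(ABC):
    """The input space as an abstract data type: a learning machine may
    manipulate objects ONLY through the operations declared here, so the
    interface itself fixes the machine's basic operational bias."""

    @abstractmethod
    def neighbours(self, obj):
        """Objects reachable from `obj` by one permitted operation."""

class VectorSpace(InputSpace):
    """A 'numeric' bias: the permitted operations are vector translations."""

    def __init__(self, steps):
        self.steps = steps          # e.g. unit steps along each coordinate

    def neighbours(self, v):
        return [tuple(x + s for x, s in zip(v, step)) for step in self.steps]

class StringSpace(InputSpace):
    """A 'symbolic' bias: the permitted operations are single-symbol
    insertions and deletions."""

    def __init__(self, alphabet):
        self.alphabet = alphabet

    def neighbours(self, s):
        deletions = [s[:i] + s[i+1:] for i in range(len(s))]
        insertions = [s[:i] + c + s[i:]
                      for i in range(len(s) + 1) for c in self.alphabet]
        return deletions + insertions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Two learning algorithms may then differ in many other respects, but as
long as they touch objects only through neighbours() they operate in the
SAME input space in the sense above.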
It appears that the present predicament is directly related to the fact
that, historically, essentially no work was done in mathematics on the
formalization of the concept of a "symbolic" representation space.
Apparently, such spaces are nontrivial generalizations of the classical
representation spaces, the latter being used in all the sciences and
having evolved from the "numeric" spaces. I emphasize "in mathematics" since
logic (including computability theory) does not deal with the
representation spaces, where the "representation space" could be thought
of as a generalization of the concept of MEASUREMENT SPACE. By the way,
"measurement" implies the presence of some distance measure(s) defined on
the corresponding space, and that is the reason why the study of such
spaces belongs to the domain of mathematics rather than logic.
It appears to us now that there are fundamental differences between the two
classes of "measurement spaces": the "symbolic" and the "numeric" spaces
(see my home page). To give you at least some idea about the differences,
I am presenting below the "symbolic solution" (without the learning
algorithm) to the generalized parity problem, a problem quite notorious
within the connectionist community.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
THE PARITY CLASS PROBLEM
The alphabet: A = {a, b}
------------
Input set S (i.e. the input space without the distance function): The set
-----------
of strings over A.
The parity class C: The set of strings with an even number of b's.
------------------
Example of a positive training set C+: aababbbaabbaa
-------------------------------------- baabaaaababa
                                       abbaaaaaaaaaaaaaaa
                                       bbabbbbaaaaabab
                                       aaa
Solution to the parity problem, i.e. inductive (parity) class representation:
-----------------------------------------------------------------------------
One element from C+, e.g. 'aaa', plus the following 3 weighted operations
(note that the sum of the weights is 1):
     deletion/insertion of 'a'  (weight 0)
     deletion/insertion of 'b'  (weight 1)
     deletion/insertion of 'bb' (weight 0)
This means, in particular, that the DISTANCE FUNCTION D between any two
strings from the input set S is now defined as the shortest weighted path
(based on the above set of operations) between these strings. The class is
now defined as the set of all strings in the measurement space (S,D) whose
distance from 'aaa' is 0.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
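To see the above class representation in action, here is a minimal Python
sketch (mine, not part of the original formulation; the function names and
the length bound are my own scaffolding). It computes the distance D as
the cheapest sequence of the three weighted deletions/insertions, found by
a uniform-cost search over intermediate strings, and tests membership as
distance 0 from 'aaa':
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
import heapq

# Weights of the three operations from the class representation above;
# everything else in this sketch is illustrative scaffolding.
OPS = {'a': 0.0, 'b': 1.0, 'bb': 0.0}     # deletion/insertion weights

def distance(s, t, max_len=None):
    """Shortest weighted path of deletions/insertions from s to t.

    Uniform-cost search over intermediate strings; max_len bounds string
    growth so that the search stays finite.  A toy, for short strings only.
    """
    if max_len is None:
        max_len = max(len(s), len(t)) + 2
    best = {s: 0.0}
    frontier = [(0.0, s)]
    while frontier:
        cost, u = heapq.heappop(frontier)
        if u == t:
            return cost
        if cost > best.get(u, float('inf')):
            continue                          # stale queue entry
        successors = []
        for piece, w in OPS.items():
            k = len(piece)
            # delete one occurrence of `piece`
            successors += [(u[:i] + u[i+k:], w)
                           for i in range(len(u) - k + 1) if u[i:i+k] == piece]
            # insert `piece` at any position, within the length bound
            if len(u) + k <= max_len:
                successors += [(u[:i] + piece + u[i:], w)
                               for i in range(len(u) + 1)]
        for v, w in successors:
            if cost + w < best.get(v, float('inf')):
                best[v] = cost + w
                heapq.heappush(frontier, (cost + w, v))
    return float('inf')

def in_parity_class(s):
    return distance(s, 'aaa') == 0.0          # the class: distance 0 from 'aaa'

# e.g. in_parity_class('abba') -> True  (two b's)
#      in_parity_class('ab')   -> False (distance 1: one unmatched 'b')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that the search genuinely has to pass through intermediate strings:
for 'bab' the two b's become adjacent (so that the free deletion of 'bb'
applies) only after the interior 'a' has been deleted, which is why a
simple alignment between the two endpoint strings would not suffice.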
Why, then, do so many people work on "connectionist symbol
processing"? On the one hand, many of us feel (correctly, in my opinion)
that symbolic representation is a very important topic. On the other
hand, and of this I am quite sure, if we looked CAREFULLY at any
corresponding concrete implementation, we would see that in order to
"learn" the chosen symbolic class one had to smuggle into the model, in
some form, additional structure "equivalent" to the sought symbolic
structure (e.g. in the form of a recurrent ANN's architecture). This is,
again, due to the fact that in a vector space one simply cannot detect
(in a formally reasonable manner) any non-vector-space operations.
[1] L. Goldfarb, J. Abela, V.C. Bhavsar, V.N. Kamat, Can a vector space
based learning model discover inductive class generalization in a
symbolic environment? Pattern Recognition Letters, 16 (7), 1995, pp.
719-726.
[2] L. Goldfarb and J. Hook, Why classical models for pattern recognition
are not pattern recognition models, to appear in Proc. Intern. Conf.
on Advances in Pattern Recognition (ICAPR), ed. Sameer Singh,
Plymouth, UK, 23-25 Nov. 1998, Springer.
[3] V.C. Bhavsar, A.A. Ghorbany, L. Goldfarb, Artificial neural networks
are not learning machines, Tech. Report, Faculty of Computer Science,
U.N.B.
--Lev Goldfarb
http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm