Connectionists: The symbolist quagmire

Barak A. Pearlmutter barak at pearlmutter.net
Wed Jun 15 09:04:24 EDT 2022


In the GOFAI literature, "symbol" ala Simon and Newell basically meant
the kind of thing GENSYM gives you: an atomic token that can be put in
tables and such, but has no real internal structure.
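For readers who haven't used Lisp: a minimal sketch (in Python, purely illustrative, not any particular Lisp's implementation) of what a GENSYM-style symbol amounts to. Its only property is distinctness; it can key a table, but it has no internal structure to inspect.

```python
import itertools

# Monotonic counter so every generated token is fresh.
_counter = itertools.count()

def gensym(prefix="G"):
    """Return a new atomic token; no two calls ever collide."""
    return f"{prefix}{next(_counter)}"

a = gensym()
b = gensym()
assert a != b                          # distinct, structureless tokens
table = {a: "fact-1", b: "fact-2"}     # usable as table keys, nothing more
```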

Even on its own terms that notion didn't entirely pass muster, because
in relating that to cognition everyone implied that things like words
are symbols ("GRANDMOTHER"), but words *do* have coherent meaningful
internal structure: onomatopoeia, rhymes, alliteration, microfeatures,
"that word sounds like it came from the French",
"grand"++"mother"="grandmother", etc. And in connectionist systems
going back many decades we've had activity vectors that represent
things and have microfeatures. They're in the "family trees" paper
that popularized backprop and MLPs. What is "word2vec" but mapping
symbol-to-symbol? What is w2v("queen")-w2v("king") but a microfeature
for "female"?
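The point can be made concrete with a toy sketch. The vectors below are hand-made stand-ins for trained word2vec embeddings (dimensions chosen for illustration only); the difference of two dense vectors behaves exactly like a microfeature, and adding it back completes the analogy.

```python
import numpy as np

# Hand-made stand-ins for trained word2vec vectors; the axes are
# illustrative, roughly [royalty, femaleness, adulthood].
w2v = {
    "king":  np.array([0.9, 0.1, 0.7]),
    "queen": np.array([0.9, 0.9, 0.7]),
    "man":   np.array([0.1, 0.1, 0.6]),
    "woman": np.array([0.1, 0.9, 0.6]),
}

female = w2v["queen"] - w2v["king"]   # an emergent "female" microfeature
guess = w2v["man"] + female           # king : queen :: man : ?

def nearest(v):
    """Closest vocabulary item by Euclidean distance."""
    return min(w2v, key=lambda w: np.linalg.norm(w2v[w] - v))

print(nearest(guess))   # -> woman
```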

So what is this "symbol" thing that seems to pop up all over the place
inside our connectionist systems through training, but yet somehow
we're told they don't quite have? Is it that a "symbol" has to be able
to be copied without loss of fidelity? Stored on disk? Maybe you're
not allowed to add or overlay them, and if you can it's not a symbol?
They're not allowed to change form when communicated? Or is it that
they must be naturally representable as an English word with a
cartouche around it? Do you have to be able to manipulate them using
Lisp?

I would contend that this whole "symbol" and "symbol processing"
business is, when you get right down to it, pretty much ungrounded.

--Barak A. Pearlmutter.
