Connectionists: How the brain works? Symbolic vs. Connectionist

Juyang Weng weng at cse.msu.edu
Sat Apr 5 12:30:55 EDT 2014


On 4/18/08 3:38 PM, Asim Roy wrote:
 > By the way, your algorithms I believe are structurally similar to the 
other connectionists algorithms I analyzed in my paper.

Asim, I respectfully disagree.
If what you said were true, it would be absolutely impossible for me to 
bridge the wide gap between the two major schools in AI:
the symbolic school and the connectionist school.
Please see the (incomplete) 14-point list of novelties below from an 
anonymous reviewer.   I first briefly introduce the wide gap:

Marvin Minsky, 1991, in AI Magazine:
- Symbolic: logical & neat
- Connectionist: analogical & scruffy
Michael Jordan at IJCNN 2011:
- Neural nets do not abstract well
- I will not talk about neural nets today

Michael Jordan, Joshua Tenenbaum, and many other respected researchers 
used graphical models to model higher brain
computations or intelligent systems.

However, my humble understanding is:
(1) Graphical Models belong to the category of symbolic representation 
that Marvin Minsky pointed to.
(2) Graphical Models are GROSSLY wrong as models of the human brain.
(3) Graphical Models are not an acceptable model even for higher brain 
functions such as abstraction and reasoning.

I am afraid that this issue is EXTREMELY difficult for a computational 
neuroscientist or an AI researcher to understand
if he has not taken the course BMI 871: Computational Brain-Mind.
All courses offered elsewhere are fundamentally different from BMI 871 
in nature, regardless of how similar their titles may look.
(By the way, the BMI 2014 course application deadline is tomorrow, 
Sunday.  See http://www.brain-mind-institute.org/)

The following is quoted from my email to Michael Jordan, with whom I 
was discussing this important issue yesterday via email:

"By the way, I do not agree with your `graphical models provide a better 
platform for abstraction' (which you did not seem to have said that day),
because the abstraction is mainly in the mind of the human designer of 
the graphic models and probably also other humans
who have heard the human designer's explanation.   However, the meanings 
of each node in a graphic model are not learned
from experience as the brain of a human child.   Thus, each such node 
does not have required grounded invariance (e.g., location invariance
for the type-concept abstraction).  Without the necessary invariance 
(e.g., abstraction of type from concrete instances of different locations),
there is not any power of abstraction in any node of a graphic model."

"In summary, abstraction of any node of a graphic model is mainly in the 
mind of a human designer, instead of the abstraction power
of each node handcrafted.  The impression of abstraction is the human 
designer's illusion.  His brain can abstract from
instances does not mean that the node he handcrafted has also a power of 
abstraction."

The following 14-point list is quoted from a respected anonymous 
reviewer of my IEEE TAMD paper, which addresses the title of this email.
He seems to have gone beyond looking for superficial similarities.
The major differences lie in basic ideas, basic principles, 
architectures, representations, algorithms, and ways of teaching.

--- start of quote ---
Comments to the Author
The author has addressed all the concerns I raised in my previous 
review. In doing so, he has added a considerable amount of new 
material.  As a result, this version of the paper is much improved and 
the underlying argument is conveyed more clearly. Overall, it is easier 
to read and more instructive.  The many important issues raised are now 
more clearly conveyed and, in its current form, it will no doubt be a 
valuable addition to the literature in AMD.

Without attempting to be in any way exhaustive, I would note the 
following key messages conveyed by the paper, focussing in particular on 
the clarifications that have been added to the current version:

- The external nature of symbolic representations.

- The need to deal incrementally with new external environments through 
real-time autonomously generated internal modal representations.

- The importance of modelling brain development.

- The now-clear distinction between emergent and symbolic representations.

- The differentiation between connectionist and emergent representations.

- The argument that symbolic representations are open to ‘hand-crafting’ 
while emergent representations are not.

- The identification of the brain with the central nervous system.

- The external nature of the sensory and motor aspects of the brain.

- The consensual nature of the interpretation of sensations in human 
communication.

- The necessity for emergent representations to be based on 
self-generated states, without imposing meaning on these states or 
specifying boundaries on their meaning.

- The transfer of temporal associations by the brain.

- The accumulation of spatiotemporal attention skills based on 
experience of the real physical world.

- The back-projection of signals, not errors, in descending connections.

- The importance of the developmental program and top-down attention for 
understanding human and artificial intelligence.

--- end of quote ---
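
Regarding the reviewer's point about the back-projection of signals, not 
errors, in descending connections, below is a minimal toy sketch in 
Python.  It is my own simplification with hypothetical dimensions, not 
the algorithm of the TAMD paper: the higher layer's firing signal is 
projected back and joins a Hebbian update, in contrast with 
backpropagation, where the descending pathway carries error gradients.

--- start of code sketch ---
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: sensory input x, internal layer y, motor/output signal z.
dim_x, dim_y, dim_z = 8, 4, 3
W_bottom_up = rng.random((dim_y, dim_x))   # ascending weights: x -> y
W_top_down = rng.random((dim_y, dim_z))    # descending weights: z -> y

def hebbian_step(x, z, lr=0.1):
    # The descending connection delivers a SIGNAL (z), not an error gradient:
    # y responds to both the bottom-up and the top-down input.
    pre = W_bottom_up @ x + W_top_down @ z
    y = np.zeros(dim_y)
    y[np.argmax(pre)] = 1.0                # winner-take-all firing (simplified)
    # Hebbian updates: the winner's weights move toward its co-firing inputs.
    W_bottom_up[:] += lr * np.outer(y, x)
    W_top_down[:] += lr * np.outer(y, z)
    return y

# Hypothetical usage: one sensory pattern paired with one motor/label signal.
x = rng.random(dim_x)
z = np.array([1.0, 0.0, 0.0])
y = hebbian_step(x, z)
# No error is ever propagated downward; the descending pathway only carries
# the higher layer's firing signal, which shapes what the lower layer learns.
--- end of code sketch ---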

John Weng

-- 
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------





