Sci Am Article
INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU
Sun Dec 16 19:27:00 EST 1990
A. Worth:
Jerry Feldman's posting ("Scientific American") foments a long
standing dispute within my mind; are there fundamental differences
between connectionism and AI that make them incompatible in an ideal
sense?
Examining each as an approach to the same goal: if one of
connectionism's fundamental tenets is to mimic biological computation,
and AI, on the other hand, holds sacred extracting the essence of
"intelligence" while ignoring implementation details (i.e., a
bottom-up vs. top-down dichotomy), then is it not a bastardization of
both to combine them?
If each approach has fundamentally different and opposing assumptions,
then wouldn't one or both of them have to be weakened in their
combination?
I see neural networks as a sub-field of artificial intelligence.
Both are trying to develop "intelligent" artifacts. There is of
course a dichotomy between symbolic AI and neural net AI. Symbolic
AI has proven itself in many tasks, such as symbolic calculus, theorem
proving, expert systems, and other machine learning tasks. The
difference between symbolic AI and neural AI is more one of
computational substrate (although researchers may have artificially
distanced neural nets from symbolic AI in the past). A lot of neural
network work over the last few years has amounted to applying a
hill-climbing heuristic (well known to the symbolic AI community) to
our nets. They learn, but not well. There is still a great
deal of symbolic AI machine learning theory which could be used to set up
really interesting neural networks, but there are difficulties in
translating between a symbolic computational substrate and the
neural network substrate.
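
To make the hill-climbing point concrete, here is a minimal Python
sketch (an illustration only, not any particular net discussed above)
of delta-rule training of a single linear unit. The topology is fixed,
so learning is just descending a fixed error surface over the weights:

  import random

  # Delta-rule training of one linear unit: hill-climbing (gradient
  # descent) on a fixed error landscape over the weights. The network
  # topology never changes; only the point in weight space moves.
  def train(examples, n_inputs, rate=0.1, epochs=100):
      w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
      b = 0.0
      for _ in range(epochs):
          for x, target in examples:
              out = sum(wi * xi for wi, xi in zip(w, x)) + b
              err = target - out
              # Take a small step downhill on squared error.
              w = [wi + rate * err * xi for wi, xi in zip(w, x)]
              b += rate * err
      return w, b

  # Hypothetical usage: learn the linear target y = 2*x1 - x2.
  data = [((x1, x2), 2 * x1 - x2) for x1 in range(3) for x2 in range(3)]
  print(train(data, 2))
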
The constructive/destructive (or "ontogenic") networks which are
coming down the line, such as Cascade-Correlation, are showing that
hill-climbing in a fixed energy landscape is not the only way to do
learning. There is also "compositional" learning (someone asked for
the Schmidhuber reference a while back: J. Schmidhuber, "Towards
compositional learning with dynamic neural networks," Report
FKI-129-90, Technische Universitat Munchen, April 1990), which
combines sub-goal networks to achieve larger goals.
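
For readers who have not seen the constructive style, here is a rough
Python sketch of the grow-and-freeze idea behind Cascade-Correlation.
It is a simplification, not Fahlman and Lebiere's published algorithm:
hidden units here are simply fit to the residual error with a delta
rule rather than trained to maximize correlation with it. The point is
that the error landscape itself changes each time a unit is recruited:

  import math

  def linear_fit(xs, ys, rate=0.05, epochs=300):
      # Delta-rule fit of one linear unit; returns (weights, bias).
      w, b = [0.0] * len(xs[0]), 0.0
      for _ in range(epochs):
          for x, y in zip(xs, ys):
              err = y - (sum(wi * xi for wi, xi in zip(w, x)) + b)
              w = [wi + rate * err * xi for wi, xi in zip(w, x)]
              b += rate * err
      return w, b

  def linear_out(w, b, x):
      return sum(wi * xi for wi, xi in zip(w, x)) + b

  def cascade_train(xs, ys, max_hidden=3, tol=0.05):
      hidden = []                        # frozen hidden units
      def features(x):                   # inputs plus cascaded hidden outputs
          vec = list(x)
          for hw, hb in hidden:
              vec = vec + [math.tanh(linear_out(hw, hb, vec))]
          return vec
      while True:
          feats = [features(x) for x in xs]
          ow, ob = linear_fit(feats, ys)  # retrain output weights only
          resid = [y - linear_out(ow, ob, f) for f, y in zip(feats, ys)]
          if max(abs(r) for r in resid) < tol or len(hidden) >= max_hidden:
              return hidden, ow, ob
          # Recruit a new hidden unit fit to the residual, then freeze it.
          hidden.append(linear_fit(feats, resid))

  # Hypothetical usage on a small nonlinear target.
  data = [((a, b), math.tanh(a - b)) for a in (-1, 0, 1) for b in (-1, 0, 1)]
  xs, ys = [d[0] for d in data], [d[1] for d in data]
  hidden, ow, ob = cascade_train(xs, ys)
  print(len(hidden), "hidden units recruited")
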
Anyway, I think the lesson is that no matter how much connectionist
researchers think their networks are capable of better inductive learning
than symbolic AI systems, in order to allow for deductive learning we
are going to have to couch a lot of existing symbolic AI heuristics
and machine learning paradigms in a network architecture.
-Thomas Edwards