Summary of panel discussion at IJCNN'2002 and ICONIP'02 on the question: "Oh sure, my method is connectionist too. Who said it's not?"

Asim Roy ASIM.ROY at asu.edu
Mon May 12 19:40:28 EDT 2003


This note summarizes the panel discussions that took place at IJCNN'2002
(International Joint Conference on Neural Networks) in Honolulu, Hawaii in
May, 2002 and at ICONIP'02-SEAL'02-FSKD'02 (the 9th International Conference
on Neural Information Processing, the 4th Asia-Pacific Conference on
Simulated Evolution And Learning, and the 2002 International Conference on
Fuzzy Systems and Knowledge Discovery) in November 2002 in Singapore.
IJCNN'2002 was organized jointly by INNS (International Neural Network
Society) and the IEEE Neural Network Council. This was the fifth panel
discussion at these neural network conferences on the fundamental ideas of
connectionism. The discussion topic at both of these conferences was:  "Oh
sure, my method is connectionist too. Who said it's not?" The abstract below
summarizes the issues/questions that were addressed by this panel.

The following persons were on these panels and their bio-sketches are
included at the end:
At ICONIP'02 in Singapore:
1.	Shun-Ichi Amari
2.	Wlodzislaw Duch
3.	Kunihiko Fukushima
4.	Nik Kasabov
5.	Soo-Young Lee
6.	Erkki Oja
7.	Xin Yao
8.	Lotfi Zadeh
9.	Asim Roy

At IJCNN'2002 in Honolulu:
1.    Bruno Apolloni
2.    Robert Hecht-Nielsen
3.    Robert Kozma
4.    Steve Rogers
5.    Ron Sun

Thanks to Lipo Wang, General Chair of ICONIP'02, and Donald C. Wunsch,
Program Co-Chair of IJCNN'02, for allowing these discussions to take place.
For those interested, summaries of prior debates on the basic ideas of
connectionism are available at the CompNeuro archive site. Here is a partial
list of the prior debate summaries available there.

http://www.neuroinf.org/lists/comp-neuro/Archive/1999/0079.html - Some more
questions in the search for sources of control in the brain
http://www.neuroinf.org/lists/comp-neuro/Archive/1998/0084.html - BRAINS
INTERNAL MECHANISMS - THE NEED FOR A NEW PARADIGM
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0069.html - COULD
THERE BE REAL-TIME, INSTANTANEOUS LEARNING IN THE BRAIN?
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0057.html -
CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?
http://www.neuroinf.org/lists/comp-neuro/Archive/1997/0012.html - DOES
PLASTICITY IMPLY LOCAL LEARNING? AND OTHER QUESTIONS
http://www.neuroinf.org/lists/comp-neuro/Archive/1996/0047.html -
Connectionist Learning - Some New Ideas/Questions

Asim Roy 
Arizona State University


Panel Question:

"Oh sure, my method is connectionist too. Who said it's not?"

Description:

Some claim that the notion of connectionism is an evolving one. Since the
publication of the PDP book (which enumerated the then-accepted principles
of connectionism), many new ideas have been proposed and many new
developments have occurred. So, according to these claims, the connectionism
of today is different from the connectionism of yesterday. Examples of such
new developments in connectionism include hybrid connectionist-symbolic
models (Sun 1995, 1997), neuro-fuzzy models (Keller 1993, Bezdek 1992),
reinforcement learning models (Kaelbling et al. 1994, Sutton and Barto
1998), genetic/evolutionary algorithms (Mitchell 1994), support vector
machines, and so on. In these newer connectionist models, there
are many violations of the "older" connectionist principles. One of the
simplest violations is the reading and setting of connection weights in a
network by an external agent in the system.  The means and mechanisms of
external setting and reading of weights were not envisioned in early
connectionism.  Why do we need local learning laws if an external source can
set the weights of a network? So this and other features of these newer
methods are obviously in direct conflict with early connectionism.
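
To make the contrast concrete, here is a toy sketch in Python (illustrative
only, not any panelist's method): a local Hebbian update changes each weight
using only the activity available at that connection, while an external
agent can read out and overwrite the whole weight matrix at will.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(3, 5))   # 3 output units, 5 input units

    def hebbian_step(W, x, lr=0.01):
        # Local rule: the change at W[i, j] depends only on presynaptic
        # activity x[j] and postsynaptic activity y[i] at that connection.
        y = W @ x
        return W + lr * np.outer(y, x)

    x = rng.normal(size=5)
    W = hebbian_step(W, x)                   # update in the early connectionist spirit

    # External reading and setting of weights, as in many newer methods:
    W_saved = W.copy()                       # an outside agent reads all weights
    W = rng.normal(scale=0.1, size=W.shape)  # ... and simply overwrites them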

In the context of these algorithmic developments, it has been said that
maybe nobody at this stage has a clear definition of connectionism, that
everyone makes things up (in terms of basic principles) as they go along.
Is this the case? If so, does this pose a problem for the field? In defense
of this situation, some argue that connectionism is not just one principle
but many. Is that the case? If not, should we redefine connectionism given the
needs of these new types of learning methods and on the basis of our current
knowledge of how the brain works?

This panel intends to closely examine this issue in a focused and intensive
way.  Debates are expected. We hope to at least clarify some fundamental
notions and issues concerning connectionism, and hopefully also make some
progress on understanding where it needs to go in the near future.



BRIEF SUMMARY OF INDIVIDUAL REMARKS
Shun-ichi Amari -- RIKEN Brain Science Institute, Japan

Connectionism - Theory or Philosophy
 
    A ghost was haunting the world in the mid-eighties. It was named
connectionism. It was welcomed enthusiastically by many people, but was much
hated by traditional AI people. Now there is a question: What is
connectionism, and where is it going? Is it too old now?
 
   If it is a collection of theories, there have been many new developments.
However, even at the time of the rise of connectionism, its theories relied
mostly on those developed in the seventies. Most were found to be
rediscoveries. As a philosophy, however, it declared strongly that
information in the brain is distributed and processed in parallel through
dynamic interactions, and that novel engineering systems should learn from
this fact. This philosophy has been accepted with enthusiasm, and has
generated many new theories and findings. Its central claim is still valid.
 
Bruno Apolloni -- University of Milano, Italy.

Connectionism or not connectionism.

The true revolution of connectionism has been to establish heuristics, that
is, subsymbolic computations, as legitimate scientific matter. But the
Aristotelian cast of our genomic inheritance puts a sharp divide between the
noble brain activity represented by symbolic reasoning and the low-level
faculties, such as intuition, fantasy, and all that in general cannot be
captured by a theorem but is explicable only through the electrochemistry of
our neural activity. A second revolution is currently demolishing the
barrier between the two categories, recognizing that neurons are also the
seat of abstract thought and that no sharp difference separates the two
mental attitudes. Like a photon, which is both a particle and a wave, a
thought is materially a well-proven neural network: so well proven that it
acts as a fixed point against adaptation to the environment, and so simple
that it can be stored in a limited amount of memory. Learning algorithms,
too, reveal themselves to be similar at the two levels. The search for
reward is almost random at the subsymbolic level, and is pursued through
hypothetical, possibly absurd, reasoning at the symbolic one. The reaction
to punishment is possibly unconscious at the former level; at the symbolic
level it is a source of intellectual pain and a search for ways to avoid it.
The key point is to maintain an efficient set of feedbacks between the
levels. In such a framework we discover ourselves to be material automatons,
or rather, physical matter sharing the nature of a God.

Wlodzislaw Duch -- Nicholas Copernicus University (http://www.umk.pl/en/),
Torun, Poland

Everybody has some view of what connectionism is, and all these points of
view are in some way limited. Perhaps we should not worry about definitions.
New branches of science, as Allen Newell once said, are not defined, but
emerge from the common interests of people who meet at conferences, discuss
problems they find interesting, and establish new journals. When people ask
me what I am working on, what do I usually say? Computational intelligence:
trying to solve problems that are not effectively algorithmizable. Is this
connectionism? Unless I work in neurobiology, where methods should be
biologically plausible, I do not care. It may be more statistical, or more
evolutionary; as long as it solves interesting problems, it is something
worth working on. 

Unfortunately it is not quite so simple. Since the matter has not been
thoroughly discussed, and connectionist approaches have spontaneously
developed in many different directions, we face all kinds of problems now.
Some neural network conferences do not accept papers on statistical learning
methods, because they are not neural, but accept SVM papers, although these
have equally little to do with connectionism. Because recent developments in
SVMs were somehow connected with perceptrons, and papers on this subject
appear mostly in neural journals, the field is perceived as a branch of
neural computing. Many articles in the IEEE Transactions on Neural Networks,
and other neural network journals, have little to do with connectionist
ideas. Although IEEE formed the Neural Network Society, conferences
organized by this society cover a much broader range of topics. Not only are
SVMs welcome; Bayesian belief networks and their outgrowth, graphical
methods, statistical mean-field theories, sampling techniques, chaos and
dynamical systems methods, fuzzy, evolutionary, swarm, and immune-system
techniques, and many others are accepted. Fields are overlapping, boundaries
are fuzzy, and we no longer know how to define connectionism. 

Many people work on the same problems using different approaches originating
from the fields they were trained in. Classification Society or numerical
taxonomy experts sometimes know about neural networks, but do neural network
experts know what the Classification Society or the pattern recognition
community is doing, and what kinds of problems and methods are of interest
to them? For example, committees of models are investigated by the neural
network, evolutionary, machine learning, numerical taxonomy, and pattern
recognition communities. The same problems are solved over and over by
people who do not know of the other fields' existence. How do we bring
together experts working on the same problems from different perspectives?
If anything should come out of this discussion, it should not be a
definition of connectionism, but rather the understanding that a great deal
of research effort is duplicated many times over. 

The point is that we are too focused on the methods, forgetting about the
challenges and problems that wait to be solved. It is too easy to modify one
of the neural methods, add another term to the error function, or modify the
network architecture. Infinitely many variants of clustering, or
unsupervised learning methods, may be devised. Are the classical branches of
science defined by their methods? Biology, physics, chemistry and the other
classical branches of science were always problem-oriented. Why do we keep
on thinking in the method-oriented way? Connectionist or not, does it solve
the problem? Defining our field of interest as the search for solutions to
problems that are non-algorithmic, problems for which effective algorithms
do not exist, makes it problem-oriented. Solutions to such problems require
intelligence. Since we solve them by computational means, the field may be
appropriately called Computational Intelligence (CI). Connectionist methods
are an important part of this field, but there is no reason to restrict
oneself to one group of methods. 

A good example is the contrast between the symbolic, rule-based methods used
by Artificial Intelligence (AI) and subsymbolic, neural methods. Contrasting
neural networks with rule-based AI must ultimately fail. How will we solve
problems requiring systematic thinking without rules? Rules must somehow
emerge from networks. Some CI problems require knowledge and symbolic
reasoning, and this is where traditional AI has focused. These problems are
related to higher cognitive functions, such as thinking, reasoning,
planning, problem solving and understanding natural language. Neural
computing, on the other hand, has tried to solve problems requiring
sensorimotor functions, perception, control, and the development of feature
detectors, problems concerned with low-level cognition. Computational
intelligence, being problem-oriented, is interested in algorithms coming
from all sources. Search and logical rules may solve problems in theorem
proving or sentence parsing that connectionist techniques are not able to
solve. Learning and adaptation are just one side of intelligence. Although
our brains use neurons to solve problems requiring systematic reasoning,
there must be a way to approximate this neural activity with symbolic,
search-based processes. As with all approximations it may sometimes break
down, but in most cases AI expert systems solve interesting problems.
Instead of stressing the differences it may be better to join forces, since
low- and high-level cognitive functions are both needed for true
intelligence. Solving problems for which effective algorithms do not exist,
by connectionist or other methods, provides a clear definition of
Computational Intelligence. A clear definition of neural computing, or of
soft computing, one that covers everything that experts in these fields work
on, is very difficult to agree upon, because of their method, rather than
problem, orientation. 

Early connectionism was naive: psychologists were writing papers showing
that MLPs are not almighty. Everybody knows that now. For some tasks modular
networks are necessary. The brain is not just one big network. External
sources, that is, other parts of the brain, control learning; for example,
the limbic structures involved in emotions decide what is interesting and
worth learning. Weights are not constant, but are a function of inputs, with
short-term as well as long-term dynamics. But neurons, networks and brain
functions are only one source of inspiration for us. Many methods were
inspired by the Parallel Distributed Processing (PDP) idea. The name PDP did
not become popular, since "neural networks" sounded much better. Almost all
algorithms may be represented in some graphical way, with nodes representing
functions or local processors. Graphical computations are going to be
popular, but this is again just a broad group of algorithms, not a candidate
for a branch of science.

Modeling neurobiological systems at a very detailed level leads to
computational neuroscience. Simpler approximations are still useful to model
various brain functions and processes. Very rough approximations, leading to
modular neural networks where single neurons do not matter, but the
distributed nature of processing is important, lead to connectionist systems
useful for psychology. These fields are appropriately based on neural
processing, although they have strong overlap with many other branches of
science, for example neuroscience with neurochemistry, molecular biology and
genetics, and connectionist approaches in psychology with cognitive science.
Engineering applications of neural computing should be less method-oriented
and more problem-oriented. If we do not make an effort in this direction
many journals and conferences will present solutions to the same problems,
repeating the same work many times over and preventing comparison of results
obtained using different methods. Time will pass, but we shall not
grow wiser ...

Kunihiko Fukushima -- Tokyo University of Technology, Japan

Find Out Other Principles that Govern the Brain

The final goal of connectionism is to understand the mechanisms of
information processing in the biological brain. In the history of
connectionism, we have experienced two research booms, one starting around
1960 and another around 1985. One of the triggers for the first boom was
Rosenblatt's proposal of the "perceptron" neural network model, and one for
the second boom was the introduction of the idea of cost minimization. In
both cases, the difficult problem of understanding information processing in
the brain was reduced to a simpler one: in the first case, to the analysis
of a model called the perceptron; and in the second case, to a pure
mathematical problem of cost minimization. This replacement with simpler
problems allowed nonspecialists to join brain research easily, without
extensive knowledge of neurophysiology or psychology.

The brain is a system that works under several constraints. Since one of the
constraints was expressed as the hypothesis of cost minimization, the
analysis of a system that works under that constraint became very easy. In
other words, the process of understanding the brain was divided into two
steps: biological experiments and the solving of mathematical problems. This
division of labor allowed nonspecialists to enter brain research very
easily. It is true that the technique of cost minimization was very
powerful. It not only explains brain mechanisms but is also useful for other
problems, such as forecasting the weather or even the stock market. Although
this approach has produced great advances in brain research, it carries a
risk at the same time. Researchers engaged in this work have a strong
tendency to fall under the illusion that they are studying the biological
brain itself.
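
The "pure mathematical problem" referred to here can be stated in a few
lines: learning reduces to descending a cost surface E(w) by gradient steps.
A minimal sketch (illustrative only, not Fukushima's model):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 4))              # inputs
    w_true = np.array([1.0, -2.0, 0.5, 3.0])   # "unknown" weights to recover
    t = X @ w_true                             # target outputs

    w = np.zeros(4)                            # learner's weights
    for _ in range(1000):
        err = X @ w - t                        # prediction error
        grad = X.T @ err / len(X)              # gradient of E(w) = mean(err**2) / 2
        w -= 0.1 * grad                        # step down the cost surface

    print(np.allclose(w, w_true))              # True: minimization recovers w_true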

They often forget that they are simply analyzing the behavior of a system
that works under a certain constraint. This is similar to the situation in
the 1960s. Everyone forgot that they were analyzing a system called the
perceptron, and erroneously believed that they were studying the brain
itself. Once the mathematical limitations of the perceptron's abilities
became clear, they moved away, not only from research on the perceptron, but
from research on the brain itself. Their illusory belief caused the research
winter of the 1970s. The mathematical limitations of the principle of cost
minimization are now becoming clear. Cost minimization is not the only rule
that controls the biological brain. It is now time to find the other
constraints that govern it.

Otherwise, we will have another research winter.

Robert Hecht-Nielsen -- University of California, San Diego

Robert Hecht-Nielsen's current views are described in Chapter 4 of the new
book: Hecht-Nielsen, R. & McKenna, T. [Eds] (2003) Computational Models for
Neuroscience: Human Cortical Information Processing [London,
Springer-Verlag]. 
 


Nik Kasabov -- Auckland University of Technology, New Zealand

Yes, indeed, very often researchers claim that their method is
connectionist, too. We talk about a method being connectionist if the method
utilizes artificial neurons (basic processing units) and connections between
them, and if two main functions are performed in this connectionist
environment: learning and generalization [1,2]. Without the above
characteristics, it is hard to classify a method as connectionist. A method
can be hybrid connectionist, too, if the connectionist principles above are
integrated with other principles for information processing, such as rule
based systems, fuzzy systems [3], evolutionary computation [4], etc. There
are additional characteristics that reinforce the connectionist principles
in a method: for example, adaptive, on-line learning in an evolving
connectionist structure; learning and capturing abstract information, i.e.
rules; modular connectionist organization; different types of learning
available in one system (e.g. active, passive, supervised, unsupervised,
reinforcement); and relating neurons to the genes contained in them,
regarded as parameters of the learning and development process [5].
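
Read operationally, the definition above calls for a system with processing
units and weighted connections that both learns and generalizes. A minimal
sketch of such a system (illustrative only, not Kasabov's evolving
connectionist systems):

    import numpy as np

    rng = np.random.default_rng(2)
    W = rng.normal(scale=0.1, size=2)            # connections of one unit

    def unit(x, W):
        return 1.0 / (1.0 + np.exp(-(x @ W)))    # basic processing unit

    # Learning: error-driven (delta-rule) updates on a separable toy task
    X = rng.normal(size=(200, 2))
    labels = (X @ np.array([2.0, -1.0]) > 0).astype(float)
    for _ in range(100):
        for x, t in zip(X, labels):
            y = unit(x, W)
            W += 0.1 * (t - y) * y * (1 - y) * x

    # Generalization: accuracy on inputs never seen during learning
    X_new = rng.normal(size=(50, 2))
    pred = (unit(X_new, W) > 0.5).astype(float)
    truth = (X_new @ np.array([2.0, -1.0]) > 0).astype(float)
    print((pred == truth).mean())                # typically close to 1.0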

     As connectionism is inspired by the organization and functioning of the
brain, we can assume that the more brain-like a method is, the more
connectionist it is. On the other hand, a connectionist method as defined
above can be oriented more toward engineering (applications) or mathematics
than toward the brain. For brain research and for the modeling of brain
functions it is important to have adequate brain-like models [6], but it is
irrelevant to ask of an engineering method how connectionist it is, so long
as it serves its purpose well. 

     In the end, all possible directions for the development of new
scientific methods for information processing should be encouraged, if
these methods contribute to progress in science and technology, regardless
of how connectionist they actually are. And the more a method can gain from
the principles of connectionism, the better, as information processing
methods are constantly "moving" toward being more human-oriented and
human-like, to serve humanity.
     
[1] McClelland, J., Rumelhart, D., et al. (1986) Parallel Distributed
Processing, vol. II, MIT Press.
[2] Arbib, M. (1995, 2003) The Handbook of Brain Theory and Neural Networks,
The MIT Press.
[3] N. Kasabov (1996) Foundations of Neural Networks, Fuzzy Systems and
Knowledge Engineering, The MIT Press.
[4] X. Yao (1993) Evolutionary artificial neural networks, Int. Journal of
Neural Systems, vol. 4, No. 3, 203-222.
[5] N. Kasabov (2002) Evolving Connectionist Systems - Methods and
Applications in Bio-informatics, Brain Study and Intelligent Machines,
Springer Verlag.
[6] Amari, S. and N. Kasabov (1998) Brain-like Computing and Intelligent
Information Systems, Springer Verlag.

Robert Kozma -- University of Memphis, Memphis, TN 38152

Chaotic neurodynamics - A new frontier in connectionism

Summary of my viewpoint presented at the panel session "My method is
connectionist, too!" at IJCNN'02 / WCCI'02, Honolulu, May 10-15, 2002.


All nontrivial problems we face in practical applications of pattern
recognition and intelligent information processing systems require a
nonlinear approach. In addition, adaptivity of the models is very often a
key requirement, one that makes robust solutions to real-life problems
possible. Connectionist models, and neural networks in particular, offer
exactly these qualities. It is not surprising, therefore, that connectionism
has gained wide popularity in the literature. Connectionist methods can be
considered a family of nonlinear statistical tools for pattern recognition
with a large number of parameters, which are adapted using powerful learning
algorithms. In most cases, the parameterization and learning algorithm
guarantee that the trained network operates in a convergent regime; in other
words, the activation levels of the network's nodes approach steady-state
values in the autonomous case or when the inputs to the network are
constant. 

There is an emergent field of research using dynamical neural networks that
operate in oscillatory limit cycles or in a chaotic regime; see, e.g.,
Aihara et al. (1990). Although the first nonconvergent neural network models
were proposed about four decades ago (Freeman, 1975), the time has become
ripe only recently to embrace these ideas and bring them into the mainstream
of connectionist science. These new developments are facilitated by
advancements both inside and outside connectionism. In the past decades,
research into convergent NNs laid down solid theoretical foundations, which
can now be extended to the chaotic domain. In addition, the mathematical
theory of dynamical systems and chaos has by now reached maturity, i.e., it
can address the very complex issues raised by high-dimensional chaotic
models, such as neural systems (Kaneko, 1990; Tsuda, 2001). 
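
For a flavor of such nonconvergent dynamics, consider a single chaotic
neuron in the style of Aihara et al. (1990): the internal state decays,
receives a constant bias, and is pushed by a refractory term through a steep
sigmoid. The parameter values below are assumptions of this sketch, not
taken from the paper; sweeping the bias a traces the route from fixed points
through periodic to chaotic behavior analyzed there.

    import numpy as np

    def f(y, eps=0.02):
        return 1.0 / (1.0 + np.exp(-y / eps))  # steep sigmoid output function

    k, alpha, a = 0.7, 1.0, 0.72   # decay, refractory gain, bias (assumed)
    y = 0.1
    outputs = []
    for t in range(200):
        y = k * y - alpha * f(y) + a           # state update
        outputs.append(f(y))

    print(min(outputs), max(outputs))  # the output keeps oscillating instead
                                       # of settling to a steady-state value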

Spatio-temporal neurodynamics is a key focus area of the research into
nonconvergent neural systems. Within this field, we emphasize the role of
intermediate-range, or mesoscopic effects in describing population dynamics
(Kozma & Freeman, 2001). There are two major factors contributing to the
emergence of the mesoscopic paradigm of neuroscience:

1.	Biological systems exhibit a mesoscopic level of organization
unifying 10^4 to 10^6 neurons, while the overall system size is 10^10 to
10^12 neurons. The mesoscopic approach provides an intermediate level
between the local (microscopic) and global (macroscopic) levels. Mesoscopic
levels are biologically plausible. Artificial neural systems, however, do
not need to imitate all the details of biological neural systems, so it is
arguable whether we should follow nature's path in this case.
2.	The introduction of a mesoscopic level is also very practical from a
computational perspective. With present technology, it is not feasible to
create computational devices with 10^10 to 10^12 processing units, the
complexity level dictated by studies of the scaling properties of complex
networks. We probably need to wait at least 10-15 years before
nanotechnology becomes mature enough to produce systems of that complexity
(Govindan, 2002). 

Until the technology for creating such an immense concentration of
computational power arrives, software and hardware implementations of neural
networks representing a mesoscopic level of granulation can provide a
practically usable tool (Principe et al., 2001) for building models of
space-time neurodynamics.

References:

Aihara, K., Takabe T., Toyoda M. (1990) Chaotic neural networks, Phys. Lett.
A, 144(6-7), 333-340.
Freeman, W.J. (1975)  Mass Action in the Nervous System, Academic Press, New
York.
Govindan, T.R. (2002) NASA/USRA Workshop on Biology-Information
Science-Nano-technology Fusion BIN, Ames, CA, Oct. 7-9, 2002.
Kaneko, K. (1990) Clustering, coding, switching, hierarchical ordering, and
control in a network of chaotic elements, Physica D, 41, 137-172.
Kozma, R. and W.J. Freeman (2001) Chaotic Resonance - Methods and
applications for robust classification of noisy and variable patterns, Int.
J. Bifurc.&Chaos, 11(6), 2307-2322.
Principe, J.C., Tavares, V.G., Harris, J.G., Freeman, W.J. (2001) Design and
Implementation of a Biologically Realistic Olfactory Cortex in Analog
Circuitry, Proc. IEEE, 89(7): 1030-1051.
Tsuda, I. (2001) Toward an interpretation of dynamic neural activity in
terms of chaotic dynamical systems, Behav. Brain Sci., 24, pp. 793-847.


Soo-Young Lee -- Korea Advanced Institute of Science and Technology

In my mind connectionism is a philosophy for building artificial systems
based on biological neural systems. It is not necessarily limited to
adaptive systems with layered architectures such as the multilayer
perceptron and radial basis function networks. Biological neural systems
also utilize fixed interconnections, which have evolved over generations.
For example, many biological neural systems incorporate winner-take-all
networks based on lateral inhibition. My favorite connectionist model comes
from the human auditory pathway, from the cochlea to the auditory cortex. It
consists of several modules, which mainly have layered architectures with
both feedforward and feedback connections. By understanding the functions of
each module and their connections, we are able to build mathematical models
of speech processing in the auditory pathway. The object path includes
nonlinear noise-robust feature extraction, from simple frequency selectivity
to more complex time-frequency characteristics. By combining signals from
both ears, the spatial path performs sound localization and speech
enhancement. The backward path is responsible for top-down attention, which
filters out irrelevant or unfamiliar signals. Although the majority of these
networks have fixed interconnections, their combination results in
complicated dynamic functions. I believe this is a very important class of
connectionist models.
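
As one concrete instance of the fixed-wiring circuits mentioned above, a
winner-take-all network can be built from fixed lateral inhibition. The
weights in this sketch are assumed, illustrative values, not taken from any
auditory model:

    import numpy as np

    n = 5
    # Fixed connections: self-excitation 1.0 on the diagonal, lateral
    # inhibition -0.2 everywhere else. Nothing here is learned.
    W = 1.2 * np.eye(n) - 0.2 * np.ones((n, n))

    x = np.array([0.9, 0.3, 0.75, 0.5, 0.1])   # initial stimulus levels
    for _ in range(50):
        x = np.maximum(0.0, W @ x)             # rectified linear dynamics

    print(np.round(x, 3))   # only the most strongly stimulated unit survives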


Erkki Oja -- Helsinki University of Technology, Finland

In traditional cognitive science, the basic paradigm for natural and
artificial intelligence is symbol manipulation: the processing of
well-defined concepts by rules. With the introduction of parallel
distributed processing, or connectionism, in the 1980s, there was a paradigm
shift. The new models are motivated by real neural networks, but they do not
have to be faithful to biology in every detail. The representation of data
is a pattern of activity, a numerical vector, instead of a logical entity.
Learning means changing some numerical parameters, such as connection
weights, instead of updating rules. This is connectionism.

Connectionist methods offer new hope for solving many highly challenging
problems in data mining, bioinformatics, novel user interfaces, robotics,
etc. Two examples I have done research on are Independent Component Analysis
(ICA) and Kohonen's Self-Organizing Maps (SOM). Both ideas are motivated by
neural models, but they can also be taken as data analysis tools in their
own right. They are connectionist methods based on unsupervised learning, a
very powerful way to infer empirical models from large data sets.
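
A classic miniature of such unsupervised connectionist learning is Oja's
principal-component rule: a single linear unit whose weight vector
converges, without any labels, to the leading principal component of the
data. The toy data below are an assumption of this sketch:

    import numpy as np

    rng = np.random.default_rng(3)
    # Correlated 2-D data whose leading principal axis lies along (1, 1)
    C = np.array([[2.0, 1.5], [1.5, 2.0]])
    X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)

    w = rng.normal(size=2)
    for x in X:
        y = w @ x                        # the unit's output
        w += 0.005 * y * (x - y * w)     # Oja's rule: Hebbian term plus decay

    print(w / np.linalg.norm(w))         # approximately +/- [0.707, 0.707]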


Steven Rogers and Matthew Kabrisky -- Qualia Computing, Inc. (QCI) and CADx
Systems
 

The only reason to restrict the connectionist label to a subset of
computational intelligence techniques is arrogance, rooted in the false
impression that we understand in any significant detail how animals process
information. What little is known will be modified dramatically by the many
things we have yet to discover. All current connectionist techniques make
big assumptions about what is included and what is relevant. There is no
unifying theory of the fundamental processing methods used in physiological
information processing that includes all potentially relevant
electro-chemical elements (neurons, glial cells, etc.). Thus, in our
opinion, to rule out any technique based on our current assumptions is
premature.

In the end, being engineers, what we care most about is how we can couple
our learning algorithms with humans in efficient, productive ways to achieve
improved performance on useful tasks: intelligence amplification. Even if
the techniques used cannot currently be mapped to similar processing
strategies employed in physiological information processing systems, the
fact that they are useful in interacting with the wetware of real
connectionist systems makes them relevant. These qualia-modifying systems,
whether composed of rule-based or local learning methods, or even of
external setting of constraints, are the only real connectionist systems
that we can consider at the present time.

Ron Sun -- University of Missouri-Columbia

There have been a number of panel discussions on this and related issues.
For example,  a panel discussion on the question: "does connectionism permit
reading of rules from a network?" took place at IJCNN'2000 in Como, Italy.
This previous debate pointed out the limitations of strong connectionism.  I
noted then  that clearly the death knell of strong connectionism had been
sounded.

Many early connectionist models have significant shortcomings. For example,
limitations due to the regularity of their structures led to difficulty in
representing and interpreting symbolic structures (despite some limited
successes). Other limitations are due to the learning algorithms used by
such models, which led, for example, to lengthy training (requiring many
repeated trials), to requiring complete I/O mappings to be known a priori,
and so on. These models may bear only a remote resemblance to biological
processes; they are far less complex than biological neural networks.

In coping with these difficulties, two forms of connectionism emerged:

Strong connectionism adheres strictly to the precepts of connectionism,
which may be unnecessarily restrictive and lead to huge costs for certain
kinds of symbolic processing. Weak connectionism (or hybrid connectionism),
on the other hand, encourages the incorporation of both symbolic and
subsymbolic processes: reaping the benefits of connectionism while avoiding
its shortcomings. There have been many theoretical and practical arguments
for hybrid connectionism; see, for example, Sun (1994) and Sun (2002).
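
The flavor of such a hybrid can be conveyed in a few lines. The sketch below
is purely illustrative (it is not Sun's published architecture): a
subsymbolic network scores candidate actions, while explicit symbolic rules
may override its choice.

    import numpy as np

    rng = np.random.default_rng(4)
    W = rng.normal(size=(3, 4))                 # tiny net: 4 features -> 3 actions
    ACTIONS = ["advance", "wait", "retreat"]
    RULES = [                                   # explicit symbolic knowledge
        (lambda s: s["fuel"] < 0.1, "retreat"), # IF fuel low THEN retreat
    ]

    def decide(features, state):
        scores = W @ features                   # subsymbolic: learned scoring
        choice = ACTIONS[int(np.argmax(scores))]
        for condition, action in RULES:         # symbolic rules take priority
            if condition(state):
                return action
        return choice

    print(decide(np.ones(4), {"fuel": 0.05}))   # rule fires: 'retreat'
    print(decide(np.ones(4), {"fuel": 0.9}))    # the network's choice stands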

I shall re-iterate the point I made before: To remove the strait-jacket of
strong connectionism, we should  advocate some forms of hybrid
connectionism, encouraging the incorporation of non-NN representations and
processes.  It is time for a more open-minded  framework in which our
research is conducted.

See http://www.cecs.missouri.edu/~rsun for details of work along this line.

References:
R. Sun, (2002). Duality of the Mind.  Lawrence Erlbaum Associates, Mahwah,
NJ.
R. Sun, (1994). Integrating Rules and Connectionism for Robust Commonsense
Reasoning.  John Wiley and Sons, New York, NY.




BIO-SKETCHES

Shun-ichi Amari

Professor Shun-ichi Amari is currently the Vice Director of the RIKEN Brain
Science Institute and Group Director of the Brain-Style Intelligence and
Brain-Style Information Research Systems research groups
(http://www.brain.riken.go.jp/english/b_rear/b3_lob/b3_top.html). Professor
Amari received his Ph.D. degree in Mathematical Engineering in 1963 from the
University of Tokyo, Tokyo, Japan. Since 1981 he has held a professorship at
the Department of Mathematical Engineering and Information Physics,
University of Tokyo. In 1994, he joined RIKEN's Frontier Research Program,
then moved to the RIKEN Brain Science Institute when it was established in
1997. He is a Fellow of the IEEE and received the IEEE Neural Network
Pioneer Award, the Japan Academy Award and the IEEE Emanuel Piore Award.
Professor Amari has served as a member of numerous editorial boards and
organizing committees and has published around 300 papers, including
several books, in the areas of information theory and neural networks. 


Bruno Apolloni

Professor of Cybernetics and Information Theory at the Dipartimento di
Scienze dell' Informazione (Department of Information Science), University
of Milano, Italy. Director, Neural Networks Research Laboratory (LAREN),
University of Milano. President, Neural Network Society of Italy. Author of
over 100 papers in the frontier area between probability and statistics on
the one hand and theoretical computer science on the other, with special
regard to computational learning, pattern recognition, optimization, control
theory, probabilistic analysis of algorithms, epistemological aspects of
probability and fuzziness.  His current research interests are in the
statistical bases of learning,  and in hybrid subsymbolic-symbolic learning
architectures. 



Wlodzislaw Duch
Wlodzislaw Duch is a professor of theoretical physics and applied
computational sciences, since 1990 heading the Department of Informatics
(http://www.phys.uni.torun.pl/kmk, formerly called the Department of
Computer Methods) at Nicholas Copernicus University (http://www.umk.pl/en/),
Torun, Poland. His degrees include a habilitation (D.Sc. 1987) in many-body
physics, a Ph.D. in quantum chemistry (1980), and a Master of Science
diploma in physics (1977), all from Nicholas Copernicus University, Poland.
He has held a number of academic positions at universities and scientific
institutions all over the world. These include longer appointments at the
University of Southern California in Los Angeles and the
Max-Planck-Institute of Astrophysics in Germany (every year since 1984), and
shorter (up to 3-month) visits to the University of Florida in Gainesville;
the University of Alberta in Edmonton, Canada; Meiji University, Kyushu
Institute of Technology and Rikkyo University in Japan; Universite Louis
Pasteur in Strasbourg, France; and King's College London in the UK, to name
only a few.
He has been an editor of a number of professional journals, including IEEE
Transactions on Neural Networks, Computer Physics Communications, and the
Int. Journal of Transpersonal Studies, and head scientific editor of the
"Kognitywistyka" (Cognitive Science) journal. He has worked as an expert for
the European Union science programs and for other international bodies. He
has published 4 books and over 250 scientific and popular articles in many
journals. He has been awarded a number of grants by Polish state agencies,
foreign committees, and European Union institutions.

Kunihiko Fukushima

Kunihiko FUKUSHIMA is a Full Professor, Katayanagi Advanced Research
Laboratories, at Tokyo University of 
Technology, Tokyo, Japan. He was a full professor at Osaka University from
1989 to 1999, and at the University of Electro-Communications from 1999 to
2001. Prior to his professorship, he was a
Senior Research Scientist at the NHK Science 
and Technical Research Laboratories.  He is one of the pioneers in the field
of neural networks and has been engaged in 
modeling neural networks of the brain since 1965.  His special interests lie
in modeling neural networks of the higher 
brain functions, especially, the mechanism of the visual system.  He is the
inventor of the Neocognitron for deformation 
invariant visual pattern recognition, and the Selective Attention Model for
recognition and segmentation of connected 
characters and natural images.  One of his recent research interests is in
modeling neural networks for active vision 
in the brain.  He is the author of many books on neural networks, including
"Information Processing in the Visual and 
Auditory Systems", "Neural Networks and Information Processing", "Neural
Networks and Self-Organization", and 
"Physiology and Bionics of the Visual System".  He received the Achievement
Award, Excellent Paper Awards, and so 
on from IEICE.  He serves as an editor for many international journals.  He
was the founding President of JNNS and is a 
founding member on the Board of Governors of INNS.


Robert Hecht-Nielsen

Beginning in 1968 with neural network computer experiments and continuing
later with foundation and management of neural network research and
development programs at Motorola (1979-1983) and TRW (1983-1986),
Hecht-Nielsen was a pioneer in the development of neural networks. He has
been a member of the University of California, San Diego faculty since 1986
and was the author of the first textbook on neural networks (Neurocomputing
(1989) Reading MA: Addison-Wesley). He teaches a popular year-long graduate
course on the subject (ECE-270 Neurocomputing). A Fellow of the IEEE and
recipient of its Neural Networks Pioneer Award, Hecht-Nielsen's research
centers on the elaboration of his recently completed theory of the function
of thalamocortex. 
Hecht-Nielsen, R. and McKenna, T. [Eds.] (2003) Computational Models for
Neuroscience: Human Cortical Information Processing, London:
Springer-Verlag.
Sagi, B., et al (2001) A biologically motivated solution to the Cocktail
Party Problem, Neural Computation 13: 1575-1602.


Nikola K. Kasabov

Fellow of the Royal Society of New Zealand, Sen. Member of IEEE

Affiliation: Director, Knowledge Engineering and Discovery Research
Institute, Professor and Chair of Knowledge Engineering, School of
Information Technologies, Auckland University of Technology 

Brief Biographical History:
1971 - MSc in Computer Science and Engineering, Technical University of
Sofia
1972 - MSc in Applied Mathematics, Technical University of Sofia
1975 - PhD in Mathematical Sciences, Technical University of Sofia
1976-89 - Lecturer and Associate Professor, Technical University of Sofia
1989-91 - Research Fellow and Senior Lecturer, University of Essex, UK
1992-98 - Senior Lecturer and Associate Professor, University of Otago, New
Zealand
1999-2002 - Professor and Personal Chair, Director of the Knowledge
Engineering Lab, University of Otago, New Zealand

Honours: 
Past President of the Asia Pacific Neural Network Assembly  (1997-98). 
The Royal Society of New Zealand Silver Medal for Contribution to Science
and Technology, 2001 

Recent books:
N. Kasabov, Evolving Connectionist Systems: Methods and Applications in
Bioinformatics, Brain Study and Intelligent Machines, Springer Verlag,
London, New York, Heidelberg (2002), 450pp
N. Kasabov, ed., Future Directions for Intelligent Systems and Information
Sciences, Heidelberg, Physica-Verlag (Springer Verlag) (2000), 420pp
Kasabov, N. and Kozma, R., eds., Neuro-Fuzzy Techniques for Intelligent
Information Systems, Heidelberg, Physica-Verlag (Springer Verlag) (1999),
450pp
Amari, S. and Kasabov, N., eds., Brain-like Computing and Intelligent
Information Systems, Singapore, Springer Verlag (1998), 533pp
N. Kasabov, Foundations of Neural Networks, Fuzzy Systems and Knowledge
Engineering, The MIT Press, Cambridge, MA (1996), 550pp.

Associate Editor of Journals: Information Science; Intelligent Systems; Soft
Computing   

Robert Kozma

Robert Kozma holds a Ph.D. in applied physics from Delft University of
Technology (1992). Presently he is Associate Professor at the Department of
Mathematical Sciences, Director of Computational Neurodynamics Lab,
University of Memphis. Previously, he was on the faculty of Tohoku
University, Sendai, Japan (1993-1996); Otago University, Dunedin, New
Zealand (1996-1998); and the Division of Neuroscience and Department of EECS
at UC Berkeley (1998-2000). His expertise includes autonomous adaptive brain
systems, mathematical and computational modeling of spatio-temporal dynamics
of cognitive processes, neuro-fuzzy systems and computational intelligence.
He is a Senior Member of IEEE, member of the Neural Network Technical
Committee of the IEEE Neural Network Society, and other professional
organizations. He has been on the Program Committee of about 20
international conferences in the field of intelligent computation and soft
computing.

Soo-Young Lee

Soo-Young Lee received B.S., M.S., and Ph.D. degrees from Seoul National
University in 1975, Korea Advanced Institute of Science in 1977, and
Polytechnic Institute of New York in 1984, respectively. From 1977 to 1980
he worked for the Taihan Engineering Co., Seoul, Korea. From 1982 to 1985 he
also worked for General Physics Corporation at Columbia, MD, USA. In early
1986 he joined the Department of Electrical Engineering, Korea Advanced
Institute of Science and Technology, as an Assistant Professor and now is a
Full Professor. In 1997 he established Brain Science Research Center, which
is the main research organization for the Korean Brain Neuroinformatics
Research Program. The research program is one of the Korean Brain Research
Promotion Initiatives sponsored by Korean Ministry of Science and Technology
from 1998 to 2008, and currently about 70 Ph.D. researchers have joined the
research program from many Korean universities and research institutes. He
was President of Asia-Pacific Neural Network Assembly, and is on Editorial
Board for 2 international journals, i.e., Neural Processing Letters and
Neurocomputing. He received Leadership Award and Presidential Award from
International Neural Network Society in 1994 and 2001, respectively. His
research interests have resided in artificial auditory systems based on
biological information processing mechanism in our brain. 

Erkki Oja

Erkki Oja is Director of the Neural Networks Research Centre and Professor
of Computer Science at the Laboratory of Computer and Information Science,
Helsinki University of Technology, Finland. He received his Dr.Sc. degree in
1977.  He has been research associate at Brown University, Providence, RI,
and visiting professor at Tokyo Institute of Technology. Dr. Oja is the
author or coauthor of more than 250 articles and book chapters on pattern
recognition, computer vision, and neural computing, as well as three books:
"Subspace Methods of Pattern Recognition" (RSP and J.Wiley, 1983), which has
been translated into Chinese and Japanese, "Kohonen Maps" (Elsevier, 1999),
and "Independent Component Analysis" (J. Wiley, 2001). His research
interests are in the study of principal components, independent components,
self-organization, statistical pattern recognition, and applying artificial
neural networks to computer vision and signal processing. Dr. Oja is a
member of the editorial boards of several journals and has been on the
program committees of several recent conferences, including ICANN, IJCNN,
and ICONIP. He is a member of the Finnish Academy of Sciences, a Fellow of
the IEEE, a Founding Fellow of the International Association for Pattern
Recognition (IAPR), and President of the European Neural Network Society
(ENNS).

Steven K. Rogers and Matthew Kabrisky

Steven K. Rogers, PhD is the President/CEO of Qualia Computing, Inc. (QCI)
and CADx Systems.  He founded the company in May 1997 to commercialize the
Qualia Insight(tm) (QI) platform.  The goal of QCI is to systematically
apply QI to achieve Intelligence Amplification across market sectors.  Dr.
Rogers spent 20 years in the U.S. Air Force designing smart weapons.  He has
published more than 200 papers in neural networks, pattern recognition and
optical information processing, and several books. He is a Fellow of the
Institute of Electrical and Electronics Engineers for the design,
implementation and fielding of neural solutions to automatic target
recognition. Dr. Rogers is also a Fellow of the International Optical
Engineering Society for contributions to the science of optical neural
computing, and a charter member of the International Neural Network Society.
He
was a plenary speaker at the 2002 World Congress on Computational
Intelligence.


Matthew Kabrisky, PhD is currently a Professor Emeritus of Electrical
Engineering, School of Engineering, Air Force Institute of Technology,
(AFIT).  He advises the faculty on courses and research in autonomous
pattern recognition, mathematical models of the central nervous system, and
human factors engineering.  His research interests include computational
intelligence and self-awareness.  Dr. Kabrisky is the Chief Scientist
Emeritus of CADx Systems.

Ron Sun

Dr. Ron Sun is James C. Dowell Professor of computer science and computer
engineering at the University of Missouri-Columbia. He received his Ph.D. in
computer science from Brandeis University in 1991.

Dr. Ron Sun's research interests center on the study of intelligence and
cognition, especially in the areas of commonsense reasoning, human and
machine learning, and hybrid connectionist models. He is the author of over
120 papers, and has written, edited or contributed to 20 books. For his
paper on integrating rule-based reasoning and connectionist models, he
received the 1991 David Marr Award from the Cognitive Science Society. He has
also been on the program committees of the National  Conference on
Artificial Intelligence (AAAI-93, AAAI-97, AAAI-99), International Joint
Conference on Neural Networks (IJCNN-99, IJCNN-00, IJCNN-02), International
Conference on Neural Information Processing (1997, 1999, 2001),
International Two-Stream Conference on Expert Systems and Neural Networks,
and other conferences, and has been an invited/plenary  speaker for some  of
them.  

Dr. Sun is the founding co-editor-in-chief of the journal Cognitive Systems
Research (Elsevier).  He  serves on the editorial boards of Connection
Science, Applied Intelligence, and Neural Computing Surveys.  He was a guest
editor of the special issue of the journal Connection Science on
architectures for integrating neural and symbolic processes and the special
issue of IEEE Transactions on Neural Networks on hybrid intelligent models.
He is a member of AAAI and the Cognitive Science Society, and a senior
member of IEEE.





