[repost] Summary of panel discussion at IJCNN'2000 on the question: DOES CONNECTIONISM PERMIT READING OF RULES FROM A NETWORK?

Asim Roy ASIM.ROY at asu.edu
Tue Oct 24 13:03:51 EDT 2000


[ Reposted to correct a formatting error.  -- moderator ]
 
A panel discussion on the question: 
"DOES CONNECTIONISM PERMIT READING OF RULES FROM A NETWORK?" 
took place this July at IJCNN'2000 (International Joint Conference on
Neural Networks) in Como, Italy. The following persons were on the panel:
1) DAN LEVINE; 2) LEE GILES; 3) NOEL SHARKEY; 4) ALESSANDRO SPERDUTI;
5) RON SUN; 6) JOHN TAYLOR; 7) STEFAN WERMTER; 8) PAUL WERBOS; 9) ASIM ROY.
This was the fourth panel discussion at these IJCNN conferences on the
fundamental ideas of connectionism. The abstract below summarizes the
issues/questions that were addressed by this panel.
The fundamental contention was that the basic connectionist framework as
outlined by Rumelhart et al. in their many books and papers has no
mechanism for rule extraction (reading of weights, etc. from a network) or
rule insertion (constructing and embedding rules into a neural network) as
is required by many rule-learning mechanisms (both symbolic and fuzzy
ones). This is not a dispute about whether humans can and do indeed learn
rules from examples or whether such rules can indeed be embedded in a
neural network. It is about whether the connectionist framework lets one
do that; that is, whether it allows one to create the algorithms required
for doing such things as rule insertion and rule extraction. As pointed
out in the abstract below, the cell-based distributed control mechanism of
connectionism empowers only individual cells (neurons) with the capability
to modify/access the connection strengths and other parameters of a
network; no other outside agent can do that, as is required by rule
insertion and extraction techniques. These rule-learning techniques are,
in fact, moving away from the cell-based distributed control notions of
connectionism and using broader control theoretic notions where it is
assumed that there are parts of the brain that control other parts. In
fact, it can be shown that every connectionist algorithm, from
back-propagation to ART to SOM, goes beyond the original framework of
cell-based distributed control and uses the broader control theoretic
notion that there are parts of the brain that control other parts. There
is, of course, nothing wrong with this broader control theoretic notion
because there is enough neurobiological evidence about neurotransmitters
and neuromodulators to support it. 
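
To make the cell-based versus outside-agent distinction concrete, here is a
minimal, purely illustrative Python sketch (not taken from the panel, and not
any particular published algorithm; the Hebbian-style update and all numbers
are assumptions chosen for illustration). The first function updates weights
using only signals local to each cell, in the spirit of cell-based distributed
control; the second is an "outside agent" that reads the whole weight matrix,
the kind of access that rule extraction and insertion presuppose.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))          # incoming weights of 3 units from 4 inputs

def local_hebbian_update(W, x, lr=0.1):
    """Cell-based distributed control: each unit adjusts only its own
    incoming weights, using only its own input and output (local signals)."""
    y = np.tanh(W @ x)               # each unit's activation
    return W + lr * np.outer(y, x)   # per-unit Hebbian-style outer-product update

def external_readout(W):
    """Broader control-theoretic notion: an agent outside the cells inspects
    the whole weight matrix, e.g. as a first step of rule extraction."""
    return {"unit_%d" % i: W[i].round(2).tolist() for i in range(W.shape[0])}

x = rng.normal(size=4)
W = local_hebbian_update(W, x)       # allowed under cell-based control
print(external_readout(W))           # requires access no single cell possesses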

This debate once more points out the limitations of connectionism. Ron Sun
notes that "Clearly the death knell of strong connectionism has been
sounded." With regard to rule extractors in rule-learning schemes, John
Taylor notes that "There do not seem to be similar rule extractors of the
connection strengths there." On the same issue, Paul Werbos says: "But I
would not call it a "neural network" method exactly (even though neural
net learning is used) because I do not believe that real organic brains
contain that kind of hardwired readout device." Noel Sharkey says:
"Currently there seems little reason (or evidence) to even think about the
idea of extracting rules from our neural synapses - otherwise why can we
not extract our bicycle riding rules from our brain" and "However, the
relationship between symbolic rules and how they emerge from connectionist
nets or even whether or not they really exist has never been resolved in
connectionism."

For those interested, summaries of prior debates on the basic ideas of
connectionism are available at the CompNeuro website at Caltech. Here is a
partial list of the debate summaries available there.
www.bbb.caltech.edu/compneuro/cneuro99/0079.html - Some more questions in
the search for sources of control in the brain
www.bbb.caltech.edu/compneuro/cneuro98/0088.html - BRAINS INTERNAL
MECHANISMS - THE NEED FOR A NEW PARADIGM
www.bbb.caltech.edu/compneuro/cneuro97/0069.html - COULD THERE BE
REAL-TIME, INSTANTANEOUS LEARNING IN THE BRAIN?
www.bbb.caltech.edu/compneuro/cneuro97/0043.html - CONNECTIONIST LEARNING:
IS IT TIME TO RECONSIDER THE FOUNDATIONS?
www.bbb.caltech.edu/compneuro/cneuro97/0040.html - DOES PLASTICITY IMPLY
LOCAL LEARNING? AND OTHER QUESTIONS
www.bbb.caltech.edu/compneuro/cneuro96/0047.html - Connectionist Learning
- Some New Ideas/Questions
Some of the summaries are also available at the CONNEC_L website.

Asim Roy 
Arizona State University 
--------------------------------------------------------------------------

DOES CONNECTIONISM PERMIT READING OF RULES FROM A NETWORK? 
Many scientists believe that the symbolic (crisp) and fuzzy (imprecise and
vague) rules learned, used and expressed by humans are embedded in the
networks of neurons in the brain - that these rules exist in the
connection weights, the node functions and in the structure of the
network. It is also believed that when humans verbalize these rules, they
simply "read" the rules from the corresponding neural networks in their
brains. Thus there is a growing body of work that shows that both fuzzy
and symbolic rule systems can be implemented using neural networks. This
body of work also shows that these fuzzy and symbolic rules can be
retrieved from these networks, once they have been learned, by procedures
that generally fall under the category of rule extraction. But the idea of
rule extraction from a neural network involves certain procedures -
specifically the reading of parameters from a network - that are not
allowed by the connectionist framework that these neural networks are
based on. Such rule extraction procedures imply a greater freedom and
latitude about the internal mechanisms of the brain than is permitted by
connectionism, as explained below.
In general, the idea of reading (extracting) rules from a neural network
has a fundamental conflict with the ideas of connectionism. This is
because connectionist networks by "themselves" are inherently incapable of
producing as output the "rules" that are embedded in them, since the "rules"
are not supposed to be the outputs of connectionist networks. And in
connectionism, there is no provision for an
external source (a neuron or a network of neurons), in a sense a third
party, to read the rules embedded in a particular connectionist network.
Some more clarification perhaps is needed on this point. The connectionist
framework, in the use mode, has provision only for providing certain
inputs (real, binary) to a network through its input nodes and obtaining
certain outputs (real, binary) from the network through its output nodes.
That is, in fact, the only "mode of operation" of a connectionist network.
In other words, that is all one can get from a connectionist network in
terms of output - nothing else is allowed in the connectionist framework.
So no symbolic or fuzzy rules can be "output" or "read" by a connectionist
network. The connectionist network, in a sense, is a "closed entity" in
the use mode; no other type of operation, other than the regular
input-output operation, can be performed by or with the network. There is
no provision for any "extra or outside procedures" in the connectionist
framework to examine and interpret a network, to look into the rules it's
using or the internal representation it has learned or created. So, for
example, the connectionist framework has no provision for "reading" a
weight from a network or for finding out the kind of rule/constraint
learned by a node. Any "outside procedure" for such a task, existing
outside the network where the rules are, would go against the basic
connectionist philosophy. Connectionism has never stated
that the networks can be "examined and accessed in ways" other than the
input-output mode. 
So there is nothing in the connectionist framework that lets one develop
procedures to read and extract rules from a network. A rule extraction
procedure therefore violates the principles of connectionism in a major
way, by invoking a means of extracting the weights, rules and other
information from a network; the connectionist framework provides no
mechanism for doing that.
So the whole notion of rules existing in a network, rules that can be
accessed and verbalized as necessary, is contradictory to the connectionist
philosophy. There is absolutely no provision for "accessing
networks/rules" in the connectionist framework. Connectionism forgot about
the need to extract rules. 
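
As a purely illustrative aside (a sketch, not part of the panel material): the
following Python fragment shows the kind of "decompositional" rule-extraction
step the abstract has in mind, applied to a single hypothetical threshold unit
with made-up weights. The line marked (*) reads the unit's parameters
directly, which is exactly the operation that the input-output "use mode"
described above does not provide.

weights = [0.9, -0.4, 0.7]           # hypothetical trained incoming weights
feature_names = ["x1", "x2", "x3"]

def unit_to_rule(weights, names, weight_threshold=0.5):
    """Turn one threshold unit into a crude if-then rule by keeping only the
    strong incoming weights and reading off their signs."""
    conditions = []
    for w, name in zip(weights, names):   # (*) direct access to the parameters
        if abs(w) >= weight_threshold:
            conditions.append(name if w > 0 else "NOT " + name)
    return "IF " + " AND ".join(conditions) + " THEN unit fires"

print(unit_to_rule(weights, feature_names))
# prints: IF x1 AND x3 THEN unit fires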

--------------------------------------------------------------------------

LEE GILES

Early Connectionism/NNs
*	McCulloch, Pitts (MC) 40's: models that were basically circuit
design, suggestive but very primitive.
*	Kleene, 50's: MC networks as regular expressions and grammars. Early
AI.
*	Minsky, 60's: MC networks as logic, automata, sequential machines,
design rules. More AI. (not the perceptron work!) Foundations of early
high level VLSI design.
Early connectionism/NNs always had rules and logic as part of their
philosophy and implementation.
Late 20th Century Connectionism/NNs
*	Rules - vital part of AI.
*	Empirical & theoretical work on rules extractable, encodeable and
trainable in many if not all connectionist systems.
*	Most recent work in data mining
*	Future work in SVMs
Rules are more important, but not essential, in some applications; natural
language processing, expert system and speech processing systems are used
in many applications.
21st Century Connectionism/NNs
*	Knowledge & information discovery and extraction
*	Knowledge prediction
Rules and laws are important
*	New connectionism/NN - challenges
*	Applications will continue to be important
*	Cheap and plentiful data will be everywhere
*	Text
*	Nontext - sensor, audio, video, etc.
*	Pervasive computing and information access
*	New connectionism/NN future?
*	Integration philosophically, theoretically and empirically with
other areas of AI, computer and engineering science continues (rules will
become more important)
*	Biology and chips will play a new role
Foundations of Connectionism/NNs
*	Rules were always a theoretical and philosophical part of the
connectionist/nn models.
*	If/then rules, automata, graphs, logic
*	Importance?
*	Comfort factor
*	New knowledge
*	Autonomous systems - communication
--------------------------------------------------------------------------

DAN LEVINE

There are really two basic types of problems that involve encoding rules
in neural networks.  One type involves inferring rules from a series of
interactions with the environment that entail some regularity.  An example
would be a cognitive task (such as the Wisconsin Card Sorting Test used by
clinical neuropsychologists) in which a subject is positively reinforced
for certain types of actions and needs to discern the general rule guiding
the actions which will be rewarded.  The other type of problem involves
successfully performing cognitive tasks that are guided by an externally
given rule.  One example is the Rapid Information Processing task, in
which the subject is instructed to press a key when he or she sees three
odd or three even digits in a row.  In other words, the neural network
needs to translate the explicit verbal rule into connection strengths that
will guide motor performance in a way that accords with that rule.
	The first type of problem has already been simulated in a range of
neural networks including some by my own group and by John Taylor's group.
These networks typically require modulatory transmitters that allow reward
signals to bias selective attention or to selectively strengthen or weaken
existing rule representations.  Modulatory transmitters are also involved
in models in progress of the second type of problem.  In this case,
activation of a rule selectively biases appropriate representations of and
connections among working memory representations of objects, object
categories, and motor actions relevant to following the particular rule.
	The framing of the question, "Does connectionism allow ...,"
suggests implicitly that "connectionism" refers to a specified class of
neural architectures.  However, connectionist and neural network models
have become, over the last several years, increasingly less restricted in their
structures.  Modelers who began working within specific "schools" such as
adaptive resonance (like my own group and Stephen Grossberg's) or back
propagation (like Jonathan Cohen's group) have developed models that are
guided as much, if not more, by known brain physiology and anatomy as by
the original "modeling school."  Hence the question should be rephrased
"What form of connectionism allows ... ."
	
--------------------------------------------------------------------------

NOEL SHARKEY

One of the main themes of 1980s connectionism  was the unconscious
application of rules. This comes from Cognitive Psychology (where many of
the major players started). There are very many charted behaviors, such as
reading or riding a bicycle, where participants can perform extremely well
and yet cannot explicitly state the rules that they are using. One of the
main goals of Cognitive Psychologists was to find tasks that would enable
them to probe unconscious skills. Such skills were labeled as
"automatic" in contrast to the slower, more intensive "controlled" or
"strategic" processes. From the limited amount that psychologists talk
beyond their data, strategic processes were vaguely considered to have
something to do with awareness or conscious processes.  This was a hot
potato to be avoided.

But there is no single coherent philosophy covering all of connectionism.
Researchers in the 1980s, including myself,  began to experiment with
methods for extracting the rules from connectionist networks. This was
partly motivated by psychological considerations but mainly to advance the
field in computing and engineering terms. In my lab, the interest in rule
extraction was to help to specify the behavior of physical systems, such
as engines, which are notoriously difficult to specify in other ways. For
example, if a neural network could learn to perform a difficult-to-specify
task, and rules could then be extracted from that net, a crude
specification could be begun.

However, the relationship between symbolic rules and how they emerge from
connectionist nets or even whether or not they really exist has never been
resolved in connectionism. It seems clear that we can propositionalize our
rules and pass them on to other people. Simple rules such as, "eating is
not allowed in this room" appear to be learned instantly from reading them
in linguistic form, yet we have not seen a universally accepted
connectionist explanation for this type of phenomenon and we certainly do
not extract these rules from our nervous system after the fact. Now imagine
we do extract rules directly from our brains: how would this be done? If we
follow the lessons of rule extraction techniques for neural networks, there
are two distinct methods, which may be called internal and external.
Internal is where the process of extraction
operates on network parameters such as weight values or hidden unit
activations. External is where the mechanism uses the input and output
relation to calculate the rules - it is assumed that the neural network
will have filtered out most of the noise. Currently there seems little
reason (or evidence) to even think about the idea of extracting rules from
our neural synapses - otherwise why can we not extract our bicycle riding
rules from our brain. It seems that the only real option would be to "run
the net" and calculate the rules from the input output relations.
Nonetheless, this is not a well informed answer, nor is there likely to be
one at present. This is an issue that needs considerably more research.
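
As a minimal sketch of the "external" method just described (illustrative
only; the tiny two-layer network and its hand-set weights are assumptions,
not a trained model): treat the network purely as an input-output device,
query it exhaustively, and read the if-then rules off the resulting table.

import numpy as np
from itertools import product

W1 = np.array([[ 6.0,  6.0],
               [-6.0, -6.0]])
b1 = np.array([-3.0, 9.0])
W2 = np.array([6.0, 6.0])
b2 = -9.0

def net(x):
    """Forward pass only: the sole operation the external method relies on."""
    h = 1 / (1 + np.exp(-(W1 @ x + b1)))
    y = 1 / (1 + np.exp(-(W2 @ h + b2)))
    return int(y > 0.5)

# Query every binary input and print the implied if-then rules.
for x1, x2 in product([0, 1], repeat=2):
    out = net(np.array([x1, x2], dtype=float))
    print("IF x1=%d AND x2=%d THEN output=%d" % (x1, x2, out))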

--------------------------------------------------------------------------

ALESSANDRO SPERDUTI

Before arguing about the possibility of extracting rules from a neural
network, the concept of "rule" itself should be clarified.  In fact, when
talking about rules in this context, everybody has in mind the concept of
"symbolic rule", i.e., a rule that involves discrete entities.  Moreover,
the semantics of these entities is defined by a subjective
"interpretation" function.

However, the concept of rule is far more general: it can involve any kind
of entity or variable, subject to the constraint that a finite description
(i.e., representation) of the entity exists and can be used.  Thus, a rule
involving continuous variables and/or entities is legitimate as well, and
in many cases useful. Consequently, the question about the capability of a
connectionist system to capture "symbolic rules" in a form that permits
easy reading is conditional on the nature of the learned function: if it is
discrete, the posed question is meaningful.
Assuming a discrete nature for the learned function, however, there is no
guarantee that a trained neural network will encode the function in a way
that allows easy reading of "symbolic rules", whose representation, by the
way, is in principle arbitrary with respect to the representational
primitives (neurons) of the neural network.
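
For instance (an illustrative sketch with an invented variable and threshold),
a rule over a continuous quantity still has a finite description and is
therefore a legitimate rule in the sense above:

def fever_rule(temperature_celsius):
    """A rule over a continuous variable: IF temperature > 37.5 THEN fever."""
    return temperature_celsius > 37.5

print(fever_rule(36.8), fever_rule(38.2))   # prints: False True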

--------------------------------------------------------------------------

RON SUN

Many early connectionist models have some significant shortcomings. For
example, the limitations due to the regularity of their structures led to,
e.g., difficulty in representing and interpreting symbolic structures
(despite some limited successes that we have seen).  Other limitations are
due to the learning algorithms used by such models, which led to, e.g.,
lengthy training (requiring many repeated trials), the requirement that
complete I/O mappings be known a priori, etc.  There are also limitations in terms
biological relevance.  For example, these models may bear only remote
resemblance to biological processes; they are far less complex than
biological NNs, and so on.  In coping with these difficulties, two forms
of connectionism emerged: Strong connectionism adheres strictly to the
precepts of connectionism, which may be unnecessarily restrictive and may
incur a huge cost for some symbolic processing.  On the other hand, weak
connectionism (or hybrid connectionism) encourages the incorporation of
both symbolic and subsymbolic processes: reaping the benefit of
connectionism while avoiding its shortcomings. There have been many
theoretical and practical arguments for hybrid connectionism; see e.g. Sun
(1994).

In light of this background, how do we answer the question of whether
there can be "rule reading" in connectionist models?  Here is my
three-fold answer: (1) Psychologically speaking, the answer is yes. For
example, Smith, Langston and Nisbett (1992), Hadley (1990), Sun
(1995) presented strong cases for the existence of EXPLICIT rules in
psychological processes, based on psychological experimental data,
theoretical arguments, thought experiments, and  cognitive modeling. If
connectionist models are to become general cognitive models, they
should be able to handle the use and the learning of such explicit rules
too.  (2) Methodologically speaking, the answer is also yes. Connectionism
is merely a methodology, and not an exclusive one --- to be used to the
exclusion of other methodologies. Considering
our lack of sufficient neurobiological understanding at present, a
dogmatic or strict view on "neural plausibility" is not warranted. (3)
Computationally speaking, the answer is again yes.  By now, we know that
we can implement "rule reading" in many ways computationally, e.g., (a)
in symbolic forms (which leads to hybrid connectionism), or (b) in
connectionist forms (which leads to connectionist implementationalism).
Some such implementations may have as good neurobiological  plausibility
as any other connectionist models.

The key point is to remove the straitjacket of strong connectionism: we
should advocate (1) methodological connectionism, treating it as one
possible approach, not to the exclusion of others, and (2) weak
connectionism (hybrid connectionism), encouraging the incorporation of
non-NN representations and processes.  Clearly, the death knell of
strong connectionism has been sounded.  It's time for a more open-minded
framework in which we conduct our research.  My own group has been
conducting research in this way for more than a decade.  For the work by
my group along these lines, see http://www.cecs.missouri.edu/~rsun

--------------------------------------------------------------------------

JOHN TAYLOR

Getting a Connectionist Network to Explain its Rules.

JG Taylor, Dept of Mathematics, King's College, Strand, London WC2R2LS,
UK. email: john.g.taylor at kcl.ac.uk

Accepting that rules can be extracted from trained neural networks by a
range of techniques, I first addressed the problem of how this might occur
in the brain. There do not seem to be similar rule extractors of the
connection strengths there. In the brain there are two extremes: implicit
and explicit rules. Implicit skills, which implement rules in motor
responses, are not based on an explicit knowledge of the rules implemented
by the neural networks of the motor cortex. It is in explicit rules, as
supported by language and the inductive/deductive process, that rules are
created by human experience.

Turning to language, I described what is presently known from brain
imaging about the coding of semantics and syntax in sites in the brain.
These both make heavy use of the frontal recurrent cortico-thalamo-NRT
circuits, and it can be conjectured to be the architecture used to build
phrase structure analysers (through suitable recurrence), guided by
'virtual actions'. These are the basis for syntactic rules and also rules
for causal inference, as seen in what is called 'predictive coding' in
frontal lobes in monkeys. Thus rule development is undoubtedly supported
by such architectures and styles of processing. There is no reason why it
cannot ultimately be implemented in a connectionist framework.

Such a methodology would enable a neural system to learn to talk about,
and develop, its own explicit rules (although never the implicit ones),
and hence solve part of the problem raised by Asim Roy. Implicit rules can
be determined by the rule-extraction methods I noted at the beginning.

--------------------------------------------------------------------------

PAUL WERBOS

Asim Roy has asked us to address many very different, though related
issues. A condensed response: (1) A completely seamless interface between
rule-based "white box" descriptions and neural net learning techniques
already exists. I have a patent on "elastic fuzzy logic" (see Gupta and
Sinha eds); Fukuda and Yaeger have effectively applied essentially the
same method. But I would not call it a "neural network" method exactly
(even though neural net learning is used) because I do not believe that
real organic brains contain that kind of hardwired readout device. (2)
Where, in fact, DOES symbolic reasoning arise in biology? Some of my views
are summarized in the book "The Evolution of Human Intelligence" (see
www.futurefoundation.org). Curiously, there is a connection to a previous
panel Asim organized, addressing memory-based learning. In the most
primitive mammal brains, I theorized back in 1977 that there is an
interplay between two levels of learning: (1) a slow-learning but powerful
system which generalizes from current experience AND from memory; (2) a
fast-learning but
poorly-generalizing heteroassociative memory system. (e.g. See my chapter
in Roychowdhury et al, Theoretical Advances...). At IJCNN2000, Mike Denham
described the  "what when where" system of the brain. I theorize that some
(or all) primates extended the heteroassociative memory system, to include
"who what when where," using mirror neurons to provide an encoding of the
experience of other primates. In other words, even monkeys probably have
the power to generalize from the (directly observed) experience of OTHER
MONKEYS,
which they can reconstruct without any higher reasoning faculties. I
theorize that human intelligence is basically an extension of this
underlying capability, based on a biological system to reconstruct
experience of others communicated first by dance (as in the Bushman
dance), and later by "word movies." Symbolic reasoning ala Aristotle and
Plato, and propositional language ala English, are not really biologically
based as such, but learned based on modern culture, and rooted in the
biology which supports dance and word movies. If there can be such a thing
as truly biologically rooted symbolic/semiotic intelligence, we aren't
there yet; modern humanity is only a kind of missing link, a halfway house
between other primates and that next level. (For more detail, see the last
chapter of my book "The Roots of Backpropagation," Wiley 1994, which also
includes the first published work on true backpropagation, and the chapter
in Kunio Yasue et al eds, ...Consciousness... forthcoming from John
Benjamins.)

--------------------------------------------------------------------------

STEFAN WERMTER

Linking Neuroscience, Connectionism and Symbolic Processing

Does connectionism permit reading of rules from a network? There are at
least two main answers to this question. Researchers from Knowledge
Engineering and Representation would argue that it has been done
successfully, that it is useful if it helps to understand the networks, or
they might not even care whether reading is part of connectionism or
external symbolic processes. Connectionist representations can be
represented as symbolic knowledge at higher abstraction levels. Symbolic
extraction may not be part of connectionism in the strict sense, but
symbolic knowledge can emerge from connectionist networks. This may lead
to a better understanding and also to the possibility for combining
connectionist knowledge with symbolic knowledge sources.

Researchers from Cognitive Science and Neuroscience, on the other hand,
would argue that for real neurons in the brain there is no symbolic reading
mechanism, that symbolic processing emerges from the dynamics of spreading
activation in cortical cell assemblies, and that there may be rule-like
behavior emerging from neural elements. It would be useful in
the future to explore constraints and principles from cognitive
neuroscience for building more plausible neural network architectures
since there is a lot of new evidence from fMRI, EEG and MEG experiments.
Furthermore, new computational models of spiking neural networks, pulse
neural networks, cell assemblies have been designed which promise to link
neuroscience with connectionist and even symbolic processing. We are
leading the exploration of such efforts in the EmerNet project
www.his.sunderland.ac.uk/emernet/. Computational models can benefit from
emerging vertical hybridization and abstraction: 1. The symbolic
abstraction level is useful for abstract reasoning but lacks preferences.
2. The connectionist knowledge has preferences but still lacks
neuroscience reality. 3. Neuroscience knowledge is biologically plausible
but architecture and dynamic processing are computationally extremely
complex. Therefore we argue for an integration of all three levels for
building neural and intelligent systems in the future.

**************************************************************************
BIOSKETCHES
--------------------------------------------------------------------------

DAN LEVINE

Web site:	www.uta.edu/psychology/faculty/levine

--------------------------------------------------------------------------

LEE GILES

http://www.neci.nj.nec.com/homepages/giles/html/bio.html

--------------------------------------------------------------------------

NOEL SHARKEY

Noel Sharkey is an interdisciplinary researcher. Currently a full
Professor in the Department of Computer Science at the University of
Sheffield, he holds a Doctorate in Experimental Psychology, is a Fellow of
the British Computer Society, a Fellow of the Institution of Electrical
Engineers, and a member of the British Experimental Psychology Society. He
has worked as a research associate in Computer Science at Yale University,
USA, with the AI and Cognitive Science groups and as a senior research
associate in psychology at Stanford University, USA, where he has also
twice served as a visiting assistant professor. His other jobs have
included a "new blood" lecturship (English assistant professor) in
Language and Linguistics at Essex University, U.K. and a Readership in
Computer Science at Exeter. His editorial work includes Editor-in-Chief of
the journal Connection Science, editorial board of Robotics and Autonomous
Systems, and editorial board of AI Review. He was Chairman of the IEE
professional group A4 (AI) and founding chairman of IEE professional group
A9 (Evolutionary and Neural Computing). He has edited special issues on
modern developments in autonomous robotics for the journals Robotics and
Autonomous Systems, Connection Science, and Autonomous Robots. Noel's
intellectual pursuits are in the area of biologically inspired adaptive
robotics. In recent years Noel has been involved with the public
understanding of science, engineering, technology and the arts. 
He makes regular appearances on TV as judge and commentator of robot
competitions and is director of the Creative Robotics Unit at Magna (CRUM)
with projects in flying swarms of robots and in the evolution of
cooperation in collective robots. 
--------------------------------------------------------------------------

ALESSANDRO SPERDUTI

Alessandro Sperduti received his education from the University of Pisa,
Italy ("laurea" and Doctoral degrees in 1988 and 1993, respectively, both
in Computer Science).  In 1993 he spent a period at the International
Computer Science Institute, Berkeley, supported by a postdoctoral
fellowship.  In 1994 he moved back to the Computer Science Department,
University of Pisa, where he was Assistant Professor, and where he
presently is Associate Professor.  His research interests include pattern
recognition, image processing, neural networks, hybrid systems. In the
field of hybrid systems his work has focused on the integration of
symbolic and connectionist systems.  He contributed to the organization of
several workshops on this subject and has also served on the program
committees of conferences on Neural Networks. Alessandro Sperduti is the
author or co-author of around 70 refereed papers mainly in the areas of
Neural Networks, Fuzzy Systems, Pattern Recognition, and Image Processing.
Moreover, he gave several tutorials within international schools and
conferences, such as IJCAI '97 and IJCAI '99.  He acted as Guest Co-Editor
of the IEEE Transactions on Knowledge and Data Engineering for a special
issue on Connectionist Models for Learning in Structured Domains, and of
the journal Cognitive Systems Research for a special issue on Integration
of Symbolic and Connectionist Information Processing Systems.
--------------------------------------------------------------------------

RON SUN
Ron Sun is an associate professor of computer engineering and computer
science at the University of Missouri-Columbia.  He received his
Ph.D. in 1991 from Brandeis University. Dr. Sun's research interests center
around the study of intelligence and cognition, especially in the areas of
hybrid neural network models, machine learning, and connectionist
knowledge representation and reasoning. He is the author of over 100
papers, and has written, edited or contributed to 15 books, including
authoring the book Integrating Rules and Connectionism for Robust
Commonsense Reasoning and co-editing Computational Architectures
Integrating Neural and Symbolic Processes.  For his paper on models of
human reasoning, he received the 1991 David Marr Award from the Cognitive
Science Society.

He organized and chaired the Workshop on Integrating Neural and Symbolic
Processes, 1992, and the Workshop on Connectionist-Symbolic Integration,
1995, as well as co-chairing the Workshop on Cognitive Modeling, 1996 and
the Workshop on Hybrid Neural Symbolic Systems, 1998.  He has also been on
the program committees of the National  Conference on Artificial
Intelligence (AAAI-93, AAAI-97, AAAI-99), International Joint Conference
on Neural Networks (IJCNN-99 and IJCNN-2000), International Two-Stream
Conference on Expert Systems and Neural Networks, and other conferences,
and has been an invited/plenary  speaker for some  of them.

Dr. Sun is the editor-in-chief of Cognitive Systems Research (Elsevier).
He  also serves on the editorial boards of Connection Science, Applied
Intelligence, and Neural Computing Surveys.  He was a guest editor of a
special issue of the journal Connection Science and a special issue of
IEEE Transactions on Neural Networks, both  on hybrid intelligent models.
He is a senior member of IEEE.

--------------------------------------------------------------------------

JOHN TAYLOR

Trained as a theoretical physicist in the Universities of London  and
Cambridge. Positions in Universities in the UK, USA, Europe in physics and
mathematics. Created the Centre for Neural Networks at King's College,
London, in 1990, and is still its Director. Appointed Professor of
Mathematics, King's College London in 1972, and became Emeritus Professor
of Mathematics of London University in 1996. Was Guest Scientist at the
Research Centre in Juelich, Germany, 1996-8, working on brain imaging and
data analysis. Has been consultant in Neural Networks to several
companies. Is presently Director of Research on Global Bond products and
Tactical Asset Allocation for a financial investment company involved in
time series prediction and European Editor-in-Chief of the journal Neural
Networks. He was President of the International Neural Network Society
(1995) and the European Neural Network Society (1993/4). He is also editor
of the series Perspectives in Neural Computing. Has been on the Advisory
Board of the Brain Sciences Institute, RIKEN in Tokyo since 1997.

Has published over 500 scientific papers (in theoretical physics,
astronomy, particle physics, pure mathematics, neural networks, higher
cognitive processes, brain imaging, consciousness), authored 12 books,
edited 13 others, including the titles When the Clock Struck Zero (Picador
Press, 1994), Artificial Neural Networks (ed, North-Holland, 1992), The
Promise of Neural Networks (Springer, 1993), Mathematical Approaches to
Neural Networks (ed, Elsevier, 1994), Neural Networks (ed, A Waller, 1995)
and The Race for Consciousness (MIT Press, 1999).  
 
 Started research in neural networks in 1969. Present research interests
are: financial and industrial applications; dynamics of learning processes
and multi-state synapses; stochastic neural chips and their applications
(the pRAM chip); brain imaging and its relation to neural networks; neural
modelling of higher cognitive brain processes, including consciousness.
Has research projects funded by the EC (on building a hybrid
symbolic/subsymbolic processor), by British Telecom (on Intelligent
Agents) and by EPSRC (on building a Neural Network Language System to
learn syntax and semantics).

--------------------------------------------------------------------------

STEFAN WERMTER

Stefan Wermter is Full Professor of Computer Science and Research Chair in
Intelligent Systems at the University of Sunderland, UK. He is an
Associate Editor of the journal Connection Science and serves on the
Editorial Board of the journals Cognitive Systems Research, and Neural
Computing Surveys. He has written or edited three books as well as more
than 70 articles. He is also Coordinator of the international EmerNet
network for neural architectures based on neuroscience and head of the
intelligent system group http://www.his.sunderland.ac.uk/.
He holds a Diplom from Dortmund University, an MSc from the University of
Massachusetts, and a PhD and Habilitation from Hamburg University.

Stefan Wermter's research interests are in Neural Networks, Hybrid Systems,
Cognitive Neuroscience, Natural Language Processing, Artificial
Intelligence and Bioinformatics. The motivation for this research is
twofold: How is it possible to bridge the large gap between real neural
networks in the brain and high level cognitive performance? How is it
possible to build more effective systems which integrate neural and
symbolic technologies in hybrid systems? Based on this motivation Wermter
has directed and worked on several projects, e.g. on hybrid
neural/symbolic systems for text processing and speech/language
integration. Furthermore, he has research interests in Knowledge
Extraction from Neural Networks, Interactive Neural Network Agents,
Cognitive Neuroscience, Fuzzy Systems as well as  the Integration of
Speech/Language/Image Processing.
--------------------------------------------------------------------------



