No subject
Mon Jun 5 16:42:55 EDT 2006
I presume that the panel organizers really want to focus on these
two assumptions of connectionist learning rather than looking at
other assumptions that are embedded in the whole connectionist
approach. We should recognize, however, that the two assumptions
listed above do not begin to exhaust the theoretical commitments
that are bound up in most connectionist models.
Some of these other assumptions are:
1) Place-coding.
Inputs to the network are encoded by spatial patterns of
activation of elements in the input layer rather than in the
time-structure of the inputs (i.e. rate-place or labelled-line
neural codes rather than temporal pattern codes). To the extent
that input patterns are encoded temporally, in the standard view a
time-delay layer is used to convert this into a place pattern
that is handled by a conventional network. However, examples of
fine time structure have been found in many parts of the brain
where there has been a concerted effort to look for it.
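The time-delay layer mentioned above can be made concrete with a toy sketch. This is my own illustration (the function name and parameters are invented for the example, not taken from the post): a tapped delay line turns the time structure of a binary spike train into a spatial ("place") pattern that a conventional network could then process.

```python
# Toy sketch of a time-delay (tapped delay line) layer: the last n_taps
# samples of a binary spike train become a spatial activation pattern.
def delay_layer_snapshot(spike_train, n_taps):
    """Return the most recent n_taps samples as a place-coded vector.

    spike_train: list of 0/1 samples, most recent last.
    """
    padded = [0] * n_taps + list(spike_train)
    return padded[-n_taps:]

# The temporal pattern 1,0,1,1 (most recent sample last) seen through
# a 3-tap delay line becomes a place-coded input vector:
print(delay_layer_snapshot([1, 0, 1, 1], n_taps=3))  # [0, 1, 1]
```

The point of the sketch is only that the time dimension has been traded for a spatial one; everything downstream of the delay line is conventional place-coded processing.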
2) Scalar signals.
There is one signal conveyed per element (no multiplexing of
multiple signals onto the same lines), so that inputs to and
outputs from each element are scalar quantities (1 and 2 are
closely related).
3) Synchronous or near-synchronous operation.
The inputs are summed together to produce an output signal for
each time step. The standard neurobiological assumption is that
neurons function as rate-integrators with integration times
comparable to the time-steps of the inputs rather than
coincidence detectors. However, there are many examples of
coincidence detection in the brain, and there has been an ongoing debate
about whether the cortical pyramidal cell is best seen in terms
of an integrating element or as a coincidence detector.
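The contrast between the two views of the pyramidal cell can be sketched in toy form. This is my own illustration, with invented thresholds and time windows: an integrator cares only about how much input arrives in a time step, while a coincidence detector cares about how tightly in time the inputs cluster.

```python
# Rate-integrator view: sum the inputs over the time step and
# compare against a threshold.
def integrator_fires(inputs, threshold):
    return sum(inputs) >= threshold

# Coincidence-detector view: fire only if n_required spikes arrive
# within `window` of one another (times in ms).
def coincidence_fires(spike_times, window, n_required):
    times = sorted(spike_times)
    for i in range(len(times) - n_required + 1):
        if times[i + n_required - 1] - times[i] <= window:
            return True
    return False

# Same total input, different verdicts: three spikes spread over 10 ms
# satisfy the integrator but not a 1 ms coincidence window.
print(integrator_fires([1, 1, 1], threshold=3))         # True
print(coincidence_fires([0.0, 5.0, 10.0], 1.0, 3))      # False
print(coincidence_fires([0.0, 0.4, 0.8], 1.0, 3))       # True
```

The two elements thus compute different functions of the same spike train, which is why the integrator-vs-coincidence-detector debate matters for what a network of such elements can represent.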
4) Fan-out of the same signals to all target elements.
Connection weights may differ, but the same signals are being
fed to all of their targets. The standard neurobiological assumption
is that impulses of spike trains invade all of the daughter
branches of axonal trees. However, there may be conduction blocks
at branchpoints, meaning that some spikes will not propagate
into some terminals (so that the targets do not receive all of
the spikes that were transmitted through the axon trunk).
There are examples of this in the crayfish.
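The effect of conduction block on fan-out can be sketched as follows. This is my own toy illustration (the blocking probability and function names are invented, and real branch-point block is activity-dependent rather than random): each daughter branch may fail to carry some trunk spikes, so different targets see different subsets of the same spike train.

```python
import random

# Axonal fan-out with possible conduction block at branchpoints:
# each spike in the trunk independently fails to enter a given
# branch with probability p_block.
def deliver_spikes(trunk_spikes, n_branches, p_block, rng):
    """Return, per branch, the trunk spikes that survived the block."""
    delivered = []
    for _ in range(n_branches):
        delivered.append([t for t in trunk_spikes if rng.random() >= p_block])
    return delivered

rng = random.Random(0)
branches = deliver_spikes([1.0, 2.0, 3.0, 4.0],
                          n_branches=3, p_block=0.3, rng=rng)
# With p_block > 0, the branches generally receive different subsets
# of the four trunk spikes, violating the assumption that the same
# signal fans out to all targets.
```

With p_block = 0 the standard assumption is recovered: every branch carries the full trunk spike train.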
To the extent that one or more of these assumptions are altered,
one can potentially get neural architectures with different
topologies, and with potentially different capacities.
I'm not one to quibble over what the topic of a discussion should
be -- that's up to the participants. I'd just like to suggest
that if we (a very general and inclusive "we") are going to
"reconsider the foundations" of connectionism, we might think
more broadly about ALL the assumptions, tacit and explicit,
that are involved.
My own sense is that, in the absence of a much deeper understanding of
exactly what kinds of computational operations are being carried
out by cortical structures and how these are carried out, we
should probably avoid labels like "brain-like" that give the
false sense that we understand more than we do about how the
brain works.
If one is seriously interested in how "brain-like" a given
network architecture is, then one needs to get real, detailed
neuroanatomical and/or neurophysiological data and make the
comparison more directly. Comparisons with what's in the
textbooks just won't do. Things get messy very fast when the
neural responses are phasic, nonmonotonic, and produced by a
multitude of different kinds of stimuli.
Peter Cariani
Peter Cariani, Ph.D.
Eaton Peabody Laboratory
Massachusetts Eye & Ear Infirmary
243 Charles St, Boston MA 02114
tel (617) 573-4243
FAX (617) 720-4408
email peter at epl.meei.harvard.edu
============================================================
Asim Roy's note to Peter Cariani:
If I understand you correctly, you are saying that we need to
broaden the set of questions. I am sure this issue will come
up as we grapple with these questions. And one of the issues
from the artificial neural network point of view is how exactly
do you replicate the detailed biological processes. You
certainly want to extract the clever biological ideas, but at
some point, say 50 years from now, we might do better than
biology with our artificial systems. And we might do things
differently than in biology. An example is our flying machines.
We do better than the birds out there. The functionality is
there, but we do it differently and do it better. I think the
point I am making is that we need not be tied to every biological
detail, but we certainly want to pick up the good ideas and then
develop them further. And in the end, we would have a system far
superior to the biological ones, but not exactly like it in all
the details.
==============================================================
Peter Cariani's reply to the above note:
Yes, I think the most important decisions we make in constructing
neural nets in the image of the brain concern which aspects
of the real biological system are functionally relevant and
which ones are not. I definitely
agree with you that every biological detail is not important,
and I myself am trying to work out highly abstracted schemes
for how temporally-structured signals might be processed by
arrays of coincidence detectors and delay lines. (Usually
the standard criticisms from biologists are that not enough
of the biological details are included.) What I am saying,
however, is that the basic functional organization that is
assumed by the standard connectionist models may not be the
right one (or it may not be the only one or the canonical
one). There are many aspects of connectionist models that
really don't seem to couple well to the neurophysiology,
so maybe we should go back and re-examine our assumptions
about neural coding. I myself think that temporal pattern
codes are far more promising than is commonly thought,
but that's my idée fixe. (I could send you a review
that I wrote on them if you'd like).
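One minimal realization of the kind of scheme mentioned above (my own sketch, not Cariani's actual model; all names and parameters are invented for illustration) is an array of delay lines feeding coincidence detectors. Together they compute something like the autocorrelation of a spike train, making its temporal pattern explicit as a place pattern over delays.

```python
# Delay-line / coincidence-detector array: for each delay d, count
# coincidences between the spike train and a copy of itself delayed
# by d (times in ms). Peaks mark the train's periodicities.
def delay_coincidence_array(spike_times, delays, tolerance):
    counts = {}
    for d in delays:
        counts[d] = sum(
            1
            for t in spike_times
            for s in spike_times
            if abs((s + d) - t) <= tolerance
        )
    return counts

# A spike train with a 2 ms periodicity shows a peak at the 2 ms delay:
train = [0.0, 2.0, 4.0, 6.0]
print(delay_coincidence_array(train, delays=[1.0, 2.0, 3.0],
                              tolerance=0.1))
# {1.0: 0, 2.0: 3, 3.0: 0}
```

The output is itself a place-coded pattern indexed by delay, which is one way a temporally coded signal could be handed off to downstream processing.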
I definitely agree with you that once we understand the
functional organization of the brain as an information
processing system, then we will be able to build devices
that are far superior to the biological ones. My motto
is: "keep your hands wet, but your mind dry" -- it's
important to pay attention to the biology, to not project
one's preconceptions onto the system, but it's equally
important to keep one's eyes on the essentials, to avoid
getting bogged down in largely irrelevant details. I've
worked on both adaptive systems and neurophysiology, and
I know that the dry people tend to get their neural models
from textbooks (that present an overly simplified and uncritical
view of things), and the wet people tend not to say much about
what kinds of general mechanisms are being used (they look
to the NN people for general theories). There is a cycling of
ideas between the two that we need to be somewhat wary of --
many neuroscientists begin to believe that information processing
must be done in the ways that the neural networks people suggest,
and consequently, they cast the interpretation of the data in
that light. The NN people then use the physiological evidence
to suggest that their models are "brain-like". Physiological
data gets shoehorned into a connectionist account (which shapes,
in particular, what aspects of neural activity people decide to go
out and observe), and the connectionist account is then held
up as an adequate theory of how the brain works. There
are very few strong connectionist accounts that I know of that
really stand up under scrutiny -- that are grounded in the data,
that predict important aspects of the behavior, and that cannot
be explained through other sets of assumptions. In these
discussions you really need both the physiologists, who
understand the complexities and limitations of the data, and
the theorists, who understand the functional implications, to
interact strongly with each other.
So, anyway, I wish you the best of luck with your session.
==============================================================