There is no use for bad data
Petar Simic
simic at kastor.ccsf.caltech.edu
Wed Jan 2 14:02:55 EST 1991
Connectionism and NN made a bold guess about what the zeroth-order
approximation in modeling neural information processing should be, and I
think that the phrase 'biological plausibility' (or 'neural style') is
meant to indicate (1) fine-grained parallel processing, and (2) a
reasonable, although no doubt crude, model neuron. While I have heard that
some not-only-IN-PRINCIPLE intelligent people think that parallelism is
'JUST a question of speed', I think that biological plausibility, so
defined, is not to be discounted in discussing the differences between the
two approaches to modeling intelligence, AI and Connectionism/NN&ALL THAT.
Perhaps the phrase 'biological plausibility' should be changed to
'physical plausibility', indicating that Connectionist/NN models have a
natural implementation in physical hardware.
This is not to say that Connectionism/NN&ALL THAT should pose as
biology. It doesn't need to ---it has its own subject of study (which in
combination with traditional AI makes what one may call 'modern AI'), and
it need not be theology (J.B.) or pure math, provided it is willing to
impose on itself some of the hard constraints of either engineering or
natural science (or both). The engineering constraint is that the
understanding of intelligence should not be too far from building it (in
software or physical hardware). This is not an unfamiliar constraint for
connectionists, and it is also a healthy part of traditional AI, since
both make computational models, and as they move from toy problems to
larger scales, the concepts and representations they develop should be
softly constrained by the plausibility of implementation today, or
perhaps tomorrow. The natural science constraint is in trying to
reverse-engineer the real brain (J.B.), but I would suggest that this
constraint is 'softer' than what seems to be suggested by Jim Bower, and
that the reason for this lies not in our underestimating the complexity
of the brain, but in the theoretical depth of the reverse-engineering
problem.
I think that at the level of detailed modeling of specific neural
circuits, Connectionism/NN provides a set of tools which may or may not
be useful, depending on the problem at hand and on how these tools are
applied. The interesting question, therefore, is not how useful
Connectionism/NN is to neurobiology ---how useful is a sharp pencil or a
PC for neurobiology? --- but how useful neurobiology, as practiced today,
is to the study of information processing phenomena across the levels, in
natural (and, why not, artificial) systems. I would think that the
present situation, in which Connectionist/NN models ignore many of
the 'details', should be a source of challenge to theoretically minded
neurobiologists ---especially to those who think that the theoretical
tools needed to describe the transition between the levels are
just a matter of reading a chapter from some math textbook ---and
that they should come up with computational models and convince
everybody that a particular detail does matter, in the sense that it is
'visible' at the higher information processing level and can account
for some useful computational phenomena which a simplified model cannot,
or can only in a more complicated way.
Modeling firmly rooted in biological fact, if too detailed, might not be
directly useful as a modeling component at a higher level, except as a good
starting point for simplification. That simplifications are essential,
not DESPITE but BECAUSE OF the complexity of the brain, should be
self-evident to all who believe that an understanding of the brain is
possible. What is not self-evident is which details should not be thrown
out, and how they are related to the variables useful at the higher
information processing level.
Even in simple systems such as the ones studied in physics, where one
knows the microscopic equations exactly, the question of continuity
between the correct theoretical descriptions at two different levels is a
very deep one. Just "..a cursory look at the brain ..." (J.B.) should
not be enough to disqualify, as wrong, simplified phenomenological models
which are meant to describe phenomenology at a higher (coarser) level. For
example, if you want to model the heat distribution in some material,
you use the macroscopic heat equation, and the basic variable there (the
temperature) is related, but in a rather unobvious way, to the moving
particles and their collisions, which are the microscopic details of which
heat is made. Yet, the heat equation is the correct phenomenological
description of the 'heat processing phenomenology'.
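To make the example concrete (this is standard textbook physics, not
specific to any particular material), the macroscopic description and the
microscopic meaning of its basic variable can be written side by side:

    \partial T / \partial t = \alpha \nabla^2 T
        (macroscopic heat equation, \alpha the thermal diffusivity)

    (3/2) k_B T = \langle (1/2) m v^2 \rangle
        (temperature as mean kinetic energy per particle, for a
         monatomic ideal gas)

Nothing in the first equation hints at colliding particles; the bridge
between the two levels is the nontrivial work of statistical mechanics.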
This is not to say that the very simplified connectionist/NN models are
good, but just a warning that theoretical modeling across different
levels of description is as hard as it is fascinating, that it needs to be
attacked simultaneously from all ends, and that more often than not one
finds that the relationship between the microscopic variables and the
variables useful at the next level is neither obvious nor simple. I would
suggest that if we are to understand the brain and the connection between
its levels, we should not underestimate the theoretical depth of the
problem. While it is definitely a good idea for a theorist to know well
the phenomenology he is theorizing about, and it is an imperative for him,
in this field, to build computational models, I think that it is somewhat
silly to condition the relevance of his work on his ability to do
neurobiological experiments.
There is no use for bad data.