Mathematical Tractability of Neural Nets

slehar@bucasb.bu.edu
Thu Mar 1 13:49:14 EST 1990


AAAAH! Now I understand the source of the confusion!  Your statement...

 "It is the   subsequent  analysis of  function corresponding to  this
  **linguistic theory** which underlies  the development of the neural
  analysis of  the  brain areas at what  you consider the **functional
  level**."    (**my emphasis**)

reveals that you and I are referring to altogether different types of
neural models.  You are doubtless referring to the connectionist
variants of the Chomsky-type linguistic models, which represent
language in abstract and rigidly functional and hierarchical terms.
If you think that such models are excessively rigid and abstract, then
you and I are in complete agreement.

The neural models to which I refer are more in the Grossberg school of
thought.  Such models are characterized by firm grounding in
quantitative neurological analysis, expression in dynamical-systems
terms, and confirmation by psychophysical and behavioral data.  In
other words, these models adhere closely to known biological and
behavioral facts.

For instance, the Grossberg neural model for vision [3],[4],[5] (which
is more my area of expertise) is built of dynamic neurons defined by
differential equations derived from the Hodgkin-Huxley equations (from
measurements of the squid giant axon) and also from behavioral data
[1].
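To give a concrete flavor of what a "dynamic neuron" means here, the
following is a minimal sketch (in Python, with parameter values of my
own choosing, not Grossberg's) of the shunting membrane equation that
underlies these models:

```python
# Grossberg's shunting equation (a reduced form of Hodgkin-Huxley
# membrane dynamics), in my own notation:
#   dx/dt = -A*x + (B - x)*E - (x + C)*I
# E = excitatory input, I = inhibitory input, A = passive decay rate,
# B and -C = upper and lower saturation bounds on the activity x.

def shunting_step(x, E, I, A=1.0, B=1.0, C=0.5, dt=0.01):
    """One Euler step of the shunting equation."""
    return x + dt * (-A * x + (B - x) * E - (x + C) * I)

def settle(E, I, steps=5000):
    """Integrate from rest to an approximate equilibrium."""
    x = 0.0
    for _ in range(steps):
        x = shunting_step(x, E, I)
    return x

# The multiplicative (shunting) terms keep the activity bounded in
# (-C, B) no matter how large the inputs grow:
print(settle(E=100.0, I=0.0))    # close to the upper bound B = 1.0
print(settle(E=0.0, I=100.0))    # close to the lower bound -C = -0.5
```

The point of the shunting form, as opposed to a simple additive
neuron, is exactly this automatic gain control: inputs drive the cell
toward fixed ceilings rather than toward unbounded activity.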

The topology and functionality of the model are likewise based on
neurological observation (such as Hubel & Wiesel's single-cell
recordings in the visual cortex) together with psychophysical
evidence, particularly visual illusions such as perceptual grouping
[10], color perception [6], neon color spreading [7],[8], image
stabilization experiments [9] and others.  The model duplicates and
explains such illusory phenomena, as well as elucidating aspects of
natural vision processing.

In their book VISUAL PERCEPTION: THE NEUROPHYSIOLOGICAL FOUNDATIONS
(1990), Spillmann & Werner say: "Neural models for cortical Boundary
Contour System and Feature Contour System interactions have begun to
be able to account for and predict a far reaching set of
interdisciplinary data as manifestations of basic design principles,
notably how the cortex achieves a resolution of uncertainties through
its parallel and hierarchical interactions."

The point is that this class of models is not based on arbitrary
philosophising about abstract concepts, but rather on hard physical
and behavioral data, and Grossberg's models have on numerous occasions
made behavioral and anatomical predictions which were subsequently
confirmed by experiment and histology.  Such models therefore cannot
be challenged on purely philosophical grounds, but only on whether
they predict the behavioral data, and whether they are neurologically
plausible.  In this sense the models are scientifically testable,
since they make concrete predictions of how the brain actually
processes information, not vague speculations on how it might do so.

So,  I maintain my original conjecture  that  the time is  ripe for  a
fusion of knowledge from the diverse  fields of neurology, psychology,
mathematics and  artificial intelligence, and I  maintain further that
such a fusion is already taking place.


REFERENCES
==========
Not all of these are pertinent to the discussion at hand (they were
copied from another work), but I leave them in to give you a starting
point for further research if you are interested.


[1] Stephen Grossberg THE QUANTIZED GEOMETRY OF VISUAL SPACE The
Behavioral and Brain Sciences (1983) 6, 625-657, Cambridge University
Press.  Section 21, Reflectance Processing, Weber Law Modulation, and
Adaptation Level in Feedforward Shunting Competitive Networks.  In
this section Grossberg examines the dynamics of a feedforward
on-center off-surround network of shunting neurons and shows how such
a topology performs a normalization of the signal, i.e. a
factorization of pattern and energy, preserving the pattern and
discarding the overall illumination energy.  Reprinted in THE ADAPTIVE
BRAIN, Stephen Grossberg Editor, North-Holland (1987), Chapter 1, Part
II, Section 21.
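To illustrate the normalization property just described, here is a toy
calculation (my own notation and parameter values) of the equilibrium
activities of such a feedforward on-center off-surround shunting
network:

```python
# Equilibrium of a feedforward on-center off-surround shunting
# network: each node i receives its own input I_i excitatorily and
# the other inputs inhibitorily, which gives the steady state
#   x_i = B * I_i / (A + sum_k I_k)
# so activity depends on the RATIOS of the inputs, not their scale.

def equilibrium(inputs, A=1.0, B=1.0):
    total = sum(inputs)
    return [B * Ii / (A + total) for Ii in inputs]

dim = [1.0, 2.0, 3.0]              # a pattern under dim illumination
bright = [10 * v for v in dim]     # same pattern, 10x the energy

x_dim = equilibrium(dim)
x_bright = equilibrium(bright)

# The relative activities (the "pattern") match in both conditions,
# while the total activity saturates near B instead of growing 10x
# (the overall illumination "energy" is discarded).
print([v / sum(x_dim) for v in x_dim])
print([v / sum(x_bright) for v in x_bright])
print(sum(x_dim), sum(x_bright))
```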

[2] Gail Carpenter    &   Stephen  Grossberg  A   MASSIVELY   PARALLEL
ARCHITECTURE FOR A  SELF-ORGANIZING NEURAL PATTERN RECOGNITION MACHINE
Computer Vision,  Graphics, and Image  Processing  (1987), 37,  54-115
Academic Press, Inc.  This is a neural  network  model  of an adaptive
pattern classifier (Adaptive Resonance   Theory, ART  1)  composed  of
dynamic  shunting   neurons  with  interesting   properties of  stable
category formation while maintaining plasticity to new  pattern types.
This is achieved through the use of resonant  feedback  between a data
level layer and a feature  level layer.  The original  ART1 model  has
been  upgraded  by  ART2, which   handles   graded instead   of binary
patterns,   and    recently  ART3   which  uses   a   more elegant and
physiologically   plausible  neural mechanism    while   extending the
functionality to account for more data.  Reprinted in  NEURAL NETWORKS
AND  NATURAL INTELLIGENCE, Stephen  Grossberg Editor, MIT Press (1988)
Chapter 6.
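As a rough illustration of the resonance-and-reset idea, here is a
much-simplified toy version of ART1-style category formation (fast
learning on binary patterns; the helper names and parameter values
are my own, not Carpenter & Grossberg's):

```python
# Much-simplified ART1-style sketch: binary patterns, fast learning.

def AND(a, b):
    """Component-wise binary intersection of two patterns."""
    return [x & y for x, y in zip(a, b)]

def art1(patterns, rho=0.7, beta=0.5):
    """rho = vigilance in [0,1]; higher rho -> finer categories."""
    weights = []                       # one prototype per category
    labels = []
    for I in patterns:
        # Rank categories by the choice function |I^w| / (beta+|w|).
        order = sorted(range(len(weights)),
                       key=lambda j: -sum(AND(I, weights[j])) /
                                      (beta + sum(weights[j])))
        for j in order:
            match = sum(AND(I, weights[j])) / sum(I)
            if match >= rho:           # vigilance test: resonance
                weights[j] = AND(I, weights[j])   # refine prototype
                labels.append(j)
                break
        else:                          # every category reset: new one
            weights.append(list(I))
            labels.append(len(weights) - 1)
    return labels, weights

pats = [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 0, 1, 1]]
labels, protos = art1(pats)
print(labels)    # first two patterns share a category, third is new
```

Raising the vigilance rho forces finer categories; lowering it
produces broader ones, while the learned prototypes remain stable
against new pattern types.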

[3] Stephen  Grossberg & Ennio  Mingolla NEURAL DYNAMICS OF PERCEPTUAL
GROUPING: TEXTURES, BOUNDARIES AND EMERGENT SEGMENTATIONS Perception &
Psychophysics (1985), 38 (2), 141-171.  This work  presents  the BCS /
FCS model with   detailed psychophysical  justification  for the model
components and computer simulation of the BCS.

[4]  Stephen Grossberg &   Ennio Mingolla NEURAL DYNAMICS   OF SURFACE
PERCEPTION:  BOUNDARY  WEBS,  ILLUMINANTS,    AND  SHAPE-FROM-SHADING.
Computer Vision, Graphics   and Image Processing  (1987) 37,  116-165.
This model  extends the BCS  to explore its response to   gradients of
illumination.  It is mentioned here because of an elegant modification
of the second competitive stage that was utilized in our simulations.

[5] Stephen Grossberg & Dejan Todorovic  NEURAL DYNAMICS OF 1-D AND 2-D
BRIGHTNESS PERCEPTION Perception and Psychophysics (1988) 43, 241-277.
A  beautifully  lucid summary  of  BCS / FCS  modules with 1-D and 2-D
computer  simulations  with   excellent graphics  reproducing  several
brightness perception   illusions.  This  algorithm    dispenses  with
boundary completion,  but in return it simulates   the FCS  operation.
Reprinted  in   NEURAL  NETWORKS    AND NATURAL  INTELLIGENCE,  Stephen
Grossberg Editor, MIT Press (1988) Chapter 3.

[6] Land, E. H. THE RETINEX THEORY OF COLOR VISION Scientific American
(1977) 237, 108-128. A mathematical  theory  that predicts  the  human
perception  of  color  in Mondrian type  images, based   on  intensity
differences at boundaries between color patches.  
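The boundary-ratio idea can be shown with a toy 1-D computation (my
own drastic simplification, not Land's actual algorithm): keep only
the large log-luminance steps, which belong to reflectance edges, and
discard the small ones contributed by a smooth illumination gradient:

```python
import math

# Toy 1-D "retinex" sketch: luminance = reflectance * illumination.
reflectance = [0.2, 0.2, 0.8, 0.8, 0.4, 0.4]    # Mondrian patches
illumination = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]   # smooth gradient
luminance = [r * i for r, i in zip(reflectance, illumination)]

# Log-luminance differences between neighbors; keep only big steps.
logs = [math.log(v) for v in luminance]
steps = [b - a for a, b in zip(logs, logs[1:])]
THRESHOLD = 0.2
edges = [s if abs(s) > THRESHOLD else 0.0 for s in steps]

# Re-integrate the kept edges: the result tracks the reflectance
# pattern and is nearly independent of the illumination gradient.
lightness = [0.0]
for e in edges:
    lightness.append(lightness[-1] + e)
print([round(v, 2) for v in lightness])
```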

[7] Ejima, Y., Redies, C., Takahashi,  S., & Akita,  M. THE NEON COLOR
EFFECT IN THE EHRENSTEIN PATTERN Vision Research (1984), 24, 1719-1726

[8] Redies,  C., Spillmann, L., & Kunz,  K.   COLORED NEON  FLANKS AND
LINE GAP ENHANCEMENT Vision Research (1984) 24, 1301-1309

[9] Yarbus, A. L. EYE MOVEMENTS AND VISION New York: Plenum Press
(1967).  A startling demonstration of featural flow in human vision.

[10] Beck,  J.   TEXTURAL SEGMENTATION,  SECOND-ORDER STATISTICS,  AND
TEXTURAL ELEMENTS. Biological Cybernetics (1983) 48, 125-130

[11] Beck, J., Prazdny, K., & Rosenfeld, A.  A THEORY OF TEXTURAL
SEGMENTATION in J. Beck, B. Hope, & A. Rosenfeld (Eds.), HUMAN AND
MACHINE VISION.  New York: Academic Press (1983)

[12] Stephen Grossberg SOME PHYSIOLOGICAL AND BIOCHEMICAL CONSEQUENCES
OF  PSYCHOLOGICAL  POSTULATES  Proceedings of  the National Academy of
Sciences (1968) 60, 758-765.  Grossberg's original formulation of the
dynamic   shunting   neuron   as derived   from   psychological    and
neurobiological  considerations and subjected to rigorous mathematical
analysis.  Reprinted  in STUDIES  OF MIND AND BRAIN Stephen Grossberg,
D. Reidel Publishing (1982) Chapter 2.

[13] David Marr VISION Freeman & Co. (1982).  In a remarkably lucid and
well illustrated book Marr presents a theory of  vision which includes
the Laplacian operator as the front-end feature extractor.  In chapter
2  he shows  how this  operator can  be  closely  approximated  with a
difference of Gaussians.
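As a quick numerical illustration (my own toy version, not Marr's
code), a 1-D difference of Gaussians with the sigma ratio of about 1.6
that Marr recommends yields the familiar center-surround operator:

```python
import math

# 1-D difference of Gaussians (DoG) approximating the (negative)
# Laplacian-of-Gaussian operator; sigma ratio ~1.6 as in Marr.

def gaussian(x, sigma):
    return (math.exp(-x * x / (2 * sigma * sigma)) /
            (sigma * math.sqrt(2 * math.pi)))

def dog(x, sigma=1.0, ratio=1.6):
    return gaussian(x, sigma) - gaussian(x, ratio * sigma)

kernel = [dog(x) for x in range(-8, 9)]

# Center-surround shape: positive at the center, negative flanks,
# and near-zero total weight, so flat regions give no response.
print(kernel[8])       # center value, positive
print(kernel[5])       # flank value, negative
print(sum(kernel))     # approximately zero
```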

[14] Daugman, J. G., COMPLETE DISCRETE 2-D GABOR TRANSFORMS BY NEURAL
NETWORKS FOR IMAGE ANALYSIS AND COMPRESSION, IEEE Trans. Acoustics,
Speech, and Signal Processing (1988) 36 (7), 1169-1179.
Daugman presents the Gabor filter, the product of an
exponential and  a  trigonometric term,  for extracting  local spatial
frequency  information  from images;  he shows   how such filters  are
similar  to    receptive  fields mapped   in  the  visual  cortex, and
illustrates their use in feature extraction and image compression.
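A toy 1-D version (my own parameter choices) shows the structure of
the filter and its frequency selectivity:

```python
import math

# 1-D Gabor kernel: a Gaussian envelope (the exponential term)
# multiplying a cosine carrier (the trigonometric term), tuned to
# spatial frequency f.

def gabor(x, sigma=2.0, f=0.25, phase=0.0):
    envelope = math.exp(-x * x / (2 * sigma * sigma))
    carrier = math.cos(2 * math.pi * f * x + phase)
    return envelope * carrier

kernel = [gabor(x) for x in range(-6, 7)]

# Correlate the kernel with a matched sinusoid vs. a flat signal:
# the filter responds strongly only near its preferred frequency.
matched = sum(gabor(x) * math.cos(2 * math.pi * 0.25 * x)
              for x in range(-6, 7))
flat = sum(gabor(x) * 1.0 for x in range(-6, 7))
print(matched, flat)   # matched response large, flat response tiny
```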

[15]  Stephen Grossberg  CONTOUR ENHANCEMENT, SHORT  TERM  MEMORY, AND
CONSTANCIES IN REVERBERATING   NEURAL  NETWORKS Studies    in  Applied
Mathematics  (1973)  LII,  213-257.   Grossberg  analyzes the  dynamic
behavior of a recurrent competitive field of shunting  neurons, i.e. a
layer wherein the neurons are  interconnected with inhibitory synapses
and receive excitatory  feedback from themselves,  as a mechanism  for
stable short term memory storage.  He finds that the synaptic feedback
function is critical in determining the dynamics of the system: a
faster-than-linear function such as f(x) = x*x results in a
winner-take-all choice, such that only the maximally active node
survives and suppresses the others in the layer.  A sigmoidal function
can be tailored to produce either contrast enhancement or
winner-take-all, or any variation in between.
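A rough simulation (my own parameters, Euler integration, not
Grossberg's code) of such a recurrent competitive field shows the
winner-take-all behavior that his analysis predicts for a
faster-than-linear feedback signal:

```python
# Recurrent competitive field of shunting neurons: self-excitation
# through f(x), inhibition from every other node's f(x).  With the
# faster-than-linear signal f(x) = x*x, only the maximally active
# node survives.

def f(x):
    return x * x          # faster-than-linear feedback signal

def run(x, A=0.05, B=1.0, dt=0.02, steps=20000):
    x = list(x)
    for _ in range(steps):
        total = sum(f(v) for v in x)
        x = [v + dt * (-A * v + (B - v) * f(v) - v * (total - f(v)))
             for v in x]
    return x

final = run([0.3, 0.5, 0.4])
print([round(v, 3) for v in final])
# The initially largest node survives; the others are suppressed.
```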




