Final program and abstracts for MIND conference May 5-7

B344DSL@UTARLG.UTA.EDU
Tue May 3 01:38:53 EDT 1994


CONTENTS
Announcement
Program Schedule
Abstracts of Presentations
Directions to Conference
Registration Form

                CONFERENCE ON OSCILLATIONS IN NEURAL SYSTEMS
 
Sponsored by the Metroplex Institute for Neural Dynamics (MIND) and the
                       University of Texas at Arlington

     Co-sponsored by the Departments of Mathematics and Psychology 

                                           MAY 5-7, 1994 

                                  UNIVERSITY OF TEXAS AT ARLINGTON 
                                   MAIN LIBRARY, 6TH FLOOR PARLOR 

The topic of neural oscillation is currently of great interest to
psychologists and neuroscientists alike.  Recently it has been
observed that neurons in separate areas of the brain will oscillate
in synchrony in response to certain stimuli.  One hypothesized
function for such synchronized oscillations is to solve the
"binding problem": how disparate features of an object (e.g., a
person's face and voice) are tied together
into a single unitary whole.  Some bold speculators (such as
Francis Crick in his recent book, The Astonishing Hypothesis) even
argue that synchronized neural oscillations form the basis for
consciousness. 

Further inquiries about the conference can be addressed to any of
the conference organizers: 


Professor Daniel S. Levine 
Department of Mathematics, University of Texas at Arlington 
411 S. Nedderman Drive 
Arlington, TX 76019-0408 
817-273-3598, fax: 817-794-5802 
b344dsl at utarlg.uta.edu  
 
Professor Vincent Brown
Department of Psychology, University of Texas at Arlington
Arlington, TX 76019
817-273-3247                                          
b096vrb at utarlg.uta.edu 

Mr. Timothy Shirey                            
214-495-3500 or 214-422-4570
73353.3524 at compuserve.com

Please distribute this announcement to anyone you think may be
interested in the conference.

                                     SCHEDULE 
 
Posters (ongoing throughout the conference):  Govindarajan, Lin,
Mobus, Penz, Rhoades, Tam, Young 
 
Thursday:       9:00-9:15   Introduction by Peter Rosen, Dean of the
                       College of Science
                9:15-9:30   Introduction by Daniel S. Levine, Co-
                       organizer of the conference
                9:30-10:30  Mpitsos
               10:30-11:15  Baxter

               11:15-11:30  15 minute break

               11:30-12:30  Stemmler

               12:30-2:00   LUNCH

                2:00-2:45   Thomas
                2:45-3:45   Horn

                3:45-4:00   15 minute break

                4:00-5:00   Yuen
                5:00-5:45   Gross


Friday:         8:30-9:30   Wong
                9:30-10:30  Traub
               
               10:30-10:45  15 minute break

               10:45-11:30  Soltesz
               11:30-12:15  Wolpert

               12:15-2:00   LUNCH

                2:00-2:45   (A.) Brown
                2:45-3:45   Bulsara

                3:45-4:00   15 minute break

                4:00-5:00   Maren
                5:00-5:45   Jagota

Saturday:      10:00-11:00  Baird
               11:00-11:45  Park

               11:45-12:00  15 minute break

               12:00-12:45  DeMaris

               12:45-1:45   LUNCH

                1:45-2:45   Grunewald
                2:45-3:30   Steyvers

                3:30-3:45   15 minute break

                3:45-5:00   Discussion (What Are Neural Oscillations Good
                        For?)  (If weather permits, discussion may continue
                        after 5 PM outside the library.)

                7:30-?  Trip to The Ballpark in Arlington to see
                         Minnesota Twins at Texas Rangers

Titles and Abstracts of Talks and Posters (Alphabetical by First Author)



BILL BAIRD, UNIVERSITY OF CALIFORNIA/BERKELEY
(BAIRD at MATH.BERKELEY.EDU)
"GRAMMATICAL INFERENCE BY ATTENTIONAL CONTROL OF SYNCHRONIZATION IN
AN ARCHITECTURE OF COUPLED OSCILLATORY ASSOCIATIVE MEMORIES" 
 
        We show how a neural network "computer" architecture, inspired
by observations of cerebral cortex and constructed from recurrently
connected oscillatory associative memory modules, can employ
selective "attentional" control of synchronization to direct the
flow of communication and computation within the architecture to
solve a grammatical inference problem. 
        The modules in the architecture learn connection weights
between themselves which  cause the system to evolve under a
clocked "machine cycle" by a sequence of transitions of attractors
within the modules, much as a digital computer evolves by
transitions of its binary flip-flop states.  The architecture thus
employs the principle of "computing with attractors" used by
macroscopic systems for reliable computation in the presence of
noise.  Even though it is constructed from a system of continuous
nonlinear ordinary differential equations, the system can operate
as a discrete-time symbol processing architecture, but with analog
input and oscillatory subsymbolic representations.  The discrete
time steps (machine cycles) of the "Elman" network algorithm are
implemented by rhythmic variation (clocking) of a bifurcation
parameter.  This holds input and "context" modules clamped at their
attractors while hidden and output modules change state, then
clamps hidden and output states while context modules are released
to load those states as the new context for the next cycle of
input. 
        In this architecture, oscillation amplitude codes the
information content or activity of a module (unit), whereas phase
and frequency are used to "softwire" the network.  Only
synchronized modules communicate by exchanging amplitude
information; the activity of non-resonating modules contributes
incoherent crosstalk noise. The same hardware and connection matrix
can thus subserve many different computations and patterns of
interaction between modules.
        Attentional control is modeled as a special subset of the
hidden modules with outputs which affect the resonant frequencies
of other hidden modules.  They perturb these frequencies to control
synchrony among the other modules and direct the flow of
computation (attention) to effect transitions between two subgraphs
of a large automaton which the system  emulates to generate a Reber
grammar.  The internal crosstalk noise is used to drive the
required random transitions of the automaton. 


DOUG BAXTER, CARMEN CANAVIER, H. LECHNER, UNIVERSITY OF
TEXAS/HOUSTON, JOHN CLARK, RICE UNIVERSITY, AND JOHN BYRNE,
UNIVERSITY OF TEXAS/HOUSTON (DBAXTER at NBA19.MED.UTH.TMC.EDU)
"COEXISTING STABLE OSCILLATORY STATES IN A MODEL NEURON SUGGEST
NOVEL MECHANISMS FOR THE EFFECTS OF SYNAPTIC INPUTS AND
NEUROMODULATORS" 

        Enduring changes in the electrical activity of individual
neurons have generally been attributed to persistent modulation of
one or more of the biophysical parameters that govern, directly or
indirectly, neuronal membrane conductances.  Particularly striking
examples of these modulatory actions can be seen in the changes in
the activity of bursting neurons exposed to modulatory transmitters
or second messengers.  An implicit assumption has been that once
all parameters are fixed, the ultimate mode of electrical activity
exhibited is determined.  An alternative possibility is that
several stable modes of activity coexist at a single set of
parameters, and that transient synaptic inputs or transient
perturbations of voltage-dependent conductances could switch the
neuron from one stable mode of activity to another.
        Although coexisting stable oscillatory states are a well known
mathematical phenomenon, their appearance in a biologically
plausible model of a neuron has not been previously reported.  By
using a realistic mathematical model and computer simulations of
the R15 neuron in Aplysia, we identified a new and potentially
fundamental role for nonlinear dynamics in information processing
and learning and memory at the single-cell level.
        Transient synaptic input shifts the dynamic activity of the
neuron between at least seven different patterns, or modes, of
activity.  These parameter-independent mode transitions are induced
by a brief synaptic input, in some cases a single excitatory
postsynaptic potential.  Once established, each mode persists
indefinitely or until subsequent synaptic input perturbs the neuron
into another mode of activity.  Moreover, the transitions are
dependent on the timing of the synaptic input relative to the phase
of the ongoing activities.  Such temporal specificity is a
characteristic feature of associative memories.
        We have also investigated the ways in which changes in two
model parameters, the anomalous rectifier conductance (gR) and the
slow inward calcium conductance (gSI), affect not only the intrinsic
activity of R15, but also the ability of the neuron to exhibit
parameter-independent mode transitions.  gR and gSI were selected
since they are key determinants of bursting activity and also
because they are known to be modulated by dopamine and serotonin. 
We have found that small changes in these parameters can annihilate
some of the coexisting modes of electrical activity.  For some
values of the parameters only a single mode is exhibited.  Thus,
changing the value of gR and gSI can regulate the number of modes
that the neuron can exhibit.
        Preliminary electrophysiological experiments indicate that
these mechanisms are present in vitro.  These combined experimental
and modeling studies provide new insights into the role of
nonlinear dynamics in information processing and storage at the
level of the single neuron and indicate that individual neurons can
have extensive parameter-independent plastic capabilities in
addition to the more extensively analyzed parameter-dependent ones.
 


ANTHONY BROWN, DEFENSE RESEARCH AGENCY, UNITED KINGDOM
(ABROWN at SIGNAL.DRA.HMG.GB)
"PRELIMINARY WORK ON THE DESIGN OF AN ANALOG OSCILLATORY NEURAL
NETWORK" 

        Inspired by biological neural networks, our aim is to produce
an efficient information-processing architecture based upon
analogue circuits. In the past, analogue circuits have suffered from
a limited dynamic range caused by inter-device parameter
variations. Any analogue information-processing system must
therefore be based upon an adaptive architecture which can
compensate for these variations. Our approach to designing an
adaptive architecture is to mimic neuro-biological exemplars; we
are therefore examining architectures based upon the Hebb learning
rule.  
        In neuro-biological systems the Hebb rule is associated with
temporal correlations which arise in phase locked oscillatory
behaviour. The starting point for our new system is the Hopfield
network. To modify the fixed point dynamics of such a network we
have introduced a "hidden" layer of neurons.  Each new neuron is
connected to an existing neuron to form a pair which in isolation
exhibits a decaying, oscillatory response to a stimulus. 
        Several promising preliminary results have been obtained:
Sustained oscillations are stimulated by the "known" patterns which
were used to determine the weight matrix. In contrast "unknown"
patterns result in a decaying oscillatory response, which can be
reinforced for frequently occurring new input patterns to create a
new characteristic response. Finally, a mixture of two known inputs
will stimulate both characteristic oscillatory patterns separated
by a constant phase lag.  Overall the introduction of oscillatory
behaviour in an associative memory will both simplify the
embodiment of the learning rule and introduce new modes of
behaviour which can be exploited. 
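The decaying oscillatory response of an isolated neuron pair can be sketched, as a rough illustration (not code from the talk), by a linear two-variable system whose eigenvalues are complex with negative real part; the equations and all parameter values below are illustrative assumptions.

```python
import numpy as np

# A neuron coupled to a "hidden" partner, modeled (as an assumption, in the
# spirit of the abstract) as a damped rotation: a transient stimulus sets the
# pair ringing, and the oscillation decays in the absence of reinforcement.
decay, omega, dt = 0.1, 1.0, 0.01     # damping rate, oscillation freq, step

def pair_step(state):
    """Euler step of  x' = -decay*x - omega*y,  y' = omega*x - decay*y."""
    x, y = state
    return np.array([x + dt * (-decay * x - omega * y),
                     y + dt * ( omega * x - decay * y)])

state = np.array([1.0, 0.0])          # transient stimulus excites the pair
amps = [np.hypot(*state)]
for _ in range(1000):
    state = pair_step(state)
    amps.append(np.hypot(*state))
# amps[-1] is well below amps[0]: the isolated pair rings down
```

In the full network, sustained oscillation for "known" patterns would correspond to recurrent weights pumping energy back into such pairs faster than the damping removes it.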



ADI BULSARA, NAVAL COMMAND, CONTROL, AND OCEAN SURVEILLANCE CENTER,
SAN DIEGO (BULSARA at MANTA.NOSC.MIL)
"COMPLEXITY IN THE NEUROSCIENCES: SIGNALS, NOISE, NONLINEARITY, AND
THE MEANDERINGS OF A THEORETICAL PHYSICIST" 

        We consider the interpretation of time series data from firing
events in periodically stimulated sensory neurons. Theoretical
models, representing the neurons as nonlinear dynamic switching
elements subject to deterministic (taken to be time-periodic)
signals buried in a Gaussian noise background, are developed. The
models considered include simple bistable dynamics which provide a
good description of the noise-induced cooperative behavior in
neurons on a statistical or coarse-grained level, but do not
account for many important features (e.g. capacitative effects) of
real neuron behavior, as well as very simple "integrate-fire"
models which provide reasonable descriptions of capacitative
behavior but attempt to duplicate refractoriness through the
boundary conditions on the dynamics. Both these classes of models
can be derived through a systematic reduction of the Hodgkin-Huxley
equations (assumed to be the best available description of neural
dynamics).  Cooperative effects, e.g. "stochastic resonance",
arising through the interplay of the noise and deterministic
modulation, are examined, together with their  possible
implications in the features of Inter-Spike-Interval Histograms
(ISIHs) that are ubiquitous in the neurophysiological literature.
We explore the connection between stochastic resonance, usually
defined at the level of the power spectral density of the response,
and the cooperative behavior observed in the ISIH. For the simpler
(integrate-fire-type) threshold model, a precise connection between
the two statistical measures (the power spectral density and the 
ISIH) of the system response can be established; for the more
complex (bistable) models, such a connection is, currently,
somewhat tenuous.
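A minimal sketch of the integrate-and-fire class of model described above: a leaky threshold unit driven by a time-periodic signal buried in Gaussian noise, with inter-spike intervals collected as for an ISIH. All parameter values are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Leaky integrate-and-fire neuron: periodic drive plus Gaussian white noise.
rng = np.random.default_rng(1)
dt, tau, theta = 1e-3, 0.02, 1.0      # step (s), membrane time const, threshold
amp, freq, sigma = 80.0, 5.0, 2.0     # drive amplitude, drive freq (Hz), noise
v, t, spikes = 0.0, 0.0, []
for _ in range(int(2.0 / dt)):        # simulate 2 seconds
    drive = amp * np.sin(2 * np.pi * freq * t)
    v += dt * (-v / tau + drive) + sigma * np.sqrt(dt) * rng.standard_normal()
    if v >= theta:                    # threshold crossing: spike and reset
        spikes.append(t)
        v = 0.0
    t += dt
isi = np.diff(spikes)                 # inter-spike intervals for an ISIH
```

A histogram of `isi` shows the multimodal structure, peaked near integer multiples of the drive period, that is characteristic of the ISIHs discussed in the abstract.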



DAVID DEMARIS, UNIVERSITY OF TEXAS/AUSTIN
(DEMARIS at PINE.ECE.UTEXAS.EDU)
(TITLE TO BE ADDED)

        A body of work on nonlinear oscillations in vision has
emerged, both in the analysis of single unit inter-spike intervals
and in a theory of perceptual encoding via spatio-temporal
patterns. This paper considers other roles nonlinear oscillating
networks may play in an active visual system. Kaneko's coupled map
lattice models and extensions are examined for utility in
explaining tasks in attention and monocular depth perception.
Visual cortex is considered as an array of coupled nonlinear
oscillators (complex-cell networks) forced by embedded simple-cell
detector networks of the Hubel and Wiesel type. In this model, self-
organization of local and global bifurcation parameters may form
spatial regions of heightened activity in attentional modules and
form bounded dynamics regimes (domains) in visual modules related
to binding and separation of figure and ground. This research is
still in a rather speculative stage pending simulation studies;
hence the aims of this talk are:
 
* Provide a brief introduction to dynamics of spatially extended
nonlinear systems such as coupled map lattices with self-organized
control parameters and how these may support perceptual activity
and encoding. 
* Review some recent work on underlying physiological mechanisms
and measurements which support the use of nonlinear oscillator
models. 
* Describe visual phenomena in the areas of ambiguous depth
perception,   figure / ground feature discrimination, and spatial
distortions. Discuss mechanisms in coupled map models which may
account for these phenomena. 
 
A demonstration of experiments involving cellular automata
processing of Necker cube and Muller/Lyer figures is possible
running on an IBM compatible PC. 
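A Kaneko-style coupled map lattice of the kind examined above can be sketched as diffusively coupled logistic maps on a ring; the coupling form is standard for such models, and the parameter values here are illustrative assumptions.

```python
import numpy as np

def cml_step(x, a=3.9, eps=0.3):
    """One step of a diffusively coupled logistic-map lattice (Kaneko-style):
    x_i(t+1) = (1-eps)*f(x_i) + (eps/2)*(f(x_{i-1}) + f(x_{i+1})), ring topology.
    """
    fx = a * x * (1.0 - x)                    # local logistic dynamics
    left, right = np.roll(fx, 1), np.roll(fx, -1)
    return (1.0 - eps) * fx + 0.5 * eps * (left + right)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=8)             # ring of 8 coupled maps
for _ in range(100):
    x = cml_step(x)                           # values remain in [0, 1]
```

Varying `a` and `eps` moves such a lattice between frozen, patterned, and turbulent regimes; the bounded dynamics "domains" mentioned in the abstract correspond to spatial regions that settle into distinct regimes.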



SRIRAM GOVINDARAJAN AND VINCENT BROWN, UNIVERSITY OF
TEXAS/ARLINGTON (SRIRAM at CSE.UTA.EDU)
"FEATURE BINDING AND ILLUSORY CONJUNCTIONS: PSYCHOLOGICAL
CONSTRAINTS AND A MODEL" 

(Abstract to be added) 



GUENTER GROSS AND BARRY RHOADES, UNIVERSITY OF NORTH TEXAS
(GROSS at MORTICIA.CNNS.UNT.EDU)
"SPONTANEOUS AND EVOKED OSCILLATORY BURSTING STATES IN CULTURED
NETWORKS" 

        In monolayer networks derived from dissociated embryonic mouse
spinal cord tissue, and maintained in culture for up to 9 months,
oscillatory activity states are common in the burst domain and
represent the most reproducible of all network behaviors. 
Extensive observations of self-organized oscillatory activity
indicate that such network states represent a generic feature of
networks in culture and suggest that possibly all networks
comprised of mammalian neurons have a strong tendency to oscillate.
  
Native Oscillatory States:
        The most characteristic pattern is a temporally variable, but
spatially coordinated bursting. Quasi-periodic oscillations are
generally transient but coordinated among most of the electrodes
recording spontaneous activity.  Networks left undisturbed for
several hours display a tendency to enter coordinated oscillatory
states and to remain in these states for long periods of time.

Pharmacologically-induced oscillations:
        Synaptic inhibition by blocking glycine and GABA receptors
increases spike rates, but generates a much different response
pattern than that obtained from the excitatory transmitters.
Whereas the latter produce excitation by disrupting existing
patterns with increased spike and burst activity and only
occasional transient oscillatory patterns, disinhibition brings the
network into more tightly synchronized bursting with highly regular
burst durations and periods in essentially all cultures.  Such
states can last for hours with minimal changes in burst variables.
Other compounds such as 4-aminopyridine and cesium increase burst
rate and regularity, in a manner qualitatively matched by elevating
extracellular potassium.  Cultures are much more sensitive to
strychnine than to bicuculline.  Whereas oscillatory behavior
usually begins at 20-30 µM bicuculline, similar pattern changes are
obtained with nanomolar to low micromolar quantities of strychnine. 
Burst fusion and intense spiking (produced by NMDA) have never been
observed as a result of network disinhibition.  Extensive
pharmacological manipulations of calcium and potassium channels have
confirmed that spontaneous oscillations depend on potassium
currents and intracellular Ca++ levels but not on calcium-dependent
potassium conductances. 

Electrically-induced oscillations:
        Networks can often be periodically driven by repetitive
electrical stimulation at a single electrode.  Repeated stimulus
trains have also been observed to induce episodes of intense,
coherent bursting lasting beyond the termination of the stimulus
pattern.  Such responses appear "epileptiform" and might be
considered a cultured network parallel to electrical induction of
an epileptic seizure in vivo.

Entrainment:
        Repetitive pulse train stimulation often causes the network
burst patterns to organize and finally follow the temporal
stimulation pattern.
We have also found that networks in pharmacologically-induced
periodic bursting modes can be entrained to a periodic single
channel stimulation if the stimulus cycle is at or near a multiple
of the spontaneous burst cycle period.  The ability of a few axons
at one electrode to entrain an entire network of 100-300 neurons
is remarkable and invites studies of entrainment mechanisms in
these networks.



ALEXANDER GRUNEWALD AND STEPHEN GROSSBERG, BOSTON UNIVERSITY
(ALEX at CNS.BU.EDU)
"BINDING OF OBJECT REPRESENTATIONS BY SYNCHRONOUS CORTICAL DYNAMICS
EXPLAINS TEMPORAL ORDER AND SPATIAL POOLING DATA" 

        A key problem in cognitive science concerns how the brain
binds together parts of an object into a coherent visual object
representation. One difficulty that this binding process needs to
overcome is that different parts of an object may be processed by
the brain at different rates and may thus become desynchronized. 
Perceptual framing is a mechanism that resynchronizes cortical
activities corresponding to the same retinal object. A neural
network model based on cooperation between oscillators via feedback
from a subsequent processing stage is presented that is able to
rapidly resynchronize desynchronized featural activities.  Model
properties help to explain perceptual framing data, including
psychophysical data about temporal order judgments. These
cooperative model interactions also simulate data concerning the
reduction of threshold contrast as a function of stimulus length.
The model hereby provides a unified explanation of temporal order
and threshold contrast data as manifestations of a cortical binding
process that can rapidly resynchronize image parts which belong
together in visual object representations. 



DAVID HORN, TEL AVIV UNIVERSITY (HORN at VM.TAU.AC.IL)
"SEGMENTATION AND BINDING IN OSCILLATORY NETWORKS" 

        Segmentation and binding are cognitive operations which
underlie the process of perception. They can be understood as
taking place in the temporal domain, i.e. relying on features like
simultaneity of neuronal firing. We analyze them in a system of
oscillatory networks, consisting of Hebbian cell assemblies of
excitatory neurons and inhibitory interneurons in which the
oscillations are implemented by dynamical thresholds. We emphasize
the importance of fluctuating input signals in producing binding
and in enabling segmentation of a large set of common inputs.
Segmentation properties can be studied by investigating the cyclic
attractors of the system and the partial symmetries that they
implement in a symmetric neural network.



ARUN JAGOTA, MEMPHIS STATE UNIVERSITY, AND XIN WANG, UNIVERSITY OF
CALIFORNIA/LOS ANGELES (JAGOTA at NEXT1.MSCI.MEMST.EDU)
"OSCILLATIONS IN DISCRETE AND CONTINUOUS HOPFIELD NETWORKS" 

        The first part of this talk deals with analyzing oscillatory
behavior in discrete Hopfield networks with symmetric weights. It
is well known that under synchronous updates, such networks admit
cyclic behavior of order two but no higher. The two-cycles
themselves are not known to have any useful characterizations in
general however. By imposing certain restrictions on the weights,
we obtain an exact characterization of the two-cycles in terms of
properties of a certain graph underlying the network. This
characterization has the following benefits. First, in small
networks of this kind, all the two-cycles may be found merely by
inspection of the underlying graph (which depends only on the
weights). Second, this characterization suggests certain
applications which exploit the two-cycles. We illustrate both of
these benefits in detail.
        The second part of this talk deals with synthesizing chaotic
or periodic oscillatory behavior in continuous Hopfield networks
for the purposes of solving optimization problems. It is well known
that certain dynamical rules for continuous Hopfield networks with
symmetric weights exhibit convergent behavior to stable fixed
points. Such convergent behavior is one reason for the use of these
networks to solve optimization problems. Such behavior, however,
also limits their performance in practice, as it is of the
gradient-descent form, which often leads to sub-optimal local
minima. As a potential remedy to this problem, we propose methods
for injecting controllable chaos or periodic oscillations into the
dynamical behavior of such networks. In particular, our methods
allow chaotic or oscillatory behavior to be initiated and converted
to convergent behavior at the turn of a "knob". This is in analogy
with simulated annealing where at high temperature the behavior is
"random" and at low temperature relatively "convergent". We
introduce chaos or periodic oscillations into the network in two
ways: one via the external input to each neuron, and the other by
replacing each neuron by two neurons arranged into a coupled
oscillator. We present some experimental results on the performance
of our networks, with and without the oscillations, on the Maximum
Clique optimization problem. 
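The two-cycle behavior in the first part of the abstract can be seen in a minimal example: a symmetric discrete Hopfield network with a frustrated (mutually inhibitory) pair of units, updated synchronously. The weights and initial state below are illustrative, not from the talk.

```python
import numpy as np

def sync_update(W, s):
    """One synchronous update of a discrete Hopfield net: s' = sgn(W s)."""
    h = W @ s
    return np.where(h >= 0, 1, -1)

# Two units with symmetric negative coupling: a frustrated pair.
W = np.array([[ 0, -1],
              [-1,  0]])
s = np.array([1, 1])

trajectory = [tuple(s)]
for _ in range(4):
    s = sync_update(W, s)
    trajectory.append(tuple(s))
# The state alternates between (1, 1) and (-1, -1): a two-cycle,
# consistent with the classical result that symmetric synchronous
# Hopfield nets admit cycles of order two but no higher.
```

Under asynchronous (one-unit-at-a-time) updates the same network would instead settle into a fixed point, which is why the cyclic behavior is specific to the synchronous rule.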



SHIEN-FONG LIN, RASHI ABBAS, AND JOHN WIKSO, JR., VANDERBILT
UNIVERSITY (LIN at MACPOST.VANDERBILT.EDU)
"ONE-DIMENSIONAL MAGNETIC MEASUREMENT OF TWO-ORIGIN BIOELECTRIC
CURRENT OSCILLATION" 

        Squid giant axons, when placed in a low-calcium, high-sodium
extracellular environment, abruptly enter a state of
self-sustained oscillation.  Such an oscillation exhibits a linear
temperature dependence in frequency, can be entrained, and enters
chaotic states with proper entrainment patterns.  The origin
of such an oscillation, although of significant implication for
neural oscillation in general, has never been extensively studied
experimentally.  Specifically, one of the most intriguing problems
was the scarcity of experimental evidence for symmetrical multiple
oscillation origins in such a homogeneous one-dimensional
structure.   
        In this presentation, we report a novel non-invasive magnetic
observation of a stable 2-origin self-sustained oscillation in
squid giant axons.  The oscillation showed a standing-wave pattern
when observed in the spatial domain, and a proper geometry was
required to sustain the 2-origin pattern.  The origins were coupled
and synchronized in phase.  The results from model simulation using
explicit implementation of propagating Hodgkin-Huxley axon allowed
us to investigate the mechanisms underlying such behavior.  The
study clearly demonstrated the merits of magnetic methods in
studying intricate neural oscillations.   



ALIANNA MAREN, ACCURATE AUTOMATION CORPORATION, AND E. SCHWARTZ,
RADFORD UNIVERSITY (AJMAREN%AAC at OLDPAINT.ENGR.UTC.EDU)
"A NEW METHOD FOR CROSS-SCALE INTERACTION USING AN ADAPTABLE BASIC
PROCESSING ELEMENT"

        A new concept for the basic processing element in a neural
network allows the characteristics of this element to change in
response to changes at the neural network level.  This makes it
possible to have "cross-scale interactions," that is, events at the
neural network level influence not only the immediate network
state, but also the response characteristics of individual
processing elements.  This novel approach forms the basis for
creating a new class of neural networks, one in which the
processing elements are responsive to spatial and historical
context.  This capability provides a valuable tool in advancing the
overall richness and complexity of neural network behavior.
        The most evident advantage of this new approach is that neural
networks can be made dependent, in a substantial way, upon past
history for the present state.  This property is most useful in
applications where past history is important in determining present
actions or interpretations.
        There is a major difference between this approach and most
current methods for adapting neural networks to exert the influence
of time or to provide "learning."  This lies in the fact that most 
existing methods provide either a means for maintaining the
activation due to initial stimulus (either with time-delay
connections or with recurrent feedback), or provide a means for
changing the values of connection weights ("learning").  The
approach offered here is substantively different from existing
approaches, in that changes are made to the response
characteristics of the individual processing units themselves; they
now respond differently to stimuli.
        The model for the new interpretation of the basic processing
element comes from considering the basic element as a
(statistically large) ensemble of interacting bistate processing
units.  By way of analogy to domains of neurons in biological
systems, we call this ensemble, or basic processing element, an
artificial neural domain.  The neural domain is modeled at the
ensemble level, not at the level of individual components.  Using
a simple statistical thermodynamics model, we arrive at ensemble
characteristics.  Ensemble, or domain, behavior is controlled not
only by input activations but also by parameter values which are
modified at the neural network level.  This creates an avenue for
cross-scale interaction.



GEORGE MOBUS AND PAUL FISHER, UNIVERSITY OF NORTH TEXAS
(MOBUS at PONDER.CSCI.UNT.EDU)
"EDGE-OF-CHAOS-SEARCH: USING A QUASI-CHAOTIC OSCILLATOR CIRCUIT FOR
FORAGING IN A MOBILE AUTONOMOUS ROBOT"

        A neural circuit that emulates some of the behavioral
properties of central pattern generators (CPGs) in animals is used
to control a stochastic search in a mobile, autonomous robot.  When
the robot is not being stimulated by signals that represent
mission-support events, it searches its environment for such
stimuli.  The circuit generates a quasi-chaotic oscillation that
causes the robot to weave back and forth like a drunken driver. 
Analysis of the weave pattern shows that the chaotic component
yields sufficient novelty to cause the robot to conduct an
effective search in a causally-controlled but non-stationary
environment.  Unlike a random-walk search which may exhaust the
robot's power resources before it accomplishes its mission, we
show, through simulations, that a quasi-chaotic search approaches
optimality in the sense that the robot is more likely to succeed in
finding mission-critical events.
        The search patterns displayed by the robot resemble,
qualitatively, those of foraging animals.  When the robot senses a
stimulus associated with a mission-support event, a combination of
location and distance signals from other parts of the simulated
brain converge on the CPG causing it to transition to more ordered
directional output.  The robot orients relative to the stimulus and
follows the stimulus gradient to the source.  The possible role of
chaotic CPGs and their transitions to ordered oscillation in
searching non-stationary spaces is discussed and we suggest
generalizations to other search problems.  The role of learning
causal associations as a prerequisite for successful search is also
covered.
 

 
GEORGE MPITSOS, OREGON STATE UNIVERSITY
(GMPITSOS at SLUGO.HMSC.ORST.EDU)
"ATTRACTOR GRADIENTS: ARCHITECTS OF NETWORK ORGANIZATION IN
BIOLOGICAL SYSTEMS" 

        Biological systems are composed of many components that must
produce coherent adaptive responses. The interconnections between
neurons in an assembly or between individuals in any population all
pose similar questions, e.g.: How does one handle the many degrees
of freedom to know how the system as a whole functions?  What is
the role of the individual component? Although individuals act
using local rules, is there some global organizing principle that
determines what these rules are? I raise the possibility that many
simplifications occur if the system is dissipative; i.e., if it has
an attractor such that it returns to a characteristic state in
response to external perturbation. I ask what does the dissipative
process do to the system itself? What global organizing effects
does it produce? Biological and artificial neural networks are used
to describe dissipative processes and to address such questions.
Although individual systems may express different details, the fact
that attractors are generally applicable constructs suggests that
the understanding of one complex system may give insight into
similar problems of self-organization in others.  
 
Supported by AFOSR 92J-0140



NAM SEOG PARK, DAVE ROBERTSON, AND KEITH STENNING, UNIVERSITY OF
EDINBURGH (NAMSEOG at AISB.EDINBURGH.AC.UK)
"FROM DYNAMIC BINDINGS TO FURTHER SYMBOLIC KNOWLEDGE REPRESENTATION
USING SYNCHRONOUS ACTIVITY OF NEURONS" 

        A structured connectionist model using temporal synchrony has
been proposed by Shastri and Ajjanagadde. This model has provided
a mechanism which encodes rules and facts involving n-ary
predicates and handles some types of dynamic variable binding using
synchronous activity of neurons.  Although their mechanism is
powerful enough to provide a solution to the dynamic variable
binding problem, it also shows some limitations in dealing with
some knowledge representation issues such as binding generation,
consistency checking, and unification, which are important in
enabling their model to achieve better symbolic processing
capabilities. 
        This paper shows how Shastri and Ajjanagadde's mechanism
can be modified and extended to overcome those limitations.  The
modification redefines the temporal property of one of the four
node types used in their model, replacing it with a newly defined
one.  Two layers of nodes are also added to enable a uniform
layered connection between the antecedent and the consequent of
various types of rule, which allows a comparatively straightforward
translation from the symbolic representation of rules to their
connectionist representation.  As a result, the modified system is
able to tackle more knowledge representation issues while, at the
same time, reducing the number of node types required and retaining
the merits of the original model.
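
The core idea of binding through temporal synchrony can be sketched in a few lines. The code below is our invented miniature, not Shastri and Ajjanagadde's implementation; the fillers, role names, and the give/own rule are all illustrative. A role is bound to a filler when both fire in the same phase of a global oscillation cycle, and a rule propagates phases from antecedent roles to consequent roles.

```python
# Toy sketch of temporal-synchrony variable binding (illustrative names only).

def bind(fillers):
    """Assign each filler a distinct firing phase within one cycle."""
    return {filler: phase for phase, filler in enumerate(fillers)}

def apply_rule(wiring, role_phases):
    """Propagate phases along rule connections: each consequent role
    inherits the phase of the antecedent role it is wired to."""
    return {conseq: role_phases[antec] for antec, conseq in wiring}

# Dynamic fact give(John, Mary, Book); rule give(x, y, z) -> own(y, z)
phases = bind(["John", "Mary", "Book"])
give = {"giver": phases["John"], "recip": phases["Mary"], "obj": phases["Book"]}
own = apply_rule([("recip", "owner"), ("obj", "object")], give)
print(own["owner"] == phases["Mary"], own["object"] == phases["Book"])  # True True
```

Because a binding is just a shared phase, no connection weights need to change to represent a new, transient fact: that is the appeal of the scheme.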



ANDREW PENZ, TEXAS INSTRUMENTS (PENZ at RESBLD.TI.COM)
(TITLE AND ABSTRACT TO BE ADDED)
 

BARRY RHOADES, UNIVERSITY OF NORTH TEXAS
(RHOADES at MORTICIA.CNNS.UNT.EDU)
"GLOBAL NEUROCHEMICAL DETERMINATION OF LOCAL EEG IN THE OLFACTORY
BULB" 

        Spatially coherent bursts of EEG oscillations are a dominant
electrophysiological feature of the mammalian olfactory bulb,
accompanying each inspiratory phase of the respiratory cycle in the
waking state and recurring intermittently under moderate
anesthesia.  In the rat these oscillatory bursts are nearly
sinusoidal, with a typical oscillation frequency of 50-60 Hz.  The
averaged evoked potential (AEP) to repetitive near threshold-level
electrical stimulation of either the primary olfactory nerve (PON)
or lateral olfactory tract (LOT)  has a dominant damped sinusoidal
component at the same frequency.  These oscillations are generated
by the negative feedback relationship between the mitral/tufted
(MT) cell principal neurons and the GABAergic granule (G) cell
interneurons at reciprocal dendro-dendritic synapses of the
external plexiform layer (EPL).  This EPL generator produces
oscillations in mitral/tufted cells and granule cell ensembles,
under the high input levels produced by inspiratory activation of
the olfactory epithelium or electrical stimulation of the bulbar
input or output tracts.
        The dependence of oscillations in the bulbar EEG and evoked
potentials on local and regional alterations in GABAergic
neurochemistry was investigated in barbiturate anesthetized
Sprague-Dawley rats.  The main olfactory bulb, primary olfactory
nerve (PON) and lateral olfactory tract (LOT) were surgically
exposed, unilaterally.  Basal EEG from both bulbs and AEPs from the
exposed bulb in response to stimulation of the PON and LOT were
recorded before and following both local microinjection and
regional surface application of the GABA-active neurochemicals
muscimol, picrotoxin, and bicuculline.  Locally restricted
microinjections profoundly altered AEP waveforms,  but had
negligible effects on the background EEG.  Regional applications of
the same neurochemicals at the same concentrations across the
entire exposed bulbar surface produced discontinuous transitions in
EEG oscillatory state.  The temporal properties of the basal EEG
recorded from a site on the bulbar surface could thus be altered
only by GABAergic modification of G->MT cell synapses over a large
region of the olfactory bulb.  This provides neurochemical evidence
that the temporally and spatially patterned oscillatory activity 
deriving from the interactions of mitral/tufted and granule cells
is globally organized; i.e. that global oscillatory state overrides
local neurochemistry in controlling background oscillations of
local neuronal ensembles. 

This research was conducted in the laboratory of Walter J. Freeman
at the University of California, Berkeley and supported primarily
by funds from NIMH grant #MH06686. 



IVAN SOLTESZ, UNIVERSITY OF TEXAS HEALTH SCIENCES CENTER, DALLAS
(SOLTESZ at UTSW.SWMED.EDU)
(TITLE AND ABSTRACT TO BE ADDED)
 

MARTIN STEMMLER, CALIFORNIA INSTITUTE OF TECHNOLOGY
(STEMMLER at KLAB.CALTECH.EDU)
"SYNCHRONIZATION AND OSCILLATIONS IN SPIKING NETWORKS" 

        While cortical oscillations in the 30 to 70 Hz range are
robust and commonly found in local field potential measurements  in
both cat and monkey visual cortex (Gray et al., 1990; Eckhorn et
al., 1993), they are much less evident in single spike trains
recorded from behaving monkeys (Young et al., 1982; Bair et al.,
1994). We show that a simple neural network with spiking "units"
and a plausible excitatory-inhibitory interconnection scheme can
explain this discrepancy.  The discharge patterns of single units
are highly irregular, and the associated single-unit power spectrum
is flat with a dip at low frequencies, as observed in cortical
recordings in the behaving monkey (Bair et al., 1994).  However, if
the local field potential,  defined as the summed spiking activity
of all "units" within a particular distance, is computed over an
area large enough to include direct inhibitory interactions among
cell pairs, a prominent peak around 30-50 Hz becomes visible.
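
The reported effect has a deliberately simple statistical analog (our construction, not the authors' network): many irregular spike trains share a weak common rate modulation near 40 Hz. Each single-train spectrum looks nearly flat, yet the summed "field potential" shows a clear spectral peak, because the common modulation adds coherently across units while the independent spiking noise does not.

```python
import numpy as np

# N irregular spike trains with a shared, weak 40 Hz rate modulation
# (invented parameters chosen for clarity, not fit to cortical data).
rng = np.random.default_rng(0)
dt, T, N = 0.001, 4.0, 200                       # 1 ms bins, 4 s, 200 units
t = np.arange(0, T, dt)
rate = 20.0 * (1 + 0.3 * np.sin(2 * np.pi * 40 * t))   # spikes/s
spikes = rng.random((N, t.size)) < rate * dt            # Poisson-like trains

def peak_freq(x):
    """Frequency of the largest non-DC power-spectrum component."""
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    f = np.fft.rfftfreq(x.size, dt)
    return f[1:][np.argmax(p[1:])]

lfp = spikes.sum(axis=0).astype(float)           # summed "field potential"
print(peak_freq(lfp))                            # peaks at the 40 Hz modulation
```

Running `peak_freq` on a single row of `spikes` instead of the sum typically returns an arbitrary frequency, since one train's spectrum is dominated by its flat spiking noise.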


MARK STEYVERS, INDIANA UNIVERSITY AND CEES VAN LEEUWEN, UNIVERSITY
OF AMSTERDAM, THE NETHERLANDS (MSTEYVER at HERMES.PSYCH.INDIANA.EDU)
"USE OF SYNCHRONIZED CHAOTIC OSCILLATIONS TO MODEL MULTISTABILITY
IN PERCEPTUAL GROUPING" 

        Computer simulations are presented to illustrate the utility
of a new form of dynamic coupling in neural networks. It is
demonstrated that oscillatory neural network activity can be
synchronized even while the network remains in a chaotic state. An
advantage of chaotic synchronous oscillations over periodic ones is
that chaos provides a powerful and intrinsic mechanism for solving
the binding problem and, at the same time, for modeling
multistability in perception. The
resulting switching-time distributions of a multistable grouping
show qualitative similarities with experimentally obtained
distributions. The chaotic oscillatory couplings were used to model
the Gestalt laws of proximity, good continuation and symmetry
preference. In addition, interpretations provided by the model were
shown to be subject to sequence effects.
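
The claim that chaotic units can nonetheless synchronize has a classic minimal illustration (ours, not the authors' network): two logistic maps in the fully chaotic regime, mutually coupled. Above a critical coupling strength the difference between the two trajectories decays to zero, even though the shared trajectory remains chaotic.

```python
# Two coupled chaotic logistic maps synchronizing (a textbook-style sketch;
# the coupling scheme and parameters are our illustration).
def f(x):
    return 4.0 * x * (1.0 - x)        # fully chaotic logistic map

def step(x, y, eps):
    return (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)

x, y, eps = 0.3, 0.8, 0.4             # coupling strong enough to synchronize
for _ in range(2000):
    x, y = step(x, y, eps)
print(abs(x - y) < 1e-9)              # True: trajectories have merged
```

For this symmetric coupling the synchronized state is stable when |1 - 2*eps| is smaller than e raised to minus the map's Lyapunov exponent (ln 2 here), so eps = 0.4 lies comfortably inside the synchronizing range.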



DAVID TAM, UNIVERSITY OF NORTH TEXAS (DTAM at UNT.EDU)
"SPIKE TRAIN ANALYSIS FOR DETECTING SYNCHRONIZED FIRING AMONG
NEURONS IN NETWORKS"
 
     A specialized spike train analysis method is introduced to
detect synchronized firing between neurons.  This conditional
correlation technique is developed to detect the probability of
firing and non-firing of neurons based on the pre- and
post-conditional cross-intervals, and interspike intervals after
the reference spike has fired.  This statistical measure is an
estimation of the conditional probability of firing of a spike in
a neuron based on the probability of firing of 
another neuron after the reference spike has occurred.  By
examining the lag times of post-interspike intervals and post-cross
intervals, synchronized coupling effects between the firing of the
reference neuron and the firing of other neurons can be revealed.
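
The flavor of such a cross-interval measure can be sketched as follows. This toy version (our construction, with invented spike trains) simply measures the delay from each reference spike to the other neuron's next spike: tightly coupled firing piles these post-cross-intervals up near zero lag, while an unrelated neuron shows only chance-level delays.

```python
import numpy as np

# Synthetic spike trains: one neuron follows the reference by ~2 ms,
# the other is unrelated (all parameters are illustrative).
rng = np.random.default_rng(1)
ref = np.sort(rng.uniform(0, 10, 200))                       # reference spikes (s)
sync = np.sort(ref + rng.normal(0.002, 0.0005, ref.size))    # follows ref closely
indep = np.sort(rng.uniform(0, 10, 200))                     # unrelated neuron

def post_cross_intervals(reference, other):
    """Delay from each reference spike to the other train's next spike."""
    idx = np.searchsorted(other, reference)
    ok = idx < other.size                 # drop reference spikes with no successor
    return other[idx[ok]] - reference[ok]

print(np.median(post_cross_intervals(ref, sync)))    # ~2 ms: coupled firing
print(np.median(post_cross_intervals(ref, indep)))   # tens of ms: chance level
```

A full conditional-correlation analysis would histogram these lags and normalize by the firing probabilities, but the contrast between the two medians already shows the detection principle.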



ELIZABETH THOMAS, WILLAMETTE COLLEGE (ETHOMAS at WILLAMETTE.EDU)
"A COMPUTATIONAL MODEL OF SPINDLE OSCILLATIONS" 

        A model of the thalamocortical system was constructed for a
computational analysis of spindle oscillations.  The parameters
used in the model were based on experimental measurements.  The
model included a reticular thalamic nucleus and a dorsal layer. 
The thalamic cells were capable of undergoing a low threshold
calcium mediated spike.  The simulation was used to investigate the
plausibility and ramifications of certain proposals that have been
put forward for the production of spindle.  An initial stimulus to
the model reticular thalamic layer was found to give rise to
activity resembling spindles. The emergent population oscillations
were analyzed for factors affecting their frequency and amplitude. 
The role of cortical feedback to the pacemaking RE layer was also
investigated.  Finally, a non-linear dynamics
analysis was conducted on the emergent population oscillations. 
This activity was found to yield a positive Lyapunov exponent and
define an attractor of low dimension.  Excitatory feedback was
found to decrease the dimensionality of the attractor at the
reticular thalamic layer. 
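
The Lyapunov-exponent step of such an analysis can be illustrated on a system whose exponent is known analytically. The sketch below is our illustration, not the thalamocortical model itself: it estimates the largest Lyapunov exponent of the chaotic logistic map by averaging the log of the local stretching rate along a trajectory; a positive value signals the sensitive dependence that characterizes a chaotic attractor.

```python
import math

# Largest Lyapunov exponent of x -> r x (1 - x) at r = 4, estimated as the
# trajectory average of log |f'(x)| (a standard textbook computation).
r = 4.0
x, total, n = 0.2, 0.0, 100_000
for _ in range(n):
    x = r * x * (1.0 - x)
    total += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)| at each point
lam = total / n
print(lam)    # approaches ln 2, about 0.693, for the fully chaotic map
```

For an experimental or simulated time series, where the map is not known, the same quantity is instead estimated from the divergence rate of nearby points in a reconstructed state space.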



ROGER TRAUB, IBM T.J. WATSON RESEARCH CENTER (TRAUB at WATSON.IBM.COM)
"CELLULAR MECHANISMS OF SOME EPILEPTIC OSCILLATIONS"
 
        Cortical circuitry can express epileptic discharges
(synchronized population oscillations) when a number of different
system parameters are experimentally manipulated: blockade of fast
synaptic inhibition; enhancement of NMDA conductances; or
prolongation of non-NMDA conductances.  Despite the differences in
synaptic mechanism, the population output is, remarkably,
stereotyped.  We shall present data indicating that the stereotypy
can be explained by three basic ideas: recurrent excitatory
connections between pyramidal neurons, the ability of pyramidal
dendrites to produce repetitive bursts, and the fact that
experimental epilepsies engage one or another prolonged
depolarizing synaptic current. 



SETH WOLPERT, UNIVERSITY OF MAINE (WOLPERT at EECE.MAINE.EDU)
"MODELING NEURAL OSCILLATIONS USING VLSI-BASED NEUROMIMES" 

        As a prelude to the VLSI implementation of a locomotory
network, neuronal oscillators that utilize reciprocal inhibition
(RI) and recurrent cyclic inhibition (RCI) were re-created for
parametric characterization using comprehensive VLSI-based
artificial nerve cells, or Neuromimes.   Two-phase RI oscillators
consisting of a pair of self-exciting, mutually inhibiting neuronal
analogs were implemented using both fixed and dynamic synaptic
weighting, and cyclic inhibitory RCI ring networks of three and
five cells with fixed synaptic weighting were characterized with
respect to cell parameters representing resting cell membrane
potential, resting threshold potential, refractory period and tonic
inhibition from an external source.  For each of these parameters,
the frequency at which an individual cell would self-excite was
measured.  The impact of that cell's self-excitatory frequency on
the frequency of the total network was then assessed in a series of
parametric tests.   Results indicated that, while all four input
parameters continuously and coherently affected the cellular
frequency, one input parameter, the duration of the cellular
refractory period, had no effect on overall network frequency, even
though the cellular frequency ranged over more than two orders of
magnitude.  These results suggest that neuronal oscillators are
sensitive to concentrations of the ionic species contributing to
resting cell membrane potential and threshold, but are stable with
respect to cellular conditions affecting refraction, such as the
state of sodium channel inactivation.
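
The reciprocal-inhibition scheme can be caricatured in discrete time. The sketch below is our toy model, not the VLSI Neuromime circuit: each self-exciting cell charges toward a firing threshold under a tonic drive, fires, resets, and inhibits its partner; a small head start for one cell breaks the symmetry, and the two cells then fire in strict alternation.

```python
# Two-cell reciprocal-inhibition (RI) oscillator, discrete-time caricature.
# All parameter values are invented for illustration.
def simulate(steps=300, drive=1.0, thresh=10.0, inhib=3.0):
    v = [0.0, 5.0]                  # cell 1 starts closer to threshold
    firing_order = []
    for _ in range(steps):
        for i in (0, 1):
            if v[i] >= thresh:
                firing_order.append(i)
                v[i] = 0.0                              # reset after firing
                v[1 - i] = max(0.0, v[1 - i] - inhib)   # inhibit the partner
        v = [x + drive for x in v]                      # tonic excitatory drive
    return firing_order

order = simulate()
print(order[:6])    # strict alternation: [1, 0, 1, 0, 1, 0]
```

In this caricature the network period is set by how long the lagging cell takes to recharge after inhibition, which is why parameters shaping the charging trajectory (resting potential, threshold, tonic inhibition) move the network frequency while a per-cell refractory time need not.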



ROBERT WONG, DOWNSTATE MEDICAL CENTER/BROOKLYN (NO E-MAIL;
TELEPHONE 718-270-1339, FAX 718-270-2241)
(TITLE AND ABSTRACT TO BE ADDED) 


DAVID YOUNG, LOUISIANA STATE UNIVERSITY (DYOUNG at MAX.EE.LSU.EDU)
"OSCILLATIONS CREATED BY THE FRAGMENTED ACCESS OF DISTRIBUTED
CONNECTIONIST REPRESENTATIONS"

        The rapid and efficient formation of transient interactions on
a systems level is viewed as a necessary aspect of cognitive
function.  It is a principle behind the binding problem of symbolic
rule-based reasoning which has seen many recent connectionist
approaches inspired by observations of synchronized neural
oscillations in separate cortical regions of the brain.  However,
the combinatorial complexity of linking each of the many possible
interactions that may be needed exposes a serious limitation
inherent to connectionist networks.  As is well known, an
artificial neural network constitutes a massively parallel device,
yet above the most basic organizational level it effectively does
only one thing at a time.  This limitation is called the opacity of
a neural network, and it describes the ability to access the
knowledge embodied in the connections of a network from outside the
network.
        This talk presents two new results relevant to neural
oscillations.  Firstly, wider access to the information storage of
feedback structures is achieved through composite attraction basins
that represent a combination of other learned basins.  Secondly, a
dynamics of inactivity is introduced and is shown to support
concurrent processes within the same structure.  By quieting the
activity of dissimilar network elements, system states are
temporarily merged to form combined states of smaller dimension. 
The merged state will then proceed along a monotone decreasing path
over an energy surface toward a composite basin, just as a single
state will proceed toward a single basin.  Since changes are not
made to interconnection weights, specific instantiations of full
dimension may be reconstructed from vector fragments.  Moreover,
the fragment size is dynamic and may be altered as the system
operates.  Based on this observation, a new dynamics of inactivity
for feedback connectionist structures is presented, allowing the
network to
operate in a fragment-wise manner on learned distributed
representations.  The new mechanism is seen as having tracks of
activation passing through an otherwise quiet system.  The active
fragment repeatedly passes through the distributed representation
setting up an oscillation.  Inactive portions of the structure may
then be utilized by other processes that are locally kept separate
through phase differences and efferent coincidence.  Out-of-phase
tracks may be brought into synchrony, thus allowing the interaction
of disparate features of objects by lowering the inhibition of the
neighboring elements involved.  The feedback structure is less than
fully connected globally but highly interconnected for local
neighborhoods of network elements.  Reduced global connectivity in
an environment operating fragment-wise permits true concurrent
behavior as opposed to the local use of time-shared resources which
is not concurrent.  A second structure is interwoven with and
regulates the first through inhibitory stimulation.  This
relationship of the two networks agrees with the predicted
regulatory influence that neurons with smooth dendritic
arborizations have on pyramidal cells and stellate cells displaying
spiny dendrites.
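
The idea of reconstructing a full-dimension instantiation from a vector fragment has a familiar minimal analog in Hopfield-style feedback networks. The sketch below is our generic illustration, not the speaker's architecture: patterns are stored as attractors via Hebbian outer products, part of the state is quieted (set to zero), and relaxation under the fixed weights restores the complete pattern.

```python
import numpy as np

# Hopfield-style recall of a full distributed pattern from a fragment
# (toy scale: 2 random patterns in 64 units, well under capacity).
rng = np.random.default_rng(2)
patterns = rng.choice([-1, 1], size=(2, 64))           # learned +/-1 patterns
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                                 # Hebbian outer products

probe = patterns[0].copy()
probe[48:] = 0                                         # quiet (inactive) fragment
for _ in range(10):                                    # relax to the attractor
    probe = np.where(W @ probe >= 0, 1, -1)
print(np.array_equal(probe, patterns[0]))              # True: pattern restored
```

Since the weights never change during recall, the same fixed structure can reconstruct any of its stored instantiations from a suitable fragment, which is the property the talk builds on.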



GEOFFREY YUEN, NORTHWESTERN UNIVERSITY (YUEN at MILES.PHYSIO.NWU.EDU)
"FROM THE ADJUSTABLE PATTERN GENERATOR MODEL OF THE CEREBELLUM TO
BISTABILITY IN PURKINJE CELLS"

        Based on the anatomy and physiology of the cerebellum and red
nucleus, the adjustable pattern generator (APG) model is a theory
of movement control that emphasizes the quasi-feedforward nature of
higher motor control processes. This is in contrast to the heavily
feedback-based control processes on the level of the spinal cord. 
Thus, limb movement-related motor commands (i.e., high-frequency
burst discharges) in the red nucleus during awake-monkey experiments
are postulated to be generated by endogenous CNS pattern generators
rather than via continuous feedback from the periphery.  The
postulated endogenous movement-command CNS pattern generator
includes neurons in magnocellular red nucleus (RNm), deep
cerebellar nuclei (i.e. nucleus interpositus (NI) for limb
movements) and cerebellar Purkinje cells.  Recurrent excitatory
interactions between RNm and NI which give rise to burst discharges
are modulated by the inhibitory outputs of cerebellar Purkinje
cells.  Thus dynamic burst durations and patterns are sculpted by
learning-based inhibition from Purkinje cells, giving rise to
appropriate movement command signals under different movements and
circumstances.
        Intrinsic to the concept of a pattern generator is the
existence of self-sustained activities. Aside from the
reverberatory positive feedback circuit in the recurrent loop
between the cerebellum and red nucleus, bistability in the membrane
potentials of Purkinje cells can also support self-sustained
activity.  This concept of bistability is based on the phenomenon
of plateau potentials observed in Purkinje cell dendrites.
        This talk will concisely summarize the APG theory and
circuitry, report on the results of its use in limb-movement
control simulations and describe current efforts to capture the
biophysical basis of bistability in Purkinje cells.  This
bistability is particularly significant for the control of
oscillations in the recurrent excitatory circuits between the red
nucleus and deep cerebellar nuclei,
as well as movement control in general. With respect to the
biophysical basis of dendritic bistability, we have carried out
simulations and phase-plane analysis of the ionic currents which
underlie dendritic plateau potentials in Purkinje cells.  Here we
shall report on the results of the phase plane analyses of the
systems based on high-threshold P-calcium, delayed rectifier
potassium and slow, calcium-mediated potassium channels. We
gratefully acknowledge the support of the various aspects of this
work by ONR (N-00014-93-1-0636 to G. L. F. Yuen), NIH
(P50MH48185-01 to J. C. Houk) and NSF  (NS-26915 to P. E.
Houkberger).
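
The kind of one-variable phase-line analysis that reveals bistability can be sketched with invented parameters (a schematic caricature, not a fitted Purkinje-cell model): a linear leak current plus a non-inactivating, sigmoidally activating calcium-like current yields two stable membrane states separated by an unstable one, which is the signature of a plateau potential.

```python
import math

# Phase-line analysis of dV/dt = -leak + Ca-like current.
# All conductances, reversal potentials, and the activation curve are
# invented for illustration.
def m_inf(v):
    return 1.0 / (1.0 + math.exp(-(v + 20.0) / 5.0))    # Ca activation

def dvdt(v):
    return -(v + 60.0) + 2.0 * m_inf(v) * (40.0 - v)    # leak + Ca current

# Bracket sign changes of dV/dt on a grid, then refine each root by bisection.
fixed = []
vs = [-80.0 + 0.5 * k for k in range(280)]
for a, b in zip(vs, vs[1:]):
    if dvdt(a) * dvdt(b) < 0:
        for _ in range(60):
            mid = 0.5 * (a + b)
            a, b = (a, mid) if dvdt(a) * dvdt(mid) < 0 else (mid, b)
        fixed.append(0.5 * (a + b))

# A fixed point is stable if the flow points toward it from both sides.
stable = [dvdt(v - 0.01) > 0 > dvdt(v + 0.01) for v in fixed]
print([round(v, 1) for v in fixed], stable)   # low stable, middle unstable, high stable
```

The two stable roots correspond to the resting and plateau states; a brief current pulse that carries V past the unstable middle root switches the cell between them, which is how bistability can latch self-sustained activity.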

DIRECTIONS TO CONFERENCE AND EVENING ACTIVITIES
To Those Attending the MIND conference on Oscillations in Neural Systems:

   For those of you who are baseball fans, or are perhaps just curious to
see the new Ballpark in Arlington, we are arranging a trip to the game
on Saturday May 7.  The Minnesota Twins are in town to take on the Texas
Rangers.   Game time is 7:30 pm.  If we gather a large enough crowd, we can
probably get a group discount.  Please send a response if you are interested.

   The second, but not least, purpose of this message is to inform those of
you arriving by car how to get to the motel and UTA campus.  If you
are arriving by air, you need not read further.

If you are entering Arlington from the north side via Interstate 30, you will
exit on Cooper Street and travel south (after exiting you should cross back
over the freeway to head south).  You will drive about two or three
miles to reach campus.  You will pass Randol Mill and Division streets.  UTA
is about four blocks beyond Division Street.  You should turn east (left) on
Mitchell street.  If you get to Park Row, you have gone too far.

To get to the Park Inn, continue past Mitchell one block to Benge.  The Inn
is on your right.

If you are entering Arlington from the south side via Interstate 20, you
should exit on Cooper Street and head north.   You will drive three or four
miles to reach campus.  Some of the major streets you will pass are Arbrook,
Arkansas and Pioneer Parkway, and Park Row.  UTA is just beyond Park Row.
You should turn east (right) on Mitchell Street.  If you get to Border
Street, you have gone too far.

The Park Inn is two blocks north of Park Row on your left.  Turn left on
Benge Street.

Once you are on Mitchell, continue east two blocks to West Street and turn
left (north).  Proceed one block to Nedderman.  There are two parking lots
at the corner of West and Nedderman.  If possible park in the north lot.
You will now have the nursing building to the west and the business building
to the north.

To get to the library on foot from the parking lot, head west towards the
nursing building.  You will cross on a sidewalk with the nursing building to
your left and the parking garage to your right.  (DO NOT park in the parking
garage.  It costs an arm and a leg.  Parking in the other lot is free.)
When you cross the street past the parking garage the library is the building
on the right.  The Life Sciences Building will be on the left.  The conference
is on the sixth floor of the library, in the Parlor.

Parking permits (free of charge) will be available at the conference 
registration table, as will campus maps.

If you are staying at the Inn, it is probably easier to park at the Inn and
then walk to campus (two blocks away).  Campus maps will be available at
the Park Inn desk.

Hope the directions are clear.

                                   Vince Brown
                                   b096vrb at utarlg.uta.edu

                                Registration and Travel Information 
 
Official Conference Motel: 
Park Inn 
703 Benge Drive 
Arlington, TX 76013 
1-800-777-0100 or 817-860-2323 
 
A block of rooms has been reserved at the Park Inn for $35 a night
(single or double).  Room sharing arrangements are possible. 
Reservations should be made directly through the motel.

Official Conference Travel Agent: 
Airline reservations to Dallas-Fort Worth airport should be made
through Dan Dipert travel in Arlington, 1-800-443-5335.  For those
who wish to fly on American Airlines, a Star File account has been
set up for a 5% discount off lowest available fares (two week
advance, staying over Saturday night) or 10% off regular coach
fare; arrangements for Star File reservations should be made
through Dan Dipert.  Please let the conference organizers know (by
e-mail or telephone) when you plan to arrive: some people can be
met at the airport (about 30 minutes from Arlington), others can
call Super Shuttle at 817-329-2000 upon arrival for transportation
to the Park Inn (about $14-$16 per person).
 
Registration for the conference is $25 for students, $65 for
non-student oral or poster presenters, $85 for others.  MIND
members will have $20 (or $10 for students) deducted from the
registration.  A registration form is attached to this
announcement.  Registrants will receive the MIND monthly newsletter
(on e-mail when possible) for the remainder of 1994.  
REGISTRATION FOR MIND CONFERENCE ON OSCILLATIONS IN NEURAL 
SYSTEMS, UNIVERSITY OF TEXAS AT ARLINGTON, MAY 5-7, 1994 
 

Name 
______________________________________________________________ 
 
Address 
___________________________________________________________ 
 
        
___________________________________________________________ 
           
        
___________________________________________________________ 
 
        
____________________________________________________________ 
 
E-Mail   
__________________________________________________________ 
 
Telephone _________________________________________________________

 
 
Registration fee enclosed: 
                   _____   $15  Student, member of MIND 
 
                   _____   $25  Student 
 
                   _____   $65  Non-student oral or poster
presenter 
 
                   _____   $65  Non-student member of MIND 
 
                   _____   $85  All others 
  
Will you be staying at the Park Inn?         ____  Yes  ____  No 
Are you planning to share a room with 
someone you know?                            ____  Yes  ____  No 
 
If so, please list that person's name __________________________  
     
     
 
If not, would you be interested in
sharing a room with another conference 
attendee to be assigned?                     ____  Yes  ____ No 
 
PLEASE REMEMBER TO CALL THE PARK INN DIRECTLY FOR YOUR RESERVATION
(WHETHER SINGLE OR DOUBLE) AT 1-800-777-0100 OR 817-860-2323.


