papers in neuroprose archive

Nicolas Brunel brunel at venus.roma1.infn.it
Thu Jul 13 06:13:38 EDT 1995


FTP-host: archive.cis.ohio-state.edu

The following three papers are now available for copying from
the neuroprose archive.

FTP-filename: /pub/neuroprose/brunel.dynamics.ps.Z

Title: Dynamics of an attractor neural network converting
temporal into spatial correlations (29 pages)

Network: Computation in Neural Systems, 5: 449

Author: Nicolas Brunel
Dipartimento di Fisica
Università di Roma I La Sapienza
P.le Aldo Moro 2 - 00185 Roma
Italy 

Abstract

The dynamics of a model attractor neural network, dominated by
collateral feedback and composed of excitatory and inhibitory neurons
described by their afferent currents and spike rates, is studied
analytically.  The network stores stimuli learned in a temporal
sequence.  The statistical properties of the delay activities are
investigated analytically under the approximation that no neuron is
activated by more than one of the learned stimuli, and that inhibitory
reaction is instantaneous.  The analytic results reproduce the details
of simulations of the model in which the stored memories are
uncorrelated, and neurons can be shared, with low probability, by
different stimuli. As such, the approximate analytic results account
for the delayed match-to-sample experiments of Miyashita in the
inferotemporal cortex of monkeys. If the stimuli used in the experiment
are uncorrelated, the analysis deduces, from the fraction of selective
neurons with a high correlation coefficient, a mean coding level of
$f \sim 0.0125$ in a stimulus (i.e. the mean fraction of neurons
activated by a given stimulus). It also predicts the structure of the
distribution of the correlation coefficients among neurons.
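
For orientation, a minimal sketch of the kind of sequence-coupled
Hebbian matrix used in this class of models; the specific form and the
relative coupling strength $a$ are illustrative assumptions, not taken
from the paper:

$$ J_{ij} \propto \sum_{\mu=1}^{p} \left[ \xi_i^{\mu}\xi_j^{\mu}
   + a \left( \xi_i^{\mu}\xi_j^{\mu+1} + \xi_i^{\mu+1}\xi_j^{\mu} \right) \right], $$

where $\xi^{\mu}$ marks the neurons activated by the $\mu$-th stimulus
of the training sequence. The cross-terms couple stimuli that were
temporally adjacent during training, so the delay activity evoked by
one stimulus is correlated with that of its neighbors in the sequence:
temporal correlations are converted into spatial ones.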


FTP-filename: /pub/neuroprose/brunel.learning.ps.Z

Title: Learning internal representations in an attractor neural
network with analogue neurons

To be published in Network: Computation in Neural Systems

Authors: Daniel J Amit and Nicolas Brunel
Dipartimento di Fisica
Università di Roma I
P.le Aldo Moro 2 - 00185 Roma
Italy

Abstract:

  A learning attractor neural network (LANN), with a double dynamics of
  neural activities and synaptic efficacies operating on two different
  time scales, is studied by simulations in preparation for an
  electronic implementation.  The present network includes several
  quasi-realistic features:  neurons are represented by their afferent
  currents and output spike rates; excitatory and inhibitory neurons
  are separated; attractor spike rates as well as coding levels in
  arriving stimuli are low; learning takes place only between
  excitatory units.  Synaptic dynamics is an unsupervised, analog
  Hebbian process, but long-term memory in the absence of neural
  activity is maintained by a refresh mechanism which, on long time
  scales, discretizes the synaptic values, converting learning into an
  asynchronous stochastic process induced by the stimuli on the
  synaptic efficacies.
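
A rough illustration of this double dynamics in code (a sketch only,
not the authors' implementation; the learning rates, the refresh
threshold and the two stable values are assumptions made for the
example):

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; none of these values come from the paper.
N = 100          # excitatory neurons
f = 0.1          # coding level: fraction of neurons active in a stimulus
ETA = 0.05       # fast analog Hebbian learning rate
REFRESH = 0.01   # slow refresh rate
THETA = 0.5      # refresh threshold separating the two stable values

J = np.full((N, N), 0.25)   # analog synaptic efficacies in [0, 1]
np.fill_diagonal(J, 0.0)

def present(stim, steps=50):
    """One stimulus presentation: fast analog Hebbian transients plus
    the slow refresh that pushes each efficacy toward 0 or 1."""
    global J
    post, pre = stim[:, None], stim[None, :]
    for _ in range(steps):
        # Unsupervised Hebbian step: strengthen synapses between
        # coactive pairs, weaken those with presynaptic activity only.
        J += ETA * post * pre - ETA * f * pre * (1.0 - post)
        # Refresh: each efficacy drifts toward the nearest stable value,
        # so on long time scales the synapses are discretized and what
        # survives a presentation is a stochastic 0/1 transition.
        J += REFRESH * (np.where(J > THETA, 1.0, 0.0) - J)
        np.clip(J, 0.0, 1.0, out=J)
    np.fill_diagonal(J, 0.0)

# A sequence of freely arriving low-coding-level stimuli.
for _ in range(20):
    present((rng.random(N) < f).astype(float))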

  This network is intended to learn a set of attractors from the
  statistics of freely arriving stimuli, which are represented by
  external synaptic inputs injected into the excitatory neurons. In
  the simulations, sequences of many thousands of stimuli of different
  types are presented to the network, with no distinction between
  retrieval and learning phases. Stimulus sequences differ in their
  preassigned global statistics (including time-dependent statistics);
  in the order of presentation of individual stimuli within a given
  statistics; in the length of each presentation; and in the intervals
  separating one stimulus from the next.
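
The presentation protocol can be pictured with a hypothetical sequence
generator like the following; the class probabilities, durations and
gaps are illustrative assumptions, not the statistics used in the
paper:

import numpy as np

rng = np.random.default_rng(1)

def stimulus_sequence(patterns, probs, n=1000):
    """Yield (pattern, duration, gap) triples drawn from preassigned
    global statistics, in random order and with random intervals."""
    for idx in rng.choice(len(patterns), size=n, p=probs):
        duration = rng.uniform(50.0, 200.0)   # presentation time (ms)
        gap = rng.uniform(100.0, 500.0)       # inter-stimulus interval (ms)
        yield patterns[idx], duration, gap

# Changing `probs` over time models time-dependent global statistics,
# e.g. dropping a class to probe forgetting of its attractor.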

  We find that the network effectively learns a set of attractors
  representing the statistics of the stimuli, and is able to modify its
  attractors when the input statistics change.  Moreover, as the global
  input statistics change, the network can also forget attractors
  related to stimulus classes that are no longer presented.  Forgetting takes
  place only due to the arrival of new stimuli.  The performance of the
  network and the statistics of the attractors are studied as a
  function of the input statistics.  Most of the large scale
  characteristics of the learning dynamics can be captured
  theoretically.

  This model recasts a previous implementation of a LANN composed of
  discrete neurons as a network of more realistic neurons.  The
  different elements have been designed to facilitate their
  implementation in silicon.
 

FTP-filename: /pub/neuroprose/brunel.spontaneous.ps.Z

Title: Global spontaneous activity and local structured (learned)
delay activity in cortex

Submitted to the Journal of Neurophysiology

Authors: Daniel J Amit and Nicolas Brunel
Dipartimento di Fisica
Università di Roma I
P.le Aldo Moro 2 - 00185 Roma
Italy

Abstract:

1. We investigate the conditions under which cortical activity alone
makes spontaneous activity self-reproducing and stable against
fluctuations of spike rates.  Invoking simple assumptions about the
properties of integrate-and-fire neurons, we show that the stochastic
background activity of 1-5 spikes/second cannot be stabilized when all
neurons are excitatory.
 
2. On the other hand, spontaneous activity becomes self-stabilizing in
the presence of local inhibition:  given reasonable values of the
network parameters, spontaneous activity reproduces itself and small
fluctuations in the rate are suppressed.
a. If the integration time constants of excitatory and inhibitory
neurons at the soma are equal, {\em local} excitatory and inhibitory
inputs to a neuron must balance to provide {\em local} stability.
b. If inhibition integrates its synaptic inputs faster, spontaneous
activity is stable even when local recurrent excitation predominates,
as illustrated by the sketch below.
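
A toy two-population rate model (Wilson-Cowan style, not the paper's
integrate-and-fire analysis) makes the point concrete; all gains,
couplings, time constants and external currents are illustrative
assumptions:

def relax(tau_e, tau_i, steps=20000, dt=0.05):
    """Integrate excitatory/inhibitory rate dynamics from a perturbed
    state and return the final rates (spikes/second)."""
    j_ee, j_ei = 1.5, 1.0        # local excitation dominates (j_ee > 1)
    j_ie, j_ii = 1.0, 0.5
    ext_e, ext_i = 2.0, 2.25     # afferent currents; fixed point at (3.0, 3.5)
    phi = lambda x: max(x, 0.0)  # threshold-linear transfer function
    nu_e, nu_i = 4.0, 3.5        # start with the excitatory rate perturbed
    for _ in range(steps):
        nu_e += dt / tau_e * (-nu_e + phi(j_ee * nu_e - j_ei * nu_i + ext_e))
        nu_i += dt / tau_i * (-nu_i + phi(j_ie * nu_e - j_ii * nu_i + ext_i))
    return nu_e, nu_i

# Fast inhibition (tau_i < tau_e): the perturbation decays back to the
# low-rate fixed point despite the dominant recurrent excitation.
print(relax(tau_e=10.0, tau_i=2.0))    # approaches (3.0, 3.5)

# Slow inhibition (tau_i > tau_e): the same fixed point loses stability
# and the rates drift away instead of settling back.
print(relax(tau_e=10.0, tau_i=40.0))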
 
3. In a network sustaining spontaneous rates of 1-5 spikes/second, we
study the effect of learning in a local module, expressed in synaptic
modifications in specific populations of synapses.  We find: 
a. Initially, no stimulus-specific delay activity manifests itself.
Instead, there is a delay activity in which, locally, {\em all} neurons
selective to any of the learned stimuli have rates that gradually
increase with the amplitude of synaptic potentiation.
b. When the average LTP increases beyond a critical value, specific
local attractors appear abruptly against the background of the global
uniform spontaneous attractor. This happens with either gradual or
discrete stochastic LTP.
 
4. The above findings predict that in the process of learning
unfamiliar stimuli, there is a stage in which all neurons selective to
any of the learned stimuli enhance their spontaneous activity relative
to the rest.  Then, abruptly, selective delay activity appears.  Both
facts could be observed in single-unit recordings in delayed
match-to-sample experiments.
 
5. Beyond this critical learning strength the local module has two
types of collective activity.  It either participates in the global
spontaneous activity, or it maintains a stimulus-selective elevated
activity distribution.  The particular mode of behavior depends on the
stimulus:  if it is unfamiliar, the activity is spontaneous; if similar
to a learned stimulus, the delay activity is selective.  These new
attractors (delay activities) reflect the synaptic structure developed
during learning.  In each of them a small population of neurons has
elevated rates of 20-30 spikes/second, depending on the strength of LTP.
The remaining neurons of the module have their activity at spontaneous
rates.
 

Instructions for retrieving these papers:

unix> ftp archive.cis.ohio-state.edu
login: anonymous
passwd: (your email address)
ftp> cd /pub/neuroprose
ftp> binary
ftp> get brunel.dynamics.ps.Z
ftp> get brunel.learning.ps.Z
ftp> get brunel.spontaneous.ps.Z
ftp> quit
unix> uncompress brunel.dynamics.ps.Z
unix> uncompress brunel.learning.ps.Z
unix> uncompress brunel.spontaneous.ps.Z


