Temporal Information
Fry, Robert L.
FRYRL at f1groups.fsd.jhuapl.edu
Wed Apr 5 10:30:00 EDT 1995
Establishing what actually comprises information in biological systems is an
essential problem, since this determination provides the basis for analytically
evaluating the information-processing capability of neural structures. In
response to the question "What comprises information to a neuron?" consider the
answer that those quantities which are observable or measurable by a neuron
represent information. Hence what is information to one neuron may not
necessarily be information to another. As the current discussion has pointed
out, there are many possibilities for what these measurable quantities might
consist of: rate encoding, time-of-arrival, and so on, or possibly combinations
thereof. Consider the following simplistic perspective.
Observable quantities may be measured both in space and in time, and both can
conceptually be thought of as quantized in a neural context. Spatial
quantization occurs because of the specificity of synaptic input (or perhaps
axonal input, according to current understandings of some neural structures)
for a given neuron. The synaptic efficacies can be viewed as a Hermitian
measurement operator giving rise to the measured somatic quantity. In a dual
sense, time is also quantized if the critical temporal measurement quantity is
the time-of-arrival of specific action potentials, which either do or do not
exist at a given instant in time. The term "instant" as used here must
obviously be considered in regard to "Bill's" question of what the critical
time constants are for the subject neural assemblies.
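
To make the idea of temporal quantization concrete, here is a minimal sketch
(in Python; the function name, bin width, and example spike times are purely
illustrative assumptions, not anything from the original post) that discretizes
spike arrival times into bins of a chosen width dt, so that at each resulting
"instant" an action potential either does or does not exist:

import numpy as np

def quantize_spike_train(spike_times, dt, duration):
    # Bin k is 1 if at least one action potential arrives in [k*dt, (k+1)*dt).
    n_bins = int(np.ceil(duration / dt))
    binary = np.zeros(n_bins, dtype=int)
    for t in spike_times:
        if 0.0 <= t < duration:
            binary[int(t // dt)] = 1
    return binary

# Spikes at 1.2, 3.7 and 3.9 ms, quantized at dt = 1 ms over 5 ms:
print(quantize_spike_train([1.2, 3.7, 3.9], dt=1.0, duration=5.0))
# -> [0 1 0 1 0]  (the two arrivals in bin 3 collapse onto a single "instant")

How coarse dt can be made before such collapses destroy information is exactly
the question of the critical time constant raised above.
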
There is empirical evidence of adaptation mechanisms which serve to modulate
time-of-arrival, giving rise to a delay vector in one-to-one correspondence
with the efficacy vector. From this perspective there is a dual time-space
dependency in at least some of the quantities observable by an individual
neuron. The observable quantity would then consist of a_n*x(t - tau_n), where
a_n is the learned connection strength and tau_n is the learned delay. This has
been the basis for my research, in which I have been applying the basic Shannon
measures of entropy, mutual information, and relative entropy to the study of
neural structures that are optimal in an information-theoretic sense; some of
the resulting publications and papers are available in the neuroprose
repository. With this view, the sets {a_n} and {tau_n} are seen to represent
Lagrange vectors which serve to maximize the mutual information between neural
inputs and output.
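
As one concrete, purely illustrative reading of this (a sketch under stated
assumptions rather than the actual model in those papers: the somatic quantity
is taken to be the sum over synapses of a_n*x_n(t - tau_n), and the function
names, the plug-in histogram estimator, and the toy parameters are all my own),
the observable can be computed from weighted, delayed inputs and the Shannon
mutual information between a delayed input and the output estimated directly:

import numpy as np

def somatic_observable(x, a, tau, t):
    # y(t) = sum_n a_n * x_n(t - tau_n): input n is read out at its learned
    # delay tau_n and weighted by its learned efficacy a_n.  x is an (N, T)
    # array of input signals sampled on a unit time grid.
    return sum(a[n] * x[n, t - tau[n]] for n in range(len(a)) if t - tau[n] >= 0)

def mutual_information(u, v, bins=8):
    # Plug-in (histogram) estimate of I(U;V) in bits from paired samples.
    joint, _, _ = np.histogram2d(u, v, bins=bins)
    p = joint / joint.sum()
    pu = p.sum(axis=1, keepdims=True)
    pv = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (pu @ pv)[nz])))

# Toy run: two inputs with fixed efficacies a and delays tau.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 1000))
a = np.array([0.8, -0.3])
tau = np.array([2, 5])
y = np.array([somatic_observable(x, a, tau, t) for t in range(5, 1000)])
print(mutual_information(x[0, 3:998], y))  # I(input 0 at its delay; output)

Choosing {a_n} and {tau_n} so as to maximize such a quantity, subject to the
appropriate constraints, is the Lagrange-vector interpretation described above.
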
This is of course a personal perspective, and obviously there may be many other
temporal modalities for the inter-neuron exchange of information. It can be
argued, however, that the above modality is in many ways the simplest, and
analytically it seems a very tractable perspective compared with rates,
latencies, etc.
Robert Fry
Johns Hopkins University/
Applied Physics Laboratory
Johns Hopkins Road
Laurel, MD 20723