Connectionists: A radically new theory of how the brain represents and computes with probabilities

Rod Rinkus rod.rinkus at gmail.com
Mon Jan 30 22:43:04 EST 2017


I am pleased to announce a new paper, available on arXiv, that presents a
radically new concept of how the brain represents and computes with
probabilities.  It is based on sparse distributed coding (SDC) and differs
profoundly from the long-standing, mainstream probabilistic
population-based coding (PPC) concept; I think it will be of broad
interest to the computational neuroscience and machine learning
communities.  Among other things, it entails a completely different
concept of noise: noise is actively generated and used to preserve
similarity from input space to coding space, which in turn entails an
explanation of correlation that differs from the mainstream one (a small
sketch of this noise mechanism follows below).  It also explains the
classic unimodal, bell-shaped, single-cell tuning curve as an artifact of
the process of embedding SDCs (a.k.a. cell assemblies, ensembles) in
superposition.
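
To give a concrete feel for how actively generated noise can preserve
similarity, here is a minimal Python sketch (my own illustration, not code
from the paper).  It assumes a Sparsey-like coding field of Q
winner-take-all modules with K binary cells each, and uses a softmax
temperature as a stand-in for the paper's actual noise mechanism; Q, K,
assign_code, and the beta schedule are all hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    Q, K = 10, 8   # Q winner-take-all modules, K binary cells each (hypothetical sizes)

    def assign_code(input_sums, familiarity):
        # input_sums: (Q, K) bottom-up match signals; familiarity in [0, 1],
        # high when the input resembles already-stored inputs.
        # High familiarity -> low noise -> winners follow the signals, so the
        # new code overlaps the codes of similar stored inputs; low
        # familiarity -> high noise -> a nearly random, low-overlap code.
        winners = np.empty(Q, dtype=int)
        for q in range(Q):
            beta = 10.0 * familiarity          # hypothetical noise schedule
            p = np.exp(beta * (input_sums[q] - input_sums[q].max()))
            p /= p.sum()
            winners[q] = rng.choice(K, p=p)
        return winners

    sums = rng.random((Q, K))
    print(assign_code(sums, familiarity=1.0))  # tracks the max-signal cells
    print(assign_code(sums, familiarity=0.0))  # essentially uniform draws

The capacity/statistics tradeoff mentioned in the abstract falls out of
such a schedule: less noise packs similar inputs onto overlapping codes,
embedding input statistics in the pattern of code intersections, while
more noise spreads codes apart, increasing storage capacity.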

Here is the arXiv link: http://arxiv.org/abs/1701.07879

Title: A Radically New Theory of how the Brain Represents and Computes with
Probabilities

Abstract:

The brain is believed to implement probabilistic reasoning and to
represent information via population, or distributed, coding.  Most
previous population-based probabilistic theories share several basic
properties:

1) continuous-valued neurons (units);
2) fully/densely distributed codes, i.e., all/most coding units
   participate in every code;
3) graded synapses;
4) rate coding;
5) units have innate unimodal, e.g., bell-shaped, tuning functions (TFs);
6) units are intrinsically noisy; and
7) noise/correlation is generally considered harmful.

In contrast, our theory assumes:

1) binary units;
2) only a small subset of units, i.e., a sparse distributed code (SDC),
   comprises any individual code;
3) binary synapses;
4) signaling formally requires only single, i.e., first, spikes;
5) units initially have completely flat TFs (all weights zero);
6) units are not noisy; and
7) noise is a resource generated/used to cause similar inputs to map to
   similar codes, controlling a tradeoff between storage capacity and
   embedding the input-space statistics in the pattern of intersections
   over stored codes, indirectly yielding correlation patterns.

The theory, Sparsey, was introduced 20 years ago as an efficient canonical
cortical circuit/algorithm model, one in which learning and best-match
retrieval (inference, recognition) time remains fixed as the number of
stored codes (hypotheses, memories) grows, but it was not emphasized as a
probabilistic model.  Assuming input similarity correlates with
likelihood, the active SDC code simultaneously represents both the most
probable hypothesis and the probability distribution over all stored
hypotheses.  We show this for spatial and spatiotemporal (sequential)
cases.  In the latter case, the entire distribution is updated, on each
sequence item, in fixed time.  Finally, consistent with moving beyond the
Neuron Doctrine to the view that the SDC (cell assembly, ensemble) is the
fundamental neural representational unit, Sparsey suggests that classical
unimodal TFs emerge as an artifact of a single/few-trial learning process
in which SDC codes are laid down in superposition.
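
To make that last claim concrete, that the single active SDC represents
both the most probable hypothesis and the distribution over all stored
hypotheses, here is a minimal Python sketch (my own toy illustration, not
code from the paper; the codes, names, and numbers are all hypothetical):

    import numpy as np

    def hypothesis_distribution(active_code, stored_codes):
        # Overlap of the active code with each stored code, normalized over
        # hypotheses.  Under the assumption that input similarity correlates
        # with likelihood, this overlap readout is the distribution over all
        # stored hypotheses.
        overlaps = np.array([len(active_code & c) for c in stored_codes],
                            dtype=float)
        return overlaps / overlaps.sum()

    # Toy example: a code is the set of winning unit indices, one per module.
    stored = [{0, 9, 17}, {0, 9, 21}, {3, 12, 23}]   # hypothetical stored codes
    active = {0, 9, 17}                              # currently active code
    print(hypothesis_distribution(active, stored))   # [0.6 0.4 0. ]

The argmax of this readout is the best-match hypothesis.  Note that in
Sparsey the overlaps are computed implicitly, in one pass through the
superposed weight matrix, rather than by looping over stored codes as
above; the loop is only to make the overlap-as-probability idea visible,
and the paper's fixed-time claim refers to the implicit computation.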

I look forward to comments/feedback from the community.

-Rod Rinkus

-- 
Gerard (Rod) Rinkus, PhD
President,
rod at neurithmicsystems dot com
Neurithmic Systems LLC
275 Grove Street, Suite 2-400
Newton, MA 02466
617-997-6272

Visiting Scientist, Lisman Lab
Volen Center for Complex Systems
Brandeis University, Waltham, MA
grinkus at brandeis dot edu
http://people.brandeis.edu/~grinkus/