Connectionists: how the brain works? (UNCLASSIFIED)

Juyang Weng weng at cse.msu.edu
Sat Apr 5 10:51:59 EDT 2014


Dear Steve,

These are long-standing questions that I did not have a chance to 
ask you, although I have met you many times before.
But they may be useful for some people on this list.
Please accept my apology if my question implies any false impression 
that I did not intend.

(1) Your statement below seems to have confirmed my understanding:
your top-down process in ART in the late 1990's is basically for finding
an acceptable match between the input feature vector and the stored
feature vectors represented by neurons (not necessarily the nearest match).
The currently active neuron is the one being examined by the top-down
process, in a sequential fashion: one neuron after another, until an
acceptable neuron is found.

(2) The input to ART in the late 1990's is a single feature vector,
taken as a monolithic input. By monolithic, I mean that all neurons take
the entire input feature vector as input. I raise this point because a
neuron in ART in the late 1990's does not have an explicit local sensory
receptive field (SRF), i.e., neurons are fully connected to all
components of the input vector. A local SRF means that each neuron is
connected only to a small region of an input image.
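If this reading is correct, the search scheme in points (1) and (2) can be sketched as follows (a minimal Python sketch; the fuzzy min-rule match and the vigilance threshold are simplifying assumptions for illustration, not Grossberg's exact equations):

```python
import numpy as np

def art_search(x, categories, vigilance=0.8):
    """Sequentially test stored category prototypes against the
    monolithic input vector x until one passes the vigilance
    (acceptable-match) test. Returns the index of the accepted
    category, or None if every candidate is reset."""
    # Order candidates by bottom-up activation (best first).
    order = np.argsort([-np.dot(w, x) for w in categories])
    for j in order:
        w = categories[j]
        # Top-down match: fraction of the input explained by the
        # expectation (a fuzzy-ART-style min rule, an assumption here).
        match = np.minimum(w, x).sum() / x.sum()
        if match >= vigilance:
            return j  # resonance: an acceptable match, not the nearest
        # otherwise: mismatch -> reset this neuron, try the next one
    return None
```

Note that each stored neuron sees the entire vector x: there is no local receptive field, which is exactly the monolithic property in point (2).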

My apologies again if my understanding above has errors, although I
have examined the above two points carefully across several of your
papers.

Best regards,

-John


On 3/22/14 10:04 PM, Stephen Grossberg wrote:
> Dear Tsvi,
>
> You stated that ART  "requires complex signals". I noted that this 
> statement is not correct.
>
> To illustrate what I meant, I noted that ART uses a simple measure of 
> pattern mismatch. In particular,  a top-down expectation selects 
> consistent features and suppresses inconsistent features to focus 
> attention upon expected features. This property of attention is well 
> supported by lots of psychological and neurobiological data.
>
> You also contrasted models that "cycle", one being ART, with your own 
> network which "does not cycle". I therefore mentioned that ART 
> hypothesis testing and search, which involve "cycling" through 
> operations of mismatch, arousal, and reset, are directly supported by 
> a lot of data. For example, in an oddball paradigm, one can compare 
> mismatch with properties of the P120 ERP, arousal with properties of 
> the N200 ERP, and reset with properties of the P300 ERP.
>
> I am not sure why this reply led you to write bitterly about an old 
> review of one of your articles, which I know nothing about, and which 
> is not relevant to my specific points of information.
>
> Best,
>
> Steve
>
>
> On Mar 22, 2014, at 4:38 PM, Tsvi Achler wrote:
>
>> Dear Steve,
>> Isn't Section 8.2 exactly about the cycling (and labeled as such) and 
>> figure 2 a depiction of the cycling?
>>
>> Your response is similar to feedback I received years ago in an
>> evaluation of my algorithm, where the reviewer clearly didn't read the
>> paper in detail but, it seems, was intent on not letting it through
>> because they appeared to have their own algorithm and agenda.  I have
>> placed snippets of that review here because this is continuing today.
>>
>> From the review: "The network, in effect, implements a winner-takes 
>> all scheme, when only a single output neuron reacts to each input..."
>> This is not true: my network does not implement winner-take-all; in
>> fact, that is the point: there is no lateral inhibition.
>> "... this network type can be traced back to the sixties, most 
>> notably to the work of Grossberg ...  As a result, I do not see any 
>> novelty in this paper ... Overall recommendation: 1 (Reject) .. 
>> Reviewer's confidence: 5 (highest)."
>>
>> This is exactly what I meant when I stated that it seems academia
>> would rather bury new ideas.  Such a callous and strong dismissal is
>> devastating to a young student and detrimental to the field.
>> Someone as decorated and established as you has the opportunity to
>> move the field forward.  Instead, however, the neural-network aspect
>> of feedback during recognition is being actively inhibited by
>> unsubstantive and destructive efforts.
>>
>> I would be happy to work with you offline to write a joint statement 
>> on this with all of the technical details.
>>
>> Sincerely,
>> -Tsvi
>>
>>
>> On Mar 22, 2014 4:08 AM, "Stephen Grossberg" <steve at cns.bu.edu> wrote:
>> >
>> > Dear Tsvi,
>> >
>> > You mention Adaptive Resonance below and suggest that it "requires 
>> complex signals indicating when to stop, compare, and cycle". That is 
>> not correct.
>> >
>> > ART uses a simple measure of pattern mismatch. Moreover, 
>> psychological, neurophysiological, anatomical, and ERP data support 
>> the operations that it models during hypothesis testing and memory 
>> search. ART predicted several of these data before they were collected.
>> >
>> > If you would like to pursue this further, see 
>> http://cns.bu.edu/~steve/ART.pdf for a recent heuristic review.
>> >
>> > Best,
>> >
>> > Steve
>> >
>> >
>> > On Mar 21, 2014, at 10:29 PM, Tsvi Achler wrote:
>> >
>> > Sorry for the length of this response but I wanted to go into some
>> > detail here.
>> >
>> > I see the habituation paradigm as somewhat analogous to surprise and
>> > measurement of error during recognition. I can think of a few
>> > mathematical Neural Network classifiers that can generate an internal
>> > pattern for match during recognition to calculate this
>> > habituation/surprise orientation.   Feedforward networks definitely
>> > will not work because they don't recall the internal stimulus very
>> > well.  One option is adaptive resonance (which I assume you use), but
>> > it cycles through the patterns one at a time and requires complex
>> > signals indicating when to stop, compare, and cycle.  I assume
>> > Juyang's DN can also do something similar, but I suspect it too must
>> > cycle, since it also has lateral inhibition. Bidirectional Associative
>> > Memories (BAM) may also be used.  Others such as Bayes networks and
>> > the free-energy principle can be used, although they are not as easily
>> > translatable to neural networks.
>> >
>> > Another option is a network like mine which does not have lateral
>> > connections but also generates internal patterns.  The advantage is
>> > that it can also generate mixtures of patterns at once, does not cycle
>> > through individual patterns, does not require signals associated with
>> > cycling, and can be shown mathematically to be analogous to
>> > feedforward networks.  The error signal it produces can be used for an
>> > orientation reflex or what I rather call attention. It is essential
>> > for recognition and planning.
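One generic way to formalize such a feedback network (no lateral connections; outputs regulated only through a shared top-down reconstruction of the input) is a multiplicative reconstruction-error update. The sketch below illustrates that idea; it is an assumption for illustration, not Achler's published formulation:

```python
import numpy as np

def feedback_recognition(x, W, steps=50):
    """Iterative recognition with top-down reconstruction feedback and
    no lateral inhibition: outputs interact only through the shared
    reconstruction of the input, so mixtures of patterns can be
    represented simultaneously. W holds one stored pattern per row
    (rows assumed nonzero)."""
    y = np.ones(W.shape[0])              # one activation per pattern
    for _ in range(steps):
        r = W.T @ y                      # top-down reconstruction of x
        e = x / np.maximum(r, 1e-9)      # per-input error ratio
        y = y * (W @ e) / W.sum(axis=1)  # reweight outputs by error
    return y
```

With a single stored pattern present in the input, the matching output converges toward 1 and the others decay; with a sum of two stored patterns, both outputs stay active at once, the mixture property described above.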
>> >
>> > I would be happy to give a talk on this and collaborate on a rigorous
>> > comparison.  Indeed it is important to look at models other than those
>> > using feedforward connections during recognition.
>> >
>> > Sincerely,
>> >
>> > -Tsvi
>> >
>> >
>> >
>> >
>> > On Mar 21, 2014 5:25 AM, "Kelley, Troy D CIV (US)"
>> > <troy.d.kelley6.civ at mail.mil> wrote:
>> >
>> >
>> > Classification: UNCLASSIFIED
>> >
>> > Caveats: NONE
>> >
>> >
>> > Yes, Mark, I would argue that habituation is anticipatory prediction.
>> > The neuron creates a model of the incoming stimulus and is essentially
>> > predicting that the next stimulus will be comparatively similar to the
>> > previous one. If this prediction is met, the neuron habituates.
>> > That is a simple, low-level predictive model.
>> >
>> >
>> > -----Original Message-----
>> > From: Mark H. Bickhard [mailto:mhb0 at Lehigh.EDU]
>> > Sent: Thursday, March 20, 2014 5:28 PM
>> > To: Kelley, Troy D CIV (US)
>> > Cc: Tsvi Achler; Andras Lorincz; bower at uthscsa.edu;
>> > connectionists at mailman.srv.cs.cmu.edu
>> > Subject: Re: Connectionists: how the brain works?
>> >
>> >
>> > I would agree with the importance of Sokolov habituation, but there is
>> > more than one way to understand and generalize from this phenomenon:
>> >
>> > http://www.lehigh.edu/~mhb0/AnticipatoryBrain20Aug13.pdf
>> >
>> >
>> > Mark H. Bickhard
>> > Lehigh University
>> > 17 Memorial Drive East
>> > Bethlehem, PA 18015
>> > mark at bickhard.name
>> > http://bickhard.ws/
>> >
>> >
>> > On Mar 20, 2014, at 4:41 PM, Kelley, Troy D CIV (US) wrote:
>> >
>> >
>> > We have found that the habituation algorithm that Sokolov discovered
>> > back in 1963 provides a useful place to start if one is trying to
>> > determine how the brain works.  The algorithm, at the cellular level,
>> > is capable of determining novelty and generating implicit predictions,
>> > which it then habituates to. Additionally, it is capable of
>> > regenerating the original response when re-exposed to the same
>> > stimulus.  All of these behaviors provide an excellent framework at
>> > the cellular level for explaining all sorts of high-level behaviors at
>> > the functional level.  And it fits the Ockham's razor principle of
>> > using a single algorithm to explain a wide variety of explicit
>> > behavior.
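The behavior described above (build a model of the stimulus, habituate while the prediction holds, and respond again to a novel stimulus) can be captured in a toy comparator; this is an illustration, not Sokolov's actual formulation:

```python
class HabituatingUnit:
    """Toy Sokolov-style comparator: the unit keeps a running model of
    the incoming stimulus and responds in proportion to prediction
    error, so repeated stimuli habituate and novel ones dishabituate."""
    def __init__(self, lr=0.5, threshold=0.2):
        self.model = None        # internal model of the stimulus
        self.lr = lr             # how fast the model adapts
        self.threshold = threshold

    def respond(self, stimulus):
        if self.model is None:
            self.model = stimulus
            return 1.0           # orienting response to the first stimulus
        error = abs(stimulus - self.model)        # implicit prediction error
        self.model += self.lr * (stimulus - self.model)
        # respond only when the prediction fails (novelty)
        return error if error > self.threshold else 0.0
```

Presenting the same stimulus repeatedly drives the response to zero; a novel stimulus produces a large response again, and the old stimulus regains its response once the model has drifted away from it.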
>> >
>> >
>> > Troy D. Kelley
>> > RDRL-HRS-E
>> > Cognitive Robotics and Modeling Team Leader
>> > Human Research and Engineering Directorate
>> > U.S. Army Research Laboratory, Aberdeen, MD 21005
>> > Phone 410-278-5869 or 410-278-6748
>> > Note my new email address: troy.d.kelley6.civ at mail.mil
>> >
>> >
>> >
>> >
>> >
>> >
>> > On 3/20/14 10:41 AM, "Tsvi Achler" <achler at gmail.com> wrote:
>> >
>> >
>> > I think an Ockham's razor principle can be used to find the optimal
>> > algorithm if it is interpreted to mean the model with the fewest
>> > free parameters that captures the most phenomena.
>> >
>> > http://reason.cs.uiuc.edu/tsvi/Evaluating_Flexibility_of_Recognition.pdf
>> >
>> > -Tsvi
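This reading of Ockham (the fewest free parameters that capture the most phenomena) is what penalized model-selection scores formalize. A minimal sketch, using an AIC-style penalty chosen here purely as one concrete example:

```python
def aic_score(log_likelihood, n_params):
    """Akaike information criterion: lower is better. Trades data fit
    (log-likelihood) against the number of free parameters."""
    return 2 * n_params - 2 * log_likelihood

def pick_model(candidates):
    """candidates: list of (name, log_likelihood, n_params) tuples.
    Returns the name of the model with the best (lowest) score."""
    return min(candidates, key=lambda c: aic_score(c[1], c[2]))[0]
```

A model that fits only slightly worse but uses far fewer parameters wins under such a score, which is the trade-off Achler is pointing at.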
>> >
>> >
>> > On Wed, Mar 19, 2014 at 10:37 PM, Andras Lorincz <lorincz at inf.elte.hu> wrote:
>> >
>> > Ockham works here via compressing both the algorithm and the structure.
>> > Compressing the structure to stem cells means that the algorithm
>> > should describe the development, the working, and the time-dependent
>> > structure of the brain. Not compressing the description of the
>> > structure of the evolved brain is a different problem, since it saves
>> > the need to describe the development, but not the working.
>> > Understanding the structure and the working of one part of the brain
>> > requires a description of its communication, which increases the
>> > complexity of the description. By the way, this holds for the whole
>> > brain, so we might have to include the body at least; a structural
>> > minimalist may wish to start from the genetic code, use that hint, and
>> > unfold the already compressed description. There are (many and
>> > different) todos 'outside' ...
>> >
>> > Andras
>> >
>> >
>> > ________________________________
>> > From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu>
>> > on behalf of james bower <bower at uthscsa.edu>
>> > Sent: Thursday, March 20, 2014 3:33 AM
>> > To: Geoffrey Goodhill
>> > Cc: connectionists at mailman.srv.cs.cmu.edu
>> > Subject: Re: Connectionists: how the brain works?
>> >
>> >
>> > Geoffrey,
>> >
>> > Nice addition to the discussion, actually introducing an interesting
>> > angle on the question of brain organization (see below).  As you note,
>> > reaction-diffusion mechanisms and modeling have been quite successful
>> > in replicating patterns seen in biology - especially interesting, I
>> > think, is the modeling of patterns in slime molds, but also for very
>> > general pattern formation in embryology.  However, more and more
>> > detailed analysis of what is diffusing, what is sensing what is
>> > diffusing, and what is reacting to substances once sensed -- all
>> > linked to complex patterns of gene regulation and expression -- has
>> > made it clear that actual embryological development is much, much more
>> > complex, as Turing himself clearly anticipated, and as the quote you
>> > cite pretty clearly indicates.  Clearly a smart guy.  But I don't
>> > actually think that this is an application of Ockham's razor, although
>> > it might appear to be after the fact.  Just as Hodgkin and Huxley were
>> > not applying it either in their model of the action potential.  Turing
>> > apparently guessed (based on a lot of work at the time on pattern
>> > formation with reaction diffusion) that such a mechanism might provide
>> > the natural basis for what embryos do. Thus, just like for Hodgkin and
>> > Huxley, his model resulted from a biophysical insight, not an explicit
>> > attempt to build a stripped-down model for its own sake.  I seriously
>> > doubt that Turing would have claimed that he, or his models, could
>> > more effectively do what biology actually does in forming an embryo,
>> > or substitute for the actual process.
>> >
>> >
>> > However, I think there is another interesting connection here to the
>> > discussion on modeling the brain. Almost certainly, communication and
>> > organizational systems in early living beings were reaction-diffusion
>> > based. This is still a dominant effect for much 'sensing' in small
>> > organisms. Perhaps, therefore, one can look at nervous systems as
>> > structures specifically developed to supersede reaction-diffusion
>> > mechanisms, thus superseding this very 'natural' but complexity-limited
>> > type of communication and organization.  What this means, I believe, is
>> > that a simplified or abstracted physical or mathematical model of the
>> > brain explicitly violates the evolutionary pressures responsible for
>> > its structure.  It's where the wires go, what the wires do, and what
>> > the receiving neuron does with the information that forms the basis
>> > for neural computation, multiplied by a very large number.  And that
>> > is dependent on the actual physical structure of those elements.
>> >
>> >
>> > One more point about smart guys: as a young computational
>> > neurobiologist, I questioned how insightful John von Neumann actually
>> > was, because I was constantly hearing about a lecture he wrote (but
>> > didn't give) at Yale suggesting that dendrites and neurons might be
>> > digital (John von Neumann, The Computer and the Brain, New Haven/London:
>> > Yale University Press, 1958). Very clearly not a very insightful idea
>> > for a supposedly smart guy.  It wasn't until a few years later, when I
>> > actually read the lecture, that I found out that he ends by stating
>> > that this idea is almost certainly wrong, given the likely
>> > nonlinearities in neuronal dendrites.  So von Neumann didn't lack
>> > insight; the people who quoted him did.  It is a remarkable fact that,
>> > more than 60 years later, the majority of models of so-called neurons
>> > built by engineers AND neurobiologists don't consider these
>> > nonlinearities.
>> >
>> > The point being the same point: to the Hopfield, Mead, Feynman list,
>> > we can now add Turing and von Neumann as suspecting that, for
>> > understanding, biology and the nervous system must be dealt with in
>> > their full complexity.
>> >
>> >
>> > But thanks for the example from Turing - always nice to consider actual
>> >
>> > examples.   :-)
>> >
>> >
>> > Jim
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Mar 19, 2014, at 8:30 PM, Geoffrey Goodhill <g.goodhill at uq.edu.au> wrote:
>> >
>> >
>> > Hi All,
>> >
>> >
>> > A great example of successful Ockham-inspired biology is Alan
>> > Turing's model for pattern formation (spots, stripes, etc.) in
>> > embryology (The Chemical Basis of Morphogenesis, Phil Trans Roy Soc,
>> > 1952). Turing introduced a physical mechanism for how inhomogeneous
>> > spatial patterns can arise in a biological system from a spatially
>> > homogeneous starting point, based on the diffusion of morphogens. The
>> > paper begins:
>> >
>> > "In this section a mathematical model of the growing embryo will be
>> > described. This model will be a simplification and an idealization,
>> > and consequently a falsification. It is to be hoped that the features
>> > retained for discussion are those of greatest importance in the
>> > present state of knowledge."
>> >
>> > The paper remained virtually uncited for its first 20 years following
>> > publication, but since then has amassed 8000 citations (Google
>> > Scholar). The subsequent discovery of huge quantities of molecular
>> > detail in biological pattern formation has only reinforced the
>> > importance of this relatively simple model, not because it explains
>> > every system, but because the overarching concepts it introduced have
>> > proved to be so fertile.
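The mechanism Goodhill describes can be demonstrated in a few lines: a reaction-diffusion system started from a nearly homogeneous state self-organizes into spatial structure. The sketch below uses the Gray-Scott model in one dimension; the particular equations and parameter values are illustrative stand-ins, not Turing's original system:

```python
import numpy as np

def gray_scott_1d(n=128, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.065,
                  seed=0):
    """1-D Gray-Scott reaction-diffusion, integrated by explicit Euler
    with dt = 1. A nearly homogeneous field plus a tiny perturbation
    can self-organize into spatial structure, illustrating Turing's
    mechanism. Parameter values are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    u = np.ones(n) + 0.01 * rng.standard_normal(n)   # substrate
    v = np.zeros(n)
    v[n // 2 - 4:n // 2 + 4] = 0.5                   # small local seed

    def lap(a):
        # discrete Laplacian with periodic boundaries
        return np.roll(a, 1) + np.roll(a, -1) - 2 * a

    for _ in range(steps):
        uvv = u * v * v                              # reaction term
        u = u + Du * lap(u) - uvv + f * (1 - u)
        v = v + Dv * lap(v) + uvv - (f + k) * v
    return u, v
```

Plotting v after a few thousand steps typically shows localized pulses rather than the uniform state the system started near, which is the essence of the instability Turing identified.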
>> >
>> >
>> > Cheers,
>> >
>> >
>> > Geoff
>> >
>> >
>> >
>> > On Mar 20, 2014, at 6:27 AM, Michael Arbib wrote:
>> >
>> >
>> > Ignoring the gross differences in circuitry between hippocampus and
>> > cerebellum, etc., is not erring on the side of simplicity; it is
>> > erring, period. Have you actually looked at a Cajal/Szentágothai-style
>> > drawing of their circuitry?
>> >
>> >
>> > At 01:07 PM 3/19/2014, Brian J Mingus wrote:
>> >
>> >
>> > Hi Jim,
>> >
>> >
>> > Focusing too much on the details is risky in and of itself. Optimal
>> > compression requires a balance, and we can't compute what that balance
>> > is (all models are wrong). One thing we can say for sure is that we
>> > should err on the side of simplicity, and adding detail to theories
>> > before simpler explanations have failed is not Ockham's heuristic.
>> > That said, it's still in the space of a Big Data fuzzy-science
>> > approach, where we throw as much data from as many levels of analysis
>> > as we can come up with into a big pot and then construct a theory. The
>> > thing to keep in mind is that when we start pruning this model, most
>> > of the details are going to disappear, because almost all of them are
>> > irrelevant. Indeed, the size of the description that includes all the
>> > details is almost infinite, whereas the length of the description that
>> > explains almost all the variance is extremely short, especially in
>> > comparison. This is why Ockham's razor is a good heuristic. It helps
>> > prevent us from wasting time on unnecessary details by suggesting that
>> > we only inquire into the details once our existing simpler theory has
>> > failed to work.
>> >
>> >
>> > On 3/14/14 3:40 PM, Michael Arbib wrote:
>> >
>> >
>> > At 11:17 AM 3/14/2014, Juyang Weng wrote:
>> >
>> >
>> > The brain uses a single architecture to do all brain functions we are
>> >
>> > aware of!  It uses the same architecture to do vision, audition,
>> >
>> > motor, reasoning, decision making, motivation (including pain
>> >
>> > avoidance and pleasure seeking, novelty seeking, higher emotion, etc.).
>> >
>> >
>> >
>> > Gosh -- and I thought cerebral cortex, hippocampus and cerebellum
>> >
>> > were very different from each other.
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Stephen Grossberg
>> > Wang Professor of Cognitive and Neural Systems
>> > Professor of Mathematics, Psychology, and Biomedical Engineering
>> > Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html
>> > http://cns.bu.edu/~steve
>> > steve at bu.edu
>> >
>> >
>> >
>> >
>
> Stephen Grossberg
> Wang Professor of Cognitive and Neural Systems
> Professor of Mathematics, Psychology, and Biomedical Engineering
> Director, Center for Adaptive Systems http://www.cns.bu.edu/about/cas.html
> http://cns.bu.edu/~steve
> steve at bu.edu
>
>
>
>

-- 
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------


