<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Dear Steve,<br>
<br>
These are long-standing questions that I did not have a chance
to ask you on the many occasions we have met. <br>
They may be useful to some people on this list. <br>
Please accept my apology if my questions convey any false impression
that I did not intend.<br>
<br>
(1) Your statement below seems to confirm my understanding:
<br>
the top-down process in ART of the late 1990s is basically for
finding an acceptable match <br>
between the input feature vector and the stored feature vectors
represented by neurons (it is not meant to find the nearest match). <br>
The currently active neuron is the one being examined by the
top-down process<br>
in a sequential fashion: one neuron after another, until an
acceptable neuron is found.<br>
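If that reading is right, the search can be sketched in a few lines (an illustrative toy using a fuzzy-ART-style match ratio; the function and parameter names are mine, not from the ART papers):

```python
import numpy as np

def art_search(x, prototypes, vigilance=0.8):
    """Sequentially test stored prototypes against input x until one
    passes the vigilance (acceptable-match) criterion.
    Toy sketch only, not the full ART equations."""
    # examine candidates in order of bottom-up activation (best first)
    order = np.argsort([-np.dot(x, w) / (1e-9 + w.sum()) for w in prototypes])
    for j in order:
        w = prototypes[j]
        # fuzzy-ART-style match ratio: |min(x, w)| / |x|
        match = np.minimum(x, w).sum() / (1e-9 + x.sum())
        if match >= vigilance:   # resonance: acceptable match found
            return j
        # otherwise mismatch -> reset this neuron, examine the next one
    return None                  # no stored neuron is acceptable
```

Each candidate is either accepted or reset in turn; the loop stops at the first acceptable neuron, not necessarily the globally nearest one.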
<br>
(2) The input to ART in the late 1990s is a single feature
vector, taken as a monolithic input. <br>
By monolithic, I mean that all neurons take the entire input feature
vector as input. <br>
I raise this point here because a neuron in ART of the late 1990s
does not have an explicit local sensory receptive field (SRF); <br>
i.e., each neuron is fully connected to all components of the input
vector. A local SRF means that each neuron is connected only to a
small region <br>
in an input image. <br>
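The monolithic-versus-local distinction can be made concrete in a few lines (an illustrative sketch; the image size, neuron count, and weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))
x = image.ravel()                 # 64-component monolithic input vector

# Monolithic / fully connected: every neuron weights ALL 64 components.
W_full = rng.random((10, 64))
full_responses = W_full @ x       # each of the 10 neurons sees the whole vector

# Local SRF: each neuron is wired only to a small image patch.
def srf_response(image, row, col, size=3):
    patch = image[row:row+size, col:col+size]  # the neuron's receptive field
    w = np.ones(patch.size)                    # toy local weights
    return w @ patch.ravel()

r = srf_response(image, 2, 2)     # responds only to the 3x3 patch at (2, 2)
```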
<br>
My apologies again if my understanding above contains errors,
although I have examined the above two points carefully <br>
across several of your papers.<br>
<br>
Best regards,<br>
<br>
-John<br>
<br>
<br>
<div class="moz-cite-prefix">On 3/22/14 10:04 PM, Stephen Grossberg
wrote:<br>
</div>
<blockquote
cite="mid:C1DD9952-131A-439F-88E7-CA82C6C12D42@cns.bu.edu"
type="cite">Dear Tsvi,
<div><br>
</div>
<div>You stated that ART "requires complex signals". I noted that
this statement is not correct.</div>
<div><br>
</div>
<div>To illustrate what I meant, I noted that ART uses a simple
measure of pattern mismatch. In particular, a top-down
expectation selects consistent features and suppresses
inconsistent features to focus attention upon expected features.
This property of attention is well supported by lots of
psychological and neurobiological data. </div>
<div><br>
</div>
<div>You also contrasted models that "cycle", one being ART, with
your own network which "does not cycle". I therefore mentioned
that ART hypothesis testing and search, which involve "cycling"
through operations of mismatch, arousal, and reset, are directly
supported by a lot of data. For example, in an oddball paradigm,
one can compare mismatch with properties of the P120 ERP,
arousal with properties of the N200 ERP, and reset with
properties of the P300 ERP.</div>
<div><br>
</div>
<div>I am not sure why this reply led you to write bitterly about
an old review of one of your articles, which I know nothing
about, and which is not relevant to my specific points of
information. </div>
<div><br>
</div>
<div>Best,</div>
<div><br>
</div>
<div>Steve</div>
<div><br>
</div>
<div><br>
</div>
<div>
<div>On Mar 22, 2014, at 4:38 PM, Tsvi Achler wrote:</div>
<br class="Apple-interchange-newline">
<blockquote type="cite">
<div dir="ltr">Dear Steve,<br>
<div>Isn't Section 8.2 exactly about the cycling (and
labeled as such) and figure 2 a depiction of the cycling?<br>
<br>
</div>
<div>Your response is similar to feedback I received years
ago in an evaluation of my algorithm, where the reviewer
clearly didn't read the paper in detail but, it seems, was
intent on not letting it through, as if they had their own
algorithm and agenda. I have taken snippets of that review
and placed them here because this is continuing today.<br>
<br>
</div>
<div>From the review: "The network, in effect, implements a
winner-takes all scheme, when only a single output neuron
reacts to each input..." <br>
This is not true: my network does not implement
winner-take-all; in fact, that is the point: there is no
lateral inhibition. <br>
"... this network type can be traced back to the sixties,
most notably to the work of Grossberg ... As a result, I
do not see any novelty in this paper ... Overall
recommendation: 1 (Reject) .. Reviewer's confidence: 5
(highest)."<br>
<br>
</div>
<div>This is exactly what I mean when I stated that it seems
academia would rather bury new ideas. Such a callous and
strong dismissal is devastating to a young student and
detrimental to the field. <br>
Someone as decorated and established as you has the
opportunity to move the field forward. Instead,
the neural network aspect of feedback during recognition
is being actively inhibited by unsubstantive and
destructive efforts.</div>
<div><br>
</div>
<div>I would be happy to work with you offline to write a
joint statement on this with all of the technical details.<br>
<br>
</div>
<div>Sincerely,<br>
-Tsvi<br>
<br>
<br>
On Mar 22, 2014 4:08 AM, "Stephen Grossberg" <<a
moz-do-not-send="true" href="mailto:steve@cns.bu.edu">steve@cns.bu.edu</a>>
wrote:<br>
><br>
> Dear Tsvi,<br>
><br>
> You mention Adaptive Resonance below and suggest that
it "requires complex signals indicating when to stop,
compare, and cycle". That is not correct.<br>
><br>
> ART uses a simple measure of pattern mismatch.
Moreover, psychological, neurophysiological, anatomical,
and ERP data support the operations that it models during
hypothesis testing and memory search. ART predicted many
of these data before they were collected.<br>
><br>
> If you would like to pursue this further, see <a
moz-do-not-send="true"
href="http://cns.bu.edu/%7Esteve/ART.pdf">http://cns.bu.edu/~steve/ART.pdf</a>
for a recent heuristic review.<br>
><br>
> Best,<br>
><br>
> Steve<br>
><br>
><br>
> On Mar 21, 2014, at 10:29 PM, Tsvi Achler wrote:<br>
><br>
> Sorry for the length of this response but I wanted to
go into some<br>
> detail here.<br>
><br>
> I see the habituation paradigm as somewhat analogous
to surprise and<br>
> measurement of error during recognition. I can think
of a few<br>
> mathematical Neural Network classifiers that can
generate an internal<br>
> pattern for match during recognition to calculate
this<br>
> habituation/surprise orientation. Feedforward
networks definitely<br>
> will not work because they don't recall the internal
stimulus very<br>
> well. One option is adaptive resonance (which I
assume you use), but<br>
> it cycles through the patterns one at a time and
requires complex<br>
> signals indicating when to stop, compare, and cycle.
I assume<br>
> Juyang's DN can also do something similar but I
suspect it also must<br>
> cycle since it also has lateral inhibition.
Bidirectional Associative<br>
> Memories (BAM) may also be used. Others such as
Bayes networks and<br>
> free-energy principle can be used, although they are not
as easily<br>
> translatable to neural networks.<br>
><br>
> Another option is a network like mine which does not
have lateral<br>
> connections but also generates internal patterns.
The advantage is<br>
> that it can also generate mixtures of patterns at
once, does not cycle<br>
> through individual patterns, does not require signals
associated with<br>
> cycling, and can be shown mathematically to be
analogous to<br>
> feedforward networks. The error signal it produces
can be used for an<br>
> orientation reflex or what I rather call attention.
It is essential<br>
> for recognition and planning.<br>
><br>
> I would be happy to give a talk on this and
collaborate on a rigorous<br>
> comparison. Indeed it is important to look at models
other than those<br>
> using feedforward connections during recognition.<br>
><br>
> Sincerely,<br>
><br>
> -Tsvi<br>
><br>
><br>
><br>
><br>
> On Mar 21, 2014 5:25 AM, "Kelley, Troy D CIV (US)"<br>
> <<a moz-do-not-send="true"
href="mailto:troy.d.kelley6.civ@mail.mil">troy.d.kelley6.civ@mail.mil</a>>
wrote:<br>
><br>
><br>
> Classification: UNCLASSIFIED<br>
><br>
> Caveats: NONE<br>
><br>
><br>
> Yes, Mark, I would argue that habituation is
anticipatory prediction. The<br>
><br>
> neuron creates a model of the incoming stimulus and
the neuron is<br>
><br>
> essentially predicting that the next stimulus will be
comparatively similar<br>
><br>
> to the previous stimulus. If this prediction is met,
the neuron habituates.<br>
><br>
> That is a simple, low-level predictive model.<br>
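The habituation scheme described above can be sketched as a unit that keeps a running model of the stimulus and responds only to prediction mismatch (a toy model with illustrative parameters and update rule, not Sokolov's actual 1963 algorithm):

```python
import numpy as np

class HabituatingUnit:
    """Toy Sokolov-style habituation: respond to novelty, habituate
    when the internal model predicts the stimulus well."""
    def __init__(self, n, lr=0.5):
        self.model = np.zeros(n)     # internal model of the expected stimulus
        self.lr = lr                 # how fast the model tracks the input

    def respond(self, stimulus):
        error = np.linalg.norm(stimulus - self.model)    # prediction mismatch
        self.model += self.lr * (stimulus - self.model)  # update the model
        return error                 # large -> novel; near zero -> habituated

unit = HabituatingUnit(3)
a = np.array([1.0, 0.0, 0.0])
r1 = unit.respond(a)    # first exposure: strong response
for _ in range(10):
    unit.respond(a)     # repeated exposure: response habituates
r2 = unit.respond(a)    # now near zero
b = np.array([0.0, 1.0, 0.0])
r3 = unit.respond(b)    # new stimulus: the original response recovers
```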
><br>
><br>
> -----Original Message-----<br>
><br>
> From: Mark H. Bickhard [mailto:<a
moz-do-not-send="true" href="mailto:mhb0@Lehigh.EDU">mhb0@Lehigh.EDU</a>]<br>
><br>
> Sent: Thursday, March 20, 2014 5:28 PM<br>
><br>
> To: Kelley, Troy D CIV (US)<br>
><br>
> Cc: Tsvi Achler; Andras Lorincz; <a
moz-do-not-send="true" href="mailto:bower@uthscsa.edu">bower@uthscsa.edu</a>;<br>
><br>
> <a moz-do-not-send="true"
href="mailto:connectionists@mailman.srv.cs.cmu.edu">connectionists@mailman.srv.cs.cmu.edu</a><br>
><br>
> Subject: Re: Connectionists: how the brain works?<br>
><br>
><br>
> I would agree with the importance of Sokolov
habituation, but there is more<br>
><br>
> than one way to understand and generalize from this
phenomenon:<br>
><br>
><br>
> <a moz-do-not-send="true"
href="http://www.lehigh.edu/%7Emhb0/AnticipatoryBrain20Aug13.pdf">http://www.lehigh.edu/~mhb0/AnticipatoryBrain20Aug13.pdf</a><br>
><br>
><br>
> Mark H. Bickhard<br>
><br>
> Lehigh University<br>
><br>
> 17 Memorial Drive East<br>
><br>
> Bethlehem, PA 18015<br>
><br>
> <a moz-do-not-send="true"
href="mailto:mark@bickhard.name">mark@bickhard.name</a><br>
><br>
> <a moz-do-not-send="true" href="http://bickhard.ws/">http://bickhard.ws/</a><br>
><br>
><br>
> On Mar 20, 2014, at 4:41 PM, Kelley, Troy D CIV (US)
wrote:<br>
><br>
><br>
> We have found that the habituation algorithm that
Sokolov discovered way<br>
><br>
> back in 1963 provides a useful place to start if one
is trying to determine<br>
><br>
> how the brain works. The algorithm, at the cellular
level, is capable of<br>
><br>
> determining novelty and generating implicit
predictions - which it then<br>
><br>
> habituates to. Additionally, it is capable of
regenerating the original<br>
><br>
> response when re-exposed to the same stimuli. All of
these behaviors<br>
><br>
> provide an excellent framework at the cellular level
for explaining all sorts<br>
><br>
> of high level behaviors at the functional level. And
it fits the Ockham's<br>
><br>
> razor principle of using a single algorithm to
explain a wide variety of<br>
><br>
> explicit behavior.<br>
><br>
><br>
> Troy D. Kelley<br>
><br>
> RDRL-HRS-E<br>
><br>
> Cognitive Robotics and Modeling Team Leader Human
Research and Engineering<br>
><br>
> Directorate U.S. Army Research Laboratory Aberdeen,
MD 21005 Phone<br>
><br>
> 410-278-5869 or 410-278-6748 Note my new email
address:<br>
><br>
> <a moz-do-not-send="true"
href="mailto:troy.d.kelley6.civ@mail.mil">troy.d.kelley6.civ@mail.mil</a><br>
><br>
><br>
><br>
><br>
><br>
><br>
> On 3/20/14 10:41 AM, "Tsvi Achler" <<a
moz-do-not-send="true" href="mailto:achler@gmail.com">achler@gmail.com</a>>
wrote:<br>
><br>
><br>
> I think an Ockham's razor principle can be used to
find the<br>
><br>
> optimal algorithm if it is interpreted to mean the
model with the<br>
><br>
> least amount of free parameters that captures the
most phenomena.<br>
><br>
> <a moz-do-not-send="true"
href="http://reason.cs.uiuc.edu/tsvi/Evaluating_Flexibility_of_Recognition.p">http://reason.cs.uiuc.edu/tsvi/Evaluating_Flexibility_of_Recognition.p</a><br>
><br>
> df<br>
><br>
> -Tsvi<br>
><br>
><br>
> On Wed, Mar 19, 2014 at 10:37 PM, Andras Lorincz <<a
moz-do-not-send="true" href="mailto:lorincz@inf.elte.hu">lorincz@inf.elte.hu</a>><br>
><br>
> wrote:<br>
><br>
> Ockham works here via compressing both the algorithm
and the structure.<br>
><br>
> Compressing the structure to stem cells means that
the algorithm<br>
><br>
> should describe the development, the working, and the
time dependent<br>
><br>
> structure of the brain. Not compressing the
description of the<br>
><br>
> structure of the evolved brain is a different problem
since it saves<br>
><br>
> the need for the description of the development, but
not the working.<br>
><br>
> Understanding the structure and the working of one
part of the brain<br>
><br>
> requires the description of its communication that
increases the<br>
><br>
> complexity of the description. By the way, this holds
for the whole<br>
><br>
> brain, so we might have to include the body at least;
a structural<br>
><br>
> minimist may wish to start from the genetic code, use
that hint and<br>
><br>
> unfold the already compressed description. There are
(many and<br>
><br>
> different) todos 'outside' ...<br>
><br>
><br>
><br>
> Andras<br>
><br>
><br>
><br>
><br>
><br>
> .<br>
><br>
><br>
> ________________________________<br>
><br>
> From: Connectionists <<a moz-do-not-send="true"
href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu">connectionists-bounces@mailman.srv.cs.cmu.edu</a>><br>
><br>
> on behalf of james bower <<a
moz-do-not-send="true" href="mailto:bower@uthscsa.edu">bower@uthscsa.edu</a>><br>
><br>
> Sent: Thursday, March 20, 2014 3:33 AM<br>
><br>
><br>
> To: Geoffrey Goodhill<br>
><br>
> Cc: <a moz-do-not-send="true"
href="mailto:connectionists@mailman.srv.cs.cmu.edu">connectionists@mailman.srv.cs.cmu.edu</a><br>
><br>
> Subject: Re: Connectionists: how the brain works?<br>
><br>
><br>
> Geoffrey,<br>
><br>
><br>
> Nice addition to the discussion actually introducing
an interesting<br>
><br>
> angle on the question of brain organization (see
below). As you note,<br>
><br>
> reaction diffusion mechanisms and modeling have been
quite successful<br>
><br>
> in replicating patterns seen in biology - especially
interesting I<br>
><br>
> think is the modeling of patterns in slime molds, but
also for very<br>
><br>
> general pattern formation in embryology. However,
more and more<br>
><br>
> detailed analysis of what is diffusing, what is
sensing what is<br>
><br>
> diffusing, and what is reacting to substances once
sensed -- all<br>
><br>
> linked to complex patterns of gene regulation and
expression has<br>
><br>
> made it clear that actual embryological development
is much much more<br>
><br>
> complex, as Turing himself clearly anticipated, as
the quote you cite pretty<br>
><br>
> clearly indicates. Clearly a smart guy. But, I
don't actually think<br>
><br>
> that<br>
><br>
> this is an application of Ockham's razor although it
might appear to<br>
><br>
> be after the fact. Just as Hodgkin and Huxley were
not applying it<br>
><br>
> either in<br>
><br>
> their model of the action potential. Turing
apparently guessed (based<br>
><br>
> on a<br>
><br>
> lot of work at the time on pattern formation with
reaction diffusion)<br>
><br>
> that such a mechanism might provide the natural basis
for what<br>
><br>
> embryos do. Thus, just like for Hodgkin and Huxley,
his model<br>
><br>
> resulted from a bio-physical insight, not an explicit
attempt to<br>
><br>
> build a stripped down model for its own sake. I
seriously doubt<br>
><br>
> that Turing would have claimed that he, or his
models could more<br>
><br>
> effectively do what biology actually does in forming
an embryo, or<br>
><br>
> substitute for the actual process.<br>
><br>
><br>
> However, I think there is another interesting
connection here to the<br>
><br>
> discussion on modeling the brain. Almost certainly
communication and<br>
><br>
> organizational systems in early living beings were
reaction diffusion<br>
><br>
> based.<br>
><br>
> This is still a dominant effect for many 'sensing' in
small organisms.<br>
><br>
> Perhaps, therefore, one can look at nervous systems
as structures<br>
><br>
> specifically developed to supersede reaction
diffusion mechanisms,<br>
><br>
> thus superseding this very 'natural' but complexity
limited type of<br>
><br>
> communication and organization. What this means, I
believe, is that<br>
><br>
> a simplified or abstracted physical or mathematical
model of the<br>
><br>
> brain explicitly violates the evolutionary pressures
responsible for<br>
><br>
> its structure. It's where the wires go, what the
wires do, and what<br>
><br>
> the receiving neuron does with the information that
forms the basis<br>
><br>
> for neural computation, multiplied by a very large
number. And that<br>
><br>
> is dependent on the actual physical structure of
those elements.<br>
><br>
><br>
> One more point about smart guys, as a young
computational<br>
><br>
> neurobiologist I questioned how insightful John von
Neumann actually<br>
><br>
> was because I was constantly hearing about a lecture
he wrote (but<br>
><br>
> didn't give) at Yale suggesting that dendrites and
neurons might be<br>
><br>
> digital ( John von Neumann's The Computer and the
Brain. (New<br>
><br>
> Haven/London: Yale University Press, 1958.) Very
clearly a not very<br>
><br>
> insightful idea for a supposedly smart guy. It
wasn't until a few<br>
><br>
> years later, when I actually read the lecture - that
I found out that<br>
><br>
> he ends by stating that this idea is almost certainly
wrong, given<br>
><br>
> the likely nonlinearities in neuronal dendrites. So
von Neumann<br>
><br>
> didn't lack insight; the people who quoted him did.
It is a<br>
><br>
> remarkable fact that more than 60 years later, the
majority of models of<br>
><br>
> so called neurons built by engineers AND
neurobiologists don't consider<br>
><br>
> these nonlinearities.<br>
><br>
> The point being the same point, to the Hopfield,
Mead, Feynman list,<br>
><br>
> we can now add Turing and von Neumann as suspecting
that for<br>
><br>
> understanding, biology and the nervous system must be
dealt with in their<br>
><br>
> full complexity.<br>
><br>
><br>
> But thanks for the example from Turing - always nice
to consider actual<br>
><br>
> examples. :-)<br>
><br>
><br>
> Jim<br>
><br>
><br>
><br>
><br>
><br>
><br>
> On Mar 19, 2014, at 8:30 PM, Geoffrey Goodhill <<a
moz-do-not-send="true"
href="mailto:g.goodhill@uq.edu.au">g.goodhill@uq.edu.au</a>><br>
><br>
> wrote:<br>
><br>
><br>
> Hi All,<br>
><br>
><br>
> A great example of successful Ockham-inspired biology
is Alan<br>
><br>
> Turing's model for pattern formation (spots, stripes
etc) in<br>
><br>
> embryology (The chemical basis of morphogenesis, Phil
Trans Roy Soc,<br>
><br>
> 1953). Turing introduced a physical mechanism for how
inhomogeneous<br>
><br>
> spatial patterns can arise in a biological system
from a spatially<br>
><br>
> homogeneous starting point, based on the diffusion
of morphogens. The<br>
><br>
> paper begins:<br>
><br>
><br>
> "In this section a mathematical model of the growing
embryo will be<br>
><br>
> described. This model will be a simplification and an
idealization,<br>
><br>
> and consequently a falsification. It is to be hoped
that the features<br>
><br>
> retained for discussion are those of greatest
importance in the<br>
><br>
> present state of knowledge."<br>
><br>
><br>
> The paper remained virtually uncited for its first 20
years following<br>
><br>
> publication, but since then has amassed 8000
citations (Google<br>
><br>
> Scholar). The subsequent discovery of huge quantities
of molecular<br>
><br>
> detail in biological pattern formation have only
reinforced the<br>
><br>
> importance of this relatively simple model, not
because it explains<br>
><br>
> every system, but because the overarching concepts it
introduced have<br>
><br>
> proved to be so fertile.<br>
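The diffusion-driven mechanism Turing described can be sketched with a minimal 1-D two-morphogen simulation (this uses Gray-Scott-style reaction terms and commonly quoted illustrative parameter values as a stand-in, not Turing's 1953 equations):

```python
import numpy as np

# Two morphogens U and V diffuse at different rates and react locally.
# Starting from a (nearly) homogeneous state plus a tiny perturbation,
# the coupled dynamics can break spatial symmetry.
n, Du, Dv, F, k, dt = 200, 0.16, 0.08, 0.035, 0.060, 1.0
U = np.ones(n)
V = np.zeros(n)
V[95:105] = 0.25        # small local perturbation of the uniform state

def lap(a):
    # 1-D Laplacian with periodic boundaries
    return np.roll(a, 1) - 2 * a + np.roll(a, -1)

for _ in range(3000):
    uvv = U * V * V
    U += dt * (Du * lap(U) - uvv + F * (1 - U))
    V += dt * (Dv * lap(V) + uvv - (F + k) * V)

# Whether the perturbation grows into a sustained spatial pattern or
# decays back to homogeneity depends on the feed/kill rates F and k.
```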
><br>
><br>
> Cheers,<br>
><br>
><br>
> Geoff<br>
><br>
><br>
><br>
> On Mar 20, 2014, at 6:27 AM, Michael Arbib wrote:<br>
><br>
><br>
> Ignoring the gross differences in circuitry between
hippocampus and<br>
><br>
> cerebellum, etc., is not erring on the side of
simplicity, it is<br>
><br>
> erring, period. Have you actually looked at a<br>
><br>
> Cajal/Szentágothai-style drawing of their circuitry?<br>
><br>
><br>
> At 01:07 PM 3/19/2014, Brian J Mingus wrote:<br>
><br>
><br>
> Hi Jim,<br>
><br>
><br>
> Focusing too much on the details is risky in and of
itself. Optimal<br>
><br>
> compression requires a balance, and we can't compute
what that<br>
><br>
> balance is (all models are wrong). One thing we can
say for sure is<br>
><br>
> that we should err on the side of simplicity, and
adding detail to<br>
><br>
> theories before simpler explanations have failed is
not Ockham's<br>
><br>
> heuristic. That said it's still in the space of a Big
Data fuzzy<br>
><br>
> science approach, where we throw as much data from as
many levels of<br>
><br>
> analysis as we can come up with into a big pot and
then construct a<br>
><br>
> theory. The thing to keep in mind is that when we
start pruning this<br>
><br>
> model most of the details are going to disappear,
because almost all<br>
><br>
> of them are irrelevant. Indeed, the size of the
description that<br>
><br>
> includes all the details is almost infinite, whereas
the length of<br>
><br>
> the description that explains almost all the variance
is extremely<br>
><br>
> short, especially in comparison. This is why Ockham's
razor is a good<br>
><br>
> heuristic. It helps prevent us from wasting time on
unnecessary<br>
><br>
> details by suggesting that we only inquire as to the
details once our<br>
><br>
> existing simpler theory has failed to work.<br>
><br>
><br>
> On 3/14/14 3:40 PM, Michael Arbib wrote:<br>
><br>
><br>
> At 11:17 AM 3/14/2014, Juyang Weng wrote:<br>
><br>
><br>
> The brain uses a single architecture to do all brain
functions we are<br>
><br>
> aware of! It uses the same architecture to do
vision, audition,<br>
><br>
> motor, reasoning, decision making, motivation
(including pain<br>
><br>
> avoidance and pleasure seeking, novelty seeking,
higher emotion, etc.).<br>
><br>
><br>
><br>
> Gosh -- and I thought cerebral cortex, hippocampus
and cerebellum<br>
><br>
> were very different from each other.<br>
><br>
><br>
><br>
><br>
><br>
><br>
> Classification: UNCLASSIFIED<br>
><br>
> Caveats: NONE<br>
><br>
><br>
><br>
><br>
> Stephen Grossberg<br>
> Wang Professor of Cognitive and Neural Systems<br>
> Professor of Mathematics, Psychology, and Biomedical
Engineering<br>
> Director, Center for Adaptive Systems <a
moz-do-not-send="true"
href="http://www.cns.bu.edu/about/cas.html">http://www.cns.bu.edu/about/cas.html</a><br>
> <a moz-do-not-send="true"
href="http://cns.bu.edu/%7Esteve">http://cns.bu.edu/~steve</a><br>
> <a moz-do-not-send="true" href="mailto:steve@bu.edu">steve@bu.edu</a><br>
><br>
><br>
><br>
></div>
</div>
</blockquote>
</div>
<br>
<div>
<div>Stephen Grossberg</div>
<div>Wang Professor of Cognitive and Neural Systems</div>
<div>Professor of Mathematics, Psychology, and Biomedical
Engineering</div>
<div>Director, Center for Adaptive Systems <a
moz-do-not-send="true"
href="http://www.cns.bu.edu/about/cas.html">http://www.cns.bu.edu/about/cas.html</a></div>
<div><a moz-do-not-send="true"
href="http://cns.bu.edu/%7Esteve">http://cns.bu.edu/~steve</a></div>
<div><a moz-do-not-send="true"
href="mailto:steve@bu.edu">steve@bu.edu</a></div>
</div>
<br>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: <a class="moz-txt-link-abbreviated" href="mailto:weng@cse.msu.edu">weng@cse.msu.edu</a>
URL: <a class="moz-txt-link-freetext" href="http://www.cse.msu.edu/~weng/">http://www.cse.msu.edu/~weng/</a>
----------------------------------------------
</pre>
</body>
</html>