Connectionists: New paper on why modules evolve, and how to evolve modular neural networks

Juyang Weng weng at cse.msu.edu
Sat Feb 23 16:54:12 EST 2013


Modular or not modular: this by itself affects whether the system can 
reassign resources to other sensing or effector modalities when one such 
modality is lost (e.g., through amputation).  Existing neuroscience 
experiments have demonstrated that the brain does this well.

However, setting aside the notion of modules altogether liberates us to 
understand how the brain develops a wide array of amazing capabilities.

For example, how does the brain perform general-purpose vision and 
generate vision-enabled behaviors?

Without recognition, detection is difficult.
Without detection, recognition is difficult.

Recognition and detection therefore form a chicken-and-egg problem.

By detection, we mean finding an object of interest against an unknown, 
cluttered natural background.   By recognition, we mean identifying what 
the object is once top-down attention has roughly determined its location 
and scale.  This has been an open chicken-and-egg problem in computer 
vision and pattern recognition for over 50 years.

Our Developmental Networks (DN) have shown how a single network solves 
this chicken-and-egg problem.

Interestingly, how the brain solves this chicken-and-egg problem is 
directly related to how the brain solves many other problems, such as 
top-down attention, spatiotemporal event recognition, language 
acquisition and language understanding.

Best regards,

-John

On 2/22/13 10:20 PM, Levine, Daniel S wrote:
> Dear Steve, John, Richard et al.,
> I am reminded of a recent IJCNN when I heard Ali Minai describe a 
> neural architecture for idea generation as "modular."  Then about a 
> day later I heard Steve describe another architecture for something 
> else as "not modular."  But both were describing a network composed of 
> subnetworks with distinct functions, subnetworks that were not 
> independent of one another but mutually interacting and strongly 
> influencing one another.  The same is true of all of my own model 
> networks.  In other words, Ali's "modularity" and Steve's 
> "non-modularity" were essentially describing the same concept!  Since 
> then I have strenuously avoided use of the term "modular" as too 
> ambiguous.
> Best,
> Dan Levine
> ------------------------------------------------------------------------
> *From:* connectionists-bounces at mailman.srv.cs.cmu.edu 
> [connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang 
> Weng [weng at cse.msu.edu]
> *Sent:* Friday, February 22, 2013 8:03 PM
> *To:* Stephen Grossberg
> *Cc:* steve Grossberg; connectionists
> *Subject:* Re: Connectionists: New paper on why modules evolve, and 
> how to evolve modular neural networks
>
> Dear Richard, Steve and other connectionist colleagues,
>
> Many researchers have said that neuroscience today is rich in data and 
> poor in theory.  I agree.   Unlike other organs in the body, the brain is 
> basically a signal processor.  Therefore, it should have an 
> overarching theory that is well explained in mathematics.
>
> However, I am probably in the minority in holding the following position.
> After coming up with an overarching theory of the brain, I started to 
> disbelieve the modular view of the brain.  A modular view of the brain 
> is like categorizing plants based on their apparent look instead of 
> their genes.
>
> The apparent Brodmann areas in the brain should be largely due to the 
> body's organs (eyes, ears, skin, muscles, glands, etc.).   The 
> reassignment of visual areas to other sensing modalities in the brain 
> of a blind person seems to support this theoretical view, since my 
> theory explains why and how this reassignment takes place.   If my 
> theory is correct, neuroscience textbooks will need to be written very 
> differently in the future.  Until then, few will care to pay attention 
> to this theory.
>
> Humbly,
>
> -Juyang Weng
> Juyang (John) Weng, Professor
> Department of Computer Science and Engineering
> MSU Cognitive Science Program and MSU Neuroscience Program
> 3115 Engineering Building
> Michigan State University
> East Lansing, MI 48824 USA
> Tel: 517-353-4388
> Fax: 517-432-1061
> Email: weng at cse.msu.edu
> URL: http://www.cse.msu.edu/~weng/
> On 2/22/13 8:02 PM, Stephen Grossberg wrote:
>> Dear Richard and other Connectionist colleagues,
>>
>> I think that it is important to clarify how the word "module" is 
>> being used. Many people think of modules as implying /independent/ 
>> modules that should be able to fully compute their particular 
>> processes on their own. However, much behavioral and neurobiological 
>> data argue against this possibility. The brain’s organization into 
>> distinct anatomical areas and processing streams shows that brain 
>> processing is indeed specialized. However, specialization does not 
>> imply the kind of independence that modularity is often taken to 
>> imply. Then what is the nature of this specialization?
>>
>> /Complementary Computing/ concerns the proposal that pairs of 
>> parallel cortical processing streams compute complementary properties 
>> in the brain. Each stream has complementary computational strengths 
>> and weaknesses, much as in physical principles like the Heisenberg 
>> Uncertainty Principle. Each cortical stream can also possess multiple 
>> processing stages. These stages realize a /hierarchical resolution of 
>> uncertainty/. "Uncertainty" here means that computing one set of 
>> properties at a given stage prevents computation of a complementary 
>> set of properties at that stage. Complementary Computing proposes 
>> that the computational unit of brain processing that has behavioral 
>> significance consists of parallel interactions between complementary 
>> cortical processing streams with multiple processing stages to 
>> compute complete information about a particular type of biological 
>> intelligence. It has been suggested that such complementary 
>> processing streams may arise from a hierarchical multi-scale process 
>> of morphogenetic symmetry-breaking.
>>
>> The concept of Complementary Computing arose as it gradually became 
>> clear, as a result of decades of behavioral and neural modeling, that 
>> essentially all biological neural models exhibit such complementary 
>> processes. Articles that provide examples of Complementary Computing 
>> can be found on my web page http://cns.bu.edu/~steve. They include:
>>
>> Grossberg, S. (2000). The complementary brain: Unifying brain 
>> dynamics and modularity. /Trends in Cognitive Sciences, /*4,* 233-246.
>>
>> Grossberg, S. (2012). Adaptive Resonance Theory: How a brain learns 
>> to consciously attend, learn, and recognize a changing world. /Neural 
>> Networks, /*37*, 1-47.
>>
>> About minimum wire length: It's important to keep in mind the work of van 
>> Essen (1997, Nature, 385, 313-318) concerning his tension-based 
>> theory of morphogenesis and compact wiring, which clarifies how folds 
>> in the cerebral cortex may develop and make connections more compact; 
>> i.e., shorter.
>>
>> A possible role of tension in other developmental processes, such as 
>> in the formation during morphogenesis of a gastrula from a blastula, 
>> illustrates that such a mechanism may be used in biological systems 
>> other than brains. The article below describes such a process 
>> heuristically, also on my web page:
>>
>> Grossberg, S. (1978). Communication, Memory, and Development. In R. 
>> Rosen and F. Snell (Eds.), *Progress in theoretical biology, Volume 
>> 5.* New York: Academic Press, pp. 183-232. See Sections XIV - XVI.
>>
>> About cortical columns: They are important, but no more important 
>> than the long-range horizontal interactions among columns that are 
>> ubiquitous in the cerebral cortex. Indeed, understanding how 
>> bottom-up, horizontal, and top-down interactions interact in 
>> neocortex has led to the paradigm of Laminar Computing, which 
>> attempts to clarify how specializations of this shared laminar design 
>> embody different types of biological intelligence, including vision, 
>> speech and language, and cognition. On my web page, articles with 
>> colleagues like Cao (2005), Raizada (2000, 2001), and Yazdanbakhsh 
>> (2005) for vision, Pearson (2008) for cognitive working memory and 
>> list chunking, and Kazerounian (2011) for speech perception 
>> illustrate this theme.
>>
>> Laminar Computing has begun to explain how the laminar design of 
>> neocortex may realize the best properties of feedforward and feedback 
>> processing, digital and analog processing, and bottom-up data-driven 
>> processing and top-down attentive hypothesis-driven processing. 
>> Embodying such designs into VLSI chips promises to enable the 
>> development of increasingly general-purpose adaptive autonomous 
>> algorithms for multiple applications.
>>
>> The existence and critical importance of long-range horizontal 
>> connections in neocortex raises the following issue: Why is the 
>> spatial resolution of columns as fine as it is? Why does not the 
>> long-range correlation length force the columns to become spatially 
>> more diffuse than they are? The following article on my web page 
>> suggests, at least for the case of cortical area V1, how the cortical 
>> subplate may play a role in this:
>>
>> Grossberg, S. and Seitz, A. (2003). Laminar development of receptive 
>> fields, maps, and columns in visual cortex: The coordinating role of 
>> the subplate. /Cerebral Cortex/, *13*, 852-863.
>>
>> Best,
>>
>> Steve Grossberg
>>
>> Wang Professor of Cognitive and Neural Systems
>> Professor of Mathematics, Psychology, and Biomedical Engineering
>> Director, Center for Adaptive Systems 
>> http://www.cns.bu.edu/about/cas.html
>> http://cns.bu.edu/~steve
>> steve at bu.edu
>>
>>
>>
>> On Feb 22, 2013, at 4:18 PM, Richard Loosemore wrote:
>>
>>>
>>> I hate to say this, but during discussions with fellow students back 
>>> in 1987, I remember pointing out that it was not terribly surprising 
>>> that the cortex consisted of columns (i.e. modules) with dense 
>>> internal connectivity, with less-dense connections between columns 
>>> -- not surprising, because the alternative was to try to make the 
>>> brain less modular and connect every neuron in each column to all 
>>> the neurons in all the other columns, and the result would be brains 
>>> that were a million times larger than they are (due to all the extra 
>>> wiring).
>>>
>>> The same logic applies in all systems where it is costly to connect 
>>> every element to every other: the optimal connectivity is 
>>> well-connected, tightly clustered groups of elements.
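>>>
>>> To make the scaling concrete, here is a toy back-of-envelope comparison
>>> in Python (all quantities are arbitrary illustrative choices, not
>>> anatomical estimates) between all-to-all wiring and densely wired
>>> clusters joined by a few long-range links:
>>>
>>>     # Toy comparison: all-to-all wiring vs. clustered ("columnar") wiring.
>>>     # All numbers are hypothetical, chosen only to illustrate the scaling.
>>>     N = 1_000_000            # total elements
>>>     c = 1_000                # elements per cluster
>>>     k = N // c               # number of clusters
>>>     full = N * (N - 1) // 2                # one wire per pair of elements
>>>     intra = k * (c * (c - 1) // 2)         # dense wiring inside each cluster
>>>     inter = 10 * k * (k - 1) // 2          # say, ~10 wires per pair of clusters
>>>     print(full / (intra + inter))          # factor of extra wiring for full connectivity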
>>>
>>> During those discussions the point was considered so obvious that it 
>>> sparked little comment. Ever since then I have told students in my 
>>> lectures that this would be the evolutionary reason for cortical 
>>> columns to exist.
>>>
>>> So I am a little confused now. Can someone explain what I am missing 
>>> .........?
>>>
>>> Richard Loosemore
>>> Department of Physical and Mathematical Sciences,
>>> Wells College
>>>
>>>
>>>
>>> On 2/13/13 9:48 AM, Juergen Schmidhuber wrote:
>>>> The paper mentions that Santiago Ramón y Cajal already pointed out 
>>>> that evolution has created mostly short connections in animal brains.
>>>>
>>>> Minimization of connection costs should also encourage 
>>>> modularization, e.g., http://arxiv.org/abs/1210.0118 (2012).
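>>>>
>>>> For concreteness, here is a minimal sketch (purely illustrative, not the
>>>> formulation used in any of the papers cited here) of an objective that
>>>> trades task performance against total wire length; the function name,
>>>> the alpha trade-off parameter, and the data layout are all assumptions:
>>>>
>>>>     import numpy as np
>>>>
>>>>     def fitness(task_score, weights, positions, alpha=0.01):
>>>>         # task_score: scalar performance of the network on its task
>>>>         # weights:    (n, n) connection matrix; zero means "no wire"
>>>>         # positions:  (n, d) spatial coordinates of the n nodes
>>>>         # alpha:      hypothetical trade-off parameter
>>>>         dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
>>>>         wire_length = np.sum((weights != 0) * dist)  # total length of existing wires
>>>>         return task_score - alpha * wire_length      # penalize long total wiring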
>>>>
>>>> But who first had such a wire length term in an objective function 
>>>> to be minimized by evolutionary computation or other machine 
>>>> learning methods?
>>>> I am aware of pioneering work by Legenstein and Maass:
>>>>
>>>> R. A. Legenstein and W. Maass. Neural circuits for pattern 
>>>> recognition with small total wire length. Theoretical Computer 
>>>> Science, 287:239-249, 2002.
>>>> R. A. Legenstein and W. Maass. Wire length as a circuit complexity 
>>>> measure. Journal of Computer and System Sciences, 70:53-72, 2005.
>>>>
>>>> Is there any earlier relevant work? Pointers will be appreciated.
>>>>
>>>> Jürgen Schmidhuber
>>>> http://www.idsia.ch/~juergen/whatsnew.html
>>>>
>>>>
>>>>
>>>>
>>>> On Feb 10, 2013, at 3:14 AM, Jeff Clune wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> I believe that many in the neuroscience community will be 
>>>>> interested in a new paper that sheds light on why modularity 
>>>>> evolves in biological networks, including neural networks. The 
>>>>> same discovery also provides AI researchers a simple technique for 
>>>>> evolving neural networks that are modular and have increased 
>>>>> evolvability, meaning that they adapt faster to new environments.
>>>>>
>>>>> Cite: Clune J, Mouret J-B, Lipson H (2013) The evolutionary 
>>>>> origins of modularity. Proceedings of the Royal Society B. 280: 
>>>>> 20122863. http://dx.doi.org/10.1098/rspb.2012.2863 (pdf)
>>>>>
>>>>> Abstract: A central biological question is how natural organisms 
>>>>> are so evolvable (capable of quickly adapting to new 
>>>>> environments). A key driver of evolvability is the widespread 
>>>>> modularity of biological networks—their organization as 
>>>>> functional, sparsely connected subunits—but there is no consensus 
>>>>> regarding why modularity itself evolved. Although most hypotheses 
>>>>> assume indirect selection for evolvability, here we demonstrate 
>>>>> that the ubiquitous, direct selection pressure to reduce the cost 
>>>>> of connections between network nodes causes the emergence of 
>>>>> modular networks. Computational evolution experiments with 
>>>>> selection pressures to maximize network performance and minimize 
>>>>> connection costs yield networks that are significantly more 
>>>>> modular and more evolvable than control experiments that only 
>>>>> select for performance. These results will catalyse research in 
>>>>> numerous disciplines, such as neuroscience and genetics, and 
>>>>> enhance our ability to harness evolution for engineering purposes.
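>>>>>
>>>>> To make the two selection pressures concrete, here is a minimal
>>>>> Pareto-style selection sketch (not the implementation used in the
>>>>> paper; the dict-based network representation and function names are
>>>>> hypothetical), where each candidate carries a "performance" score and
>>>>> a list of connection "weights":
>>>>>
>>>>>     def connection_cost(net):
>>>>>         # here simply the number of connections; other cost measures are possible
>>>>>         return sum(1 for w in net["weights"] if w != 0)
>>>>>
>>>>>     def dominates(a, b):
>>>>>         # a dominates b if no worse on both objectives and better on at least one
>>>>>         return (a["performance"] >= b["performance"]
>>>>>                 and connection_cost(a) <= connection_cost(b)
>>>>>                 and (a["performance"] > b["performance"]
>>>>>                      or connection_cost(a) < connection_cost(b)))
>>>>>
>>>>>     def select(population):
>>>>>         # keep the non-dominated networks (a crude Pareto front)
>>>>>         return [p for p in population if not any(dominates(q, p) for q in population)]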
>>>>>
>>>>> Video: 
>>>>> http://www.youtube.com/watch?feature=player_embedded&v=SG4_aW8LMng
>>>>>
>>>>> There has been some nice coverage of this work in the popular 
>>>>> press, in case you are interested:
>>>>>
>>>>> • National Geographic: 
>>>>> http://phenomena.nationalgeographic.com/2013/01/30/the-parts-of-life/
>>>>> • MIT's Technology Review: 
>>>>> http://www.technologyreview.com/view/428504/computer-scientists-reproduce-the-evolution-of-evolvability/
>>>>> • Fast Company: 
>>>>> http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning
>>>>> • Cornell Chronicle: 
>>>>> http://www.news.cornell.edu/stories/Jan13/modNetwork.html
>>>>> • ScienceDaily: 
>>>>> http://www.sciencedaily.com/releases/2013/01/130130082300.htm
>>>>>
>>>>> I hope you enjoy the work. Please let me know if you have any 
>>>>> questions.
>>>>>
>>>>> Best regards,
>>>>> Jeff Clune
>>>>>
>>>>> Assistant Professor
>>>>> Computer Science
>>>>> University of Wyoming
>>>>> jeffclune at uwyo.edu
>>>>> jeffclune.com
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>
> -- 
> --
> Juyang (John) Weng, Professor
> Department of Computer Science and Engineering
> MSU Cognitive Science Program and MSU Neuroscience Program
> 3115 Engineering Building
> Michigan State University
> East Lansing, MI 48824 USA
> Tel: 517-353-4388
> Fax: 517-432-1061
> Email: weng at cse.msu.edu
> URL: http://www.cse.msu.edu/~weng/
> ----------------------------------------------
>

-- 
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
3115 Engineering Building
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------


