From julian at togelius.com Fri Feb 1 14:05:29 2013
From: julian at togelius.com (Julian Togelius)
Date: Fri, 1 Feb 2013 20:05:29 +0100
Subject: Connectionists: IEEE Conference on Computational Intelligence and Games 2013 - second call for papers
Message-ID:

CALL FOR PAPERS
IEEE Conference on Computational Intelligence and Games
CIG 2013
Niagara Falls, Canada, 11-13 August 2013
http://eldar.mathstat.uoguelph.ca/dashlock/cig2013/

Games are an ideal domain for the study of computational intelligence. They are not only fun to play and interesting to observe, but they also provide competitive and dynamic environments that model many real-world problems. The 2013 IEEE Conference on Computational Intelligence and Games brings together leading researchers and practitioners from academia and industry to discuss recent advances and explore future directions in this field. The financial support of the IEEE Computational Intelligence Society is pending.

PAPER SUBMISSION: 15 March 2013
ACCEPTANCE NOTIFICATION: 15 May 2013
CAMERA READY SUBMISSION: 15 June 2013
CONFERENCE START: 11 August 2013

ORGANIZING COMMITTEE:
General Chair: Dan Ashlock / University of Guelph
Technical Chairs: Julian Togelius / IT University of Copenhagen; Graham Kendall / University of Nottingham
Competitions Chair: Philip Hingston / Edith Cowan University
Finance Chair: Wendy Ashlock / York University
Proceedings Chair: Joseph Brown / University of Guelph
Tutorial Chair: Clare Bates Congdon / University of Southern Maine
General Contact: dashlock at uoguelph.ca
Contest Proposals: phi at philiphingston.com
Tutorial Proposals: congdon at gmail.com

SUBMISSIONS: Researchers are invited to submit high-quality papers on original research at the intersection of computational intelligence and games. Computational intelligence must play a strong role in any accepted paper. Accepted papers will be indexed in IEEE Xplore.

TOPICS OF INTEREST INCLUDE:
Learning in games
Coevolution in games
Neural network approaches for games
Fuzzy logic approaches for games
Player/opponent modeling in games
Computational and artificial intelligence based game design
Multi-agent and multi-strategy learning
Applications of game theory
Computational intelligence for player affective modeling
Intelligent interactive narrative
Imperfect information and non-deterministic games
Player satisfaction and experience in games
Theoretical or empirical analysis of CI techniques for games
Computational intelligence for non-player characters in games
Comparative studies and game-based benchmarking
Computational intelligence based digital design assistants
Automatic creation of modules or game levels
Computational and artificial intelligence in:
  Video games
  Board and card games
  Economic or mathematical games
  Serious games
  Augmented and mixed-reality games
  Games for mobile platforms

Want to ask if a topic is in scope? Contact: dashlock at uoguelph.ca

--
Julian Togelius
Associate Professor
IT University of Copenhagen
Rued Langgaards Vej 7, 2300 Copenhagen, Denmark
mail: julian at togelius.com, web: http://julian.togelius.com
mobile: +46-705-192088, office: +45-7218-5277

From smd501 at york.ac.uk Fri Feb 1 09:03:18 2013
From: smd501 at york.ac.uk (Sam Devlin)
Date: Fri, 1 Feb 2013 14:03:18 +0000
Subject: Connectionists: 2nd CFP ALA @ AAMAS 2013 - Paper Deadline Extended
Message-ID:

*Final Call For Papers*
*(Paper Deadline Extended)*

*Adaptive and Learning Agents Workshop 2013*
*at AAMAS 2013 (Saint Paul, Minnesota, USA)*

We apologize if you receive more than one copy.
Please share with colleagues and students.

Paper deadline: February 9, 2013

* AAMAS workshop with a long and successful history, now in its 13th edition.
* ACM proceedings format with up to 8 pages.
* Accepted papers are eligible for inclusion in a special issue journal.

*******************************************************

*1st Call for Papers*

ALA 2013: Adaptive and Learning Agents Workshop
held at AAMAS 2013 (Saint Paul, Minnesota, USA).

The ALA workshop has a long and successful history and is now in its 13th edition. The workshop is a merger of the European ALAMAS and the American ALAg series, which is usually held at AAMAS.

Details may be found on the workshop web site: http://swarmlab.unimaas.nl/ala2013/
Paper management will be done at the site: https://www.easychair.org/conferences/?conf=ala20130

*******************************************************
* Submission Deadline: February 9, 2013
* Notification of acceptance: February 27, 2013
* Camera-ready copies: March 10, 2013
* Workshop: May 6-7, 2013
*******************************************************

Adaptive and learning agents, particularly those in a multi-agent setting, are becoming more and more prominent as the sheer size and complexity of many real-world systems grows. How to adaptively control, coordinate and optimize such systems is an emerging multi-disciplinary research area at the intersection of Computer Science, Control Theory, Economics, and Biology.

The ALA workshop will focus on agent and multi-agent systems which employ learning or adaptation. The goal of this workshop is to increase awareness and interest in adaptive agent research, encourage collaboration and give a representative overview of current research in the area of adaptive and learning agents and multi-agent systems. It aims at bringing together not only scientists from different areas of computer science but also from different fields studying similar concepts (e.g., game theory, bio-inspired control, mechanism design).

This workshop will focus on all aspects of adaptive and learning agents and multi-agent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems.

The topics of interest include but are not limited to:
* Novel combinations of reinforcement and supervised learning approaches
* Integrated learning approaches that work with other agent reasoning modules like negotiation, trust models, coordination, etc.
* Supervised multi-agent learning
* Reinforcement learning (single and multi-agent)
* Planning (single and multi-agent)
* Reasoning (single and multi-agent)
* Distributed learning
* Adaptation and learning in dynamic environments
* Evolution of agents in complex environments
* Co-evolution of agents in a multi-agent setting
* Cooperative exploration and learning to cooperate and collaborate
* Learning trust and reputation
* Communication restrictions and their impact on multi-agent coordination
* Design of reward structure and fitness measures for coordination
* Scaling learning techniques to large systems of learning and adaptive agents
* Emergent behaviour in adaptive multi-agent systems
* Game theoretical analysis of adaptive multi-agent systems
* Neuro-control in multi-agent systems
* Bio-inspired multi-agent systems
* Applications of adaptive and learning agents and multi-agent systems to real-world complex systems
* Learning of co-ordination

*******************************************************

Submission Details:

Papers can be submitted through EasyChair: https://www.easychair.org/conferences/?conf=ala20130

Submissions may be up to 8 pages in the ACM proceedings format (i.e., the same as AAMAS papers in the main conference track). Accepted work will be allocated time for oral presentation during the one-day workshop. Papers accepted at the workshop will also be eligible for inclusion in a planned special issue journal published after the workshop.

*******************************************************

Organization

*Workshop chairs:*
Sam Devlin (University of York, UK)
Daniel Hennes (Maastricht University, The Netherlands)
Enda Howley (National University of Ireland, Galway, Ireland)

If you have any questions about the ALA workshop, please contact Enda Howley at: enda.howley AT nuigalway.ie

*Senior Steering Committee Members:*
Daniel Kudenko (University of York, UK)
Ann Nowé (Vrije Universiteit Brussel, Belgium)
Peter Stone (University of Texas at Austin, USA)
Matthew Taylor (Lafayette College, USA)
Kagan Tumer (Oregon State University, USA)
Karl Tuyls (Maastricht University, The Netherlands)

*******************************************************

From wermter at informatik.uni-hamburg.de Fri Feb 1 13:13:42 2013
From: wermter at informatik.uni-hamburg.de (Stefan Wermter)
Date: Fri, 01 Feb 2013 19:13:42 +0100
Subject: Connectionists: PhD Studentships in Cognitive Assistive Systems
In-Reply-To: <51092E3B.1040505@informatik.uni-hamburg.de>
References: <05AC0888479D0E4B97808401D54A00DD1080FFB1@SRV-EXDAG02> <50B4A174.1070504@informatik.uni-hamburg.de> <51028179.7070902@informatik.uni-hamburg.de> <51092E3B.1040505@informatik.uni-hamburg.de>
Message-ID: <510C05D6.8040102@informatik.uni-hamburg.de>

The DAAD-funded CASY Project invites applications for several PhD studentships in the area of Cognitive Assistive Systems. The programme "Cognitive Assistive Systems (CASY)" will contribute to the focus on a next generation of human-centred systems for human-robot collaboration. A central need of these systems is a high level of robustness, learning, and increased adaptivity, to be able to act more naturally under uncertain conditions. To address this need, research will focus on neural robotics, cognitively motivated multi-modal integration and learning human-robot interaction.
Different modalities from sensors and processed representational modalities will be combined and evaluated for their use in assistive systems based on symbolic, statistical, neural or hybrid approaches.

Application details are available at: http://www.informatik.uni-hamburg.de/WTM/projects/casy.shtml

best wishes
Stefan Wermter

***********************************************
Professor Dr. Stefan Wermter
Chair of Knowledge Technology
Department of Computer Science, WTM, Building F
University of Hamburg
Vogt Koelln Str. 30
22527 Hamburg, Germany
Email: wermter AT informatik.uni-hamburg.de
http://www.informatik.uni-hamburg.de/~wermter/
http://www.informatik.uni-hamburg.de/WTM/
***********************************************

From bjoern.ommer at iwr.uni-heidelberg.de Fri Feb 1 20:05:33 2013
From: bjoern.ommer at iwr.uni-heidelberg.de (Björn Ommer)
Date: Sat, 02 Feb 2013 02:05:33 +0100
Subject: Connectionists: PhD Position in Web-Scale Computer Vision - University of Heidelberg
Message-ID: <510C665D.5020206@iwr.uni-heidelberg.de>

PhD Position in Web-Scale Computer Vision

A major aim of Computer Vision is to enable machines to see and to recognize objects and actions in images and videos. Thus, vision research offers great potential for both fundamental research as well as exciting new applications that have the power to revolutionize the utility of computers. The University of Heidelberg invites applications for a fully funded PhD student position in Computer Vision within a recently acquired research project on web-scale data analysis. The position will be affiliated with the Heidelberg Collaboratory for Image Processing (HCI). The HCI is the largest university-based research unit in its field in Germany and offers a unique environment for fundamental research on image analysis. The HCI comprises five research groups (Garbe, Hamprecht, Jaehne, Ommer, Schnoerr) and is funded by the German Research Foundation (DFG), industrial partners, and the University. Heidelberg is an attractive, lively city, and the University and the HCI provide an inspiring research atmosphere.

The open position is situated in the research area of visual object recognition in images and video, esp. web-scale object detection. Successful candidates will have an excellent degree (Diploma, M.Sc. or equivalent) in Computer Science, Electrical Engineering, Mathematics, Physics, or a related field. They will have a strong mathematical background, solid programming experience in C++ and Matlab, and above all a strong motivation and desire to learn. Prior experience in computer vision and/or pattern recognition is desired, and fluency in English is required (both written and spoken).

Applications should be submitted by email until March 1, 2013 and include a statement of research interests, a complete curriculum vitae, academic transcripts with grades, and recommendation letters from two academic references. The screening process will already start before the deadline, so early submissions are encouraged. The starting date is negotiable.

Email: hci [dot] applications [at] iwr [dot] uni-heidelberg [dot] de

Prof. Dr. Björn Ommer
Heidelberg Collaboratory for Image Processing (HCI)
University of Heidelberg
http://hci.iwr.uni-heidelberg.de/Staff/bommer/

From m.lengyel at eng.cam.ac.uk Mon Feb 4 08:18:06 2013
From: m.lengyel at eng.cam.ac.uk (Máté Lengyel)
Date: Mon, 4 Feb 2013 13:18:06 +0000
Subject: Connectionists: Faculty position in Computational Neuroscience, Cambridge University
Message-ID: <76C674D6-54EE-4319-B875-078D042E4800@eng.cam.ac.uk>

Faculty position in Computational Neuroscience, Cambridge University

Applications are invited for a tenure-track University Lectureship (~ US Assistant Professor equivalent) in the broad area of Computational Neuroscience, including Computational Cognitive Science, in the Computational and Biological Learning Lab (cbl.eng.cam.ac.uk). CBL combines expertise in computational neuroscience and cognitive science (Daniel Wolpert, Mate Lengyel, Rich Turner) and machine learning (Zoubin Ghahramani, Carl Rasmussen). We particularly encourage applicants who would complement our current research activities. The successful applicant's research should use computational or theoretical approaches to neuroscience, and may combine these approaches with behavioural experiments.

Informal enquiries may be made to Daniel Wolpert (wolpert at eng.cam.ac.uk), Mate Lengyel (m.lengyel at eng.cam.ac.uk) or Richard Turner (ret26 at cam.ac.uk).

Timeline:
* Closing date of applications: Monday 4th March 2013
* Interviews: March 18th & 19th 2013
* Decisions: immediately after interviews

Further information and details of the application process can be found at http://cbl.eng.cam.ac.uk/Public/VacancyFaculty2013

--
Mate Lengyel, PhD
Computational and Biological Learning Lab
Cambridge University Engineering Department
Trumpington Street, Cambridge CB2 1PZ, UK
tel: +44 (0)1223 748 532, fax: +44 (0)1223 332 662
email: m.lengyel at eng.cam.ac.uk
web: www.eng.cam.ac.uk/~m.lengyel

From pelillo at dsi.unive.it Tue Feb 5 03:51:15 2013
From: pelillo at dsi.unive.it (Marcello Pelillo)
Date: Tue, 5 Feb 2013 09:51:15 +0100 (CET)
Subject: Connectionists: Call for Papers: IEEE TNNLS Special Issue on "Learning in non-(geo)metric spaces"
Message-ID:

CALL FOR PAPERS
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Special Issue on Learning in Non-(geo)metric Spaces

Traditional machine learning and pattern recognition techniques are intimately linked to the notion of feature space. Adopting this view, each object is described in terms of a vector of numerical attributes and is therefore mapped to a point in a Euclidean (geometric) vector space, so that the distances between the points reflect the observed (dis)similarities between the respective objects. This kind of representation is attractive because geometric spaces offer powerful analytical as well as computational tools that are simply not available in other representations. However, the geometric approach suffers from a major intrinsic limitation, which concerns the representational power of vectorial, feature-based descriptions. In fact, there are numerous application domains where either it is not possible to find satisfactory features or they are inefficient for learning purposes. By departing from vector-space representations, one is confronted with the challenging problem of dealing with (dis)similarities that do not necessarily possess Euclidean behavior, or that do not even obey the requirements of a metric.
The lack of the Euclidean and/or metric properties undermines the very foundations of traditional machine learning theories and algorithms, and poses totally new theoretical/computational questions and challenges that the research community is currently trying to address. The goal of the special issue is to consolidate research efforts in this area by soliciting and publishing high-quality papers which, together, will present a clear picture of the state of the art.

SCOPE OF THE SPECIAL ISSUE

We will encourage submissions of papers addressing theoretical, algorithmic, and practical issues related to the two fundamental questions that arise when abandoning the realm of vectorial, feature-based representations, namely:
- how can one obtain suitable similarity information from data representations that are more powerful than, or simply different from, the vectorial?
- how can one use similarity information in order to perform learning and classification tasks?

Accordingly, topics of interest include (but are not limited to):
- Embedding and embeddability
- Graph spectra and spectral geometry
- Indefinite and structural kernels
- Game-theoretic models of pattern recognition and learning
- Characterization of non-(geo)metric behavior
- Foundational issues
- Measures of (geo)metric violations
- Learning and combining similarities
- Multiple-instance learning
- Applications

We aim at covering a wide range of problems and perspectives, from supervised to unsupervised learning, from generative to discriminative models, and from theoretical issues to real-world applications.

IMPORTANT DATES
October 1, 2013: Deadline for manuscript submission
April 1, 2014: Notification to authors
July 1, 2014: Deadline for submission of revised manuscripts
October 1, 2014: Final decision

GUEST EDITORS
Marcello Pelillo, Ca' Foscari University, Venice, Italy (pelillo at dsi.unive.it)
Edwin Hancock, University of York, UK (edwin.hancock at york.ac.uk)
Xuelong Li, Chinese Academy of Sciences, China (xuelong_li at ieee.org)
Vittorio Murino, Italian Institute of Technology, Italy (vittorio.murino at univr.it)

SUBMISSION INSTRUCTIONS
1. Read the information for authors at: http://cis.ieee.org/publications.html
2. Submit the manuscript by October 1, 2013 at the IEEE-TNNLS webpage (http://mc.manuscriptcentral.com/tnnls) and follow the submission procedure. Please clearly indicate on the first page of the manuscript and in the author's cover letter that the manuscript has been submitted to the Special Issue on Learning in Non-(geo)metric Spaces. Send also an e-mail to the guest editors to notify them of your submission.
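To make the problem concrete, here is a toy Python sketch (our illustration, not part of the call): a symmetric, zero-diagonal dissimilarity can still violate the triangle inequality, so no metric (let alone Euclidean) embedding can reproduce it exactly.

    import numpy as np

    # Pairwise dissimilarities between three objects a, b, c:
    # symmetric and zero on the diagonal, but non-metric.
    D = np.array([[0.0, 1.0, 5.0],
                  [1.0, 0.0, 1.0],
                  [5.0, 1.0, 0.0]])

    # Triangle inequality fails: D(a,c) > D(a,b) + D(b,c), i.e. 5 > 1 + 1,
    # so these values admit no exact metric (hence no Euclidean) embedding.
    assert D[0, 2] > D[0, 1] + D[1, 2]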
---
Prof. Marcello Pelillo, FIEEE, FIAPR
Professor of Computer Science
Computer Vision and Pattern Recognition Lab, Director
Center for Knowledge, Interaction and Intelligent Systems (KIIS), Director
DAIS, Ca' Foscari University, Venice
Via Torino 155, 30172 Venezia Mestre, Italy
Tel: (39) 041 2348.440
Fax: (39) 041 2348.419
E-mail: marcello.pelillo at gmail.com
URL: http://www.dsi.unive.it/~pelillo

From brefeld at kma.informatik.tu-darmstadt.de Wed Feb 6 11:37:09 2013
From: brefeld at kma.informatik.tu-darmstadt.de (Ulf Brefeld)
Date: Wed, 6 Feb 2013 16:37:09 +0000
Subject: Connectionists: Doctoral Scholarships in KD / ML
Message-ID:

Doctoral scholarships in knowledge discovery / machine learning

In close co-operation with the German Institute for International Educational Research and Educational Information (DIPF) in Frankfurt am Main, a member of the Leibniz Association, the Technical University of Darmstadt is offering Doctoral Scholarships in knowledge discovery / machine learning within the newly established PhD program "Knowledge Discovery in Scientific Publications" [1]. The regular maximum duration of funding is 36 months. Scholarships are granted for completing a doctoral thesis in computer science with a strong focus on machine learning. The research is applied to the domain of educational research literature. To this end, DIPF offers excellent opportunities for a close co-operation with subsequent users. Successful candidates will be granted 1,400 Euros per month. The program will be located at DIPF in Frankfurt (Main). The successful candidates are expected to work on the projects "Personalized Content Acquisition from Heterogeneous Sources" (Prof. Brefeld) or "Preference-Based Profiling of Scientific Publications" (Prof. Fürnkranz).

The PhD program brings together the disciplines of "Knowledge Engineering", "Algorithmics", "Language Technology", "Ubiquitous Knowledge Processing", "Knowledge Mining and Assessment", and "Information Management". The concept for supervision strongly relies on close contacts between postgraduate students and their supervisors, regular joint meetings, co-supervision by professors and senior researchers from the above disciplines, and a lively exchange in the research and qualification program. Furthermore, the program strives to publish research findings at leading scientific conferences and to make its software freely accessible as an open source product.

Excellently qualified graduates from computer science are invited to apply. Successful candidates are expected to possess very good programming skills in Java, to work independently, and to demonstrate personal commitment, team and communication skills, as well as a readiness to cooperate with others. Research experience, in particular in machine learning, is a plus. Women are expressly invited to submit their application. According to the pursuant legal requirements, applicants with disabilities will be preferably treated in the appointment procedure. Candidates from abroad are encouraged to apply.

The Department of Computer Science at TU Darmstadt regularly ranks among the top in Germany. Among its distinguishing features is its research initiative "Knowledge Discovery on the Web", focusing on powerful language technology procedures, text mining, machine learning and scalable infrastructures for assessing and aggregating knowledge. As a scientific institute belonging to the Leibniz Association, the DIPF targets top-class basic research as well as innovative scientific services.
Education is addressed as an area with high visibility and significance. The DIPF is currently establishing a research priority domain for educational information science by joining competencies with computer scientists at TU Darmstadt. In this context, the doctoral program will constitute a central element.

Please submit your application by February 28, 2013. Applications should include a letter of motivation related to the research program [1] and its corresponding projects [2] and [3], a CV and details regarding previous scientific work, certifications of studies and work, including the graduate thesis and possibly electronic publications. Applications should be sent to Prof. Dr. Iryna Gurevych and Prof. Dr. Marc Rittberger, e-mail: phd-application at ukp.informatik.tu-darmstadt.de.

[1] http://www.kdsl.tu-darmstadt.de
[2] http://www.kdsl.tu-darmstadt.de/de/home/research-program/personalized-content-acquisition-from-heterogeneous-sources/
[3] http://www.kdsl.tu-darmstadt.de/de/home/research-program/preference-based-profiling/

From cookie at ucsd.edu Wed Feb 6 18:19:39 2013
From: cookie at ucsd.edu (Santamaria, Cookie)
Date: Wed, 6 Feb 2013 23:19:39 +0000
Subject: Connectionists: Postdoctoral Research Opportunity in the Dynamics of Multifunction Brain Networks
Message-ID:

The BioCircuits Institute (BCI) and Departments of Physics, Chemistry/Biochemistry, Bioengineering, and Psychology at the University of California, San Diego invite applications for several postdoctoral positions conducting research at the interface between dynamical systems neuroscience, statistical physics, and neuromorphic engineering. The aims of the research program, sponsored by the Office of Naval Research, are to conduct basic research on how collective action in the brain learns, modulates and produces coherent functional neural activity for coordinated behavior of complex systems, and to engineer a neuromorphic cognitive agent applying these principles to real-time decision making in real-world environments. The ideal candidate will demonstrate the experience necessary to work closely with at least two of the participating laboratories at UCSD (Abarbanel, Cauwenberghs, Gentner, Lindenberg, Rabinovich, Sejnowski) and interact at all of the interfaces of the disciplines represented by the researchers in our groups. Candidates with demonstrated expertise in electrophysiology in awake behaving animals, computational and theoretical neuroscience, nonlinear dynamical systems, thermodynamics, fluctuations, and neuromorphic engineering and analog VLSI circuits and systems are strongly encouraged to apply.

We will accept applications immediately and will begin making selections on May 1, 2013, until all positions are filled. Appointments are for two years (in one-year increments) with the possibility of a third. Send your statement of qualifications and interest with curriculum vitae, your two most significant publications and three letters of reference to Henry Abarbanel, Gert Cauwenberghs, Tim Gentner, Katja Lindenberg, Mikhail Rabinovich, and Terrence Sejnowski via email to: postdoc.muri at gmail.com. A Ph.D. or equivalent doctoral degree is required prior to the appointment. UCSD is an EO/AA employer.
From erik at tnb.ua.ac.be Wed Feb 6 21:10:18 2013
From: erik at tnb.ua.ac.be (Erik De Schutter)
Date: Thu, 7 Feb 2013 11:10:18 +0900
Subject: Connectionists: Okinawa Computational Neuroscience Course 2013: Application deadline this Sunday
Message-ID: <54D3F3F0-6A43-422D-B1CD-DC7FFEB86D2A@tnb.ua.ac.be>

OKINAWA COMPUTATIONAL NEUROSCIENCE COURSE 2013
Methods, Neurons, Networks and Behaviors
June 17 - July 4, 2013. OIST, Okinawa, Japan
new website: https://groups.oist.jp/ocnc

The aim of the Okinawa Computational Neuroscience Course is to provide opportunities for young researchers with theoretical backgrounds to learn the latest advances in neuroscience, and for those with experimental backgrounds to have hands-on experience in computational modeling. We invite graduate students and postgraduate researchers to participate in the course, held from June 17th through July 4th, 2013 at an oceanfront seminar house of the Okinawa Institute of Science and Technology Graduate University. Applications are through the course web page only; they close Sunday, February 10th, 2013. Applicants are required to propose a project at the time of application. Applicants will receive confirmation of acceptance in March.

As in preceding years, OCNC will be a comprehensive three-week course covering single neurons, networks, and behaviors, with ample time for student projects. The first week will focus exclusively on methods, with hands-on tutorials during the afternoons, while the second and third weeks will have lectures by international experts. We invite those who are interested in integrating experimental and computational approaches at each level, as well as in bridging different levels of complexity. There is no tuition fee. The sponsor will provide lodging and meals during the course and may support travel for those without funding. We hope that this course will be a good opportunity for theoretical and experimental neuroscientists to meet each other and to explore the attractive nature and culture of Okinawa, the southernmost island prefecture of Japan.

Invited faculty:
- Angelo Arleo (Université Pierre & Marie Curie, France)
- Avrama Blackwell (George Mason University, USA)
- Erik De Schutter (OIST)
- Karl Deisseroth (Stanford University, USA)
- Sophie Deneve (Ecole Normale Supérieure, France)
- Kenji Doya (OIST)
- Gaute Einevoll (Norwegian University of Life Sciences)
- Mike Hasselmo (Boston University, USA)
- Mitsuo Kawato (ATR, Japan)
- Bernd Kuhn (OIST)
- Henry Markram (EPFL, Lausanne, Switzerland)
- Jonathan Pillow (University of Texas at Austin, USA)
- Idan Segev (Hebrew University, Israel)
- Greg Stephens (OIST)
- Jeff Wickens (OIST)
- Yoko Yazaki-Sugiyama (OIST)

From gros at itp.uni-frankfurt.de Thu Feb 7 07:00:34 2013
From: gros at itp.uni-frankfurt.de (Prof. Claudius Gros)
Date: Thu, 7 Feb 2013 13:00:34 +0100 (CET)
Subject: Connectionists: PhD position: complex/cognitive system theory
Message-ID:

Open PhD position: Complex Dynamical Systems / Cognitive System Theory
At the Institute for Theoretical Physics, University of Frankfurt am Main

Field(s): complex systems, cognitive systems, neural networks, dynamical system theory
Application deadline: March 15, 2013
Supervisor: Prof. Claudius Gros
E-mail: cgr at itp.uni-frankfurt.de
Address: Institute for Theoretical Physics, Goethe University Frankfurt

Job description: Applications are invited for a fully funded PhD position at the Institute for Theoretical Physics, Frankfurt University.
The general focus of the research group is the development of the theory of complex and cognitive systems: generating principles, functionality and control. The focus of the work will be on models for self-generated brain dynamics, including generating functionals and working principles for the sensorimotor loop. The work will include analytical investigations and numerical simulations of neural networks, within the framework of dynamical system theory.

The candidates should have a Diploma/Master in physics with an excellent academic track record and good computational skills, preferably with Java or C++. Experience or strong interest in the fields of complex systems, dynamical system theory and/or artificial or biological cognitive systems is expected. The degree of scientific research experience is expected to be on the level of a German Diploma/Master.

The appointments will start summer 2013, for up to three years. Interested applicants should submit a curriculum vitae, a list of publications and arrange for two letters of reference to be sent to the address below.

Prof. Dr. C. Gros
Institute for Theoretical Physics
Goethe University Frankfurt
Max-von-Laue-Str. 1
60438 Frankfurt am Main
Germany
cgr at itp.uni-frankfurt.de
http://itp.uni-frankfurt.de/~gros

*****************************************
*** Prof. Dr. Claudius Gros           ***
*** +49 (0)69 798 47818               ***
*** http://itp.uni-frankfurt.de/~gros ***
*****************************************
--------------------------------------------------------
--- Complex and Adaptive Dynamical Systems, A Primer ---
--- A graduate-level textbook, Springer (2008/09/10)  ---
--------------------------------------------------------

From Julien.Mayor at unige.ch Wed Feb 6 09:18:49 2013
From: Julien.Mayor at unige.ch (Julien Mayor)
Date: Wed, 6 Feb 2013 14:18:49 +0000
Subject: Connectionists: Special Issue in Frontiers in Language Sciences: 2nd Call for Papers
Message-ID: <01BD9C3D6FD1504DA2F5871F4E7A57F6CC0A43@mike.isis.unige.ch>

Dear colleagues,

This is a reminder that, in collaboration with Frontiers in Psychology, we are currently organizing a Research Topic (Special Issue), "50 years after the perceptron, 25 years after PDP: Neural computation in language sciences". The structure of this Research Topic is provided below.

Host Specialty: Frontiers in Language Sciences

Research Topic Title: 50 years after the perceptron, 25 years after PDP: Neural computation in language sciences

Topic Editor(s): Julien Mayor, Pablo Gomez, Franklin Chang, Gary Lupyan

Description: The special issue aims to showcase the state of the art in language research while celebrating the 25th anniversary of the tremendously influential work of the PDP group, and the 50th anniversary of the perceptron. Although PDP models are often the gold standard to which new models are compared, the scope of this special issue is not constrained to connectionist models. Instead, we aim to create a landmark forum in which experts in the field define the state of the art and future directions of the psychological processes underlying language learning and use, broadly defined. We thus invite papers involving computational modeling and original research as well as technical, philosophical, or historical discussions pertaining to models of cognition. We especially invite submissions aimed at contrasting different computational frameworks, and their relationship to imaging and behavioral data.
Article Submission Deadline: Apr 30, 2013

Frontiers Research Topics are designed to be an organized, encyclopedic coverage of a particular research area, and a forum for discussion and debate. Contributions can be of different article types (Original Research, Methods, Hypothesis & Theory, and others). Our Research Topic has a dedicated homepage on the Frontiers website, where contributing articles are accumulated and discussions can be easily held. Once all articles are published, the topic will be compiled into an e-book, which can be sent to foundations that fund your research, to journalists and press agencies, and to any number of other organizations. As the ultimate reference source from leading scientists, Frontiers Research Topic articles become highly cited.

Frontiers is a Swiss-based, open access publisher. As such, an article accepted for publication incurs a publishing fee, which varies depending on the article type. The publishing fee for accepted articles is below average compared to most other open access journals, and lower than subscription-based journals that apply page and color figure charges. Moreover, for Research Topic articles, the publishing fee is discounted quite steeply thanks to the support of the Frontiers Research Foundation. Details on Frontiers' fees can be found at http://www.frontiersin.org/about/PublishingFees. When published, your article will be freely available to visitors to the Frontiers site, and will be indexed in PubMed and other academic archives. As an author in Frontiers, you will retain the copyright to your own paper and all figures.

For more information about this topic and Frontiers in Language Sciences, please visit: http://www.frontiersin.org/Language_Sciences/researchtopics/50_years_after_the_perceptron_/1287

It would be wonderful if you considered participating in this Research Topic.

With best regards,

Julien Mayor
Guest Associate Editor, Frontiers in Language Sciences
www.frontiersin.org
University of Geneva
40 Bd Pont d'Arve
1205 Genève
Tel: +41 (0)22 3798150
http://www.unige.ch/fapse/psycholinguistique/model.html

From K.Tsaneva-Atanasova at bristol.ac.uk Fri Feb 8 10:59:03 2013
From: K.Tsaneva-Atanasova at bristol.ac.uk (Krasi Tsaneva)
Date: Fri, 8 Feb 2013 15:59:03 +0000
Subject: Connectionists: Research Fellow in Modelling and Control of Socio-Motor Coordination
In-Reply-To: <20742.38253.582839.729909@hebb.inf.ed.ac.uk>
References: <20742.38253.582839.729909@hebb.inf.ed.ac.uk>
Message-ID:

We seek to recruit a Research Fellow to undertake research into mathematical modelling and control of human social interactions. You will contribute to a joint venture between movement scientists from Montpellier 1 University in France, computer science experts from the DFKI centre (Germany), mathematicians from the University of Bristol (UK), roboticists from the Ecole Polytechnique Fédérale de Lausanne (CH), as well as clinicians, psychologists and psychiatrists from the Academic Hospital of Montpellier (CHRU, FR).

You will have an excellent background in mathematics and/or engineering, and should be committed to applying your research to make real artificial agents' systems interacting with people in challenging circumstances. You will possess a relevant PhD and be able to demonstrate sufficient knowledge in mathematical modelling, numerical bifurcation analysis and feedback control design in order to work within the project.
You are expected to produce reliable mathematical models, feedback control strategies and numerical algorithms that i) allow real-time adaptation of the coupled human-artificial agent dynamics based on feedback control techniques and ii) integrate all parts of the interactive cognitive architecture together.

For informal enquiries please contact:
Dr KT Tsaneva-Atanasova, Reader in Applied Mathematics
Email: K.Tsaneva-Atanasova at bristol.ac.uk

For further details and to apply please visit:
http://www.bris.ac.uk/jobs/find/list.html?keywords=&jobnum=ACAD100163&srcsubmit=Search&statlog=1&ID=Q50FK026203F3VBQBV7V77V83&mask=uobext&LG=UK

Dr Krasimira Tsaneva-Atanasova
Reader (Associate Professor) in Applied Mathematics
Department of Engineering Mathematics
University of Bristol
Queen's Building
Bristol BS8 1TR, UK
Phone: +44 (0)117 331-5603
Fax: +44 (0)117 331-5606

From zaytsev at fz-juelich.de Tue Feb 5 11:34:45 2013
From: zaytsev at fz-juelich.de (Yury V. Zaytsev)
Date: Tue, 5 Feb 2013 17:34:45 +0100
Subject: Connectionists: Release of NEST 2.2.1 and PyNN 0.7.5
Message-ID: <1360082085.2856.34.camel@newpride>

Dear colleagues,

We are very happy to announce the simultaneous release of NEST 2.2.1 and PyNN 0.7.5.

PyNN is a simulator-independent language for building neuronal network models developed by the NeuralEnsemble community: http://neuralensemble.org/PyNN/

NEST is a fast and efficient simulator for networks of spiking neurons developed by the NEST Initiative, distributed under the terms of the GNU General Public License version 2 (or later). It can be downloaded from http://www.nest-initiative.org/index.php/Software:Download

Additionally, we made Ubuntu-based live media available (both in ISO and OVA formats), suitable, for example, for importing into VirtualBox or burning to a DVD. NEST, NEURON, Brian, PyNN and other software comes pre-installed, which is useful for trying out NEST without installing it on your computer, especially for Windows and Mac OS X users.

NEST 2.2.1 is primarily a bugfix release following NEST 2.2.0, released in December 2012, resolving several bugs in the build system, PyNEST, the topology module and built-in models. NEST 2.2.1 has been verified to be compatible with the latest released PyNN 0.7.5. Below we briefly present the major new features in the NEST 2.2.x series.

== Features ==

NEST 2.2.0 contains substantial improvements and many new features. The most important ones are:

1. Better speed, scaling, and memory footprint
2. Support for connection set algebra (CSA)
3. Data driven network generation

=== Better speed, scaling, and memory footprint ===

Many of NEST's major data structures have been re-written to improve speed, scalability and memory footprint. The most significant changes concern the simulation kernel and the way it represents networks internally. As a result, NEST has a considerably lower memory consumption and improved scaling, in particular when using a large number of cores. The full details and theory behind these changes are published in two papers:

* Helias et al. Front. Neuroinform. (2012) http://dx.doi.org/10.3389/fninf.2012.00026
* Kunkel et al. Front. Neuroinform. (2012) http://dx.doi.org/10.3389/fninf.2011.00035

* Many connect functions now use OpenMP to connect neurons in parallel.
* OpenMP has also replaced Pthreads as a default for multi-threaded simulations, yielding better scaling.
* SLI, the built-in simulation language interpreter of NEST, is now up to 5 times faster.
* The topology library has been re-written, greatly improving its speed and reducing memory requirements.

=== Support for connection set algebra (CSA) ===

NEST 2.2.0 supports the Connection Set Algebra by Mikael Djurfeldt (http://dx.doi.org/10.1007/s12021-012-9146-1). The Connection Set Algebra is a powerful notation that allows one to formulate complex network architectures in a concise manner.

=== Data driven network generation ===

NEST 2.2.0 has a new function `DataConnect`, which allows the efficient connection and parametrization of connections from connection data. `DataConnect` will efficiently create and parameterize synapses. The new function `GetConnections` allows one to efficiently retrieve the afferent and efferent connections of a neuron, or the connections between groups of neurons. The combination of `GetConnections` and `DataConnect` allows users to retrieve, save, and restore the synaptic state of a network. This is particularly useful in models with synaptic plasticity and learning.

=== Topology library ===

The topology library supports the creation of spatially organized networks, e.g. for models of the visual system. NEST 2.2.0 supports 3-dimensional networks, where neurons are placed in a volume rather than on a sheet. There is also a new API to add user-defined connection kernels. Please refer to the updated user manual and examples for more details.

== Detailed list of changes ==

* Kernel and PyNEST changes
- Major improvements to the memory consumption and scaling properties of the simulation kernel, as described in Helias et al. (2012) and Kunkel et al. (2012).
- NEST now supports the connection generator interface, an interface allowing external modules to generate connectivity, see http://software.incf.org/software/libneurosim .
- Support for the Connection Set Algebra (CSA) by Mikael Djurfeldt has been added, see http://software.incf.org/software/csa and http://dx.doi.org/10.3389/conf.fninf.2011.08.00085 .
- The new command `GetConnections` allows the fast retrieval of connections and will replace the slow and memory-intensive `FindConnections`; `FindConnections` is deprecated and will be removed in future releases.
- The new command `DataConnect` allows one to efficiently create network connections from data, e.g. when synapse parameters are explicitly given.
- Connection objects are now represented as Python lists or NumPy arrays; code which relies on the old connection dictionaries will have to be changed.
- The function `GetNodes` has been removed; one has to explicitly use either `GetLocalNodes` or `GetGlobalNodes`.
- Support for node addresses has been removed, and with it the functions `GetAddress` and `GetGID`.
- Threading support for OpenMP has been added and is the default now.
- Many connect routines and node calibration are now parallel, using OpenMP.
- New function `CGConnect` for connecting neurons using connection generators (see doc/conngen.txt).
- New function `abort`, which terminates NEST and all its MPI processes without deadlocks.

* Topology module changes
- Major rewrite, which resulted in improved performance and reduced memory requirements for freely placed neurons.
- Topology now supports 3-dimensional layers.
- New API for adding your own kernel functions.
- 'Nested' layer layout (subnets within subnets) is no longer supported; this was previously discouraged for performance reasons.
- Composite layers (layers with multiple nodes per position) no longer contain subnets.
- Semantics of `GetElement` have changed; in particular, a list of GIDs is returned for a composite layer, where previously a single subnet GID would be returned.
- `GetPosition`, `Displacement` and `Distance` now only work for nodes local to the current MPI process.
- See the updated Topology User Manual for details.

* SLI Interpreter improvements
- The SLI Interpreter has been optimized for speed and memory; in particular, handling and lookup of names is much faster now.
- SLI now supports fixed-size vectors of doubles and integers. The new types are called `IntVector` (/intvectortype) and `DoubleVector` (/doublevectortype).
- NumPy arrays are automatically converted to SLI vectors to conserve memory and CPU time.
- SLI has new functions `arange`, `zeros` and `ones` to easily create vectors.
- The vector types support all common math operations.
- New functions `DictQ` and `SubnetQ` to test the argument types for being a dictionary or a subnet, respectively.

* Miscellaneous changes
- The installation prefix should now be given explicitly, since installing to the default `/usr/local` is strongly discouraged.
- Substantial documentation updates; a large number of broken documentation cross-references has been resolved.
- Fixed numerical instabilities of the AdEx models (`aeif_cond_exp` and `aeif_cond_alpha`).
- A new, significantly faster binomial random number generator replaced the previous implementation in librandom (#390).
- New wrapper for the GSL binomial random deviate generator under the name `gsl_binomial`.
- Support for IBM BlueGene and K supercomputers (configure with --enable-bluegene=l/p/q).
- The command `setenvironment` has been removed; this functionality was broken on Mac OS X for quite some time.
- Much improved test coverage for SLI, PyNEST and MPI in particular.
- Many new SLI and PyNEST examples and updates for the existing ones.
- Code quality of NEST and all examples is continuously monitored now using CI (http://dx.doi.org/10.3389/fninf.2012.00031).

== Known issues ==

* The simulation progress indicator does not work with OpenMP, and some error messages are unreadable.
* On multiarch systems (i.e. 64-bit Red Hat Linux) one has to manually move all PyNEST-related files from $PREFIX/lib/python2.6/site-packages to $PREFIX/lib64/python2.6/site-packages; this will be resolved in the next releases. This also breaks `make installcheck`.

As always, please send bug reports to the NEST user mailing list.

Happy simulating!

Beautiful greetings on behalf of the NEST Initiative,

--
Dipl.-Phys. Yury V. Zaytsev
Institute of Neuroscience and Medicine (INM-6)
Functional Neural Circuits Group
Jülich Research Center
http://www.fz-juelich.de/inm/inm-6/
Office: +49 2461 61-9466
Fax #: +49 2461 61-9460
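As a minimal sketch of the save/restore workflow described in the release notes above (our illustration, not from the announcement; it assumes the PyNEST 2.2 calls behave as documented, with an illustrative model and connection routine):

    import nest

    nest.ResetKernel()
    neurons = nest.Create("iaf_neuron", 5)
    nest.ConvergentConnect(neurons, neurons)  # all-to-all static synapses

    # GetConnections retrieves the afferent/efferent connections of a group
    # of neurons; GetStatus then reads out their full synaptic state.
    conns = nest.GetConnections(source=neurons)
    state = nest.GetStatus(conns)  # dicts incl. source, target, weight, delay

    # After, e.g., a plasticity run, the saved (source, target, weight, delay)
    # entries could be fed back through DataConnect to restore this snapshot.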
From mhb0 at lehigh.edu Sat Feb 9 13:57:09 2013
From: mhb0 at lehigh.edu (Mark H. Bickhard)
Date: Sat, 9 Feb 2013 13:57:09 -0500
Subject: Connectionists: 2nd CFP Interactivist Summer Institute 2013
Message-ID:

Interactivist Summer Institute 2013
August 1 - 4, 2013
University of South Florida, St. Petersburg

Join us in exploring the frontiers of understanding of life, mind, and cognition. There is a growing recognition - across many disciplines - that phenomena of life and mind, including cognition and representation, are emergents of far-from-equilibrium, interactive, autonomous systems. Mind and biology, mind and agent, are being re-united. The classical treatment of cognition and representation within a formalist framework of encodingist assumptions is widely recognized as a fruitless maze of blind alleys. From neurobiology to robotics, from cognitive science to philosophy of mind and language, dynamic and interactive alternatives are being explored. Dynamic systems approaches and autonomous agent research join in the effort.

The interactivist model offers a theoretical approach to matters of life and mind, ranging from evolutionary- and neuro-biology (including the emergence of biological function) through representation, perception, motivation, memory, learning and development, emotions, consciousness, language, rationality, sociality, personality and psychopathology. This work has developed interfaces with studies of central nervous system functioning, the ontology of process, autonomous agents, philosophy of science, and all areas of psychology, philosophy, and cognitive science that address the person.

The conference will involve both tutorials addressing central parts and aspects of the interactive model, and papers addressing current work of relevance to this general approach.

This will be our seventh Summer Institute; the first was in 2001 at Lehigh University, Bethlehem, PA, USA, the second in 2003 in Copenhagen, Denmark, the third in 2005 at Clemson University, South Carolina, USA, the fourth in 2007 at The American University in Paris, the fifth in 2009 at Simon Fraser University, Vancouver, and the sixth on Syros, Greece. The Summer Institute is a biennial meeting where those sharing the core ideas of interactivism will meet and discuss their work, try to reconstruct its historical roots, put forward current research in different fields that fits the interactivist framework, and define research topics for prospective graduate students. People working in philosophy of mind, linguistics, social sciences, artificial intelligence, cognitive robotics, theoretical biology, and other fields related to the sciences of mind are invited to send their paper submission or statement of interest for participation to the organizers.

ISI 2013 web site: http://www.lehigh.edu/%7einteract/isi2013web/index.htm

Mark H. Bickhard
Lehigh University
17 Memorial Drive East
Bethlehem, PA 18015
mark at bickhard.name
http://bickhard.ws/

From tgd at eecs.oregonstate.edu Wed Feb 6 16:53:40 2013
From: tgd at eecs.oregonstate.edu (Thomas G. Dietterich)
Date: Wed, 6 Feb 2013 13:53:40 -0800
Subject: Connectionists: Please Help: Survey on Machine Learning and Probabilistic Modeling
Message-ID: <01ea01ce04b4$6d65de20$48319a60$@eecs.oregonstate.edu>

I'm conducting a brief survey to assess the current use of probabilistic and non-probabilistic machine learning/data mining/statistical tools in research, engineering, and commerce. Please participate by going to http://oregonstate.qualtrics.com/SE/?SID=SV_b4bEwFkAhEAiewJ

All responses are anonymous. Thanks!

--
Thomas G. Dietterich, Professor
School of Electrical Engineering and Computer Science
http://eecs.oregonstate.edu/~tgd
1148 Kelley Engineering Center
Oregon State Univ., Corvallis, OR 97331-5501

From derry.fitzgerald at dit.ie Tue Feb 5 12:01:31 2013
From: derry.fitzgerald at dit.ie (Derry FitzGerald)
Date: Tue, 5 Feb 2013 17:01:31 +0000
Subject: Connectionists: 2nd Call for Papers: Special Issue on Informed Acoustic Source Separation; EURASIP Journal on Advances in Signal Processing
Message-ID: <51113AEB.2030905@dit.ie>

We apologize for cross-distribution and multiple copies.

***************************************************
2nd CALL FOR PAPERS
EURASIP Journal on Advances in Signal Processing
Special Issue on Informed Acoustic Source Separation

The complete call for papers is accessible at:
http://asp.eurasipjournals.com/sites/10233/pdf/H9386_DF_CFP_EURASIP_JASP_A4_3.pdf

DEADLINE: PAPER SUBMISSION: 31st May 2013
------------------------------------------------------------------------------------------------------

Short Description

The proposed topic of this special issue is informed acoustic source separation. As source separation has long become a field of interest in the signal processing community, recent works increasingly point out the fact that separation can only be reliably achieved in real-world use cases when accurate prior information can be successfully incorporated. Informed separation algorithms can be characterized by the fact that case-specific prior knowledge is made available to the algorithm for processing. In this respect, they contrast with blind methods, for which no specific prior information is available. Following on the success of the special session on the same topic at EUSIPCO 2012 in Bucharest, we would like to present recent methods, discuss the trends and perspectives of this domain, and draw the attention of the signal processing community to this important problem and its potential applications. We are interested in both methodological advances and applications.

Topics of interest include (but are not limited to):
- Sparse decomposition methods
- Subspace learning methods for sparse decomposition
- Non-negative matrix / tensor factorization
- Robust principal component analysis
- Probabilistic latent component analysis
- Independent component analysis
- Multidimensional component analysis
- Multimodal source separation
- Video-assisted source separation
- Spatial audio object coding
- Reverberant models for source separation
- Score-informed source separation
- Language-informed speech separation
- User-guided source separation
- Source separation informed by cover version
- Informed source separation applied to speech, music or environmental signals
- ...
-------------------
Guest Editors

Taylan Cemgil, Bogazici University, Turkey
Tuomas Virtanen, Tampere University of Technology, Finland
Alexey Ozerov, Technicolor, France
Derry FitzGerald, Dublin Institute of Technology, Ireland
Lead Guest Editor: Gaël Richard, Institut Mines-Télécom, Télécom ParisTech, CNRS-LTCI, France

This message has been scanned for content and viruses by the DIT Information Services E-Mail Scanning Service, and is believed to be clean. http://www.dit.ie

From Eugene.Izhikevich at braincorporation.com Thu Feb 7 17:10:30 2013
From: Eugene.Izhikevich at braincorporation.com (Eugene Izhikevich)
Date: Thu, 7 Feb 2013 14:10:30 -0800
Subject: Connectionists: Scholarpedia leaders: Brain Corporation $10k Prize
Message-ID:

Scholarpedia (www.scholarpedia.org) and Brain Corporation (http://braincorporation.com/) announced a global experiment in scholarship and collaboration. The goal is to complete the most comprehensive, open, current, and scholarly resource in computational neuroscience: http://www.scholarpedia.org/article/Encyclopedia_of_computational_neuroscience

Brain Corporation is offering $10,000 (US) in prizes for writing and publishing the most popular Scholarpedia article in the field of Computational Neuroscience. As a Scholarpedia entry, each such article will undergo the normal peer-review and publication process. The article must be published in Scholarpedia between October 1, 2012 and June 30, 2013 in order to participate in the contest. All participating articles will be publicly available under the Creative Commons BY-NC-SA 3.0 License. The winning Scholarpedia articles will be determined based on the number of Google +1 votes received after publication, and the award will be given during the CNS meeting in July in Paris.

Currently, the leaders are:

http://www.scholarpedia.org/article/Frontal_eye_field
Google+ count: 244

http://www.scholarpedia.org/article/SPIKE-distance
Google+ count: 243

http://www.scholarpedia.org/article/Homeostatic_Regulation_of_Neuronal_Excitability
Google+ count: 177

A typical time to get the first 100 votes is just a few weeks, so people joining now still have a chance of winning the competition. In order to reserve an article in Scholarpedia for this competition, students and post-docs will need to team up with established experts on the topic they choose to cover. For details on how to participate, please visit: http://www.scholarpedia.org/article/Scholarpedia:2012_Brain_Corporation_Prize_in_Computational_Neuroscience

--
Dr. Eugene M. Izhikevich
Founder and CEO of Brain Corporation, San Diego, California
Founder and Editor-in-Chief of Scholarpedia -- the peer-reviewed open-access encyclopedia.
From grlmc at urv.cat Fri Feb 8 12:48:27 2013
From: grlmc at urv.cat (GRLMC)
Date: Fri, 8 Feb 2013 18:48:27 +0100
Subject: Connectionists: SLSP 2013: 3rd call for papers
Message-ID: <4DBCF099D9D142A19FE2C100F2EFD7D4@Carlos1>

*To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject*

*********************************************************************

1st INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING

SLSP 2013

Tarragona, Spain
July 29-31, 2013

Organised by:
Research Group on Mathematical Linguistics (GRLMC), Rovira i Virgili University
Research Institute for Information and Language Processing (RIILP), University of Wolverhampton

http://grammars.grlmc.com/SLSP2013/

*********************************************************************

AIMS: SLSP is the first event in a series to host and promote research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing. It aims at attracting contributions from both fields. Though there exist large, well-known conferences including papers in any of these fields, SLSP is a more focused meeting where synergies between areas and people will hopefully happen. SLSP will reserve significant space for young scholars at the beginning of their careers.

VENUE: SLSP 2013 will take place in Tarragona, 100 km to the south of Barcelona.

SCOPE: The conference invites submissions discussing the employment of statistical methods (including machine learning) within language and speech processing. The list below is indicative and not exhaustive:
- phonology, morphology
- syntax, semantics
- discourse, dialogue, pragmatics
- statistical models for natural language processing
- supervised, unsupervised and semi-supervised machine learning methods applied to natural language, including speech
- statistical methods, including biologically-inspired methods
- similarity
- alignment
- language resources
- part-of-speech tagging
- parsing
- semantic role labelling
- natural language generation
- anaphora and coreference resolution
- speech recognition
- speaker identification/verification
- speech transcription
- text-to-speech synthesis
- machine translation
- translation technology
- text summarisation
- information retrieval
- text categorisation
- information extraction
- term extraction
- spelling correction
- text and web mining
- opinion mining and sentiment analysis
- spoken dialogue systems
- author identification, plagiarism and spam filtering

STRUCTURE: SLSP 2013 will consist of:
- invited talks
- invited tutorials
- peer-reviewed contributions

INVITED SPEAKERS:
Yoshua Bengio (Montréal), tutorial: Learning Deep Representations
Christof Monz (Amsterdam), Challenges and Opportunities of Multilingual Information Access
Tanja Schultz (Karlsruhe Tech), Multilingual Speech Processing with a special emphasis on Rapid Language Adaptation

PROGRAMME COMMITTEE:
Carlos Martín-Vide (Tarragona, Co-Chair)
Ruslan Mitkov (Wolverhampton, Co-Chair)
Jerome Bellegarda (Apple Inc., Cupertino)
Robert C. Berwick (MIT)
Laurent Besacier (LIG, Grenoble)
Bill Byrne (Cambridge)
Jen-Tzung Chien (National Chiao Tung U, Hsinchu)
Kenneth Church (IBM Research)
Koby Crammer (Technion)
Renato De Mori (McGill & Avignon)
Thierry Dutoit (U Mons)
Marcello Federico (Bruno Kessler Foundation, Trento)
Katherine Forbes-Riley (Pittsburgh)
Sadaoki Furui (Tokyo Tech)
Yuqing Gao (IBM Thomas J. Watson)
Yuqing Gao (IBM Thomas J. Watson) Ralph Grishman (New York U) Dilek Hakkani-Tür (Microsoft Research, Mountain View) Adam Kilgarriff (Lexical Computing Ltd., Brighton) Dietrich Klakow (Saarbrücken) Philipp Koehn (Edinburgh) Mikko Kurimo (Aalto) Lori Lamel (CNRS-LIMSI, Orsay) Philippe Langlais (Montréal) Haizhou Li (Institute for Infocomm Research, Singapore) Qun Liu (Dublin City) Daniel Marcu (SDL) Manuel Montes-y-Gómez (INAOEP, Puebla) Masaaki Nagata (NTT, Kyoto) Joakim Nivre (Uppsala) Kemal Oflazer (Carnegie Mellon Qatar, Doha) Miles Osborne (Edinburgh) Manny Rayner (Geneva) Giuseppe Riccardi (U Trento) José A. Rodríguez Fonollosa (Technical U Catalonia, Barcelona) Paolo Rosso (Technical U Valencia) Mark Steedman (Edinburgh) Tomek Strzalkowski (Albany) Gökhan Tür (Microsoft Research, Redmond) Stephan Vogel (Qatar Computing Research Institute, Doha) Kuansan Wang (Microsoft Research, Redmond) Dekai Wu (HKUST, Hong Kong) Min Zhang (Institute for Infocomm Research, Singapore) Yunxin Zhao (U Missouri, Columbia) ORGANISING COMMITTEE: Adrian Horia Dediu (Tarragona) Carlos Martín-Vide (Tarragona, Co-Chair) Ruslan Mitkov (Wolverhampton, Co-Chair) Bianca Truthe (Magdeburg) Florentina Lilica Voicu (Tarragona) SUBMISSIONS: Authors are invited to submit papers presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including any appendices) and should be formatted according to the standard format for Springer Verlag's LNAI series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Submissions are to be uploaded to: https://www.easychair.org/conferences/?conf=slsp2013 PUBLICATIONS: A volume of proceedings published by Springer in the LNAI topical subseries of the LNCS series will be available by the time of the conference. A special issue of a major journal will be published later, containing peer-reviewed extended versions of some of the papers contributed to the conference. Submissions will be by invitation. REGISTRATION: The period for registration is open from November 30, 2012 to July 29, 2013. The registration form can be found at: http://grammars.grlmc.com/SLSP2013/Registration DEADLINES: Paper submission: March 5, 2013 (23:59h, CET) Notification of paper acceptance or rejection: April 9, 2013 Final version of the paper for the LNAI proceedings: April 17, 2013 Early registration: April 24, 2013 Late registration: July 19, 2013 Submission to the post-conference journal special issue: October 31, 2013 QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat POSTAL ADDRESS: SLSP 2013 Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University Av. Catalunya, 35 43002 Tarragona, Spain Phone: +34-977-559543 Fax: +34-977-558386 ACKNOWLEDGEMENTS: Diputació de Tarragona Universitat Rovira i Virgili University of Wolverhampton
From okada at ntt.dis.titech.ac.jp Tue Feb 5 04:25:45 2013 From: okada at ntt.dis.titech.ac.jp (Shogo Okada) Date: Tue, 05 Feb 2013 18:25:45 +0900 Subject: Connectionists: CALL FOR PAPERS: IML-IJCNN2013 Message-ID: <5110D019.8050905@ntt.dis.titech.ac.jp> Dear Colleague, We would like to inform you about the special session on Incremental Machine Learning: Methods and Applications (IML) Special Session at IJCNN 2013 http://lipn.univ-paris13.fr/~grozavu/IML-IJCNN2013/default.html and to invite you to submit a contribution and/or help us to disseminate this information to your colleagues.
CALL FOR PAPERS This Special Session aims to act as a forum for new ideas and paradigms concerning Incremental Learning (non-stationary learning). The session solicits theoretical and applied research papers including but not limited to the following topics: Theory: - Incremental Supervised Learning - Incremental Unsupervised Learning - Online Learning - Online Feature Selection - Data stream clustering - Distributed Clustering - Consensus Clustering - Incremental Probabilistic Models - Active Learning Application: - Incremental learning for data mining - Incremental learning for computer vision and speech processing - Incremental learning for web intelligence - Incremental learning for robotics Fairmont Hotel Dallas, TX August 4-9, 2013 http://www.ijcnn2013.org/ Submission procedures: http://www.ijcnn2013.org/paper-submission.php#content (Important - Submission Guidelines: Please follow the regular submission guidelines of IJCNN 2013 and submit your paper to the paper submission system. Be careful to select the correct special session.) UPCOMING DEADLINES: Paper Submission Deadline February 22, 2013 Camera-Ready Paper Submission May 1, 2013 Organizers Shogo Okada, Tokyo Institute of Technology, Japan okada at ntt.dis.titech.ac.jp Seiichi Ozawa, Kobe University, Kobe, Japan ozawasei at kobe-u.ac.jp Nicoleta Rogovschi, LIPADE, Paris Descartes University, Paris, France nicoleta.rogovschi at parisdescartes.fr Nistor Grozavu, LIPN, Paris 13 University, Villetaneuse, France nistor at lipn.univ-paris13.fr Contact okada at ntt.dis.titech.ac.jp, ozawasei at kobe-u.ac.jp, nicoleta.rogovschi at parisdescartes.fr, nistor at lipn.univ-paris13.fr Shogo Okada, Dr. Dept. of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Assistant Professor 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8503 TEL/FAX: +81-45-924-5218, E-mail: okada at ntt.dis.titech.ac.jp
From calendarsites at insticc.org Mon Feb 11 14:05:28 2013 From: calendarsites at insticc.org (CalendarSites) Date: Mon, 11 Feb 2013 19:05:28 -0000 Subject: Connectionists: CFP IJCCI 2013 - 5th International Joint Conference on Computational Intelligence Message-ID: <06ad01ce088b$18df5950$4a9e0bf0$@insticc.org> CALL FOR PAPERS 5th International Joint Conference on Computational Intelligence - IJCCI 2013 Website: http://www.ijcci.org September 20 - 22, 2013 Vilamoura, Algarve, Portugal Important Deadlines: Regular Papers Paper Submission: March 13, 2013 Authors Notification: May 15, 2013 Final Paper Submission and Registration: June 5, 2013 Sponsored by: INSTICC - Institute for Systems and Technologies of Information, Control and Communication INSTICC is Member of: WfMC - Workflow Management Coalition The purpose of IJCCI is to bring together researchers, engineers and practitioners in the areas of Fuzzy Computation, Evolutionary Computation and Neural Computation. IJCCI is composed of three co-located conferences, each specialized in at least one of the aforementioned main knowledge areas.
- ECTA: International Conference on Evolutionary Computation Theory and Applications (http://www.ecta.ijcci.org) CONFERENCE TOPICS Genetic Algorithms Machine Learning Cognitive Systems Artificial Life Representation techniques Software engineering issues; Metamodelling Evolutionary multiobjective optimization Game Theory and applications Evolution strategies Evolutionary robotics and intelligent agents Society and cultural aspects of evolution Concurrent co-operation Co-evolution and collective behavior Biocomputing and complex adaptive systems Bio-inspired hardware and networks Swarm/collective intelligence Evolutionary art and design Hybrid Systems - FCTA: International Conference on Fuzzy Computation Theory and Applications (http://www.fcta.ijcci.org) CONFERENCE TOPICS Fuzzy hardware, fuzzy architectures Soft computing and intelligent agents Mathematical foundations: Fuzzy set theory and fuzzy logic Approximate reasoning and fuzzy inference System identification and fault detection Fuzzy information retrieval and data mining Fuzzy information processing, fusion, text mining Learning and adaptive fuzzy systems Complex fuzzy systems Pattern recognition: Fuzzy clustering and classifiers Fuzzy image, speech and signal processing, vision and multimedia Industrial, financial and medical applications Type-2 Fuzzy Logic Neuro-fuzzy systems Fuzzy Systems Design, Modeling and Control Real-time Learning of Fuzzy and Neuro-fuzzy Systems Fuzzy Control Fuzzy Systems in Robotics: Sensors, Navigation and Coordination - NCTA: International Conference on Neural Computation Theory and Applications (http://www.ncta.ijcci.org) CONFERENCE TOPICS Pattern Recognition Industrial, financial and medical applications Computational neuroscience Neural network software and applications Complex-valued neural networks Neuroinformatics and bioinformatics Learning paradigms and algorithms Supervised and unsupervised learning Adaptive architectures and mechanisms Support Vector Machines and Applications Complex artificial neural network based systems and dynamics Higher level artificial neural network based intelligent systems Bio-inspired and humanoid robotics Artificial Emotions and Emotional Intelligence Collective & Distributed Intelligent Systems and Dynamics Image Processing and Artificial Vision Applications Intelligent Artificial Perception and Neural Sensors Modular Implementation of Artificial Neural Networks Neural-based Data Mining and Complex Information Processing Neural Multi-agent Intelligent Systems and Applications Self-organization and Emergence Stability and Instability in Artificial Neural Networks Neural Network Hardware Implementation and Applications Neural Computation issues in Social Behaviour Emergence These three concurrent conferences are held in parallel, and registration to one entitles delegates to attend all three. IJCCI Keynote Speakers Kevin Warwick, University of Reading, United Kingdom Leslie Smith, University of Stirling, United Kingdom (List Not Complete) PUBLICATIONS All accepted papers (full and short) will be published in the conference proceedings, under an ISBN reference, in print and on CD-ROM. All papers presented at the conference venue will be available at the SCITEPRESS Digital Library (http://www.scitepress.org/DigitalLibrary/). SCITEPRESS is a member of CrossRef (http://www.crossref.org/). A short list of presented papers will be selected so that revised and extended versions of these papers will be published by Springer-Verlag in a SCI Series book.
The proceedings will be submitted for indexing by Thomson Reuters Conference Proceedings Citation Index (ISI), INSPEC, DBLP and EI (Elsevier Index). AWARDS Best paper awards will be distributed during the conference closing session. IJCCI Conference Co-chairs Joaquim Filipe, Polytechnic Institute of Setúbal / INSTICC, Portugal Janusz Kacprzyk, Systems Research Institute - Polish Academy of Sciences, Poland Please check further details at the conference website (http://www.ijcci.org).
From jeffclune at uwyo.edu Sat Feb 9 21:14:08 2013 From: jeffclune at uwyo.edu (Jeff Clune) Date: Sat, 9 Feb 2013 19:14:08 -0700 Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks Message-ID: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> Hello all, I believe that many in the neuroscience community will be interested in a new paper that sheds light on why modularity evolves in biological networks, including neural networks. The same discovery also provides AI researchers a simple technique for evolving neural networks that are modular and have increased evolvability, meaning that they adapt faster to new environments. Cite: Clune J, Mouret J-B, Lipson H (2013) The evolutionary origins of modularity. Proceedings of the Royal Society B. 280: 20122863. http://dx.doi.org/10.1098/rspb.2012.2863 (pdf) Abstract: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks - their organization as functional, sparsely connected subunits - but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes. Video: http://www.youtube.com/watch?feature=player_embedded&v=SG4_aW8LMng There has been some nice coverage of this work in the popular press, in case you are interested: - National Geographic: http://phenomena.nationalgeographic.com/2013/01/30/the-parts-of-life/ - MIT's Technology Review: http://www.technologyreview.com/view/428504/computer-scientists-reproduce-the-evolution-of-evolvability/ - Fast Company: http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning - Cornell Chronicle: http://www.news.cornell.edu/stories/Jan13/modNetwork.html - ScienceDaily: http://www.sciencedaily.com/releases/2013/01/130130082300.htm I hope you enjoy the work. Please let me know if you have any questions. Best regards, Jeff Clune Assistant Professor Computer Science University of Wyoming jeffclune at uwyo.edu jeffclune.com
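For readers who want a concrete picture of the two-objective selection scheme the abstract describes, the toy Python sketch below applies Pareto selection to maximize a performance score while minimizing a connection-cost score. It is purely illustrative: the genome encoding, the random stand-in fitness, and the mutation step are hypothetical placeholders, not the authors' code or experimental setup.

import random

def random_genome(n_nodes=8):
    # A genome here is just a set of directed connections between node indices.
    return {(i, j) for i in range(n_nodes) for j in range(n_nodes)
            if i != j and random.random() < 0.3}

def performance(genome):
    # Stand-in fitness; a real experiment would score the network on a task.
    return random.random()

def connection_cost(genome):
    # The second objective from the paper's idea: number of connections.
    return len(genome)

def dominates(a, b):
    # Pareto dominance: maximize performance, minimize connection cost.
    (pa, ca), (pb, cb) = a, b
    return (pa >= pb and ca <= cb) and (pa > pb or ca < cb)

def evolve(pop_size=50, generations=100, n_nodes=8):
    pop = [random_genome(n_nodes) for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(performance(g), connection_cost(g), g) for g in pop]
        # Keep genomes not Pareto-dominated by any other genome.
        front = [g for (p, c, g) in scored
                 if not any(dominates((p2, c2), (p, c))
                            for (p2, c2, _) in scored)]
        # Refill the population by mutating survivors (toggle one edge).
        pop = front[:]
        while len(pop) < pop_size:
            child = set(random.choice(front))
            i, j = random.randrange(n_nodes), random.randrange(n_nodes)
            if i != j:
                child.symmetric_difference_update({(i, j)})
            pop.append(child)
    return pop

This only shows the shape of the performance-vs-cost trade-off loop; the paper's actual experiments use a task-based performance objective and a full multi-objective evolutionary algorithm.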
From jesus.m.cortes at gmail.com Tue Feb 12 05:59:15 2013 From: jesus.m.cortes at gmail.com (Jesus Cortes) Date: Tue, 12 Feb 2013 11:59:15 +0100 Subject: Connectionists: Ikerbasque Research Fellow ("Tenure"). Methods in Neuroimaging, Bilbao (Spain) In-Reply-To: References: Message-ID: Dear subscriber, Subject: Ikerbasque Research Fellow ("Tenure"). Methods in Neuroimaging, Bilbao (Spain) One position for an "Ikerbasque Research Fellow" will be hosted by the Group of Computational Neuroimaging in Biocruces. More information about "Ikerbasque Research Fellow" at http://www.ikerbasque.net/your_cv/insert_your_cv/research_fellows.html More information about Biocruces: http://www.biocruces.com/ To apply, it is compulsory to have a minimum of 24 months of postdoctoral research in leading institutions outside Spain. Interested researchers must contact me before March 15th, 2013. Feel free to contact me with any further questions or comments. Please provide a copy of an updated CV highlighting your best 5 publications. Jesus M Cortes Ikerbasque Research Professor Biocruces Institute.
From samuel.kaski at aalto.fi Tue Feb 12 02:20:13 2013 From: samuel.kaski at aalto.fi (Kaski Samuel) Date: Tue, 12 Feb 2013 07:20:13 +0000 Subject: Connectionists: Postdoc in machine learning for neuroinformatics, DL 28 Feb 2013 Message-ID: POSTDOC IN MACHINE LEARNING FOR NEUROINFORMATICS, DL 28 FEB 2013 3:00 PM EET We are looking for a postdoctoral researcher to work in a collaboration project of a machine learning group (The Finnish Center of Excellence in Computational Inference Research COIN, prof. Samuel Kaski) and a neuroscience group (O.V. Lounasmaa Laboratory, prof. Riitta Salmelin). There are opportunities for both methods development and brain imaging work. Deadline 28 February 2013 at 3:00 p.m. EET. Please see more details of the application procedure at http://research.ics.aalto.fi/coin/vacancies.shtml (see comp biol and medicine and specify neuroscience in the application) More information: http://research.ics.aalto.fi/coin/index.shtml (COIN) http://ltl.tkk.fi/wiki/BRU (O.V. Lounasmaa Laboratory)
From sebastian.risi at cornell.edu Tue Feb 12 16:04:42 2013 From: sebastian.risi at cornell.edu (Sebastian Risi) Date: Tue, 12 Feb 2013 16:04:42 -0500 Subject: Connectionists: Announcing ES-HyperNEAT: An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons In-Reply-To: References: Message-ID: Dear Connectionists, The recently introduced Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) is a step beyond traditional neural network evolution (i.e. neuroevolution) algorithms towards evolving more brain-like structures through evolutionary algorithms. In particular, neural networks evolved by HyperNEAT feature topography in addition to topology. That is, neurons exist at spatial locations just as they do in real brains, which means that connectivity patterns evolve that can be analyzed for emergent topographic map-like characteristics. In addition, the ability to encode and evolve large connectivity patterns with regularities means that HyperNEAT can evolve larger networks than past approaches, with up to millions of connections. Yet the positions and number of the neurons connected through this approach must be decided a priori by the user and, unlike in living brains, cannot change during evolution. We are pleased to announce a new paper that introduces Evolvable-substrate HyperNEAT (ES-HyperNEAT), which addresses this limitation by automatically deducing the density and positions of neurons from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement.
This approach not only can evolve the location of every neuron in the network (while still preserving the advances introduced by the original HyperNEAT), but also can represent regions of varying density, which means resolution can increase holistically over evolution. In this paper, we show that ES-HyperNEAT can significantly expand the scope of neural structures that evolution can discover. Cite: An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons. Sebastian Risi and Kenneth O. Stanley. Artificial Life journal, Vol. 18, No. 4, Pages 331-363. MIT Press, 2012 http://www.mitpressjournals.org/doi/pdf/10.1162/ARTL_a_00071 Manuscript: http://eplex.cs.ucf.edu/publications/2012/risi-alife12 In the past four years, a significant body of research from a growing HyperNEAT community has emerged. Many of these publications, source code, and a short online introduction to the technique are available at the HyperNEAT Users Page: http://eplex.cs.ucf.edu/hyperNEATpage/HyperNEAT.html Additional ES-HyperNEAT specific information is available here: http://eplex.cs.ucf.edu/ESHyperNEAT/ Best, Sebastian Risi -- Dr. Sebastian Risi Postdoctoral Fellow Creative Machines Laboratory Cornell University Email: sebastian.risi at cornell.edu Tel: (407) 929-5113 Web: http://www.cs.ucf.edu/~risi/
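To make the idea of deducing neuron density and positions from implicit information concrete, here is a toy Python sketch in the spirit of the quadtree-style subdivision the approach is based on: space is split more finely wherever an underlying pattern varies, so point density follows pattern variability. The pattern function, variance threshold, and depth limit are illustrative assumptions, not the published algorithm or its parameters.

import math

def pattern(x, y):
    # Stand-in for a HyperNEAT-style weight pattern; any smooth function works.
    return math.sin(3 * x) * math.cos(5 * y)

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def place_points(x, y, size, depth=0, max_depth=5, var_threshold=0.03):
    """Recursively split a square region; emit one point per quiet region."""
    half = size / 2.0
    centers = [(x - half / 2, y - half / 2), (x + half / 2, y - half / 2),
               (x - half / 2, y + half / 2), (x + half / 2, y + half / 2)]
    vals = [pattern(cx, cy) for cx, cy in centers]
    if depth < max_depth and variance(vals) > var_threshold:
        pts = []
        for cx, cy in centers:
            pts += place_points(cx, cy, half, depth + 1,
                                max_depth, var_threshold)
        return pts
    # Low variance: one representative point suffices for this region.
    return [(x, y)]

points = place_points(0.0, 0.0, 2.0)  # square of side 2 centered at origin
print(len(points), "points placed; density follows pattern variability")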
From agostino.gibaldi at unige.it Thu Feb 14 12:36:32 2013 From: agostino.gibaldi at unige.it (Agostino Gibaldi) Date: Thu, 14 Feb 2013 18:36:32 +0100 Subject: Connectionists: CFP: From sensing machine to sensorimotor control Message-ID: <511D20A0.8050601@unige.it> Dear Researcher, Since the deadline is approaching quickly (22nd of February), we remind you that the PSPC Lab (www.pspc.unige.it) is organizing the Special Session *"From sensing machine to sensorimotor control"* within the International Joint Conference on Neural Networks (www.ijcnn2013.org), to be held in Dallas, Texas, August 4-9, 2013. The session is intended to be a follow-up of the EC-FP7 project EYESHOTS (www.eyeshots.it). The Special Session (see below for a more detailed description) aims to investigate how the mutual influence between the perception of the environment and the interaction with it can be extended to support co-evolution mechanisms of perceptual and motor processes. We would be glad to receive a contribution from you in the form of a short paper (8 pages). All received contributions will be refereed by a panel of experts according to the policies of the IJCNN conference. We apologize if you receive multiple copies of this call. Regards, Agostino Gibaldi *From sensing machine to sensorimotor control* Following the recent evolution of robotics and AI in different fields of application, the increasing complexity of the actions that an artificial agent needs to perform is directly dependent on the complexity of the sensory information that it can acquire and interpret, i.e. perceive. From this point of view, an efficient internal representation of the sensory information is the basis on which a robot can *develop a human-like capability* of interaction with the surrounding environment. Particularly within reachable distance, not only visual and auditory but also tactile and proprioceptive information becomes relevant to gain a comprehensive spatial cognition. This information, coming from different senses, can in principle be used to experience an *awareness of the environment*, both to actively *interact* with it and to *calibrate* the interaction itself. Besides, the early sensory and sensorimotor mechanisms, which at first glance may appear to be simple processes, are grounded in highly structured and complex algorithms that are far from being understood and modeled. By directly *integrating sensing modules and motor control*, the loop between action and perception is not just closed at the system level, but shortened at an inner one. The aim of this special session is to challenge the development of methodologies, concepts, algorithms and techniques that would serve as bricks on which to build and develop a *sensing machine*, i.e. an artificial agent capable of human-like behaviours. We invite original contributions that provide novel solutions addressing theoretical or practical aspects of computer vision, multidimensional signal processing, neural computation and modeling, machine learning, neural networks, and computational intelligence to be applied to sensory representation, sensorimotor interaction, and embodied learning. The action-perception loop has never been so close! Agostino Gibaldi, PhD agostino.gibaldi at unige.it PostDoc at the PSPC Research Group (http://pspc.unige.it/) University of Genova, Via Opera Pia 11A, 16145 Genova (IT)
From bazhenov at salk.edu Sun Feb 17 15:55:30 2013 From: bazhenov at salk.edu (Maxim Bazhenov) Date: Sun, 17 Feb 2013 12:55:30 -0800 Subject: Connectionists: postdoctoral position to study sleep rhythms Message-ID: <512143C2.8090403@salk.edu> Applications are invited for an NIH-funded post-doctoral position in the laboratory of Dr. Maxim Bazhenov at the University of California, Riverside to study mechanisms and functions of sleep oscillations. The successful candidate will join a research team involving the laboratories of Eric Halgren (UCSD), Terry Sejnowski (UCSD) and Maxim Bazhenov (UC Riverside). For relevant references, see Chen et al., Journal of Physiology (London), 2012, Jul 9; Bonjean et al., Journal of Neuroscience, 2012, 32(15):5250-63. The ultimate goal of this work is to understand the mechanisms and functions of sleep rhythms and the role of sleep oscillations in memory and learning. The successful candidate will be responsible for the design of a thalamocortical model generating sleep rhythms based on existing experimental data. These models will be used to understand underlying neural mechanisms, as well as to guide data analysis and produce novel experimental predictions. Qualified applicants are expected to have experience in computational/theoretical neuroscience and conductance-based neural modeling. Programming experience with C/C++ is required. Knowledge of PYTHON or MATLAB is a plus. The University of California offers excellent benefits. Salary is based on research experience. The initial appointment is for 1 year with a possibility of extension. Applicants should send a brief statement of research interests, a CV and the names of three references to Maxim Bazhenov at maksim.bazhenov at ucr.edu -- Maxim Bazhenov, Ph.D. Professor, Cell Biology and Neuroscience University of California Riverside, CA 92521 Ph: 951-827-4370 http://biocluster.ucr.edu/~mbazhenov/
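As a rough illustration of what "conductance-based neural modeling" involves in practice, the sketch below integrates a minimal single-compartment model with a leak current and one gated potassium-like current. It is a generic textbook-style reduction with placeholder parameters, not the thalamocortical models of the papers cited above, which include many more currents and synaptic interactions.

import math

def simulate(T=200.0, dt=0.05, I_ext=1.5):
    # Illustrative parameters only; real models fit these to data.
    C = 1.0                              # membrane capacitance
    g_L, E_L = 0.1, -65.0                # leak conductance and reversal
    g_K, E_K, tau_n = 2.0, -90.0, 10.0   # gated K+-like current
    V, n = -65.0, 0.0                    # membrane potential, gating variable
    trace = []
    for k in range(int(T / dt)):
        # Steady-state activation of the gate as a sigmoid of voltage.
        n_inf = 1.0 / (1.0 + math.exp(-(V + 45.0) / 5.0))
        # Each current takes the conductance-based form I = g * gate * (E - V).
        I_leak = g_L * (E_L - V)
        I_K = g_K * n * (E_K - V)
        # Forward-Euler update of voltage and gate.
        V += dt * (I_leak + I_K + I_ext) / C
        n += dt * (n_inf - n) / tau_n
        trace.append((k * dt, V))
    return trace

print(simulate()[-1])  # final (time, voltage) sample after 200 ms

Real thalamocortical sleep-rhythm models add sodium, calcium and h-currents plus synaptic coupling across cell types; the point here is only the form of the update loop.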
From borisyuk at math.utah.edu Wed Feb 20 01:41:34 2013 From: borisyuk at math.utah.edu (Alla Borisyuk) Date: Tue, 19 Feb 2013 23:41:34 -0700 Subject: Connectionists: CNS*2013: Deadline extension In-Reply-To: References: <450980859.825207.1360966494245.JavaMail.mcapp@membe1-vmapp15.inetuhosted.net> Message-ID: Organization for Computational Neurosciences (OCNS) 22nd Annual Meeting University of Paris "René Descartes", Paris, France July 13-18, 2013 The deadline for abstract submission and author registration has now been extended. IMPORTANT!!!! Please note that IT IS REQUIRED THAT ONE OF THE AUTHORS REGISTERS FOR THE MEETING BEFORE SUBMITTING AN ABSTRACT. In the case that the abstract is not accepted for presentation, the registration fee will be refunded. NEW AND FINAL Deadlines: 23 Feb 2013 Latest date for Member applications before abstract submission 25 Feb 2013 Abstract submission closes (11:00 pm Pacific time USA) Please visit: http://www.cnsorg.org/cns-2013-abstract-submission -------------------------------------- The main meeting (July 14 - 16, 2013) will be preceded by a day of tutorials (July 13) and followed by two days of workshops (July 17-18). The conference banquet will be held in the Musée des Arts-Forains on July 15. http://www.cnsorg.org/cns-2013-paris Confirmed Invited Speakers: Sophie Denève (ENS-Paris) Simon Laughlin (University of Cambridge) Nikos Logothetis (Max Planck Institute Tübingen) Rafael Yuste (Columbia University) ------------------------------ OCNS is the international member-based society for computational neuroscientists. Become a member to be eligible for travel awards and more. Visit our website for more information: http://www.cnsorg.org ---------------------------------------------------------------------- We apologize if you receive multiple copies of this message
From d.polani at herts.ac.uk Thu Feb 14 20:30:36 2013 From: d.polani at herts.ac.uk (Daniel Polani) Date: Fri, 15 Feb 2013 01:30:36 +0000 Subject: Connectionists: Research Fellow in Agent Learning/Adaptation Algorithms Message-ID: <20765.36796.842641.507881@thelma.stca.herts.ac.uk> RESEARCH FELLOW IN AGENT LEARNING/ADAPTATION ALGORITHMS ------------------------------------------------------------------ Adaptive Systems Research Group (http://adapsys.feis.herts.ac.uk/) School of Computer Science University of Hertfordshire, UK (www.herts.ac.uk) Research Fellowship in Agent/Robot Learning Algorithms Salary per annum: UH6 - 25,504-30,424 GBP pa (depending on qualifications and experience) Contact for informal inquiries: Dr. Daniel Polani (E-mail: d.polani at herts.ac.uk) PROJECT AND REQUIREMENTS ------------------------ A Research Fellow post is available in the EU Framework VII funded project CORBYS (Cognitive Control Framework for Robotic Systems). As part of a European project, this full-time research post will allow the postholder to pursue research into novel methods for self-motivated behaviour generation, behaviour anticipation, intentionality and initiative detection, based on principled information-theoretic approaches, in the context of robotic agents. The CORBYS consortium consists of several European partners in Germany, UK, Belgium, Spain, Norway and Slovenia. The University of Hertfordshire team is involved in the development of novel algorithms for the above tasks.
The development of the algorithms and software, as well as learning and adaptation algorithms suitable to run on physical robots, based on previous research expertise by the project team, forms a central part of the research. Applicants for the post should have a strong postgraduate degree (MSc or PhD) in a quantitative research-oriented discipline, such as computer science, mathematics or physics. The post requires a strong mathematical background, with emphasis on the areas of probabilistic modeling, information theory and/or stochastic control. Excellent programming skills are essential. It is desirable for applicants to have experience in robot learning, machine learning, or related areas; experience with robotic software development and relevant frameworks such as ROS is a plus. Applicants will have a high degree of motivation and, at the same time, the ability to work both independently and in collaboration with the other investigators in the group and the project consortium in an exciting and ambitious research project. FURTHER INFORMATION ------------------- The postholder will be a member of the Adaptive Systems Research Group (http://adapsys.feis.herts.ac.uk/) at the University of Hertfordshire in the School of Computer Science, which includes more than 30 research staff members (postdocs and PhD students). The Adaptive Systems Research Group is an enthusiastic, vibrant and highly innovative multidisciplinary research group with an excellent international research track record, which includes work on principled mathematical methods to construct biologically inspired models for Artificial Intelligence, cognitive embodied systems and Artificial Life. Research in Computer Science at the University of Hertfordshire has been recognized as excellent by the latest Research Assessment Exercise, with 55% of the research submitted being rated as world leading or internationally excellent. The University of Hertfordshire itself ranks 27th in England for post-2008 RAE-funding in Computer Science and Informatics. The university is located in Hatfield, less than 25 minutes by train from London Kings Cross and with convenient access to Stansted, Luton and Heathrow airports and, via St. Albans Thameslink, also to Gatwick airport. The position is full-time. The work will be based at the University of Hertfordshire and may include short stays at European partner institutions. The position is based on a fixed-term contract ending on 31 January 2015. The position is to be filled as soon as possible. CONTACT AND APPLICATION ----------------------- All formal applications must be made via the Human Resources Department at the University of Hertfordshire: http://web-apps.herts.ac.uk/uhweb/apps/hr/research-vacancies.cfm The above website will allow potential applicants to find out more information about the post, including a detailed job and person specification. Please consult this website in order to evaluate your suitability for the post. Note that under current UKBA regulations, the University is unlikely to be able to get a work permit in respect of this post. We can therefore only accept applications from people who will have the right to work in the UK for at least one year from the date of appointment. For other informal inquiries related to the post please contact Dr. Daniel Polani (d.polani at herts.ac.uk). Closing date: 12 March 2013 ----------------------------------------------------- Dr.
Daniel Polani Reader in Artificial Life Adaptive Systems Research Group The University of Hertfordshire, School of Computer Science College Lane, Hatfield, Hertfordshire AL10 9AB, United Kingdom URL: http://homepages.feis.herts.ac.uk/~comqdp1 E-mail: d.polani at herts.ac.uk Fax: +44-1707-284-303 Tel: +44-1707-284-380
From daniele.marinazzo at ugent.be Wed Feb 20 15:33:43 2013 From: daniele.marinazzo at ugent.be (Daniele Marinazzo) Date: Wed, 20 Feb 2013 21:33:43 +0100 Subject: Connectionists: Workshop at CNS 2013 - announcement and call for short talks Message-ID: Hi everybody, we are glad to announce a workshop at the next CNS meeting in Paris on "Network neuroscience: structure and dynamics". http://www.namasen.net/drupal/sites/CNS2013Workshop In order to complement the talks of the eight invited speakers, we are glad to accept four contributed short talks (15 minutes) related to the themes of the workshop. If interested, please send a tentative title and abstract to Michele Giugliano (michele.giugliano at ua.ac.be) or Daniele Marinazzo (daniele.marinazzo at ugent.be). Thanks, and looking forward to seeing you in Paris Daniele and Michele -- Daniele Marinazzo -- Department of Data Analysis Faculty of Psychology and Pedagogical Sciences, Gent University Henri Dunantlaan 1, B-9000 Gent, Belgium +32 (0) 9 264 6375 http://users.ugent.be/~dmarinaz/ http://helpdesk.ugent.be/e-maildisclaimer.php
From dayan at gatsby.ucl.ac.uk Fri Feb 15 03:13:21 2013 From: dayan at gatsby.ucl.ac.uk (Peter Dayan) Date: Fri, 15 Feb 2013 08:13:21 +0000 Subject: Connectionists: Postdoctoral Training Fellowships @ Gatsby In-Reply-To: <20110926204008.GF1049@gatsby.ucl.ac.uk> References: <20101011230824.GA1449@gatsby.ucl.ac.uk> <20110926204008.GF1049@gatsby.ucl.ac.uk> Message-ID: <20130215081321.GA7247@gatsby.ucl.ac.uk> The Gatsby Computational Neuroscience Unit invites applications for one or more postdoctoral training fellowships in theoretical neuroscience and related areas. Research in the Unit focuses on the interpretation of neural data, population coding, perceptual processing, neural dynamics, neuromodulation, and various aspects of learning. The Unit also has significant interests across a range of areas in machine learning. For further details of our research please see: http://www.gatsby.ucl.ac.uk/research.html Details are available through http://www.gatsby.ucl.ac.uk/vacancies/Research%20Associate%202013%20%28TN%29.html Academic enquiries should be directed to: dayan at gatsby.ucl.ac.uk, maneesh at gatsby.ucl.ac.uk or pel at gatsby.ucl.ac.uk. Applications must be made online via the UCL job vacancies website: https://atsv7.wcn.co.uk/search_engine/jobs.cgi?SID=amNvZGU9MTMwOTk3OSZ2dF90ZW1wbGF0ZT05NjUmb3duZXI9NTA0MTE3OCZvd25lcnR5cGU9ZmFpciZicmFuZF9pZD0wJnBvc3RpbmdfY29kZT0yMjQmcmVxc2lnPTEzNjA4MTgwOTQtMzkwOTM2ZmI5MGY3YWIxZmE4YjIzYzk1NDM2MTI0MzVjNzMwYjVhZg== The closing date for applications is 11th March, 2013. Interviews for shortlisted candidates will be held in April 2013.
From holger.schwenk at lium.univ-lemans.fr Thu Feb 21 10:23:09 2013 From: holger.schwenk at lium.univ-lemans.fr (Holger Schwenk) Date: Thu, 21 Feb 2013 16:23:09 +0100 Subject: Connectionists: Two Postdoc Positions in Machine Learning applied to Natural Language Processing at LIUM, France Message-ID: <51263BDD.6080208@lium.univ-lemans.fr> Two Postdoc Positions in Machine Learning applied to Natural Language Processing The computer science laboratory of the University of Le Mans (LIUM) has openings for two postdoc positions in the field of machine learning applied to natural language processing. We are particularly interested in candidates with proven experience in areas like - neural networks - deep learning - structure learning - fast implementations on GPU Knowledge in one or several of the following application areas is a strong plus - natural language processing - language modeling - statistical machine translation - automatic speech recognition The postdoc positions are immediately available. The initial appointment is for one year, renewable for up to three years. Competitive salaries are available, including health care and other social benefits. The working language is English or French. LIUM has performed research in machine learning applied to statistical machine translation and large vocabulary speech recognition for several years. In particular, we are working on the use of continuous space methods. The postdoc candidates are expected to continue the research along these lines. Possible directions could be deep learning, unsupervised training, distributed training, etc. We are also particularly interested in machine learning techniques involving structured knowledge. LIUM is participating in several international projects, financed by the European Commission, DARPA and the French government. We collaborate with leading research groups in the USA and Europe. A large computer cluster is available to support the research (500 CPU cores with a total of 6 TBytes of memory and more than 130 TBytes of RAID disk space). We also own a cluster with 10 Tesla GPU cards, connected by a fast Infiniband network. Le Mans is located between Paris and the Atlantic Ocean. Both can be reached in about 1 hour by high speed train. The Loire valley with many wineries and other attractions is just a short drive away ... For more information, please contact Holger Schwenk by email: Holger.Schwenk at lium.univ-lemans.fr
From jlam at bccn-tuebingen.de Wed Feb 13 06:56:08 2013 From: jlam at bccn-tuebingen.de (Judith Lam) Date: Wed, 13 Feb 2013 12:56:08 +0100 Subject: Connectionists: Bernstein Conference 2013 - Call for Workshop proposals Message-ID: <511B7F58.2050009@bccn-tuebingen.de> Call for Workshop proposals: Bernstein Conference 2013 Deadline of proposal submission: April 2, 2013 ************************************************************** Workshops September 24-25, 2013 Main Conference September 25-27, 2013 ************************************************************** The Bernstein Conference on Computational Neuroscience started out as the annual meeting of the Bernstein Network (www.nncn.de) and has become the largest European Conference in Computational Neuroscience in recent years. This year, the Conference is organized by the Bernstein Center Tuebingen and will take place *September 25-27, 2013*.
The Bernstein Conference is a single-track conference, covering all aspects of Computational Neuroscience and Neurotechnology. Sessions for poster presentations are an integral part of the conference. The call for abstract submission for posters will follow soon. This year the Bernstein Conference will feature, for the first time, a series of *pre-conference workshops* on *September 24-25, 2013*. The goal is to provide an informal forum for the discussion of timely research questions and challenges. Controversial issues, open problems, and comparisons of competing approaches are encouraged. DETAILS FOR WORKSHOP PROPOSALS: The submission form can be downloaded from the conference website (http://www.bernstein-conference.de). Deadline for submission of proposals: April 2, 2013. Workshop times: Sept 24, 14:00 - 18:30 & Sept 25, 9:00 - 12:30. You may apply for a half-day workshop, but preference will be given to full-day workshops. Workshop costs: The Bernstein Conference does not provide any financial support, but will provide 5 waivers per workshop (assigned by organizers). Please find details on registration costs on our website http://www.bernstein-conference.de. For more information on the conference, please visit the website: http://www.bernstein-conference.de IMPORTANT DATES: Workshop proposal submission deadline: April 2, 2013 Registration start: May 1, 2013 Early registration deadline: June 1, 2013 CONFERENCE DATE AND VENUE: Workshops September 24-25, 2013, Neue Aula, Geschwister Scholl Platz, Tuebingen, Germany Main Conference September 25-27, 2013, Brechtbau, Wilhelmstr. 50, Tuebingen, Germany PhD STUDENT SYMPOSIUM: September 28, 2013 PROGRAM COMMITTEE: Matthias Bethge, Michael Black, Michael Brecht, Jakob Macke, Anthony Movshon, Felix Wichmann, Fred Wolf ORGANIZING COMMITTEE: Matthias Bethge (General Chair), Judith Lam, Jakob Macke, Felix Wichmann We look forward to seeing you in Tuebingen in September! -- -- Judith Lam Executive Coordinator Bernstein Center for Computational Neuroscience Tübingen Eberhard Karls University of Tübingen Max Planck Institute for Biological Cybernetics http://www.bccn-tuebingen.de/about-bccn/contact.html Otfried-Müller-Str. 25, 72076 Tübingen Tel: +49 7071 29 89019 Fax: +49 7071 29 25015
From jonathan.touboul at gmail.com Fri Feb 15 05:44:35 2013 From: jonathan.touboul at gmail.com (Jonathan Touboul) Date: Fri, 15 Feb 2013 11:44:35 +0100 Subject: Connectionists: Postdoctoral position at INRIA and Collège de France (Paris) in mathematical neurosciences Message-ID: <0E75FBF7-17FC-42CC-A283-F4F2C0B2FA2F@gmail.com> We invite applications for an INRIA postdoctoral position to be held jointly in the Mathematical Neuroscience Team (CIRB, Collège de France) and team SYSIPHE of INRIA Paris-Rocquencourt, on the exciting subject of applying mathematics to neuroscience. The topic of the research is to analyze the dynamics of infinite-dimensional dynamical systems in the presence of multiple timescales. This includes in particular delayed differential equations, partial differential equations, integral equations and integro-differential equations. Such problems are motivated by the analysis of models of neuronal activity. One possible direction of the research is phenomena related to the presence of sensitive dynamics such as canards and mixed-mode oscillations, which are one of the signatures of slow/fast dynamics.
One possible goal is to extend the existing finite-dimensional theory of canards and folded singularities to infinite dimensions. The postdoc will be given freedom to influence or possibly redesign the research direction. Best suited for this position are candidates with a background in applied mathematics and dynamical systems. Familiarity with computational neuroscience and software for analyzing dynamical systems (e.g. AUTO) is an asset. The position is funded for one year, renewable; it is open to young PhDs (or soon-to-be PhDs) with at most one year of postdoctoral experience, and offers an attractive salary, benefits and social security. Candidates will be selected through a national-level selection committee of INRIA. Interested applicants are invited to apply online: http://www.inria.fr/en/institute/recruitment/offers/post-doctoral-research-fellowships/campaign-2013/(view)/details.html?id=PNGFK026203F3VBQB6G68LOE1&LOV5=4508&LG=EN&Resultsperpage=20&nPostingID=7110&nPostingTargetID=12506&option=52&sort=DESC&nDepartmentID=19 The postdoc will take place in part in the historic building of the Collège de France, very well located in the quartier latin of Paris, in close relationship with different high-level institutions (ENS Paris, Institut Curie, Collège de France). The scientific life in Paris is very exciting and lively. Inquiries should be directed to Martin Krupa (maciej.krupa at inria.fr) or Jonathan Touboul (jonathan.touboul at inria.fr). -- Jonathan Touboul, PhD Mathematical Neuroscience Lab, CIRB - Collège de France & INRIA, BANG Laboratory 11, Place Marcelin Berthelot, 75005 Paris Phone: (+33) 1 44 27 13 88 http://www-roc.inria.fr/bang/JT
From kmtn at atr.jp Sat Feb 16 08:47:15 2013 From: kmtn at atr.jp (Yukiyasu Kamitani) Date: Sat, 16 Feb 2013 22:47:15 +0900 Subject: Connectionists: Researcher position at ATR, Kyoto, Japan Message-ID: <680AA22B-B373-45E7-B9FE-A765938618C4@atr.jp> Researcher position at Department of Neuroinformatics (Kamitani group), ATR, Kyoto, Japan Researcher positions (equivalent to postdocs) are available in Dr. Kamitani's group at ATR for research in data-driven neural prediction models for behavior and mental contents. We aim to build neural prediction models using machine learning based analysis of large-scale behavioral and neural data. Applicants would have opportunities for research combining experiments and computational modeling using functional MRI data in humans (collected at the ATR Brain Imaging Center) and ECoG data in humans and monkeys (collaboration with neurosurgeons and neurophysiologists at Osaka Univ., Niigata Univ., and others). Our projects would involve data analysis of natural images/movies/text and behavioral measurements in realistic settings as well as massive neural data. Thus, we are seeking applicants from broad backgrounds including neuroscience, machine learning, computer vision, and natural language processing. We are also specifically looking for an applicant with experience in sleep EEG analysis for our sleep project. The position is for one year, renewable. The start date is negotiable. Applicants should send a CV, reprints (pdf) of representative papers, and names of two references to dni-info at atr.jp. Yukiyasu Kamitani, Ph.D.
ATR Computational Neuroscience Laboratories 2-2-2 Hikaridai, Keihanna Science City, Kyoto 619-0288, JAPAN URL: http://www.cns.atr.jp/dni/
From lucy.davies4 at plymouth.ac.uk Tue Feb 19 08:56:53 2013 From: lucy.davies4 at plymouth.ac.uk (Lucy Davies) Date: Tue, 19 Feb 2013 13:56:53 +0000 Subject: Connectionists: 2nd Call for Papers: The Lure of the New 2013 Message-ID: Call for abstracts (Deadline 1st March): The Cognition Institute is a new trans-disciplinary research centre focused on understanding human cognition. We believe that through forging links with researchers from psychology, cognitive robotics, neuroscience, biology, humanities and the arts we can develop new ways of thinking about cognition. In our 1st international conference, we will explore how novelty and creativity are key drivers of human cognition. Each of our themed symposia will bring a different approach to this topic, and will cover such areas as embodied cognition, auditory neuroscience and psychophysics, language development, mental imagery, creativity and cognition, the relationship between the arts and sciences, modelling and imaging of brain processes & deception research. We welcome abstracts in any of these areas and are very keen for submissions which take a trans-disciplinary approach to cognition research. Symposia * Computational Modelling of Brain Processes (Thomas Wennekers, Ingo Bojak, Chris Harris, Jonathan Waddington) * Embodied Cognition and Mental Simulation (Haline Schendan, Diane Pecher, Rob Ellis, Patric Bach) * Current trends in deception research (Giorgio Ganis, Gershon Ben-Shakhar) * Sounds for Communication (Sue Denham, Roy Patterson, Judy Edworthy, Sarah Collins) * Developments in infant speech perception (Katrin Skoruppa, Silvia Benavides-Varela, Caroline Floccia, Laurence White, Ian Howard) * Engineering Creativity - can the arts help scientific research more directly? (Alexis Kirke, Greg B. Davies, Simon Ingram) * Imagery, Dance and Creativity (Jon May, Scott deLaHunta, Emma Redding, Tony Thatcher, Phil Barnard, John Matthias, Jane Grant) As well as the symposia, there will be a panel discussion drawing together all the themes of the conference, a performance by Emma Redding & Tony Thatcher (as part of the Imagery, Dance and Creativity symposium) and keynote talks by: - Linda Lanyon (Head of Programs at the INCF): Toward globally collaborative science: the International Neuroinformatics Coordinating Facility - Guy Orban (Dept. of Neuroscience, University of Parma): Finding a home for the mirror neurons in human premotor cortex. In addition, the evening programme includes a reception and CogTalk debate to mark the official launch of the Cognition Institute, and a special film screening in association with SciScreen. http://cognition.plymouth.ac.uk/annual-conference-lure-new/ The programme can be found here: http://cognition.plymouth.ac.uk/annual-conference-lure-new/programme/ Registration is now open and details of how to apply can be found here: http://cognition.plymouth.ac.uk/annual-conference-lure-new/registration/ Deadline for abstract submission: 1st March 2013 Abstracts should be emailed to info.cognition at plymouth.ac.uk in Word doc or docx format, no longer than 1 A4 page, including all refs, title, all authors' names and affiliations, and the corresponding author's contact address and email. For all conference enquiries please contact Lucy Davies at: info.cognition at plymouth.ac.uk.
Lucy Davies Cognition Institute Plymouth University Room A222, Portland Square Plymouth UK PL4 8AA Tel: +44 (0)1752 584920 Website: http://www1.plymouth.ac.uk/research/cognition/Pages/default.aspx Like us on Facebook: http://www.facebook.com/PlymCogInst Follow us on Twitter: https://twitter.com/PlymCogInst YouTube Channel: http://www.youtube.com/user/PlymouthCognition?feature=watch
From michel.verleysen at uclouvain.be Wed Feb 20 02:35:52 2013 From: michel.verleysen at uclouvain.be (Michel Verleysen) Date: Wed, 20 Feb 2013 08:35:52 +0100 Subject: Connectionists: ESANN 2013: program Message-ID: <001a01ce0f3c$e98dc5c0$bca95140$@uclouvain.be> We apologize for possible duplicates of this message sent to distribution lists. ESANN 2013: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning Bruges, Belgium, 24-25-26 April 2013 http://www.esann.org/ Preliminary program The preliminary program of the ESANN 2013 conference is now available on the Web: http://www.esann.org/ For those of you who maintain WWW pages including lists of related machine learning and artificial neural networks sites: we would appreciate if you could add the above URL to your list; thank you very much! Over its 21 years, the ESANN conference has become a major event in the field of neural computation and machine learning. ESANN is a selective conference focusing on fundamental aspects of artificial neural networks, machine learning, statistical information processing and computational intelligence. Mathematical foundations, algorithms and tools, and applications are covered. The titles of the sessions are: - Machine Learning Methods for Processing and Analysis of Hyperspectral Data - Recurrent networks and modeling - Dimensionality reduction - Image, signal and time series analysis - Feature selection - Reinforcement learning, control and optimization - Machine Learning for multimedia applications - Clustering - Regression and forecasting - Developments in kernel design - Human Activity and Motion Disorder Recognition: towards smarter Interactive Cognitive Environments - Human activity recognition competition - Classification - Sparsity for interpretation and visualization in inference models The program of the conference can be found at http://www.esann.org/, together with practical information about the conference venue, registration, etc. Other information can be obtained by sending an e-mail to esann at uclouvain.be. Venue ------ The conference will be held in Bruges (also called "Venice of the North"), one of the most beautiful medieval towns in Europe. Bruges can be reached by train from Brussels in less than one hour (frequent trains). Designated as the "Venice of the North", the city has preserved all the charms of the medieval heritage. Its centre, which is inscribed on the Unesco World Heritage list, is in itself a real open air museum. Steering and local committee ---------------------------- François Blayo Ipseite (CH) Gianluca Bontempi Univ. Libre Bruxelles (B) Marie Cottrell Univ. Paris I (F) Jeanny Hérault INPG Grenoble (F) Mia Loccufier Univ. Gent (B) Bernard Manderick Vrije Univ. Brussel (B)
Jean-Pierre Peters FUNDP Namur (B) Joos Vandewalle KUL Leuven (B) Michel Verleysen UCL Louvain-la-Neuve (B) Louis Wehenkel Univ. Liège (B) Scientific committee -------------------- Cecilio Angulo Univ. Polit. de Catalunya (E) Aluizio Araújo Univ. Federal de Pernambuco (Brazil) Miguel Atencia Univ. Malaga (E) Michael Aupetit CEA (F) Michael Biehl University of Groningen (NL) Martin Bogdan Univ. Leipzig (D) Hervé Bourlard IDIAP Martigny (CH) Charles Bouveyron Univ. Paris 1 Panthéon-Sorbonne (F) Antonio Braga Federal Univ. of Minas Gerais (Brazil) Joan Cabestany Univ. Polit. de Catalunya (E) Stéphane Canu Inst. Nat. Sciences App. (F) Sylvain Chevallier University of Versailles (F) Valentina Colla Scuola Sup. Sant'Anna Pisa (I) Nigel Crook Oxford Brookes University (UK) Holk Cruse Universität Bielefeld (D) Giovanni Da San Martino Univ. of Padova (I) Bernard de Baets Univ. Gent (B) Kris De Brabanter K. U. Leuven (B) Massimo De Gregorio Istituto di Cibernetica-CNR (I) Dante Del Corso Politecnico di Torino (I) Wlodek Duch Nicholas Copernicus Univ. (PL) Marc Duranton CEA Saclay (F) Richard Duro Univ. Coruna (E) Deniz Erdogmus Oregon Health & Science Univ. (USA) Anibal Figueiras-Vidal Univ. Carlos III Madrid (E) Jean-Claude Fort Université Paris Descartes (F) Felipe M. G. França Univ. Federal Rio de Janeiro (Brazil) Leonardo Franco Univ. Malaga (E) Damien François Université catholique de Louvain (B) Colin Fyfe Univ. Paisley (UK) João Gama Univ. do Porto (P) Manjunath Gandhi Jacobs University (D) Luis Gonzalez Abril University of Sevilla (E) Marco Gori Univ. Siena (I) Bernard Gosselin Univ. Mons (B) Manuel Grana UPV San Sebastian (E) Barbara Hammer Bielefeld Univ. (D) Martin Hasler EPFL Lausanne (CH) Verena Heidrich-Meisner CAU Kiel (D) Tom Heskes Univ. Nijmegen (NL) Katerina Hlavackova-Schindler Univ. Life Sci. & Natural Resources (A) Christian Igel Univ. Copenhagen (DK) Jose Jerez Univ. Malaga (E) Gonzalo Joya Univ. Malaga (E) Christian Jutten INPG Grenoble (F) Juha Karhunen Aalto Univ. (FIN) Stefanos Kollias National Tech. Univ. Athens (GR) Jouko Lampinen Aalto Univ. (FIN) Petr Lansky Acad. of Sci. of the Czech Rep. (CZ) Beatrice Lazzerini Univ. Pisa (I) John Lee Univ. cat Louvain (B) Amaury Lendasse Aalto Univ. (FIN) Priscila M. V. Lima Univ. Fed. Rural Rio de Janeiro (Brazil) Paulo Lisboa Liverpool John Moores Univ. (UK) José D. Martín Univ. of Valencia (E) Erzsebet Merenyi Rice Univ. (USA) David Meunier Université Claude Bernard Lyon 1 (F) Anke Meyer-Bäse Florida State University (USA) Yoan Miche Aalto Univ. (FIN) Alessio Micheli University of Pisa (I) Erkki Oja Aalto Univ. (FIN) Tjeerd olde Scheper Oxford Brookes University (UK) Madalina Olteanu Univ. Paris 1 Panthéon-Sorbonne (F) Gilles Pagès Univ. Pierre et Marie Curie (Paris 6) (F) Hélène Paugam-Moisy INRIA Saclay (F) Kristiaan Pelckmans Uppsala University (SE) Gadi Pinkas The Center for Acad. Studies (Israel) Alberto Prieto Universidad de Granada (E) Jose Principe Univ. of Florida (USA) Didier Puzenat Univ. Antilles-Guyane (F) John Quinn Makerere Univ., Kampala (Uganda) Sébastien Rebecchi INRIA Saclay (F) Jean-Pierre Rospars INRA Versailles (F) Fabrice Rossi Univ. Paris 1 Panthéon-Sorbonne (F) Ulrich Rückert Bielefeld University (D) David Saad Aston Univ. (UK) Francisco Sandoval Univ. Malaga (E) Jose Santos Reyes Univ. Coruna (E) Frank-Michael Schleif Univ. Bielefeld (D) Benjamin Schrauwen Univ. Gent (B) Udo Seiffert Fraunhofer-Institute IFF Magdeburg (D) Jochen Steil Univ. Bielefeld (D) Johan Suykens K. U. Leuven (B)
Peter Tino Univ. of Birmingham (UK) Claude Touzet Univ. Provence (F) Thiago Turchetti Maia Vetta Group (Brazil) Marc Van Hulle K. U. Leuven (B) Alfredo Vellido Polytechnic University of Catalonia (E) Thomas Villmann Univ. Applied Sciences Mittweida (D) Willem Waegeman Univ. Gent (B) Heiko Wersing Honda Research Institute Europe (D) Axel Wismüller Univ. of Rochester, New York (USA) Dietlind Zühlke Fraunhofer Inst. for App. Information (D) ======================================================== ESANN - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning http://www.esann.org/ * For submissions of papers, reviews, registrations: Michel Verleysen Univ. Cath. de Louvain - Machine Learning Group 3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium tel: +32 10 47 25 51 - fax: + 32 10 47 25 98 mailto:esann at uclouvain.be * Conference secretariat d-side conference services 24 av. L. Mommaerts - B-1140 Evere - Belgium tel: + 32 2 730 06 11 - fax: + 32 2 730 06 00 mailto:esann at uclouvain.be =======================================================
From muftimahmud at gmail.com Thu Feb 21 05:35:33 2013 From: muftimahmud at gmail.com (Mufti Mahmud) Date: Thu, 21 Feb 2013 11:35:33 +0100 Subject: Connectionists: Final CFP: 2013 International Conference on Brain Inspired Cognitive Systems (BICS 2013) Message-ID: Dear Fellow Researchers, On behalf of the General Chair Prof. Derong Liu, and the Program Chair Prof. Amir Hussain, I am delighted to announce the 6th International Conference on Brain Inspired Cognitive Systems (BICS 2013), to be held in Beijing, China (June 9-11, 2013). We would like to cordially invite you to submit your research work to BICS 2013. Special Session and Workshop proposals are also very welcome. We believe your participation and contribution to BICS 2013 will help a great deal in making it another highly acclaimed success like the previous ones. For details on submissions, please see the CFP below, and kindly circulate it to your colleagues. Many thanks and best regards, Mufti BICS 2013: the paper submission deadline is postponed to March 1, 2013 The 6th International Conference on Brain Inspired Cognitive Systems (BICS 2013) will be held on June 9-11, 2013, in Beijing, China. Conference website: http://www.conference123.org/bics2013/ Important Dates Paper submission deadline: March 1, 2013 Special session proposals deadline: February 25, 2013 Notification of paper acceptance: April 5, 2013 Final paper submission deadline: April 20, 2013 The IEEE Computational Intelligence Society and the International Neural Networks Society will be technical co-sponsors of BICS 2013. All accepted papers will be published in a volume of Springer's LNAI and indexed in EI Compendex. Selected papers will be published in special issues of several SCI journals, such as Soft Computing (Springer). Online submission link: https://www.easychair.org/conferences/?conf=bics2013 Beijing, as the capital of the People's Republic of China, is the nation's political, economic, and cultural center as well as China's most important center for international trade and communications. BICS 2013 aims to provide a high-level international forum for scientists, engineers, and educators working in the areas of Brain Inspired Cognitive Systems to present state-of-the-art sciences and technologies, and their applications in diverse fields.
The conference will feature plenary speeches given by worldwide renowned scholars, regular sessions with broad coverage, and special sessions focusing on some popular topics. -- Mufti Mahmud, PhD Marie Curie Research Fellow Theoretical Neurobiology & Neuroengineering Lab University of Antwerp T6.60, Universiteitsplein 1 2610 - Wilrijk, Belgium Lab: +32 3 265 2648 http://www.muftimahmud.co.nr/ & Assistant Professor (on leave) Institute of Information Technology Jahangirnagar University Savar, 1342 - Dhaka, Bangladesh. http://www.researcherid.com/rid/C-7752-2012 From retienne at jhu.edu Thu Feb 14 20:57:09 2013 From: retienne at jhu.edu (retienne) Date: Thu, 14 Feb 2013 20:57:09 -0500 Subject: Connectionists: Telluride 2013 Call for Participation Message-ID: <511D95F5.1060407@jhu.edu> *------------------------------------------------------------------------------------------------------------------ 2013 Neuromorphic Cognition Engineering Workshop* *Telluride, Colorado, June 30th - July 20th, 2013* *CALL FOR APPLICATIONS: Deadline is April 15th, 2013* NEUROMORPHIC COGNITION ENGINEERING WORKSHOP www.ine-web.org Sunday June 30th - Saturday July 20th, 2013, Telluride, Colorado We invite applications for a three-week summer workshop that will be held in Telluride, Colorado, Sunday June 30th - Saturday July 20th, 2013. The application deadline is *Monday, April 15th* and application instructions are described at the bottom of this document. The 2013 Workshop and Summer School on Neuromorphic Engineering is sponsored by the National Science Foundation, Institute of Neuromorphic Engineering, Qualcomm Corporation, The EU-Collaborative Convergent Science Network (CNS-II), University of Maryland - College Park, Institute for Neuroinformatics - University and ETH Zurich, Georgia Institute of Technology, Johns Hopkins University, Boston University, University of Western Sydney and the Salk Institute. *Directors:* Cornelia Fermüller, University of Maryland, College Park Ralph Etienne-Cummings, Johns Hopkins University Shih-Chii Liu, Institute of Neuroinformatics, UNI/ETH Zurich Timothy Horiuchi, University of Maryland, College Park *Workshop Advisory Board:* Andreas ANDREOU (The Johns Hopkins University) Andre van SCHAIK (University Western Sydney) Avis COHEN (University of Maryland) Barbara SHINN-CUNNINGHAM (Boston University) Giacomo INDIVERI (Institute of Neuroinformatics, UNI/ETH Zurich, Switzerland) Jonathan TAPSON (University Western Sydney, Australia) Malcolm SLANEY (Microsoft Research) Paul HASLER (Georgia Institute of Technology) Rodney DOUGLAS (Institute of Neuroinformatics, UNI/ETH Zurich, Switzerland) Shihab SHAMMA (University of Maryland) Tobi Delbruck (Institute for Neuroinformatics, Zurich) *Previous years' workshops can be found at:* http://ine-web.org/workshops/workshops-overview/index.html and last year's wiki is https://neuromorphs.net/nm/wiki/2011. *GOALS:* Neuromorphic engineers design and fabricate artificial neural systems whose organizing principles are based on those of biological nervous systems. Over the past 18 years, this research community has focused on the understanding of low-level sensory processing and systems infrastructure; efforts are now expanding to apply this knowledge and infrastructure to addressing higher-level problems in perception, cognition, and learning.
In this 3-week intensive workshop and through the Institute for Neuromorphic Engineering (INE), the mission is to promote interaction between senior and junior researchers; to educate new members of the community; to introduce new enabling fields and applications to the community; to promote on-going collaborative activities emerging from the Workshop, and to promote a self-sustaining research field. *FORMAT:* The three-week summer workshop will include background lectures on systems and cognitive neuroscience (in particular sensory processing, learning and memory, motor systems and attention), practical tutorials on emerging hardware design, mobile robots, hands-on projects, and special interest groups. Participants are required to take part in, and possibly complete, at least one of the proposed projects. They are furthermore encouraged to become involved in as many of the other activities proposed as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, some of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Projects and interest groups meet in the late afternoons, and after dinner. In the early afternoon there will be tutorials on a wide spectrum of topics, including analog VLSI, mobile robotics, vision and auditory systems, central-pattern-generators, selective attention mechanisms, cognitive systems, etc. *2013 TOPIC AREAS:* *1) Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces* Project Leaders: Shihab Shamma (UM-College Park), Malcolm Slaney (Microsoft), Barbara Shinn-Cunningham (Boston U), Edward Lalor (Trinity College, Dublin) Featuring: Chris Assad (NASA -- JPL) *2) Recognizing Manipulation Actions in Cluttered Environments from Vision and Sound* Project leaders: Cornelia Fermüller (UM-College Park), Andreas Andreou (JHU) Featuring: Bert Shi (HKUST, Hong Kong) and Ryad Benosman (UPMC, Paris) *3) Dendritic Computation in Neurons and Engineered Devices* Project Leaders: Klaus M. Stiefel (UWS, Australia), Jonathan Tapson (UWS, Australia) *4) Universal Neuromorphic Systems and Sensors for Real-Time Mobile Robotics* Project Leaders: Jorg Conradt (TUM, Munich), Francesco Galluppi (U. Manchester, UK), Shih-Chii Liu (INI-ETH, Zurich) and Ralph Etienne-Cummings (JHU) Featuring: Bert Shi (HKUST, Hong Kong) and Ryad Benosman (UPMC, Paris) *5) Emerging Technology and Discussion Group: Cognitive Computing with Emerging Nanodevices* Group Leaders: Omid Kavehei (U. Melbourne, Australia), Tara Julia Hamilton (UNSW, Australia) *6) Terry Sejnowski (Salk Institute) -- Computational Neuroscience (invitational mini-workshop)* *LOCATION AND ARRANGEMENTS:* The summer school will take place in the small town of Telluride, 9000 feet high in southwest Colorado, about a 6-hour drive from Denver (350 miles). Great Lakes Aviation and America West Express airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. The workshop is intended to be very informal and hands-on.
Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Wireless internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of PCs running LINUX and Microsoft Windows for the workshop projects. We encourage participants to bring along their personal laptop. No cars are required. Given the small size of the town, we recommend that you do not rent a car. Bring hiking boots, warm clothes, rain gear, and a backpack, since Telluride is surrounded by beautiful mountains. Unless otherwise arranged with one of the organizers, we expect participants to stay for the entire duration of this three-week workshop. *FINANCIAL ARRANGEMENTS:* Notification of acceptances will be mailed out around April 30th, 2013. The Workshop covers all your accommodation and facilities costs for the 3-week duration. You are responsible for your own travel to the Workshop; however, sponsored fellowships will be available as described below to further subsidize your cost. Registration Fees: For expenses not covered by federal funds, a Workshop registration fee is required. The fee is $1250 per participant for the 3-week Workshop. This is expected from all participants at the time of acceptance. Accommodations: The cost of a shared condominium, typically a bedroom in a shared condo for senior participants or a shared room for students, will be covered for all academic participants. Upgrades to private rooms or condos will cost extra. Participants from National Laboratories and Industry are expected to pay for these condominiums. Fellowships: This year we will offer two Fellowships to subsidize your costs: 1) Qualcomm Corporation Fellowship: Three non-corporate participants will have their accommodation and registration fees ($2750) directly covered by Qualcomm, and will be reimbursed for travel costs up to $500. Additional generous funding from Qualcomm will partially subsidize accommodation ($250) and registration fees ($250) for the top 13 participants that are not chosen for the Qualcomm or EU-CSNII Fellowships. 2) EU-CSNII Fellowship (http://csnetwork.eu/), which is funded by the 7th Research Framework Program FP7-ICT-CSNII-601167: The top 8 EU applicants will be reimbursed for their registration fees ($1250), subsistence/travel subsidy (up to Euro 2000) and accommodations cost ($1500). The registration and accommodations costs will go directly to the INE (the INE will reimburse the participant's registration fees after receipt from CSNII), while the subsistence/travel reimbursement will be provided directly to the participants by the CSNII at the University of Pompeu Fabra, Barcelona, Spain. *HOW TO APPLY:* Applicants should be at the level of graduate students or above (i.e. postdoctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage women and minority candidates to apply. Anyone interested in proposing or discussing specific projects should contact the appropriate topic leaders directly.
The application website is (after February 15th, 2013): http://ine-web.org/telluride-conference-2013/apply-info Application information needed: * contact email address * First name, Last name, Affiliation, valid e-mail address. * Curriculum Vitae (a short version, please). * One page summary of background and interests relevant to the workshop, including possible ideas for workshop projects. Please indicate which topic areas you would most likely join. * Two letters of recommendation (uploaded directly by references). Applicants will be notified by e-mail. 15th February, 2013 - Applications accepted on website 15th April, 2013 - Applications Due 30th April, 2013 - Notification of Acceptance -- ------------------------------------------------- Ralph Etienne-Cummings Professor Department of Electrical and Computer Engineering The Johns Hopkins University 105 Barton Hall 3400 N. Charles Street Baltimore, MD 21218 Tel: (410) 516 3494 Fax: (410) 516 2939 Email: retienne at jhu.edu URL: http://etienne.ece.jhu.edu/ From roby at hials.no Thu Feb 14 17:17:40 2013 From: roby at hials.no (Robin T. Bye) Date: Thu, 14 Feb 2013 23:17:40 +0100 Subject: Connectionists: ECMS 2013: Extended deadline In-Reply-To: <509856BD.1090408@hials.no> References: <509856BD.1090408@hials.no> Message-ID: <511D6284.7030207@hials.no> * Apologies for potential cross-posting * The ECMS 2013 conference will extend its paper submission deadline until 22 February 2013. Please see below for the call for papers. Kind regards from the ECMS organisation committee at Aalesund University College Robin T. Bye Program Chair and Conference Co-Chair 27TH EUROPEAN CONFERENCE ON MODELLING AND SIMULATION (ECMS 2013) http://ecms2013.hials.no Dear fellow researchers in areas involving modelling, simulation, and its applications, We would like to cordially invite you to join us for the 27th European Conference on Modelling and Simulation (ECMS 2013), to be held on May 27 - May 30 at Aalesund University College (AAUC), Norway. Please forward this information to colleagues and research networks. CONFERENCE CHAIRS * Conference Chair: Associate Professor and Prorector, Webjørn Rekdalsbakken, AAUC * Conference Co-Chair and Programme Chair: Associate Professor Robin T. Bye, AAUC * Programme Co-Chair: Professor Houxiang Zhang, AAUC KEYNOTE SPEAKERS Please visit http://ecms2013.hials.no/keynote-speakers for details on keynote speakers and abstracts. ECMS 2013 features outstanding researchers in their fields as keynote speakers: * Stephen Grossberg, Wang Professor of Cognitive and Neural Systems, Professor of Mathematics, Psychology, and Biomedical Engineering, and Founding Director of the Center for Adaptive Systems, and the Center of Excellence for Learning in Education, Science, and Technology, Boston University, Boston, USA * May-Britt Moser, Professor and Founding Director at the Kavli Institute for Systems Neuroscience and Centre for the Biology of Memory, NTNU, Trondheim, Norway * Peter D. Neilson, Associate Professor, Senior Visiting Fellow, and Founding Director of the Neuroengineering Laboratory, School of Electrical Engineering and Telecommunications, UNSW, Sydney, Australia * Megan D. Neilson, Lecturer, Researcher, and Founding Director of the Neuroengineering Laboratory, School of Electrical Engineering and Telecommunications, UNSW, Sydney, Australia * Sigal Berman, Lecturer, Researcher and Head of the Intelligent Systems M.Sc.
track in the Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel. CALL FOR PAPERS Please visit http://ecms2013.hials.no/call-for-papers to download the call for papers. We welcome paper submissions on all aspects of modelling and simulation, including theoretical and applied research. Accepted papers will be published in the conference proceedings. In addition, a small selection of high quality papers across all conference tracks may be published in well-known international journals. All submitted papers will be evaluated for the Best Conference Paper Award, and Best Student Paper Award. For details on deadlines, submissions, and fees, visit http://www.scs-europe.net/conf/ecms2013/deadlines.html The deadline for the full paper submission is February 15th, 2013. CONFERENCE TRACKS Please visit http://ecms2013.hials.no/conference-tracks for a complete listing of track descriptions and chairs. We now offer a variety of 16 tracks, including many of the traditional tracks but also many new ones. New tracks: ESE - Simulation, Experimental Science and Engineering in Maritime Operations (new chairs and different scope with maritime focus) MSRA - Modeling and Simulation in Robotic Applications (new) NEUROSIM - Simulation and Computational Neuroscience (new) SIMO - Simulation and Optimization (new) SIMVIS - Modeling and Simulation in Computer Vision for Image Understanding (new) SOCINT - Simulation of Social Interaction (new) SVT - Simulation and Visualization for Training and Education (new) Traditional tracks: ABS - Agent-Based Simulation CSM - Simulation of Complex Systems and Methodologies FES - Finance, Economics and Social Science HIPMOS - High Performance Modelling and Simulation IBS - Simulation in Industry, Business and Services IS - Simulation of Intelligent Systems LT - Discrete Event Modelling and Simulation in Logistics, Transport and Supply Chain Management MCT - Modelling, Simulation and Control of Technological Processes PM - Policy Modelling BACKGROUND The European Conference on Modelling and Simulation (ECMS) is the international conference dedicated to help define the state of the art in the field. For several years, ECMS has proven to be an outstanding forum for researchers and practitioners from different fields involved in creating, defining and building innovative simulation systems, simulation and modelling tools and techniques, and novel applications for modelling and simulation. SIMULATOR TOURS The local organizers and ECMS are very happy to announce an exciting programme that includes guided tours of the world's most advanced offshore simulator and world-renowned invited keynote speakers, thanks to financial support from local companies such as Rolls-Royce Marine, Offshore Simulator Centre, and Farstad Shipping. LOCAL ATTRACTIONS Please visit http://www.visitalesund-geiranger.com for travel information about Ålesund and the nearby UNESCO world heritage site Geirangerfjord. The town of Ålesund is beautifully situated on several islands on the coast of Sunnmøre, and is the gateway to some of the world's most famous fjords and natural attractions. The town has a strong and vibrant history, dating back to the Viking era, even though it was not granted municipal status until 1848. After being devastated in an enormous fire on January 23rd 1904, Ålesund was rebuilt in the Art Nouveau style that characterizes the town today with its many turrets, ornaments and colourful facades.
The town mountain Aksla has a spectacular panoramic view of the Sunnmøre Alps, the town centre, the ocean and the islands in the archipelago. Today Ålesund is a modern and rapidly growing town, and with its 45,000 inhabitants it is the natural commercial and industrial capital of the Sunnmøre region. The town has a strong tradition in the maritime industry, and is considered the fisheries capital of Norway. ABOUT AALESUND UNIVERSITY COLLEGE Aalesund University College (AAUC) is a university college with five faculties and more than 200 staff. We offer a wide range of study programmes in engineering, marine operations, biotechnology, business, and health. More information can be found on the ECMS office website http://www.scs-europe.net/conf/ecms2013 We look forward to welcoming you in Aalesund! Webjørn Rekdalsbakken Robin T. Bye Houxiang Zhang Conference Chair Conference Co-Chair & Programme Chair Programme Co-Chair From bowlby at bu.edu Fri Feb 15 10:09:34 2013 From: bowlby at bu.edu (Brian Bowlby) Date: Fri, 15 Feb 2013 10:09:34 -0500 Subject: Connectionists: FINAL CALL FOR ABSTRACTS: 17th ICCNS conference (February 28 contributed abstract submission deadline) Message-ID: <7352F4FE-BD47-48CD-A623-5C1ACB708FD3@bu.edu> SEVENTEENTH INTERNATIONAL CONFERENCE ON COGNITIVE AND NEURAL SYSTEMS (ICCNS) June 4 - 7, 2013 Boston University 677 Beacon Street Boston, Massachusetts 02215 USA http://cns.bu.edu/cns-meeting/conference.html Sponsored by the Boston University Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology (CompNet), and Center of Excellence for Learning in Education, Science, and Technology (CELEST) with financial support from the National Science Foundation This interdisciplinary conference is attended each year by approximately 300 people from 30 countries around the world. As in previous years, the conference will focus on solutions to the questions: HOW DOES THE BRAIN CONTROL BEHAVIOR? HOW CAN TECHNOLOGY EMULATE BIOLOGICAL INTELLIGENCE? The conference is aimed at researchers and students of computational neuroscience, cognitive science, neural networks, neuromorphic engineering, and artificial intelligence. It includes invited lectures and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems adapt to a changing world. The conference is particularly interested in exploring how the brain and biologically-inspired algorithms and systems in engineering and technology can learn. Single-track oral and poster sessions enable all presented work to be highly visible. Three-hour poster sessions with no conflicting events will be held on two of the conference days. Posters will be up all day, and can also be viewed during breaks in the talk schedule. This year's conference will include, in addition to regular invited and contributed talks and posters, two workshops on the topics: NEURAL DYNAMICS OF VALUE-BASED DECISION-MAKING AND COGNITIVE PLANNING and SOCIAL COGNITION: FROM BABIES TO ROBOTS See the url above for the complete program of invited speakers. CONFIRMED INVITED SPEAKERS Todd Braver (Washington University, St. Louis) Flexible neural mechanisms of cognitive control: Influences on reward-based decision-making Marisa Carrasco (New York University) Effects of attention on early vision Patrick Cavanagh (Université
Paris Descartes) Common functional architecture for spatial attention and perceived location Robert Desimone [Plenary Speaker] (Massachusetts Institute of Technology) Prefrontal-visual cortex interactions in attention Asif Ghazanfar (Princeton University) Evolving and developing communication through coupled oscillations Stephen Grossberg (Boston University) Behavioral economics and neuroeconomics: Cooperation, competition, preference, and decision-making Joy Hirsch (Columbia University Medical Center) Neural circuits for conflict resolution Roberta Klatzky (Carnegie Mellon University) Multi-modal interactions within and between senses Kevin LaBar (Duke University) Neural systems for fear generalization Randi Martin (Rice University) Memory retrieval and interference during language comprehension Andrew Meltzoff (University of Washington) How to build a baby with social cognition: Accelerating learning by generalizing across self and other Javier Movellan (University of California, San Diego) Optimal control approaches to the analysis and synthesis of social behavior Mary Potter (Massachusetts Institute of Technology) Recognizing briefly presented pictures: Feedforward processing? Pieter Roelfsema (The Netherlands Institute for Neuroscience) Neuronal mechanisms for perceptual organization Daniel Salzman (Columbia University) Cognitive signals in the amygdala Daniel Schacter [Plenary Speaker] (Harvard University) Constructive memory and imagining the future Wolfram Schultz (University of Cambridge) Neuronal reward and risk signals Helen Tager-Flusberg (Boston University) Identifying early neurobiological risk markers for autism spectrum disorder in the first year of life Jan Theeuwes (Vrije Universiteit Amsterdam) Prior history shapes selection James Todd (Ohio State University) The perception of 3D shape from texture Leslie Ungerleider (National Institutes of Health) Functional architecture for face processing in the primate brain Jeremy Wolfe (Harvard Medical School and Brigham & Women's Hospital) How selective and non-selective pathways contribute to visual search in scenes CALL FOR ABSTRACTS Session Topics: * vision * object recognition * image understanding * neural circuit models * audition * neural system models * speech and language * mathematics of neural systems * unsupervised learning * robotics * supervised learning * hybrid systems (fuzzy, evolutionary, digital) * reinforcement and emotion * neuromorphic VLSI * sensory-motor control * industrial applications * cognition, planning, and attention * other * spatial mapping and navigation Contributed abstracts must be received, in English, by February 28, 2013. Email notification of acceptance will be provided by March 15, 2013. Abstracts must not exceed one 8.5"x11" page in length, with 1" margins on top, bottom, and both sides in a single-column format with a font of 10 points or larger. The title, authors, affiliations, surface, and email addresses should begin each abstract. A separate cover letter should include the abstract title; name and contact information for corresponding and presenting authors; requested preference for oral or poster presentation; and a first and second choice from the topics above, including whether it is biological (B) or technological (T) work [Example: first choice: vision (T); second choice: neural system models (B)]. Contributed talks will be 15 minutes long. Posters will be displayed for a full day. Overhead and computer projector facilities will be available for talks. 
Copies of the accepted abstracts will be provided electronically to all registered conference participants and will be made publicly available via posting on the conference web site, in accordance with funding agency guidelines. No extended paper will be required. A meeting registration fee must accompany each abstract. The fee will be refunded if the abstract is not accepted for presentation. Fees of accepted abstracts will be returned upon written request only until April 30, 2013. Abstracts, cover letters, and completed registration forms with fee payment information should be submitted electronically to cindy at bu.edu using the phrase "17th ICCNS abstract submission" in the subject line. Fax submissions of the abstract page will not be accepted. Fax or surface mail submissions of the registration form are acceptable (to Cynthia Bradford, using the contact information shown on the registration form below). Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. Postdoctoral fellows and faculty members should register at the regular rate. REGISTRATION FORM Seventeenth International Conference on Cognitive and Neural Systems June 4 - 7, 2013 Boston University 677 Beacon Street Boston, Massachusetts 02215 USA Fax: +1 617 353 7755 Mr/Ms/Dr/Prof:_____________________________________________________ Affiliation:_________________________________________________________ Address:__________________________________________________________ City, State, Postal Code:______________________________________________ Phone and Fax:_____________________________________________________ Email:____________________________________________________________ The registration fee includes a conference reception and multiple daily coffee breaks. CHECK ONE: ( ) $135 Conference (Regular) ( ) $85 Conference (Student) METHOD OF PAYMENT: [ ] Enclosed is a check made payable to "Boston University" Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay by credit card (MasterCard, Visa, or Discover Card only) Name as it appears on the card:___________________________________________ Type of card: _____________________________ Expiration date:________________ Account number: _______________________________________________________ Signature:____________________________________________________________ From cookie at ucsd.edu Thu Feb 14 15:35:50 2013 From: cookie at ucsd.edu (Santamaria, Cookie) Date: Thu, 14 Feb 2013 20:35:50 +0000 Subject: Connectionists: FW: ICCN 2013 - Extended Abstract Submission to 28 Feb Message-ID: ***PLEASE ADDRESS ALL INQUIRIES TO Hans Liljenstrom (Agora for Biosystems, Sweden), general chair, hans.liljenstrom at agoraforbiosystems.org*** Dear Colleagues, Due to numerous requests, the upcoming 4th International Conference on Cognitive Neurodynamics, ICCN 2013, extends the abstract submission deadline to 28 February 2013.
We cordially invite you to submit on-line a one-page abstract to ICCN 2013, to be held 23-27 June 2013 in Sigtuna, Sweden. The conference is organized by and located at: Agora for Biosystems, Sigtuna Foundation, Sigtuna, SWEDEN, www.sigtunastiftelsen.se. IMPORTANT DATES: 28 February 2013 Abstract Submission Deadline 15 March 2013 Decision Notification 15 April 2013 Early Registration Deadline 23-27 June 2013 Conference INVITED SPEAKERS: * Erol Basar (Istanbul Kultur University, Turkey) * Yanchao Bi (Beijing Normal University, China) * Steven Bressler (Florida Atlantic University, USA) * Walter J. Freeman (University of California at Berkeley, USA) * Aike Guo (Chinese Academy of Sciences, Beijing, China) * Riitta Hari (Aalto University, Finland) * John Hertz (NORDITA, Sweden) * John J. Hopfield (Princeton University, USA) * Scott Kelso (Florida Atlantic University, USA) * Paul Rapp (Uniformed Services University of the Health Sciences, USA) * Barry Richmond (NIMH, USA) * James Wright (Auckland University, New Zealand) * Yoko Yamaguchi (RIKEN Brain Science Institute, Japan) For more information, visit the conference web page: www.agoraforbiosystems.org/ICCN2013 If you have any questions regarding the web page or submission procedures, please email the secretariat at felix.peniche at agoraforbiosystems.org For other questions on the ICCN 2013, please contact anyone in the organizing committee below. Join us at this exciting event in historical Sigtuna, close to the Stockholm-Arlanda airport (www.destinationsigtuna.se)! For the Organizing Committee, Hans Liljenstrom (Agora for Biosystems, Sweden), general chair, hans.liljenstrom at agoraforbiosystems.org Hans Braun (Philipps University of Marburg, Germany), co-chair, braun at staff.uni-marburg.de Rubin Wang (East China University of Science and Technology, China), co-chair, rbwang at 163.com Yoko Yamaguchi (RIKEN Brain Science Institute, Japan), co-chair, yokoy at brain.riken.jp ==================================== Hans Liljenström, Prof. Agora for Biosystems, Director Dept. of Energy and Technology, SLU P.O. Box 7032, 750 07 Uppsala, Sweden Phone: +46 (0)18-671728, +46 (0)73-654 7977 Fax: +46 (0)18 673502 Email: Hans.Liljenstrom at slu.se Home page: www.slu.se/et, www.agora.kva.se ICCN2013: www.agoraforbiosystems.org/ICCN2013 ==================================== From fbln at ecomp.poli.br Mon Feb 11 18:03:21 2013 From: fbln at ecomp.poli.br (Prof. Fernando Buarque) Date: Mon, 11 Feb 2013 20:03:21 -0300 Subject: Connectionists: 1st Call For Papers - BRICS-CCI & CBIC2013 - Brazil (8-11th Sept, 2013) Message-ID: * Sorry for cross-posting & thanks for forwarding this to your network of collaborators; we greatly appreciate your help * =============================================================================== *FIRST* Call FOR PAPERS International BRICS Countries Congress on Computational Intelligence (BRICS-CCI) co-located with sister conference 11th Brazilian Congress on Computational Intelligence (CBIC 2013) Recife (Porto de Galinhas beach), Brazil, September 8th-11th, 2013 Website: http://www.brics-cci.org or http://www.cbic2013.org =============================================================================== *** BACKGROUND Nature-inspired techniques, soft computation, machine learning and stochastic models are ubiquitous, mainly because they can tackle complex problems without necessarily requiring formal prior knowledge.
This is a significant advantage over more traditional approaches in computer science. Naturally, the myriad of available approaches and technologies can seem insurmountable, which is why scientists and practitioners profit from joining together to discuss advances and to debate how to design and use these tools (and the theory behind them) effectively. The growing range of creative new means of tackling both new and old problems is also a challenge to teach in regular programs, which is why the organizers have included two schools aimed at students in the area of Computational Intelligence; these will serve as both an introduction and a specialization at various levels for future scientists and practitioners. Contributors of the many advancements to the field will be recognized with awards, alongside competition sessions and six thematic workshops (symposia) in areas transversally related to the main tracks. Sports, cultural and social agendas will also be well cared for, as plans are to form and enhance international links between participants. The stunning venue and impressive record of top-notch speakers will certainly make this one of the highlights of the year in Computational Intelligence. *** KEY DATES Submission Deadline: April 19th, 2013 Notification of Acceptance: July 12th, 2013 Final Papers Due: July 31st, 2013 *** TOPICS Topics of interest are neatly organized in three tracks and classified in Problems, Approaches and Technologies. The main tracks are: 1) Neural and Learning Systems; 2) Fuzzy and Stochastic Modeling; and 3) Evolutionary and Swarm Computing. The complete list is at http://brics-cci.org/research-topics-brics-cci-cbic/ The six planned symposia will have transversal specific topics of interest, such as: Bioinformatics, Complex Systems, Social Simulation, CI Apps. in Defense Systems, CI Applications in Industry, and Competitions. *** INTERNATIONAL CONFIRMED INVITED SPEAKERS Alan Kirman (d'Aix-Marseille Université, EHESS - France) Andries Engelbrecht (U. Pretoria - South Africa) Cristovam Buarque (UNB - Brazil) Derong Liu (University Illinois - USA) Hani Hagras (University Essex - England) Hojjat Adeli (Ohio State University - USA) Igor Aleksander (Imperial College London - England) Jaime Sichman (USP - Brazil) José Carlos Príncipe (University of Florida - USA) Marco Dorigo (Université Libre de Bruxelles - Belgium) Philippe De Wilde (Heriot-Watt University - Scotland) Ronald Yager (Iona College - USA) Russell Eberhart (Purdue University - USA) Sanaz Mostaghim (KIT - Germany) Sankar Pal (ISI - India) Swagatam Das (ISI - India) Xin Yao (University of Birmingham - England) Yaser Abu-Mostafa (Caltech Institute - USA) Ying Tan (Peking University - China) Yuhui Shi (U. Jiaotong-Liverpool - China) *** PAPER PUBLICATION We have already confirmed six special issues of the five journals below (three more are still pending confirmation): +International Journal of Swarm Intelligence Research (IJSIR) +International Journal of Natural Computing Research (IJNCR) +International Journal of Machine Learning and Computing (IJMLC) +Linear and NonLinear Models (L&NLM) +Journal of the Brazilian Computer Society (JBCS) *** PAPER SUBMISSION i) Submissions should be formatted according to the IEEE A4 double column format; templates are at http://www.ieee.org/conferences_events/conferences/publishing/templates.html . ii) To allow the proper kind of scientific communication, all papers will be complete (no short papers).
They can be presented orally or as posters; the mode of presentation will be recommended after evaluation by the Technical Program Committee. iii) Paper types are: + Type-O (on-going work): up to 2-page papers for presenting positions, on-going work and/or interesting ideas. They will be presented orally (15 minutes each, including questions) in the "on-going work sessions" and will be listed in the electronic proceedings (e-proceedings); + Type-R (Regular): up to 6-page papers will be presented orally in regular parallel sessions (15 minutes each, including questions) and will be published in full in the electronic proceedings. Papers may be accepted for presentation as posters, but they will also be entitled to full publication in the electronic proceedings. Many good papers will be selected for extended publication in one of the several special journal issues already confirmed; + Type-X (Extended): selected Type-R papers will be invited to be extended to up to 10 pages and will be presented orally in special parallel sessions (20 minutes each, including questions). They will all be published in extended form in special journal issues; iv) For submissions (both on-going work, regular or extended), use https://www.easychair.org/conferences/?conf=bricsccicbic2013 [or via website] v) For more details please check the website of the congress at www.brics-cci.org *** INTERNATIONAL PROGRAM COMMITTEE (see website for up-to-date list of names) *** ORGANIZATION GENERAL JOINT-CHAIRS of BRICS-CCI & CBIC - 2013 (B) Prof. Fernando Buarque (University of Pernambuco - Brazil) (R) Prof. Ildar Batyrshin (Moscow Power Energy Institute - Russia) (I) Prof. Sankar Pal (Indian Statistical Institute - India) (C) Prof. Ying Tan (Peking University - China) (S) Prof. Tshilidzi Marwala (University of Johannesburg - South Africa) For a complete list of organizers, please check: http://brics-cci.org/us/ We are waiting for your paper and look forward to welcoming you to the stunning Porto de Galinhas beach! =============================================================================== From fjaekel at uos.de Fri Feb 22 12:16:43 2013 From: fjaekel at uos.de (Frank Jäkel) Date: Fri, 22 Feb 2013 18:16:43 +0100 Subject: Connectionists: OCCAM 2013 Message-ID: <084CE1ED-9161-468F-BC78-40FFAC777612@uos.de> Dear Colleague, we would like to invite you to register for the 3rd +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Osnabrueck Computational Cognition Alliance Meeting (OCCAM 2013) on "The Brain as a Self-Organized Dynamical System" May 29-31, 2013. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ List of invited speakers: Elizabeth Buffalo, Emory University* Sen Cheng, Ruhr-Universitaet Bochum* Gabriel Curio, Charité* Mathew Diamond, SISSA* Markus Diesmann, Forschungszentrum Jülich* Daniel Durstewitz, Zentralinstitut fuer Seelische Gesundheit* Udo Ernst, Universitaet Bremen* Christoph Kayser, University of Glasgow* Jason Kerr, MPI for Biological Cybernetics* Peter König, Universitaet Osnabrueck* Eve Marder, Brandeis University* Hava Siegelmann, University of Massachusetts Tatjana Tchumatchenko, MPI for Brain Research* (* confirmed) The workshop will take place in Osnabrueck, Germany, and will be hosted by the Institute of Cognitive Science (University of Osnabrueck).
Details can be found below and on the following webpage: http://www.occam-os.de The registration deadline is April 3, 2013 (first come first served). The registration fee is 100,- Euros. This fee covers the workshop attendance incl. coffee, buffet on the first day, and the conference lunch on the last day. The goal of the OCCAM workshop series is to foster our understanding of mechanisms and principles of information processing in self-organized hierarchical and recurrent systems. Our knowledge of such systems is still very limited despite being a focus of research for many years. The OCCAM workshop series aims at understanding the principles of information processing with a particular focus on 3 major topics: 1. Neural coding and representation in hierarchical systems 2. Self-organisation in dynamical systems 3. Mechanisms for probabilistic inference The OCCAM 2013 theme is: "The Brain as a Self-Organized Dynamical System" There will also be a poster session where conference participants will have the opportunity to present their work. Best regards, Frank Jaekel, Peter König, Gordon Pipa (Organizing committee) From jeffclune at uwyo.edu Thu Feb 14 01:08:24 2013 From: jeffclune at uwyo.edu (Jeff Clune) Date: Wed, 13 Feb 2013 23:08:24 -0700 Subject: Connectionists: Fully funded Ph.D. position in evolving artificial intelligence (neural networks, robotics, and/or deep learning) Message-ID: **Please forward this email to anyone who might be interested** A fully funded computer science Ph.D. position is available in any of the following areas, especially in combinations of them: evolving artificial intelligence, neural networks, robotics, and deep learning. Postdoctoral positions are also available, but under different funding arrangements (please email jeffclune at uwyo.edu for details). Positions ideally start this coming Fall (August 2013), but alternate start dates are possible. I (Jeff Clune) direct the Evolving Artificial Intelligence Lab at the University of Wyoming. The lab focus is on evolving artificial intelligence by producing artificially intelligent robots, including physical robots and agents in simulated worlds, such as video games. The lab will also study other bio-inspired AI techniques, such as deep learning. Part of these efforts will involve investigating how evolution produced the complex, intelligent, diverse life on this planet by trying to computationally recreate it. A major focus will be on evolving large-scale, structurally organized neural networks (i.e. networks with millions of connections that are modular, regular, and hierarchical). I am also interested in combining neuroevolution with learning algorithms (Hebbian, neuromodulation, etc.). Please see my website (http://JeffClune.com) for example publications, press articles about the work, videos, etc. Here are some keywords that describe related fields: evolutionary algorithms (also known as genetic algorithms or evolutionary computation), neural networks (including evolving neural networks, having them learn, deep learning, and computational neuroscience), robotics, artificial intelligence, and research into the evolution of intelligence, complexity, evolvability, and diversity.
If you are interested in joining the lab or would like more information about the positions, please follow the instructions at http://jeffclune.com/positionsAvailable.html Here is a video that summarizes my research: http://goo.gl/wA6Fe Other videos about my research are here: http://jeffclune.com/videos.html The University of Wyoming is located in Laramie, a college town in the heart of the Rocky Mountain West. Nestled between two mountain ranges, Laramie has more than 300 days of sunshine a year and is home to year-round outdoor activities including hiking, camping, rock climbing, downhill skiing, cross-country skiing, fishing and mountain biking. Laramie is also near many of Colorado's major cities and university communities (e.g. Fort Collins, Boulder, and Denver). The University of Wyoming is an Affirmative Action/Equal Opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, pregnancy, sexual orientation, age, national origin, disability, marital, veteran or any other legally protected status. Best regards, Jeff Clune Assistant Professor Computer Science University of Wyoming jeffclune at uwyo.edu jeffclune.com From jesus.m.cortes at gmail.com Fri Feb 22 03:38:19 2013 From: jesus.m.cortes at gmail.com (Jesus Cortes) Date: Fri, 22 Feb 2013 09:38:19 +0100 Subject: Connectionists: 1 week to deadline for a single page abstract, Frontiers Research Topic: Information and Neuroimaging In-Reply-To: References: Message-ID: In collaboration with Frontiers in Neuroscience, we are currently organizing a Research Topic, "Information-based methods for Neuroimaging: analyzing structure, function and dynamics". More information at: http://www.frontiersin.org/Neuroinformatics/researchtopics/Information-based_methods_for_/1241 Abstract Submission Deadline [only a single page abstract]: March 01, 2013 Full Article Submission Deadline: November 01, 2013 Confirmed contributors are (so far): 1. Daniel Chicharro 2. Demian Battaglia 3. Joaquin Goñi 4. Timothy Mullen 5. Olaf Sporns 6. Stefano Panzeri 7. Rafael Molina 8. Aggelos K. Katsaggelos 9. José Luis P. Velázquez 10. Luis G. Domínguez 11. Roberto F. Galán 12. Carlos Gómez 13. Michael Schaum 14. Joseph T. Lizier 15. Christine Grützner 16. Peter Uhlhaas 17. Sabine Schlitt 18. Fritz Poustka 19. Sven Bölte 20. Christine M. Freitag 21. Roberto Hornero 22. Michael Wibral 23. Nicolae C. Pampu 24. Raul C. Muresan 25. Vlad V. Moca 26. Ioana Tincas 27. Andreas Hess 28. Dante Chialvo. Best Regards Daniele Marinazzo, Jesus M Cortes, Miguel Angel Muñoz From juergen at idsia.ch Wed Feb 13 09:48:04 2013 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 13 Feb 2013 15:48:04 +0100 Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks In-Reply-To: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> Message-ID: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> The paper mentions that Santiago Ramón y Cajal already pointed out that evolution has created mostly short connections in animal brains. Minimization of connection costs should also encourage modularization, e.g., http://arxiv.org/abs/1210.0118 (2012). But who first had such a wire length term in an objective function to be minimized by evolutionary computation or other machine learning methods? I am aware of pioneering work by Legenstein and Maass: R. A. Legenstein and W. Maass.
Neural circuits for pattern recognition with small total wire length. Theoretical Computer Science, 287:239-249, 2002. R. A. Legenstein and W. Maass. Wire length as a circuit complexity measure. Journal of Computer and System Sciences, 70:53-72, 2005. Is there any earlier relevant work? Pointers will be appreciated. Jürgen Schmidhuber http://www.idsia.ch/~juergen/whatsnew.html On Feb 10, 2013, at 3:14 AM, Jeff Clune wrote: > Hello all, > > I believe that many in the neuroscience community will be interested > in a new paper that sheds light on why modularity evolves in > biological networks, including neural networks. The same discovery > also provides AI researchers a simple technique for evolving neural > networks that are modular and have increased evolvability, meaning > that they adapt faster to new environments. > > Cite: Clune J, Mouret J-B, Lipson H (2013) The evolutionary origins > of modularity. Proceedings of the Royal Society B. 280: 20122863. http://dx.doi.org/10.1098/rspb.2012.2863 > (pdf) > > Abstract: A central biological question is how natural organisms are > so evolvable (capable of quickly adapting to new environments). A > key driver of evolvability is the widespread modularity of > biological networks - their organization as functional, sparsely > connected subunits - but there is no consensus regarding why > modularity itself evolved. Although most hypotheses assume indirect > selection for evolvability, here we demonstrate that the ubiquitous, > direct selection pressure to reduce the cost of connections between > network nodes causes the emergence of modular networks. > Computational evolution experiments with selection pressures to > maximize network performance and minimize connection costs yield > networks that are significantly more modular and more evolvable than > control experiments that only select for performance. These results > will catalyse research in numerous disciplines, such as neuroscience > and genetics, and enhance our ability to harness evolution for > engineering purposes. > > Video: http://www.youtube.com/watch?feature=player_embedded&v=SG4_aW8LMng > > There has been some nice coverage of this work in the popular press, > in case you are interested: > > * National Geographic: http://phenomena.nationalgeographic.com/2013/01/30/the-parts-of-life/ > * MIT's Technology Review: http://www.technologyreview.com/view/428504/computer-scientists-reproduce-the-evolution-of-evolvability/ > * Fast Company: http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning > * Cornell Chronicle: http://www.news.cornell.edu/stories/Jan13/modNetwork.html > * ScienceDaily: http://www.sciencedaily.com/releases/2013/01/130130082300.htm > > I hope you enjoy the work. Please let me know if you have any > questions. > > Best regards, > Jeff Clune > > Assistant Professor > Computer Science > University of Wyoming > jeffclune at uwyo.edu > jeffclune.com > > From juergen at idsia.ch Tue Feb 19 10:45:21 2013 From: juergen at idsia.ch (Schmidhuber Juergen) Date: Tue, 19 Feb 2013 16:45:21 +0100 Subject: Connectionists: Jobs at the Swiss AI Lab IDSIA: Postdocs & PhD students / eu2013 Message-ID: We are seeking postdocs and PhD students for exciting research projects on deep neural networks, machine learning and artificial intelligence, including one on novel touch-based user interfaces and biological processes responsible for touch.
Highly competitive salary at the award-winning Swiss AI Lab IDSIA http://www.idsia.ch/ in the world's leading science nation http://www.idsia.ch/~juergen/switzerland.html Please follow instructions under http://www.idsia.ch/~juergen/eu2013.html Juergen Schmidhuber http://www.idsia.ch/~juergen/whatsnew.html From smart at neuralcorrelate.com Fri Feb 15 21:47:30 2013 From: smart at neuralcorrelate.com (Susana Martinez-Conde) Date: Fri, 15 Feb 2013 19:47:30 -0700 Subject: Connectionists: Illusion submission EXTENSION: 9th annual Best Illusion of the Year Contest! Message-ID: <003301ce0bef$fe5f70c0$fb1e5240$@com> ***DUE TO POPULAR DEMAND*** --The deadline for the 9th annual Best Illusion of the Year Contest has been extended. The FINAL (no exceptions) submission date is now ***March 1st***! http://illusioncontest.neuralcorrelate.com *** We are happy to announce the world's 9th annual Best Illusion of the Year Contest!!*** Submissions are now welcome! The 2013 contest will be held in Naples, Florida (Naples Philharmonic Center for the Arts, http://www.thephil.org/) on Monday, May 13th, 2013, as an official satellite of the Vision Sciences Society (VSS) conference. The Naples Philharmonic Center is an 8-minute walk from the main VSS headquarters hotel in Naples, and is thus central to the VSS conference. Past contests have been highly successful in drawing public attention to perceptual research, with over ***FIVE MILLION*** website hits from viewers all over the world, as well as hundreds of international media stories. The First, Second and Third Prize winners from the 2012 contest were Roger Newport, Helen Gilpin and Catherine Preston (University of Nottingham, UK), Jason Tangen, Sean Murphy and Matthew Thompson (The University of Queensland, Australia), and Arthur Shapiro, William Kistler, and Alex Rose-Henig (American University, USA). To see the illusions, photo galleries and other highlights from the 2012 and previous contests, go to http://illusionoftheyear.com. Eligible submissions to compete in the 2013 contest are novel perceptual or cognitive illusions (unpublished, or published no earlier than 2012) of all sensory modalities (visual, auditory, etc.) in standard image, movie or html formats. Exciting new variants of classic or known illusions are admissible. An international panel of impartial judges will rate the submissions and narrow them to the TOP TEN. Then, at the Contest Gala in Naples, the TOP TEN illusionists will present their contributions and the attendees of the event (that means you!) will vote to pick the TOP THREE WINNERS! Illusions submitted to previous editions of the contest can be re-submitted to the 2013 contest, so long as they meet the above requirements and were not among the TOP THREE winners in previous years. Submissions will be held in strict confidence by the panel of judges and the authors/creators will retain full copyright. The TOP TEN illusions will be posted on the illusion contest's website *after* the Contest Gala. Illusions not chosen among the TOP TEN will not be disclosed. As with submitting your work to any scientific conference, participating in the Best Illusion of the Year Contest does not preclude you from also submitting your work for publication elsewhere. Submissions can be made to Dr. Susana Martinez-Conde (Illusion Contest Executive Producer, Neural Correlate Society) via email (smart at neuralcorrelate.com) until March 1st, 2013.
Illusion submissions should come with a (no more than) one-page description of the illusion and its theoretical underpinnings (if known). Illusions will be rated according to: * Significance to our understanding of the mind and brain * Simplicity of the description * Sheer beauty * Counterintuitive quality * Spectacularity Visit the illusion contest website for further information and to see last year's illusions: http://illusionoftheyear.com. Submit your ideas now and take home this prestigious award! On behalf of the Executive Board of the Neural Correlate Society: Jose-Manuel Alonso, Stephen Macknik, Susana Martinez-Conde, Luis Martinez, Xoana Troncoso, Peter Tse ---------------------------------------------------------------- Susana Martinez-Conde, PhD Executive Producer, Best Illusion of the Year Contest President, Neural Correlate Society Columnist, Scientific American Mind Author, Sleights of Mind Director, Laboratory of Visual Neuroscience Division of Neurobiology Barrow Neurological Institute 350 W. Thomas Rd Phoenix AZ 85013, USA Phone: +1 (602) 406-3484 Fax: +1 (602) 406-4172 Email: smart at neuralcorrelate.com http://smc.neuralcorrelate.com From wermter at informatik.uni-hamburg.de Fri Feb 15 05:16:52 2013 From: wermter at informatik.uni-hamburg.de (Stefan Wermter) Date: Fri, 15 Feb 2013 11:16:52 +0100 Subject: Connectionists: New International MSc Intelligent Adaptive Systems Message-ID: <511E0B14.50400@informatik.uni-hamburg.de> New International MSc Programme "Intelligent Adaptive Systems" The University of Hamburg is pleased to announce a newly approved Master's programme "Intelligent Adaptive Systems". This MSc is a 2-year, research-oriented course taught fully in English. It starts in October 2013 and tuition fees are waived for successful applicants on this course. Core modules include Bio-Inspired Artificial Intelligence, Neural Networks, Intelligent Robotics, Databases and Information Systems, Machine Learning, Software Architectures and Research Methods. Electives can be chosen from focus modules (including Natural Language Processing, Image Processing, Robot Technology, Mobile Computing, etc.) More detailed information on application requirements and procedure can be found at: Homepage: http://www.master-intelligent-adaptive-systems.com/wtm Email: ias-info at informatik.uni-hamburg.de Please forward to potentially interested students. Sven Magg & Stefan Wermter *********************************************** Professor Dr. Stefan Wermter Head of Knowledge Technology Department of Computer Science, WTM, Building F University of Hamburg Vogt Koelln Str. 30 22527 Hamburg, Germany http://www.informatik.uni-hamburg.de/~wermter/ http://www.informatik.uni-hamburg.de/WTM/ From rloosemore at susaro.com Fri Feb 22 16:18:36 2013 From: rloosemore at susaro.com (Richard Loosemore) Date: Fri, 22 Feb 2013 16:18:36 -0500 Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks In-Reply-To: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> Message-ID: <5127E0AC.90900@susaro.com> I hate to say this, but during discussions with fellow students back in 1987, I remember pointing out that it was not terribly surprising that the cortex consisted of columns (i.e.
modules) with dense internal connectivity, with less-dense connections between columns -- not surprising, because the alternative was to try to make the brain less modular and connect every neuron in each column to all the neurons in all the other columns, and the result would be brains that were a million times larger than they are (due to all the extra wiring). The same logic applies in all systems where it is costly to connect every element to every other: the optimal connectivity is well-connected, tightly clustered groups of elements. During those discussions the point was considered so obvious that it sparked little comment. Ever since then I have told students in my lectures that this would be the evolutionary reason for cortical columns to exist. So I am a little confused now. Can someone explain what I am missing .........? Richard Loosemore Department of Physical and Mathematical Sciences, Wells College On 2/13/13 9:48 AM, Juergen Schmidhuber wrote: > The paper mentions that Santiago Ramón y Cajal already pointed out > that evolution has created mostly short connections in animal brains. > > Minimization of connection costs should also encourage modularization, > e.g., http://arxiv.org/abs/1210.0118 (2012). > > But who first had such a wire length term in an objective function to > be minimized by evolutionary computation or other machine learning > methods? > I am aware of pioneering work by Legenstein and Maass: > > R. A. Legenstein and W. Maass. Neural circuits for pattern recognition > with small total wire length. Theoretical Computer Science, > 287:239-249, 2002. > R. A. Legenstein and W. Maass. Wire length as a circuit complexity > measure. Journal of Computer and System Sciences, 70:53-72, 2005. > > Is there any earlier relevant work? Pointers will be appreciated. > > Jürgen Schmidhuber > http://www.idsia.ch/~juergen/whatsnew.html > > > > > On Feb 10, 2013, at 3:14 AM, Jeff Clune wrote: > >> Hello all, >> >> I believe that many in the neuroscience community will be interested >> in a new paper that sheds light on why modularity evolves in >> biological networks, including neural networks. The same discovery >> also provides AI researchers a simple technique for evolving neural >> networks that are modular and have increased evolvability, meaning >> that they adapt faster to new environments. >> >> Cite: Clune J, Mouret J-B, Lipson H (2013) The evolutionary origins >> of modularity. Proceedings of the Royal Society B. 280: 20122863. >> http://dx.doi.org/10.1098/rspb.2012.2863 (pdf) >> >> Abstract: A central biological question is how natural organisms are >> so evolvable (capable of quickly adapting to new environments). A key >> driver of evolvability is the widespread modularity of biological >> networks - their organization as functional, sparsely connected >> subunits - but there is no consensus regarding why modularity itself >> evolved. Although most hypotheses assume indirect selection for >> evolvability, here we demonstrate that the ubiquitous, direct >> selection pressure to reduce the cost of connections between network >> nodes causes the emergence of modular networks. Computational >> evolution experiments with selection pressures to maximize network >> performance and minimize connection costs yield networks that are >> significantly more modular and more evolvable than control >> experiments that only select for performance.
From andrew.coward at anu.edu.au Fri Feb 22 17:21:41 2013
From: andrew.coward at anu.edu.au (Andrew Coward)
Date: Fri, 22 Feb 2013 14:21:41 -0800
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <7450cff14c39d.5127ef6b@anu.edu.au>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <7450f5a94dd6e.5127eb6e@anu.edu.au> <76008cdf4bce7.5127ebad@anu.edu.au> <7790ebbe4e454.5127ebeb@anu.edu.au> <7700a3d649a5d.5127ec2b@anu.edu.au> <7750f2624df4a.5127ec6c@anu.edu.au> <73a0d15c4bd41.5127ecac@anu.edu.au> <7790c6834be46.5127eceb@anu.edu.au> <7450d4f449afe.5127ed2b@anu.edu.au> <7390c0ed4eba1.5127ed6a@anu.edu.au> <7460904c4ef7d.5127eda9@anu.edu.au> <73a091244ca88.5127edea@anu.edu.au> <7360cfcd48f6b.5127ee29@anu.edu.au> <7450d2754cf19.5127ee6a@anu.edu.au> <73b0f4194e194.5127eeaa@anu.edu.au> <7790c8fe496ce.5127eeea@anu.edu.au> <7790cb0b4b15c.5127ef2a@anu.edu.au> <7450cff14c39d.5127ef6b@anu.edu.au>
Message-ID: <74508d3749577.51277ef5@anu.edu.au>

In a paper published in 2001, I demonstrated that natural selection pressures on the brain resulting from:

i. the need to learn with minimized interference with prior learning;
ii. the need to "construct" the brain from DNA "blueprints" by a process that minimized the risk of errors; and
iii. the need to minimize requirements for biological resources,

led to some striking constraints on brain architectural forms.

One of the constraints identified in the paper is the need for a modular hierarchy in which modules are made up of submodules, submodules of sub-sub-modules, and so on, organized so that connectivity is minimized: at every level there is much less connectivity between modules than within their submodules. I pointed out that the cortex bears a striking resemblance to this type of hierarchy.

I have subsequently applied these ideas to developing ways to use the architectural forms to understand cognition in terms of anatomy and physiology, and I have been teaching a course on this topic for a number of years.

The 2001 paper is: Coward, L. A. (2001). The Recommendation Architecture: lessons from the design of large scale electronic systems for cognitive science. Journal of Cognitive Systems Research, 2(2), 111-156. It can be downloaded from my website along with more recent papers.

Andrew Coward
http://cs.anu.edu.au/~Andrew.Coward/index.html
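[The modular hierarchy Coward describes can be visualized with a small generative sketch. Everything below is illustrative, not drawn from the 2001 paper: it builds a connectivity matrix recursively, with connection density falling at each level, so wiring between modules is much sparser than wiring within their submodules.]

# Illustrative sketch of a "modules of submodules" connectivity matrix.
# All densities and the falloff constant are invented parameters.
import numpy as np

np.random.seed(0)

def hierarchical_adjacency(depth, leaf=4, p_within=0.9, falloff=0.2):
    # base case: one densely wired sub-sub-...-module of `leaf` units
    if depth == 0:
        a = (np.random.rand(leaf, leaf) < p_within).astype(int)
        return np.triu(a, 1) + np.triu(a, 1).T
    # recursive case: two blocks on the diagonal, sparser wiring between
    blocks = [hierarchical_adjacency(depth - 1, leaf, p_within, falloff)
              for _ in range(2)]
    n = blocks[0].shape[0]
    between = (np.random.rand(n, n) < p_within * falloff ** depth).astype(int)
    top = np.hstack([blocks[0], between])
    bottom = np.hstack([between.T, blocks[1]])
    return np.vstack([top, bottom])

A = hierarchical_adjacency(depth=3)
print(A.shape, int(A.sum()) // 2, "edges")  # far fewer than the all-to-all count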
On 22/02/13, Juergen Schmidhuber wrote:
> [...]
From mathis at jhu.edu Fri Feb 22 16:53:40 2013
From: mathis at jhu.edu (Don Mathis)
Date: Fri, 22 Feb 2013 21:53:40 +0000
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <5127E0AC.90900@susaro.com>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <5127E0AC.90900@susaro.com>
Message-ID: <3ADD0C45C4ED734D8D4EAD60D906D71D5B5D63@JHEMTEBEX7.win.ad.jhu.edu>

I'm sure Jurgen remembers this paper:

http://dl.acm.org/citation.cfm?id=1326963

Jacobs, R. A. and Jordan, M. I. (1992). Computational consequences of a bias toward short connections. Journal of Cognitive Neuroscience, 4(4), 323-336. doi:10.1162/jocn.1992.4.4.323

------------------------------------------------------------
Donald W. Mathis
Johns Hopkins University
Cognitive Science Department
3400 N. Charles Street
Krieger Hall 141A
Baltimore, MD 21218
Tel: 410-516-6851
Fax: 410-516-8020
mathis at jhu.edu
http://cogsci.jhu.edu
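[In modern terms, the bias Jacobs and Jordan studied can be sketched as a distance-weighted penalty added to a training objective. The fragment below is an illustration in that spirit only, not their 1992 implementation; the unit positions, distance metric, and penalty weight are all assumptions.]

# Sketch in the spirit of Jacobs & Jordan (1992): bias learning toward
# short connections by charging each weight in proportion to the
# physical distance between the units it connects. Positions, distance
# function, and the penalty weight LAM are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(size=(n_out, n_in))

# assign each unit a 1-D "cortical" position
pos_in = np.linspace(0.0, 1.0, n_in)
pos_out = np.linspace(0.0, 1.0, n_out)
dist = np.abs(pos_out[:, None] - pos_in[None, :])   # wire length per connection

LAM = 0.1

def wiring_penalty(W):
    # total "wire cost": long connections are charged more than short ones
    return LAM * np.sum(dist * W**2)

def penalty_grad(W):
    # gradient of the penalty; in training this would be added to the
    # task-loss gradient at every step
    return 2.0 * LAM * dist * W

# one illustrative gradient step on the penalty alone
W -= 0.01 * penalty_grad(W)
print(f"wire cost after one step: {wiring_penalty(W):.3f}")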
On Feb 22, 2013, at 4:18 PM, Richard Loosemore wrote:
> [...]

From steve at cns.bu.edu Fri Feb 22 20:02:21 2013
From: steve at cns.bu.edu (Stephen Grossberg)
Date: Fri, 22 Feb 2013 20:02:21 -0500
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <5127E0AC.90900@susaro.com>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <5127E0AC.90900@susaro.com>
Message-ID: <27724456-6F50-4D6E-A77D-D6FA917D52A4@cns.bu.edu>

Dear Richard and other Connectionist colleagues,

I think that it is important to clarify how the word "module" is being used. Many people think of modules as implying independent modules that should be able to fully compute their particular processes on their own. However, much behavioral and neurobiological data argue against this possibility.
The brain's organization into distinct anatomical areas and processing streams shows that brain processing is indeed specialized. However, specialization does not imply the kind of independence that modularity is often taken to imply. Then what is the nature of this specialization?

Complementary Computing concerns the proposal that pairs of parallel cortical processing streams compute complementary properties in the brain. Each stream has complementary computational strengths and weaknesses, much as in physical principles like the Heisenberg Uncertainty Principle. Each cortical stream can also possess multiple processing stages. These stages realize a hierarchical resolution of uncertainty. "Uncertainty" here means that computing one set of properties at a given stage prevents computation of a complementary set of properties at that stage. Complementary Computing proposes that the computational unit of brain processing that has behavioral significance consists of parallel interactions between complementary cortical processing streams, with multiple processing stages, to compute complete information about a particular type of biological intelligence. It has been suggested that such complementary processing streams may arise from a hierarchical multi-scale process of morphogenetic symmetry-breaking.

The concept of Complementary Computing arose as it gradually became clear, as a result of decades of behavioral and neural modeling, that essentially all biological neural models exhibit such complementary processes. Articles that provide examples of Complementary Computing can be found on my web page http://cns.bu.edu/~steve . They include:

Grossberg, S. (2000). The complementary brain: Unifying brain dynamics and modularity. Trends in Cognitive Sciences, 4, 233-246.

Grossberg, S. (2012). Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world. Neural Networks, 37, 1-47.

About minimum length: It's important to keep in mind the work of van Essen (1997, Nature, 385, 313-318) concerning his tension-based theory of morphogenesis and compact wiring, which clarifies how folds in the cerebral cortex may develop and make connections more compact; i.e., shorter.

A possible role of tension in other developmental processes, such as in the formation during morphogenesis of a gastrula from a blastula, illustrates that such a mechanism may be used in biological systems other than brains. The article below describes such a process heuristically, also on my web page:

Grossberg, S. (1978). Communication, Memory, and Development. In R. Rosen and F. Snell (Eds.), Progress in theoretical biology, Volume 5. New York: Academic Press, pp. 183-232. See Sections XIV - XVI.

About cortical columns: They are important, but no more important than the long-range horizontal interactions among columns that are ubiquitous in the cerebral cortex. Indeed, understanding how bottom-up, horizontal, and top-down interactions combine in neocortex has led to the paradigm of Laminar Computing, which attempts to clarify how specializations of a shared laminar design embody different types of biological intelligence, including vision, speech and language, and cognition. On my web page, articles with colleagues like Cao (2005), Raizada (2000, 2001), and Yazdanbakhsh (2005) for vision, Pearson (2008) for cognitive working memory and list chunking, and Kazerounian (2011) for speech perception illustrate this theme.
Laminar Computing has begun to explain how the laminar design of neocortex may realize the best properties of feedforward and feedback processing, digital and analog processing, and bottom-up data-driven processing and top-down attentive hypothesis-driven processing. Embodying such designs in VLSI chips promises to enable the development of increasingly general-purpose adaptive autonomous algorithms for multiple applications.

The existence and critical importance of long-range horizontal connections in neocortex raises the following issue: Why is the spatial resolution of columns as fine as it is? Why does not the long-range correlation length force the columns to become spatially more diffuse than they are? The following article on my web page suggests, at least for the case of cortical area V1, how the cortical subplate may play a role in this:

Grossberg, S. and Seitz, A. (2003). Laminar development of receptive fields, maps, and columns in visual cortex: The coordinating role of the subplate. Cerebral Cortex, 13, 852-863.

Best,

Steve Grossberg

Wang Professor of Cognitive and Neural Systems
Professor of Mathematics, Psychology, and Biomedical Engineering
Director, Center for Adaptive Systems
http://www.cns.bu.edu/about/cas.html
http://cns.bu.edu/~steve
steve at bu.edu

On Feb 22, 2013, at 4:18 PM, Richard Loosemore wrote:
> [...]
From weng at cse.msu.edu Fri Feb 22 21:03:09 2013
From: weng at cse.msu.edu (Juyang Weng)
Date: Fri, 22 Feb 2013 21:03:09 -0500
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <27724456-6F50-4D6E-A77D-D6FA917D52A4@cns.bu.edu>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <5127E0AC.90900@susaro.com> <27724456-6F50-4D6E-A77D-D6FA917D52A4@cns.bu.edu>
Message-ID: <5128235D.7040509@cse.msu.edu>

Dear Richard, Steve, and other connectionist colleagues,

Many researchers have said that neuroscience today is rich in data and poor in theory. I agree. Unlike other organs in the body, the brain is basically a signal processor.
Therefore, it should have an overarching theory well explained in mathematics.

However, I am probably in the minority in holding the following position. After coming up with an overarching theory of the brain, I have started to disbelieve the modular view of the brain. A modular view of the brain is like categorizing plants by their apparent look instead of by their genes.

The apparent Brodmann areas in the brain should be largely due to the body organs (eyes, ears, skin, muscles, glands, etc.). The re-assignment of visual areas to other sensing modalities in the brain of a blind person seems to support this theoretical view, since my theory explains why and how this re-assignment takes place. If my theory is correct, neuroscience textbooks will have to be written very differently in the future. Until then, few care to pay attention to this theory.

Humbly,

-Juyang Weng

Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
3115 Engineering Building
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/

On 2/22/13 8:02 PM, Stephen Grossberg wrote:
> [...]
From levine at uta.edu Fri Feb 22 22:20:09 2013
From: levine at uta.edu (Levine, Daniel S)
Date: Fri, 22 Feb 2013 21:20:09 -0600
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <5128235D.7040509@cse.msu.edu>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <5127E0AC.90900@susaro.com> <27724456-6F50-4D6E-A77D-D6FA917D52A4@cns.bu.edu>, <5128235D.7040509@cse.msu.edu>
Message-ID: <581625BB6C84AB4BBA1C969C69E269ECF92A4F4362@MAVMAIL2.uta.edu>

Dear Steve, John, Richard et al.,

I am reminded of a recent IJCNN at which I heard Ali Minai describe a neural architecture for idea generation as "modular." Then about a day later I heard Steve describe another architecture for something else as "not modular." But both were describing a network composed of subnetworks with distinct functions, subnetworks that were not independent of one another but mutually interacting and strongly influencing one another. The same is true of all of my own model networks. In other words, Ali's "modularity" and Steve's "non-modularity" were essentially describing the same concept! Since then I have strenuously avoided use of the term "modular" as too ambiguous.
Best,

Dan Levine

________________________________
From: connectionists-bounces at mailman.srv.cs.cmu.edu [connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang Weng [weng at cse.msu.edu]
Sent: Friday, February 22, 2013 8:03 PM
To: Stephen Grossberg
Cc: steve Grossberg; connectionists
Subject: Re: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks

[...]
From andrew.coward at anu.edu.au Sat Feb 23 13:34:50 2013
From: andrew.coward at anu.edu.au (Andrew Coward)
Date: Sat, 23 Feb 2013 10:34:50 -0800
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <7740df15490d0.51290bba@anu.edu.au>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <5127E0AC.90900@susaro.com> <27724456-6F50-4D6E-A77D-D6FA917D52A4@cns.bu.edu> <5128235D.7040509@cse.msu.edu> <581625BB6C84AB4BBA1C969C69E269ECF92A4F4362@MAVMAIL2.uta.edu> <7400f1b34ba85.512909ac@anu.edu.au> <75709f0f4bbbf.512909ed@anu.edu.au> <7570dac54b59d.51290a2f@anu.edu.au> <76309b714b280.51290a71@anu.edu.au> <7780cb384b6df.51290ab2@anu.edu.au> <7780b6044c3b2.51290af4@anu.edu.au> <7740c06f4ca8a.51290b36@anu.edu.au> <7740f89248ceb.51290b78@anu.edu.au> <7740df15490d0.51290bba@anu.edu.au>
Message-ID: <7770ad604bc2b.51289b4a@anu.edu.au>

I think there are a number of conflicting (sometimes implicit) definitions of modules, which generates a lot of confusion.

In the late 1970s and early 1980s I was involved in perhaps the first design of a very complex telecommunications system using electronics (rather than electromechanical relays). The project required transistor design, integrated circuit design, printed circuit assembly design, system design, software language design, and the writing of millions of lines of code. Several thousand engineers worked for about 5 years before delivery of the first system. The experience provided a fair amount of insight into the architectural constraints on extremely complex systems, constraints which, through natural selection, apply to the brain even though it is a completely different kind of system.
In this context the driving force for modules is the need to limit the physical information processing resources required. A module is a set of physical resources that performs a group of similar information processes. The resources of each module are optimized to perform one type of process very efficiently, minimizing the overall resources required. The criterion for grouping information processes together is simply that they can be performed efficiently on the same customized physical resources, so the groupings have little to do with the way system behaviour is divided up into features. Any system feature (or cognitive feature) will require information processes performed by many different modules, and there will be minimal correspondence between modules and features.

A problem therefore arises because a change to a feature is implemented by changes to some of the modules it uses. The more features that use a type of information process (i.e. share the same module), the greater the risk that changes to one feature (e.g. through learning) will result in undesirable side effects on other features using the same modules. Furthermore, exchange of information between modules means that a change to one module may affect the behaviour of other modules, making undesirable side effects proliferate through those other modules. Minimizing information exchange between modules as far as possible is necessary to control the proliferation of side effects. Unfortunately, an information exchange between two modules generally means that processing performed by one module does not have to be duplicated in the other module, so there is a resource cost to minimizing information exchange.

Finding the optimal definition of the information processes that each module performs, and of the information exchanges between modules, is therefore the very complex problem of finding an adequate compromise between limiting information processing resource requirements and retaining the ability to change features without excessive undesirable side effects on other features. All this is documented in my 2001 paper.
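[The compromise described above can be phrased as a toy cost model. The two cost curves and all constants below are invented for illustration, not taken from Coward (2001); the point is only that minimizing either term alone is suboptimal, so an intermediate level of inter-module exchange wins.]

# Toy illustration (invented numbers, not from Coward 2001): vary the
# fraction of processing shared via inter-module exchange and score the
# two competing costs described above.

def resource_cost(exchange):
    # less exchange -> more processing duplicated inside each module
    return 100.0 * (1.0 - exchange)

def side_effect_cost(exchange):
    # more exchange -> changes in one module propagate to more modules
    return 100.0 * exchange ** 2

def total(exchange):
    return resource_cost(exchange) + side_effect_cost(exchange)

levels = [i / 100 for i in range(101)]
best = min(levels, key=total)
print(f"toy optimum: exchange fraction {best:.2f}, cost {total(best):.1f}")
# with these invented curves the optimum is an interior compromise (0.50),
# not either extreme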
On 22/02/13, "Levine, Daniel S" wrote:
> [...]
Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world.?Neural Networks, 37, 1-47. > > > > > > About minimum length: It's important to keep in mind the work of van Essen (1997, Nature, 385, 313-318) concerning his tension-based theory of morphogenesis and compact wiring, which clarifies how folds in the cerebral cortex may develop and make connections more compact; i.e., shorter. > > > > > > A possible role of tension in other developmental processes, such as in the formation during morphogenesis of a gastrula from a blastula, illustrates that such a mechanism may be used in biological systems other than brains. The article below describes such a process heuristically, also on my web page: > > > > > > Grossberg, S. (1978). Communication, Memory, and Development. In R. Rosen and F. Snell (Eds.),?Progress in theoretical biology, Volume 5.?New York: Academic Press, pp. 183-232.?See Sections XIV - XVI. > > > > > > About cortical columns: They are important, but no more important than the long-range horizontal interactions among columns that are ubiquitous in the cerebral cortex. Indeed, understanding how bottom-up, horizontal, and top-down interactions interact in neocortex has led to the paradigm of Laminar Computing, which attempts to clarify how specializations of this shared laminar design embody different types of biological intelligence, including vision, speech and language, and cognition. On my web page, articles with colleagues like Cao (2005), Raizada (2000, 2001), and Yazdanbakhsh (2005) for vision, Pearson (2008) for cognitive working memory and list chunking, and Kazerounian (2011) for speech perception illustrate this theme. > > > > > > Laminar Computing has begun to explain how the laminar design of neocortex may realize the best properties of feedforward and feedback processing, digital and analog processing, and bottom-up data-driven processing and top-down attentive hypothesis-driven processing. Embodying such designs into VLSI chips promises to enable the development of increasingly general-purpose adaptive autonomous algorithms for multiple applications. > > > > > > The existence and critical importance of long-range horizontal connections in neocortex raises the following issue: Why is the spatial resolution of columns as fine as it is? Why does not the long-range correlation length force the columns to become spatially more diffuse than they are? The following article on my web page suggests at least for the case of cortical area V! how the cortical subplate may play a role in this: > > > > > > Grossberg, S. and Seitz, A. (2003). Laminar development of receptive fields, maps, and columns in visual cortex: The coordinating role of the subplate.?Cerebral Cortex,?13, 852-863. > > > > > > Best, > > > > > > Steve Grossberg > > > > > > Wang Professor of Cognitive and Neural Systems > > Professor of Mathematics, Psychology, and Biomedical Engineering > > Director, Center for Adaptive Systems?http://www.cns.bu.edu/about/cas.html > > > > http://cns.bu.edu/~steve(http://cns.bu.edu/%7Esteve) > > steve at bu.edu > > > > > > > > > > > > > > > > > > > > > > > > On Feb 22, 2013, at 4:18 PM, Richard Loosemore wrote: > > > > > > > > > > > > > I hate to say this, but during discussions with fellow students back in 1987, I remember pointing out that it was not terribly surprising that the cortex consisted of columns (i.e. 
modules) with dense internal connectivity, with less-dense connections between columns -- not surprising, because the alternative was to try to make the brain less modular and connect every neuron in each column to all the neurons in all the other columns, and the result would be brains that were a million times larger than they are (due to all the extra wiring). > > > > > > > > > > > > The same logic applies in all systems where it is costly to connect every element to every other: the optimal connectivity is well-connected, tightly clustered groups of elements. > > > > > > > > > > > > During those discussions the point was considered so obvious that it sparked little comment. Ever since then I have told students in my lectures that this would be the evolutionary reason for cortical columns to exist. > > > > > > > > > > > > So I am a little confused now. Can someone explain what I am missing .........? > > > > > > > > > > > > Richard Loosemore > > > > > > Department of Physical and Mathematical Sciences, > > > > > > Wells College > > > > > > > > > > > > > > > > > > > > > > > > On 2/13/13 9:48 AM, Juergen Schmidhuber wrote: > > > > > > > The paper mentions that Santiago Ram?n y Cajal already pointed out that evolution has created mostly short connections in animal brains. > > > > > > > > > > > > > > > > > > > > > > Minimization of connection costs should also encourage modularization, e.g., > > > > http://arxiv.org/abs/1210.0118 (2012). > > > > > > > > > > > > > > > > > > > > > > But who first had such a wire length term in an objective function to be minimized by evolutionary computation or other machine learning methods? > > > > > > > > > > > I am aware of pioneering work by Legenstein and Maass: > > > > > > > > > > > > > > > > > > > > > > R. A. Legenstein and W. Maass. Neural circuits for pattern recognition with small total wire length. Theoretical Computer Science, 287:239-249, 2002. > > > > > > > > > > > R. A. Legenstein and W. Maass. Wire length as a circuit complexity measure. Journal of Computer and System Sciences, 70:53-72, 2005. > > > > > > > > > > > > > > > > > > > > > > Is there any earlier relevant work? Pointers will be appreciated. > > > > > > > > > > > > > > > > > > > > > > J?rgen Schmidhuber > > > > > > > > > > > http://www.idsia.ch/~juergen/whatsnew.html(http://www.idsia.ch/%7Ejuergen/whatsnew.html) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Feb 10, 2013, at 3:14 AM, Jeff Clune wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > Hello all, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I believe that many in the neuroscience community will be interested in a new paper that sheds light on why modularity evolves in biological networks, including neural networks. The same discovery also provides AI researchers a simple technique for evolving neural networks that are modular and have increased evolvability, meaning that they adapt faster to new environments. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Cite: Clune J, Mouret J-B, Lipson H (2013) The evolutionary origins of modularity. Proceedings of the Royal Society B. 280: 20122863. > > > > > http://dx.doi.org/10.1098/rspb.2012.2863 (pdf) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Abstract: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). 
A key driver of evolvability is the widespread modularity of biological networks?their organization as functional, sparsely connected subunits?but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering pu! > > > > > > > > > > > > > > > > > > > > > rposes. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Video: > > > > > > > > > > http://www.youtube.com/watch?feature=player_embedded&v=SG4_aW8LMng(http://www.youtube.com/watch?feature=player_embedded&v=SG4_aW8LMng) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > There has been some nice coverage of this work in the popular press, in case you are interested: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ? National Geographic: > > > > > > > > > > http://phenomena.nationalgeographic.com/2013/01/30/the-parts-of-life/(http://phenomena.nationalgeographic.com/2013/01/30/the-parts-of-life/) > > > > > > > > > > > > > > > > > > > > > ? MIT's Technology Review: > > > > > > > > > > http://www.technologyreview.com/view/428504/computer-scientists-reproduce-the-evolution-of-evolvability/(http://www.technologyreview.com/view/428504/computer-scientists-reproduce-the-evolution-of-evolvability/) > > > > > > > > > > > > > > > > > > > > > ? Fast Company: > > > > > > > > > > http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning(http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning) > > > > > > > > > > > > > > > > > > > > > ? Cornell Chronicle: > > > > > > > > > > http://www.news.cornell.edu/stories/Jan13/modNetwork.html(http://www.news.cornell.edu/stories/Jan13/modNetwork.html) > > > > > > > > > > > > > > > > > > > > > ? ScienceDaily: > > > > > > > > > > http://www.sciencedaily.com/releases/2013/01/130130082300.htm(http://www.sciencedaily.com/releases/2013/01/130130082300.htm) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I hope you enjoy the work. Please let me know if you have any questions. 
From weng at cse.msu.edu Sat Feb 23 16:54:12 2013
From: weng at cse.msu.edu (Juyang Weng)
Date: Sat, 23 Feb 2013 16:54:12 -0500
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <581625BB6C84AB4BBA1C969C69E269ECF92A4F4362@MAVMAIL2.uta.edu>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <5127E0AC.90900@susaro.com> <27724456-6F50-4D6E-A77D-D6FA917D52A4@cns.bu.edu>, <5128235D.7040509@cse.msu.edu> <581625BB6C84AB4BBA1C969C69E269ECF92A4F4362@MAVMAIL2.uta.edu>
Message-ID: <51293A84.4040001@cse.msu.edu>

Whether the brain is modular or not itself affects whether the system can reassign resources to other sensing or effector modalities when one such modality is lost. Existing neuroscience experiments have demonstrated that the brain does this well. However, not thinking in terms of modules at all liberates us to understand how the brain develops a wide array of amazing capabilities. For example, how does a brain perform general-purpose vision and generate vision-enabled behaviors?

Without recognition, detection is difficult; without detection, recognition is difficult. Recognition and detection are therefore a chicken-and-egg problem. By detection, we mean finding an object of interest against an unknown, cluttered natural background. By recognition, we mean recognizing what the object is once top-down attention has roughly determined its location and scale. This has been an open chicken-and-egg problem in computer vision and pattern recognition for over 50 years. Our Developmental Networks (DN) have shown how a network solves this chicken-and-egg problem. Interestingly, how the brain solves it is directly related to how the brain solves many other problems, such as top-down attention, spatiotemporal event recognition, language acquisition, and language understanding.

Best regards,

-John

On 2/22/13 10:20 PM, Levine, Daniel S wrote:
> Dear Steve, John, Richard et al.,
> I am reminded of a recent IJCNN when I heard Ali Minai describe a neural architecture for idea generation as "modular." Then about a day later I heard Steve describe another architecture for something else as "not modular."
> But both were describing a network composed of subnetworks with distinct functions, subnetworks that were not independent of one another but mutually interacting and strongly influencing one another. The same is true of all of my own model networks. In other words, Ali's "modularity" and Steve's "non-modularity" were essentially describing the same concept! Since then I have strenuously avoided use of the term "modular" as too ambiguous.
> Best,
> Dan Levine
> ------------------------------------------------------------------------
> *From:* connectionists-bounces at mailman.srv.cs.cmu.edu [connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang Weng [weng at cse.msu.edu]
> *Sent:* Friday, February 22, 2013 8:03 PM
> *To:* Stephen Grossberg
> *Cc:* steve Grossberg; connectionists
> *Subject:* Re: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
>
> Dear Richard, Steve and other connectionist colleagues,
>
> Many researchers have said that neuroscience today is rich in data and poor in theory. I agree. Unlike organs in the body, the brain is basically a signal processor. Therefore, it should have an overarching theory well explained in mathematics.
>
> However, I am probably in the minority in holding the following position. After coming up with an overarching theory of the brain, I started to disbelieve the modular view of the brain. A modular view of the brain is like categorizing plants by their apparent look instead of their genes.
>
> The apparent Brodmann areas in the brain should be largely due to the body organs (eyes, ears, skin, muscles, glands, etc.). The re-assignment of visual areas to other sensing modalities in the brain of a blind person seems to justify this theoretical view, since my theory explains why and how this re-assignment takes place. If my theory is correct, neuroscience textbooks should be written very differently in the future. Until then, few care to pay attention to this theory.
>
> Humbly,
>
> -Juyang Weng
> Juyang (John) Weng, Professor
> Department of Computer Science and Engineering
> MSU Cognitive Science Program and MSU Neuroscience Program
> 3115 Engineering Building
> Michigan State University
> East Lansing, MI 48824 USA
> Tel: 517-353-4388
> Fax: 517-432-1061
> Email: weng at cse.msu.edu
> URL: http://www.cse.msu.edu/~weng/
> On 2/22/13 8:02 PM, Stephen Grossberg wrote:
>> Dear Richard and other Connectionist colleagues,
>>
>> I think that it is important to clarify how the word "module" is being used. Many people think of modules as implying /independent/ modules that should be able to fully compute their particular processes on their own. However, much behavioral and neurobiological data argue against this possibility. The brain's organization into distinct anatomical areas and processing streams shows that brain processing is indeed specialized. However, specialization does not imply the kind of independence that modularity is often taken to imply. Then what is the nature of this specialization?
>>
>> /Complementary Computing/ concerns the proposal that pairs of parallel cortical processing streams compute complementary properties in the brain. Each stream has complementary computational strengths and weaknesses, much as in physical principles like the Heisenberg Uncertainty Principle. Each cortical stream can also possess multiple processing stages. These stages realize a /hierarchical resolution of uncertainty/.
"Uncertainty" here means that computing one set of >> properties at a given stage prevents computation of a complementary >> set of properties at that stage. Complementary Computing proposes >> that the computational unit of brain processing that has behavioral >> significance consists of parallel interactions between complementary >> cortical processing streams with multiple processing stages to >> compute complete information about a particular type of biological >> intelligence. It has been suggested that such complementary >> processing streams may arise from a hierarchical multi-scale process >> of morphogenetic symmetry-breaking. >> >> The concept of Complementary Computing arose as it gradually became >> clear, as a result of decades of behavioral and neural modeling, that >> essentially all biological neural models exhibit such complementary >> processes. Articles that provide examples of Complementary Computing >> can be found on my web page http://cns.bu.edu/~steve >> . They include: >> >> Grossberg, S. (2000). The complementary brain: Unifying brain >> dynamics and modularity. /Trends in Cognitive Sciences, /*4,* 233-246. >> >> Grossberg, S. (2012). Adaptive Resonance Theory: How a brain learns >> to consciously attend, learn, and recognize a changing world. /Neural >> Networks, /*37*, 1-47. >> >> About minimum length: It's important to keep in mind the work of van >> Essen (1997, Nature, 385, 313-318) concerning his tension-based >> theory of morphogenesis and compact wiring, which clarifies how folds >> in the cerebral cortex may develop and make connections more compact; >> i.e., shorter. >> >> A possible role of tension in other developmental processes, such as >> in the formation during morphogenesis of a gastrula from a blastula, >> illustrates that such a mechanism may be used in biological systems >> other than brains. The article below describes such a process >> heuristically, also on my web page: >> >> Grossberg, S. (1978). Communication, Memory, and Development. In R. >> Rosen and F. Snell (Eds.), *Progress in theoretical biology, Volume >> 5.* New York: Academic Press, pp. 183-232. See Sections XIV - XVI. >> >> About cortical columns: They are important, but no more important >> than the long-range horizontal interactions among columns that are >> ubiquitous in the cerebral cortex. Indeed, understanding how >> bottom-up, horizontal, and top-down interactions interact in >> neocortex has led to the paradigm of Laminar Computing, which >> attempts to clarify how specializations of this shared laminar design >> embody different types of biological intelligence, including vision, >> speech and language, and cognition. On my web page, articles with >> colleagues like Cao (2005), Raizada (2000, 2001), and Yazdanbakhsh >> (2005) for vision, Pearson (2008) for cognitive working memory and >> list chunking, and Kazerounian (2011) for speech perception >> illustrate this theme. >> >> Laminar Computing has begun to explain how the laminar design of >> neocortex may realize the best properties of feedforward and feedback >> processing, digital and analog processing, and bottom-up data-driven >> processing and top-down attentive hypothesis-driven processing. >> Embodying such designs into VLSI chips promises to enable the >> development of increasingly general-purpose adaptive autonomous >> algorithms for multiple applications. 
>> The existence and critical importance of long-range horizontal connections in neocortex raises the following issue: Why is the spatial resolution of columns as fine as it is? Why does the long-range correlation length not force the columns to become spatially more diffuse than they are? The following article on my web page suggests, at least for the case of cortical area V1, how the cortical subplate may play a role in this:
>>
>> Grossberg, S. and Seitz, A. (2003). Laminar development of receptive fields, maps, and columns in visual cortex: The coordinating role of the subplate. /Cerebral Cortex/, 13, 852-863.
>>
>> Best,
>>
>> Steve Grossberg
>>
>> Wang Professor of Cognitive and Neural Systems
>> Professor of Mathematics, Psychology, and Biomedical Engineering
>> Director, Center for Adaptive Systems
>> http://www.cns.bu.edu/about/cas.html
>> http://cns.bu.edu/~steve
>> steve at bu.edu
>>
>> On Feb 22, 2013, at 4:18 PM, Richard Loosemore wrote:
>>
>>> I hate to say this, but during discussions with fellow students back in 1987, I remember pointing out that it was not terribly surprising that the cortex consisted of columns (i.e. modules) with dense internal connectivity, with less-dense connections between columns -- not surprising, because the alternative was to try to make the brain less modular and connect every neuron in each column to all the neurons in all the other columns, and the result would be brains that were a million times larger than they are (due to all the extra wiring).
>>>
>>> The same logic applies in all systems where it is costly to connect every element to every other: the optimal connectivity is well-connected, tightly clustered groups of elements.
>>>
>>> During those discussions the point was considered so obvious that it sparked little comment. Ever since then I have told students in my lectures that this would be the evolutionary reason for cortical columns to exist.
>>>
>>> So I am a little confused now. Can someone explain what I am missing .........?
>>>
>>> Richard Loosemore
>>> Department of Physical and Mathematical Sciences,
>>> Wells College
>>>
>>> On 2/13/13 9:48 AM, Juergen Schmidhuber wrote:
>>>> The paper mentions that Santiago Ramón y Cajal already pointed out that evolution has created mostly short connections in animal brains.
>>>>
>>>> Minimization of connection costs should also encourage modularization, e.g., http://arxiv.org/abs/1210.0118 (2012).
>>>>
>>>> But who first had such a wire length term in an objective function to be minimized by evolutionary computation or other machine learning methods? I am aware of pioneering work by Legenstein and Maass:
>>>>
>>>> R. A. Legenstein and W. Maass. Neural circuits for pattern recognition with small total wire length. Theoretical Computer Science, 287:239-249, 2002.
>>>> R. A. Legenstein and W. Maass. Wire length as a circuit complexity measure. Journal of Computer and System Sciences, 70:53-72, 2005.
>>>>
>>>> Is there any earlier relevant work? Pointers will be appreciated.
>>>>
>>>> Jürgen Schmidhuber
>>>> http://www.idsia.ch/~juergen/whatsnew.html
>>>>
>>>> On Feb 10, 2013, at 3:14 AM, Jeff Clune wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> I believe that many in the neuroscience community will be interested in a new paper that sheds light on why modularity evolves in biological networks, including neural networks.
>>>>> The same discovery also provides AI researchers a simple technique for evolving neural networks that are modular and have increased evolvability, meaning that they adapt faster to new environments.
>>>>>
>>>>> Cite: Clune J, Mouret J-B, Lipson H (2013) The evolutionary origins of modularity. Proceedings of the Royal Society B. 280: 20122863. http://dx.doi.org/10.1098/rspb.2012.2863 (pdf)
>>>>>
>>>>> Abstract: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks -- their organization as functional, sparsely connected subunits -- but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes.
>>>>>
>>>>> Video: http://www.youtube.com/watch?feature=player_embedded&v=SG4_aW8LMng
>>>>>
>>>>> There has been some nice coverage of this work in the popular press, in case you are interested:
>>>>>
>>>>> National Geographic: http://phenomena.nationalgeographic.com/2013/01/30/the-parts-of-life/
>>>>> MIT's Technology Review: http://www.technologyreview.com/view/428504/computer-scientists-reproduce-the-evolution-of-evolvability/
>>>>> Fast Company: http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning
>>>>> Cornell Chronicle: http://www.news.cornell.edu/stories/Jan13/modNetwork.html
>>>>> ScienceDaily: http://www.sciencedaily.com/releases/2013/01/130130082300.htm
>>>>>
>>>>> I hope you enjoy the work. Please let me know if you have any questions.
>>>>>
>>>>> Best regards,
>>>>> Jeff Clune
>>>>>
>>>>> Assistant Professor
>>>>> Computer Science
>>>>> University of Wyoming
>>>>> jeffclune at uwyo.edu
>>>>> jeffclune.com

--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
3115 Engineering Building
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------
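Weng's detection/recognition chicken-and-egg problem above can be phrased as a fixed-point computation. The toy below is an editorial sketch, not the DN algorithm: the match scores, locations, and labels are invented, and the point is only that alternating "where" and "what" estimates can settle on a mutually consistent pair.

    # Editorial toy of the detection/recognition loop (not the DN
    # algorithm): alternate between a "where" estimate and a "what"
    # estimate until the two stop changing.

    # Hypothetical scores: how well each (location, label) pair matches.
    MATCH = {("left", "cat"): 0.2, ("left", "dog"): 0.9,
             ("right", "cat"): 0.7, ("right", "dog"): 0.1}
    LOCATIONS, LABELS = ["left", "right"], ["cat", "dog"]

    def best_location(label):
        # Detection given a tentative recognition.
        return max(LOCATIONS, key=lambda loc: MATCH[(loc, label)])

    def best_label(location):
        # Recognition given a tentative detection.
        return max(LABELS, key=lambda lab: MATCH[(location, lab)])

    label, location = "cat", None  # arbitrary initial guess
    for _ in range(10):
        new_location = best_location(label)
        new_label = best_label(new_location)
        if (new_location, new_label) == (location, label):
            break
        location, label = new_location, new_label
    print(location, label)  # settles on a mutually consistent pair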
From mjhealy at ece.unm.edu Sat Feb 23 17:55:49 2013
From: mjhealy at ece.unm.edu (Michael J Healy)
Date: Sat, 23 Feb 2013 15:55:49 -0700
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
Message-ID:

Dear Colleagues,

Tom Caudell and I are developing a theory of the structure-function relationship in neural networks which implies that the brain is organized into parallel hierarchies. We call this theory the Categorical Neural Semantic Theory (CNST). It is a mathematical model of the correspondence between the structure of knowledge and neural structure. The CNST is mostly theoretical at present, but it is consistent with the data we have been able to obtain, and it implies some interesting structure and phenomena. Part of this is the parallel hierarchy structure, which you may think of in terms of interconnected modules -- or not. Since the theory posits mathematically-based structures, it does not assume evolutionary or physiological constraints: it simply produces neural structure-to-function relationships based upon some basic neuroscience and takes off from there, and constraints arise "naturally".

Each hierarchy corresponds to a brain region associated with a major function, examples being major subdivisions of visual, motor, association, prefrontal and other cortices. The hierarchies are built up from constructs of category theory, principally colimits and limits. These constructs model concepts obtained by combining simpler ones through "concept blending", and by abstracting from more complex, specialized concepts according to their common use in yet more complex concepts. Starting with some examples of these constructs in its initial connectionist structure, the brain builds representations incrementally by "learning" further colimits and limits, each step building upon constructs formed in previous steps. The representations are associated with cells that respond to concepts representing phenomena in the sensor-motor environment or, in some regions (such as prefrontal regions), internally-generated concepts. Bundles of connection paths with shared active states represent concept relationships.

The interaction of the hierarchies is modelled via natural transformations between functors. The functors are structure-preserving mappings from a category of concepts and the relationships between them to a category of neural structures. The natural transformations model the activity of longer-range connections between regions, which consequently obey a rule we call "knowledge coherence".

Neither the idea of parallel hierarchies on the one hand, nor the use of category theory in brain modelling on the other, is unique to our theory. However, we are unique in the way we combine these ideas and use them in semantic modelling. We have some publications and technical reports on this accessible from my web site, http://www.ece.unm.edu/~mjhealy/ . These include:

M. J. Healy and T. P. Caudell (2010) Temporal Sequencing via Supertemplates, UNM Technical Report EECE-TR-10-0001, DspaceUNM, University of New Mexico.

M. J. Healy, R. D. Olinger, R. J. Young, S. E. Taylor, T. P. Caudell, and K. W. Larson (2009) Applying Category Theory to Improve the Performance of a Neural Architecture, Neurocomputing, vol. 72, pp. 3158-3173.

M. J. Healy, T. P. Caudell, and T. E. Goldsmith (2008) A Model of Human Categorization and Similarity Based Upon Category Theory, UNM Technical Report EECE-TR-08-0010, DSpaceUNM, University of New Mexico.

M. J. Healy and T. P. Caudell (2006a) Ontologies and Worlds in Category Theory: Implications for Neural Systems, Axiomathes, vol. 16, nos. 1-2, pp. 165-214.
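For readers unfamiliar with the categorical vocabulary in Healy's message, the sketch below is an editorial gloss, not the CNST formalism: it shows the simplest possible "concept blending" as a colimit (a pushout in the category of sets), where two feature-set concepts are glued along a shared core. The concepts and features are invented.

    # Editorial gloss on "concept blending" as a colimit: a pushout in
    # the category of sets, where two concepts are glued along a shared
    # sub-concept. Not the CNST formalism itself.

    def pushout(shared, concept_a, concept_b):
        """Blend two feature sets that overlap on a shared core.

        The result contains every feature of both concepts, with the
        shared core identified (counted once) -- the set-level pushout
        of the two inclusion maps out of `shared`.
        """
        assert shared <= concept_a and shared <= concept_b
        return concept_a | concept_b

    horse = {"quadruped", "mane", "tail"}
    bird  = {"wings", "feathers", "tail"}
    blend = pushout({"tail"}, horse, bird)   # a toy "pegasus" concept
    print(sorted(blend))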
From terry at salk.edu Sat Feb 23 22:49:09 2013
From: terry at salk.edu (Terry Sejnowski)
Date: Sat, 23 Feb 2013 19:49:09 -0800
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch>
Message-ID:

G. Mitchison, Neuronal branching patterns and the economy of cortical wiring, Proc. Roy. Soc. London B Biol. Sci. 245 (1991) 151-158.

D. B. Chklovskii, C. F. Stevens, Wiring optimization in the brain, Neural Information Processing Systems (1999).

Koulakov AA, Chklovskii DB. Orientation preference patterns in mammalian visual cortex: a wire length minimization approach. Neuron. 2001 Feb;29(2):519-27.

Chklovskii DB, Schikorski T, Stevens CF. Wiring optimization in cortical circuits. Neuron. 2002 Apr 25;34(3):341-7.

Terry

From t.j.prescott at sheffield.ac.uk Sun Feb 24 10:05:41 2013
From: t.j.prescott at sheffield.ac.uk (Tony Prescott)
Date: Sun, 24 Feb 2013 15:05:41 +0000
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To:
References: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch>
Message-ID:

Dear colleagues,

The Clune et al. article we are discussing mentions that selection for reduced connectivity could be a "spandrel" (the consequence of selection for something else) but does not explore this possibility in much depth. In the case of biological brains it is hard to see why low connectivity should be directly selected rather than arising through the need to keep a lid on the size and metabolic cost of maintaining the brain. A 1991 paper by Ringo (http://www.ncbi.nlm.nih.gov/pubmed/1657274) shows that larger brains cannot maintain the same degree of inter-connectedness as smaller ones: long-range connections are necessarily sparser if an increase in neuron count is not to give rise to an exponential increase in brain size. Reduced connectivity is therefore an architectural constraint for larger brains, in a way not too dissimilar to the need for spandrels in cathedral domes (as discussed by Gould, 1979).

An important consideration for biological brains is connection length.
Leise 1990 (http://www.ncbi.nlm.nih.gov/pubmed/2194614) provides a useful summary of the reasons why nervous systems are composed of physically modular components with a high number of short-range connections and a low number of longer-range ones. As the literature on small-world networks shows, however, it is important not to assume that physical modularity requires functional modularity. Appropriate sparse connectivity can allow fast communication and synchronisation across large networks that can support distributed functional modules.

Regards,

Tony Prescott

------------------------------------------------------------------------------------------------------------------------------------------------------
Consider submitting to Living Machines II (http://csnetwork.eu/livingmachines/conf2013)
The 2nd International Conference on Biomimetic and Biohybrid Systems, 29th July to 2nd August 2013, Natural History Museum, London.
Deadline for papers March 15th, 2013.
------------------------------------------------------------------------------------------------------------------------------------------------------
Tony J Prescott, Professor of Cognitive Neuroscience
Department of Psychology, Western Bank, Sheffield, S10 2TN, United Kingdom.
email: t.j.prescott at sheffield.ac.uk, skype: tonyjprescott, twitter: shefrobotics
phone: +44 114 2226547, fax: +44 114 2766515
http://www.shef.ac.uk/psychology/staff/academic/tony-prescott
http://www.abrg.group.shef.ac.uk/people/tony/
Director, Sheffield Centre for Robotics (SCentRo) (http://www.scentro.ac.uk/)
Director, Active Touch Laboratory (ATL at S) (http://www.shef.ac.uk/psychology/research/groups/atlas)
Co-director, Adaptive Behaviour Research Group (ABRG) (http://www.abrg.group.shef.ac.uk/)
Co-Chair, Living Machines 2013 (http://csnetwork.eu/livingmachines/conf2013)
------------------------------------------------------------------------------------------------------------------------------------------------------
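The connection-cost idea running through this thread (the wire-length objectives Schmidhuber asks about, and Clune et al.'s selection for performance plus connection cost) can be written down in a few lines. The sketch below is editorial: the node layout, edge list, and trade-off constant are hypothetical, and real experiments evolve the edge list rather than scoring a fixed one.

    # Minimal sketch (editorial) of a wire-length term in an objective:
    # score a candidate network by performance minus a connection-cost
    # penalty, in the spirit of Clune et al. (2013).
    import math

    POSITIONS = {0: (0, 0), 1: (0, 1), 2: (5, 0), 3: (5, 1)}  # node layout
    EDGES = [(0, 1), (2, 3), (0, 2)]  # connections in a candidate network

    def wire_length(edges):
        # Total Euclidean length of all connections.
        total = 0.0
        for a, b in edges:
            (xa, ya), (xb, yb) = POSITIONS[a], POSITIONS[b]
            total += math.hypot(xa - xb, ya - yb)
        return total

    def fitness(performance, edges, lam=0.1):
        # Maximize performance AND minimize connection cost;
        # lam sets the trade-off between the two pressures.
        return performance - lam * wire_length(edges)

    print(fitness(1.0, EDGES))  # the long edge (0, 2) is penalized most

Under such an objective, short within-cluster edges are cheap and long between-cluster edges are expensive, which is exactly the pressure toward modular wiring that the quoted abstract reports.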
From terry at salk.edu Mon Feb 25 16:40:02 2013
From: terry at salk.edu (Terry Sejnowski)
Date: Mon, 25 Feb 2013 13:40:02 -0800
Subject: Connectionists: NEURAL COMPUTATION - February 1, 2013
In-Reply-To:
Message-ID:

Neural Computation - Contents -- Volume 25, Number 2 - February 1, 2013

Article

Impact of Correlated Neural Activity on Decision Making Performance
  Nicholas H. Cain, Eric Shea-Brown

Letters

Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors
  Auke Jan Ijspeert, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, and Stefan Schaal

Stochastic Optimal Control as a Theory of Brain-Machine Interface Operation
  Manuel Lagang, Lakshminarayan Srinivasan

Accelerated Spike Resampling for Accurate Multiple Testing Controls
  Matthew Harrison

A Spike-Timing Based Integrated Model for Pattern Recognition
  Jun Hu, Huajin Tang, K. C. Tan, Haizhou Li, and Luping Shi

Supervised Learning in Multilayer Spiking Neural Networks
  Ioana Sporea, Andre Gruening

Temporal Order Detection and Coding in Nervous Systems
  Klaus M. Stiefel, Jonathan Tapson, and Andre van Schaik

The Kernel Semi-Least Squares Method for Sparse Distance Approximation
  Margrit Betke, Samuel Epstein

Pavlov's Dog Associative Learning Demonstrated on Synaptic-like Organic Transistors
  Olivier Bichler, Weisheng Zhao, Fabien Alibart, Stephane Pleutin, Stephane Lenfant, Dominique Vuillaume, and Christian Gamrat

------------

ON-LINE -- http://www.mitpressjournals.org/neuralcomp

SUBSCRIPTIONS - 2013 - VOLUME 25 - 12 ISSUES

                     USA      Others    Electronic Only
Student/Retired      $70      $193      $65
Individual           $124     $187      $115
Institution          $1,035   $1,098    $926

Canada: Add 5% GST

MIT Press Journals, 238 Main Street, Suite 500, Cambridge, MA 02142-9902
Tel: (617) 253-2889  FAX: (617) 577-1545  journals-orders at mit.edu

------------

From schierwa at informatik.uni-leipzig.de Tue Feb 26 06:01:04 2013
From: schierwa at informatik.uni-leipzig.de (A.S.)
Date: Tue, 26 Feb 2013 12:01:04 +0100
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To:
References: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch>
Message-ID: <512C95F0.6060104@informatik.uni-leipzig.de>

Dear colleagues,

When we look for reasons why modules are formed, we have assumed from the outset that modularity is a ubiquitous property of the brain (of neural systems) as a cognitive system. Next, the localization principle often comes into play, assuming a one-to-one relationship between corresponding structural and functional modules. The hypothesis of the columnar organization of the cerebral cortex basically rests on this idea: columns are structural modules computing certain basis functions, and the columnar network computes any reasonable (cognitive) function (e.g. Maass and Markram's 2006 model of how cortical microcircuits compute cognitive functions).
Along these lines of thinking, the method of reverse engineering works as follows:

1. Capacity analysis: Specify a certain cognitive capacity which is assumed to be produced by the cortex through computing a certain function.

2. Decompositional analysis:
(a) Functional (computational) analysis: Select a set of basis functions which might serve as functional components or computational units in the cortex.
(b) Structural analysis: Identify a set of anatomical components of the cortex. Provide evidence that cortical microcircuits are the anatomical components of the cortex.

3. Localization: Provide evidence for the functional/computational components being linked with the anatomical components.

4. Synthesis:
(a) Modeling:
i. Establish a structurally adequate functional model of the computational unit (the presumed 'canonical circuit') which computes the basis functions of step 2(a).
ii. Build a structurally adequate network model of the cortex (or some subsystem) composed of the canonical circuit models.
(b) Simulation: Prove that the specific cognitive capacity or function under study is computed by the network of circuit models, i.e. through superposition of the specified basis functions.

This 'recipe' -- if reasonable -- would be fine. We know, however, that there are serious problems with this method. In short:

@ 1. Specification of a cognitive capacity: This requires a taxonomy of cognitive processes, which is out of sight, as is obvious from recent attempts to build cognitive ontologies.

@ 2.-3. Decomposition and localization: It has been impossible to find the cortical microcircuit that computes a specific basis function. No genetic mechanism has been deciphered that designates how to construct a column. The column structures encountered in many species (but not in all) seem to represent spandrels.

@ 4. Synthesis / proof by simulation: Sure, producing and understanding complex phenomena from the interaction of simple nonlinear elements like artificial neurons or cellular automata is possible. One expects, then, that this would also work for cortical circuits, which are recognized as nonlinear devices, and that theories could be applied (or developed, if not yet available) to guide us to which model setup might have generated a given network behavior. However, inverse problems in complex systems (which processes caused a specific complex behavior of a given system?) are hard because of ill-posedness. Thus, from the observed activity or function of cortical circuits and networks we cannot, in principle, infer the internal organization, and it is not possible to prove that the particular cognitive capacity under study is generated by the network model.

My conclusion is: In cognitive/computational neuroscience we (should) deal with complex, integrated systems. This means there is no "natural" way to decompose or modularize the brain, neither structurally nor functionally!

Details of the arguments can be found here:

Schierwagen, A.: On Reverse Engineering in the Cognitive and Brain Sciences. Natural Computing 11 (2012), 141-150, doi:10.1007/s11047-012-9306-0

Best wishes,

Andreas
---------------------------------------------------------------
Prof. Dr. Andreas Schierwagen
Universität Leipzig, Institut für Informatik, Germany
http://www.informatik.uni-leipzig.de/~schierwa/
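Schierwagen's ill-posedness point can be illustrated with a deliberately trivial example (editorial, not from his paper): two "circuits" with different internal decompositions that are indistinguishable from input-output behavior alone, so behavior cannot, by itself, identify the internal organization.

    # Editorial illustration of the ill-posed inverse problem: two toy
    # circuits with different internal structure produce identical
    # input-output behavior over the tested range.

    def circuit_one(x):
        # One stage: multiply by 6.
        return 6 * x

    def circuit_two(x):
        # Two cascaded stages: multiply by 2, then by 3.
        return 3 * (2 * x)

    for x in range(-3, 4):
        assert circuit_one(x) == circuit_two(x)
    print("identical I/O, different internal decomposition")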
From bower at uthscsa.edu Tue Feb 26 15:04:01 2013
From: bower at uthscsa.edu (james bower)
Date: Tue, 26 Feb 2013 14:04:01 -0600
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <512C95F0.6060104@informatik.uni-leipzig.de>
References: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <512C95F0.6060104@informatik.uni-leipzig.de>
Message-ID:

Wonderful to see a real conversation on connectionists (instead of an endless stream of meeting and postdoc announcements). Reminds me of the old days. :-)

Speaking of which, now historical and written for a more general audience, but it might be of interest to some:

Nelson, M.E. and Bower, J.M. 1990. Brain maps and parallel computers. Trends in Neuroscience 13: 403-408.

Also, I think it is probably important to consider the extent to which we, scientists, like modularity and all it implies and allows. If you look carefully at the literature, including even recent data published by the Blue Brain Project itself, it turns out that a number of assumptions about local connectivity in neocortical networks may not hold. To quote: "Contrary to expectations, we found that the average number of connections in groups of six or more neurons initially increased rather than decreased monotonically with mean inter-somatic distance" (Perin, Berger, and Markram, PNAS 108(13): 5419-5424). While these authors still extrapolated their data to support the existence of a columnar structure, it is very likely that, even in neocortex, the large majority of synaptic inputs to individual neurons are "extra-columnar". Nevertheless, we remain committed to the functional concept of cortical columns. Why?

For more than 30 years now I (and others) have been suggesting that 3-layered cortex (including especially olfactory structures) may be a better model for thinking about cortical architecture (and core algorithms) than neocortex (and especially visual cortex). There are good reasons to suggest so, evolutionarily and even computationally. The number of sparse but distant connections is no surprise to those of us who study 'old cortex', where, despite many efforts, there is no evidence for a columnar structure.

In the long run, it is my guess that the concepts of 'modules' and, accordingly, cortical columns will be seen as theoretically (financially??) convenient rather than a reflection of cortical reality.

Jim Bower

Beware Ptolemy

Dr. James M. Bower Ph.D.
Professor of Computational Neurobiology
Barshop Institute for Longevity and Aging Studies
15355 Lambda Drive
University of Texas Health Science Center
San Antonio, Texas 78245
Phone: 210 382 0553
Email: bower at uthscsa.edu
Web: http://www.bower-lab.org
twitter: superid101
linkedin: Jim Bower
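Bower's claim that most synaptic inputs may be extra-columnar follows from simple geometry whenever connection probability does not fall off steeply with distance: the number of potential partners grows with the square of the radius. The back-of-envelope sketch below is editorial; all radii, densities, and probabilities are hypothetical placeholders, not measurements.

    # Back-of-envelope sketch (editorial; all numbers hypothetical):
    # with connection probability roughly flat over distance, most of a
    # neuron's inputs come from outside its column, because the pool of
    # potential partners grows with the area of each ring of tissue.
    import math

    COLUMN_RADIUS = 0.3   # mm, hypothetical column radius
    MAX_RADIUS = 2.0      # mm, extent of local connectivity
    DENSITY = 1000.0      # potential partners per mm^2, hypothetical

    def expected_inputs(r_inner, r_outer, p_connect=0.1):
        # Expected inputs from the annulus between the two radii.
        ring_area = math.pi * (r_outer**2 - r_inner**2)
        return p_connect * DENSITY * ring_area

    intra = expected_inputs(0.0, COLUMN_RADIUS)
    extra = expected_inputs(COLUMN_RADIUS, MAX_RADIUS)
    print(f"intra-columnar: {intra:.0f}, extra-columnar: {extra:.0f}")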
From A.GARCEZ at city.ac.uk Tue Feb 26 15:34:40 2013
From: A.GARCEZ at city.ac.uk (Garcez, Artur)
Date: Tue, 26 Feb 2013 20:34:40 +0000
Subject: Connectionists: Call for Papers: 9th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy13)
Message-ID: <86C6ABFFF3D1474991561FA1541557BA0147B8@DBXPRD0311MB408.eurprd03.prod.outlook.com>

Second Call for Papers

9th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy13)
3-4 Aug 2013
http://neural-symbolic.org/NeSy13
In conjunction with IJCAI-13, Beijing, China

Artificial Intelligence continues to face big challenges in its quest for adaptable intelligent systems. The recent developments in the field of neural-symbolic computing bring an opportunity to integrate well-founded symbolic reasoning with robust neural computing and learning to tackle these challenges. The Workshop on Neural-Symbolic Learning and Reasoning is intended to create an atmosphere of exchange of ideas, providing a forum for the presentation and discussion of the key topics related to neural-symbolic integration, including:

Representation of symbolic knowledge by connectionist systems;
New neural-symbolic learning approaches;
Extraction of symbolic knowledge from trained neural networks;
New neural-symbolic reasoning approaches;
Neural-symbolic cognitive models and agents;
Biologically-inspired neural-symbolic systems;
Knowledge-based SVMs and deep networks;
Structured learning and relational learning in connectionist systems;
Applications in robotics, simulation, fraud prevention, semantic web, software verification and adaptation, fault diagnosis, bioinformatics, visual intelligence, language processing, etc.

Submission

Researchers and practitioners are invited to submit original papers that have not been submitted for review or published elsewhere. Submitted papers must be written in English and should not exceed 6 pages in the case of research and experience papers, or 4 pages in the case of position papers (including figures, bibliography and appendices). All submitted papers will be judged based on their relevance, originality, significance, technical quality and organisation. Papers must be submitted through easychair at https://www.easychair.org/conferences/?conf=nesy13.

Presentation

Selected papers will be presented during the workshop. The workshop will include extra time for discussion of the presentations, allowing the group to have a better understanding of the issues, challenges, and ideas being presented.

Publication

Accepted papers will be published in official workshop proceedings, which will be distributed during the workshop.
Authors of the best papers will be invited to submit a revised and extended version of their papers to the Journal of Logic and Computation, Oxford University Press.

Important Dates

Paper submission deadline: March 15, 2013
Notification of result: April 19, 2013
Camera-ready papers due: May 3, 2013
Workshop day: 3 or 4 Aug 2013
IJCAI-13 main conference: Aug 3-9, 2013

Workshop Organisers:

Artur d'Avila Garcez (City University London, UK)
Pascal Hitzler (Wright State University, OH, USA)
Luis Lamb (Universidade Federal do Rio Grande do Sul, Brazil)

Additional Information

General questions concerning the workshop should be addressed to a.garcez at city.ac.uk
For additional information, please visit the workshop series website at http://www.neural-symbolic.org/
Please join the neural-symbolic mailing list (http://maillists.city.ac.uk/mailman/listinfo/nesy) for announcements and discussions - it's a low traffic list.

----------------------------------------------------------------------------------
Dr. Artur d'Avila Garcez, FBCS
Reader in Neural-Symbolic Computing
Department of Computer Science
City University London
Email: a.garcez at city.ac.uk
URL: http://www.soi.city.ac.uk/~aag
----------------------------------------------------------------------------------

From alexei at bicasymposium.com Sun Feb 24 20:52:33 2013
From: alexei at bicasymposium.com (Alexei Samsonovich)
Date: Sun, 24 Feb 2013 20:52:33 -0500
Subject: Connectionists: BICA 2013 Call for Papers
In-Reply-To: References: <4C98FE99-693F-4A26-B7E4-FE2BC69E9A00@bicasymposium.com> <66AB5340-9FEA-4BDF-9DA5-32F3F81B5B45@bicasymposium.com>
Message-ID: <84395269-D523-4A9C-8DE6-819F16F37471@bicasymposium.com>

Greetings! I am delighted to announce open submission to BICA 2013.

WHAT: Annual International Conference on Biologically Inspired Cognitive Architectures
WHERE: Kiev, Ukraine, Intercontinental hotel
WHEN: Saturday-Sunday, September 14-15, 2013

Submission deadline is March 15. Two volumes of proceedings will be published, and you can select where to publish: in the Elsevier journal BICA or in the highly indexed volume of Procedia. Call for Papers and further details are posted at the conference web site: http://bicasociety.org/meetings/2013/

Summary: The challenge of creating a real-life computational equivalent of the human mind calls for our joint efforts to develop biologically-inspired intelligent agents and co-robots that can be accepted and trusted by the human society as partners in various roles. To do this, we need to better understand at a computational level how natural intelligent systems develop their cognitive and learning functions.

Please reply and let us know whether you are planning to submit a paper or an abstract - or have any questions.

Cheers,
-Alexei

--
Alexei V. Samsonovich, Ph.D., Co-Chair of BICA 2013
Editor-in-Chief, Biologically Inspired Cognitive Architectures (Elsevier)
Research Assistant Professor, George Mason University at Fairfax VA

P.S. Please do circulate and forward to whom it may be relevant. Apologies for cross-posting.
From bengioy at iro.umontreal.ca Sat Feb 16 10:59:10 2013
From: bengioy at iro.umontreal.ca (Yoshua Bengio)
Date: Sat, 16 Feb 2013 10:59:10 -0500
Subject: Connectionists: machine learning + optimization excellence chair position in Montreal
Message-ID: <90A6F90E-BAD6-43AD-A131-35E624C0DE97@iro.umontreal.ca>

Hello,

The Canadian government has recently created a very selective 'Excellence Chairs' program, and we (the U. Montreal campus, including the Polytechnique engineering school and the HEC business school) have won the competition with a proposal for a chair in the area at the intersection/union of machine learning and optimization / operations research, with the title 'Data Science for Real-Time Decision Making'. We are now looking for a candidate for the chair, at full-professor level, with a very strong research track record and leadership skills. The chair comes with a very good salary and a big lump of money for 7 years (10 M$ for direct research costs, up to 30 M$ in total including infrastructure and equipment costs). There is an official announcement with more details here: http://oraweb.aucc.ca/pls/ua/ua_rf3?advertisement_number_in=26498 but please do not hesitate to contact me if you want to learn more or if you know somebody who might be a good candidate. Not knowing French is not an obstacle.

-- Yoshua Bengio

From boularias at gmail.com Mon Feb 25 14:46:23 2013
From: boularias at gmail.com (Abdeslam Boularias)
Date: Mon, 25 Feb 2013 20:46:23 +0100
Subject: Connectionists: ICML 2013 Workshop on Machine Learning For System Identification
Message-ID:

------------------------------------------------------------------------------------------------
ICML 2013 Workshop on Machine Learning For System Identification
http://mlsysid.tuebingen.mpg.de/
------------------------------------------------------------------------------------------------

Call for Posters and Papers

We solicit submission of extended abstracts or papers discussing high quality research on all aspects of dynamical system modeling and identification with machine learning tools. Both theoretical and applied contributions presenting recent or ongoing research are welcome. The list of tools, problems, and applications includes, but is not limited to, the following:

Tools: kernel methods, regularization techniques, Bayesian estimation, deep learning, manifold learning, spectral methods, causal inference, active learning, reinforcement learning.

Problems: predictor estimation, state-space model identification, impulse response modeling, non-linear system modeling, experiment design.

Applications: robotics, automotive, process control, motion tracking, system biology, computational sustainability.

Submissions and Publication

An extended abstract suffices for a poster submission. Additionally, we welcome position papers, as well as papers discussing open problems and potential future research directions. Both extended abstracts and position/future research papers will be reviewed by program committee members on the basis of relevance, significance, and clarity. Submissions should be formatted according to the ICML 2013 conference template. The length of abstracts and papers should not exceed 8 pages.
Submission website: https://www.easychair.org/conferences/?conf=mlsysid2013

Important Dates

Mar 20, 2013 - Deadline of Submission
Apr 15, 2013 - Notification of Acceptance
June 20-21, 2013 - Workshop

-------------------------------------------------------------------------------
Francesco Dinuzzo, Abdeslam Boularias and Lennart Ljung

From contact2013 at ecvp.uni-bremen.de Tue Feb 26 06:28:41 2013
From: contact2013 at ecvp.uni-bremen.de (ECVP2013)
Date: Tue, 26 Feb 2013 12:28:41 +0100
Subject: Connectionists: ECVP 2013 - Announcing Grants & Showtime
Message-ID: <016701ce1414$6df46e00$49dd4a00$@ecvp.uni-bremen.de>

The 36th European Conference on Visual Perception (ECVP) will take place in Bremen, Germany, from August 25th to August 29th 2013.

----------------------------- TRAVEL GRANTS -----------------------------

ECVP is offering a number of Travel Grants. Application is open via http://www.ecvp.uni-bremen.de/node/42. Travel grants will be awarded competitively to a limited number of attendees. Please submit your application via our Web-site, according to the guidelines stated there. For any inquiries please mail to ecvptravel at ecvp.uni-bremen.de. Check also the call for the TOM TROSCIANKO AWARD, announced this year for the first time, at http://www.ecvp.uni-bremen.de/node/57.

-------------------------------- SHOWTIME --------------------------------

Following on from its success last year, we will be holding a second "ShowTime" at this year's ECVP in Bremen. This will be an opportunity for anyone attending the meeting to present a demonstration or exhibit that is related to their research. We would like to invite all those coming to the meeting to submit a brief proposal (100 words max) of what they would like to show or demonstrate. This might be a new visual effect (that could be related to a presentation actually scheduled as a paper or poster at the conference); a well-known illusion; an old effect presented in a new way; a novel piece of equipment or apparatus; or something artistic that is visually exciting. We would like to encourage many submissions, and we should be able to help you if your "Show" needs a projector, screen, poster-board or similar equipment. Please email your proposal to showtime2013 at ecvp.uni-bremen.de giving approximate details of how much space would be needed and what (if any) facilities you would like the organisers to provide. (There will be a number of small rooms that can be blacked out completely if this is essential to your demonstration.) Please send your proposal no later than June 15th to provide us sufficient time for setting up the event. ECVP2013 "ShowTime" will be held on Tuesday evening (August 27th). There will be two prizes for the best "Shows" - a Richard Gregory prize for the most amusing demonstration and a Tom Troscianko prize for the most outrageous exhibit.
ECVP 2013 Organizing Committee
Udo Ernst | Cathleen Grimsen | Detlef Wegener | Agnes Janssen
Universitaet Bremen / University of Bremen
Zentrum fuer Kognitionswissenschaften / Center for Cognitive Sciences
Hochschulring 18
28359 Bremen / Germany
Website: www.ecvp.uni-bremen.de
Facebook: www.facebook.com/EuropeanConferenceOnVisualPerception

Contact - emails:
contact2013 at ecvp.uni-bremen.de (For any comments, questions or suggestions)
abstracts2013 at ecvp.uni-bremen.de (For questions regarding abstract submission)
showtime2013 at ecvp.uni-bremen.de (For submitting proposals for SHOWTIME)
symp2013 at ecvp.uni-bremen.de (For organization and submission of symposia)
exhibition2013 at ecvp.uni-bremen.de (For any query regarding the exhibition)

From emj at uci.edu Mon Feb 25 13:24:22 2013
From: emj at uci.edu (Eric Mjolsness)
Date: Mon, 25 Feb 2013 10:24:22 -0800
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch>
Message-ID:

Here are two old and possibly relevant references that don't quite answer Juergen's question on neural wirelength optimization in evolutionary computation and machine learning.

Wirelength minimization was a key criterion for the hand-design of an artificial neural network architecture in my 1985 PhD thesis: "Neural networks, pattern recognition, and fingerprint hallucination", PhD thesis, posted at http://emj.ics.uci.edu/?page_id=93.

Later my collaborators and I experimented with genotype-phenotype automated design of artificial neural network architectures [1989], but we replaced the wirelength objective with a more easily computable "parsimony cost" that measured the genotypic information content required to specify a network. We suggested but didn't adopt an additional L1 wiring sparseness objective.

[1989] "Scaling, Machine Learning, and Genetic Neural Nets", Eric Mjolsness, David H. Sharp, and Bradley K. Alpert. Advances in Applied Mathematics, June 1989, available at http://computableplant.ics.uci.edu/papers/1989/ScalingMachineLearningGNN.pdf.

- Eric Mjolsness

>The paper mentions that Santiago Ramón y Cajal already pointed out that evolution has created mostly short connections in animal brains.
>
>Minimization of connection costs should also encourage modularization, e.g., http://arxiv.org/abs/1210.0118 (2012).
>
>But who first had such a wire length term in an objective function to be minimized by evolutionary computation or other machine learning methods? I am aware of pioneering work by Legenstein and Maass:
>
>R. A. Legenstein and W. Maass. Neural circuits for pattern recognition with small total wire length. Theoretical Computer Science, 287:239-249, 2002.
>R. A. Legenstein and W. Maass. Wire length as a circuit complexity measure. Journal of Computer and System Sciences, 70:53-72, 2005.
>
>Is there any earlier relevant work? Pointers will be appreciated.
>
>Jürgen Schmidhuber
>http://www.idsia.ch/~juergen/whatsnew.html
>
>On Feb 10, 2013, at 3:14 AM, Jeff Clune wrote:
>
>>Hello all,
>>
>>I believe that many in the neuroscience community will be interested in a new paper that sheds light on why modularity evolves in biological networks, including neural networks.
>>The same discovery also provides AI researchers a simple technique for evolving neural networks that are modular and have increased evolvability, meaning that they adapt faster to new environments.
>>
>>Cite: Clune J, Mouret J-B, Lipson H (2013) The evolutionary origins of modularity. Proceedings of the Royal Society B. 280: 20122863. http://dx.doi.org/10.1098/rspb.2012.2863 (pdf)
>>
>>Abstract: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks - their organization as functional, sparsely connected subunits - but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes.
>>
>>Video: http://www.youtube.com/watch?feature=player_embedded&v=SG4_aW8LMng
>>
>>There has been some nice coverage of this work in the popular press, in case you are interested:
>>
>>* National Geographic: http://phenomena.nationalgeographic.com/2013/01/30/the-parts-of-life/
>>* MIT's Technology Review: http://www.technologyreview.com/view/428504/computer-scientists-reproduce-the-evolution-of-evolvability/
>>* Fast Company: http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning
>>* Cornell Chronicle: http://www.news.cornell.edu/stories/Jan13/modNetwork.html
>>* ScienceDaily: http://www.sciencedaily.com/releases/2013/01/130130082300.htm
>>
>>I hope you enjoy the work. Please let me know if you have any questions.
>>
>>Best regards,
>>Jeff Clune
>>
>>Assistant Professor
>>Computer Science
>>University of Wyoming
>>jeffclune at uwyo.edu
>>jeffclune.com

--
Eric Mjolsness
Professor of Computer Science and Mathematics
Director, Center for Computational Morphodynamics
University of California, Irvine
emj at uci.edu
http://www.ics.uci.edu/~emj

From j.a.bullinaria at cs.bham.ac.uk Mon Feb 25 11:32:26 2013
From: j.a.bullinaria at cs.bham.ac.uk (John Bullinaria)
Date: Mon, 25 Feb 2013 16:32:26 +0000
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: References: Message-ID:

Further references along these lines can be found in my own attempts to understand this topic by simulating how restricted neural connectivity proportions can lead to the evolution of modularity:

Bullinaria, J.A. (2007). Understanding the Emergence of Modularity in Neural Systems. Cognitive Science, 31, 673-695.

Bullinaria, J.A. (2009). The Importance of Neurophysiological Constraints for Modelling the Emergence of Modularity. In: D. Heinke & E. Mavritsaki (Eds), Computational Modelling in Behavioural Neuroscience: Closing the Gap Between Neurophysiology and Behaviour, 187-208. Hove, UK: Psychology Press.
(http://www.cs.bham.ac.uk/~jxb/PUBS/INCMEM.pdf)

John.

On 24 Feb, 2013, at 03:49, Terry Sejnowski wrote:

> G. Mitchison, Neuronal branching patterns and the economy of cortical wiring, Proc. Roy. Soc. London B Biol. Sci. 245 (1991) 151-158
>
> D.B. Chklovskii, C.F. Stevens, Wiring optimization in the brain, Neural Information Processing Systems (1999)
>
> Koulakov AA, Chklovskii DB. Orientation preference patterns in mammalian visual cortex: a wire length minimization approach. Neuron. 2001 Feb;29(2):519-27.
>
> Chklovskii DB, Schikorski T, Stevens CF. Wiring optimization in cortical circuits. Neuron. 2002 Apr 25;34(3):341-7.
>
> Terry
>
> -----
>
>> The paper mentions that Santiago Ramón y Cajal already pointed out that evolution has created mostly short connections in animal brains.
>>
>> Minimization of connection costs should also encourage modularization, e.g., http://arxiv.org/abs/1210.0118 (2012).
>>
>> But who first had such a wire length term in an objective function to be minimized by evolutionary computation or other machine learning methods? I am aware of pioneering work by Legenstein and Maass:
>>
>> R. A. Legenstein and W. Maass. Neural circuits for pattern recognition with small total wire length. Theoretical Computer Science, 287:239-249, 2002.
>> R. A. Legenstein and W. Maass. Wire length as a circuit complexity measure. Journal of Computer and System Sciences, 70:53-72, 2005.
>>
>> Is there any earlier relevant work? Pointers will be appreciated.
>>
>> Juergen Schmidhuber
>> http://www.idsia.ch/~juergen/whatsnew.html

From juha.karhunen at aalto.fi Tue Feb 26 05:48:34 2013
From: juha.karhunen at aalto.fi (Karhunen Juha)
Date: Tue, 26 Feb 2013 10:48:34 +0000
Subject: Connectionists: Postdoc in Deep learning at Aalto University, DL March 25
Message-ID: <14AC457F5BC1FC44848DCF25348EB18BDD85ADB0@EXMDB01.org.aalto.fi>

POSTDOC IN DEEP LEARNING AT AALTO UNIVERSITY, DL MARCH 25

The Deep learning and Bayesian modeling research group at Aalto University (formerly Helsinki Univ. of Technology) is seeking a qualified postdoctoral researcher to strengthen the group and develop new ideas on deep learning, independently and in collaboration with the other members of the group. The group is located at the Dept. of Information and Computer Science in Espoo near Helsinki, Finland. The new postdoc position is initially for two years.

The research group has published new results on deep learning in high-quality international conferences and journals, and has connections to leading researchers in the world. The group is led by Prof. Juha Karhunen and its leading researcher is Dr. Tapani Raiko. For more information on the research group, see http://research.ics.aalto.fi/bayes/

The deadline for the applications is March 25, 2013. Detailed information on the application procedure, required documents, salary level, and other matters is available at http://dept.ics.aalto.fi/calls/postdocs2013/

If you have questions, you can contact Prof. Juha Karhunen, email juha.karhunen at aalto.fi.
From lucy.davies4 at plymouth.ac.uk Tue Feb 26 08:19:22 2013
From: lucy.davies4 at plymouth.ac.uk (Lucy Davies)
Date: Tue, 26 Feb 2013 13:19:22 +0000
Subject: Connectionists: Bilingual minds, bilingual machines: Summer school
Message-ID:

Summer school: Bilingual minds, bilingual machines
Cognition Institute, Plymouth University, 24th-28th June 2013

Call for applications (participation and supported places)

The young human brain copes remarkably well with two or more languages, with early bilingualism often resulting in perfect parallel proficiency. In contrast, automatic translation and language recognition systems remain suboptimal, despite powerful computing resources, large-scale corpora and increasingly sophisticated training algorithms. The summer school is a unique opportunity for postgraduate students and early-career researchers in psychology and computer science to engage over common issues informed by both disciplines. How can we model bilingual processing? How are two phonologies represented? How is meaning related to word forms? How far can meanings be shared between languages in the human mind and in machines?

The school will feature interactive lectures from world-leading researchers: psychologists investigating early and late bilingualism; cognitive scientists modelling the bilingual mental lexicon, from phonology to semantics; computer scientists and roboticists designing automated translators and language recognition/learning systems. Participants will also gain hands-on experience in computational modelling, cutting-edge robotic technology and advanced techniques of experimental psychology. They will also have the opportunity to submit an abstract for the poster session that will take place during the week, with two selected for a 20-minute conference-style presentation. The school is supported by EUCog (www.eucognition.org). Registration is open to a maximum of 30 participants, with a competitive selection process to award five fully subsidised places (tinyurl.com/bilingualismsummerschool).

Confirmed speakers

* Professor Tony Belpaeme (Plymouth University) - developmental robotics
* Professor Marc Brysbaert (Ghent University) - adult bilingualism
* Dr Bill Byrne (University of Cambridge) - automatic translation
* Professor Angelo Cangelosi (Plymouth University) - developmental robotics
* Professor Detmar Meurers (University of Tübingen) - learner corpora and computational linguistics
* Dr Katerina Pastra (Cognitive Systems Research Institute, Athens) - embodied machine translation
* Dr Patrick Rebuschat (Bangor University) - bilingual cognition
* Professor Majd Sakr (Carnegie Mellon University in Qatar) - bilingual robotics
* Professor Nuria Sebastian-Galles (Universitat Pompeu Fabra) - developmental bilingualism
* Dr Yinjang Wu (University of Sheffield) - developmental bilingualism

Application and registration

We are seeking applications from postgraduate students and early-career researchers (i.e., not more than three years post-PhD). Registration fees for the week are £400, which includes accommodation, breakfasts, lunches, refreshments and a full social programme. Places are limited to 30 and fees will be waived for up to five participants. To register your interest, please submit the following documents in PDF format, and indicate whether you wish to be considered for a fee waiver.

- Curriculum vitae (up to two pages), indicating level of English language attainment.
- Description of research interests and potential benefits of participation (one page).
If you wish to be considered for a fee waiver, then please additionally make a case for support with reference to your current funding (not more than one page). Your request for a fee waiver will not affect the outcome of your application to participate in the school. Please submit your application by 31 March 2013. Selected applicants will be notified of acceptance on 15th April. Applications should be emailed to: bilingualismsummerschool at gmail.com.

Organising committee

Allegra Cattani, School of Social Science and Social Work, Plymouth University
Caroline Floccia, School of Psychology, Plymouth University
Laurence White, School of Psychology, Plymouth University

From m.vanotterlo at donders.ru.nl Mon Feb 25 07:09:29 2013
From: m.vanotterlo at donders.ru.nl (Otterlo, M. van (Martijn))
Date: Mon, 25 Feb 2013 13:09:29 +0100 (CET)
Subject: Connectionists: CFP: IJCAI Workshop on Machine Learning for Interactive Systems (abstract submission: April 13)
Message-ID: <1513153011.1373485.1361794169759.JavaMail.root@draco.zimbra.ru.nl>

CALL FOR PAPERS:

IJCAI Workshop on Machine Learning for Interactive Systems (MLIS'13): Bridging the Gap between Perception, Action and Communication
August 3-4, 2013, Beijing, China
http://mlis-workshop.org/2013

Intelligent systems or robots that interact with their environment by perceiving, acting or communicating often face a challenge in how to bring these different concepts together. One of the main reasons for this challenge is the fact that the core concepts in perception, action and communication are typically studied by different communities: the computer vision, robotics and natural language processing communities, among others, without much interchange between them. As machine learning lies at the core of these communities, it can act as a unifying factor in bringing the communities closer together. Unifying these communities is highly important for understanding how state-of-the-art approaches from different disciplines can be combined (and applied) to form generally interactive intelligent systems. The goal of this workshop is to bring together researchers from multiple disciplines who are in one way or another affected by the gap between action, perception and communication that typically exists for interactive systems or robots.
Topics of interest include, but are not limited to:

Machine Learning:
- Reinforcement Learning
- Supervised Learning
- Unsupervised Learning
- Semi-Supervised Learning
- Active Learning
- Learning from human feedback
- Learning from teaching, tutoring, instruction and demonstration
- Combinations or generalisations of the above

Interactive Systems:
- (Socially) Interactive Robotics
- Embodied Virtual Agents
- Avatars
- Multimodal systems
- Cognitive (robotics) architectures

Types of Communication:
- System interacting with a single human user
- System interacting with multiple human users
- System interacting with the environment
- System interacting with other machines

Example applications could include: (1) a robot may learn to coordinate its speech with its actions, taking into account visual feedback during their execution; (2) an autonomous car may learn to coordinate its acceleration and steering behaviours depending on observations of obstacles; (3) a team of robots playing soccer may learn to coordinate their ball kicks depending on the dynamic locations of their opponents; (4) a sensorimotor system may learn to drive a wheelchair through feedback from visual signals of the environment; (5) a mobile robot may interactively learn from human guidance how to manipulate objects and move through a building, based on human feedback using language, gestures and interactive dialogue; or (6) a multimodal smart phone can adapt its input and output modalities to the user's goals, workload and surroundings.

Submissions can take two forms. Long papers should not exceed 8 pages, and short (position) papers should not exceed 4 pages. They should follow the ACM SIG proceedings format (option 1): http://www.acm.org/sigs/publications/proceedings-templates. All submissions should be anonymised for peer review.

Submission link: https://www.easychair.org/conferences/?conf=mlis2013

Accepted papers will be published by the ACM International Conference Proceedings Series under ISBN 978-1-4503-2019-1. The proceedings of MLIS'13 will be available in the ACM digital library on the day of the workshop.

Invited Speakers:

Prof. Dr. Martin Riedmiller, University of Freiburg
Talk: "Learning Machines that Perceive, Act and Communicate"

Prof. Dr. Olivier Pietquin, Supélec, France
Talk: "Inverse Reinforcement Learning for Interactive Systems"

Important Dates:

April 13, Abstract registration
April 20, Paper submission deadline
May 20, Notification of acceptance
May 30, Camera-ready deadline
August 3-4, MLIS workshop

Organising Committee:

Heriberto Cuayahuitl, Heriot-Watt University, Edinburgh, UK
Lutz Frommberger, University of Bremen, Germany
Nina Dethlefs, Heriot-Watt University, Edinburgh, UK
Martijn van Otterlo, Radboud University Nijmegen, The Netherlands

For all enquiries, please mail: organizers at mlis-workshop.org

--
Check out our new/cool/ultimate reinforcement learning book at Springer (March 2012): http://www.springer.com/engineering/computational+intelligence+and+complexity/book/978-3-642-27644-6

Martijn van Otterlo
Artificial Intelligence
Radboud University Nijmegen
The Netherlands
http://www.socsci.ru.nl/~martijvo

"Don't say it's just your personal opinion; you have no other" (Bomans).
From peter.andras at newcastle.ac.uk Wed Feb 13 09:00:57 2013
From: peter.andras at newcastle.ac.uk (Peter Andras)
Date: Wed, 13 Feb 2013 14:00:57 +0000
Subject: Connectionists: Postdoc in Applied Machine Learning
Message-ID: <5755EA32B2C05A4796B6650B2EF9A39E0D4ADCB3@EXMBDB02.campus.ncl.ac.uk>

KTP Associate - Applied Machine Learning
Salary: up to £27,900 per annum
Closing date: 22 March 2013
Reference: D1181R

Applications are invited for a postdoctoral research position in applied machine learning at Newcastle University and XACT PCB Ltd. You will work on the application of cutting-edge machine learning and computational intelligence methods to real-world industrial problems. In particular, we are interested in the combined use of support vector machines and unsupervised clustering methods to analyse high-dimensional heterogeneous industrial data. The aim of the analysis is to improve the precision of the manufacturing process of printed circuit boards. The only company in the world that offers a fully automated solution to this problem is XACT PCB Ltd, based near Newcastle in the North East of England. The size of the market of printed circuit boards is in the range of billions of dollars. You can have a real impact on how printed circuit boards are made by joining this project.

The project is a collaboration between XACT PCB Ltd and Newcastle University. You will work most of the time at the offices of the company, and every week you will spend at least a half day at the School of Computing Science of Newcastle University, where you will also have an office place. The School of Computing Science recently expanded its academic staff with interests in applied machine learning and computational intelligence. Ongoing research includes the analysis of human behavioural data recorded by movement sensors, development of intelligent and adaptive living environments, data mining of bioinformatics databases, analysis of neuroinformatics imaging data, optimisation of gene regulation for synthetic biology, and experimental validation of cyber-physical systems. You will work under the supervision of Dr Peter Andras (peter.andras at ncl.ac.uk).

We are looking for a self-motivated person with a PhD on a topic related to machine learning or computational intelligence (the actual PhD area can be computer science, mathematics, physics, statistics, engineering or any other related field). You should have exceptionally strong skills in developing and coding machine learning algorithms (preferably in C# or other similar languages, including Java, C++, Matlab, R). You should have a clear desire to move towards industry and to make a real-world impact through top quality research. You must have a First Class honours degree or a Distinction level MSc degree (or international equivalent) in Computer Science, Mathematics, Physics or related fields, and preferably a PhD in one of these fields. You will have experience in software development, large-scale data analytics, and the development and application of machine learning methods, together with a positive attitude and good interpersonal, communication and team working skills.

The position comes with benefits including a £4,000 individual training budget and management training. The post is fixed term for a duration of 2 years. To apply go to www.ncl.ac.uk/vacancies/ and search for the vacancy with reference D1181R. Please note this position is subject to the confirmation of funding. Applicants are expected to be contacted by April 2013.
XACT is the world's leading provider of integrated registration solutions to many of the world's highest technology PCB plants. For further details about XACT PCB Ltd please see www.xactpcb.com

Newcastle University is one of the top UK universities, a member of the select Russell Group formed by the 24 leading UK universities. The School of Computing Science is one of the top research departments in the UK in the area of computer science, with a research budget of over £4 million per year. The School is a partner in the Newcastle Culture Lab, which is a leading UK hub for innovative cultural and social applications of digital technology. For further details about Newcastle University please visit our information page at http://www.ncl.ac.uk/about/ For further details on the School of Computing Science please see http://www.ncl.ac.uk/computing/ For further details about Knowledge Transfer Partnerships please visit our Research and Enterprise Service webpage at: http://www.ncl.ac.uk/res/knowledge/ktp/index.htm

----------
Dr Peter Andras
Reader in Complex Systems and Computational Intelligence
Director of Postgraduate Programmes
School of Computing Science
Newcastle University
Newcastle upon Tyne NE1 7RU, UK
tel. +44-191-2227946
fax. +44-191-2228232
email: peter.andras at ncl.ac.uk
web: www.staff.ncl.ac.uk/peter.andras/

From phkywong at ust.hk Tue Feb 26 03:18:53 2013
From: phkywong at ust.hk (WONG Michael K Y)
Date: Tue, 26 Feb 2013 16:18:53 +0800 (HKT)
Subject: Connectionists: IAS Program on Statistical Physics and Computational Neuroscience
Message-ID: <61684.143.89.19.28.1361866733.squirrel@sqmail.ust.hk>

The Hong Kong University of Science and Technology
IAS Program on Statistical Physics and Computational Neuroscience

IAS Workshop: 2-14 July 2013
Student Conference: 15-16 July 2013
StatPhysHK Conference: 17-19 July 2013

Invited speakers:
Guoqiang Bi (U of Science and Technology of China)
Chi Keung Chan (Academia Sinica)
Ying Shing Chan (University of Hong Kong)
Steven Coombes (Nottingham University)
Michael Crair (Yale University)
Tomaki Fukai (RIKEN)
David Hansel (Université René Descartes)
Jufang He (Hong Kong Polytechnic University)
Yong He (Beijing Normal University)
Claus Hilgetag (Hamburg University)
Zhaoping Li (University College London)
Marcelo Magnasco (Rockefeller University)
Masato Okada (University of Tokyo)
Stefano Panzeri (Italian Inst of Tech and U of Glasgow)
Barry Richmond (NIH)
Thomas Trappenberg (Dalhousie University)
Mark van Rossum (University of Edinburgh)
Xiaoqin Wang (Johns Hopkins University)
Jia Yi Zhang (Fudan University)

Organizing Committee:
Chair - K. Y. Michael Wong (HKUST)
Co-chair - Changsong Zhou (HKBU)
Emily S. C. Ching (CUHK)
Gang Hu (Beijing Normal University)
Jian-Dong Huang (HKU)
H. Benjamin Peng (HKUST)
Leihan Tang (HKBU)
Si Wu (Beijing Normal University)

http://ias.ust.hk/program/201307/

Abstract submission deadline: 20 March 2013
Venue: The new IAS Building
Sponsors: Hong Kong Baptist University, Croucher Foundation
From venkateshrao1976 at gmail.com Sat Feb 16 16:21:39 2013
From: venkateshrao1976 at gmail.com (Venkateswara Rao)
Date: Sat, 16 Feb 2013 16:21:39 -0500
Subject: Connectionists: IICAI-13 Call for papers
Message-ID:

IICAI-13 Call for papers

The 6th Indian International Conference on Artificial Intelligence (IICAI-13) will be held during December 18-20, 2013 in Tumkur (near Bangalore), India. The 2013 International Conference on Advances in Data Mining and Security Informatics (DMSI-13) and the 2013 International Conference on Image, Video and Signal Processing (IVSP-13) will also be held at the same time and place. We invite draft paper submissions. See the website http://www.iiconference.org for more details.

IICAI is a series of high quality technical events in Artificial Intelligence (AI) and is also one of the major AI events in the world. The primary goal of the conference is to promote research and developmental activities in AI and related fields in India and the rest of the world. Another goal is to promote scientific information interchange between AI researchers, developers, engineers, students, and practitioners working in India and abroad. The conference will be held every two years to make it an ideal platform for people to share views and experiences in AI and related areas.

Sincerely,
Venkateswara Rao
Organizing Committee

From sabu.thampi at iiitmk.ac.in Wed Feb 27 05:04:20 2013
From: sabu.thampi at iiitmk.ac.in (Sabu M Thampi)
Date: Wed, 27 Feb 2013 15:34:20 +0530
Subject: Connectionists: CFP - IEEE ICACCI2013 - SYMPOSIUM ON PATTERN RECOGNITION AND IMAGE PROCESSING (PRIP - 2013)
Message-ID:

--------------------------------------------------------------------------------------------------
Apologies for cross-postings!
--------------------------------------------------------------------------------------------------

SECOND INTERNATIONAL SYMPOSIUM ON PATTERN RECOGNITION AND IMAGE PROCESSING (PRIP - 2013)
August 22-25, 2013, Mysore, India
http://icacci-conference.org/site/prip2013

The International Symposium on Pattern Recognition and Image Processing (PRIP-2013) aims to bring together researchers in the fields of pattern recognition and image processing. The symposium will address recent advances in theory, methodologies and applications. PRIP-2013 is affiliated with the Second IEEE International Conference on Advances in Computing, Communications and Informatics (ICACCI 2013) - http://icacci-conference.org. The conference will feature keynote presentations, workshops, demonstrations, parallel technical sessions and tutorials. ICACCI-2013 is technically co-sponsored by the IEEE Communications Society.

CALL FOR PAPERS
-----------------------------

PRIP-2013 invites original and unpublished work from individuals active in the broad theme of the Workshop. Authors should submit their papers online using EDAS. Unregistered authors should first create an account on EDAS to log on. Further guidelines for submission are posted at: http://icacci-conference.org/site/callpaper

All papers that conform to submission guidelines will be peer reviewed and evaluated based on originality, technical and/or research content/depth, correctness, relevance to conference, contributions, and readability. The manuscripts should be submitted in PDF format. Acceptance of papers will be communicated to authors by email. Accepted papers will be published in the ICACCI 2013 Proceedings, which will be available through IEEE Xplore®
after the conference. The event will be held on August 22-25, 2013 in Mysore, India. Mysore, known as the City of Palaces, is one of the most preferred tourist destinations in South India.

Topics of interest include but are not limited to:

Category 1: Image Processing
- Character, graphical and text processing
- Compression sensing in imaging
- Computer graphics and vision
- Gaming applications
- Human computer interactions using images
- Image analysis, understanding
- Image and video based gesture analysis
- Image and volume segmentation
- Image compression and coding
- Image description and recognition
- Image sensors and systems
- Mathematical theory of image analysis
- Multimedia processing
- Raw data representation, computer vision
- Real-time issues in imaging
- Remote sensor imaging systems
- Robotic vision systems
- Shape and texture analysis
- Spectral imaging and reconstruction
- Super-resolution imaging
- Tensor techniques in imaging
- Volumetric imaging in real-time

Category 2: Pattern Recognition
- Biological taxonomy
- Biometric techniques and algorithms
- Bionic eye, ear and systems
- Character, text and language recognition
- Classification and clustering methods
- Feature selection and extraction
- Image recognition - faces, objects, gestures, emotions
- Industrial applications
- Information fusion techniques for multimodal situations
- Machine learning and large-scale data analytics
- Memory networks and temporal memories
- Meteorology applications
- Neural computing and fuzzy systems
- Neuromorphic systems
- Object recognition and tracking
- Pattern formation systems
- Physical intelligence and artificial life
- Real-time implementations
- Security and defence applications
- Self-aware robotic systems
- Signal, video and speech recognition
- Space science applications
- Statistical techniques and pattern analysis
- Target recognition systems

IMPORTANT DATES
------------------------------

Submission Deadline: April 30, 2013
Notification of Decision: May 31, 2013
Proceedings Version Due: June 15, 2013
Author Registration Closes: June 20, 2013
Conference: August 22-25, 2013

TECHNICAL PROGRAMME COMMITTEE
------------------------------------------------------------

Program Chairs
C Krishna Mohan, Indian Institute of Technology, Hyderabad
V. N. Manjunath Aradhya, SJCE, Mysore, India

TPC Members
Ali Al-Qayedi, Etisalat, UAE
Arun Agarwal, University of Hyderabad, India
Chad Spooner, NorthWest Research Associates, USA
Chengwen Xing, Beijing Institute of Technology, P.R. China
Dah-Chung Chang, National Central University, Taiwan
Dong Wang, Philips Research North America, USA
Feng Li, Apple Inc., USA
GianLuca Foresti, University of Udine, Italy
Hadj Bourdoucen, Sultan Qaboos University, Oman
Hani Mehrpouyan, California State University, USA
Hassen Alsafi, IIUM, Malaysia
Himal Suraweera, Singapore University of Technology and Design, Singapore
Ilka Miloucheva, Media Applications Research, Germany
Ioannis Pitas, Aristotle University of Thessaloniki, Greece
Jia-Chin Lin, National Central University, Taiwan
John Thompson, University of Edinburgh, United Kingdom
Jozef Juhair, Technical University of Kosice, Slovakia
Kanapathippillai Cumanan, Newcastle University, United Kingdom
Kazunori Hayashi, Kyoto University, Japan
Khaled Assaleh, American University of Sharjah, UAE
Koji Ishibashi, The University of Electro-Communications, Japan
Lisheng Fan, Shantou University, P.R. China
Md. Liakot Ali, Bangladesh University of Engineering and Technology (BUET), Bangladesh
Mohamed Dahmane, University of Montreal, Canada
Mohamed Cheriet, Ecole de technologie superieure (University of Quebec), Canada
Mohammad Banat, Jordan University of Science and Technology, Jordan
Nazar Ali, Khaifa University, UAE
Rajesh Bawa, Punjabi University, India
Rajesh Panicker, National University of Singapore, Singapore
Roman Jarina, University of Zilina, Slovakia
Seung-Hoon Hwang, Dongguk University, Korea
K.S. Hareesh, Manipal University, Manipal
S. Manjunath, JSSASC, Mysore
Stephan Saur, Alcatel-Lucent Deutschland AG, Germany
Tien Nguyen, Raytheon Corporation, USA
Tomohiko Taniguchi, Fujitsu Laboratories Ltd., Japan
Weidong Xiang, University of Michigan, Dearborn, USA
Wen Zhou, Shantou University, P.R. China
Waleed Abdulla, The University of Auckland, New Zealand
Wen-Liang Hwang, Institute of Information Science, Academia Sinica, Taiwan
Xia Yousheng, Fuzhou University, P.R. China
Yahya Elhadj, Al-Imam Muhammad Ibn Sau Islamic University, Saudi Arabia
Yifan Chen, South University of Science and Technology of China, P.R. China
Yulong Zou, University of Western Ontario, Canada
Zhemin Xu, Broadcom Corporation, USA
Zhiqiang Wu, Wright State University, USA

From rsalakhu at cs.toronto.edu Tue Feb 26 23:44:35 2013
From: rsalakhu at cs.toronto.edu (Ruslan Salakhutdinov)
Date: Tue, 26 Feb 2013 23:44:35 -0500 (EST)
Subject: Connectionists: CFP: ICML 2013 Workshop on Inferning: Interactions between Inference and Learning
Message-ID:

Call for Papers

ICML 2013 Workshop on Inferning: Interactions between Inference and Learning
http://inferning.cs.umass.edu
inferning2013 at gmail.com

Important Dates:
Submission Deadline: Mar 30th, 2013 (11:59pm PST)
Author Notification: April 21st, 2013
Workshop: June 20-21, 2013, Atlanta, GA

There are strong interactions between learning algorithms, which estimate the parameters of a model from data, and inference algorithms, which use a model to make predictions about data. Understanding the intricacies of these interactions is crucial for advancing the state-of-the-art on real-world tasks in natural language processing, computer vision, computational biology, etc. Yet, many facets of these interactions remain unknown. In this workshop, we study the interactions between inference and learning using two reciprocating perspectives.

Perspective one: how does inference affect learning?

The first perspective studies the influence of the choice of inference technique during learning on the resulting model. When faced with models for which exact inference is intractable, efficient approximate inference techniques may be used, such as MCMC sampling, stochastic approximation, belief propagation, beam-search, dual decomposition, etc. The workshop will focus on work that evaluates the impact of the approximations on the resulting parameters, in terms of the generalization of the model, the effect on the objective functions, and the convergence properties. We will also study approaches that attempt to correct for the approximations in inference by modifying the objective and/or the learning algorithm (for example, contrastive divergence for deep architectures), and approaches that minimize the dependence on the inference algorithms by exploring inference-free methods (e.g., piece-wise training, pseudo-max and decomposed learning).

Perspective two: how does learning affect inference?
Traditionally, the goal of learning has been to find a model for which prediction (i.e., inference) accuracy is as high as possible. However, an increasing emphasis on modeling complexity has shifted the goal of learning: find models for which prediction (i.e., inference) is as efficient as possible. Thus, there has been recent interest in more unconventional approaches to learning that combine generalization accuracy with other desiderata such as faster inference. Some examples of this kind are: learning classifiers for greedy inference (e.g., Searn, Dagger); structured cascade models that learn a cost function to perform multiple runs of inference from a coarse to a fine level of abstraction by trading off accuracy and efficiency at each level; learning a cost function to search in the space of complete outputs (e.g., SampleRank, search in Limited Discrepancy Search space); learning structures that exhibit efficient exact inference; etc. Similarly, there has been work that learns operators for efficient search-based inference, and approaches that trade off speed and accuracy by incorporating resource constraints such as run-time and memory into the learning objective.

This workshop brings together practitioners from different fields (information extraction, machine vision, natural language processing, computational biology, etc.) in order to study a unified framework for understanding and formalizing the interactions between learning and inference. The following is a partial list of relevant keywords for the workshop:

* learning with approximate inference
* cost-aware learning
* learning sparse structures
* pseudo-likelihood, composite likelihood training
* contrastive divergence
* piece-wise and decomposed training
* decomposed learning
* coarse to fine learning and inference
* score matching
* stochastic approximation
* incremental gradient methods
* adaptive proposal distributions
* learning for anytime inference
* learning approaches that trade off speed and accuracy
* learning to speed up inference
* learning structures that exhibit efficient exact inference
* lifted inference for first-order models
* more ...

New benchmark problems: This line of research can hugely benefit from new challenge problems from various fields (e.g., computer vision, natural language processing, speech, computational biology, computational sustainability, etc.). Therefore, we especially request relevant papers describing such problems, main challenges, evaluations and public data sets.

Invited Speakers:
Dan Roth, University of Illinois, Urbana-Champaign
Rina Dechter, University of California, Irvine
Ben Taskar, University of Washington
Hal Daume, University of Maryland, College Park
Alan Fern, Oregon State University

Important Dates:
Submission Deadline: Mar 30th, 2013 (11:59pm PST)
Author Notification: April 21st, 2013
Workshop: June 20-21, 2013

Author Guidelines:
Submissions are encouraged as extended abstracts of ongoing research. The recommended page length is 4-6 pages. Additional supplementary content may be included, but may not be considered during the review process. Previously published or currently in-submission papers are also encouraged (we will confirm with authors before publishing the papers online). The format of the submissions should follow the ICML 2013 style, available here: http://icml.cc/2013/wp-content/uploads/2012/12/icml2013stylefiles.tar.gz However, since the review process is not double-blind, submissions need not be anonymized and author names may be included.
Submission site: https://www.easychair.org/conferences/?conf=inferning2013

Organizers:
Janardhan Rao (Jana) Doppa, Oregon State University
Pawan Kumar, Ecole Centrale Paris
Michael Wick, University of Massachusetts, Amherst
Sameer Singh, University of Massachusetts, Amherst
Ruslan Salakhutdinov, University of Toronto

From p.gleeson at ucl.ac.uk Wed Feb 27 05:59:49 2013
From: p.gleeson at ucl.ac.uk (Padraig Gleeson)
Date: Wed, 27 Feb 2013 10:59:49 +0000
Subject: Connectionists: Open Source Brain workshop on cerebellar modelling
Message-ID: <512DE725.7030806@ucl.ac.uk>

Dear colleagues,

Announcing a workshop which will be the kick-off meeting of the Open Source Brain initiative (http://www.opensourcebrain.org). This aims to create a public, open source repository of detailed neuron and network models from multiple brain regions and species, which can be collaboratively developed and contributed to by the whole community.

The workshop will be held from May 13-15 in Alghero, Sardinia. This initial workshop will concentrate on modelling of the cerebellum, and will bring together a number of modellers, experimentalists and software developers who are interested in producing a set of well tested, cross-simulator cell and network models of the cerebellum. Modellers of other brain regions are also welcome, and there will be tutorials and hands-on sessions on the technologies behind OSB and collaborative model development.

The models on OSB are shared via public, open source repositories (e.g. at GitHub), and can be submitted in any simulator format or modelling language, with the aim of converting the core physiological elements to simulator-independent formats like NeuroML and PyNN, to allow them to be simulated/analysed/visualised in the widest possible range of applications.

The meeting is free to attend, but registration is required (deadline April 15th). Please see http://www.opensourcebrain.org/projects/osb/wiki/Meetings for more details. The OSB initiative is primarily supported by the Wellcome Trust.

*Confirmed speakers*
Egidio D'Angelo
Chris De Zeeuw
Arnd Roth
Angus Silver
Volker Steuber

*Organising committee*
Matteo Cantarelli
Sharon Crook
Padraig Gleeson
Eugenio Piasini
Angus Silver
Sergio Solinas
Irene Solinas

-----------------------------------------------------
Padraig Gleeson
Room 321, Anatomy Building
Department of Neuroscience, Physiology & Pharmacology
University College London
Gower Street
London WC1E 6BT
United Kingdom
+44 207 679 3214
p.gleeson at ucl.ac.uk
-----------------------------------------------------

From jaap at murre.com Wed Feb 27 15:48:31 2013
From: jaap at murre.com (Jaap Murre)
Date: Wed, 27 Feb 2013 21:48:31 +0100
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: References: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <512C95F0.6060104@informatik.uni-leipzig.de>
Message-ID:

An approach I found interesting but which has not been pursued very much is a 'four-step' approach to modelling (a toy version is sketched below):

1. genetic algorithm -> 2. neural network architecture -> 3. exposure to patterns in environment -> 4. test for generalization

This corresponds very roughly to:

1. evolution -> 2. structured (e.g., modular) brain (coarse structure) -> 3. developmental phase (fine structure) -> 4. behavior
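A minimal runnable sketch of this four-step loop (a toy construction: the noisy XOR task, the two-gene genome, the population size and the mutation scheme are all invented for illustration and are not taken from Happel & Murre, 1994):

import numpy as np

rng = np.random.default_rng(0)

# A toy environment: noisy samples of y = XOR of two binary inputs.
X = rng.integers(0, 2, size=(200, 2)).astype(float)
y = (X[:, 0] != X[:, 1]).astype(float)
X += 0.1 * rng.standard_normal(X.shape)
Xtr, ytr, Xte, yte = X[:100], y[:100], X[100:], y[100:]

def train_and_test(hidden, lr, seed, steps=300):
    # Steps 2-3: build the genome-specified architecture, then expose it
    # to training patterns via plain gradient descent on squared error.
    g = np.random.default_rng(seed)
    W1, W2 = g.standard_normal((2, hidden)), g.standard_normal((hidden, 1))
    for _ in range(steps):
        h = np.tanh(Xtr @ W1)
        err = h @ W2 - ytr[:, None]
        g1 = Xtr.T @ ((err @ W2.T) * (1 - h ** 2)) / len(Xtr)
        g2 = h.T @ err / len(Xtr)
        W1 -= lr * g1
        W2 -= lr * g2
    # Step 4: fitness is accuracy on held-out patterns (generalization).
    pred = (np.tanh(Xte @ W1) @ W2).ravel() > 0.5
    return (pred == (yte > 0.5)).mean()

# Step 1: a minimal genetic algorithm over genomes (hidden units, learning rate).
pop = [(int(rng.integers(1, 9)), float(rng.uniform(0.05, 1.0))) for _ in range(8)]
for gen in range(10):
    scored = sorted(pop, key=lambda g: -train_and_test(*g, seed=gen))
    best = scored[: len(pop) // 2]                        # selection
    pop = best + [(max(1, h + int(rng.integers(-1, 2))),  # mutation
                   abs(l + 0.1 * rng.standard_normal())) for h, l in best]

print("best genome (hidden units, learning rate):", scored[0])

Here the genome encodes only a hidden-layer size and a learning rate; the genomes in Happel and Murre (1994) encoded much richer modular architectures, but the select-for-generalization loop has the same shape: architectures that cannot learn the task (e.g., a single hidden unit on XOR) are bred out.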
When we tried this approach, we were surprised by the interesting neural architectures we obtained in a number recognition task. The evolved architectures showed better generalization than the ones we had thought up ourselves, and they showed better performance (faster development and better generalization). Also, many of the efficient architectures incorporated elements we find in the visual system, such as coarse-grained versus fine-grained processing (cf. dorsolateral vs ventral stream) and extensive recurrent connectivity whereby higher areas feed back to lower areas.

Happel, B. L. M., & Murre, J. M. J. (1994). Design and evolution of modular neural-network architectures. Neural Networks, 7(6-7), 985-1004.

We also did a mathematical analysis of the feasibility of modular neural networks in terms of wiring length and brain volume, aiming to incorporate as much as possible the quantitative neuroanatomy known in the early nineties. We found that such architectures would indeed fit into our skull, whereas, say, a sparse random architecture would not.

Murre, J. M. J., & Sturdy, D. P. F. (1995). The connectivity of the brain: Multi-level quantitative analysis. Biological Cybernetics, 73, 529-545.

A surprising outcome to us in this study was that when working from first principles, in large brain structures (above a certain number of neurons), having an architecture with a cortex and white matter underneath is actually more efficient (less volume) than when white and grey matter are mixed together. Intuitively, a cortex architecture seems to waste space and increase wiring length, but this does not seem to be the case, which may explain why large brains use it so much.

From pascal.fua at epfl.ch Wed Feb 27 16:37:40 2013
From: pascal.fua at epfl.ch (Pascal Fua)
Date: Wed, 27 Feb 2013 22:37:40 +0100
Subject: Connectionists: Post Doctoral Fellowship at EPFL's Computer Vision Lab
Message-ID: <512E7CA4.2020608@epfl.ch>

Post-doctoral Fellowship in Computer Vision at EPFL

EPFL's Computer Vision Laboratory (http://cvlab.epfl.ch/) has an opening for a post-doctoral fellow in the field of Computer Vision and Augmented Reality. The position is initially offered for 1 year and can be extended for up to 4 years total.

Description: The work will be carried out within the context of a European Union project in collaboration with CERN. Its overall goal is to develop Augmented Reality techniques that can eventually be deployed in the ATLAS detector at CERN for maintenance purposes.

Position: The Computer Vision Laboratory offers a creative international environment, a possibility to conduct competitive research on a global scale, and involvement in teaching. There will be ample opportunities to cooperate with some of the best groups in Europe and elsewhere. EPFL is located next to Lake Geneva in a beautiful setting 60 kilometers away from the city of Geneva. Salaries are on the order of CHF 80,000 per year, the precise amount to be determined by EPFL's department of human resources.

Education: Applicants are expected to have finished, or be about to finish, their Ph.D. degrees, to have a strong background in Computer Vision and 3D Tracking, and to have a track record of publications in top conferences and journals. Strong programming skills (C or C++) are a plus. French language skills are not required; English is mandatory.

Application: Applications must be sent by email to Ms. Gisclon (josiane.gisclon at epfl.ch). They must contain a statement of interest, a CV, a list of publications, and the names of three references.
From esalinas at wakehealth.edu Wed Feb 27 17:04:16 2013
From: esalinas at wakehealth.edu (Emilio Salinas)
Date: Wed, 27 Feb 2013 22:04:16 +0000
Subject: Connectionists: Postdoctoral training in multisensory integration
Message-ID: <3D9CB6CA9ACBD84CA5F0FC88183B20B61495CCAD@exchdb7.medctr.ad.wfubmc.edu>

The Department of Neurobiology and Anatomy at Wake Forest School of Medicine announces open positions for Postdoctoral Training in Multisensory Processes. We seek strong candidates for postdoctoral training funded by an NIH T32 Training Grant. The training program provides a rich collaborative research environment that fosters interdisciplinary approaches to understanding how the brain integrates information from multiple senses to produce perception and adaptive behavior. Candidates with direct experience, as well as those in related fields, are encouraged to apply. Trainees will have access to any of 10 laboratories using human subjects and/or a variety of animal models (rodents to primates), with approaches spanning the molecular/cellular to the behavioral/computational.

A description of the faculty and the program can be accessed via the website: http://graduate.wfu.edu/admissions/training_ms.html

Inquiries and applications (including a current curriculum vitae) should be sent to the training grant director, Barry E. Stein (bestein at wakehealth.edu), or to its co-directors Terry Stanford (stanford at wakehealth.edu) and Dwayne Godwin (dgodwin at wakehealth.edu). Wake Forest School of Medicine is an affirmative action/equal opportunity employer and especially encourages applications from women and minority candidates.

From bazhenov at salk.edu Wed Feb 27 17:38:17 2013
From: bazhenov at salk.edu (Maxim Bazhenov)
Date: Wed, 27 Feb 2013 14:38:17 -0800
Subject: Connectionists: postdoctoral position to study olfactory coding
Message-ID: <512E8AD9.6070602@salk.edu>

Applications are invited for an NIH-funded post-doctoral position in the laboratory of Dr. Maxim Bazhenov at the University of California, Riverside to study olfactory coding. This project aims at understanding how odors and odor concentrations are encoded in the insect olfactory system. It involves close collaboration with the laboratory of Dr. Mark Stopfer at NIH. For relevant references see: Assisi et al., Neuron 2011, 69(2):373-86; Ito et al., Neuron 2009, 64(5):692-706; Assisi et al., Nature Neurosci, 2007, 10(9):1176-84. The ultimate goal of this work is to understand the mechanisms and functions of biological rhythms and the role of neuronal oscillations and synchrony in information processing.

The successful candidate will be responsible for the design and analysis of network models of the olfactory system based on existing experimental data. These models will be used to understand the underlying neural mechanisms, as well as to guide data analysis and produce novel experimental predictions. Qualified applicants are expected to have experience in computational/theoretical neuroscience and conductance-based neural modeling. Programming experience with C/C++ is required; knowledge of Python or MATLAB is a plus.

The University of California offers excellent benefits. Salary is based on research experience. The initial appointment is for 1 year with a possibility of extension. Applicants should send a brief statement of research interests, a CV and the names of three references to Maxim Bazhenov at maksim.bazhenov at ucr.edu

--
Maxim Bazhenov, Ph.D.
Professor, Cell Biology and Neuroscience
University of California
Riverside, CA 92521
Ph: 951-827-4370
http://biocluster.ucr.edu/~mbazhenov/

From weng at cse.msu.edu Wed Feb 27 19:57:38 2013
From: weng at cse.msu.edu (Juyang Weng)
Date: Wed, 27 Feb 2013 19:57:38 -0500
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: References: <9EB0EE5C-D18F-427A-8F34-354C77AF0BE9@idsia.ch> <512C95F0.6060104@informatik.uni-leipzig.de>
Message-ID: <512EAB82.4040604@cse.msu.edu>

In terms of genetic algorithms (evolution), I guess that they are probably useful for fine-tuning some species-specific global parameters. However, the evolved biology (e.g., the genome) seems to ensure that the structured (not modular) brain (coarse structure) is, to a first-order approximation, largely shaped by the statistics of signals observed during postnatal developmental activities. By activities, I do not mean just the autonomous activities of the individual, since every cell sends signals to other cells during the entire process of development. In other words, the brain seems to be very different from the other organs in the body, if one holds that evolution has largely determined every organ other than the brain.

-John

On 2/27/13 3:48 PM, Jaap Murre wrote:
> An approach I found was interesting but which has not been pursued
> very much is a 'four-step' approach to modelling:
> 1. genetic algorithm -> 2. neural network architecture -> 3. exposure
> to patterns in environment -> 4. test for generalization
> [...]
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
3115 Engineering Building
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388 Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------

From lucy.davies4 at plymouth.ac.uk Thu Feb 28 08:31:06 2013
From: lucy.davies4 at plymouth.ac.uk (Lucy Davies)
Date: Thu, 28 Feb 2013 13:31:06 +0000
Subject: Connectionists: Lure of the New: Deadline extended
In-Reply-To: References: Message-ID:

The deadline for abstract submissions for Lure of the New has been extended to March 8th 2013.

Call for abstracts: The Cognition Institute is a new trans-disciplinary research centre focused on understanding human cognition. We believe that through forging links with researchers from psychology, cognitive robotics, neuroscience, biology, humanities and the arts we can develop new ways of thinking about cognition. In our first international conference, we will explore how novelty and creativity are key drivers of human cognition. Each of our themed symposia will bring a different approach to this topic, covering such areas as embodied cognition, auditory neuroscience and psychophysics, language development, mental imagery, creativity and cognition, the relationship between the arts and sciences, modelling and imaging of brain processes, and deception research. We welcome abstracts in any of these areas and are very keen to receive submissions which take a trans-disciplinary approach to cognition research.

Symposia
* Embodied Cognition and Mental Simulation (Haline Schendan, Diane Pecher, Rob Ellis, Patric Bach)
* Developments in infant speech perception (Katrin Skoruppa, Silvia Benavides-Varelaa, Caroline Floccia, Laurence White, Ian Howard)
* Engineering Creativity - can the arts help scientific research more directly? (Alexis Kirke, Greg B. Davies, Simon Ingram)
* Computational Modelling of Brain Processes (Thomas Wennekers, Ingo Bojak, Chris Harris, Jonathan Waddington)
* Current trends in deception research (Giorgio Ganis, Gershon Ben-Shakhar)
* Sounds for Communication (Sue Denham, Roy Patterson, Judy Edworthy, Sarah Collins)
* Imagery, Dance and Creativity (Jon May, Scott deLaHunta, Emma Redding, Tony Thatcher, Phil Barnard, John Matthias, Jane Grant)

As well as the symposia, there will be a panel discussion drawing together all the themes of the conference, a performance by Emma Redding & Tony Thatcher (as part of the Imagery, Dance and Creativity symposium) and keynote talks by:
- Linda Lanyon (Head of Programs at the INCF): Toward globally collaborative science: the International Neuroinformatics Coordinating Facility
- Guy Orban (Dept. of Neuroscience, University of Parma): Finding a home for the mirror neurons in human premotor cortex.

In addition, the evening programme includes a reception and CogTalk debate to mark the official launch of the Cognition Institute, and a special film screening in association with SciScreen.
Conference website: http://cognition.plymouth.ac.uk/annual-conference-lure-new/

The programme can be found here: http://cognition.plymouth.ac.uk/annual-conference-lure-new/programme/

Registration is now open and details of how to apply can be found here: http://cognition.plymouth.ac.uk/annual-conference-lure-new/registration/

Deadline for abstract submission: 8th March 2013 (extended from 1st March). Abstracts should be emailed to info.cognition at plymouth.ac.uk in Word doc or docx format, no longer than one A4 page, including all references, the title, all authors' names and affiliations, and the corresponding author's contact address and email.

For all conference enquiries please contact Lucy Davies at: info.cognition at plymouth.ac.uk.

Lucy Davies
Cognition Institute
Plymouth University
Room A222, Portland Square
Plymouth UK PL4 8AA
Tel: +44 (0)1752 584920
Website: http://www1.plymouth.ac.uk/research/cognition/Pages/default.aspx
Like us on Facebook: http://www.facebook.com/PlymCogInst
Follow us on Twitter: https://twitter.com/PlymCogInst
YouTube Channel: http://www.youtube.com/user/PlymouthCognition?feature=watch