From kkuehnbe at uos.de Fri Mar 1 03:51:13 2013 From: kkuehnbe at uos.de (Kai-Uwe Kuehnberger) Date: Fri, 01 Mar 2013 09:51:13 +0100 Subject: Connectionists: Deadline Extension of the 6th Conference of Artificial General Intelligence (AGI 2013) until March 8th, 2013 Message-ID: <51306C01.5000101@uos.de> ***Deadline Extension of the Sixth Conference on Artificial General Intelligence until March 8th, 2013*** *** Final Call for Papers: The Sixth Conference on Artificial General Intelligence (AGI-13) *** July 31, 2013 ? August 3, 2013, Beijing, China Conference Website: http://www.agi-conference.org/2013/ * Mission * The AGI conference series (http://www.agi-conf.org/) is the premier international forum for cutting-edge research focusing on the original goal of the AI field ? the creation of thinking machines with general intelligence at the human level and ultimately beyond. The AGI conference series is held in cooperation with AAAI, and AGI-13 will co-locate with IJCAI-13. * Topics * As in prior AGI conferences, we welcome papers on all aspects of AGI R&D, with the key proviso that each paper should in some way contribute specifically to the development of Artificial General Intelligence. * Special Session on Cognitive Robotics and AGI * This Special Session will feature papers giving new AGI ideas inspired by current research in Cognitive Robotics. * Workshops * AGI-13 will include the following workshops: 1. Formalizing Mechanisms for Artificial General Intelligence and Cognition (Formal MAGIC) 2. Probability Theory or Not? Practical and Theoretical Concerns on Uncertainty Handling in AGI See http://www.agi-conference.org/2013/workshops/ for details of the workshops. * Tutorials / Demonstrations * Tutorials and demonstrations will be held alongside the conference. For the requirements for proposals, please see the AGI-13 website. * Confirmed Keynotes * Thomas G. Dietterich (President-Elect of AAAI, Oregon State University) Dileep George (Vicarious Systems, Inc.) * Important Dates * Conference paper submission: March 8, 2013 Workshop/tutorial/demonstration submissions: April 10, 2013 Acceptance Notification: April 20, 2013 Camera-ready copy: May 15, 2013 Conference: July 31, 2013 ? August 3, 2013 * Submission Information * All papers have to be submitted via the conference submission page, to be announced at the AGI-13 website. The authors should not expect any extension of the deadlines, though special situations can be arranged in a case-by-case manner. If for a special reason your submission will be delayed for a few days, please contact the program committee co-chairs for advance approval. The conference papers will be published by Springer in the Lecture Notes in Artificial Intelligence (LNAI) series. Paper templates for both LaTeX and Word may be found here: http://www.springer.com/computer/lncs/lncs+authors?SGWID=0-40209-0-0-0 . Use the templates for "LNCS Proceedings and Other Multiauthor Volumes". The LaTeX template (use of which is preferred) is also given directly here: ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip. Papers must be in English, and submitted in PDF format. There are two types of submissions: 1. Full papers (up to 10 pages): Original research in the above areas. 2. Technical Communications (up to 4 pages): Results and ideas with interest to the AGI audience, including reports about recent own publications, position papers, and preliminary results. 
All accepted conference papers will be included in the proceedings, as well as presented at the conference as talks or as posters. At least one author of each accepted paper must register for the conference and present the paper there. * AGI Summer School * Collocating with AGI-13, An AGI Summer School will be held in July 17 to 30, 2013. For details, see http://www.mindmakers.org/projects/agi-summer-school-2013 --- From jenny.bizley at gmail.com Sat Mar 2 11:52:25 2013 From: jenny.bizley at gmail.com (Jenny Bizley) Date: Sat, 2 Mar 2013 16:52:25 +0000 Subject: Connectionists: Post-doc position at UCL Ear Institute Message-ID: Dear Colleagues, please forward the following advertisement to anyone who might be interested. Best wishes, Jenny *UCL Ear Institute* *Research Associate Position* This project aims to understand when visual information can enhance listening, and how integration of visual information into auditory cortex underpins this process. Experimental work will include training animals in listening tasks and using state-of-the-art recording techniques to measure the activity of single neurons and populations of neurons in the auditory cortex of freely moving animals, whilst they discriminate auditory-visual stimuli. The successful applicant will have a PhD in auditory or multisensory neuroscience, or a related discipline. Experience in one or more of the following areas is essential; electrophysiological recordings, animal or human psychophysics, digital signal processing, computational techniques and/or MATLAB programming. The post is funded by a Wellcome Trust / Royal Society Fellowship award to Dr Jennifer Bizley and is available for 3 years in the first instance. Informal enquiries about the project and position are welcome; please contact Dr Jennifer Bizley (*j.bizley at ucl.ac.uk* ). Applications should be made through UCL?s on-line application process ( http://www.ucl.ac.uk/hr/jobs/) searching under reference 1311893. The closing date for applications is April 1st 2013. -------------- next part -------------- An HTML attachment was scrubbed... URL: From retienne at jhu.edu Fri Mar 1 09:44:06 2013 From: retienne at jhu.edu (retienne) Date: Fri, 01 Mar 2013 09:44:06 -0500 Subject: Connectionists: Telluride 2013 Call for Participation - Accepting Applications Message-ID: <5130BEB6.2080402@jhu.edu> *------------------------------------------------------------------------------------------------------------------ 2013 Neuromorphic Cognition Engineering Workshop* */Telluride, Colorado, June 30^th - July 20^th , 2013/* *CALL FOR APPLICATIONS: Deadline is April 15th, 2013* NEUROMORPHIC COGNITION ENGINEERING WORKSHOP www.ine-web.org Sunday June 30th- Saturday July 20th, 2013, Telluride, Colorado We invite applications for a three-week summer workshop that will be held in Telluride, Colorado. Sunday June 30th- Saturday July 20th, 2013. The application deadline is *Monday, April 15th* and application instructions are described at the bottom of this document. The 2013 Workshop and Summer School on Neuromorphic Engineering is sponsored by the National Science Foundation, Institute of Neuromorphic Engineering, Qualcomm Corporation, The EU-Collaborative Convergent Science Network (CNS-II), , University of Maryland - College Park, Institute for Neuroinformatics - University and ETH Zurich, Georgia Institute of Technology, Johns Hopkins University, Boston University, University of Western Sydney and the Salk Institute. 
*Directors:* Cornelia Fermuller, University of Maryland, College Park Ralph Etienne-Cummings, Johns Hopkins University Shih-Chii Liu, Institute of Neuroinformatics, UNI/ETH Zurich Timothy Horiuchi, University of Maryland, College Park *Workshop Advisory Board:* Andreas ANDREOU (The Johns Hopkins University) Andre van SCHAIK (University Western Sydney) Avis COHEN (University of Maryland) Barbara SHINN-CUNNINGHAM (Boston University) Giacomo INDIVERI (Institute of Neuroinformatics, UNI/ETH Zurich, Switzerland) Jonathan TAPSON (University Western Sydney, Australia) Malcolm SLANEY (Microsoft Research) Paul HASLER (Georgia Institute of Technology) Rodney DOUGLAS (Institute of Neuroinformatics, UNI/ETH Zurich, Switzerland) Shihab SHAMMA (University of Maryland) Tobi Delbruck (Institute for Neuroinformatics, Zurich) *Previous year workshop can be found at:* http://ine-web.org/workshops/workshops-overview/index.html and last year's wiki is https://neuromorphs.net/nm/wiki/2011 . *GOALS:* Neuromorphic engineers design and fabricate artificial neural systems whose organizing principles are based on those of biological nervous systems. Over the past 18 years, this research community has focused on the understanding of low-level sensory processing and systems infrastructure; efforts are now expanding to apply this knowledge and infrastructure to addressing higher-level problems in perception, cognition, and learning. In this 3-week intensive workshop and through the Institute for Neuromorphic Engineering (INE), the mission is to promote interaction between senior and junior researchers; to educate new members of the community; to introduce new enabling fields and applications to the community; to promote on-going collaborative activities emerging from the Workshop, and to promote a self-sustaining research field. *FORMAT:* The three week summer workshop will include background lectures on systems and cognitive neuroscience (in particular sensory processing, learning and memory, motor systems and attention), practical tutorials on emerging hardware design, mobile robots, hands-on projects, and special interest groups. Participants are required to take part and possibly complete at least one of the projects proposed. They are furthermore encouraged to become involved in as many of the other activities proposed as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, some of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Projects and interest groups meet in the late afternoons, and after dinner. In the early afternoon there will be tutorials on a wide spectrum of topics, including analog VLSI, mobile robotics, vision and auditory systems, central-pattern-generators, selective attention mechanisms, cognitive systems, etc. 
*2013 TOPIC AREAS:* *1) Human Cognition: Decoding Perceived, Attended, Imagined Acoustic Events and Human-Robot Interfaces* Project Leaders: Shihab Shamma (UM-College Park), Malcolm Slaney (Microsoft), Barbara Shinn-Cunningham (Boston U), Edward Lalor (Trinity College, Dublin) Featuring: Chris Assad (NASA -- JPL) *2) Recognizing Manipulation Actions in Cluttered Environments from Vision and Sound* Project leaders: Cornelia Ferm?ller (UM-College Park), Andreas Andreou (JHU) Featuring: Bert Shi (HKUST, Hong Kong) and Ryad Benosman (UPMC, Paris) * 3) Dendritic Computation in Neurons and Engineered Devices* Project Leaders: Klaus M. Stiefel (UWS, Australia), Jonathan Tapson (UWS, Australia) *4) Universal Neuromorphic Systems and Sensors for Real-Time Mobile Robotics* Project Leaders: Jorg Conradt (TUM, Munich), Francesco Galluppi (U. Manchester, UK), Shih-Chii Liu (INI-ETH, Zurich) and Ralph Etienne-Cummings (JHU) Featuring: Bert Shi (HKUST, Hong Kong) and Ryad Benosman (UPMC, Paris) *5) Emerging Technology and Discussion Group: * *Cognitive Computing with Emerging Nanodevices* Group Leaders: Omid Kavehei (U. Melbourne, Australia), Tara Julia Hamilton (UNSW, Australia) *6) Terry Sejnowski (Salk Institute) -- Computational Neuroscience (invitational mini-workshop)* *LOCATION AND ARRANGEMENTS:* The summer school will take place in the small town of Telluride, 9000 feet high in southwest Colorado, about 6 hours drive away from Denver (350 miles). Great Lakes Aviation and America West Express airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Wireless internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of PCs running LINUX and Microsoft Windows for the workshop projects. We encourage participants to bring along their personal laptop. No cars are required. Given the small size of the town, we recommend that you do not rent a car. Bring hiking boots, warm clothes, rain gear, and a backpack, since Telluride is surrounded by beautiful mountains. Unless otherwise arranged with one of the organizers, we expect participants to stay for the entire duration of this three week workshop. *FINANCIAL ARRANGEMENTS:* Notification of acceptances will be mailed out around the April 30^th , 2013.The Workshop covers all your accommodations and facilities costs for the 3 weeks duration. You are responsible for your own travel to the Workshop, however, sponsored fellowships will be available as described below to further subsidize your cost. Registration Fees:For expenses not covered by federal funds, a Workshop registration fee is required. 
The fee is $1250 per participant for the 3-week Workshop. This is expected from all participants at the time of acceptance. Accommodations: The cost of a shared condominium, typically a bedroom in a shared condo for senior participants or a shared room for students, will be covered for all academic participants. Upgrades to a private room or condo will cost extra. Participants from National Laboratories and Industry are expected to pay for these condominiums. Fellowships: This year we will offer two Fellowships to subsidize your costs: 1) Qualcomm Corporation Fellowship: Three non-corporate participants will have their accommodation and registration fees ($2750) directly covered by Qualcomm, and will be reimbursed for travel costs up to $500. Additional generous funding from Qualcomm will provide $5000 to help organize and stage the Workshop. 2) EU-CSNII Fellowship (http://csnetwork.eu/), which is funded by the 7th Research Framework Program FP7-ICT-CSNII-601167: The top 8 EU applicants will be reimbursed for their registration fees ($1250), subsistence/travel subsidy (up to Euro 2000) and accommodations cost ($1500). The registration and accommodations costs will go directly to the INE (the INE will reimburse the participant's registration fees after receipt from CSNII), while the subsistence/travel reimbursement will be provided directly to the participants by the CSNII at the University of Pompeu Fabra, Barcelona, Spain. *HOW TO APPLY:* Applicants should be at the level of graduate students or above (i.e. postdoctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage women and minority candidates to apply. Anyone interested in proposing or discussing specific projects should contact the appropriate topic leaders directly. The application website is (after February 15th, 2013): http://ine-web.org/telluride-conference-2013/apply-info Application information needed: * contact email address * First name, Last name, Affiliation, valid e-mail address. * Curriculum Vitae (a short version, please). * One page summary of background and interests relevant to the workshop, including possible ideas for workshop projects. Please indicate which topic areas you would most likely join. * Two letters of recommendation (uploaded directly by references). Applicants will be notified by e-mail.
15th February, 2013 - Applications accepted on website
15th April, 2013 - Applications Due
30th April, 2013 - Notification of Acceptance
-- ------------------------------------------------- Ralph Etienne-Cummings Professor Department of Electrical and Computer Engineering The Johns Hopkins University 105 Barton Hall 3400 N. Charles Street Baltimore, MD 21218 Tel: (410) 516 3494 Fax: (410) 516 2939 Email: retienne at jhu.edu URL: http://etienne.ece.jhu.edu/
From benjamin.lindner at physik.hu-berlin.de Tue Mar 5 10:22:03 2013 From: benjamin.lindner at physik.hu-berlin.de (Benjamin Lindner) Date: Tue, 05 Mar 2013 16:22:03 +0100 Subject: Connectionists: Post-Doc position at the BCCN Berlin Message-ID: <51360D9B.3030407@physik.hu-berlin.de> Applications are invited for a 2.5-year post-doctoral position in the group of Prof. Lindner (Theory of Complex Systems and Neurophysics) at the Bernstein Center for Computational Neuroscience Berlin starting in June, 2013.
The focus of the theoretical project is on the spreading of activity and the signal transmission in recurrent networks of spiking neurons. This is motivated by reverse physiology experiments at the BCCN Berlin, in which evoked single-neuron activity in the cortex causes behavioral or motor responses. Ideally, theoretical insights will be instrumental for the interpretation of experimental data and the formulation of novel experimental questions. In order to pursue this kind of research, the successful candidate should have a strong interest in developing analytical approaches for the stochastic dynamics of single neurons and neural networks. A background in theoretical physics, nonlinear dynamics, and/or the theory of stochastic processes is thus advantageous although not obligatory. The position will also entail a moderate amount of teaching as well as the co-supervision of PhD and Master students. Applications, including a motivation letter, a CV, and a list of three potential referees, should be sent by email to benjamin.lindner at physik.hu-berlin.de (cc to nikola.schrenk at bccn-berlin.de) Prof. Dr. B. Lindner Humboldt-Universität zu Berlin Bernstein Center for Computational Neuroscience Institut für Physik Sitz: Philippstr. 13, Haus 2 10115 Berlin Postanschrift: Unter den Linden 6 10099 Berlin The deadline for applications is March 31, 2013; however, later applications may also be considered. -- -------------------------------------------------------------------------------------------------------------------- Benjamin Lindner Theory of Complex Systems and Neurophysics Bernstein Center for Computational Neuroscience Berlin Philippstr. 13, Haus 2, 10115 Berlin Room: 1.17, phone: 0049(0)302093 6336 Department of Physics Humboldt University Berlin Newtonstr. 15 12489 Berlin Room: 3.408, phone: 0049(0)302093 7934 http://people.physik.hu-berlin.de/~lindner/index.html --------------------------------------------------------------------------------------------------------------------
From grlmc at urv.cat Sat Mar 2 15:37:53 2013 From: grlmc at urv.cat (GRLMC) Date: Sat, 2 Mar 2013 21:37:53 +0100 Subject: Connectionists: SLSP 2013: submission deadline extended Message-ID: <730F86FCF31A42A291577B2F8513D07D@Carlos1> *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject* ---------------------------------------------------------------------------------------------------- SUBMISSION DEADLINE EXTENDED: March 15 !!! ---------------------------------------------------------------------------------------------------- ********************************************************************* 1st INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING SLSP 2013 Tarragona, Spain July 29-31, 2013 Organised by: Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University Research Institute for Information and Language Processing (RIILP) University of Wolverhampton http://grammars.grlmc.com/SLSP2013/ ********************************************************************* AIMS: SLSP is the first event in a series to host and promote research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing. It aims at attracting contributions from both fields.
Though there exist large, well-known conferences including papers in any of these fields, SLSP is a more focused meeting where synergies between areas and people will hopefully happen. SLSP will reserve significant space for young scholars at the beginning of their careers. VENUE: SLSP 2013 will take place in Tarragona, 100 km to the south of Barcelona. SCOPE: The conference invites submissions discussing the employment of statistical methods (including machine learning) within language and speech processing. The list below is indicative and not exhaustive: - phonology, morphology - syntax, semantics - discourse, dialogue, pragmatics - statistical models for natural language processing - supervised, unsupervised and semi-supervised machine learning methods applied to natural language, including speech - statistical methods, including biologically-inspired methods - similarity - alignment - language resources - part-of-speech tagging - parsing - semantic role labelling - natural language generation - anaphora and coreference resolution - speech recognition - speaker identification/verification - speech transcription - text-to-speech synthesis - machine translation - translation technology - text summarisation - information retrieval - text categorisation - information extraction - term extraction - spelling correction - text and web mining - opinion mining and sentiment analysis - spoken dialogue systems - author identification, plagiarism and spam filtering STRUCTURE: SLSP 2013 will consist of: - invited talks - invited tutorials - peer-reviewed contributions INVITED SPEAKERS: Yoshua Bengio (Montréal), tutorial: Learning Deep Representations Christof Monz (Amsterdam), Challenges and Opportunities of Multilingual Information Access Tanja Schultz (Karlsruhe Tech), Multilingual Speech Processing with a special emphasis on Rapid Language Adaptation PROGRAMME COMMITTEE: Carlos Martín-Vide (Tarragona, Co-Chair) Ruslan Mitkov (Wolverhampton, Co-Chair) Jerome Bellegarda (Apple Inc., Cupertino) Robert C. Berwick (MIT) Laurent Besacier (LIG, Grenoble) Bill Byrne (Cambridge) Jen-Tzung Chien (National Chiao Tung U, Hsinchu) Kenneth Church (IBM Research) Koby Crammer (Technion) Renato De Mori (McGill & Avignon) Thierry Dutoit (U Mons) Marcello Federico (Bruno Kessler Foundation, Trento) Katherine Forbes-Riley (Pittsburgh) Sadaoki Furui (Tokyo Tech) Yuqing Gao (IBM Thomas J. Watson) Ralph Grishman (New York U) Dilek Hakkani-Tür (Microsoft Research, Mountain View) Adam Kilgarriff (Lexical Computing Ltd., Brighton) Dietrich Klakow (Saarbrücken) Philipp Koehn (Edinburgh) Mikko Kurimo (Aalto) Lori Lamel (CNRS-LIMSI, Orsay) Philippe Langlais (Montréal) Haizhou Li (Institute for Infocomm Research, Singapore) Qun Liu (Dublin City) Daniel Marcu (SDL) Manuel Montes-y-Gómez (INAOEP, Puebla) Masaaki Nagata (NTT, Kyoto) Joakim Nivre (Uppsala) Kemal Oflazer (Carnegie Mellon Qatar, Doha) Miles Osborne (Edinburgh) Manny Rayner (Geneva) Giuseppe Riccardi (U Trento) José A.
Rodríguez Fonollosa (Technical U Catalonia, Barcelona) Paolo Rosso (Technical U Valencia) Mark Steedman (Edinburgh) Tomek Strzalkowski (Albany) Gökhan Tür (Microsoft Research, Redmond) Stephan Vogel (Qatar Computing Research Institute, Doha) Kuansan Wang (Microsoft Research, Redmond) Dekai Wu (HKUST, Hong Kong) Min Zhang (Institute for Infocomm Research, Singapore) Yunxin Zhao (U Missouri, Columbia) ORGANISING COMMITTEE: Adrian Horia Dediu (Tarragona) Carlos Martín-Vide (Tarragona, Co-Chair) Ruslan Mitkov (Wolverhampton, Co-Chair) Bianca Truthe (Magdeburg) Florentina Lilica Voicu (Tarragona) SUBMISSIONS: Authors are invited to submit papers presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (including eventual appendices) and should be formatted according to the standard format for Springer Verlag's LNAI series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Submissions are to be uploaded to: https://www.easychair.org/conferences/?conf=slsp2013 PUBLICATIONS: A volume of proceedings published by Springer in the LNAI topical subseries of the LNCS series will be available by the time of the conference. A special issue of a major journal will be published later, containing peer-reviewed extended versions of some of the papers contributed to the conference. Submissions will be by invitation. REGISTRATION: The period for registration is open from November 30, 2012 to July 29, 2013. The registration form can be found at: http://grammars.grlmc.com/SLSP2013/Registration DEADLINES: Paper submission: March 15, 2013 (23:59h, CET) - EXTENDED - Notification of paper acceptance or rejection: April 15, 2013 Final version of the paper for the LNAI proceedings: April 22, 2013 Early registration: April 24, 2013 Late registration: July 19, 2013 Submission to the post-conference journal special issue: October 31, 2013 QUESTIONS AND FURTHER INFORMATION: florentinalilica.voicu at urv.cat POSTAL ADDRESS: SLSP 2013 Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University Av. Catalunya, 35 43002 Tarragona, Spain Phone: +34-977-559543 Fax: +34-977-558386 ACKNOWLEDGEMENTS: Diputació de Tarragona Universitat Rovira i Virgili University of Wolverhampton
From jbongard at uvm.edu Mon Mar 4 11:53:01 2013 From: jbongard at uvm.edu (Joshua C. Bongard) Date: Mon, 04 Mar 2013 11:53:01 -0500 Subject: Connectionists: modularity vs. sparsity Message-ID: <20130304115301.Horde.ywMJJ16adDBRNNFtaO6QwwA@webmail.uvm.edu> A lot of discussion here lately about the evolution of modularity, sparked by the recent paper http://arxiv.org/abs/1207.2743 In it, the authors evolve ANNs to perform a specific task while placing a penalty on total wiring length. There were a lot of responses here along the lines of "it's not surprising that they got modularity because they placed a cost on connection length." However, what I found interesting about this paper is that they got modularity rather than sparsity. It is important to keep in mind that these two properties of networks are not the same thing. So, if I were to use Clune et al.'s approach to evolve ANNs for another task, would I get modularity or sparsity? Food for thought.
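For readers new to the distinction raised here: sparsity and modularity can be computed separately on the same network, and they can disagree. The sketch below is a loose illustration using networkx, not the measures or networks from the Clune et al. paper; the toy graph and the community-detection routine are arbitrary choices made only for demonstration. Sparsity is the fraction of possible connections that are absent, while modularity (Newman's Q) asks how strongly the connections that do exist cluster into communities.

```python
# Toy contrast between sparsity and modularity of a connectivity graph.
# Illustrative only -- not the exact measures used by Clune et al.
import networkx as nx
from networkx.algorithms import community

# Two densely wired clusters joined by a single "long-range" edge:
# the network is sparse overall *and* modular.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3),   # module A
                  (3, 4),                           # bridge
                  (4, 5), (4, 6), (5, 6)])          # module B

# Sparsity: fraction of possible edges that are absent.
sparsity = 1.0 - nx.density(G)

# Modularity: how strongly edges concentrate within detected communities,
# relative to a degree-matched random expectation (Newman's Q).
parts = community.greedy_modularity_communities(G)
Q = community.modularity(G, parts)

print(f"sparsity     = {sparsity:.2f}")
print(f"modularity Q = {Q:.2f} over {len(parts)} communities")
# A degree-matched random rewiring of G would typically be just as sparse
# but have a much lower Q -- the two properties are not the same thing.
```

A wiring-cost penalty could in principle deliver either outcome, a sparser but unstructured network or one whose remaining connections group into modules, which is exactly the distinction at stake in the question above.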
From mail at mkaiser.de Tue Mar 5 13:04:43 2013 From: mail at mkaiser.de (Marcus Kaiser) Date: Tue, 5 Mar 2013 18:04:43 +0000 Subject: Connectionists: Master program in Neuroinformatics@Newcastle University Message-ID: Dear all, our one-year master degree program in Neuroinformatics at Newcastle University is now accepting student applications. The course focuses on handling brain connectivity datasets, analyzing electrophysiological recordings, and simulating neural activity and development. Neuroinformatics is one of the strategic areas of neuroscience research within Newcastle University (see overview at http://research.ncl.ac.uk/neuroinformatics/ ). Close interactions with experimental and clinical researchers are a key component of the course and the dissertation research project. Ongoing research areas in Newcastle include neuroimaging, psychophysics, systems neuroscience (visual, auditory, and motor system), aging, neurorehabilitation, brain rhythms, brain-machine interfaces, neurochips, and connectomics (http://www.ncl.ac.uk/ion ). Newcastle University hosts around 100 principal investigators in the neurosciences. You can find out more about the program and how to apply at http://www.ncl.ac.uk/computing/study/postgrad/taught/5199/ COURSE OUTLINE The MSc in Neuroinformatics is a full-time, one-year advanced masters course designed for students who have a good degree in the biological sciences or the physical sciences (computer science, mathematics, physics, engineering). It provides the specialist skills in core Neuroinformatics courses (such as computing and biology) with a significant focus on the development of research skills. The program aims to equip its graduates with the necessary skills to contribute to biologically realistic simulations of neural activity and development that are rapidly becoming the key focus of Neuroinformatics research. Prior experience with computers or computer programming is not required. The program is ideal for students aiming for careers in industry or academia. The course is based in the School of Computing Science and taught jointly by the Schools of Computing Science, Mathematics and Statistics, Biology, Cell and Molecular Biosciences and The Institute of Human Genetics. In addition, there are strong links with the Institute of Neuroscience and graduates of this master program might either apply for PhD studies at the School of Computing Science or for the Wellcome Trust 4-year PhD program in Systems Neuroscience (http://www.ncl.ac.uk/ion/study/wellcome/ ). WHY STUDY AT NEWCASTLE? The MSc in Neuroinformatics is a truly interdisciplinary degree and provides the dual skills necessary to establish a rewarding career in this research area. The Newcastle program has a research focus on data management, network analysis (e.g. Kaiser, Neuroimage, 2011), and simulation, whilst delivering sound training and an introduction to research in computation and statistics, including exciting new areas such as e-science and cloud computing. Newcastle is among the pioneers of the field in the UK and hosted the ?4m EPSRC-funded CARMEN project for managing and processing electrophysiology data. Newcastle has strong links with the International Neuroinformatics Coordinating Facility (INCF). Currently, members of the faculty lead the data-sharing special interest group and the UK special interest groups in image-based Neuroinformatics and brain connectivity as well as in neurally-inspired engineering. 
COURSE CONTENT Semester 1 contains modules to build the basic grounding in, and understanding of, Neuroinformatics theory and applications, together with necessary computational and numeric understanding to undertake more specialist modules next semester. Training in mathematics and statistics is also provided. Semester 2 introduces modules that focus heavily on introducing subject-specific research skills and includes three option slots for choosing modules. A major part of the Newcastle MSc in Neuroinformatics is a research project that will occupy approximately six months. This project may be associated with staff in any of the Schools mentioned above, thus providing a wide range of exciting areas in which the newly learnt Neuroinformatics skills can be deployed. HOW TO APPLY Applications for this program are now being accepted. You can apply online using the electronic application system with the degree identifier 5199F. Please check http://www.ncl.ac.uk/computing/study/postgrad/taught/5199/ for more information. Best, Marcus -- Marcus Kaiser, Ph.D. Associate Professor (Reader) in Neuroinformatics School of Computing Science Newcastle University Claremont Tower Newcastle upon Tyne NE1 7RU, UK Lab website: http://www.biological-networks.org/ Neuroinformatics at Newcastle: http://research.ncl.ac.uk/neuroinformatics/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmaniada at ics.forth.gr Tue Mar 5 04:13:38 2013 From: mmaniada at ics.forth.gr (Michail Maniadakis) Date: Tue, 05 Mar 2013 11:13:38 +0200 Subject: Connectionists: Call For Paper Contributions: Sense of Time in Robotics Message-ID: <5135B742.1080906@ics.forth.gr> Dear colleagues, Marc Wittmann, Sylvie Droit-Volet and I, in collaboration with Frontiers in Neuroscience, organize a Research Topic (a collection of papers) with title: "Towards embodied artificial cognition: TIME is on my side". You may find the relevant call-for-papers in the following link http://www.frontiersin.org/Neurorobotics/researchtopics/Towards_embodied_artificial_co/1554 As host editors, we would like to encourage you to submit an article to this topic. Contributions can be articles describing original research, methods, hypothesis & theory, opinions, etc. The idea is to create an organized, comprehensive collection of several contributions, as well as a forum for discussion and debate. Frontiers will compile an e-book, as soon as all contributing articles are published, that can be used in classes, be sent to foundations that fund your research, to journalists and press agencies, or to any number of other organizations. Among others, we have contacted all following authors: Rolf Pfeifer Alessandro Moscatelli Wolfgang Ambach Ravi Rao Argiro Vatakis Takashi Ikegami AD Craig Caspar Addyman Yoonsuck Choe Warren Meck Stefano Nolfi Dean Buonomano Deborah Lynn Harrington David Eagleman Elena Sgouramani Elizabeth Thomas Katya Rubia Jun Tani Frontiers is a Swiss Gold-model open-access publisher. As such, a manuscript accepted for publication incurs a publishing fee, which varies depending on the article type. Research Topic manuscripts receive a significant discount on publishing fees. Please take a look at this fee table: http://www.frontiersin.org/about/PublishingFees. Once published, your articles will remain free to access for all readers, and will be indexed in PubMed and other academic archives. As an author in Frontiers, you retain the copyright to your own papers and figures. 
We would be delighted if you considered participating in this Research Topic. Should you choose to participate, please confirm by sending us a quick email and then your abstract no later than May 15. Please note that the deadline for the full manuscript submission is Oct 30, 2013. With best regards, Michail Maniadakis, Marc Wittmann and Sylvie Droit-Volet, Guest Associate Editors, Frontiers in Neurorobotics www.frontiersin.org
From qliu1 at uci.edu Mon Mar 4 01:41:20 2013 From: qliu1 at uci.edu (Qiang Liu) Date: Sun, 3 Mar 2013 22:41:20 -0800 Subject: Connectionists: CFP: ICML'13 Workshop: Machine Learning Meets Crowdsourcing In-Reply-To: References: Message-ID: CALL FOR CONTRIBUTIONS ICML '13 Workshop: Machine Learning Meets Crowdsourcing Date: June 21, 2013 Location: Atlanta, USA http://www.ics.uci.edu/~qliu1/MLcrowd_ICML_workshop/ Deadline: 15th Apr, 2013 Acceptance notification: 15th May, 2013. Workshop date: 21st June, 2013. Our ability to solve challenging scientific and engineering problems relies on a mix of human and machine intelligence. Machine learning (ML) research in the past two decades has created a set of powerful theoretical and empirical tools for exploiting machine intelligence. On the other side, the recent rise of human computation and crowdsourcing approaches enables us to systematically harvest and organize human intelligence, for solving problems that are easy for humans but difficult for computers. The past few years have witnessed widespread use of the crowdsourcing paradigm, including task-solving platforms like Amazon Mechanical Turk and CrowdFlower, crowd-powered scientific projects like GalaxyZoo and the Foldit game, as well as various successful crowdsourcing businesses such as crowdfunding and open innovation, to name a few. This trend yields both new opportunities and challenges for the machine learning community. On one side, crowdsourcing systems provide machine learning researchers with the ability to gather large amounts of valuable data and information, leading to advances on challenging problems in areas like computer vision and natural language processing. On the other side, crowdsourcing faces challenges in increasing its reliability, efficiency and scalability, for which machine learning can provide powerful computational tools. More importantly, building systems that seamlessly integrate machine learning and crowdsourcing techniques can greatly push the frontier of our ability to solve challenging and large-scale problems. The goal of this workshop is to bring together experts in fields related to crowdsourcing, such as economics, game theory, cognitive science and human-computer interaction, with the machine learning community, in a workshop focused on areas where crowdsourcing can contribute to machine learning and vice versa. We are interested in a wide variety of topics, including but not limited to: ---- State of the field. What are the emerging crowdsourcing tasks and new opportunities for machine learning? What are the latest and greatest tasks being tackled by crowdsourcing and human intelligence, and how do these tasks highlight the need for new machine learning approaches that aren't being studied already? ---- Integrating machine and human intelligence. How to build practical systems that seamlessly integrate machine and human intelligence? Machine learning algorithms can help the crowdsourcing component to manage work flows and control workers'
qualities, while the crowds can be used to handle the tasks that are difficult for machines, adaptively boosting the performance of machine learning algorithms. ---- Machine learning for crowdsourcing. Many machine learning approaches have been applied to crowdsourcing on problems such as output aggregation, quality control, work flow management and incentive mechanism design. We expect to see more machine learning contributions to crowdsourcing, either through novel ML methods or on new crowdsourcing problems. ---- Crowdsourcing for machine learning. Machine learning largely relies on big and high quality data, which can be provided by crowdsourcing systems, perhaps in an automatic and adaptive way. Also, most machine learning algorithms have many design choices that require human intelligence, including tuning hyper-parameters, selecting score functions, and designing kernel functions. How can we systematically "outsource" these typically expert-level design choices to the crowds in order to achieve results that match expert-level human experience? ---- Crowdsourcing complicated tasks. How should we design work flows and aggregate answers in crowdsourcing systems that collect structured labels, such as bounding box annotations in computer vision or protein folding structures in biology, or that solve complicated tasks such as proofreading and machine translation? How can machine learning provide help in these cases? ---- Theoretical analysis. There are many open theoretical questions in crowdsourcing that can be addressed by statistics and learning theory. Examples include analyzing label aggregation algorithms such as EM, or budget allocation strategies. Invited Speakers ~~~~~~~~~~~~~~~~~~ - Jeffrey P. Bigham. University of Rochester - Yiling Chen. Harvard University - Panagiotis G. Ipeirotis. NYU Stern School of Business - Mark Steyvers. UC Irvine - More ... Submission Details ~~~~~~~~~~~~~~~~~ Submissions should follow the ICML format (http://icml.cc/2013/wp-content/uploads/2012/12/icml2013stylefiles.tar.gz) and are encouraged to be up to eight pages. Papers submitted for review do not need to be anonymized. There will be no official proceedings, but the accepted papers will be made available on the workshop website. Accepted papers will be presented either as talks or as posters. We welcome both submissions on novel research work and extended abstracts on work recently published or under review at another conference or journal (please state the venue of publication in the latter case); we particularly encourage submission of visionary position papers on the emerging trends in crowdsourcing and machine learning. Please submit papers in PDF format at https://cmt.research.microsoft.com/MLCROWD2013/. Important Dates ~~~~~~~~~~~~~~ Extended abstract submission deadline: 15th Apr, 2013. Acceptance notification: 15th May, 2013. Workshop date: 21st June, 2013. Workshop Organizers: ~~~~~~~~~~~~~~~~~~~~~ Paul Bennett, Dengyong Zhou, John Platt, Microsoft Research Redmond Qiang Liu, UC Irvine Xi Chen, Qihang Lin, CMU Contact: ~~~~~~~ lqiang67+MLcrowdworkshop at gmail.com -- Qiang Liu Ph.D. student Information & Computer Department University of California, Irvine Homepage: http://www.ics.uci.edu/~qliu1/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rogrady at ulb.ac.be Mon Mar 4 11:08:17 2013 From: rogrady at ulb.ac.be (Rehan O'Grady) Date: Mon, 4 Mar 2013 17:08:17 +0100 Subject: Connectionists: Reminder: AAAI Video Competition 2013 - Call For Videos Message-ID: Dear Colleague, AAAI is pleased to announce the continuation of the AAAI Video Competition, now entering its seventh year. The video competition will be held in conjunction with the AAAI-13 conference in Bellevue, Washington, July 14?18, 2013. At the award ceremony, authors of award-winning videos will be presented with "Shakeys", trophies named in honour of SRI's Shakey robot and its pioneering video. Award winning videos will be screened at this ceremony. The goal of the competition is to show the world how much fun AI is by documenting exciting artificial intelligence advances in research, education, and application. View previous entries and award winners at http://www.aaaivideos.org/past_competitions. The rules are simple: Compose a short video about an exciting AI project, and narrate it so that it is accessible to a broad online audience. We strongly encourage student participation. VIDEO FORMAT AND CONTENT Either 1 minute (max) short video or a 5 minute (max) long video, with English narration (or English subtitles). Consider combining screen shots, interviews, and video of a system in action. Make the video self-contained, so that newcomers to AI can understand and learn from it. We encourage a good sense of humor, but will only accept submissions with serious AI content. For example, we welcome submissions of videos that: * Highlight a research topic - contemporary or historic, your own or from another group * Introduce viewers to an exciting new AI-related technology * Provide a window into the research activities of a laboratory and/or senior researcher * Attract prospective students to the field of AI * Explain AI concepts - your video could be used in the classroom Please note that this list is not exhaustive. Novel ideas for AI-based videos, including those not necessarily based on a "system in action", are encouraged. No matter what your choice, creativity is encouraged! (Please note: The authors of previous, award-winning videos typically used humor, background music, and carefully selected movie clips to make their presentations come alive.) Please also note that videos should contain only material for which the authors have copyright. Clips from films or television and music for soundtrack should only be used if copyright permission has been granted by the copyright holders. SUBMISSION INSTRUCTIONS Submit your video by making it available for download on a (preferably password-protected) ftp or web site. Once you have done so, please fill out the submission form ( http://www.aaaivideos.org/aaaistatic/submission/aaai_video_comp_2013_submission_form.txt) and send it to us by email (submission at aaaivideos.org). All submissions are due no later than April 30, 2013. REVIEW AND AWARD PROCESS Submitted videos will be peer-reviewed by members of the programme committee according to the criteria below. Videos that receive positive reviews will be accepted for publication in the AAAI Video Competition proceedings, published on the dedicated website (http://www.aaaivideos.org). The best videos will be nominated for awards. Winners will be revealed at the award ceremony during AAAI-13. All authors of accepted videos will be asked to sign a distribution license form. Review criteria: 1. Relevance to AI (research or application) 2. 
Excitement generated by the technology presented 3. Educational content 4. Entertainment value 5. Presentation (cinematography, narration, soundtrack, production values) AWARD CATEGORIES Best Video, Best Short Video, Best Student Video, Most Jaw-Dropping Technology, Most Educational, Most Entertaining and Best Presentation. (Categories may be changed at the discretion of the chairs.) AWARDS Trophies ("Shakeys"). KEY DATES * Submission Deadline: April 30, 2013 * Reviewing Decision Notifications & Award Nominations: May 31, 2013 * Final Version Due: June 16, 2013 * Screening and Award Presentations: TBD FOR MORE INFORMATION Please contact us at info at aaaivideos.org We look forward to your participation in this exciting event! Marco Dorigo, Mauro Birattari and Rehan O'Grady Co-Chairs, AAAI Video Competition 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ted.carnevale at yale.edu Mon Mar 4 14:39:35 2013 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Mon, 04 Mar 2013 14:39:35 -0500 Subject: Connectionists: NEURON 2013 Summer Course Message-ID: <5134F877.20702@yale.edu> COURSE ANNOUNCEMENT What: "The NEURON Simulation Environment" (NEURON 2013 Summer Course) http://www.neuron.yale.edu/neuron/static/courses/nscsd2013/nscsd2013.html When: 9 AM Saturday, June 22, through noon Tuesday, June 25, 2013 Where: The Institute for Neural Computation at the University of California, San Diego, CA Organizers: N.T. Carnevale and M.L. Hines Description: In three and a half days of intensive lectures, demonstrations, and hands-on exercises, this course will cover the principles and practice of the design, construction, and use of models in the NEURON simulation environment. It is designed primarily for those who are concerned with models of biological neurons and neural networks that are closely linked to empirical observations, e.g. experimentalists who wish to incorporate modeling in their research plans, and theoreticians who are interested in the principles of biological computation. The course will be useful and informative for registrants at all levels of experience, from those who are just beginning to those who are already quite familiar with NEURON or other simulation tools. Registration is limited to 20, and the deadline for receipt of applications is Friday, May 24, 2013. For more information see http://www.neuron.yale.edu/neuron/static/courses/nscsd2013/nscsd2013.html or contact Ted Carnevale Neurobiology Dept. Yale University School of Medicine PO Box 208001 New Haven, CT 06520-8001 phone 203-494-7381 email ted.carnevale at yale.edu Supported in part by: National Institutes of Health National Science Foundation Institute for Neural Computation http://inc.ucsd.edu/ Contractual terms require inclusion of the following statement: This course is not sponsored by the University of California. From ted.carnevale at yale.edu Mon Mar 4 14:41:05 2013 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Mon, 04 Mar 2013 14:41:05 -0500 Subject: Connectionists: Course: Parallelizing NEURON Models Message-ID: <5134F8D1.7070006@yale.edu> COURSE ANNOUNCEMENT What: "Parallelizing NEURON Models" http://www.neuron.yale.edu/neuron/static/courses/parnrn2013/parnrn2013.html When: 9 AM Wednesday, June 26, through 5 PM Sunday, June 30, 2013 Where: The Institute for Neural Computation at the University of California, San Diego, CA Organizers: N.T. Carnevale and M.L. 
Hines Description: This five day intensive course is designed for advanced modelers who --already know how to write hoc or Python code for NEURON, AND -- need to develop a model that runs on parallel hardware, or port an existing model from serial hardware to parallel hardware. This course goes into considerable depth on key strategies for developing, debugging, and using NEURON models in high performance computing environments. Each day will start with didactic presentations and discussions that address general strategies and specific tactics for solving common problems. However, this is very much a "hands-on" course, so afternoons will be devoted to coding sessions in which participants put what they have learned to practical use while expert consultation is close at hand. Registration is limited to 12, and the deadline for receipt of applications is Friday, May 24, 2013. For more information see http://www.neuron.yale.edu/neuron/static/courses/parnrn2013/parnrn2013.html or contact Ted Carnevale Neurobiology Dept. Yale University School of Medicine PO Box 208001 New Haven, CT 06520-8001 phone 203-494-7381 email ted.carnevale at yale.edu Supported in part by: National Institutes of Health National Science Foundation Institute for Neural Computation http://inc.ucsd.edu/ Contractual terms require inclusion of the following statement: This course is not sponsored by the University of California. From terry at salk.edu Wed Mar 6 01:09:00 2013 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 05 Mar 2013 22:09:00 -0800 Subject: Connectionists: NEURAL COMPUTATION - March 1, 2013 In-Reply-To: Message-ID: Neural Computation - Contents -- Volume 25, Number 3 - March 1, 2013 Review Online Learning With (Multiple) Kernels: A Review Tom Diethe, Mark Girolami Article Opening the Black Box: Low-dimensional Dynamics in High-dimensional Recurrent Neural Networks David Sussillo, Omri Barak Letters System Identification of mGluR-dependent Long-term Depression Jean-Marie Aerts, Tim Tambuyzer, Tariq Ahmed, C. James Taylor, Daniel Berckmans, and Detlef Balschun Echo State Property Linked to an Input: Exploring a Fundamental Characteristic of Recurrent Neural Networks G Manjunath, Herbert Jaeger Movement Duration, Fitts's Law, and an Infinite-horizon Optimal Feedback Control Model for Biological Motor Systems Ning Qian, Yu Jiang, Zhong-Ping Jiang, and Pietro Mazzoni Sufficient Dimension Reduction via Squared-loss Mutual Information Estimation Taiji Suzuki, Masashi Sugiyama A Unified Classification Model Based on Robust Optimization Akiko Takeda, Hiroyuki Mitsugi, and Takafumi Kanamori Enhanced Gradient for Training Restricted Boltzmann Machines KyungHyun Cho, Tapani Raiko, and Alexander Ilin ------------ ON-LINE -- http://www.mitpressjournals.org/neuralcomp SUBSCRIPTIONS - 2013 - VOLUME 25 - 12 ISSUES USA Others Electronic Only Student/Retired $70 $193 $65 Individual $124 $187 $115 Institution $1,035 $1,098 $926 Canada: Add 5% GST MIT Press Journals, 238 Main Street, Suite 500, Cambridge, MA 02142-9902 Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ------------ From giles at ist.psu.edu Wed Mar 6 09:43:50 2013 From: giles at ist.psu.edu (Lee Giles) Date: Wed, 6 Mar 2013 09:43:50 -0500 Subject: Connectionists: modularity vs. sparsity In-Reply-To: <20130304115301.Horde.ywMJJ16adDBRNNFtaO6QwwA@webmail.uvm.edu> References: <20130304115301.Horde.ywMJJ16adDBRNNFtaO6QwwA@webmail.uvm.edu> Message-ID: <51375626.3060708@ist.psu.edu> I believe Joshua has an interesting point. 
In our '97 paper in IEEE TSP titled "A Delay Damage Model Selection Algorithm for NARX Neural Networks," we found significant performance improvement with memory pruning which is a special kind of imposed cost on connection lengths. It would be interesting to redo these experiments to see if modularity as defined by Clune, et al. emerges. I speculate it would. (Paper attached for convenience) On 3/4/13 11:53 AM, Joshua C. Bongard wrote: > A lot of discussion here lately about the evolution of modularity, > sparked by the recent paper > > http://arxiv.org/abs/1207.2743 > > In it, the authors evolve ANNs to perform a specific task while placing > a penalty on total wiring length. > > There were a lot of responses here along the lines of "it's not > surprising > that they got modularity because they placed a cost on connection > length." > > However, what I found interesting about this paper is that they got > modularity rather than sparsity. It is important to keep in mind that > these two properties of networks are not the same thing. > > So, if I were to use Clune et al.'s approach to evolve ANNs for another > task, would I get modularity or sparsity? > > Food for thought. > > -------------- next part -------------- A non-text attachment was scrubbed... Name: IEEETSP-1997-delay-damage.pdf Type: application/download Size: 297974 bytes Desc: not available URL: From i.bojak at reading.ac.uk Thu Mar 7 10:59:09 2013 From: i.bojak at reading.ac.uk (Ingo Bojak) Date: Thu, 07 Mar 2013 15:59:09 +0000 Subject: Connectionists: Summer School/Creative Workshop: Data Assimilation & Inverse Problems - From Weather Forecasting to Neuroscience In-Reply-To: <50F80BB0.8040105@reading.ac.uk> References: <50F80BB0.8040105@reading.ac.uk> Message-ID: <5138B94D.7090504@reading.ac.uk> (Please note: the following workshop will happen just after the CNS*2013 conference in Paris, and the University of Reading is about an hour of travel away from London Heathrow. So this could be an interesting detour if you are flying in from overseas to attend CNS*2013 anyhow. The workshop is focusing on methods and as the subtitle makes clear, neuroscientists are very welcome. - Ingo) ** *Summer School/Creative Workshop* Data Assimilation & Inverse Problems *From Weather Forecasting to Neuroscience * July 22-26, 2013, University of Reading, United Kingdom Data Assimilation is concerned with the reconstruction of the state of a dynamical system from measured data. It is usually employed in a cycled way, where data are used to correct or guide the dynamical system step by step, while forecasts are generated on a desired time grid. This is a key ingredient for important applications, for example for Numerical Weather Prediction (NWP) or predictions of Climate Change, but it is also increasingly employed in medical and industrial applications. The Summer School and Creative Workshop on Data Assimilation and Inverse Problems at the University of Reading provides an introduction into the key concepts and techniques in the field both in algorithmic developments as well as important applications. In a framework which consists of introductory lectures by selected international experts in the field we aim to have time for mutual exchange and discussions in a relaxed and stimulating atmosphere at the green campus of the University of Reading. 
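The cycled use described above alternates a forecast step (propagate the current state estimate with the model) and an analysis step (correct that forecast with incoming observations). The following scalar Kalman-filter loop is a generic textbook sketch of that forecast-analysis cycle, not material from the summer school; the model, observation operator, noise variances and initial values are invented placeholders.

```python
# Minimal forecast-analysis cycle for a scalar linear-Gaussian system --
# a textbook Kalman-filter illustration of "cycled" data assimilation.
# All numbers are invented placeholders.
import numpy as np

rng = np.random.default_rng(42)

M, Q = 0.95, 0.1      # model: x_{k+1} = M * x_k + noise with variance Q
H, R = 1.0, 0.5       # observation: y_k = H * x_k + noise with variance R

x_true = 5.0
x_a, P_a = 0.0, 10.0  # initial analysis state and its error variance

for k in range(10):
    # Truth evolves and is observed with noise
    x_true = M * x_true + rng.normal(scale=np.sqrt(Q))
    y = H * x_true + rng.normal(scale=np.sqrt(R))

    # Forecast step: propagate the previous analysis with the model
    x_f = M * x_a
    P_f = M * P_a * M + Q

    # Analysis step: correct the forecast with the new observation
    K = P_f * H / (H * P_f * H + R)      # Kalman gain
    x_a = x_f + K * (y - H * x_f)
    P_a = (1.0 - K * H) * P_f

    print(f"step {k}: truth={x_true:+.2f}  forecast={x_f:+.2f}  analysis={x_a:+.2f}")
```

In operational settings the state is high-dimensional and the update must be approximated (3dVar/4dVar, ensemble Kalman filters, particle filters), which is what the lecture programme below works through.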
*Lecturers* include: Steven Schiff, Dennis McLaughlin, Takemasa Miyoshi, Tijana Janjic-Pfander, Roland Potthast, Peter-Jan van Leeuwen, Sarah Dance, Jochen Broecker, Nancy Nichols, Ingo Bojak, Hendrik Reich, Africa Perianez. *Local Organizers:* Africa Perianez, Etienne Roesch, Ingo Bojak, Roland Potthast, Kelly Sloan and Douglas Saddy
Time Monday Tuesday Wednesday Thursday Friday
09:00-09:45 Special Lecture Numerical Weather Prediction I Discussion time/Poster Special Lecture Weak constraint 4dVar
09:45-10:30 Special Lecture Numerical Weather Prediction II Discussion time/Poster Special Lecture Discussion time/Poster
Coffee Coffee Coffee Coffee Coffee
11:00-11:45 Lecture 1: DA Introduction Lecture 3: Variational DA I 3dVar and Stability Lecture 5: Variational DA II: Adjoints and 4dVar Lecture 7: Ensemble Methods Lecture 9: Particle Filters I
11:45-12:30 Lecture 2: Basics of Inverse Problems Lecture 4: Kalman Filter Lecture 6: Covariances, Obs. and Model Error Lecture 8: Localization Techniques Lecture 10: Particle Filters II
Lunch Lunch Lunch Lunch Finish
14:00-14:45 Discussion time/Poster Discussion time/Poster Special Lecture Reservoir Modeling I Discussion time/Poster
14:45-15:30 Discussion time/Poster Discussion time/Poster Special Lecture Reservoir Modeling II Discussion time/Poster
Coffee Coffee Coffee Coffee
16:15-17:00 Special Lecture Neuroscience I Exercise: Coding Examples Discussion time/Poster Exercise: Coding Examples
17:00-17:45 Special Lecture Neuroscience II Exercise: Coding Examples Discussion time/Poster Exercise: Coding Examples
18:00 Icebreaker Joint Dinner
Fee for Participants: GBP 50,-. Lunches and Coffee will be provided. Please register before May 31st by email with Africa Perianez: a.perianez at pgr.reading.ac.uk. Poster contributions welcome, reserve space before May 31st! For more info see the PDF flyer.
From shyam at amrita.edu Wed Mar 6 15:25:05 2013 From: shyam at amrita.edu (Shyam Diwakar) Date: Thu, 7 Mar 2013 01:55:05 +0530 (IST) Subject: Connectionists: Amrita BioQuest 2013 - Intl. Conf. on Biotechnology for Innovative Applications - Abstract Deadline Extended (UPDATED) In-Reply-To: <5790439.19437.1362596926521.JavaMail.root@mail.amrita.edu> Message-ID: <2382364.19448.1362601505438.JavaMail.root@mail.amrita.edu> Dear Colleagues, Kindly note that the deadline for the call for abstracts has been extended. --------------------Extended Deadline - Call for Abstracts ------------------------ International Conference on Biotechnology for Innovative Applications - Amrita BioQuest 2013 The growing impact of Biotechnology in all spheres has resulted in the active and focused pursuit of innovative applications that capitalize on novel approaches to enhance our strengths in frontier areas of Biotechnology. Amrita BioQuest 2013 will provide an ideal forum to discuss and deliberate on established as well as emerging platforms and strategies. Date of Conference: 10-14 August, 2013 Venue: Kochi (Aug 10), Amritapuri (Aug 11-14), Kerala, India. Extended Abstract deadline: March 31, 2013 Tracks: Genomics and translational medicine, Neurobiology and Computational Neuroscience, Biomedical engineering, Computational Biology and Bioinformatics, Bioanalytical techniques, Bioprospecting and Bioengineering, Biotechnology in India. More info: Conference website: http://amrita.edu/bioquest Abstract submission link: http://www.amritabioquest.org/abstract.html Facebook page: https://www.facebook.com/amritabioquestconference
https://www.facebook.com/amritabioquestconference Invited Speakers (updating/current): http://www.amritabioquest.org/speakers.html ------------------------------------------------------------------------------- shyam -- Dr. Shyam Diwakar VALUE @ Amrita - Virtual & Accessible Laboratories Universalizing Education School of Biotechnology Amrita Vishwa Vidyapeetham (Amrita University) Amritapuri, Clappana P.O., Kollam, India. Pin: 690525 Ph: +91-476-2803116 Fax: +91-476-2899722 http://biotech.amrita.edu/research/compneuro -------------- next part -------------- An HTML attachment was scrubbed... URL: From siccoverwer at gmail.com Wed Mar 6 06:07:55 2013 From: siccoverwer at gmail.com (Sicco Verwer) Date: Wed, 6 Mar 2013 12:07:55 +0100 Subject: Connectionists: Final CfP: Benelearn 2013 Message-ID: ==========Last Call for Papers========== Benelearn 2013 22nd Belgian-Dutch Conference on Machine Learning http://benelearn2013.org Nijmegen, Netherlands, June 3, 2013 ================================== Important dates: 12 March 2013 - paper submission deadline 16 April 2013 - notification of acceptance 3 June 2013 - conference ================================== BENELEARN is the annual machine learning conference of Belgium and The Netherlands. It serves as a forum for researchers to exchange ideas, present recent work, and foster collaboration in the broad field of Machine Learning and its applications. The 22nd edition of the annual Belgian-Dutch Conference on Machine Learning (BENELEARN) will be organized in Nijmegen, the Netherlands, in cooperation with the SIKS research school (http://www.siks.nl).
Scope: We invite the submission of abstracts or full papers on all aspects of machine learning and related disciplines, including, but not limited to:
Kernel Methods
Bayesian Learning
Case-based Learning
Causal Learning
Ensemble Methods
Computational Learning Theory
Data Mining
Evolutionary Computation
Hybrid Learning Systems
Inductive Logic Programming
Knowledge Discovery in Databases
Online Learning
Learning in Multi-Agent Systems
Neural Networks
Reinforcement Learning
Robot Learning
Feature Selection and Dimensionality Reduction
Scientific Discovery
Transfer Learning
Statistical Learning
Ranking / Preference Learning / Information Retrieval
Computational models of Human Learning
Structured Output Learning
Learning for Language and Speech
Media Mining and Text Analytics
Learning and Ubiquitous Computing
Applications of Machine Learning
Submission: One can submit full papers of up to 8 pages (excluding references) or one-page abstracts (including references). Full papers need to report original, unpublished work and are meant for researchers who prefer to obtain an official publication via BENELEARN, including feedback from the program committee. Abstracts can also concern work in progress and past published work, provided the precise references to the original publication are mentioned. This way we aim to provide the same broad overview as in previous years, and make it attractive and easy to attend for both senior and junior researchers, from both academia and industry. There will be a keynote talk given by prof. Dan Roth, CS Dept and Beckman Institute, University of Illinois at Urbana-Champaign. The program committee will make a selection for oral presentations among abstracts and full papers to ensure a program that has academic quality but is also interesting and inspiring for the attendees.
The remainder of the submissions will be presented during a poster session and poster spotlight presentations. Abstracts and papers should be submitted electronically, no later than Tuesday March 12, 2013. Please follow the guidelines that will be added to this page. During submission authors will be able to indicate their preference for an oral or a poster presentation. Antal van den Bosch, David van Leeuwen and Tom Heskes on behalf of the BENELEARN 2013 team -------------- next part -------------- An HTML attachment was scrubbed... URL: From getoor at cs.umd.edu Thu Mar 7 14:11:08 2013 From: getoor at cs.umd.edu (Lise Getoor) Date: Thu, 7 Mar 2013 14:11:08 -0500 Subject: Connectionists: CFP: ICML Workshop on Structured Learning: Inferring Graphs from Structured and Unstructured Inputs (SLG2013) Message-ID: Structured Learning: Inferring Graphs from Structured and Unstructured Inputs (SLG2013) ICML workshop, June 16, 2013 (day between ICML & NAACL) https://sites.google.com/site/slgworkshop2013/ Submission Deadline: April 15, 2013 OVERVIEW Structured learning involves learning and making inferences from inputs that can be both unstructured (e.g., text) and structured (e.g., graphs and graph fragments), and making predictions about outputs that are also structured as graphs. Examples include the construction of knowledge bases from noisy extraction data, inferring temporal event graphs from newswire or social media, and inferring influence structure on social graphs from multiple sources. One of the challenges of this setting is that it often does not fit into the classic supervised or unsupervised learning paradigm. In essence, we have one large (potentially infinite) partially observed input graph, and we are trying to make inferences about the unknown aspects of this graph's structure. Often times there is side information available, which can be used for enrichment, but in order to use this information, we need to infer mappings for schema and ontologies that describe that side information, perform alignment and entity resolution, and reason about the added utility of the additional sources. The topic is extremely pressing, as many of the modern challenges in extracting usable knowledge from (big) data fall into this setting. Our focus in this workshop is on the machine learning and inference methods that are useful in such settings. Topics of interest include, but are not limited to: - Graph-based methods for entity resolutions and word sense disambiguation - Graph-based representations for ontology learning - Graph-based strategies for semantic relations identification - Making use of taxonomies and encoding semantic distances in graphs - Random walk methods in graphs - Spectral graph clustering and multi-relational clustering - Semi-supervised graph-based methods - Graph summarization Our goal is to bring together researchers in graphical models, structured prediction, latent variable relational models and statistical relational learning in order to (a) exchange experiences applying various methods to these graph learning domains, (b) share their successes, and (c) identify common challenges. The outcomes we expect are (1) a better understanding of the different existing methods across disparate communities, (2) an identification of common challenges, and (3) a venue for sharing resources, tools and datasets. The workshop will consist of a number of invited talks in each of these areas, a poster session where participants can present their work, and discussion. 
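(As a small, concrete instance of the "one large, partially observed graph" setting described above, the following sketch hides one edge of a toy graph and ranks candidate edges by a neighbourhood-overlap score. This is an illustration of the problem class only, not a method endorsed by the workshop; the graph and the heuristic are arbitrary choices.)

    # Toy link-prediction illustration: hide one edge of a small graph and
    # rank candidate edges by the Jaccard coefficient of their neighbourhoods.
    # Editorial sketch only; not a workshop-endorsed method.
    import networkx as nx

    G = nx.karate_club_graph()          # stand-in for a partially observed graph
    observed = G.copy()
    observed.remove_edge(0, 2)          # pretend this edge is unobserved

    candidates = [(u, v) for u in observed for v in observed
                  if u < v and not observed.has_edge(u, v)]
    ranked = sorted(nx.jaccard_coefficient(observed, candidates),
                    key=lambda uvp: uvp[2], reverse=True)
    print(ranked[:5])                   # the held-out edge (0, 2) should rank highly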
SUBMISSION INFORMATION We solicit short, poster-length submissions of up to 2 pages. All accepted submissions will be presented as posters, and a subset of them may be considered for oral presentation. Submissions reporting work in progress are acceptable, as we aim for the workshop to offer a venue for stimulating discussions. Submissions must be in PDF format, and should be made through Easychair at https://www.easychair.org/conferences/?conf=slg-2013 Important dates - Submission deadline: April 15, 2013 - Notification of acceptance: April 30, 2013 - Final versions of accepted submissions due: May 15, 2013 - Workshop date: Sunday, June 16, 2013 (Note: Most of the ICML workshops are taking place AFTER the conference, while this workshop takes place before the main conference, on the same day as ICML tutorials, in order to make it easier for participants from NAACL HLT conference to attend). ORGANIZING COMMITTEE - Hal Daume III, University of Maryland - Evgeniy Gabrilovich, Google - Lise Getoor, University of Maryland - Kevin Murphy, Google PROGRAM COMMITTEE(confirmed to date) - Jeff Dalton, UMass - Laura Dietz, UMass - Thorsten Joachims, Cornell - Daniel Lowd, University of Oregon - Mausam, University of Washington - Jennifer Neville, Purdue University - Stuart Russell, UC Berkeley - Ivan Titov, Saarland University - Daisy Zhe Wang, University of Florida - Jerry Zhu, University of Wisconsin - Madison FURTHER INFORMATION For further information, please contact the workshop organizers at slg-2013-chairs at googlegroups.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.murray at aya.yale.edu Thu Mar 7 19:26:41 2013 From: john.murray at aya.yale.edu (John D. Murray) Date: Thu, 7 Mar 2013 19:26:41 -0500 Subject: Connectionists: Computational & Cognitive Neuroscience Summer School, in Beijing Message-ID: *COMPUTATIONAL & COGNITIVE NEUROSCIENCE SUMMER SCHOOL* Organizers: - Xiao-Jing Wang (Yale University) - Si Wu (Beijing Normal University) - Zach Mainen (Champalimaud Neuroscience Programme) - Upinder Bhalla (Natl Ctr Biological Sci, Bangalore) July 6-24 in Beijing, China http://www.ccnss.org http://www.csh-asia.org/s-cosyne13.html Currently accepting applications, deadline March 15. The 4th Computational and Cognitive Neuroscience Summer School (CCNSS) will be held in Beijing, China. The objective of this course is to train in Computational Neuroscience talented and highly motivated students and postdocs from Asia and other countries in the world. Applicants with either quantitative, including Physics, Mathematics, Engineering and Computer Science or experimental background are welcomed. The lectures will introduce the basic concepts and methods, as well as cutting-edge research, in Computational and Systems/Cognitive Neurosciences. Modeling will be taught at multiple levels, ranging from subcellular processes, single neuron computation, to microcircuits and large-scale systems. Programming labs coordinated with the lectures will provide practical training in important computational methods. 
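(To give a flavour of the kind of exercise such programming labs often start from, here is a minimal leaky integrate-and-fire neuron driven by a constant current. This is an illustrative sketch rather than actual course material, and all parameter values are arbitrary.)

    # Minimal leaky integrate-and-fire (LIF) neuron driven by a constant current.
    # Illustrative sketch only; parameters are arbitrary toy values.
    import numpy as np

    dt, T = 0.1, 200.0                     # time step and duration (ms)
    tau_m = 20.0                           # membrane time constant (ms)
    v_rest, v_th, v_reset = -65.0, -50.0, -65.0   # voltages (mV)
    R_m, I_ext = 10.0, 2.0                 # membrane resistance (MOhm), input (nA)

    t = np.arange(0.0, T, dt)
    v = np.full(t.shape, v_rest)
    spike_times = []
    for k in range(1, len(t)):
        dv = (-(v[k-1] - v_rest) + R_m * I_ext) / tau_m
        v[k] = v[k-1] + dt * dv            # Euler integration of the membrane equation
        if v[k] >= v_th:                   # threshold crossing: emit a spike and reset
            spike_times.append(t[k])
            v[k] = v_reset

    print(f"{len(spike_times)} spikes in {T:.0f} ms")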
Invited Lecturers: - Michael Hausser (UCL) - Eve Marder (Brandeis) - Misha Tsodyks (Weizmann) - Bob Shapley (NYU) - Tony Movshon (NYU) - Bob Desimone (MIT) - Bill Newsome (Stanford) - Mark Churchland (Columbia) - Weiji Ma (Baylor) - Matthew Botvinick (Princeton) - Christof Koch (Caltech/Allen Institute) - Kechen Zhang (Johns Hopkins) - Xiaoqin Wang (Johns Hopkins) Feel free to direct any questions to: john.murray at yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucy.davies4 at plymouth.ac.uk Fri Mar 8 12:29:10 2013 From: lucy.davies4 at plymouth.ac.uk (Lucy Davies) Date: Fri, 8 Mar 2013 17:29:10 +0000 Subject: Connectionists: Lure of the New: Final Call Message-ID: Final Call for abstract submissions for Lure of the New! Abstracts arriving before 9.00 AM, GMT, 11th March 2013 will be accepted! Abstracts should be emailed to: info.cognition at plymouth.ac.uk Word doc or docx format, no longer than 1 A4 page, including all refs, title, all authors names and affiliations, corresponding authors contact address and email. Call for abstracts: The Cognition Institute is a new trans-disciplinary research centre focused on understanding human cognition. We believe that through forging links with researchers from psychology, cognitive robotics, neuroscience, biology, humanities and the arts we can develop new ways of thinking about cognition. In our first 1st international conference, we will explore how novelty and creativity are key drivers of human cognition. Each of our themed symposia will bring a different approach to this topic, and will cover such areas as embodied cognition, auditory neuroscience and psychophysics, language development, mental imagery, creativity and cognition, the relationship between the arts and sciences, modelling and imaging of brain processes & deception research. We welcome abstracts in any of these areas and are very keen for submissions which take a trans-disciplinary approach to cognition research. Symposia * Embodied Cognition and Mental Simulation (Haline Schendan, Diane Pecher, Rob Ellis, Patric Bach) * Developments in infant speech perception (Katrin Skoruppa, Silvia Benavides-Varelaa, Caroline Floccia, Laurence White, Ian Howard) * Engineering Creativity - can the arts help scientific research more directly? (Alexis Kirke, Greg B. Davies, Simon Ingram) * Computational Modelling of Brain Processes (Thomas Wennekers, Ingo Bojak, Chris Harris, Jonathan Waddington) * Current trends in deception research (Giorgio Ganis, Gershon Ben-Shakhar) * Sounds for Communication (Sue Denham, Roy Patterson, Judy Edworthy, Sarah Collins) * Imagery, Dance and Creativity (Jon May, Scott deLaHunta, Emma Redding, Tony Thatcher, Phil Barnard, John Matthias, Jane Grant) As well as the symposia, there will be a panel discussion drawing together all the themes of the conference, a performance by Emma Redding & Tony Thatcher (as part of the Imagery, Dance and Creativity symposium) and keynote talks by: -Linda Lanyon (Head of Programs at the INCF): Toward globally collaborative science: the International Neuroinformatics Coordinating Facility -Guy Orban (Dept. of Neuroscience, University of Parma): Finding a home for the mirror neurons in human premotor cortex. In addition, the evening programme includes a reception and CogTalk debate to mark the official launch of the Cognition Institute, and a special film screening in association with SciScreen. 
http://www1.plymouth.ac.uk/research/cognition/events/Pages/Conference.aspx Registration is now open until the 12th March: http://estore.plymouth.ac.uk/browse/extra_info.asp?compid=1&modid=2&prodid=414&deptid=9&catid=21 For all conference enquiries please contact Lucy Davies at: info.cognition at plymouth.ac.uk. Lucy Davies Cognition Institute Plymouth University Room A222, Portland Square Plymouth UK PL4 8AA Tel: 44+ (0)1752 584920 Website: http://www1.plymouth.ac.uk/research/cognition/Pages/default.aspx Like us on Facebook: http://www.facebook.com/PlymCogInst Follow us on Twitter: https://twitter.com/PlymCogInst YouTube Channel: http://www.youtube.com/user/PlymouthCognition?feature=watch From rsousa at dcc.fc.up.pt Fri Mar 8 03:55:06 2013 From: rsousa at dcc.fc.up.pt (Ricardo Sousa) Date: Fri, 08 Mar 2013 08:55:06 +0000 Subject: Connectionists: [visum summer school] Deadline: March, 15! Message-ID: <5139A76A.7090009@dcc.fc.up.pt> Call for Participation VISion Understanding and Machine intelligence school -- applications in biotechnology http://www.fe.up.pt/visum/ We invite everyone interested in computer vision to attend the VISion Understanding and Machine intelligence summer school - visum 2013. List of Speakers Professor Jaime S. Cardoso Professor Theo Gevers Professor Krystian Mikolajczyk Doctor Ferr?ol Soulez Professor Matthias Harders Doctor Juergen Sturm Professor Miguel Coimbra Application To apply, please send an email to visum at fe.up.pt under the subject "[APPLICATION VISUM]" following the name of the applicant, with the following information: CV with only 2 pages describing the academic background, webpage, other information that you may consider important. Motivation letter. And a possible title and abstract of a poster. Any further questions should be directed to: visum at fe.up.pt Please visit our webpage for up to date information: http://www.fe.up.pt/visum/. NYTimes has recommended Porto as one of the cities to visit in 2013: "a city where labyrinthine narrow streets, ancient buildings and black-cloaked students inspired a young English tutor who lived here in the early 1990s named J. K. Rowling". Think Different. Be all you can be and try harder. Participate in visum 2013 (http://www.fe.up.pt/visum)! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 263 bytes Desc: OpenPGP digital signature URL: From siccoverwer at gmail.com Fri Mar 8 08:07:45 2013 From: siccoverwer at gmail.com (Sicco Verwer) Date: Fri, 8 Mar 2013 14:07:45 +0100 Subject: Connectionists: BENELEARN DEADLINE EXTENDED Message-ID: Due to several requests the deadline for paper submission to Benelearn has been extended by one week to 19 March! ==========EXTENDED DEADLINE========== Benelearn 2013 22nd Belgian-Dutch Conference on Machine Learning http://benelearn2013.org Nijmegen, Netherlands, June 3, 2013 ================================== Important dates: 19 March 2013 ? paper submission deadline (extended!) 16 April 2013 ? notification of acceptance 3 June 2013 ? conference ================================== BENELEARN is the annual machine learning conference of Belgium and The Netherlands. It serves as a forum for researchers to exchange ideas, present recent work, and foster collaboration in the broad field of Machine Learning and its applications. 
The 22nd edition of the annual Belgian-Dutch Conference on Machine Learning (BENELEARN) will be organized in Nijmegen, the Netherlands, with cooperation of the SIKS research school (http://www.siks.nl). Scope We invite the submission of abstracts or full papers on all aspects of machine learning and related disciplines, including, but not limited to Kernel Methods Bayesian Learning Case-based Learning Causal Learning Ensemble Methods Computational Learning Theory Data Mining Evolutionary Computation Hybrid Learning Systems Inductive Logic Programming Knowledge Discovery in Databases Online Learning Learning in Multi-Agent Systems Neural Networks Reinforcement Learning Robot Learning Feature Selection and Dimensionality Reduction Scientific Discovery Transfer Learning Statistical Learning Ranking / Preference Learning / Information Retrieval Computational models of Human Learning Structured Output Learning Learning for Language and Speech Media Mining and Text Analytics Learning and Ubiquitous Computing Applications of Machine Learning Submission One can submit full papers up to 8 pages (excluding references) or one-page abstracts (including references). Full papers need to report original, unpublished work and they are meant for researchers that prefer to obtain an official publication via BENELEARN, including feedback from the program committee. Abstracts can also concern work in progress and past published work, if the precise references to the original publication are mentioned. This way we aim to provide the same broad overview as in previous years, and make it attractive and easy to attend by both senior as well as junior researchers, from both academia and industry. We are happy to announce the keynote talk given by prof. Dan Roth, CS Dept and Beckman Institute, University of Illinois at Urbana-Champaign. The program committee will make a selection for oral presentations among abstracts and full papers to ensure a program that has academic quality but is also interesting and inspiring for the attendees. The remainder of the submissions will be presented during a poster session and poster spotlight presentations. Abstracts and papers should be submitted electronically, no later than Tuesday March 19, 2013. Please follow the guidelines that will be added to this page. During submission authors will be able to indicate their preference for an oral or a poster presentation. Antal van den Bosch, David van Leeuwen and Tom Heskes on behalf of the BENELEARN 2013 team -------------- next part -------------- An HTML attachment was scrubbed... URL: From digital-olfaction at digital-olfaction.com Sat Mar 9 07:54:34 2013 From: digital-olfaction at digital-olfaction.com (digital-olfaction at digital-olfaction.com) Date: Sat, 9 Mar 2013 13:54:34 +0100 Subject: Connectionists: Innovative Appliances/Digital Olfaction Society Conference/Call for Innovations Message-ID: <108301ce1cc5$40a60440$c1f20cc0$@com> Dear Members, On behalf of Scientific Committee, we are pleased to inform you about the first Digital Olfaction Society world congress, which will be held in Berlin on April 11-12, 2013. Invitation to Present the most Innovative Appliances, Devices, Methods, Ideas? The Digital Olfaction Society World Congress will present the most innovative appliances, devices, methods, ideas? that will be judged by a panel of industrial, engineers and members of the press, who delivered the Innovation Award after having had the opportunity to meet and interact with the new equipment. 
If you are interested in presenting your innovative appliances, a special session is dedicated to this; please find all information at www.digital-olfaction.com. Berlin will showcase the most exciting Digital Olfaction displays and demonstrations ever organized: a virtual ice cream shop, a phone device, the Multi Aroma Shooter... The Scientific Committee of the Digital Olfaction Society (DOS) has announced the program of demonstrations for its first meeting, to be held in Berlin on 11 and 12 April. The Chairman of DOS, Pr. EDEAS, stated: "Our idea is to show all devices which can capture odors and turn them into digital data so as to transmit them anywhere in the world. We managed to digitize sound and light; the remaining challenge is the digitization of smells, flavors and fragrances. But can we digitize, transmit, reproduce and recapture smells?" Program of three innovations and devices:
- NICT Japan will present "Smell-O-Vision". The "Multi Aroma Shooter" allows you to associate a flavor with digital content on a computer, TV or game console... The Japanese team stresses that this technology adds a flavor channel in addition to sound.
- ChatPerf presents a retransmission/restitution device for the iPhone which allows you to download a "piece" of your favorite perfume.
- Tokyo Institute of Technology will present a virtual ice cream shop, with transmission of flavorings over a distance.
Best regards, C. Mercier Digital Olfaction Society 15 rue de la Paix - 75002 Paris Tel: 33 1 55 04 77 55 Fax: 33 9 72 16 84 14 www.digital-olfaction.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From grlmc at urv.cat Sat Mar 9 13:27:41 2013 From: grlmc at urv.cat (GRLMC) Date: Sat, 9 Mar 2013 19:27:41 +0100 Subject: Connectionists: SSTiC 2013: 2nd announcement Message-ID: *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject* ********************************************************************* 2013 INTERNATIONAL SUMMER SCHOOL ON TRENDS IN COMPUTING SSTiC 2013 Tarragona, Spain July 22-26, 2013 Organized by Rovira i Virgili University http://grammars.grlmc.com/SSTiC2013/ ********************************************************************* AIM: SSTiC 2013 will be an open forum for the convergence of top-class, well-recognized computer scientists and people at the beginning of their research career (typically PhD students), as well as consolidated researchers. SSTiC 2013 will cover the whole spectrum of computer science by means of more than 75 six-hour courses dealing with hot topics at the frontiers of the field. By actively participating, lecturers and attendees will share the idea of scientific excellence as the main motto of their research work. ADDRESSED TO: Graduate students from around the world. There are no prerequisites in terms of the academic degree the attendee must hold. However, since there will be several levels among the courses, in the description of some of them reference may be made to specific background knowledge. SSTiC 2013 is also appropriate for people more advanced in their career who want to keep themselves updated on developments in the field. Finally, senior researchers will find it fruitful to listen and discuss with people who are main references of the diverse branches of computing nowadays. REGIME: 8 parallel sessions will be held during the whole event. Participants will be able to freely choose the courses they wish to attend, as well as to move from one to another.
VENUE: Palau Firal i de Congressos de Tarragona Arquitecte Rovira, 2 43001 Tarragona http://www.palaucongrestgna.com COURSES AND PROFESSORS: Divyakant Agrawal (Santa Barbara) [intermediate] Scalable Data Management in Enterprise and Cloud Computing Infrastructures Shun-ichi Amari (Riken) [introductory] Information Geometry and Its Applications James Anderson (Chapel Hill) [intermediate] Scheduling and Synchronization in Real-Time Multicore Systems Pierre Baldi (Irvine) [intermediate] Big Data Informatics Challenges and Opportunities in the Life Sciences Yoshua Bengio (Montr?al) [introductory/intermediate] Deep Learning of Representations Stephen Brewster (Glasgow) [advanced] Multimodal Human-Computer Interaction Bruno Buchberger (Linz) [introductory] Groebner Bases: An Algorithmic Method for Multivariate Polynomial Systems. Foundations and Applications Rajkumar Buyya (Melbourne) [intermediate] Cloud Computing Jan Camenisch (IBM Zurich) [intermediate] Cryptography for Privacy John M. Carroll (Penn State) [introductory] Usability Engineering and Scenario-based Design Jeffrey S. Chase (Duke) [intermediate] Trust Logic as an Enabler for Secure Federated Systems Larry S. Davis (College Park) [intermediate] Video Analysis of Human Activities Paul De Bra (Eindhoven) [intermediate] Adaptive Systems Marco Dorigo (Brussels) [introductory] An Introduction to Swarm Intelligence and Swarm Robotics Paul Dourish (Irvine) [introductory] Ubiquitous Computing in a Social Context Max J. Egenhofer (Maine) [introductory/intermediate] Qualitative Spatial Relations: Formalizations and Inferences Richard M. Fujimoto (Georgia Tech) [introductory] Parallel and Distributed Simulation David Garlan (Carnegie Mellon) [advanced] Software Architecture: Past, Present and Future Mario Gerla (Los Angeles) [intermediate] Vehicle Cloud Computing Georgios B. Giannakis (Minnesota) [advanced] Sparsity and Low Rank for Robust Data Analytics and Networking Ralph Grishman (New York) [intermediate] Information Extraction from Natural Language Mark Guzdial (Georgia Tech) [introductory] Computing Education Research: What We Know about Learning and Teaching Computer Science Francisco Herrera (Granada) [intermediate] Imbalanced Classification: Current Approaches and Open Problems Paul Hudak (Yale) [introductory] Euterpea: From Signals to Symphonies Using Haskell Syed Ali Jafar (Irvine) [intermediate] Interference Alignment Niraj K. Jha (Princeton) [intermediate] FinFET Circuit Design George Karypis (Minnesota) [introductory] Introduction to Parallel Computing: Architectures, Algorithms, and Programming Aggelos K. Katsaggelos (Northwestern) [intermediate/advanced] Sparsity-based Advances in Image Processing Arie E. Kaufman (Stony Brook) [advanced] Advances in Visualization Carl Kesselman (Southern California) [intermediate] Biomedical Informatics and Big Data Hugo Krawczyk (IBM Research) [intermediate] An Introduction to the Design and Analysis of Authenticated Key Exchange Protocols Pierre L'Ecuyer (Montr?al) [intermediate] Quasi-Monte Carlo Methods in Simulation: Theory and Practice Laks Lakshmanan (British Columbia) [intermediate/advanced] Information and Influence Spread in Social Networks Wenke Lee (Georgia Tech) [introductory] DNS-based Monitoring of Malware Activities Maurizio Lenzerini (Roma La Sapienza) [intermediate] Ontology-based Data Integration Ming C. Lin (Chapel Hill) [introductory/intermediate] Physically-based Modeling and Simulation Jane W.S. 
Liu (Academia Sinica) [intermediate] Critical Information and Communication Technologies for Disaster Preparedness and Response Nadia Magnenat-Thalmann (Nanyang Tech) [introductory] Modelling and Animating Virtual Humans Satoru Miyano (Tokyo) [intermediate] How to Hack Cancer Systems with Computational Methods Aloysius K. Mok (Austin) [intermediate] From Real-time Systems to Cyber-physical Systems Daniel Moss? (Pittsburgh) [intermediate] Asymmetric Multicore Management Hermann Ney (Aachen) [intermediate/advanced] Probabilistic Modelling for Natural Language Processing - with Applications to Speech Recognition, Handwriting Recognition and Machine Translation David M. Nicol (Urbana) [intermediate] Cyber-security and Privacy in the Power Grid Cathleen A. Norris (North Texas) & Elliot Soloway (Ann Arbor) [introductory] Primary & Secondary Educational Computing in the Age of Mobilism Jeff Offutt (George Mason) [intermediate] Cutting Edge Research in Engineering of Web Applications David Padua (Urbana) [intermediate] Parallel Programming with Abstractions Bijan Parsia (Manchester) [introductory] The Semantic Web: Conceptual and Technical Foundations Massoud Pedram (Southern California) [intermediate] Energy Efficient Architectures and Information Processing Systems Jian Pei (Simon Fraser) [intermediate/advanced] Mining Uncertain and Probabilistic Data Charles E. Perkins (FutureWei) [intermediate/advanced] Beyond 4G Prabhakar Raghavan (Google) [introductory/intermediate] Web Search and Advertising Sudhakar M. Reddy (Iowa) [introductory] Design for Test and Test of Digital VLSI Circuits Phillip Rogaway (Davis) [introductory/intermediate] Provably Secure Symmetric Encryption Gustavo Rossi (La Plata) [intermediate] Topics in Model Driven Web Engineering Kaushik Roy (Purdue) [introductory/intermediate] Low-energy Computing Yousef Saad (Minnesota) [intermediate] Projection Methods and Their Applications Robert Sargent (Syracuse) [introductory] Validating Models Douglas C. Schmidt (Vanderbilt) [intermediate] Patterns and Frameworks for Concurrent and Networked Software Bart Selman (Cornell) [intermediate] Fast Large-scale Probabilistic and Logical Inference Methods Mubarak Shah (Central Florida) [intermediate/advanced] Visual Crowd Surveillance Ron Shamir (Tel Aviv) [introductory] Revealing Structure in Disease Regulation and Networks Micha Sharir (Tel Aviv) [introductory/intermediate] Geometric Arrangements and Incidences: Algorithms, Combinatorics, and Algebra Satinder Singh (Ann Arbor) [introductory/advanced] Reinforcement Learning: On Machines Learning to Act from Experience Dawn Xiaodong Song (Berkeley) [introductory] Selected Topics in Computer Security Daniel Thalmann (Nanyang Tech) [intermediate] Simulation of Individuals, Groups and Crowds and Their Interaction with the User Mike Thelwall (Wolverhampton) [introductory] Sentiment Strength Detection for the Social Web Julita Vassileva (Saskatchewan) [introductory/intermediate] Engaging Users in Social Computing Systems Philip Wadler (Edinburgh) [introductory] Lambda Calculus and Blame Yao Wang (Polytechnic New York) [introductory/advanced] Video Compression: Fundamentals and Recent Development Gio Wiederhold (Stanford) [introductory] Software Economics: How Do the Results of the Intellectual Efforts Enter the Global Market Place Ian H. 
Witten (Waikato) [introductory] Data Mining Using Weka Limsoon Wong (National Singapore) [introductory/intermediate] The Use of Context in Gene Expression and Proteomic Profile Analysis Michael Wooldridge (Oxford) [introductory] Autonomous Agents and Multi-Agent Systems Ronald R. Yager (Iona) [introductory/intermediate] Fuzzy Sets and Soft Computing Philip S. Yu (Illinois Chicago) [advanced] Mining Big Data Justin Zobel (Melbourne) [introductory/intermediate] Writing and Research Skills for Computer Scientists REGISTRATION: It has to be done at http://grammars.grlmc.com/SSTiC2013/Registration.php Since a large number of attendees are expected and the capacity of the venue is limited, registration requests will be processed on a first come first served basis. The registration period will be closed when the capacity of the venue will be complete. FEES: They are the same (a flat rate) for all people by the corresponding deadline. They give the right to attend all courses. ACCOMMODATION: Information about accommodation will be available on the website of the School. CERTIFICATE: Participants will be delivered a certificate of attendance. IMPORTANT DATES: Announcement of the programme: January 26, 2013 Six registration deadlines: February 26, March 26, April 26, May 26, June 26, July 26, 2013 QUESTIONS AND FURTHER INFORMATION: Lilica Voicu: florentinalilica.voicu at urv.cat POSTAL ADDRESS: SSTiC 2013 Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University Av. Catalunya, 35 43002 Tarragona, Spain Phone: +34-977-559543 Fax: +34-977-558386 ACKNOWLEDGEMENTS: Ajuntament de Tarragona Diputaci? de Tarragona Universitat Rovira i Virgili From n.lepora at sheffield.ac.uk Sat Mar 9 02:46:27 2013 From: n.lepora at sheffield.ac.uk (Nathan F Lepora) Date: Sat, 9 Mar 2013 07:46:27 +0000 Subject: Connectionists: Living Machines 2013: Final Call for Papers, Exhibits, Satellite Events and Sponsors Message-ID: ______________________________________________________________ Final Call for Papers, Exhibits, Satellite Events and Sponsors The 2nd International Conference on Biomimetic and Biohybrid Systems. A Convergent Science Network Event 29th July to 2nd August 2013 Natural History Museum, London http://csnetwork.eu/livingmachines/conf2013 Paper deadline: March 22nd, 2013 Deadline for Satellite Event proposals, March 22nd, 2013 ______________________________________________________________ ABOUT LIVING MACHINES 2013 The development of future real-world technologies will depend strongly on our understanding and harnessing of the principles underlying living systems and the flow of communication signals between living and artificial systems. Biomimetics is the development of novel technologies through the distillation of principles from the study of biological systems. The investigation of biomimetic systems can serve two complementary goals. First, a suitably designed and configured biomimetic artefact can be used to test theories about the natural system of interest. Second, biomimetic technologies can provide useful, elegant and efficient solutions to unsolved challenges in science and engineering. Biohybrid systems are formed by combining at least one biological component?an existing living system?and at least one artificial, newly-engineered component. By passing information in one or both directions, such a system forms a new hybrid bio-artificial entity. 
The development of either biomimetic or biohybrid systems requires a deep understanding of the operation of living systems, and the two fields are united under the theme of ?living machines??the idea that we can construct artefacts, such as robots, that not only mimic life but share the same fundamental principles; or build technologies that can be combined with a living body to restore or extend its functional capabilities. Biomimetic and biohybrid technologies, from nano- to macro-scale, are expected to produce major societal and economical impacts in quality of life and health, information and communication technologies, robotics, prosthetics, brain-machine interfacing and nanotechnology. Such systems should also lead to significant advances in the biological and brain sciences that will help us to better understand ourselves and the natural world. The following are some examples: ? Biomimetic robots and their component technologies (sensors, actuators, processors) that can intelligently interact with their environments. ? Active biomimetic materials and structures that self-organize and self-repair. ? Biomimetic computers?neuromimetic emulations of the physiological basis for intelligent behaviour. ? Biohybrid brain-machine interfaces and neural implants. ? Artificial organs and body-parts including sensory organ-chip hybrids and intelligent prostheses. ? Organism-level biohybrids such as robot-animal or robot-human systems. ACTIVITIES The main conference will take the form of a three-day single-track oral and poster presentation programme, 30th July to 1st August 2013, that will include five plenary lectures from leading international researchers in biomimetic and biohybrid systems. Agreed speakers are: Mark Cutkosky, Stanford University (Biomimetics and Dextrous Manipulation); Terrence Deacon, University of California, Berkeley (Natural and Artificial Selves); Ferdinando Rodriguez y Baena, Imperial College London (Biomimetics for medical devices); Robert Full, University of California, Berkeley (Locomotion); Andrew Pickering, University of Exeter (History of living machines). Submissions will be in the form of full papers or extended abstracts. The proceedings will be published in the Springer-Verlag LNAI Series. Submissions are also invited for a one-day exhibition to feature working biomimetic or biohybrid systems and biomimetic/biohybrid art. The exhibition, will take place on the afternoon and evening of Thursday 1st August with the evening event including a press reception and buffet dinner. Active researchers in biomimetic and biohybrid systems are also invited to propose topics for 1-day tutorials or workshops on related themes. ABOUT THE VENUE The organisers are delighted to have secured the Flett Theatre at the Natural History Museum in London as the main venue for our conference. The NHM is an international centre for the study of the natural world featuring many important biological collections. The exhibition and poster session on Thursday 1st will be hosted at the nearby Science Museum, and the satellite events at Imperial College London. All three venues are conveniently located within a short walking distance of each other in South Kensington, the Museum district of the UK capital, and close to many of London?s tourist sights. SUBMITTING TO LIVING MACHINES 2013 Oral and poster programme We invite both full papers (12 pages, LNCS format) and extended abstracts (3 pages, LNCS format). All contributions will be refereed. 
Full papers are invited from researchers at any stage in their career but should present significant findings and advances in biomimetic or biohybid research; more preliminary work would be better suited to extended abstract submission. Full papers will be accepted for either oral presentation (single track) or poster presentation. Extended abstracts will be accepted for poster presentation only. All submissions must be formatted according to Springer LNCS guidelines. Papers should be submitted via the Living Machines web-site by midnight on March 22nd 2013. http://senldogo0039.springer-sbm.com/ocs/home/LM2013 Exhibition The Living Machines 2013 Exhibition is intended to feature working biomimetic or biohybrid systems and biomimetic/biohybrid art. It will take place in the London Science Museum Level 1 Galleries on Thursday 1st August 2013. The exhibition is expected to include intelligent artefacts such as biomimetic robotics; however, we are open to proposals for display of biomimetic or biohybrid systems of any kind. The exhibition will be in two sessions. In the afternoon session exhibits will be displayed alongside conference posters. This session will be open to conference delegates and sponsors only. The evening session will be alongside the LM2013 buffet dinner and reception. This session will be open to invited representatives of the press, VIPs, and conference delegates and members of the public who have registered for the evening event. For registered conference participants there is no additional charge to participate in the exhibition but you must register your exhibit using the proforma available through the LM2013 web-site. Note that, if you wish to continue to display your exhibit during the evening session, you must also register for the buffet dinner and reception in addition to the main conference. We strongly encourage authors of accepted papers and extended abstracts to bring their working biomimetic or biohybrid artefacts to include in the exhibition. A prize will be awarded for the best exhibit. The conference organisers would also be interested in performance type material for the evening session. Please contact us if you have a proposal. Satellite events LM2013 will support satellite events, such as symposia, workshops or tutorials, in any of the areas listed below, which can be scheduled for either the 29th July or 2nd August. Attendance at satellite events will attract a small fee intended to cover the costs of the meeting. There is a lot of flexibility about the content, organisation, and budgeting for these events. We have reserved meeting rooms at Imperial College London to host the satellites each with capacity for up to 40 people (though larger rooms could be arranged if needed) and will have projection equipment with technical support. Proposals for satellites should be submitted using the proforma available from the LM2013 web-page by March 22nd, 2013 but please contact us sooner if you are thinking of organising an event. Confirmation of accepted proposals will be provided be early April at the latest. SCOPE OF CONTRIBUTIONS Submissions of papers, exhibits and satellite events are invited in, but not limited to, the following topics and related areas. Biomimetics can, in principle, extend to all fields of biological research from physiology and molecular biology to ecology, and from zoology to botany. 
Promising research areas include system design and structure, self-organization and co-operativity, new biologically active materials, self-assembly and self-repair, learning, memory, control architectures and self-regulation, movement and locomotion, sensory systems, perception, and communication. Biomimetic research, particularly at the nano-scale, should also lead to important advances in component miniaturisation, self-configuration, and energy-efficiency. A key focus of the conference will be on complete behaving systems in the form of biomimetic robots that can operate on different substrates on sea, on land, or in the air. A further central theme will be the physiological basis for intelligent behaviour as explored through neuromimetics?the modelling of neural systems. Exciting emerging topics within this field include the embodiment of neuromimetic controllers in hardware, termed neuromorphics, and within the control architectures of robots, sometimes termed neurorobotics. Biohybrid systems usually involve structures from the nano-scale (molecular) through to the macro-scale (entire organs or body parts). Important implementation examples are: Bio-machine hybrids where, for instance, biological muscle is used to actuate a synthetic device. Brain-machine interfaces where neurons and their molecular machineries are connected to microscopic sensors and actuators by means of electrical or chemical communication, either in vitro or in the living organism. Intelligent prostheses such as artificial limbs, wearable exoskeletons, or sensory organ-chip hybrids (such cochlear implants and artificial retina devices) designed to assist the disabled or elderly, or to aid rehabilitation from illness. Implantable or portable devices that have been fabricated for monitoring health care or for therapeutic purposes such as artificial implants to control insulin release. Biohybrid systems at the organism level such as robot-animal or robot-human communities. Biohybrid systems may take advantage of progress in the field of synthetic biology. Contributions from biologists, neuroscientists, and theoreticians, that are of direct relevance to the development of future biomimetic or biohybrid devices are also welcome, as are papers considering ethical issues and/or societal impacts arising from the advances made in this field. ACCOMODATION West London has many excellent hotels that are suitable for conference delegates. We are also organizing the provision of reasonably-priced accommodation for LM2013 events in the Imperial College Halls of Residence. KEY DATES March 22nd, 2013 Paper submission deadline March 22nd, 2013 Satellite Event proposal deadline Early April, notification of accepted satellites April 29th, 2013 Notification of acceptance of papers May 20th, 2013 Camera ready copy May 31st, Early registration deadline July 29-August 2nd 2013 Conference SPONSORSHIP Living Machines 2013 is sponsored by the Convergent Science Network (CSN) for Biomimetic and Biohybrid Systems which is an EU FP7 Future Emerging Technologies Co-ordination Activity. CSN also organises two highly successful workshop series: the Barcelona Summer School on Brain, Technology and Cognition and the Capoccaccia Neuromorphic Cognitive Engineering Workshop. Living Machines 2013 is supported by the IOP Physics Journal Biomimetics & Bio-inspiration, who this year will publish a special issue of articles based on last years? LM2012 best papers. 
A review of the state of the art in biomimetics, by the conference chairs, and reporting strong recent growth in the field, has just been published in the journal (http://dx.doi.org/10.1088/1748-3182/8/1/013001). Other organisations wishing to sponsor the conference in any way and gain the corresponding benefits by promoting themselves and their products through conference publications, the conference web-site, and conference publicity are encouraged to contact the conference organisers to discuss the terms of sponsorship and necessary arrangements. We offer a number of attractive and good-value packages to potential sponsors. We are looking forwards to seeing you in London. Organising Committee: Tony Prescott (co-chair) Paul Verschure (co-chair) Nathan Lepora (programme chair) Holger Krapp (workshops & symposia) Anna Mura (web-site) Conference Secretariat: living-machines at sheffield.ac.uk c/o Gill Ryder, Sheffield Centre for Robotics Department of Psychology University of Sheffield Western Bank Sheffield, S10 2TN United Kingdom From jbgao at csu.edu.au Sat Mar 9 18:33:42 2013 From: jbgao at csu.edu.au (Gao, Jun bin (Dr)) Date: Sun, 10 Mar 2013 10:33:42 +1100 Subject: Connectionists: Research Associate Position at Charles Sturt University, Australia Message-ID: <7CE789BD5A9C3C4DA01F3E74FC779DC14E0DA252F4@MAIL01.CSUMain.csu.edu.au> Applications are invited for a 32 months research associate position working with Prof. Junbin Gao (Machine Learning) at the Centre for Research in Complex Systems at Charles Sturt University, Australia, starting from July 2013. The research associate is to work on machine learning particularly dimensionality reduction and representation learning. The project is funded by the Australian Research Council (ARC http://www.arc.gov.au). A PhD and background in machine learning or computational math or computer vision are highly desired. Experience with computational modelling and matlab is required. More official details can be found at http://www.csu.edu.au/jobs/vacancies/acad-vacancies (the last one). Please follow the link to submit your application. Any inquiry please email me at jbgao at csu.edu.au Regards Junbin Gao Prof Junbin Gao Professor in Computer Science Deputy Director, Centre for Research in Complex Systems (CRiCS) School of Computing and Mathematics Charles Sturt University Bathurst, NSW 2795 Australia email: jbgao at csu.edu.au tel: +61 2 6338 4213 [cid:csu-logo7e87.bmp] | ALBURY-WODONGA | BATHURST | CANBERRA | DUBBO | GOULBURN | MELBOURNE | ONTARIO | ORANGE | PORT MACQUARIE | SYDNEY | WAGGA WAGGA | ________________________________ LEGAL NOTICE This email (and any attachment) is confidential and is intended for the use of the addressee(s) only. If you are not the intended recipient of this email, you must not copy, distribute, take any action in reliance on it or disclose it to anyone. Any confidentiality is not waived or lost by reason of mistaken delivery. Email should be checked for viruses and defects before opening. Charles Sturt University (CSU) does not accept liability for viruses or any consequence which arise as a result of this email transmission. Email communications with CSU may be subject to automated email filtering, which could result in the delay or deletion of a legitimate email before it is read at CSU. The views expressed in this email are not necessarily those of CSU. 
Charles Sturt University in Australia The Grange Chancellery, Panorama Avenue, Bathurst NSW Australia 2795 (ABN: 83 878 708 551; CRICOS Provider Numbers: 00005F (NSW), 01947G (VIC), 02960B (ACT)). TEQSA Provider Number: PV12018 Charles Sturt University in Ontario 860 Harrington Court, Burlington Ontario Canada L7N 3N4 Registration: www.peqab.ca Consider the environment before printing this email. Disclaimer added by CodeTwo Exchange Rules 2007 www.codetwo.com -------------- next part -------------- A non-text attachment was scrubbed... Name: csu-logo7e87.bmp Type: image/bmp Size: 37976 bytes Desc: csu-logo7e87.bmp URL: From etienne.roesch at gmail.com Sun Mar 10 17:59:24 2013 From: etienne.roesch at gmail.com (=?windows-1252?Q?Etienne_Beno=EEt_Roesch?=) Date: Sun, 10 Mar 2013 21:59:24 -0000 Subject: Connectionists: PhD Studentship "Modelling of retinal neural networks" (Full-time, full-tuition, UK/EU only) Message-ID: <38FAF495-53B4-4793-97E4-03D210649C52@gmail.com> University of Reading PhD Studentship (UK/EU only) Project Title: Mesoscopic modelling of retinal neural networks Supervisor: Dr. Etienne B. Roesch School/Department: School of Systems Engineering & Centre for Integrative Neuroscience and Neurodynamics Overview: The goal of the project is to build neural field models of the retina that will allow the investigation of the architecture underlying visual information processing. These models will also be used to simulate the disturbances yielding visual impairment in early diabetic retinopathy. Neural fields are integro-differential equations, similar to wave equations, that represent electrical and chemical neurodynamics on continuous space-time scales. They are thus ideal to study populations of cells as homogeneously structured, and as dependent on spatial contiguity as the retina, whilst exploring complex nonlinear dynamics of neural information processing. The construction of the models will be informed by connectomic and physiological data, and the models subjected to extensive parameter-sensitivity analyses. The project falls into the remit of the University of Reading?s strategic investment to support neuroscience and interdisciplinary research. The student will be supervised by Dr. Etienne B. Roesch and Prof. Ingo Bojak. This is a computational neuroscience project, which requires skills and knowledge in neuroscience, applied mathematics and programming. Candidates that have a strong background in at least two of these three fields are welcome to apply, if they are enthusiastic about the third. Neural field models are a particularly accommodating subject for transitions from physics, engineering, etc. into the life sciences. However, we will also place a strong focus on describing real-world data; depending on the student?s aptitude and preference, the candidate will be given the opportunity to engage with ongoing electrophysiological experimentation directly relevant to this project, in our lab and with collaborators in the UK and internationally, in order to identify and validate exploitable applications of the models. Additionally, the candidate will be granted access to the cluster of NVIDIA Tesla GPUs and other facilities at the Centre for Integrative Neuroscience and Neurodynamics, as well as at the Brain Embodiments Laboratory. Eligibility: Applicants should hold a minimum of a UK Honours Degree at 2:1 level or equivalent in a relevant subject. Please note that due to restrictions on the funding this studentship is for UK/EU applicants only. 
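(For orientation, neural field models of the kind mentioned in the project overview are commonly written in a generic Amari-type form such as the one below. This is a standard textbook expression given for context only, not a specification of the particular retinal models to be developed in the project.)

    % Generic Amari-type neural field equation (textbook form, for orientation only)
    \tau \frac{\partial u(\mathbf{x},t)}{\partial t} =
        -\,u(\mathbf{x},t)
        + \int_{\Omega} w(\mathbf{x},\mathbf{x}')\, f\big(u(\mathbf{x}',t)\big)\, d\mathbf{x}'
        + I(\mathbf{x},t)

Here u(x, t) is the mean activity (e.g. membrane potential) at location x, w(x, x') the spatial connectivity kernel, f a firing-rate nonlinearity, I(x, t) the external input (for the retina, for instance, photoreceptor drive), Omega the spatial domain, and tau a time constant.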
Funding Details: Studentship will cover Home/EU Fees and pay the Research Council minimum stipend (?13,590 for 2011/12) for up to 3 years. The studentship will begin in October 2013. How to apply: To apply for this studentship please submit an application for a PhD in Cybernetics (full time) to the University ? see http://www.reading.ac.uk/Study/apply/pg-applicationform.aspx . Once you have submitted your application, you should receive an email to confirm receipt of your online application. Please forward this email, along with a covering letter, to Dr. Etienne B. Roesch, e.b.roesch at reading.ac.uk, by the application deadline. Please quote the reference GS13-15 in the ?Scholarships applied for? box that appears within the Funding Section of your online application. Application Deadline: Friday 15th March 2013 Further Enquiries: Please contact Dr. Etienne B. Roesch, e.b.roesch at reading.ac.uk. ??? Dr. Etienne B. Roesch Lecturer, University of Reading, UK ? Cybernetics, School of Systems Engineering ? Centre for Integrative Neuroscience and Neurodynamics From triesch at fias.uni-frankfurt.de Mon Mar 11 13:40:30 2013 From: triesch at fias.uni-frankfurt.de (Jochen Triesch) Date: Mon, 11 Mar 2013 18:40:30 +0100 Subject: Connectionists: Open PhD/Post-doc Position in Autonomously Learning Robot Vision Systems Message-ID: <9E7186B3-ED11-48AF-9FC1-2DB8483CB782@fias.uni-frankfurt.de> We have an opening for a fully funded 3-year interdisciplinary PhD or Post-doc position at the Frankfurt Institute for Advanced Studies (FIAS) focusing on autonomously learning vision systems for a humanoid robot. We are developing learning methods that allow a humanoid robot to learn to perceive its environment and interact with it in a completely autonomous fashion, much like humans and animals learn to see. The robot's learning is fueled by curiosity drives, making it want to acquire more and more knowledge about its environment and how it can interact with it. Topics include but are not limited to the autonomous learning of depth perception [1], object recognition [2], object tracking, segmentation, cue integration and social interaction skills [3]. The successful applicant will have the opportunity to work with the iCub robot (http://www.icub.org). FIAS provides a stimulating interdisciplinary environment with close collaborations with neuroscientists and psychologists. Please visit our group's web page for more information: http://fias.uni-frankfurt.de/de/neuro/triesch We are seeking an outstanding and highly motivated PhD candidate or Post-doc for this project. Applicants should hold a Master or PhD degree in a relevant field such as engineering, computer science, physics, or mathematics. The candidate should have excellent programming skills (Matlab, Python, C/C++) and very good analytic skills. Knowledge of machine learning, computer vision, robotics, signal processing and statistics are desirable. Experience with programming Graphics Processing Units (GPUs) is a plus. The position can be filled immediately, and lasts 36 months. The Frankfurt Institute for Advanced Studies is a research institution dedicated to fundamental theoretical research in various areas of science. It is embedded into Frankfurt's recently established natural science research campus. Frankfurt itself is the hub of one of the most vibrant metropolitan areas in Europe. Apart from its strong economic and financial sides, it boasts a rich cultural community and repeatedly earns high rankings in worldwide surveys of quality of living. 
Applications should include a brief statement of research interests, CV including publications and relevant course work, transcripts, and contact information for at least two references. Send applications to Ms Gaby Ehlgen (ehlgen at fias.uni-frankfurt.de). Informal enquiries may be addressed to Jochen Triesch (triesch at fias.uni-frankfurt.de). [1] A Unified Model of the Joint Development of Disparity Selectivity and Vergence Control. Zhao Y, Rothkopf CA, Triesch J, Shi B. IEEE Int. Conf. on Development and Learning and Epigenetic Robotics (ICDL), 2012. (Paper of excellence award.) http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6400876 [2] Let it Learn: A Curious Vision System for Autonomous Object Learning. Chandrashekhariah P, Spina G, Triesch J. Int. Conf. on Computer Vision Theory and Applications (VISAPP), 2013. http://fias.uni-frankfurt.de/~triesch/publications/ChandrashekhariahEtAl-VISAPP-2013.pdf [3] Emergence of Mirror Neurons in a Model of Gaze Following. Triesch J, Jasso H, Deak GO. Adaptive Behavior 15(2):149-165, 2007. http://fias.uni-frankfurt.de/~triesch/publications/TrieschEtAl-AdaptiveBehavior-2007.pdf -- Prof. Dr. Jochen Triesch Johanna Quandt Research Professor Frankfurt Institute for Advanced Studies http://fias.uni-frankfurt.de/~triesch/ From contact2013 at ecvp.uni-bremen.de Mon Mar 11 11:29:53 2013 From: contact2013 at ecvp.uni-bremen.de (ECVP 2013) Date: Mon, 11 Mar 2013 16:29:53 +0100 Subject: Connectionists: REMINDER: Deadline for Abstract submission approaches Message-ID: <002201ce1e6d$47244170$d56cc450$@ecvp.uni-bremen.de> This is to remind you that the deadline for abstract submission to ECVP 2013 (Bremen, Germany) is March, 24th. You may submit your abstracts by registering to the main meeting at http://www.ecvp.uni-bremen.de/node/15 and then using the link we will send you to upload your abstract. ECVP2013 also hosts a satellite symposium on Arts & Perception (http://www.ecvp.uni-bremen.de/node/51) and a total of 10 tutorials on data analysis, experimental approaches and other topics (http://www.ecvp.uni-bremen.de/node/46). For these events, the number of participants is limited and will be given on a first-come first-served basis. Registering is possible during the registration procedure for the main conference. Please note that registration to only the satellite symposium or the tutorials is possible, too. In this case, please enter the normal registration procedure and then choose the event you want to register to and uncheck registration to the main meeting. Looking forward seeing you in Bremen, -- ECVP 2013 Organizing Committee Udo Ernst | Cathleen Grimsen | Detlef Wegener | Agnes Janssen Universitaet Bremen / University of Bremen Zentrum fuer Kognitionswissenschaften / Center for Cognitive Sciences Hochschulring 18 28359 Bremen / Germany Website: www.ecvp.uni-bremen.de Facebook: www.facebook.com/EuropeanConferenceOnVisualPerception Contact - emails: contact2013 at ecvp.uni-bremen.de (For any comments, questions or suggestions) symp2013 at ecvp.uni-bremen.de (For organization and submission of symposia) exhibition2013 at ecvp.uni-bremen.de (For any query regarding the exhibition) showtime2013 at ecvp.uni-bremen.de (For proposal submission and any query regarding the SHOWTIME) ecvptravel at ecvp.uni-bremen.de (For any query regarding travel grants) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From deak at cogsci.ucsd.edu Mon Mar 11 00:23:24 2013 From: deak at cogsci.ucsd.edu (Gedeon Deák) Date: Sun, 10 Mar 2013 21:23:24 -0700 Subject: Connectionists: Deadline Extended: International Conference on Learning and Development/EpiRob 2013 Message-ID: (Apologies if you receive multiple copies of this message.) The organizers of ICDL/EpiRob 2013 (Osaka, Japan) would like to announce an extension of the deadline for paper and poster submissions. The new deadlines and expected notification dates are as follows: Submission Deadline: March 31, 2013 Notification Due: May 16, 2013 Final Version Due: June 16, 2013 Conference: August 18-22, 2013 For more information about ICDL/EpiRob, please see: http://www.icdl-epirob.org/ -- Gedeon O. Deák, Ph.D. Professor of Cognitive Science University of California, San Diego 9500 Gilman Dr. La Jolla, CA 92093-0515 http://www.cogsci.ucsd.edu/~deak/cdlab http://ucsd.academia.edu/GedeonDeák ph (858) 822-3352 fax (858) 534-1128 From guangliang.li2010 at gmail.com Mon Mar 11 11:38:49 2013 From: guangliang.li2010 at gmail.com (Guangliang Li) Date: Mon, 11 Mar 2013 16:38:49 +0100 Subject: Connectionists: 2013 Reinforcement Learning Competition and ICML Workshop Message-ID: 2013 Reinforcement Learning Competition and ICML Workshop 16-21 June 2013 Atlanta, USA https://sites.google.com/site/rlcomp2013/icml_workshop Competition goal After a four-year hiatus, the reinforcement learning competition is back. The primary aim of the competition is to test both general and domain-specific reinforcement learning algorithms, using an unbiased and transparent methodology. As a side effect, the competition will generate a set of benchmark domains and benchmark results in those domains, which can then be used as a basis of comparison in future work. In the reinforcement learning competition, researchers can test their algorithms and insights in a friendly competitive way, on new and challenging domains. A secondary aim is to discuss methodological approaches for comparing reinforcement learning algorithms. This remains an issue in reinforcement learning in general. Another important aim is to ensure the continuing existence of the competition and to prevent further hiatuses: our aim is to ensure that the competition will be organized annually again, from this year onward. To this end we will try to build up an organization of interested researchers. As in previous years, the competition will be hosted at http://www.rl-competition.org/ The format will remain as is. Agents may compete in one or more of a set of known domains. This set will include a polyathlon event, where the agents compete in a sequence of arbitrary environments. We envisage the actual competition to take place in June 2013. ICML Workshop on the Reinforcement Learning Competition 2013 (WRLCOMP) The competition is associated with an ICML Workshop in conjunction with ICML 2013.
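For readers who are new to the area, the short sketch below shows the kind of agent-environment loop that such benchmark domains exercise: a tabular Q-learning agent on a tiny, made-up chain world. It is only a generic illustration in Python; the environment, its reward scheme, and the names used (ChainEnv, q_learning) are hypothetical and do not reflect the competition's actual software interface or domains.

# Illustrative only: a minimal tabular Q-learning agent on a toy chain world.
# Neither the environment nor the function names below come from the
# competition software; they are made up for this sketch.
import random

class ChainEnv:
    """Five states in a row; action 0 steps left, action 1 steps right.
    Reaching the rightmost state ends the episode with reward 10."""
    def __init__(self, n_states=5):
        self.n_states = n_states
        self.state = 0
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        self.state += 1 if action == 1 else -1
        self.state = max(0, min(self.state, self.n_states - 1))
        done = self.state == self.n_states - 1
        return self.state, (10.0 if done else 0.0), done

def q_learning(env, episodes=300, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = [[0.0, 0.0] for _ in range(env.n_states)]    # Q[state][action]
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 100:              # cap episode length
            if random.random() < epsilon or q[s][0] == q[s][1]:
                a = random.randrange(2)              # explore / break ties randomly
            else:
                a = 0 if q[s][0] > q[s][1] else 1    # otherwise act greedily
            s2, r, done = env.step(a)
            # Standard Q-learning update toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return q

if __name__ == "__main__":
    random.seed(0)
    q = q_learning(ChainEnv())
    print("greedy actions:", [0 if q[s][0] > q[s][1] else 1 for s in range(len(q))])

A real entry would of course wrap a learning algorithm of this kind behind whatever interface the organizers publish, and the competition domains are far richer than a five-state chain.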
The ICML reinforcement learning competition workshop will bring together researchers who participated in the competition, in order to present and discuss their results. We will evaluate what works, and also under what conditions established methods may not work so well. In this way we hope to broaden our insight into state-of-the-art RL algorithms, and important properties of RL problems. The workshop will be organized along the lines of the domains. One time slot will be reserved for each competition category winner. In addition, all competition entrants will be invited to submit a short paper describing their approach. We will reserve a special time slot for papers on methodological problems that arise when comparing reinforcement learning algorithms. Workshop website: https://sites.google.com/site/rlcomp2013/icml_workshop Topics 1. For entrants All entrants are additionally invited to submit a paper describing the approach used in the competition, including findings and ideas that relate to more general research questions. 2. For everyone You are invited to submit a paper discussing methodological issues when comparing reinforcement learning algorithms, including but not limited to the following topics: Effects of PRNGs in large-scale comparisons Experimental design for an unbiased evaluation Metrics for performance evaluation Reproducibility of results Robustness of algorithms Statistical testing standards for reinforcement learning Submission Interested authors should format their papers according to ICML formatting guidelines, available here: http://icml.cc/2013/wp-content/uploads/2012/12/icml2013stylefiles.tar.gz Papers should not exceed 4 pages and are due by May 15, 2013. All papers must be submitted as PDF, and submissions must be made through Easychair at https://www.easychair.org/conferences/?conf=wrlcomp2013 Important Dates Domains available: April 1 Testing starts: May 1 Competition: June 1 Paper Submission Deadline: May 15, 2013 Author Notification: June 1, 2013 Workshop: June 20-21, 2013 From phkywong at ust.hk Mon Mar 11 07:32:00 2013 From: phkywong at ust.hk (WONG Michael K Y) Date: Mon, 11 Mar 2013 19:32:00 +0800 (HKT) Subject: Connectionists: IAS Program on Statistical Physics and Computational Neuroscience Message-ID: <57048.143.89.19.28.1363001520.squirrel@sqmail.ust.hk> The Hong Kong University of Science and Technology IAS Program on Statistical Physics and Computational Neuroscience IAS Workshop 2-14 July 2013 Student Conference 15-16 July 2013 StatPhysHK Conference 17-19 July 2013 Invited speakers: Guoqiang Bi (U of Science and Technology of China) Chi Keung Chan (Academia Sinica) Ying Shing Chan (University of Hong Kong) Steven Coombes (Nottingham University) Michael Crair (Yale University) Tomaki Fukai (RIKEN) David Hansel (Université René Descartes) Jufang He (Hong Kong Polytechnic University) Yong He (Beijing Normal University) Claus Hilgetag (Hamburg University) Zhaoping Li (University College London) Marcelo Magnasco (Rockefeller University) Masato Okada (University of Tokyo) Stefano Panzeri (Italian Inst of Tech and U of Glasgow) Barry Richmond (NIMH) Thomas Trappenberg (Dalhousie University) Mark van Rossum (University of Edinburgh) Xiaoqin Wang (Johns Hopkins University) Jia Yi Zhang (Fudan University) Organizing Committee: Chair - K. Y. Michael Wong (HKUST) Co-chair - Changsong Zhou (HKBU) Emily S. C. Ching (CUHK) Gang Hu (Beijing Normal University) Jian-Dong Huang (HKU) H.
Benjamin Peng (HKUST) Leihan Tang (HKBU) Si Wu (Beijing Normal University) http://ias.ust.hk/program/201307/ Abstract submission deadline: 15 April 2013 (updated) Venue: The new IAS Building Sponsors: Hong Kong Baptist University Croucher Foundation From alfredo.petrosino at uniparthenope.it Mon Mar 11 16:39:03 2013 From: alfredo.petrosino at uniparthenope.it (Alfredo Petrosino) Date: Mon, 11 Mar 2013 21:39:03 +0100 Subject: Connectionists: ICIAP 2013 in Napoli, Italy - Deadline: March 29, 2013 Message-ID: <513E40E7.50104@uniparthenope.it> * Please forward to your colleagues and contacts who may be interested in attending this event. * ---- ICIAP 2013 in Napoli, Italy, http://www.iciap2013-naples.org/, is an historic international conference in image processing and pattern recognition endorsed by the International Association for Pattern Recognition (IAPR), the Technical Committe PAMI, and the IEEE Computational Intelligence Society. One of the main tracks is: - Pattern Recognition and Machine Learning: Supervised and unsupervised learning; Neural networks; Kernel machines; Ensemble methods; Graphical models; Deep learning; Soft computing (fuzzy theory, rough theory, possibilistic theory); Dimensionality reduction; Feature selection Please note that selected best papers will be collected in the 'Pattern Recognition Letters' and 'Information Sciences' journals. We are also running other special issues of journals of your interest. The submission deadline is March 29, 2013. Don't miss Napoli in September! Best regards, Alfredo Petrosino ==================================================== CALL FOR PAPERS ** Deadline: March 29, 2013 ** International Conference on Image Analysis and Processing (ICIAP) 2013 September 11-13, 2013 Castel dell'Ovo, Napoli, Italy http://www.iciap2013-naples.org ==================================================== ICIAP 2013 - INVITED SPEAKERS Antonio Torralba, Massachusetts Institute of Technology, USA Ching Y. Suen, Concordia University, Canada Sankar K. Pal, Indian Statistical Institute, India Jiri Matas, Czech Technical University, Czech Republic Fei-Fei Li, Stanford University, USA ICIAP 2013 - CONFERENCE PROCEEDINGS The Proceedings of ICIAP 2013 will be published in the Springer Lecture Notes in Computer Science (LNCS) series, indexed as peer-reviewed publication in the Web of Science. ICIAP 2013 - PUBLICATIONS IN JOURNALS Extended versions of selected papers will be considered for inclusion in a special section of the "Pattern Recognition Letters" journal and in the "Information Sciences" journal. ICIAP 2013 - TUTORIALS AND WORKSHOPS Tutorials and workshops (to be announced soon) will be held on September 9, 10 and 11, 2013, right before the main conference. ICIAP 2013 - IMPORTANT DATES Conference paper submission: March 29, 2013 Conference paper acceptance: May 10, 2013 Camera ready submission: June 10, 2013 ICIAP 2013 - SUBMISSION Papers (10 pages at most, in single blind submission) should be submitted on the ICIAP2013 conference system https://cmt.research.microsoft.com/ICIAP2013/ From kaschube at fias.uni-frankfurt.de Mon Mar 11 16:28:40 2013 From: kaschube at fias.uni-frankfurt.de (Matthias Kaschube) Date: Mon, 11 Mar 2013 16:28:40 -0400 Subject: Connectionists: Open PhD/Postdoc Position in Theoretical Visual Neuroscience Message-ID: We have an opening for a fully funded 3-year interdisciplinary PhD studentship / postdoc position in theoretical/computational visual neuroscience at the Frankfurt Institute for Advanced Studies (FIAS). 
The position is part of the Bernstein Focus: Neurotechnology (BFNT) Frankfurt and is focused on neural circuit models of the visual cortex to study memory mediated top-down control of visual processing. We are seeking outstanding and highly motivated PhD candidates or Postdocs to apply for work within the above framework. Applicants should hold degrees (Master or PhD, respectively) in relevant subjects, for example physics, mathematics, computer science, neuroscience, engineering, etc. The candidate should have excellent analytic skills, knowledge of signal processing and statistics and good programming skills (Matlab, Python or C/C++). The position can be filled immediately, and the term of the appointment is 36 months. The salary for these full-time positions (PhD or Postdoc) is according to TV G-U 13, 100%. The Frankfurt Institute for Advanced Studies is a research institution dedicated to fundamental theoretical research in various areas of science. It is embedded in Frankfurt?s recently established research campus, which is home to a thriving research community in the field of neuroscience. Tight links exist to the experimental neuroscience centers of the biology department and medical campus of the Goethe University, the adjacent Max-Planck Institutes for Brain Research and for Biophysics and the Ernst Str?ngmann Institute for Neuroscience (in cooperation with the Max-Planck Society) at Frankfurt?s medical campus. Frankfurt itself is the hub of one of the most vibrant metropolitan areas in Europe. Apart from its strong economic and financial sides, it boasts a rich cultural community and repeatedly earns high rankings in worldwide surveys of quality of living. For further information, please visit http://fias.uni-frankfurt.de/neuro/kaschube/ Applications should be sent to Matthias Kaschube (kaschube at fias.uni-frankfurt.de) and Gaby Ehlgen (ehlgen at fias.uni-frankfurt.de). Applicants should send a CV (including a list of publications) and contact information of 2 references. Informal inquiries can be addressed to Matthias Kaschube. --- Matthias Kaschube Fellow, Frankfurt Institute for Advanced Studies Professor for Computational Neuroscience and Computational Vision Faculty of Computer Science and Mathematics Johann Wolfgang Goethe University, Frankfurt am Main Ruth-Moufang-Str. 1 60438 Frankfurt am Main Germany Phone: +49 69 798 47521 or E-mail: kaschube at fias.uni-frankfurt.de Website: http://fias.uni-frankfurt.de/neuro/kaschube/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From heimel at nin.knaw.nl Tue Mar 12 08:51:37 2013 From: heimel at nin.knaw.nl (Alexander Heimel) Date: Tue, 12 Mar 2013 13:51:37 +0100 Subject: Connectionists: INCF Short Course on Neuroinformatics, Neurogenomics and Brain Disease, September 2013 Message-ID: To understand the brain and its disorders, we needed to get data. Now we have data, and we need to analyze it. INCF Short Course on Neuroinformatics, Neurogenomics and Brain DIsease https://sites.google.com/site/neuroinformaticsjamboree/ 14-20 September 2013, Fraueninsel (Bavaria), Germany OPEN FOR APPLICATIONS until April 30th, 2013 Program In this exciting five-day course at an idyllic European location, we will discuss and demonstrate many neuroinformatics tools and public data resources in the context of neurogenomics and brain diseases. 
The mornings and evenings are filled with lectures on areas of neuroscience in which large datasets are becoming available, such as the synaptic complex and the proteome, synaptic plasticity, impulsivity, mRNAs, neuro-ontology, neurogenomics, and neurogenesis. In the afternoons, students and lecturers together will attempt to apply the new tools and datasets to answer specific research questions. Our aim is to have jointly written draft papers for each of the questions by the end of the course. Target audience All neuroscientists with an interest in neuroinformatics, neurogenomics or brain disease. Graduate students and postdocs are welcome, but also more advanced researchers eager to learn how to exploit the potential of neuroinformatics tools and publicly available datasets in answering their research questions. We aim for about 40 participants. Lodging Lodging, food and coffee will be provided. The course fee is 100 euros only, thanks to generous gifts of our sponsors. We hope to make travel stipends available by application for a selected number of students. Organizers: Alexander Heimel, Rupert Overall and Rob Williams. More info or applications: neuroinformatics.jamboree at gmail.com From weng at cse.msu.edu Tue Mar 12 16:47:06 2013 From: weng at cse.msu.edu (Juyang Weng) Date: Tue, 12 Mar 2013 16:47:06 -0400 Subject: Connectionists: INCF Short Course on Neuroinformatics, Neurogenomics and Brain Disease, September 2013 In-Reply-To: References: Message-ID: <513F944A.8030904@cse.msu.edu> Some researchers have argued that neuroscience is rich in data and poor in theory. There seems to be a good reason for this situation: if one knew the brain theory, one would know that analyzing brain data (e.g., the Connectome) offers no hope of recovering the brain theory, since data like the Connectome miss so much, such as the synaptic weights (conductances), the distribution of neurotransmitters, and, most important of all, the brain's external environment at every moment ... Analyzing brain data without the guidance of a brain-scale theory is like the following: a person who does not understand how a modern computer works (i.e., the theory) obtains the wiring diagram of a modern computer (the Connectome) and snapshots of the voltages at every point on the wires (a high-definition fMRI movie), and then says, "now we have data, and we need to analyze it." -John On 3/12/13 8:51 AM, Alexander Heimel wrote: > To understand the brain and its disorders, we needed to get data. > Now we have data, and we need to analyze it. > > INCF Short Course on Neuroinformatics, Neurogenomics and Brain DIsease > https://sites.google.com/site/neuroinformaticsjamboree/ > 14-20 September 2013, Fraueninsel (Bavaria), Germany > > OPEN FOR APPLICATIONS until April 30th, 2013 > > Program > In this exciting five-day course at an idyllic European location, we > will discuss and demonstrate many neuroinformatics tools and public > data resources in the context of neurogenomics and brain diseases. The > mornings and evenings are filled with lectures on areas of > neuroscience in which large datasets are becoming available, such as > the synaptic complex and the proteome, synaptic plasticity, > impulsivity, mRNAs, neuro-ontology, neurogenomics, and neurogenesis. > In the afternoons, students and lecturers together will attempt to > apply the new tools and datasets to answer specific research > questions. Our aim is to have jointly written draft papers for each of > the questions by the end of the course. > > Target audience > All neuroscientists with an interest in neuroinformatics, > neurogenomics or brain disease. Graduate students and postdocs are > welcome, but also more advanced researchers eager to learn how to > exploit the potential of neuroinformatics tools and publicly available > datasets in answering their research questions. We aim for about 40 > participants.
> > Lodging > Lodging, food and coffee will be provided. The course fee is 100 euros > only, thanks to generous gifts of our sponsors. We hope to make travel > stipends available by application for a selected number of students. > > Organizers: Alexander Heimel, Rupert Overall and Rob Williams. > More info or applications: neuroinformatics.jamboree at gmail.com -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 3115 Engineering Building Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From kiebel at cbs.mpg.de Wed Mar 13 03:29:59 2013 From: kiebel at cbs.mpg.de (Stefan Kiebel) Date: Wed, 13 Mar 2013 08:29:59 +0100 (CET) Subject: Connectionists: postdoc and PhD position in comp neuroscience & MEG/EEG methods, Jena, Germany In-Reply-To: <1768705570.408.1363159755603.JavaMail.root@zimbra> Message-ID: <1713087225.414.1363159799451.JavaMail.root@zimbra> extended deadline - 20th of March: We are inviting applications for a postdoctoral and a PhD student position in the Computational Neuroscience & Magnetoencephalography group at the Biomagnetic Centre (http://www.neuro.uniklinikum-jena.de/neuro/en/Research/Biomag.html), Friedrich-Schiller-University of Jena, Germany. The successful candidates will develop novel analysis methods for Magneto- and Electroencephalography (MEG/EEG) such as connectivity analysis, source reconstruction and advanced single trial analysis. The positions are devoted to research only without any teaching or administrative duties. The work will be done in collaboration with the MEG and theoretical neuroscience groups at the Wellcome Trust Centre for Neuroimaging in London, UK. These positions are ideal for candidates with a computational/theoretical background and a strong interest in collaborating with experimental researchers in neuroimaging. The lab runs a 306 channels MEG (Neuromag Vectorview) with 128 integrated EEG sensors, a high-density EEG system, and high-performing compute servers. In addition the group has access to a research-only 3T MRI-scanner. All experimental facilities (MEG, EEG, MRI) are supported by experienced physics and IT staff. The applicants should have worked in neuroscience before and be motivated to work in a multidisciplinary team (e.g. mathematicians, engineers, psychologists, physicians). The postdoc applicant must have a PhD (or equivalent) in computational neuroscience, physics, or a related field and should, ideally, have expertise in EEG or MEG and nonlinear dynamical systems. The PhD student should have a mathematically oriented background in computational neuroscience, physics, or a related field but students with a cognitive neuroscience, psychology, or related background will be considered as well. The starting dates for both positions are flexible. Salary is based on German Public service regulations (postdoc TV-L E13, PhD student TV-L E13 65%). The postdoc position is initially for two years with possible extension; the PhD position is for three years with one year possible extension. Interested candidates are encouraged to get in touch at their earliest convenience. Applications are considered until 20th of March 2013 but reviewing of the applications will start immediately. For questions or an informal discussion about these positions please contact Prof. 
Stefan Kiebel (skiebel at biomag.uni-jena.de). The following documents should be included in the application in a single PDF-file and sent by email to skiebel at biomag.uni-jena.de: A cover letter including a brief description of personal qualifications and future research interests, curriculum vitae, and contact details of two personal references. -- Prof. Dr. Stefan Kiebel Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany Phone: ++49 341/9940-2435 Fax: ++49 341/9940-2221 http://www.cbs.mpg.de/~kiebel From mark.plumbley at elec.qmul.ac.uk Wed Mar 13 04:09:31 2013 From: mark.plumbley at elec.qmul.ac.uk (mark.plumbley at elec.qmul.ac.uk) Date: Wed, 13 Mar 2013 08:09:31 +0000 Subject: Connectionists: Workshop on Information Dynamics of Music - 21 March 2013 - London, UK In-Reply-To: <513919C6.3080403@eecs.qmul.ac.uk> References: <513919C6.3080403@eecs.qmul.ac.uk> Message-ID: <64FF9E21C217C9438E68D86940CB7DE50975B0@staff-mail2.vpn.elec.qmul.ac.uk> This workshop may be of interest to people working on information theory and the brain. Best wishes, Mark ---- Workshop on Information Dynamics of Music (IDyOM) We invite you to join us for a one day workshop to celebrate the successful completion of our research project Information and Neural Dynamics in the Perception of Musical Structure funded by the Engineering and Physical Science Research Council (EPSRC). The project includes members at Queen Mary, University of London and Goldsmiths, University of London. When: Thursday 21 March 2013, 9am - 5pm Where: Room RHB 137a, Goldsmiths, University of London, London SE14 6NW, UK The workshop will provide a forum for dissemination and discussion of cutting edge research on dynamic predictive processing of musical structure in: * probabilistic and information-theoretic models; * cognitive, psychological and neural processing; * musicological analysis. For further details on the project see: http://www.idyom.org. Speakers will include: * Prof. Moshe Bar , Gonda Multidisciplinary Brain Center, Bar-Ilan University and Harvard Medical School * editor of Predictions in the Brain: Using our Past to Generate a Future * Prof. David Feldman , College of the Atlantic, Maine * author of Chaos and Fractals: An Elementary Introduction * Prof. David Huron , Ohio State University * author of Sweet Anticipation: Music and the Psychology of Expectation * Prof. Israel Nelken , Hebrew University, Jerusalem * author of Auditory Neuroscience: Making Sense of Sound in addition to members of the IDyOM team. Attendance is free but places must be booked in advance. For more information and to reserve a place please visit: http://idyom2013.eventbrite.co.uk/ Dr Marcus Pearce -- Lecturer in Sound and Music Processing Centre for Digital Music & Research Centre in Psychology Queen Mary, University of London Mile End Road, London E1 4NS, UK Tel: +44 (0)20 7882 5352 http://webprojects.eecs.qmul.ac.uk/marcusp Music Cognition Lab http://bit.ly/Pp3rcD MSc in Media Arts Technology http://bit.ly/x3tX9C MSc in Digital Music Processing http://bit.ly/zb21lm -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00001.jpg Type: image/jpeg Size: 7371 bytes Desc: ATT00001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From ted.carnevale at yale.edu Wed Mar 13 19:17:28 2013 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Wed, 13 Mar 2013 19:17:28 -0400 Subject: Connectionists: NEURON Summer course early registration discount Message-ID: <51410908.9060803@yale.edu> NEURON Summer course early registration discount In response to the "sequester," we are offering a $150 early registration discount on a limited number of seats in the NEURON 2013 Summer Course. The discount will apply to registrants who sign up and pay for the course by 5 PM EDT (New York time), Friday, April 12, 2013. This reduces the total registration fee from $1050 to $900. For information about the course, and the on-line registration form, see http://www.neuron.yale.edu/neuron/static/courses/nscsd2013/nscsd2013.html --Ted From weng at cse.msu.edu Wed Mar 13 22:33:46 2013 From: weng at cse.msu.edu (Juyang Weng) Date: Wed, 13 Mar 2013 22:33:46 -0400 Subject: Connectionists: modularity vs. sparsity In-Reply-To: <20130304115301.Horde.ywMJJ16adDBRNNFtaO6QwwA@webmail.uvm.edu> References: <20130304115301.Horde.ywMJJ16adDBRNNFtaO6QwwA@webmail.uvm.edu> Message-ID: <5141370A.7040404@cse.msu.edu> You wrote: "it's not surprising that they got modularity because they placed a cost on connection length." If modularity is primarily due to connection length minimization, this explanation puts most factors into evolution. This should be largely wrong. Why? You consider only computer simulations but not biology. Biology seems to be smarter. According to my understanding of brain plasticity, the connections in the brain are primarily due to prenatal and postnatal activities during development. That is, signals from firing neurons guide other neurons to wire, not primarily genes! My view: the brain modularity is primarily due to (A) sensor discontinuities (e.g., between eye and skin) and continuities (e.g., between neighboring receptors on the retina of sensors), and (B) effector discontinuities (e.g., between hand muscles and leg muscles) and continuities (e.g., between neighboring muscles in the same finger). Note: not just sensors, but also effectors. According to what I understand from the neuroanatomy and our brain-inspired DN (Developmental Network) model, the sources of signals that guide the brain wiring are not only sensors (receptors) but also effectors (muscles and glands). I know that I am among a very small minority who hold this view, but I hope that more and more people can see that this is true. The major drawback of many existing studies discussed here is that the researchers confuse evolution with development --- they have little idea what a cell does during development. I suggest that everybody learn neurobiology to understand how each neuron migrates and connects during development to form tissues (e.g., cortex). The genes inside a cell largely determine what a cell is now, but how a cell wires is largely determined by the activities of the other cells that this cell communicates with, using what is called cell-to-cell signaling. All those who took BMI 811 Biology for Brain-Mind Research last summer have learned the above. I encourage you to take advantage of the BMI summer school 2013, which will be announced soon. Please let me know if you are interested but have some blackout dates, so that we can do our best to avoid your blackout dates. A BMI course will last for three weeks this summer.
Now, you can take a look at the BMI program last year: http://www.brain-mind-institute.org/programs.html -John On 3/4/13 11:53 AM, Joshua C. Bongard wrote: > A lot of discussion here lately about the evolution of modularity, > sparked by the recent paper > > http://arxiv.org/abs/1207.2743 > > In it, the authors evolve ANNs to perform a specific task while placing > a penalty on total wiring length. > > There were a lot of responses here along the lines of "it's not > surprising > that they got modularity because they placed a cost on connection > length." > > However, what I found interesting about this paper is that they got > modularity rather than sparsity. It is important to keep in mind that > these two properties of networks are not the same thing. > > So, if I were to use Clune et al.'s approach to evolve ANNs for another > task, would I get modularity or sparsity? > > Food for thought. > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 3115 Engineering Building Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- From stefano.panzeri at gmail.com Thu Mar 14 10:24:36 2013 From: stefano.panzeri at gmail.com (Stefano Panzeri) Date: Thu, 14 Mar 2013 14:24:36 +0000 Subject: Connectionists: postdoctoral position in computational neuroscience at University of Glasgow Message-ID: I have a postdoctoral position in computational neuroscience available in my laboratory at the University of Glasgow. The position is funded for three years and the application deadline is April 2nd, 2013. The project aims at developing a novel set of information-theoretic algorithms for the analysis of simultaneous recordings from large populations of retinal ganglion cells under dynamic natural visual stimulation. The purpose of these algorithms is to understand how interactions among groups of neurons contribute to visual processing. These algorithms will be applied to analyse the functional response properties of these cells and will expose new principles of spike timing encoding that bridge the gap between single-cell and population information processing. The post is funded by the European Union Future Emerging Technology "VISUAL MODELLING USING GANGLION CELLS" grant. The project will involve collaborations with a retinal neurophysiology laboratory at Göttingen University (Prof Tim Gollisch), as well as collaborations with neuromorphic electronic engineers and roboticists (University of Zurich, and University of Ulster) to contribute to the development of novel theoretical and hardware models of biological retinal ganglion cell types for dynamic vision applications. Salary will be on the University's Research and Teaching Grade 7, £32,267 - £36,298 per annum. Prospective candidates are encouraged to contact me by email (stefano.panzeri at glasgow.ac.uk). For further information about how to apply, please visit http://www.jobs.ac.uk/job/AGC660/research-associate/ For further information about the institute, please visit http://www.gla.ac.uk/researchinstitutes/neurosciencepsychology/ Regards Stefano Panzeri University of Glasgow http://www.gla.ac.uk/researchinstitutes/neurosciencepsychology/staff/stefanopanzeri/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nicolas.rougier at inria.fr Thu Mar 14 05:28:48 2013 From: nicolas.rougier at inria.fr (Nicolas Rougier) Date: Thu, 14 Mar 2013 10:28:48 +0100 Subject: Connectionists: PhD Position, Computational Neuroscience, Bordeaux, France Message-ID: <5ED02B68-DF09-40AF-A470-CE626BF2071D@inria.fr> Dear colleagues, Please find below a PhD proposal that may be of some interests for your students. Best Regards, Nicolas Rougier Neuroscience of decision making: from motor primitives to actions ================================================================= The aim of this proposal is to investigate and to model decision making at different levels in order to establish the link between motor primitives and higher level actions. The question is to understand how continuous complex motor sequences can be dynamically represented as actions such that they can be later manipulated to resolve conflict when several actions are possible. This research will take place at the institute of neurodegenerative diseases (Bordeaux, France) whose goal is to develop therapeutic approaches to neurodegenerative diseases using both vertical and translational approaches. More specifically, this research will be driven in cooperation with the Basal Gang group which has extended expertise in the physiology of decision making and the neuromodulation of the cortex-basal ganglia loop. The candidate should have education in mathematics and computer science (modeling, differential equations, simulation, analysis) with a strong motivation or experience in neurosciences. Knowledge of the python language and numerical libraries will be appreciated. Administrative Information ========================== Position type: PhD Student Functional area: Bordeaux (France) Research theme: Computation sciences for biology, medicine and the environnement Scientific advisor: Nicolas Rougier (http://www.loria.fr/~rougier) HR Contact: laure.pottier_schupp at inria.fr Application deadline: 03/05/2013 Eligibility: Master's in computer science, control engineering, mathematics, scientific computation or an equivalent diploma. More details at: ================ http://www.inria.fr/en/institute/recruitment/offers/phd/campaign-2013 From neumann at ias.tu-darmstadt.de Thu Mar 14 05:53:06 2013 From: neumann at ias.tu-darmstadt.de (Gerhard Neumann) Date: Thu, 14 Mar 2013 10:53:06 +0100 Subject: Connectionists: Reminder: Call for Posters - ICRA 2013 Workshop on "Novel Methods for Learning and Optimization of Control Policies and Trajectories for Robotics" Message-ID: <51419E02.4070609@ias.tu-darmstadt.de> REMINDER - CALL FOR POSTERS - Poster Submission Deadline: March 15, 2013 ICRA 2013 WORKSHOP ON "NOVEL METHODS FOR LEARNING AND OPTIMIZATION OF CONTROL POLICIES AND TRAJECTORIES FOR ROBOTICS"" ================================================================================================== QUICK FACTS Organizers: Katja Mombaur, Gerhard Neumann, Martin Felis, Jan Peters Conference: ICRA 2013 Location: Karlsruhe, Germany Workshop Date and Time: Friday, May 10, 2013, 9:00 - 18:30 Website: http://www.robot-learning.de/Research/ICRA2013 Poster Submission Deadline: March 15, 2013 Poster Acceptance Notification: March 20, 2013 Submission Form: 1 page extended abstract ABSTRACT The current challenges defined for robots require them to automatically generate and control a wide range of motions in order to be more flexible and adaptive in uncertain and changing environments. 
However, anthropomorphic robots with many degrees of freedom are complex dynamical systems. The generation and control of motions for such systems are very demanding tasks. Cost functions appear to be the most succinct way of describing desired behavior without over-specification and appear to underlie human movement generation in pointing/reaching movements as well as locomotion. Common cost functions in robotics include goal achievement, minimization of energy consumption, minimization of time, etc. A myriad of approaches have been suggested to obtain control policies and trajectories that are optimal with respect to such cost functions. However, to date, it remains an open question which algorithm for designing or learning optimal control policies and trajectories in robotics would work best. The goal of this workshop is to gather researchers working in robot learning with researchers working in optimal control, in order to give an overview of the state of the art and to discuss how both fields could learn from each other and potentially join forces to work on improved motion generation and control methods for the robotics community. Some of the core topics are: - State of the art methods in model-based optimal control and model predictive control for robotics as well as inverse optimal control - State of the art methods in robot learning, model learning, imitation learning, reinforcement learning, inverse reinforcement learning, etc. - Shared open questions in both reinforcement learning and optimal control approaches - How could methods from optimal control and machine learning be combined? FORMAT The workshop will consist of presentations, posters, and panel discussions. Topics to be addressed include, but are not limited to: - How far can optimal control approaches based on analytical models come? - When using learned models, will the optimization biases be increased or reduced? - Can a mix of analytical and learned models help? - Can a full Bayesian treatment of model errors ensure high performance in general? - What are the advantages and disadvantages of model-free and model-based approaches? - How does real-time optimization / model predictive control relate to learning? - Is it easier to optimize a trajectory or a control policy? - Which can be represented with fewer parameters? - Is it easier to optimize a trajectory/control policy directly in parameter space or to first obtain a value function for subsequent backwards steps? - Is less data needed for learning a model (to be used in optimal control, or model-based reinforcement learning) or for directly learning an optimal control policy from data? - What applications in robotics are better suited for model-based, model-learning and model-free approaches? All of these questions are of crucial importance for furthering the state of the art both in optimal control and in robot reinforcement learning. The goal of this workshop is to gather researchers working in robot learning with researchers working in optimal control, in order to give an overview of the state of the art and to discuss how both fields could learn from each other and potentially join forces to work on improved motion generation and control methods for the robotics community. IMPORTANT DATES March 15 - Deadline of submission for Posters March 20th - Notification of Poster Acceptance SUBMISSIONS Extended abstracts (1 page) will be reviewed by the program committee members on the basis of relevance, significance, and clarity.
Accepted contributions will be presented as posters but particularly exciting work maybe considered for talks. Submissions should be formatted according to the conference templates and submitted via email toneumann at ias.tu-darmstadt.de. ORGANIZERS Katja Mombaur, Universitaet Heidelberg Gerhard Neumann, Technische Universitaet Darmstadt Martin Felis, Universitaet Heidelberg Jan Peters, Technische Universitaet Darmstadt and Max Planck Institute for Intelligent Systems LOCATION AND MORE INFORMATION The most up-to-date information about the workshop can be found on the ICRA 2013 webpage. From pascal.hitzler at wright.edu Thu Mar 14 12:17:50 2013 From: pascal.hitzler at wright.edu (Pascal Hitzler) Date: Thu, 14 Mar 2013 12:17:50 -0400 Subject: Connectionists: Call for Papers: NeSy'13 workshop at IJCAI 2013 In-Reply-To: References: Message-ID: <5141F82E.8020400@wright.edu> Deadline extended to: March 22, 2013 Call for Papers 9th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy?13) (3 or 4 Aug 2013) http://neural-symbolic.org/NeSy13 In conjunction with IJCAI-13 Beijing, China Artificial Intelligence researchers continue to face huge challenges in their quest to develop truly intelligent systems. The recent developments in the field of neural-symbolic integration bring an opportunity to integrate well-founded symbolic artificial intelligence with robust neural computing machinery to help tackle some of these challenges. The Workshop on Neural-Symbolic Learning and Reasoning is intended to create an atmosphere of exchange of ideas, providing a forum for the presentation and discussion of the key topics related to neural-symbolic integration. Topics of interest include: The representation of symbolic knowledge by connectionist systems; Integrated neural-symbolic learning approaches; Extraction of symbolic knowledge from trained neural networks; Integrated neural-symbolic reasoning; Neural-symbolic cognitive models; Biologically-inspired neural-symbolic integration; Integration of logic and probabilities in deep networks; Structured learning and relational learning in neural networks; Applications in robotics, simulation, fraud prevention, semantic web, software engineering, fault diagnosis, bioinformatics, visual intelligence, etc. Submission Researchers and practitioners are invited to submit original papers that have not been submitted for review or published elsewhere. Submitted papers must be written in English and should not exceed 6 pages in the case of research and experience papers, and 4 pages in the case of position papers (including figures, bibliography and appendices). All submitted papers will be judged based on their quality, relevance, originality, significance, and soundness. Papers must be submitted through easychair at https://www.easychair.org/conferences/?conf=nesy13. Presentation Selected papers will be presented during the workshop. The workshop will include extra time for audience discussion of the presentation allowing the group to have a better understanding of the issues, challenges, and ideas being presented. Publication Accepted papers will be published in official workshop proceedings, which will be distributed during the workshop. Authors of the best papers will be invited to submit a revised and extended version of their papers to the Journal of Logic and Computation, OUP. 
Important Dates Paper submission deadline: March 22, 2013 *extended* Notification of acceptance: April 19, 2013 Camera-ready papers due: May 3, 2013 Workshop day: 3 or 4 Aug 2013 IJCAI-13 main conference: Aug 3 - 9, 2013 Workshop Organisers Artur d'Avila Garcez (City University London, UK) Pascal Hitzler (Wright State University, USA) Luis Lamb (Universidade Federal do Rio Grande do Sul, Brazil) Programme Committee Howard Bowman, University of Kent, England Claudia d'Amato, University of Bari, Italy Marco Gori, University of Siena, Italy Barbara Hammer, TU Clausthal, Germany Steffen Hölldobler, TU Dresden, Germany Ekaterina Komendantskaya, University of Dundee, Scotland Kai-Uwe Kühnberger, University of Osnabrück, Germany Gadi Pinkas, Center for Academic Studies, Israel Florian Roehrbein, Albert Einstein College of Medicine, New York, U.S.A. Rudy Setiono, National University of Singapore Jude Shavlik, University of Wisconsin-Madison, U.S.A. Gerson Zaverucha, Universidade Federal do Rio de Janeiro, Brazil Additional Information General questions concerning the workshop should be addressed to a.garcez at city.ac.uk For additional information, please visit the workshop website at http://www.neural-symbolic.org/ Please join the neural-symbolic mailing list (http://maillists.city.ac.uk/mailman/listinfo/nesy) for announcements and discussions - it's a low traffic list. -- Prof. Dr. Pascal Hitzler Kno.e.sis Center, Wright State University, Dayton, OH pascal at pascal-hitzler.de http://www.knoesis.org/pascal/ Semantic Web Textbook: http://www.semantic-web-book.org Semantic Web Journal: http://www.semantic-web-journal.net From pkoprinkova at yahoo.com Thu Mar 14 13:07:11 2013 From: pkoprinkova at yahoo.com (Petia Koprinkova) Date: Thu, 14 Mar 2013 10:07:11 -0700 (PDT) Subject: Connectionists: ICANN'2013 second call and deadline extension announcement Message-ID: <1363280831.11567.YahooMailNeo@web160604.mail.bf1.yahoo.com> Dear colleagues, Due to numerous requests for extension of deadline for paper submission we would like to inform you that ICANN 2013 deadline is extended until March 31, 2013. It is our pleasure to announce that the following outstanding plenary speakers will present their current research developments: Stephen Grossberg, Center for Adaptive Systems, Boston University. Karlheinz Meier, Coordinator of the EU BrainScaleS Consortium. Nikola Kasabov, Director of KEDRI, Auckland University of Technology, New Zealand Erkki Oja, Aalto University, Finland Jun Wang, Chinese University of Hong Kong Alessandro E.P. Villa, Director of the Laboratory of Neuroheuristics and Neuroeconomics, University of Lausanne Günther Palm, Director of the Institute of Neural Information Processing, University of Ulm Paper format and the link to the OCS of Springer can be found at the conference web site: http://www.icann2013.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From terry at salk.edu Fri Mar 15 00:27:53 2013 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 14 Mar 2013 21:27:53 -0700 Subject: Connectionists: NEURAL COMPUTATION - April 1, 2013 In-Reply-To: Message-ID: Neural Computation - Contents -- Volume 25, Number 4 - April 1, 2013 Letters Optimality and Saturation in Axonal Chemotaxis Geoffrey J Goodhill, Jiajia Yuan, Stanley Chan, and Duncan Mortimer Information Transmission Using Non-Poisson Regular Firing Shinsuke Koyama, Robert E.
Kass, Takahiro Omi, and Shigeru Shinomoto Continuation-based Numerical Detection of After-depolarisation and Spike-adding Threshold Krasimira Tsaneva-Atanasova, Jakub Nowacki, and Hinke M Osinga Some Sampling Properties of Common Phase Estimators Kyle Lepage, Mark Kramer, and Uri Eden Double-Gabor Filters Are Independent Components of Small "Translation-invariant" Image Patches Saeed Saremi, Terrence J Sejnowski, and Tatyana Sharpee Solving the Distal Reward Problem With Rare Correlations Andrea Soltoggio, Jochen Steil Modular Encoding and Decoding Models Derived From Bayesian Canonical Correlation Analysis Yukiyasu Kamitani, Yusuke Fujiwara, and Yoichi Miyawaki A Self-Organized Neural Comparator Guillermo Andres Luduena, Claudius Gros Learning With Boundary Conditions Marcello Sanguineti, Giorgio Gnecco, and Marco Gori Error Analysis of Coefficient-based Regularized Algorithm for Density Level Detection Hong Chen, Zhibin Pan, Luoqing Li, and Yuanyan Tang ------------ ON-LINE -- http://www.mitpressjournals.org/neuralcomp SUBSCRIPTIONS - 2013 - VOLUME 25 - 12 ISSUES USA Others Electronic Only Student/Retired $70 $193 $65 Individual $124 $187 $115 Institution $1,035 $1,098 $926 Canada: Add 5% GST MIT Press Journals, 238 Main Street, Suite 500, Cambridge, MA 02142-9902 Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ------------ From john.murray at aya.yale.edu Thu Mar 14 20:36:44 2013 From: john.murray at aya.yale.edu (John D. Murray) Date: Thu, 14 Mar 2013 20:36:44 -0400 Subject: Connectionists: Computational & Cognitive Neuroscience Summer School, in Beijing -- Application extension to March 25 Message-ID: COMPUTATIONAL & COGNITIVE NEUROSCIENCE SUMMER SCHOOL July 6-24 in Beijing, China http://www.ccnss.org http://www.csh-asia.org/s-cosyne13.html *** DEADLINE EXTENDED TO MARCH 25 *** *** School is FREE -- Application fee, tuition fee, the expenses for accommodation and meals during the summer school are FULLY WAIVED. *** The 4th Computational and Cognitive Neuroscience Summer School (CCNSS) will be held in Beijing, China. The objective of this course is to train in Computational Neuroscience talented and highly motivated students and postdocs from Asia and other countries in the world. Applicants with either quantitative, including Physics, Mathematics, Engineering and Computer Science or experimental background are welcomed. The lectures will introduce the basic concepts and methods, as well as cutting-edge research, in Computational and Systems/Cognitive Neurosciences. Modeling will be taught at multiple levels, ranging from subcellular processes, single neuron computation, to microcircuits and large-scale systems. Programming labs coordinated with the lectures will provide practical training in important computational methods. Organizers: - Xiao-Jing Wang (Yale University) - Si Wu (Beijing Normal University) - Zach Mainen (Champalimaud Neuroscience Programme) - Upinder Bhalla (Natl Ctr Biological Sci, Bangalore) Lecturers: - Michael Hausser (UCL) - Eve Marder (Brandeis) - Misha Tsodyks (Weizmann) - Bob Shapley (NYU) - Tony Movshon (NYU) - Bob Desimone (MIT) - Bill Newsome (Stanford) - Mark Churchland (Columbia) - Weiji Ma (Baylor) - Matthew Botvinick (Princeton) - Kechen Zhang (Johns Hopkins) - Xiaoqin Wang (Johns Hopkins) Feel free to direct any questions to: john.murray at yale.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From julian at togelius.com Fri Mar 15 08:33:06 2013 From: julian at togelius.com (Julian Togelius) Date: Fri, 15 Mar 2013 13:33:06 +0100 Subject: Connectionists: IEEE CIG deadline extended to April 7 Message-ID: Due to requests, expectations, and the deadline extension of related conferences the CIG deadline has been extended until April 7 CALL FOR PAPERS IEEE Conference on Computational Intelligence and Games CIG 2013 Niagara Falls, Canada, 11-13 August 2013 http://eldar.mathstat.uoguelph.ca/dashlock/CIG2013/ Games are an ideal domain for the study of computational intelligence. They are not only fun to play and interesting to observe, but they also provide competitive and dynamic environments that model many real-world problems. The 2013 IEEE Conference on Computational Intelligence and Games brings together leading researchers and practitioners from academia and industry to discuss recent advances and explore future directions in this field. The financial support of the IEEE Computational Intelligence Society is pending. PAPER SUBMISSION 7 April 2013 ACCEPTANCE NOTIFICATION 29 May 2013 CAMERA READY SUBMISSION 15 June 2013 CONFERENCE START 11 August 2013 ORGANIZING COMMITTEE: General Chair: Dan Ashlock/University of Guelph Technical Chairs: Julian Togelius/IT University of Copenhagen Graham Kendall/University of Nottingham Competitions Chair: Philip Hingston/Edith Cowan University Finance Chair: Wendy Ashlock/York University Proceedings Chair: Joseph Brown/University of Guelph Tutorial Chair: Clare Bates Congdon/University of Southern Maine General Contact: dashlock at uoguelph.ca Contest Proposals: phi at philiphingston.com Tutorial Proposals: congdon at gmail.com SUBMISSIONS: Researchers are invited to submit high quality papers on original research in the intersection of computational intelligence and games. Computational intelligence must play a strong role in any accepted papers. Accepted papers will be indexed in IEEE Explore. TOPICS OF INTEREST INCLUDE: Learning in games Coevolution in games Neural network approaches for games Fuzzy logic approaches for games Player/opponent modeling in games Computational and artificial intelligence based game design Multi-agent and multi-strategy learning Applications of game theory Computational intelligence for player affective modeling Intelligent interactive narrative Imperfect information and non-deterministic games Player satisfaction and experience in games Theoretical or empirical analysis of CI techniques for games Computational intelligence for non-player characters in games Comparative studies and game-based benchmarking Computational intelligence based digital design assistants Automatic creation of modules or game levels Computational and artificial intelligence in: Video games Board and card games Economic or mathematical games Serious games Augmented and mixed-reality games Games for mobile platforms Want to ask if a topic is in scope? 
Contact: dashlock at uoguelph.ca -- Julian Togelius Associate Professor IT University of Copenhagen Rued Langgaards Vej 7, 2300 Copenhagen, Denmark mail: julian at togelius.com, web: http://julian.togelius.com mobile: +46-705-192088, office: +45-7218-5277 From maass at igi.tugraz.at Thu Mar 14 11:45:37 2013 From: maass at igi.tugraz.at (Wolfgang Maass) Date: Thu, 14 Mar 2013 16:45:37 +0100 Subject: Connectionists: Phd positions in Human Brain Project (Principles of Brain Computation) Message-ID: <5141F0A1.2050507@igi.tugraz.at> We are inviting applications for Phd positions at the Graz University of Technology (Faculty for Computer Science) for research on Principles of Brain Computation in the Human Brain Project http://www.humanbrainproject.eu/ and in the EU-Project BrainScaleS http://brainscales.kip.uni-heidelberg.de/index.html The Human Brain Project is expected to start on October 1, and our Lab will lead research on principles of brain computation in this project. We can employ Phd students already now for closely related work in the BrainScales project https://brainscales.kip.uni-heidelberg.de/index.html (this project provides intermediate financing until the beginning of the Human Brain Project). The Phd students will investigate computational properties and learning features of data-based models for cortical microcircuits, through computer simulations and theoretical analysis. Excellent research skills, a genuine interest in answering fundamental open questions about information processing in the brain, and the capability to work in an interdisciplinary research team are expected. Experience in programming, computer simulations or data analysis will be helpful. Applications are invited by students with a master degree in one of the areas computer science (especially machine learning, software design, large-scale simulations), physics, mathematics, statistics, and computational neuroscience. Our doctoral program will lead to a Phd in Computer Science. Please send your CV, information about your grades, and a letter describing your scientific interests and goals by March 24 to my assistant Regina Heidinger: regina.heidinger at igi.tugraz.at It would be helpful if you could include names and email addresses of referees, and pdf files of your master thesis and/or other publications. -- Prof. Dr. Wolfgang Maass Institut fuer Grundlagen der Informationsverarbeitung Technische Universitaet Graz Inffeldgasse 16b , A-8010 Graz, Austria Tel.: ++43/316/873-5811 Fax ++43/316/873-5805 http://www.igi.tugraz.at/maass/Welcome.html From ted.carnevale at yale.edu Wed Mar 13 19:21:31 2013 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Wed, 13 Mar 2013 19:21:31 -0400 Subject: Connectionists: Parallel NEURON course early registration discount Message-ID: <514109FB.4030701@yale.edu> Parallel NEURON course early registration discount In response to the "sequester," we are offering a $150 early registration discount on a limited number of seats in the Parallelizing NEURON Models course. The discount will apply to registrants who sign up and pay for the course by 5 PM EDT (New York time), Friday, April 12, 2013. This reduces the total registration fee from $1350 to $1200. 
For information about the course, and the on-line registration form, see http://www.neuron.yale.edu/neuron/static/courses/parnrn2013/parnrn2013.html
--Ted

From udo at neuro.uni-bremen.de Fri Mar 15 11:26:03 2013
From: udo at neuro.uni-bremen.de (Udo Ernst)
Date: Fri, 15 Mar 2013 16:26:03 +0100
Subject: Connectionists: IJCAI-13 WORKSHOP: INTELLIGENCE SCIENCE - Call for Papers
Message-ID: <51433D8B.1070909@neuro.uni-bremen.de>

IJCAI-13 WORKSHOP: INTELLIGENCE SCIENCE
==================================================
August 3-4, 2013, Beijing, China
http://www.intsci.ac.cn/WIS2013/

Call for Papers
===============
Artificial Intelligence research has made substantial progress since the 1950s. However, many state-of-the-art intelligent systems are still not able to outperform human intelligence. To advance the research in artificial intelligence, it is beneficial to investigate intelligence, both artificial and natural, in an interdisciplinary context. The objective of this workshop is to bring together researchers from brain science, cognitive science, and artificial intelligence to explore the essence and technology of intelligence.

This workshop provides a platform to discuss some key issues in intelligence science:
(1) What new methodologies and ideas may cognitive science and brain science bring into the research of artificial intelligence?
(2) What are the underlying algorithmic principles and circuitry wiring of the observed intelligent behaviors?
(3) How are intelligent behaviors realized as hierarchical organization of functions across multiple modalities and time scales?
(4) What are the key problems of intelligence science that require joint research in brain science, cognitive science, and artificial intelligence?

This workshop takes advantage of IJCAI-13 by hoping to attract participants from academia and industry worldwide. The communication among researchers from these fields will strongly boost the understanding of intelligence, provoke new theories on intelligent behaviors and lead to new experiments or systems. The workshop is to be held at the beginning of IJCAI-13, August 3-4, 2013. Workshop participants will have the opportunity to meet and discuss issues with a selected focus, providing an informal setting for active exchange among researchers and developers on topics of common interest.

Topics of interest related to Intelligence Science include but are not limited to the following areas:
- Basic process of neural activity in brain
- Perceptual representation and feature binding
- Coding and retrieval of memory
- Linguistic cognition
- Learning and synaptic plasticity
- Exploration and active sampling
- Thought and decision making
- Emotion and affection
- Development and adaptation of intelligence
- Nature of consciousness
- Mind modeling
- Cognitive computing and simulation
- Brain-computer integration
- Intelligent robots and virtual humans
- Brain-like machine
- Creativity
- Abstraction
- Integration of aspects of intelligence

Submissions process:
=====================
Authors should submit their papers through the EasyChair system using the IJCAI formatting guidelines. If you would like to participate, submit either a full paper of no more than 6 pages (or 6,000 words); a short paper, or problem instance (at most 3 pages or 3,000 words); or a position statement (1 page). Short papers may address an important problem for further research.
Important Dates:
==================
Paper submission deadline: March 31, 2013
Acceptance Notification: May 1, 2013
Final Version: May 20, 2013
Workshop dates: August 3-4, 2013

Confirmed Speakers (more to be announced):
=====================================
David Van Essen (Washington University in St. Louis, USA)
Klaus Pawelzik (University of Bremen, Germany)
David Cai (NYU/Shanghai Jiaotong University, China)
K Y Michael Wong (Hong Kong University of Science and Technology, China)
Hailan Hu (Chinese Academy of Sciences, China)

General Chairs
==================
Randal A. Koene (Boston University, USA)
Xiaowei Tang (Institute of High Energy Physics, Chinese Academy of Sciences, Zhejiang University, China)
Jean-Daniel Zucker (IRD and University Pierre and Marie Curie in Paris, France)

Program Chairs:
==================
Zhongzhi Shi (Institute of Computing Technology, Chinese Academy of Sciences, China)
Paul S. Rosenbloom (Dept. of Computer Science and Institute for Creative Technologies, University of Southern California)
Udo Ernst (Institute for Theoretical Physics, University of Bremen, Germany)

Program Committee:
====================
A. Aamodt (Department of Computer and Information Science, Norwegian University of Science and Technology, Norway)
F. Battaglia (Radboud University Nijmegen, Netherlands)
G. Bi (University of Science and Technology of China, China)
W. Chen (Zhejiang University, China)
S. Ding (China Mining University, China)
F. Dylla (Computer Science, University of Bremen, Germany)
G. Li (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China)
L. Li (Hangzhou Dianzi University, China)
J. Liang (Shanxi University, China)
F. Neri (University of Naples Federico II, Italy)
B. Si (Dept. of Neurobiology, Weizmann Institute of Science, Israel)
X. Tian (Department of Biomedical Engineering, Tianjin Medical University, China)
S. Vadera (University of Salford, UK)
G. Wang (Chongqing Institute of Green and Intelligent Technology, CAS, China)
J. Weng (Department of Computer Science and Engineering, Michigan State University, USA)
S. Wu (Brain and Cognitive Neuroscience, Beijing Normal University, China)
C. Zhang (Tsinghua University, China)
L. Zhang (Shanghai Jiao Tong University, China)
X. Zheng (Zhejiang University, China)
C. Zhou (Xiamen University, China)
X. Zhou (Peking University, China)
Y. Zhou (Shanghai Normal University, China)

Local Organization:
============================
Contact Person: Zhongzhi Shi, shizz at ics.ict.ac.cn, Tel: 86-10-82610254
Secretary: Jianhua Zhang, zhangjh at ics.ict.ac.cn

From ag-applic-2013 at montefiore.ulg.ac.be Wed Mar 20 06:26:40 2013
From: ag-applic-2013 at montefiore.ulg.ac.be (Renaud Detry)
Date: Wed, 20 Mar 2013 11:26:40 +0100
Subject: Connectionists: PhD position in Grasp Learning at the University of Liège/KTH Stockholm
Message-ID: <69F25049-449C-4156-8266-A8DCD47B6362@montefiore.ulg.ac.be>

DOCTORAL POSITION AVAILABLE IN ADAPTIVE ROBOTIC GRASPING
Shared-time between:
- University of Liège, Belgium,
- KTH, Stockholm, Sweden.

The Systems and Modeling Group at the University of Liège, Belgium, and the Centre for Autonomous Systems at KTH, Stockholm, Sweden, are looking for a highly motivated PhD student to work on a research project that focuses on robot learning and autonomous robot grasping, similar in spirit to the projects presented at: http://www.csc.kth.se/~detryr/research.php

The position is available from January 2013. The PhD student will work with both Dr. Renaud Detry and Prof.
Danica Kragic. The student will spend approximately three semesters in Stockholm, and approximately five semesters in Liège. The position comes with a competitive salary.

The successful candidate must have a degree in Engineering, Physics, Maths, Computer Science or a related field, a solid background in machine learning, and fluency in at least one mainstream computer programming language.

Applicants should submit:
- a one-page cover letter describing their background and interests,
- curriculum vitae (including publications),
- contact information of at least one referee,
- a copy of the academic transcripts (i.e., your grades in your bachelor and master degree(s)).

Applications should be sent, in a single PDF document, to: ag-applic-2013 at montefiore.ulg.ac.be
Applications can be sent immediately and will be evaluated until the position is taken.

--
Renaud Detry
Postdoctoral Researcher @ Systmod, University of Liège
Web: http://renaud-detry.net/
Tel: +32 04 366 26 43
Office II.100, Montefiore Institute (Building B28)
Grande Traverse 10, B-4000 Liège, Belgium

From ale at sissa.it Tue Mar 19 10:24:46 2013
From: ale at sissa.it (Alessandro Treves)
Date: Tue, 19 Mar 2013 15:24:46 +0100
Subject: Connectionists: Postdocs with Italian connection sought for the Israeli ICOREs in Computer Algorithms and in Cognition
Message-ID: <20130319152446.Horde.7hE0eB8V4mxRSHUuVkeUOFA@webmail.sissa.it>

The Ministry of Foreign Affairs of Italy (MAE) has identified the recently established Israeli Centers of Research Excellence (I-COREs) as outstanding environments where early-stage researchers could benefit from advanced training while contributing to research and innovation in those Centers. The first four I-COREs operate in key domains of great interest to Italian research. MAE has therefore determined to fund postdoctoral fellowships that can be awarded to young researchers with some connection to Italy, not solely to Italian citizens.

Calls are now open, with Deadline April 15, in particular for:
The I-CORE on Computer Algorithms http://www.icore-algo.org.il/
The I-CORE on Cognitive Science http://www.cognitionicore.org.il/
See http://www.itembassy.com/tel-aviv/ for further details and how to apply.

--
Alessandro Treves http://people.sissa.it/~ale/limbo.html
SISSA - Cognitive Neuroscience, via Bonomea 265, 34136 Trieste, Italy
from Sept 2011 on leave to
+972-3-5104004 ext. 129
Embassy of Italy - Science Office, HaMered 25, 68125 Tel Aviv, Israel

From axel.hutt at inria.fr Thu Mar 21 14:37:53 2013
From: axel.hutt at inria.fr (Axel Hutt)
Date: Thu, 21 Mar 2013 19:37:53 +0100 (CET)
Subject: Connectionists: Postdoc position in data analysis at INRIA - France
In-Reply-To: <1316876380.86986.1315581117473.JavaMail.root@zmbs1.inria.fr>
Message-ID: <1898055418.5998442.1363891073175.JavaMail.root@inria.fr>

------------------------------------------------------------------------------
Postdoc position in recurrence analysis of sleep-EEG at INRIA in Nancy, France
------------------------------------------------------------------------------
A full-time Postdoc position is available in the INRIA team NEUROSYS (http://neurosys.loria.fr/) on data analysis of electroencephalographic data (EEG) observed during sleep of adolescents. The position is financed for 16 months and will start November 1st, 2013. A full description of the position is given under http://www.loria.fr/~huttaxel/INRIA_Postdoc_2013.htm

The candidate should hold a degree in electrical engineering, computer science, neuroscience or physics.
She/he should also demonstrate a strong interest in neuroscience since she/he will cooperate with medical researchers measuring the data. Please send electronically applications including a CV and a list of publications to Axel Hutt (axel.hutt at inria.fr). -- Axel Hutt INRIA CR Nancy - Grand Est Equipe CORTEX 615, rue du Jardin Botanique 54603 Villers-les-Nancy Cedex France http://www.loria.fr/~huttaxel From boularias at gmail.com Wed Mar 20 07:30:59 2013 From: boularias at gmail.com (Abdeslam Boularias) Date: Wed, 20 Mar 2013 12:30:59 +0100 Subject: Connectionists: EXTENDED DEADLINE: ICML 2013 Workshop on Machine Learning For System Identification Message-ID: ICML 2013 Workshop on Machine Learning For System Identification http://mlsysid.tuebingen.mpg.de/ Call for Posters and Papers We solicit submission of extended abstracts or papers discussing high quality research on all aspects of dynamical system modeling and identification with machine learning tools. Both theoretical and applied contributions presenting recent or ongoing research are welcomed. The list of tools, problems, and applications includes, but is not limited to the following: Tools: kernel methods, regularization techniques, Bayesian estimation, deep learning, manifold learning, spectral methods, causal inference, active learning, reinforcement learning. Problems: predictor estimation, state-space model identification, impulse response modeling, non-linear system modeling, experiment design. Applications: robotics, automotive, process control, motion tracking, system biology, computational sustainability. Submissions and Publication A one-page extended abstract suffices for a poster submission. Additionally, we welcome position papers, as well as papers discussing open problems and potential future research directions. Both extended abstracts and position/future research papers will be reviewed by program committee members on the basis of relevance, significance, and clarity. Submissions should be formatted according to the ICML 2013 conference template. The length of abstracts and papers should not exceed 8 pages. Submission website: https://www.easychair.org/conferences/?conf=mlsysid2013 Important Dates Apr 3, 2013 - Deadline of Submission (extended from Mar 20, 2013) Apr 15, 2013 - Notification of Acceptance June 20-21, 2013 - Workshop Francesco Dinuzzo, Abdeslam Boularias and Lennart Ljung From camda at bioinf.jku.at Sat Mar 23 08:45:24 2013 From: camda at bioinf.jku.at (CAMDA 2013) Date: Sat, 23 Mar 2013 13:45:24 +0100 Subject: Connectionists: CAMDA 2013 Challenge: Big Data in Life Sciences Message-ID: <6626B67A-0230-4DF0-9A4B-04F60CEF1842@bioinf.jku.at> CAMDA 2013 Challenge: Big Data in Life Sciences http://www.camda.info The CAMDA conference contains a competitive challenge on Big Data in life sciences. Extracting usable knowledge from Big Data is an extremely pressing topic which requires advanced data mining, machine learning, statistical, and data management techniques. The challenge includes the analysis of large toxicogenomic and genetic data obtained by Next Generation Sequencing (NGS). Relevant tasks for the toxicogenomics data include, but are not limited to, feature selection, classification, regression, and clustering. Relevant tasks for the NGS data set, include, but are not limited to, variant detection, identical by descent detection, structural variants detection, and population genetics. 
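For readers who want to try the toxicogenomics tasks right away, the short sketch below shows one way the LIBSVM-format release described below could be loaded for a binary classification baseline. This is only an illustrative sketch, not part of the official challenge materials: the file name is hypothetical, the binary labels are assumed to be encoded in the file, and Python with scikit-learn is assumed to be available.

    # Illustrative sketch only (not from the CAMDA organizers): load a
    # hypothetical LIBSVM-format toxicogenomics file and run a simple
    # cross-validated binary classification baseline with scikit-learn.
    from sklearn.datasets import load_svmlight_file
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # X is a sparse feature matrix; y holds the binary labels stored in the file.
    X, y = load_svmlight_file("camda2013_tox.libsvm")  # hypothetical file name

    # L2-regularized logistic regression, scored by 5-fold cross-validated AUC.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("mean AUC: %.3f" % scores.mean())

The other tasks listed above (feature selection, regression, clustering) can be tried the same way once the data are in this matrix form.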
To facilitate the data handling, we provide the data set in several formats:
- CSV (toxicogenomics)
- LIBSVM (toxicogenomics)
- VCF (NGS data)
- EXCEL Sheet (annotation data and labels)
which are ready to use, e.g., for binary classification.

The raw data is provided as
- CEL files (Affymetrix microarray measurements)
- BAM files (NGS data)

IMPORTANT DATES
- Abstract submission deadline for oral presentation / 20 May 2013
- Abstract submission deadline for poster presentation / 25 May 2013
- CAMDA Conference / 19-20 July 2013

You can find additional information about the challenge data sets, submissions, etc. at the conference website: http://www.camda.info
We look forward to a lively contest!

The organizers and chairs of CAMDA 2013
Chairs:
Joaquin Dopazo, CIPF, Spain
Sepp Hochreiter, Johannes Kepler University, Austria
David Kreil, Boku University, Austria
Simon Lin, Marshfield Clinic, U.S.A.
Local organizer: Djork-Arné Clevert, Johannes Kepler University, Austria
Contact: camda at bioinf.jku.at
Conference website: http://www.camda.info

From cardoso at bcos.uni-freiburg.de Fri Mar 22 02:07:40 2013
From: cardoso at bcos.uni-freiburg.de (Simone Cardoso de Oliveira)
Date: Fri, 22 Mar 2013 07:07:40 +0100
Subject: Connectionists: Call for applications: Brains for Brains Awards 2013
Message-ID: <514BF52C.3000004@bcos.uni-freiburg.de>

Dear colleagues,
for the fourth time, the Bernstein Association for Computational Neuroscience is announcing the "Brains for Brains Young Researchers' Computational Neuroscience Award". The call is open for researchers of any nationality who have contributed to a peer reviewed publication (as coauthor) or peer reviewed conference abstract (as first author) that was submitted before starting their doctoral studies, is written in English and was accepted or published in 2012 or 2013.

The award comprises 500 € prize money, plus a travel grant of up to 2.000 € to cover a trip to Germany, including participation in the Bernstein Conference 2013 in Tübingen (www.bernstein-conference.de), and an individually planned visit to up to two German research institutions in Computational Neuroscience.

Deadline for application is April 26, 2013. Detailed information about the application procedure can be found under: www.nncn.de/verein-en/brains4brains2013

Best regards,
Simone Cardoso

--
Dr. Simone Cardoso de Oliveira
Bernstein Network Computational Neuroscience
Head of the Bernstein Coordination Site (BCOS)
Albert Ludwigs University Freiburg
Hansastr. 9A
79104 Freiburg, Germany
phone: +49-761-203-9583
fax: +49-761-203-9585
cardoso at bcos.uni-freiburg.de
www.nncn.de
Twitter: NNCN_Germany
YouTube: Bernstein TV
Facebook: Bernstein Network Computational Neuroscience, Germany
LinkedIn: Bernstein Network Computational Neuroscience, Germany
From cie.conference.series at gmail.com Wed Mar 20 21:03:34 2013
From: cie.conference.series at gmail.com (CiE Conference Series)
Date: Thu, 21 Mar 2013 01:03:34 +0000
Subject: Connectionists: CiE 2013: Call for Informal Presentations
Message-ID:

----------------------------------------------------------------------------
COMPUTABILITY IN EUROPE 2013: The Nature of Computation
Milan, Italy
July 1 - 5, 2013
http://cie2013.disco.unimib.it
----------------------------------------------------------------------------

CALL FOR INFORMAL PRESENTATIONS

There is a remarkable difference in conference style between computer science and mathematics conferences. Mathematics conferences allow for informal presentations that are prepared very shortly before the conference and inform the participants about current research and work in progress. The format of computer science conferences with pre-conference proceedings is not able to accommodate this form of scientific communication.

Continuing the tradition of past CiE conferences, this year's CiE conference endeavours to get the best of both worlds. In addition to the formal presentations based on our LNCS proceedings volume, we invite researchers to present informal presentations. For this, please send us a brief description of your talk (between one paragraph and one page) by the DEADLINE: MAY 31, 2013.

Please submit your abstract electronically, via EasyChair, selecting the category "Informal Presentation". You will be notified whether your talk has been accepted for informal presentation usually within a week after your submission.

If you intend to apply for the ASL Student Travel Award, you might need us to confirm that you are going to give a presentation at CiE 2013 (applications of students who are presenting get higher priority). Therefore, we would like to ask you to submit your informal presentations by 26 March so that we can send you the notification before the ASL deadline of 1 April 2013.

GRANTS:
Grants for students, members of the ASL: student members of The Association for Symbolic Logic can apply for an ASL Student Travel Award before the deadline of April 1, 2013.
EATCS Students: The European Association for Theoretical Computer Science has decided to sponsor all students that are EATCS members and willing to attend CiE 2013. The early registration fee for EATCS students is 30 € cheaper than the one for non-members.

__________________________________________________________________________
ASSOCIATION COMPUTABILITY IN EUROPE http://www.computability.org.uk
CiE Conference Series http://www.illc.uva.nl/CiE
CiE 2013 http://cie2013.disco.unimib.it
CiE Membership Application Form http://www.cs.swan.ac.uk/acie
__________________________________________________________________________

From contact2013 at ecvp.uni-bremen.de Thu Mar 21 10:01:23 2013
From: contact2013 at ecvp.uni-bremen.de (ECVP 2013)
Date: Thu, 21 Mar 2013 15:01:23 +0100
Subject: Connectionists: ECVP 2013: New Deadline for Abstract Submission!
Message-ID: <028601ce263c$927334c0$b7599e40$@ecvp.uni-bremen.de>

ECVP 2013, Bremen, Germany, August 25th - 29th, 2013
####################################################################################################################
ECVP NEWS:
- Deadline Extension
- Travel Grants
####################################################################################################################

DEADLINE EXTENSION
The deadline for abstract submission to ECVP 2013 is extended until April 2nd.
IMPORTANT: Please note that this will be the only extension.
Please submit your abstract by first registering at www.ecvp.uni-bremen.de/node/15 and then using the login data which is emailed to you to enter the online submission system. TRAVEL GRANTS You can still aplply for travel grants. Deadline for application is April, 2nd. Please submit your application at www.ecvp.uni-bremen.de/node/42 using the webform. Any inquiries regarding travel grants may be send to ecvptravel at ecvp.uni-bremen.de. IMPORTANT: Due to a really bad word-twisting the mail address for travel grants as listed at node 42 of the website was wrong. In case you have send any mail to this and did not get an answer yet, please send your mail again to the address given above. We really apologise for this! -- ECVP 2013 Organizing Committee Udo Ernst | Cathleen Grimsen | Detlef Wegener | Agnes Janssen Universitaet Bremen / University of Bremen Zentrum fuer Kognitionswissenschaften / Center for Cognitive Sciences Hochschulring 18 28359 Bremen / Germany Website: www.ecvp.uni-bremen.de Facebook: www.facebook.com/EuropeanConferenceOnVisualPerception Contact - emails: contact2013 at ecvp.uni-bremen.de (For any comments, questions or suggestions) symp2013 at ecvp.uni-bremen.de (For organization and submission of symposia) exhibition2013 at ecvp.uni-bremen.de (For any query regarding the exhibition) showtime2013 at ecvp.uni-bremen.de (For proposal submission and any query regarding the SHOWTIME) ecvptravel at ecvp.uni-bremen.de (For any query regarding travel grants) From frank.ritter at psu.edu Wed Mar 20 18:16:58 2013 From: frank.ritter at psu.edu (Frank Ritter) Date: Wed, 20 Mar 2013 18:16:58 -0400 Subject: Connectionists: CogModel notes: ICCM13/BRIMS13/books/outlets/Jobs Message-ID: [please forward to Lapsus or your list related to GDR I3-fer] This is based on the International Cognitive Modeling Conference mailing list that I maintain. I forward messages about twice a year. (this is the second one for ICCM 2013, and it's late.) The first announcement is driving this email -- the call for papers for ICCM 2013 in Ottawa with a new due date. The rest indicate new publication outlets and jobs in Cog Sci and in cognitive modeling. If you would like to be removed, please just let me know. I maintain it by hand to keep it small. I've added a conflict of interest note to each item. I've become more aware of CoI recently. (As an aside, PSU's web site requires exact dollar amounts for each stock owned and consulting fees, does yours? PSU says most universities do, and I know of none that do. Happy to correspond on this.) cheers, Frank Ritter frank.e.ritter at gmail.com http://acs.ist.psu.edu http://www.frankritter.com **************************************************************** 1. ICCM 2013, Ottawa, 11-14 July 2013, Papers due: 5 April 2013 http://www.iccm-conference.org/2013/ 2. ICCM 2012 tutorials proposals call, Ottawa, due: 20 mar 12 http://acs.ist.psu.edu/iccm2013/tutorials-call.html 3. Update on BRIMS 2013 4. How to run studies book, discount code available http://www.sagepub.com/books/Book237263 http://www.frankritter.com/rbs/sage-30pc-flyer.pdf 5. New (6/30) deadline for Brain Corporation Prize in Comp Neuroscience http://www.scholarpedia.org/article/Scholarpedia:2012_Brain_Corporation_Prize_in_Computational_Neuroscience 6. BICA 2013 deadline extension to 15 april 13 14-15 Sept. 2013, Kiev, Ukraine http://bicasociety.org/meetings/2013/ 7. New journal announcement: IEEE Transactions on Human-Machine Systems http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6392959 8. 
Integrated Cognition, in the Fall 2013 AAAI Symposium Series 9. Cog. Sci: the Computational Paradigm Symposium Workshop at IJCNN, 6 aug 2013, Dallas, TX http://www.ijcnn2013.org/files/ws-prog1.pdf 10. Proceedings for Advances in Cognitive Systems conference http://www.cogsys.org/schedule 11. Book chapter on Intelligent tutoring systems available http://walden-family.com/bbn/feurzeig.pdf 12. Call for papers/Fast Publication for BICA journal http://www.journals.elsevier.com/biologically-inspired-cognitive-architectures 13. Advances in Cognitive Systems (journal) http://www.cogsys.org/journal/volume-1 14. CALL FOR PAPERS: Journal of Cognitive Science http://j-cs.org/ 15. 2nd Workshop on LifeLong User Modelling June 2013, Rome, Italy due 1 April 2013 http://lifelogging-workshop.org 16. IITSEC conference 2013 http://www.iitsec.org 17. Springer's Cognitive Computation journal: ToC 4(2)/ June 2012 issue http://www.springerlink.com/content/1866-9956/4/2/ 18. Summer School in Cognitive Linguistics, July 22-26, 2013, Bangor, UK Early registration: 15 april 2013 http://www.bangor.ac.uk/cogling-summerschool 19. 2nd Call for Papers: Conf. on Spatial Info Theory COSIT 2013 http://cosit.info 20. Postdoc and RAs at Wright State caroline.cao at wright.edu 21. Job Openings in the field of Model-Based Human-Centred Design, Germany 22. Cognitive Science Research Position / Opportunity http://www.tier1performance.com/jobs/srjob/56947114 23. Research Positions at the US Air Force Research Laboratory, Dayton, OH 24. 3 year PhD position at the Quality & Usability Lab, TU/Berlin 25. Tenure-track position at Dept of Comp. and Info. Science, Fordham U. 26. Postdoc at the HathiTrust Research Center, U. of Ill. http://bit.ly/WkaTvT 27. Tenure tracked faculty posts, postdocs in Fudan U., China **************************************************************** **************************************************************** 1. ICCM 2013, Ottawa, 11-14 July 2013, Papers due: 5 April 2013 http://www.iccm-conference.org/2013/ [Please note that the paper deadline has been extended to avoid the Easter break] The conference will be held from 11 to 14 July 2013 in Ottawa, Canada at Carleton Universitat. The tutorials will be held 11 July 2013. We hope to see you in Ottawa! The International Conference on Cognitive Modeling (ICCM) is the premier conference for research on computational models and computation-based theories of human behavior. ICCM is a forum for presenting, discussing, and evaluating the complete spectrum of cognitive modeling approaches, including connectionism, symbolic modeling, dynamical systems, Bayesian modeling, and cognitive architectures. ICCM includes basic and applied research, across a wide variety of domains, ranging from low-level perception and attention to higher-level problem-solving and learning. The chairs are: Robert L. West Terrence C. Stewart (tcstewar at uwaterloo.ca) The proceedings from previous conferences are now available at http://iccm-conference.org/previous-conferences [CoI disclosure: program committee and tutorial chair] **************************************************************** 2. ICCM 2012 tutorials proposals call, Ottawa, due: 20 mar 12 http://acs.ist.psu.edu/iccm2013/tutorials-call.html The Tutorials program at the International Conference on Cognitive Modeling (ICCM) 2013 will be held on 11 July 2013. It will provide conference participants with the opportunity to gain new insights, knowledge, and skills from a broad range of areas in the field of cognitive modeling. 
Tutorial topics will be presented in a taught format and are likely to range from practical guidelines to theoretical issues or software. Tutorials at ICCM have been held many times before, and this year's program will be modelled after them and after the series held at the Cognitive Science Conference.

http://acs.ist.psu.edu/iccm2013/tutorials-call.html provides details for submitting a tutorial proposal.

[CoI disclosure: tutorial chair]

****************************************************************
3. BRIMS 2013, likely to be held concurrent with ICCM 2013
http://brimsconference.org/

This is written by Ritter. I thought I should note that BRIMS 2013 was not held in San Antonio as planned. The government sequestration and a general ban on travel by government agencies led to the conference not (yet) being held. It currently looks like much of the BRIMS program will be absorbed into the ICCM meeting in Ottawa. Details are still being arranged, but it looks like ICCM will be bigger and have a wider program than has been typical. What will happen in future years will evolve based on how funding and government stories proceed over the course of this year.

For more information, watch the web sites or write to a chair:
"William G Kennedy" , Program Co-Chair
Robert St Amant , Program Co-Chair
"David Reitter" dreitter at ist.psu.edu, Program Co-Chair
Dan Cassenti , General Chair

[CoI disclosure: program committee]

****************************************************************
4. How to run studies book, review copies and 30% off
http://www.frankritter.com/rbs/sage-30pc-flyer.pdf

Running Behavioral Experiments With Human Participants: A Practical Guide (Ritter, Kim, Morgan & Carlson, 2013) provides a concrete, practical roadmap for the implementation of experiments and controlled observation using human participants. Covering both conceptual and practical issues critical to implementing an experiment, the book is organized to follow the standard process in experiment-based research, covering such issues as potential ethical problems, risks to validity, experimental setup, running a study, and concluding a study. The detailed guidance on each step of an experiment is ideal for those in both universities and industry who have had little or no previous practical training in research methodology. The book provides example scenarios to help readers organize how they run experimental studies and anticipate problems, and example forms that can serve as effective initial "recipes." Examples and forms are drawn from areas such as cognitive psychology, human factors, human-computer interaction, and human-robotic interaction.

You can order copies with 30% off ($27.30) or review copies using this flyer: http://www.frankritter.com/rbs/sage-30pc-flyer.pdf

[CoI disclosure: author, compensated slightly]

****************************************************************
5. New (6/30) deadline for Brain Corporation Prize in Comp Neuroscience
http://www.scholarpedia.org/article/Scholarpedia:2012_Brain_Corporation_Prize_in_Computational_Neuroscience

The Brain Corporation Prize aims to encourage researchers to make freely available the latest and best scholarly information concerning topics in computational neuroscience. To provide more time for submissions, the contest deadline has been extended to June 30th, 2013. The winners will be recognized during the CNS 2013 meeting in Paris.
Presently, the leaders are: 215 votes: http://www.scholarpedia.org/article/Frontal_eye_field 210 votes: http://www.scholarpedia.org/article/SPIKE-distance 42 votes: http://www.scholarpedia.org/article/Brian_simulator We sincerely hope to see your participation -- here are 10 reasons to get involved: 1. Help discover what works in scholarly collaboration -- participate in a global experiment on the future of scholarly research. 2. Add a peer-reviewed article with a famous co-author to your C.V. 3. Support open-access publishing. 4. Help the public -- provide to the world an accurate article on a topic of importance to you. 5. For posterity -- be the author of a review that will be useful for decades to come. 6. To support interdisciplinary research -- encourage others to participate in compiling a free, current, and scholarly online resource. 7. To see your work appear in a normal Google search -- your article will likely appear within the top five search results when its topic is queried. 8. For Curatorship -- become a topic Curator, and help ensure that the world has trustworthy information available to them on a topic of your expertise. 9. To accelerate research -- help science and scholarship advance more quickly by providing an easily accessible and updatable review. 10. To promote scholarly information online -- help resist the glut of redundant and generic online "content" with a substantive, thoughtful, and enduring contribution. Contest rules and guidelines: http://www.scholarpedia.org/article/Scholarpedia:2012_Brain_Corporation_Prize_in_Computational_Neuroscience Dr. Eugene M. Izhikevich Eugene.Izhikevich at braincorporation.com CEO, Brain Corporation Editor-in-chief, Scholarpedia - the peer-reviewed open-access encyclopedia To: comp-neuro at neuroinf.org, Wed, 23 Jan 2013 17:57:10 -0800 [CoI: no relationship] **************************************************************** 6. BICA 2013 deadline extension to 15 april 13 14-15 Sept. 2013, Kiev, Ukraine http://bicasociety.org/meetings/2013/ Following numerous requests, the submission deadline for BICA 2013 has been extended until April 15th: http://bicasociety.org/meetings/2013/ This conference will take place in the best hotel in Kiev on 14-15 of September (Saturday-Sunday). Papers will be published before the conference in a special issue of the quarterly academic journal BICA: http://www.journals.elsevier.com/biologically-inspired-cognitive-architectures/ * Submission categories include Letters (2,500 words, preferred) or Research Papers (around 8,000 words). * All submissions at this time should be made via the journal web site (email submissions are not considered). * Your cover letter should clearly indicate at the beginning that the paper is intended for BICA 2013. * The manuscript should NOT be camera-ready, and the simplest formatting is preferred (details are at the journal web site; please use APA style for references). * Inclusion of the paper in this special issue implies participation in the conference, which should be guaranteed by a nonrefundable payment of the BICA 2013 registration fee (registration opens soon). We are looking forward to seeing you in Kiev in September, --BICA 2013 Organizing Committee [CoI disclosure: previous program committee] **************************************************************** 7. New journal announcement: IEEE Transactions on Human-Machine Systems http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6392959 [this journal can be an outlet for modeling papers!] 
Journal Announcement: IEEE Transactions on Human-Machine Systems
A publication of the IEEE Systems, Man and Cybernetics Society

We are pleased to announce the launch of the IEEE Transactions on Human-Machine Systems, a journal focusing on the dissemination of results in the area of human-machine systems that inform theory and improve engineering practice by:
- taking into account human aspects related to systems including sensory, motor, and cognitive capabilities, knowledge, skills, preferences, emotions, limitations, biases, learning, and adaptation;
- considering human synchronous and asynchronous interactions with each other, intelligent agents, computational support, and assistive devices via associated input and output technologies within the person's operational, organizational, cultural, and regulatory contexts;
- developing, instantiating, testing and refining measures, methods, models, and apparatus that address the points above and that can provide insights given real world imprecision, uncertainty, and constraints that impact human characteristics, performance, behavior, and learning; and
- supporting operational concept development, architecture, design, implementation, and evaluation of dynamic, complex systems that include human participants in their multifaceted roles (such as analyst, decision maker, operator, collaborator, communicator, and learner).

Submissions
IEEE Transactions on Human-Machine Systems (THMS) encourages submission of theoretical and applied work from across a range of methods and application domains. In addition to significant original research articles, THMS also welcomes technical correspondences that provide insight into methods, models, and apparatus, as well as other development topics such as proofs of concept and pilot studies.

Papers can be submitted electronically at: http://mc.manuscriptcentral.com/thms
To read more about THMS, including our scope, information for authors, editorial board membership, history, and related topics, please visit: http://www.ieeesmc.org/publications/ and http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6392959
Published papers can be viewed at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221037

Ellen Bass, Drexel
Editor-in-Chief, IEEE Transactions on Human-Machine Systems
THMS-EIC at ieee.org

[CoI: editorial board member]

****************************************************************
8. Integrated Cognition, in the Fall 2013 AAAI Symposium Series

There will be a workshop on Integrated Cognition at the Fall 2013 AAAI Symposium series. This series of approx. 8 workshops is not announced yet, but will be Friday - Sunday, November 15-17 at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The paper call is not out, but will likely have a submission deadline of around 20 June 2013.

From the proposal: We propose a Symposium on Integrated Cognition - consolidating the functionality and phenomena implicated in natural minds/brains (whether in human or other animal bodies) and/or artificial cognitive systems (whether in virtual humans, intelligent agents or intelligent robots) ... The focus is on how the mind arises from the interaction of its constituent parts, and includes everything implicated in human(-level) performance in complex, possibly social, environments. ... In addition to contributed papers/talks, we will also include a number of crosscutting panels on important topics in integrated cognition, ...
possibilities that have been suggested include: * Approaches to studying integrated cognition * Approaches to building integrated cognition * Approaches to integration in integrated cognition * Combining natural and artificial approaches to integrated cognition * Integration across levels of cognition * Alternative substrates for integrated cognition * Approaches to connecting integrated cognition with the world * Integration across cognitive and non---cognitive (perceptuomotor, affective, physiological, etc.) aspects of integrated cognition * Common approaches to intelligent robots and virtual humans * System development methodologies for integrated cognition * Experimental methodologies for integrated cognition [CoI: steering board member] **************************************************************** 9. Cog. Sci: the Computational Paradigm Symposium Workshop at IJCNN, 6 aug 2013, Dallas, TX http://www.ijcnn2013.org/files/ws-prog1.pdf NSF-sponsored Special Workshop at the International Joint Conference on Neural Networks Peter Erdi (Center for Complex Systems Studies, Kalamazoo C. and Wigner Res. Center for Physics, Hungarian Academy of Sciences, Budapest, Hungary), Organizer Speakers: Paul Thagard, Cog Sci Program, U. of Waterloo, Canada Synthesizing Symbolic and Connectionist Approaches to Cognitive Science Amir Hussain, U. of Stirling, Scotland Cognitive Computation Vassilis Cutsuridis, Institute of Molecular Bio and Biotech at FORTH, Crete, Greece Cognitive Informatics Simona Doboli, CS, Hofstra U. Cognitive Science and Idea Generation Juyang (John) Weng, Dept. of CS and Engineering, Michigan State Cognitive Robotics Christian Lebiere, Psychology, CMU Neurally-inspired modeling of cognitive architectures Barbara Knowlton, Psychology, UCLA Cognitive Science and Soft Computation Gergo Orban, Wigner Res. Centre for Physics, Hungarian Academy of Sci.s, Budapest Bayesian (not necessarily network) model of cognition and perception Steven Bressler, Center for Complex Systems and Brain Sciences, Florida Atlantic U. Cognitive Neurodynamics Luiz Pessoa, Psychology, U. Maryland Integrating Cognition and Emotion Frank Ritter, College of IST, Penn State Cognitive Modeling Cynthia O. Dominguez, Applied Research Associates, Exeter, NH Cognitive Engineering Evening session 20.30 Panel discussion: Teaching cognitive science [CoI: speaker] **************************************************************** 10. Proceedings for Advances in Cognitive Systems conference http://www.cogsys.org/schedule "The First Annual Conference on Advances in Cognitive Systems took place late last year, from December 6 to 8, in Palo Alto, California. The meeting was diverse, intense, engaging, and productive, attracting about 60 registered participants, and including some 24 talks and ten poster presentations." You can find the schedule for the conference, along with links to the associated papers, at http://www.cogsys.org/schedule/. The latter are available at http://www.cogsys.org/journal/volume-2/, the second volume of Advances in Cognitive Systems, the electronic journal associated with the meeting." -Pat Langley [It is related to ICCM but is more interested in human-level AI.] [CoI: I know some board members] **************************************************************** 11. 
Book chapter on Intelligent tutoring systems available http://walden-family.com/bbn/feurzeig.pdf or, as Google puts it: http://www.google.com/url?sa=t&rct=j&q=wally%20feurzeig%20obituary&source=web&cd=11&ved=0CDEQFjAAOAo&url=http%3A%2F%2Fwalden-family.com%2Fbbn%2Ffeurzeig.pdf&ei=T93qUPjcHPLh0wHxh4DYBg&usg=AFQjCNF3iOqE4THnQNYENIijuzZFjEeq3w&bvm=bv.1355534169,d.dmQ This chapter provides a summary of intelligent tutoring systems developed at BBN. There are other chapters available on line there as well that are useful (http://walden-family.com/bbn/). This chapter provides a useful historical and theoretical review of numerous ITSs. It was one of Wally Feurzeig's last papers. His memorial service is scheduled for 25 May in Boston, email me if you want details. [CoI: I got to work with with Wally, it changed my life] **************************************************************** 12. Call for papers/Fast Publication for BICA journal http://www.journals.elsevier.com/biologically-inspired-cognitive-architectures Fast Publication in Biologically Inspired Cognitive Architectures Your research citable online within 4.1 weeks of editorial acceptance! Biologically Inspired Cognitive Architectures aims to offer you the fastest possible speed of publication, without compromising quality. This is a promise the editorial and publishing teams work hard to keep. Every year we aim to improve on the publication speed. Just one of many reasons to submit your priority research to Biologically Inspired Cognitive Architectures. [CoI: I've published in this journal, they are fast, and provide good feedback] **************************************************************** 13. Advances in Cognitive Systems (journal) http://www.cogsys.org/journal/volume-1 This is a new journal. It is more interested in human-level AI than cognitive modeling, but will be a potential outlet for modeling papers. The editorial board overlaps somewhat with the ICCM community. [CoI: I know some board members] **************************************************************** 14. CALL FOR PAPERS: Journal of Cognitive Science http://j-cs.org/ The Journal of Cognitive Science (JCS) is published quarterly (from the year 2011) as the official journal of International Association for Cognitive Science (IACS) by the Institute for Cognitive Science at Seoul National U., located in Seoul, Korea. It aims to publish research articles of the highest quality and significance within the disciplines that form cognitive science, including philosophy, psychology, linguistics, artificial intelligence, neuroscience, anthropology, and education. Submissions that cross traditional disciplinary boundaries in either themes or methods are especially encouraged. Contributions may be in the form of articles, brief reports, reviews, or squibs. The JCS showcases quality research, encourages the exchange of ideas, and illustrates the interdisciplinary work that is the hallmark of cognitive science. Authors who have published in JCS include Paul Smolensky, Alfonso Caramazza, Dedre Gentner, Paul Thagard, and Jean-Pierre Descles. Three consecutive Special Issues on David Chalmers' Computational Theory of Mind and his detailed reply to other scholars are available all online free (2012 vol and 2011 vol) at http://j-cs.org/. JCS vol. 10 (2009) includes the special issues of 'Color in Thought and Language' and 'Quantification in East Asian Languages,' and JCS vol. 11, Issue 1 (2010) is the special issue of 'Reading Development and Reading Disorders in Asian Languages.' 
Editor-in-Chief: Chungmin Lee, Seoul National U. Editors: Gualtiero Piccinini, U. of Missouri - St. Louis Naomi Miyake, U. of Tokyo Koiti Hasida, National Institute of Advanced Industrial Science and Technology, Japan Kyoung-Min Lee, Seoul National U. The Editorial Board and Advisory Editorial Board: http://j-cs.org/editors/editors.php. Submission Guidelines: All submissions must be in English, written clearly and in sufficient detail that referees can assess the merits of the work. Papers should be no longer than 10,000 words and should conform to the JCS style guide( http://j-cs.org/). Authors should send an electronic copy (both MS Word and PDF files) of their submission to j-cs at j-cs.org and clee at snu.ac.kr. [CoI: no relationship] **************************************************************** 15. 2nd Workshop on LifeLong User Modelling June 2013, Rome, Italy due 1 April 2013 http://lifelogging-workshop.org CALL FOR PAPERS 2nd Workshop on LifeLong User Modelling in conjunction with UMAP, June, 2013, Rome, Italy ============================ Deadline for paper submission: April 01, 2013 http://lifelogging-workshop.org We are pleased to announce the 2nd Workshop on on LifeLong User Modelling in conjunction with UMAP 2013. Nowadays, we are surrounded by technology that assists us in our everyday life. We use GPS devices to navigate from A to B, we use all kind of sensors to track our sport activities, we query the WWW for information while on the go and we use all kinds of devices and software to communicate with our friends and family to share opinions, pictures, etc. With today's technology, we have the capability to automatically record at large-scale the places that we have been to, things we have seen, people we communicate with and how active we are - we're already creating a lifelog. This creation of lifelogs offers new possibilities for personalization but the resulting data volume raises new challenges. Analyzing this large data corpus will enable us to better understand ourselves: What are my habits and interests? Or, even more specific: Do I live a healthy life? Answering these questions can lead to a more conscious lifestyle. One big challenge is the creation and management of long term, even life long, user models that capture salient aspects about the user over very long periods of time, possibly spanning periods from early childhood to old age. Further, these models have to handle changing interests over time. Also, such lifelogging models have to be usable by different applications. Other challenges pertain processing big data and identifying user interests, skills etc. and their usage in real world systems like health or recommendation systems. Following the successful first Workshop on Lifelong User Modelling which was held in conjunction with UMAP 2009 (http://rp-www.cs.usyd.edu.au/~llum/2009_UMAP09_llum/), this workshop aims to engage researchers from both user modelling and lifelogging communities to discuss emerging research trends in this field. Invited Speaker: Dr. Cathal Gurrin, Dublin City U., Ireland Title: Experience of a Lifelogger: Tasks and Challenges TOPICS -------------------- The workshop aims at improving the exchange of ideas between the different research communities and practitioners involved in the research on user modeling and lifelogging. The workshop will focus on the following key questions: - What lifelogging techniques exist that can benefit from long-term user modeling? 
- How can user interfaces assist to explore lifelogs and/or the underlying user models? - What personalisation techniques can be used in the context of life logging? - How should privacy issues be addressed when it is possible to create detailed user models covering every aspect of one's life? - What are the particular representational requirements for life-long user modeling? - What are the requirements for enabling a life-long user model to be useful for a range of applications? - Which aspects need to be part of the foundation design of technical solutions that will ensure the user's privacy over their life-long user model? - How will we ensure users can control and share their life-long user model effectively? - What are the relevant existing standards that should be part of life-long user modelling and where is there a need for additional standards? SUBMISSION AND PUBLICATION -------------------- All papers must represent original and unpublished work that is not currently under review. All submissions will be reviewed by at least 3 members of the workshop committee and will be evaluated according to their significance, originality, technical content, style, clarity, and relevance to the workshop. At least one author of each accepted paper is expected to attend the workshop. Full papers should be 4-10 pages. Submissions must adhere to the Springer LNCS format (see the example document with author instructions), and be made through the EasyChair conference system. https://www.easychair.org/conferences/?conf=lum2013 Accepted papers will be published online in a combined UMAP Workshop & Poster Proceedings volume of the CEUR workshop proceedings. IMPORTANT DATES -------------------- Submissions due: April 01 Notification: May 01 Workshop day: tba ORGANIZERS -------------------- Frank Hopfgartner, DAI Labor, TU Berlin, Germany Judy Kay, U. of Sydney, Australia Bob Kummerfeld, U. of Sydney, Australia Till Plumbaum, DAI Labor, TU Berlin, Germany CONTACT -------------------- Web: http://lifelogging-workshop.org Twitter: @lifeloggingWS Mail: info at lifelogging-workshop.org Till Plumbaum Co-Director Competence Center Information Retrieval & Machine Learning till.plumbaum at dai-labor.de Fon +49 (0) 30/314 -74 068 from: CHI-ANNOUNCEMENTS at LISTSERV.ACM.ORG [CoI: no relationship] **************************************************************** 16. IITSEC conference http://www.iitsec.org The Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) is the premier conference in the world for advancing the interests of the modeling, simulation, training and education communities. Each year, over 18,000 people congregate together to promote cooperation in support of the sciences behind learning and to exchange information, share knowledge, align business interests, and in general stimulate the growth of the industry. I/ITSEC papers are an integral component of the conference and represent the best technical and scientific tools, technologies, methods, processes, and systems our industry has to offer. The theme selected for this year's conference; "Concepts and Technologies: Empowering an Agile Force", emphasizes our commitment to exploring common challenges and developing innovative learning solutions that help maintain a professional and efficient workforce. Being an author at I/ITSEC puts you at the forefront of our industry and presents an ideal venue to interface with key decision makers and other like-minded individuals. 
In writing a paper for I/ITSEC, your ideas, concepts and technologies will be exposed to leading experts across government, academia, industry, research organizations and the military. For the 2013 conference, we welcome papers that discuss new ideas, new ways of doing things, and new technologies that continue to move our industry forward. It's important to note that our audience is not interested in sales pitches or marketing statistics. Rather, I/ITSEC papers are typically focused on the science behind the technology. Please review the attached Call for Papers for additional information about what each subcommittee is looking for. [their submission is passed, but I note it for next year] 21 January Abstract Submittal Opens 25 February Abstract Submittal Closes NLT 5 April Authors Notified 13 May Paper/Draft Tutorial Presentation Submittal Opens 12 July Paper/Draft Tutorial Presentation Submittal Closes NLT 29 July Authors Notified 24 June Clearance Forms Due 19 August Presentation Submittal Opens 26 August Paper Revisions Due 27 September Presentation submission closes 2 December Speakers' Meeting and Reception [CoI: no relationship] **************************************************************** 17. Springer's Cognitive Computation journal: ToC 4(2)/ June 2012 issue http://www.springerlink.com/content/1866-9956/4/2/ Springer's Cognitive Computation j.: ToC 4(2)/ June 2012 & First ISI Impact Factor! The individual list of published articles (Table of Contents) for 4(2) / June 2012 can be viewed http://www.springerlink.com/content/1866-9956/4/2/ (and also here as plaintext) A list of the most downloaded articles (which can always be read for free): http://www.springer.com/biomed/neuroscience/journal/12559#realtime Other 'Online First' published articles not yet in a print issue: http://www.springerlink.com/content/121361/?Content+Status=Accepted ======================================================= NEW: First ISI Impact Factor for Cognitive Computation of 1.00 for 2011! ======================================================= As you may know, Cognitive Computation was recently selected for coverage in Thomson Reuter's products and services. Beginning with V.1 (1) 2009, this publication is now indexed and abstracted in: Science Citation Index Expanded (also known as SciSearch(R)) Journal Citation Reports/Science Edition Current Contents(R)/Engineering Computing and Technology Neuroscience Citation Index(R) For further information on the journal and to sign up for electronic "Table of Contents alerts" please visit the Cognitive Computation homepage: http://www.springer.com/12559 or follow us on Twitter at: http://twitter.com/CognComput for the latest On-line First Issues. For any questions with regards to LinkedIn and/or Twitter, please contact Springer's Publishing Editor: Dr. Martijn Roelandse: martijn.roelandse at springer.com Finally, we would like to invite you to submit short or regular papers describing original research or timely review of important areas - our aim is to peer review all papers within approximately six weeks of receipt. 
We also welcome relevant high quality proposals for Special Issues - five are already planned for 2012-13, including a new special issue to celebrate the work of the late Professor John Taylor, founding Chair of Cognitive Computation's Editorial Advisory Board, CFP can be found here (with a submission deadline of 1 Sep 2012): http://www.springer.com/cda/content/document/cda_downloaddocument/CogComp-Special-Issue-cfp-1-Vass-rev-Amir.doc?SGWID=0-0-45-1326237-p173836203 Amir Hussain, PhD (Editor-in-Chief: Cognitive Computation) ahu at cs.stir.ac.uk (U. of Stirling, Scotland, http://www.cs.stir.ac.uk/~ahu/) Igor Aleksander, PhD (Honorary Editor-in-Chief: Cognitive Computation) ---------------------------------------------------------------- Table of Contents: Springer's Cognitive Computation, Vol.4, No.2 / June 2012 ---------------------------------------------------------------- A Time-Dependent Saliency Model Combining Center and Depth Biases for 2D and 3D Viewing Conditions J. Gautier & O. Le Meur http://www.springerlink.com/content/p487157836305731/ CO-WORKER: Toward Real-Time and Context-Aware Systems for Human Collaborative Knowledge Building Stefano Squartini & Anna Esposito http://www.springerlink.com/content/f69j56942733571u/ Extended Sparse Distributed Memory and Sequence Storage Javier Snaider & Stan Franklin http://www.springerlink.com/content/nw6327w8663q785t/ Qualitative Information Processing in Tripartite Synapses: A Hypothetical Model Bernhard J. Mitterauer http://www.springerlink.com/content/y135h23114j17u55/ An Information Analysis of In-Air and On-Surface Trajectories in Online Handwriting Enric Sesa-Nogueras, Marcos Faundez-Zanuy & Jifii Mekyska http://www.springerlink.com/content/m43370741458736g/ Non-Classical Connectionist Models of Visual Object Recognition Tarik Hadzibeganovic & F. W. S. Lima http://www.springerlink.com/content/913t237184319875/ To: connectionists at cs.cmu.edu, comp-neuro at neuroinf.org [CoI: no connection] **************************************************************** 18. Summer School in Cognitive Linguistics, July 22-26, 2013, Bangor, UK Early registration: 15 april 2013 http://www.bangor.ac.uk/cogling-summerschool Summer School in Cognitive Linguistics July 22-26, 2013, Bangor U., UK The Summer School in Cognitive Linguistics is a one-week international programme held at Bangor U. in July 2013. The Summer School will consist of 16 courses on topics in cognitive linguistics and will be taught by leading researchers in the field. Our teaching faculty will be drawn from across the cognitive sciences and include local instructors as well as distinguished researchers from outside Bangor. The Summer School will also feature keynote speeches by Gilles Fauconnier, Adele Goldberg, and Vyvyan Evans, and a poster session during which participants can present their work and obtain feedback. Teaching faculty: - Benjamin Bergen (U. of California, San Diego) - Silke Brandt (Lancaster U.) - Daniel Casasanto (New School for Social Research, New York) - Alan J. Cienki (Vrije Universiteit Amsterdam) - Ewa Dabrowska (Northumbria U.) - Christopher Hart (Northumbria U.) - Willem Hollmann (Lancaster U.) - June Luchjenbroers (Bangor U.) - Laura Michaelis (U. of Colorado, Boulder) - Aliyah Morgenstern (Universite Sorbonne Nouvelle - Paris 3) - Patrick Rebuschat (Bangor U.) - Gabriella Rundblad (King's College London) - Christopher Shank (Bangor U.) - Luc Steels (Vrije Universiteit Brussels) - Thora Tenbrink (Bangor U.) - Alan Wallington (Bangor U.) 
This event provides a unique opportunity for students and researchers to get a snapshot of the exciting work done in cognitive linguistics and to discuss their research. It is also a wonderful opportunity to visit North Wales and to enjoy some of the most beautiful landscapes and historical sites in the United Kingdom. Registration opens in October 2012 and closes in June 2013. Early-bird rates are available for participants who register by April 15, 2013. - Early-bird fee with accommodation: ?475* - Early-bird fee without accommodation: ?375 *includes transfer to/from Manchester airport For more information, please consult the Summer School website (www.bangor.ac.uk/cogling-summerschool) or email the School Director, Dr. Patrick Rebuschat (p.rebuschat at bangor.ac.uk). ------------------------------------------------ Thora Tenbrink, t.tenbrink at bangor.ac.uk +44 (0)1248 38 2263 School of Linguistics & English Language/ Ysgol Ieithyddiaeth ac Iaith Saesneg Bangor U./ Prifysgol Bangor www.bangor.ac.uk/linguistics [CoI: no connection] **************************************************************** 19. 2nd Call for Papers: Conf. on Spatial Info Theory COSIT 2013 http://cosit.info Second announcement - and updated information - for 11th International Conference on Spatial Information Theory, COSIT 2013 September 2-6, 2013, Scarborough, UK. Contact: cosit2013 at exeter.ac.uk Spatial information theory is concerned with all aspects of space and spatial environments as experienced and represented by humans and also by other animals and artificial agents. The scope of the conference includes both applications to specific domains and also the development of general theories of space and spatial information. Papers may address aspects of spatial information from the viewpoint of any discipline including (but not limited to) the following. Cognitive, Perceptual, and Environmental Psychology Geography and Geoinformation Science Computer Science, Artificial Intelligence, and Cognitive Science Mathematics, Logic, Philosophy and Ontology Engineering and Human Factors Cognitive Anthropology, Psycholinguistics and Linguistics Architecture, Planning, and Environmental Design Papers will be selected through a rigorous review of full papers based on relevance to the conference, scientific significance, novelty, relation to previously published literature, clarity of presentation, and interdisciplinary context. The proceedings will be published by Springer in the Lecture Notes in Computer Science (LNCS) series. Papers should not exceed 20 pages in the LNCS format. Submissions should be uploaded via EasyChair: https://www.easychair.org/conferences/?conf=cosit13 There will also be the opportunity to present a poster. For this, a one page description is requested, which should be sent by email to cosit2013 at exeter.ac.uk. Since 1993 the COSIT series has been one of the most important events in this highly interdisciplinary area. An idea of the conference's orientation can be gained from the previous COSIT proceedings published by Springer in the LNCS series. 
The following (non-exclusive) topics are indicative of the fields of interest: activity-based models of spatial knowledge cognitive structure of spatial knowledge cognitive vision cooperative work with spatial information events and processes in geographic space and time incomplete or imprecise spatial knowledge knowledge representation for space and time languages of spatial relations naive geography/behavioral geography navigation and wayfinding, including robot navigation ontology of space presentation and communication of spatial information qualitative and commonsense spatial representation quality issues in geographic information semantics of geographic information social and cultural organization of space spatial and temporal language spatial aspects of social networks spatial data integration/interoperability spatial decision-support systems structure of geographic information theory and practice of spatial and temporal reasoning time in geographic information user-interface design/spatialization of interfaces virtual spaces We are happy to announce the following keynote speakers for this year's COSIT: Karen Emmorey, Director of the Laboratory for Language and Cognitive Neuroscience at San Diego State U. Jens Riegelsberger, Google Geo User Research Team Trevor Bailey, College of Engineering, Mathematics and Physical Sciences, U. of Exeter, UK Three full-day workshops and two half-day tutorials will be offered on the first day of the conference. Additionally there will be a doctoral colloquium after the conference, which provides a forum for PhD students working on any aspect of spatial information. The workshops are: Eye Tracking for Spatial Research Organisers: Peter Kiefer, Ioannis Giannopoulos, Martin Raubal, and Mary Hegarty Visually-Supported Reasoning with Uncertainty Organisers: Jennifer Smith, Susanne Bleisch, Matt Duckham, and Alexander Klippel Spatio-temporal theories and research for environmental, urban and social sciences: Where do we stand? Organisers: Christophe Claramunt, Kathleen Stewart, Mike Worboys, and Stephan Winter The tutorials are: Graphs and their embeddings as found in spatial information theory Instructor: Michael Worboys Qualitative Spatial Reasoning and the SparQ Toolbox Instructors: Diedrich Wolter, Jan Oliver Wallgrun, and Reinhard Moratz The conference will be held at the Royal Hotel, Scarborough, North Yorkshire, UK. The town of Scarborough is a characterful Victorian seaside resort on the East coast of England [Stoker's Dracula landed there-fer]. There are good road and rail links to the rest of the UK including a direct train service from Manchester Airport which has flights from many international airports. Registration fees shall be kept as low as possible and will be posted on the website cosit.info shortly. Important dates March 4, 2013 - Full paper submission April 20, 2013 - Notification of acceptance June 10, 2013 - Poster submission June 17, 2013 - Camera-Ready copy of accepted full papers due September 2, 2013 - Workshops and Tutorials September 3-5, 2013 - Conference September 6, 2013 - Doctoral Colloquium General Chairs Brandon Bennett, U. of Leeds, UK Antony Galton, U. of Exeter, UK Program Chairs John Stell, U. of Leeds, UK Thora Tenbrink, Bangor U., UK Sponsorship Chair Zena Wood, U. of Exeter, UK Thora Tenbrink t.tenbrink at bangor.ac. 44 (0)1248 38 2263 [CoI: no relationship] **************************************************************** 20. 
Postdoc and RAs at Wright State caroline.cao at wright.edu Postdoctoral Research Fellow and Research Assistant in Human Factors in Surgical Simulation and Training Department of Biomedical, Industrial and Human Factors Engineering Wright State U. We are seeking several highly qualified postdoctoral research associates (2) and research assistants (3) to perform research in surgical simulation and training. This multidisciplinary research is funded by 3 NIH R01 grants and is multi-institutional, involving engineers and physicians from Rensselaer Polytechnic Institute, Wright State U., Beth Israel Deaconess Medical Center, Massachusetts General Hospital, Brigham & Women's Hospital, Tufts Medical Center, and the Cambridge Health Alliance Hospitals. Successful candidates will hold one-year appointments, renewable for four years. Job Title: Postdoctoral Research Associate or Research Assistant Qualifications: 1. Earned Ph.D. degree in human factors engineering, experimental psychology, biomedical engineering, computer science, or equivalent, with an interest in medical devices and systems design, virtual reality simulation, haptics, and/or human performance evaluation and training. 2. Ability to work independently as well as collaboratively on research projects. 3. Excellent communication skills, both verbal and written. 4. Experience in conducting research with human and animal subjects, using both quantitative and qualitative methodologies, and the IRB and IACUC process. 5. Ability to prioritise tasks, manage team members, and disseminate results in a timely manner. 6. Experience with virtual or augmented reality, haptic devices, HCI and UI design, programming in C, C++, OpenGL, and statistical data analysis packages such as SAS, SPSS, or R. Research Assistant Qualifications: 1. Earned Bachelor's or Master's degree in human factors engineering, experimental psychology, biomedical engineering, computer science, or equivalent, with an interest in medical devices and systems design, virtual reality simulation, haptics, and/or human performance evaluation and training. 2. Ability to work independently as well as collaboratively on research projects. 3. Excellent communication skills, both verbal and written. 4. Experience in conducting research with human and animal subjects, using both quantitative and qualitative methodologies. 5. Experience with virtual or augmented reality, haptic devices, HCI and UI design, programming in C, C++, OpenGL, and statistical data analysis packages such as SAS, SPSS, or R. Qualified and interested candidates are invited to send a copy of their CV, along with the name and contact information of three references, to Dr. Caroline Cao at caroline.cao at wright.edu with the subject line "NIH application" or "NIH Research Assistant application. Review of applications will begin immediately until positions are filled. [CoI: I know Dr. Cao] **************************************************************** 21. 
Job Openings in the field of Model-Based Human-Centred Design, Germany We have five open positions in the field of model-based human-centred design at OFFIS, Germany: 1) Post-Doc with Management Role: Model-Based Design Processes for Human-Machine Systems http://www.offis.de/stellenangebot_detail/stellenangebote/post-doc-with-management-role-design-processes-for-human-machine-systems.html 2) Research Assistant: Studying and Modelling Interaction between Aircraft Pilots and Pilot Assistance Systems http://www.offis.de/stellenangebot_detail/stellenangebote/research-assistant-fm-studying-and-modelling-interaction-between-aircraft-pilots-and-pilot-ass-1.html 3) Research Assistant: Studying and Modelling Interaction between Car Drivers and Driver Assistance Systems http://www.offis.de/stellenangebot_detail/stellenangebote/research-assistant-fm-studying-and-modelling-interaction-between-car-drivers-and-driver-assista-1.html 4) Research Assistant: Pilot State and Intention Inference for Adaptive Pilot Assistance Systems http://www.offis.de/stellenangebot_detail/stellenangebote/research-assistants-fm-pilot-state-and-intention-inference-for-adaptive-pilot-assistance-system-1.html 5) Research Assistant: Development of a software- and hardware-based simulation environment for adaptive pilot assistance systems http://www.offis.de/stellenangebot_detail/stellenangebote/research-assistants-fm-pilot-state-and-intention-inference-for-adaptive-pilot-assistance-system-1.html Dr. Andreas Luedtke Group Manager Human-Centred Design OFFIS FuE Bereich Verkehr | R&D Division Transportation Escherweg 2 - 26121 Oldenburg - Germany Phone: +49 441 9722-530 luedtke at offis.de http://www.offis.de CHI-JOBS at LISTSERV.ACM.ORG, Mon, 11 Feb 2013 [CoI: no relationship] **************************************************************** 22. Cognitive Science Research Position / Opportunity http://www.tier1performance.com/jobs/srjob/56947114 We have a research scientist position (an entry to mid-career position within the field of cognitive science or closely related with a focus on interactive learning environments, simulations, and/or human performance modeling) for which we are actively seeking high quality candidates. If you are aware of someone who may be a fit and wants to be part of a fun and growing company (inc 5000 6 times and best places to work), I would greatly appreciate you forwarding a reference. Feel free to forward the listing. We (TiER1) have offices in Cincinnati, Dayton, Denver, Pittsburgh, and Chicago. We are a human performance consulting firm and offer services in strategic change, corporate learning, and organizational effectiveness. We are about 75% commercial services and 25% government services/research. Our research division focuses on adaptive and accelerated learning solutions including engaging, game-based learning solutions. There is no closing date. We usually keep looking until we find a great person that is a good cultural fit. Stuart Rodgers Managing Director TiER1 Performance Solutions 100 E. Rivercenter Blvd., Suite 100, Covington, KY 41011 o: 859.663.2114 | ext 2228 | m: 937.903.0558 s.rodgers at tier1performance.com http://www.tier1performance.com/ soar-group at lists.sourceforge.net 3 Jan 2013 [CoI: I know Stu] **************************************************************** 23. Research Positions at the US Air Force Research Laboratory (Please note, individuals must be U.S. citizens or permanent legal residents of the United States to be eligible for these positions) The U.S. 
Air Force Research Laboratory's Cognitive Models and Agents Branch has a variety of research positions available for talented cognitive, computational, and computer scientists interested in working on basic and applied cognitive science research. Full-time, paid positions range from undergraduate and graduate-level internships and research assistantships, to post-doctoral research appointments, to visiting faculty appointments. Salaries are commensurate with experience. We conduct empirical, computational, and mathematical research in the cognitive sciences to develop valid models of the human mind. We are committed to scientific excellence and technological innovation to improve the operational efficiency and effectiveness of the people defending our nation. We hire motivated, skilled, productive people who share our passion for this mission. There are research efforts underway in a variety of basic and applied research areas. Brief descriptions of current projects are available here: http://palm.mindmodeling.org/palmListings/ Anyone interested in working with us on one or more of our ongoing research efforts is encouraged to contact the PI for that particular research area as soon as possible. Email addresses are available on the website. Glenn Gunzelmann, Ph.D. Senior Research Psychologist S&T Advisor, Cognitive Models & Agents Branch 711 HPW/RHAC Wright-Patterson AFB, OH 45433-7905 Phone: (937) 938-3554 glenn.gunzelmann at wpafb.af.mil soar-mailing list, Fri, 18 Jan 2013 16:15:50 -0500 [CoI: have done/will do projects with them] **************************************************************** 24. 3 year PhD position at the Quality & Usability Lab, TU/Berlin The Quality and Usability Lab of TU Berlin seeks to assign a THREE-YEAR PHD POSITION in the area of AUTOMATIC EVALUATION OF MODEL-BASED USER INTERFACES. The position is available in the frame of a project funded by Deutsche Forschungsgemeinschaft (DFG) and is conducted in collaboration with the Distributed Artificial Intelligence Laboratory at TU Berlin (DAI-Labor). The successful candidate will be employed as a "Wissenschaftlicher Mitarbeiter" (German scientific researcher, salary according to the TVL 13 scale). Applications including a CV, a motivation letter, copies of the most important certificates and 1-2 references should be sent as soon as possible to Irene Hube-Achter (irene.hube at telekom.de) TU Berlin seeks to increase the percentage of female employees and particularly encourages female candidates to apply. In case of equal qualifications, female candidates will be preferred. Handicapped candidates will be preferred in case of equal qualifications. Project description The project aims at combining a framework for model-based development of user interfaces (MASP platform) with a framework for model-based evaluation of user interfaces (MeMo workbench). In MASP, user interfaces are specified in the form of models following the Cameleon reference framework. The actual interface is rendered at runtime, taking into account context parameters such as the screen size. In MeMo, similar models are used as a basis for simulating the interaction behavior of users with applications. Using such simulations, the user interface can be tested easily during development time. In the project, a focus is set on the analysis and modeling of user errors and error recovery strategies. In the beginning, the focus of the candidate will be on user tests conducted to learn about the behavior of real users. 
In the successive experimentation phase, different user simulation approaches will be implemented and evaluated by the candidate in collaboration with DAI-Labor. The candidate should be able to program in Java and show genuine interest in user interface evaluation. The willingness to work in an international and interdisciplinary team is a must. Applicants are expected to be available in the very near future. Further information on the Quality and Usability Lab and our research topics can be found under www.qu.tlabs.tu-berlin.de. Klaus-Peter Engelbrecht Quality and Usability Lab Telekom Innovation Laboratories TU Berlin D-10587 Berlin, Germany +49 308 3535 8486 (Phone) klaus-peter.engelbrecht at telekom.de http://www.qu.tlabs.tu-berlin.de from: CHI-JOBS at LISTSERV.ACM.ORG, 22 Feb 2013 [CoI: no relation] **************************************************************** 25. Tenure-track position at Fordham U., Dept of Comp. and Info. Science [I send this late, but note that there is not a closing date] Fordham U. Assistant Professor, Computer & Information Science The Department of Computer and Information Science (CIS) invites applications for a tenure-track Assistant Professor to begin in September 2013. A Ph.D. in Computer Science, Information Science, Informatics, or closely related field is required. The position requires excellence in teaching undergraduate and graduate courses, good communication skills, and demonstrated research potential with the ability to attract external research funding. We are interested in candidates with expertise in computational neuroscience, systems neuroscience, or other closely related areas such as neuroinformatics, brain and cognitive science, or cognitive computing and informatics. The CIS department offers graduate and undergraduate programs at Fordham's Rose Hill campus in the Bronx, Lincoln Center campus in Manhattan, and Westchester campus in West Harrison, NY. For information about the department please visit http://www.cis.fordham.edu. Review of applications will begin February 1st, 2013. Preferably submit your application electronically using the system at https://secure.interfolio.com/apply/15923. Alternatively you may send a letter of application, research summary, curriculum vitae, statement of teaching philosophy, and three letters of reference to faculty_search at cis.fordham.edu, or to: Faculty Search Committee Chair, CIS Department Fordham U., JMH 340 441 E. Fordham Road Bronx, NY 10458 Fordham is an independent, Catholic U. in the Jesuit tradition that welcomes applications from men and women of all backgrounds. Fordham is an Equal Opportunity/Affirmative Action Employer. http://www.neuroinf.org/mailman/listinfo/comp-neuro Date: 3 Jan 2013 From: "Xiaoxu Han [Staff/Faculty [A&S]]" To: comp-neuro at neuroinf.org [CoI: no relationship] **************************************************************** 26. Postdoc at the HathiTrust Research Center, U. of Ill. http://bit.ly/WkaTvT The HathiTrust Research Center (HTRC) is funding a postdoctoral position for up to three years at the U. of Illinois. This position will be located at both at the Center for Informatics Research in Science and Scholarship (CIRSS) at the Graduate School of Library and Information Science (GSLIS) and the U. Library's Scholarly Commons. The successful candidate will join a multidisciplinary group of faculty and doctoral students formulating the research agenda and the future of the HTRC and will design services within the Scholarly Commons for scholars using the HTRC. 
The successful candidate may also choose to participate in the Council on Library and Information Resources (CLIR) Postdoctoral Fellowship Program. Possible areas for postdoctoral research include but are not limited to: digital humanities, data curation, data modeling, metadata, machine learning, data mining, and text analysis. To discuss this post informally, candidates may contact J. Stephen Downie, Professor and Associate Dean for Research at GSLIS, Co-Director of the HathiTrust Research Center (jdownie at illinois.edu).
J. Stephen Downie, PhD
Graduate School of Library and Information Science
U. of Illinois at Urbana-Champaign
(217) 649-3839
NEMA Project Home: http://nema.lis.uiuc.edu
[CoI: alumni of UofI, heard about this 5 feb 13]
****************************************************************
27. Tenure-track faculty posts and postdocs at Fudan U.
The Centre for Computational Systems Biology (http://ccsb.fudan.edu.cn/) and the Shanghai Centre for Mathematical Sciences (see also http://www.ams.org/notices/201206/rtx120600881p.pdf) at Fudan U. are planning to make 3-5 new PI appointments in Computational Biology this year. There are also a number of postdoc positions in the two centres. Depending on a candidate's experience, the rank is open and salary is negotiable. Applicants using computational or mathematical models and/or data analysis approaches are encouraged to apply. All areas of biomedical science will be considered. These include but are not limited to bioinformatics and genomics, cellular and molecular biology, computational neuroscience, and a systems approach to cancer/brain diseases. The appointee will be expected to develop a rigorous research program, with some teaching at both graduate and undergraduate levels. The appointment requires a 9-month commitment per year to Fudan U. Enquiries and applications can be made by email to zqy at fudan.edu.cn. The positions will be open until filled. Applications should include a CV (including bibliography), a brief statement of research and teaching interests, and copies of representative scholarly papers. Candidates should also arrange for three letters of reference to be sent. Fudan U. is one of the leading institutions in China and is located in Shanghai, one of the most dynamic cities in the world. With substantial support from the Chinese government, Fudan U. aims to become one of the highest-ranked universities in the world.
J. Feng, Prof.
of Biology, Computer Science and Mathematics jianfeng feng jianfeng64 at gmail.com Warwick U., UK http://www.dcs.warwick.ac.uk/~feng Fudan U., PR China http://ccsb.fudan.edu.cn/ _______________________________________________ Comp-neuro mailing list Wed, 13 Feb 2013 Comp-neuro at neuroinf.org http://www.neuroinf.org/mailman/listinfo/comp-neuro [CoI: no relationship] *************************************************************** -30- From grlmc at urv.cat Sat Mar 23 11:52:25 2013 From: grlmc at urv.cat (GRLMC) Date: Sat, 23 Mar 2013 16:52:25 +0100 Subject: Connectionists: SSTiC 2013: 2nd registration deadline 26 March Message-ID: <1018EE839BF54D5781ED6241EAE6E3EF@Carlos1> *To be removed from our mailing list, please respond to this message with UNSUBSCRIBE in the subject* ********************************************************************* 2013 INTERNATIONAL SUMMER SCHOOL ON TRENDS IN COMPUTING SSTiC 2013 Tarragona, Spain July 22-26, 2013 Organized by Rovira i Virgili University http://grammars.grlmc.com/SSTiC2013/ ********************************************************************* +++ 2nd registration deadline: March 26 +++ AIM: SSTiC 2013 will be an open forum for the convergence of top class well recognized computer scientists and people at the beginning of their research career (typically PhD students) as well as consolidated researchers. SSTiC 2013 will cover the whole spectrum of computer science by means of 74 six-hour courses dealing with hot topics at the frontiers of the field. By actively participating, lecturers and attendees will share the idea of scientific excellence as the main motto of their research work. ADDRESSED TO: Graduate students from around the world. There are no pre-requisites in terms of the academic degree the attendee must hold. However, since there will be several levels among the courses, in the description of some of them reference may be made to specific knowledge background. SSTiC 2013 is appropriate also for people more advanced in their career who want to keep themselves updated on developments in the field. Finally, senior researchers will find it fruitful to listen and discuss with people who are main references of the diverse branches of computing nowadays. REGIME: 8 parallel sessions will be held during the whole event. Participants will be able to freely choose the courses they will be willing to attend as well as to move from one to another. VENUE: Palau Firal i de Congressos de Tarragona Arquitecte Rovira, 2 43001 Tarragona http://www.palaucongrestgna.com COURSES AND PROFESSORS: Divyakant Agrawal (Santa Barbara) [intermediate] Scalable Data Management in Enterprise and Cloud Computing Infrastructures Shun-ichi Amari (Riken) [introductory] Information Geometry and Its Applications James Anderson (Chapel Hill) [intermediate] Scheduling and Synchronization in Real-Time Multicore Systems Pierre Baldi (Irvine) [intermediate] Big Data Informatics Challenges and Opportunities in the Life Sciences Yoshua Bengio (Montr?al) [introductory/intermediate] Deep Learning of Representations Stephen Brewster (Glasgow) [advanced] Multimodal Human-Computer Interaction Bruno Buchberger (Linz) [introductory] Groebner Bases: An Algorithmic Method for Multivariate Polynomial Systems. Foundations and Applications Rajkumar Buyya (Melbourne) [intermediate] Cloud Computing Jan Camenisch (IBM Zurich) [intermediate] Cryptography for Privacy John M. Carroll (Penn State) [introductory] Usability Engineering and Scenario-based Design Jeffrey S. 
Chase (Duke) [intermediate] Trust Logic as an Enabler for Secure Federated Systems Larry S. Davis (College Park) [intermediate] Video Analysis of Human Activities Paul De Bra (Eindhoven) [intermediate] Adaptive Systems Marco Dorigo (Brussels) [introductory] An Introduction to Swarm Intelligence and Swarm Robotics Paul Dourish (Irvine) [introductory] Ubiquitous Computing in a Social Context Max J. Egenhofer (Maine) [introductory/intermediate] Qualitative Spatial Relations: Formalizations and Inferences Richard M. Fujimoto (Georgia Tech) [introductory] Parallel and Distributed Simulation David Garlan (Carnegie Mellon) [advanced] Software Architecture: Past, Present and Future Mario Gerla (Los Angeles) [intermediate] Vehicle Cloud Computing Georgios B. Giannakis (Minnesota) [advanced] Sparsity and Low Rank for Robust Data Analytics and Networking Ralph Grishman (New York) [intermediate] Information Extraction from Natural Language Mark Guzdial (Georgia Tech) [introductory] Computing Education Research: What We Know about Learning and Teaching Computer Science Francisco Herrera (Granada) [intermediate] Imbalanced Classification: Current Approaches and Open Problems Paul Hudak (Yale) [introductory] Euterpea: From Signals to Symphonies Using Haskell Syed Ali Jafar (Irvine) [intermediate] Interference Alignment Niraj K. Jha (Princeton) [intermediate] FinFET Circuit Design George Karypis (Minnesota) [introductory] Introduction to Parallel Computing: Architectures, Algorithms, and Programming Aggelos K. Katsaggelos (Northwestern) [intermediate/advanced] Sparsity-based Advances in Image Processing Arie E. Kaufman (Stony Brook) [advanced] Advances in Visualization Carl Kesselman (Southern California) [intermediate] Biomedical Informatics and Big Data Hugo Krawczyk (IBM Research) [intermediate] An Introduction to the Design and Analysis of Authenticated Key Exchange Protocols Pierre L'Ecuyer (Montr?al) [intermediate] Quasi-Monte Carlo Methods in Simulation: Theory and Practice Laks Lakshmanan (British Columbia) [intermediate/advanced] Information and Influence Spread in Social Networks Wenke Lee (Georgia Tech) [introductory] DNS-based Monitoring of Malware Activities Maurizio Lenzerini (Roma La Sapienza) [intermediate] Ontology-based Data Integration Ming C. Lin (Chapel Hill) [introductory/intermediate] Physically-based Modeling and Simulation Jane W.S. Liu (Academia Sinica) [intermediate] Critical Information and Communication Technologies for Disaster Preparedness and Response Nadia Magnenat-Thalmann (Nanyang Tech) [introductory] Modelling and Animating Virtual Humans Satoru Miyano (Tokyo) [intermediate] How to Hack Cancer Systems with Computational Methods Aloysius K. Mok (Austin) [intermediate] From Real-time Systems to Cyber-physical Systems Daniel Moss? (Pittsburgh) [intermediate] Asymmetric Multicore Management Hermann Ney (Aachen) [intermediate/advanced] Probabilistic Modelling for Natural Language Processing - with Applications to Speech Recognition, Handwriting Recognition and Machine Translation Cathleen A. 
Norris (North Texas) & Elliot Soloway (Ann Arbor) [introductory] Primary & Secondary Educational Computing in the Age of Mobilism Jeff Offutt (George Mason) [intermediate] Cutting Edge Research in Engineering of Web Applications David Padua (Urbana) [intermediate] Parallel Programming with Abstractions Bijan Parsia (Manchester) [introductory] The Semantic Web: Conceptual and Technical Foundations Massoud Pedram (Southern California) [intermediate] Energy Efficient Architectures and Information Processing Systems Jian Pei (Simon Fraser) [intermediate/advanced] Mining Uncertain and Probabilistic Data Charles E. Perkins (FutureWei) [intermediate/advanced] Beyond 4G Prabhakar Raghavan (Google) [introductory/intermediate] Web Search and Advertising Sudhakar M. Reddy (Iowa) [introductory] Design for Test and Test of Digital VLSI Circuits Phillip Rogaway (Davis) [introductory/intermediate] Provably Secure Symmetric Encryption Gustavo Rossi (La Plata) [intermediate] Topics in Model Driven Web Engineering Kaushik Roy (Purdue) [introductory/intermediate] Low-energy Computing Robert Sargent (Syracuse) [introductory] Validating Models Douglas C. Schmidt (Vanderbilt) [intermediate] Patterns and Frameworks for Concurrent and Networked Software Bart Selman (Cornell) [intermediate] Fast Large-scale Probabilistic and Logical Inference Methods Mubarak Shah (Central Florida) [intermediate/advanced] Visual Crowd Surveillance Ron Shamir (Tel Aviv) [introductory] Revealing Structure in Disease Regulation and Networks Micha Sharir (Tel Aviv) [introductory/intermediate] Geometric Arrangements and Incidences: Algorithms, Combinatorics, and Algebra Satinder Singh (Ann Arbor) [introductory/advanced] Reinforcement Learning: On Machines Learning to Act from Experience Dawn Xiaodong Song (Berkeley) [introductory] Selected Topics in Computer Security Daniel Thalmann (Nanyang Tech) [intermediate] Simulation of Individuals, Groups and Crowds and Their Interaction with the User Mike Thelwall (Wolverhampton) [introductory] Sentiment Strength Detection for the Social Web Julita Vassileva (Saskatchewan) [introductory/intermediate] Engaging Users in Social Computing Systems Philip Wadler (Edinburgh) [introductory] Topics in Lambda Calculus and Life Yao Wang (Polytechnic New York) [introductory/advanced] Video Compression: Fundamentals and Recent Development Gio Wiederhold (Stanford) [introductory] Software Economics: How Do the Results of the Intellectual Efforts Enter the Global Market Place Ian H. Witten (Waikato) [introductory] Data Mining Using Weka Limsoon Wong (National Singapore) [introductory/intermediate] The Use of Context in Gene Expression and Proteomic Profile Analysis Michael Wooldridge (Oxford) [introductory] Autonomous Agents and Multi-Agent Systems Ronald R. Yager (Iona) [introductory/intermediate] Fuzzy Sets and Soft Computing Philip S. Yu (Illinois Chicago) [advanced] Mining Big Data Justin Zobel (Melbourne) [introductory/intermediate] Writing and Research Skills for Computer Scientists REGISTRATION: It has to be done at http://grammars.grlmc.com/SSTiC2013/Registration.php Since a large number of attendees are expected and the capacity of the venue is limited, registration requests will be processed on a first come first served basis. The registration period will be closed when the capacity of the venue will be complete. FEES: They are the same (a flat rate) for all people by the corresponding deadline. They give the right to attend all courses. 
ACCOMMODATION: Information about accommodation is available on the website of the School.
CERTIFICATE: Participants will be delivered a certificate of attendance.
IMPORTANT DATES:
Announcement of the programme: January 26, 2013
Six registration deadlines: February 26, March 26, April 26, May 26, June 26, July 26, 2013
QUESTIONS AND FURTHER INFORMATION: Lilica Voicu: florentinalilica.voicu at urv.cat
POSTAL ADDRESS: SSTiC 2013 Research Group on Mathematical Linguistics (GRLMC) Rovira i Virgili University Av. Catalunya, 35 43002 Tarragona, Spain Phone: +34-977-559543 Fax: +34-977-558386
ACKNOWLEDGEMENTS: Ajuntament de Tarragona Diputació de Tarragona Universitat Rovira i Virgili
From guangliang.li2010 at gmail.com Mon Mar 18 05:07:45 2013
From: guangliang.li2010 at gmail.com (Guangliang Li)
Date: Mon, 18 Mar 2013 10:07:45 +0100
Subject: Connectionists: 2013 Reinforcement Learning Competition and ICML Workshop
Message-ID: 
[We apologize in advance if you receive multiple copies of this message]
----------------------------------------------------------------------------------------------
2013 Reinforcement Learning Competition and ICML Workshop
20-21 June 2013 Atlanta, USA
https://sites.google.com/site/rlcomp2013/icml_workshop
Competition goal
After a four-year hiatus, the reinforcement learning competition is back. The primary aim of the competition is to test both general and domain-specific reinforcement learning algorithms, using an unbiased and transparent methodology. As a side effect, the competition will generate a set of benchmark domains and benchmark results in those domains, which can then be used as a basis of comparison in future work. In the reinforcement learning competition, researchers can test their algorithms and insights in a friendly competitive way, on new and challenging domains. A secondary aim is to discuss methodological approaches for comparing reinforcement learning algorithms, which remains an open issue in reinforcement learning in general. Another important aim is to ensure the continuing existence of the competition and to prevent further hiatuses: our aim is to ensure that the competition will be organized annually again, from this year onward, and to this end we will try to build up an organization of interested researchers. As in previous years, the competition will be hosted at http://www.rl-competition.org/ and the format will remain as is. Agents may compete in one or more of a set of known domains. This set will include a polyathlon event, where the agents compete in a sequence of arbitrary environments. We envisage the actual competition to take place in June 2013.
ICML Workshop on the Reinforcement Learning Competition 2013 (WRLCOMP)
The competition is associated with an ICML Workshop in conjunction with ICML 2013. The ICML reinforcement learning competition workshop will bring together researchers who participated in the competition to present and discuss their results.
We will evaluate what works, and also under what conditions established methods may not work so well. In this way we hope to broaden our insight into state-of-the-art RL algorithms, and important properties of RL problems. The workshop will be organized along the lines of the domains. One time slot will be reserved for each competition category winner. In addition, all competition entrants will be invited to submit a short paper describing their approach. We will reserve a special time slot for papers on methodological problems that arise when comparing reinforcement learning algorithms. Workshop website: https://sites.google.com/site/rlcomp2013/icml_workshop** Topics 1. For entrants All entrants are additionally invited to submit a paper describing the approach used in the competition, including findings and ideas that relate to more general research questions. 2. For everyone You are invited to submit a paper discussing methodological issue when comparing reinforcement learning algorithms, including but not limited to the following topics: Effects of PRNGs in large-scale comparisons Experimental design for an unbiased evaluation Metrics for performance evaluation Reproducibility of results Robustness of algorithms Statistical testing standards for reinforcement learning Submission Interested authors should format their papers according to ICML formatting guidelines, available here: http://icml.cc/2013/wp-content/uploads/2012/12/icml2013stylefiles.tar.gz Papers should not exceed 4 pages and are due by May 15, 2013. All papers must be submitted as PDF, and be made through Easychair at * https://www.easychair.org/conferences/?conf=wrlcomp2013* Important Dates Domains available: April 1 Testing starts: May 1 Competition: June 1 Paper Submission Deadline: May 15, 2013 Author Notification: June 1, 2013 Workshop: June 20-21, 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirsch at bcf.uni-freiburg.de Fri Mar 22 10:29:19 2013 From: kirsch at bcf.uni-freiburg.de (Janina Kirsch) Date: Fri, 22 Mar 2013 15:29:19 +0100 Subject: Connectionists: Postdoc-Position at the Bernstein Center Freiburg - Computational Neuroscience & Neurotechnology Message-ID: <00b401ce2709$a3b99300$eb2cb900$@bcf.uni-freiburg.de> We are inviting applications for a Postdoctoral position at the Bernstein Center Freiburg, University of Freiburg, Germany for research on "Controllability of biological neuronal networks" This research will be in the framework of the new Excellence Cluster "BrainLinks-BrainTools", an interdisciplinary research initiative in the field of neurotechnology that was selected by the Joint Commission of the German Research Foundation and the German Council of Science and Humanities as a new Cluster of Excellence at the University of Freiburg. The main goal of this project is to understand and devise mechanisms to control the activity dynamics of neuronal networks by external stimulation. To this end we combine tools from graph theory, dynamical systems and control systems engineering and numerical simulations of biological neuronal networks. Experimental data on large-scale brain connectivity and activity to constrain the mathematical models will be available within BrainLinks-BrainTools. Applications are invited from candidates with a PhD degree in computational neuroscience, physics, mathematics, or electrical engineering. Previous experience in neuronal network modeling and dynamical systems is welcomed. The postdoctoral researcher will join the group of Dr. 
Arvind Kumar and Prof. Ad Aertsen, and will be part of the Bernstein Center Freiburg and participate in the postdoctoral program of the BCF [http://www.bcf.uni-freiburg.de/teaching-and-training/postdoc-program]. The post-doc salary will be according to TV-L E13, the standard German pay scale for postdoc positions.
Application:
1. Please send your CV and a two-page description of your current research, scientific interests and goals by April 15, 2013 to the following address: postdoc.program at bcf.uni-freiburg.de
2. Furthermore, please download our reference report form, forward it to at least 2 referees and ask them to provide their letter of recommendation on this form.
------
The Bernstein Center Freiburg is an interdisciplinary research facility comprising young researchers from mathematics, physics, electrical engineering and biology. The research at the BCF ranges from theoretical approaches to the function and dynamics of neuronal networks, over neuroanatomy and experimentally driven neurophysiology, up to the development of technologies for medical application.
For more info:
http://brainlinks-braintools.uni-freiburg.de
http://www.bcf.uni-freiburg.de/
http://www.bcf.uni-freiburg.de/teaching-and-training/postdoc-program
Contact: Dr. Arvind Kumar, Bernstein Center Freiburg, Faculty of Biology, University of Freiburg, Germany Email: arvind.kumar at biologie.uni-freiburg.de
From mehdi.khamassi at isir.upmc.fr Thu Mar 21 10:56:06 2013
From: mehdi.khamassi at isir.upmc.fr (Mehdi Khamassi)
Date: Thu, 21 Mar 2013 15:56:06 +0100
Subject: Connectionists: Call for posters/registration: Third Symposium on Biology of Decision Making, Paris, France, 29-30 May 2013
Message-ID: <514B1F86.5040809@isir.upmc.fr>
[ Please accept our apologies if you get multiple copies of this message ]
Dear colleagues,
Registration is now open for the Third Symposium on Biology of Decision Making, which will take place in Paris on May 29-30th, 2013. The deadline for registration and poster submission is April 28th. Best regards
------------------------------------------------------------------------------------------------
THIRD SYMPOSIUM ON BIOLOGY OF DECISION MAKING (SBDM 2013)
May 29-30, 2013, Paris, France
Institut du Cerveau et de la Moelle, Hôpital La Pitié Salpêtrière, 47 Bd de l'Hôpital, 75013 Paris & Université Pierre et Marie Curie, Paris, France.
http://sbdm2013.isir.upmc.fr
------------------------------------------------------------------------------------------------
PRESENTATION: The Third Symposium on Biology of Decision Making will take place on May 29-30, 2013 at the Institut du Cerveau et de la Moelle, Hôpital La Pitié Salpêtrière, 47 boulevard de l'Hôpital, 75013 Paris, France. The objective of this two-day symposium is to gather people from different research fields with different approaches (economic, behavioral, neural and computational) to decision making. The symposium will be single-track, will last for 2 days and will include 4 sessions: Neuro-Physiology, Neuro-Systems, Neuro-Computations and Decision-Theory (Economics and Ethology). Please circulate widely and encourage your students and postdocs to attend.
INVITED SPEAKERS:
Hagai Bergman (Hebrew University of Jerusalem, Israel)
Alain Berthoz (Collège de France, France)
Rafal Bogacz (Bristol University, UK)
Erie Boorman (Oxford University, UK)
Christophe Chamley (Boston University, USA)
Giorgio Coricelli (CNRS / École Normale Supérieure, France)
Michael J Frank (Brown University, USA)
Emmanuel Guigon (CNRS / UPMC, France)
David Hansel (CNRS / Université René Descartes, France)
Masaki Isoda (Kansai Medical University, Japan)
Karim Jerbi (INSERM, France)
Ian Krajbich (University of Zurich, Switzerland)
Samuel McClure (Stanford University, USA)
Yael Niv (Princeton University, USA)
Geoffrey Schoenbaum (NIDA-IRP, USA)
Wolfram Schultz (Cambridge University, UK)
Hidehiko Takahashi (Kyoto University, Japan)
Taiki Takahashi (Hokkaido University, Japan)
Frans de Waal (Emory University, USA)
Jeffrey Wickens (Okinawa Institute of Science and Technology, Japan)
IMPORTANT DATES:
April 28, 2013 Deadline for Registration and Poster Submission
May 29-30, 2013 Symposium Venue
ORGANIZING COMMITTEE:
Thomas Boraud (CNRS, Bordeaux, France)
Sacha Bourgeois-Gironde (La Sorbonne, Paris, France)
Kenji Doya (OIST, Okinawa, Japan)
Mehdi Khamassi (CNRS - UPMC, Paris, France)
Mathias Pessiglione (ICM - INSERM, Paris, France)
CONTACT INFORMATION: Website, registration, poster submission and detailed program: http://sbdm2013.isir.upmc.fr Contact: sbdm2013 [ at ] isir.upmc.fr
--
Mehdi Khamassi, PhD Researcher (CNRS)
Institut des Systèmes Intelligents et de Robotique (UMR7222)
CNRS - Université Pierre et Marie Curie
Pyramide, Tour 55 - Boîte courrier 173
4 place Jussieu, 75252 Paris Cedex 05, France
tel: +33 1 44 27 28 85 fax: +33 1 44 27 51 45 cell: +33 6 50 76 44 92
http://people.isir.upmc.fr/khamassi
From mmaniada at ics.forth.gr Tue Mar 19 10:19:07 2013
From: mmaniada at ics.forth.gr (Michail Maniadakis)
Date: Tue, 19 Mar 2013 16:19:07 +0200
Subject: Connectionists: Call For Paper Contributions: Sense of Time in Robotics
In-Reply-To: <5135B742.1080906@ics.forth.gr>
References: <5135B742.1080906@ics.forth.gr>
Message-ID: <514873DB.1050701@ics.forth.gr>
Dear colleagues,
Michail Maniadakis, Marc Wittmann and Sylvie Droit-Volet, in collaboration with Frontiers in Neuroscience, are organizing a Research Topic (a collection of papers) with the title "Towards embodied artificial cognition: TIME is on my side". You may find the relevant call for papers at the following link: http://www.frontiersin.org/Neurorobotics/researchtopics/Towards_embodied_artificial_co/1554 As host editors, we would like to encourage you to submit an article to this topic. Contributions can be articles describing original research, methods, hypothesis & theory, opinions, etc. The idea is to create an organized, comprehensive collection of several contributions, as well as a forum for discussion and debate. Frontiers will compile an e-book, as soon as all contributing articles are published, that can be used in classes, be sent to foundations that fund your research, to journalists and press agencies, or to any number of other organizations. Frontiers is a Swiss Gold-model open-access publisher. As such, a manuscript accepted for publication incurs a publishing fee, which varies depending on the article type. Research Topic manuscripts receive a significant discount on publishing fees. Please take a look at this fee table: http://www.frontiersin.org/about/PublishingFees.
Once published, your articles will remain free to access for all readers, and will be indexed in PubMed and other academic archives. As an author in Frontiers, you retain the copyright to your own papers and figures. We would be delighted if you considered participating in this Research Topic. Should you choose to participate, please confirm by sending us a quick email and then your abstract no later than May 15. Please note that the deadline for the full manuscript submission is Oct 30, 2013.
With best regards,
Michail Maniadakis, Marc Wittmann and Sylvie Droit-Volet, Guest Associate Editors, Frontiers in Neurorobotics
www.frontiersin.org
From morgado at uma.pt Sat Mar 23 11:09:48 2013
From: morgado at uma.pt (Morgado Dias)
Date: Sat, 23 Mar 2013 15:09:48 +0000
Subject: Connectionists: IEEE WISP 2013 - Special Session on Signal Processing and Artificial Intelligence
Message-ID: <514DC5BC.4080700@uma.pt>
Dear Researcher,
We are organizing a Special Session on Signal Processing and Artificial Intelligence at the 8th IEEE International Symposium on Intelligent Signal Processing, to be held in Funchal, Madeira Island, Portugal from 16 to 18 September 2013. The aim of the special session is to bring together researchers using Artificial Intelligence Tools that can be applied to Signal Processing, with the hope of supporting the improvement of current technologies and the growth of new integrated methodologies for the use of Artificial Intelligence Tools and solutions in Signal Processing. The session scope includes but is not limited to the following topics:
Applications of Artificial Neural Networks, Fuzzy-Logic and other Artificial Intelligence Tools to Signal Processing
Training algorithms
Architecture of Artificial Intelligence Tools
Hardware implementation of Artificial Intelligence Tools
Processing of biological signals
Industrial applications of Artificial Intelligence
Dates and deadlines for the first Call for Papers:
Abstract submission - March 31, 2013
Acceptance notification - May 17, 2013
Full paper submission and early payment - July 1, 2013
Best regards,
Morgado Dias
General Co-Chair WISP 2013
--
Morgado Dias, Universidade da Madeira
Electronics and Telecommunications
Pro-Rector of the Universidade da Madeira
President of APCA, www.apca.pt
morgado at uma.pt Tel.: 291-705307
International Symposium on Intelligent Signal Processing - WISP 2013
http://www.trivent.hu/WISP2013/
From n.lepora at sheffield.ac.uk Wed Mar 20 05:41:44 2013
From: n.lepora at sheffield.ac.uk (Nathan F Lepora)
Date: Wed, 20 Mar 2013 09:41:44 +0000
Subject: Connectionists: Living Machines - Extended deadline March 29th for submissions
Message-ID: 
______________________________________________________________
Extended deadline for Papers, Exhibits, Satellite Events and Sponsors
The 2nd International Conference on Biomimetic and Biohybrid Systems.
A Convergent Science Network Event
29th July to 2nd August 2013
Natural History Museum, London
http://csnetwork.eu/livingmachines/conf2013
Paper deadline: now March 29th, 2013 (was 22nd March)
Deadline for Satellite Event proposals: March 29th, 2013
______________________________________________________________
ABOUT LIVING MACHINES 2013
The development of future real-world technologies will depend strongly on our understanding and harnessing of the principles underlying living systems and the flow of communication signals between living and artificial systems. Biomimetics is the development of novel technologies through the distillation of principles from the study of biological systems. The investigation of biomimetic systems can serve two complementary goals. First, a suitably designed and configured biomimetic artefact can be used to test theories about the natural system of interest. Second, biomimetic technologies can provide useful, elegant and efficient solutions to unsolved challenges in science and engineering. Biohybrid systems are formed by combining at least one biological component (an existing living system) and at least one artificial, newly-engineered component. By passing information in one or both directions, such a system forms a new hybrid bio-artificial entity. The development of either biomimetic or biohybrid systems requires a deep understanding of the operation of living systems, and the two fields are united under the theme of "living machines": the idea that we can construct artefacts, such as robots, that not only mimic life but share the same fundamental principles; or build technologies that can be combined with a living body to restore or extend its functional capabilities. Biomimetic and biohybrid technologies, from nano- to macro-scale, are expected to produce major societal and economic impacts in quality of life and health, information and communication technologies, robotics, prosthetics, brain-machine interfacing and nanotechnology. Such systems should also lead to significant advances in the biological and brain sciences that will help us to better understand ourselves and the natural world. The following are some examples:
- Biomimetic robots and their component technologies (sensors, actuators, processors) that can intelligently interact with their environments.
- Active biomimetic materials and structures that self-organize and self-repair.
- Biomimetic computers: neuromimetic emulations of the physiological basis for intelligent behaviour.
- Biohybrid brain-machine interfaces and neural implants.
- Artificial organs and body-parts including sensory organ-chip hybrids and intelligent prostheses.
- Organism-level biohybrids such as robot-animal or robot-human systems.
ACTIVITIES
The main conference will take the form of a three-day single-track oral and poster presentation programme, 30th July to 1st August 2013, that will include five plenary lectures from leading international researchers in biomimetic and biohybrid systems. Agreed speakers are: Mark Cutkosky, Stanford University (Biomimetics and Dextrous Manipulation); Terrence Deacon, University of California, Berkeley (Natural and Artificial Selves); Ferdinando Rodriguez y Baena, Imperial College London (Biomimetics for medical devices); Robert Full, University of California, Berkeley (Locomotion); Andrew Pickering, University of Exeter (History of living machines). Submissions will be in the form of full papers or extended abstracts.
The proceedings will be published in the Springer-Verlag LNAI Series. Submissions are also invited for a one-day exhibition to feature working biomimetic or biohybrid systems and biomimetic/biohybrid art. The exhibition, will take place on the afternoon and evening of Thursday 1st August with the evening event including a press reception and buffet dinner. Active researchers in biomimetic and biohybrid systems are also invited to propose topics for 1-day tutorials or workshops on related themes. ABOUT THE VENUE The organisers are delighted to have secured the Flett Theatre at the Natural History Museum in London as the main venue for our conference. The NHM is an international centre for the study of the natural world featuring many important biological collections. The exhibition and poster session on Thursday 1st will be hosted at the nearby Science Museum, and the satellite events at Imperial College London. All three venues are conveniently located within a short walking distance of each other in South Kensington, the Museum district of the UK capital, and close to many of London?s tourist sights. SUBMITTING TO LIVING MACHINES 2013 Oral and poster programme We invite both full papers (12 pages, LNCS format) and extended abstracts (3 pages, LNCS format). All contributions will be refereed. Full papers are invited from researchers at any stage in their career but should present significant findings and advances in biomimetic or biohybid research; more preliminary work would be better suited to extended abstract submission. Full papers will be accepted for either oral presentation (single track) or poster presentation. Extended abstracts will be accepted for poster presentation only. All submissions must be formatted according to Springer LNCS guidelines. Papers should be submitted via the Living Machines web-site by midnight on March 29th 2013. http://senldogo0039.springer-sbm.com/ocs/home/LM2013 Exhibition The Living Machines 2013 Exhibition is intended to feature working biomimetic or biohybrid systems and biomimetic/biohybrid art. It will take place in the London Science Museum Level 1 Galleries on Thursday 1st August 2013. The exhibition is expected to include intelligent artefacts such as biomimetic robotics; however, we are open to proposals for display of biomimetic or biohybrid systems of any kind. The exhibition will be in two sessions. In the afternoon session exhibits will be displayed alongside conference posters. This session will be open to conference delegates and sponsors only. The evening session will be alongside the LM2013 buffet dinner and reception. This session will be open to invited representatives of the press, VIPs, and conference delegates and members of the public who have registered for the evening event. For registered conference participants there is no additional charge to participate in the exhibition but you must register your exhibit using the proforma available through the LM2013 web-site. Note that, if you wish to continue to display your exhibit during the evening session, you must also register for the buffet dinner and reception in addition to the main conference. We strongly encourage authors of accepted papers and extended abstracts to bring their working biomimetic or biohybrid artefacts to include in the exhibition. A prize will be awarded for the best exhibit. The conference organisers would also be interested in performance type material for the evening session. Please contact us if you have a proposal. 
Satellite events LM2013 will support satellite events, such as symposia, workshops or tutorials, in any of the areas listed below, which can be scheduled for either the 29th July or 2nd August. Attendance at satellite events will attract a small fee intended to cover the costs of the meeting. There is a lot of flexibility about the content, organisation, and budgeting for these events. We have reserved meeting rooms at Imperial College London to host the satellites each with capacity for up to 40 people (though larger rooms could be arranged if needed) and will have projection equipment with technical support. Proposals for satellites should be submitted using the proforma available from the LM2013 web-page by March 29th, 2013 but please contact us sooner if you are thinking of organising an event. Confirmation of accepted proposals will be provided be early April at the latest. SCOPE OF CONTRIBUTIONS Submissions of papers, exhibits and satellite events are invited in, but not limited to, the following topics and related areas. Biomimetics can, in principle, extend to all fields of biological research from physiology and molecular biology to ecology, and from zoology to botany. Promising research areas include system design and structure, self-organization and co-operativity, new biologically active materials, self-assembly and self-repair, learning, memory, control architectures and self-regulation, movement and locomotion, sensory systems, perception, and communication. Biomimetic research, particularly at the nano-scale, should also lead to important advances in component miniaturisation, self-configuration, and energy-efficiency. A key focus of the conference will be on complete behaving systems in the form of biomimetic robots that can operate on different substrates on sea, on land, or in the air. A further central theme will be the physiological basis for intelligent behaviour as explored through neuromimetics?the modelling of neural systems. Exciting emerging topics within this field include the embodiment of neuromimetic controllers in hardware, termed neuromorphics, and within the control architectures of robots, sometimes termed neurorobotics. Biohybrid systems usually involve structures from the nano-scale (molecular) through to the macro-scale (entire organs or body parts). Important implementation examples are: Bio-machine hybrids where, for instance, biological muscle is used to actuate a synthetic device. Brain-machine interfaces where neurons and their molecular machineries are connected to microscopic sensors and actuators by means of electrical or chemical communication, either in vitro or in the living organism. Intelligent prostheses such as artificial limbs, wearable exoskeletons, or sensory organ-chip hybrids (such cochlear implants and artificial retina devices) designed to assist the disabled or elderly, or to aid rehabilitation from illness. Implantable or portable devices that have been fabricated for monitoring health care or for therapeutic purposes such as artificial implants to control insulin release. Biohybrid systems at the organism level such as robot-animal or robot-human communities. Biohybrid systems may take advantage of progress in the field of synthetic biology. Contributions from biologists, neuroscientists, and theoreticians, that are of direct relevance to the development of future biomimetic or biohybrid devices are also welcome, as are papers considering ethical issues and/or societal impacts arising from the advances made in this field. 
ACCOMMODATION West London has many excellent hotels that are suitable for conference delegates. We are also organizing the provision of reasonably-priced accommodation for LM2013 events in the Imperial College Halls of Residence. KEY DATES March 29th, 2013 Paper submission deadline March 29th, 2013 Satellite Event proposal deadline Early April, notification of accepted satellites April 29th, 2013 Notification of acceptance of papers May 20th, 2013 Camera ready copy May 31st, Early registration deadline July 29-August 2nd 2013 Conference SPONSORSHIP Living Machines 2013 is sponsored by the Convergent Science Network (CSN) for Biomimetic and Biohybrid Systems, which is an EU FP7 Future Emerging Technologies Co-ordination Activity. CSN also organises two highly successful workshop series: the Barcelona Summer School on Brain, Technology and Cognition and the Capoccaccia Neuromorphic Cognitive Engineering Workshop. Living Machines 2013 is supported by the IOP journal Bioinspiration & Biomimetics, which this year will publish a special issue of articles based on last year's LM2012 best papers. A review of the state of the art in biomimetics, by the conference chairs, reporting strong recent growth in the field, has just been published in the journal (http://dx.doi.org/10.1088/1748-3182/8/1/013001). Other organisations wishing to sponsor the conference in any way and gain the corresponding benefits by promoting themselves and their products through conference publications, the conference web-site, and conference publicity are encouraged to contact the conference organisers to discuss the terms of sponsorship and necessary arrangements. We offer a number of attractive and good-value packages to potential sponsors. We are looking forward to seeing you in London. Organising Committee: Tony Prescott (co-chair) Paul Verschure (co-chair) Nathan Lepora (programme chair) Holger Krapp (workshops & symposia) Anna Mura (web-site) Conference Secretariat: living-machines at sheffield.ac.uk c/o Gill Ryder, Sheffield Centre for Robotics Department of Psychology University of Sheffield Western Bank Sheffield, S10 2TN United Kingdom From omri.barak at gmail.com Thu Mar 21 01:57:38 2013 From: omri.barak at gmail.com (Omri Barak) Date: Thu, 21 Mar 2013 07:57:38 +0200 Subject: Connectionists: Position for PhD / postdoc in Theoretical Neuroscience Message-ID: The Theoretical Neuroscience lab in the Technion ( http://barak.net.technion.ac.il) is just starting and we are seeking postdocs and graduate students. Research topics include the nature of neural representation, interaction between multiple neural timescales, reverse engineering of recurrent neural networks, and more. Importantly, our experimental neighbors ( http://neuroscience.technion.ac.il/research_group) offer a range of stimulating results to think about. The lab is also a member of the Network Biology Research Laboratories ( http://nbr.technion.ac.il/), offering a broader perspective on various Biological systems. Successful applicants are expected to carry out both independent original research and collaborative work with faculty and/or students. The main selection criteria will be outstanding research accomplishments, creativity and promise of future achievement. Postdoctoral fellowships are for two to three years depending on budget and progress. Highly motivated applicants should send a letter of interest and CV to Omri Barak at omri.barak at gmail.com. At least two letters of reference should also be mailed to the above address.
Further inquiries can be sent to Omri Barak - omri.barak at gmail.com . -- http://barak.net.technion.ac.il -------------- next part -------------- An HTML attachment was scrubbed... URL: From pul8 at psu.edu Fri Mar 22 18:08:24 2013 From: pul8 at psu.edu (Ping Li) Date: Fri, 22 Mar 2013 18:08:24 -0400 Subject: Connectionists: Computational Modeling of Bilingualism Special Issue Message-ID: Dear Colleagues, A Special Issue on Computational Modeling of Bilingualism has been published. Most of the models are based on connectionist architectures. All the papers are available for free viewing until April 30, 2013 (follow the link below to its end): http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ Please let me know if you have difficulty accessing the above link or viewing any of the PDF files on Cambridge University Press's website. With kind regards, Ping Li ================================================================= Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information Sciences & Technology | Co-Chair, Inter-College Graduate Program in Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition | Pennsylvania State University | University Park, PA 16802, USA | Editor, Bilingualism: Language and Cognition, Cambridge University Press | Associate Editor: Journal of Neurolinguistics, Elsevier Science Publisher Email: pul8 at psu.edu | URL: http://cogsci.psu.edu ================================================================= -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvegaru at usal.es Tue Mar 19 05:39:27 2013 From: rvegaru at usal.es (Roberto Vega Ruiz) Date: Tue, 19 Mar 2013 10:39:27 +0100 Subject: Connectionists: SOCO 2013: 4th CFP – The submission system is open now! Message-ID: SOCO 2013: 4th CFP – The submission system is open now! * We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students. The 8th International Conference on Soft Computing Models in Industrial and Environmental Applications. September 11th-13th, 2013 Salamanca, Spain http://soco13.usal.es ---------------------------------------------------------------- *** SUBMISSION SYSTEM IS OPEN*** The submission system is open now! Please, submit your contributions through the following link: https://www.easychair.org/account/signin.cgi?conf=soco13 ---------------------------------------------------------------- SOCO 2013 is organized in conjunction with: HAIS 2013 (http://hais13.usal.es) CISIS 2013 (http://gicap.ubu.es/cisis2013) and ICEUTE 2013 (http://gicap.ubu.es/iceute2013) ---------------------------------------------------------------- ***JOURNAL SPECIAL ISSUES AND FAST TRACK: Authors of the best papers presented at SOCO 2013 will be invited to submit an extended version to highly reputed international journals. Up to now: Confirmed special issues for the journals: 1. International Journal of Neural Systems (IJNS), World Scientific, I.F.: 4.284, Rank: 4th/111 2. Integrated Computer-Aided Engineering, IOS Press, I.F.: 3.451, Rank: 7th/111 3. Computer-Aided Civil and Infrastructure Engineering, WILEY, I.F.: 3.382, Rank: 9th/111 4. Neurocomputing, ELSEVIER, I.F.: 1.580, Rank: 39th/111 5. Journal of Applied Logic, ELSEVIER, I.F.: 0.574, Rank: 70th/99 Confirmed Fast Track for the journals: 1.
Applied Soft Computing, ELSEVIER, I.F.: 2.612, Rank: 13th/111 ---------------------------------------------------------------- *** PLENARY SPEAKERS: Prof. Hojjat Adeli.-The Ohio State University, Columbus, USA. Prof. Hujun Yin.-The University of Manchester, UK. Prof. Manuel Graña.- University of the Basque Country, Spain Prof. Cesare Alippi.- Politecnico di Milano, Italy. ---------------------------------------------------------------- ***SPECIAL SESSIONS AND WORKSHOPS: In addition to regular sessions, participants are encouraged to organize special sessions and workshops on specialized topics. Each special session should have at least 4 or 5 quality papers. Special session organizers will solicit submissions, conduct reviews jointly with the SOCO'13 PC and in the same way recommend accept/reject decisions on the submitted papers. Submissions of special sessions and workshops are welcome! Deadline for special session and workshop submissions: 30th March, 2013 For more information, please send an email to: escorchado at usal.es List of Special Sessions & Workshops up to now: http://soco13.usal.es/?q=node/8 ---------------------------------------------------------------- Technical Co-Sponsors: WORLD FEDERATION ON SOFT COMPUTING http://www.wfsc.de IEEE.-Spain Section http://www.ieee.org/spain IEEE.-Systems, Man, and Cybernetics Spanish Chapter http://www.ieee-smc.es/main/index.shtml CITY COUNCIL OF SALAMANCA http://www.aytosalamanca.es AEPIA http://www.aepia.org Machine Intelligence Research Labs (MIR Labs) http://www.mirlabs.org/ The International Federation for Computational Logic http://www.ifcolog.net/ IT4Innovations Centre of Excellence http://www.it4i.eu/en/ INFRANOR http://www.infranor.com ---------------------------------------------------------------- ***PROCEEDINGS: SOCO'13 proceedings will be published by Springer in its AISC series (Advances in Intelligent and Soft Computing), as in previous editions. All accepted papers must be presented by one of the authors, who must register for the conference and pay the fee. -------------------------------------------------------------------- *** The 8th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO'13) Soft computing represents a collection of computational techniques in machine learning, computer science and some engineering disciplines, which investigate, simulate, and analyze very complex issues and phenomena. This conference mainly focuses on its industrial and environmental applications. The SOCO series of conferences provides an interesting opportunity to present and discuss the latest theoretical advances and real-world applications in this multidisciplinary research field.
-------------------------------------------------------------------- *** TOPICS The topics of interest include, but are not limited to: - Evolutionary Computing - Neuro Computing - Probabilistic Computing - Immunological Computing - Hybrid Methods - Causal Models - Case-based Reasoning - Chaos Theory - Fuzzy Computing - Intelligent Agents and Agent Theory - Interactive Computational Models - Rough Sets and Granular Computing The application fields of interest cover, but are not limited to: - Decision Support - Process and System Control - System Identification and Modelling - Optimization - Signal or Image Processing - Vision or Pattern Recognition - Condition Monitoring - Fault Diagnosis - Systems Integration - Internet Tools - Human Machine Interface - Time Series Prediction - Robotics - Motion Control & Power Electronics - Biomedical Engineering - Virtual Reality - Reactive Distributed AI - Telecommunications - Consumer Electronics - Industrial Electronics - Manufacturing Systems - Power and Energy - Data Mining - Data Visualisation - Intelligent Information Retrieval - Bio-inspired Systems - Autonomous Reasoning - Intelligent Agents - Databases -------------------------------------------------------------------- *** PAPER SUBMISSION AND PROCEEDINGS SOCO'13 proceedings will be published by Springer in its AISC series (Advances in Intelligent and Soft Computing). All submissions will be refereed by experts in the field based on originality, significance, quality and clarity. Every paper submitted to SOCO'13 will be reviewed by at least two members of the Program Committee. Papers must be prepared according to the LNCS-LNAI style template (http://www.springer.de/comp/lncs/authors.html) and must be no more than ten (10) pages long, including figures and bibliography. Additional pages (over 10 pages) will be charged at 150 Euro each. -------------------------------------------------------------------- *** IMPORTANT DATES *** Paper submission deadline 20th April, 2013 Acceptance notification 20th May, 2013 Final version submission 10th June, 2013 Conference dates 11th - 13th September, 2013 Payment deadline 2nd June, 2013 -------------------------------------------------------------------- *** COMMITTEES *** *General Chairs* Fanny Klett Director of German Workforce Advanced Distributed Learning Partnership Laboratory (Germany) Ajith Abraham Machine Intelligence Research Labs (Europe) Vaclav Snasel University of Ostrava (Czech Republic) Emilio Corchado University of Salamanca (Spain) -------------------------------------------------------------------- *** CONTACT *** Dr. Emilio Corchado - University of Salamanca (Spain) (Chair) BISITE Research Group http://bisite.usal.es/ GICAP Research Group http://gicap.ubu.es/ University of Salamanca Email: escorchado at usal.es Phone: +34 630736755 For more information about SOCO'13, please refer to the SOCO'13 website: http://soco13.usal.es * We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvegaru at usal.es Tue Mar 19 07:01:16 2013 From: rvegaru at usal.es (Roberto Vega Ruiz) Date: Tue, 19 Mar 2013 12:01:16 +0100 Subject: Connectionists: CISIS 2013, 2nd CFP – The submission system is open now! Message-ID: * We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students. CISIS 2013, 2nd CFP – The submission system is open now!
6th International Conference on Computational Intelligence in Security for Information Systems Salamanca, Spain 11th-13th September, 2013 http://gicap.ubu.es/cisis2013 -------------------------------------------------- ***SUBMISSION SYSTEM IS OPEN*** The submission system is open now! Please, submit your contributions through the following link: https://www.easychair.org/account/signin.cgi?conf=cisis13 -------------------------------------------------- CISIS 2013 is organized in conjunction with: HAIS 2013 (http://hais13.usal.es) SOCO 2013 (http://soco13.usal.es) and ICEUTE 2013 (http://gicap.ubu.es/iceute2013) -------------------------------------------------------- JOURNAL SPECIAL ISSUES: Authors of the best papers presented at CISIS 2013 will be invited to submit an extended version to highly reputed international journals, Up to now: Confirmed special issues for the journals: 1. Logic Journal of the IGPL, Oxford Journals, I.F.: 0.913, Rank: 1st/19 -------------------------------------------------------- PLENARY SPEAKERS: Prof. Hojjat Adeli.-The Ohio State University, Columbus, USA. Prof. Hujun Yin.-The University of Manchester, UK. Prof. Manuel Gra?a.-University of the Basque Country, Spain Prof. Cesare Alippi.-Politecnico di Milano, Italy -------------------------------------------------- CISIS 2013 aims to offer a meeting opportunity for academic and industry-related researchers belonging to the various, vast communities of Computational Intelligence, Information Security, Data Mining, and Biometry, HPC and Grid computing issues. The need for intelligent, flexible behaviour by large, complex systems, especially in mission-critical domains, is intended to be the catalyst and the aggregation stimulus for the overall event. CISIS?13 provides an interesting opportunity to present and discuss the latest theoretical advances and real-world applications in this multidisciplinary research field. ----------------------------------------------------- PUBLICATION The CISIS'13 Proceedings are to be published by Springer in the prestigious Advances in Intelligent and Soft Computing Series. This Series is indexed by ISI Proceedings, DBLP, Ulrich's, EI-Compendex, SCOPUS, Zentralblatt Math, MetaPress, Springerlink. Papers must be written in English. For each paper, at least one author is required to register and attend the conference to present the paper, so as to have it included in the conference proceedings. ---------------------------------------------------------- RANKING CISIS is included in the ranking of the best conferences (B category) established by the Computer Science Conference Ranking of the Computing Research and Education Association of Australasia (in 2008), and Excellence in Research for Australia (in 2010). ---------------------------------------------------------- SPECIAL SESSIONS/Open call: Researchers who would like to organise a Special Session on topics falling within the scope of the conference are invited to submit a proposal for consideration. This should include: . Decide on the title and content of your session. . Publicise your session. . Obtain at least five papers from researchers in the area. . Find two suitably qualified reviewers for the each of the papers. . Manage the review process of the papers. . Ensure the editable wordprocessor versions of the papers are uploaded by the proper deadline. If you agree to accept this invitation we would be grateful if you could supply the following information by 30th of March, 2013: .Title of the session. . 
A paragraph describing the content of the session. . Name, Surname and Affiliation of the session/workshop chair/co-chairs. . Email address (please give only one). . Postal address. . Telephone number. . Fax Number. . URL of web page describing session (if any). Please, send all this information to: escorchado at usal.es ------------------------------------------------------------- SUBMISSION INSTRUCTIONS In order to submit a paper, authors must register at CISIS 2013 conference management system. All papers must be submitted in electronic format (PDF format, word document or latex and images). ------------------------------------------------------------- IMPORTANT DATES Paper submission deadline: 20th April, 2013 Acceptance notification: 20th May, 2013 Final version submission: 10th June, 2013 Conference dates: 11th-13th September, 2013 Payment deadline: 2nd June, 2013 -------------------------------------------------------------- TOPICS Topics are encouraged, but not limited to: - Intelligent Data Mining for Network Security: Intrusion Detection Systems, Log Correlation Methods, Adaptive Defence of Network Infrastructures. -Cyber Security and Defence. - Learning Methods for Text Mining in Intelligence and Security: Document Classification and Processing, Ontologies and Conceptual Information Processing, Semantic Information Representation, Natural Language Acquisition, Web Semantics in Intelligence and Law-Enforcement, Adaptive Reasoning, Information Forensics. - Soft-Computing Methods in Critical Infrastructure Protection: Industrial and Commercial Applications of Intelligent Methods for Security, Intelligent Control and Monitoring of Critical Systems, Dynamic Adaptive Railway Operation, Centralized Control Systems, Adaptive Planning for Strategic Reasoning. - Intelligent Secure Methods in Railway Operation: Intelligent Methods in Energy and Transportation, Planning and Automated Reasoning in Large System Control. - Computational Intelligence in Biometrics for Security: Biometric Identification and Recognition, Biometric Surveillance, Biometric Access Control, Extraction of Biometric Features (fingerprint, iris, face, voice, palm, gait). 
- Computer Science, namely on HPC and Grid computing issues, Computational Sciences, with requirements in HPC and Grid, Computational Engineering with a similar focus, Grid Middleware, Grid Computing, Data and Networking Infrastructures, Distributed and Large-Scale Data Access and Management, Distributed and Large-Scale Data Access and Management, Data Repositories, Distributed Resource Management and Scheduling, Supercomputer/cluster/grid integration issues, Grid Performance Evaluation, QoS and SLA Negotiation, Grid and HPC Applications, including e-Science in general and also Science Gateways, Nanomaterials, High Energy Physics, e-Health, e-Business, e-Administration, Life Sciences, Earth Sciences, Civil Protection, Computational Sciences and Engineering, User Development Environments and Programming Tools for Grid Computing ------------------------------------------------------------- TECHNICAL CO-SPONSORS: IEEE SPAIN SECTION www.ieeespain.org IEEE Systems Man and Cybernetics-Spanish Chapter http://www.ieee-smc.es CITY COUNCIL OF SALAMANCA http://www.aytosalamanca.es AEPIA http://www.aepia.org Machine Intelligence Research Labs (MIR Labs) http://www.mirlabs.org IT4Innovation Excellence Center http://www.it4i.eu/en The International Federation for Computational Logic (IFCoLog) http://www.ifcolog.net INFRANOR http://www.infranor.com ------------------------------------------------------------ CONTACT Prof. Emilio Corchado e-mail: escorchado at usal.es web: http://gicap.ubu.es/cisis2013 NOTE: please, do not replay to this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvegaru at usal.es Tue Mar 19 05:42:51 2013 From: rvegaru at usal.es (Roberto Vega Ruiz) Date: Tue, 19 Mar 2013 10:42:51 +0100 Subject: Connectionists: =?windows-1252?q?HAIS_2013=3A_4th_CFP_=96_The_sub?= =?windows-1252?q?mission_system_is_open_now!?= Message-ID: <1D28314D-060B-4543-971E-20D991B0342F@usal.es> HAIS 2013: 4th CFP ? The submission system is open now! * We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students. The 8th International Conference on Hybrid Artificial Intelligence Systems. September 11th-13th, 2013 Salamanca, Spain http://hais13.usal.es --------------------------------------------------------------- HAIS 2013 is organized in conjunction with: SOCO 2013 (http://soco13.usal.es) CISIS 2013 (http://gicap.ubu.es/cisis2013) and ICEUTE 2013 (http://gicap.ubu.es/iceute2013) ---------------------------------------------------------------- ***SUBMISSION SYSTEM IS OPEN **** The submission system is open now! Please, submit your contributions through the following link: https://www.easychair.org/account/signin.cgi?conf=hais13 ---------------------------------------------------------------- ***JOURNAL SPECIAL ISSUES AND FAST TRACK: Authors of the best papers presented at HAIS 2013 will be invited to submit an extended version to highly reputed international journals, Up to now: Confirmed special issues for the journals: 1. International Journal of Neural Systems (IJNS), World Scientific, I.F.: 4.284, Rank: 4th/111 2. Integrated Computer-Aided Engineering, IOS Press, I.F.: 3.451, Rank: 7th/111 3. Neurocomputing, ELSEVIER, I.F.: 1.580, Rank: 39th/111 Confirmed Fast Track for the journals: 1. Applied Soft Computing, ELSEVIER, I.F.: 2.612, Rank: 13th/111 ---------------------------------------------------------------- *** PLENARY SPEAKERS: Prof. Hojjat Adeli.-The Ohio State University, Columbus, USA. Prof. 
Hujun Yin.-The University of Manchester, UK. Prof. Manuel Graña.- University of the Basque Country, Spain Prof. Cesare Alippi.- Politecnico di Milano, Italy. ---------------------------------------------------------------- Technical Co-Sponsors: WORLD FEDERATION ON SOFT COMPUTING http://www.wfsc.de IEEE.-Spain Section http://www.ieee.org/spain IEEE.-Systems, Man, and Cybernetics Spanish Chapter http://www.ieee-smc.es/main/index.shtml CITY COUNCIL OF SALAMANCA http://www.aytosalamanca.es AEPIA http://www.aepia.org Machine Intelligence Research Labs (MIR Labs) http://www.mirlabs.org/ The International Federation for Computational Logic http://www.ifcolog.net/ IT4Innovations Centre of Excellence http://www.it4i.eu/en/ INFRANOR http://www.infranor.com ---------------------------------------------------------------- ***PROCEEDINGS: HAIS'13 proceedings will be published by Springer in its LNAI series (Lecture Notes in Artificial Intelligence), as in previous editions. All accepted papers must be presented by one of the authors, who must register for the conference and pay the fee. -------------------------------------------------------------------- *** The 8th International Conference on Hybrid Artificial Intelligence Systems (HAIS'13) combines symbolic and sub-symbolic techniques to construct more robust and reliable problem-solving models. Hybrid intelligent systems are becoming popular due to their capabilities in handling many complex real-world problems involving imprecision, uncertainty, vagueness and high dimensionality. They provide us with the opportunity to use both our knowledge and raw data to solve problems in a more interesting and promising way. The HAIS series of conferences provides an interesting opportunity to present and discuss the latest theoretical advances and real-world applications in this multidisciplinary research field. -------------------------------------------------------------------- *** TOPICS Topics are encouraged, but not limited to, the combination of at least two of the following areas in the field of Hybrid Intelligent Systems: - Fusion of soft computing and hard computing - Evolutionary Computation - Visualization Techniques - Ensemble Techniques - Data mining and decision support systems - Intelligent agent-based systems (complex systems), cognitive and reactive distributed AI systems - Internet modelling - Human interface - Case-based reasoning - Chance discovery - Applications in security, prediction, control, robotics, image and speech signal processing, food industry, biology and medicine, business and management, knowledge management, artificial societies, chemicals, pharmaceuticals, geographic information systems, materials and environment engineering and so on. -------------------------------------------------------------------- *** PAPER SUBMISSION AND PROCEEDINGS HAIS'13 proceedings will be published by Springer in its series of Lecture Notes in Artificial Intelligence - LNAI (part of its prestigious Lecture Notes in Computer Science - LNCS series). All submissions will be refereed by experts in the field based on originality, significance, quality and clarity. Every paper submitted to HAIS'13 will be reviewed by at least two members of the Program Committee. Papers must be prepared according to the LNCS-LNAI style template (http://www.springer.de/comp/lncs/authors.html) and must be no more than ten (10) pages long, including figures and bibliography. Additional pages (over 10 pages) will be charged at 150 Euro each.
-------------------------------------------------------------------- *** COMMITTEES*** *General Chair* Prof. Marios M. Polycarpou - University of Cyprus, Cyprus. Prof. Jeng-Shyang Pan - National Kaohsiung University of Applied Sciences, Taiwan Prof. Micha? Wo?niak - Wroclaw University of Technology (Poland) Dr. Emilio Corchado - University of Salamanca, Spain. -------------------------------------------------------------------- *** IMPORTANT DATES *** Paper submission deadline 20th April, 2013 Acceptance notification 20th May, 2013 Final version submission 10th June, 2013 Conference dates 11th - 13th September, 2013 Payment deadline 2nd June, 2013 -------------------------------------------------------------------- *** CONTACT *** Dr. Emilio Corchado - University of Salamanca (Spain) (Chair) BISITE Research Group http://bisite.usal.es/ GICAP Research Group http://gicap.ubu.es/ University of Salamanca Email: escorchado at usal.es Phone: +34 630736755 For more information about HAIS?13, please refer to the HAIS?13 website: http://hais13.usal.es * We apologize if you receive this CFP more than once. * PLEASE CIRCULATE this CFP among your colleagues and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.devlin at york.ac.uk Fri Mar 22 07:08:16 2013 From: sam.devlin at york.ac.uk (Sam Devlin) Date: Fri, 22 Mar 2013 11:08:16 +0000 Subject: Connectionists: Call for Participation: Adaptive Learning Agents Workshop @ AAMAS 2013 Message-ID: *Call For Participation*** * * *Adaptive and Learning Agents Workshop 2013 * *at AAMAS 2013 (Saint Paul, Minnesota, USA)* *May 6-7* ******************************************************* * Register at: http://aamas2013.cs.umn.edu/node/41 ******************************************************* ALA 2013: Adaptive and Learning Agents Workshop held at AAMAS 2013 (Saint Paul, Minnesota, USA). The ALA workshop has a long and successful history and is now in its 13th edition. The workshop is a merger of European ALAMAS and the American ALAg series which is usually held at AAMAS. Our technical program will include 19 oral presentations, 2 invited speakers and a joint panel held in collaboration with the MSDM workshop ( http://gaips.inesc-id.pt/~switwicki/msdm2013/). For which, the panelists confirmed so far include Prof. Peter Stone, Prof. Shlomo Zilberstein, and Prof. Milind Tambe. More details may be found on the workshop web site: http://swarmlab.unimaas.nl/ala2013/ * * ******************************************************* * Early Registration Deadline: March 27, 2013 * Workshop: May 6-7, 2013 * Register at: http://aamas2013.cs.umn.edu/node/41 ******************************************************* Adaptive and Learning Agents, particularly those in a multi-agent setting are becoming more and more prominent as the sheer size and complexity of many real world systems grows. How to adaptively control, coordinate and optimize such systems is an emerging multi-disciplinary research area at the intersection of Computer Science, Control theory, Economics, and Biology. The ALA workshop will focus on agent and multi-agent systems which employ learning or adaptation. The goal of this workshop is to increase awareness and interest in adaptive agent research, encourage collaboration and give a representative overview of current research in the area of adaptive and learning agents and multi-agent systems. 
It aims at bringing together not only scientists from different areas of computer science but also from different fields studying similar concepts (e.g., game theory, bio-inspired control, mechanism design). This workshop will focus on all aspects of adaptive and learning agents and multi-agent systems with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. The topics of interest include but are not limited to: * Novel combinations of reinforcement and supervised learning approaches * Integrated learning approaches that work with other agent reasoning modules like negotiation, trust models, coordination, etc. * Supervised multi-agent learning * Reinforcement learning (single and multi-agent) * Planning (single and multi-agent) * Reasoning (single and multi-agent) * Distributed learning * Adaptation and learning in dynamic environments * Evolution of agents in complex environments * Co-evolution of agents in a multi-agent setting * Cooperative exploration and learning to cooperate and collaborate * Learning trust and reputation * Communication restrictions and their impact on multi-agent coordination * Design of reward structure and fitness measures for coordination * Scaling learning techniques to large systems of learning and adaptive agents * Emergent behaviour in adaptive multi-agent systems * Game theoretical analysis of adaptive multi-agent systems * Neuro-control in multi-agent systems * Bio-inspired multi-agent systems * Applications of adaptive and learning agents and multi-agent systems to real world complex systems * Learning of Co-ordination ******************************************************* Organization *Workshop chairs:* Sam Devlin (University of York, UK) Daniel Hennes (Maastricht University, The Netherlands) Enda Howley (National University of Ireland, Galway, Ireland) If you have any questions about the ALA workshop, please contact Enda Howley at: enda.howley AT nuigalway.ie * * *Senior Steering Committee Members:* Daniel Kudenko (University of York, UK) Ann Now? (Vrije Universiteit Brussels, Belgium) Peter Stone (University of Texas at Austin, USA) Matthew Taylor (Lafayette College, USA) Kagan Tumer (Oregon State University, USA) Karl Tuyls (Maastricht University, The Netherlands) ******************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From Vittorio.Murino at iit.it Tue Mar 12 12:40:30 2013 From: Vittorio.Murino at iit.it (Vittorio Murino) Date: Tue, 12 Mar 2013 17:40:30 +0100 Subject: Connectionists: CFP: 4th Int'l Workshop on Socially Intelligent Surveillance and Monitoring - SISM 2013 Message-ID: <513F5A7E.1010306@iit.it> Apologize for multiple posting ----------------------------------------------------------------------------- 4th International Workshop on Socially Intelligent Surveillance and Monitoring SISM 2013 http://www.dcs.gla.ac.uk/~vincia/sism2013/index.html In conjunction with the IEEE Conference on Computer Vision and Pattern Recognition, Portland (Oregon), June 28, 2013 ************************************************************************** DEADLINE is March 13, 2013 BUT we accept also papers with slight delay. Please, send us an expression of interest and possibly a title and abstract so that we can start to assign reviewers! 
************************************************************************** Computer vision and pattern recognition are the main technologies used for automatic monitoring of public spaces. Effective approaches for tracking people and recognizing poses, postures, gestures and collective crowd phenomena in public environments have been developed in recent years, especially in the video surveillance context, aimed at classifying (suspect, unusual, abnormal) behaviors. In parallel, new technologies are being developed for sensing and monitoring inappropriate behavior in social media (e.g., identity theft, abuse of children, etc.), a setting where the problems requiring surveillance technologies in the physical space tend to appear, in a different form, more and more frequently. Not to mention the critical role that social media play nowadays in a large number of activities that have a potential impact on the public sphere, from flash mobs stopping traffic for a few minutes to nation-wide revolutions. This workshop aims at gathering researchers active in computer vision and pattern recognition, human sciences and automatic behavior understanding to tackle the problems above from an interdisciplinary perspective. Joint research across different communities will have a major impact on any technology that can benefit from automatic monitoring approaches, including video-surveillance, architecture, ambient intelligence, marketing, office space design, urbanism, etc. Interested participants are invited to submit papers that should describe high-quality original research joining the computer vision and pattern recognition, human sciences and automatic behavior understanding areas. Topics of interest include (but are by no means limited to): Proxemics Human ethology Kinesics Spatial Empathy Territoriality Expressions and emotions Tracking: multi-person, multi-camera, group/crowd Motion segmentation and analysis Crowd/group analysis and simulation Social force models Collective and emergent behaviour Gesture/Action recognition Activity analysis Multi-person/group/crowd interaction analysis Spatial and temporal reasoning Sensory integration and data fusion Social media and security Sentiment analysis Situation awareness and understanding Applications: Ambient Intelligence, Surveillance and Monitoring, Domotics, Intelligent, Perceptual Marketing Important Dates March 13th: submission April 20th: acceptance notification May 1st: Camera Ready June 28th: Workshop Organizing Committee Vittorio Murino (Istituto Italiano di Tecnologia / University of Verona) Marco Cristani (University of Verona / Istituto Italiano di Tecnologia) Alessandro Vinciarelli (University of Glasgow / Idiap Research Institute) ================================================ -- Vittorio Murino *********************************************************** Prof. Vittorio Murino, Ph.D. Dipartimento di Informatica Universita` degli Studi di Verona Ca' Vignal 2, Strada Le Grazie 15, 37134 Verona, Italy Tel: +39 045 802 7996, Fax: +39 045 802 7068 Secretary: +39 045 802 7069 Mobile: +39 329 6508554 E-mail: vittorio.murino at univr.it WWW homepage: http://profs.sci.univr.it/~swan VIPS lab homepage: http://vips.sci.univr.it eVS homepage: www.evsys.net *********************************************************** -- Vittorio Murino **************************** Prof. Vittorio Murino, Ph.D.
PAVIS - Pattern Analysis & Computer Vision IIT Istituto Italiano di Tecnologia Via Morego 30 16163 Genova, Italy Phone: +39 010 71781 504 Mobile: +39 329 6508554 Fax: +39 010 71781 236 E-mail: vittorio.murino at iit.it http://www.iit.it/pavis.html *************************************************************************** From yulei.frank.wu at gmail.com Mon Mar 18 05:39:56 2013 From: yulei.frank.wu at gmail.com (Yulei Wu) Date: Mon, 18 Mar 2013 17:39:56 +0800 Subject: Connectionists: Two Days Left: March 20, 2013 :: The 12th IEEE International Conference on Ubiquitous Computing and Communications (IUCC-2013), 16-18 July 2013, Melbourne, Australia Message-ID: <2013031817395424207610@gmail.com> The 12th IEEE International Conference on Ubiquitous Computing and Communications (IUCC-2013) http://anss.org.au/iucc2013/ 16-18 July 2013, Melbourne, Australia Important Dates Workshop Proposal: March 20, 2013 Submission Deadline: March 20, 2013 (Firm Deadline) Authors Notification: April 20, 2013 Final Manuscript Due: May 15, 2013 Publications Accepted and presented papers will be included in the IEEE CPS Proceedings. Distinguished papers presented at the conference, after further revision, will be published in special issues of the following high quality international journals (pending). - Computers & Security - Elsevier (Impact factor=0.868) - Concurrency and Computation: Practice and Experience - Wiley (Impact factor=0.636) - Security and Communication Networks - Wiley (Impact factor=0.414) - Future Generation Computer Systems - Elsevier (Impact factor=1.978) - Multimedia Tools and Applications - Springer (Impact factor=0.617) - Journal of Internet Technology (Impact factor=0.508) - Journal of Computer and System Sciences - Elsevier (Impact factor=1.157) Topics Ubiquitous Computing Track: - Autonomic Computing - Utility Computing - Cloud Computing - Mobile Computing - Real-Time Computing - Grid and Peer-to-Peer Computing - Energy-Efficient Ubiquitous Computing - Wearable Computers - Embedded Computing - Parallel and Distributed Computing - Information Visualization - Modeling and Analysis of Ubiquitous Computing Systems - Internet Computing - Ambient Intelligence - Middleware and Agent Technologies Ubiquitous Communications Track: - Autonomic Communications - Computer Networking - Communication Theory and Protocols - Wireless Networks - Cognitive Radio - Pervasive Embedded Networks - Ad Hoc and Sensor Networks - RFID - Network Middleware - Enabling Technologies (e.g., Wireless PANs, LANs, Bluetooth) - Location Systems and Technology - Multimedia Communication Systems - Human-Computer Interaction - Mobility Management - Future Networks and Protocols Ubiquitous System, Services and Applications Track: - Context-Aware Applications - Resource Management - Programming Paradigms for Ubiquitous Computing Applications - Smart Home - Pervasive Health - Ubiquitous Platforms - Embedded Systems - E-Commerce and E-Learning - Multimedia Applications - Quality-of-Service - Information Security and Privacy - Security Issues and Protocols - Key Management, Authentication and Authorization - Multimedia Information Security - Forensics and Image Watermarking - Distributed Sensing, Monitoring and Management Systems Organisation Committee General Chairs: - Wanlei Zhou, Deakin University, Australia - Beniamino Di Martino, Seconda Universita di Napoli, Italy - Geyong Min, University of Bradford, UK Program Chairs: - Yang Xiang, Deakin University, Australia - Ahmed Al-Dubai, Edinburgh Napier University, UK - Yulei Wu, 
Chinese Academy of Sciences, China Workshop Chairs: - Marisol Garcia Valls, Universidad Carlos III de Madrid, Spain - Xiaolong Jin, Chinese Academy of Sciences, China - Bahman Javadi, University of Western Sydney, Australia Publicity Chiar: - Roberto Di Pietro, University of Roma 3, Italy - Wenbin Jiang, Huazhong University of Science and Technology, China - Mianxiong Dong, The University of Aizu, Japan Program Committees: Please check the full list at http://anss.org.au/iucc2013/program_committee.html. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann at cs.nyu.edu Sat Mar 23 18:43:58 2013 From: yann at cs.nyu.edu (Yann LeCun) Date: Sat, 23 Mar 2013 18:43:58 -0400 Subject: Connectionists: New MS in Data Science at NYU Message-ID: <514E302E.80300@cs.nyu.edu> Dear All, NYU recently announced the launch of the Center for Data Science (CDS) as part of a university-wide initiative in data science and statistics. The CDS is a multi-disciplinary center that will bring people who produce theory, tools, and methods (from mathematics, statistics and computer science) together with people who need to extract knowledge from data from the physical, life and social sciences, as well as from business, government, and medicine. The CDS is rolling out a new Master's program in Data Science which will begin in September 2013. Applications are now open. The deadline for applications is April 1, but we will consider applications submitted after the deadline. MS in Data Science: http://cds.nyu.edu/academics/ms-in-data-science/ Center for Data Science: http://cds.nyu.edu NYU Data Science Portal: http://nyu.edu/datascience NYU press release: http://bit.ly/YkmuHn -- Yann LeCun -- _____________________________________________________________________ Yann LeCun, Silver Professor. Director, NYU Center for Data Science Prof. of Computer Science, Courant Institute of Mathematical Sciences Professor of Neural Science, Center for Neural Science Professor of Electrical and Computer Engineering, ECE Dept, NYU-Poly New York University, 719 Broadway, Room 1220, New York, NY 10003, USA http://yann.lecun.com email: yann[at]cs.nyu.edu tel:+1(212)998-3283 From mmartimay at gmail.com Mon Mar 25 12:01:46 2013 From: mmartimay at gmail.com (Martina Maggio) Date: Mon, 25 Mar 2013 17:01:46 +0100 Subject: Connectionists: Feedback Computing 2013 Message-ID: CALL FOR PAPERS Feedback Computing 2013 - The 8th International Workshop on Feedback Computing San Jose, California, USA, 25 June 2013 Co-hosted with the 2013 USENIX Federated Conferences Week https://www.usenix.org/conference/feedback13 ----------------------------------------------------------------- IMPORTANT DATES Paper submissions due: March 29, 2013, 11:59 p.m. PDT Notification to authors: April 26, 2013 Final paper files due: May 24, 2013 Workshop date: June 25, 2013 ----------------------------------------------------------------- OVERVIEW Following the success of the past six FeBID workshops and the newly debuted Feedback Computing Workshop in 2012, the 2013 International Workshop on Feedback Computing will be held in June 2013, as part of USENIX Federated Conferences Week in the heart of the Silicon Valley. Feedback Computing is a unique forum built around advancing feedback system theory and practice in modeling, analyzing, designing, and optimizing computing systems. The creation of this workshop represents the growing use of feedback in a broader agenda and is a timely response to the following two trends: 1. 
Computing systems are growing larger, smarter, and more complex, embedding in the physical world, human interactions, and societal infrastructure. Systematic and feedback-driven approaches are critical to address the dynamic complexity that arises in new fields such as cyber-physical systems, cloud computing, social networks, and mobile applications. 2. Advances in disciplines such as machine learning, mathematical optimization, network theories, decision theories, and data engineering provide new foundations and techniques that empower feedback approaches to address computing systems at scale and to achieve goals such as autonomy, adaptation, stabilization, robustness, or performance optimization. ----------------------------------------------------------------- TOPICS The Feedback Computing Workshop seeks original research contributions and position papers on advancing feedback control technologies and their applications in computing systems, broadly defined. Topic of interests include but are not limited to: * Theoretical foundations for feedback computing * New control paradigms and system architecture * Sensing, actuation, and data management in feedback computing * Learning and modeling of computing system dynamics * Design patterns and software engineering * Feedback computing education and awareness * Experiences and best practices from real systems * Applications in domains such as distributed systems, cloud computing, data center resource management, real-time systems, cyber-physical systems, social network, and mobility We encourage research paper submissions expressing original research results, challenge paper submissions motivating new research directions, and application paper submissions elaborating experiences from real systems. In addition, the workshop will leverage extended coffee breaks and lunch break to arrange special meetings and to discuss collaborative research agenda built among the participants. ----------------------------------------------------------------- PAPER SUBMISSIONS Authors are invited to submit three types of papers to emphasize the multiple focuses of this workshop: * Research Papers: Research paper submissions must represent original, unpublished contributions. All submissions should be typeset in two-column format in 10 point type on 12 point (single-spaced) leading, with the text block being no more than 6.5" wide by 9" deep; submission should not exceed 6 pages in length (excluding references). Manuscript templates are available for download from the USENIX templates page. * Challenge Papers: Challenge paper submissions must motivate research challenges with real systems that can take advantage of feedback computing. All submissions should be typeset in two-column format in 10 point type on 12 point (single-spaced) leading, with the text block being no more than 6.5" wide by 9" deep; submission should not exceed 3 pages in length (excluding references). Manuscript templates are available for download from the USENIX templates page. * Application Papers: Application paper submissions must be based on real experience and working systems. All submissions should be formatted as annotated slides?a visual in the upper half of a page and the explanatory text in the lower half?and should not exceed 15 slides in length. All papers are to be submitted via the Web submission form, which will be available here soon, as PDF files. 
Simultaneous submission of the same work to multiple venues, submission of previously published work, or plagiarism constitutes dishonesty or fraud. USENIX, like other scientific and technical conferences and journals, prohibits these practices and may take action against authors who have committed them. See the USENIX Conference Submissions Policy for details. Papers accompanied by nondisclosure agreement forms will not be considered. If you are uncertain whether your submission meets USENIX's guidelines, please contact the program co-chairs, feedback13chairs at usenix.org, or the USENIX office, submissionspolicy at usenix.org. At least one author of an accepted paper is expected to present the paper in person at the workshop. There will not be copyright-transferred formal proceedings for the workshop. The accepted papers will be available online to registered attendees before the conference and will also be distributed via USB drives at the conference. If your accepted paper should not be published prior to the event, please notify production at usenix.org. The papers will be available online to everyone beginning on June 25, 2013. Accepted submissions will be treated as confidential prior to publication on the USENIX Feedback Computing Workshop '13 Web site; rejected submissions will be permanently treated as confidential. One Best Paper Award will be announced at the end of workshop to recognize the current best work in feedback computing. ------------------------------------------------------------------ ORGANIZERS General Chair Yixin Diao, IBM T.J. Watson Research Center Program Co-Chairs Jie Liu, Microsoft Research Ming Zhao, Florida International University Program Committee Sherif Abdelwahed, Mississippi State University Christos Cassandras, Boston University Zonghua Gu, Zhejiang University Jeffrey Kephart, IBM Research Maria Kihl, Lund University Xenofon D. Koutsoukos, Vanderbilt University Charles R. Lefurgy, IBM Research Michael Lemon, University of Notre Dame Xue Liu, McGill University Martina Maggio, Lund University Ole Mengshoel, Carnegle Mellon University Arif Merchant, Google Pradeep Padala, VMware Eric Rutten, INRIA Grenoble Sharad Singhal, HP Labs Eduardo Tovar, Polytechnic Institute of Porto Zhikui Wang, HP Labs Steering Committee Tarek Abdelzaher, University of Illinois at Urbana Champaign Yixin Diao, IBM T.J. Watson Research Center Joseph L. Hellerstein, Google Chenyang Lu, Washington University in St. Louis Anders Robertsson, Lund University Xiaoyun Zhu, VMware Publicity Chair Matina Maggio, Lund University Web Chair Adrian Suarez, Florida International University From ted.carnevale at yale.edu Mon Mar 25 14:25:58 2013 From: ted.carnevale at yale.edu (Ted Carnevale) Date: Mon, 25 Mar 2013 14:25:58 -0400 Subject: Connectionists: Parallelizing NEURON Models course Message-ID: <515096B6.5070402@yale.edu> Space is still available for the course on Parallelizing NEURON Models that we will present June 26-30, 2013, at the Institute for Neural Computation, UCSD, San Diego. This course is designed for investigators who already have a model that they would like to port to parallel hardware, or who are committed to developing such a model. Participants will receive documentation and instruction that distills our experience in creating and using models of cells and circuits that can be simulated on parallel hardware. 
Each day will begin with formal presentations and targeted discussions, with coding sessions in the afternoon so that participants will be able to apply what they have learned while expert assistance is close at hand. For more information about this course and its on-line application form, see http://www.neuron.yale.edu/neuron/static/courses/parnrn2013/parnrn2013.html Please note that: 1. The official registration fee for this course is $1350, but applicants who sign up and pay by Friday, April 12, will be eligible for the early registration rate of $1200. 2. Seating is limited, and the final registration deadline is Friday, May 24. --Ted From m.vanotterlo at donders.ru.nl Mon Mar 25 16:24:49 2013 From: m.vanotterlo at donders.ru.nl (Otterlo, M. van (Martijn)) Date: Mon, 25 Mar 2013 21:24:49 +0100 (CET) Subject: Connectionists: Final CfP: Workshop on Machine Learning for Interactive Systems (abstract registration: April 13) In-Reply-To: <375043728.264645.1364242958983.JavaMail.root@draco.zimbra.ru.nl> Message-ID: <1237573796.264660.1364243089484.JavaMail.root@draco.zimbra.ru.nl> Key Features of MLIS'13: + Proceedings published in the ACM digital library + Three internationally renowned invited speakers __________________________________________________________ IJCAI Workshop on Machine Learning for Interactive Systems (MLIS'13): Bridging the Gap between Perception, Action and Communication August 3-4, 2013, Beijing, China http://mlis-workshop.org/2013 Call for Papers Intelligent systems or robots that interact with their environment by perceiving, acting or communicating often face a challenge in how to bring these different concepts together. One of the main reasons for this challenge is the fact that the core concepts in perception, action and communication are typically studied by different communities: the computer vision, robotics and natural language processing communities, among others, without much interchange between them. As machine learning lies at the core of these communities, it can act as a unifying factor in bringing the communities closer together. Unifying these communities is highly important for understanding how state-of-the-art approaches from different disciplines can be combined (and applied) to form generally interactive intelligent systems. The goal of this workshop is to bring researchers from multiple disciplines together who are in one way or another affected by the gap between action, perception and communication that typically exists for interactive systems or robots. 
Topics of interest include, but are not limited to: Machine Learning: - Reinforcement Learning - Supervised Learning - Unsupervised Learning - Semi-Supervised Learning - Active Learning - Learning from human feedback - Learning from teaching, tutoring, instruction and demonstration - Combinations or generalisations of the above Interactive Systems: - (Socially) Interactive Robotics - Embodied Virtual Agents - Avatars - Multimodal systems - Cognitive (robotics) architectures Types of Communication: - System interacting with a single human user - System interacting with multiple human users - System interacting with the environment - System interacting with other machines Example applications could include: (1) a robot may learn to coordinate its speech with its actions, taking into account visual feedback during their execution; (2) an autonomous car may learn to coordinate its acceleration and steering behaviours depending on observations of obstacles; (3) a team of robots playing soccer may learn to coordinate their ball kicks depending on the dynamic locations of their opponents; (4) a sensorimotor system may learn to drive a wheelchair through feedback from visual signals of the environment; (5) a mobile robot may interactively learn from human guidance how to manipulate objects and move through a building, based on human feedback using language, gestures and interactive dialogue; or (6) a multimodal smart phone can adapt its input and output modalities to the user's goals, workload and surroundings. Submissions can take two forms. Long papers should not exceed 8 pages, and short (position) papers should not exceed 4 pages. They should follow the ACM SIG proceedings format (option 1): http://www.acm.org/sigs/publications/proceedings-templates. All submissions should be anonymised for peer-review. Submission link: https://www.easychair.org/conferences/?conf=mlis2013 Accepted papers will be published by ACM International Conference Proceedings Series under ISBN 978-1-4503-2019-1. The proceedings of MLIS?13 will be available on the ACM digital library on the day of the workshop. Invited Speakers: Prof. Dr. Martin Riedmiller, University of Freiburg, Germany Title: "Learning Machines that Perceive, Act and Communicate" Prof. Dr. Olivier Pietquin, Sup?lec, France Title: "Inverse Reinforcement Learning for Interactive Systems" Dr. 
George Konidaris, MIT, United States Title: "Autonomous Robot Skill Acquisition" Important Dates: April 13, Abstract registration April 20, Paper submission deadline May 20, Notification of acceptance May 30, Camera-ready deadline August 3-4, MLIS workshop Organising Committee: Heriberto Cuayahuitl, Heriot-Watt University, Edinburgh, UK Lutz Frommberger, University of Bremen, Germany Nina Dethlefs, Heriot-Watt University, Edinburgh, UK Martijn van Otterlo, Radboud University Nijmegen, The Netherlands Programme Committee: Kai Arras, University of Freiburg, Germany Maren Bennewitz, University of Freiburg, Germany Dan Bohus, Microsoft Research, USA Martin Butz, University of Tübingen, Germany Paul Crook, Microsoft, USA Mary Ellen Foster, Heriot-Watt University, UK Helen Hastie, Heriot-Watt University, UK Jesse Hoey, University of Waterloo, Canada Filip Jurčíček, Charles University in Prague, Czech Republic Simon Keizer, Heriot-Watt University, UK Kazunori Komatani, Nagoya University, Japan George Konidaris, MIT CSAIL, USA Honghai Liu, University of Portsmouth, UK Ramon Lopez de Mantaras, Spanish Council for Scientific Research, Spain Eduardo Morales, National Institute of Astrophysics, Optics and Electronics, Mexico Plinio Moreno, Instituto Superior Técnico, Portugal Olivier Pietquin, Supelec, France Matthew Purver, Queen Mary University of London, UK Antoine Raux, Honda Research Institute, USA Alex Rudnicky, Carnegie Mellon University, USA Hiroshi Shimodaira, University of Edinburgh, UK Danijel Skocaj, University of Ljubljana, Slovenia Blaise Thomson, University of Cambridge, UK Zhuoran Wang, Heriot-Watt University, UK Marco Wiering, University of Groningen, The Netherlands Jason Williams, Microsoft Research, USA Junichi Yamagishi, University of Edinburgh, UK For all enquiries, please mail: organizers at mlis-workshop.org -- Check out our new/cool/ultimate reinforcement learning book at Springer (March 2012) http://www.springer.com/engineering/computational+intelligence+and+complexity/book/978-3-642-27644-6 Martijn van Otterlo Artificial Intelligence Radboud University Nijmegen The Netherlands http://www.socsci.ru.nl/~martijvo "Don't say it's just your personal opinion; you have no other" (Bomans). From weng at cse.msu.edu Sat Mar 23 19:16:37 2013 From: weng at cse.msu.edu (Juyang Weng) Date: Sat, 23 Mar 2013 19:16:37 -0400 Subject: Connectionists: Computational Modeling of Bilingualism Special Issue In-Reply-To: References: Message-ID: <514E37D5.60400@cse.msu.edu> Ping Li: As far as I understand, traditional connectionist architectures cannot do abstraction well, as Marvin Minsky, Michael Jordan and many others correctly stated. For example, traditional neural networks could not learn a finite automaton (FA) until recently (i.e., the proof of our Developmental Network). We all know that FA is the basis for all probabilistic symbolic networks (e.g., Markov models), but they are all not connectionist. After seeing your announcement, I am confused by the book title "Bilingualism Special Issue: Computational Modeling of Bilingualism" together with your comment "most of the models are based on connectionist architectures." Without further clarification from you, I have to predict that the connectionist architectures in the book are all grossly wrong in terms of brain-capable connectionist natural language processing, since they cannot learn an FA. This means that they cannot generalize to state-equivalent but unobserved word sequences.
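As a minimal illustration of what "learning an FA" and state equivalence mean here, consider the following toy sketch in Python of a deterministic finite automaton over a two-word vocabulary. It is purely illustrative: the state names, transition table and word sequences are invented for this example and are not taken from the Developmental Network or from any model in the special issue. Two word sequences that drive the FA into the same state are state-equivalent, so a learner that has actually captured the FA treats them identically even if one of them never occurred in training.

# Toy deterministic finite automaton (FA); all names here are made up for illustration.
# state -> {input word -> next state}
TRANSITIONS = {
    "q0": {"a": "q1", "b": "q0"},
    "q1": {"a": "q1", "b": "q2"},
    "q2": {"a": "q2", "b": "q2"},  # absorbing state
}
ACCEPTING = {"q2"}

def run_fa(words, start="q0"):
    """Feed a word sequence through the FA and return the final state."""
    state = start
    for word in words:
        state = TRANSITIONS[state][word]
    return state

seq_seen = ["a", "b"]              # suppose this sequence was observed in training
seq_unseen = ["b", "a", "a", "b"]  # never observed, yet it reaches the same state

assert run_fa(seq_seen) == run_fa(seq_unseen) == "q2"
print(run_fa(seq_unseen) in ACCEPTING)  # True: the outcome depends only on the state reached

In these terms, generalizing to "state-equivalent but unobserved word sequences" means producing the same behaviour for seq_unseen as for seq_seen, because both end in q2; a learner that only memorizes observed surface strings has no principled way to do so.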
Without this basic capability required for natural language processing, how can they claim connectionist natural language processing, let alone bilingualism? I am concerned that many papers proceed with specific problems without understanding the fundamental problems of the traditional connectionism. The fact that the biological brain is connectionist does not necessarily mean that all connectionist researchers know about the brain's connectionism. -John Weng On 3/22/13 6:08 PM, Ping Li wrote: > > Dear Colleagues, > > A Special Issue on Computational Modeling of Bilingualism has been > published. Most of the models are based on connectionist architectures. > > All the papers are available for free viewing until April 30, 2013 > (follow the link below to its end): > > http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ > > Please let me know if you have difficulty accessing the above link or > viewing any of the PDF files on Cambridge University Press's website. > > With kind regards, > > Ping Li > > ================================================================= > > Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information > Sciences & Technology | Co-Chair, Inter-College Graduate Program in > Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition > | Pennsylvania State University | University Park, PA 16802, USA | > > Editor, Bilingualism: Language and Cognition, Cambridge University > Press | Associate Editor: Journal of Neurolinguistics, Elsevier > Science Publisher > > Email: pul8 at psu.edu | URL: > http://cogsci.psu.edu > > ================================================================= > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ole.jensen at donders.ru.nl Mon Mar 25 04:31:55 2013 From: ole.jensen at donders.ru.nl (Ole Jensen) Date: Mon, 25 Mar 2013 09:31:55 +0100 Subject: Connectionists: summer school/Donders Institute/Aug 25-30 Message-ID: <51500B7B.2070701@donders.ru.nl> This year the Donders Institute for Brain, Cognition and Behaviour is organizing the Summer school on "Brain Networks and Neuronal Communication" *Sunday 25 August - Friday 30 August 2013* One week of intensive study on brain networks spanning communication between single neurons to large scale network interactions. National and international faculty, with expertise in theoretical neuroscience, animal electrophysiology and human brain imaging will train students on how to build bridges between the various levels of brain functioning. This school will promote vertical integration, promoting new ways of conceptualizing neuronal communication and brain connectivity. Faculty lectures, journal clubs, hands-on computer exercises and demos will guide the daily program. The evenings will be reserved for the students developing novel research proposals and social activities. ** *The school*is directed at national and international graduate students (advanced MSc & PhD students; total number of 30) who wish to develop an integrated perspective on brain networks and neuronal communication; an important component of neuroscience. 
Students are requested to prepare for the school by reading a number of relevant research papers. A reading list will be sent prior to the meeting. The school takes place at Conference Center "De Poort" in Groesbeek (NL), which is nicely situated in the woods around Nijmegen. *Topics include* Experimental techniques, advanced data analysis, computational modeling and computer labs. The program can be found here. *Scientific Organizers* Ole Jensen, Paul Tiesinga, Markus Barth, David Norris, Eric Maris (Donders Institute for Cognition, Brain and Behaviour, Nijmegen, The Netherlands) *Application deadline: 8 April 2013* For more information please visit our website http://www.ru.nl/donders/agenda-news/summer-school-brain/ -- Ole Jensen http://www.neuosc.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From haline.schendan at plymouth.ac.uk Sun Mar 24 08:43:07 2013 From: haline.schendan at plymouth.ac.uk (Haline Schendan) Date: Sun, 24 Mar 2013 12:43:07 +0000 Subject: Connectionists: Multiple permanent faculty positions Message-ID: <185722CDD3EE3F4696E5A788BFE0AA9F12FB0D180F@ILS131.uopnet.plymouth.ac.uk> Please post: Lecturer/Associate Professor (Reader/Senior Lecturer)/Professor in Psychology (x3) Salary: £31,331 to £53,233 pa - Grade 7/8/9 depending on experience. Professorial appointments will be made on the Senior Manager Scale. The School of Psychology at Plymouth University is looking to recruit 3 permanent members of academic staff. Psychology at Plymouth is a large and successful discipline group, comprising 40 full-time academic staff. The School hosts the Centre for Brain, Cognition and Behaviour, which forms part of the multidisciplinary Cognition Research Institute. The centre has active research groups in Cognitive Neuroscience, Thinking and Reasoning, Memory, Vision, Developmental Psychology, Social Psychology, Health and Well Being and Behaviour Change. These groups are housed in recently-built dedicated research facilities, and are supported by a team of technicians and scientific officers. The School's research profile places it in the top third of psychology departments in the UK, with 85% of its research activity rated at international standard. Our programmes include a BSc (Hons) Psychology, MPsych Advanced Psychology (pathways in cognitive neuroscience, clinical, and behaviour change) and MSc Psychological Research Methods. Candidates with expertise in any area of Psychology are encouraged to apply. You will have a track record of research consistent with a significant contribution to the REF and to the future profile of research in Psychology. You will be expected to strengthen an existing research group or build a research programme that complements existing strengths. At least one of the appointments will be made in the area of Social Psychology. Your enthusiasm for your subject will extend to providing excellent teaching at undergraduate and postgraduate levels, informed by rigorous scholarship and your own research experience. You will benefit from a positive and collegiate working environment that values and promotes excellence in all academic activities. For the full Professor post you will have an outstanding research track record with a publication and research funding profile that demonstrates an internationally excellent standing in your field. You will be a research leader, demonstrating both personal research excellence and the potential to drive forward the research agenda of the School.
A competitive salary and start-up package will be available. These are full-time positions working 37 hours per week on a permanent basis. For an informal discussion, please contact Dr Liz Hellier on 01752 584829 or email ehellier at plymouth.ac.uk Closing Date: 12 midnight, Thursday 18 April 2013 Ref: A3161 Jobs.ac.uk Plymouth University is committed to an inclusive culture and respecting diversity, and welcomes applications from all sections of the community -------------- next part -------------- An HTML attachment was scrubbed... URL: From haline.schendan at plymouth.ac.uk Sun Mar 24 08:45:47 2013 From: haline.schendan at plymouth.ac.uk (Haline Schendan) Date: Sun, 24 Mar 2013 12:45:47 +0000 Subject: Connectionists: four fully-funded Phd studentships Message-ID: <185722CDD3EE3F4696E5A788BFE0AA9F12FB0D1811@ILS131.uopnet.plymouth.ac.uk> Funded PhD Studentships for 2013 Plymouth University, Psychology Continuing our programme, started in 2012, the school will be awarding four fully-funded PhD studentships, to start in October 2013. Below are indicative project titles offered by members of staff. Potential students can apply for other related projects which are not on this list with the agreement of the member of staff. So, if you have an idea of your own, we will consider your application if you have first established contact and agreement with a member of staff. Four projects will be funded. The academic record of the applicant and the quality of the application will influence the appointment process. Applying General information about applying for a research degree at the University of Plymouth and application forms are available on the UP website, or by contacting Miss Catherine Johnson. The closing date for applications is 12 noon on Thursday 18th April 2013. Shortlisted candidates will be invited for interview. We regret that we may not be able to respond to all applications. Applicants who have not been contacted by the end of April should consider that their application has been unsuccessful on this occasion. Interviews will take place on the 9th May. Alternative arrangements will be made for an applicant who has an exam on that day. For more information: http://psychology.plymouth.ac.uk/research/funded-phd-studentships-2013/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From janetw at itee.uq.edu.au Mon Mar 25 23:30:22 2013 From: janetw at itee.uq.edu.au (Janet Wiles) Date: Tue, 26 Mar 2013 03:30:22 +0000 Subject: Connectionists: Computational Modeling of Bilingualism Special Issue In-Reply-To: <514E37D5.60400@cse.msu.edu> References: <514E37D5.60400@cse.msu.edu> Message-ID: <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> Recurrent neural networks can represent, and in some cases learn and generalise, classes of languages beyond finite state machines. For a review of their capabilities, see the excellent edited book by Kolen and Kramer; e.g., ch 8 is on "Representation beyond finite states" and ch 9 is "Universal Computation and Super-Turing Capabilities". Kolen and Kramer (2001) "A Field Guide to Dynamical Recurrent Networks", IEEE Press.
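As a minimal illustration of the representational point above: a single recurrent unit whose real-valued state acts as an unbounded counter can recognize the context-free language a^n b^n, which no finite-state machine can. The hand-wired Python sketch below only illustrates that representational point (nothing is learned here), and it is not a construction taken from the book cited above.

# Hand-wired sketch: a recurrent "counter" unit recognizes a^n b^n.
# The unit's real-valued state is incremented on 'a' and decremented on 'b';
# a finite-state check ensures all a's precede all b's.

def accepts_anbn(string):
    h = 0.0            # recurrent state (the counter)
    seen_b = False
    for symbol in string:
        if symbol == "a":
            if seen_b:         # an 'a' after a 'b' breaks the a^n b^n form
                return False
            h += 1.0
        elif symbol == "b":
            seen_b = True
            h -= 1.0
            if h < 0:          # more b's than a's so far
                return False
        else:
            return False
    return h == 0.0            # accept iff the counter returns exactly to zero

tests = ["ab", "aaabbb", "aabbb", "abab", "aaaaabbbbb", ""]
print({s: accepts_anbn(s) for s in tests})
# {'ab': True, 'aaabbb': True, 'aabbb': False, 'abab': False, 'aaaaabbbbb': True, '': True}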
From: connectionists-bounces at mailman.srv.cs.cmu.edu [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang Weng Sent: Sunday, 24 March 2013 9:17 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Computational Modeling of Bilingualism Special Issue Ping Li: As far as I understand, traditional connectionist architectures cannot do abstraction well as Marvin Minsky, Michael Jordan and many others correctly stated. For example, traditional neural networks cannot learn a finite automaton (FA) until recently (i.e., the proof of our Developmental Network). We all know that FA is the basis for all probabilistic symbolic networks (e.g., Markov models) but they are all not connectionist. After seeing your announcement, I am confused with the book title "Bilingualism Special Issue: Computational Modeling of Bilingualism" but with your comment "most of the models are based on connectionist architectures." Without further clarifications from you, I have to predict that these connectionist architectures in the book are all grossly wrong in terms of brain-capable connectionist natural language processing, since they cannot learn an FA. This means that they cannot generalize to state-equivalent but unobserved word sequences. Without this basic capability required for natural language processing, how can they claim connectionist natural language processing, let alone bilingualism? I am concerned that many papers proceed with specific problems without understanding the fundamental problems of the traditional connectionism. The fact that the biological brain is connectionist does not necessarily mean that all connectionist researchers know about the brain's connectionism. -John Weng On 3/22/13 6:08 PM, Ping Li wrote: Dear Colleagues, A Special Issue on Computational Modeling of Bilingualism has been published. Most of the models are based on connectionist architectures. All the papers are available for free viewing until April 30, 2013 (follow the link below to its end): http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ Please let me know if you have difficulty accessing the above link or viewing any of the PDF files on Cambridge University Press's website. With kind regards, Ping Li ================================================================= Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information Sciences & Technology | Co-Chair, Inter-College Graduate Program in Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition | Pennsylvania State University | University Park, PA 16802, USA | Editor, Bilingualism: Language and Cognition, Cambridge University Press | Associate Editor: Journal of Neurolinguistics, Elsevier Science Publisher Email: pul8 at psu.edu | URL: http://cogsci.psu.edu ================================================================= -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gary.marcus at nyu.edu Tue Mar 26 00:09:24 2013 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 26 Mar 2013 00:09:24 -0400 Subject: Connectionists: generalizing language in neural networks [was Re: Computational Modeling of Bilingualism Special Issue] In-Reply-To: <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> Message-ID: <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> I posed some important challenges for language-like generalization in PDP and SRN models in 1998 in an article in Cognitive Psychology, with further discussion in a 1999 Science article (providing data from human infants) and a 2001 MIT Press book, The Algebraic Mind. For example, if one trains a standard PDP autoassociator on identity with integers represented by a distributed representation consisting of binary digits and exposes the model only to even numbers, the model will not generalize to odd numbers (i.e., it will not generalize identity to the least significant bit), even though (depending on the details of implementation) it can generalize to some new even numbers. Another way to put this is that these sorts of models can interpolate within some cloud around a space of training examples, but can't generalize universally-quantified one-to-one mappings outside that space. Likewise, training an Elman-style SRN with localist inputs (one word, one node, as in Elman's work on SRNs) on a set of sentences like "a rose is a rose" and "a tulip is a tulip" leads the model to learn those individual relationships, but not to generalize to "a blicket is a blicket", where blicket represents an untrained node. These problems have to do with a kind of localism that is inherent in the back-propagation rule. In the 2001 book, I discuss some of the ways around them, and the compromises that known workarounds lead to. I believe that some alternative kind of architecture is called for. Since the human brain is pretty quick to generalize universally-quantified one-to-one mappings, even to novel elements, and even on the basis of small amounts of data, I consider these to be important -- but largely unsolved -- problems. The brain must do it, but we still don't really understand how. (J. P. Thivierge and I made one suggestion in this paper in TINS.) Sincerely, Gary Marcus Gary Marcus Professor of Psychology New York University Author of Guitar Zero http://garymarcus.com/ New Yorker blog On Mar 25, 2013, at 11:30 PM, Janet Wiles wrote: > Recurrent neural networks can represent, and in some cases learn and generalise classes of languages beyond finite state machines. For a review, of their capabilities see the excellent edited book by Kolen and Kramer. e.g., ch 8 is on "Representation beyond finite states"; and ch9 is "Universal Computation and Super-Turing Capabilities". > > Kolen and Kramer (2001) "A Field Guide Dynamical Recurrent Networks", IEEE Press. > > From: connectionists-bounces at mailman.srv.cs.cmu.edu [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang Weng > Sent: Sunday, 24 March 2013 9:17 AM > To: connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Computational Modeling of Bilingualism Special Issue > > Ping Li: > > As far as I understand, traditional connectionist architectures cannot do abstraction well as Marvin Minsky, Michael Jordan > and many others correctly stated.
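To make the autoassociator test described at the top of this message concrete, here is a small self-contained Python/NumPy sketch: a feedforward autoassociator is trained with plain backpropagation on 8-bit binary codes of even integers only, then probed with odd integers. Because the least-significant target bit is always 0 during training, the network keeps that output near 0 for every probe, so identity does not extend to the unseen bit. The layer sizes, learning rate and number of epochs are illustrative assumptions, not the setup used in the 1998 article or The Algebraic Mind.

# Sketch of the even/odd identity test: an 8-40-8 autoassociator trained only on
# even integers fails to map the least-significant bit of odd test items.
import numpy as np

rng = np.random.default_rng(0)

def to_bits(n, width=8):
    # index 0 is the least-significant bit
    return np.array([(n >> i) & 1 for i in range(width)], dtype=float)

train = np.stack([to_bits(n) for n in range(0, 256, 2)])   # even numbers only
test  = np.stack([to_bits(n) for n in range(1, 256, 2)])   # odd numbers (unseen)

W1 = rng.normal(0, 0.5, (8, 40)); b1 = np.zeros(40)
W2 = rng.normal(0, 0.5, (40, 8)); b2 = np.zeros(8)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

lr = 1.0
for epoch in range(2000):
    h = sigmoid(train @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    dy = (y - train) / len(train)     # cross-entropy gradient at sigmoid outputs
    dW2 = h.T @ dy;      db2 = dy.sum(0)
    dh  = dy @ W2.T * h * (1 - h)
    dW1 = train.T @ dh;  db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

def predict(x):
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

train_out, test_out = predict(train), predict(test)
print("train LSB mean output:", train_out[:, 0].mean())    # near 0: targets were always 0
print("odd-test LSB mean output:", test_out[:, 0].mean())  # stays near 0: identity not extended
print("odd-test other-bit accuracy:",
      ((test_out[:, 1:] > 0.5) == (test[:, 1:] > 0.5)).mean())  # often high: interpolation near the training set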
For example, traditional neural networks cannot learn a finite automaton (FA) until recently (i.e., > the proof of our Developmental Network). We all know that FA is the basis for all probabilistic symbolic networks (e.g., Markov models) > but they are all not connectionist. > > After seeing your announcement, I am confused with the book title > "Bilingualism Special Issue: Computational Modeling of Bilingualism" but with your comment "most of the models are based on connectionist architectures." > > Without further clarifications from you, I have to predict that these connectionist architectures in the book are all grossly wrong in terms > of brain-capable connectionist natural language processing, since they cannot learn an FA. This means that they cannot generalize to state-equivalent but unobserved word sequences. Without this basic capability required for natural language processing, how can they claim connectionist natural language processing, let alone bilingualism? > > I am concerned that many papers proceed with specific problems without understanding the fundamental problems of the traditional connectionism. The fact that the biological brain is connectionist does not necessarily mean that all connectionist researchers know about the brain's connectionism. > > -John Weng > > On 3/22/13 6:08 PM, Ping Li wrote: > Dear Colleagues, > > A Special Issue on Computational Modeling of Bilingualism has been published. Most of the models are based on connectionist architectures. > > All the papers are available for free viewing until April 30, 2013 (follow the link below to its end): > > http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ > > Please let me know if you have difficulty accessing the above link or viewing any of the PDF files on Cambridge University Press's website. > > With kind regards, > > Ping Li > > > ================================================================= > Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information Sciences & Technology | Co-Chair, Inter-College Graduate Program in Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition | Pennsylvania State University | University Park, PA 16802, USA | > Editor, Bilingualism: Language and Cognition, Cambridge University Press | Associate Editor: Journal of Neurolinguistics, Elsevier Science Publisher > Email: pul8 at psu.edu | URL: http://cogsci.psu.edu > ================================================================= > > > > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email: weng at cse.msu.edu > URL: http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gary.marcus at nyu.edu Tue Mar 26 10:55:47 2013 From: gary.marcus at nyu.edu (Gary Marcus) Date: Tue, 26 Mar 2013 10:55:47 -0400 Subject: Connectionists: generalizing language in neural networks [was Re: Computational Modeling of Bilingualism Special Issue] In-Reply-To: References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> Message-ID: <953C6B95-D7F7-412F-AB9C-95D56A344E88@nyu.edu> Dear Thomas, Thanks for your note. 
I don't doubt that Deep Learning could be souped up to address the task, but my gentle nudge in my New Yorker essay on deep learning (in which I also raised this issue) didn't -- so far as I know -- yield any results. Deep learning is good at creating abstract features such as equivalence classes (when two or more inputs should be treat equally), but less good, I suspect, at the sort of one-to-one mappings problem that I described in my previous message, in which each unique input corresponds to each unique output. Topography, which Thivierge and I pointed to, is a good example of where nature gravitates towards one-to-one maps, in a way that's beginning to be fairly well-understood. In a lot of neural networks, however, individual output nodes are fully orthogonal and logically independent; they get drawn on a page as if they were ordered, but they aren't really, and as a result the localism of their outputs doesn't dovetail well with functions in which ordering matters. If anyone reading this list works on deep learning and would like to to collaborate on this issue, drop me a note. I think it would be an interesting project, regardless of the outcome. Cheers, Gary Gary Marcus Professor of Psychology New York University Author of Guitar Zero http://garymarcus.com/ New Yorker blog On Mar 26, 2013, at 7:16 AM, Thomas Trappenberg wrote: > Hello Garry, > > Keep in mind that simple back-propagating networks are not the ultimate solution but that deep networks (many layers) are necessary for more advanced representation. Ultimately we think that we would even develop higher order abstract representations that would support more human-like generalization. > > Regarding topography, this is a good point. I heard that mice don't have the topography in early visual areas as found in cats etc, which seems to contradict your statement that "topography seems to be mandatory". However, mice are not very visual, and their barrel cortex is organized. > > Regards, Thomas > > --------- > Dr. Thomas Trappenberg > Professor > Faculty of Computer Science > Dalhousie University > Halifax, Canada > > > > On Tue, Mar 26, 2013 at 1:09 AM, Gary Marcus wrote: > I posed some important challenges for language-like generalization in PDP and SRN models in 1998 in an article in Cognitive Psychology, with further discussion in 1999 Science article (providing data from human infants), and a 2001 MIT Press book, The Algebraic Mind. > > For example, if one trains a standard PDP autoassociator on identity with integers represented by distribution representation consisting of binary digits and expose the model only to even numbers, the model will not generalize to odd numbers (i.e., it will not generalize identity to the least significant bit) even though (depending on the details of implementation) it can generalize to some new even numbers. Another way to put this is that these sort of models can interpolate within some cloud around a space of training examples, but can't generalize universally-quanitfied one-to-one mappings outside that space. > > Likewise, training an Elman-style SRN with localist inputs (one word, one node, as in Elman's work on SRNS) on a set of sentences like "a rose is a rose" and "a tulip is a tulip" leads the model to learn those individual relationships, but not to generalize to "a blicket is a blicket", where blicket represents an untrained node. > > These problems have to do with a kind of localism that is inherent in the back-propogation rule. 
In the 2001 book, I discuss some of the ways around them, and the compromises that known workarounds lead to. I believe that some alternative kind of architecture is called for. > > SInce the human brain is pretty quick to generalize universally-quantified one-to-one-mappings, even to novel elements, and even on the basis of small amounts of data, I consider these to be important - but largely unsolved -- problems. The brain must do it, but we still really understand how. (J. P. Thivierge and I made one suggestion in this paper in TINS.) > > Sincerely, > > Gary Marcus > > > Gary Marcus > Professor of Psychology > New York University > Author of Guitar Zero > http://garymarcus.com/ > New Yorker blog > > On Mar 25, 2013, at 11:30 PM, Janet Wiles wrote: > >> Recurrent neural networks can represent, and in some cases learn and generalise classes of languages beyond finite state machines. For a review, of their capabilities see the excellent edited book by Kolen and Kramer. e.g., ch 8 is on "Representation beyond finite states"; and ch9 is "Universal Computation and Super-Turing Capabilities". >> >> Kolen and Kramer (2001) "A Field Guide Dynamical Recurrent Networks", IEEE Press. >> >> From: connectionists-bounces at mailman.srv.cs.cmu.edu [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang Weng >> Sent: Sunday, 24 March 2013 9:17 AM >> To: connectionists at mailman.srv.cs.cmu.edu >> Subject: Re: Connectionists: Computational Modeling of Bilingualism Special Issue >> >> Ping Li: >> >> As far as I understand, traditional connectionist architectures cannot do abstraction well as Marvin Minsky, Michael Jordan >> and many others correctly stated. For example, traditional neural networks cannot learn a finite automaton (FA) until recently (i.e., >> the proof of our Developmental Network). We all know that FA is the basis for all probabilistic symbolic networks (e.g., Markov models) >> but they are all not connectionist. >> >> After seeing your announcement, I am confused with the book title >> "Bilingualism Special Issue: Computational Modeling of Bilingualism" but with your comment "most of the models are based on connectionist architectures." >> >> Without further clarifications from you, I have to predict that these connectionist architectures in the book are all grossly wrong in terms >> of brain-capable connectionist natural language processing, since they cannot learn an FA. This means that they cannot generalize to state-equivalent but unobserved word sequences. Without this basic capability required for natural language processing, how can they claim connectionist natural language processing, let alone bilingualism? >> >> I am concerned that many papers proceed with specific problems without understanding the fundamental problems of the traditional connectionism. The fact that the biological brain is connectionist does not necessarily mean that all connectionist researchers know about the brain's connectionism. >> >> -John Weng >> >> On 3/22/13 6:08 PM, Ping Li wrote: >> Dear Colleagues, >> >> A Special Issue on Computational Modeling of Bilingualism has been published. Most of the models are based on connectionist architectures. 
>> >> All the papers are available for free viewing until April 30, 2013 (follow the link below to its end): >> >> http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ >> >> Please let me know if you have difficulty accessing the above link or viewing any of the PDF files on Cambridge University Press's website. >> >> With kind regards, >> >> Ping Li >> >> >> ================================================================= >> Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information Sciences & Technology | Co-Chair, Inter-College Graduate Program in Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition | Pennsylvania State University | University Park, PA 16802, USA | >> Editor, Bilingualism: Language and Cognition, Cambridge University Press | Associate Editor: Journal of Neurolinguistics, Elsevier Science Publisher >> Email: pul8 at psu.edu | URL: http://cogsci.psu.edu >> ================================================================= >> >> >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tt at cs.dal.ca Tue Mar 26 07:16:39 2013 From: tt at cs.dal.ca (Thomas Trappenberg) Date: Tue, 26 Mar 2013 08:16:39 -0300 Subject: Connectionists: generalizing language in neural networks [was Re: Computational Modeling of Bilingualism Special Issue] In-Reply-To: <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> Message-ID: Hello Garry, Keep in mind that simple back-propagating networks are not the ultimate solution but that deep networks (many layers) are necessary for more advanced representation. Ultimately we think that we would even develop higher order abstract representations that would support more human-like generalization. Regarding topography, this is a good point. I heard that mice don't have the topography in early visual areas as found in cats etc, which seems to contradict your statement that "topography seems to be mandatory". However, mice are not very visual, and their barrel cortex is organized. Regards, Thomas --------- Dr. Thomas Trappenberg Professor Faculty of Computer Science Dalhousie University Halifax, Canada On Tue, Mar 26, 2013 at 1:09 AM, Gary Marcus wrote: > I posed some important challenges for language-like generalization in PDP > and SRN models in 1998 in an article in Cognitive Psychology, > with further discussion in 1999 Science article (providing > data from human infants), and a 2001 MIT Press book, The Algebraic Mind > . > > For example, if one trains a standard PDP autoassociator on identity with > integers represented by distribution representation consisting of binary > digits and expose the model only to even numbers, the model will not > generalize to odd numbers (i.e., it will not generalize identity to the > least significant bit) even though (depending on the details of > implementation) it can generalize to some new even numbers. 
Another way to > put this is that these sort of models can interpolate within some cloud > around a space of training examples, but can't generalize > universally-quanitfied one-to-one mappings outside that space. > > Likewise, training an Elman-style SRN with localist inputs (one word, one > node, as in Elman's work on SRNS) on a set of sentences like "a rose is a > rose" and "a tulip is a tulip" leads the model to learn those individual > relationships, but not to generalize to "a blicket is a blicket", where > blicket represents an untrained node. > > These problems have to do with a kind of localism that is inherent in the > back-propogation rule. In the 2001 book, I discuss some of the ways around > them, and the compromises that known workarounds lead to. I believe that > some alternative kind of architecture is called for. > > SInce the human brain is pretty quick to generalize universally-quantified > one-to-one-mappings, even to novel elements, and even on the basis of small > amounts of data, I consider these to be important - but largely unsolved -- > problems. The brain must do it, but we still really understand how. (J. P. > Thivierge and I made one suggestion in this paper in TINS > .) > > Sincerely, > > Gary Marcus > > > Gary Marcus > Professor of Psychology > New York University > Author of Guitar Zero > http://garymarcus.com/ > New Yorker blog > > On Mar 25, 2013, at 11:30 PM, Janet Wiles wrote: > > Recurrent neural networks can represent, and in some cases learn and > generalise classes of languages beyond finite state machines. For a review, > of their capabilities see the excellent edited book by Kolen and Kramer. > e.g., ch 8 is on "Representation beyond finite states"; and ch9 is > "Universal Computation and Super-Turing Capabilities".**** > > Kolen and Kramer (2001) "A Field Guide Dynamical Recurrent Networks", > IEEE Press.**** > > *From:* connectionists-bounces at mailman.srv.cs.cmu.edu [ > mailto:connectionists-bounces at mailman.srv.cs.cmu.edu > ] *On Behalf Of *Juyang Weng > *Sent:* Sunday, 24 March 2013 9:17 AM > *To:* connectionists at mailman.srv.cs.cmu.edu > *Subject:* Re: Connectionists: Computational Modeling of Bilingualism > Special Issue**** > ** ** > > Ping Li: > > As far as I understand, traditional connectionist architectures cannot do > abstraction well as Marvin Minsky, Michael Jordan > and many others correctly stated. For example, traditional neural > networks cannot learn a finite automaton (FA) until recently (i.e., > the proof of our Developmental Network). We all know that FA is the basis > for all probabilistic symbolic networks (e.g., Markov models) > but they are all not connectionist. > > After seeing your announcement, I am confused with the book title > "Bilingualism Special Issue: Computational Modeling of Bilingualism" but > with your comment "most of the models are based on connectionist > architectures." > > Without further clarifications from you, I have to predict that these > connectionist architectures in the book are all grossly wrong in terms > of brain-capable connectionist natural language processing, since they > cannot learn an FA. This means that they cannot generalize to > state-equivalent but unobserved word sequences. Without this basic > capability required for natural language processing, how can they claim > connectionist natural language processing, let alone bilingualism? 
> > I am concerned that many papers proceed with specific problems without > understanding the fundamental problems of the traditional connectionism. > The fact that the biological brain is connectionist does not necessarily > mean that all connectionist researchers know about the brain's > connectionism. > > -John Weng**** > On 3/22/13 6:08 PM, Ping Li wrote:**** > > Dear Colleagues,**** > **** > A Special Issue on Computational Modeling of Bilingualism has been > published. Most of the models are based on connectionist architectures. ** > ** > **** > All the papers are available for free viewing until April 30, 2013 (follow > the link below to its end):**** > **** > > http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ > **** > **** > Please let me know if you have difficulty accessing the above link or > viewing any of the PDF files on Cambridge University Press's website.**** > **** > With kind regards,**** > **** > Ping Li**** > **** > **** > =================================================================**** > Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information > Sciences & Technology | Co-Chair, Inter-College Graduate Program in > Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition | > Pennsylvania State University | University Park, PA 16802, USA | **** > Editor, Bilingualism: Language and Cognition, Cambridge University Press | > Associate Editor: Journal of Neurolinguistics, Elsevier Science Publisher* > *** > Email: pul8 at psu.edu | URL: http://cogsci.psu.edu**** > =================================================================**** > **** > > > > **** > > -- **** > > --**** > > Juyang (John) Weng, Professor**** > > Department of Computer Science and Engineering**** > > MSU Cognitive Science Program and MSU Neuroscience Program**** > > 428 S Shaw Ln Rm 3115**** > > Michigan State University**** > > East Lansing, MI 48824 USA**** > > Tel: 517-353-4388**** > > Fax: 517-432-1061**** > > Email: weng at cse.msu.edu**** > > URL: http://www.cse.msu.edu/~weng/**** > > ----------------------------------------------**** > > ** ** > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulrich.hofmann at coregen.uni-freiburg.de Tue Mar 26 02:48:44 2013 From: ulrich.hofmann at coregen.uni-freiburg.de (Ulrich Hofmann) Date: Tue, 26 Mar 2013 07:48:44 +0100 Subject: Connectionists: Hands-On Workshop on Neuronal Recording and Stimulation, Freiburg, May 28.-29. Message-ID: <55EEB457-FEED-4066-A61D-4E5BF91772E7@coregen.uni-freiburg.de> Following the big success of our last hands-on workshop in L?beck in December 2012, we invite you to join us for our two-day Hands-On Workshop on Neuronal Recording and Stimulation on the 28th and 29th of May 2013 in Freiburg, Germany. We will have access to five different commercial recording setups and will acquire neuronal responses within several in vivo paradigms from anesthetized rodents. It is the goal of this workshop to go beyond single micro electrode recordings and test the most recent developments in multisite microprobes as well. We consider this workshop to be of high interest to both young researchers in need for early experience in neuronal recording and stimulation and to seasoned electrophysiologists interested in getting first hand experience with the newest technologies. 
While the official website will go online beginning of April, some preliminary information can be found under http://www.uniklinik-freiburg.de/nes/live/msnr.html Participation is limited to 20 persons and we expect it of fill up quickly. The workshop is organized in cooperation with the excellence cluster BrainLinks-BrainTools (www.brainlinks-braintools.uni-freiburg.de). I look forward seeing your there! uli Ulrich G. Hofmann Prof. Dr. rer.nat. Peter-Osypka-Professor for Neuroelectronic Systems Albert-Ludwigs-University Freiburg Department for Neurosurgery Engesserstr. 4, 79108 Freiburg, Germany Tel: +49.761.270 50076 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at psychology.rutgers.edu Tue Mar 26 06:57:10 2013 From: jose at psychology.rutgers.edu (Stephen =?UTF-8?B?Sm9zw6k=?= Hanson) Date: Tue, 26 Mar 2013 06:57:10 -0400 Subject: Connectionists: Computational Modeling of Bilingualism Special Issue In-Reply-To: <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> Message-ID: <20130326065710.40010f8e@sam> "As far as I understand, traditional connectionist architectures cannot do abstraction well as Marvin Minsky, Michael Jordan and many others correctly stated." Actually this not the case, there have been many of us over the years showing that recurrent networks for one do do abstract generalization over types and tokens and literally bootstraps required representational structure as the networks learn. See a couple of papers below. Hanson S. J. & Negishi M., (2002) On the Emergence of Rules in Neural Networks, Neural Computation, 14, 1-24. Hanson, C. & Hanson S. J. (1996), Development of Schemata During Event Parsing: Neisser's Perceptual Cycle as a Recurrent Connectionist Network, Journal of Cognitive Neuroscience, 8, 119-134. And I am pretty sure neither Marvin Minskey or Michael Jordan made a claim about cognitive/perceptual abstraction of recurrent networks. Steve Tue, 26 Mar 2013 03:30:22 +0000 __________ Recurrent neural networks can represent, and in some cases learn and generalise classes of languages beyond finite state machines. For a review, of their capabilities see the excellent edited book by Kolen and Kramer. e.g., ch 8 is on "Representation beyond finite states"; and ch9 is "Universal Computation and Super-Turing Capabilities". Kolen and Kramer (2001) "A Field Guide Dynamical Recurrent Networks", IEEE Press. From: connectionists-bounces at mailman.srv.cs.cmu.edu [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Juyang Weng Sent: Sunday, 24 March 2013 9:17 AM To: connectionists at mailman.srv.cs.cmu.edu Subject: Re: Connectionists: Computational Modeling of Bilingualism Special Issue Ping Li: As far as I understand, traditional connectionist architectures cannot do abstraction well as Marvin Minsky, Michael Jordan and many others correctly stated. For example, traditional neural networks cannot learn a finite automaton (FA) until recently (i.e., the proof of our Developmental Network). We all know that FA is the basis for all probabilistic symbolic networks (e.g., Markov models) but they are all not connectionist. After seeing your announcement, I am confused with the book title "Bilingualism Special Issue: Computational Modeling of Bilingualism" but with your comment "most of the models are based on connectionist architectures." 
Without further clarifications from you, I have to predict that these connectionist architectures in the book are all grossly wrong in terms of brain-capable connectionist natural language processing, since they cannot learn an FA. This means that they cannot generalize to state-equivalent but unobserved word sequences. Without this basic capability required for natural language processing, how can they claim connectionist natural language processing, let alone bilingualism? I am concerned that many papers proceed with specific problems without understanding the fundamental problems of the traditional connectionism. The fact that the biological brain is connectionist does not necessarily mean that all connectionist researchers know about the brain's connectionism. -John Weng On 3/22/13 6:08 PM, Ping Li wrote: Dear Colleagues, A Special Issue on Computational Modeling of Bilingualism has been published. Most of the models are based on connectionist architectures. All the papers are available for free viewing until April 30, 2013 (follow the link below to its end): http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ Please let me know if you have difficulty accessing the above link or viewing any of the PDF files on Cambridge University Press's website. With kind regards, Ping Li ================================================================= Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information Sciences & Technology | Co-Chair, Inter-College Graduate Program in Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition | Pennsylvania State University | University Park, PA 16802, USA | Editor, Bilingualism: Language and Cognition, Cambridge University Press | Associate Editor: Journal of Neurolinguistics, Elsevier Science Publisher Email: pul8 at psu.edu | URL: http://cogsci.psu.edu ================================================================= -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -- Stephen Jos? Hanson Professor Psychology Department Rutgers University Director RUBIC (Rutgers Brain Imaging Center) Director RUMBA (Rutgers Brain/Mind Analysis-NK) Member of Cognitive Science Center (NB) Member EE Graduate Program (NB) Member CS Graduate Program (NB) email: jose at psychology.rutgers.edu web: psychology.rutgers.edu/~jose lab: www.rumba.rutgers.edu fax: 866-434-7959 voice: 973-353-5440 x 1412 > From nigel.goddard at ed.ac.uk Tue Mar 26 08:16:32 2013 From: nigel.goddard at ed.ac.uk (Nigel Goddard) Date: Tue, 26 Mar 2013 12:16:32 +0000 Subject: Connectionists: Fellowships leading to Faculty Positions in Informatics, University of Edinburgh (deadline 18th April) Message-ID: <515191A0.7080707@ed.ac.uk> The School of Informatics, University of Edinburgh is recruiting to several positions which start out as fellowships (no or highly reduced teaching/admin) and progress over 5 years to permanent faculty positions. Machine Learning and Computational Neuroscience, both theoretical and applied, are of interest, particularly where the candidate's research programme complements our considerable existing strengths. 
Core ML and CNS research is concentrated in the Institute for Adaptive and Neural Computation, but most of the other Institutes conduct research that includes applied ML tracks, and there is CNS-related work in the Institute for Perception Action and Behaviour. See https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=012068 for details of the positions and note the short application deadline (18th April, 5 p.m. UK time). -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From weng at cse.msu.edu Tue Mar 26 12:06:32 2013 From: weng at cse.msu.edu (Juyang Weng) Date: Tue, 26 Mar 2013 12:06:32 -0400 Subject: Connectionists: A biological brain is fundamentally an emergent finite automaton In-Reply-To: <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> Message-ID: <5151C788.6090907@cse.msu.edu> Dear Prof. Janet Wiles: Thank you for raising the issue. I hope that this discussion is useful to everybody who has subscribed to this connectionists mailing list. Dr. Paul Werbos, a well-known expert in neural network working at NSF, raised the same point to me, but we two were talking about very different things using the same key words like Finite Automaton (FA). Communications are hard in this fast-paced modern world when the attention span of everybody is too short! Key words like FA fallen into a wrong pocket have wrong meanings. I put some key points (not all) here concisely so that people can get them quickly and understand why Marvin Minsky and Micheal Jordan correctly said (traditional) neural networks (TNNs, including the networks in the edited book by Kolen and Kramer 2001) do not abstract well and why our Developmental Network (DN) has HOLISTICALLY solved a series of major problems with TNN and the traditional AI methods: (1.a) TNNs are open to human programmers. A human programmer simply compiles a task-specific automaton (FA, PDA, or Turing Machine TM, or super TM) into a TNN. If the original automaton can do a specific task, why do you compile it into a network? Simply defend connectionism? A bad move. (1.b) A brain is not open to human programmers. It is "skull-closed" during learning and performance. Thus, a human programmer can only program the Developmental Program (DP) of a DN, but not directly DN itself. The DN must program itself through learning while being regulated by the DP. A DP is genome-like but our DP only simulates the function of genomes for brain development, not the individual genes themselves. The programmer of the DP does not know about the tasks that a DN will learn in its life. The DP is for many DNs. A different DN represents a different life. A DN can learn an open number of unknown tasks, but the TNNs in Kolen and Kramer 2001 do not. (2.a) TNNs do not use fully emergent representations. They use symbolic representations of various degrees. For example, the human programmer specifies what features an area detects. I do not think any TNN in Kolen and Kramer 2001 can do language acquisition. (2.b) All the feature detectors and all the representations in DN are fully emergent with only body-specific constraints (not environment-specific constraints). Thus, a DN can learn to acquire a (simple) language (mother tongue) that the human programmer of the DP does not even know about. (3.a) The states of TNNs are in hidden layers, not directly teachable. 
This is the key architecture reason that all TNN do not abstract well. (3.b) The states of DNs are in the action ports that are open to the physical environment as both input and output, directly teachable and verifiable like a biological brain. (4.a) TNN learning (if a TNN does) is slow, iterative, no global optimality. (4.b) DN learning from any huge FA is immediate. The DN is error-free (one-shot learning) not only for learned sequence but also for state-equivalent sequences that it has NOT observed, critical for language understanding since many sentences are new. The DN is optimal in the sense of maximum likelihood. (5.a) Simulating PDA, TM, or super TM using TNN seems to be on a wrong track, since the brain is not a PDA, TM, or super TM. (5.b) The most fundamental part of a biological brain seems to be an emergent FA (EFA). In theory and in practice, an EFA like DN is able to perform all kinds of practical tasks including of course mathematical logic. The Chapter 10 of my book "Natural and Artificial Intelligence" discusses this large topic with a series of practical examples. All criticisms and comments are welcome. -John On 3/25/13 11:30 PM, Janet Wiles wrote: > > Recurrent neural networks can represent, and in some cases learn and > generalise classes of languages beyond finite state machines. For a > review, of their capabilities see the excellent edited book by Kolen > and Kramer. e.g., ch 8 is on "Representation beyond finite states"; > and ch9 is "Universal Computation and Super-Turing Capabilities". > > Kolenand Kramer (2001) "A Field Guide Dynamical Recurrent Networks", > IEEE Press. > > *From:*connectionists-bounces at mailman.srv.cs.cmu.edu > > [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] *On Behalf Of > *Juyang Weng > *Sent:* Sunday, 24 March 2013 9:17 AM > *To:* connectionists at mailman.srv.cs.cmu.edu > > *Subject:* Re: Connectionists: Computational Modeling of Bilingualism > Special Issue > > Ping Li: > > As far as I understand, traditional connectionist architectures cannot > do abstraction well as Marvin Minsky, Michael Jordan > and many others correctly stated. For example, traditional neural > networks cannot learn a finite automaton (FA) until recently (i.e., > the proof of our Developmental Network). We all know that FA is the > basis for all probabilistic symbolic networks (e.g., Markov models) > but they are all not connectionist. > > After seeing your announcement, I am confused with the book title > "Bilingualism Special Issue: Computational Modeling of Bilingualism" > but with your comment "most of the models are based on connectionist > architectures." > > Without further clarifications from you, I have to predict that these > connectionist architectures in the book are all grossly wrong in terms > of brain-capable connectionist natural language processing, since they > cannot learn an FA. This means that they cannot generalize to > state-equivalent but unobserved word sequences. Without this basic > capability required for natural language processing, how can they > claim connectionist natural language processing, let alone bilingualism? > > I am concerned that many papers proceed with specific problems without > understanding the fundamental problems of the traditional > connectionism. The fact that the biological brain is connectionist > does not necessarily mean that all connectionist researchers know > about the brain's connectionism. 
> > -John Weng > > On 3/22/13 6:08 PM, Ping Li wrote: > > Dear Colleagues, > > A Special Issue on Computational Modeling of Bilingualism has been > published. Most of the models are based on connectionist > architectures. > > All the papers are available for free viewing until April 30, 2013 > (follow the link below to its end): > > http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ > > Please let me know if you have difficulty accessing the above link > or viewing any of the PDF files on Cambridge University Press's > website. > > With kind regards, > > Ping Li > > ================================================================= > > Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information > Sciences & Technology | Co-Chair, Inter-College Graduate Program > in Neuroscience | Co-Director, Center for Brain, Behavior, and > Cognition | Pennsylvania State University | University Park, PA > 16802, USA | > > Editor, Bilingualism: Language and Cognition, Cambridge University > Press | Associate Editor: Journal of Neurolinguistics, Elsevier > Science Publisher > > Email: pul8 at psu.edu | URL: > http://cogsci.psu.edu > > ================================================================= > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at psychology.rutgers.edu Tue Mar 26 13:01:37 2013 From: jose at psychology.rutgers.edu (Stephen =?UTF-8?B?Sm9zw6k=?= Hanson) Date: Tue, 26 Mar 2013 13:01:37 -0400 Subject: Connectionists: Computational Modeling of Bilingualism Special Issue In-Reply-To: <5151D2E2.5060005@cse.msu.edu> References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> <20130326065710.40010f8e@sam> <5151D2E2.5060005@cse.msu.edu> Message-ID: <20130326130137.0f7bd1d5@sam> Not really. The rule-emergence result in the Neural Computation paper is the first (as far as I am aware--happy to be corrected on this point) and only evidence that one can *learn* and transfer rules (grammar) across lexicons. In those experiments, RNNs were trained on the same FSM while the lexicon was changed on each condition; these new lexicons were trained to criterion, and then came a final transfer to a NOVEL lexicon, which the RNN transferred at a 60% savings rate. All the other cases--again, as far as I am aware--including yours, use a hybrid hack to demonstrate "rules" in connectionist networks. I also refer you to an earlier article I wrote for BBS some time ago, which also attempted to broach this issue. Hanson S. J. & Burr, D. J., (1990), What Connectionist Models Learn: Toward a theory of representation in Connectionist Networks, Behavioral and Brain Sciences, 13, 471-518.
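To illustrate the transfer design described above (the same finite-state grammar, with the surface lexicon swapped between conditions), here is a small Python sketch of the data-generation side only. The grammar and the pseudo-word lexicons are invented for illustration; they are not the materials used in Hanson & Negishi (2002), and the RNN training and the savings measurement themselves are omitted.

# Sketch: one finite-state grammar over abstract categories generates strings
# under two different surface lexicons, so only the vocabulary changes between
# the training condition and the "novel lexicon" transfer condition.
import random

random.seed(0)

# FSM: state -> list of (category, next_state); (None, None) means halt.
GRAMMAR = {
    0: [("DET", 1)],
    1: [("NOUN", 2)],
    2: [("VERB", 3)],
    3: [("DET", 4), (None, None)],
    4: [("NOUN", 3)],
}

LEXICON_A = {"DET": ["ga", "gi"], "NOUN": ["wo", "wu"], "VERB": ["ki", "ko"]}   # training lexicon (invented)
LEXICON_B = {"DET": ["ta", "ti"], "NOUN": ["mo", "mu"], "VERB": ["ne", "no"]}   # novel lexicon (invented)

def generate(lexicon, max_len=8):
    """Emit one string from the FSM, filling each category slot from the lexicon."""
    state, words = 0, []
    while state is not None and len(words) < max_len:   # max_len truncates long strings
        category, state = random.choice(GRAMMAR[state])
        if category is None:
            break
        words.append(random.choice(lexicon[category]))
    return " ".join(words)

train_strings    = [generate(LEXICON_A) for _ in range(5)]   # training condition
transfer_strings = [generate(LEXICON_B) for _ in range(5)]   # same FSM, novel lexicon
print(train_strings)
print(transfer_strings)
# A savings measure would compare how quickly an RNN reaches criterion on the
# transfer strings after training on the original lexicon versus from scratch.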
Article-djvu (on my website for download--http://nwkpsych.rutgers.edu/~jose/ Steve Tue, 26 Mar 2013 12:54:58 -0400 __________ Steve, Rule emergence in neural networks has been studied for a very long time. The gap of communication is very wide here, although it is good to communicate. Does my follow-up email to Janet Wiles help to clarify? -John On 3/26/13 6:57 AM, Stephen Jos? Hanson wrote: > "As far as I understand, traditional connectionist architectures > cannot do abstraction well as Marvin Minsky, Michael Jordan and many > others correctly stated." > > Actually this not the case, there have been many of us over the years > showing that recurrent networks for one do do abstract generalization > over types and tokens and literally bootstraps required > representational structure as the networks learn. See a couple of > papers below. > > Hanson S. J. & Negishi M., (2002) On the Emergence of Rules in Neural > Networks, Neural Computation, 14, 1-24. > > Hanson, C. & Hanson S. J. (1996), Development of Schemata During Event > Parsing: Neisser's Perceptual Cycle as a Recurrent Connectionist > Network, Journal of Cognitive Neuroscience, 8, 119-134. > > And I am pretty sure neither Marvin Minskey or Michael Jordan made a > claim about cognitive/perceptual abstraction of recurrent networks. > > > Steve > > > Tue, 26 Mar 2013 03:30:22 +0000 __________ > > Recurrent neural networks can represent, and in some cases learn and > generalise classes of languages beyond finite state machines. For a > review, of their capabilities see the excellent edited book by Kolen > and Kramer. e.g., ch 8 is on "Representation beyond finite states"; > and ch9 is "Universal Computation and Super-Turing Capabilities". > > Kolen and Kramer (2001) "A Field Guide Dynamical Recurrent Networks", > IEEE Press. > > From: > connectionists-bounces at mailman.srv.cs.cmu.edu > [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of > Juyang Weng Sent: Sunday, 24 March 2013 9:17 AM To: > connectionists at mailman.srv.cs.cmu.edu > Subject: Re: Connectionists: Computational Modeling of Bilingualism > Special Issue > > Ping Li: > > As far as I understand, traditional connectionist architectures cannot > do abstraction well as Marvin Minsky, Michael Jordan and many others > correctly stated. For example, traditional neural networks cannot > learn a finite automaton (FA) until recently (i.e., the proof of our > Developmental Network). We all know that FA is the basis for all > probabilistic symbolic networks (e.g., Markov models) but they are all > not connectionist. > > After seeing your announcement, I am confused with the book title > "Bilingualism Special Issue: Computational Modeling of Bilingualism" > but with your comment "most of the models are based on connectionist > architectures." > > Without further clarifications from you, I have to predict that these > connectionist architectures in the book are all grossly wrong in terms > of brain-capable connectionist natural language processing, since they > cannot learn an FA. This means that they cannot generalize to > state-equivalent but unobserved word sequences. Without this basic > capability required for natural language processing, how can they > claim connectionist natural language processing, let alone > bilingualism? > > I am concerned that many papers proceed with specific problems without > understanding the fundamental problems of the traditional > connectionism. 
The fact that the biological brain is connectionist > does not necessarily mean that all connectionist researchers know > about the brain's connectionism. > > -John Weng > On 3/22/13 6:08 PM, Ping Li wrote: > Dear Colleagues, > > A Special Issue on Computational Modeling of Bilingualism has been > published. Most of the models are based on connectionist > architectures. > > All the papers are available for free viewing until April 30, 2013 > (follow the link below to its end): > > http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ > > Please let me know if you have difficulty accessing the above link or > viewing any of the PDF files on Cambridge University Press's website. > > With kind regards, > > Ping Li > > > ================================================================= > Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information > Sciences & Technology | Co-Chair, Inter-College Graduate Program in > Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition > | Pennsylvania State University | University Park, PA 16802, USA | > Editor, Bilingualism: Language and Cognition, Cambridge University > Press | Associate Editor: Journal of Neurolinguistics, Elsevier > Science Publisher Email: pul8 at psu.edu | URL: > http://cogsci.psu.edu > ================================================================= > > > > > -- > > -- > > Juyang (John) Weng, Professor > > Department of Computer Science and Engineering > > MSU Cognitive Science Program and MSU Neuroscience Program > > 428 S Shaw Ln Rm 3115 > > Michigan State University > > East Lansing, MI 48824 USA > > Tel: 517-353-4388 > > Fax: 517-432-1061 > > Email: weng at cse.msu.edu > > URL: http://www.cse.msu.edu/~weng/ > > ---------------------------------------------- > > > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -- Stephen Jos? 
Hanson Professor Psychology Department Rutgers University Director RUBIC (Rutgers Brain Imaging Center) Director RUMBA (Rutgers Brain/Mind Analysis-NK) Member of Cognitive Science Center (NB) Member EE Graduate Program (NB) Member CS Graduate Program (NB) email: jose at psychology.rutgers.edu web: psychology.rutgers.edu/~jose lab: www.rumba.rutgers.edu fax: 866-434-7959 voice: 973-353-5440 x 1412 > From lamb at inf.ufrgs.br Tue Mar 26 13:41:35 2013 From: lamb at inf.ufrgs.br (Luis Lamb) Date: Tue, 26 Mar 2013 14:41:35 -0300 Subject: Connectionists: Call for Papers, 9th International Workshop on Neural-Symbolic Learning and Reasoning Message-ID: <5151DDCF.9040003@inf.ufrgs.br> Call for Papers 9th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy?13) (3 or 4 Aug 2013) http://neural-symbolic.org/NeSy13 In conjunction with IJCAI-13, Beijing, China The Workshop on Neural-Symbolic Learning and Reasoning is intended to create an atmosphere of exchange of ideas, providing a forum for the presentation and discussion of the key topics related to the integration of symbolic and sub-symbolic computation, including: Representation of symbolic knowledge by connectionist systems; New neural-symbolic learning approaches; Extraction of symbolic knowledge from trained neural networks; New neural-symbolic reasoning approaches; Neural-symbolic cognitive models and agents; Biologically-inspired neural-symbolic systems; Knowledge-based SVMs and deep networks; Structured learning and relational learning in connectionist systems; Applications in robotics, simulation, fraud prevention, semantic web, software verification and adaptation, fault diagnosis, bioinformatics, visual intelligence, language processing, etc. Researchers and practitioners are invited to submit original papers that have not been published elsewhere. Submitted papers must be written in English and should not exceed 6 pages in the case of research and experience papers, and 4 pages in the case of position papers (including figures, bibliography and appendices). All submitted papers will be judged based on their quality, relevance, originality, significance, and soundness. Papers must be submitted through easychair at https://www.easychair.org/conferences/?conf=nesy13. Accepted papers will be published in official workshop proceedings, which will be distributed during the workshop. Authors of the best papers will be invited to submit a revised and extended version of their papers to the Journal of Logic and Computation, reasoning and learning corner, OUP. Important Dates: Paper submission deadline: 20 April 2013 Notification of acceptance: 20 May 2013 Camera-ready papers due: 30 May 2013 Workshop day: 3 or 4 Aug 2013 IJCAI-13 main conference: 3 ? 9 Aug 2013 Workshop Organisers Artur d?Avila Garcez (City University London, UK) Pascal Hitzler (Wright State University, USA) Luis Lamb (Universidade Federal do Rio Grande do Sul, Brazil) Programme Committee Howard Bowman, University of Kent, England Claudia d'Amato, University of Bari, Italy Marco Gori, University of Siena, Italy Barbara Hammer, TU Clausthal, Germany Steffen H?lldobler, TU Dresden, Germany Ekaterina Komendantskaya, University of Dundee, Scotland Kai-Uwe K?hnberger, University of Osnabr?ck, Germany Gadi Pinkas, Center for Academic Studies, Israel Florian Roehrbein, Albert Einstein College of Medicine, New York, U.S.A. Rudy Setiono, National University of Singapore Jude Shavlik, University of Wisconsin-Madison, U.S.A. 
Gerson Zaverucha, Universidade Federal do Rio de Janeiro, Brazil Additional Information General questions concerning the workshop should be addressed to a.garcez at city.ac.uk From juergen at idsia.ch Tue Mar 26 12:48:00 2013 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Tue, 26 Mar 2013 17:48:00 +0100 Subject: Connectionists: generalizing language in neural networks [was Re: Computational Modeling of Bilingualism Special Issue] In-Reply-To: <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> Message-ID: <2E463C70-3341-4511-929C-E5F0DAB903EC@idsia.ch> More than a decade ago, Long Short-Term Memory recurrent neural networks (LSTM) learned certain context-free and context-sensitive languages that cannot be represented by finite state automata such as HMMs. Parts of the network became stacks or event counters. F. A. Gers and J. Schmidhuber. LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages. IEEE Transactions on Neural Networks 12(6):1333-1340, 2001. J. Schmidhuber, F. Gers, D. Eck. Learning nonregular languages: A comparison of simple recurrent networks and LSTM. Neural Computation, 14(9):2039-2041, 2002. F. Gers and J. A. Perez-Ortiz and D. Eck and J. Schmidhuber. Learning Context Sensitive Languages with LSTM Trained with Kalman Filters. Proceedings of ICANN'02, Madrid, p 655-660, Springer, Berlin, 2002. Old slides on this: http://www.idsia.ch/~juergen/lstm/sld028.htm Juergen On Mar 26, 2013, at 5:09 AM, Gary Marcus wrote: > I posed some important challenges for language-like generalization > in PDP and SRN models in 1998 in an article in Cognitive Psychology, > with further discussion in 1999 Science article (providing data from > human infants), and a 2001 MIT Press book, The Algebraic Mind. > > For example, if one trains a standard PDP autoassociator on identity > with integers represented by distribution representation consisting > of binary digits and expose the model only to even numbers, the > model will not generalize to odd numbers (i.e., it will not > generalize identity to the least significant bit) even though > (depending on the details of implementation) it can generalize to > some new even numbers. Another way to put this is that these sort of > models can interpolate within some cloud around a space of training > examples, but can't generalize universally-quanitfied one-to-one > mappings outside that space. > > Likewise, training an Elman-style SRN with localist inputs (one > word, one node, as in Elman's work on SRNS) on a set of sentences > like "a rose is a rose" and "a tulip is a tulip" leads the model to > learn those individual relationships, but not to generalize to "a > blicket is a blicket", where blicket represents an untrained node. > > These problems have to do with a kind of localism that is inherent > in the back-propogation rule. In the 2001 book, I discuss some of > the ways around them, and the compromises that known workarounds > lead to. I believe that some alternative kind of architecture is > called for. > > SInce the human brain is pretty quick to generalize universally- > quantified one-to-one-mappings, even to novel elements, and even on > the basis of small amounts of data, I consider these to be important > - but largely unsolved -- problems. The brain must do it, but we > still really understand how. (J. P. 
Thivierge and I made one > suggestion in this paper in TINS.) > > Sincerely, > > Gary Marcus > > > Gary Marcus > Professor of Psychology > New York University > Author of Guitar Zero > http://garymarcus.com/ > New Yorker blog > > On Mar 25, 2013, at 11:30 PM, Janet Wiles > wrote: > >> Recurrent neural networks can represent, and in some cases learn >> and generalise classes of languages beyond finite state machines. >> For a review, of their capabilities see the excellent edited book >> by Kolen and Kramer. e.g., ch 8 is on "Representation beyond finite >> states"; and ch9 is "Universal Computation and Super-Turing >> Capabilities". >> >> Kolen and Kramer (2001) "A Field Guide Dynamical Recurrent >> Networks", IEEE Press. >> >> From: connectionists-bounces at mailman.srv.cs.cmu.edu [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu >> ] On Behalf Of Juyang Weng >> Sent: Sunday, 24 March 2013 9:17 AM >> To: connectionists at mailman.srv.cs.cmu.edu >> Subject: Re: Connectionists: Computational Modeling of Bilingualism >> Special Issue >> >> Ping Li: >> >> As far as I understand, traditional connectionist architectures >> cannot do abstraction well as Marvin Minsky, Michael Jordan >> and many others correctly stated. For example, traditional neural >> networks cannot learn a finite automaton (FA) until recently (i.e., >> the proof of our Developmental Network). We all know that FA is >> the basis for all probabilistic symbolic networks (e.g., Markov >> models) >> but they are all not connectionist. >> >> After seeing your announcement, I am confused with the book title >> "Bilingualism Special Issue: Computational Modeling of >> Bilingualism" but with your comment "most of the models are based >> on connectionist architectures." >> >> Without further clarifications from you, I have to predict that >> these connectionist architectures in the book are all grossly wrong >> in terms >> of brain-capable connectionist natural language processing, since >> they cannot learn an FA. This means that they cannot generalize >> to state-equivalent but unobserved word sequences. Without this >> basic capability required for natural language processing, how can >> they claim connectionist natural language processing, let alone >> bilingualism? >> >> I am concerned that many papers proceed with specific problems >> without understanding the fundamental problems of the traditional >> connectionism. The fact that the biological brain is connectionist >> does not necessarily mean that all connectionist researchers know >> about the brain's connectionism. >> >> -John Weng >> >> On 3/22/13 6:08 PM, Ping Li wrote: >> Dear Colleagues, >> >> A Special Issue on Computational Modeling of Bilingualism has been >> published. Most of the models are based on connectionist >> architectures. >> >> All the papers are available for free viewing until April 30, 2013 >> (follow the link below to its end): >> >> http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ >> >> Please let me know if you have difficulty accessing the above link >> or viewing any of the PDF files on Cambridge University Press's >> website. >> >> With kind regards, >> >> Ping Li >> >> >> ================================================================= >> Ping Li, Ph.D. 
| Professor of Psychology, Linguistics, Information >> Sciences & Technology | Co-Chair, Inter-College Graduate Program >> in Neuroscience | Co-Director, Center for Brain, Behavior, and >> Cognition | Pennsylvania State University | University Park, PA >> 16802, USA | >> Editor, Bilingualism: Language and Cognition, Cambridge University >> Press | Associate Editor: Journal of Neurolinguistics, Elsevier >> Science Publisher >> Email: pul8 at psu.edu | URL: http://cogsci.psu.edu >> ================================================================= >> >> >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email: weng at cse.msu.edu >> URL: http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> > From weng at cse.msu.edu Tue Mar 26 13:43:14 2013 From: weng at cse.msu.edu (Juyang Weng) Date: Tue, 26 Mar 2013 13:43:14 -0400 Subject: Connectionists: generalizing language in neural networks [was Re: Computational Modeling of Bilingualism Special Issue] In-Reply-To: <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> Message-ID: <5151DE32.7060007@cse.msu.edu> Gary, this is called (general-purpose) transfer in psychology. We provided a general-purpose solution to transfer using the theoretical framework of DN inspired by brain anatomy. The paper discussed in the following short article gives an example in theoretically modeling and experimentally simulating autonomous transfer in perceptual learning: When a Reviewer's Comments Became Longer than the Submitted Paper banner 16 - 17 by /Mojtaba Solgi / *Abstract: *The debate over the pros and cons of the so-called scholarly peer review of journals is as old as itself. In this short essay, I wish not to take a side, but to simply tell a story. The story is of how a recently published paper [1] was first hammered by the reviewers as vague, incomprehensible and worthy of rejection, and later praised as "unorthodox" and worthy of the attention of the research community. *Index terms: *Peer review, research evaluation, perceptual learning, transfer -John On 3/26/13 12:09 AM, Gary Marcus wrote: > SInce the human brain is pretty quick to generalize > universally-quantified one-to-one-mappings, even to novel elements, > and even on the basis of small amounts of data, I consider these to be > important - but largely unsolved -- problems. -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: BMM-V2-N1-a6-SolgiMojtaba-i.gif Type: image/gif Size: 434572 bytes Desc: not available URL: From weng at cse.msu.edu Tue Mar 26 12:47:48 2013 From: weng at cse.msu.edu (Juyang Weng) Date: Tue, 26 Mar 2013 12:47:48 -0400 Subject: Connectionists: Computational Modeling of Bilingualism Special Issue In-Reply-To: References: <514E37D5.60400@cse.msu.edu> Message-ID: <5151D134.6090402@cse.msu.edu> Gary, I have cited some classical references about this FA learning issue in my recent book NAI. I guess that we are again talking about different things as I stated in my last email. My last email might be useful. If you give me a reference of "easily learned by connectionist architectures", we can communicate more effectively. For example, I had big problems in communication with Jeff Hawkins, until he came to MSU this year and we had interactive emails after his visit. His HTM belongs to symbolic AI because it consists of task-specific (or feature-specific) modules. Sorry, Jeff, please do not take this comment personally. Many people did this way. -John On 3/26/13 1:18 AM, Gary Cottrell wrote: > wow, I haven't heard this kind of misunderstanding in a long time. > FSA's are easily learned by connectionist architectures. It is PDA's > that are hard. > On Mar 23, 2013, at 4:16 PM, Juyang Weng wrote: > >> Ping Li: >> >> As far as I understand, traditional connectionist architectures >> cannot do abstraction well as Marvin Minsky, Michael Jordan >> and many others correctly stated. For example, traditional neural >> networks cannot learn a finite automaton (FA) until recently (i.e., >> the proof of our Developmental Network). We all know that FA is the >> basis for all probabilistic symbolic networks (e.g., Markov models) >> but they are all not connectionist. >> >> After seeing your announcement, I am confused with the book title >> "Bilingualism Special Issue: Computational Modeling of Bilingualism" >> but with your comment "most of the models are based on connectionist >> architectures." >> >> Without further clarifications from you, I have to predict that these >> connectionist architectures in the book are all grossly wrong in terms >> of brain-capable connectionist natural language processing, since >> they cannot learn an FA. This means that they cannot generalize to >> state-equivalent but unobserved word sequences. Without this basic >> capability required for natural language processing, how can they >> claim connectionist natural language processing, let alone bilingualism? >> >> I am concerned that many papers proceed with specific problems >> without understanding the fundamental problems of the traditional >> connectionism. The fact that the biological brain is connectionist >> does not necessarily mean that all connectionist researchers know >> about the brain's connectionism. >> >> -John Weng >> >> On 3/22/13 6:08 PM, Ping Li wrote: >>> >>> Dear Colleagues, >>> >>> >>> A Special Issue on Computational Modeling of Bilingualism has been >>> published. Most of the models are based on connectionist architectures. >>> >>> >>> All the papers are available for free viewing until April 30, 2013 >>> (follow the link below to its end): >>> >>> >>> http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ >>> >>> >>> Please let me know if you have difficulty accessing the above link >>> or viewing any of the PDF files on Cambridge University Press's website. 
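Weng's phrase "state-equivalent but unobserved word sequences" in the exchange above has a concrete reading that may help readers following the thread: two different prefixes that drive a finite automaton into the same state license exactly the same continuations, so a learner that recovers the state structure generalizes from one prefix to the other for free. The toy automaton below illustrates that notion only; the states and words are invented, and whether any particular connectionist learner actually recovers such structure from example strings is precisely what is being debated here.

# Toy illustration of state equivalence: two different prefixes that drive a
# DFA into the same state license exactly the same continuations.  The
# automaton and the words are invented.
DFA = {  # (state, word) -> next state
    ('q0', 'the'): 'q1',
    ('q1', 'dog'): 'q2',
    ('q1', 'old'): 'q1',
    ('q2', 'runs'): 'q3',
    ('q2', 'sleeps'): 'q3',
}
ACCEPT = {'q3'}

def run(prefix, state='q0'):
    for w in prefix:
        state = DFA.get((state, w))
        if state is None:
            return None
    return state

p1 = ['the', 'dog']
p2 = ['the', 'old', 'old', 'dog']      # unobserved prefix, same state as p1
assert run(p1) == run(p2) == 'q2'

# Any continuation legal after p1 is legal after p2, and vice versa:
for cont in (['runs'], ['sleeps']):
    assert (run(p1 + cont) in ACCEPT) == (run(p2 + cont) in ACCEPT)
print("state-equivalent prefixes accept the same continuations")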
>>> >>> >>> With kind regards, >>> >>> >>> Ping Li >>> >>> >>> >>> ================================================================= >>> >>> Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information >>> Sciences & Technology | Co-Chair, Inter-College Graduate Program >>> in Neuroscience | Co-Director, Center for Brain, Behavior, and >>> Cognition | Pennsylvania State University | University Park, PA >>> 16802, USA | >>> >>> Editor, Bilingualism: Language and Cognition, Cambridge University >>> Press | Associate Editor: Journal of Neurolinguistics, Elsevier >>> Science Publisher >>> >>> Email: pul8 at psu.edu | URL: >>> http://cogsci.psu.edu >>> >>> ================================================================= >>> >>> >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email:weng at cse.msu.edu >> URL:http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > "Probably once or twice a week we are sitting at dinner and Richard > says, 'The cortex is hopeless,' and I say, 'That's why I work on the > worm.'" Dr. Bargmann said. > > "A grapefruit is a lemon that saw an opportunity and took advantage of > it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." -Matt > Tong > > "Only connect!" -E.M. Forster > > "You always have to believe that tomorrow you might write the matlab > program that solves everything - otherwise you never will." -Geoff Hinton > > "There is nothing objective about objective functions" - Jay McClelland > > "I am awaiting the day when people remember the fact that discovery > does not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > > > > > > > > > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose at psychology.rutgers.edu Tue Mar 26 14:55:15 2013 From: jose at psychology.rutgers.edu (Stephen =?UTF-8?B?Sm9zw6k=?= Hanson) Date: Tue, 26 Mar 2013 14:55:15 -0400 Subject: Connectionists: Computational Modeling of Bilingualism Special Issue In-Reply-To: <5151D134.6090402@cse.msu.edu> References: <514E37D5.60400@cse.msu.edu> <5151D134.6090402@cse.msu.edu> Message-ID: <20130326145515.793c681d@sam> I just sent one before. There are tons of references starting NIPS going back 2 decades, for heavens to betsy! Steve Tue, 26 Mar 2013 12:47:48 -0400 __________ Gary, I have cited some classical references about this FA learning issue in my recent book NAI. I guess that we are again talking about different things as I stated in my last email. 
My last email might be useful. If you give me a reference of "easily learned by connectionist architectures", we can communicate more effectively. For example, I had big problems in communication with Jeff Hawkins, until he came to MSU this year and we had interactive emails after his visit. His HTM belongs to symbolic AI because it consists of task-specific (or feature-specific) modules. Sorry, Jeff, please do not take this comment personally. Many people did this way. -John On 3/26/13 1:18 AM, Gary Cottrell wrote: > wow, I haven't heard this kind of misunderstanding in a long time. > FSA's are easily learned by connectionist architectures. It is PDA's > that are hard. > On Mar 23, 2013, at 4:16 PM, Juyang Weng wrote: > >> Ping Li: >> >> As far as I understand, traditional connectionist architectures >> cannot do abstraction well as Marvin Minsky, Michael Jordan >> and many others correctly stated. For example, traditional neural >> networks cannot learn a finite automaton (FA) until recently (i.e., >> the proof of our Developmental Network). We all know that FA is the >> basis for all probabilistic symbolic networks (e.g., Markov models) >> but they are all not connectionist. >> >> After seeing your announcement, I am confused with the book title >> "Bilingualism Special Issue: Computational Modeling of Bilingualism" >> but with your comment "most of the models are based on connectionist >> architectures." >> >> Without further clarifications from you, I have to predict that >> these connectionist architectures in the book are all grossly wrong >> in terms of brain-capable connectionist natural language processing, >> since they cannot learn an FA. This means that they cannot >> generalize to state-equivalent but unobserved word sequences. >> Without this basic capability required for natural language >> processing, how can they claim connectionist natural language >> processing, let alone bilingualism? >> >> I am concerned that many papers proceed with specific problems >> without understanding the fundamental problems of the traditional >> connectionism. The fact that the biological brain is connectionist >> does not necessarily mean that all connectionist researchers know >> about the brain's connectionism. >> >> -John Weng >> >> On 3/22/13 6:08 PM, Ping Li wrote: >>> >>> Dear Colleagues, >>> >>> >>> A Special Issue on Computational Modeling of Bilingualism has been >>> published. Most of the models are based on connectionist >>> architectures. >>> >>> >>> All the papers are available for free viewing until April 30, 2013 >>> (follow the link below to its end): >>> >>> >>> http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ >>> >>> >>> Please let me know if you have difficulty accessing the above link >>> or viewing any of the PDF files on Cambridge University Press's >>> website. >>> >>> >>> With kind regards, >>> >>> >>> Ping Li >>> >>> >>> >>> ================================================================= >>> >>> Ping Li, Ph.D. 
| Professor of Psychology, Linguistics, Information >>> Sciences & Technology | Co-Chair, Inter-College Graduate Program >>> in Neuroscience | Co-Director, Center for Brain, Behavior, and >>> Cognition | Pennsylvania State University | University Park, PA >>> 16802, USA | >>> >>> Editor, Bilingualism: Language and Cognition, Cambridge University >>> Press | Associate Editor: Journal of Neurolinguistics, Elsevier >>> Science Publisher >>> >>> Email: pul8 at psu.edu | URL: >>> http://cogsci.psu.edu >>> >>> ================================================================= >>> >>> >> >> -- >> -- >> Juyang (John) Weng, Professor >> Department of Computer Science and Engineering >> MSU Cognitive Science Program and MSU Neuroscience Program >> 428 S Shaw Ln Rm 3115 >> Michigan State University >> East Lansing, MI 48824 USA >> Tel: 517-353-4388 >> Fax: 517-432-1061 >> Email:weng at cse.msu.edu >> URL:http://www.cse.msu.edu/~weng/ >> ---------------------------------------------- >> > > Gary Cottrell 858-534-6640 FAX: 858-534-7029 > > My schedule is here: http://tinyurl.com/b7gxpwo > > Computer Science and Engineering 0404 > IF USING FED EX INCLUDE THE FOLLOWING LINE: > CSE Building, Room 4130 > University of California San Diego > 9500 Gilman Drive # 0404 > La Jolla, Ca. 92093-0404 > > "Probably once or twice a week we are sitting at dinner and Richard > says, 'The cortex is hopeless,' and I say, 'That's why I work on the > worm.'" Dr. Bargmann said. > > "A grapefruit is a lemon that saw an opportunity and took advantage > of it." - note written on a door in Amsterdam on Lijnbaansgracht. > > "Physical reality is great, but it has a lousy search function." > -Matt Tong > > "Only connect!" -E.M. Forster > > "You always have to believe that tomorrow you might write the matlab > program that solves everything - otherwise you never will." -Geoff > Hinton > > "There is nothing objective about objective functions" - Jay > McClelland > > "I am awaiting the day when people remember the fact that discovery > does not work by deciding what you want and then discovering it." > -David Mermin > > Email: gary at ucsd.edu > Home page: http://www-cse.ucsd.edu/~gary/ > > > > > > > > > > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -- Stephen Jos? Hanson Professor Psychology Department Rutgers University Director RUBIC (Rutgers Brain Imaging Center) Director RUMBA (Rutgers Brain/Mind Analysis-NK) Member of Cognitive Science Center (NB) Member EE Graduate Program (NB) Member CS Graduate Program (NB) email: jose at psychology.rutgers.edu web: psychology.rutgers.edu/~jose lab: www.rumba.rutgers.edu fax: 866-434-7959 voice: 973-353-5440 x 1412 > From benjamin.lindner at physik.hu-berlin.de Wed Mar 27 05:03:54 2013 From: benjamin.lindner at physik.hu-berlin.de (Benjamin Lindner) Date: Wed, 27 Mar 2013 10:03:54 +0100 Subject: Connectionists: Post-Doc position at the BCCN Berlin Message-ID: <5152B5FA.8020002@physik.hu-berlin.de> Applications are invited for a 2.5-years post-doctoral position in the group of Prof. Lindner (Theory of Complex Systems and Neurophysics) at the Bernstein Center for Computational Neuroscience Berlin starting in June, 2013. 
The focus of the theoretical project is on the spreading of activity and the signal transmission in recurrent networks of spiking neurons. This is motivated by reverse physiology experiments at the BCCN Berlin, in which evoked single-neuron activity in the cortex causes behavioral or motor responses. Ideally, theoretical insights will be instrumental for the interpretation of experimental data and the formulation of novel experimental questions. In order to pursue this kind of research, the successful candidate should have a strong interest in developing analytical approaches for the stochastic dynamics of single neurons and neural networks. A background in theoretical physics, nonlinear dynamics, and/or the theory of stochastic processes is thus advantageous although not obligatory. The position will also entail a moderate amount of teaching as well as the co-supervision of PhD and Master students. Applications, including a motivation letter, a CV, and, a list of three potential referees should be send by email to benjamin.lindner at physik.hu-berlin.de (cc to nikola.schrenk at bccn-berlin.de) Prof. Dr. B. Lindner Humboldt-Universit?t zu Berlin Bernstein Center for Computational Neuroscience Institut f?r Physik Sitz: Philippstr. 13, Haus 2 10115 Berlin Postanschrift: Unter den Linden 6 10099 Berlin Deadline for the application is April 5 2013, however, later applications may be also considered. -- -------------------------------------------------------------------------------------------------------------------- Benjamin Lindner Theory of Complex Systems and Neurophysics Bernstein Center for Computational Neuroscience Berlin Philippstr. 13, Haus 2, 10115 Berlin Room: 1.17, phone: 0049(0)302093 6336 Department of Physics Humboldt University Berlin Newtonstr. 15 12489 Berlin Room: 3.408, phone: 0049(0)302093 7934 http://people.physik.hu-berlin.de/~lindner/index.html -------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From giles at ist.psu.edu Tue Mar 26 15:52:39 2013 From: giles at ist.psu.edu (Lee Giles) Date: Tue, 26 Mar 2013 15:52:39 -0400 Subject: Connectionists: generalizing language in neural networks [was Re: Computational Modeling of Bilingualism Special Issue] In-Reply-To: <2E463C70-3341-4511-929C-E5F0DAB903EC@idsia.ch> References: <514E37D5.60400@cse.msu.edu> <7F1713CE870C2941A370868EACF230892BBA0D@UQEXMDA6.soe.uq.edu.au> <7C3DBA3E-CDFA-45C7-A7EB-C55AEEB99631@nyu.edu> <2E463C70-3341-4511-929C-E5F0DAB903EC@idsia.ch> Message-ID: <5151FC87.1020304@ist.psu.edu> Here are some of our journal papers on this topic. Our focus was primarily on grammatical inference. Best Lee Giles > 1. C.L. Giles, S. Lawrence, A-C. Tsoi, "Noisy Time Series > Prediction Using a Recurrent Neural Network and Grammatical > Inference," Machine Learning, 44, 161-183, 2001. > > 2. S. Lawrence, C.L. Giles, S. Fong, "Natural Language Grammatical > Inference with Recurrent Neural Networks," IEEE Trans. on Knowledge > and Data Engineering, 12(1), p.126, 2000. > > 3. C.L. Giles, C.W. Omlin, K. K. Thornber, "Equivalence in > Knowledge Representation: Automata, Recurrent Neural Networks, and > Dynamical Fuzzy Systems," Proceedings of the IEEE, 87(9), 1623-1640, > 1999 (invited). > > 4. C.W. Omlin, K. K. Thornber, C.L. Giles, "Deterministic Fuzzy > Finite State Automata Can Be Deterministically Encoded into Recurrent > Neural Networks," IEEE Trans. on Fuzzy Systems, 6(1), p. 
76, 1998. > > 5. D.S. Clouse, C.L. Giles, B.G. Horne, G.W. Cottrell, "Time-Delay > Neural Networks: Representation and Induction of Finite State > Machines," IEEE Trans. on Neural Networks, 8(5), p. 1065, 1997. > > 6. H.T. Siegelmann, C.L.Giles, "The Complexity of Language > Recognition by Neural Networks," Neurocomputing, Special Issue on > "Recurrent Networks for Sequence Processing," (eds) M. Gori, M. Mozer, > A.H. Tsoi, W. Watrous, 15, p. 327, 1997. > > 7. H.T. Siegelmann, B.G. Horne, C.L. Giles, "Computational > capabilities of recurrent NARX neural networks," IEEE Trans. on > Systems, Man and Cybernetics: Part B - Cybernetics, 27(2), p.208, 1997. > > 8. S. Lawrence, C.L. Giles, A-C. Tsoi, A. Back, "Face Recognition: > A Convolutional Neural Network Approach," IEEE Trans. on Neural > Networks, Special Issue on "Pattern Recognition" 8(1), p. 98, 1997. > > 9. C.W. Omlin, C.L. Giles, "Constructing Deterministic > Finite-State Automata in Recurrent Neural Networks," Journal of the > ACM, 45(6), p. 937, 1996. > > 10. C.W. Omlin, C.L. Giles, "Rule Revision with Recurrent Neural > Networks," IEEE Trans. on Knowledge and Data Engineering, 8(1), p. > 183, 1996. > > 11. C.W. Omlin, C.L. Giles, "Stable Encoding of Large Finite-State > Automata in Recurrent Neural Networks with Sigmoid Discriminants," > Neural Computation, 8(4), p. 675, 1996. > > 12. C.W. Omlin, C.L. Giles, "Extraction of Rules from Discrete-Time > Recurrent Neural Networks," Neural Networks, 9(1), p. 41. 1996. > > 13. C.L. Giles, B.G. Horne, T. Lin, "Learning a Class of Large > Finite State Machines with a Recurrent Neural Network," Neural > Networks, 8(9), p. 1359, 1995. > > 14. C.L. Giles, C.W. Omlin, "Extraction, Insertion and Refinement of > Production Rules in Recurrent Neural Networks," Connection Science, > Special Issue on "Architectures for Integrating Symbolic and Neural > Processes" 5(3-4), p. 307, 1993. > > 15. C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, Y.C. Lee, > "Learning and Extracting Finite State Automata with Second-Order > Recurrent Neural Networks," Neural Computation, 4(3), 393-405, 1992. > On 3/26/13 12:48 PM, Juergen Schmidhuber wrote: > More than a decade ago, Long Short-Term Memory recurrent neural > networks (LSTM) learned certain context-free and context-sensitive > languages that cannot be represented by finite state automata such as > HMMs. Parts of the network became stacks or event counters. > > F. A. Gers and J. Schmidhuber. LSTM Recurrent Networks Learn Simple > Context Free and Context Sensitive Languages. IEEE Transactions on > Neural Networks 12(6):1333-1340, 2001. > > J. Schmidhuber, F. Gers, D. Eck. Learning nonregular languages: A > comparison of simple recurrent networks and LSTM. Neural Computation, > 14(9):2039-2041, 2002. > > F. Gers and J. A. Perez-Ortiz and D. Eck and J. Schmidhuber. Learning > Context Sensitive Languages with LSTM Trained with Kalman Filters. > Proceedings of ICANN'02, Madrid, p 655-660, Springer, Berlin, 2002. > > > Old slides on this: > http://www.idsia.ch/~juergen/lstm/sld028.htm > > Juergen > > > > On Mar 26, 2013, at 5:09 AM, Gary Marcus wrote: > >> I posed some important challenges for language-like generalization in >> PDP and SRN models in 1998 in an article in Cognitive Psychology, >> with further discussion in 1999 Science article (providing data from >> human infants), and a 2001 MIT Press book, The Algebraic Mind. 
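For readers who want to try the kind of experiment Schmidhuber cites above (LSTM on languages such as a^n b^n c^n that no finite automaton can recognize), here is a minimal, illustrative sketch. It is not the Gers & Schmidhuber code: for brevity it recasts the task as accept/reject classification rather than next-symbol prediction, and the architecture, corruption scheme, and hyperparameters are arbitrary choices made for this example.

# Illustrative sketch (not the Gers & Schmidhuber code): a small LSTM trained
# on strings over {a, b, c}, recast here as accept/reject classification of
# a^n b^n c^n for brevity (the cited papers used next-symbol prediction).
# Architecture and hyperparameters are arbitrary.
import random
import torch
import torch.nn as nn

VOCAB = {'a': 0, 'b': 1, 'c': 2}

def make_example(rng, n_max=10):
    n = rng.randint(1, n_max)
    s = ['a'] * n + ['b'] * n + ['c'] * n
    label = 1.0
    if rng.random() < 0.5:                    # insert a stray symbol: negative
        i = rng.randrange(len(s))
        s = s[:i] + [rng.choice('abc')] + s[i:]
        label = 0.0
    return torch.tensor([VOCAB[ch] for ch in s]), torch.tensor(label)

class Acceptor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.emb = nn.Embedding(len(VOCAB), 8)
        self.lstm = nn.LSTM(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (seq_len,) of symbol ids
        h, _ = self.lstm(self.emb(x).unsqueeze(0))
        return self.out(h[:, -1]).squeeze()   # score from the final state

rng = random.Random(0)
model = Acceptor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    x, y = make_example(rng)
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Probe generalization to strings longer than anything seen in training.
with torch.no_grad():
    long_pos = torch.tensor([VOCAB[ch] for ch in 'a'*15 + 'b'*15 + 'c'*15])
    long_neg = torch.tensor([VOCAB[ch] for ch in 'a'*15 + 'b'*14 + 'c'*15])
    print(torch.sigmoid(model(long_pos)).item(),
          torch.sigmoid(model(long_neg)).item())

Gers and Schmidhuber report that parts of the trained networks came to behave like counters or stacks on such languages; the sketch above makes no such claim and only sets up the kind of data on which that behavior was observed.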
>> >> For example, if one trains a standard PDP autoassociator on identity >> with integers represented by distribution representation consisting >> of binary digits and expose the model only to even numbers, the model >> will not generalize to odd numbers (i.e., it will not generalize >> identity to the least significant bit) even though (depending on the >> details of implementation) it can generalize to some new even >> numbers. Another way to put this is that these sort of models can >> interpolate within some cloud around a space of training examples, >> but can't generalize universally-quanitfied one-to-one mappings >> outside that space. >> >> Likewise, training an Elman-style SRN with localist inputs (one word, >> one node, as in Elman's work on SRNS) on a set of sentences like "a >> rose is a rose" and "a tulip is a tulip" leads the model to learn >> those individual relationships, but not to generalize to "a blicket >> is a blicket", where blicket represents an untrained node. >> >> These problems have to do with a kind of localism that is inherent in >> the back-propogation rule. In the 2001 book, I discuss some of the >> ways around them, and the compromises that known workarounds lead >> to. I believe that some alternative kind of architecture is called for. >> >> SInce the human brain is pretty quick to generalize >> universally-quantified one-to-one-mappings, even to novel elements, >> and even on the basis of small amounts of data, I consider these to >> be important - but largely unsolved -- problems. The brain must do >> it, but we still really understand how. (J. P. Thivierge and I made >> one suggestion in this paper in TINS.) >> >> Sincerely, >> >> Gary Marcus >> >> >> Gary Marcus >> Professor of Psychology >> New York University >> Author of Guitar Zero >> http://garymarcus.com/ >> New Yorker blog >> >> On Mar 25, 2013, at 11:30 PM, Janet Wiles wrote: >> >>> Recurrent neural networks can represent, and in some cases learn and >>> generalise classes of languages beyond finite state machines. For a >>> review, of their capabilities see the excellent edited book by Kolen >>> and Kramer. e.g., ch 8 is on "Representation beyond finite states"; >>> and ch9 is "Universal Computation and Super-Turing Capabilities". >>> >>> Kolen and Kramer (2001) "A Field Guide Dynamical Recurrent >>> Networks", IEEE Press. >>> >>> From: connectionists-bounces at mailman.srv.cs.cmu.edu >>> [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of >>> Juyang Weng >>> Sent: Sunday, 24 March 2013 9:17 AM >>> To: connectionists at mailman.srv.cs.cmu.edu >>> Subject: Re: Connectionists: Computational Modeling of Bilingualism >>> Special Issue >>> >>> Ping Li: >>> >>> As far as I understand, traditional connectionist architectures >>> cannot do abstraction well as Marvin Minsky, Michael Jordan >>> and many others correctly stated. For example, traditional neural >>> networks cannot learn a finite automaton (FA) until recently (i.e., >>> the proof of our Developmental Network). We all know that FA is the >>> basis for all probabilistic symbolic networks (e.g., Markov models) >>> but they are all not connectionist. >>> >>> After seeing your announcement, I am confused with the book title >>> "Bilingualism Special Issue: Computational Modeling of Bilingualism" >>> but with your comment "most of the models are based on connectionist >>> architectures." 
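The autoassociator test Marcus describes above is easy to reproduce in outline. The sketch below is one possible instantiation, not his original simulations; the architecture and training details are arbitrary, and, as he notes himself, whether the identity mapping extends to the untrained least significant bit depends on exactly those implementation choices.

# Minimal sketch of the identity-mapping test Marcus describes: train an
# autoassociator only on even numbers (least significant bit always 0), then
# probe unseen odd numbers.  Architecture and training details are arbitrary;
# as Marcus notes, what generalizes depends on such implementation choices.
import torch
import torch.nn as nn

BITS = 8

def to_bits(n):
    return torch.tensor([(n >> i) & 1 for i in range(BITS)], dtype=torch.float32)

train = torch.stack([to_bits(n) for n in range(0, 256, 2)])   # even numbers only
test = torch.stack([to_bits(n) for n in range(1, 256, 2)])    # unseen odd numbers

model = nn.Sequential(nn.Linear(BITS, BITS), nn.Sigmoid(),
                      nn.Linear(BITS, BITS))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(train), train)       # identity: reproduce the input
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = (torch.sigmoid(model(test)) > 0.5).float()
    per_bit_acc = (pred == test).float().mean(dim=0)
# Bit 0 (the least significant bit) was constant during training, so its
# accuracy on the odd test items shows whether the learned identity mapping
# extended to the unseen bit.
print(per_bit_acc)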
>>> >>> Without further clarifications from you, I have to predict that >>> these connectionist architectures in the book are all grossly wrong >>> in terms >>> of brain-capable connectionist natural language processing, since >>> they cannot learn an FA. This means that they cannot generalize to >>> state-equivalent but unobserved word sequences. Without this basic >>> capability required for natural language processing, how can they >>> claim connectionist natural language processing, let alone >>> bilingualism? >>> >>> I am concerned that many papers proceed with specific problems >>> without understanding the fundamental problems of the traditional >>> connectionism. The fact that the biological brain is connectionist >>> does not necessarily mean that all connectionist researchers know >>> about the brain's connectionism. >>> >>> -John Weng >>> >>> On 3/22/13 6:08 PM, Ping Li wrote: >>> Dear Colleagues, >>> >>> A Special Issue on Computational Modeling of Bilingualism has been >>> published. Most of the models are based on connectionist architectures. >>> >>> All the papers are available for free viewing until April 30, 2013 >>> (follow the link below to its end): >>> >>> http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ >>> >>> >>> Please let me know if you have difficulty accessing the above link >>> or viewing any of the PDF files on Cambridge University Press's >>> website. >>> >>> With kind regards, >>> >>> Ping Li >>> >>> >>> ================================================================= >>> Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information >>> Sciences & Technology | Co-Chair, Inter-College Graduate Program >>> in Neuroscience | Co-Director, Center for Brain, Behavior, and >>> Cognition | Pennsylvania State University | University Park, PA >>> 16802, USA | >>> Editor, Bilingualism: Language and Cognition, Cambridge University >>> Press | Associate Editor: Journal of Neurolinguistics, Elsevier >>> Science Publisher >>> Email: pul8 at psu.edu | URL: http://cogsci.psu.edu >>> ================================================================= >>> >>> >>> >>> -- >>> -- >>> Juyang (John) Weng, Professor >>> Department of Computer Science and Engineering >>> MSU Cognitive Science Program and MSU Neuroscience Program >>> 428 S Shaw Ln Rm 3115 >>> Michigan State University >>> East Lansing, MI 48824 USA >>> Tel: 517-353-4388 >>> Fax: 517-432-1061 >>> Email: weng at cse.msu.edu >>> URL: http://www.cse.msu.edu/~weng/ >>> ---------------------------------------------- >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.petrosino at uniparthenope.it Wed Mar 27 03:16:15 2013 From: alfredo.petrosino at uniparthenope.it (Alfredo Petrosino) Date: Wed, 27 Mar 2013 08:16:15 +0100 Subject: Connectionists: Deadline approaching - ICIAP 2013, Napoli (Italy) Message-ID: <51529CBF.8000209@uniparthenope.it> Dear colleagues, for whom of you interested in pattern recognition, soft computing, image analysis, computer vision and related I would gather your attention to ICIAP 2013 conference. ICIAP2013 http://www.iciap2013-naples.org/ is an historic international conference in image processing and pattern recognition endorsed by the International Association for Pattern Recognition (IAPR), the Technical Committee PAMI, and the IEEE Computational Intelligence Society. 
The 17th edition of 2013 will be held in the beautiful and ancient castle Castel dell'Ovo of Napoli, Italy on September 11-13, 2013. A good occasion to enjoy also its wonderful surroundings: the ancient ruins of Pompeii and Herculaneum, the Phlegrean Fields, and the spectacular islands -- Capri, Ischia, Procida. The official submission deadline is March 29, 2013, but due to Easter holidays the submission site will be opened till *** APRIL 8, 2013 *** Please take in consideration that selected papers will be collected in one issue of the 'Pattern Recognition Letters'journal and more than one issues of the 'Information Sciences' journal. We look forward to receiving your contribution and to meeting you in Napoli in September. Best Regards, Alfredo Petrosino, ICIAP2013 General Chair ======================================================================================================================== CALL FOR PAPERS International Conference on Image Analysis and Processing (ICIAP) 2013 September 11-13, 2013 Napoli, Italy http://www.iciap2013-naples.org ======================================================================================================================== ICIAP 2013 - TRACKS Pattern Recognition and Machine Learning - Area Chairs: Marco Gori, University of Siena, Italy, Kai Yu, Baidu Inc., Germany BioMedical Imaging Applications - Area Chairs: Joan Mart?, Universitat de Girona, Spain, Francesco Tortorella, University of Cassino, Italy Multimedia Interaction and Processing - Area Chairs: Rita Cucchiara, University of Modena and Reggio Emilia, Italy, Fatih Porikli, Mitsubishi Electric Research Labs, USA 3D Computer Vision - Area Chairs: Shaogang Gong, Queen Mary University of London, UK, Vittorio Murino, University of Verona and IIT, Italy Understanding Objects and Space - Area Chairs: Silvio Savarese, University of Michigan, USA, Jiambo Shi, University of Pennsylvania, USA ICIAP 2013 - INVITED SPEAKERS Antonio Torralba, Massachusetts Institute of Technology, USA Ching Y. Suen, Concordia University, Canada Sankar K. Pal, Indian Statistical Institute, India Jiri Matas, Czech Technical University, Czech Republic Fei-Fei Li, Stanford University, USA ICIAP 2013 - TUTORIALS "Discrete Optimization in Computer Vision", Ramin Zabih and Endre Boros, USA "Visual Attention: biology, computational models, and applications", John Tsotsos, USA and Neil Bruce, France "Change Detection in Video", Fatih Porikli, USA ICIAP 2013 - WORKSHOPS Satellite workshops (to be announced soon) will be held on September 9 and 10 2013, right before the main conference. ICIAP 2013 - CONFERENCE PROCEEDINGS The Proceedings of ICIAP 2013 will be published in the Springer Lecture Notes in Computer Science (LNCS) series, indexed as peer-reviewed publication in the Web of Science. ICIAP 2013 - PUBLICATIONS IN JOURNALS Extended versions of selected papers will be considered for inclusion in "Pattern Recognition Letters" and "Information Sciences" journals. 
ICIAP 2013 - IMPORTANT DATES Conference paper submission: March 29, 2013 Conference paper acceptance: May 10, 2013 Camera ready submission: June 10, 2013 ICIAP 2013 - SUBMISSION Papers (10 pages at most, in single blind submission) should be submitted on the ICIAP2013 conference system https://cmt.research.microsoft.com/ICIAP2013/ From murphyk at cs.ubc.ca Tue Mar 26 17:40:42 2013 From: murphyk at cs.ubc.ca (Kevin Murphy) Date: Tue, 26 Mar 2013 14:40:42 -0700 Subject: Connectionists: CFP: ICML Workshop on Structured Learning: Inferring Graphs from Structured and Unstructured Inputs (SLG2013) Message-ID: Structured Learning: Inferring Graphs from Structured and Unstructured Inputs (SLG2013) ICML workshop, June 16, 2013 (day between ICML & NAACL) https://sites.google.com/site/slgworkshop2013/ Submission Deadline: April 15, 2013 OVERVIEW Structured learning involves learning and making inferences from inputs that can be both unstructured (e.g., text) and structured (e.g., graphs and graph fragments), and making predictions about outputs that are also structured as graphs. Examples include the construction of knowledge bases from noisy extraction data, inferring temporal event graphs from newswire or social media, and inferring influence structure on social graphs from multiple sources. One of the challenges of this setting is that it often does not fit into the classic supervised or unsupervised learning paradigm. In essence, we have one large (potentially infinite) partially observed input graph, and we are trying to make inferences about the unknown aspects of this graph's structure. Often times there is side information available, which can be used for enrichment, but in order to use this information, we need to infer mappings for schema and ontologies that describe that side information, perform alignment and entity resolution, and reason about the added utility of the additional sources. The topic is extremely pressing, as many of the modern challenges in extracting usable knowledge from (big) data fall into this setting. Our focus in this workshop is on the machine learning and inference methods that are useful in such settings. Topics of interest include, but are not limited to: - Graph-based methods for entity resolutions and word sense disambiguation - Graph-based representations for ontology learning - Graph-based strategies for semantic relations identification - Making use of taxonomies and encoding semantic distances in graphs - Random walk methods in graphs - Spectral graph clustering and multi-relational clustering - Semi-supervised graph-based methods - Graph summarization Our goal is to bring together researchers in graphical models, structured prediction, latent variable relational models and statistical relational learning in order to (a) exchange experiences applying various methods to these graph learning domains, (b) share their successes, and (c) identify common challenges. The outcomes we expect are (1) a better understanding of the different existing methods across disparate communities, (2) an identification of common challenges, and (3) a venue for sharing resources, tools and datasets. The workshop will consist of a number of invited talks in each of these areas, a poster session where participants can present their work, and discussion. SUBMISSION INFORMATION We solicit short, poster-length submissions from 2 to 6 pages. All accepted submissions will be presented as posters, and a subset of them may be considered for oral presentation. 
Submissions reporting work in progress are acceptable, as we aim for the workshop to offer a venue for stimulating discussions. Submissions must be in PDF format, and should be made through Easychair at https://www.easychair.org/conferences/?conf=slg-2013 IMPORTANT DATES - Submission deadline: April 15, 2013 - Notification of acceptance: April 30, 2013 - Final versions of accepted submissions due: May 15, 2013 - Workshop date: Sunday, June 16, 2013 (Note: Most of the ICML workshops are taking place AFTER the conference, while this workshop takes place before the main conference, on the same day as ICML tutorials, in order to make it easier for participants from NAACL HLT conference to attend). ORGANIZING COMMITTEE - Hal Daume III, University of Maryland - Evgeniy Gabrilovich, Google - Lise Getoor, University of Maryland - Kevin Murphy, Google INVITED SPEAKERS - William Cohen, CMU - Andrew McCallum, UMass - Chris Re, University of Wisconsin - Madison - Dan Roth, UIUC - Stuart Russell, UC Berkeley PROGRAM COMMITTEE(confirmed to date) - Jeff Dalton, UMass - Laura Dietz, UMass - AnHai Doan, University of Wisconsin - Madison - Thorsten Joachims, Cornell - Daniel Lowd, University of Oregon - Mausam, University of Washington - Jennifer Neville, Purdue University - Ivan Titov, Saarland University - Daisy Zhe Wang, University of Florida - Jerry Zhu, University of Wisconsin - Madison FURTHER INFORMATION For further information, please contact the workshop organizers at slg-2013-chairs at googlegroups.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From giles at ist.psu.edu Wed Mar 27 11:07:03 2013 From: giles at ist.psu.edu (Lee Giles) Date: Wed, 27 Mar 2013 11:07:03 -0400 Subject: Connectionists: Computational Modeling of Bilingualism Special Issue In-Reply-To: <20130326145515.793c681d@sam> References: <514E37D5.60400@cse.msu.edu> <5151D134.6090402@cse.msu.edu> <20130326145515.793c681d@sam> Message-ID: <51530B17.7020506@ist.psu.edu> It may be useful to know that there is some work on neural networks learning Chomsky Universal Grammars. > S. Lawrence, C.L. Giles, S. Fong, ?Natural Language Grammatical > Inference with Recurrent Neural Networks,? IEEE Trans. on Knowledge > and Data Engineering, 12(1), p.126, 2000. Lee Giles On 3/26/13 2:55 PM, Stephen Jos? Hanson wrote: > I just sent one before. > > There are tons of references starting NIPS going back 2 decades, for > heavens to betsy! > > Steve > > > Tue, 26 Mar 2013 12:47:48 -0400 __________ > > Gary, I have cited some classical references about this FA learning > issue in my recent book NAI. I guess that we are again talking about > different things as I stated in my last email. My last email might be > useful. If you give me a reference of "easily learned by connectionist > architectures", we can communicate more effectively. > > For example, I had big problems in communication with Jeff Hawkins, > until he came to MSU this year and we had interactive emails after his > visit. His HTM belongs to symbolic AI because it consists of > task-specific (or feature-specific) modules. Sorry, Jeff, please do > not take this comment personally. Many people did this way. > > -John > > On 3/26/13 1:18 AM, Gary Cottrell wrote: >> wow, I haven't heard this kind of misunderstanding in a long time. >> FSA's are easily learned by connectionist architectures. It is PDA's >> that are hard. 
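The distinction Cottrell draws just above can be stated concretely: a regular language needs only a fixed, finite amount of state no matter how long the input is, while a^n b^n needs a count that grows with n, which is what a pushdown stack provides and what makes it the harder target for a fixed-size network to learn exactly. The toy recognizers below illustrate that gap only; they say nothing about what any particular network does, and, as the Schmidhuber and Giles messages elsewhere in this thread note, recurrent networks can and sometimes do learn counter-like solutions.

def accepts_even_as(s):
    """Regular language: strings over {a,b} with an even number of a's.
    Two states (the parity bit) suffice, no matter how long the string is."""
    state = 0                      # parity of a's seen so far
    for ch in s:
        if ch == 'a':
            state ^= 1
    return state == 0

def accepts_anbn(s):
    """Context-free language a^n b^n (n >= 1): the recognizer needs a counter
    whose value grows with n, so no fixed finite set of states can do it."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False
            count += 1
        elif ch == 'b':
            seen_b = True
            count -= 1
            if count < 0:
                return False
        else:
            return False
    return count == 0 and seen_b

assert accepts_even_as('abba') and not accepts_even_as('ab')
assert accepts_anbn('aaabbb') and not accepts_anbn('aabbb')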
>> On Mar 23, 2013, at 4:16 PM, Juyang Weng wrote: >> >>> Ping Li: >>> >>> As far as I understand, traditional connectionist architectures >>> cannot do abstraction well as Marvin Minsky, Michael Jordan >>> and many others correctly stated. For example, traditional neural >>> networks cannot learn a finite automaton (FA) until recently (i.e., >>> the proof of our Developmental Network). We all know that FA is the >>> basis for all probabilistic symbolic networks (e.g., Markov models) >>> but they are all not connectionist. >>> >>> After seeing your announcement, I am confused with the book title >>> "Bilingualism Special Issue: Computational Modeling of Bilingualism" >>> but with your comment "most of the models are based on connectionist >>> architectures." >>> >>> Without further clarifications from you, I have to predict that >>> these connectionist architectures in the book are all grossly wrong >>> in terms of brain-capable connectionist natural language processing, >>> since they cannot learn an FA. This means that they cannot >>> generalize to state-equivalent but unobserved word sequences. >>> Without this basic capability required for natural language >>> processing, how can they claim connectionist natural language >>> processing, let alone bilingualism? >>> >>> I am concerned that many papers proceed with specific problems >>> without understanding the fundamental problems of the traditional >>> connectionism. The fact that the biological brain is connectionist >>> does not necessarily mean that all connectionist researchers know >>> about the brain's connectionism. >>> >>> -John Weng >>> >>> On 3/22/13 6:08 PM, Ping Li wrote: >>>> Dear Colleagues, >>>> >>>> >>>> A Special Issue on Computational Modeling of Bilingualism has been >>>> published. Most of the models are based on connectionist >>>> architectures. >>>> >>>> >>>> All the papers are available for free viewing until April 30, 2013 >>>> (follow the link below to its end): >>>> >>>> >>>> http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/ >>>> >>>> >>>> Please let me know if you have difficulty accessing the above link >>>> or viewing any of the PDF files on Cambridge University Press's >>>> website. >>>> >>>> >>>> With kind regards, >>>> >>>> >>>> Ping Li >>>> >>>> >>>> >>>> ================================================================= >>>> >>>> Ping Li, Ph.D. 
| Professor of Psychology, Linguistics, Information >>>> Sciences & Technology | Co-Chair, Inter-College Graduate Program >>>> in Neuroscience | Co-Director, Center for Brain, Behavior, and >>>> Cognition | Pennsylvania State University | University Park, PA >>>> 16802, USA | >>>> >>>> Editor, Bilingualism: Language and Cognition, Cambridge University >>>> Press | Associate Editor: Journal of Neurolinguistics, Elsevier >>>> Science Publisher >>>> >>>> Email: pul8 at psu.edu | URL: >>>> http://cogsci.psu.edu >>>> >>>> ================================================================= >>>> >>>> >>> -- >>> -- >>> Juyang (John) Weng, Professor >>> Department of Computer Science and Engineering >>> MSU Cognitive Science Program and MSU Neuroscience Program >>> 428 S Shaw Ln Rm 3115 >>> Michigan State University >>> East Lansing, MI 48824 USA >>> Tel: 517-353-4388 >>> Fax: 517-432-1061 >>> Email:weng at cse.msu.edu >>> URL:http://www.cse.msu.edu/~weng/ >>> ---------------------------------------------- >>> >> Gary Cottrell 858-534-6640 FAX: 858-534-7029 >> >> My schedule is here: http://tinyurl.com/b7gxpwo >> >> Computer Science and Engineering 0404 >> IF USING FED EX INCLUDE THE FOLLOWING LINE: >> CSE Building, Room 4130 >> University of California San Diego >> 9500 Gilman Drive # 0404 >> La Jolla, Ca. 92093-0404 >> >> "Probably once or twice a week we are sitting at dinner and Richard >> says, 'The cortex is hopeless,' and I say, 'That's why I work on the >> worm.'" Dr. Bargmann said. >> >> "A grapefruit is a lemon that saw an opportunity and took advantage >> of it." - note written on a door in Amsterdam on Lijnbaansgracht. >> >> "Physical reality is great, but it has a lousy search function." >> -Matt Tong >> >> "Only connect!" -E.M. Forster >> >> "You always have to believe that tomorrow you might write the matlab >> program that solves everything - otherwise you never will." -Geoff >> Hinton >> >> "There is nothing objective about objective functions" - Jay >> McClelland >> >> "I am awaiting the day when people remember the fact that discovery >> does not work by deciding what you want and then discovering it." >> -David Mermin >> >> Email: gary at ucsd.edu >> Home page: http://www-cse.ucsd.edu/~gary/ >> >> >> >> >> >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From Muhammad.Iqbal at ecs.vuw.ac.nz Wed Mar 27 20:52:46 2013 From: Muhammad.Iqbal at ecs.vuw.ac.nz (Muhammad.Iqbal at ecs.vuw.ac.nz) Date: Thu, 28 Mar 2013 13:52:46 +1300 Subject: Connectionists: Reminder: International Workshop on Learning Classifier Systems @ GECCO Message-ID: <6c0e95fffe5773b3ab900c0f61826d00.squirrel@mail.ecs.vuw.ac.nz> Just a soft final reminder that the deadline for the IWLCS @ GECCO'13 is March 28. A short extension may be considered by the committee if you are finalizing your papers but need more time to wrap up. IWLCS is a great place to present your quality projects and ongoing work related to LCS research. This year it is particularly important for you to make a contribution to this workshop which serves as a valuable and central exchange of knowledge and ideas for those interested in the study of these unique algorithms, as well as for those interested in learning more about them. Submission instructions are on the IWLCS 2013 website under the CFP tab. 
http://homepages.ecs.vuw.ac.nz/~iqbal/iwlcs2013/index.html ********************************************************************* ** CALL FOR PAPERS ** ** Sixteenth International Workshop on Learning Classifier Systems ** ** July 06-10, 2013, Amsterdam, The Netherlands ** ** Organized by ACM SIGEVO ** ********************************************************************* The Sixteenth International Workshop on Learning Classifier Systems (IWLCS 2013) will be held in Amsterdam, The Netherlands during the Genetic and Evolutionary Computation Conference (GECCO-2013), July 06-10, 2013. Originally, Learning Classifier Systems (LCSs) were introduced by John H. Holland as a way of applying evolutionary computation to machine learning and adaptive behaviour problems. Since then, the LCS paradigm has broadened greatly into a framework that encompasses many representations, rule discovery mechanisms, and credit assignment schemes. Current LCS applications range from data mining, to automated innovation and the on-line control of cognitive systems. LCS research includes various actual system approaches: While Wilson's accuracy-based XCS system (1995) has received the highest attention and gained the highest reputation; studies and developments of other LCSs are usually discussed and contrasted. Advances in machine learning, and reinforcement learning in particular, as well as in evolutionary computation have brought LCS systems the necessary competence and guaranteed learning properties. Novel insights in machine learning and evolutionary computation are being integrated into the LCS framework. Thus, we invite submissions that discuss recent developments in all areas of research on, and applications of, Learning Classifier Systems. IWLCS is the event that brings together most of the core researchers in classifier systems. The workshop also provides an opportunity for researchers interested in LCSs to get an impression of the current research directions in the field as well as a guideline for the application of LCSs to their problem domain. Topics of interests include but are not limited to: Paradigms of LCS (Michigan, Pittsburgh ...) Theoretical developments (behaviour, scalability and learning bounds ...) Representations (binary, real-valued, oblique, non-linear, fuzzy ...) Types of target problems (single-step, multiple-step, regression/function approximation ...) System enhancements (competent operators, problem structure identification and linkage learning ...) LCS for Cognitive Control (architectures, emergent behaviours ...) Applications (data mining, medical domains, bioinformatics ...) Optimizations and parallel implementations (GPU, matching algorithms ...) All accepted papers will be presented at IWLCS 2013 and will appear in the GECCO workshop volume, which will be published by ACM (Association for Computing Machinery). Authors will be invited after the workshop to submit revised (full) papers that, after a thorough review process, are to be published in a special issue of the Evolutionary Intelligence journal. 
Important dates March 28, 2013 - Paper submission deadline April 15, 2013 - Notification to authors April 25, 2013 - Submission of camera-ready material July 06-10, 2013 - GECCO 2013 Conference in Amsterdam, The Netherlands Organizing Committee Muhammad Iqbal, muhammad.iqbal at ecs.vuw.ac.nz Kamran Shafi, k.shafi at adfa.edu.au Ryan Urbanowicz, ryan.j.urbanowicz at dartmouth.edu Regards Muhammad Iqbal From rsalakhu at cs.toronto.edu Thu Mar 28 22:25:15 2013 From: rsalakhu at cs.toronto.edu (Ruslan Salakhutdinov) Date: Thu, 28 Mar 2013 22:25:15 -0400 (EDT) Subject: Connectionists: CFP: ICML 2013 Workshop on Inferning: Interactions between Inference and Learning Message-ID: Call For Papers for ICML 2013 Workshop on Inferning: Interactions between Inference and Learning June 20-21, 2013 Atlanta, GA http://inferning.cs.umass.edu inferning2013 at gmail.com Important Dates: Submission Deadline: April 20th, 2013 (11:59pm PST) Author Notification: May 13th, 2013 Workshop: June 20-21, 2013 Submission site: http://openreview.net/inferning2013 There are strong interactions between learning algorithms which estimate the parameters of a model from data, and inference algorithms which use a model to make predictions about data. Understanding the intricacies of these interactions is crucial for advancing the state-of-the-art on real-world tasks in natural language processing, computer vision, computation biology, etc. Yet, many facets of these interactions remain unknown. In this workshop, we study the interactions between inference and learning using two reciprocating perspectives. Perspective one: how does inference affect learning? The first perspective studies the influence of the choice of inference technique during learning on the resulting model. When faced with models for which exact inference is intractable, efficient approximate inference techniques may be used, such as MCMC sampling, stochastic approximation, belief propagation, beam-search, dual decomposition, etc. The workshop will focus on work that evaluates the impact of the approximations on the resulting parameters, in terms of both the generalization of the model, the effect it has on the objective functions, and the convergence properties. We will also study approaches that attempt to correct for the approximations in inference by modifying the objective and/or the learning algorithm (for example, contrastive divergence for deep architectures), and approaches that minimize the dependence on the inference algorithms by exploring inference-free methods (e.g., piece-wise training, pseudo-max and decomposed learning). Perspective two: how does learning affect inference? Traditionally, the goal of learning has been to find a model for which prediction (i.e., inference) accuracy is as high as possible. However, an increasing emphasis on modeling complexity has shifted the goal of learning: find models for which prediction (i.e., inference) is as efficient as possible. Thus, there has been recent interest in more unconventional approaches to learning that combine generalization accuracy with other desiderata such as faster inference. 
Some examples of this kind are: learning classifiers for greedy inference (e.g., Searn, Dagger); structured cascade models that learn a cost function to perform multiple runs of inference from coarse to fine level of abstraction by trading-off accuracy and efficiency at each level; learning cost function to search in the space of complete outputs (e.g., SampleRank, search in Limited Discrepancy Search space); learning structures that exhibit efficient exact inference etc. Similarly, there has been work that learns operators for efficient search-based inference, approaches that trade-off speed and accuracy by incorporating resource constraints such as run-time and memory into the learning objective. List of Topics: This workshop brings together practitioners from different fields (information extraction, machine vision, natural language processing, computational biology, etc.) in order to study a unified framework for understanding and formalizing the interactions between learning and inference. The following is a partial list of relevant keywords for the workshop: * learning with approximate inference * cost-aware learning * learning sparse structures * pseudo-likelihood, composite likelihood training * contrastive divergence * piece-wise and decomposed training * decomposed learning * coarse to fine learning and inference * scoring matching * stochastic approximation * incremental gradient methods * adaptive proposal distributions * learning for anytime inference * learning approaches that trade-off speed and accuracy * learning to speed up inference * learning structures that exhibit efficient exact inference * lifted inference for first-order models * more ... New benchmark problems: This line of research can hugely benefit from new challenge problems from various fields (e.g., computer vision, natural language processing, speech, computational biology, computational sustainability etc.). Therefore, we especially request relevant papers describing such problems, main challenges, evaluations and public data sets. Invited Speakers: * Dan Roth, University of Illinois, Urbana-Champaign * Rina Dechter, University of California, Irvine * Ben Taskar, University of Washington * Hal Daume, University of Maryland, College Park * Alan Fern, Oregon State University Author Guidelines: Submissions are encouraged as extended abstracts of ongoing research. The recommended page length is 4-6 pages. Additional supplementary content may be included, but may not be considered during the review process. Previously published or currently in submission papers are also encouraged (we will confirm with authors before publishing the papers online). The format of the submissions should follow the ICML 2013 style, available here: http://icml.cc/2013/wp-content/uploads/2012/12/icml2013stylefiles.tar.gz. However, since the review process is not double-blind, submissions need not be anonymized and author names may be included. Open Review: Our workshop will be following the open reviewing system as introduced by the International Conference on Learning Representations (ICLR). In particular, the submitted papers will be available for public comment after the submission deadline. Along with the public comments, we will also provide anonymous reviews by our program committee members. The decisions for acceptance will be based on a combination of review scores and insights from the public discourse. If you have any concerns or questions regarding the reviewing process, please email us atinferning2013 at gmail.com. 
Submission site: http://openreview.net/inferning2013 Organizers: * Janardhan Rao (Jana) Doppa, Oregon State University * Pawan Kumar, Ecole Centrale Paris * Michael Wick, University of Massachusetts, Amherst * Sameer Singh, University of Massachusetts, Amherst * Ruslan Salakhutdinov, University of Toronto From sebastian.risi at cornell.edu Wed Mar 27 14:18:35 2013 From: sebastian.risi at cornell.edu (Sebastian Risi) Date: Wed, 27 Mar 2013 14:18:35 -0400 Subject: Connectionists: CFP: AAAI 2013 Fall Symposium on How Should Intelligence be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or _____? Message-ID: (apologies for multiple posting.) Call for papers Dear colleagues, We invite contributions to our AAAI 2013 Fall Symposium titled ?How Should Intelligence be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or _____??. Each subfield of AI has a different perspective on intelligence and unspoken assumptions about what is critical to recreate it computationally. To better understand such differences, we aim to bring together a diverse group of AI researchers interested in discussing and comparing how intelligence and processes that might create it are abstracted in various subfields. For example, such discussion may include honest examination of the strengths and weaknesses of different approaches, and what features of biological intelligence are crucial or unnecessary to include in algorithms. We hope to encourage cross-pollination of ideas between researchers viewing intelligence in different ways (e.g. through the lens of MDPs or symbolic manipulation) and at different levels of abstraction (e.g. biologically-plausible neural simulations or restricted Boltzmann machines). One goal is to facilitate revising or creating new abstractions of intelligence and intelligence-generating processes. More information can be found here: http://www.cs.ucf.edu/~risi/AAAISymposium2013/ Contributions related to how intelligence can or should be abstracted algorithmically in artificial intelligence research are invited. Extended abstracts that summarize the results of a research program along these lines are most welcome, as are personal position papers or contributions describing speculative work or work in progress. Works bridging traditionally separate AI paradigms are encouraged. Participants should be open to inspiration from work and ideas in other subfields, and be willing to step outside their intellectual comfort zones. Interested participants are encouraged to submit extended abstracts (no more than 2 pages), or full-length papers (up to 6 pages in AAAI format) in PDF format to sebastian.risi at cornell.edu. Accepted submissions will be published as citable, peer-reviewed papers in the AAAI technical report. Areas of interest include but are not limited to: - Different levels and types of knowledge representation and reasoning - Abstractions of the following: - Neural networks (e.g. deep learning networks, spiking ANNs, and plastic ANNs) - Learning (e.g. machine learning and reinforcement learning) - Biological development (e.g. generative and developmental systems, and developmental robotics) - Evolutionary search (e.g. 
digital evolution and evolutionary algorithms) - Biologically-inspired computation - Evolutionary robotics - Swarm intelligence - Artificial life - Philosophical arguments on characteristics of appropriate abstractions for AI The symposium will be held Friday - Sunday, November 15-17 at the Westin Arlington Gateway in Arlington, Virginia (adjacent to Washington, DC). ** Invited Speakers ** Andrew Ng (Stanford University, USA) More TBD ** Schedule ** Full Paper/Extended Abstract Submission: May 24, 2013 Noti?cation: June 21, 2013 Final Camera-ready Paper/Extended Abstract: September 12, 2013 We look forward to hearing from you. Thank you! -- Sebastian Risi, Joel Lehman, Jeff Clune -- Dr. Sebastian Risi Postdoctoral Fellow Creative Machines Laboratory Cornell University Email: sebastian.risi at cornell.edu Tel: (407) 929-5113 Web: http://www.cs.ucf.edu/~risi/ From Jean-Philippe.Vert at mines.org Wed Mar 27 16:57:47 2013 From: Jean-Philippe.Vert at mines.org (Jean-Philippe Vert) Date: Wed, 27 Mar 2013 21:57:47 +0100 Subject: Connectionists: MLSB13, the 7th Machine Learning in Systems Biology workshop Message-ID: Call for contributions MLSB13, the Seventh International Workshop on Machine Learning in Systems Biology http://www.mlsb.cc Organized in conjunction with ISMB/ECCB 2013 Berlin, Germany, July 19-20, 2013. Important dates: May 5, 2013 : Deadline for submission of extended abstracts May 31, 2013: Author notification July 19-20, 2013: Workshop WORKSHOP DESCRIPTION Molecular biology and all the biomedical sciences are undergoing a true revolution as a result of the emergence and growing impact of a series of new disciplines/tools sharing the "-omics" suffix in their name. These include in particular genomics, transcriptomics, proteomics and metabolomics, devoted respectively to the examination of the entire systems of genes, transcripts, proteins and metabolites present in a given cell or tissue type. The availability of these new, highly effective tools for biological exploration is dramatically changing the way one performs research in at least two respects. First, the amount of available experimental data is not a limiting factor any more; on the contrary, there is a plethora of it. Given the research question, the challenge has shifted towards identifying the relevant pieces of information and making sense out of it (a "data mining" issue). Second, rather than focus on components in isolation, we can now try to understand how biological systems behave as a result of the integration and interaction between the individual components that one can now monitor simultaneously (so called "systems biology"). Taking advantage of this wealth of "genomic" information has become a conditio sine qua non for whoever ambitions to remain competitive in molecular biology and in the biomedical sciences in general. Machine learning naturally appears as one of the main drivers of progress in this context, where most of the targets of interest deal with complex structured objects: sequences, 2D and 3D structures or interaction networks. At the same time bioinformatics and systems biology have already induced significant new developments of general interest in machine learning, for example in the context of learning with structured data, graph inference, semi-supervised learning, system identification, and novel combinations of optimization and learning algorithms. 
The aim of this workshop is to contribute to the cross-fertilization between the research in machine learning methods and their applications to systems biology (i.e., complex biological and medical questions) by bringing together method developers and experimentalists. SUBMISSION INSTRUCTIONS We encourage submissions bringing forward methods for discovering complex structures (e.g. interaction networks, molecule structures) and methods supporting genome-wide data analysis. A non-exhaustive list of topics suitable for this workshop are found on the home page, as well as examples of work presented in previous years. We invite you to submit an extended abstract of up to 4 pages in PDF format describing new or recently published (2013) results. We also welcome original ISMB submissions of relevance to the workshop, which were accepted into the second round of reviews but unfortunately not selected for the main conference. In this case, there is no need to prepare a separate abstract; it is OK to simply submit your revised paper. Extended abstracts should be uploaded to the MLSB submission web site: https://www.easychair.org/conferences/?conf=mlsb13 by May 5, 2013, 11:59pm (time zone of your choice). No special style is required, as long as standard font size (11 pt) and margins (1 in) are used. Submissions will be reviewed by the scientific programme committee. They will be selected for oral or poster presentation according to their originality and relevance to the workshop topics. Electronic versions of the extended abstracts will be accessible to the participants prior to the conference, distributed in hardcopy form to participants at the conference, and will be made publicly available on the conference web site after the conference. However, the book of abstracts will not be published and the extended abstracts will not constitute a formal publication. 
INVITED SPEAKERS Sayan Mukherjee, Duke University, North Carolina, USA Guido Sanguinetti, University of Edinburgh, UK Eran Segal, Weizmann Institute, Israel Lani Wu, University of Texas Southwestern Medical Center, USA PROGRAM COMMITTEE Karsten Borgwardt (Max Planck Institute, Tuebingen) Florence d'Alché-Buc (University of Evry, France) Sašo Džeroski (Jožef Stefan Institute, Slovenia) Paolo Frasconi (University of Florence, Italy) Pierre Geurts (University of Liège, Belgium) Lars Kaderali (TU Dresden, Germany) Samuel Kaski (Aalto University and University of Helsinki, Finland) Ross King (Manchester University, UK) Stefan Kramer (University of Mainz, Germany) Christina Leslie (Memorial Sloan-Kettering Cancer Center, USA) Yves Moreau (Katholieke Universiteit Leuven, Belgium) Mahesan Niranjan (University of Southampton, UK) John Pinney (Imperial College London, UK) Magnus Rattray (Manchester University, UK) Simon Rogers (University of Glasgow, UK) Juho Rousu (University of Helsinki, Finland) Céline Rouveirol (Paris 13 University, France) Yvan Saeys (University of Gent, Belgium) Guido Sanguinetti (University of Edinburgh, UK) Peter Sykacek (BOKU University, Austria) Ljupco Todorovski (University of Ljubljana, Slovenia) Achim Tresch (MPI for Plant Breeding, Cologne) Koji Tsuda (National Institute of Advanced Industrial Science and Technology, Japan) Louis Wehenkel (University of Liège, Belgium) Filip Zelezny (Czech Technical University in Prague, Czech Republic) ORGANIZERS Uwe Ohler (Berlin Institute for Molecular Systems Biology / Max Delbrueck Center, Germany) Jean-Philippe Vert (Mines ParisTech, Institut Curie, France) -- Jean-Philippe Vert Cancer computational genomics and bioinformatics Mines ParisTech - Institut Curie - INSERM U900 http://cbio.ensmp.fr/~jvert -------------- next part -------------- An HTML attachment was scrubbed...

URL: From mikeforrest at hotmail.com Thu Mar 28 12:56:13 2013 From: mikeforrest at hotmail.com (MICHAEL FORREST) Date: Thu, 28 Mar 2013 16:56:13 +0000 Subject: Connectionists: New PloS One Paper: The Sodium-Potassium Pump Controls the Intrinsic Firing of the Cerebellar Purkinje Neuron Message-ID: Dear Connectionists mailing list, This recent paper may be of interest to the readership. The paper uses modelling and electrophysiology to suggest that the sodium-potassium pump is not simply a homeostatic, housekeeping molecule; but can be a computational element, utilised by the brain for information processing. We are now conducting further work to build upon this suggestion. Forrest MD, Wall MJ, Press DA, Feng J (2012) The Sodium-Potassium Pump Controls the Intrinsic Firing of the Cerebellar Purkinje Neuron. PLoS ONE 7(12): e51169 Web link to this article at PLoS One (article is free): http://dx.plos.org/10.1371/journal.pone.0051169 Kindest regards, Michael Forrest (first author of said paper) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ahu at cs.stir.ac.uk Fri Mar 29 12:53:02 2013 From: ahu at cs.stir.ac.uk (Dr Amir Hussain) Date: Fri, 29 Mar 2013 16:53:02 +0000 Subject: Connectionists: Cognitive Computation journal (Springer): Table of Contents, Vol.5, No.1 / March 2013 Issue Message-ID: Dear Colleagues: (with advance apologies for any cross-postings) We are delighted to announce the publication of Volume 5, No.1/March 2013, of Springer's Cognitive Computation journal - www.springer.com/12559 The main part of this Issue comprises a Special Issue titled: Computational Intelligence and Applications, Guest Editors: Zhigang Zeng & Haibo He, which is followed by a number of regular papers. The individual list of published articles (Table of Contents) for Vol. 5, No. 1 / March 2013, can be viewed here (and also at the end of this message, followed by an overview of the previous Issues/Archive listings): http://link.springer.com/journal/12559/5/1/page/1 A list of the most downloaded articles (which can always be read for FREE) can be found here: http://www.springer.com/biomed/neuroscience/journal/12559#realtime Other 'Online First' published articles not yet in a print issue can be viewed here: http://www.springerlink.com/content/121361/?Content+Status=Accepted All previous Volumes and Issues of the journal can be viewed here: http://link.springer.com/journal/volumesAndIssues/12559 ======================================================= NEW: First ISI Impact Factor for Cognitive Computation of 1.000 for 2011! ======================================================= As you will know, last year, Cognitive Computation was selected for coverage in Thomson Reuters' products and services. Beginning with V.1 (1) 2009, this publication is now indexed and abstracted in: - Science Citation Index Expanded (also known as SciSearch®) - Journal Citation Reports/Science Edition - Current Contents®/Engineering Computing and Technology - Neuroscience Citation Index® Cognitive Computation also received its first Impact Factor of 1.000 (Thomson Reuters Journal Citation Reports® 2011) in 2011 ============================================ Reminder: New Cognitive Computation "LinkedIn" Group: ============================================ To further strengthen the bonds amongst the interdisciplinary audience of Cognitive Computation, we have set-up a "Cognitive Computation LinkedIn group", which has over 500 members already! We warmly invite you to join us at: http://www.linkedin.com/groups?gid=3155048 For further information on the journal and to sign up for electronic "Table of Contents alerts" please visit the Cognitive Computation homepage: http://www.springer.com/12559 or follow us on Twitter at: http://twitter.com/CognComput for the latest On-line First Issues. For any questions with regards to LinkedIn and/or Twitter, please contact Springer's Publishing Editor: Dr. Martijn Roelandse: martijn.roelandse at springer.com Finally, we would like to invite you to submit short or regular papers describing original research or timely review of important areas - our aim is to peer review all papers within approximately six weeks of receipt. We also welcome relevant high quality proposals for Special Issues - five are already planned for 2013-14, including a new special issue to celebrate the work of the late Professor John Taylor, founding Chair of Cognitive Computation's Editorial Advisory Board.
With our very best wishes for the New Year to all aspiring readers and authors of Cognitive Computation, Professor Amir Hussain, PhD (Editor-in-Chief: Cognitive Computation) E-mail: ahu at cs.stir.ac.uk (University of Stirling, Scotland, UK) Professor Igor Aleksander, PhD (Honorary Editor-in-Chief: Cognitive Computation) (Imperial College, London, UK) --------------------------------------------------------------------------------------------------------------- Table of Contents Alert -- Cognitive Computation Vol 5 No 1, March 2013 --------------------------------------------------------------------------------------------------------------- Special Issue: Computational Intelligence and Applications Guest Editors: Zhigang Zeng & Haibo He Special Issue Editorial: Computational Intelligence and Applications Zhigang Zeng, Haibo He http://link.springer.com/article/10.1007/s12559-013-9204-5 Non-blind Image Deblurring from a Single Image Bo Zhao, Wensheng Zhang, Huan Ding & Hu Wang http://link.springer.com/article/10.1007/s12559-012-9139-2 Using Neural Networks and Self-Organizing Maps for Image Connecting Yi Ding, Tianjiang Wang & Xian Fu http://link.springer.com/article/10.1007/s12559-012-9161-4 Clustering-Based Extraction of Near Border Data Samples for Remote Sensing Image Classification Xiaoyong Bian, Tianxu Zhang, Xiaolong Zhang, LuXin Yan & Bo Li http://link.springer.com/article/10.1007/s12559-012-9147-2 Stochastic Hybrid System with Polynomial Growth Coefficients Lizhu Feng, Feng Jiang & Feng Li http://link.springer.com/article/10.1007/s12559-012-9149-0 Hopf Bifurcation of a Modified Leslie--Gower Predator--Prey System Wei Liu & Chaojin Fu http://link.springer.com/article/10.1007/s12559-012-9162-3 Swarm Intelligence: Based Cooperation Optimization of Multi-Modal Functions Qin Tang, Yi Shen, Chengyu Hu, Jianyou Zeng & Wenyin Gong http://link.springer.com/article/10.1007/s12559-012-9144-5 Deformation Prediction of Landslide Based on Improved Back-propagation Neural Network Huangqiong Chen & Zhigang Zeng http://link.springer.com/article/10.1007/s12559-012-9148-1 REGULAR ARTICLES Improving Visual Saliency by Adding 'Face Feature Map' and 'Center Bias' Sophie Marat, Anis Rahman, Denis Pellerin, Nathalie Guyader & Dominique Houzet http://link.springer.com/article/10.1007/s12559-012-9146-3 Visual Saliency from Image Features with Application to Compression P. Harding & N. M. Robertson http://link.springer.com/article/10.1007/s12559-012-9150-7 Counterfactuals, Computation, and Consciousness Mark Muhlestein http://link.springer.com/article/10.1007/s12559-012-9155-2 Reverse Engineering of Biochemical Reaction Networks Using Co-evolution with Eng-Genes Padhraig Gormley, Kang Li, Olaf Wolkenhauer, George W. Irwin & Dajun Du http://link.springer.com/article/10.1007/s12559-012-9159-y A New Face Database Simultaneously Acquired in Visible, Near-Infrared and Thermal Spectrums Virginia Espinosa-Dur?, Marcos Faundez-Zanuy & Ji?? Mekyska http://link.springer.com/article/10.1007/s12559-012-9163-2 Biometric Applications Related to Human Beings: There Is Life beyond Security Marcos Faundez-Zanuy, Amir Hussain, Jiri Mekyska, Enric Sesa-Nogueras, Enric Monte-Moreno, Anna Esposito, Mohamed Chetouani, Josep Garre-Olmo, Andrew Abel, Zdenek Smekal & Karmele Lopez-de-Ipi?a http://link.springer.com/article/10.1007/s12559-012-9169-9 A Neural Mechanism for Reward Discounting: Insights from Modeling Hippocampal-Striatal Interactions Patryk A. 
Laurent http://link.springer.com/article/10.1007/s12559-012-9178-8 --------------------------------------------------- Previous Issues/Archive: Overview: --------------------------------------------------- All previous Volumes and Issues can be viewed here: http://link.springer.com/journal/volumesAndIssues/12559 Alternatively, the full listing of the Inaugural Vol. 1, No. 1 / March 2009, can be viewed here (which included invited authoritative reviews by leading researchers in their areas - including keynote papers from London University's John Taylor, Igor Aleksander and Stanford University's James McClelland, and invited papers from Ron Sun, Pentti Haikonen, Geoff Underwood, Kevin Gurney, Claudius Gross, Anil Seth and Tom Ziemke): http://www.springerlink.com/content/1866-9956/1/1/ The full listing of Vol. 1, No. 2 / June 2009, can be viewed here (which included invited reviews and original research contributions from leading researchers, including Rodney Douglas, Giacomo Indiveri, Jurgen Schmidhuber, Thomas Wennekers, Pentti Kanerva and Friedemann Pulvermuller): http://www.springerlink.com/content/1866-9956/1/2/ The full listing of Vol.1, No. 3 / Sep 2009, can be viewed here: http://www.springerlink.com/content/1866-9956/1/3/ The full listing of Vol. 1, No. 4 / Dec 2009, can be viewed here: http://www.springerlink.com/content/1866-9956/1/4/ The full listing of Vol.2, No. 1 / March 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/1/ The full listing of Vol.2, No. 2 / June 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/2/ The full listing of Vol.2, No. 3 / Aug 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/3/ The full listing of Vol.2, No. 4 / Dec 2010, can be viewed here: http://www.springerlink.com/content/1866-9956/2/4/ The full listing of Vol.3, No.1 / Mar 2011 (Special Issue on: Saliency, Attention, Active Visual Search and Picture Scanning, edited by John Taylor and Vassilis Cutsuridis), can be viewed here: http://www.springerlink.com/content/1866-9956/3/1/ The Guest Editorial can be viewed here: http://www.springerlink.com/content/hu2245056415633l/ The full listing of Vol.3, No.2 / June 2011 can be viewed here: http://www.springerlink.com/content/1866-9956/3/2/ The full listing of Vol. 3, No. 3 / Sep 2011 (Special Issue on: Cognitive Behavioural Systems, Guest Edited by: Anna Esposito, Alessandro Vinciarelli, Simon Haykin, Amir Hussain and Marcos Faundez-Zanuy), can be viewed here: http://www.springerlink.com/content/1866-9956/3/3/ The Guest Editorial for the special issue can be viewed here: http://www.springerlink.com/content/h4718567520t2h84/ The full listing of Vol. 3, No. 4 / Dec 2011 can be viewed here: http://www.springerlink.com/content/1866-9956/3/4/ The full listing of Vol. 4, No.1 / Mar 2012 can be viewed here: http://www.springerlink.com/content/1866-9956/4/1/ The full listing of Vol. 4, No.2 / June 2012 can be viewed here: http://www.springerlink.com/content/1866-9956/4/2/ The full listing of Vol. 4, No.3 / Sep 2012 (Special Issue on: Computational Creativity, Intelligence and Autonomy, Edited by: J. Mark Bishop and Yasemin J. Erden) can be viewed here: http://www.springerlink.com/content/1866-9956/4/3/ The full listing of Vol. 
4, No.4 / Dec 2012 (Special Issue titled: "Cognitive & Emotional Information Processing", Edited by: Stefano Squartini, Björn Schuller and Amir Hussain, which is followed by a number of regular papers), can be viewed here: http://link.springer.com/journal/12559/4/4/page/1

--------------------------------------------------------------------------------------------
The University of Stirling is ranked in the top 50 in the world in The Times Higher Education 100 Under 50 table, which ranks the world's best 100 universities under 50 years old. The University of Stirling is a charity registered in Scotland, number SC 011159.

From achler at gmail.com Fri Mar 29 16:12:04 2013 From: achler at gmail.com (Tsvi Achler) Date: Fri, 29 Mar 2013 13:12:04 -0700 Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks In-Reply-To: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> Message-ID:

I would like to bring to your attention another way to implement modularity, separate from sparseness. Modularity as defined here is the ability to add or modify a weight or a neuron in a neural network without the risk of disturbing the representations of the other neurons in the network. Restating this in mathematical terms: modularity is the ability to modify a single fixed point by modifying a single neuron or weight, without changing the other fixed points. In "feedforward" neural networks a single change in weight can change multiple fixed points. One way to avoid this is to make the network sparse, i.e., to evolve more modular networks. A completely different way to implement modularity is to perform dynamics during recognition. Such networks use reentrant "feedforward-feedback" connections during recognition. In other words, it is common in neural networks to iterate to find the best weights during learning and then to use only the feedforward weights during recognition. Instead, for better modularity, a similar iterative mechanism is applied during recognition to find activations (not to modify weights). The dynamic mechanism can be implemented using symmetric reentrant inhibitory connections. Once this is achieved, if a single neuron or a reentrant weight pair is modified, it only changes a single fixed point. This may be a bit confusing at first, especially since not many neural network models truly use reentrant dynamics during recognition; dynamics are commonly used during learning and to determine sparseness. Please feel free to ask me questions. I would be happy to give a talk about this; see the references below.
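As a rough illustration of the recognition-by-iteration idea described above, here is a small Python sketch: it infers class activations by repeatedly comparing a top-down reconstruction of the input against the input itself, rather than by a single feedforward pass. This is only a toy under simplifying assumptions (a linear reconstruction and a gradient-style correction step); it is not the exact update rule used in Achler's papers, and every name in it is illustrative.

import numpy as np

def recognize_by_iteration(x, W, n_iter=100, step=0.1):
    # Infer activations y by iterating during recognition: the current guess
    # produces a top-down reconstruction W @ y, and the mismatch with the
    # actual input x feeds back to correct the guess.
    y = np.zeros(W.shape[1])                           # start with no active classes
    for _ in range(n_iter):
        reconstruction = W @ y                         # top-down (feedback) signal
        error = x - reconstruction                     # what the guess fails to explain
        y = np.maximum(y + step * (W.T @ error), 0.0)  # bottom-up correction, kept non-negative
    return y

# Toy prototypes: each column of W stores one class over three input features.
W = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
x = np.array([2.0, 1.0, 1.0])                          # a mixture of both prototypes
print(recognize_by_iteration(x, W))                    # settles near [1.0, 1.0]

In this toy setting the modularity property shows up directly: storing an additional class only appends a new column to W and leaves the existing columns untouched, whereas adding a class to a trained feedforward classifier typically means re-fitting weights that are shared across classes.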
Achler, T., Supervised Generative Reconstruction: An Efficient Way To Flexibly Store and Recognize Patterns, Arxiv 2012, http://arxiv.org/pdf/1112.2988v2 Achler, T., Non-Oscillatory Dynamics to Disambiguate Pattern Mixtures, Chapter 4 in Relevance of the Time Domain to Neural Network Models 2011, http://reason.cs.uiuc.edu/tsvi/TimeDomain_Chapter.pdf Achler, T., Towards Bridging the Gap between Pattern Recognition and Symbolic Representations Within Neural Networks, Neural-Symbolic Learning and Reasoning, AAAI-2012, http://reason.cs.uiuc.edu/tsvi/nesy2012.pdf Sincerely, Tsvi Achler MD/PhD

On Sat, Feb 9, 2013 at 6:14 PM, Jeff Clune wrote: > Hello all, > > I believe that many in the neuroscience community will be interested in a new paper that sheds light on why modularity evolves in biological networks, including neural networks. The same discovery also provides AI researchers a simple technique for evolving neural networks that are modular and have increased evolvability, meaning that they adapt faster to new environments. > > Cite: Clune J, Mouret J-B, Lipson H (2013) The evolutionary origins of modularity. Proceedings of the Royal Society B. 280: 20122863. http://dx.doi.org/10.1098/rspb.2012.2863 (pdf) > > Abstract: A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks - their organization as functional, sparsely connected subunits - but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes. > > Video: http://www.youtube.com/watch?feature=player_embedded&v=SG4_aW8LMng > > There has been some nice coverage of this work in the popular press, in case you are interested: > > - National Geographic: http://phenomena.nationalgeographic.com/2013/01/30/the-parts-of-life/ > - MIT's Technology Review: http://www.technologyreview.com/view/428504/computer-scientists-reproduce-the-evolution-of-evolvability/ > - Fast Company: http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning > - Cornell Chronicle: http://www.news.cornell.edu/stories/Jan13/modNetwork.html > - ScienceDaily: http://www.sciencedaily.com/releases/2013/01/130130082300.htm > > I hope you enjoy the work. Please let me know if you have any questions. > > Best regards, > Jeff Clune > > Assistant Professor > Computer Science > University of Wyoming > jeffclune at uwyo.edu > jeffclune.com
From horn at post.tau.ac.il Fri Mar 29 15:03:19 2013 From: horn at post.tau.ac.il (horn at post.tau.ac.il) Date: Fri, 29 Mar 2013 22:03:19 +0300 Subject: Connectionists: rule emergence Message-ID: <20130329220319.Horde.UgHMSptDWw9RVeV3MUeV1QA@webmail.tau.ac.il> Following the recent exchanges regarding rule emergence in neural networks, I wish to point out a paper we have published in 2005 in PNAS describing an unsupervised system that can learn syntax from text: Zach Solan, David Horn, Eytan Ruppin and Shimon Edelman: Unsupervised learning of natural languages Proc. Natl. Acad. Sc. 102 (2005) 11629-11634. http://www.pnas.org/content/102/33/11629 - David Horn

From nicolas.brunel at parisdescartes.fr Fri Mar 29 16:37:10 2013 From: nicolas.brunel at parisdescartes.fr (Brunel Nicolas) Date: Fri, 29 Mar 2013 21:37:10 +0100 Subject: Connectionists: New book: Selected Papers of Daniel Amit (1938-2007) Message-ID: <20130329213710.i8kly3dncwscscow@webmail.univ-paris5.fr> SELECTED PAPERS OF DANIEL AMIT (1938-2007) World Scientific Publishing Company edited by Nicolas Brunel (University of Chicago, Chicago, USA), Paolo del Giudice (Istituto Superiore di Sanità, Rome, Italy), Stefano Fusi (Columbia University, New York, USA), Giorgio Parisi (Università la Sapienza, Rome, Italy), & Misha Tsodyks (Weizmann Institute, Rehovot, Israel) This book provides a selection of papers of the late Daniel Amit (1938-2007). Daniel Amit was a physicist who spent the last 22 years of his life working on neural network models. He was one of the pioneers in the field. The volume contains 21 papers, from the highly influential 1985 paper on the Hopfield model (published together with Hanoch Gutfreund and Haim Sompolinsky), to his last (unpublished) manuscript. Many of these papers are landmark papers in the field. The book also provides a biography; an introduction on Daniel Amit's scientific career before the Hopfield model; and introductions to each of the included papers, written by their co-authors. This book will be of interest to physicists, computational neuroscientists and neurobiologists.
Contents: Introduction; The Hopfield Model; A Network Counting Chimes; Associative Memory Networks at Low Rates; Towards Networks of Spiking Neurons; The Miyashita Correlations; Learning in Networks with Discrete Synapses; The BBS Review; Dynamics of Networks of Spiking Neurons; Electronic Implementations; Prospective Activity; Multi-Item Working Memory; Learning with Spike-Driven Plastic Synapses; Familiarity Recognition; Unpublished Manuscript. Readership: Researchers in computational neuroscience and statistical physics. 484pp Pub. date: Jan 2013 ISBN: 978-981-4383-65-3 US$148 / ?98 From christos.dimitrakakis at gmail.com Fri Mar 29 17:31:02 2013 From: christos.dimitrakakis at gmail.com (Christos Dimitrakakis) Date: Fri, 29 Mar 2013 22:31:02 +0100 Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks In-Reply-To: References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> Message-ID: <51560816.5040001@gmail.com> Dear all, Is there no survey or taxonomy that discusses this line of work in one place? If not, I have a suggestion. Why not start up a wiki to begin with? That would also be of tremendous aid to any newcomers. Best, Christos -- Dr. Christos Dimitrakakis http://lia.epfl.ch/People/dimitrak/ From jeffclune at uwyo.edu Fri Mar 29 20:30:08 2013 From: jeffclune at uwyo.edu (Jeff Clune) Date: Fri, 29 Mar 2013 18:30:08 -0600 Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks In-Reply-To: References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> Message-ID: <73054C4A-084D-40A9-B33A-15E2A38D53A8@uwyo.edu> Hello Christos, Rafael Calabretta keeps a list of papers on the subject of the evolution of modularity. http://gral.ip.rm.cnr.it/rcalabretta/modularity.html I like your idea of a wiki too. It could be a great resource for the field. We could even start fleshing out this page, which is currently nearly empty: http://en.wikipedia.org/wiki/Modularity_(biology) PS. Thanks to everyone who has participated in the discussion of our paper The Evolutionary Origins of Modularity. Some of the papers that have been mentioned we reference in our paper, and others are new to us. We have enjoyed learning about the various different studies and opinions on this subject, and look forward to more great work to come. Best regards, Jeff Clune Assistant Professor Computer Science University of Wyoming jeffclune at uwyo.edu jeffclune.com On Mar 29, 2013, at 3:31 PM, Christos Dimitrakakis wrote: > Dear all, > > Is there no survey or taxonomy that discusses this line of work in one > place? > If not, I have a suggestion. Why not start up a wiki to begin with? That > would also be of tremendous aid to any newcomers. > > Best, > Christos > > -- > Dr. Christos Dimitrakakis > http://lia.epfl.ch/People/dimitrak/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at cse.msu.edu Sun Mar 31 12:41:53 2013 From: weng at cse.msu.edu (Juyang Weng) Date: Sun, 31 Mar 2013 12:41:53 -0400 Subject: Connectionists: Brain-Mind Institute Summer School and ICBM Conference 2013 In-Reply-To: <51574C63.90809@cse.msu.edu> References: <51574C13.2020700@cse.msu.edu> <51574C63.90809@cse.msu.edu> Message-ID: <51586751.6060505@cse.msu.edu> Dear all, sorry, the due date for BMI course application is Sunday, April 28, 2013. The following has corrected the date error. 
-John On 3/30/13 4:34 PM, Juyang Weng wrote: > > > Brain-Mind Institute (BMI) > > > Programs: Summer 2013 > > > *Summer School June 17 - July 5, August 15 - August 2, 2013 > International Conference on Brain-Mind (ICBM), July 27 - 28, 2013 > Michigan State University , East Lansing > , Michigan > USA > * > > This is the 2nd year of BMI after the successful BMI 2012. BMI 2013 > has two parts, summer school and conference. > > *Important dates*: > Full papers: by Sunday, April 14, 2013 > Abstracts: by Sunday, April 21, 2013 > Course applications (to get admitted so that you can register): by > Sunday, April 28, 2013 > Advance registration: Sunday, April 28, 2013 > > > Call for BMI Course Applications > > BMI summer courses will have two 3-week sessions, the 1st session > (Cognitive Science) June 17 - July 5, 2013 and the 2nd session > (Computational Brain-Mind) July 15-August 2, 2013. > > * Due to the need of 6-discipline scope, the BMI courses are > designed for anybody who has at least a bachelor degree, including > faculty, senior researchers, post-doctoral researchers, and > graduate students in any discipline. Exceptional undergraduate > student can be considered on a case by case basis. > * On-site participation of ICBM in the same year is required for all > course related activities (except BMI 871) as part of the > requirements of the BMI 6DC program, including one-course > enrollment, two-course enrollment, and non-course subject tests. > * Those who do not take a BMI course can also register for only ICBM. > > > Call for ICBM Papers and Abstracts > > BMI Internal Conference on Brain-Mind (ICBM) calls for papers in all > subjects related to brain or mind to be presented during Saturday and > Sunday. The subjects of interest include, but not limited to: > > 1. *Genes*: inheritance, evolution, species, environments, nature vs. > nurture, and evolution vs. development. > 2. *Cells*: cell models, cell learning, cell signaling, tissues, > morphogenesis, and tissue development. > 3. *Circuits*: features, clustering, self-organization, cortical > circuits, Brodmann areas, representation, classification, and > regression. > 4. *Streams*: pathways, intra-modal attention, vision, audition, > touch (including kinesthetics, temperature), smell, and taste. > 5. *Brain ways*: neural networks, brain-mind architecture, > inter-modal attention, multisensory integration, and neural > modulation (punishment/serotonin/pain, > reward/dopamine/pleasure/sex, > novelty/acetylcholine/norepinephrine, higher emotion). > 6. *Experiences/learning*: training, learning, development, > interaction, performance metrics, and functions of genome. > 7. *Behaviors:* actions, motor development, concept learning, > abstraction, languages, decision making, reasoning, and creativity. > 8. *Societies/multi-agent*: joint attention, swarm intelligence, > group intelligence, genders, races, science of organization, > constitutions, and laws. > 9. *Diseases*: depression, ADD/ADHD, drug addiction, dyslexia, > autism, schizophrenia, Alzheimer's disease, Parkinson's disease, > vision loss, and hearing loss. > 10. *Applications*: image analysis, computer vision, speech > recognition, pattern recognition, robotics, artificial > intelligence, instrumentation, and prosthetics. > > Accepted full papers and abstracts will be presented orally at ICBM, > available online and searchable. Each paper will be maximum 8 pages. > Each abstracts must be no longer than 500 words plus as many as 4 > bibliographic citations (4000 characters total). 
> > -- > -- > Juyang (John) Weng, Professor > Department of Computer Science and Engineering > MSU Cognitive Science Program and MSU Neuroscience Program > 428 S Shaw Ln Rm 3115 > Michigan State University > East Lansing, MI 48824 USA > Tel: 517-353-4388 > Fax: 517-432-1061 > Email:weng at cse.msu.edu > URL:http://www.cse.msu.edu/~weng/ > ---------------------------------------------- > > > > > > > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From weng at cse.msu.edu Sat Mar 30 16:50:11 2013 From: weng at cse.msu.edu (Juyang Weng) Date: Sat, 30 Mar 2013 16:50:11 -0400 Subject: Connectionists: Brain-Mind Magazine, vol. 2, no. 1, 2013 Message-ID: <51575003.3090009@cse.msu.edu> Brain-Mind Magazine Vol. 2, No. 1, 2013 Table of Contexts Front Cover 0 Understanding the Self Biologically banner 1 - 3 by /Yi Zheng /and/Gonzalo Munevar / *Abstract: *In Total Recall the hero discovers that his good-guy self is just implanted memories. His body used to be occupied by another, vicious self, whose allies want back. This fantasy gains some plausibility from the traditional conception of the self as a collection of experiences kept in memory --- a unified conscious self that makes our experiences feel ours. Great fiction, bad neuroscience. There is no central brain structure that corresponds to that self, and some scientists have concluded that the self is an illusion. The notion that the self tags our experiences as ours seems to be wrong. And the idea that the self is a collection of remembered experiences turns out to be false. We propose instead a revolutionary biological conception of the self: The brain has evolved to constitute a self that is mostly unconscious and distributive, which does away with the paradoxes, explains all the seemingly contradictory experimental results, and opens up new avenues of research in neuroscience. *Index terms: *Self, evolution, neuroscience, distributive, brain-imaging Turn Slogans into "Science"? banner 4 - 7 by /D. W. Mabaho / *Abstract: *Juyang Weng's two letters to Obama amount to a lack and misuse of neuroscience knowledge, impoverishment and confusion in logic, as well as tailoring and misreading of historical facts. It is an example of ideological slogans disguised under the term "science". *Index terms: *brain-mind, checks of government power, history Every Country should Self-Organize like a Brain: Rebuttal to D. W. Mabaho banner 8 - 11 by /Juyang Weng / *Abstract: *I would like to thank D. W. Mabaho for raising many questions, which enable me to reply more comprehensively. I am more convinced that every country should self-organize like a brain. This is not an ideology, since the brain's self-organization is highly holistic. *Index terms: *history, checks-and-balances, self-organization Private Data: A Huge Problem with Education Research banner 12 - 13 by /R. James Milgram / *Abstract: *A very influential paper on improving math outcomes was published in 2008. The authors refused to divulge their data claiming that agreements with the schools and FERPA rules prevented it. It turns out that this is not true. 
When, by other means, we found the identities of the schools, serious problems with the conclusions of the article were quickly revealed. The 2008 paper was far from unique in this respect. There are many papers that have had huge influences on K-12 mathematics curricula, and could not be independently verified because the authors refused to reveal their data. In this article we describe how we were able to find the real data, and point out the legal constraints that should make it very difficult for authors of such papers to withhold their data in the future. *Index terms: *Evaluation of publication, mathematical education Standing Up to Academic Bullying: and Those Who Block the Path to Improvements in Education banner 14 - 15 by /Jo Boaler / *Abstract: *Honest academic debate lies at the core of good scholarship. But what happens when, under the guise of academic freedom, people distort the truth in order to promote their position and discredit someone's evidence? *Index terms: *Evaluation of publication, mathematical education When a Reviewer's Comments Became Longer than the Submitted Paper banner 16 - 17 by /Mojtaba Solgi / *Abstract: *The debate over the pros and cons of the so-called scholarly peer review of journals is as old as itself. In this short essay, I wish not to take a side, but to simply tell a story. The story is of how a recently published paper [1] was first hammered by the reviewers as vague, incomprehensible and worthy of rejection, and later praised as "unorthodox" and worthy of the attention of the research community. *Index terms: *Peer review, research evaluation, perceptual learning, transfer Brain Stories 3: Bitter Science banner 18 - 21 by /Brian N. Huang / *Abstract: *Like the rest of this series, this is a true story. The developmental program for scientific research --- scientific policies and bylaws --- seems to be immature for upholding justice. The human race is paying dearly for this immaturity using taxpayer dollars. However, I do not mean to discourage bright young people from taking a career in scientific research. *Index terms: *Checks and balances, scientific policies, bylaws, shortsightedness, discrimination, injustice The 3rd Open Letter to the US President Obama: Why Government Ideologies Block Knowledge? banner 22 - 24 by /Juyang Weng / *Abstract: *Ideologies are attractive to many constituents who are not aware of the limitations of each ideology. Although the US Constitution was designed for checks-and-balances of power, the most basic problem in the US is still the lack of checks-and-balances of power. Only after US governmental officials and politicians acquire knowledge about how the brain works can they enable the US to overcome the fundamental limitations of its Constitution. Many current major problems in the US cannot be solved without a holistic approach suggested by known theoretical understanding about how our own brains work. *Index terms: *US interest, science of brain and mind, domestic and foreign policies Back cover 25 BMI-line-right Established since June 2012 Published by the Brain-Mind Institute -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From weng at cse.msu.edu Sat Mar 30 15:21:35 2013
From: weng at cse.msu.edu (Juyang Weng)
Date: Sat, 30 Mar 2013 15:21:35 -0400
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <73054C4A-084D-40A9-B33A-15E2A38D53A8@uwyo.edu>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <73054C4A-084D-40A9-B33A-15E2A38D53A8@uwyo.edu>
Message-ID: <51573B3F.5090305@cse.msu.edu>

Dear Jeff Clune:

Thank you for pointing to the URL. I quote some statements below in two paragraphs.

Although I agree that the genome has made a "best guess" when a zygote forms, it is simple-minded to attribute the modularity of the brain, even at birth, primarily to the "evolution of modularity," as you put it. In other words, unlike the zygote, the brain of a newborn is no longer simply the "best guess" of the genome. The body of the newborn has played a fundamental role in the formation of the modularity inside the newborn's brain. Namely, "emergence," or development, is the key process behind the brain's modularity in the newborn and, of course, also in later life.

If you have a chance to read about our computational model of the DEVELOPMENT of a brain-inspired network, DN, you will see that, at least computationally, DN does not need to attribute its emergent modularity to anything other than a set of cell mechanisms. This is because of the cell-centered role of the genes, known as genomic equivalence. For example, each cell grows and connects according to signals from other cells in its neighborhood (not primarily genes!). Many biological experiments have shown how autonomous cells (whose properties are to some degree genome-specified) communicate to migrate, differentiate, form tissues (e.g., cortex), and connect. In our DN model, such cell behaviors give rise to surprising brain-like capabilities when sensory and motor signals are present.

Pay attention to "emergence" in the paragraphs I quoted below.
-John "The existence of modules is recognized at all levels of the biological hierarchy. In order to understand what modules are, why and how they emerge and how they change, it would be necessary to start a joint effort by researchers in different disciplines (evolutionary and developmental biology, comparative anatomy, physiology, neuro- and cognitive science). This is made difficult by disciplinary specialization. [...] we claim that, because of the strong similarities in the intellectual agenda of artificial life and evolutionary biology and of their common grounding in Darwinian evolutionary theory, a close interaction between the two fields could easily take place. Moreover, by considering that artificial neural networks draw an inspiration from neuro- and cognitive science, an artificial life approach to the problem could theoretically enlarge the field of investigation." (Calabretta /et al./, 1998 ) *A general definition of modularity and nonmodularity in neural networks can be the following*: "modular systems can be defined as systems made up of structurally and/or functionally distinct parts. While non-modular systems are internally homogeneous, modular systems are segmented into modules, i.e., portions of a system having a structure and/or function different from the structure or function of other portions of the system. [...] In a /nonmodular/ architecture one and the same connection weight may be involved in two or more tasks. In a /modular/ architecture each weight is always involved in a single task: /Modules are sets of 'proprietary' connections that are only used to accomplish a single task./" (Calabretta & Parisi , 2005, Fig. 14.4; see also Calabretta /et al./, 2003 ). On 3/29/13 8:30 PM, Jeff Clune wrote: > Hello Christos, > > Rafael Calabretta keeps a list of papers on the subject of the > evolution of modularity. > > http://gral.ip.rm.cnr.it/rcalabretta/modularity.html > > I like your idea of a wiki too. It could be a great resource for the > field. We could even start fleshing out this page, which is currently > nearly empty: http://en.wikipedia.org/wiki/Modularity_(biology) > > > PS. Thanks to everyone who has participated in the discussion of our > paper The Evolutionary Origins of Modularity. Some of the papers that > have been mentioned we reference in our paper, and others are new to > us. We have enjoyed learning about the various different studies and > opinions on this subject, and look forward to more great work to come. > > > Best regards, > *Jeff Clune* > > Assistant Professor > Computer Science > University of Wyoming > jeffclune at uwyo.edu > jeffclune.com > > On Mar 29, 2013, at 3:31 PM, Christos Dimitrakakis > > wrote: > >> Dear all, >> >> Is there no survey or taxonomy that discusses this line of work in one >> place? >> If not, I have a suggestion. Why not start up a wiki to begin with? That >> would also be of tremendous aid to any newcomers. >> >> Best, >> Christos >> >> -- >> Dr. Christos Dimitrakakis >> http://lia.epfl.ch/People/dimitrak/ >> > -- -- Juyang (John) Weng, Professor Department of Computer Science and Engineering MSU Cognitive Science Program and MSU Neuroscience Program 428 S Shaw Ln Rm 3115 Michigan State University East Lansing, MI 48824 USA Tel: 517-353-4388 Fax: 517-432-1061 Email: weng at cse.msu.edu URL: http://www.cse.msu.edu/~weng/ ---------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From yuille at stat.ucla.edu Sun Mar 31 20:42:56 2013
From: yuille at stat.ucla.edu (Yuille, Alan)
Date: Mon, 1 Apr 2013 00:42:56 +0000
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <73054C4A-084D-40A9-B33A-15E2A38D53A8@uwyo.edu>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu>, <73054C4A-084D-40A9-B33A-15E2A38D53A8@uwyo.edu>
Message-ID:

My group's work on compositional models might be of interest. An example appears on arXiv as part of the new International Conference on Learning Representations -- see http://arxiv.org/abs/1301.3560 . Although this work was not neuronally motivated, it seems to have connections to Les Valiant's neuroidal model -- with its JOIN and LINK operations -- which seems to address these issues and may be relevant to your discussions.

Best, Alan

________________________________
From: connectionists-bounces at mailman.srv.cs.cmu.edu [connectionists-bounces at mailman.srv.cs.cmu.edu] on behalf of Jeff Clune [jeffclune at uwyo.edu]
Sent: Friday, March 29, 2013 5:30 PM
To: Christos Dimitrakakis
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks

Hello Christos,

Rafael Calabretta keeps a list of papers on the subject of the evolution of modularity.

http://gral.ip.rm.cnr.it/rcalabretta/modularity.html

I like your idea of a wiki too. It could be a great resource for the field. We could even start fleshing out this page, which is currently nearly empty: http://en.wikipedia.org/wiki/Modularity_(biology)

PS. Thanks to everyone who has participated in the discussion of our paper The Evolutionary Origins of Modularity. Some of the papers that have been mentioned we reference in our paper, and others are new to us. We have enjoyed learning about the various different studies and opinions on this subject, and look forward to more great work to come.

Best regards,
Jeff Clune

Assistant Professor
Computer Science
University of Wyoming
jeffclune at uwyo.edu
jeffclune.com

On Mar 29, 2013, at 3:31 PM, Christos Dimitrakakis wrote:

Dear all,

Is there no survey or taxonomy that discusses this line of work in one place?
If not, I have a suggestion. Why not start up a wiki to begin with? That would also be of tremendous aid to any newcomers.

Best,
Christos

--
Dr. Christos Dimitrakakis
http://lia.epfl.ch/People/dimitrak/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From brian.mingus at colorado.edu Sun Mar 31 21:59:45 2013
From: brian.mingus at colorado.edu (Brian J Mingus)
Date: Sun, 31 Mar 2013 19:59:45 -0600
Subject: Connectionists: New paper on why modules evolve, and how to evolve modular neural networks
In-Reply-To: <73054C4A-084D-40A9-B33A-15E2A38D53A8@uwyo.edu>
References: <70D7BAED-6D38-4506-BE10-F1A463AC74BA@uwyo.edu> <73054C4A-084D-40A9-B33A-15E2A38D53A8@uwyo.edu>
Message-ID:

Hi Jeff,

I have created an official wiki for this mailing list. If you or someone in this discussion could start fleshing out the first article on modularity, that would get us off to a great start. Here it is:

http://grey.colorado.edu/Connectionists

I think this wiki has a lot of potential, although it seems that it could also be forgotten about altogether.

- Brian Mingus

On Fri, Mar 29, 2013 at 6:30 PM, Jeff Clune wrote:

> Hello Christos,
>
> Rafael Calabretta keeps a list of papers on the subject of the evolution
> of modularity.
>
> http://gral.ip.rm.cnr.it/rcalabretta/modularity.html
>
> I like your idea of a wiki too. It could be a great resource for the
> field. We could even start fleshing out this page, which is currently
> nearly empty: http://en.wikipedia.org/wiki/Modularity_(biology)
>
> PS. Thanks to everyone who has participated in the discussion of our paper
> The Evolutionary Origins of Modularity. Some of the papers that have been
> mentioned we reference in our paper, and others are new to us. We have
> enjoyed learning about the various different studies and opinions on this
> subject, and look forward to more great work to come.
>
>
> Best regards,
> *Jeff Clune*
>
> Assistant Professor
> Computer Science
> University of Wyoming
> jeffclune at uwyo.edu
> jeffclune.com
>
> On Mar 29, 2013, at 3:31 PM, Christos Dimitrakakis <
> christos.dimitrakakis at gmail.com> wrote:
>
> Dear all,
>
> Is there no survey or taxonomy that discusses this line of work in one
> place?
> If not, I have a suggestion. Why not start up a wiki to begin with? That
> would also be of tremendous aid to any newcomers.
>
> Best,
> Christos
>
> --
> Dr. Christos Dimitrakakis
> http://lia.epfl.ch/People/dimitrak/
>
>

--
Brian Mingus
Graduate student
Computational Cognitive Neuroscience Lab
University of Colorado at Boulder
http://grey.colorado.edu/mingus
1-720-587-9482

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ted.carnevale at yale.edu Sun Mar 31 22:34:42 2013
From: ted.carnevale at yale.edu (Ted Carnevale)
Date: Sun, 31 Mar 2013 22:34:42 -0400
Subject: Connectionists: Course on parallelizing NEURON models
Message-ID: <5158F242.4050800@yale.edu>

Who should take the course on parallelizing NEURON models that we are planning to present June 26-30 at the Institute for Neural Computation at UCSD?

Any NEURON user who:
* wants to use all of the computational power of their own multicore laptop or desktop PC or Mac, in order to run simulations more quickly--gain up to an N-fold speedup on an N-core machine!
* has a model optimization or parameter space exploration problem that requires hundreds or thousands of time-consuming simulations
* has a model that is too complex for a single PC or Mac, and wants to take advantage of high-performance parallel computing resources

This course will address topics that include:
* installing NEURON on a wide variety of parallel supercomputers
* parallelizing network models that involve spike-triggered synaptic transmission and/or gap junctions
* using "bulletin board style parallelization" for embarrassingly parallel problems such as parameter space exploration (see the sketch after this message)
* ensuring that simulation results are independent of the number of processors or how cells are distributed over the processors
* implementing reproducible randomness (and why it is essential)
* measuring performance
* achieving load balance (a key factor in maximizing performance)
* debugging parallel models
* using remote high-performance computing resources

Applicants who sign up and pay by Friday, April 12, will qualify for the early bird registration fee of $1200. After April 12, the fee goes back up to $1350. Registration is limited, and the last day to sign up is Friday, May 24.

For more information and the on-line application form, see
http://www.neuron.yale.edu/neuron/static/courses/parnrn2013/parnrn2013.html
or contact ted dot carnevale at yale dot edu
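For readers curious what the "bulletin board style parallelization" topic looks like in practice, here is a minimal sketch of the pattern using NEURON's ParallelContext from Python. This is not course material: the toy model in run_one() and the parameter values are placeholders, and while runworker, submit, working, pyret, and done are the documented bulletin-board calls, the exact idiom should be checked against the NEURON documentation.

```python
# Minimal sketch of a bulletin-board parameter sweep with NEURON's
# ParallelContext. The toy model and parameter values are placeholders;
# see the NEURON documentation for authoritative details.
from neuron import h

h.load_file("stdrun.hoc")       # provides continuerun()
pc = h.ParallelContext()

def run_one(gnabar):
    """Run a tiny single-compartment model and return (parameter, result)."""
    soma = h.Section(name="soma")
    soma.insert("hh")
    for seg in soma:
        seg.hh.gnabar = gnabar
    h.finitialize(-65)
    h.continuerun(5)            # simulate 5 ms
    return gnabar, soma(0.5).v  # membrane potential at the end of the run

params = [0.08, 0.10, 0.12, 0.14]

pc.runworker()                  # worker processes stay here and execute jobs
for p in params:
    pc.submit(run_one, p)       # post one job per parameter value

results = {}
while pc.working():             # returns nonzero while completed jobs remain
    gnabar, v = pc.pyret()      # Python return value of the finished job
    results[gnabar] = v

print(results)
pc.done()                       # tell the workers to exit the bulletin-board loop
h.quit()
```

On a cluster such a script would typically be launched under MPI, e.g. with something like mpiexec -n 8 nrniv -mpi -python sweep.py (the script name here is hypothetical); run with a single process, the master simply executes the submitted jobs itself during the working() loop.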