From Z.Hussain at cs.ucl.ac.uk Sun Oct 1 08:14:27 2006 From: Z.Hussain at cs.ucl.ac.uk (Zakria Hussain) Date: Sun, 1 Oct 2006 13:14:27 +0100 Subject: Connectionists: Call for participation: NIPS workshop on On-line Trading of Exploration and Exploitation Message-ID: <000001c6e553$24f96c10$0801a8c0@ZakHussain> CALL FOR PARTICIPATION: ---------------------------------------------------------------------------- ----------------- On-line Trading of Exploration and Exploitation http://www.homepages.ucl.ac.uk/~ucabzhu/OTEE.htm NIPS 2006 Workshop December 8-9th, Whistler, BC, Canada ---------------------------------------------------------------------------- ------------------ Background: Trading exploration and exploitation plays a key role in a number of learning tasks. For example, the bandit problem ([1],[2],[3],[4]) provides perhaps the simplest case, in which we must trade off pulling the arm that currently appears most advantageous against experimenting with arms for which we do not have accurate information. Similar issues arise in learning problems where the information received depends on the choices made by the learner. Examples include reinforcement learning and active learning, though similar issues also arise in other disciplines, for example sequential decision-making in statistics and optimal control in control theory. Learning studies have frequently concentrated on the final performance of the learned system rather than on the errors made during the learning process. For example, reinforcement learning has traditionally been concerned with showing convergence to an optimal policy, whereas analysis of the bandit problem has attempted to bound the extra loss incurred during learning compared with an a priori optimal agent. The workshop will provide a focus for work concerned with on-line trading of exploration and exploitation, in particular providing a forum for extensions to the bandit problem, invited presentations by researchers working in related areas in other disciplines, as well as discussion and contributed papers. Call for Participation: Papers: The organizing committee invites extended abstract submissions to the NIPS 2006 workshop on "On-line Trading of Exploration and Exploitation" in the following related areas (among others): * Exploration and exploitation problems * Multi-armed bandit problems * Sequential decision-making * Empirical/theoretical studies of bandit problems * On-line learning algorithms * Related work from other disciplines such as control theory, game theory, statistics, etc. The organizers will select a small number of contributed papers for oral presentations of 20-40 minutes. All other accepted papers will be displayed in a poster session. We shall evaluate proposed talks on both their relevance and significance and their contribution to the well-roundedness of the workshop as a whole. Paper submission deadline: 28th October 2006 Acceptance notification: 10th November 2006 Workshop: 8th or 9th December Papers should be written using the NIPS style file (nips06.sty), not exceed 8 pages in length, and be sent as PDF files to Zakria Hussain by 23.59 (Western Samoa time) on the 28th October 2006. Challenge: A call for participation in phase 2 of the PASCAL network of excellence Exploration Vs Exploitation (EE) challenge (see http://www.pascal-network.org/Challenges/EEC/).
The workshop will be used to discuss the results of the challenge currently running under the auspices of the PASCAL network and Touch Clarity Ltd, investigating algorithms for multi-armed bandits in which the response rates for the individual arms vary over time according to certain patterns. Touch Clarity have agreed to award a first prize of £1000 to the best entry. See the challenge website for instructions, data set downloads and submission details. Code submission: 15th November 2006 Seed distribution: 16th November 2006 Final results: 23rd November 2006 Structure of the workshop: We plan a 1-day workshop to be held on either the 8th or 9th December 2006. The workshop will include a discussion of the results of the PASCAL challenge concerned with extensions of the bandit problem, as well as broadening the discussion to other tasks and other approaches to on-line trading of exploration and exploitation. Our aim is to make connections to approaches (both algorithmic and theoretical) that have been developed in other areas. We plan to fund invited speakers from other disciplines, specifically presentations on optimal control and on sequential decision-making. One part of the workshop will be used to discuss the results of the challenge (see above) currently running under the auspices of the PASCAL network, investigating algorithms for multi-armed bandits in which the response rates for the individual arms vary over time according to certain patterns (see http://www.pascal-network.org/Challenges/EEC/). An outline schedule: * Presentation of the practical problem underlying the PASCAL challenge * Short spotlight presentations of successful approaches to the challenge, interspersed with discussion, leading into a general discussion of results and lessons learnt * Invited talks/tutorials on other approaches in related disciplines, specifically optimal control and sequential decision-making. A preliminary list of invited speakers: Adam Kalai and Csaba Szepesvári (to be confirmed). * Discussion of relations to more standard learning formulations such as reinforcement learning * Contributed talks interspersed with discussion * Poster session * Closing discussion Program Committee: Peter Auer University of Leoben Nicolò Cesa-Bianchi University of Milan Zakria Hussain University College London Adam Kalai Toyota Technological Institute (to be confirmed) Robert Kleinberg University of California Berkeley (to be confirmed) Yishay Mansour Tel Aviv University Leonard Newnham Touch Clarity Ltd John Shawe-Taylor University College London Organization: Peter Auer University of Leoben Nicolò Cesa-Bianchi University of Milan Zakria Hussain University College London Leonard Newnham Touch Clarity Ltd John Shawe-Taylor University College London References: [1] Peter Auer, Nicolò Cesa-Bianchi, Paul Fischer. Finite-time Analysis of the Multi-armed Bandit Problem. Machine Learning (2002). [2] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, Robert E. Schapire. Gambling in a Rigged Casino: The Adversarial Multi-armed Bandit Problem. Proceedings of the 36th Annual Symposium on Foundations of Computer Science (1995). [3] Shie Mannor, John N. Tsitsiklis. The Sample Complexity of Exploration in the Multi-Armed Bandit Problem. Journal of Machine Learning Research (2004). [4] Joannès Vermorel and Mehryar Mohri. Multi-Armed Bandit Algorithms and Empirical Evaluation. In Proceedings of the 16th European Conference on Machine Learning (ECML 2005).
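As a concrete illustration of the exploration/exploitation trade-off described in the Background section above, the following is a minimal sketch of an epsilon-greedy agent on a stationary stochastic bandit. It is not the challenge protocol or any entrant's method; the number of arms, the Bernoulli reward probabilities and the value of epsilon are illustrative assumptions only.

```python
import random

def epsilon_greedy_bandit(true_probs, epsilon=0.1, n_pulls=10000, seed=0):
    """Play a stationary Bernoulli bandit with an epsilon-greedy policy.

    true_probs : list of per-arm success probabilities (illustrative values).
    epsilon    : probability of exploring a uniformly random arm.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms            # pulls per arm
    means = [0.0] * n_arms           # running reward estimates
    total_reward = 0.0
    for _ in range(n_pulls):
        if rng.random() < epsilon:                # explore
            arm = rng.randrange(n_arms)
        else:                                     # exploit the current best estimate
            arm = max(range(n_arms), key=lambda a: means[a])
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]    # incremental mean
        total_reward += reward
    regret = n_pulls * max(true_probs) - total_reward        # loss vs. an a priori optimal agent
    return means, regret

if __name__ == "__main__":
    estimates, regret = epsilon_greedy_bandit([0.2, 0.5, 0.55])
    print("estimated arm means:", [round(m, 3) for m in estimates])
    print("empirical regret vs. always pulling the best arm:", round(regret, 1))
```

The regret computed at the end corresponds to the extra loss relative to an a priori optimal agent discussed in the Background; the challenge setting additionally lets the response rates drift over time, which this stationary sketch does not model.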
Sponsors: PASCAL network of excellence Touch Clarity Ltd From martin.giese at tuebingen.mpg.de Sun Oct 1 11:42:26 2006 From: martin.giese at tuebingen.mpg.de (Martin Giese) Date: Sun, 01 Oct 2006 17:42:26 +0200 Subject: Connectionists: PhD position in Tuebingen Message-ID: <451FE1E2.3060708@tuebingen.mpg.de> PHD POSITION IN COMPUTER GRAPHICS, LAB. FOR ACTION REPRESENTATION AND LEARNING, HERTIE INST. FOR CLINICAL BRAIN RESEARCH, TUEBINGEN, GERMANY Within the DFG-funded package project 'Perceptual Graphics' we offer a PhD position (BAT IIa/2) for the development of algorithms in computer animation. The grant aims at the transfer of methods and knowledge from the biological sciences to systems in computer graphics. It also involves partners from the Max Planck Institute for Biological Cybernetics (Tuebingen) and the computer graphics departments of the universities in Bonn, Konstanz, and Tuebingen. The specific research topic will be the development of machine learning algorithms for the efficient representation of body movements in computer animation, based on concepts derived from motor control in biological systems. Good mathematical skills and an interest in machine learning are required. Ideal candidates should have a background in computer science, mathematics, physics, or engineering. Students from bioinformatics with an interest in computer graphics are also highly welcome. The Laboratory for Action Representation and Learning (ARL) has at its disposal a high-end motion capture system (VICON), modern software tools for computer animation, and a motion laboratory that is also suitable for biomechanical and clinical studies. In the context of this project the group collaborates internationally with the Dept. of Brain and Cognitive Science (M.I.T.) and the Dept. of Computer Science (Weizmann Institute, Israel). Interested candidates should send or email a CV and the names of two references to PD Dr. Martin Giese, ARL, Dept. of Cognitive Neurology, Hertie Institute for Clinical Brain Research, Tuebingen, Frondsbergstr. 23, D-72076 Tuebingen, Germany. Tel.: (+49) 7071 601 724 / (+49) 7071 365 9880 Email: martin.giese at uni-tuebingen.de Further information: http://www.uni-tuebingen.de/uni/knv/arl/ http://www.hih-tuebingen.de/ From Gunnar.Raetsch at tuebingen.mpg.de Mon Oct 2 07:04:10 2006 From: Gunnar.Raetsch at tuebingen.mpg.de (Gunnar Rätsch) Date: Mon, 2 Oct 2006 13:04:10 +0200 Subject: Connectionists: NIPS Workshop on New Problems and Methods in Computational Biology Message-ID: Dear colleagues, I would like to invite you to participate in the workshop on New Problems and Methods in Computational Biology on the 8th or 9th of December at NIPS'06 in Whistler, B.C. (http://nips.cc). If you would like to contribute, please send an extended abstract by *October 31, 11:59am (Samoa time)* to nips-compbio at tuebingen.mpg.de. We still have a few slots for talks available (details below). I am looking forward to meeting you there! Gunnar Raetsch NIPS*06 Workshop New Problems and Methods in Computational Biology Workshop email: nips-compbio at tuebingen.mpg.de Workshop web address: http://www.fml.tuebingen.mpg.de/nipscompbio Organizers: * Gal Chechik, Department of Computer Science, Stanford University * Christina Leslie, Center for Comp. Learning Systems, Columbia University * Quaid Morris, Centre for Cellular and Biomolecular Research, University of Toronto * William S.
Noble, Department of Genome Sciences, University of Washington * Koji Tsuda, Max Planck Institute for Biological Cybernetics, Tübingen, Germany * Gunnar Raetsch, Friedrich Miescher Lab. of the Max Planck Society, Tübingen, Germany Workshop Description: The field of computational biology has seen dramatic growth over the past few years, in terms of newly available data, new scientific questions, and new challenges for learning and inference. In particular, biological data are often relationally structured and highly diverse, and thus require combining multiple weak sources of evidence from heterogeneous data. These could include sequenced genomes of a variety of organisms, gene expression data from multiple technologies, protein sequence and 3D structural data, protein interactions, gene ontology and pathway databases, genetic variation data, and an enormous amount of textual data in the biological and medical literature. These new types of scientific and clinical problems require the development of new supervised and unsupervised learning approaches that can use these growing resources. The goal of this workshop is to present emerging problems and machine learning techniques in computational biology. Speakers from the biology/bioinformatics community will present current research problems in bioinformatics, and we invite contributed talks on novel learning approaches in computational biology. We encourage contributions describing either progress on new bioinformatics problems or work on established problems using methods that are substantially different from standard approaches. Kernel methods, graphical models, feature selection and other techniques applied to relevant bioinformatics problems would all be appropriate for the workshop. Submission instructions: Researchers interested in contributing should send an extended abstract (1-4 pages, postscript or pdf format) to nips-compbio at tuebingen.mpg.de by *October 31, 11:59pm (Samoa time)*. The workshop allows submissions of papers that are under review or have been recently published in a conference or a journal. This is done to encourage presentation of mature research projects that are interesting to the community. The authors should clearly state any overlapping published work at the time of submission. The workshop organizers intend to invite full-length versions of accepted workshop contributions for publication in a special issue of BMC Bioinformatics (the previous issue is available at http://www.biomedcentral.com/1471-2105/7?issue=S1). Program Committee: * Pierre Baldi, UC Irvine * Kristin Bennett, Rensselaer Polytechnic Institute * Mathieu Blanchette, McGill University * Florence d'Alche, Université d'Evry-Val d'Essonne, Genopole * Eleazar Eskin, UC San Diego * Brendan Frey, University of Toronto * Nir Friedman, Hebrew University and Harvard * Michael I. Jordan, UC Berkeley * Michal Linial, The Hebrew University of Jerusalem * Klaus-Robert Müller, Fraunhofer FIRST * Uwe Ohler, Duke University * Eran Segal, Stanford University * Alexander Schliep, Max Planck Institute for Molecular Genetics * Jean-Philippe Vert, Ecole des Mines de Paris Please check http://www.fml.tuebingen.mpg.de/nipscompbio/cfp regularly for updates.
+-------------------------------------------------------------------+ Gunnar Rätsch http://www.fml.mpg.de/raetsch Friedrich Miescher Laboratory Gunnar.Raetsch at tuebingen.mpg.de Max Planck Society Tel: (+49) 7071 601 820 Spemannstraße 39, 72076 Tübingen, Germany Fax: (+49) 7071 601 801 From Cyril.Goutte at nrc-cnrc.gc.ca Mon Oct 2 15:48:05 2006 From: Cyril.Goutte at nrc-cnrc.gc.ca (Cyril Goutte) Date: Mon, 02 Oct 2006 15:48:05 -0400 Subject: Connectionists: Call for contributions: NIPS 2006 Workshop; Machine Learning for Multilingual Information Access Message-ID: <1159818485.9742.3.camel@0562-crtl.crtl.ca> Call for contributions NIPS 2006 Workshop MACHINE LEARNING FOR MULTILINGUAL INFORMATION ACCESS ==================================================== http://ilt.iit.nrc.ca/MLIA/ Description: ------------ In many settings, accessing information available in different languages is a challenge. In Europe, the wide variety of languages is clearly a bottleneck for efficient circulation of and access to information. More than half of EU citizens cannot hold a conversation in a language other than their mother tongue. Even in an officially bilingual country like Canada, fewer than one in five people are considered to have a good enough command of both official languages (2001 census data). The traditional paradigm for addressing this issue is to perform human translation on a massive scale and rely on monolingual information access technology. Although this model has worked reasonably well in the past, the rapid increase in the amount of information produced (and, in Europe, in the number of languages covered) raises questions as to its sustainability. Machine Learning has the potential to help develop and deploy technology that provides: 1. access to information across different languages, 2. usable translation from one language to another. We are interested in Machine Learning techniques addressing, for example, the following problems: * Word alignment * Machine translation * Multilingual lexicon and terminology extraction * Cross-lingual information retrieval * Cross-lingual categorisation Goals of the workshop: ---------------------- Multilingual problems are also emerging as a promising application area for some Machine Learning techniques, for example the use of kernel CCA for cross-language applications, or large-margin approaches to word alignment. This new trend converges with a well-established interest of the Natural Language Processing community in learning approaches. The purpose of this workshop is to provide a forum for discussion of current developments at the intersection between multilingual processing and machine learning. This includes developing new techniques to address various multilingual information access problems (e.g. translation), but also scaling up existing techniques to the available NLP data, developing tools for cross-language information retrieval, etc. We will promote discussions of some inter-related key issues in applying Machine Learning to Multilingual problems: * SCALING UP: - Applying ML to 100-million-word corpora (e.g.
SMT) - Deploying ML solutions on new language pairs * SCARCE RESOURCES: - Languages or domains with limited bilingual corpora - Bootstrapping from limited resources * EVALUATION: - Design of better performance measures - Optimisation of application-specific measures - Learning human evaluation * PRIOR LINGUISTIC KNOWLEDGE: - Modelling and using linguistic knowledge in ML - The continuum between all-data (SMT) and all-prior-knowledge (handcrafted rules) approaches Submission instructions: ------------------------ Researchers interested in presenting their work at the workshop should send an email to: mlia at nrc-cnrc.gc.ca (preferably plain text) with the following information: - Title - Author(s) - Abstract (around 1 page) Schedule: Submission deadline: 29 October 2006 Notification: 6 November 2006 Workshop date: 8 or 9 December 2006 Co-organisers: -------------- Cyril Goutte, National Research Council Canada (contact) Nicola Cancedda, Xerox Research Centre Europe Marc Dymetman, Xerox Research Centre Europe George Foster, National Research Council Canada Workshop format: ---------------- We intend to leave a good part of the workshop to panel discussions that will address relevant topics in multilingual information access (MIA), as well as invited talks presenting some important MIA problems and the associated challenges for Machine Learning. For each half day, we will start with either a keynote or a short tutorial, continue with a few shorter technical presentations, and end with a panel discussion (topics to be decided depending on the confirmed list of speakers). Invited speakers: - Dan Melamed (Courant Institute, NYU) - John Shawe-Taylor (ECS, U. of Southampton, UK), tbc - Ralf Steinberger (JRC, Ispra, Italy) - Wray Buntine (HIIT, Helsinki, Finland), tbc Related work: ------------- Past NIPS workshops have addressed related topics such as learning with structured data, or the use of Machine Learning for Natural Language Processing. There is also some ongoing interest within the European network of excellence PASCAL, as exemplified by the recent workshop on intelligent information access. However, none of these specifically target multilingual aspects. We believe there is sufficient interest and genuine need in this particular area to justify a specific focus on multilingual information access. The newly started European project SMART (Statistical Multilingual Analysis for Retrieval and Translation) specifically targets advanced machine learning techniques for multilingual applications. From M.Casey at surrey.ac.uk Tue Oct 3 09:49:26 2006 From: M.Casey at surrey.ac.uk (M.Casey@surrey.ac.uk) Date: Tue, 3 Oct 2006 14:49:26 +0100 Subject: Connectionists: Biologically Inspired Information Fusion: Information Fusion Journal Special Issue Message-ID: Call for papers for a special issue of Information Fusion on "Biologically Inspired Information Fusion" Our understanding of both natural and artificial cognitive systems is an exciting area of research that is developing into a multi-disciplinary subject with the potential for significant impact on science, engineering and society in general. There is considerable interest in how our understanding of natural systems may help us to apply biological strategies to artificial systems. Of particular interest is our understanding of how to build adaptive information fusion systems by combining knowledge from different domains. In natural systems, the integration of sensory information is learnt at an early stage of development.
Therefore, through a better understanding of the structures and processes involved in this natural adaptive integration, we may be able to construct a truly artificial multi-sensory processing system. Here, psychological and physiological knowledge of multisensory processing, and particularly of the low-level influence that different modalities have on one another, can be used to build upon existing theoretical work on computational mechanisms, such as self-organization and the combination of multiple neural networks, to build systems that can fuse together different information sources. These themes were recently discussed at an International Workshop on Biologically Inspired Information Fusion. As well as presenting the state of the art on multi-sensory processing and information fusion from the life and physical sciences, the workshop provided a forum for researchers to discuss priorities for developing this multi-disciplinary area. This special issue of Information Fusion is therefore aimed at following up on these discussions by focusing on the highlighted priorities, whilst also providing an opportunity for the wider dissemination of relevant themes. For this special issue, papers should either have a biological motivation and/or inspiration, or otherwise be of biological relevance and interest. Manuscripts should make the biological dimension explicit. Information-fusion papers lacking this dimension should be submitted to a regular issue of the journal. Manuscripts covering biologically inspired information fusion methods and their applications, as well as the theories and algorithms developed to address these applications, are invited; submissions should be original and not previously published or presented, even in a broadly similar form, in any other forum. Contributions should be described in sufficient detail to be reproducible on the basis of the material presented in the paper. Topics appropriate for this special issue include, but are not limited to: * Biologically inspired fusion schemes * Adaptive information fusion approaches which emphasize biological motivations * Biologically inspired fusion in robotics * Multimodal integration: * Modeling combined sensory processing * Including, but not exclusively, combining vision, audition, olfaction, taste or touch * Combining artificial and biological sensors * Attention or emotional biasing of sensory processing * Biologically motivated applications of multi-sensor integration Manuscripts should be submitted electronically online at http://ees.elsevier.com/inffus. The corresponding author will have to create a user profile if one has not been established before at Elsevier. Simultaneously, please also send an electronic copy (PDF format preferred) to the Guest Editors listed below. Please identify clearly that the submission is meant for this special issue.
Guest Editors Dr Matthew Casey, Department of Computing, University of Surrey, UK, m.casey at surrey.ac.uk Professor Robert Damper, School of Electronics and Computer Science, University of Southampton, UK, rid at ecs.soton.ac.uk Deadline for Submission: January 30, 2007 Further information can be found at: http://www.cs.surrey.ac.uk/people/academic/M.Casey/biif2006.html http://www.elsevier.com/locate/inffus From david.barber at idiap.ch Tue Oct 3 13:34:15 2006 From: david.barber at idiap.ch (David Barber) Date: Tue, 3 Oct 2006 19:34:15 +0200 Subject: Connectionists: call for abstracts : NIPS Workshop on Advances in Models for Acoustic Processing Message-ID: <0b2a01c6e712$2628e640$c8dd21c0@davidbarber> CALL FOR ABSTRACTS : NIPS06 Workshop on Advances in Models for Acoustic Processing http://www.idiap.ch:9080/amac Whistler, Canada December 8th or 9th, 2006 [Abstract submission deadline: November 2, 2006] DESCRIPTION: The analysis of audio signals is central to the scientific understanding of human hearing abilities as well as to engineering applications such as sound localisation, hearing aids or music information retrieval. Historically, the main mathematical tools have come from signal processing: digital filtering theory, system identification and various transform methods such as Fourier techniques. In recent years there has been increasing interest in Bayesian treatments and graphical models, which permit more refined analysis and representation of acoustic signals. The application of Bayesian techniques is quite natural: acoustical time series can be conveniently modelled using hierarchical signal models that incorporate prior knowledge from various sources, such as physics or studies of human cognition and perception. Once a realistic hierarchical model is constructed, many tasks such as coding, analysis, restoration, transcription, separation, identification or resynthesis can be formulated consistently as Bayesian posterior inference problems (a toy sketch of such a formulation appears after the list of example issues below). In particular, the development of a powerful framework for approaching such tasks is central to improvements in the understanding of how both natural and synthetic systems may produce efficient auditory representations. GOALS: The goal of the workshop is to establish a discussion forum between practitioners of acoustical signal processing, researchers interested in computational neural acoustic processing, and more theoretically oriented researchers in machine learning, statistics and signal processing. This also includes researchers interested in the development of efficient neural codes for auditory representation. In particular, we welcome contributions that introduce interesting and challenging models for acoustical signal analysis and related inference techniques. Example issues are: * What types of modelling approaches are useful for acoustic processing (e.g. hierarchical, generative, discriminative)? * What classes of inference algorithms are suitable for these potentially large and hybrid models of sound? * How can we improve the quality and speed of inference? * Can efficient online algorithms be developed? * How can we learn efficient auditory codes based on independence assumptions about the generating processes? * What can biology and cognitive science tell us about acoustic representations and processing?
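To make the posterior-inference formulation above concrete, here is a toy sketch (not a method endorsed by the organisers) for the simplest hierarchical signal model one might write down for an acoustic time series: a latent first-order autoregressive process observed in Gaussian noise, for which the posterior over the latent signal is computed exactly by a Kalman filter. All parameter values are illustrative assumptions.

```python
import numpy as np

def kalman_filter_ar1(y, a=0.95, q=0.1, r=0.5):
    """Exact posterior filtering p(x_t | y_1..t) for the model
        x_t = a * x_{t-1} + N(0, q)   (latent signal dynamics)
        y_t = x_t + N(0, r)           (noisy observation)
    Returns posterior means and variances of the latent signal.
    """
    m, p = 0.0, 1.0                  # prior mean and variance of x_0
    means, variances = [], []
    for obs in y:
        # predict step: push the posterior through the dynamics
        m_pred = a * m
        p_pred = a * a * p + q
        # update step: condition on the new observation
        k = p_pred / (p_pred + r)    # Kalman gain
        m = m_pred + k * (obs - m_pred)
        p = (1.0 - k) * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # simulate a toy "acoustic" signal from the same generative model
    x = np.zeros(200)
    for t in range(1, 200):
        x[t] = 0.95 * x[t - 1] + rng.normal(scale=np.sqrt(0.1))
    y = x + rng.normal(scale=np.sqrt(0.5), size=200)
    post_mean, post_var = kalman_filter_ar1(y)
    print("RMSE of noisy observations:", np.sqrt(np.mean((y - x) ** 2)).round(3))
    print("RMSE of posterior mean    :", np.sqrt(np.mean((post_mean - x) ** 2)).round(3))
```

Restoration of the latent signal here is the linear-Gaussian analogue of the restoration and denoising tasks listed above; richer hierarchical, non-Gaussian or nonlinear models make the corresponding posterior intractable, which is where the approximate inference questions raised by the workshop enter.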
The workshop scope is deliberately broad so that key advances in acoustic processing in natural systems, together with advances in computational modelling and inference methods, may be discussed by experts who might not otherwise share the same platform. In so doing, we hope that a deeper understanding of acoustic processing, representations and applications may emerge. We have identified three key facets of acoustic processing: FACET 1: Computational neuroscience/modelling of auditory organisation FACET 2: Models and inference techniques for audio and music applications FACET 3: Source separation and statistical inference for the analysis and processing of natural sounds The workshop will be programmed to highlight the overlaps between these topics to provoke interaction. In addition to oral and poster presentations, two invited tutorials will also be given. Based on submissions, we will identify themes related to the workshop topics in order to group the talks and organize thematic discussions after the presentations. SUBMISSION PROCEDURE: Authors should submit an extended abstract to: amac at idiap.ch in pdf or ps by November 2, 2006. We will send an email confirming the receipt of the submission. The suggested abstract length is 3 pages (maximum 8 pages), formatted in standard NIPS style. Accepted abstracts will be allocated talks or poster highlights. Time will be allocated in the programme for poster presentations and discussions. Before the workshop, the abstracts will be made available to a broader audience on the workshop web site. We also plan to maintain the webpage after the workshop and encourage the authors to submit slides and posters with relevant links to their personal web pages. KEY DATES: Oct 3: Workshop announcement and Call for Abstracts Nov 2: Extended Abstract submission deadline Nov 10: Notification of acceptance Nov 23: Final extended abstracts due Dec 8 or 9: Workshop ORGANISERS: David Barber, IDIAP Research Institute http://www.idiap.ch/~barber Taylan Cemgil, Univ. of Cambridge http://www-sigproc.eng.cam.ac.uk/~atc27 CONTACT: For questions/suggestions about the workshop, please contact amac at idiap.ch Please refer to http://www.idiap.ch:9080/amac for up-to-date information about the workshop. From jbennett at netflix.com Mon Oct 2 10:48:10 2006 From: jbennett at netflix.com (Jim Bennett) Date: Mon, 2 Oct 2006 07:48:10 -0700 Subject: Connectionists: Netflix Prize Announcement Message-ID: <07DE814B11E00B4F9EE7CD75AE45737801B2BD30@superfly.netflix.com> Netflix is pleased to announce the Netflix Prize, an award of $1 million to the first person or team who can achieve certain accuracy goals when recommending movies based on personal preferences. The company has also made 100 million anonymous movie ratings available to contestants. Complete details for registering and competing for the Netflix Prize are available at www.netflixprize.com. We especially invite members of the machine learning community to participate. From stephane.dufau at univ-provence.fr Mon Oct 2 04:45:45 2006 From: stephane.dufau at univ-provence.fr (Stephane Dufau) Date: Mon, 2 Oct 2006 10:45:45 +0200 Subject: Connectionists: Computational modeling in the south of France Message-ID: <005701c6e5ff$269afaf0$de6a5e93@SIMUL4GO> Computational modeling in the south of France A post-doctoral position is open at the Laboratoire de Psychologie Cognitive, a CNRS lab at the University of Provence, Marseille, France (http://www.up.univ-mrs.fr/wlpc).
The person hired in this position will participate in a large-scale project on modeling reading acquisition, and will be specifically involved in developing and testing both supervised and unsupervised learning algorithms applied to the development of orthographic representations and spelling-sound correspondences during the process of learning to read. The ideal candidate will have appropriate programming skills (C, Matlab) and experience in developing neural network simulations of cognitive processes, with a Ph.D. in cognitive science or cognitive psychology. Provisional start date: January 1st, 2007. Send an application with CV plus names and contact information for two referees to: Jonathan Grainger - grainger at up.univ-mrs.fr Director Laboratoire de Psychologie Cognitive Université de Provence 3 pl. Victor Hugo 13331 Marseille France From tijl.debie at gmail.com Mon Oct 2 10:18:53 2006 From: tijl.debie at gmail.com (Tijl De Bie) Date: Mon, 2 Oct 2006 16:18:53 +0200 Subject: Connectionists: 2nd call for participation: Current Challenges in Kernel Methods -- 27-28 Nov 06, Brussels, Belgium In-Reply-To: <682af5170610020716i136ac9fdg4000ee6bfa2d8e32@mail.gmail.com> References: <682af5170610020716i136ac9fdg4000ee6bfa2d8e32@mail.gmail.com> Message-ID: <682af5170610020718g401b0c83qda41ab59ff2a9fe@mail.gmail.com> Apologies for cross- or double posting. (Please feel free to distribute.) Second announcement and call for participation: =============================================================== International Workshop on Current Challenges in Kernel Methods (CCKM06) The official 2006 kernel workshop, "10 years of kernel machines" Belgium, Brussels, 27-28 November 2006 www.machine-learning.be/cckm06/ =============================================================== * Format The workshop will consist of two days of invited talks by internationally renowned researchers. Additionally, an interactive student poster session will be organized. You can submit your poster abstract on the workshop website (deadline: 27 October 2006). * Invited speakers Andreas Christmann (Free University of Brussels) Nello Cristianini (University of Bristol) Ingrid Daubechies (Princeton University) Kristiaan Pelckmans (Katholieke Universiteit Leuven) Alain Rakotomamonjy (INSA de Rouen) Bernhard Schoelkopf (Max Planck Institute for Biological Cybernetics) John Shawe-Taylor (University College London) Johan Suykens (Katholieke Universiteit Leuven) Sandor Szedmak (University of Southampton) Ioannis Tsochantaridis (Google) Jean-Philippe Vert (Ecole des Mines de Paris) * Important dates - Conference accommodation (ATLAS hotel) reserved until: Sunday 15 October. - Poster abstract submission deadline: Friday 27 October. - Early registration deadline: Friday 27 October. - Late registration deadline: Friday 17 November. - Workshop: Monday 27 - Tuesday 28 November 2006. * Description In the past decade, the kernel methods domain has expanded from a single algorithm for classification to a full-grown toolbox of techniques that are currently being applied in a variety of domains. This workshop aims at highlighting the current trends and topics of interest, and at putting these in a synthesized historical perspective, with attention to both theoretical and application challenges.
Specific topics range from learning theory, through algorithmic/optimization issues in new kernel methods, to practical successes and bottlenecks. * Intended audience: researchers interested or working in kernel methods, or in application domains that are likely to benefit from advances in kernel methods. Participants may come from artificial intelligence, machine learning, statistics, bioinformatics, data mining, web mining, etc., with a special interest in the study and application of kernel methods. * Level and scope: the lectures in the workshop are intended to be accessible to a broad audience, including anyone with a broad background in computer science, statistics, mathematics, physics, electrical engineering, or a related domain. As a guideline, half of each lecture will be tutorial style, while the other half will cover recent developments. The workshop is sponsored by the WOG Machine Learning for Data Mining and Applications and by the PASCAL network, so attendance will be free except for the catering expenses. However, for practical reasons, registration is mandatory. Please register early, as the number of places is limited. Please check the workshop website for more details: www.machine-learning.be/cckm06/ Kind regards, The organizers, Prof. Bernard Manderick Dr. Tijl De Bie From mvanross at inf.ed.ac.uk Thu Oct 5 10:24:23 2006 From: mvanross at inf.ed.ac.uk (Mark van Rossum) Date: Thu, 5 Oct 2006 15:24:23 +0100 Subject: Connectionists: Postdoc Computational models of plasticity Message-ID: <200610051524.23787.mvanross@inf.ed.ac.uk> Postdoctoral Fellowship in Computational Neuroscience We invite applications for a 2.5-year HFSP-funded postdoctoral fellowship to work on theoretical and computational models of plasticity. The subject is synaptic tagging and capture (Frey and Morris '97). The project is an HFSP collaboration with three experimental labs (Haruhiko Bito in Tokyo, Tobias Bonhoeffer in Munich and Richard Morris in Edinburgh). This part of the project aims to develop models of synaptic tagging at both the biophysical and computational levels, to quantify the experimental findings, and to generate hypotheses for the experimental labs. Applicants should have a strong background in mathematics, physics, computer science, or computational neuroscience and have a commitment to a future research career in neuroscience. The fellowship allows for top-class research in a stimulating environment. Edinburgh is one of the leading centres in the UK for computational neuroscience. It provides a large, active community in computational neuroscience with strong links to the neuroscience research groups in Edinburgh and the project partners. Edinburgh has been voted the 'best place to live in Britain'. The expected starting date is Nov 1st, but this could be adjusted to be later. To apply, please send a cover letter, CV, representative papers and two letters of recommendation to Pat Ferguson, Rm D10, 5 Forrest Hill, Edinburgh EH1 2QL, UK or email to pferguso at inf.ed.ac.uk with subject "hfsp".
Inquiries can be addressed to Mark van Rossum, mvanross at inf.ed.ac.uk Weblinks: homepages.inf.ed.ac.uk/mvanross, www.anc.ed.ac.uk From beckmann at fmrib.ox.ac.uk Thu Oct 5 11:24:27 2006 From: beckmann at fmrib.ox.ac.uk (Christian Beckmann) Date: Thu, 5 Oct 2006 16:24:27 +0100 Subject: Connectionists: Post-doc in mean field modeling Message-ID: <5C922C3B-7C9C-49FA-AB4F-5B4A4B1895A3@fmrib.ox.ac.uk> On behalf of Rolf Kötter: The newly formed Chair in Neurophysiology and Neuroinformatics invites applications for a senior post-doc position in the group of Rolf Kötter at Radboud University Nijmegen/NL. The overall mission of the group is to study the biological conditions of neuronal population dynamics, aiming to bridge the gap between single-cell electrophysiology and large-scale brain activity. Experimentally, we will expand the combination of intracellular recordings and flash photolysis of caged glutamate in brain slices with multielectrode recordings. Computationally, we will simulate the behaviour of neuron populations within and across brain regions, drawing upon our extensive neuroinformatics databases (CoCoDat, CoCoMac). Refs: Schubert et al. 2006 Cereb Cx 16: 223-236; Sporns et al. 2005 PLoS Comput Biol 1: e42; Sporns & Kötter 2004 PLoS Biol 2:1910-1918; Schubert et al. 2003 J Neurosci 23:2961-2970; Passingham et al. 2002 Nat Rev Neurosci 3:606-616. We are looking for an experienced post-doc with a background in mean-field neuronal modelling and dynamical systems theory to advance the computational side in close collaboration with the experimental work in the group (Dirk Schubert) and with related groups performing, e.g., in vivo recordings (Stan Gielen, http://www.mbfys.ru.nl/~stan/), statistical learning theory (Bert Kappen, http://www.snn.ru.nl/~bertk/), non-invasive brain imaging (F.C. Donders Centre, http://www.ru.nl/fcdonders/) and clinical studies (University Hospital). The position is available now, initially for 5 years, with subsequent permanent employment according to Dutch labour law, depending on positive evaluation. The salary will be according to qualification and experience. Interviews will be conducted until the position is filled. Further positions for diploma and PhD students and a junior post-doc with a focus on analyzing macaque connectivity data in the context of human imaging studies are also available. Candidates should send a letter of intention with a full CV and the names of two referees by e-mail to: Prof. Dr. Rolf Kötter, Chair Section Neurophysiology & Neuroinformatics Department of Cognitive Neuroscience (126) Radboud University Nijmegen Medical Centre Geert Grooteplein 21, NL-6525 EZ Nijmegen The Netherlands phone +31 24 3614248; email rk at cns.umcn.nl previous home page: http://www.hirn.uni-duesseldorf.de/rk -- Christian F.
Beckmann Oxford University Centre for Functional Magnetic Resonance Imaging of the Brain, John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK Email: beckmann at fmrib.ox.ac.uk - http://www.fmrib.ox.ac.uk/~beckmann/ Phone: +44(0)1865 222551 Fax: +44(0)1865 222717 From dayan at gatsby.ucl.ac.uk Thu Oct 5 11:42:40 2006 From: dayan at gatsby.ucl.ac.uk (Peter Dayan) Date: Thu, 5 Oct 2006 16:42:40 +0100 Subject: Connectionists: Gatsby Postdoc Training Fellowships In-Reply-To: <20060113094900.GC23422@flies.gatsby.ucl.ac.uk> References: <20060113094900.GC23422@flies.gatsby.ucl.ac.uk> Message-ID: <20061005154240.GA30345@flies.gatsby.ucl.ac.uk> Postdoctoral Training Fellowship Positions Theoretical Neuroscience Gatsby Computational Neuroscience Unit UCL, UK http://www.gatsby.ucl.ac.uk/ The Gatsby Computational Neuroscience Unit invites applications for postdoctoral fellowship positions in theoretical neuroscience and related areas. The Gatsby Unit is a world-class centre for theoretical neuroscience and machine learning, focusing on the interpretation of neural data, population coding, perceptual processing, neural dynamics, neuromodulation, and learning. The Unit also has significant interests across a range of areas in machine learning. For further details of our research please see: http://www.gatsby.ucl.ac.uk/research.html The Unit provides a unique environment in which a critical mass of theoreticians interact closely with each other and with other world-class research groups in related departments at University College London, including Anatomy, Computer Science, Functional Imaging, Physics, Physiology, Psychology, Neurology, Ophthalmology, and Statistics, and the new cross-faculty Centre for Computational Statistics and Machine Learning. The Unit's visitor and seminar programmes enable staff and students to engage with leading researchers from across the world. Candidates must have a strong analytical background and demonstrable interest and expertise in theoretical neuroscience. Stipends are competitive, based on experience and achievement. Fellowships are typically offered for two years in the first instance. Applicants should send in pdf, plain text or Word format a CV, a statement of research interests, and the names and full contact details (including e-mail addresses) of three referees to: asstadmin at gatsby.ucl.ac.uk Applicants are directed to further particulars about the positions available from: http://www.gatsby.ucl.ac.uk/vacancies/index.html The closing date for applications is 12th November 2006. From btanner at cs.ualberta.ca Thu Oct 5 15:36:00 2006 From: btanner at cs.ualberta.ca (Brian Tanner) Date: Thu, 5 Oct 2006 13:36:00 -0600 Subject: Connectionists: Call for Participation : NIPS Workshop on Grounding Perception, Knowledge and Cognition in Sensori-Motor Experience Message-ID: NIPS 2006 Workshop on Grounding Perception, Knowledge and Cognition in Sensori-Motor Experience ====================================================== http://rlai.net/RLAI/prw2006.html Workshop Overview ------------------------------ Understanding how world knowledge can be grounded in sensori-motor experience has been a long-standing goal of philosophy, psychology, and artificial intelligence. So far this goal has remained distant, but recent progress in machine learning, cognitive science, neuroscience, engineering, and other fields seems to bring nearer the possibility of addressing it productively. 
The objective of this workshop is to provide cross-fertilization of ideas between diverse research communities interested in this subject. This workshop will serve as a meeting point for researchers from these various disciplines to share their perspectives and insights on the issue of representing knowledge in terms of sensori-motor experience. The workshop will focus on research topics such as: * The role of prediction in biological and neurological systems * Grounded state representations (PSRs, OOMs, etc.) * Dynamical / environmental models grounded in sensori-motor experience * Identifying relevant sensory information, both across sensors and time (sensor bootstrapping) * Representations spanning multiple spatio-temporal scales * Signals to symbols, symbol grounding * Issues of grounded knowledge representations: formats, capabilities, affordances, limitations * Reasoning and planning in terms of grounded knowledge * Active perception guided by sensori-motor experience * Construction of perceptual or motor control primitives * Learning in infants, going from sensory data to representations The workshop will comprise invited talks by 5-6 leading researchers from a variety of disciplines related to experience-based knowledge representations. The speakers will share their area-specific knowledge and understanding of these issues with the workshop attendees. Several discussion sessions will give an opportunity for all workshop participants to discuss ideas. The workshop will conclude with a poster session populated with work submitted by the community at large. A central goal is to bring together the perspectives of different communities. We invite participants from any area, including machine learning, cognitive science, computational neuroscience, developmental robotics, and philosophy. Call for Participation ------------------------------ Participation in the form of a poster will be by invitation from the program committee based on a small written submission, either a short paper or extended abstract on your relevant work (this may be work that has been previously published elsewhere). We encourage submissions from all disciplines that are related to the topic of the workshop. The poster session is expected to reflect the wide variety of interesting ideas surrounding our topic. * Submission Date: November 3, 2006 * Acceptance Notification: November 10, 2006 * Workshop date: December 8, 2006 All submissions should be emailed to grounded.workshop at gmail.com Organizers / Contact Information ------------------------------ * Brian Tanner (University of Alberta) Co-Chair * Michael James (Toyota Research) Co-Chair * David Wingate (University of Michigan) Co-Chair * Satinder Singh (University of Michigan) * Rich Sutton (University of Alberta) Please direct all questions and submissions to grounded.workshop at gmail.com. The official workshop website is: http://rlai.net/RLAI/prw2006.html From C.Archambeau at cs.ucl.ac.uk Fri Oct 6 02:48:05 2006 From: C.Archambeau at cs.ucl.ac.uk (Cedric Archambeau) Date: Fri, 06 Oct 2006 07:48:05 +0100 Subject: Connectionists: Call for abstracts: NIPS 06 workshop on Dynamical Systems, Stochastic Processes and Bayesian Inference Message-ID: <4525FC25.70303@cs.ucl.ac.uk> Apologies for cross-posting.
CALL FOR ABSTRACTS: ==================================================== NIPS 2006 Workshop on Dynamical Systems, Stochastic Processes and Bayesian Inference ==================================================== http://www.cs.ucl.ac.uk/staff/c.archambeau/dsb.htm December 8-9, Whistler, BC, Canada [Abstract submission deadline: November 1, 2006] OVERVIEW: The modelling of continuous-time dynamical systems from uncertain observations is an important task that comes up in a wide range of applications, from numerical weather prediction and finance to genetic networks and motion capture in video. Often, we may assume that the dynamical models are formulated as systems of differential equations. In a Bayesian approach, we may then incorporate a priori knowledge about the dynamics by providing probability distributions on the unknown functions, which correspond for example to driving forces and appear as coefficients or parameters in the differential equations. Hence, such functions become stochastic processes in a probabilistic Bayesian framework. Gaussian processes (GPs) provide a natural and flexible framework in such circumstances. The use of GPs in the learning of functions from data is now a well-established technique in Machine Learning. Nevertheless, their application to dynamical systems becomes highly nontrivial when the dynamics is nonlinear in the (Gaussian) parameter functions. This happens naturally for nonlinear systems which are driven by a Gaussian noise process, or when the nonlinearity is needed to provide necessary constraints (e.g., positivity) for the parameter functions. In such a case, the prior process over the system's dynamics is non-Gaussian right from the start. This means that closed-form analytical posterior predictions (even in the case of Gaussian observation noise) are no longer possible. Moreover, their computation requires the entire underlying Gaussian latent process at all times (not just at the discrete observation times). Hence, inference of the dynamics would require nontrivial sampling methods or approximation techniques. This raises the following questions: - What is the practical relevance of nonlinear effects, i.e. could we just ignore them? - How should we sample randomly from posterior continuous-time processes? - How should we deal with large data sets and/or very high dimensional data? - Are functional Laplace approximations suitable? - Can we think of variational approximations? - Can we do parameter and hyper-parameter estimation? - Etc. The aim of this workshop is to provide a forum for discussing open problems related to continuous-time stochastic dynamical systems, their links to Bayesian inference and their relevance to Machine Learning. The workshop will be of interest to researchers in both Bayesian inference and stochastic processes. We hope that the workshop will provide new insights into continuous-time stochastic processes and serve as a starting point for new research perspectives and future collaborations.
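As a small illustration of the kind of model the overview describes (not a method proposed by the organizers), the sketch below simulates a nonlinear one-dimensional system driven by a Gaussian noise process with the Euler-Maruyama scheme and records sparse noisy observations; the drift function, noise levels and observation times are illustrative assumptions. The inference problem discussed above is the reverse direction: recovering the posterior over the continuous latent path (and its parameters) from only the discrete observations.

```python
import numpy as np

def simulate_sde(drift, x0=0.0, sigma=0.5, t_end=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of dx = drift(x) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.normal()
    t = np.linspace(0.0, t_end, n_steps + 1)
    return t, x

def observe(t, x, n_obs=20, noise_std=0.2, seed=1):
    """Noisy observations of the latent path at a few discrete times."""
    rng = np.random.default_rng(seed)
    idx = np.linspace(0, len(t) - 1, n_obs).astype(int)
    return t[idx], x[idx] + noise_std * rng.normal(size=n_obs)

if __name__ == "__main__":
    # double-well drift: a simple nonlinearity that makes the prior over paths non-Gaussian
    drift = lambda x: 4.0 * x * (1.0 - x ** 2)
    t, x = simulate_sde(drift)
    t_obs, y_obs = observe(t, x)
    print("latent path simulated at", len(t), "time points")
    print("observed at", len(t_obs), "times; first few observations:", np.round(y_obs[:5], 3))
```

Because the drift is nonlinear, no closed-form Gaussian posterior over the path exists, which is exactly the situation motivating the sampling and approximation questions listed in the overview.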
SUBMISSIONS: We welcome extended abstract submissions to the NIPS 2006 workshop on "Dynamical Systems, Stochastic Processes and Bayesian Inference" in the following related areas (among others): - Nonlinear dynamical systems - Bayesian inference in stochastic processes - Gaussian and non-Gaussian processes - Continuous-time Markov chains - Continuous-time discrete/continuous state processes - Gaussian, mixture of Gaussians and nonparametric belief networks - Nonlinear filtering/smoothing The suggested abstract length is 4 pages (maximum 8 pages), formatted in the NIPS format. The abstracts will be made available on the web. The authors should submit their extended abstract to dsb at cs.ucl.ac.uk in PDF before Nov. 1, 2006, 23:59 UTC. An email confirming receipt of the submission will be sent by the organizers. Further requests, suggestions and comments should be sent to dsb at cs.ucl.ac.uk. SCHEDULE: Oct. 01: Call for extended abstracts Nov. 01: Abstract submission deadline Nov. 17: Notification of acceptance Nov. 24: Final extended abstracts due Dec. 8 or 9: Workshop PROGRAM: In order to encourage active participation by the attendees, both the morning and the afternoon sessions will include invited talks, short peer-reviewed spotlight presentations, and extended poster sessions for informal discussions. The workshop will close with a wrap-up. SPEAKERS: Neil Lawrence, University of Sheffield. Manfred Opper, Technical University Berlin. Chris Williams, University of Edinburgh. ORGANIZERS: Cedric Archambeau, University College London. Manfred Opper, Technical University Berlin. John Shawe-Taylor, University College London. PROGRAM COMMITTEE: Cedric Archambeau, University College London. Dan Cornford, Aston University. Manfred Opper, Technical University Berlin. John Shawe-Taylor, University College London. Magnus Rattray, University of Manchester. From soeren.lorenz at uni-bielefeld.de Fri Oct 6 05:57:21 2006 From: soeren.lorenz at uni-bielefeld.de (Soeren Lorenz) Date: Fri, 06 Oct 2006 11:57:21 +0200 Subject: Connectionists: ebook 'Neural Networks as Cybernetic Systems' and more published in Brains, Minds, and Media Message-ID: <45262881.1080503@uni-bielefeld.de> (Apologies for cross-posting - please forward to colleagues) ________________________ eBook NEURAL NETWORKS AS CYBERNETIC SYSTEMS and more ... now available, free online access: http://www.brains-minds-media.org/current Brains, Minds & Media - open access eJournal http://www.brains-minds-media.org ________________________ 1. ebook: NEURAL NETWORKS AS CYBERNETIC SYSTEMS (2nd and revised edition) by Holk Cruse An introduction to the simulation of dynamic systems in biology, combining system theory, biological cybernetics, and the theory of neural networks. Mainly addressed to students of biology, the text is based on illustrations and tries to avoid mathematical formulas as far as possible. http://www.brains-minds-media.org/archive/615 2. Related tool feature: tkCYBERNETICS for the construction and simulation of small cybernetic circuits. Easy to use and well suited to solving the exercises in Holk Cruse's book. http://www.brains-minds-media.org/archive/327 3. Tool article: REEFSOM - A Metaphoric Data Display for Exploratory Data Mining. A visualization tool for SOMs, including supplementary material such as software and videos.
http://www.brains-minds-media.org/archive/305 ----------------------------- CONTINUOUS CALL FOR SUBMISSION 'Brains, Minds & Media' is an open access eJournal and is currently free of page charges. BMM publishes peer-reviewed articles and media from research and education in the neural and cognitive sciences (see http://www.brains-minds-media.org/aims). You are invited to submit contributions to 'Brains, Minds & Media'. More information about manuscript submissions can be found at http://www.brains-minds-media.org/guidelines. Please send your submission to editors at brains-minds-media.org. ----------------------------- If you have any questions, please contact info at brains-minds-media.org. Thank you! Soeren Lorenz (editorial co-ordinator) soeren.lorenz at uni-bielefeld.de From fpereira at cs.cmu.edu Sun Oct 8 20:05:41 2006 From: fpereira at cs.cmu.edu (fpereira@cs.cmu.edu) Date: Sun, 8 Oct 2006 20:05:41 -0400 (EDT) Subject: Connectionists: Call for abstracts: NIPS workshop on New Directions on Decoding Mental States from fMRI Data Message-ID: <49960.70.20.91.222.1160352341.squirrel@webmail.cs.cmu.edu> with our apologies if you receive multiple copies of this through various mailing lists: ------------------------------------------------------- New Directions on Decoding Mental States from fMRI Data (http://www.cs.cmu.edu/~fmri/workshop) ------------------------------------------------------- to be held at NIPS 06 in Whistler, Canada, December 8 or 9 Important dates: - 2 page abstract submission deadline: November 2 - notification of acceptance: early November - workshop date: December 8 or 9 Program Committee: - John-Dylan Haynes (MPI for Human Cognitive and Brain Sciences, Leipzig) - Francisco Pereira (Carnegie Mellon University) - Tom Mitchell (Carnegie Mellon University) Overview: In the past five years, machine learning classifiers have attracted great interest in the field of cognitive neuroscience for the purpose of decoding mental states from observed fMRI data. This work has received considerable attention because it is seen as a way to overcome limitations of more conventional fMRI analysis methods. Whereas conventional fMRI research is focused on spatially localizing cognitive modules, decoding-based research allows for the first time the study of the neural encoding of specific mental contents in the human brain. The recent progress has also raised a number of fundamental questions about the practice of using classifiers for decoding, the interpretation of results, and their implications for theories of cognitive neuroscience. This workshop has the following goals: (1) To give an overview of decoding mental states from fMRI. (2) To present cutting-edge research and to address the fundamental practical challenges. (3) To provide a venue for discussion of the broader questions that may result in an agenda for the field. Scope: We aim to have several cognitive neuroscience researchers give overview talks and introduce members of the NIPS community to the field and its challenges from their perspective. We will also leave ample space for submitted presentations and discussion. At a high level, we are interested in how decoding can help model-building in cognitive neuroscience and, ultimately, help develop theories of neural representation that explain the decoding-identified structure in the fMRI data. At a more technical level, we are considering specific issues such as: - Can classifiers be used as a confirmatory scientific tool for existing theories or hypotheses?
- What are the characteristics of fMRI datasets that affect current machine learning wisdom? - Is it feasible to use nonlinear classifiers or do linear ones suffice? - Is regularization useful and, if so, which form is most appropriate? - Are there feature selection strategies that work in general, or does success depend entirely on the activation structure of each study? - How should having a hypothesis about the structure of activation influence the choices above? What other prior information can be used? - How can decoding be done under dynamic conditions? - Is it feasible to decode multiple superimposed mental states (in space or time)? - How should one perform inference using data: - acquired under different contextual conditions - from multiple subjects - from multiple studies - Are there low-dimensional representations of fMRI data that are better for decoding? - What activation structures can classifiers learn other than location of activation? - How should one attach statistical significance to decoding results or to identified activation structure? We believe fMRI decoding is also of particular interest to machine learning experts. Given the number and type of specific open questions, this is more than just another application domain, and there is a need for machine learning researchers to come up with new methods and creative applications. This workshop is also designed to facilitate their entry into this field and put them in contact with cognitive neuroscientists receptive to their computational expertise and creativity. Submissions: We invite proposals for presentations addressing any of the questions above or other related issues. We welcome presentations of completed work or work-in-progress, as well as papers discussing potential research directions and surveys of recent developments. If you would like to present at the workshop, please send an abstract at most 2 pages long (NIPS format (http://leon.bottou.com/nips), excluding citations, PDF preferred) to fpereira at cs.cmu.edu as soon as possible, and no later than November 2, 2006. We will select presentations and have a final program posted by early November. From Bill at BillHowell.ca Mon Oct 9 13:10:44 2006 From: Bill at BillHowell.ca (Bill Howell. home email. Calgary) Date: Mon, 9 Oct 2006 11:10:44 -0600 Subject: Connectionists: Call for Papers: IJCNN-2007, Celebrating 20 years of neural networks! 12-17Aug07, Orlando Florida Message-ID: <000701c6ebc5$de834f70$ea709344@billhowell> Dear colleagues: The International Joint Conference on Neural Networks (IJCNN) is the premier conference dedicated to the field of neural networks, and 2007 marks the 20th anniversary of this event (WWW.IJCNN2007.ORG). To celebrate this important milestone, the organizers of the conference would like to reflect on past progress and also to inject new energy into the field. In addition to covering all topics in neural network research, IJCNN 2007 will feature world-renowned plenary speakers, state-of-the-art special sessions, moderated panel discussions, pre-conference tutorials, post-conference workshops, regular technical sessions, poster sessions, and social functions. The conference will provide an excellent venue to demonstrate the great promise and the real substance behind recent advances in brain-inspired computation and the understanding of the brain.
Important Dates: 30Nov06 Special and panel session proposals 31Jan07 Paper submissions; Tutorial and workshop proposals 31Mar07 Decision notification 30Apr07 Final Submission IMPORTANT! Selected regular conference papers will be invited to a Special Issue of the journal "Neural Networks", planned for the autumn of 2007. These papers must be substantial enhancements of the IJCNN regular submissions, and they will be additionally peer reviewed. See you in Orlando next August! WWW.IJCNN2007.ORG Mr. Bill Howell IJCNN07 Orlando Publicity Chair http://www.ijcnn2007.org/ Project Manager: Facilities & Special Projects, Natural Resources Canada, Ottawa (on "sabbatical") 1-403-889-6792 Bill at BillHowell.ca Calgary From thomson at neuro.duke.edu Mon Oct 9 18:01:30 2006 From: thomson at neuro.duke.edu (Eric E. Thomson) Date: Mon, 9 Oct 2006 18:01:30 -0400 Subject: Connectionists: NIPS 2006: Decoding the Code. Workshop Announcement/Call for Abstracts References: <116332600C82C04CB4EB555AFA05EABD017F847F@axon.neuro.duke.edu> Message-ID: <116332600C82C04CB4EB555AFA05EABD017F8481@axon.neuro.duke.edu> NIPS 2006 WORKSHOP ANNOUNCEMENT AND CALL FOR ABSTRACTS: DECODING THE NEURAL CODE For the full announcement, see the workshop web site: http://science.ethomson.net/NIPS_workshop.html INVITED SPEAKERS Henry Abarbanel (Physics, UCSD) Andrea Hasenstaub (Neurobiology, Yale) Eric Horvitz (Microsoft Research, Redmond, WA) Pamela Reinagel (Neurobiology, UCSD) Michael Shadlen (Physiology, U Washington) Tatyana Sharpee (Physiology, UCSF) Simon Thorpe (CNRS) WORKSHOP ORGANIZERS William B. Kristan, Jr. (wkristan at ucsd.edu) Terrence J. Sejnowski (terry at salk.edu) Eric Thomson (thomson at neuro.duke.edu) DESCRIPTION There is great interest in sensory coding. Studies of sensory coding typically involve recording from sensory neurons during stimulus presentation, and the investigators determine which aspects of the neuronal response are most informative about the stimulus. These studies are left with a decoding problem: are the discovered codes, sometimes quite exotic, ultimately used by the nervous system to guide behavior? In our one-day workshop, researchers with many different backgrounds will evaluate what we know about neuronal decoders and suggest new strategies, both experimental and computational, for addressing the decoding problem. Each hour, five to six researchers will address a particular question for five minutes, followed by a half-hour discussion. We will also set aside time for a poster session. We tentatively plan to include the following questions, and are soliciting additional questions from our speakers: 1. Which variables that encode stimuli are actually used to guide behavior? 2. What mechanisms do nervous systems use to decode encoded information? 3. Are motor systems better than sensory systems for experimentally addressing decoding? 4. What computational and experimental techniques are needed to address decoding? For instance, should information theory be used to address decoding as well as encoding? 
======================================== Eric Thomson Email: thomson at neuro.duke.edu Lab: http://www.nicolelislab.net Personal: http://ericthomson.net From Arnaud.Tonnelier at inrialpes.fr Wed Oct 11 08:46:52 2006 From: Arnaud.Tonnelier at inrialpes.fr (Arnaud.Tonnelier@inrialpes.fr) Date: Wed, 11 Oct 2006 14:46:52 +0200 Subject: Connectionists: Announcement of Post Doctoral position Message-ID: <1160570812.452ce7bc93e76@listes-serv.inrialpes.fr> Please circulate this announcement of a post-doctoral position: Applications are invited for a post-doctoral position to carry out theoretical and computational research on non-smooth dynamical networks. The research interests currently focus on network dynamics and non-smooth systems. The research work will be related to the following domains: dynamical systems, bifurcation, event-driven simulation, hybrid computation, non-smooth dynamics. We investigate the dynamical properties of networks coming from the modeling of biological systems (neurons, metabolic, population, ...). We are mainly interested in (but not limited to) the numerical simulation of spiking neural networks. We try to understand how local dynamics create emergent properties (synchronization, oscillations, traveling waves). The position is funded for one year. The applicant should be able to perform independent research and should have a strong background in applied mathematics and experience in programming. Applicants should send a curriculum vitae and two letters of recommendation by email to Arnaud Tonnelier (arnaud.tonnelier at inrialpes.fr) and Dominique Martinez (dominique.martinez at loria.fr). The candidate will work at INRIA Montbonnot and/or the university campus in Grenoble. The position is available immediately. From levine at uta.edu Wed Oct 11 19:02:18 2006 From: levine at uta.edu (Levine, Daniel S) Date: Wed, 11 Oct 2006 18:02:18 -0500 Subject: Connectionists: Announcement of conference in Arlington, TX, November 3-4, on Goal-Directed Neural Systems Message-ID: <6D8CAC895F95814FA9CD510290ABA61C07C9F52D@MAILFS1.uta.edu> First Announcement and Call for Participation Conference on Goal-Directed Neural Systems Nedderman Hall, University of Texas at Arlington Friday, November 3, and Saturday, November 4, 2006 Sponsors: Texas Special Interest Group (SIG) of the International Neural Network Society (INNS) Metroplex Institute for Neural Dynamics (MIND) University of Texas at Arlington (UTA) Conference Theme: The conference focuses on both biological and artificial neural networks that are capable of forming autonomous goals and proactively interacting with their environments to pursue those goals. This includes generating hypotheses about the external world, testing these hypotheses by actions on the environment, and perceiving and evaluating the consequences of their actions. The conference is partially a continuation of the Workshop on Intentional Systems after the 2005 International Joint Conference on Neural Networks in Montreal.
Confirmed Speakers: Paul Werbos, National Science Foundation Donald Wunsch, University of Missouri at Rolla Robert Kozma, University of Memphis Leonid Perlovsky, Harvard University Jose Principe, University of Florida Daniel Levine, University of Texas at Arlington Frank Lewis, University of Texas at Arlington Risto Miikkulainen, University of Texas at Austin Gerhard Werner, University of Texas at Austin Ricardo Gutierrez-Osuna, Texas A&M University Yoonsuck Choe, Texas A&M University Derek Harter, Texas A&M University at Commerce Horatiu Voicu, University of Texas Medical Center at Houston Registration and Fees: Conference registration is $90 for employed non-members of INNS or MIND (other than invited speakers); $50 for employed INNS or MIND members; $30 for students who are not MIND or INNS members; $15 for students who are MIND or INNS members. We will attempt to set up on-line registration through a MIND web site under construction. But meanwhile registration can be paid at the conference, or by a check made out to "MIND" and sent to: Professor Daniel S. Levine Department of Psychology 501 South Nedderman Drive University of Texas at Arlington Arlington, TX 76019-0528 Hotel: A block of rooms is available for the nights of Thursday, November 2, through Saturday, November 4, at the Arlington Hilton at 2401 East Lamar Boulevard (about 3 miles northeast of the UTA campus meeting site). The conference rate is $85 a night plus 15% tax and $.90 assessment fee. You can reserve a room at this rate through October 27 by calling the hotel at 817-640-3322 and identifying yourself as an attendee of the Neural Network Conference at UTA. From isabelle at clopinet.com Tue Oct 10 16:18:21 2006 From: isabelle at clopinet.com (Isabelle Guyon) Date: Tue, 10 Oct 2006 13:18:21 -0700 Subject: Connectionists: New game! Message-ID: <452C000D.9060006@clopinet.com> Dear Colleagues, A new challenge has started: ==> AGNOSTIC LEARNING vs. PRIOR KNOWLEDGE <== http://www.agnostic.inf.ethz.ch/ You will get to compete on 5 classification problems for which you are provided with both the raw data and preprocessed data, thus have the opportunity to either embed domain knowledge into your classifier or have a more "black box" approach. ==> Workshops <== The results of the challenge will be discussed at 2 workshops: 1) NIPS 2006 workshop on multi-level inference http://clopinet.com/isabelle/Projects/NIPS2006/ The agnostic learning track of the challenge is used to implement a model selection game. Win one of two prizes (and get to ski at Whistler!) 2) IJCNN 2007 workshop on agnostic learning vs. prior knowledge http://clopinet.com/isabelle/Projects/agnostic/ Other prizes will be awarded (TBA). ==> Publications <== To submit a full length paper on model selection, see the call for papers: http://clopinet.com/isabelle/Projects/modelselect/call-for-papers.html The results of the challenge can be submitted to IJCNN 2007 to be published in the proceedings. The book on feature extraction with the results of the first challenge is now available: http://clopinet.com/fextract-book/ We are looking forward to your participation!
The organizers From isabelle at clopinet.com Tue Oct 10 16:19:41 2006 From: isabelle at clopinet.com (Isabelle Guyon) Date: Tue, 10 Oct 2006 13:19:41 -0700 Subject: Connectionists: NIPS multi-level inference workshop Message-ID: <452C005D.4000702@clopinet.com> Dear colleagues, *** NIPS 2006 Workshop on Multi-Level Inference ***, and *** Model Selection Game *** We are soliciting contributions to the workshop and participation in our model selection game. Two $500 cash awards will be granted. http://clopinet.com/isabelle/Projects/NIPS2006/ The best contributors will be encouraged to submit a paper to the new special topic of JMLR on model selection. http://clopinet.com/isabelle/Projects/modelselect/call-for-papers.html Isabelle Guyon From isabelle at clopinet.com Tue Oct 10 17:55:42 2006 From: isabelle at clopinet.com (Isabelle Guyon) Date: Tue, 10 Oct 2006 14:55:42 -0700 Subject: Connectionists: Feature extraction book, CD, course material Message-ID: <452C16DE.3050600@clopinet.com> Dear colleagues, We have edited a book published by Springer on feature extraction, including: - tutorial chapters on machine learning and feature selection - papers on the best methods of the NIPS 2003 feature selection challenge - papers on new space dimensionality reduction and feature construction methods - a CD with the datasets of the challenge and sample Matlab code Additionally, we make available course material. For information, see: http://clopinet.com/fextract-book/ Isabelle Guyon, Steve Gunn, Masoud Nikravesh, and Lotfi Zadeh From jqc at tuebingen.mpg.de Wed Oct 11 13:15:45 2006 From: jqc at tuebingen.mpg.de (Joaquin Quiñonero Candela) Date: Wed, 11 Oct 2006 19:15:45 +0200 Subject: Connectionists: NIPS*06 Workshop on Learning with Different Input Distributions Message-ID: CALL FOR ABSTRACTS ====================================================== NIPS*06 Workshop - Whistler, BC, December 8-9 2006 "Learning when Test and Training Inputs Have Different Distributions" http://ida.first.fraunhofer.de/projects/different06 ====================================================== Call for contributions: --- We invite submissions of extended abstracts (1 to 4 pages long). A selection of the submitted abstracts will be accepted as oral presentations. We will also accept a few abstracts for poster presentation. Oral presenters are strongly encouraged to additionally prepare a poster. The best abstracts will be considered for extended versions for the workshop proceedings. (Please see the workshop website for information about how to submit.) Important Dates: --- . deadline for submissions: November 8, 2006 . notification of acceptance: November 15, 2006 Program Committee: --- . Tony O'Hagan (University of Sheffield) . Bernhard Schoelkopf (Max Planck Institute for Biological Cybernetics) . Thorsten Joachims (Cornell University) Background: --- Many machine learning algorithms assume that the training and the test data are drawn from the same distribution. Indeed many of the proofs of statistical consistency, etc., rely on this assumption. However, in practice we are very often faced with the situation where the training and the test data both follow the same conditional distribution, p(y|x), but the input distributions, p(x), differ. For example, principles of experimental design dictate that training data is acquired in a specific manner that bears little resemblance to the way the test inputs may later be generated.
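[Editorial illustration.] As a concrete sketch of this covariate-shift setting, and of the re-weighting idea mentioned below, here is a minimal Python example using NumPy and scikit-learn. The density-ratio estimate via a logistic discriminator between training and test inputs, and all sizes and constants, are illustrative assumptions, not the method prescribed by the workshop or the associated challenge.

    # Minimal sketch of importance-weighted learning under covariate shift.
    # Assumption: the density ratio p_test(x)/p_train(x) is estimated with a
    # logistic-regression discriminator; this is one common heuristic only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, Ridge

    rng = np.random.default_rng(0)
    # Training inputs centred at -1, test inputs centred at +1; same p(y|x).
    X_train = rng.normal(-1.0, 1.0, size=(200, 1))
    X_test = rng.normal(1.0, 1.0, size=(200, 1))
    f = lambda X: np.sin(X).ravel()
    y_train = f(X_train) + 0.1 * rng.standard_normal(len(X_train))

    # Discriminate test vs. training inputs to estimate importance weights
    # w(x) proportional to p_test(x)/p_train(x) = P(test|x)/P(train|x).
    X_all = np.vstack([X_train, X_test])
    z = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    disc = LogisticRegression().fit(X_all, z)
    p_test_given_x = disc.predict_proba(X_train)[:, 1]
    weights = p_test_given_x / (1.0 - p_test_given_x)

    # Fit the same model with and without importance weights.
    plain = Ridge().fit(X_train, y_train)
    weighted = Ridge().fit(X_train, y_train, sample_weight=weights)
    print("plain test MSE   :", np.mean((plain.predict(X_test) - f(X_test)) ** 2))
    print("weighted test MSE:", np.mean((weighted.predict(X_test) - f(X_test)) ** 2))

Whether the weighted fit actually does better depends on the model class and the amount of shift, which is exactly the trade-off the workshop discussion below is about.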
The open question is what to do when training and test inputs have different distributions. In statistics the inputs are often treated as ancillary variables. Therefore even when the test inputs come from a different distribution than the training, a statistician would continue doing ``business as usual''. Since the conditional distribution p(y|x) is the only one being modelled, the input distribution is simply irrelevant. In contrast, in machine learning the different test input distribution is often explicitly taken into account. An example is semi-supervised learning, where the unlabeled inputs can be used for learning. These unlabeled inputs can of course be the test inputs. Additionally, it has recently been proposed to re-weight the training examples that fall in areas of high test input density for learning (Sugiyama and Mueller, 2005). Transductive learning, which concentrates the modelling at the test inputs, and the problem of unbalanced class labels in classification, particularly where this imbalance is different in the training and in the test sets, are also intimately related to the topic of this workshop. It does not seem to be completely clear whether the benefits of explicitly accounting for the difference between training and test input distributions outweigh the potential dangers. By focusing more on the training examples in areas of high test input density, one is effectively throwing away training data. Semi-supervised learning on the other hand is very dependent on certain prior assumptions being true, such as the cluster assumption for classification. The aim of this workshop will be to try to shed light on the kind of situations where explicitly addressing the difference in the input distributions is beneficial, and on what the most sensible ways of doing this are. Organizers: --- . Joaquin Quinonero Candela (Technical University of Berlin) . Neil D. Lawrence (University of Sheffield) . Anton Schwaighofer (Fraunhofer FIRST.IDA) . Masashi Sugiyama (Tokyo Institute of Technology) From kilian at gmail.com Sat Oct 7 11:21:29 2006 From: kilian at gmail.com (Kilian Weinberger) Date: Sat, 07 Oct 2006 08:21:29 -0700 Subject: Connectionists: [Invitation] Workshop on Novel Applications of Dimensionality Reduction @ Fri Dec 8 - Fri Dec 8, 2006 (1 day) Message-ID: connectionists at cs.cmu.edu, you are invited to Title: Workshop on Novel Applications of Dimensionality Reduction Time: Fri Dec 8 - Fri Dec 8, 2006 (1 day) (Eastern Time) Where: Whistler, CANADA Description: ******************************************************************* Call For Papers Novel Applications of Dimensionality Reduction http://www.seas.upenn.edu/~kilianw/nldrworkshop06 Workshop held at the 20th Annual Conference on Neural Information Processing Systems (NIPS 2006) Whistler, CANADA: December 8, 2006 ******************************************************************** Dimensionality reduction is an important research topic in machine learning that is motivated by several needs, including data visualization and representation, discovering meaningful underlying structures, reducing computational complexity, and improving accuracy by avoiding overfitting due to data sparsity. In the past few years, significant progress has been made in this area, including development of novel algorithms for nonlinear dimensionality reduction (Isomap, locally linear embedding, local tangent space alignment, maximum variance unfolding, etc.)
and supervised dimensionality reduction (neighborhood components analysis, max-margin matrix factorization, support vector decomposition, etc.) that have taken significant steps toward overcoming deficiencies of traditional (linear) methods like PCA and Fisher's LDA. At the same time, there is a growing interest in applying such novel dimensionality reduction techniques to a wide range of practical domains, including robotics, image-driven navigation and localization, neuroscience, biomedical imaging, face recognition, bioinformatics and natural language processing, just to name a few. Typically, such applications have high-dimensional state- and action-spaces. Discovering low-dimensional structures in such spaces can improve our understanding of the domain, as well as the efficiency of learning and decision-making. Given these developments, we believe it is time to revisit the dimensionality reduction techniques from the point of view of various practical applications and their specific goals. The objective of this workshop is to understand how to match the capabilities of new nonlinear and supervised dimensionality reduction approaches with practical applications in science, engineering, and technology. We hope to achieve this by bringing together researchers who develop these techniques and those who apply them. A successful workshop will lead to new directions for application-oriented dimensionality-reduction research and ignite cross-fertilization between different application domains. This workshop will address the following questions: - Can we characterize which application domains are amenable to nonlinear and/or supervised dimensionality reduction? - Can we characterize which methods are best suited for specific application domains? - For a given application domain, what properties of the data must be preserved in the low-dimensional representation? - How should the application goal (such as prediction or decision-making) be used to influence the dimensionality reduction process? Suggested Topics ================= We would welcome submissions on applications of dimensionality reduction (particularly, novel nonlinear and/or supervised methods) to various practical domains, including (but not limited to) those mentioned above. We encourage case studies where such approaches improve the results (such as accuracy or performance) or lead to better understanding of the underlying problem structure. We would also encourage analyses and comparisons of various approaches for specific application scenarios. Format ======= We are proposing a one-day workshop. We are planning on having one tutorial, 4 invited talks and shorter contributions from researchers in industry and academia as well as a panel discussion. Each presentation will be followed by 10-15 minutes of discussion on the aspects detailed earlier in the overview. We will hold a poster session if we receive a sufficient number of good submissions. The workshop is intended to be accessible to the broader NIPS community and to encourage communication between different fields. Submission Instructions ======================== We invite submissions of extended abstracts (up to 2 pages, not including bibliography) for the short contributed talks and/or posters. The submission should present a high-level description of recent or ongoing work related to the topics above. We will explore the possibility of publishing papers based on invited and submitted talks in a special issue of an appropriate journal.
Email submissions to nips06workshop at watson.ibm.com as attachments in Postscript or PDF, no later than November 3, 2006. Information ============ Workshop URL: http://www.seas.upenn.edu/~kilianw/nldrworkshop06 Submission: nips06workshop at watson.ibm.com NIPS: http://www.nips.cc Dates & Deadlines ================== November 3: Abstract Submission November 8: Acceptance Notification Organizing Committee ===================== John Blitzer University of Pennsylvania, USA Rajarshi Das IBM T. J. Watson Research Lab, USA Irina Rish IBM T. J. Watson Research Lab, USA Kilian Weinberger (Chair) University of Pennsylvania, USA Invited Speakers ================= TBA You can view this event at http://www.google.com/calendar/event?action=VIEW&eid=aTNzMmQ1MXE2aTA4MjVpMTBmdDkya2ZhZTAgY29ubmVjdGlvbmlzdHNAY3MuY211LmVkdQ&tok=MTYja2lsaWFuQGdtYWlsLmNvbWNmMDNkNWZjMmE5ZWYzMDcwMjZhZWYxZjhiNmJiOTFjNmVkYWRhNGE&ctz=America%2FNew_York&hl=en_US From dayan at gatsby.ucl.ac.uk Fri Oct 13 08:28:57 2006 From: dayan at gatsby.ucl.ac.uk (Peter Dayan) Date: Fri, 13 Oct 2006 13:28:57 +0100 Subject: Connectionists: Gatsby PhD Programme Message-ID: <20061013122857.GA10331@flies.gatsby.ucl.ac.uk> Gatsby Computational Neuroscience Unit 4 year PhD Programme The Gatsby Unit is a world-class centre for theoretical neuroscience and machine learning, focusing on unsupervised, semi-supervised and reinforcement learning, neural dynamics, population coding, Bayesian and nonparametric statistics and applications of these to the analysis of perceptual processing, neural data, natural language processing, machine vision and bioinformatics. It provides a unique opportunity for a critical mass of theoreticians to interact closely with each other, and with other world-class research groups in related departments at University College London, including Anatomy, Computer Science, Functional Imaging Laboratory, Physics, Physiology, Psychology, Neurology, Ophthalmology and Statistics, with the nascent cross-faculty Centre for Computational Statistics and Machine Learning, and also with other Universities, notably Cambridge. The Unit always has openings for exceptional PhD candidates. Applicants should have a strong analytical background, a keen interest in neuroscience and/or machine learning and a relevant first degree, for example in Computer Science, Engineering, Mathematics, Neuroscience, Physics, Psychology or Statistics. The PhD programme lasts four years, including a first year of intensive instruction in techniques and research in theoretical neuroscience and machine learning. It is described at http://www.gatsby.ucl.ac.uk/teaching/phd/ A number of competitive fully-funded studentships are available each year (to students of any nationality) and the Unit also welcomes students with pre-secured funding or with other scholarship/studentship applications in progress. In the first instance, applicants are encouraged to apply informally by sending, in pdf, plain text or Word format, a CV, a statement of research interests, and full contact details (including e-mail addresses) for three academic referees to: admissions at gatsby.ucl.ac.uk. General enquiries should also be directed to this e-mail address. For further details of research interests please see: http://www.gatsby.ucl.ac.uk/research.html Applications for 2007 entry (commencing late September 2007) should be received no later than 14 January 2007. 
From siwu at sussex.ac.uk Fri Oct 13 11:38:29 2006 From: siwu at sussex.ac.uk (Si Wu) Date: Fri, 13 Oct 2006 16:38:29 +0100 (BST) Subject: Connectionists: NIPS workshop on Continuous Attractor Neural Networks Message-ID: Call for Participation: NIPS06 Workshop on Continuous Attractor Neural Networks Whistler, Canada, December 8th or 9th, 2006 [The deadline for abstract submission is Nov. 10, 2006] Aim & Scope: Continuous attractor neural networks (CANNs) are a special type of recurrent networks that have been studied in many neuro- and cognitive science areas such as modelling hypercolumns, movement generation, spatial navigation, working memory, population coding, attention, saccade initiation and decision making. They have also been applied to engineering problems such as robot control. Such neural field models of the Wilson-Cowan-Amari type, or bump models, are a fundamental type of neural circuitry that underlies the general mechanisms for neural systems encoding continuous stimuli and categorizing objects. The goal of the workshop is to bring together researchers from diverse areas to solidify the existing research on CANNs, identify important issues that need to be solved, and explore their potential applications to artificial learning systems. The issues (not exclusive) to be covered in the workshop include: 1. CANNs in different cortical areas and their functional meanings. 2. Why CANNs for neural information processing 3. Mathematical properties and different models of CANNs 4. How to learn CANNs in a natural noisy environment 5. The neural and behavioral signatures of CANNs 6. CANNs for robotics and complex object representation Program: This one-day workshop will combine invited/survey talks, short contributed talks, posters, and panel discussions. The survey talks are intended to provide a thorough background and overview of the field from a number of established scientists who have notable contributions in the field. The short contributed talks will be solicited through public calls, and participants will be asked to submit a summary of two to four pages in length outlining their research as it relates to the workshop theme. Papers will be accepted based on their quality and novel contributions to CANNs. All participants, including the ones giving talks, will additionally have the opportunity to present posters to further stimulate discussions between participants. Post-workshop: Selected participants are encouraged to submit extended papers after the workshop that will be peer reviewed. High-quality papers, in addition to the solicited reviews, will be published in an edited book by Springer. Invited speakers (tentative, more to be added) . Shun-ichi Amari (RIKEN) . Mark Goldman (Wellesley) . Hugh Wilson (York) . Carson Chow (NIH) . Kechen Zhang (Johns Hopkins) SUBMISSION PROCEDURE: Authors should submit an extended abstract in 2-4 pages to siwu at sussex.ac.uk by November 10, 2006. We will send an email confirming receipt of the submission. The authors of the accepted abstracts will be allocated as talks or poster highlights.
IMPORTANT DATES: Oct 13: Workshop announcement and Call for Abstracts Nov 10: Abstract submission deadline Nov 15: Notification of acceptance Dec 8 or 9: Workshop ORGANISERS: Si Wu: Dept of Informatics, Sussex University, UK Thomas Trappenberg: Dept of Computer Science, Dalhousie University, Canada For questions/suggestions about the workshop, please feel free to contact siwu at sussex.ac.uk Please refer to the workshop website: http://www.informatics.sussex.ac.uk/users/siwu/nips-workshop/CANN.html for up-to-date information. Dr. Si Wu Senior Lecturer in Neural Computation Lab for Bioinformatics and Machine Learning Sussex University, UK From knorman at Princeton.EDU Fri Oct 13 16:47:23 2006 From: knorman at Princeton.EDU (Ken Norman) Date: Fri, 13 Oct 2006 16:47:23 -0400 Subject: Connectionists: Postdoctoral position: Pattern classification of neuroimaging data Message-ID: <452FFB5B.9060503@princeton.edu> dear connectionists, i have an open post-doc position in my lab: see advertisement below. best! ken ken norman assistant professor of psychology princeton university knorman at princeton.edu RESEARCH ASSOCIATE, PRINCETON UNIVERSITY. A postdoctoral position is available in the Psychology Department at Princeton University. The postdoc will be involved in developing and refining new, multivariate methods for decoding cognitive states based on neuroimaging data. REQUIREMENTS: completed Ph.D., with a strong quantitative background (e.g., in Physics, Applied Math, or Computer Science) and prior experience with neuroimaging data analysis. Initial appointment for one year with possibility of renewal. For more information about Dr. Norman's Computational Memory Lab, please visit http://compmem.princeton.edu. Submit CVs to SEARCH CJA/KN, Department of Psychology, Green Hall, Princeton University, Princeton, NJ 08544-1010 or by email to agans at princeton.edu. For information about applying online to Princeton, please link to http://web.princeton.edu/sites/dof/ApplicantsInfo.htm. PU/EO/AAE. From maass at igi.tugraz.at Sat Oct 14 02:42:21 2006 From: maass at igi.tugraz.at (Wolfgang Maass) Date: Sat, 14 Oct 2006 08:42:21 +0200 Subject: Connectionists: NIPS-Workshop on Echo State Networks and Liquid State Machines Message-ID: <453086CD.3050806@igi.tugraz.at> You are invited to participate in the Workshop on Echo State Networks and Liquid State Machines at NIPS 2006 http://www.nips.cc/Conferences/2006/ (the workshop will take place on Dec. 8 or 9). There will also be an opportunity to present a poster at this workshop; send an abstract or full paper by Nov. 15 to Daniela Potzinger. Organizers: Dr. Herbert Jaeger, International University Bremen Dr. Wolfgang Maass, Technische Universitaet Graz Dr. Jose C. Principe, University of Florida. Motivation, Goals and Details of this Workshop: A new approach to analyzing and training recurrent neural networks (RNNs) has emerged over the last few years. The central idea is to regard a sparsely connected recurrent circuit as a nonlinear, excitable medium, which is driven by input signals (possibly in conjunction with feedbacks from readouts). This recurrent circuit is --like a kernel in Support Vector Machine applications-- not adapted during learning. Rather, very simple (typically linear) readouts are trained to extract desired output signals. Despite its simplicity, it was recently shown that such simple networks have (in combination with feedback from readouts) universal computational power, both for digital and for analog computation.
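[Editorial illustration.] To make the reservoir-plus-linear-readout idea just described concrete, here is a minimal echo state network sketch in Python with NumPy. The reservoir size, spectral-radius scaling, delay task and ridge readout are illustrative assumptions, not the specific formulations used by the workshop organizers.

    # Minimal echo state network sketch: a fixed random recurrent reservoir
    # driven by an input signal, with only a linear readout trained.
    # Sizes, scaling constants and the task are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_res, T, washout = 200, 2000, 100

    # Input: a random signal; target: a delayed copy (a simple memory task).
    u = rng.uniform(-0.5, 0.5, T)
    y_target = np.roll(u, 5)

    # Fixed random reservoir, rescaled so its spectral radius is below 1
    # (a standard heuristic aimed at the "echo state" property).
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-1, 1, n_res)

    # Run the reservoir; the recurrent weights are never trained.
    x = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])
        states[t] = x

    # Train only the linear readout by ridge regression on the collected states.
    S, y = states[washout:], y_target[washout:]
    W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
    print("readout training MSE:", np.mean((S @ W_out - y) ** 2))

The point of the sketch is the division of labour: the recurrent "medium" is fixed, and all learning happens in the cheap linear readout.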
There are currently two main flavours of such networks. Echo state networks were developed from a mathematical and engineering background and are composed of simple sigmoid units, updated in discrete time. Liquid state machines were conceived from a mathematical and computational neuroscience perspective and usually are made of biologically more plausible, spiking neurons with continuous-time dynamics. These approaches have quickly gained popularity because of their simplicity, expressiveness, and ease of training. In addition, they provide a new perspective for modeling cortical computation that differs in several aspects from previous models. Generic cortical microcircuits are seen from this perspective as explicit implementations of kernels (in the sense of SVMs), raising the question of how such explicit kernels can be optimized by unsupervised learning procedures for particular input statistics and a particular range of computational tasks. Quite a number of researchers have started to work on this approach, and a first special issue of a journal (Neural Networks) dedicated to this topic is currently being assembled. Furthermore, results of neurobiological experiments that test predictions of this approach have just been completed, and further experiments are currently in the planning stage. The goals of this workshop are to -- provide a resume of the current state of knowledge, in particular regarding theory and the results of first experimental tests of its predictions in neuroscience -- discuss consequences of this approach for computational and theoretical neuroscience -- guide future research by working out the essential open problems -- encourage new applications (e.g. in reinforcement learning, speech processing, handwriting recognition, reading, auditory processing). The target audience consists of neuroscientists, cognitive scientists, theoreticians, neural network researchers, and engineers. ---------------------------------------------------------------------- The workshop will begin with 3 mini-tutorials (20 minutes each) on -- Theory of ESNs and LSMs -- Resulting perspectives for neuroscience research -- How to design a reservoir or liquid for particular tasks. The rest of the morning session and the first part of the afternoon session will be devoted to presentations of the most exciting new research results in this area (format: 10-12 talks of lengths between 15 and 25 minutes, each followed by 5-10 minutes of discussion). The last 60 minutes of the workshop will be devoted to a discussion of open problems, and resulting new strategies for experimental planning and data-analysis in neuroscience. -- Prof. Dr. Wolfgang Maass Institut fuer Grundlagen der Informationsverarbeitung Technische Universitaet Graz Inffeldgasse 16b, A-8010 Graz, Austria Tel.: ++43/316/873-5811 Fax ++43/316/873-5805 http://www.igi.tugraz.at/maass/Welcome.html From steve at cns.bu.edu Sun Oct 15 15:49:43 2006 From: steve at cns.bu.edu (Stephen Grossberg) Date: Sun, 15 Oct 2006 15:49:43 -0400 Subject: Connectionists: Postdoc for cognitive neuroimaging and neural modeling of learning Message-ID: A postdoctoral fellow is sought to study learning in both healthy subjects and individuals with autism. The fellow is expected to design and run cognitive neuroimaging experiments, and to collaborate on further developing neural models of autism. The ideal candidate should thus have strong backgrounds in experimental and computational cognitive neuroscience.
The fellow will work with colleagues in the Department of Cognitive and Neural Systems (http://www.cns.bu.edu) and the Center of Excellence for Learning in Education, Science, and Technology (CELEST: http://cns.bu.edu/celest) at Boston University (BU), together with the Autism Research Center of Excellence (http://www.bu.edu/anatneuro/dcn/autism/staart.htm) and the Center for Biomedical Imaging (www.bumc.bu.edu/mri) at the BU School of Medicine. This new research project offers a major training opportunity in an exciting interdisciplinary environment, with access to cutting-edge computational and MR imaging facilities. Boston University is an Equal Opportunity Affirmative Action Employer. Interested candidates should send their CV, letter of intent, and 3 recommendation letters to Prof. Stephen Grossberg at: NSF-NIH CELEST postdoctoral search Department of Cognitive and Neural Systems 677 Beacon Street Room 203 Boston University Boston MA 02215 From Christopher.Bishop at microsoft.com Sat Oct 14 09:38:18 2006 From: Christopher.Bishop at microsoft.com (Christopher Bishop) Date: Sat, 14 Oct 2006 14:38:18 +0100 Subject: Connectionists: New book: Pattern Recognition and Machine Learning Message-ID: <7FD4B614CDEEAE4F8038EF51D6A70B7D016F29DB@eur-msg-110.europe.corp.microsoft.com> --- New Book --- Pattern Recognition and Machine Learning Christopher M. Bishop * Springer (2006) * 738 pages * Full colour * 431 exercises Full details, including sample chapter, at: http://research.microsoft.com/~cmbishop/PRML This completely new textbook provides a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners. No previous knowledge of pattern recognition or machine learning concepts is assumed. This is the first machine learning textbook to include comprehensive coverage of recent developments such as probabilistic graphical models and deterministic inference methods, and to emphasize a modern Bayesian perspective. The book is suitable for courses on machine learning, statistics, computer science, signal processing, computer vision, data mining, and bioinformatics, and extensive support is provided for course instructors. --- ooo --- From jst at ecs.soton.ac.uk Mon Oct 16 12:43:30 2006 From: jst at ecs.soton.ac.uk (John Shawe-Taylor) Date: Mon, 16 Oct 2006 17:43:30 +0100 Subject: Connectionists: Academic Recruitment Message-ID: <4533B6B2.4000107@ecs.soton.ac.uk> UCL DEPARTMENT OF COMPUTER SCIENCE Lecturer/Senior Lecturer/Reader Computational Statistics and Machine Learning We are looking for world-class research talent to join us. We are specifically recruiting to a permanent faculty position in Computational Statistics and Machine Learning, an area in which we have a strong established research group linking with the Department of Statistical Science and the Gatsby Computational Neuroscience Unit. We are open to all types of principled approaches to analyzing and designing adaptive systems. We have a commitment to experimental research and to UCL's tradition of interdisciplinary research. Candidates should be interested in innovative and challenging teaching at both the core and edges of computer science. Starting salary for a Lecturer will be between £31,321 - £41,244 and for a Senior Lecturer or Reader £44,893 - £48,767 according to experience.
You can find out more about us at http://www.cs.ucl.ac.uk. Further details of the vacancy and the application procedure can be found at http://www.cs.ucl.ac.uk/vacancies. For informal enquiries please contact Anthony Finkelstein at a.finkelstein at cs.ucl.ac.uk or John Shawe-Taylor at jst at cs.ucl.ac.uk. The closing date for applications is Wednesday, 8th November 2006. From jqc at tuebingen.mpg.de Mon Oct 16 01:27:36 2006 From: jqc at tuebingen.mpg.de (Joaquin Quiñonero Candela) Date: Mon, 16 Oct 2006 07:27:36 +0200 Subject: Connectionists: Competition: Learning when Training and Test Inputs Have Different Distributions Message-ID: [Apologies if you receive this message more than once] We are glad to announce the Pascal Challenge "Learning when test and training inputs have different distributions" http://different.kyb.tuebingen.mpg.de organized by Joaquin Quinonero Candela, Anton Schwaighofer and Neil Lawrence. The goal of this challenge is to attract the attention of the Machine Learning community towards the problem where the input distributions, p(x), are different for test and training inputs. A number of regression and classification tasks (some of them artificial, some of them real-world) are proposed, where the test inputs follow a different distribution than the training inputs. Training data (input-output pairs) are given, and the contestants are asked to predict the outputs associated with a set of validation and test inputs. Many more details are to be found at the website of the competition. This competition will be part of a NIPS 2006 Workshop on the same topic, http://ida.first.fraunhofer.de/projects/different06/ Joaquin Quinonero Candela, TU Berlin and Fraunhofer FIRST Anton Schwaighofer, Fraunhofer FIRST Neil Lawrence, University of Sheffield From smyth at ics.uci.edu Mon Oct 16 18:24:47 2006 From: smyth at ics.uci.edu (Padhraic Smyth) Date: Mon, 16 Oct 2006 15:24:47 -0700 Subject: Connectionists: postdoctoral position in Machine Learning at UC Irvine Message-ID: <9d909ca60610161524w2c34282dg974ebb83a12c05a9@mail.gmail.com> Postdoctoral Research Position in Machine Learning Department of Computer Science University of California, Irvine A full-time postdoctoral scholar position is available in Padhraic Smyth's research group at UC Irvine. The successful candidate will carry out basic research on statistical learning algorithms for latent (hidden) variable models with potential applications in a number of areas such as text analysis, geoscience, biology, and medicine. There will also be opportunities for interaction with PhD students and other machine learning faculty at UC Irvine, such as Max Welling, Pierre Baldi, and Eric Mjolsness. Applicants should have a PhD in computer science, electrical engineering, mathematics, statistics, or a closely related discipline. The UC Irvine campus is located 3 miles from the Pacific Ocean in the warm sunny climate of Southern California - the local area offers a wide variety of cultural and outdoor opportunities. Further details, including application instructions, are available at http://www.ics.uci.edu/employment/employ_research.php The University of California, Irvine, is an Equal Opportunity Employer, committed to excellence through diversity.
From dwang at cse.ohio-state.edu Tue Oct 17 15:27:00 2006 From: dwang at cse.ohio-state.edu (DeLiang Wang) Date: Tue, 17 Oct 2006 15:27:00 -0400 Subject: Connectionists: New CASA book and promotion In-Reply-To: <4533B6B2.4000107@ecs.soton.ac.uk> References: <4533B6B2.4000107@ecs.soton.ac.uk> Message-ID: <45352E84.8030507@cse.ohio-state.edu> Dear List, We are pleased to announce that Computational Auditory Scene Analysis: Principles, Algorithms, and Applications has just been published by Wiley jointly with IEEE Press. This 10-chapter book is edited by DeLiang Wang and Guy Brown and provides a coherent, comprehensive, and up-to-date introduction to the emerging field of computational auditory scene analysis. We have negotiated with the publisher to provide a 20% new book discount until January 31, 2007. The hardback is priced at $89.95, and with the discount the price comes down to $71.96. The discount is available through Wiley's website at www.wiley.com/ieee. The promotion code is 'CASA1'. If you have problems obtaining the promotion price please contact Maria Corpuz at 'mcorpuz at wiley.com' (phone: 201-748-7668). We have also created a companion website for the book at http://www.casabook.org. This evolving website contains CASA resources including sound demos, evaluation corpora, and program code. We hope you enjoy the book. DeLiang Wang, Ohio State University Guy Brown, University of Sheffield From jirsa at ccs.fau.edu Tue Oct 17 20:49:55 2006 From: jirsa at ccs.fau.edu (Viktor Jirsa) Date: Tue, 17 Oct 2006 20:49:55 -0400 Subject: Connectionists: Tenure Track Faculty Recruitment, Florida Atlantic University Message-ID: <45357A33.8010009@ccs.fau.edu> Dear Colleagues, please note the following faculty search. One major focus of the search will be computational neuroscience. Best wishes, Viktor Jirsa --------------------------- Florida Atlantic University, Tenure-Track Assistant Professor in Physics position. The Department of Physics at FAU invites applications for a tenure track position at the Assistant Professor level to begin in Aug 2007. The Department of Physics is growing rapidly and we are seeking highly qualified candidates committed to a career in research and teaching. We are particularly interested in building upon our experimental and theoretical strengths in biophysics and neuroscience. These Departmental foci overlap with ongoing research thrusts in the Charles E. Schmidt College of Science (http://www.science.fau.edu/). However, we will also consider applications from exceptional candidates in our department's other research areas in Condensed Matter Physics, General Relativity, Astrophysics and Quantum Optics. The successful candidates will be expected to teach, conduct a vigorous research program, supervise Ph.D. students and contribute to the development of the current research initiatives within the Physics Department. Women and minorities are especially encouraged to apply. Candidates must apply by letter and include curriculum vitae, list of publications, a statement of research interests, and arrange for at least three letters of recommendation. The package should be sent to Chair of the Search Committee, Department of Physics, Florida Atlantic University, Boca Raton, FL 33431-0991. Material received by January 10, 2007 will receive full consideration. For further details e-mail to physics at fau.edu or visit our Web page (http://physics.fau.edu). Positions are subject to anticipated availability of funds.
FAU is an equal opportunity, equal access institution. From georgios.theocharous at intel.com Wed Oct 18 14:21:21 2006 From: georgios.theocharous at intel.com (Theocharous, Georgios) Date: Wed, 18 Oct 2006 11:21:21 -0700 Subject: Connectionists: NIPS 2006 workshop on User Adaptive Systems Message-ID: Call for participation: User Adaptive Systems NIPS 2006 workshop Whistler Canada: December 8th, 2006 (The deadline for abstract submission is Nov. 15, 2006) http://www.ece.mcgill.ca/~shie/UserAdaptiveSystems.html Overview: Systems that adapt to their users have the potential to tailor the system behavior to the specific needs and preferences of their users. The purpose of this workshop is to bring together researchers from academia and industry to summarize previous work; evaluate the need for user adaptive systems; and discuss the main difficulties that arise in designing and implementing such systems. The workshop will allow people working on different types of user adaptive systems to exchange ideas and to learn from each other's experience. The interaction with human users is at the core of many important systems. Improving this interaction and adapting to the specific user needs and preferences may result in better computing on several levels. It can lead to more usable and friendlier interfaces; improved performance as perceived by the user; adequate prioritization of tasks; and many other advantages. The range of applications is vast: health care for the elderly, determining user satisfaction for PCs, adaptive power management of laptops, improving the driving experience, and better personalization in online shops to name a few. It is our belief that the relevance of machine learning as a field will be measured by its effect on modern technology. Adapting the behavior of a system to its user is a need that arises in a diverse range of applications. The workshop will focus on the methodology of user adaptive systems; it will explore the current state-of-the-art, and will offer a place for researchers from academia and industry to exchange ideas and formulate common goals. Workshop Goals: User adaptive systems learn and monitor user activity. These systems take actions based on user activity, explicit feedback, and implicit feedback signals. The goal of the workshop is to summarize the state-of-the-art in user adaptive systems. At the end of this workshop we would like to: 1. Summarize previous work on topics such as user monitoring, activity inference and preference elicitation. 2. Evaluate the need for such systems. 3. Discuss the main difficulties such as learning user activity models, computing control policies using the user activity models, multi-constraint optimization, evaluating control policies, etc. Schedule: The morning session will consist of several invited lectures by prominent researchers in academia and industry. Confirmed speakers are: 1. Tanzeem Choudhury 2. Thomas Dietterich 3. Samuel Kaski The afternoon session will be shared by a panel discussion and by several short presentations. We solicit 15-20 minute presentations that: 1. Describe a system that uses machine learning techniques to adapt to a user. 2. Describe a challenging domain involving adaptation to human users. 3. Highlight a particular aspect of user adaptive systems that warrants further discussion. Please send a one-page description of your presentation to shie.mannor AT mcgill.ca by November 15.
Organizers: Shie Mannor (McGill University) Georgios Theocharous (Intel) From bhanu.prasad at famu.edu Wed Oct 18 13:06:32 2006 From: bhanu.prasad at famu.edu (Dr. Bhanu Prasad) Date: Wed, 18 Oct 2006 13:06:32 -0400 Subject: Connectionists: Call for papers on neural networks Message-ID: <000901c6f2d7$c2c775e0$d815dfa8@newcis.cis.famu.edu> A special session/workshop on Neural Networks is planned as part of the 2007 International Conference on Artificial Intelligence and Pattern Recognition (AIPR-07) to be held in Orlando, FL, USA during July 9-12, 2007. This session focuses on the theory and applications of neural networks. We invite paper submissions. While submitting a paper, please indicate "neural network session at AIPR-07" in the subject line of your email. See http://www.promoteresearch.org for more details on the conference and submission instructions. Bhanu Prasad Organizing committee member From Dominique.Martinez at loria.fr Thu Oct 19 02:12:49 2006 From: Dominique.Martinez at loria.fr (Dominique.Martinez@loria.fr) Date: Thu, 19 Oct 2006 08:12:49 +0200 Subject: Connectionists: Postdoctoral position in computational olfaction at LORIA-INRIA Message-ID: <1161238369.453717614b089@www.loria.fr> LORIA-INRIA in Nancy, France is a research unit in computer science. Part of the group activity focuses on computational neuroscience and in particular on spiking neural network models for sensory processing. A one year postdoctoral position is offered to study the role of inhibition in shaping spike synchronization and field potential oscillations in early olfactory systems (insects' antennal lobe, vertebrates' olfactory bulb) by means of computational modelling. This research is part of the European project GOSPEL (General Olfaction and Sensing Projects on a European Level: http://www.gospel-network.org). The ideal candidate should be familiar with spiking neurons (integrate-and-fire models) and should have skills in computer science and applied mathematics. Salary will be based upon the candidate's experience. This position is available immediately to qualified applicants. Please email CV to Dominique Martinez at dmartine at loria.fr Dominique Martinez LORIA - BP 239 - 54506 Vandoeuvre-lès-Nancy, France http://www.loria.fr/~dmartine From sethu.vijayakumar at ed.ac.uk Thu Oct 19 08:41:21 2006 From: sethu.vijayakumar at ed.ac.uk (Sethu Vijayakumar) Date: Thu, 19 Oct 2006 13:41:21 +0100 Subject: Connectionists: Postdoctoral Fellowship in Probabilistic Machine Learning and Robotics in Edinburgh, UK Message-ID: <45377271.4040105@ed.ac.uk> UNIVERSITY OF EDINBURGH SCHOOL OF INFORMATICS Postdoctoral Research Fellow in Probabilistic Machine Learning and Robotics Applications are invited for a Postdoctoral Research Fellow in the area of Probabilistic Machine Learning and Motor Control as part of an EU-IST FP6 funded project. The post is available from Jan. 2007 for a maximum of 36 months and located in the School of Informatics at the University of Edinburgh. Salary is on the UE07 scale (GBP 26,402 - 34,448) and will depend upon qualifications and experience. The postdoctoral fellow will be responsible for developing statistical machine learning techniques for automatically extracting sensorimotor contingencies from dynamic and kinematic movement data (collected from robots and human experiments) and using this to learn latent model representations of various contexts and tasks.
This will involve basic research in the fields of nonparametric regression, Bayesian model selection and latent model learning as well as applied areas of adaptive motor control and robotics. Candidates are expected to have a PhD in the area of probabilistic machine learning and/or adaptive motor control and strong mathematical skills in the area of optimization, algebra and probability theory in addition to strong programming skills in C, C++, MATLAB or equivalent. Some experience with real-world robotic systems and motor control is a plus. (Candidates in the final stages of their PhD with an exceptional publication record may also apply.) More details of the job and the research group can be found at: http://www.ipab.inf.ed.ac.uk/slmc/index.html Applicants are asked to submit (a) a cover letter describing their research experiences, interests, and goals, (b) a curriculum vitae, (c) the names and contact information of three individuals who can serve as references using the online application procedure at: http://www.jobs.ed.ac.uk/vacancies/index.cfm?fuseaction=vacancies.detail&vacancy_ref=3006517 Application Deadline: November 10, 2006 Informal enquiries may be addressed to: Dr. Sethu Vijayakumar (sethu.vijayakumar [at] ed.ac.uk) who will also be available at NIPS 2006 for an informal discussion/interview. (Please advertise in your department using the attached flyer) -- ------------------------------------------------------------------ Sethu Vijayakumar, Ph.D. Associate Professor (UK Reader) Director, IPAB, School of Informatics, The University of Edinburgh 2107F JCMB, The Kings Buildings, Edinburgh EH9 3JZ, United Kingdom URL: http://homepages.inf.ed.ac.uk/svijayak Ph: +44(0)131 651 3444 SLMC Research Group URL: http://www.ipab.informatics.ed.ac.uk/slmc ------------------------------------------------------------------ Adjunct Assistant Professor Department of Computer Science, University of Southern California ------------------------------------------------------------------ From isabelle at clopinet.com Thu Oct 19 13:59:46 2006 From: isabelle at clopinet.com (Isabelle Guyon) Date: Thu, 19 Oct 2006 10:59:46 -0700 Subject: Connectionists: NIPS Causality and feature selection workshop Message-ID: <4537BD12.1000602@clopinet.com> NIPS*06 Workshop - Whistler, BC, December 8, 2006 "Causality and Feature Selection" ====================================================== http://research.ihost.com/cws2006 Call for contributions: --- Extended abstracts (1 to 4 pages long). A selection of the submitted abstracts will be accepted as oral or poster presentations. The selected contributions will be invited to publish an extended version for the workshop proceedings. Important Dates: --- Deadline for submissions: November 11, 2006 Notification of acceptance: November 15, 2006 Background: --- This workshop explores the use of causality with predictive models in order to assess the results of given actions. Such assessment is essential in many domains, including epidemiology, medicine, ecology, economy, sociology and business. Predictive models simply based on event correlations do not model mechanisms. They allow us to make predictions in a stationary environment (no change in the distribution of all the variables), but do not allow us to predict the consequence of given actions. For instance, smoking and coughing are both predictive of respiratory disease. One is a cause and the other a symptom. Acting on the cause can change the disease state; acting on the symptom cannot.
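[Editorial illustration.] To make the distinction between observing a variable and intervening on it concrete, here is a minimal simulation sketch in Python with NumPy, using a hypothetical smoking -> disease -> cough chain. The structure and the probabilities are made-up illustrative assumptions, not data or methods from the workshop.

    # Minimal sketch contrasting conditioning (observation) with intervention
    # in a toy causal chain: smoking -> disease -> cough.
    # All probabilities are invented for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    smoking = rng.random(n) < 0.3
    disease = rng.random(n) < np.where(smoking, 0.4, 0.05)
    cough = rng.random(n) < np.where(disease, 0.8, 0.1)

    # Both variables are predictive of disease (correlation):
    print("P(disease | smoking):", disease[smoking].mean())
    print("P(disease | cough)  :", disease[cough].mean())

    # Intervention do(cough = False): forcing the symptom off does not touch
    # the mechanism generating the disease, so disease risk is unchanged.
    disease_do_no_cough = disease  # cough has no arrow into disease
    print("P(disease | do(no cough)):", disease_do_no_cough.mean())

    # Intervention do(smoking = False): forcing the cause off does change
    # the downstream disease rate.
    disease_do_no_smoking = rng.random(n) < 0.05
    print("P(disease | do(no smoking)):", disease_do_no_smoking.mean())
    print("P(disease) observational   :", disease.mean())

Conditioning on the symptom raises the predicted disease probability, while intervening on it leaves the disease rate unchanged; only intervening on the cause moves it, which is the asymmetry discussed next.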
Understanding the effect of interventions has been the goal of most causal models, but their complexity has limited their use to a few hundred variables. Feature selection, on the other hand, can handle thousands of variables at the same time but does not distinguish between causes and symptoms. By confronting the hypotheses underlying causality and feature selection approaches, this workshop aims at investigating new approaches to extract causal relationships from data. We therefore invite contributions in the area of causality and/or feature selection. Applications to other fields such as econometrics, computational biology, physics or manufacturing are welcome. For more details, please go to http://research.ihost.com/cws2006 Organizers: --- * Andre Elisseeff, IBM Zurich Research Lab, Switzerland * Constantin Aliferis, Discovery Systems Laboratory, Vanderbilt University, USA * Isabelle Guyon, Clopinet, USA From rsun at rpi.edu Thu Oct 19 19:02:35 2006 From: rsun at rpi.edu (Professor Ron Sun) Date: Thu, 19 Oct 2006 19:02:35 -0400 Subject: Connectionists: Cognitive Systems Research --- new issues available Message-ID: <38037CE0-DC32-441F-B97A-A10A086999D4@rpi.edu> New Issues of Cognitive Systems Research are now available on ScienceDirect: NOTE: If the URLs in this email are not active hyperlinks, copy and paste the URL into the address/location box in your browser.
================================================================================
* Cognitive Systems Research Volume 7, Issue 1, Pages 1-96 (March 2006) Models of Eye-Movement Control in Reading Edited by Erik D. Reichle http://www.sciencedirect.com/science/issue/6595-2006-999929998-613319
Computational models of eye-movement control during reading: Theories of the eye-mind link, Pages 2-3, Erik D. Reichle
EZ Reader: A cognitive-control, serial-attention model of eye-movement behavior during reading, Pages 4-22, Erik D. Reichle, Alexander Pollatsek and Keith Rayner
Current advances in SWIFT, Pages 23-33, E.M. Richter, R. Engbert and R. Kliegl
Some empirical tests of an interactive activation model of eye movement control in reading, Pages 34-55, Ronan G. Reilly and Ralph Radach
An oculomotor-based model of eye movements in reading: The competition/interaction model, Pages 56-69, Shun-nan Yang
Eye movements as time-series random variables: A stochastic model of eye movement control in reading, Pages 70-95, Gary Feng
====================================
Cognitive Systems Research Volume 7, Issues 2-3, Pages 97-326 (June 2006) Cognition, Joint Action and Collective Intentionality Edited by Luca Tummolini and Cristiano Castelfranchi
Introduction to the Special Issue on Cognition, Joint Action and Collective Intentionality, Pages 97-100, Luca Tummolini and Cristiano Castelfranchi
From mirror neurons to joint actions, Pages 101-112, Elisabeth Pacherie and Jerome Dokic
Pretend play and the development of collective intentionality, Pages 113-127, Hannes Rakoczy
Sharedness and privateness in human early social life, Pages 128-139, Maurizio Tirassa, Francesca Marina Bosco and Livia Colle
From extended mind to collective mind, Pages 140-150, Deborah Perron Tollefsen
Collective representational content for shared extended mind, Pages 151-174, Tibor Bosse, Catholijn M. Jonker, Martijn C. Schut and Jan Treur
Uptake and joint action, Pages 175-191, Joris Hulstijn and Nicolas Maudet
From collective intentionality to intentional collectives: An ontological perspective, Pages 192-208, Emanuele Bottazzi, Carola Catenacci, Aldo Gangemi and Jos Lehmann
Argyll-Feet giants: A cognitive analysis of collective autonomy, Pages 209-219, Rosaria Conte and Paolo Turrini
Culture, evolution and the puzzle of human cooperation, Pages 220-245, Joseph Henrich and Natalie Henrich
The economic and evolutionary basis of selves, Pages 246-258, Don Ross
The dynamics of intention in collaborative activity, Pages 259-272, Barbara J. Grosz and Luke Hunsberger
Social obligation as reason for action, Pages 273-285, Kaarlo Miller
Group beliefs and the distinction between belief and acceptance, Pages 286-297, Raul Hakli
Institutional facts, performativity and false beliefs, Pages 298-306, Eerik Lagerspetz
The cognitive and behavioral mediation of institutions: Towards an account of institutional actions, Pages 307-323, Luca Tummolini and Cristiano Castelfranchi
================================================================================
* Cognitive Systems Research Volume 7, Issue 4, Pages 327-378 (December 2006) http://www.sciencedirect.com/science/issue/6595-2006-999929995-635590
Modeling meta-cognition in a cognitive architecture, Pages 327-338, Ron Sun, Xi Zhang and Robert Mathews
Human and machine perception of biological motion, Pages 339-356, Vijay Laxmi, R.I. Damper and J.N. Carter
Are unsupervised neural networks ignorant? Sizing the effect of environmental distributions on unsupervised learning, Pages 357-371, Sébastien Hélie, Sylvain Chartier and Robert Proulx
http://www.sciencedirect.com/science?
_ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASCII&_version=1&_uoikey=B6W6C-4KBVX4F-1&md5=f0fc27c730333ca293ba1031f6481521

Review of Curious Emotions, by Ralph Ellis; John Benjamins Publishing, 2005 (Advances in Consciousness Research Series). Pages 372-374 Christopher Willmot http://www.sciencedirect.com/science?_ob=GatewayURL&_method=citationSearch&_urlVersion=4&_origin=SDVIALERTASCII&_version=1&_uoikey=B6W6C-4J73097-1&md5=06a89740270a79601d07924a1e4ff162

------------------------------------------------------------------------ See the following Web page for submission, subscription, and other information regarding Cognitive Systems Research: http://www.cogsci.rpi.edu/~rsun/journal.html See http://www.elsevier.com/locate/cogsys for further information regarding accessing these articles. If you have questions about features of ScienceDirect, please access the ScienceDirect Info Site at http://www.info.sciencedirect.com

======================================================== Professor Ron Sun Cognitive Science Department Rensselaer Polytechnic Institute 110 Eighth Street, Carnegie 302A Troy, NY 12180, USA phone: 518-276-3409 fax: 518-276-3017 email: rsun at rpi.edu web: http://www.cogsci.rpi.edu/~rsun =======================================================

From agutierrez at el.ub.es Fri Oct 20 05:48:07 2006 From: agutierrez at el.ub.es (Agustín Gutiérrez) Date: Fri, 20 Oct 2006 11:48:07 +0200 Subject: Connectionists: Workshop on bioinspired signal processing Message-ID: <1161337687.5061.2.camel@tokyo>

Dear Sirs/Madams, We would like to inform you about the forthcoming GOSPEL workshop on bioinspired signal processing to be held in Barcelona from the 24th to the 26th of January 2007. We have just opened the period to accept contributions (orals/posters) to participate in this exciting event. We invite you to visit our website (www.gospel-wbsp-2007.org) and submit an abstract if you have an interesting contribution to this workshop. Do not hesitate to contact us for any question you may have about the workshop. Yours sincerely, Agustín Gutiérrez, PhD Departament d'Electronica Universitat de Barcelona Martí i Franquès 1 08028-Barcelona +34 934039174

From YOMTOV at il.ibm.com Fri Oct 20 08:58:31 2006 From: YOMTOV at il.ibm.com (Elad Yom-Tov) Date: Fri, 20 Oct 2006 14:58:31 +0200 Subject: Connectionists: NIPS 2006 workshop: Revealing Hidden Elements of Dynamical Systems -- Announcement and Call for Papers Message-ID:

******************************************************************* Call For Papers Revealing Hidden Elements of Dynamical Systems http://www.haifa.il.ibm.com/Workshops/nips2006/index.html Workshop held at the 20th Annual Conference on Neural Information Processing Systems (NIPS 2006) Whistler, CANADA: December 8 or 9, 2006 ********************************************************************

Revealing and modeling the hidden state-space of dynamical systems is a fundamental problem in signal processing, control theory, and learning. Classical approaches to this problem include Hidden Markov Models, Reinforcement Learning, and various system identification algorithms. More recently, the problem has been approached by such modern machine learning techniques as kernel methods, Bayesian and Gaussian processes, latent variables, and the Information Bottleneck. Moreover, dynamic state-space learning is the key mechanism by which organisms cope with complex stochastic environments, as in biological adaptation.
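As a minimal illustration of what recovering such a hidden state-space can look like in the simplest classical setting mentioned above, the sketch below filters a toy two-state Hidden Markov Model with the forward algorithm (the model, its parameters, and the observation sequence are invented for this example):

    # Toy two-state HMM, filtered with the forward algorithm (illustrative parameters only).
    import numpy as np

    A = np.array([[0.95, 0.05],   # hidden-state transition probabilities
                  [0.10, 0.90]])
    B = np.array([[0.8, 0.2],     # P(observation | hidden state)
                  [0.3, 0.7]])
    pi = np.array([0.5, 0.5])     # initial state distribution

    def forward_filter(obs):
        """Return P(hidden state at time t | observations up to t) for each t."""
        belief = pi * B[:, obs[0]]
        belief = belief / belief.sum()
        beliefs = [belief]
        for o in obs[1:]:
            belief = (A.T @ belief) * B[:, o]   # predict with A, then weight by the likelihood
            belief = belief / belief.sum()
            beliefs.append(belief)
        return np.array(beliefs)

    print(forward_filter([0, 0, 1, 1, 1]))   # the belief drifts toward the second hidden state

The printed beliefs show a posterior over the unobserved state being updated from noisy observations alone, which is the basic operation that the more recent techniques listed above address in richer model classes.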
One familiar example of a complex dynamic system is the authorship system in the NIPS community. Such a system can be described by both internal variables, such as links between NIPS authors, and external environment variables, such as other research communities. This complex system, which generates a vast number of papers each year, can be modeled and investigated using various parametric and non-parametric methods. In this workshop, we intend to review and confront different approaches to dynamical system learning, with various applications in machine learning and neuroscience. We plan to discuss relations between the different approaches, and address a range of questions and applications:
* What are the special features of dynamical system learning that separate it from other learning problems?
* What are the pros and cons of the current methods?
* How can statistical and information theoretic techniques be combined with the theoretical structure of dynamical systems?
* What kind of optimization principles for learning dynamics can be derived?
* Are there generic features that can be extracted from time-series data?
* How can we combine static and time series data for modeling dynamic systems?

In addition, we hope this workshop will familiarize the machine learning community with many real-world examples and applications of dynamical system learning. Such examples will also serve as the basis for the discussion of such systems in the workshop. A successful outcome of the workshop would be novel methods for learning and modeling from such data, as well as providing new conceptual frameworks for the general problem of adaptation to complex environments.

Format ======= This will be a one-day workshop. We plan to have around 50% of the workshop devoted to diverse short talks. The rest of the time would be dedicated to a panel presentation and discussions.

Submission Instructions ======================== If you would like to present at this workshop, please send an email to Elad Yom-Tov (yomtov at il.ibm.com) no later than 31st October, specifying:
* Title
* Authors and affiliations
* A short paper - Length should be no more than 2000 words (Postscript or PDF format)
If there is interest among workshop participants, we may publish an edited volume of the proceedings after the workshop.

Dates & Deadlines ================== October 31: Abstract Submission November 13: Acceptance Notification

Organizing Committee =====================
Naftali Tishby Hebrew University, Israel
Michal Rosen-Zvi IBM Haifa Research Lab, Israel
Elad Yom-Tov IBM Haifa Research Lab, Israel
Pierre Baldi University of California at Irvine, USA

Invited Speakers =================
Ziv Bar-Joseph Carnegie Mellon University, USA
Jim Crutchfield University of California at Davis, USA
Irina Rish IBM T.J. Watson Research Center, USA

From mhb0 at lehigh.edu Fri Oct 20 09:56:56 2006 From: mhb0 at lehigh.edu (Mark H. Bickhard) Date: Fri, 20 Oct 2006 09:56:56 -0400 Subject: Connectionists: 2nd Call for Papers Message-ID:

CALL FOR PAPERS Cognitive Robotics and Theoretical Psychology Tom Ziemke & Mark Bickhard

Cognitive robotics - the use of robots in the modeling of cognition - evolved in reaction to disembodied and context free notions of cognition. Cognitive robotics and related orientations emphasize the centrality of agency to cognition, together with multiple corollary properties of dynamic agents interacting in real situations.
There are many interrelations and synergies between cognitive robotics and theoretical psychology, and progressively more of them are being explored in both psychology and robotics. We are calling for papers for a special issue of the journal New Ideas in Psychology* addressing the relationships between cognitive robotics in a broad sense - including, for example, embodied cognition and autonomous agents - on the one hand, and theoretical psychology, on the other. We would like to focus on what these approaches offer to psychology - philosophically, theoretically, methodologically, suggestions for models, and so on - as well as what opportunities are offered to psychology for application, testing, and exploration of psychological issues and theories. Interested authors should send an abstract of their intended paper to both editors. Initial abstracts are due 15 Nov 2006, and full papers for accepted abstracts will be due 16 Apr 2007.

Important dates:
Submission of abstracts: 15 Nov 2006
Submission of full papers: 16 Apr 2007
Feedback/notification: 20 June 2007
Completion of final drafts: 15 Oct 2007
Aim for publication: Early 2008

Tom Ziemke Professor of Cognitive Science/Cognitive Robotics School of Humanities & Informatics University of Skövde, Sweden tom.ziemke at his.se

Mark Bickhard Henry R. Luce Professor of Cognitive Robotics and the Philosophy of Knowledge Lehigh University Bethlehem, PA USA mark at bickhard.name http://www.bickhard.ws/

* A journal for innovative theory in psychology. See our editorial statement: http://ees.elsevier.com/newideas/

Mark H. Bickhard Lehigh University 17 Memorial Drive East Bethlehem, PA 18015 mark at bickhard.name http://bickhard.ws/

From Eugene.Izhikevich at nsi.edu Wed Oct 25 01:27:57 2006 From: Eugene.Izhikevich at nsi.edu (Eugene M. Izhikevich) Date: Tue, 24 Oct 2006 22:27:57 -0700 Subject: Connectionists: Scholarpedia update: Recent review articles in Computational Neuroscience Message-ID: <453EF5DD.5010207@nsi.edu>

Scholarpedia is a free peer-reviewed encyclopedia that combines the philosophies of Wikipedia and Encyclopedia Britannica. Scholarpedia hosts Encyclopedia of Computational Neuroscience, Encyclopedia of Dynamical Systems, and Encyclopedia of Computational Intelligence: http://www.scholarpedia.org/article/Encyclopedia_of_Computational_Neuroscience http://www.scholarpedia.org/article/Encyclopedia_of_Dynamical_Systems http://www.scholarpedia.org/article/Encyclopedia_of_Computational_Intelligence All three will be published in a printed form, and will be used as seeds to start Encyclopedia of Cognitive Neuroscience, Encyclopedia of Applied Mathematics, and Encyclopedia of Computer Science (later next year).

Scholarpedia update:
Number of registered users: 609
Number of curators: 330
Number of reserved articles: 415
Number of peer-reviewed articles: 23
see http://www.scholarpedia.org/article/Special:Allpages&wpShowReviewed=1

5 recent articles in Neuroscience:
Sherman S.M. (2006) Thalamus. Scholarpedia, p.3553
Schultz W. (2006) Reward. Scholarpedia.
Schultz W. (2006) Reward Signals. Scholarpedia.
Freeman W. (2006) Scale-Free Neocortical Dynamics. Scholarpedia.
Beggs J.M. (2006) Neuronal Avalanche. Scholarpedia.

5 recent articles in Dynamical Systems:
Milnor J. (2006) Attractor. Scholarpedia.
Murdock J. (2006) Normal Form. Scholarpedia.
Holmes P. and Shea-Brown E. (2006) Stability. Scholarpedia, p.4208
Kuznetsov Yu. (2006) Andronov-Hopf Bifurcation. Scholarpedia, p.3799
Kuznetsov Yu. (2006) Saddle-Node Bifurcation.
Scholarpedia, p.3778 5 recent articles in Computational Intelligence: Castiglione F. (2006) Agent Based Modeling. Scholarpedia, p.4093 Mares M. (2006) Fuzzy Sets. Scholarpedia. Hidalgo C. and Barabasi A.-L. (2006) Scale-Free Networks. Scholarpedia. Rescorla R. (2006) Rescorla-Wagner Model. Scholarpedia. Johnson D.H.(2006) Signal to Noise Ratio. Scholarpedia. There is an ongoing election of authors for many articles, including "Neuron", "Synapse", "STDP", "Neural Networks", "Associative Memory", "Machine Vision", "Error Back-Propagation", "Bayesian Learning". To participate in the election, you need to register with the invitation key 'IEEE'. Eugene M. Izhikevich - editor-in-chief of Scholarpedia. From ps629 at columbia.edu Wed Oct 25 12:03:57 2006 From: ps629 at columbia.edu (Paul Sajda) Date: Wed, 25 Oct 2006 12:03:57 -0400 Subject: Connectionists: CFP: IEEE SIGNAL PROCESSING MAGAZINE Special Issue on Brain-Computer Interfaces Message-ID: <12439D89-8C43-4051-8731-C057BF626D9F@columbia.edu> CALL FOR PAPERS IEEE SIGNAL PROCESSING MAGAZINE Special Issue on Brain-Computer Interfaces Developing real-time systems for decoding neural activity so as to provide a direct interface between brain and machine is no longer science fiction, instead it is an active area of research that has been termed brain computer interfaces (BCI). BCI research is highly multidisciplinary, integrating neuroscience, rehabilitation engineering, human computer interface design, machine learning and signal processing into either invasive or non-invasive systems that are capable of decoding brain activity so as to generate control/ communication signals for interpretation by computers. The application and utility of BCI ranges from systems for aiding those with severe neurological disease to systems optimizing the workload of a user by modulating information delivery based on the user's current "brain state". BCI research is an excellent example of "applied signal processing", since at its core lies the real-time, online decoding of noisy multi- dimensional signals. Though the BCI field has made tremendous progress in recent years, there are still serious challenges, many of which could potentially be addressed by advances/improvements in signal processing, including increasing information transmission rates, more robust neural decoding algorithms, adaptive multi-channel filtering, etc. This CFP is aimed at researchers applying signal processing techniques to build BCI systems. The special issue will cover a broad range of topics including hardware, software, algorithmic and application areas of BCI specifically focused on signal processing. High-quality tutorial-style papers are solicited from the following non-exhaustive list of topics. Scope of topics: Multi-electrode technologies interfacing with the motor system of primates or humans for decoding planned/intended movement (MEMs electrodes, LNA circuits, etc.). Non-invasive BCI systems (electroencephalography or EEG, electrocorticography or ECoG, and functional near infrared imaging or fNIR) for decoding neural activity in humans. Review and comparisons of linear versus non-linear signal processing for decoding neural activity. Multi-modal neural imaging methods for BCI. Systems for monitoring brain state to enable cognitive user interfaces. On-line algorithms for data compression necessary for low bandwidth wired or wireless communication to/from implanted systems (e.g., spike sorting, lossy compression). 
On-line algorithms for decoding neural activity, including circuit implementations or implantable systems. Implantable telemetry systems for communication to/from implanted systems (e.g., optimal frequency considerations, low power implementations). Signal processing methods for handling artifacts in BCI systems.

Submission Procedure: Prospective authors should submit white papers to the web submission system at http://www.ee.columbia.edu/spm/ according to the following timetable. White papers, limited to 2 single-column double-spaced pages, should summarize the motivation, the significance of the topic, a brief history, and an outline of the content. In all cases, prospective contributors should make sure to emphasize the signal processing in their submission.

Schedule (all deadlines are firm, no exceptions):
White paper due: December 1, 2006
Invitation notification: January 1, 2007
Manuscript due: May 1, 2007
Acceptance Notification: August 1, 2007
Final Manuscript due: September 15, 2007
Publication date: January 2008

Guest Editors:
Paul Sajda Columbia University ps629 at columbia.edu
Klaus-Robert Müller Technical University of Berlin and Fraunhofer FIRST klaus at first.fhg.de
Krishna V. Shenoy Stanford University shenoy at stanford.edu

Paul Sajda, Ph.D. Associate Professor Department of Biomedical Engineering Columbia University 351 Engineering Terrace Building, Mail Code 8904 1210 Amsterdam Avenue New York, NY 10027 tel: (212) 854-5279 fax: (212) 854-8725 email: ps629 at columbia.edu http://liinc.bme.columbia.edu

From eero at cns.nyu.edu Thu Oct 26 16:14:07 2006 From: eero at cns.nyu.edu (Eero Simoncelli) Date: Thu, 26 Oct 2006 16:14:07 -0400 Subject: Connectionists: COSYNE 2007 Message-ID: <7F5C0646-8FA4-42EF-ABB8-26A008B8E9A0@cns.nyu.edu>

****************************************************************** Computational and Systems Neuroscience (CoSyNe)
MAIN MEETING: Feb 22-25, 2007, Salt Lake City, UTAH
WORKSHOPS: Feb 26-27, 2007, The Canyons, UTAH
http://cosyne.org *******************************************************************

IMPORTANT DATES ---------------
* Early registration begins: 15-Nov-06
* Abstract submission deadline: 15-Dec-06
* Complete schedule release: 25-Jan-07
* Regular registration begins: 01-Feb-07
* On-line registration ends: 20-Feb-07

The annual COSYNE meeting provides an inclusive forum for the exchange of experimental and theoretical approaches to problems in systems neuroscience. The meeting is expected to draw about 350-400 researchers from a wide variety of disciplines. Topics include but are not limited to: neural coding; natural scene statistics; dendritic computation; neural basis of persistent activity; nonlinear receptive field mapping; representations of time and sequence; reward systems; synaptic plasticity; map formation and plasticity; population coding; attention; computation with spiking networks.

The MAIN MEETING, held in Salt Lake City, will be single-track, and will consist of both oral and poster sessions. Some oral presentations will be invited, while others will be drawn from short submitted abstracts. Poster presentations will be drawn from submitted abstracts.
Invited speakers for this year are as follows: * Ehud Ahissar (Weizmann Institute) * Richard Andersen (Caltech) * Ed Callaway (Salk Institute) * Paul Glimcher (NYU) * Michael Goldberg (Columbia) * Judith Hirsch (USC) * Mitsuo Kawato (ATR) * Eric Knudsen (Stanford) * Mike Lewicki (CMU) * Zhaoping Li (UCL) * Dan Margoliash (U Chicago) * Bruce McNaughton (U Arizona) * Bartlett Mel (USC) * Sheila Nirenberg (Cornell) * Mike Shadlen (U Washington) The WORKSHOPS will be at the Canyons ski resort nearby, and will offer parallel sessions for more in-depth discussion of specialized topics. Preliminary workshop topics are as follows: 1. How silent/active is the brain? 2. Hippocampal and entorhinal coding across species (2 days) 3. Emerging information-theoretic measures and methods in neuroscience 4. Neurally plausible statistical inference 5. Functional requirements of a visual theory 6. Conserved functions of the basal ganglia circuit 7. What role does spike synchrony or correlation play in sensory processing? 8. Asking why - normative models in neuroscience 9. Quantitative analysis of shape representation in mid and higher level visual areas 10. Random matrix theory and neural networks 11. Motor control 12. Decision making For further information, please consult the conference web site: http://cosyne.org or send email to cosyne at rochester.edu From s.crone at lancaster.ac.uk Tue Oct 24 21:46:31 2006 From: s.crone at lancaster.ac.uk (Crone, Sven) Date: Wed, 25 Oct 2006 02:46:31 +0100 Subject: Connectionists: NN3 Forecasting Competition Message-ID: <84C837A579BB6B41993A00F525676759A18100@exchange-be3.lancs.local> The 2006-2007 Forecasting competition for Neural Networks and Computational Intelligence Methods has started! Objectives Forecast a set of 11 or 111 time series as accurately as possible, using methods from computational intelligence and a consistent methodology. We hope to evaluate progress in modelling neural networks for forecasting & to disseminate knowledge on "best practices". The competition is for academic purposes and supported by a grant from SAS & the International Institute of Forecasters (IIF). Methods The prediction competition is open to all methods of computational intelligence, incl. feed-forward and recurrent neural networks, fuzzy predictors, decision & regression tress, support vector regression, hybrid approaches etc. used in financial forecasting, statistical prediction, time series analysis Publication of Results The results will be presented at various conferences in 2007. All those submitting predictions will be invited to participate in Events at ISF'07 , New York, IJCNN'07 , Orlando (pending), DMIN'07 , Las Vegas (pending) and LUMS'07, Manchester, and submit papers to those special sessions. All submissions will invited for a full publication in an edited book "Advances of Neural Forecasting", subject to acceptance (pending). The most successful authors will be invited for a joint submission to the highly acclaimed International Journal of Forecasting (ISI SCI, DBLP etc. indexed). Dates & deadlines 20 October 2006 Publication of reduced & complete dataset May 2007 Predictions submissions due June & July 2007 publication of preliminary results at various conferences in USA, UK etc August 2007 submissions to full publications Please visit the NN3 website at http://www.neural-forecasting-competition.com/ for further instructions. GOOD LUCK! Sven F. Crone & Konstantinos Nikolopoulos _____________________________________________________ Sven F. 
Crone Deputy Director, Lancaster Centre for Forecasting Lecturer (Ass. Prof.), Department of Management Science Lancaster University Management School Lancaster LA1 4YX United Kingdom Tel +44 (0)1524 592991 direct Tel +44 (0)1524 593867 department Fax +44 (0)1524 844885 Internet http://www.lums.lancs.ac.uk eMail s.crone at lancaster.ac.uk

From online at gavrila.net Wed Oct 25 05:00:40 2006 From: online at gavrila.net (online@gavrila.net) Date: Wed, 25 Oct 2006 11:00:40 +0200 Subject: Connectionists: DATA: DC pedestrian classification benchmark Message-ID: <1161766840.453f27b845f80@www.domainfactory-webmail.de>

Dear Moderator, could you kindly post the announcement below to the Connectionists mailing list? Many thanks, Prof. Dr. Dariu M. Gavrila Machine Perception, REI/AI DaimlerChrysler Research & Technology, Wilhelm Runge St. 11, 89081 Ulm, Germany Email: [first-name].[last-name]@daimlerchrysler.com Phone: +49 731 505 2388, Fax: +49 731 505 4105

*********************************************************************** DAIMLERCHRYSLER PEDESTRIAN CLASSIFICATION BENCHMARK

As part of the publication S. Munder and D. M. Gavrila, "An Experimental Study on Pedestrian Classification," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.28, no.11, pp.1863-1868, 2006, we are making a large set of pedestrian and non-pedestrian images available for non-commercial, research purposes. The aim is to advance research on pedestrian (object) classification by allowing comparison of various approaches (including neural computations) on a common data set. The so-called DaimlerChrysler (DC) pedestrian classification benchmark contains a base set of 4000 pedestrian and 5000 non-pedestrian samples cut out from video images and scaled to a common size of 18x36 pixels. The benchmark furthermore contains a collection of 1200 video images not containing any pedestrians, for optional bootstrapping approaches. The DC pedestrian classification benchmark is available at: http://www.science.uva.nl/research/isla/dc-ped-class-benchmark.html For more information on the pedestrian detection application, see http://www.gavrila.net/Computer_Vision/Research/Pedestrian_Detection/pedestrian_detection.html Feedback is welcome. ***********************************************************************

From juergen at idsia.ch Wed Oct 25 09:48:01 2006 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 25 Oct 2006 15:48:01 +0200 Subject: Connectionists: Jobs in Munich: Machine Learning and Adaptive Robotics Message-ID: <723bb02653ec96713b0d509ef53cf285@idsia.ch>

We are seeking outstanding postdocs or PhD students for the Cognitive Robotics & Machine Learning group in the new cluster of excellence on Cognitive Technical Systems (CoTeSys), which covers systems such as autonomous vehicles, humanoid robots, and automated factories. Research topics include: Artificial curiosity for the artificial hands of German Aerospace (DLR), behavior evolution for AM's 180cm humanoid walking biped, visual attention & unsupervised learning for adaptive robots. Possible starting date: 1 November 2006 or later. Details: http://www.idsia.ch/~juergen/cotesys.html CoTeSys combines the expertise of Munich's TU, LMU, UniBW, DLR, and MPI in neuroscience, natural sciences, engineering, computer science, and the humanities. It also has strong connections to industry leaders such as BMW and Siemens.
CoTeSys is one of the few proposals that made it through an intense nation-wide competition during the past year. CoTeSys partners TUM and LMU rank first among Germany's universities, according to a recent FOCUS survey. 14 Nobel laureates (the most recent one in 2005 at MPI) are associated with Munich (München). On 10/13/2006 both TUM and LMU were selected as two of the three German "Elite Universities" by the "Excellence Initiative", funded by 1.9 billion Euros for the next 5 years. This was prime time news on all German channels. Germany is a good place for robot research. It is still the world's largest exporter of machines and other goods, birthplace of the first robot cars, and the second largest maker and user of robots, after Japan. Many German teams became world champions in the RoboCup, the most visible robot competition. Munich is one of the world's most livable places - ranked 2nd among the world's cities with over a million inhabitants, after Vienna, according to recent surveys.

How to apply: http://www.idsia.ch/~juergen/cotesys.html CoTeSys Portal: http://www.cotesys.org/

Juergen Schmidhuber TU Munich & IDSIA Machine Learning & Cognitive Robotics: http://www.idsia.ch/~juergen/cogbotlab.html http://www.idsia.ch/~juergen/learningrobots.html http://atknoll1.informatik.tu-muenchen.de:8080/tum6/people/schmidhuber http://www.idsia.ch/~juergen

From C.Campbell at bristol.ac.uk Thu Oct 26 13:20:40 2006 From: C.Campbell at bristol.ac.uk (ICG Campbell, Engineering Mathematics) Date: Thu, 26 Oct 2006 18:20:40 +0100 Subject: Connectionists: Postdoctoral Position in Machine Learning and Bioinformatics Message-ID: <29438139.1161886840@ems-iggs.enm.bris.ac.uk>

Postdoctoral Position in Machine Learning and Bioinformatics University of Bristol, United Kingdom

We are seeking to appoint an outstanding postdoctoral researcher interested in machine learning and bioinformatics to work with Dr. Colin Campbell, University of Bristol, United Kingdom. This project will involve the design of new algorithms and data analysis techniques in addition to an important core application study. We will use a variety of methods from modern machine learning, including Bayesian techniques and probabilistic graphical methods, kernel-based methods and other approaches. The core application study will be the use of unsupervised and semi-supervised learning methods to delineate subtypes of cancer and genes differentially expressed in these subtypes. This will lead through to gene knockdown studies using the new technology of siRNA (small interfering RNAs) by our cancer research collaborators. We have had significant success with our preliminary project, and machine learning techniques played a critical role in target selection. Aside from this core programme, the appointee will have the freedom to pursue research in machine learning, bioinformatics, statistics and cancer informatics, according to their interests. Topics of interest may include semi-supervised learning and data fusion, for example. The Research Assistant will be a member of the proposed Intelligent Systems Lab, Faculty of Engineering, Bristol University. Other researchers in the Lab with interests in machine learning and bioinformatics include Nello Cristianini and Tijl de Bie.

The deadline for applications is: *** 9.00 am on 30th November 2006 *** Further details are available at: http://www.bris.ac.uk/boris/jobs/ads?ID=57694 Interested candidates must apply through the online application on this webpage.
---------------------- C.Campbell at bristol.ac.uk

From awhite at cs.ualberta.ca Thu Oct 26 17:27:40 2006 From: awhite at cs.ualberta.ca (Adam White) Date: Thu, 26 Oct 2006 15:27:40 -0600 Subject: Connectionists: NIPS*06 Workshop: First Annual Reinforcement Learning Competition Message-ID: <328147B4-F624-46F2-80D8-920EC4763E41@cs.ualberta.ca>

NIPS*06 Workshop - Whistler, BC, December 9, 2006 "First Annual Reinforcement Learning Competition" ====================================================== http://rlai.cs.ualberta.ca/RLAI/rlc.html

---------------------------- Call for participation ----------------------------

Teams can compete in 7 events:
- Cat and Mouse
- Tetris
- Garnet
- Non-stationary Mountain Car
- Cart-pole
- Puddle World

The winners of the six individual events will be invited to give short talks, at the workshop, outlining their solution techniques. Each team is also encouraged to submit a two page description of their solution techniques for each event they compete in. Teams are required to register their team with Adam White (awhite at cs.ualberta.ca). See the workshop web-page for a complete description of the problems, evaluation scheme and rules. A pentathlon event will be announced in a few days. Look for the announcement and description of the pentathlon on the workshop web-page. In the Pentathlon each participant's agent will be evaluated on five continuous observation problems consecutively. The agents that perform best across all five problems will be awarded small prizes.

------------------------ Important Dates ------------------------ Initial software release and official start of competition: October 26, 2006 Competition end: December 3, 2006 Submission deadline for Solution descriptions: December 5, 2006

The server will come online a few days after the initial software has been released. Participants are encouraged to begin writing agents and testing locally before connecting to the competition server.

--------------------------------------------------- Workshop Description and Motivation ---------------------------------------------------

Competitions can greatly increase the interest and focus in an area by clarifying its objectives and challenges, publicly acknowledging the best algorithms, and generally making the area more exciting and enjoyable. The First Annual Reinforcement Learning competition is ongoing and will reach its culmination six days before this workshop (December 3rd). The purpose of this one-day workshop is to 1) report the results from the competition, 2) summarize the experiences of and solution approaches employed by the competition participants, and 3) plan for future competitions. There have been two previous events within the machine learning community that have involved comparing different reinforcement learning methods, one at a NIPS 2005 workshop and one at an ICML 2006 workshop. Last year's event at NIPS had over 30 submissions, from 9 countries. This competition differs from previous events in that it will be a competitive event. The winners will be invited to describe their approach to each problem, at the workshop, improving the attendees' expertise on applying and implementing reinforcement learning systems. This year's competition will feature seven events, three discrete observation problems and three continuous observation problems. The final event will be a Pentathlon: each participant's agent will be evaluated on five continuous observation problems consecutively.
The agents that perform best across all five problems will be awarded small prizes. For more details, see: http://rlai.cs.ualberta.ca/RLAI/rlc.html ---------------------------------- Organization Committee ---------------------------------- Adam White, University of Alberta, Alberta, Canada Richard S. Sutton, University of Alberta, Alberta, Canada Michael L. Littman, Rutgers University, New Jersey, USA Doina Precup, Mcgill University , Montreal, Canada Peter Stone, University of Texas, Austin, Texas, USA ------------------------------------------------ Technical Organization Committee ------------------------------------------------ Andrew Butcher, University of Alberta, Alberta, Canada From alex at bcs.rochester.edu Thu Oct 26 20:50:49 2006 From: alex at bcs.rochester.edu (alex@bcs.rochester.edu) Date: Thu, 26 Oct 2006 20:50:49 -0400 Subject: Connectionists: postdoc position Pouget/Shadlen Message-ID: <7.0.1.0.0.20061026204929.03b1ee30@bcs.rochester.edu> A postdoc position is available immediately in Dr Pouget's laboratory at the University of Rochester to work on the neural basis of decision making in collaboration with the laboratory of Michael Shadlen at the University of Washington, Seattle. The project will combine theories of Bayesian inference, sequential decision making, reinforcement learning and network dynamics. Applicants should have expertise in one or more of these fields and strong background in computational neuroscience, statistics, computer science, physics or applied mathematics. Candidates should send their CV to alex at bcs.rochester.edu. Additional information about Dr Pouget's laboratory can be found at http://www.bcs.rochester.edu/people/alex/ Alexandre Pouget Department of Brain and Cognitive Sciences Meliora Hall University of Rochester Rochester, NY 14627 Phone: 585 275 0760 Fax: 585 442 9216 WWW: http://www.bcs.rochester.edu/people/alex/ From nq6 at columbia.edu Tue Oct 24 10:10:58 2006 From: nq6 at columbia.edu (Ning Qian) Date: Tue, 24 Oct 2006 10:10:58 -0400 Subject: Connectionists: postdoc position at Columbia Message-ID: <453E1EF2.40902@columbia.edu> Postdoctoral Position in Computational Neuroscience Center for Neurobiology and Behavior Columbia University A postdoctoral position is available immediately in Dr. Ning Qian's lab at Columbia University. Research projects include (but not restricted to) computational modeling of motor control, stereovision, motion perception, and visuomotor integration. Publications from the lab can be found at: http://brahms.cpmc.columbia.edu. The lab is part of Mahoney Center for Brain and Behavior Research, and Center for Theoretical Neuroscience at Columbia. The position is for three to four years. Salary is commensurate with NIH levels (plus benefits). The candidate should be highly motivated, and have a strong research background in mathematical modeling and programming, as evidenced by first-authored publications. Please send CV, names/email addresses/phone numbers of two to three referees, and representative publications to the address below. Email applications and inquiries are preferred. Dr. Ning Qian Ctr. Neurobiology & Behavior Columbia University / NYSPI Kolb Annex, Rm 519 1051 Riverside Drive New York, NY 10032, USA nq6 at columbia.edu 212-543-6931x600 http://brahms.cpmc.columbia.edu From i.tetko at gsf.de Fri Oct 27 08:22:50 2006 From: i.tetko at gsf.de (Igor V. 
Tetko) Date: Fri, 27 Oct 2006 14:22:50 +0200 Subject: Connectionists: Jobs in Munich: Chemoinformatics, machine learning Message-ID:

Dear colleagues: Bio-/chemoinformatics machine learning positions at the GSF -- National Center for Environment and Health, Institute for Bioinformatics, Neuherberg (Munich), Germany

Following the award of the "Development of ADME/T methods using Associative Neural Networks (ASNN)" grant, we have several positions to develop methods for prediction of physico-chemical and ADME/T (Absorption, Distribution, Metabolism, Excretion and Toxicology) properties of small molecules for drug discovery and environmental studies. The approaches will be developed/validated in close interaction with wet laboratory scientists from the GSF, the Helmholtz network and other (inter)national centers. Subgoals of the project are to develop new methods to estimate bias and confidence of model predictions (e.g., to distinguish reliable vs non-reliable predictions), to improve accuracy of predictions by exploring different representations of molecules (kernel methods vs vector of descriptors) and to interpret the calculated results to the end users (chemists/biologists). Some of these ideas have been used within the ASNN and will be further developed. We are looking for outstanding scientists (postdocs, but exceptional PhD students will also be considered) who are experts in machine learning and computer science and have experience in applying machine learning methods to small molecules (ideally an MSc in physical chemistry/biochemistry and a PhD in machine learning/computer science). The applicants should also be proficient in programming using Java/C++, design of Web services and tools, client-user interface development, and data visualization, and be ready to work on these tasks as a part of their employment.

See: Methodological article: http://cogprints.org/1441 Job application details: http://www.vcclab.org/jobs.html Further information about the grant: http://www.gsf.de/neu/Aktuelles/Presse/2006/gobio.php

-- Dr. Igor V. Tetko GSF - National Research Centre for Environment and Health Institute for Bioinformatics Ingolstaedter Landstrasse 1, D-85764 Neuherberg, Germany Tel./Fax: +49-89-3187-3575/x85 e-mail: itetko at vcclab.org, i.tetko at gsf.de http://www.vcclab.org -- Virtual Computational Chemistry Laboratory: free on-line calculation of logP/S, CORINA/DRAGON indices, PLS, ASNN, PNN, etc.

From esann at dice.ucl.ac.be Sat Oct 28 16:36:12 2006 From: esann at dice.ucl.ac.be (esann) Date: Sat, 28 Oct 2006 22:36:12 +0200 Subject: Connectionists: special sessions at ESANN'2007 European Symposium on Artificial Neural Networks Message-ID: <00ab01c6fad0$b58dd2b0$43ed6882@maxwell.local>

ESANN'2007 15th European Symposium on Artificial Neural Networks Advances in Computational Intelligence and Learning Bruges (Belgium) - April 25-26-27, 2007 Special sessions =====================================================

The following message contains a summary of all special sessions that will be organized during the ESANN'2007 conference. Authors are invited to submit their contributions to one of these sessions or to a regular session, according to the guidelines found on the web pages of the conference http://www.dice.ucl.ac.be/esann/. According to our policy to reduce the number of unsolicited e-mails, we gathered all special session descriptions in a single message, and try to avoid sending it to overlapping distribution lists.
We apologize if you receive multiple copies of this e-mail despite our precautions.

Special sessions that will be organized during the ESANN'2007 conference ========================================================================
1. Fuzzy and Probabilistic Methods in Neural Networks and Machine Learning B. Hammer, Clausthal Univ. Tech. (Germany), T. Villmann, Univ. Leipzig (Germany)
2. Reinforcement Learning V. Heidrich-Meisner, Ruhr-Univ. Bochum, M. Lauer, Univ. Osnabrück, C. Igel, Ruhr-Univ. Bochum, M. Riedmiller, Univ. Karlsruhe (Germany)
3. Convex Optimization for the Design of Learning Machines K. Pelckmans, J.A.K. Suykens, Katholieke Univ. Leuven (Belgium)
4. Learning causality P. F. Verdes, Heidelberg Acad. of Sciences (Germany), K. Hlavackova-Schindler, Austrian Acad. of Sciences (Austria)
5. Reservoir Computing D. Verstraeten, B. Schrauwen, Univ. Gent (Belgium)

Short descriptions ==================

Fuzzy and Probabilistic Methods in Neural Networks and Machine Learning ----------------------------------------------------------------------- Organized by: - B. Hammer, Clausthal Univ. Tech. (Germany) - T. Villmann, Univ. Leipzig (Germany)

The availability of huge amounts of real world data in widespread areas such as image processing, medicine, bioinformatics, robotics, geophysics, etc., leads to an increasing importance of fuzzy and probabilistic methods in adaptive data processing. Usually, measured data contain noise, classification labels may be undetermined on a certain level, decision systems have to cope with uncertainty of knowledge, and systems have to deal with missing or contradictory data from several sources. Extensions of neural networks and machine learning methods to incorporate probabilistic or fuzzy information offer possibilities to handle such problems. In the special session we focus on new developments and applications which extend common approaches by features related to fuzzy and probabilistic information processing. We encourage submissions within the following non-exclusive list of topics:
- fuzzy and probabilistic clustering or classification
- processing of misleading or insecure information
- fuzzy reasoning and rule extraction
- fuzzy control
- probabilistic networks
- visualization of fuzzy information
- applications in image processing, medicine, robotics, ...
Thereby, the focus will lie on major developments which contribute to new insights, improved algorithms, or the demonstration of fuzzy and probabilistic methods for real life problems.

Reinforcement Learning ----------------------------------------------------------------------- Organized by: - V. Heidrich-Meisner, Ruhr-Univ. Bochum - M. Lauer, Univ. Osnabrück - C. Igel, Ruhr-Univ. Bochum - M. Riedmiller, Univ. Karlsruhe (Germany)

Reinforcement learning (RL) deals with computational models and algorithms for solving sequential decision and control problems: a learning agent builds policies of optimal behavior by interacting with the environment and observing reward for its present performance. Fields of application include, for example, nonlinear control of technical systems, robotics, and economic processes. Reinforcement learning addresses problems for which knowledge of the system's dynamics is poor and feedback about actions is sparse, unspecific, or delayed. Moreover, it is the biologically most plausible learning paradigm for behavioral processes. Thus, RL is a highly interdisciplinary field of research combining optimal control, psychology, biology, and machine learning.
Bringing together researchers from these different disciplines is one goal of the proposed session. In our session we would welcome papers describing theoretical work and carefully evaluated applications from all areas of RL or approximate dynamic programming. We would encourage submissions dealing with new biological models of RL processes as well as innovative computational learning algorithms, in particular direct policy search methods and approximative RL. Convex Optimization for the Design of Learning Machines ----------------------------------------------------------------------- Organized by: - K. Pelckmans, Katholieke Univ. Leuven (Belgium) - J.A.K. Suykens, Katholieke Univ. Leuven (Belgium) Recently, techniques of Convex Optimization (CO) take a more prominent place in learning approaches, as pioneered by the work on Support Vector Machines (SVMs) and other regularization based learning schemes. Duality theory has played an important role in the development of so-called kernel machines, while the fact of uniqueness of the optimal solution has permitted theoretical as well as practical breakthroughs. A third main advantage of using CO tools in research on learning problems is that the conceptual level of the design of a learning scheme becomes nicely separated from the actual algorithm implementing this scheme (the CO solver). In this special session, we target the first stage, i.e. the phase where one studies how the learning problem at hand can be converted effectively into a CO problem. Papers are solicited from the area of, but not restricted to * New formulations of kernel machines in terms of convex optimization problems * Novel (convex) optimality principles for learning machines * Handling different model structures (e.g. additive models) and noise models using a convex solver. * Structure detection, regularization path and sparseness issues * Convex approaches to model selection problems * Convex techniques in Clustering and Exploratory Data Analysis (EDA) * SDP and SOCP based techniques in kernel machines * Application specific (convex) formulations Learning causality ----------------------------------------------------------------------- Organized by: - P. F. Verdes, Heidelberg Acad. of Sciences (Germany) - K. Hlavackova-Schindler, Austrian Acad. of Sciences (Austria) Discovering interdependencies and causal relationships is one of the most relevant challenges raised by the information era. As more and better data become available, there is an urgent need for techniques with the capability of efficiently sensing, for example, the hidden interactions within regulatory networks in Biology, the complex feedbacks of a climate system affected by global warming, the possible coupling of economic indexes, the subtle recruiting processes that may take place in the human brain, etc. As such, this important issue is receiving increasing attention in the recent literature. We encourage submissions to this topic that present original contributions on methodological and practical aspects of causality detection techniques by means of learning, statistical information processing and computational intelligence, possibly illustrated by real-world data applications. Reservoir Computing ----------------------------------------------------------------------- Organized by: - D. Verstraeten, Univ. Gent (Belgium) - B. Schrauwen, Univ. 
Gent (Belgium)

Reservoir computing is a novel temporal classification and regression technique that is a generalization of three prior concepts: the Echo State Network, the Liquid State Machine and Backpropagation Decorrelation. It has been successfully applied to many temporal machine learning problems in DSP, robotics, speech recognition and others. The technique uses a broad class of recurrent neural networks as a form of explicit kernel-like mapping that projects the input space into a higher-dimensional network state-space, while retaining temporal information using the fading memory property of the network. The network itself is left untrained, but instead a simple readout function (usually a linear discriminant) is applied to the network's state-space, which is generally very easy to train. This technique shows promising advantages compared to traditional temporal machine learning methods, but there are still many open research questions. This special session will present a mixture of theoretical results and state-of-the-art applications in several fields.

======================================================== ESANN - European Symposium on Artificial Neural Networks http://www.dice.ucl.ac.be/esann
* For submissions of papers, reviews,... Michel Verleysen Univ. Cath. de Louvain - Microelectronics Laboratory 3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium tel: +32 10 47 25 51 - fax: + 32 10 47 25 98 mailto:esann at dice.ucl.ac.be
* Conference secretariat d-side conference services 24 av. L. Mommaerts - B-1140 Evere - Belgium tel: + 32 2 730 06 11 - fax: + 32 2 730 06 00 mailto:esann at dice.ucl.ac.be
========================================================

From steve at cns.bu.edu Sat Oct 28 16:59:17 2006 From: steve at cns.bu.edu (Stephen Grossberg) Date: Sat, 28 Oct 2006 16:59:17 -0400 Subject: Connectionists: a neural model of 3D shape-from-texture Message-ID:

The following article is now available at http://www.cns.bu.edu/Profiles/Grossberg :

Grossberg, S., Kuhlmann, L., and Mingolla, E. A Neural Model of 3D Shape-From-Texture: Multiple-Scale Filtering, Boundary Grouping, and Surface Filling-In Vision Research, in press

ABSTRACT A neural model is presented of how cortical areas V1, V2, and V4 interact to convert a textured 2D image into a representation of curved 3D shape. Two basic problems are solved to achieve this: (1) Patterns of spatially discrete 2D texture elements are transformed into a spatially smooth surface representation of 3D shape. (2) Changes in the statistical properties of texture elements across space induce the perceived 3D shape of this surface representation. This is achieved in the model through multiple-scale filtering of a 2D image, followed by a cooperative-competitive grouping network that coherently binds texture elements into boundary webs at the appropriate depths using a scale-to-depth map and a subsequent depth competition stage. These boundary webs then gate filling-in of surface lightness signals in order to form a smooth 3D surface percept. The model quantitatively simulates challenging psychophysical data about perception of prolate ellipsoids (Todd and Akerstrom, 1987, J. Exp. Psych., 13, 242). In particular, the model represents a high degree of 3D curvature for a certain class of images, all of whose texture elements have the same degree of optical compression, in accordance with percepts of human observers. Simulations of 3D percepts of an elliptical cylinder, a slanted plane, and a photo of a golf ball are also presented.
Key words: shape, texture, neural modeling, 3D vision, visual cortex, FACADE model, BCS, FCS, multiple scales, perceptual grouping, size-disparity correlation, filling-in, shape-from-texture.

From mail at jan-peters.net Mon Oct 30 18:56:20 2006 From: mail at jan-peters.net (Jan Peters) Date: Mon, 30 Oct 2006 15:56:20 -0800 Subject: Connectionists: [NIPS 2006] Call for Posters & Participation for the Workshop: Towards a New Reinforcement Learning? Message-ID: <380DC639-0371-4BFC-93B8-2842F78398D8@jan-peters.net>

============================================== NIPS 2006 Workshop CALL FOR POSTERS & PARTICIPATION Towards a New Reinforcement Learning? http://www.jan-peters.net/Research/NIPS2006 Whistler, CANADA: December 8, 2006 ==============================================

Abstract =======

During the last decade, many areas of statistical machine learning have reached a high level of maturity with novel, efficient, and theoretically well-founded algorithms that increasingly removed the need for heuristics and manual parameter tuning, which dominated the early days of neural networks. Reinforcement learning (RL) has also made major progress in theory and algorithms, but is somehow lagging behind the success stories of classification, supervised, and unsupervised learning. Besides the long-standing question of the scalability of RL to larger and real-world problems, even in simpler scenarios a significant amount of manual tuning and human insight is needed to achieve good performance, e.g., as exemplified in issues like eligibility factors, learning rates, the choice of function approximators and their basis functions for policy and/or value functions, etc. Some of the reasons for the progress of other statistical learning disciplines come from connections to well-established fundamental learning approaches, like maximum-likelihood with EM, Bayesian statistics, linear regression, linear and quadratic programming, graph theory, function space analysis, etc. Therefore, the main question of this workshop is how other statistical learning techniques may be used to develop new RL approaches in order to achieve properties including higher numerical robustness, easier use in terms of open parameters, probabilistic and Bayesian interpretations, better scalability, the inclusion of prior knowledge, etc.

Format ======

Our goal is to bring together researchers who have worked on reinforcement learning techniques which are heading towards new approaches in terms of bringing other statistical learning techniques to bear on RL. The workshop will consist of short presentations, posters, and panel discussions. Topics to be addressed include, but are not limited to:
* Which methods from supervised and unsupervised learning are the most promising to help develop new RL approaches?
* How can modern probabilistic and Bayesian methods be beneficial for Reinforcement Learning?
* Which approaches can help reduce the number of open parameters in Reinforcement Learning?
* Can the Reinforcement Learning Problem be reduced to Classification or Regression?
* Can reinforcement learning be seen as a big filtering or prediction problem where the prediction of good actions is the main objective?
* Are there useful alternative ways to formulate the RL problem? E.g., as a dynamic Bayesian network, by using multiplicative rewards, etc.
* Can reinforcement learning be accelerated by incorporating biases, expert data from demonstration, prior knowledge on reward functions, etc.?
Invited Talks (tentative)
===================
Game-theoretic learning and planning algorithms, Geoff Gordon
Reductive Reinforcement Learning, John Langford
The Importance of Measure in Reinforcement Learning, Sham Kakade
Sample Complexity Results for Reinforcement Learning in Large State Spaces, Csaba Szepesvari
Policies Based on Trajectory Libraries, Martin Stolle
Towards Bayesian Reinforcement Learning, Pascal Poupart
Bayesian Policy Gradient Algorithms, Mohammad Ghavamzadeh
Bayesian RL for Partially Observable Domains, Joelle Pineau
Bayesian Reinforcement Learning with Gaussian Processes, Yaakov Engel
From Imitation Learning to Reinforcement Learning, Nathan Ratliff
Graphical Models for Imitation: A New Approach to Speeding up RL, Deepak Verma
Apprenticeship learning and robotic control, Andrew Ng
Variational Methods for Stochastic Optimization: A Unification of Population-Based Methods, Mark Andrews
Probabilistic inference for solving structured MDPs and POMDPs, Marc Toussaint

Poster Submission Instructions
========================
If you would like to present a poster at this workshop, please send an email to Jan Peters (jrpeters at usc.edu) no later than 13th November, 2006, specifying:
-> Title
-> Presenter and affiliation
-> A short abstract with one or two references

We intend to create an edited book with contributions from people who have presented at our workshop. We would be delighted if you would indicate whether you would be interested in adding a chapter/section to such a book.

Dates & Deadlines for Poster Submissions
=================================
November 13: Abstract Submission
November 15: Acceptance Notification

Organizing Committee
=====================
Jan Peters, University of Southern California
Drew Bagnell, Carnegie Mellon University
Stefan Schaal, University of Southern California

From Randy.OReilly at colorado.edu Tue Oct 31 16:50:33 2006
From: Randy.OReilly at colorado.edu (Randall C. O'Reilly)
Date: Tue, 31 Oct 2006 14:50:33 -0700
Subject: Connectionists: CCNC Conference: Final Notice for Registration
In-Reply-To: <200606162346.49042.Randy.OReilly@colorado.edu>
References: <200604142347.38871.Randy.OReilly@colorado.edu> <200606062219.06440.oreilly@psych.colorado.edu> <200606162346.49042.Randy.OReilly@colorado.edu>
Message-ID: <200610311450.34326.Randy.OReilly@colorado.edu>

--------------------------------------------------------------------------------------------------------------------------------
FINAL NOTICE --- CCNC2006 REGULAR REGISTRATION RATES END NOVEMBER 6!
--------------------------------------------------------------------------------------------------------------------------------

2ND ANNUAL CONFERENCE ON COMPUTATIONAL COGNITIVE NEUROSCIENCE
www.ccnconference.org

To be held in conjunction with the 2006 PSYCHONOMIC SOCIETY CONFERENCE, November 16-19, 2006, at the Hilton Americas hotel in Houston, TX.

CONFERENCE DATES: Wed-Thu November 15 & 16, 2006
LOCATION: Hilton Americas Hotel, Houston, TX

********************************************************************************************************
Please REGISTER through the ccnconference.org website or go directly to:
http://www.regonline.com/Checkin.asp?EventId=97555
at the regular rates of $75 faculty / $35 student through 6 Nov 06. Hotel reservations should be made separately.

After November 6, conference registration rates will increase to $125/$50. On-site registration will be available, but we would like to strongly discourage it. Early registration helps us get our head counts right and keep our costs down.
Thanks for your cooperation!
*******************************************************************************************************

The inaugural CCNC 2005 meeting, held prior to the Society for Neuroscience (SfN) meeting in Washington DC, was a great success, with approximately 250 attendees, 60 presented posters, and strongly positive reviews. In future years it will continue to be held on a rotating basis with other meetings such as (tentative list): Cognitive Neuroscience Society (CNS), Organization for Human Brain Mapping (OHBM), Cognitive Science Society (CogSci), Neural Information Processing Systems (NIPS), and Computational and Systems Neuroscience (COSYNE).

___________________________________________________________________________

The journal Brain Research has agreed to publish selected papers from this meeting as a dedicated section, and possibly a special issue, of the journal. Presenting authors can elect to have their work considered for this purpose. Final selections will be made by the program committee shortly after the meeting.

__________________________________________________________________________
Program (Final):
__________________________________________________________________________

* 2006 Keynote Speakers (confirmed):
Mike Kahana, University of Pennsylvania
Mark Seidenberg, University of Wisconsin-Madison

___________________________________________________________________________

* 3 Symposia (2 hours each):

1) Face/Object Recognition: Are Faces Special, or Just a Special Case? Computational models of face and object processing
Speakers:
Gary Cottrell, UCSD (Moderator)
Kalanit Grill-Spector, Stanford
Alice O'Toole, UT Dallas
Maximilian Riesenhuber, Georgetown
Description: What can computational models tell us about human visual object processing? We have excellent models that explain how we may recognize objects at multiple scales and orientations, while other models explain why faces may or may not be "special," or simply a special case. The goal of this symposium is to summarize what we understand with some degree of confidence, what is still not understood, and to what degree what we understand meshes with data on human and animal visual processing, including behavioral, fMRI, neurophysiological, and neuropsychological data.

2) Semantics: Development and Brain Organization of Conceptual Knowledge: Computational and Experimental Investigations
Speakers:
Jay McClelland, Stanford University (Moderator)
Linda Smith, Indiana University
Tim Rogers, University of Wisconsin
Alex Martin, National Institute of Mental Health
Description: The symposium is predicated on the assumption that there are links between conceptual structure, experience, conceptual development, and the brain organization of conceptual knowledge. Jay McClelland will begin with a computational perspective on conceptual development, followed by Linda Smith with an empirical perspective. We will then switch to the subject of the brain organization of conceptual knowledge, beginning with a computational perspective by Tim Rogers, followed by an empirical perspective from Alex Martin.

3) Emergent Cognitive Control: Computational and Empirical Investigations
Speakers:
Mike Mozer, University of Colorado (Moderator)
Stephen Monsell, University of Exeter
Gordon Logan, Vanderbilt
Sue Becker, McMaster
Description: Cognitive control is required whenever an individual performs novel activities, either because the task is novel or because the stimuli, responses, or task environment is unfamiliar.
Aspects of cognitive control include the deployment of visual attention, the selection of responses, and the use of working memory to subserve ongoing processing. The cognitive architecture is extremely flexible. The role of cognitive control is to reconfigure this general-purpose architecture to perform a specific task. Cognitive control is typically conceived of as an active process in frontal cortex that guides and routes processing in posterior systems. Even if the process is implemented in neural hardware, it still has the flavor of a homunculus---an intelligent overseer that inhibits or otherwise biases processing in less intelligent, subservient systems. In this symposium, we wish to explore alternative perspectives on cognitive control, including perspectives that treat control as an emergent property of a complex cognitive architecture, and perspectives in which control is not an explicit active process but rather a consequence of the sequential dynamics of experience.

___________________________________________________________________________

* 12 short talks featuring selected posters
* Poster sessions (2)

___________________________________________________________________________

DEADLINE FOR SUBMISSION OF ABSTRACTS: July 1, 2006 (SPACE AVAILABLE ONLY)
Abstracts to be submitted online via the website: www.ccnconference.org

Like last year, there will be two categories of submissions:
* Poster only
* Poster, plus short talk (15 min) to highlight the poster

Abstracts should be no more than 500 words. Women and underrepresented minorities are especially encouraged to apply. Reviewing for posters will be inclusive and only to ensure appropriateness to the meeting. Short talks will be selected on the basis of research quality, relevance to the conference theme, and expected accessibility in a talk format. Abstracts not selected for short talks will still be accepted as posters as long as they meet the appropriateness criteria.

NOTIFICATION OF ACCEPTANCE: August 1, 2006

--------------------------------------------------------------------------------------------------------------------------------------
2006 Planning Committee:
Suzanna Becker, McMaster University
Jonathan Cohen, Princeton University
Yuko Munakata, University of Colorado, Boulder
David Noelle, Vanderbilt University
Randall O'Reilly, University of Colorado, Boulder
Maximilian Riesenhuber, Georgetown University Medical Center

Executive Organizer: Thomas Hazy, University of Colorado, Boulder

For more information and to sign up for the mailing list visit: www.ccnconference.org
_______________________________________________
From wahba at stat.wisc.edu Tue Oct 31 14:24:43 2006
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Tue, 31 Oct 2006 13:24:43 -0600
Subject: Connectionists: LASSO-Patternsearch algorithm in ArXiV
Message-ID: <200610311924.k9VJOhu6006985@juno.stat.wisc.edu>

%I am resending this with a more informative subject line,
%so please just delete these lines with %. The previous
%msg a few minutes ago had the submission below with a non-informative subject line in the e-mail.
%Sorry for the confusion - Grace

Title: LASSO-Patternsearch algorithm with application to ophthalmology data
Authors: Weiliang Shi, Grace Wahba, Stephen Wright, Kristine Lee, Ronald Klein and Barbara Klein
arXiv address: http://arxiv.org/pdf/math.ST/0610916
Comments: A modern method for extracting multiple interacting risk factors in demographic and genetic data.

Also available via the TRLIST at http://www.stat.wisc.edu/~wahba along with related papers.

From awhite at cs.ualberta.ca Tue Oct 31 21:12:09 2006
From: awhite at cs.ualberta.ca (Adam White)
Date: Tue, 31 Oct 2006 19:12:09 -0700
Subject: Connectionists: NIPS Reinforcement Learning Competition - UPDATE
Message-ID:

NIPS*06 Workshop - Whistler, BC, December 9, 2006
"First Annual Reinforcement Learning Competition"
======================================================

----------------------------
UPDATE
----------------------------
New competition software has been released. You can now evaluate your agents locally for the Pentathlon. The server will be activated this week. You need to register your team with awhite at cs.ualberta.ca to use the server. However, local testing does not require an account.
------------------------
Important Dates
------------------------
Official start of competition: October 26, 2006
Competition end: December 3, 2006
Submission deadline for solution descriptions: December 5, 2006

The server will come online a few days after the initial software has been released. Participants are encouraged to begin writing agents and testing locally before connecting to the competition server.

----------------------------------
Organization Committee
----------------------------------
Adam White, University of Alberta, Alberta, Canada
Richard S. Sutton, University of Alberta, Alberta, Canada
Michael L. Littman, Rutgers University, New Jersey, USA
Doina Precup, McGill University, Montreal, Canada
Peter Stone, University of Texas at Austin, Texas, USA

------------------------------------------------
Technical Organization Committee
------------------------------------------------
Andrew Butcher, University of Alberta, Alberta, Canada