From Luc.Berthouze at aist.go.jp Thu Jan 3 21:00:19 2002 From: Luc.Berthouze at aist.go.jp (Luc Berthouze) Date: Fri, 04 Jan 2002 11:00:19 +0900 Subject: Postdoc position at AIST, Japan Message-ID: <4.3.2-J.20020104104952.036736e0@procsv16.u-aizu.ac.jp> We are seeking a postdoctoral fellow with research interests in computational neuroscience and cognitive science to participate in a project on human learning of speech-reading. Candidates should have expertise in computational modeling, in human or animal research on learning, and experience in applying neural models to artificial systems (robots or simulations). Competitive funding for a two-year period is available. Candidates should contact Luc Berthouze for more details. ----- Dr. Luc Berthouze Cognitive Neuroinformatics Group Neuroscience Research Institute (AIST) Tsukuba AIST Central 2 Umezono 1-1-1, Tsukuba 305-8568, Japan Tel: +81-298-615369 Fax: +81-298-615841 Email: Luc.Berthouze at aist.go.jp URL: http://staff.aist.go.jp/luc.berthouze/ From Neural.Plasticity at snv.jussieu.fr Fri Jan 4 06:55:07 2002 From: Neural.Plasticity at snv.jussieu.fr (Neural Plasticity) Date: Fri, 04 Jan 2002 11:55:07 +0000 Subject: Journal of Transplantation and Neural Plasticity Message-ID: <3.0.6.32.20020104115507.0092bcb0@mail.snv.jussieu.fr> Dear Colleagues, In July 1998 the Journal of Transplantation and Neural Plasticity was reborn with a revised editorial policy, a new look and an abbreviated name to reflect a broadened scope. Beginning with the July-September 1998 issue, the journal has been called Neural Plasticity and has been publishing full research papers, short communications, commentary and review articles concerning all aspects of neural plasticity, with special attention to its functional significance as reflected in behaviour. In vitro models, in vivo studies in anesthetized and behaving animals, as well as clinical studies in humans are included. In addition to the regular quarterly issues, there have been two special issues: 'Neural Plasticity, Learning and Memory', and 'Therapeutic Interventions in Motor Disorders', both of which were very well received. The latest issue of Science Citation Index lists Neural Plasticity as ranking 75th out of 200 neuroscience journals, with an impact factor of 2.33. This is ahead of such well-established journals as Experimental Brain Research, Behavioral Brain Research and Neuroscience Letters. I hope this will encourage you to submit your papers to Neural Plasticity. As editor-in-chief, I am committed to rapid, thorough and fair peer review resulting in publication of high-quality papers that make a significant contribution to the field. This can only be achieved with the strong support of the scientific community through submission of manuscripts and by the collaboration of the distinguished group of scientists who have joined the editorial board. Dr. Virginia Buchner is our very efficient managing editor. Her experience in managing the journal over the past few years assures rapid and efficient handling of the manuscripts once they have been accepted for publication. We hope that the journal will be an inspiring forum for neuroscientists studying the development of the nervous system, learning and memory processes, and reorganisation and recovery after brain injury. The editorial board has been selected to reflect this broad range of competence. Instructions for authors are attached.
[attachment deleted -- moderator ] We are very much looking forward to receiving your contributions for forthcoming issues of Neural Plasticity, whose future success depends entirely upon your interest and support. Yours sincerely, Susan J. Sara, Editor NEURAL PLASTICITY Institut des Neurosciences Fax: 33 1 44 27 3252 9 quai Saint Bernard neural.plasticity at snv.jussieu.fr 75005 Paris France From ken at phy.ucsf.edu Fri Jan 4 14:53:38 2002 From: ken at phy.ucsf.edu (Ken Miller) Date: Fri, 4 Jan 2002 11:53:38 -0800 Subject: Paper Available: Model of Cortical Layer 4 Message-ID: <15414.2114.190081.556538@coltrane.ucsf.edu> The following paper is now available as ftp://ftp.keck.ucsf.edu/pub/ken/kayser_miller.pdf or from http://www.keck.ucsf.edu/~ken (click on 'Publications', then on 'Models of Neural Development'). Kayser, A.S. and K.D. Miller. "Opponent inhibition: A developmental model of layer 4 of the neocortical circuit". This is a final draft of a manuscript that has now appeared as Neuron 33, 131-142 (2002). Summary: We model the development of the functional circuit of layer 4, the input-recipient layer, of cat primary visual cortex. The observed thalamocortical and intracortical circuitry codevelops under Hebb-like synaptic plasticity. Hebbian development yields opponent inhibition: inhibition evoked by stimuli anticorrelated with those that excite a cell. Strong opponent inhibition enables recognition of stimulus orientation in a manner invariant to stimulus contrast. These principles may apply to cortex more generally: Hebb-like plasticity can guide layer 4 of any piece of cortex to create opposition between anticorrelated stimulus pairs, and this enables recognition of specific stimulus patterns in a manner invariant to stimulus magnitude. Properties that are invariant across a cortical column are predicted to be those shared by opponent stimulus pairs; this contrasts with the common idea that a column represents cells with similar response properties. Ken Kenneth D. Miller telephone: (415) 476-8217 Associate Professor fax: (415) 476-4929 Dept. of Physiology, UCSF internet: ken at phy.ucsf.edu 513 Parnassus www: http://www.keck.ucsf.edu/~ken San Francisco, CA 94143-0444 From isabelle at clopinet.com Fri Jan 4 12:32:52 2002 From: isabelle at clopinet.com (Isabelle Guyon) Date: Fri, 04 Jan 2002 09:32:52 -0800 Subject: Special Issue of JMLR on Variable and Feature Selection Message-ID: <3C35E744.4E75F34A@clopinet.com> Special Issue of JMLR on Variable and Feature Selection Guest Editors: Isabelle Guyon and André Elisseeff Submission deadline: May 15, 2002 The Journal of Machine Learning Research invites authors to submit papers for the Special Issue on Variable and Feature Selection. This special issue follows the NIPS 2001 workshop on the same topic, but is also open to contributions that were not presented there. A special volume will be published for this issue. The call for papers can be found at: http://www.clopinet.com/isabelle/Projects/NIPS2001/call-for-papers.html If you have a potential interest in publishing in this special issue, please email Isabelle Guyon (isabelle at clopinet.com) as this will facilitate planning the issue. From wolfskil at MIT.EDU Mon Jan 7 15:31:39 2002 From: wolfskil at MIT.EDU (Jud Wolfskill) Date: Mon, 07 Jan 2002 15:31:39 -0500 Subject: book announcement--Kitano Message-ID: <5.0.2.1.2.20020107152908.00a7b4b8@po14.mit.edu> I thought readers of the Connectionists List might be interested in this book.
For more information, please visit http://mitpress.mit.edu/0262112663/ Thank you! Best, Jud Foundations of Systems Biology edited by Hiroaki Kitano The emerging field of systems biology involves the application of experimental, theoretical, and modeling techniques to the study of biological organisms at all levels, from the molecular, through the cellular, to the behavioral. Its aim is to understand biological processes as whole systems instead of as isolated parts. Developments in the field have been made possible by advances in molecular biology--in particular, new technologies for determining DNA sequence, gene expression profiles, protein-protein interactions, and so on. Foundations of Systems Biology provides an overview of the state of the art of the field. The book covers the central topics of systems biology: comprehensive and automated measurements, reverse engineering of genes and metabolic networks from experimental data, software issues, modeling and simulation, and system-level analysis. Hiroaki Kitano is Director of the ERATO Kitano Symbiotic Systems Project of the Japan Science and Technology Corporation and a Senior Researcher at Sony Computer Science Laboratories, Inc. Contributors Mutsuki Amano, Katja Bettenbrock, Hamid Bolouri, Dennis Bray, Jehoshua Bruck, John Doyle, Andrew Finney, Ernst Dieter Gilles, Martin Ginkel, Shugo Hamahashi, Michael Hucka, Kozo Kaibuchi, Mitsuo Kawato, Martin A. Keane, Hiroaki Kitano, John R. Koza, Andreas Kremling, Shinya Kuroda, Koji M. Kyoda, Guido Lanza, Andre Levchenko, Pedro Mendes, Satoru Miyano, Eric Mjolsness, Mineo Morohashi, William Mydlowec, Masao Nagasaki, Yoichi Nakayama, Shuichi Onami, Herbert Sauro, Nicolas Schweighofer, Bruce Shapiro, Thomas Simon Shimizu, Jörg Stelling, Paul W. Sternberg, Zoltan Szallasi, Masaru Tomita, Mattias Wahde, Tau-Mu Yi, Jessen Yu. 7 x 9, 320 pp. 100 illus. cloth ISBN 0262112663 Jud Wolfskill Associate Publicist MIT Press 5 Cambridge Center, 4th Floor Cambridge, MA 02142 617.253.2079 617.253.1709 fax wolfskil at mit.edu From Luc.Berthouze at aist.go.jp Mon Jan 7 20:50:48 2002 From: Luc.Berthouze at aist.go.jp (Luc Berthouze) Date: Tue, 08 Jan 2002 10:50:48 +0900 Subject: Correction: Postdoc position at AIST, Japan Message-ID: <4.3.2-J.20020108100822.00c580b0@procsv16.u-aizu.ac.jp> Dear Connectionists, This is an addendum to my previous post for a postdoc position at AIST, Japan. I have received a large number of applications with backgrounds in signal processing and speech processing in particular, probably due to my lack of explanation of what speech-reading is. Speech-reading, also often referred to as lipreading, is the ability to perceive speech by: (1) watching the movements of a speaker's mouth, (2) observing all other visible clues including facial expressions and gestures, and (3) using the context of the message and the situation. Beyond the skill itself, our project aims at understanding and modelling the learning process through which such a skill is acquired. In particular, we are interested in evaluating the importance of motoric activity in the process. The candidates should therefore have a background and research interests in the area of sensorimotor coordination and categorization, both at the neural and behavioral levels. Thank you, Luc Berthouze The original post was: We are seeking a postdoctoral fellow with research interests in computational neuroscience and cognitive science to participate in a project on human learning of speech-reading.
Candidates should have expertise in computational modeling, in human or animal research on learning, and experience in applying neural models to artificial systems (robots or simulations). Competitive funding for a two-year period is available. Candidates should contact Luc Berthouze for more details. ----- Dr. Luc Berthouze Cognitive Neuroinformatics Group Neuroscience Research Institute (AIST) Tsukuba AIST Central 2 Umezono 1-1-1, Tsukuba 305-8568, Japan Tel: +81-298-615369 Fax: +81-298-615841 Email: Luc.Berthouze at aist.go.jp URL: http://staff.aist.go.jp/luc.berthouze/ From rojas at inf.fu-berlin.de Tue Jan 8 03:57:05 2002 From: rojas at inf.fu-berlin.de (rojas) Date: Tue, 8 Jan 2002 09:57:05 +0100 Subject: Research Lecturer Position in Berlin In-Reply-To: <4.3.2-J.20020108100822.00c580b0@procsv16.u-aizu.ac.jp> Message-ID: Freie Universitat Berlin Research Lecturership in Neuroinformatics / Theoretical Neuroscience (Oberassistent/in, C 2) Department of Biology, Chemistry and Pharmacy Institute of Biology Applications are invited for the position of research lecturer in neuroinformatics / theoretical neuroscience. The post is funded by the Donors' Association for the Promotion of Sciences and Humanities in Germany (Stifterverband fur die Deutsche Wissenschaft). The successful applicant will be required to provide research and teaching in the said area. In line with article 106 of the Higher Education Act of the land of Berlin (Berliner Hochschulgesetz), a postdoctoral qualification (Habilitation) in the field of Informatics or Neurobiology or comparable qualifications for a teaching career in higher education are required. The successful candidate is expected to have extensive experience in the acquisition and evaluation of neural data as well as international experience in teaching and research. She/he will collaborate with experimental neuroscience groups at the Freie Universitat Berlin and participate in the activities of a Collaborative Research Centre in neuroscience (title: Mechanisms of developmental and experience-dependent neural plasticity). She/he should not be older than 35 years at the time of appointment. In general, the language of instruction will be German, but some activities may be offered in English. Non-German speaking applicants are expected to learn German within two years. The Freie Universitat Berlin is an equal opportunities employer. The successful candidate will be offered civil servant or comparable public sector employee status (Oberassistent Grade "C2", limited to four years according to the German system). Applications, quoting Vacancy Oberassistent/in, must reach the Freie Universitat Berlin Fachbereich Biologie, Chemie, Pharmazie Institut fur Biologie Prof. Dr. Randolf Menzel 14195 Berlin, Konigin-Luise-Str. 28-30 Germany not later than 4 weeks after the publication of this advertisement. Applications should include the following: a letter describing your interest in the position and pertinent experience, a curriculum vitae, a list of publications, and copies of the certificates of academic qualifications held. The Freie Universitat Berlin is a state-funded university. It has some 40,000 students and 520 professors. The University has 12 departments structured into more than 100 institutes. Detailed information is available at the following web sites: www.fu-berlin.de and www.bcp.fu-berlin.de Prof. Dr. Raul Rojas Freie Universitat Berlin FB Mathematik und Informatik Takustr.
9 14195 Berlin Tel: ++49/30/83875100 From giacomo at ini.phys.ethz.ch Tue Jan 8 04:11:06 2002 From: giacomo at ini.phys.ethz.ch (Giacomo Indiveri) Date: Tue, 08 Jan 2002 10:11:06 +0100 Subject: Telluride Workshop and Summer School on Neuromorphic Engineering Message-ID: <3C3AB7AA.3070807@ini.phys.ethz.ch> Apologies for cross-postings... -- We invite applications for the annual three-week "Telluride Workshop and Summer School on Neuromorphic Engineering" that will be held in Telluride, Colorado from Sunday, June 30 to Saturday, July 21, 2002. The application deadline is FRIDAY, MARCH 15, and application instructions are described at the bottom of this document. Like each of these workshops that have taken place since 1994, the 2001 Workshop and Summer School on Neuromorphic Engineering, sponsored by the National Science Foundation, the Gatsby Foundation, the Whitaker Foundation, the Office of Naval Research, and the Center for Neuromorphic Systems Engineering at the California Institute of Technology, was an exciting event and a great success. A detailed report is available at the web-site: http://www.ini.unizh.ch/telluride01. We strongly encourage interested parties to browse through the previous workshop web pages. For a discussion of the underlying science and technology and a report on the 2001 school, see the September 20 issue of "The Economist" (or visit the web-page http://www.economist.com/science/tq/displayStory.cfm?Story_ID=779503 ) GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on active participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications from nonlinear adaptive control of complex systems to the design of smart sensors, vision, speech understanding and robotics. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware (with an emphasis on analog and asynchronous digital VLSI), are inspired by biological systems. However, existing applications are modest and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of real biological nervous systems as whole systems. FORMAT: The three-week summer school will include background lectures on systems neuroscience (in particular learning, oculo-motor and other motor systems, and attention), practical tutorials on analog VLSI design, small mobile robots (Koalas, Kheperas and LEGO), hands-on projects, and special interest groups. Participants are required to take part in, and if possible complete, at least one of the proposed projects.
They are furthermore encouraged to become involved in as many of the other activities proposed as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons and after dinner. In the early afternoon there will be tutorials on a wide spectrum of topics, including analog VLSI, mobile robotics, auditory systems, central-pattern-generators, selective attention mechanisms, etc. Projects that are carried out during the workshop will be centered in a number of working groups, including: * active vision * audition * olfaction * motor control * central pattern generator * robotics * multichip communication * analog VLSI * learning The active perception project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include an active vision system consisting of a three degree-of-freedom pan-tilt unit, and a silicon retina chip. The central pattern generator group will focus on small walking and undulating robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged and segmented robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots. The robotics group will use rovers and working digital vision boards as well as other possible sensors to investigate issues of sensorimotor integration, navigation and learning. The audition group aims to develop biologically plausible algorithms and aVLSI implementations of specific auditory tasks such as source localization and tracking, and sound pattern recognition. Projects will be integrated with visual and motor tasks in the context of a robot platform. The multichip communication project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. LOCATION AND ARRANGEMENTS: The SCHOOL will take place in the small town of Telluride, 9000 feet high in Southwest Colorado, about a six-hour drive from Denver (350 miles). United Airlines provides daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level.
However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of PCs running LINUX and Microsoft Windows. No cars are required. Given the small size of the town, we recommend that you do NOT rent a car. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains. Unless otherwise arranged with one of the organizers, we expect participants to stay for the entire duration of this three-week workshop. FINANCIAL ARRANGEMENT: Notification of acceptances will be mailed out around Monday, April 8, 2002. Participants are expected to pay a $275.00 workshop fee at that time in order to reserve a place in the workshop. The cost of a shared condominium will be covered for all academic participants, but upgrades to a private room will cost extra. Participants from National Laboratories and Industry are expected to pay for these condominiums. Travel reimbursement of up to $500 for US domestic travel and up to $800 for overseas travel will be possible if financial help is needed (please specify on the application). HOW TO APPLY: Applicants should be at the level of graduate students or above (i.e., postdoctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Applications should include: * First name, Last name, valid email address. * Curriculum Vitae. * One page summary of background and interests relevant to the workshop. * Description of special equipment or software needed for demonstrations that could be brought to the workshop. * Two letters of recommendation Complete applications should be sent to: Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 e-mail: telluride at salk.edu FAX: (858) 587 0417 APPLICATION DEADLINE: MARCH 15, 2002 From aapo at james.hut.fi Tue Jan 8 10:44:34 2002 From: aapo at james.hut.fi (Aapo Hyvarinen) Date: Tue, 8 Jan 2002 17:44:34 +0200 Subject: Papers on natural image statistics and V1 Message-ID: Dear Colleagues, the following papers are now available on the web. ------------------------------------------------------------ J. Hurri and A. Hyvarinen. Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video. Submitted manuscript. http://www.cis.hut.fi/aapo/ps/gz/Hurri01.ps.gz Abstract: Recently, statistical models of natural images have shown the emergence of several properties of the visual cortex. Most models have considered the non-Gaussian properties of static image patches, leading to sparse coding or independent component analysis. Here we consider the basic time dependencies of image sequences instead of their non-Gaussianity. We show that simple-cell-type receptive fields emerge when temporal response strength correlation is maximized for natural image sequences. Thus, temporal response strength correlation, which is a nonlinear measure of temporal coherence, provides an alternative to sparseness in modeling simple cell receptive field properties.
Our results also suggest an interpretation of simple cells in terms of invariant coding principles that have previously been used to explain complex cell receptive fields. ------------------------------------------------------------ A. Hyvarinen. An Alternative Approach to Infomax and Independent Component Analysis. Neurocomputing, in press (CNS'01). http://www.cis.hut.fi/aapo/ps/gz/CNS01.ps.gz Abstract: Infomax means maximization of information flow in a neural system. A nonlinear version of infomax has been shown to be connected to independent component analysis and the receptive fields of neurons in the visual cortex. Here we show a problem of nonrobustness of nonlinear infomax: it is very sensitive to the choice of the nonlinear neuronal transfer function. We consider an alternative approach in which the system is linear, but the noise level depends on the mean of the signal, as in a Poisson neuron model. This gives predictions similar to those of nonlinear infomax, but seems to be more robust. ------------------------------------------------------------ Also, a considerably revised version of a paper that I already announced on the connectionists list in June 2001: P.O. Hoyer and A. Hyvarinen. A Multi-Layer Sparse Coding Network Learns Contour Coding from Natural Images. Vision Research, in press http://www.cis.hut.fi/aapo/ps/gz/VR02.ps.gz Abstract: An important approach in visual neuroscience considers how the function of the early visual system relates to the statistics of its natural input. Previous studies have shown how many basic properties of the primary visual cortex, such as the receptive fields of simple and complex cells and the spatial organization (topography) of the cells, can be understood as efficient coding of natural images. Here we extend the framework by considering how the responses of complex cells could be sparsely represented by a higher-order neural layer. This leads to contour coding and end-stopped receptive fields. In addition, contour integration could be interpreted as top-down inference in the presented model. ---------------------------------------------------- Aapo Hyvarinen Neural Networks Research Centre Helsinki University of Technology P.O.Box 9800, FIN-02015 HUT, Finland Email: Aapo.Hyvarinen at hut.fi Home page: http://www.cis.hut.fi/aapo/ ---------------------------------------------------- From nicka at dai.ed.ac.uk Tue Jan 8 11:00:00 2002 From: nicka at dai.ed.ac.uk (Nick Adams) Date: Tue, 08 Jan 2002 16:00:00 +0000 Subject: PhD thesis available Message-ID: <3C3B1780.44AD@dai.ed.ac.uk> I am pleased to announce that my PhD thesis entitled DYNAMIC TREES: A HIERARCHICAL PROBABILISTIC APPROACH TO IMAGE MODELLING is now available at http://www.anc.ed.ac.uk/code/adams/ Matlab and C++ code implementing Gibbs Sampling, Metropolis and the Mean Field Variational approaches and various EM-style learning algorithms based upon them for the Dynamic Tree model is also available at http://www.anc.ed.ac.uk/code/adams/dt/dt.tgz ---------------------------------------------------------------------- ABSTRACT This work introduces a new class of image model which we call Dynamic Trees or DTs. A Dynamic Tree model specifies a prior over structures of trees, each of which is a forest of one or more tree-structured belief networks (TSBN).
In the literature, standard tree-structured belief network models were found to produce ``blocky'' segmentations when naturally occurring boundaries within an image did not coincide with those of the subtrees in the rigid fixed structure of the network. Dynamic Trees have a flexible architecture which allows the structure to vary to accommodate configurations where the subtree and image boundaries align, and experimentation with the model showed significant improvements. They are also hierarchical in nature allowing a multi-scale representation and are constructed within a well founded Bayesian framework. For large models the number of tree configurations quickly becomes intractable to enumerate over, presenting a problem for exact inference. Techniques such as Gibbs sampling over trees are considered and search using simulated annealing finds high posterior probability trees on synthetic 2-d images generated from the model. However, simulated annealing and sampling techniques are rather slow. Variational methods are applied to the model in an attempt to approximate the posterior by a simpler tractable distribution, and the simplest of these techniques, mean field, found solutions comparable to simulated annealing on the order of 100 times faster. This increase in speed goes a long way towards making real-time inference in the Dynamic Tree viable. Variational methods have the further advantage that by attempting to model the full posterior distribution it is possible to gain an indication as to the quality of the solutions found. An EM-style update based upon mean field inference is derived, and the learned conditional probability tables (describing state transitions between a node and its parent) are compared with exact EM on small tractable fixed architecture models. The mean field approximation, by virtue of its form, is biased towards fully factorised solutions, which tends to create degenerate CPTs; despite this, mean field learning still produces solutions whose log-likelihood rivals exact EM. Development of algorithms for learning the probabilities of the prior over tree structures completes the Dynamic Tree picture. After discussion of the relative merits of certain representations for the disconnection probabilities and initial investigation on small model structures, the full Dynamic Tree model is applied to a database of images of outdoor scenes where all of its parameters are learned. DTs are seen to offer significant improvement in performance over the fixed architecture TSBN, and in a coding comparison the DT achieves 0.294 bits per pixel (bpp) compression compared to 0.378 bpp for lossless JPEG on images of 7 colours. From shiffrin at indiana.edu Tue Jan 8 16:16:33 2002 From: shiffrin at indiana.edu (Rich Shiffrin) Date: Tue, 08 Jan 2002 16:16:33 -0500 Subject: ASIC Conference Announcement Message-ID: <3C3B61B1.EBA12DC0@indiana.edu> First Annual Summer Interdisciplinary Conference (ASIC) Squamish, British Columbia, Canada July 30 (Tuesday) - August 5 (Monday), 2002 Organizer: Richard M. Shiffrin, Indiana University, Bloomington, IN 47405 This conference is modeled after the winter AIC conference that has been held for almost 30 years: days are free for leisure activities and the talks are in the late afternoon/early evening. The date has been chosen to make it convenient for attendees to bring family/friends. The conference is open to all interested parties, and an invitation is NOT needed. The subject is interdisciplinary, within the broad frame of Cognitive Science.
Much more information is available on the conference website at: http://www.psych.indiana.edu/asic2002/ [ Below is an excerpt from the web page. -- Connectionists moderator ] Conference Aims The conference will cover a wide range of subjects in cognitive science, including: * neuroscience, cognitive neuroscience * psychology (including perception, psychophysics, attention, information processing, memory and cognition) * computer science * machine intelligence and learning * linguistics * and philosophy We especially invite talks emphasizing theory, mathematical modeling and computational modeling (including neural networks and artificial intelligence). Nonetheless, we require talks that are comprehensible and interesting to a wide scientific audience. Speakers will provide overviews of current research areas, as well as of their own recent progress. Conference Format The conference will start with a reception on the first evening, Tuesday, July 30, at 5 PM, followed by a partial session. Each of the next five evenings, the sessions will begin at 4:30 PM (time to be confirmed later). Drinks, light refreshments and snacks will be available starting at 4:15 PM, prior to the start of the session, and at the midway break. A session will consist of 6-7 talks and a mid-session break, finishing at approximately 8:30-8:45 PM. A banquet will be held following the final session of the conference. There are no parallel sessions or presentations. We will have a separate room, day and time set aside for poster presentations, both for persons preferring this format to a spoken presentation, and for any presenters who cannot be allotted speaking slots. The time and date for the posters has yet to be decided, but we are considering an hour just preceding the regular session on one of the days of the conference. It will not escape the savvy reader that this conference format frees most of the day for various activities with colleagues, family, and friends. From jose at psychology.rutgers.edu Wed Jan 9 18:48:25 2002 From: jose at psychology.rutgers.edu (Stephen J. Hanson) Date: Wed, 09 Jan 2002 18:48:25 -0500 Subject: POSITION at RUTGERS-NEWARK CAMPUS: COGNITIVE SCIENCE--Deadline extended Message-ID: <3C3CD6C9.90501@psychology.rutgers.edu> We are extending the DEADLINE until FEB 15th. RUTGERS UNIVERSITY-Newark Campus. The Department of Psychology anticipates making one tenure-track, Assistant Professor level appointment in the area of COGNITIVE SCIENCE. In particular, we are seeking individuals in any one of the following THREE areas: LEARNING (Cognitive Modeling), COMPUTATIONAL NEUROSCIENCE or SOCIAL-COGNITION (interests in NEUROIMAGING in any of these areas would also be a plus, since the Department in conjunction with UMDNJ has recently acquired a 3T Neuroimaging Center (see http://www.psych.rutgers.edu/fmri)). The successful candidate is expected to develop and maintain an active, externally funded research program, and to teach at both the graduate and undergraduate levels. Review of applications begins February 15th, 2002, pending final budgetary approval from the administration. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are encouraged to apply. Please send a CV, a statement of current and future research interests, and three letters of recommendation to COGNITIVE SCIENCE SEARCH COMMITTEE, Department of Psychology, Rutgers University, Newark, NJ 07102. Email enquiries can be made to cogsci at psychology.rutgers.edu.
Also see http://www.psych.rutgers.edu. From Chris.Diehl at jhuapl.edu Wed Jan 9 11:35:39 2002 From: Chris.Diehl at jhuapl.edu (Diehl, Chris P.) Date: Wed, 9 Jan 2002 11:35:39 -0500 Subject: Job Announcement Message-ID: <91D1D51C2955D111B82B00805F1998950BD8CB9F@aples2.jhuapl.edu> Applications are invited for a senior research staff position in the Research and Technology Development Center (RTDC) at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). The successful candidate will join the System and Information Sciences (RSI) group of the RTDC and perform original R&D supporting research thrusts in automated video analysis, statistical language modeling and information retrieval, and bioinformatics. The ideal candidate will have a Ph.D. in electrical engineering, computer science or related field and extensive experience developing and applying machine learning techniques to challenging real-world problems. Expertise in the following areas is desirable: (1) Bayesian networks (2) Support vector classification and regression/kernel methods (3) Statistical learning theory JHU/APL is a not-for-profit university affiliated research laboratory, located between Washington D.C. and Baltimore, MD. The RSI group comprises 20+ M.S. and Ph.D. level Computer Scientists, Electrical Engineers and Physicists performing R&D in the areas of statistical modeling and inference, bioinformatics, network modeling, video analysis, information retrieval, intelligent databases, distributed computing and human-computer interaction. Further information on JHU/APL can be found at the JHU/APL web site: www.jhuapl.edu. Interested and qualified candidates with U.S. citizenship should send a curriculum vita via e-mail or regular mail to: Dr. I-Jeng Wang Johns Hopkins University Applied Physics Laboratory 11100 Johns Hopkins Road Laurel, MD 20723-6099 E-mail: I-Jeng.Wang at jhuapl.edu The Johns Hopkins University Applied Physics Laboratory is an equal opportunity employer. From cardoso at ifado.de Thu Jan 10 03:20:48 2002 From: cardoso at ifado.de (Simone Cardoso de Oliveira) Date: Thu, 10 Jan 2002 09:20:48 +0100 Subject: Job Offer Message-ID: <3C3D4EE0.25AA1E@ifado.de> Dear List-members, I would like to announce the following job offers. Please feel free to post them in your home institutions. Thank you very much! ---------------------------------------------------------------------------- Within a joint Israeli-German research collaboration, positions for 3 PhD Students (Bat IIa/2 or equivalent) are available immediately. The interdisciplinary project "METACOMP - Adaptive Control of Motor Prostheses" aims at advancing techniques for the development of neural prostheses. It combines the efforts of two theoretical (Prof. A. Aertsen, Freiburg, and Prof. K. Pawelzik, Bremen) and two experimental groups (Dr. S. Cardoso de Oliveira, Dortmund and Prof. E. Vaadia, Jerusalem). One of the students will work on developing adaptive models that can learn to control movements on the basis of experimental data (Bremen). The second student will test the adaptive models in psychophysical experiments in a virtual reality environment (Dortmund). The student in Israel will test the model and decoding mechanisms on brain activity recorded from awake primates. Applicants should have a background in computational neuroscience or related disciplines and will need good knowledge of computer programming.
Please send applications by January 31 (preferably by e-mail), with a statement about which of the 3 positions you are particularly interested in, to: Dr. S. Cardoso de Oliveira, Institut für Arbeitsphysiologie an der Universität Dortmund, Ardeystraße 67, 44139 Dortmund, e-mail: cardoso at ifado.de. -- ----------------------------------------------------------------------------- Dr. Simone Cardoso de Oliveira, PhD Institut für Arbeitsphysiologie an der Universität Dortmund Ardeystr. 67, 44139 Dortmund Tel.: ++49-(0)231-1084-311 (Lab) ++49-(0)234-333 8488 (Home) Fax: ++49-(0)231-1084-340 cardoso at arb-phys.uni-dortmund.de http://www.ifado.de/projekt07/cardoso/ ----------------------------------------------------------------------------- From becker at meitner.psychology.mcmaster.ca Thu Jan 10 13:46:32 2002 From: becker at meitner.psychology.mcmaster.ca (S. Becker) Date: Thu, 10 Jan 2002 13:46:32 -0500 (EST) Subject: openings in computer vision/image processing/machine learning. In-Reply-To: <5.1.0.14.0.20020109160342.00af68d0@medipattern.com> Message-ID: Dear connectionists, As a member of the Scientific Advisory Board for Medipattern Corp., I'd like to bring to your attention the following announcement of job openings for specialists in computer vision/image processing/machine learning. cheers, Sue Sue Becker, Associate Professor Department of Psychology, McMaster University becker at mcmaster.ca 1280 Main Street West, Hamilton, Ont. L8S 4K1 Fax: (905)529-6225 www.science.mcmaster.ca/Psychology/sb.html Tel: 525-9140 ext. 23020 Medipattern is a young, fully financed biomedical enterprise in Toronto, Canada. We want to revolutionize the standard of care in detection and treatment of emergent disease through the application of our novel three-dimensional medical imaging technology and our vision of large-scale machine learning. We are looking for creative, knowledgeable people with a background in machine vision or image processing and statistical, information theoretic, symbolic or neural approaches to machine learning and pattern recognition. We are looking for people eager to work in a dynamic, entrepreneurial environment. Please send your resume to jobs at medipattern.com. We will be in touch with people who appear to be a good fit. From mdorigo at iridia.ulb.ac.be Thu Jan 10 12:44:09 2002 From: mdorigo at iridia.ulb.ac.be (Marco Dorigo) Date: Thu, 10 Jan 2002 18:44:09 +0100 (CET) Subject: ANTS'2002: Call for Papers Message-ID: <200201101744.g0AHi9F05685@iridia.ulb.ac.be> ANTS'2002 - From Ant Colonies to Artificial Ants: Third International Workshop on Ant Algorithms Brussels, Belgium, September 11-14, 2002 CALL FOR PAPERS (up-to-date information on the workshop is maintained on the web at http://iridia.ulb.ac.be/~ants/ants2002/) SCOPE OF THE WORKSHOP The behavior of social insects in general, and of ant colonies in particular, has long fascinated both the scientist and the layman. Researchers in ethology and animal behavior have proposed many models to explain interesting aspects of social insect behavior such as self-organization and shape-formation. Recently, ant algorithms have been proposed as a novel computational model that replaces the traditional emphasis on control, preprogramming, and centralization with designs featuring autonomy, emergence, and distributed functioning. These designs are proving flexible and robust: they adapt quickly to changing environments and continue functioning when individual elements fail.
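As a minimal illustration of the pheromone-mediated, decentralized behavior described above (this sketch is not part of the call itself; the path lengths, evaporation rate, and deposit rule are illustrative assumptions), the following Python fragment simulates ants repeatedly choosing between two paths and converging on the shorter one without any central control:

------------------------------------------------------------
# Toy double-bridge simulation: each ant picks a path in proportion to
# its current pheromone level; pheromone evaporates a little each step,
# and more is deposited on shorter paths, so choices concentrate on the
# shorter path over time.
import random

def double_bridge(n_ants=2000, lengths=(1.0, 2.0), evaporation=0.02, seed=0):
    rng = random.Random(seed)
    pheromone = [1.0, 1.0]   # initial pheromone on each path
    choices = [0, 0]         # how many ants took each path
    for _ in range(n_ants):
        total = pheromone[0] + pheromone[1]
        path = 0 if rng.random() < pheromone[0] / total else 1
        choices[path] += 1
        pheromone = [(1.0 - evaporation) * p for p in pheromone]
        pheromone[path] += 1.0 / lengths[path]   # shorter path, larger deposit
    return choices

if __name__ == "__main__":
    print(double_bridge())   # most ants end up on the shorter path 0
------------------------------------------------------------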
A particularly successful research direction in ant algorithms, known as "Ant Colony Optimization", is dedicated to their application to discrete optimization problems. Ant colony optimization has been applied successfully to a large number of difficult combinatorial problems including the traveling salesman problem, the quadratic assignment problem, scheduling problems, etc., as well as to routing in telecommunication networks. ANTS'2002 is the third edition of the only event entirely devoted to ant algorithms and to ant colony optimization. Also of great interest to the workshop are models of ant colony behavior which could stimulate new algorithmic approaches. The workshop will give researchers in both real ant behavior and in ant colony optimization an opportunity to meet, to present their latest research, and to discuss current developments and applications. The three-day workshop will be held in Brussels, Belgium, September 12-14, 2002. In the late afternoon of September 11th there will also be a tutorial on ant algorithms. RELEVANT RESEARCH AREAS ANTS'2002 solicits contributions dealing with any aspect of ant algorithms and Ant Colony Optimization. Typical, but not exclusive topics of interest are: (1) Models of aspects of real ant colony behavior that can stimulate new algorithmic approaches. (2) Empirical and theoretical research in ant algorithms and ant colony optimization. (3) Application of ant algorithms and ant colony optimization methods to real-world problems. (4) Related research in swarm intelligence. SUBMISSION PROCEDURE & PUBLICATION DETAILS Conference proceedings will be published by Springer-Verlag in the Lecture Notes in Computer Science series (http://www.springer.de/comp/lncs/index.html), and distributed to the participants at the conference site. Submitted papers must be between 8 and 12 pages long, and must conform to the LNCS style (http://www.springer.de/comp/lncs/authors.html). It is important that submissions are already in the LNCS format (papers that do not respect this formatting may not be considered). Submitted papers should be either in Postscript or PDF format and should be emailed to: ants-submissions at iridia.ulb.ac.be. Each submission will be acknowledged by email. Each submitted paper will be peer reviewed by at least two referees. BEST PAPER AWARD A best paper award will be presented at the workshop. REGISTRATION AND FURTHER INFORMATION By submitting a camera-ready paper, the author(s) agree that at least one author will attend and present their paper at the workshop. A registration fee of 200 EUR will cover organization expenses, workshop proceedings, the possibility to attend a tutorial on ant algorithms in the late afternoon of September 11, coffee breaks, and a workshop social dinner on the evening of Friday, September 13. A reduced PhD student registration fee of 150 EUR is available. Proof of enrollment in a doctoral school will be required. Up-to-date information about the workshop will be available at the ANTS'2002 web site (http://iridia.ulb.ac.be/~ants/ants2002/). For information about local arrangements, registration forms, etc., please refer to the above mentioned web site, or contact the local organizers at the address below. LIMITED NUMBER OF PLACES The number of participants will be limited. If you intend to participate, please fill in and email the INTENTION FORM available at the workshop web page to: ants-registration at iridia.ulb.ac.be Only researchers who have received a confirmation of their intention form may take part in the workshop.
IMPORTANT DATES Submission deadline April 14th, 2002 Notification of acceptance May 31st, 2002 Camera ready copy June 16th, 2002 Tutorial Sep 11th, 2002 Workshop Sep 12-14, 2002 ANTS'2002 CONFERENCE COMMITTEE PROGRAM CHAIR Marco DORIGO, IRIDIA, ULB, Belgium (mdorigo at ulb.ac.be) TECHNICAL PROGRAM AND PUBLICATION CHAIRS Gianni DI CARO, IRIDIA, ULB, Belgium (gdicaro at ulb.ac.be) Michael SAMPELS, IRIDIA, ULB, Belgium (msampels at ulb.ac.be) LOCAL ARRANGEMENTS Christian BLUM, IRIDIA, ULB, Belgium (cblum at ulb.ac.be) Mauro BIRATTARI, IRIDIA, ULB, Belgium (mbiro at ulb.ac.be) PROGRAM COMMITTEE Under formation. PROGRAM CHAIR ADDRESS Marco Dorigo, Ph.D. Chercheur Qualifie' du FNRS Tel +32-2-6503169 IRIDIA CP 194/6 Fax +32-2-6502715 Universite' Libre de Bruxelles Secretary +32-2-6502729 Avenue Franklin Roosevelt 50 http://iridia.ulb.ac.be/~mdorigo/ 1050 Bruxelles http://iridia.ulb.ac.be/~ants/ants2002/ Belgium http://iridia.ulb.ac.be/dorigo/ACO/ACO.html CONFERENCE LOCATION Avenue A. Buyl 87, 1050 Brussels, Belgium, Building C - 4th floor (IRIDIA, Dorigo's lab, is at the 5th floor of the same building). There will be signs giving directions to the workshop room. RELATED CONFERENCES Note that just before ANTS'2002, the conference PPSN-VII will take place in Granada, Spain (http://ppsn2002.ugr.es/ppsn2002.shtml). From school at cogs.nbu.bg Thu Jan 10 06:27:22 2002 From: school at cogs.nbu.bg (CogSci Summer School) Date: Thu, 10 Jan 2002 13:27:22 +0200 Subject: summer school Message-ID: 9th International Summer School in Cognitive Science Sofia, New Bulgarian University, July 8 - 28, 2002 Courses: * Jeff Elman (University of California at San Diego, USA) - Connectionist Models of Learning and Development * Michael Mozer (University of Colorado, USA) - Connectionist Models of Human Perception, Attention, and Awareness * Eran Zaidel (University of California at Los Angeles, USA) - Hemispheric Specialization * Barbara Knowlton (University of California at Los Angeles, USA) - Cognitive Neuroscience of Memory * Markus Knauff (University of Freiburg, Germany) - Imagery and Reasoning: Cognitive and Cortical Models * Stella Vosniadou (University of Athens, Greece) - Cognitive Development and Conceptual Change * Peter van der Helm (University of Nijmegen, the Netherlands) - Structural Description of Visual Form * Antonio Rizzo (University of Siena, Italy) - The Nature of Interactive Artifacts and Their Design * Nick Chater (University of Warwick, UK) - Simplicity as a Fundamental Cognitive Principle Organised by the New Bulgarian University Endorsed by the Cognitive Science Society For more information look at: http://www.nbu.bg/cogs/events/ss2002.htm Central and East European Center for Cognitive Science New Bulgarian University 21 Montevideo Str. Sofia 1635 phone: 955-7518 Svetlana Petkova Administrative manager Central and East European Center for Cognitive Science From hinton at cs.toronto.edu Thu Jan 10 17:29:19 2002 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Thu, 10 Jan 2002 17:29:19 -0500 Subject: graduate study opportunities Message-ID: <02Jan10.172920edt.453142-19237@jane.cs.toronto.edu> The machine learning group at the University of Toronto has recently expanded and we are looking for about 10 new graduate students. 
The core machine learning faculty include: Craig Boutilier (Computer Science) Brendan Frey (Electrical and Computer Engineering) Geoffrey Hinton (Computer Science) Radford Neal (Statistics, Computer Science) Sam Roweis (Computer Science) Rich Zemel (Computer Science) In addition, there are many other faculty interested in applying machine learning techniques in specific domains such as vision, speech, medicine, and finance. More details about the individual faculty can be obtained at http://learning.cs.toronto.edu/people.html Possible research areas include: Neural Computation and Perceptual Learning, Graphical Models, Monte Carlo Methods, Bayesian Inference, Spectral Methods, Reinforcement Learning and Markov Decision Problems, Coding and Information Theory, Bioinformatics, and Game Theory. Applications for graduate study in the Department of Computer Science are due by Feb 1. For details of how to apply see http://www.cs.toronto.edu/DCS/Grad/Apply/ ********** Please forward this message ********** ********** to students who may be interested. ********** (sorry if you get multiple copies of this message) From mcps at cin.ufpe.br Fri Jan 11 12:29:26 2002 From: mcps at cin.ufpe.br (Marcilio C. Pereira de Souto) Date: Fri, 11 Jan 2002 15:29:26 -0200 (EDT) Subject: SBRN 2002 - PRELIMINARY CFP Message-ID: --------------------------- Apologies for cross-posting --------------------------- PRELIMINARY CALL FOR PAPERS ********************************************************************** SBRN'2002 - VII BRAZILIAN SYMPOSIUM ON NEURAL NETWORKS (http://www.cin.ufpe.br/~sbiarn02) Recife, November 11-14, 2002 ********************************************************************** The biennial Brazilian Symposium on Artificial Neural Networks (SBRN) - of which this is the 7th event - is a forum dedicated to Neural Networks (NNs) and other models of computational intelligence. The emphasis of the Symposium will be on original theories and novel applications of these computational models. The Symposium welcomes paper submissions from researchers, practitioners, and students worldwide. The proceedings will be published by the IEEE Computer Society. Selected, extended, and revised papers from SBRN'2002 will also be considered for publication in a special issue of the International Journal of Neural Systems and of the International Journal of Computational Intelligence and Applications. SBRN'2002 is sponsored by the Brazilian Computer Society (SBC) and co-sponsored by SIG/INNS/Brazil Special Interest Group of the International Neural Networks Society in Brazil. It will take place November 11-14, and will be held in Recife. Recife, located on the northeast coast of Brazil, is known as the "Brazilian Venice" because of its many canals and waterways and the innumerable bridges that span them. It is the major gateway to the Northeast with regular flights to all major cities in Brazil as well as Lisbon, London, Frankfurt, and Miami. See more information about the place (http://www.braziliantourism.com.br/pe-pt1-en.html) and about the hotel (http://www.hotelgavoa.com.br) that will host the event. SBRN'2002 will be held in conjunction with the XVI Brazilian Symposium on Artificial Intelligence (http://www.cin.ufpe.br/~sbiarn02) (SBIA). SBIA has its main focus on symbolic AI. Cross-fertilization of these fields will be strongly encouraged. Both Symposiums will feature keynote speeches and tutorials by world-leading researchers. The deadline for submissions is April 15, 2002.
More details on paper submission and conference registration will be coming soon. Sponsored by the Brazilian Computer Society (SBC) Co-Sponsored by SIG/INNS/Brazil Special Interest Group of the International Neural Networks Society in Brazil Organised by the Federal University of Pernambuco (UFPE)/Centre of Informatics (CIn) Published by the IEEE Computer Society General Chair: Teresa B. Ludermir (UFPE/CIn, Brazil) tbl at cin.ufpe.br Program Chair: Marcilio C. P. de Souto (UFPE/CIn, Brazil) mcps at cin.ufpe.br Deadlines: Submission: 15 April 2002 Acceptance: 17 June 2002 Camera-ready: 22 August 2002 Non-exhaustive list of topics which will be covered during SBRN'2002: Applications: finances, data mining, neurocontrol, time series analysis, bioinformatics; Architectures: cellular NNs, hardware and software implementations, new models, weightless models; Cognitive Sciences: adaptive behaviour, natural language, mental processes; Computational Intelligence: evolutionary systems, fuzzy systems, hybrid systems; Learning: algorithms, evolutionary and fuzzy techniques, reinforcement learning; Neurobiological Systems: bio-inspired systems, biologically plausible networks, vision; Neurocontrol: robotics, dynamic systems, adaptive control; Neurosymbolic processing: hybrid approaches, logical inference, rule extraction, structured knowledge; Pattern Recognition: signal processing, artificial/computational vision; Theory: radial basis functions, Bayesian systems, function approximation, computability, learnability, computational complexity. From pli at richmond.edu Fri Jan 11 09:42:40 2002 From: pli at richmond.edu (Ping Li) Date: Fri, 11 Jan 2002 09:42:40 -0500 Subject: positions in cognitive neuroscience of language Message-ID: Research Assistant Professorship and Post-doctoral Fellowships in Cognitive Neuroscience of Language Applications are invited for a research assistant professorship and two post-doctoral fellowships in Cognitive Neuroscience of Language, attached to the Laboratory for Language and Cognitive Neuroscience in the Department of Linguistics, tenable from as soon as possible, but in any case no later than 1 August, 2002. The appointments will be made on a 2-3 year fixed-term basis. The successful applicants will join our interdisciplinary group that employs state-of-the-art fMRI recording and analysis techniques as well as computational modeling to study topics in cognitive neuroscience of language processes. In addition to our resources and well developed plans for fMRI research at the University of Hong Kong, we are engaged in long-term collaborations with colleagues at Research Imaging Center at San Antonio, National Institutes of Health, University of Pittsburgh and the Key Laboratory of Cognitive Science and Learning of the Ministry of Education of China. Ample scanner time and training in fMRI techniques will be provided. The University of Hong Kong runs an active Cognitive Science undergraduate degree. The successful applicants may have the opportunity to contribute to the teaching of Linguistics and Cognitive Science at this university. Applicants should have a PhD in cognitive psychology, cognitive neuroscience, linguistics, or related fields. The appointments will be made usually on the first point of the 4-point salary scale: HK$542,520 - 660,240 per annum for the research assistant professor post and HK$378,240 - 497,580 per annum for the post-doctoral fellow post. 
For the research assistant professor post, a taxable financial subsidy fixed at HK$7,000 per month towards rented accommodation may be provided, subject to the Prevention of Double Housing Benefits Rules. At current rates, salaries tax will not exceed 15% of gross income. The appointments carry leave and medical benefits. Application forms (41/1000) can be obtained at https://extranet.hku.hk/apptunit/; or from the Appointments Unit (Senior), Registry, The University of Hong Kong, Hong Kong (Fax: (852) 2540 6735 or 2559 2058; E-mail: apptunit at reg.hku.hk). Applicants should submit a cover letter, resume, and application form by March 30, 2002 to: Dr. Li-Hai Tan, Director, Laboratory for Language and Cognitive Neuroscience, Department of Linguistics and Cognitive Science Programme, University of Hong Kong, Hong Kong (email: tanlh at hku.hk). From terry at salk.edu Mon Jan 14 13:33:26 2002 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 14 Jan 2002 10:33:26 -0800 (PST) Subject: NEURAL COMPUTATION 14:1 In-Reply-To: <200111292340.fATNecF17024@purkinje.salk.edu> Message-ID: <200201141833.g0EIXQM40787@purkinje.salk.edu> Neural Computation - Contents - Volume 14, Number 1 - January 1, 2002 VIEW Mechanisms Shaping Fast Excitatory Postsynaptic Currents in the Central Nervous System Mladen I. Glavinovic NOTE Adjusting the Outputs of a Classifier to New a Priori Probabilities: A Simple Procedure Marco Saerens, Patrice Latinne and Christine Decaestecker LETTERS Unitary Events in Multiple Single-Neuron Spiking Activity: I. Detection and Significance Sonja Grun, Markus Diesmann and Ad Aertsen Unitary Events in Multiple Single-Neuron Spiking Activity: II. Non-Stationary Data Sonja Grun, Markus Diesmann and Ad Aertsen Statistical Significance of Coincident Spikes: Count-Based Versus Rate-Based Statistics Robert Guetig, Ad Aertsen and Stefan Rotter Representational Accuracy of Stochastic Neural Populations Stefan D. Wilke and Christian W. Eurich Supervised Dimension Reduction of Intrinsically Low-Dimensional Data Nikos Vlassis, Yoichi Motomura, and Ben Krose Clustering Based on Conditional Distributions in an Auxiliary Space Janne Sinkkonen and Samuel Kaski ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2002 - VOLUME 14 - 12 ISSUES
                 USA    Canada*   Other Countries
Student/Retired  $60    $64.20    $108
Individual       $88    $94.16    $136
Institution      $506   $451.42   $554
* includes 7% GST
MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From ps629 at columbia.edu Mon Jan 14 15:08:02 2002 From: ps629 at columbia.edu (Paul Sajda) Date: Mon, 14 Jan 2002 15:08:02 -0500 Subject: Postdoctoral Position in Computational Vision Message-ID: <3C433AA2.CE11F3F7@columbia.edu> Postdoctoral Position in Computational Vision-- The Laboratory for Intelligent Imaging and Neurocomputing (LIINC) has an immediate opening for a two-year position to conduct research in probabilistic models which integrate concepts from biological vision for robust visual scene analysis. A mathematical and computational background is desired, particularly in computer vision, probabilistic modeling and optimization. Previous work in computational neuroscience is preferred, but not required. This position will be part of a project investigating biomimetic methods for analysis of literal and non-literal imagery. Applicants should send a CV, three representative papers and the names of three references to Prof.
Paul Sajda, Department of Biomedical Engineering, Columbia University, 530 W 120th Street, NY, NY 10027. Or email to ps629 at columbia.edu. Further information on LIINC can be found at newton.bme.columbia.edu -- Paul Sajda, Ph.D. Associate Professor Department of Biomedical Engineering 530 W 120th Street Columbia University New York, NY 10027 tel: (212) 854-5279 fax: (212) 854-8725 email: ps629 at columbia.edu http://www.columbia.edu/~ps629 From oja at james.hut.fi Tue Jan 15 03:17:03 2002 From: oja at james.hut.fi (Erkki Oja) Date: Tue, 15 Jan 2002 10:17:03 +0200 (EET) Subject: ICA meeting Message-ID: <200201150817.KAA64371@james.hut.fi> CALL FOR PARTICIPATION: European Meeting on Independent Component Analysis The International Institute for Advanced Scientific Studies (IIASS), The University of Salerno, and the European project BLISS (Blind Source Separation and Applications) are organizing a 3-day ICA meeting at the IIASS in Vietri Sul Mare, Italy, on February 21 to 23, 2002. The meeting is organized as a workshop / winter school in which leading European researchers on Independent Component Analysis and Blind Source Separation will give tutorial lectures. We invite graduate students, researchers, and practitioners of ICA to attend the meeting. It is also possible to submit a short paper. The registration fee is 150 Euro, not including accommodation and travel expenses. The number of attendees is limited and registrations will be accepted on a first come, first served basis. The invited speakers are: L. Almeida, IST/INESC-ID, Portugal S. Fiori, Perugia University, Italy A. Harmelin, Fraunhofer Institut, Germany C. Jutten, INPG, France C. Morabito, Reggio Calabria University, Italy K.-R. Muller, Fraunhofer Institut, Germany E. Oja, HUT, Finland F. Palmieri, Naples University, Italy R. Parisi, Rome University, Italy T. Pham, INPG, France F. Silva, IST/INESC-ID, Portugal R. Tagliaferri, Salerno University, Italy H. Valpola, HUT, Finland A. Ziehe, Fraunhofer Institut, Germany For information about the titles of the talks, the programme, registration, paper submission, travelling, etc. please consult the Web page of the meeting: http://ica.sa.infn.it . Prof. Maria Marinaro Prof. Erkki Oja From esalinas at wfubmc.edu Tue Jan 15 12:29:45 2002 From: esalinas at wfubmc.edu (Emilio Salinas) Date: Tue, 15 Jan 2002 12:29:45 -0500 Subject: Neuroscience PhD at WFUSM Message-ID: <3C446709.7BEF0641@wfubmc.edu> Graduate Training in Neuroscience at Wake Forest University School of Medicine The Department of Neurobiology & Anatomy at the Wake Forest University School of Medicine is seeking applicants for its doctoral program. The Department is a major neuroscience research center, with strong emphasis in the areas of systems neuroscience, sensorimotor and integrative systems, cognitive neuroscience, plasticity and development. For the 2002 academic year, matriculants will receive full tuition remission, a stipend of $18,500, a laptop computer and a health care benefit. For more information on the program and departmental research opportunities, please visit our website at www.wfubmc.edu/nba. Emilio Salinas Department of Neurobiology and Anatomy Wake Forest University School of Medicine Winston-Salem NC 27157 Tel: (336) 713-5176, 5177 Fax: (336) 716-4534 e-mail: esalinas at wfubmc.edu From mhc27 at cornell.edu Thu Jan 17 13:18:16 2002 From: mhc27 at cornell.edu (Morten H. 
Christiansen) Date: Thu, 17 Jan 2002 13:18:16 -0500 Subject: Two Postdoctoral positions in Cognitive Science Message-ID: TWO POSTDOCTORAL POSITIONS IN COGNITIVE SCIENCE OF LANGUAGE Two postdoctoral training opportunities - one at Cornell University (US) and one at the University of Warwick (UK) - are available immediately to investigate the role of multiple-cue integration in language acquisition across different languages. The project is funded by the Human Frontiers Science Program and involves four closely interacting research teams in the US (Morten Christiansen, Cornell University), the UK (Nick Chater, University of Warwick), France (Peter Dominey, Institut des Sciences Cognitives, Lyon) and Japan (Mieko Ogura, Tsurumi University). MULTIPLE-CUE INTEGRATION IN LANGUAGE ACQUISITION: MECHANISMS AND NEURAL CORRELATES How do children acquire the subtle and complex structure of their native language with such remarkable speed and reliability, and with little direct instruction? Recent computational and acoustic analyses of language addressed to children indicate that there are rich cues to linguistic structure available in the child's input. Moreover, evidence from developmental psycholinguistics shows that infants are sensitive to many sound-based (phonological) and intonational (prosodic) cues in the input - cues that may facilitate language acquisition. Although this research indicates that linguistic input is rich with possible cues to linguistic structure, there is an important caveat: the cues are only partially reliable and none considered alone provide an infallible bootstrap into language. To acquire language successfully, it seems that the child needs to integrate a great diversity of multiple probabilistic cues to linguistic structure in an effective way. Our research program aims to provide a rigorous cross-linguistic test of the hypothesis that multiple-cue integration is crucial for the acquisition of syntactic structure. The research has four interrelated strands: 1) Computational and acoustic analyses of child-directed speech. 2) Psycholinguistic and artificial language learning experiments. 3) Computational modeling using neural networks and statistical learning methods. 4) Event-related potential (ERP) studies. Together, the two postdoctoral positions will span the four research strands. For more information about the project please refer to our web site: http://cnl.psych.cornell.edu/mcila. CORNELL UNIVERSITY POSITION The Cornell Cognitive Neuroscience Lab headed by Morten Christiansen is coordinating the research efforts and the work here involves all four research strands. The postdoctoral position is primarily aimed at the ERP work but may also include the other research strands, depending on the interests of the candidate. Candidates should have a PhD in cognitive science, psychology or related discipline. Experience with high-density ERP experimentation is highly desirable as are interests in computational modeling of language. Salary will be based on experience in relation to the NIH postdoctoral scale. For more information about the Cornell Cognitive Neuroscience Lab, please visit our web site: http://cnl.psych.cornell.edu. Candidates interested in the Cornell position should email a vita and a short statement about graduate training and research interests to Morten Christiansen (mhc27 at cornell.edu). 
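As a toy illustration of the multiple-cue integration idea described above -- not taken from the project itself; the cue names, reliabilities and the simple log-odds (naive Bayes) combination below are all invented for illustration -- the following Python sketch shows how several individually unreliable cues can yield a more reliable decision when combined:

import math
import random

random.seed(0)

# Hypothetical cues to a binary structural decision (e.g. "is there a
# phrase boundary here?").  Accuracies are invented purely for illustration.
CUE_ACCURACY = {"prosodic_pause": 0.65,
                "phonological_stress": 0.60,
                "distributional_context": 0.70}

def observe(accuracy, truth):
    # Each cue "votes" for the correct answer only with its own reliability.
    return truth if random.random() < accuracy else (not truth)

def integrate(votes):
    # Combine cues by summing log-likelihood ratios (naive Bayes style).
    score = 0.0
    for cue, vote in votes.items():
        weight = math.log(CUE_ACCURACY[cue] / (1.0 - CUE_ACCURACY[cue]))
        score += weight if vote else -weight
    return score > 0.0

trials = 20000
single = {cue: 0 for cue in CUE_ACCURACY}
combined = 0
for _ in range(trials):
    truth = random.random() < 0.5
    votes = {cue: observe(acc, truth) for cue, acc in CUE_ACCURACY.items()}
    for cue, vote in votes.items():
        single[cue] += (vote == truth)
    combined += (integrate(votes) == truth)

for cue in CUE_ACCURACY:
    print(cue, "alone:", round(single[cue] / trials, 3))
print("integrated cues:", round(combined / trials, 3))

With these invented reliabilities, each cue alone is right only 60-70% of the time, while the integrated decision is right on the order of 72% of the time; the point is simply that a combination of partially reliable cues can outperform any single cue, which is the intuition behind the multiple-cue integration hypothesis.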
UNIVERSITY OF WARWICK POSITION The Warwick team headed by Nick Chater will focus primarily on research strands 1 and 3, especially on statistical analysis of child-directed speech across the three languages, and computational modeling, using statistical and connectionist techniques, of how relevant information is acquired and processed. There may also be some experimental work on artificial grammar learning. A strong candidate for this position would have a PhD in cognitive science or related discipline, and have an interest in, and preferably experience with, corpus analysis and statistical and connectionist models of language. Candidates interested in the Warwick position should email a vita and a short statement about graduate training and research interests to Nick Chater (nick.chater at warwick.ac.uk). Both positions are initially for two years, but may be extended into a third year. In addition to salary, funds are available for travel to conferences and meetings between research teams. Neither position carries any special citizenship requirements. -- ------------------------------------------------------------------------ Morten H. Christiansen Assistant Professor Phone: +1 (607) 255-3570 Department of Psychology Fax: +1 (607) 255-8433 Cornell University Email: mhc27 at cornell.edu Ithaca, NY 14853 Office: 240 Uris Hall Web: http://www.psych.cornell.edu/faculty/people/Christiansen_Morten.htm Lab Web Site: http://cnl.psych.cornell.edu ------------------------------------------------------------------------ Nick Chater Professor Institute for Applied Cognitive Science Department of Psychology Phone: +44 2476 523537 University of Warwick Fax: +44 2476 524225 Coventry, CV4 7AL, UK Email: nick.chater at warwick.ac.uk Web: http://www.warwick.ac.uk/fac/sci/Psychology/staff/academic.html#NC ------------------------------------------------------------------------ From tcp1 at leicester.ac.uk Thu Jan 17 11:28:20 2002 From: tcp1 at leicester.ac.uk (Tim Pearce) Date: Thu, 17 Jan 2002 16:28:20 -0000 Subject: Postdoc / phd positions Message-ID: <001901c19f73$fa154d40$e468fea9@rothko> Sorry for any cross-postings ..... Due to our Personnel Department indulging in the Christmas (alcoholic) spirit the deadline for applications to guarantee consideration is now 15th Feb. but will remain open until filled! =============== Postdoctoral Research Associate in Computational Neuroscience R&AIA £17,451 to £26,229 pa Available immediately for 4 years Ref: R9398/JAU A postdoctoral researcher is required for a 4-year EC-funded project available from January 2002. The project concerns the development of biologically constrained sensory processing models for performing stereotypical moth-like chemotaxis behaviour in uncertain environments. We propose to develop biologically-inspired sensor, information processing and control systems for a c(hemosensing) unmanned aerial vehicle (UAV). The cUAV will identify and track volatile compounds of different chemical composition in outdoor environments. Its olfactory and sensory-motor systems are to be inspired by the moth. This development continues our research in artificial and biological olfaction, sensory processing and analysis, neuronal models of learning, real-time behavioural control, and robotics. Fleets of cUAVs will ultimately be deployed to sense, identify, and map the airborne chemical composition of large-scale environments. The mobile robotics aspects of the project will be carried out with the assistance of an associated PhD studentship position.
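As a loose illustration of the stereotypical moth-like chemotaxis behaviour mentioned in the advertisement above -- a toy Python sketch under invented assumptions (made-up plume model, step sizes and thresholds), not the project's actual sensor, neuronal or control models -- the following fragment implements a crude "surge upwind when odour is detected, cast crosswind with widening sweeps when it is lost" controller:

import math
import random

random.seed(1)

SOURCE = (0.0, 0.0)        # odour source; the wind blows from it towards +x

def odour_detected(x, y):
    # Crude, invented plume model: patchy odour inside a cone downwind of the source.
    if x <= 0.0:
        return False
    half_width = 2.0 + 0.2 * x                            # plume widens downwind
    return abs(y) < half_width and random.random() < 0.7  # intermittent hits

def surge_and_cast(x, y, steps=2000):
    # Surge upwind while odour is detected; cast crosswind with widening
    # sweeps when it is lost.  All step sizes are arbitrary.
    cast_dir, sweep, remaining = 1.0, 2, 2
    for t in range(steps):
        if odour_detected(x, y):
            x -= 0.5                            # surge towards the source
            sweep, remaining = 2, 2
        else:
            y += 0.5 * cast_dir                 # cast across the wind
            remaining -= 1
            if remaining == 0:                  # end of sweep: reverse and widen
                cast_dir, sweep = -cast_dir, sweep + 2
                remaining = sweep
        if math.hypot(x - SOURCE[0], y - SOURCE[1]) < 2.0:
            return t                            # close enough to the source
    return None

print("steps to reach the source:", surge_and_cast(x=25.0, y=5.0))

A real cUAV controller would, of course, be driven by the biologically constrained olfactory and sensory-motor models that the project aims to develop; the sketch only conveys the qualitative surge-and-cast strategy.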
Further details on the project and the research teams can be found at http://www.le.ac.uk/eg/tcp1/amoth/ The project includes significant funding and opportunities for travel within Europe to visit the laboratories of the participating consortia (in Switzerland, France, and Sweden) and outside Europe to attend international scientific meetings. A strong mathematical and computer modelling background is required in order to develop a biologically constrained model of the insect antennal lobe and protocerebrum. Expertise is required in the area of neuronal modelling, although not necessarily in the area of olfaction. Good team skills are also a necessity. Informal enquiries regarding these positions and the project in general should be addressed to the project co-ordinator, Dr. T.C. Pearce, Department of Engineering, University of Leicester, Leicester LE1 7RH, United Kingdom, +44 116 223 1290, t.c.pearce at le.ac.uk For the Research Associate post, application forms and further particulars are available from the Personnel Office, tel 0116 252 5114, fax 0116 252 5140, email personnel at le.ac.uk, or via www.le.ac.uk/personnel/jobs. Closing date: 15 February 2002. PhD Studentship in Mobile Robotics A postgraduate researcher is required for a 4-year EC-funded project available from January 2002. The project concerns the development of an unmanned aerial vehicle (UAV) robot to perform stereotypical moth-like chemotaxis behaviour in uncertain environments. We propose to develop biologically-inspired sensor, information processing and control systems for a c(hemosensing) UAV. The cUAV will identify and track volatile compounds of different chemical composition in outdoor environments. Its olfactory and sensory-motor systems are to be inspired by the moth, which will be supported by computational neuroscience model development conducted by an associated Postdoctoral Research Associate. This development continues our research in artificial and biological olfaction, sensory processing and analysis, neuronal models of learning, real-time behavioural control, and robotics. Fleets of cUAVs will ultimately be deployed to sense, identify, and map the airborne chemical composition of large-scale environments. Further details on the project and the research teams can be found at http://www.le.ac.uk/eg/tcp1/amoth/. The project includes significant funding and opportunities for travel within Europe to visit the laboratories of the participating consortia (in Switzerland, France, and Sweden) and outside Europe to attend international scientific meetings. A first degree (at the 2(i) level or higher) is required in mathematics, computer science, physics, or engineering. The student will be responsible for deploying the chemical sensors on the UAV and designing interface circuitry, assisting with construction of the UAV, programming the on-board flight systems (incorporating a neuronal model), and assisting with field trials. Applicants should have a demonstrated interest in one or more of the following: UAVs, neuroscience, robotics, and/or artificial intelligence. Good team skills are essential. The studentship includes a stipend of £12,000 per year for four years and includes provision for overseas PhD fees although EU nationals may also apply. For the PhD Studentship, applications and informal enquiries should be addressed to the project co-ordinator, Dr. T.C. Pearce, Department of Engineering, University of Leicester, Leicester LE1 7RH, United Kingdom, +44 116 223 1290, t.c.pearce at le.ac.uk.
Closing date: 15 February 2002. -- T.C. Pearce, PhD URL: http://www.leicester.ac.uk/eg/tcp1/ Lecturer in Bioengineering E-mail: t.c.pearce at leicester.ac.uk Department of Engineering Tel: +44 (0)116 223 1290 University of Leicester Fax: +44 (0)116 252 2619 Leicester LE1 7RH Bioengineering, Transducers and United Kingdom Signal Processing Group From jel1 at nyu.edu Fri Jan 18 10:38:01 2002 From: jel1 at nyu.edu (Joseph LeDoux) Date: Fri, 18 Jan 2002 10:38:01 -0500 Subject: Synaptic Self Message-ID: <3C484159.CBCAFE56@nyu.edu> NOW AVAILABLE!!! PRESS RELEASE For more information please contact: Holly Watson at 212.366.2147 or via email at hwatson at penguinputnam.com SYNAPTIC SELF How Our Brains Become Who We Are Joseph LeDoux author of The Emotional Brain "Synaptic Self represents a brilliant manifesto at the cutting edge of psychology's evolution into a brain science. Joseph LeDoux is one of the field's pre-eminent, most important thinkers." -Daniel Goleman, author of Emotional Intelligence "In this pathbreaking synthesis, Joseph LeDoux draws on dazzling insights from the cutting edge of neuroscience to generate a new conception of an enduring mystery: the nature of the self. Enlightening and engrossing, LeDoux's bold formulation will change the way you think about who you are." -Daniel L. Schacter, Chairman of Psychology at Harvard University, and author of The Seven Sins of Memory "LeDoux offers a fascinating view into that 'most unaccountable of machinery,' the human brain." --Kirkus Reviews "Synaptic Self is a wonderful tour of the brain circuitry behind some of the critical aspects of the mind. LeDoux is an expert tour guide and it is well worth listening. His perspective takes you deep into the cellular basis of what it is to be a thinking being." -Antonio R. Damasio, neuroscientist and author of The Feeling of What Happens How is the self, the personality which we present to other people on a daily basis and through which we experience the world, actually constructed? Although nature and nurture have long been believed to both participate, their contributions have remained vague. Joseph LeDoux, author of the SYNAPTIC SELF: How Our Brains Become Who We Are (Viking; January 14, 2002; 400 pages), provides a detailed explanation grounded in cutting edge brain science, arguing that nature and nurture speak the same language-they construct our personality by influencing synapses. In SYNAPTIC SELF, LeDoux proposes an entirely new, biologically based theory, one which does not exclude other ways of understanding the self--whether spiritual, aesthetic, or moral--but rather enriches and broadens these by grounding them in a neurological framework. LeDoux's theory centers on synapses, the spaces between brain cells, which serve as channels of communication between neurons and the means by which the brain accomplishes most of its business, including the key component of constructing a self. According to LeDoux, synapses are not only the means by which we think, act, imagine, feel, and remember, but also the means by which interactions take place between these different mental processes. Without such interactions, we wouldn't be able to attend to and remember the important things in life better than the trivial. Synapses are responsible for encoding the essence of the individual, allowing us to be the same person from moment to moment, week to week, and year to year. Memory is thus a key process in constructing the self. 
And because many of the brain's systems form memories, either of the conscious kind or, more often, of the implicit, unconscious kind, synaptic interactions between the systems keep the self together. SYNAPTIC SELF is a dramatically new look at human personality as a product of the integrated brain and represents an important breakthrough in one of the last frontiers of brain research. About the Author: Joseph LeDoux is a neuroscientist and Henry and Lucy Moses Professor of Science at New York University's Center for Neural Sciences. SYNAPTIC SELF: How Our Brains Become Who We Are By Joseph LeDoux Viking On-Sale Date: January 14, 2002 Pages: 400; Price: $25.95; ISBN: 0-670-03028-7 Discovery Channel Book Club Selection To find out more information, or schedule an interview with the author, please contact To obtain a review copy, please contact Dina Jordan via fax at 212.366.2952 or email at djordan at penguinputnam.com For more information, please visit our website at www.penguinputnam.com Penguin Putnam Inc. is the U.S. affiliate of the internationally renowned Penguin Group. Penguin Putnam is one of the leading U.S. adult and children's trade book publishers, owning a wide range of imprints and trademarks including Berkley Books, Dutton, Frederick Warne, G.P. Putnam's Sons, Grosset & Dunlap, New American Library, Penguin, Philomel, Riverhead Books and Viking, among others. The Penguin Group is owned by Pearson plc, the international media group. From radford at cs.toronto.edu Sat Jan 19 17:45:48 2002 From: radford at cs.toronto.edu (Radford Neal) Date: Sat, 19 Jan 2002 17:45:48 -0500 Subject: Postdoc in Bayesian modeling, MCMC, bioinformatics Message-ID: <02Jan19.174555edt.453148-5749@jane.cs.toronto.edu> Postdoctoral position in BAYESIAN MODELING, MARKOV CHAIN MONTE CARLO, BIOINFORMATICS Radford Neal, University of Toronto I am looking for a postdoc who is interested in the following areas: o Bayesian statistical modeling, especially flexible models such as those based on Dirichlet process mixtures, neural networks, and Gaussian processes. o Markov chain Monte Carlo methods, either general in scope, or tailored to specific Bayesian models. o Applications of flexible models in bioinformatics, especially analysis of spectroscopic data, analysis of gene expression data from DNA microarrays, and inference for phylogenetic trees. Candidates should have a PhD in a relevant discipline (or be about to receive one), and have an excellent background in at least one of the above areas, plus a willingness and ability to learn about the others. This position is for one year, with possibility of extension to two years, starting no later than August 2002 (preferably sooner). There will be opportunities to apply for sessional teaching in Statistics or Computer Science if desired. The University of Toronto has a large and diverse group of faculty, graduate students, and postdocs in Statistics and in Machine Learning. To learn more about what is going on here, visit the Statistics site at http://utstat.utoronto.ca and the web site for the Machine Learning group (in Computer Science) at http://www.cs.utoronto.ca/learning/. For more about my personal research interests, see my web pages at http://www.cs.utoronto.ca/~radford/. Applicants should EMAIL a CV, the email addresses of two references, and a description of their research background and interests to me at radford at cs.utoronto.ca. Plain text is preferred, but Postscript, PDF, and (if you really have to) MS Word documents will also be read.
All applications that are received by February 15 will be considered. Those received after that will be considered if the position has not already been filled. ---------------------------------------------------------------------------- Radford M. Neal radford at cs.utoronto.ca Dept. of Statistics and Dept. of Computer Science radford at utstat.utoronto.ca University of Toronto http://www.cs.utoronto.ca/~radford ---------------------------------------------------------------------------- From eero.simoncelli at nyu.edu Sun Jan 20 14:08:09 2002 From: eero.simoncelli at nyu.edu (Eero Simoncelli) Date: Sun, 20 Jan 2002 14:08:09 -0500 (EST) Subject: Summer course: Computational Visual Neuroscience Message-ID: <200201201908.OAA02223@calaf.cns.nyu.edu> Computational Neuroscience: Vision Cold Spring Harbor Laboratory Summer Course 13 - 26 June 2002 Computational modeling and simulation have produced important advances in our understanding of neural processing. This intensive 2-week summer course focuses on areas of visual science in which interactions among psychophysics, neurophysiology, and computation have been especially fruitful. Topics to be covered this year include: neural representation and coding; photon detection and the neural basis of color vision, pattern vision, and visual motion perception; oculomotor function; object/shape representation; visual attention and decision-making. The course combines lectures (generally two 3-hour sessions each day) with hands-on problem solving using the MatLab programming environment in a computer laboratory. Lectures are given by the course organizers and by invited lecturers, including: Edward Adelson (MIT), David Brainard (U Pennsylvania), Kathleen Cullen (McGill U), Norma Graham (Columbia U), Kalanit Grill Spector (Stanford U), David Heeger (Stanford U), Dan Kersten (U Minnesota), Tony Movshon (NYU), Bill Newsome (Stanford U), Fred Rieke (U Washington), Mike Shadlen (U Washington), Stefan Treue (U Tuebingen), Preeti Verghese (Smith-Kettlewell Institute). Application deadline: 15 March 2002 Further information & application materials: http://www.cns.nyu.edu/csh02 Course Organizers: E.J. Chichilnisky, Salk Institute Paul W. Glimcher, New York University Eero P. Simoncelli, New York University From jaz at cs.rhul.ac.uk Mon Jan 21 05:37:50 2002 From: jaz at cs.rhul.ac.uk (J.S. Kandola) Date: Mon, 21 Jan 2002 10:37:50 +0000 Subject: Call for Papers: JMLR Special Issue on Machine Learning Methods for Text and Images Message-ID: *********** Apologies for Multiple Postings*************** ******************************************************* Call for Papers: JMLR Special Issue on Machine Learning Methods for Text and Images Guest Editors: Jaz Kandola (Royal Holloway College, University of London, UK) Thomas Hofmann (Brown University, USA) Tomaso Poggio (M.I.T, USA) John Shawe-Taylor (Royal Holloway College, University of London, UK) Submission Deadline: 29th March 2002 Papers are invited reporting original research on Machine Learning Methods for Text and Images. This special issue follows the NIPS 2001 workshop on the same topic, but is open also to contributions that were not presented in it. A special volume will be published for this issue. There has been much interest in information extraction from structured and semi-structured data in the machine learning community. This has in part been driven by the large amount of unstructured and semi-structured data available in the form of text documents, images, audio, and video files.
In order to optimally utilize this data, one has to devise efficient methods and tools that extract relevant information. We invite original contributions that focus on exploring innovative and potentially groundbreaking machine learning technologies as well as on identifying key challenges in information access, such as multi-class classification, partially labeled examples and the combination of evidence from separate multimedia domains. The special issue seeks contributions applied to text and/or images. For a list of possible topics and information about the associated NIPS workshop please see http://www.cs.rhul.ac.uk/colt/JMLR.html Important Dates: Submission Deadline: 29th March 2002 Decision: 24th June 2002 Final Papers: 24th July 2002 Many thanks Jaz Kandola, Thomas Hofmann, Tommy Poggio and John Shawe-Taylor From pelillo at dsi.unive.it Mon Jan 21 09:07:33 2002 From: pelillo at dsi.unive.it (Marcello Pelillo) Date: Mon, 21 Jan 2002 15:07:33 +0100 (ora solare Europa occidentale) Subject: A PAMI/NIPS paper on matching free trees In-Reply-To: <200201211155.g0LBtcL22489@oink.dsi.unive.it> Message-ID: The following paper, accepted for publication in the IEEE Transactions on Pattern Analysis and Machine Intelligence, is accessible at the following www site: http://www.dsi.unive.it/~pelillo/papers/pami-2001.ps.gz A shorter version of it has just been presented at NIPS*01, and can be accessed at: http://www.dsi.unive.it/~pelillo/papers/nips2001.ps.gz (files are gzipped postscripts) Comments and suggestions are welcome! Best regards, Marcello Pelillo ================================ Matching Free Trees, Maximal Cliques, and Monotone Game Dynamics Marcello Pelillo University of Venice, Italy Abstract Motivated by our recent work on rooted tree matching, in this paper we provide a solution to the problem of matching two free (i.e., unrooted) trees by constructing an association graph whose maximal cliques are in one-to-one correspondence with maximal common subtrees. We then solve the problem using simple payoff-monotonic dynamics from evolutionary game theory. We illustrate the power of the approach by matching articulated and deformed shapes described by shape-axis trees. Experiments on hundreds of larger, uniformly random trees are also presented. The results are impressive: despite the inherent inability of these simple dynamics to escape from local optima, they always returned a globally optimal solution. ================================ ________________________________________________________________________ Marcello Pelillo Dipartimento di Informatica Universita' Ca' Foscari di Venezia Via Torino 155, 30172 Venezia Mestre, Italy Tel: (39) 041 2348.440 Fax: (39) 041 2348.419 E-mail: pelillo at dsi.unive.it URL: http://www.dsi.unive.it/~pelillo From smarsland at cs.man.ac.uk Mon Jan 21 09:00:18 2002 From: smarsland at cs.man.ac.uk (Stephen Marsland) Date: Mon, 21 Jan 2002 14:00:18 +0000 Subject: PhD thesis available Message-ID: <3C4C1EF2.FEF8D2DE@cs.man.ac.uk> Hi, my PhD thesis On-line Novelty Detection Through Self-Organisation, with Application to Inspection Robotics is available at http://www.cs.man.ac.uk/~marslans/pubs.html Stephen Abstract: Novelty detection, the recognition that a perception differs in some way from the features that have been seen previously, is a useful capability for both natural and artificial organisms.
For many animals the ability to detect novel stimuli is an important survival trait, since the new perception could be evidence of a predator, while for learning machines novelty detection can enable useful behaviours such as focusing attention on novel features, selecting what to learn and -- the main focus of this thesis -- inspection tasks. There are many places where an autonomous mobile inspection robot would be useful -- examples include sewers, pipelines and even outer space. The robot could explore its environment and highlight potential problems for further investigation. The challenge is to have the robot recognise the evidence of problems. For inspection applications it is better to err on the side of caution, detecting potential faults that are, in fact not problems, rather than missing any faults that do exist. However, by training the robot to recognise each individual fault, other problems will be missed. This is where novelty detection is useful. Instead of training the robot to recognise the faults, the robot learns a model of the 'normal' environment that does not have any problems and the novelty filter detects deviations from this model. In training the robot it may well be found that the initial training set was deficient in some way, for example some feature that should be found normal was missing and is therefore always detected as novel. To deal with this situation the novelty filter should be capable of continuous on-line learning, so that the filter can learn to recognise the missing feature without having to relearn every other perception. This thesis introduces a novelty filter that is suitable for the inspection task. The novelty filter uses a model of the biological phenomenon of habituation, a decrement in behavioural response to a stimulus that is seen repeatedly without ill effects, together with an unsupervised neural network that learns the model of normality. A variety of neural networks are investigated for suitability as the basis of the novelty filter on a number of robot experiments where a robot equipped with sonar sensors explores a set of corridor environments. The particular needs of the novelty filter require a self-organising network that is capable of continuous learning and that can increase the number of nodes in the network as new perceptions are seen during training. A suitable network, termed the 'Grow When Required' network, is devised. The network is applied to a variety of problems, initially non-novelty detection classification tasks, at which its performance compares favourably to other algorithms in terms of accuracy and speed of learning, and then a series of inspection problems -- both robotic and not -- again with promising results. In addition to the sonar sensors that were used for the earlier robotic inspection tasks, the output of a CCD camera is also used as input. Finally, an extension to the novelty detection algorithm is presented that enables the filter to store multiple models of a variety of environments and to autonomously select the best one. This means that the filter can be used in a set of environments that demonstrate different characteristics and can automatically select a suitably trained filter. From mlf at dlsi.ua.es Mon Jan 21 09:15:52 2002 From: mlf at dlsi.ua.es (Mikel L.
Forcada) Date: Mon, 21 Jan 2002 15:15:52 +0100 Subject: Neural Networks, Automata, and Formal Models of Computation Message-ID: <3C4C2298.DDA59F35@dlsi.ua.es> Dear Connectionists, This is just to announce the availability of a web document, "Neural Networks, Automata, and Formal Models of Computation". The URL is http://www.dlsi.ua.es/~mlf/nnafmc/ You will find both a printable PDF file and a browsable (imperfect) HTML version. "Neural Networks, Automata, and Formal Models of Computation" was initially conceived (in 1995) as a reprint collection. A number of personal and editorial circumstances have prevented me from finishing this work; therefore, this document can only be seen as some kind of draft, besides being somewhat outdated (reflecting perhaps the state of things around 1999). But I am making it publicly available, just in case it is useful to other people working in related fields. I am switching to a different field of computer science, and I thought that perhaps it was better to have it on the web than buried in my hard disk. -- _____________________________________________________________________ Mikel L. Forcada E-mail: mlf at dlsi.ua.es Departament de Llenguatges Phone: +34-96-590-3400 ext. 3384; i Sistemes Informàtics also +34-96-590-3772. UNIVERSITAT D'ALACANT Fax: +34-96-590-9326, -3464 E-03071 ALACANT, Spain. URL: http://www.dlsi.ua.es/~mlf From vaina at bu.edu Mon Jan 21 17:42:49 2002 From: vaina at bu.edu (Lucia M. Vaina) Date: Mon, 21 Jan 2002 17:42:49 -0500 Subject: Postdoctoral position in computational vision Message-ID: Postdoctoral Position at Boston University: Learning Invariance for Recognition A postdoctoral position funded by NSF is open immediately for research in modeling the neural mechanisms underlying learning invariance in the visual system at several levels of resolution. This position is part of a multi-university research team (St. Andrews University, Scotland, and the Weizmann Institute of Science, Israel) investigating invariance learning through a combination of computational modeling and visual psychophysics. Applicants must have a Ph.D. or equivalent degree. A strong background in mathematics, physics, or computer science and a background in visual neuroscience are required. Experience with computational modeling in vision using matlab and/or C programming is a plus. The post is for one year initially, with the possibility of renewal. The salary will be determined by the experience-appropriate level on the NIH stipend scale. US citizenship is not required. The successful candidate will join a dynamic and interdisciplinary group of scientists performing cutting-edge research on human vision, using psychophysics, functional a, neurology, and computational modeling. Further information about the research environment can be found at our Website http://www.bu.edu/eng/labs/bravi/. To apply, please send a curriculum vitae, representative publications, and three letters of recommendation to: Professor Lucia M.
Vaina Boston University, Department of Biomedical Engineering College of Engineering 44 Cummington str, Room 315 Boston University Boston, Ma 02215 USA tel: 617-353-2455 fax: 617-353-6766 From mcs at diee.unica.it Mon Jan 21 06:46:57 2002 From: mcs at diee.unica.it (Fabio Roli) Date: Mon, 21 Jan 2002 12:46:57 +0100 Subject: MCS 2002 Final Call for Papers Message-ID: **Apologies for multiple copies** ****************************************** *****MCS 2002 Final Call for Papers***** ****************************************** *****Paper Submission: 1 FEBRUARY 2002***** *********************************************************************** THIRD INTERNATIONAL WORKSHOP ON MULTIPLE CLASSIFIER SYSTEMS Grand Hotel Chia Laguna, Cagliari, Italy, June 24-26 2002 Updated information: http://www.diee.unica.it/mcs E-mail: mcs at diee.unica.it *********************************************************************** WORKSHOP OBJECTIVES MCS 2002 is the third workshop of a series aimed to create a common international forum for researchers of the diverse communities working in the field of multiple classifier systems. Information on the previous editions of MCS workshop can be found on www.diee.unica.it/mcs. Contributions from all the research communities working in the field are welcome in order to compare the different approaches and to define the common research priorities. Special attention is also devoted to assess the applications of multiple classifier systems. The papers will be published in the workshop proceedings, and extended versions of selected papers will be considered for publication in a special issue of the International Journal of Pattern Recognition and Artificial Intelligence. WORKSHOP CHAIRS Josef Kittler (Univ. of Surrey, United Kingdom) Fabio Roli (Univ. of Cagliari, Italy) ORGANIZED BY Dept. of Electrical and Electronic Eng. of the University of Cagliari Center for Vision, Speech and Signal Proc. of the University of Surrey Sponsored by IAPR and IAPR-TC1 Statistical Pattern Recognition Techniques PAPER SUBMISSION An electronic version of the manuscript (PostScript or PDF format) should be submitted to mcs at diee.unica.it. The papers should not exceed 10 pages (LNCS format, see http://www.springer.de/comp/lncs/authors.html). A cover sheet with the authors names and affiliations is also requested, with the complete address of the corresponding author, and an abstract (200 words). In addition, three hard copies of the full paper should be mailed to: MCS 2002 Prof. Fabio Roli Dept. of Electrical and Electronic Eng. University of Cagliari Piazza d'armi 09123 Cagliari Italy IMPORTANT NOTICE: Submission implies the willingness of at least one author to register, attend the workshop, and present the paper. Accepted papers will be published in the proceedings only if the registration form and payment for one of the authors will be received. WORKSHOP TOPICS Papers describing original work in the following and related research topics are welcome: Foundations of multiple classifier systems Methods for classifier fusion Design of multiple classifier systems Neural network ensembles Bagging and boosting Mixtures of experts New and related approaches Applications INVITED SPEAKERS Joydeep Ghosh (University of Texas, USA) Trevor Hastie (Stanford University, USA) Sarunas Raudys (Vilnius University, Lithuania) SCIENTIFIC COMMITTEE J. A. Benediktsson (Iceland) H. Bunke (Switzerland) L. P. Cordella (Italy) B. V. Dasarathy (USA) R. P.W. Duin (The Netherlands) C. Furlanello (Italy) J. Ghosh (USA) T. K. 
Ho (USA) S. Impedovo (Italy) N. Intrator (Israel) A.K. Jain (USA) M. Kamel (Canada) L.I. Kuncheva (UK) L. Lam (Hong Kong) D. Landgrebe (USA) D-S. Lee (USA) D. Partridge (UK) A.J.C. Sharkey (UK) K. Tumer (USA) G. Vernazza (Italy) T. Windeatt (UK) IMPORTANT DATES February 1, 2002: Paper Submission March 15, 2002: Notification of Acceptance April 10, 2002: Camera-ready Manuscript April 10, 2002: Registration WORKSHOP VENUE The workshop will be held at Grand Hotel Chia Laguna, Cagliari, Italy. See http://www.crs4.it/~zip/EGVISC95/chia_laguna.html (in English) or http://web.tiscali.it/chialaguna (in Italian). WORKSHOP PROCEEDINGS Accepted papers will appear in the workshop proceedings that will be published in the series Lecture Notes in Computer Science by Springer-Verlag. Extended versions of selected papers will be considered for possible publication in a special issue of the International Journal of Pattern Recognition and Artificial Intelligence. From mvzaanen at science.uva.nl Tue Jan 22 04:19:11 2002 From: mvzaanen at science.uva.nl (Menno van Zaanen) Date: Tue, 22 Jan 2002 10:19:11 +0100 (CET) Subject: CFP: The 6th International Colloquium on Grammatical Inference (fwd) Message-ID: Hello, Could you please put this call for papers on the mailing list for me? Thank you very much and best regards, Menno van Zaanen ICGI-2002 Call for Papers The 6th International Colloquium on Grammatical Inference will be held in Amsterdam, The Netherlands September 11-13th, 2002 http://www.illc.uva.nl/ICGI-2002/ SCOPE ICGI-2002 is the sixth in a series of successful biennial international conferences in the area of grammatical inference. Grammatical inference has been extensively addressed by researchers in information theory, automata theory, language acquisition, computational linguistics, machine learning, pattern recognition, computational learning theory and neural networks. This colloquium aims at bringing together researchers in these fields. Previous editions of this meeting were held in Essex, U.K.; Alicante, Spain; Montpellier, France; Ames, Iowa, USA; and Lisbon, Portugal. AREAS OF INTEREST The conference seeks to provide a forum for presentation and discussion of original research papers on all aspects of grammatical inference including, but not limited to: Different models of grammar induction: e.g., learning from examples, learning using examples and queries, incremental versus non-incremental learning, distribution-free models of learning, learning under various distributional assumptions (e.g., simple distributions), impossibility results, complexity results, characterizations of representational and search biases of grammar induction algorithms. Algorithms for induction of different classes of languages and automata: e.g., regular, context-free, and context-sensitive languages, interesting subsets of the above under additional syntactic constraints, tree and graph grammars, picture grammars, multi-dimensional grammars, attributed grammars, parameterized models, etc. Theoretical and experimental analysis of different approaches to grammar induction including artificial neural networks, statistical methods, symbolic methods, information-theoretic approaches, etc. Demonstrated or potential applications of grammar induction in natural language acquisition, computational biology, structural pattern recognition, information retrieval, text processing, adaptive intelligent agents, systems modeling and control, and other domains.
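As a minimal illustration of the "learning from examples" setting in the topic list above -- a toy Python sketch, not drawn from the call itself; the sample strings are invented -- the following fragment builds a prefix-tree acceptor (PTA) from a handful of positive examples. State-merging algorithms studied in the grammatical inference literature would then generalise the PTA by merging compatible states; that step is omitted here.

def build_pta(samples):
    # Build a prefix-tree acceptor: one path per distinct prefix.
    transitions = {}              # (state, symbol) -> next state
    accepting = set()
    next_state = 1                # state 0 is the root (empty prefix)
    for word in samples:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, word):
    # Follow the transitions; reject as soon as a symbol has no outgoing edge.
    state = 0
    for symbol in word:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

positive_sample = ["ab", "abab", "ababab"]   # invented positive examples
trans, acc = build_pta(positive_sample)
for w in ["ab", "abab", "ba", "abababab"]:
    print(w, "->", accepts(trans, acc, w))

The PTA accepts exactly the sample and nothing more; generalising it to a language such as (ab)+ is precisely the kind of inference problem the algorithms solicited above address.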
TECHNICAL PROGRAM COMMITTEE Pieter Adriaans, Perot Systems Corporation/University of Amsterdam, Netherlands (Chair) Dana Angluin, Yale University, USA Dick de Jongh, Universiteit van Amsterdam, Netherlands Jerry Feldman, ICSI, Berkeley, USA Colin de la Higuera, EURISE, Univ. de St. Etienne, France Vasant Honavar, Iowa State University, USA Laurent Miclet, ENSSAT, Lannion, France G. Nagaraja, Indian Institute of Technology, Bombay, India Arlindo Oliveira, Lisbon Technical University, Portugal Jose Oncina Carratala, Universidade de Alicante, Spain Rajesh Parekh, Blue Martini, USA Yasubumi Sakakibara, Tokyo Denki University, Japan Enrique Vidal, U. Politecnica de Valencia, Spain Takashi Yokomori, Waseda University, Japan Menno van Zaanen, Universiteit van Amsterdam, Netherlands Thomas Zeugmann, University at Lubeck, Germany CONFERENCE FORMAT The conference will include oral and possibly poster presentations of accepted papers, a small number of tutorials and invited talks. All accepted papers will appear in the conference proceedings. The proceedings of ICGI-2002 will be published by Springer-Verlag as a volume in their Lecture Notes in Artificial Intelligence, a subseries of the Lecture Notes in Computer Science series. SUBMISSION OF PAPERS Prospective authors are invited to submit a draft paper in English with the following format. The cover page should specify: - submission to ICGI-2002 - title, - authors and affiliation, - mailing address, phone, fax, and e-mail address of the contact author, - a brief abstract describing the work, - at least three keywords which can specify typically the contents of the work. Postscript versions of the papers should be formatted according to A4 or 8.5"x11", and the length should not exceed 12 pages excluding the cover page. The technical expositions should be directed to a specialist and should include an introduction understandable to a non specialist that describes the problem studied and the results achieved, focusing on the important ideas and their significance. All paper submissions, review and notification of acceptance will be done electronically through the conference's WWW pages http://www.illc.uva.nl/ICGI-2002/ DEADLINES Submission of manuscripts: April 5, 2002 Notification of acceptance: May 27th, 2002 Final version of manuscript: June 28th, 2002 ORGANIZING COMMITTEE Pieter Adriaans (Chair) Henning Fernau (Co-chair) Menno van Zaanen (Local organization) Marjan Veldhuisen (Secretariat) SPONSORS: ILLC, Institute for Language Logic and Computation OZSL, Dutch Research School in Logic Take a look at: http://www.illc.uva.nl/ICGI-2002/ See you in Amsterdam in September 2002!! +-------------------------------------+ | Menno van Zaanen | "Let him not vow to walk in the dark, | mvzaanen at science.uva.nl | who has not seen the nightfall." | http://www.science.uva.nl/~mvzaanen | -Elrond From terry at salk.edu Tue Jan 22 12:44:11 2002 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 22 Jan 2002 09:44:11 -0800 (PST) Subject: NEURAL COMPUTATION 14:2 Message-ID: <200201221744.g0MHiB917659@purkinje.salk.edu> Neural Computation - Contents - Volume 14, Number 2 - February 1, 2002 ARTICLE On the Complexity of Computing and Learning with Multiplicative Neural Networks Michael Schmitt NOTES A Lagrange Multiplier And Hopfield-Type Barrier Function Method for The Traveling Salesman Problem Chuangyin Dang and Lei Xu The Time-Rescaling Theorem and Its Application to Neural Spike Train Data Analysis Emery N. Brown, Riccardo Barbieri, Valerie Ventura, and Loren M. 
Frank LETTERS The Impact of Spike Timing Variability on the Signal-Encoding Performance of Neural Spiking Models Amit Manwani, Peter N. Steinmetz, and Christof Koch Temporal Correlations in Stochastic Networks of Spiking Neurons Carsten Meyer and Carl van Vreeswijk Measuring Information Spatial Densities Michele Bezzi, Ines Samengo, Stefan Leutgeb and Sheri J. Mizumori Stochastic Trapping in a Solvable Model of On-Line Independent Component Analysis Magnus Rattray A Neural Network-Based Approach to the Double Traveling Salesman Problem Alessio Plebe and Angelo Marcello Anile ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2002 - VOLUME 14 - 12 ISSUES USA Canada* Other Countries Student/Retired $60 $64.20 $108 Individual $88 $94.16 $136 Institution $506 $451.42 $554 * includes 7% GST MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From doug.leith at may.ie Tue Jan 22 12:20:07 2002 From: doug.leith at may.ie (Douglas Leith) Date: Tue, 22 Jan 2002 17:20:07 -0000 Subject: Senior Research Position (Statistical Machine Learning), Hamilton Institute Message-ID: <018101c1a369$0a3518b0$04000001@DougLaptop> SENIOR RESEARCH POSITION Applications are invited from well qualified candidates for a number of positions at the Hamilton Institute. The successful candidates will be outstanding researchers who can demonstrate an exceptional research track record or significant research potential at international level in the general area of modern statistical and machine learning methods for data intensive probabilistic modelling and reasoning, particularly in the context of time series analysis. We are committed to research excellence. This post offers a unique opportunity for tackling fundamental problems in a leading edge multi-disciplinary research group with state of the art facilities. All appointments are initially for 3 years, extendable to 5, with the potential for permanency. Where appropriate, the potential exists to fund post-doc/post-grad positions in support of this post. Salary scale: $30000- $81000 approx. Further information: Please visit our web site at: http://hamilton.may.ie Enquiries to Prof. Douglas Leith, doug.leith at may.ie From rid at ecs.soton.ac.uk Thu Jan 24 11:17:47 2002 From: rid at ecs.soton.ac.uk (Bob Damper) Date: Thu, 24 Jan 2002 16:17:47 +0000 (GMT) Subject: This Workshop will interest many connectionists Message-ID: EPSRC/BBSRC International Workshop Biologically-Inspired Robotics: The Legacy of W. Grey Walter 14-16 August 2002, Bristol, UK http://www.ecs.soton.ac.uk/~rid/wgw02/home.html Biologically-inspired robots functioning in the real world can provide valuable physical models of biology, but can also provide a radical alternative to conventional methods of designing intelligent systems. The origins and history of this fascinating topic can be traced back to seminal work in the 1940's and 1950's, much of it taking place in the United Kingdom. One of the pioneers of the field was William Grey Walter, a neurophysiologist and amateur engineer who spent the majority of his working life in Bristol. He died in 1977 some time after the road accident that ultimately ended his life. A three-day scientific meeting "Biologically-Inspired Robotics: The Legacy of W. 
Grey Walter" will take place at Hewlett-Packard Laboratories, Bristol in August 2002 sponsored by the UK Engineering and Physical Sciences Research Council (under the EPSRC Programme in Adaptive and Interactive Behaviour of Animal and Computational Systems) and by the Biotechnology and Biological Sciences Research Council, with additional support from Hewlett-Packard. The workshop will focus on the latest work in this important area of overlap between biology, engineering and computing, with keynote talks from internationally-acclaimed practitioners across the range of disciplines impacting on biologically-inspired robotics. The following invited speakers are confirmed: Michael Arbib (University of Southern California, Los Angeles) Randall Beer (Case Western Reserve University, Cleveland) Valentino Braitenberg (University of Tubingen) Rodney Brooks (MIT, Boston) Gerald Edelman (The Neurosciences Institute, La Jolla) Owen Holland (University of Essex) Rolf Pfeifer (University of Zurich) Mandyam Srinivasan (Australian National University, Canberra) Luc Steels (Sony Computer Science Laboratories, Paris) Contributed papers (maximum 8 pages) are also invited from scientists and engineers keen to publicise their recent work at this high-calibre workshop. Relevant topics include (but are not limited to): Biorobotics Biologically-inspired robot architectures Artificial life and animats Artificial perception Autonomous robots Humanoid robots Learning and adaptation Evolutionary robotics Hardware for biorobotics Applications Communication and cooperation Robot-human interaction Social and collective behaviour History of cybernetics Embodiment and robotics The life and work of Grey Walter Emergence and interaction Neuroethology Philosophical issues Biological basis of intelligence Papers must be submitted electronically according to instructions on the Workshop web site (URL above). The best papers from the workshop will be published in a special, themed issue of the Philosophical Transactions of the Royal Society, the world's longest running scientific journal. The local organising committee are keen to encourage the participation of research students, especially those sponsored by EPSRC and BBSRC. To this end, there will be free registration and accommodation for research council students working in a relevant or related area (subject to fairly generous availability) and a special Student Poster Session is being organised. Important Dates Deadline for contributed papers: 1 May 2002 Notification of acceptance/rejection: 3 June 2002 Final submission of revised paper: 1 July 2002 Workshop: 14-16 August 2002 The workshop has been timed to fit in with SAB'02 in Edinburgh and to make attendance at both events maximally convenient. Local Organising Committee Dave Cliff (Hewlett-Packard Laboratories, Bristol) Bob Damper (University of Southampton, Chair) Kerstin Dautenhahn (University of Hertfordshire) Inman Harvey (University of Sussex) Chris Melhuish (University of West of England) Ulrich Nehmzow (University of Essex) Nigel Shadbolt (University of Southampton) Noel Sharkey (University of Sheffield) Barbara Webb (University of Stirling) From peterk at ini.phys.ethz.ch Thu Jan 24 08:56:53 2002 From: peterk at ini.phys.ethz.ch (Peter König) Date: Thu, 24 Jan 2002 14:56:53 +0100 Subject: No subject Message-ID: 25.2.-27.2.2002 the EU neuroinformatics workshop "Neural and Artificial Information" will take place in Zürich.
This workshop is part of a series of workshops organized by the EU network of Neuroinformatics. It addresses our current understanding of the computational properties of the brain. In particular we want to focus on a discussion of the encoding capabilities of neurons and populations of neurons. One of the goals of this analysis is to assess how our understanding of neural computation can both be facilitated by the construction of synthetic neuronal systems and give rise to novel information processing technology. The different contributions to the workshop are structured around different levels of neuronal organization: subcellular, cellular, circuits, systems. The different contributions aim to elucidate the differences and commonalities between the traditional definition and understanding of computation and that observed in the nervous system. More information and registration forms can be found at www.neuroinf.org With best regards, Peter König, Paul FMJ Verschure and Tim Pearce. -- PD Dr. Peter König +41-1-635 30 60 Institute of Neuroinformatics +41-1-635 30 53 (fax) ETH - University Zürich http://www.ini.unizh.ch/~peterk Winterthurerstr. 190 peterk at ini.phys.ethz.ch 8057 Zürich From engp9354 at nus.edu.sg Thu Jan 24 10:46:13 2002 From: engp9354 at nus.edu.sg (Chu Wei) Date: Thu, 24 Jan 2002 23:46:13 +0800 Subject: Bayesian Inference in Support Vector Machines Message-ID: <9C4C56CDF89E0440A6BD571E76D2387F0131E6F2@exs23.ex.nus.edu.sg> Dear Connectionists: We have recently completed two technical reports in which we apply popular Bayesian techniques in support vector machines to implement hyperparameter tuning. In a probabilistic framework, Bayesian inference is used to implement model adaptation, while keeping the merits of support vector machines, such as sparseness and convex quadratic programming. Another benefit is the availability of probabilistic prediction. The results in numerical experiments verify that the generalization capability of the Bayesian methods is competitive and it is feasible to tackle reasonably large data sets in this approach. The pdf files of these reports can be accessed at: For regression: http://guppy.mpe.nus.edu.sg/~mpessk/papers/bisvr.pdf For classification: http://guppy.mpe.nus.edu.sg/~mpessk/papers/bitsvc.pdf We are looking forward to your comments to improve this work. Thanks. We attach their abstracts in the following: Title: Bayesian Inference in Support Vector Regression Abstract: In this paper, we apply popular Bayesian techniques on support vector regression. We describe a Bayesian framework in a function-space view with a Gaussian process prior probability over the functions. A unified non-quadratic loss function with the desirable characteristic of differentiability, called the soft insensitive loss function, is used in likelihood evaluation. In the framework, maximum a posteriori estimate of the functions results in an extended support vector regression problem. Bayesian methods are used to implement model adaptation, while keeping the merits of support vector regression, such as quadratic programming and sparseness. Moreover, we put forward confidence intervals in making predictions. Experimental results on simulated and real-world datasets indicate that the approach works well even on large datasets. Title: Bayesian Inference in Trigonometric Support Vector Classifier Abstract: In the report, we propose a novel classifier, known as the trigonometric support vector classifier, to integrate popular Bayesian techniques with the support vector classifier.
We describe a Bayesian framework in a function-space view with a Gaussian process prior probability over the functions. The trigonometric likelihood function with the desirable characteristics of normalization in likelihood and differentiability is used in likelihood evaluation. In the framework, maximum a posteriori estimate of the functions results in an extended support vector classifier problem. Bayesian methods are used to implement model adaptation, while keeping the merits of the support vector classifier, such as sparseness and convex programming. Moreover, we put forward class probabilities in making predictions. Experimental results on artificial and benchmark datasets indicate that the approach works well even on large datasets. Sincerely Wei Chu (engp9354 at nus.edu.sg) S. Sathiya Keerthi (mpessk at nus.edu.sg) Chong Jin Ong (mpeongcj at nus.edu.sg) From erik at bbf.uia.ac.be Thu Jan 24 12:22:29 2002 From: erik at bbf.uia.ac.be (Erik De Schutter) Date: Thu, 24 Jan 2002 18:22:29 +0100 Subject: CNS*2002: Call for papers Message-ID: CALL FOR PAPERS: APPLICATION DEADLINE: February 8, 2002 midnight GMT Eleventh Annual Computational Neuroscience Meeting CNS*2002 July 21 - July 25, 2002 Chicago, Illinois USA http://www.neuroinf.org/CNS.shtml Info at cp at bbf.uia.ac.be CNS*2002 will be held in Chicago from Sunday, July 21, 2002 to Thursday, July 25 in the Congress Plaza Hotel & Convention Center. This is a historic hotel located on Lake Michigan in downtown Chicago. General sessions will be Sunday-Wednesday; Thursday will be a full day of workshops. The conference dinner will be Wednesday night, followed by the rock-n-roll jam session. Papers can include experimental, model-based, as well as more abstract theoretical approaches to understanding neurobiological computation. We especially encourage papers that mix experimental and theoretical studies. We also accept papers that describe new technical approaches to theoretical and experimental issues in computational neuroscience or relevant software packages. The paper submission procedure is new this year: it is at a different web site and makes use of a preprint server. This allows everybody to view papers before the actual meeting and to engage in discussions about submitted papers. PAPER SUBMISSION Papers for the meeting can be submitted ONLY through the web site at http://www.neuroinf.org/CNS.shtml. Papers can be submitted either old style (a 100 word abstract followed by a 1000 word summary) or as a full paper (max 6 typeset pages). In both cases the abstract (100 words max) will be published in the conference program. Submission will occur through a preprint server run by Elsevier, more information can be found on the submission web site. Authors have the option of declaring their submission restricted access, not making it publicly visible. All submissions will be acknowledged by email. It is important to note that this notice, as well as all other communication related to the paper will be sent to the designated correspondence author only. THE REVIEW PROCESS All submitted papers will be first reviewed by the program committee. Papers will be judged and accepted for the meeting based on the clarity with which the work is described and the biological relevance of the research. For this reason authors should be careful to make the connection to biology clear. We reject only a small fraction of the papers (~ 5%) and this is usually based on absence of biological relevance (e.g. pure artificial neural networks).
We expect to notify authors of meeting acceptance before the end of March. The second stage of review involves evaluation of each submission by two independent referees. The primary objective of this round of review will be to select papers for oral and featured oral presentation. In addition to perceived quality as an oral presentation, the novelty of the research and the diversity and coherence of the overall program will be considered. To ensure diversity, those who have given talks in the recent past will not be selected and multiple oral presentations from the same lab will be discouraged. A second objective of the review is to rank papers for inclusion in the conference proceedings. All accepted papers not selected for oral talks, as well as papers explicitly submitted as poster presentations, will be included in one of three evening poster sessions. Authors will be notified of the presentation format of their papers by the end of April. CONFERENCE PROCEEDINGS The proceedings volume is published each year as a special supplement to the journal Neurocomputing. In addition, the proceedings are published in a hardbound edition by Elsevier Press. Only papers which are made publicly available on the preprint server, which are presented at the CNS meeting, and which are not longer than 6 typeset pages will be eligible for inclusion in the proceedings. Authors who only submitted a 1000 word summary will be required to submit a full paper to the preprint server. The proceedings size is limited to 1200 pages (about 200 papers). In case more papers are eligible, the lowest-ranked papers will not be included in the proceedings but will remain available on the preprint server. Authors will be advised of the status of their papers immediately after the CNS meeting. Submission of final papers will be through the preprint server, with a deadline in early October. For reference, papers presented at CNS*99 can be found in volumes 32-33 of Neurocomputing (2000) and those of CNS*00 in volumes 38-40 (2001). INVITED SPEAKERS: Ad Aertsen (Albert-Ludwigs-University, Germany) Leah Keshet (University of British Columbia, Canada) Alex Thomson (University College London, UK) ORGANIZING COMMITTEE: Program chair: Erik De Schutter (University of Antwerp, Belgium) Local organizer: Philip Ulinski (University of Chicago, USA) Workshop organizer: Maneesh Sahani (Gatsby Computational Neuroscience Unit, UK) Government Liaison: Dennis Glanzman (NIMH/NIH, USA) Program Committee: Upinder Bhalla (National Centre for Biological Sciences, India) Avrama Blackwell (George Mason University, USA) Victoria Booth (New Jersey Institute of Technology, USA) Alain Destexhe (CNRS Gif-sur-Yvette, France) John Hertz (Nordita, Denmark) David Horn (University of Tel Aviv, Israel) Barry Richmond (NIMH, USA) Steven Schiff (George Mason University, USA) Todd Troyer (University of Maryland, USA) From rens at science.uva.nl Thu Jan 24 20:03:08 2002 From: rens at science.uva.nl (Rens Bod) Date: Fri, 25 Jan 2002 02:03:08 +0100 (MET) Subject: Volumes on "Data-Oriented Parsing" and "Probabilistic Linguistics" Message-ID: We're finalizing an edited volume on "Data-Oriented Parsing" (CSLI Publications). People interested in the preliminary chapters are welcome to have a look at: http://turing.wins.uva.nl/~rens/dopbook.html Comments are welcome! We are also finalizing a handbook on "Probabilistic Linguistics" (MIT Press).
See http://turing.wins.uva.nl/~rens/ for the relevant link, or go directly to: http://www.ling.canterbury.ac.nz/jen/documents/contents.html Best, Rens Bod From geoff at cns.georgetown.edu Fri Jan 25 14:35:58 2002 From: geoff at cns.georgetown.edu (geoff@cns.georgetown.edu) Date: Fri, 25 Jan 2002 14:35:58 -0500 Subject: Faculty positions Message-ID: <200201251935.g0PJZw102777@jacquet.cns.georgetown.edu> Dear Colleagues, I would like to encourage you to apply for the following faculty positions at Georgetown University. Please feel free to contact me if you have any questions. Geoff Geoffrey J Goodhill, PhD Associate Professor, Department of Neuroscience Georgetown University Medical Center 3900 Reservoir Road NW, Washington DC 20007 geoff at georgetown.edu http://cns.georgetown.edu ------------------- NEUROSCIENCE TENURE-TRACK FACULTY POSITIONS GEORGETOWN UNIVERSITY The Department of Neuroscience is recruiting two new tenure-track faculty at the rank of Assistant or Associate Professor. We seek outstanding candidates with research in molecular or developmental neurobiology, neurophysiology, or cognitive, computational or systems neuroscience. We have state-of-the-art core facilities in cellular neurobiology, neuroanatomy, EEG/ERP, as well as animal (7T) and human magnetic resonance imaging (fMRI). We offer an outstanding intellectual and collaborative environment with highly competitive salary and start-up packages. Successful candidates must have a Ph.D. or equivalent, evidence of productivity and innovation, and the potential to establish an independently funded research program. Applications are encouraged from women and underrepresented minorities. To apply, send a detailed CV, a two-page statement of research and teaching interests specifying one or more of the research areas noted above, and names of at least three referees to the following address: Neuroscience Search Committee Attn: Janet Bordeaux Department of Neuroscience Georgetown University Medical Center 3900 Reservoir Road, NW Washington DC 20007 http://neuro.georgetown.edu Application review will begin immediately and will continue until the positions are filled. Georgetown University is an Equal Opportunity, Affirmative Action Employer. Qualified candidates will receive employment consideration without regard to race, sex, sexual orientation, age, religion, national origin, marital status, veteran status or disability. We are committed to diversity in the workplace. From J.A.Bullinaria at cs.bham.ac.uk Fri Jan 25 10:06:29 2002 From: J.A.Bullinaria at cs.bham.ac.uk (John A Bullinaria) Date: Fri, 25 Jan 2002 15:06:29 +0000 (GMT) Subject: MSc in Natural Computation Message-ID: Studentships/Scholarships for MSc in Natural Computation ======================================================== School of Computer Science (http://www.cs.bham.ac.uk) The University of Birmingham Birmingham, UK Students are invited for an advanced 12-month MSc programme in Natural Computation (i.e. computational systems that use ideas and inspirations from natural biological, ecological and physical systems). This will comprise six taught modules in Neural Computation, Evolutionary Computation, Molecular and Quantum Computation, Nature Inspired Optimisation, Nature Inspired Learning, and Nature Inspired Design (10 credits each); two mini research projects (30 credits each); and one full-scale research project (60 credits). The programme is supported by the EPSRC through its Master's Level Training Packages and by a number of leading companies.
Our industrial advisory board includes representatives from British Telecom, Unilever, QinetiQ, Rolls Royce, Severn Trent, Pro Enviro, and SPSS. The School of Computer Science at the University of Birmingham has a strong research group in evolutionary and neural computation, with eight members of academic staff (six faculty and two research fellows) currently specialising in these fields: Dr. John Bullinaria (Neural Networks, Evolutionary Computation, Cog.Sci.) Dr. Ke Chen (Neural Networks, Pattern Recognition, Machine Perception) Dr. Aniko Ekart (Genetic Programming, AI, Machine Learning) Dr. Jun He (Evolutionary Computation) Dr. Julian Miller (Evolutionary Computation, Machine Learning) Dr. Jon Rowe (Evolutionary Computation, AI) Dr. Thorsten Schnier (Evolutionary Computation, Engineering Design) Prof. Xin Yao (Evolutionary Computation, NNs, Machine Learning) Other staff members also working in these areas include Prof. Aaron Sloman (evolvable architectures of mind, co-evolution, interacting niches) and Dr. Jeremy Wyatt (evolutionary robotics, classifier systems). The programme is open to candidates with a very good honours degree or equivalent qualifications in Computer Science/Engineering or closely related areas. Several fully funded EPSRC studentships (covering fees and maintenance costs) are available, and additional financial support from our industrial partners may be available during the main project period. Further details about this programme and funding opportunities are available from our Web-site at: http://www.cs.bham.ac.uk/natcomp Please note that the closing date for applications is 15th July 2002. From nando at cs.ubc.ca Fri Jan 25 18:53:32 2002 From: nando at cs.ubc.ca (Nando de Freitas) Date: Fri, 25 Jan 2002 15:53:32 -0800 Subject: Software for particle filtering and dynamic mixtures of Gaussians Message-ID: <3C51EFFC.8A6138CC@cs.ubc.ca> Dear Connectionists, I've made available some matlab code on particle filtering (aka condensation, survival of the fittest, bootstrap filters, sequential Monte Carlo) and a substantial better algorithm when the model at hand is a dynamic conditionally Gaussian representation. The latter algorithm is known as Rao Blackwellised particle filtering. This algorithm can be interpreted as an efficient stochastic mixture of Kalman filters. That is, as a stochastic bank of Kalman filters. Needless to say, dynamic mixtures of Gaussians arise in many settings, including computer vision, speech processing, music processing, fault diagnosis, and so on. The software also includes efficient state-of-the-art resampling routines. These are generic and suitable for any application of particle filters. The matlab code and links to papers are available at http://www.cs.ubc.ca/~nando/software.html Best, Nando From zhouzh at nju.edu.cn Sat Jan 26 03:04:37 2002 From: zhouzh at nju.edu.cn (Zhi-Hua Zhou) Date: Sat, 26 Jan 2002 16:04:37 +0800 Subject: paper to be published by AI journal and its code Message-ID: <000c01c1a640$19ae0fc0$03a8a8c0@daniel> Dear Colleagues, Below is a paper accepted by AI Journal: Zhi-Hua Zhou, Jianxin Wu, Wei Tang. Ensembling neural networks: many could be better than all. Abstract: Neural network ensemble is a learning paradigm where many neural networks are jointly used to solve a problem. 
In this paper, the relationship between the ensemble and its component neural networks is analyzed from the context of both regression and classification, which reveals that it may be better to ensemble many instead of all of the neural networks at hand. This result is interesting because at present, most approaches ensemble all the available neural networks for prediction. Then, in order to show that the appropriate neural networks for composing an ensemble can be effectively selected from a set of available neural networks, an approach named GASEN is presented. GASEN trains a number of neural networks at first. Then it assigns random weights to those networks and employs genetic algorithm to evolve the weights so that they can characterize to some extent the fitness of the neural networks in constituting an ensemble. Finally it selects some neural networks based on the evolved weights to make up the ensemble. A large empirical study shows that, comparing with some popular ensemble approaches such as Bagging and Boosting, GASEN can generate neural network ensembles with far smaller sizes but stronger generalization ability. Furthermore, in order to understand the working mechanism of GASEN, the bias-variance decomposition of the error is provided in this paper, which shows that the success of GASEN may lie in that it can significantly reduce the bias as well as the variance. The pdf version of this paper is now available at http://cs.nju.edu.cn/people/zhouzh/zhouzh.files/Publication/aij02.pdf The matlab code of GASEN now is available at http://cs.nju.edu.cn/people/zhouzh/zhouzh.files/MLNN_Group/freeware/Gasen.zip Enjoy it! Best Regards Zhihua ----------------------------------------------- Zhi-Hua ZHOU Ph.d. National Lab for Novel Software Technology Nanjing University Hankou Road 22 Nanjing 210093, P.R.China Tel: +86-25-359-3163 Fax: +86-25-330-0710 URL: http://cs.nju.edu.cn/people/zhouzh/ Email: zhouzh at nju.edu.cn ----------------------------------------------- From javier at ergo.ucsd.edu Tue Jan 29 21:54:59 2002 From: javier at ergo.ucsd.edu (movellan) Date: Tue, 29 Jan 2002 18:54:59 -0800 Subject: Tech Report on Development of Gaze Following Message-ID: <3C576083.5287BE27@inc.ucsd.edu> The following report is available at http://mplab.ucsd.edu following link to tech reports. The Development of Gaze Following as a Bayesian Systems Identification Problem Javier R. Movellan & John S. Watson UC San Diego UC Berkeley UCSD's Institute for Neural Computation Machine Perception Lab Tech Report 2002.01 We propose a view of gaze following in which infants act as Bayesian learners actively attempting to identify the operating characteristics of the systems with which they interact. We present results of an experiment in which 28 infants (average age 10 months) interacted for a 3 minute period with a non-humanoid robot. For half the infants the robot simulated contingency structure typically produced by human beings. In particular it provided causal information about the existence of a line of regard. For the other 14 infants, the robot behaved in a manner which was not contingent with the environment. We found that a few minutes of interaction with the contingent robot was sufficient to elicit statistically detectable gaze following. There were clear signs that some of these infants were actively attempting to identify whether or not the robot was responsive to them. We propose that the infant brain is equipped to learn and analyze the contingency structure of real-time social interactions. 
Contingency is a fundamental perceptual dimension used by infants to recognize the operational properties of humans and to generalize existing behaviors to new social partners. From m.padgett at ieee.org Wed Jan 30 01:14:25 2002 From: m.padgett at ieee.org (Mary Lou Padgett) Date: Wed, 30 Jan 2002 00:14:25 -0600 Subject: WCCI 2002 Tutorials on Computational Intelligence: Neural Networks, Fuzzy Systems, Evolutionary Computation Message-ID: <5.1.0.14.2.20020130001418.027444b0@pop.mindspring.com> *** REGISTER by FEB. 1 for DISCOUNT on World-Class TUTORIALS *** Conference Home Page: http://www.wcci2002.org/ **************************************************************************** 2002 World Congress on Computational Intelligence Hilton Hawaiian Village Hotel Honolulu, Hawaii May 12-17, 2002 WCCI'02 features three of the most important conferences in the areas of Computational Intelligence. * International Joint Conference on Neural Networks (IJCNN). * IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). * Congress on Evolutionary Computation (CEC). See you in paradise! **************************************************************************** Tutorials Tutorials will be held on Sunday, May 12, 2002. Registrations will be accepted for each tutorial on a first-come, first-served basis until the available space is filled. See details on the website: http://www.wcci2002.org/tutorial.html Sunday, May 12, 2002 8:00 AM - 10:00 AM Hans-Paul Schwefel An Introduction to Evolutionary Computation Jacek Zurada An Introduction to Neural Networks Raghu Krishnapuram An Introduction to Fuzzy Systems Marimuthu Palaniswami Support Vector Machines 10:15 AM to 12:15 PM Russ Eberhart and James Kennedy Particle Swarm Optimization Peter J. Angeline Evolutionary Algorithms for Program Induction Paul Werbos Neural Nets For Diagnostics, Prediction and Control: Capabilities & Myths Mitra Basu An Introduction to Biological Sequence Analysis Wlodzislaw Duch Computational Intelligence for Data Mining 12:30 to 2:30 PM Dipankar Dasgupta Artificial Immune Systems David Corne Computational Intelligence for Scheduling Dennis Fernandez The Legal Aspects of Computational Intelligence Ernst Niebur Attention and Selection Ling Guan Intelligent Multimedia Processing Peter Adlassnig Fuzzy Systems in Biomedicine 2:45 PM to 4:45 PM Claus Wilke Artificial Life Systems DeLiang Wang Neural Networks for Scene Analysis Kalyanmoy Deb Evolutionary Multiobjective Optimization Robert G. Reynolds An Introduction to Cultural Algorithms Bayya Yegnanarayana Neural Network Models for Speech and Image Processing Jerry Mendel Type-2 Fuzzy Logic: Expanded and Enhanced Fuzzy Logic **************************************************************************** Questions or comments: Mary Lou Padgett, WCCI 2002 Tutorial Chair, m.padgett at ieee.org. **************************************************************************** From malchiodi at dsi.unimi.it Wed Jan 30 10:54:16 2002 From: malchiodi at dsi.unimi.it (Dario Malchiodi) Date: Wed, 30 Jan 2002 16:54:16 +0100 Subject: CFP special session on "Learning with confidence" Message-ID: <3C581728.90204@dsi.unimi.it> Many apologizes for cross-posting SCI2002 Sixth World Multiconference on Systemics, Cybernetics and Informatics July 14-18, 2002 ~ Orlando, Florida Special Session: Learning with confidence http://laren.dsi.unimi.it/SCI2002/ Call for papers Leaving the asymptotic learnability results of early sixties, for instance from E. Gold or A. 
Gill, modern theories consider learning as a statistical operation, possibly based on highly structured sample values, possibly done in a very poor probabilistic framework. In this scenario the target of our learning task is generally a function that is a random object, and we want to frame its variability within a set of possible realizations with satisfactory confidence. From a computational perspective this problem reads in terms of sample complexity for a given accuracy (a relevant measure of the width of the realization set) and confidence. With the aim of locating the learning task on one or the other side of the exponential complexity divide, early results came from rather elementary probabilistic modeling based on binomial experiments and sharp bounds such as those coming from the Chernoff inequality. Subsequent comparisons of algorithms' efficiency on the same learning task led to the employment of more sophisticated statistical tools to identify very accurate confidence intervals, in relation to both sample properties - such as their distribution law or error rate - and structural constraints - such as the allowed complexity of the statistics. These theoretical improvements make it possible, for instance, to distinguish between different degrees of the polynomials describing the sample complexities of algorithms for learning a monotone DNF under proper probability hypotheses on the example space. Many efforts have also been devoted to confidence intervals for the shape of continuous functions, with results concerning trained neural networks as well. The session aims at collecting contributions by researchers involved in these topics. The special perspective is the exploitation of relations between the randomness of the training examples and their mutual dependence, exactly denoted by the function we want to discover from them. Submissions A 2000-character abstract should be submitted in electronic format (preferably in PDF, but PostScript or MS Word are also acceptable formats) to apolloni at dsi.unimi.it by February 23, 2002, using as subject line "SCI2002 Special session submission". After notification of acceptance, the authors will have to submit by April 5, 2002 an extended abstract not exceeding the length of six pages. Please do not send your papers to the SCI2002 secretariat. All papers must be presented by one of the authors, who must pay the registration fee. For more information about the general conference please see http://www.iiisci.org/sci2002/. Session Chair Bruno Apolloni Dipartimento di Scienze dell'Informazione Universita' degli Studi di Milano Via Complico 39/41, I-20153 Milano - Italy Phone: +39 02 503 16284 Fax: +39 02 503 16288 E-mail: apolloni at dsi.unimi.it From rsun at cecs.missouri.edu Wed Jan 30 13:58:26 2002 From: rsun at cecs.missouri.edu (rsun@cecs.missouri.edu) Date: Wed, 30 Jan 2002 12:58:26 -0600 Subject: Cognitive Systems Research, Volume 2, Issue 4 Message-ID: <200201301858.g0UIwQY02425@ari1.cecs.missouri.edu> The new issue of Cognitive Systems Research: --------------------------------------------------------- Table of Contents for Cognitive Systems Research Volume 2, Issue 4, December 2001 Noel E. Sharkey and Tom Ziemke Mechanistic versus phenomenal embodiment: Can robot embodiment lead to strong AI? 251-262 H. John Caulfield, John L. Johnson, Marius P.
Schamschula and Ramarao Inguva A general model of primitive consciousness 263-272 Tarja Susi and Tom Ziemke Social cognition, artefacts, and stigmergy: A comparative analysis of theoretical frameworks for the understanding of artefact-mediated collaborative activity 273-290 Book review Ezequiel A. Di Paolo The Mechanization of the Mind: On the Origins of Cognitive Science, Jean-Pierre Dupuy, Princeton University Press, 2000, Conference review D. Van Rooy Report on the Fourth International Conference on Cognitive Modeling 297-300 Online access to full text articles for Cognitive Systems Research is available to those readers whose library has subscribed to Cognitive Systems Research via ScienceDirect Digital Collections. For subscription information, see: http://www.elsevier.nl/locate/cogsys http://www.elsevier.com/locate/cogsys Copyright 2002, Elsevier Science, All rights reserved. =========================================================================== Prof. Ron Sun http://www.cecs.missouri.edu/~rsun CECS Department phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 201 Engineering Building West Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun http://www.cecs.missouri.edu/~rsun/journal.html =========================================================================== From john at eyelab.psy.msu.edu Thu Jan 31 09:34:16 2002 From: john at eyelab.psy.msu.edu (John M. Henderson) Date: Thu, 31 Jan 2002 09:34:16 -0500 Subject: faculty position in computational vision/visual cognition Message-ID: <5.0.2.1.2.20020131093108.052ce9c8@eyelab.msu.edu> MICHIGAN STATE UNIVERSITY DEPARTMENT OF PSYCHOLOGY AND COGNITIVE SCIENCE PROGRAM Computational Vision/Visual Cognition. The Department of Psychology and the Cognitive Science Program at Michigan State University invite applications for a tenure-system position at the rank of Assistant or Associate Professor. We are seeking candidates who study vision or visual cognition by combining computational modeling or hardware implementation with behavioral, psychophysical, and/or cognitive neuroscience techniques. The successful candidate will be appointed by Psychology, the tenure home department, and will be affiliated with the Cognitive Science Program and a newly funded NSF IGERT (Integrative Graduate Education and Research Training) grant in cognitive science (http://cogsci.msu.edu/). We encourage applications from individuals pursuing research questions in areas such as (but not limited to) visual attention, eye movement control, visually guided action, spatial navigation, object recognition, and scene perception. Women and minority-group candidates are strongly urged to apply. The individual must have a strong research program capable of attracting extramural support. The position begins August 16, 2002 (pending final administrative approval). Salary and rank will depend on the candidate's qualifications and experience. Review of applications will begin March 1, 2002 and continue until a suitable candidate is identified. Send a letter of application, vitae, (p)reprints and three letters of reference to: John M. Henderson, Chair, Computational Vision Search Committee, Department of Psychology, Michigan State University, 121 Psychology Research Building, East Lansing, MI 48824-1117. MSU is an AA/EO employer. 
From lgraham at jhu.edu Thu Jan 31 15:44:09 2002 From: lgraham at jhu.edu (Laura Graham) Date: Thu, 31 Jan 2002 15:44:09 -0500 Subject: NSF Supported Summer Internships at Johns Hopkins for Undergraduates Message-ID: Dear Colleague: The Center for Language and Speech Processing at Johns Hopkins University is offering a unique summer internship opportunity, which we would like you to bring to the attention of your best students in the current junior class. Only two weeks remain for students to apply for these internships. This internship is unique in the sense that the selected students will participate in cutting-edge research as full members alongside leading scientists from industry, academia, and the government. The exciting nature of the internship is the exposure of the undergraduate students to the emerging fields of language engineering, such as automatic speech recognition (ASR), natural language processing (NLP), machine translation (MT), and speech synthesis (TTS). We are specifically looking to attract new talent into the field and, as such, do not require the students to have prior knowledge of language engineering technology. Please take a few moments to nominate suitable bright students who may be interested in this internship. On-line applications for the program can be found at http://www.clsp.jhu.edu/ along with additional information regarding plans for the 2002 Workshop and information on past workshops. The application deadline is February 15, 2002. If you have questions, please contact us by phone (410-516-4237), by e-mail (sec at clsp.jhu.edu) or via the Internet at http://www.clsp.jhu.edu Sincerely, Frederick Jelinek J.S. Smith Professor and Director Project Descriptions for this Summer 1. Weakly Supervised Learning For Wide-Coverage Parsing Before a computer can try to understand or translate a human sentence, it must identify the phrases and diagram the grammatical relationships among them. This is called parsing. State-of-the-art parsers correctly guess over 90% of the phrases and relationships, but make some errors on nearly half the sentences analyzed. Many of these errors distort any subsequent automatic interpretation of the sentence. Much of the problem is that these parsers, which are statistical, are not "trained" on enough example parses to know about many of the millions of potentially related word pairs. Human labor can produce more examples, but still too few by orders of magnitude. In this project, we seek to achieve a quantum advance by automatically generating large volumes of novel training examples. We plan to bootstrap from up to 350 million words of raw newswire stories, using existing parsers to generate the new parses together with confidence measures. We will use a method called co-training, in which several reasonably good parsing algorithms collaborate to automatically identify one another's weaknesses (errors) and to correct them by supplying new example parses to one another. This accuracy-boosting technique has widespread application in other areas of machine learning, natural language processing and artificial intelligence. Numerous challenges must be faced: how do we parse 350 million words of text in less than a year (we have 6 weeks)? How do we use partly incompatible parsers to train one another? Which machine learning techniques scale up best? What kinds of grammars, probability models, and confidence measures work best? The project will involve a significant amount of programming, but the rewards should be high.
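To make the co-training idea above concrete, here is a minimal, self-contained sketch. It is purely illustrative: the two feature views, the logistic-regression learners, and the confidence-based selection are invented stand-ins for the workshop's parsers and confidence measures.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)          # synthetic "gold" labels
labeled = np.arange(20)                          # small seed set of labeled examples
unlabeled = np.arange(20, 500)
views = [slice(0, 2), slice(2, 4)]               # two different views of each example
learners = [LogisticRegression(), LogisticRegression()]

for _ in range(10):
    for v, clf in zip(views, learners):          # retrain each learner on the current pool
        clf.fit(X[labeled][:, v], y[labeled])
    if len(unlabeled) == 0:
        break
    for v, clf in zip(views, learners):          # each learner labels new data for the pool
        proba = clf.predict_proba(X[unlabeled][:, v])
        pick = np.argsort(proba.max(axis=1))[-5:]    # keep only the most confident guesses
        new_idx = unlabeled[pick]
        y[new_idx] = proba[pick].argmax(axis=1)      # pseudo-labels stand in for unknown labels
        labeled = np.concatenate([labeled, new_idx])
        unlabeled = np.delete(unlabeled, pick)

Real co-training for parsing would replace the toy learners with full statistical parsers and the confidence heuristic with parser-specific scores, but the bootstrapping loop has the same shape.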
2. Novel Speech Recognition Models for Arabic Previous research on large-vocabulary automatic speech recognition (ASR) has mainly concentrated on European and Asian languages. Other language groups have been explored to a lesser extent, for instance Semitic languages like Hebrew and Arabic. These languages possess certain characteristics which present problems for standard ASR systems. For example, their written representation does not contain most of the vowels present in the spoken form, which makes it difficult to utilize textual training data. Furthermore, they have a complex morphological structure, which is characterized not only by a high degree of affixation but also by the interleaving of vowel and consonant patterns (so-called "non-concatenative morphology"). This leads to a large number of possible word forms, which complicates the robust estimation of statistical language models. In this workshop group we aim to develop new modeling approaches to address these and related problems, and to apply them to the task of conversational Arabic speech recognition. We will develop and evaluate a multi-linear language model, which decomposes the task of predicting a given word form into predicting more basic morphological patterns and roots. Such a language model can be combined with a similarly decomposed acoustic model, which necessitates new decoding techniques based on modeling statistical dependencies between loosely coupled information streams. Since one pervading issue in language processing is the tradeoff between language-specific and language-independent methods, we will also pursue an alternative control approach which relies on the capabilities of existing, language-independent recognition technology. Under this approach no morphological analysis will be performed and all word forms will be treated as basic vocabulary units. Furthermore, acoustic model topologies will be used which specify short vowels as optional rather than obligatory elements, in order to facilitate the use of text documents as language model training data. Finally, we will investigate the possibility of using large, generally available text and audio sources to improve the accuracy of conversational Arabic speech recognition. 3. Generation from Deep Syntactic Representation in Machine Translation Let's imagine a system for translating a sentence from a foreign language (say Arabic) into your native language (say English). Such a system works as follows. It analyzes the foreign-language sentence to obtain a structural representation that captures its essence, i.e. "who did what to whom where." It then translates (or transfers) the actors, actions, etc. into words in your language while "copying over" the deeper relationships between them. Finally it synthesizes a syntactically well-formed sentence that conveys the essence of the original sentence. Each step in this process is a hard technical problem, to which the best-known solutions are either not adequate for applications, or good enough only in narrow application domains, failing when applied to other domains. This summer, we will concentrate on improving one of these three steps, namely the synthesis (or generation). The target language for generation will be English, and the source language to the MT system will be a language of a completely different type (Arabic and Czech). We will further assume that the transfer produces a fairly deeply analyzed sentence structure.
The incorporation of the deep analysis makes the whole approach very novel - so far no large-coverage translation system has tried to operate with such a structure, and the application to very diverse languages makes it an even more exciting enterprise! Within the generation process, we will focus on the structural (syntactic) part, assuming that a morphological generation module exists to complete the generation process, and will be added to the suite so as to be able to evaluate the final result, namely, the goodness of the plain English text coming out of the system. Statistical methods will be used throughout. A significant part of the workshop preparation will be devoted to assembling and running a simplified MT system from Arabic/Czech to English (up to the syntactic structure level), in order to have realistic training data for the workshop project. As a consequence, we will not only understand and solve the generation problem, but also learn the mechanics of an end-to-end MT system, creating the intellectual preparation of team members to work on other parts of the MT system in the future. 4. SuperSID: Exploiting High-level Information for High-performance Speaker Recognition Identifying individuals based on their speech is an important component technology in many applications, be it automatically tagging speakers in the transcription of a board-room meeting (to track who said what), user verification for computer security or picking out a known terrorist or narcotics trader among millions of ongoing satellite telephone calls. How do we recognize the voices of the people we know? Generally, we use multiple levels of speaker information conveyed in the speech signal. At the lowest level, we recognize a person based on the sound of his/her voice (e.g., low/high pitch, bass, nasality, etc.). But we also use other types of information in the speech signal to recognize a speaker, such as a unique laugh, particular phrase usage, or speed of speech among other things. Most current state-of-the-art automatic speaker recognition systems, however, use only the low level sound information (specifically, very short-term features based on purely acoustic signals computed on 10-20 ms intervals of speech) and ignore higher-level information. While these systems have shown reasonably good performance, there is much more information in speech which can be used and potentially greatly improve accuracy and robustness. In this workshop we will look at how to augment the traditional signal-processing based speaker recognition systems with such higher-level knowledge sources. We will be exploring ways to define speaker-distinctive markers and create new classifiers that make use of these multi-layered knowledge sources. The team will be working on a corpus of recorded telephone conversations (Switchboard I and II corpora) that have been transcribed both by humans and by machine and have been augmented with a rich database of phonetic and prosodic features. A well-defined performance evaluation procedure will be used to measure progress and utility of newly developed techniques.
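As a toy illustration of the score-level fusion idea in project 4, the fragment below combines a conventional short-term acoustic score with one "higher-level" score (word-usage similarity). It is not the SuperSID system; the models, weights and threshold are invented for the example.

import numpy as np

def acoustic_score(test_frames, target_mean, target_var):
    # average Gaussian log-likelihood of short-term (frame-level) features
    diff = test_frames - target_mean
    return float(np.mean(-0.5 * np.sum(diff ** 2 / target_var + np.log(target_var), axis=1)))

def idiolect_score(test_words, target_unigram, floor=1e-4):
    # average log-probability of the speaker's word usage under a target unigram model
    return float(np.mean([np.log(target_unigram.get(w, floor)) for w in test_words]))

def fused_decision(a_score, w_score, weights=(0.8, 0.2), threshold=-8.0):
    # simple weighted score-level fusion; weights/threshold would be tuned on held-out data
    combined = weights[0] * a_score + weights[1] * w_score
    return combined, combined > threshold

frames = np.random.randn(200, 13)                    # stand-in for 10-20 ms acoustic features
unigram = {"you": 0.05, "know": 0.04, "like": 0.03}  # stand-in idiolect model of the target
a = acoustic_score(frames, np.zeros(13), np.ones(13))
w = idiolect_score("you know i like that".split(), unigram)
print(fused_decision(a, w))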
From wolfskil at MIT.EDU Mon Jan 7 15:31:39 2002 From: wolfskil at MIT.EDU (Jud Wolfskill) Date: Mon, 07 Jan 2002 15:31:39 -0500 Subject: book announcement--Kitano Message-ID: <5.0.2.1.2.20020107152908.00a7b4b8@po14.mit.edu> I thought readers of the Connectionists List might be interested in this book.
Best, Jud Foundations of Systems Biology edited by Hiroaki Kitano The emerging field of systems biology involves the application of experimental, theoretical, and modeling techniques to the study of biological organisms at all levels, from the molecular, through the cellular, to the behavioral. Its aim is to understand biological processes as whole systems instead of as isolated parts. Developments in the field have been made possible by advances in molecular biology--in particular, new technologies for determining DNA sequence, gene expression profiles, protein-protein interactions, and so on. Foundations of Systems Biology provides an overview of the state of the art of the field. The book covers the central topics of systems biology: comprehensive and automated measurements, reverse engineering of genes and metabolic networks from experimental data, software issues, modeling and simulation, and system-level analysis. Hiroaki Kitano is Director of the ERATO Kitano Symbiotic Systems Project of the Japan Science and Technology Corporation and a Senior Researcher at Sony Computer Science Laboratories, Inc. Contributors Mutsuki Amano, Katja Bettenbrock, Hamid Bolouri, Dennis Bray, Jehoshua Bruck, John Doyle, Andrew Finney, Ernst Dieter Gilles, Martin Ginkel, Shugo Hamahashi, Michael Hucka, Kozo Kaibuchi, Mitsuo Kawato, Martin A. Keane, Hiroaki Kitano, John R. Koa, Andreas Kremling, Shinya Kuroda, Koji M. Kyoda, Guido Lanza, Andre Levchenko, Pedro Mendes, Satoru Miyano, Eric Mjolsness, Mineo Morohashi, William Mydlowec, Masao Nagasaki, Yoichi Nakayama, Shuichi Onami, Herbert Sauro, Nicolas Schweighofer, Bruce Shapiro, Thomas Simon Shimizu, J?rg Stelling, Paul W. Sternberg, Zoltan Szallasi, Masaru Tomita, Mattias Wahde, Tau-Mu Yi, Jessen Yu. 7 x 9, 320 pp. 100 illus. cloth ISBN 0262112663 Jud Wolfskill Associate Publicist MIT Press 5 Cambridge Center, 4th Floor Cambridge, MA 02142 617.253.2079 617.253.1709 fax wolfskil at mit.edu From Luc.Berthouze at aist.go.jp Mon Jan 7 20:50:48 2002 From: Luc.Berthouze at aist.go.jp (Luc Berthouze) Date: Tue, 08 Jan 2002 10:50:48 +0900 Subject: Correction: Postdoc position at AIST, Japan Message-ID: <4.3.2-J.20020108100822.00c580b0@procsv16.u-aizu.ac.jp> Dear Connectionists, This is an addendum to my previous post for a postdoc position in AIST, Japan. I have received a large number of applications with background in signal processing and speech processing in particular, probably due to my lack of explanation of what speech-reading is. Speech-reading, also often referred to as lipreading, is the ability to perceive speech by: (1) watching the movements of a speaker's mouth, (2) by observing all other visible clues including facial expressions and gestures, and (3) using the context of the message and the situation. More than the skill itself, our project aims at understanding and modelling the learning process through which such skill is acquired. In particular, we are interested in evaluating the importance of motoric activity in the process. The candidates should therefore have background and research interests in the area of sensorimotor coordination and categorization, both at the neural and behavioral level. Thank you, Luc Berthouze The original post was: We are seeking a postdoctoral fellow with research interests in computational neuroscience and cognitive science to participate to a project on human learning of speech-reading. 
Candidates should have expertise in computational modeling, in human or animal research on learning and experience in applying neural models to artificial systems (robots or simulations). Competitive funding for a two-years period is available. Candidates should contact Luc Berthouze for more details. ----- Dr. Luc Berthouze Cognitive Neuroinformatics Group Neuroscience Research Institute (AIST) Tsukuba AIST Central 2 Umezono 1-1-1, Tsukuba 305-8568, Japan Tel: +81-298-615369 Fax: +81-298-615841 Email: Luc.Berthouze at aist.go.jp URL: http://staff.aist.go.jp/luc.berthouze/ From rojas at inf.fu-berlin.de Tue Jan 8 03:57:05 2002 From: rojas at inf.fu-berlin.de (rojas) Date: Tue, 8 Jan 2002 09:57:05 +0100 Subject: Research Lecturer Position in Berlin In-Reply-To: <4.3.2-J.20020108100822.00c580b0@procsv16.u-aizu.ac.jp> Message-ID: Freie Universitat Berlin Research Lecturership in Neuroinformatics / Theoretical Neuroscience (Oberassistent/in, C 2) Department of Biology, Chemistry and Pharmacy Institute of Biology Applications are invited for the position of research lecturer in neuroinformatics/ theoretical neuroscience. The post is funded by the Donors ' Association for the Promotion of Sciences and Humanities in Germany (Stifterverband fur die Deutsche Wissenschaft). The successful applicant will be required to provide research and teaching in the said area. In line with article 106 of the Higher Education Act of the land of Berlin (Berliner Hochschulgesetz), a postdoctoral qualification (Habilitation) in the field of Informatics or Neurobiology or comparable qualifications for a teaching career in higher education are required. The successful candidate is expected to have extensive experience in the acquisition and evaluation of neural data as well as international experience in teaching and research. She/he will collaborate with experimental neuroscience groups at the Freie Universitat Berlin and participate in the activities of a Collaborative Research Centre in neuroscience (title: Mechanisms of developmental and experience-dependent neural plasticity). She/he should not be older than 35 years at the time of appointment. In general, the language of instruction will be German, but some activities may be offered in English. Non-German speaking applicants are expected to learn German within two years. The Freie Universitat Berlin is an equal opportunities employer. The successful candidate will be offered civil servant or comparable public sector employee status (Oberassistent Grade "C2", limited to four years according to the German system). Applications, quoting Vacancy Oberassistent/in must reach the Freie Universitat Berlin Fachbereich Biologie, Chemie, Pharmazie Institut fur Biologie Prof.Dr. Randolf Menzel 14195 Berlin, Konigin-Luise-Str. 28-30 Germany not later than 4 weeks after the publication of this advertisement. Applications should include the following: a letter describing your interest in the position and pertinent experience, a curriculum vitae, a list of publications, and copies of the certificates of academic qualifications held. The Freie Universitat Berlin is a state-funded university. It has some 40,000 students and 520 professors. The University has 12 departments structured into more than 100 institutes. Detailed information is available at the following web sites: www.fu-berlin.de and www.bcp.fu-berlin.de Prof.Dr. Raul Rojas Freie Universitat Berlin FB Mathematik und Informatik Takustr. 
9 14195 Berlin Tel: ++49/30/83875100 From giacomo at ini.phys.ethz.ch Tue Jan 8 04:11:06 2002 From: giacomo at ini.phys.ethz.ch (Giacomo Indiveri) Date: Tue, 08 Jan 2002 10:11:06 +0100 Subject: Telluride Workshop and Summer School on Neuromorphic Engineering Message-ID: <3C3AB7AA.3070807@ini.phys.ethz.ch> Apologies for cross-postings... -- We invite applications for the annual three week "Telluride Workshop and Summer School on Neuromorphic Engineering" that will be held in Telluride, Colorado from Sunday, June 30 to Saturday, July 21. 2002. The application deadline is FRIDAY, MARCH 15, and application instructions are described at the bottom of this document. Like each of these workshops that have taken place since 1994, the 2001 Workshop and Summer School on Neuromorphic Engineering, sponsored by the National Science Foundation, the Gatsby Foundation, Whitaker Foundation, the Office of Naval Research, and by the Center for Neuromorphic Systems Engineering at the California Institute of Technology, was an exciting event and a great success. A detailed report is available at the web-site: http://www.ini.unizh.ch/telluride01. We strongly encourage interested parties to browse through the previous workshop web pages. For a discussion of the underlying science and technology and a report on the 2001 school, see the September 20 issue of "The Economist" (or visit the web-page http://www.economist.com/science/tq/displayStory.cfm?Story_ID=779503 ) GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on active participation, with demonstration systems and hands on experience for all participants. Neuromorphic engineering has a wide range of applications from nonlinear adaptive control of complex systems to the design of smart sensors, vision, speech understanding and robotics. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware (with an emphasis on analog and asynchronous digital VLSI), are inspired by biological systems. However, existing applications are modest and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of real biological nervous systems as whole systems. FORMAT: The three week summer school will include background lectures on systems neuroscience (in particular learning, oculo-motor and other motor systems and attention), practical tutorials on analog VLSI design, small mobile robots (Koalas, Kheperas and LEGO), hands-on projects, and special interest groups. Participants are required to take part and possibly complete at least one of the projects proposed. 
They are furthermore encouraged to become involved in as many of the other activities proposed as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons, and after dinner. In the early afternoon there will be tutorial on a wide spectrum of topics, including analog VLSI, mobile robotics, auditory systems, central-pattern-generators, selective attention mechanisms, etc. Projects that are carried out during the workshop will be centered in a number of working groups, including: * active vision * audition * olfaction * motor control * central pattern generator * robotics * multichip communication * analog VLSI * learning The active perception project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include an active vision system consisting of a three degree-of-freedom pan-tilt unit, and a silicon retina chip. The central pattern generator group will focus on small walking and undulating robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged and segmented robots, and discuss CPG's and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots. The robotics group will use rovers and working digital vision boards as well as other possible sensors to investigate issues of sensorimotor integration, navigation and learning. The audition group aims to develop biologically plausible algorithms and aVLSI implementations of specific auditory tasks such as source localization and tracking, and sound pattern recognition. Projects will be integrated with visual and motor tasks in the context of a robot platform. The multichip communication project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. LOCATION AND ARRANGEMENTS: The SCHOOL will take place in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours drive away from Denver (350 miles). United Airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. 
However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of PCs running LINUX and Microsoft Windows. No cars are required. Given the small size of the town, we recommend that you do NOT rent a car. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains. Unless otherwise arranged with one of the organizers, we expect participants to stay for the entire duration of this three week workshop. FINANCIAL ARRANGEMENT: Notification of acceptances will be mailed out around Monday, April 8. 2002. Participants are expected to pay a $275.00 workshop fee at that time in order to reserve a place in the workshop. The cost of a shared condominium will be covered for all academic participants but upgrades to a private room will cost extra. Participants from National Laboratories and Industry are expected to pay for these condominiums. Travel reimbursement of up to $500 for US domestic travel and up to $800 for overseas travel will be possible if financial help is needed (Please specify on the application). HOW TO APPLY: Applicants should be at the level of graduate students or above (i.e., postdoctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Application should include: * First name, Last name, valid email address. * Curriculum Vitae. * One page summary of background and interests relevant to the workshop. * Description of special equipment or software needed for demonstrations that could be brought to the workshop. * Two letters of recommendation Complete applications should be sent to: Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 e-mail: telluride at salk.edu FAX: (858) 587 0417 APPLICATION DEADLINE: MARCH 15, 2002 From aapo at james.hut.fi Tue Jan 8 10:44:34 2002 From: aapo at james.hut.fi (Aapo Hyvarinen) Date: Tue, 8 Jan 2002 17:44:34 +0200 Subject: Papers on natural image statistics and V1 Message-ID: Dear Colleagues, the following papers are now available on the web. ------------------------------------------------------------ J. Hurri and A. Hyvarinen. Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video. Submitted manuscript. http://www.cis.hut.fi/aapo/ps/gz/Hurri01.ps.gz Abstract: Recently, statistical models of natural images have shown emergence of several properties of the visual cortex. Most models have considered the non-Gaussian properties of static image patches, leading to sparse coding or independent component analysis. Here we consider the basic time dependencies of image sequences instead of their non-Gaussianity. We show that simple cell type receptive fields emerge when temporal response strength correlation is maximized for natural image sequences. Thus, temporal response strength correlation, which is a nonlinear measure of temporal coherence, provides an alternative to sparseness in modeling simple cell receptive field properties. 
Our results also suggest an interpretation of simple cells in terms of invariant coding principles that have previously been used to explain complex cell receptive fields. ------------------------------------------------------------ A. Hyvarinen. An Alternative Approach to Infomax and Independent Component Analysis. Neurocomputing, in press (CNS'01). http://www.cis.hut.fi/aapo/ps/gz/CNS01.ps.gz Abstract: Infomax means maximization of information flow in a neural system. A nonlinear version of infomax has been shown to be connected to independent component analysis and the receptive fields of neurons in the visual cortex. Here we show a problem of nonrobustness of nonlinear infomax: it is very sensitive to the choice of the nonlinear neuronal transfer function. We consider an alternative approach in which the system is linear, but the noise level depends on the mean of the signal, as in a Poisson neuron model. This gives similar predictions to the nonlinear infomax, but seems to be more robust. ------------------------------------------------------------ Also, a considerably revised version of a paper that I already announced on the connectionists list in June 2001: P.O. Hoyer and A. Hyvarinen. A Multi-Layer Sparse Coding Network Learns Contour Coding from Natural Images. Vision Research, in press http://www.cis.hut.fi/aapo/ps/gz/VR02.ps.gz Abstract: An important approach in visual neuroscience considers how the function of the early visual system relates to the statistics of its natural input. Previous studies have shown how many basic properties of the primary visual cortex, such as the receptive fields of simple and complex cells and the spatial organization (topography) of the cells, can be understood as efficient coding of natural images. Here we extend the framework by considering how the responses of complex cells could be sparsely represented by a higher-order neural layer. This leads to contour coding and end-stopped receptive fields. In addition, contour integration could be interpreted as top-down inference in the presented model. ---------------------------------------------------- Aapo Hyvarinen Neural Networks Research Centre Helsinki University of Technology P.O.Box 9800, FIN-02015 HUT, Finland Email: Aapo.Hyvarinen at hut.fi Home page: http://www.cis.hut.fi/aapo/ ---------------------------------------------------- From nicka at dai.ed.ac.uk Tue Jan 8 11:00:00 2002 From: nicka at dai.ed.ac.uk (Nick Adams) Date: Tue, 08 Jan 2002 16:00:00 +0000 Subject: PhD thesis available Message-ID: <3C3B1780.44AD@dai.ed.ac.uk> I am pleased to announce that my PhD thesis entitled DYNAMIC TREES: A HIERARCHICAL PROBABILISTIC APPROACH TO IMAGE MODELLING is now available at http://www.anc.ed.ac.uk/code/adams/ Matlab and C++ code implementing Gibbs Sampling, Metropolis and the Mean Field Variational approaches and various EM-style learning algorithms based upon them for the Dynamic Tree model is also available at http://www.anc.ed.ac.uk/code/adams/dt/dt.tgz ---------------------------------------------------------------------- ABSTRACT This work introduces a new class of image model which we call Dynamic Trees or DTs. A Dynamic Tree model specifies a prior over structures of trees, each of which is a forest of one or more tree-structured belief networks (TSBN).
In the literature standard tree-structured belief network models were found to produce ``blocky'' segmentations when naturally occurring boundaries within an image did not coincide with those of the subtrees in the rigid fixed structure of the network. Dynamic Trees have a flexible architecture which allows the structure to vary to accommodate configurations where the subtree and image boundaries align, and experimentation with the model showed significant improvements. They are also hierarchical in nature allowing a multi-scale representation and are constructed within a well founded Bayesian framework. For large models the number of tree configurations quickly becomes intractable to enumerate over, presenting a problem for exact inference. Techniques such as Gibbs sampling over trees are considered and search using simulated annealing finds high posterior probability trees on synthetic 2-d images generated from the model. However simulated annealing and sampling techniques are rather slow. Variational methods are applied to the model in an attempt to approximate the posterior by a simpler tractable distribution, and the simplest of these techniques, mean field, found comparable solutions to simulated annealing in the order of 100 times faster. This increase in speed goes a long way towards making real-time inference in the Dynamic Tree viable. Variational methods have the further advantage that by attempting to model the full posterior distribution it is possible to gain an indication as to the quality of the solutions found. An EM-style update based upon mean field inference is derived and the learned conditional probability tables (describing state transitions between a node and its parent) are compared with exact EM on small tractable fixed architecture models. The mean field approximation by virtue of its form is biased towards fully factorised solutions which tends to create degenerate CPTs, but despite this mean field learning still produces solutions whose log-likelihood rivals exact EM. Development of algorithms for learning the probabilities of the prior over tree structures completes the Dynamic Tree picture. After discussion of the relative merits of certain representations for the disconnection probabilities and initial investigation on small model structures the full Dynamic Tree model is applied to a database of images of outdoor scenes where all of its parameters are learned. DTs are seen to offer significant improvement in performance over the fixed architecture TSBN and in a coding comparison the DT achieves 0.294 bits per pixel (bpp) compression compared to 0.378 bpp for lossless JPEG on images of 7 colours. From shiffrin at indiana.edu Tue Jan 8 16:16:33 2002 From: shiffrin at indiana.edu (Rich Shiffrin) Date: Tue, 08 Jan 2002 16:16:33 -0500 Subject: ASIC Conference Announcement Message-ID: <3C3B61B1.EBA12DC0@indiana.edu> First Annual Summer Interdisciplinary Conference (ASIC) Squamish, British Columbia, Canada July 30 (Tuesday) - August 5 (Monday), 2002 Organizer: Richard M. Shiffrin, Indiana University, Bloomington, IN 47405 This conference is modeled after the winter AIC conference that has been held for almost 30 years: Days are free for leisure activities and the talks are in the later afternoon/early evening. The date has been chosen to make it convenient for attendees to bring family/friends. The conference is open to all interested parties, and an invitation is NOT needed. The subject is interdisciplinary, within the broad frame of Cognitive Science. 
Much more information is available on the conference website at: http://www.psych.indiana.edu/asic2002/ [ Below is an excerpt from the web page. -- Connectionists moderator ] Conference Aims The conference will cover a wide range of subjects in cognitive science, including: * neuroscience, cognitive neuroscience * psychology (including perception, psychophysics, attention, information processing, memory and cognition) * computer science * machine intelligence and learning * linguistics * and philosophy We especially invite talks emphasizing theory, mathematical modeling and computational modeling (including neural networks and artificial intelligence). Nonetheless, we require talks that are comprehensible and interesting to a wide scientific audience. Speakers will provide overviews of current research areas, as well as of their own recent progress. Conference Format The conference will start with a reception on the first evening, Tuesday, July 30, at 5 PM, followed by a partial session. Each of the next five evenings, the sessions will begin at 4:30 PM (time to be confirmed later). Drinks, light refreshments and snacks will be available starting at 4:15 PM, prior to the start of the session, and at the midway break. A session will consist of 6-7 talks and a mid-session break, finishing at approximately 8:30-8:45 PM. A banquet will be held following the final session of the conference. There are no parallel sessions or presentations. We will have a separate room, day and time set aside for poster presentations, both for persons preferring this format to a spoken presentation, and for any presenters who cannot be allotted speaking slots. The time and date for the posters have yet to be decided, but we are considering an hour just preceding the regular session on one of the days of the conference. It will not escape the savvy reader that this conference format frees most of the day for various activities with colleagues, family, and friends. From jose at psychology.rutgers.edu Wed Jan 9 18:48:25 2002 From: jose at psychology.rutgers.edu (Stephen J. Hanson) Date: Wed, 09 Jan 2002 18:48:25 -0500 Subject: POSITION at RUTGERS-NEWARK CAMPUS: COGNITIVE SCIENCE--Deadline extended Message-ID: <3C3CD6C9.90501@psychology.rutgers.edu> We are extending the DEADLINE until FEB 15th. RUTGERS UNIVERSITY-Newark Campus. The Department of Psychology anticipates making one tenure-track, Assistant Professor level appointment in the area of COGNITIVE SCIENCE. In particular, we are seeking individuals in any one of the following THREE areas: LEARNING (Cognitive Modeling), COMPUTATIONAL NEUROSCIENCE or SOCIAL-COGNITION. Interests in NEUROIMAGING in any of these areas would also be a plus, since the Department, in conjunction with UMDNJ, has recently acquired a 3T Neuroimaging Center (see http://www.psych.rutgers.edu/fmri). The successful candidate is expected to develop and maintain an active, externally funded research program, and to teach at both the graduate and undergraduate levels. Review of applications begins February 15th, 2002, pending final budgetary approval from the administration. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are encouraged to apply. Please send a CV, a statement of current and future research interests, and three letters of recommendation to COGNITIVE SCIENCE SEARCH COMMITTEE, Department of Psychology, Rutgers University, Newark, NJ 07102. Email enquiries can be made to cogsci at psychology.rutgers.edu.
Also see, http://www.psych.rutgers.edu. From Chris.Diehl at jhuapl.edu Wed Jan 9 11:35:39 2002 From: Chris.Diehl at jhuapl.edu (Diehl, Chris P.) Date: Wed, 9 Jan 2002 11:35:39 -0500 Subject: Job Announcement Message-ID: <91D1D51C2955D111B82B00805F1998950BD8CB9F@aples2.jhuapl.edu> Applications are invited for a senior research staff position in the Research and Technology Development Center (RTDC) at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). The successful candidate will join the System and Information Sciences (RSI) group of the RTDC and perform original R&D supporting research thrusts in automated video analysis, statistical language modeling and information retrieval, and bioinformatics. The ideal candidate will have a Ph.D. in electrical engineering, computer science or related field and extensive experience developing and applying machine learning techniques to challenging real-world problems. Expertise in the following areas is desirable: (1) Bayesian networks (2) Support vector classification and regression/kernel methods (3) Statistical learning theory JHU/APL is a not-for-profit university affiliated research laboratory, located between Washington D.C. and Baltimore, MD. The RSI group is comprised of 20+ M.S. and Ph.D. level Computer Scientists, Electrical Engineers and Physicists performing R&D in the areas of statistical modeling and inference, bioinformatics, network modeling, video analysis, information retrieval, intelligent databases, distributed computing and human- computer interaction. Further information on JHU/APL can be found at the JHU/APL web site: www.jhuapl.edu. Interested and qualified candidates with U.S. citizenship should send a curriculum vita via e-mail or regular mail to: Dr. I-Jeng Wang Johns Hopkins University Applied Physics Laboratory 11100 Johns Hopkins Road Laurel, MD 20723-6099 E-mail: I-Jeng.Wang at jhuapl.edu The Johns Hopkins University Applied Physics Laboratory is an equal opportunity employer. From cardoso at ifado.de Thu Jan 10 03:20:48 2002 From: cardoso at ifado.de (Simone Cardoso de Oliveira) Date: Thu, 10 Jan 2002 09:20:48 +0100 Subject: Job Offer Message-ID: <3C3D4EE0.25AA1E@ifado.de> Dear List-members, I would like to announce the following job offers. Please feel free to post them in your home institutions. Thank you very much! ---------------------------------------------------------------------------- Within a joint Israeli-German research collaboration , positions for 3 PhD Students (Bat IIa/2 or equivalent) are available immediately. The interdisciplinary project "METACOMP - Adaptive Control of Motor Prostheses" aims at advancing techniques for the development of neural prostheses. It combines the efforts of two theoretical (Prof. A. Aertsen, Freiburg, and Prof. K. Pawelzik, Bremen) and two experimental groups (Dr. S. Cardoso de Oliveira, Dortmund and Prof. E. Vaadia, Jerusalem). One of the students will work on developing adaptive models that can learn to control movements on the basis of experimental data (Bremen). The second student will test the adaptive models in psychophysical experiments in a virtual reality environment (Dortmund). The student in Israel will test the model and decoding mechanisms on brain activity recorded from awake primates. Applicants should have a background in computational neuroscience or related disciplines and will need good knowledge in computer programming. 
Please send applications by January 31 (with a statement about which of the 3 positions you are particularly interested in), preferably by e-mail, to: Dr. S. Cardoso de Oliveira, Institut für Arbeitsphysiologie an der Universität Dortmund, Ardeystraße 67, 44139 Dortmund, e-mail: cardoso at ifado.de. -- ----------------------------------------------------------------------------- Dr. Simone Cardoso de Oliveira, PhD Institut für Arbeitsphysiologie an der Universität Dortmund Ardeystr. 67, 44139 Dortmund Tel.: ++49-(0)231-1084-311 (Lab) ++49-(0)234-333 8488 (Home) Fax: ++49-(0)231-1084-340 cardoso at arb-phys.uni-dortmund.de http://www.ifado.de/projekt07/cardoso/ ----------------------------------------------------------------------------- From becker at meitner.psychology.mcmaster.ca Thu Jan 10 13:46:32 2002 From: becker at meitner.psychology.mcmaster.ca (S. Becker) Date: Thu, 10 Jan 2002 13:46:32 -0500 (EST) Subject: openings in computer vision/image processing/machine learning. In-Reply-To: <5.1.0.14.0.20020109160342.00af68d0@medipattern.com> Message-ID: Dear connectionists, As a member of the Scientific Advisory Board for Medipattern Corp., I'd like to bring to your attention the following announcement of job openings for specialists in computer vision/image processing/machine learning. cheers, Sue Sue Becker, Associate Professor Department of Psychology, McMaster University becker at mcmaster.ca 1280 Main Street West, Hamilton, Ont. L8S 4K1 Fax: (905)529-6225 www.science.mcmaster.ca/Psychology/sb.html Tel: 525-9140 ext. 23020 Medipattern is a young, fully financed biomedical enterprise in Toronto, Canada. We want to revolutionize the standard of care in detection and treatment of emergent disease through the application of our novel three-dimensional medical imaging technology and our vision of large-scale machine learning. We are looking for creative, knowledgeable people with a background in machine vision or image processing and statistical, information theoretic, symbolic or neural approaches to machine learning and pattern recognition. We are looking for people eager to work in a dynamic, entrepreneurial environment. Please send your resume to jobs at medipattern.com. We will be in touch with people that appear to be a good fit. From mdorigo at iridia.ulb.ac.be Thu Jan 10 12:44:09 2002 From: mdorigo at iridia.ulb.ac.be (Marco Dorigo) Date: Thu, 10 Jan 2002 18:44:09 +0100 (CET) Subject: ANTS'2002: Call for Papers Message-ID: <200201101744.g0AHi9F05685@iridia.ulb.ac.be> ANTS'2002 - From Ant Colonies to Artificial Ants: Third International Workshop on Ant Algorithms Brussels, Belgium, September 11-14, 2002 CALL FOR PAPERS (up-to-date information on the workshop is maintained on the web at http://iridia.ulb.ac.be/~ants/ants2002/) SCOPE OF THE WORKSHOP The behavior of social insects in general, and of ant colonies in particular, has long fascinated both the scientist and the layman. Researchers in ethology and animal behavior have proposed many models to explain interesting aspects of social insect behavior such as self-organization and shape-formation. Recently, ant algorithms have been proposed as a novel computational model that replaces the traditional emphasis on control, preprogramming, and centralization with designs featuring autonomy, emergence, and distributed functioning. These designs are proving flexible and robust: they are able to adapt quickly to changing environments, and they continue functioning when individual elements fail.
A particularly successful research direction in ant algorithms, known as "Ant Colony Optimization", is dedicated to their application to discrete optimization problems. Ant colony optimization has been applied successfully to a large number of difficult combinatorial problems including the traveling salesman problem, the quadratic assignment problem, scheduling problems, etc., as well as to routing in telecommunication networks. ANTS'2002 is the third edition of the only event entirely devoted to ant algorithms and to ant colony optimization. Also of great interest to the workshop are models of ant colony behavior which could stimulate new algorithmic approaches. The workshop will give researchers in both real ant behavior and ant colony optimization an opportunity to meet, to present their latest research, and to discuss current developments and applications. The three-day workshop will be held in Brussels, Belgium, September 12-14, 2002. In the late afternoon of September 11th there will also be a tutorial on ant algorithms. RELEVANT RESEARCH AREAS ANTS'2002 solicits contributions dealing with any aspect of ant algorithms and Ant Colony Optimization. Typical, but not exclusive, topics of interest are: (1) Models of aspects of real ant colony behavior that can stimulate new algorithmic approaches. (2) Empirical and theoretical research in ant algorithms and ant colony optimization. (3) Application of ant algorithms and ant colony optimization methods to real-world problems. (4) Related research in swarm intelligence. SUBMISSION PROCEDURE & PUBLICATION DETAILS Conference proceedings will be published by Springer-Verlag in the Lecture Notes in Computer Science series (http://www.springer.de/comp/lncs/index.html), and distributed to the participants at the conference site. Submitted papers must be between 8 and 12 pages long, and must conform to the LNCS style (http://www.springer.de/comp/lncs/authors.html). It is important that submissions are already in the LNCS format (papers that do not respect this formatting may not be considered). Submitted papers should be either in Postscript or PDF format and should be emailed to: ants-submissions at iridia.ulb.ac.be. Each submission will be acknowledged by email. Each submitted paper will be peer reviewed by at least two referees. BEST PAPER AWARD A best paper award will be presented at the workshop. REGISTRATION AND FURTHER INFORMATION By submitting a camera-ready paper, the author(s) agree that at least one author will attend and present their paper at the workshop. A registration fee of 200 EUR will cover organization expenses, workshop proceedings, the possibility to attend a tutorial on ant algorithms in the late afternoon of September 11, coffee breaks, and a workshop social dinner on the evening of Friday, September 13. A reduced PhD student registration fee of 150 EUR is available. Proof of enrollment in a doctoral program will be required. Up-to-date information about the workshop will be available at the ANTS'2002 web site (http://iridia.ulb.ac.be/~ants/ants2002/). For information about local arrangements, registration forms, etc., please refer to the above-mentioned web site, or contact the local organizers at the address below. LIMITED NUMBER OF PLACES The number of participants will be limited. If you intend to participate, please fill in and email the INTENTION FORM available at the workshop web page to: ants-registration at iridia.ulb.ac.be Only researchers who have received a confirmation of their intention form may take part in the workshop.
IMPORTANT DATES Submission deadline April 14th, 2002 Notification of acceptance May 31st, 2002 Camera ready copy June 16th, 2002 Tutorial Sep 11th, 2002 Workshop Sep 12-14, 2002 ANTS'2002 CONFERENCE COMMITTEE PROGRAM CHAIR Marco DORIGO, IRIDIA, ULB, Belgium (mdorigo at ulb.ac.be) TECHNICAL PROGRAM AND PUBLICATION CHAIRS Gianni DI CARO, IRIDIA, ULB, Belgium (gdicaro at ulb.ac.be) Michael SAMPELS, IRIDIA, ULB, Belgium (msampels at ulb.ac.be) LOCAL ARRANGEMENTS Christian BLUM, IRIDIA, ULB, Belgium (cblum at ulb.ac.be) Mauro BIRATTARI, IRIDIA, ULB, Belgium (mbiro at ulb.ac.be) PROGRAM COMMITTEE Under formation. PROGRAM CHAIR ADDRESS Marco Dorigo, Ph.D. Chercheur Qualifie' du FNRS Tel +32-2-6503169 IRIDIA CP 194/6 Fax +32-2-6502715 Universite' Libre de Bruxelles Secretary +32-2-6502729 Avenue Franklin Roosevelt 50 http://iridia.ulb.ac.be/~mdorigo/ 1050 Bruxelles http://iridia.ulb.ac.be/~ants/ants2002/ Belgium http://iridia.ulb.ac.be/dorigo/ACO/ACO.html CONFERENCE LOCATION Avenue A. Buyl 87, 1050 Brussels, Belgium, Building C - 4th floor (IRIDIA, Dorigo's lab, is at the 5th floor of the same building). There will be signs giving directions to the workshop room. RELATED CONFERENCES Note that just before ANTS'2002, the conference PPSN-VII will take place in Granada, Spain (http://ppsn2002.ugr.es/ppsn2002.shtml). From school at cogs.nbu.bg Thu Jan 10 06:27:22 2002 From: school at cogs.nbu.bg (CogSci Summer School) Date: Thu, 10 Jan 2002 13:27:22 +0200 Subject: summer school Message-ID: 9th International Summer School in Cognitive Science Sofia, New Bulgarian University, July 8 - 28, 2002 Courses: * Jeff Elman (University of California at San Diego, USA) - Connectionist Models of Learning and Development * Michael Mozer (University of Colorado, USA) - Connectionist Models of Human Perception, Attention, and Awareness * Eran Zaidel (University of California at Los Angeles, USA) - Hemispheric Specialization * Barbara Knowlton (University of California at Los Angeles, USA) - Cognitive Neuroscience of Memory * Markus Knauff (University of Freiburg, Germany) - Imagery and Reasoning: Cognitive and Cortical Models * Stella Vosniadou (University of Athens, Greece) - Cognitive Development and Conceptual Change * Peter van der Helm (University of Nijmegen, the Netherlands) - Structural Description of Visual Form * Antonio Rizzo (University of Siena, Italy) - The Nature of Interactive Artifacts and Their Design * Nick Chater (University of Warwick, UK) - Simplicity as a Fundamental Cognitive Principle Organised by the New Bulgarian University Endorsed by the Cognitive Science Society For more information look at: http://www.nbu.bg/cogs/events/ss2002.htm Central and East European Center for Cognitive Science New Bulgarian University 21 Montevideo Str. Sofia 1635 phone: 955-7518 Svetlana Petkova Administrative manager Central and East European Center for Cognitive Science From hinton at cs.toronto.edu Thu Jan 10 17:29:19 2002 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Thu, 10 Jan 2002 17:29:19 -0500 Subject: graduate study opportunities Message-ID: <02Jan10.172920edt.453142-19237@jane.cs.toronto.edu> The machine learning group at the University of Toronto has recently expanded and we are looking for about 10 new graduate students. 
The core machine learning faculty include: Craig Boutilier (Computer Science) Brendan Frey (Electrical and Computer Engineering) Geoffrey Hinton (Computer Science) Radford Neal (Statistics, Computer Science) Sam Roweis (Computer Science) Rich Zemel (Computer Science) In addition, there are many other faculty interested in applying machine learning techniques in specific domains such as vision, speech, medicine, and finance. More details about the individual faculty can be obtained at http://learning.cs.toronto.edu/people.html Possible research areas include: Neural Computation and Perceptual Learning, Graphical Models, Monte Carlo Methods, Bayesian Inference, Spectral Methods, Reinforcement Learning and Markov Decision Problems, Coding and Information Theory, Bioinformatics, and Game Theory. Applications for graduate study in the Department of Computer Science are due by Feb 1. For details of how to apply see http://www.cs.toronto.edu/DCS/Grad/Apply/ ********** Please forward this message ********** ********** to students who may be interested. ********** (sorry if you get multiple copies of this message) From mcps at cin.ufpe.br Fri Jan 11 12:29:26 2002 From: mcps at cin.ufpe.br (Marcilio C. Pereira de Souto) Date: Fri, 11 Jan 2002 15:29:26 -0200 (EDT) Subject: SBRN 2002 - PRELIMINARY CFP Message-ID: --------------------------- Apologies for cross-posting --------------------------- PRELIMINARY CALL FOR PAPERS ********************************************************************** SBRN'2002 - VII BRAZILIAN SYMPOSIUM ON NEURAL NETWORKS (http://www.cin.ufpe.br/~sbiarn02) Recife, November 11-14, 2002 ********************************************************************** The biennial Brazilian Symposium on Artificial Neural Networks (SBRN) - of which this is the 7th event - is a forum dedicated to Neural Networks (NNs) and other models of computational intelligence. The emphasis of the Symposium will be on original theories and novel applications of these computational models. The Symposium welcomes paper submissions from researchers, practitioners, and students worldwide. The proceedings will be published by the IEEE Computer Society. Selected, extended, and revised papers from SBRN'2002 will also be considered for publication in a special issue of the International Journal of Neural Systems and of the International Journal of Computational Intelligence and Applications. SBRN'2002 is sponsored by the Brazilian Computer Society (SBC) and co-sponsored by SIG/INNS/Brazil, the Special Interest Group of the International Neural Networks Society in Brazil. It will take place November 11-14, and will be held in Recife. Recife, located on the northeast coast of Brazil, is known as the "Brazilian Venice" because of its many canals and waterways and the innumerable bridges that span them. It is the major gateway to the Northeast, with regular flights to all major cities in Brazil as well as Lisbon, London, Frankfurt, and Miami. See more information about the place (http://www.braziliantourism.com.br/pe-pt1-en.html) and about the hotel (http://www.hotelgavoa.com.br) that will host the event. SBRN'2002 will be held in conjunction with the XVI Brazilian Symposium on Artificial Intelligence (http://www.cin.ufpe.br/~sbiarn02) (SBIA). SBIA has its main focus on symbolic AI. Cross-fertilization of these fields will be strongly encouraged. Both symposia will feature keynote speeches and tutorials by world-leading researchers. The deadline for submissions is April 15, 2002.
More details on paper submission and conference registration will be coming soon. Sponsored by the Brazilian Computer Society (SBC) Co-Sponsored by SIG/INNS/Brazil Special Interest Group of the International Neural Networks Society in Brazil Organised by the Federal University of Pernambuco (UFPE)/Centre of Informatics (CIn) Published by the IEEE Computer Society General Chair: Teresa B. Ludermir (UFPE/CIn, Brazil) tbl at cin.ufpe.br Program Chair: Marcilio C. P. de Souto (UFPE/CIn, Brazil) mcps at cin.ufpe.br Deadlines: Submission: 15 April 2002 Acceptance: 17 June 2002 Camera-ready: 22 August 2002 Non-exhaustive list of topics which will be covered during SBRN'2002: Applications: finances, data mining, neurocontrol, time series analysis, bioinformatics; Architectures: cellular NNs, hardware and software implementations, new models, weightless models; Cognitive Sciences: adaptive behaviour, natural language, mental processes; Computational Intelligence: evolutionary systems, fuzzy systems, hybrid systems; Learning: algorithms, evolutionary and fuzzy techniques, reinforcement learning; Neurobiological Systems: bio-inspired systems, biologically plausible networks, vision; Neurocontrol: robotics, dynamic systems, adaptive control; Neurosymbolic processing: hybrid approaches, logical inference, rule extraction, structured knowledge; Pattern Recognition: signal processing, artificial/computational vision; Theory: radial basis functions, Bayesian systems, function approximation, computability, learnability, computational complexity. From pli at richmond.edu Fri Jan 11 09:42:40 2002 From: pli at richmond.edu (Ping Li) Date: Fri, 11 Jan 2002 09:42:40 -0500 Subject: positions in cognitive neuroscience of language Message-ID: Research Assistant Professorship and Post-doctoral Fellowships in Cognitive Neuroscience of Language Applications are invited for a research assistant professorship and two post-doctoral fellowships in Cognitive Neuroscience of Language, attached to the Laboratory for Language and Cognitive Neuroscience in the Department of Linguistics, tenable from as soon as possible, but in any case no later than 1 August, 2002. The appointments will be made on a 2-3 year fixed-term basis. The successful applicants will join our interdisciplinary group that employs state-of-the-art fMRI recording and analysis techniques as well as computational modeling to study topics in cognitive neuroscience of language processes. In addition to our resources and well developed plans for fMRI research at the University of Hong Kong, we are engaged in long-term collaborations with colleagues at Research Imaging Center at San Antonio, National Institutes of Health, University of Pittsburgh and the Key Laboratory of Cognitive Science and Learning of the Ministry of Education of China. Ample scanner time and training in fMRI techniques will be provided. The University of Hong Kong runs an active Cognitive Science undergraduate degree. The successful applicants may have the opportunity to contribute to the teaching of Linguistics and Cognitive Science at this university. Applicants should have a PhD in cognitive psychology, cognitive neuroscience, linguistics, or related fields. The appointments will be made usually on the first point of the 4-point salary scale: HK$542,520 - 660,240 per annum for the research assistant professor post and HK$378,240 - 497,580 per annum for the post-doctoral fellow post. 
For the research assistant professor post, a taxable financial subsidy fixed at HK$7,000 per month towards rented accommodation may be provided, subject to the Prevention of Double Housing Benefits Rules. At current rates, salaries tax will not exceed 15% of gross income. The appointments carry leave and medical benefits. Application forms (41/1000) can be obtained at https://extranet.hku.hk/apptunit/; or from the Appointments Unit (Senior), Registry, The University of Hong Kong, Hong Kong (Fax: (852) 2540 6735 or 2559 2058; E-mail: apptunit at reg.hku.hk). Applicants should submit cover letter, resume, and application form by March 30, 2002 to: Dr. Li-Hai Tan, Director, Laboratory for Language and Cognitive Neuroscience, Department of Linguistics and Cognitive Science Programme, University of Hong Kong, Hong Kong (email: tanlh at hku.hk). From terry at salk.edu Mon Jan 14 13:33:26 2002 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 14 Jan 2002 10:33:26 -0800 (PST) Subject: NEURAL COMPUTATION 14:1 In-Reply-To: <200111292340.fATNecF17024@purkinje.salk.edu> Message-ID: <200201141833.g0EIXQM40787@purkinje.salk.edu> Neural Computation - Contents - Volume 14, Number 1 - January 1, 2002 VIEW Mechanisms Shaping Fast Excitatory Postsynaptic Currents In Central Nervous System Mladen I. Glavinovic NOTE Adjusting the outputs of a Classifier to New a Priori Probabilities: A Simple Procedure Marco Saerens, Patrice Latinne and Christine Decaestecker LETTERS Unitary Events in Multiple Single-Neuron Spiking Activity: I. Detection and Significance Sonja Grun, Markus Diesmann and Ad Aertsen Unitary Events in Multiple Single-Neuron Spiking Activity: II. Non-Stationary Data Sonja Grun, Markus Diesmann and Ad Aertsen Statistical Significance of Coincident Spikes: Count-Based Versus Rate-Based Statistics Robert Guetig, Ad Aertsen and Stefan Rotter Representational Accuracy of Stochastic Neural Populations Stefan D. Wilke and Christian W. Eurich Supervised Dimension Reduction of Intrinsically Low-Dimensional Data Nikos Vlassis, Yoichi Motomura, and Ben Krose Clustering Based on Conditional Distributions in an Auxiliary Space Janne Sinkkonen and Samuel Kaski ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2002 - VOLUME 14 - 12 ISSUES USA Canada* Other Countries Student/Retired $60 $64.20 $108 Individual $88 $94.16 $136 Institution $506 $451.42 $554 * includes 7% GST MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From ps629 at columbia.edu Mon Jan 14 15:08:02 2002 From: ps629 at columbia.edu (Paul Sajda) Date: Mon, 14 Jan 2002 15:08:02 -0500 Subject: Postdoctoral Position in Computational Vision Message-ID: <3C433AA2.CE11F3F7@columbia.edu> Postdoctoral Position in Computational Vision-- The Laboratory for Intelligent Imaging and Neurocomputing (LIINC) has an immediate opening for a two year position to conduct research in probabilistic models which integrate concepts from biological vision for robust visual scene analysis. A mathematical and computational background is desired, particularly in computer vision, probabilistic modeling and optimization. Previous work in computational neuroscience is preferred, but not required. This position will be part of project investigating biomimetic methods for analysis of literal and non-literal imagery. Applicants should send a CV, three representative papers and the names of three references to Prof. 
Paul Sajda, Department of Biomedical Engineering, Columbia University, 530 W 120th Street, NY, NY 10027. Or email to ps629 at columbia.edu. Further information on LIINC can be found at newton.bme.columbia.edu -- Paul Sajda, Ph.D. Associate Professor Department of Biomedical Engineering 530 W 120th Street Columbia University New York, NY 10027 tel: (212) 854-5279 fax: (212) 854-8725 email: ps629 at columbia.edu http://www.columbia.edu/~ps629 From oja at james.hut.fi Tue Jan 15 03:17:03 2002 From: oja at james.hut.fi (Erkki Oja) Date: Tue, 15 Jan 2002 10:17:03 +0200 (EET) Subject: ICA meeting Message-ID: <200201150817.KAA64371@james.hut.fi> CALL FOR PARTICIPATION: European Meeting on Independent Component Analysis The International Institute for Advanced Scientific Studies (IIASS), The University of Salerno, and the European project BLISS (Blind Source Separation and Applications) are organizing a 3-day ICA meeting at the IIASS in Vietri Sul Mare, Italy, on February 21 to 23, 2002. The meeting is organized as a workshop / winter school in which leading European researchers on Independent Component Analysis and Blind Source Separation will give tutorial lectures. We invite graduate students, researchers, and practitioners of ICA to attend the meeting. It is also possible to submit a short paper. The registration fee is 150 Euro, not including accommodation and travel expenses. The number of attendees is limited and registrations will be accepted on a first come, first served basis. The invited speakers are: L. Almeida, IST/INESC-ID, Portugal S. Fiori, Perugia University, Italy A. Harmelin, Fraunhofer Institut, Germany C. Jutten, INPG, France C. Morabito, Reggio Calabria University, Italy K.-R. Muller, Fraunhofer Institut, Germany E. Oja, HUT, Finland F. Palmieri, Naples University, Italy R. Parisi, Rome University, Italy T. Pham, INPG, France F. Silva, IST/INESC-ID, Portugal R. Tagliaferri, Salerno University, Italy H. Valpola, HUT, Finland A. Ziehe, Fraunhofer Institut, Germany For information about the titles of the talks, the programme, registration, paper submission, travelling, etc. please consult the Web page of the meeting: http://ica.sa.infn.it . Prof. Maria Marinaro Prof. Erkki Oja From esalinas at wfubmc.edu Tue Jan 15 12:29:45 2002 From: esalinas at wfubmc.edu (Emilio Salinas) Date: Tue, 15 Jan 2002 12:29:45 -0500 Subject: Neuroscience PhD at WFUSM Message-ID: <3C446709.7BEF0641@wfubmc.edu> Graduate Training in Neuroscience at Wake Forest University School of Medicine The Department of Neurobiology & Anatomy at the Wake Forest University School of Medicine is seeking applicants for its doctoral program. The Department is a major neuroscience research center, with strong emphasis in the areas of systems neuroscience, sensorimotor and integrative systems, cognitive neuroscience, plasticity and development. For the 2002 academic year, matriculants will receive full tuition remission, a stipend of $18,500, a laptop computer and a health care benefit. For more information on the program and departmental research opportunities, please visit our website at www.wfubmc.edu/nba. Emilio Salinas Department of Neurobiology and Anatomy Wake Forest University School of Medicine Winston-Salem NC 27157 Tel: (336) 713-5176, 5177 Fax: (336) 716-4534 e-mail: esalinas at wfubmc.edu From mhc27 at cornell.edu Thu Jan 17 13:18:16 2002 From: mhc27 at cornell.edu (Morten H. 
Christiansen) Date: Thu, 17 Jan 2002 13:18:16 -0500 Subject: Two Postdoctoral positions in Cognitive Science Message-ID: TWO POSTDOCTORAL POSITIONS IN COGNITIVE SCIENCE OF LANGUAGE Two postdoctoral training opportunities - one at Cornell University (US) and one at the University of Warwick (UK) - are available immediately to investigate the role of multiple-cue integration in language acquisition across different languages. The project is funded by the Human Frontiers Science Program and involves four closely interacting research teams in the US (Morten Christiansen, Cornell University), the UK (Nick Chater, University of Warwick), France (Peter Dominey, Institut des Sciences Cognitives, Lyon) and Japan (Mieko Ogura, Tsurumi University). MULTIPLE-CUE INTEGRATION IN LANGUAGE ACQUISITION: MECHANISMS AND NEURAL CORRELATES How do children acquire the subtle and complex structure of their native language with such remarkable speed and reliability, and with little direct instruction? Recent computational and acoustic analyses of language addressed to children indicate that there are rich cues to linguistic structure available in the child's input. Moreover, evidence from developmental psycholinguistics shows that infants are sensitive to many sound-based (phonological) and intonational (prosodic) cues in the input - cues that may facilitate language acquisition. Although this research indicates that linguistic input is rich with possible cues to linguistic structure, there is an important caveat: the cues are only partially reliable and none considered alone provide an infallible bootstrap into language. To acquire language successfully, it seems that the child needs to integrate a great diversity of multiple probabilistic cues to linguistic structure in an effective way. Our research program aims to provide a rigorous cross-linguistic test of the hypothesis that multiple-cue integration is crucial for the acquisition of syntactic structure. The research has four interrelated strands: 1) Computational and acoustic analyses of child-directed speech. 2) Psycholinguistic and artificial language learning experiments. 3) Computational modeling using neural networks and statistical learning methods. 4) Event-related potential (ERP) studies. Together, the two postdoctoral positions will span the four research strands. For more information about the project please refer to our web site: http://cnl.psych.cornell.edu/mcila. CORNELL UNIVERSITY POSITION The Cornell Cognitive Neuroscience Lab headed by Morten Christiansen is coordinating the research efforts and the work here involves all four research strands. The postdoctoral position is primarily aimed at the ERP work but may also include the other research strands, depending on the interests of the candidate. Candidates should have a PhD in cognitive science, psychology or related discipline. Experience with high-density ERP experimentation is highly desirable as are interests in computational modeling of language. Salary will be based on experience in relation to the NIH postdoctoral scale. For more information about the Cornell Cognitive Neuroscience Lab, please visit our web site: http://cnl.psych.cornell.edu. Candidates interested in the Cornell position should email a vita and a short statement about graduate training and research interests to Morten Christiansen (mhc27 at cornell.edu). 
UNIVERSITY OF WARWICK POSITION The Warwick team headed by Nick Chater will focus primarily on research strands 1 and 3, especially on statistical analysis of child-directed speech across the three languages, and computational modeling, using statistical and connectionist techniques, of how relevant information is acquired and processed. There may also be some experimental work on artificial grammar learning. A strong candidate for this position would have a PhD in cognitive science or a related discipline, and have an interest in, and preferably experience with, corpus analysis and statistical and connectionist models of language. Candidates interested in the Warwick position should email a vita and a short statement about graduate training and research interests to Nick Chater (nick.chater at warwick.ac.uk). Both positions are initially for two years, but may be extended into a third year. In addition to salary, funds are available for travel to conferences and meetings between research teams. Neither position carries any special citizenship requirements. -- ------------------------------------------------------------------------ Morten H. Christiansen Assistant Professor Phone: +1 (607) 255-3570 Department of Psychology Fax: +1 (607) 255-8433 Cornell University Email: mhc27 at cornell.edu Ithaca, NY 14853 Office: 240 Uris Hall Web: http://www.psych.cornell.edu/faculty/people/Christiansen_Morten.htm Lab Web Site: http://cnl.psych.cornell.edu ------------------------------------------------------------------------ Nick Chater Professor Institute for Applied Cognitive Science Department of Psychology Phone: +44 2476 523537 University of Warwick Fax: +44 2476 524225 Coventry, CV4 7AL, UK Email: nick.chater at warwick.ac.uk Web: http://www.warwick.ac.uk/fac/sci/Psychology/staff/academic.html#NC ------------------------------------------------------------------------ From tcp1 at leicester.ac.uk Thu Jan 17 11:28:20 2002 From: tcp1 at leicester.ac.uk (Tim Pearce) Date: Thu, 17 Jan 2002 16:28:20 -0000 Subject: Postdoc / phd positions Message-ID: <001901c19f73$fa154d40$e468fea9@rothko> Sorry for any cross-postings ..... Due to our Personnel Department indulging in the Christmas (alcoholic) spirit, the deadline for applications to guarantee consideration is now 15th Feb., but the positions will remain open until filled! =============== Postdoctoral Research Associate in Computational Neuroscience R&AIA £17,451 to £26,229 pa Available immediately for 4 years Ref: R9398/JAU A postdoctoral researcher is required for a 4-year EC-funded project available from January 2002. The project concerns the development of biologically constrained sensory processing models for performing stereotypical moth-like chemotaxis behaviour in uncertain environments. We propose to develop biologically-inspired sensor, information processing and control systems for a c(hemosensing) unmanned aerial vehicle (UAV). The cUAV will identify and track volatile compounds of different chemical composition in outdoor environments. Its olfactory and sensory-motor systems are to be inspired by the moth. This development continues our research in artificial and biological olfaction, sensory processing and analysis, neuronal models of learning, real-time behavioural control, and robotics. Fleets of cUAVs will ultimately be deployed to sense, identify, and map the airborne chemical composition of large-scale environments. The mobile robotics aspects of the project will be carried out with the assistance of an associated PhD studentship position.
Further details on the project and the research teams can be found at http://www.le.ac.uk/eg/tcp1/amoth/ The project includes significant funding and opportunities for travel within Europe to visit the laboratories of the participating consortia (in Switzerland, France, and Sweden) and outside Europe to attend international scientific meetings. A strong mathematical and computer modelling background is required in order to develop a biologically constrained model of the insect antennal lobe and protocerebrum. Expertise is required in the area of neuronal modelling, although not necessarily in the area of olfaction. Good team skills are also a necessity. Informal enquiries regarding these positions and the project in general should be addressed to the project co-ordinator, Dr. T.C. Pearce, Department of Engineering, University of Leicester, Leicester LE1 7RH, United Kingdom, +44 116 223 1290, t.c.pearce at le.ac.uk. For the Research Associate post, application forms and further particulars are available from the Personnel Office, tel 0116 252 5114, fax 0116 252 5140, email personnel at le.ac.uk, or via www.le.ac.uk/personnel/jobs. Closing date: 15 February 2002. PhD Studentship in Mobile Robotics A postgraduate researcher is required for a 4-year EC-funded project available from January 2002. The project concerns the development of an unmanned aerial vehicle (UAV) robot to perform stereotypical moth-like chemotaxis behaviour in uncertain environments. We propose to develop biologically-inspired sensor, information processing and control systems for a c(hemosensing) UAV. The cUAV will identify and track volatile compounds of different chemical composition in outdoor environments. Its olfactory and sensory-motor systems are to be inspired by the moth; this work will be supported by computational neuroscience model development conducted by an associated Postdoctoral Research Associate. This development continues our research in artificial and biological olfaction, sensory processing and analysis, neuronal models of learning, real-time behavioural control, and robotics. Fleets of cUAVs will ultimately be deployed to sense, identify, and map the airborne chemical composition of large-scale environments. Further details on the project and the research teams can be found at http://www.le.ac.uk/eg/tcp1/amoth/. The project includes significant funding and opportunities for travel within Europe to visit the laboratories of the participating consortia (in Switzerland, France, and Sweden) and outside Europe to attend international scientific meetings. A first degree (at the 2(i) level or higher) is required in mathematics, computer science, physics, or engineering. The student will be responsible for deploying the chemical sensors on the UAV and designing interface circuitry, assisting with construction of the UAV, programming the on-board flight systems (incorporating a neuronal model), and assisting with field trials. Applicants should have a demonstrated interest in one or more of the following: UAVs, neuroscience, robotics, and/or artificial intelligence. Good team skills are essential. The studentship includes a stipend of £12,000 per year for four years and includes provision for overseas PhD fees, although EU nationals may also apply. For the PhD Studentship, applications and informal enquiries should be addressed to the project co-ordinator, Dr. T.C. Pearce, Department of Engineering, University of Leicester, Leicester LE1 7RH, United Kingdom, +44 116 223 1290, t.c.pearce at le.ac.uk.
Closing date: 15 February 2002. -- T.C. Pearce, PhD URL: http://www.leicester.ac.uk/eg/tcp1/ Lecturer in Bioengineering E-mail: t.c.pearce at leicester.ac.uk Department of Engineering Tel: +44 (0)116 223 1290 University of Leicester Fax: +44 (0)116 252 2619 Leicester LE1 7RH Bioengineering, Transducers and United Kingdom Signal Processing Group From jel1 at nyu.edu Fri Jan 18 10:38:01 2002 From: jel1 at nyu.edu (Joseph LeDoux) Date: Fri, 18 Jan 2002 10:38:01 -0500 Subject: Synaptic Self Message-ID: <3C484159.CBCAFE56@nyu.edu> NOW AVAILABLE!!! PRESS RELEASE For more information please contact: Holly Watson at 212.366.2147 or via email at hwatson at penguinputnam.com SYNAPTIC SELF How Our Brains Become Who We Are Joseph LeDoux author of The Emotional Brain "Synaptic Self represents a brilliant manifesto at the cutting edge of psychology's evolution into a brain science. Joseph LeDoux is one of the field's pre-eminent, most important thinkers." -Daniel Goleman, author of Emotional Intelligence "In this pathbreaking synthesis, Joseph LeDoux draws on dazzling insights from the cutting edge of neuroscience to generate a new conception of an enduring mystery: the nature of the self. Enlightening and engrossing, LeDoux's bold formulation will change the way you think about who you are." -Daniel L. Schacter, Chairman of Psychology at Harvard University, and author of The Seven Sins of Memory "LeDoux offers a fascinating view into that 'most unaccountable of machinery,' the human brain." --Kirkus Reviews "Synaptic Self is a wonderful tour of the brain circuitry behind some of the critical aspects of the mind. LeDoux is an expert tour guide and it is well worth listening. His perspective takes you deep into the cellular basis of what it is to be a thinking being." -Antonio R. Damasio, neuroscientist and author of The Feeling of What Happens How is the self, the personality which we present to other people on a daily basis and through which we experience the world, actually constructed? Although nature and nurture have long been believed to both participate, their contributions have remained vague. Joseph LeDoux, author of the SYNAPTIC SELF: How Our Brains Become Who We Are (Viking; January 14, 2002; 400 pages), provides a detailed explanation grounded in cutting edge brain science, arguing that nature and nurture speak the same language-they construct our personality by influencing synapses. In SYNAPTIC SELF, LeDoux proposes an entirely new, biologically based theory, one which does not exclude other ways of understanding the self--whether spiritual, aesthetic, or moral--but rather enriches and broadens these by grounding them in a neurological framework. LeDoux's theory centers on synapses, the spaces between brain cells, which serve as channels of communication between neurons and the means by which the brain accomplishes most of its business, including the key component of constructing a self. According to LeDoux, synapses are not only the means by which we think, act, imagine, feel, and remember, but also the means by which interactions take place between these different mental processes. Without such interactions, we wouldn't be able to attend to and remember the important things in life better than the trivial. Synapses are responsible for encoding the essence of the individual, allowing us to be the same person from moment to moment, week to week, and year to year. Memory is thus a key process in constructing the self. 
And because many of the brain's systems form memories, either of the conscious kind or, more often, of the implicit, unconscious kind, synaptic interactions between the systems keep the self together. SYNAPTIC SELF is a dramatically new look at human personality as a product of the integrated brain and represents an important breakthrough in one of the last frontiers of brain research. About the Author: Joseph LeDoux is a neuroscientist and Henry and Lucy Moses Professor of Science at New York University's Center for Neural Sciences. SYNAPTIC SELF: How Our Brains Become Who We Are By Joseph LeDoux Viking On-Sale Date: January 14, 2002 Pages: 400; Price: $25.95; ISBN: 0-670-03028-7 Discovery Channel Book Club Selection To find out more information, or to schedule an interview with the author, please contact Holly Watson (contact details above). To obtain a review copy, please contact Dina Jordan via fax at 212.366.2952 or email at djordan at penguinputnam.com For more information, please visit our website at www.penguinputnam.com Penguin Putnam Inc. is the U.S. affiliate of the internationally renowned Penguin Group. Penguin Putnam is one of the leading U.S. adult and children's trade book publishers, owning a wide range of imprints and trademarks including Berkley Books, Dutton, Frederick Warne, G.P. Putnam's Sons, Grosset & Dunlap, New American Library, Penguin, Philomel, Riverhead Books and Viking, among others. The Penguin Group is owned by Pearson plc, the international media group. From radford at cs.toronto.edu Sat Jan 19 17:45:48 2002 From: radford at cs.toronto.edu (Radford Neal) Date: Sat, 19 Jan 2002 17:45:48 -0500 Subject: Postdoc in Bayesian modeling, MCMC, bioinformatics Message-ID: <02Jan19.174555edt.453148-5749@jane.cs.toronto.edu> Postdoctoral position in BAYESIAN MODELING, MARKOV CHAIN MONTE CARLO, BIOINFORMATICS Radford Neal, University of Toronto I am looking for a postdoc who is interested in the following areas: o Bayesian statistical modeling, especially flexible models such as those based on Dirichlet process mixtures, neural networks, and Gaussian processes. o Markov chain Monte Carlo methods, either general in scope, or tailored to specific Bayesian models. o Applications of flexible models in bioinformatics, especially analysis of spectroscopic data, analysis of gene expression data from DNA microarrays, and inference for phylogenetic trees. Candidates should have a PhD in a relevant discipline (or be about to receive one), and have an excellent background in at least one of the above areas, plus a willingness and ability to learn about the others. This position is for one year, with the possibility of extension to two years, starting no later than August 2002 (preferably sooner). There will be opportunities to apply for sessional teaching in Statistics or Computer Science if desired. The University of Toronto has a large and diverse group of faculty, graduate students, and postdocs in Statistics and in Machine Learning. To learn more about what is going on here, visit the Statistics site at http://utstat.utoronto.ca and the web site for the Machine Learning group (in Computer Science) at http://www.cs.utoronto.ca/learning/. For more about my personal research interests, see my web pages at http://www.cs.utoronto.ca/~radford/. Applicants should EMAIL a CV, the email addresses of two references, and a description of their research background and interests to me at radford at cs.utoronto.ca. Plain text is preferred, but Postscript, PDF, and (if you really have to) MS Word documents will also be read.
All applications that are received by February 15 will be considered. Those received after that will be considered if the position has not already been filled. ---------------------------------------------------------------------------- Radford M. Neal radford at cs.utoronto.ca Dept. of Statistics and Dept. of Computer Science radford at utstat.utoronto.ca University of Toronto http://www.cs.utoronto.ca/~radford ---------------------------------------------------------------------------- From eero.simoncelli at nyu.edu Sun Jan 20 14:08:09 2002 From: eero.simoncelli at nyu.edu (Eero Simoncelli) Date: Sun, 20 Jan 2002 14:08:09 -0500 (EST) Subject: Summer course: Computational Visual Neuroscience Message-ID: <200201201908.OAA02223@calaf.cns.nyu.edu> Computational Neuroscience: Vision Cold Spring Harbor Laboratory Summer Course 13 - 26 June 2002 Computational modeling and simulation have produced important advances in our understanding of neural processing. This intensive 2-week summer course focuses on areas of visual science in which interactions among psychophysics, neurophysiology, and computation have been especially fruitful. Topics to be covered this year include: neural representation and coding; photon detection and the neural basis of color vision, pattern vision, and visual motion perception; oculomotor function; object/shape representation; visual attention and decision-making. The course combines lectures (generally two 3-hour sessions each day) with hands-on problem solving using the MatLab programming environment in a computer laboratory. Lectures are given by the course organizers and by invited lecturers, including: Edward Adelson (MIT), David Brainard (U Pennsylvania), Kathleen Cullen (McGill U), Norma Graham (Columbia U), Kalanit Grill Spector (Stanford U), David Heeger (Stanford U), Dan Kersten (U Minnesota), Tony Movshon (NYU), Bill Newsome (Stanford U), Fred Rieke (U Washington), Mike Shadlen (U Washington), Stefan Treue (U Tuebingen), Preeti Verghese (Smith-Kettlewell Institute). Application deadline: 15 March 2002 Further information & application materials: http://www.cns.nyu.edu/csh02 Course Organizers: E.J. Chichilnisky, Salk Institute Paul W. Glimcher, New York University Eero P. Simoncelli, New York University From jaz at cs.rhul.ac.uk Mon Jan 21 05:37:50 2002 From: jaz at cs.rhul.ac.uk (J.S. Kandola) Date: Mon, 21 Jan 2002 10:37:50 +0000 Subject: Call for Papers: JMLR Special Issue on Machine Learning Methods for Text and Images Message-ID: *********** Apologies for Multiple Postings*************** ******************************************************* Call for Papers: JMLR Special Issue on Machine Learning Methods for Text and Images Guest Editors: Jaz Kandola (Royal Holloway College, University of London, UK) Thomas Hofmann (Brown University, USA) Tomaso Poggio (M.I.T, USA) John Shawe-Taylor (Royal Holloway College, University of London, UK) Submission Deadline: 29th March 2002 Papers are invited reporting original research on Machine Learning Methods for Text and Images. This special issue follows the NIPS 2001 workshop on the same topic, but is open also to contribution that were not presented in it. A special volume will be published for this issue. There has been much interest in information extraction from structured and semi-structured data in the machine learning community. This has in part been driven by the large amount of unstructured and semi-structured data available in the form of text documents, images, audio, and video files. 
In order to optimally utilize this data, one has to devise efficient methods and tools that extract relevant information. We invite original contributions that focus on exploring innovative and potentially groundbreaking machine learning technologies as well as on identifying key challenges in information access, such as multi-class classification, partially labeled examples and the combination of evidence from separate multimedia domains. The special issue seeks contributions applied to text and/or images. For a list of possible topics and information about the associated NIPS workshop please see http://www.cs.rhul.ac.uk/colt/JMLR.html Important Dates: Submission Deadline: 29th March 2002 Decision: 24th June 2002 Final Papers: 24th July 2002 Many thanks Jaz Kandola, Thomas Hofmann, Tommy Poggio and John Shawe-Taylor From pelillo at dsi.unive.it Mon Jan 21 09:07:33 2002 From: pelillo at dsi.unive.it (Marcello Pelillo) Date: Mon, 21 Jan 2002 15:07:33 +0100 (ora solare Europa occidentale) Subject: A PAMI/NIPS paper on matching free trees In-Reply-To: <200201211155.g0LBtcL22489@oink.dsi.unive.it> Message-ID: The following paper, accepted for publication in the IEEE Transactions on Pattern Analysis and Machine Intelligence, is accessible at the following www site: http://www.dsi.unive.it/~pelillo/papers/pami-2001.ps.gz A shorter version of it has just been presented at NIPS*01, and can be accesses at: http://www.dsi.unive.it/~pelillo/papers/nips2001.ps.gz (files are gzipped postscripts) Comments and suggestions are welcome! Best regards, Marcello Pelillo ================================ Matching Free Trees, Maximal Cliques, and Monotone Game Dynamics Marcello Pelillo University of Venice, Italy Abstract Motivated by our recent work on rooted tree matching, in this paper we provide a solution to the problem of matching two free (i.e., unrooted) trees by constructing an association graph whose maximal cliques are in one-to-one correspondence with maximal common subtrees. We then solve the problem using simple payoff-monotonic dynamics from evolutionary game theory. We illustrate the power of the approach by matching articulated and deformed shapes described by shape-axis trees. Experiments on hundreds of larger, uniformly random trees are also presented. The results are impressive: despite the inherent inability of these simple dynamics to escape from local optima, they always returned a globally optimal solution. ================================ ________________________________________________________________________ Marcello Pelillo Dipartimento di Informatica Universita' Ca' Foscari di Venezia Via Torino 155, 30172 Venezia Mestre, Italy Tel: (39) 041 2348.440 Fax: (39) 041 2348.419 E-mail: pelillo at dsi.unive.it URL: http://www.dsi.unive.it/~pelillo From smarsland at cs.man.ac.uk Mon Jan 21 09:00:18 2002 From: smarsland at cs.man.ac.uk (Stephen Marsland) Date: Mon, 21 Jan 2002 14:00:18 +0000 Subject: PhD thesis available Message-ID: <3C4C1EF2.FEF8D2DE@cs.man.ac.uk> Hi, my PhD thesis On-line Novelty Detection Through Self-Organisation, with Application to Inspection Robotics is available at http://www.cs.man.ac.uk/~marslans/pubs.html Stephen Abstract: Novelty detection, the recognition that a perception differs in some way from the features that have been seen previously, is a useful capability for both natural and artificial organisms. 
For many animals the ability to detect novel stimuli is an important survival trait, since the new perception could be evidence of a predator, while for learning machines novelty detection can enable useful behaviours such as focusing attention on novel features, selecting what to learn and -- the main focus of this thesis -- inspection tasks. There are many places where an autonomous mobile inspection robot would be useful -- examples include sewers, pipelines and even outer space. The robot could explore its environment and highlight potential problems for further investigation. The challenge is to have the robot recognise the evidence of problems. For inspection applications it is better to err on the side of caution, detecting potential faults that are, in fact, not problems, rather than missing any faults that do exist. However, by training the robot to recognise each individual fault, other problems will be missed. This is where novelty detection is useful. Instead of training the robot to recognise the faults, the robot learns a model of the 'normal' environment that does not have any problems and the novelty filter detects deviations from this model. In training the robot it may well be found that the initial training set was deficient in some way, for example some feature that should be found normal was missing and is therefore always detected as novel. To deal with this situation the novelty filter should be capable of continuous on-line learning, so that the filter can learn to recognise the missing feature without having to relearn every other perception. This thesis introduces a novelty filter that is suitable for the inspection task. The novelty filter uses a model of the biological phenomenon of habituation, a decrement in behavioural response to a stimulus that is seen repeatedly without ill effects, together with an unsupervised neural network that learns the model of normality. A variety of neural networks are investigated for suitability as the basis of the novelty filter on a number of robot experiments where a robot equipped with sonar sensors explores a set of corridor environments. The particular needs of the novelty filter require a self-organising network that is capable of continuous learning and that can increase the number of nodes in the network as new perceptions are seen during training. A suitable network, termed the 'Grow When Required' network, is devised. The network is applied to a variety of problems, initially standard classification tasks that do not involve novelty detection, at which its performance compares favourably to other algorithms in terms of accuracy and speed of learning, and then a series of inspection problems -- both robotic and not -- again with promising results. In addition to the sonar sensors that were used for the earlier robotic inspection tasks, the output of a CCD camera is also used as input. Finally, an extension to the novelty detection algorithm is presented that enables the filter to store multiple models of a variety of environments and to autonomously select the best one. This means that the filter can be used in a set of environments that demonstrate different characteristics and can automatically select a suitably trained filter.
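The mechanism the abstract describes (a learned model of normality, on-line node growth for unfamiliar perceptions, habituation for familiar ones) can be sketched in a few lines. The sketch below is only a toy illustration of that principle, not the 'Grow When Required' algorithm itself, and every threshold and name in it is invented for the example:

    import numpy as np

    class ToyNoveltyFilter:
        """Toy habituation-based novelty filter (illustrative only)."""
        def __init__(self, insertion_threshold=0.3, habituation_rate=0.2, node_lr=0.05):
            self.nodes = []          # prototype vectors: the learned model of 'normality'
            self.habituation = []    # 1.0 = fully novel response; decays with repeated exposure
            self.insertion_threshold = insertion_threshold
            self.habituation_rate = habituation_rate
            self.node_lr = node_lr

        def observe(self, x):
            x = np.asarray(x, dtype=float)
            if not self.nodes:                       # first perception: everything is novel
                self.nodes.append(x.copy())
                self.habituation.append(1.0)
                return 1.0
            dists = [np.linalg.norm(x - n) for n in self.nodes]
            i = int(np.argmin(dists))
            if dists[i] > self.insertion_threshold:  # nothing similar seen before: grow a node
                self.nodes.append(x.copy())
                self.habituation.append(1.0)
                return 1.0
            novelty = self.habituation[i]            # familiar perception: respond, then habituate
            self.habituation[i] *= (1.0 - self.habituation_rate)
            self.nodes[i] += self.node_lr * (x - self.nodes[i])
            return novelty

Perceptions far from every stored prototype score close to 1 (novel) and add a node, so a feature missing from the training set can be absorbed later without relearning everything else, while repeated presentations of familiar perceptions habituate towards 0.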
From mlf at dlsi.ua.es Mon Jan 21 09:15:52 2002 From: mlf at dlsi.ua.es (Mikel L. Forcada) Date: Mon, 21 Jan 2002 15:15:52 +0100 Subject: Neural Networks, Automata, and Formal Models of Computation Message-ID: <3C4C2298.DDA59F35@dlsi.ua.es> Dear Connectionists, This is just to announce the availability of a web document, "Neural Networks, Automata, and Formal Models of Computation". The URL is http://www.dlsi.ua.es/~mlf/nnafmc/ You will find both a printable PDF file and a browsable (imperfect) HTML version. "Neural Networks, Automata, and Formal Models of Computation" was initially conceived (in 1995) as a reprint collection. A number of personal and editorial circumstances have prevented me from finishing this work; therefore, this document can only be seen as some kind of draft, besides being somewhat outdated (reflecting perhaps the state of things around 1999). But I am making it publicly available, just in case it is useful to other people working in related fields. I am switching to a different field of computer science, and I thought that perhaps it was better to have it on the web than buried in my hard disk. -- _____________________________________________________________________ Mikel L. Forcada E-mail: mlf at dlsi.ua.es Departament de Llenguatges Phone: +34-96-590-3400 ext. 3384; i Sistemes Informàtics also +34-96-590-3772. UNIVERSITAT D'ALACANT Fax: +34-96-590-9326, -3464 E-03071 ALACANT, Spain. URL: http://www.dlsi.ua.es/~mlf From vaina at bu.edu Mon Jan 21 17:42:49 2002 From: vaina at bu.edu (Lucia M. Vaina) Date: Mon, 21 Jan 2002 17:42:49 -0500 Subject: Postdoctoral position in computational vision Message-ID: Postdoctoral Position at Boston University: Learning Invariance for Recognition A postdoctoral position funded by NSF is open immediately for research in modeling the neural mechanisms underlying learning invariance in the visual system at several levels of resolution. This position is part of a multi-university research team (St. Andrews University, Scotland, and the Weizmann Institute of Science, Israel) investigating invariance learning through a combination of computational modeling and visual psychophysics. Applicants must have a Ph.D. or equivalent degree. A strong background in mathematics, physics, or computer science and a background in visual neuroscience are required. Experience with computational modeling in vision using matlab and/or C programming is a plus. The post is for one year initially, with the possibility of renewal. The salary will be determined by the experience-appropriate level on the NIH stipend scale. US citizenship is not required. The successful candidate will join a dynamic and interdisciplinary group of scientists performing cutting-edge research on human vision, using psychophysics, functional imaging, neurology, and computational modeling. Further information about the research environment can be found at our Website http://www.bu.edu/eng/labs/bravi/. To apply, please send a curriculum vitae, representative publications, and three letters of recommendation to: Professor Lucia M.
Vaina Boston University, Department of Biomedical Engineering College of Engineering 44 Cummington str, Room 315 Boston University Boston, Ma 02215 USA tel: 617-353-2455 fax: 617-353-6766 From mcs at diee.unica.it Mon Jan 21 06:46:57 2002 From: mcs at diee.unica.it (Fabio Roli) Date: Mon, 21 Jan 2002 12:46:57 +0100 Subject: MCS 2002 Final Call for Papers Message-ID: **Apologies for multiple copies** ****************************************** *****MCS 2002 Final Call for Papers***** ****************************************** *****Paper Submission: 1 FEBRUARY 2002***** *********************************************************************** THIRD INTERNATIONAL WORKSHOP ON MULTIPLE CLASSIFIER SYSTEMS Grand Hotel Chia Laguna, Cagliari, Italy, June 24-26 2002 Updated information: http://www.diee.unica.it/mcs E-mail: mcs at diee.unica.it *********************************************************************** WORKSHOP OBJECTIVES MCS 2002 is the third workshop of a series aimed to create a common international forum for researchers of the diverse communities working in the field of multiple classifier systems. Information on the previous editions of MCS workshop can be found on www.diee.unica.it/mcs. Contributions from all the research communities working in the field are welcome in order to compare the different approaches and to define the common research priorities. Special attention is also devoted to assess the applications of multiple classifier systems. The papers will be published in the workshop proceedings, and extended versions of selected papers will be considered for publication in a special issue of the International Journal of Pattern Recognition and Artificial Intelligence. WORKSHOP CHAIRS Josef Kittler (Univ. of Surrey, United Kingdom) Fabio Roli (Univ. of Cagliari, Italy) ORGANIZED BY Dept. of Electrical and Electronic Eng. of the University of Cagliari Center for Vision, Speech and Signal Proc. of the University of Surrey Sponsored by IAPR and IAPR-TC1 Statistical Pattern Recognition Techniques PAPER SUBMISSION An electronic version of the manuscript (PostScript or PDF format) should be submitted to mcs at diee.unica.it. The papers should not exceed 10 pages (LNCS format, see http://www.springer.de/comp/lncs/authors.html). A cover sheet with the authors names and affiliations is also requested, with the complete address of the corresponding author, and an abstract (200 words). In addition, three hard copies of the full paper should be mailed to: MCS 2002 Prof. Fabio Roli Dept. of Electrical and Electronic Eng. University of Cagliari Piazza d'armi 09123 Cagliari Italy IMPORTANT NOTICE: Submission implies the willingness of at least one author to register, attend the workshop, and present the paper. Accepted papers will be published in the proceedings only if the registration form and payment for one of the authors will be received. WORKSHOP TOPICS Papers describing original work in the following and related research topics are welcome: Foundations of multiple classifier systems Methods for classifier fusion Design of multiple classifier systems Neural network ensembles Bagging and boosting Mixtures of experts New and related approaches Applications INVITED SPEAKERS Joydeep Ghosh (University of Texas, USA) Trevor Hastie (Stanford University, USA) Sarunas Raudys (Vilnius University, Lithuania) SCIENTIFIC COMMITTEE J. A. Benediktsson (Iceland) H. Bunke (Switzerland) L. P. Cordella (Italy) B. V. Dasarathy (USA) R. P.W. Duin (The Netherlands) C. Furlanello (Italy) J. Ghosh (USA) T. K. 
Ho (USA) S. Impedovo (Italy) N. Intrator (Israel) A.K. Jain (USA) M. Kamel (Canada) L.I. Kuncheva (UK) L. Lam (Hong Kong) D. Landgrebe (USA) D-S. Lee (USA) D. Partridge (UK) A.J.C. Sharkey (UK) K. Tumer (USA) G. Vernazza (Italy) T. Windeatt (UK) IMPORTANT DATES February 1, 2002: Paper Submission March 15, 2002: Notification of Acceptance April 10, 2002: Camera-ready Manuscript April 10, 2002: Registration WORKSHOP VENUE The workshop will be held at Grand Hotel Chia Laguna, Cagliari, Italy. See http://www.crs4.it/~zip/EGVISC95/chia_laguna.html (in English) or http://web.tiscali.it/chialaguna (in Italian). WORKSHOP PROCEEDINGS Accepted papers will appear in the workshop proceedings that will be published in the series Lecture Notes in Computer Science by Springer-Verlag. Extended versions of selected papers will be considered for possible publication in a special issue of the International Journal of Pattern Recognition and Artificial Intelligence. From mvzaanen at science.uva.nl Tue Jan 22 04:19:11 2002 From: mvzaanen at science.uva.nl (Menno van Zaanen) Date: Tue, 22 Jan 2002 10:19:11 +0100 (CET) Subject: CFP: The 6th International Colloquium on Grammatical Inference (fwd) Message-ID: Hello, Could you please put this call for papers on the mailing list for me? Thank you very much and best regards, Menno van Zaanen ICGI-2002 Call for Papers The 6th International Colloquium on Grammatical Inference will be held in Amsterdam, The Netherlands September 11-13th, 2002 http://www.illc.uva.nl/ICGI-2002/ SCOPE ICGI-2002 is the sixth in a series of successful biennial international conferences on the area of grammatical inference. Grammatical inference has been extensively addressed by researchers in information theory, automata theory, language acquisition, computational linguistics, machine learning, pattern recognition, computational learning theory and neural networks. This colloquium aims at bringing together researchers in these fields. Previous editions of this meeting were held in Essex, U.K.; Alicante, Spain; Montpellier, France; Ames, Iowa, USA; and Lisbon, Portugal. AREAS OF INTEREST The conference seeks to provide a forum for presentation and discussion of original research papers on all aspects of grammatical inference including, but not limited to: Different models of grammar induction: e.g., learning from examples, learning using examples and queries, incremental versus non-incremental learning, distribution-free models of learning, learning under various distributional assumptions (e.g., simple distributions), impossibility results, complexity results, characterizations of representational and search biases of grammar induction algorithms. Algorithms for induction of different classes of languages and automata: e.g., regular, context-free, and context-sensitive languages, interesting subsets of the above under additional syntactic constraints, tree and graph grammars, picture grammars, multi-dimensional grammars, attributed grammars, parameterized models, etc. Theoretical and experimental analysis of different approaches to grammar induction, including artificial neural networks, statistical methods, symbolic methods, and information-theoretic approaches. Demonstrated or potential applications of grammar induction in natural language acquisition, computational biology, structural pattern recognition, information retrieval, text processing, adaptive intelligent agents, systems modeling and control, and other domains.
TECHNICAL PROGRAM COMMITTEE Pieter Adriaans, Perot Systems Corporation/University of Amsterdam, Netherlands (Chair) Dana Angluin, Yale University, USA Dick de Jongh, Universiteit van Amsterdam, Netherlands Jerry Feldman, ICSI, Berkeley, USA Colin de la Higuera, EURISE, Univ. de St. Etienne, France Vasant Honavar, Iowa State University, USA Laurent Miclet, ENSSAT, Lannion, France G. Nagaraja, Indian Institute of Technology, Bombay, India Arlindo Oliveira, Lisbon Technical University, Portugal Jose Oncina Carratala, Universidade de Alicante, Spain Rajesh Parekh, Blue Martini, USA Yasubumi Sakakibara, Tokyo Denki University, Japan Enrique Vidal, U. Politecnica de Valencia, Spain Takashi Yokomori, Waseda University, Japan Menno van Zaanen, Universiteit van Amsterdam, Netherlands Thomas Zeugmann, University at Lubeck, Germany CONFERENCE FORMAT The conference will include oral and possibly poster presentations of accepted papers, a small number of tutorials and invited talks. All accepted papers will appear in the conference proceedings. The proceedings of ICGI-2002 will be published by Springer-Verlag as a volume in their Lecture Notes in Artificial Intelligence, a subseries of the Lecture Notes in Computer Science series. SUBMISSION OF PAPERS Prospective authors are invited to submit a draft paper in English with the following format. The cover page should specify: - submission to ICGI-2002 - title, - authors and affiliation, - mailing address, phone, fax, and e-mail address of the contact author, - a brief abstract describing the work, - at least three keywords which can specify typically the contents of the work. Postscript versions of the papers should be formatted according to A4 or 8.5"x11", and the length should not exceed 12 pages excluding the cover page. The technical expositions should be directed to a specialist and should include an introduction understandable to a non specialist that describes the problem studied and the results achieved, focusing on the important ideas and their significance. All paper submissions, review and notification of acceptance will be done electronically through the conference's WWW pages http://www.illc.uva.nl/ICGI-2002/ DEADLINES Submission of manuscripts: April 5, 2002 Notification of acceptance: May 27th, 2002 Final version of manuscript: June 28th, 2002 ORGANIZING COMMITTEE Pieter Adriaans (Chair) Henning Fernau (Co-chair) Menno van Zaanen (Local organization) Marjan Veldhuisen (Secretariat) SPONSORS: ILLC, Institute for Language Logic and Computation OZSL, Dutch Research School in Logic Take a look at: http://www.illc.uva.nl/ICGI-2002/ See you in Amsterdam in September 2002!! +-------------------------------------+ | Menno van Zaanen | "Let him not vow to walk in the dark, | mvzaanen at science.uva.nl | who has not seen the nightfall." | http://www.science.uva.nl/~mvzaanen | -Elrond From terry at salk.edu Tue Jan 22 12:44:11 2002 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 22 Jan 2002 09:44:11 -0800 (PST) Subject: NEURAL COMPUTATION 14:2 Message-ID: <200201221744.g0MHiB917659@purkinje.salk.edu> Neural Computation - Contents - Volume 14, Number 2 - February 1, 2002 ARTICLE On the Complexity of Computing and Learning with Multiplicative Neural Networks Michael Schmitt NOTES A Lagrange Multiplier And Hopfield-Type Barrier Function Method for The Traveling Salesman Problem Chuangyin Dang and Lei Xu The Time-Rescaling Theorem and Its Application to Neural Spike Train Data Analysis Emery N. Brown, Riccardo Barbieri, Valerie Ventura, and Loren M. 
Frank LETTERS The Impact of Spike Timing Variability on the Signal-Encoding Performance of Neural Spiking Models Amit Manwani, Peter N. Steinmetz, and Christof Koch Temporal Correlations in Stochastic Networks of Spiking Neurons Carsten Meyer and Carl van Vreeswijk Measuring Information Spatial Densities Michele Bezzi, Ines Samengo, Stefan Leutgeb and Sheri J. Mizumori Stochastic Trapping in a Solvable Model of On-Line Independent Component Analysis Magnus Rattray A Neural Network-Based Approach to the Double Traveling Salesman Problem Alessio Plebe and Angelo Marcello Anile ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2002 - VOLUME 14 - 12 ISSUES USA Canada* Other Countries Student/Retired $60 $64.20 $108 Individual $88 $94.16 $136 Institution $506 $451.42 $554 * includes 7% GST MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu ----- From doug.leith at may.ie Tue Jan 22 12:20:07 2002 From: doug.leith at may.ie (Douglas Leith) Date: Tue, 22 Jan 2002 17:20:07 -0000 Subject: Senior Research Position (Statistical Machine Learning), Hamilton Institute Message-ID: <018101c1a369$0a3518b0$04000001@DougLaptop> SENIOR RESEARCH POSITION Applications are invited from well qualified candidates for a number of positions at the Hamilton Institute. The successful candidates will be outstanding researchers who can demonstrate an exceptional research track record or significant research potential at international level in the general area of modern statistical and machine learning methods for data intensive probabilistic modelling and reasoning, particularly in the context of time series analysis. We are committed to research excellence. This post offers a unique opportunity for tackling fundamental problems in a leading edge multi-disciplinary research group with state of the art facilities. All appointments are initially for 3 years, extendable to 5, with the potential for permanency. Where appropriate, the potential exists to fund post-doc/post-grad positions in support of this post. Salary scale: $30000- $81000 approx. Further information: Please visit our web site at: http://hamilton.may.ie Enquiries to Prof. Douglas Leith, doug.leith at may.ie From rid at ecs.soton.ac.uk Thu Jan 24 11:17:47 2002 From: rid at ecs.soton.ac.uk (Bob Damper) Date: Thu, 24 Jan 2002 16:17:47 +0000 (GMT) Subject: This Workshop will interest many connectionists Message-ID: EPSRC/BBSRC International Workshop Biologically-Inspired Robotics: The Legacy of W. Grey Walter 14-16 August 2002, Bristol, UK http://www.ecs.soton.ac.uk/~rid/wgw02/home.html Biologically-inspired robots functioning in the real world can provide valuable physical models of biology, but can also provide a radical alternative to conventional methods of designing intelligent systems. The origins and history of this fascinating topic can be traced back to seminal work in the 1940's and 1950's, much of it taking place in the United Kingdom. One of the pioneers of the field was William Grey Walter, a neurophysiologist and amateur engineer who spent the majority of his working life in Bristol. He died in 1977 some time after the road accident that ultimately ended his life. A three-day scientific meeting "Biologically-Inspired Robotics: The Legacy of W. 
Grey Walter" will will take place at Hewlett-Packard Laboratories, Bristol in August 2002 sponsored by the UK Engineering and Physical Sciences Research Council (under the EPSRC Programme in Adaptive and Interactive Behaviour of Animal and Computational Systems) and by the Biotechnology and Biological Sciences Research Council, with additional support from Hewlett-Packard. The workshop will focus on the latest work in this important area of overlap between biology, engineering and computing, with keynote talks from internationally-acclaimed practitioners across the range of disciplines impacting on biologically-inspired robotics. The following invited speakers are confirmed: Michael Arbib (University of Southern California, Los Angeles) Randall Beer (Case Western Reserve University, Cleveland) Valentino Braitenberg (University of Tubingen) Rodney Brooks (MIT, Boston) Gerald Edelman (The Neurosciences Institute, La Jolla) Owen Holland (University of Essex) Rolf Pfeifer (University of Zurich) Mandyam Srinivasan (Australian National University, Canberra) Luc Steels (Sony Computer Science Laboratories, Paris) Contributed papers (maximum 8 pages) are also invited from scientists and engineers keen to publicise their recent work at this high-calibre workshop. Relevant topics include (but are not limited to): Biorobotics Biologically-inspired robot architectures Artificial life and animats Artificial perception Autonomous robots Humanoid robots Learning and adaptation Evolutionary robotics Hardware for biorobotics Applications Communication and cooperation Robot-human interaction Social and collective behaviour History of cybernetics Embodiment and robotics The life and work of Grey Walter Emergence and interaction Neuroethology Philosophical issues Biological basis of intelligence Papers must be submitted electronically according to instructions on the Workshop web site (URL above). The best papers from the workshop will be published in a special, themed issue of the Philosophical Transactions of the Royal Society, the world's longest running scientific journal. The local organising committee are keen to encourage the participation of research students, especially those sponsored by EPSRC and BBSRC. To this end, there will be free registration and accommodation for research council students working a relevant or related area (subject to fairly generous availability) and a special Student Poster Session is being organised. Important Dates Deadline for contributed papers: 1 May 2002 Notification of acceptance/rejection: 3 June 2002 Final submission of revised paper: 1 July 2002 Workshop: 14-16 August 2002 The workshop has been timed to fit in with SAB'02 in Edinburgh and to make attendance at both events maximally convenient. Local Organising Committee Dave Cliff (Hewlett-Packard Laboratories, Bristol) Bob Damper (University of Southampton, Chair) Kerstin Dautenhahn (University of Hertfordshire) Inman Harvey (University of Sussex) Chris Melhuish (University of West of England) Ulrich Nehmzow (University of Essex) Nigel Shadbolt (University of Southampton) Noel Sharkey (University of Sheffield) Barbara Webb (University of Stirling) From peterk at ini.phys.ethz.ch Thu Jan 24 08:56:53 2002 From: peterk at ini.phys.ethz.ch (Peter =?iso-8859-1?Q?K=F6nig?=) Date: Thu, 24 Jan 2002 14:56:53 +0100 Subject: No subject Message-ID: 25.2.-27.2.2002 the EU neuroinformatics workshop "Neural and Artificial Information" will take place in Z?rich. 
This workshop is part of a series of workshops organized by the EU network of Neuroinformatics. It addresses our current understanding of the computational properties of the brain. In particular we want to focus on a discussion of the encoding capabilities of neurons and populations of neurons. One of the goals of this analysis is to assess how our understanding of neural computation can both be facilitated by the construction of synthetic neuronal systems and give rise to novel information processing technology. The different contributions to the workshop are structured around different levels of neuronal organization: subcellular, cellular, circuits, systems. The different contributions aim to elucidate the differences and commonalities between the traditional definition and understanding of computation and that observed in the nervous system. More information and registration forms can be found at www.neuroinf.org With best regards, Peter König, Paul FMJ Verschure and Tim Pearce. -- PD Dr. Peter König +41-1-635 30 60 Institute of Neuroinformatics +41-1-635 30 53 (fax) ETH - University Zürich http://www.ini.unizh.ch/~peterk Winterthurerstr. 190 peterk at ini.phys.ethz.ch 8057 Zürich From engp9354 at nus.edu.sg Thu Jan 24 10:46:13 2002 From: engp9354 at nus.edu.sg (Chu Wei) Date: Thu, 24 Jan 2002 23:46:13 +0800 Subject: Bayesian Inference in Support Vector Machines Message-ID: <9C4C56CDF89E0440A6BD571E76D2387F0131E6F2@exs23.ex.nus.edu.sg> Dear Connectionists: We have recently completed two technical reports in which we apply popular Bayesian techniques in support vector machines to implement hyperparameter tuning. In a probabilistic framework, Bayesian inference is used to implement model adaptation, while keeping the merits of support vector machines, such as sparseness and convex quadratic programming. Another benefit is the availability of probabilistic prediction. The results in numerical experiments verify that the generalization capability of the Bayesian methods is competitive and that it is feasible to tackle reasonably large data sets in this approach. The pdf files of these reports can be accessed at: For regression: http://guppy.mpe.nus.edu.sg/~mpessk/papers/bisvr.pdf For classification: http://guppy.mpe.nus.edu.sg/~mpessk/papers/bitsvc.pdf We are looking forward to your comments to improve this work. Thanks. We attach their abstracts in the following: Title: Bayesian Inference in Support Vector Regression Abstract: In this paper, we apply popular Bayesian techniques to support vector regression. We describe a Bayesian framework in a function-space view with a Gaussian process prior probability over the functions. A unified non-quadratic loss function with the desirable characteristic of differentiability, called the soft insensitive loss function, is used in likelihood evaluation. In the framework, the maximum a posteriori estimate of the functions results in an extended support vector regression problem. Bayesian methods are used to implement model adaptation, while keeping the merits of support vector regression, such as quadratic programming and sparseness. Moreover, we put forward confidence intervals in making predictions. Experimental results on simulated and real-world datasets indicate that the approach works well even on large datasets.
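Both abstracts rest on replacing the usual non-differentiable SVM losses with smooth ones so that Bayesian machinery applies. As a rough illustration of the flavour of such a loss, here is a generic Huber-style smoothing of the epsilon-insensitive loss: zero inside the tube, quadratic in a small transition band, linear beyond it. It is not necessarily the exact soft insensitive loss function of the report, and the parameter names eps and beta are invented:

    import numpy as np

    def smoothed_insensitive_loss(residual, eps=0.1, beta=0.05):
        """Differentiable epsilon-insensitive loss (generic sketch):
        0 for |r| <= eps, quadratic for eps < |r| <= eps + beta,
        linear beyond, with matching value and slope at the joins."""
        a = np.abs(residual) - eps
        return np.where(a <= 0, 0.0,
               np.where(a <= beta, a ** 2 / (2.0 * beta), a - beta / 2.0))

Plugged into a Gaussian process prior, maximising the posterior with such a loss remains a convex problem and keeps the sparseness the abstracts mention, since points lying strictly inside the insensitive tube contribute nothing to the solution.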
Title: Bayesian Inference in Trigonometric Support Vector Classifier Abstract: In the report, we propose a novel classifier, known as the trigonometric support vector classifier, to integrate popular Bayesian techniques with the support vector classifier. We describe a Bayesian framework in a function-space view with a Gaussian process prior probability over the functions. The trigonometric likelihood function, with the desirable characteristics of normalization in likelihood and differentiability, is used in likelihood evaluation. In the framework, the maximum a posteriori estimate of the functions results in an extended support vector classifier problem. Bayesian methods are used to implement model adaptation, while keeping the merits of the support vector classifier, such as sparseness and convex programming. Moreover, we put forward class probabilities in making predictions. Experimental results on artificial and benchmark datasets indicate that the approach works well even on large datasets. Sincerely Wei Chu (engp9354 at nus.edu.sg) S. Sathiya Keerthi (mpessk at nus.edu.sg) Chong Jin Ong (mpeongcj at nus.edu.sg) From erik at bbf.uia.ac.be Thu Jan 24 12:22:29 2002 From: erik at bbf.uia.ac.be (Erik De Schutter) Date: Thu, 24 Jan 2002 18:22:29 +0100 Subject: CNS*2002: Call for papers Message-ID: CALL FOR PAPERS: APPLICATION DEADLINE: February 8, 2002 midnight GMT Eleventh Annual Computational Neuroscience Meeting CNS*2002 July 21 - July 25, 2002 Chicago, Illinois USA http://www.neuroinf.org/CNS.shtml Info at cp at bbf.uia.ac.be CNS*2002 will be held in Chicago from Sunday, July 21, 2002 to Thursday, July 25 in the Congress Plaza Hotel & Convention Center. This is a historic hotel located on Lake Michigan in downtown Chicago. General sessions will be Sunday-Wednesday; Thursday will be a full day of workshops. The conference dinner will be Wednesday night, followed by the rock-n-roll jam session. Papers can include experimental, model-based, as well as more abstract theoretical approaches to understanding neurobiological computation. We especially encourage papers that mix experimental and theoretical studies. We also accept papers that describe new technical approaches to theoretical and experimental issues in computational neuroscience or relevant software packages. The paper submission procedure is new this year: it is at a different web site and makes use of a preprint server. This allows everybody to view papers before the actual meeting and to engage in discussions about submitted papers. PAPER SUBMISSION Papers for the meeting can be submitted ONLY through the web site at http://www.neuroinf.org/CNS.shtml. Papers can be submitted either old style (a 100 word abstract followed by a 1000 word summary) or as a full paper (max 6 typeset pages). In both cases the abstract (100 words max) will be published in the conference program. Submission will occur through a preprint server run by Elsevier; more information can be found on the submission web site. Authors have the option of declaring their submission restricted access, not making it publicly visible. All submissions will be acknowledged by email. It is important to note that this notice, as well as all other communication related to the paper, will be sent to the designated correspondence author only. THE REVIEW PROCESS All submitted papers will be first reviewed by the program committee. Papers will be judged and accepted for the meeting based on the clarity with which the work is described and the biological relevance of the research. For this reason authors should be careful to make the connection to biology clear. We reject only a small fraction of the papers (~ 5%), and this is usually based on absence of biological relevance (e.g. pure artificial neural networks).
We expect to notify authors of meeting acceptance before the end of March. The second stage of review involves evaluation of each submission by two independent referees. The primary objective of this round of review will be to select papers for oral and featured oral presentation. In addition to perceived quality as an oral presentation, the novelty of the research and the diversity and coherence of the overall program will be considered. To ensure diversity, those who have given talks in the recent past will not be selected and multiple oral presentations from the same lab will be discouraged. A second objective of the review is to rank papers for inclusion in the conference proceedings. All accepted papers not selected for oral talks as well as papers explicitly submitted as poster presentations will be included in one of three evening poster sessions. Authors will be notified of the presentation format of their papers by the end of April. CONFERENCE PROCEEDINGS The proceedings volume is published each year as a special supplement to the journal Neurocomputing. In addition the proceedings are published in a hardbound edition by Elsevier Press. Only papers which are made publicly available on the preprint server, which are presented at the CNS meeting and which are no longer than 6 typeset pages will be eligible for inclusion in the proceedings. Authors who only submitted a 1000 word summary will be required to submit a full paper to the preprint server. The proceedings size is limited to 1200 pages (about 200 papers). In case more papers are eligible the lowest ranked papers will not be included in the proceedings but will remain available on the preprint server. Authors will be advised of the status of their papers immediately after the CNS meeting. Submission of final papers will be through the preprint server with a deadline in early October. For reference, papers presented at CNS*99 can be found in volumes 32-33 of Neurocomputing (2000) and those of CNS*00 in volumes 38-40 (2001). INVITED SPEAKERS: Ad Aertsen (Albert-Ludwigs-University, Germany) Leah Keshet (University of British Columbia, Canada) Alex Thomson (University College London, UK) ORGANIZING COMMITTEE: Program chair: Erik De Schutter (University of Antwerp, Belgium) Local organizer: Philip Ulinski (University of Chicago, USA) Workshop organizer: Maneesh Sahani (Gatsby Computational Neuroscience Unit, UK) Government Liaison: Dennis Glanzman (NIMH/NIH, USA) Program Committee: Upinder Bhalla (National Centre for Biological Sciences, India) Avrama Blackwell (George Mason University, USA) Victoria Booth (New Jersey Institute of Technology, USA) Alain Destexhe (CNRS Gif-sur-Yvette, France) John Hertz (Nordita, Denmark) David Horn (University of Tel Aviv, Israel) Barry Richmond (NIMH, USA) Steven Schiff (George Mason University, USA) Todd Troyer (University of Maryland, USA) From rens at science.uva.nl Thu Jan 24 20:03:08 2002 From: rens at science.uva.nl (Rens Bod) Date: Fri, 25 Jan 2002 02:03:08 +0100 (MET) Subject: Volumes on "Data-Oriented Parsing" and "Probabilistic Linguistics" Message-ID: We're finalizing an edited volume on "Data-Oriented Parsing" (CSLI Publications). People interested in the preliminary chapters are welcome to have a look at: http://turing.wins.uva.nl/~rens/dopbook.html Comments are welcome! We are also finalizing a handbook on "Probabilistic Linguistics" (MIT Press).
See http://turing.wins.uva.nl/~rens/ for the relevant link, or go directly to: http://www.ling.canterbury.ac.nz/jen/documents/contents.html Best, Rens Bod From geoff at cns.georgetown.edu Fri Jan 25 14:35:58 2002 From: geoff at cns.georgetown.edu (geoff@cns.georgetown.edu) Date: Fri, 25 Jan 2002 14:35:58 -0500 Subject: Faculty positions Message-ID: <200201251935.g0PJZw102777@jacquet.cns.georgetown.edu> Dear Colleagues, I would like to encourage you to apply for the following faculty positions at Georgetown University. Please feel free to contact me if you have any questions. Geoff Geoffrey J Goodhill, PhD Associate Professor, Department of Neuroscience Georgetown University Medical Center 3900 Reservoir Road NW, Washington DC 20007 geoff at georgetown.edu http://cns.georgetown.edu ------------------- NEUROSCIENCE TENURE-TRACK FACULTY POSITIONS GEORGETOWN UNIVERSITY The Department of Neuroscience is recruiting two new tenure track faculty at the rank of Assistant or Associate Professor. We seek outstanding candidates with research in molecular or developmental neurobiology, neurophysiology, or cognitive, computational or systems neuroscience. We have state-of-the-art core facilities in cellular neurobiology, neuroanatomy, EEG/ERP, as well as animal (7T) and human magnetic resonance (fMRI). We offer an outstanding intellectual and collaborative environment with highly competitive salary and start-up packages. Successful candidates must have a Ph.D. or equivalent, evidence of productivity and innovation, and the potential to establish an independently funded research program. Applications are encouraged from women and underrepresented minorities. To apply send a detailed CV, a two-page statement of research and teaching interests specifying one or more of the research areas noted above, and names of at least three referees to the following address: Neuroscience Search Committee Attn: Janet Bordeaux Department of Neuroscience Georgetown University Medical Center 3900 Reservoir Road, NW Washington DC 20007 http://neuro.georgetown.edu Application review will begin immediately and will continue until positions are filled. Georgetown University is an Equal Opportunity, Affirmation Action Employer. Qualified candidates will receive employment consideration without regard to race, sex, sexual orientations, age, religion, national origin, marital status, veteran status or disability. We are committed to diversity in the workplace. From J.A.Bullinaria at cs.bham.ac.uk Fri Jan 25 10:06:29 2002 From: J.A.Bullinaria at cs.bham.ac.uk (John A Bullinaria) Date: Fri, 25 Jan 2002 15:06:29 +0000 (GMT) Subject: MSc in Natural Computation Message-ID: Studentships/Scholarships for MSc in Natural Computation ======================================================== School of Computer Science (http://www.cs.bham.ac.uk) The University of Birmingham Birmingham, UK Students are invited for an advanced 12 month MSc programme in Natural Computation (i.e. computational systems that use ideas and inspirations from natural biological, ecological and physical systems). This will comprise of six taught modules in Neural Computation, Evolutionary Computation, Molecular and Quantum Computation, Nature Inspired Optimisation, Nature Inspired Learning, and Nature Inspired Design (10 credits each); two mini research projects (30 credits each); and one full scale research project (60 credits). The programme is supported by the EPSRC through its Master's Level Training Packages and by a number of leading companies. 
Our industrial advisory board includes representatives from British Telecom, Unilever, QinetiQ, Rolls Royce, Severn Trent, Pro Enviro, and SPSS. The School of Computer Science at the University of Birmingham has a strong research group in evolutionary and neural computation, with eight members of academic staff (six faculty and two research fellows) currently specialising in these fields: Dr. John Bullinaria (Neural Networks, Evolutionary Computation, Cog.Sci.) Dr. Ke Chen (Neural Networks, Pattern Recognition, Machine Perception) Dr. Aniko Ekart (Genetic Programming, AI, Machine Learning) Dr. Jun He (Evolutionary Computation) Dr. Julian Miller (Evolutionary Computation, Machine Learning) Dr. Jon Rowe (Evolutionary Computation, AI) Dr. Thorsten Schnier (Evolutionary Computation, Engineering Design) Prof. Xin Yao (Evolutionary Computation, NNs, Machine Learning) Other staff members also working in these areas include Prof. Aaron Sloman (evolvable architectures of mind, co-evolution, interacting niches) and Dr. Jeremy Wyatt (evolutionary robotics, classifier systems). The programme is open to candidates with a very good honours degree or equivalent qualifications in Computer Science/Engineering or closely related areas. Several fully funded EPSRC studentships (covering fees and maintenance costs) are available, and additional financial support from our industrial partners may be available during the main project period. Further details about this programme and funding opportunities are available from our Web-site at: http://www.cs.bham.ac.uk/natcomp Please note that the closing date for applications is 15th July 2002. From nando at cs.ubc.ca Fri Jan 25 18:53:32 2002 From: nando at cs.ubc.ca (Nando de Freitas) Date: Fri, 25 Jan 2002 15:53:32 -0800 Subject: Software for particle filtering and dynamic mixtures of Gaussians Message-ID: <3C51EFFC.8A6138CC@cs.ubc.ca> Dear Connectionists, I've made available some matlab code on particle filtering (aka condensation, survival of the fittest, bootstrap filters, sequential Monte Carlo) and a substantially better algorithm when the model at hand is a dynamic conditionally Gaussian representation. The latter algorithm is known as Rao-Blackwellised particle filtering. This algorithm can be interpreted as an efficient stochastic mixture of Kalman filters. That is, as a stochastic bank of Kalman filters. Needless to say, dynamic mixtures of Gaussians arise in many settings, including computer vision, speech processing, music processing, fault diagnosis, and so on. The software also includes efficient state-of-the-art resampling routines. These are generic and suitable for any application of particle filters. The matlab code and links to papers are available at http://www.cs.ubc.ca/~nando/software.html Best, Nando
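For readers who have not used these methods before, the basic bootstrap (sequential importance resampling) filter behind the announcement fits in a few lines. The sketch below is a generic, illustrative Python version for a one-dimensional model, not the released MATLAB code, and it omits the Rao-Blackwellised variant; the model in the usage lines is invented for the example:

    import numpy as np

    def systematic_resample(weights, rng):
        # Systematic resampling: a single uniform draw, low variance, O(N)
        n = len(weights)
        cum = np.cumsum(weights)
        cum[-1] = 1.0                              # guard against round-off
        positions = (rng.random() + np.arange(n)) / n
        return np.searchsorted(cum, positions)

    def bootstrap_filter(observations, propagate, likelihood, x0, seed=0):
        """Generic bootstrap particle filter: propagate, weight, resample."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        means = []
        for y in observations:
            x = propagate(x, rng)                  # sample from the transition prior
            w = likelihood(y, x)                   # weight by the observation likelihood
            w = w / w.sum()
            means.append(float(np.sum(w * x)))     # filtered posterior mean
            x = x[systematic_resample(w, rng)]     # resample to fight weight degeneracy
        return np.array(means)

    # Toy usage with an invented model: x_t = 0.9 x_{t-1} + v_t,  y_t = x_t + e_t
    propagate = lambda x, rng: 0.9 * x + rng.normal(0.0, 0.5, size=x.shape)
    likelihood = lambda y, x: np.exp(-0.5 * ((y - x) / 0.5) ** 2)
    obs = np.sin(np.linspace(0.0, 6.0, 60))        # stand-in observation sequence
    est = bootstrap_filter(obs, propagate, likelihood, x0=np.zeros(1000))

The resampling step is what keeps the weights from collapsing onto a single particle; systematic resampling is one of the standard low-variance schemes such packages provide.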
From zhouzh at nju.edu.cn Sat Jan 26 03:04:37 2002 From: zhouzh at nju.edu.cn (Zhi-Hua Zhou) Date: Sat, 26 Jan 2002 16:04:37 +0800 Subject: paper to be published by AI journal and its code Message-ID: <000c01c1a640$19ae0fc0$03a8a8c0@daniel> Dear Colleagues, Below is a paper accepted by the AI Journal: Zhi-Hua Zhou, Jianxin Wu, Wei Tang. Ensembling neural networks: many could be better than all. Abstract: Neural network ensemble is a learning paradigm where many neural networks are jointly used to solve a problem. In this paper, the relationship between the ensemble and its component neural networks is analyzed in the context of both regression and classification, which reveals that it may be better to ensemble many instead of all of the neural networks at hand. This result is interesting because at present, most approaches ensemble all the available neural networks for prediction. Then, in order to show that the appropriate neural networks for composing an ensemble can be effectively selected from a set of available neural networks, an approach named GASEN is presented. GASEN trains a number of neural networks at first. Then it assigns random weights to those networks and employs a genetic algorithm to evolve the weights so that they can characterize to some extent the fitness of the neural networks in constituting an ensemble. Finally it selects some neural networks based on the evolved weights to make up the ensemble. A large empirical study shows that, compared with some popular ensemble approaches such as Bagging and Boosting, GASEN can generate neural network ensembles with far smaller sizes but stronger generalization ability. Furthermore, in order to understand the working mechanism of GASEN, the bias-variance decomposition of the error is provided in this paper, which shows that the success of GASEN may lie in that it can significantly reduce the bias as well as the variance. The pdf version of this paper is now available at http://cs.nju.edu.cn/people/zhouzh/zhouzh.files/Publication/aij02.pdf The matlab code of GASEN is now available at http://cs.nju.edu.cn/people/zhouzh/zhouzh.files/MLNN_Group/freeware/Gasen.zip Enjoy it! Best Regards Zhihua ----------------------------------------------- Zhi-Hua ZHOU Ph.D. National Lab for Novel Software Technology Nanjing University Hankou Road 22 Nanjing 210093, P.R.China Tel: +86-25-359-3163 Fax: +86-25-330-0710 URL: http://cs.nju.edu.cn/people/zhouzh/ Email: zhouzh at nju.edu.cn ----------------------------------------------- From javier at ergo.ucsd.edu Tue Jan 29 21:54:59 2002 From: javier at ergo.ucsd.edu (movellan) Date: Tue, 29 Jan 2002 18:54:59 -0800 Subject: Tech Report on Development of Gaze Following Message-ID: <3C576083.5287BE27@inc.ucsd.edu> The following report is available at http://mplab.ucsd.edu by following the link to tech reports. The Development of Gaze Following as a Bayesian Systems Identification Problem Javier R. Movellan & John S. Watson UC San Diego UC Berkeley UCSD's Institute for Neural Computation Machine Perception Lab Tech Report 2002.01 We propose a view of gaze following in which infants act as Bayesian learners actively attempting to identify the operating characteristics of the systems with which they interact. We present results of an experiment in which 28 infants (average age 10 months) interacted for a 3 minute period with a non-humanoid robot. For half the infants the robot simulated the contingency structure typically produced by human beings. In particular it provided causal information about the existence of a line of regard. For the other 14 infants, the robot behaved in a manner which was not contingent with the environment. We found that a few minutes of interaction with the contingent robot was sufficient to elicit statistically detectable gaze following. There were clear signs that some of these infants were actively attempting to identify whether or not the robot was responsive to them. We propose that the infant brain is equipped to learn and analyze the contingency structure of real-time social interactions.
Contingency is a fundamental perceptual dimension used by infants to recognize the operational properties of humans and to generalize existing behaviors to new social partners. From m.padgett at ieee.org Wed Jan 30 01:14:25 2002 From: m.padgett at ieee.org (Mary Lou Padgett) Date: Wed, 30 Jan 2002 00:14:25 -0600 Subject: WCCI 2002 Tutorials on Computational Intelligence: Neural Networks, Fuzzy Systems, Evolutionary Computation Message-ID: <5.1.0.14.2.20020130001418.027444b0@pop.mindspring.com> *** REGISTER by FEB. 1 for DISCOUNT on World-Class TUTORIALS *** Conference Home Page: http://www.wcci2002.org/ **************************************************************************** 2002 World Congress on Computational Intelligence Hilton Hawaiian Village Hotel Honolulu, Hawaii May 12-17, 2002 WCCI'02 features three of the most important conferences in the areas of Computational Intelligence. * International Joint Conference on Neural Networks (IJCNN). * IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). * Congress on Evolutionary Computation (CEC). See you in paradise! **************************************************************************** Tutorials Tutorials will be held on Sunday, May 12, 2002. Registrations will be accepted for each tutorial on a first-come, first-served basis until the available space is filled. See details on the website: http://www.wcci2002.org/tutorial.html Sunday, May 12, 2002 8:00 AM - 10:00 AM Hans-Paul Schwefel An Introduction to Evolutionary Computation Jacek Zurada An Introduction to Neural Networks Raghu Krishnapuram An Introduction to Fuzzy Systems Marimuthu Palaniswami Support Vector Machines 10:15 AM to 12:15 PM Russ Eberhart and James Kennedy Particle Swarm Optimization Peter J. Angeline Evolutionary Algorithms for Program Induction Paul Werbos Neural Nets For Diagnostics, Prediction and Control: Capabilities & Myths Mitra Basu An Introduction to Biological Sequence Analysis Wlodzislaw Duch Computational Intelligence for Data Mining 12:30 to 2:30 PM Dipankar Dasgupta Artificial Immune Systems David Corne Computational Intelligence for Scheduling Dennis Fernandez The Legal Aspects of Computational Intelligence Ernst Niebur Attention and Selection Ling Guan Intelligent Multimedia Processing Peter Adlassnig Fuzzy Systems in Biomedicine 2:45 PM to 4:45 PM Claus Wilke Artificial Life Systems DeLiang Wang Neural Networks for Scene Analysis Kalyanmoy Deb Evolutionary Multiobjective Optimization Robert G. Reynolds An Introduction to Cultural Algorithms Bayya Yegnanarayana Neural Network Models for Speech and Image Processing Jerry Mendel Type-2 Fuzzy Logic: Expanded and Enhanced Fuzzy Logic **************************************************************************** Questions or comments: Mary Lou Padgett, WCCI 2002 Tutorial Chair, m.padgett at ieee.org. **************************************************************************** From malchiodi at dsi.unimi.it Wed Jan 30 10:54:16 2002 From: malchiodi at dsi.unimi.it (Dario Malchiodi) Date: Wed, 30 Jan 2002 16:54:16 +0100 Subject: CFP special session on "Learning with confidence" Message-ID: <3C581728.90204@dsi.unimi.it> Many apologizes for cross-posting SCI2002 Sixth World Multiconference on Systemics, Cybernetics and Informatics July 14-18, 2002 ~ Orlando, Florida Special Session: Learning with confidence http://laren.dsi.unimi.it/SCI2002/ Call for papers Leaving the asymptotic learnability results of early sixties, for instance from E. Gold or A. 
Gill, modern theories consider learning as a statistical operation, possibly based on highly structured sample values, possibly done in a very poor probabilistic framework. In this scenario the target of our learning task is generally a function that is a random object, and we want to frame its variability within a set of possible realizations with satisfactory confidence. Under a computational perspective this problem reads in terms of sample complexity for a given accuracy (a relevant measure of the width of the realization set) and confidence. With the aim of locating the learning task on one or the other side of the exponential complexity divide, former results came from rather elementary probabilistic modeling based on binomial experiments and sharp bounds such as those coming from the Chernoff inequality. Subsequent comparisons of algorithms' efficiency on the same learning task led to the employment of more sophisticated statistical tools to identify very accurate confidence intervals, in relation to both sample properties - such as their distribution law or error rate - and structural constraints - such as the allowed complexity of the statistics. These theoretical improvements allow one, for instance, to distinguish between different degrees of the polynomials describing sample complexities of algorithms for learning a monotone DNF under proper probability hypotheses on the example space. Many efforts have also been devoted to the confidence intervals for the shape of continuous functions, with results concerning trained neural networks as well. The session aims at collecting contributions by researchers involved in these topics. The special perspective is the exploitation of relations between the randomness of the training examples and their mutual dependence exactly denoted by the function we want to discover from them. Submissions A 2000-character abstract should be submitted in electronic format (preferably in PDF, but PostScript or MS Word are also acceptable formats) to apolloni at dsi.unimi.it by February 23, 2002, using the subject line "SCI2002 Special session submission". After notification of acceptance the authors will have to submit, by April 5, 2002, an extended abstract not exceeding the length of six pages. Please do not send your papers to the SCI2002 secretariat. All papers must be presented by one of the authors, who must pay the registration fee. For more information about the general conference please see http://www.iiisci.org/sci2002/. Session Chair Bruno Apolloni Dipartimento di Scienze dell'Informazione Universita' degli Studi di Milano Via Complico 39/41, I-20153 Milano - Italy Phone: +39 02 503 16284 Fax: +39 02 503 16288 E-mail: apolloni at dsi.unimi.it
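To make the Chernoff-style starting point mentioned in the session description above concrete: for m independent examples with observed error frequency p_hat and true error p, Hoeffding's inequality gives P(|p_hat - p| >= eps) <= 2*exp(-2*m*eps^2), so m >= ln(2/delta)/(2*eps^2) examples suffice to pin the true error within eps at confidence 1 - delta. The numbers below are only an illustration, not taken from the call:

    import math
    eps, delta = 0.05, 0.01
    m = math.log(2.0 / delta) / (2.0 * eps ** 2)   # Hoeffding-based sample size
    print(math.ceil(m))                            # -> 1060 examples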
From rsun at cecs.missouri.edu Wed Jan 30 13:58:26 2002 From: rsun at cecs.missouri.edu (rsun at cecs.missouri.edu) Date: Wed, 30 Jan 2002 12:58:26 -0600 Subject: Cognitive Systems Research, Volume 2, Issue 4 Message-ID: <200201301858.g0UIwQY02425@ari1.cecs.missouri.edu> The new issue of Cognitive Systems Research: --------------------------------------------------------- Table of Contents for Cognitive Systems Research Volume 2, Issue 4, December 2001 Noel E. Sharkey and Tom Ziemke Mechanistic versus phenomenal embodiment: Can robot embodiment lead to strong AI? 251-262 H. John Caulfield, John L. Johnson, Marius P. Schamschula and Ramarao Inguva A general model of primitive consciousness 263-272 Tarja Susi and Tom Ziemke Social cognition, artefacts, and stigmergy: A comparative analysis of theoretical frameworks for the understanding of artefact-mediated collaborative activity 273-290 Book review Ezequiel A. Di Paolo The Mechanization of the Mind: On the Origins of Cognitive Science, Jean-Pierre Dupuy, Princeton University Press, 2000. Conference review D. Van Rooy Report on the Fourth International Conference on Cognitive Modeling 297-300 Online access to full text articles for Cognitive Systems Research is available to those readers whose library has subscribed to Cognitive Systems Research via ScienceDirect Digital Collections. For subscription information, see: http://www.elsevier.nl/locate/cogsys http://www.elsevier.com/locate/cogsys Copyright 2002, Elsevier Science, All rights reserved. =========================================================================== Prof. Ron Sun http://www.cecs.missouri.edu/~rsun CECS Department phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 201 Engineering Building West Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun http://www.cecs.missouri.edu/~rsun/journal.html =========================================================================== From john at eyelab.psy.msu.edu Thu Jan 31 09:34:16 2002 From: john at eyelab.psy.msu.edu (John M. Henderson) Date: Thu, 31 Jan 2002 09:34:16 -0500 Subject: faculty position in computational vision/visual cognition Message-ID: <5.0.2.1.2.20020131093108.052ce9c8@eyelab.msu.edu> MICHIGAN STATE UNIVERSITY DEPARTMENT OF PSYCHOLOGY AND COGNITIVE SCIENCE PROGRAM Computational Vision/Visual Cognition. The Department of Psychology and the Cognitive Science Program at Michigan State University invite applications for a tenure-system position at the rank of Assistant or Associate Professor. We are seeking candidates who study vision or visual cognition by combining computational modeling or hardware implementation with behavioral, psychophysical, and/or cognitive neuroscience techniques. The successful candidate will be appointed by Psychology, the tenure home department, and will be affiliated with the Cognitive Science Program and a newly funded NSF IGERT (Integrative Graduate Education and Research Training) grant in cognitive science (http://cogsci.msu.edu/). We encourage applications from individuals pursuing research questions in areas such as (but not limited to) visual attention, eye movement control, visually guided action, spatial navigation, object recognition, and scene perception. Women and minority-group candidates are strongly urged to apply. The individual must have a strong research program capable of attracting extramural support. The position begins August 16, 2002 (pending final administrative approval). Salary and rank will depend on the candidate's qualifications and experience. Review of applications will begin March 1, 2002 and continue until a suitable candidate is identified. Send a letter of application, vitae, (p)reprints and three letters of reference to: John M. Henderson, Chair, Computational Vision Search Committee, Department of Psychology, Michigan State University, 121 Psychology Research Building, East Lansing, MI 48824-1117. MSU is an AA/EO employer.
From lgraham at jhu.edu Thu Jan 31 15:44:09 2002 From: lgraham at jhu.edu (Laura Graham) Date: Thu, 31 Jan 2002 15:44:09 -0500 Subject: NSF Supported Summer Internships at Johns Hopkins for Undergraduates Message-ID: Dear Colleague: The Center for Language and Speech Processing at Johns Hopkins University is offering a unique summer internship opportunity, which we would like you to bring to the attention of your best students in the current junior class. Only two weeks remain for students to apply for these internships. This internship is unique in the sense that the selected students will participate in cutting-edge research as full members alongside leading scientists from industry, academia, and the government. What makes the internship exciting is the exposure of undergraduate students to the emerging fields of language engineering, such as automatic speech recognition (ASR), natural language processing (NLP), machine translation (MT), and speech synthesis (TTS). We are specifically looking to attract new talent into the field and, as such, do not require the students to have prior knowledge of language engineering technology. Please take a few moments to nominate suitable bright students who may be interested in this internship. On-line applications for the program can be found at http://www.clsp.jhu.edu/ along with additional information regarding plans for the 2002 Workshop and information on past workshops. The application deadline is February 15, 2002. If you have questions, please contact us by phone (410-516-4237), e-mail (sec at clsp.jhu.edu) or via the Internet at http://www.clsp.jhu.edu. Sincerely, Frederick Jelinek J.S. Smith Professor and Director Project Descriptions for this Summer 1. Weakly Supervised Learning For Wide-Coverage Parsing Before a computer can try to understand or translate a human sentence, it must identify the phrases and diagram the grammatical relationships among them. This is called parsing. State-of-the-art parsers correctly guess over 90% of the phrases and relationships, but make some errors on nearly half the sentences analyzed. Many of these errors distort any subsequent automatic interpretation of the sentence. Much of the problem is that these parsers, which are statistical, are not "trained" on enough example parses to know about many of the millions of potentially related word pairs. Human labor can produce more examples, but still too few by orders of magnitude. In this project, we seek to achieve a quantum advance by automatically generating large volumes of novel training examples. We plan to bootstrap from up to 350 million words of raw newswire stories, using existing parsers to generate the new parses together with confidence measures. We will use a method called co-training, in which several reasonably good parsing algorithms collaborate to automatically identify one another's weaknesses (errors) and to correct them by supplying new example parses to one another. This accuracy-boosting technique has widespread application in other areas of machine learning, natural language processing and artificial intelligence. Numerous challenges must be faced: how do we parse 350 million words of text in less than a year (we have 6 weeks)? How do we use partly incompatible parsers to train one another? Which machine learning techniques scale up best? What kind of grammars, probability models, and confidence measures work best? The project will involve a significant amount of programming, but the rewards should be high.
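To make the co-training idea above concrete, here is a schematic sketch in which two generic classifiers on synthetic two-view data stand in for the actual parsers; all names, sizes, and confidence thresholds are illustrative only, not the workshop's system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Two feature "views" of the same examples play the role of two parsers'
    # independent sources of evidence about the (hidden) correct label.
    n = 600
    y_true = rng.integers(0, 2, size=n)
    view_a = y_true[:, None] + 0.8 * rng.normal(size=(n, 5))
    view_b = y_true[:, None] + 0.8 * rng.normal(size=(n, 5))

    # Small seed of "hand-labeled" data containing both classes.
    idx0 = np.where(y_true == 0)[0][:10]
    idx1 = np.where(y_true == 1)[0][:10]
    labeled = sorted(idx0.tolist() + idx1.tolist())
    pseudo_y = {i: int(y_true[i]) for i in labeled}
    unlabeled = [i for i in range(n) if i not in set(labeled)]

    clf_a, clf_b = LogisticRegression(), LogisticRegression()

    for _ in range(10):                      # co-training rounds
        y_pool = np.array([pseudo_y[i] for i in labeled])
        clf_a.fit(view_a[labeled], y_pool)
        clf_b.fit(view_b[labeled], y_pool)
        # Each learner labels the unlabeled examples it is most confident
        # about; those machine-labeled examples then join the training pool.
        for clf, view in ((clf_a, view_a), (clf_b, view_b)):
            if not unlabeled:
                break
            proba = clf.predict_proba(view[unlabeled])
            best = np.argsort(-proba.max(axis=1))[:10]
            for j in sorted(best.tolist(), reverse=True):
                idx = unlabeled.pop(j)
                pseudo_y[idx] = int(proba[j].argmax())
                labeled.append(idx)

    print("training-pool size:", len(labeled))
    print("view-A accuracy:", float((clf_a.predict(view_a) == y_true).mean()))

In the real setting each "view" would be a different parser, the pseudo-labels would be parses rather than class labels, and examples would be exchanged only when the labeling parser's confidence measure is high.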
2. Novel Speech Recognition Models for Arabic Previous research on large-vocabulary automatic speech recognition (ASR) has mainly concentrated on European and Asian languages. Other language groups have been explored to a lesser extent, for instance Semitic languages like Hebrew and Arabic. These languages possess certain characteristics that present problems for standard ASR systems. For example, their written representation does not contain most of the vowels present in the spoken form, which makes it difficult to utilize textual training data. Furthermore, they have a complex morphological structure, characterized not only by a high degree of affixation but also by the interleaving of vowel and consonant patterns (so-called "non-concatenative morphology"). This leads to a large number of possible word forms, which complicates the robust estimation of statistical language models. In this workshop group we aim to develop new modeling approaches to address these and related problems, and to apply them to the task of conversational Arabic speech recognition. We will develop and evaluate a multi-linear language model, which decomposes the task of predicting a given word form into predicting more basic morphological patterns and roots. Such a language model can be combined with a similarly decomposed acoustic model, which necessitates new decoding techniques based on modeling statistical dependencies between loosely coupled information streams. Since one pervading issue in language processing is the tradeoff between language-specific and language-independent methods, we will also pursue an alternative control approach which relies on the capabilities of existing, language-independent recognition technology. Under this approach no morphological analysis will be performed and all word forms will be treated as basic vocabulary units. Furthermore, acoustic model topologies will be used which specify short vowels as optional rather than obligatory elements, in order to facilitate the use of text documents as language model training data. Finally, we will investigate the possibility of using large, generally available text and audio sources to improve the accuracy of conversational Arabic speech recognition.
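The "multi-linear" decomposition described above can be pictured with a toy sketch: the probability of a word form is factored into loosely coupled root and pattern streams, roughly P(word) = P(root | previous root) * P(pattern | previous pattern). The hand-segmented corpus, the smoothing, and the symbols below are invented purely for illustration and are not the workshop's model.

    from collections import Counter, defaultdict

    # Hypothetical hand-segmented data: each token is a (root, pattern) pair;
    # the surface word form would be the consonantal root interleaved with the
    # vowel pattern (non-concatenative morphology).
    corpus = [
        [("ktb", "CaCaC"), ("drs", "CuCiC")],
        [("ktb", "CuCiC"), ("ktb", "CaCaC")],
        [("drs", "CaCaC"), ("ktb", "CuCiC")],
    ]

    def bigram_model(stream):
        counts = defaultdict(Counter)
        for sent in stream:
            prev = "<s>"
            for sym in sent:
                counts[prev][sym] += 1
                prev = sym
        def prob(sym, prev, alpha=0.5, vocab=20):
            c = counts[prev]
            # Add-alpha smoothing over an assumed vocabulary size.
            return (c[sym] + alpha) / (sum(c.values()) + alpha * vocab)
        return prob

    p_root = bigram_model([[r for r, _ in sent] for sent in corpus])
    p_pattern = bigram_model([[p for _, p in sent] for sent in corpus])

    def p_word(root, pattern, prev_root, prev_pattern):
        # Multi-linear factorization: the word-form probability is decomposed
        # into loosely coupled root and pattern streams instead of being
        # estimated over the huge vocabulary of full surface forms.
        return p_root(root, prev_root) * p_pattern(pattern, prev_pattern)

    print(p_word("ktb", "CuCiC", prev_root="<s>", prev_pattern="<s>"))

Keeping the streams loosely coupled is what keeps the number of parameters manageable despite the very large number of possible Arabic word forms.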
3. Generation from Deep Syntactic Representation in Machine Translation Let's imagine a system for translating a sentence from a foreign language (say Arabic) into your native language (say English). Such a system works as follows. It analyzes the foreign-language sentence to obtain a structural representation that captures its essence, i.e. "who did what to whom where." It then translates (or transfers) the actors, actions, etc. into words in your language while "copying over" the deeper relationship between them. Finally it synthesizes a syntactically well-formed sentence that conveys the essence of the original sentence. Each step in this process is a hard technical problem, to which the best-known solutions are either not adequate for applications, or good enough only in narrow application domains, failing when applied to other domains. This summer, we will concentrate on improving one of these three steps, namely the synthesis (or generation). The target language for generation will be English, and the source languages to the MT system will be of a completely different type (Arabic and Czech). We will further assume that the transfer step produces a fairly deeply analyzed sentence structure. The incorporation of this deep analysis makes the whole approach very novel - so far no large-coverage translation system has tried to operate with such a structure, and the application to very diverse languages makes it an even more exciting enterprise! Within the generation process, we will focus on the structural (syntactic) part, assuming that a morphological generation module exists to complete the generation process; it will be added to the suite so that we can evaluate the final result, namely the quality of the plain English text coming out of the system. Statistical methods will be used throughout. A significant part of the workshop preparation will be devoted to assembling and running a simplified MT system from Arabic/Czech to English (up to the syntactic structure level), in order to have realistic training data for the workshop project. As a consequence, we will not only understand and solve the generation problem, but also learn the mechanics of an end-to-end MT system, preparing team members to work on other parts of the MT system in the future. 4. SuperSID: Exploiting High-level Information for High-performance Speaker Recognition Identifying individuals based on their speech is an important component technology in many applications, be it automatically tagging speakers in the transcription of a board-room meeting (to track who said what), user verification for computer security, or picking out a known terrorist or narcotics trader among millions of ongoing satellite telephone calls. How do we recognize the voices of the people we know? Generally, we use multiple levels of speaker information conveyed in the speech signal. At the lowest level, we recognize a person based on the sound of his/her voice (e.g., low/high pitch, bass, nasality, etc.). But we also use other types of information in the speech signal to recognize a speaker, such as a unique laugh, particular phrase usage, or speed of speech, among other things. Most current state-of-the-art automatic speaker recognition systems, however, use only the low-level sound information (specifically, very short-term features based on purely acoustic signals computed on 10-20 ms intervals of speech) and ignore higher-level information. While these systems have shown reasonably good performance, there is much more information in speech that can be used and that could greatly improve accuracy and robustness. In this workshop we will look at how to augment traditional signal-processing-based speaker recognition systems with such higher-level knowledge sources. We will be exploring ways to define speaker-distinctive markers and to create new classifiers that make use of these multi-layered knowledge sources. The team will be working on a corpus of recorded telephone conversations (the Switchboard I and II corpora) that have been transcribed both by humans and by machine and have been augmented with a rich database of phonetic and prosodic features. A well-defined performance evaluation procedure will be used to measure progress and the utility of newly developed techniques.
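One simple way to picture the combination of low-level and high-level knowledge sources described above is score-level fusion: a GMM-style acoustic score and an "idiolect" (word-usage) score are computed separately and then combined with weights. The sketch below is a generic illustration under those assumptions, not the SuperSID recipe; the data, features, and fusion weights are made up.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # Low-level (acoustic) system: GMMs over cepstral-like frame vectors.
    target_frames = rng.normal(0.3, 1.0, size=(2000, 12))  # target speaker's training speech
    background = rng.normal(0.0, 1.0, size=(4000, 12))     # background ("everyone else") speech
    test_frames = rng.normal(0.3, 1.0, size=(500, 12))     # unknown test utterance

    target_gmm = GaussianMixture(n_components=4, random_state=0).fit(target_frames)
    ubm = GaussianMixture(n_components=4, random_state=0).fit(background)

    # Average log-likelihood ratio of the test frames (GMM-UBM style score).
    acoustic_score = target_gmm.score(test_frames) - ubm.score(test_frames)

    # High-level (idiolect) system: word-usage similarity from transcripts.
    def word_usage_score(test_words, target_words):
        vocab = sorted(set(test_words) | set(target_words))
        t = np.array([test_words.count(w) for w in vocab], dtype=float)
        s = np.array([target_words.count(w) for w in vocab], dtype=float)
        return float(t @ s / (np.linalg.norm(t) * np.linalg.norm(s)))  # cosine similarity

    idiolect_score = word_usage_score(
        "you know i mean basically you know".split(),
        "you know what i mean you know".split(),
    )

    # Score-level fusion of the two knowledge sources (weights invented here).
    fused = 0.7 * acoustic_score + 0.3 * idiolect_score
    print("acoustic:", round(acoustic_score, 3),
          "idiolect:", round(idiolect_score, 3),
          "fused:", round(fused, 3))

In practice the fusion weights would be trained on held-out data, and the high-level scores would come from the phonetic and prosodic annotations mentioned above rather than raw word counts.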