From nello at wald.ucdavis.edu Sat Nov 1 11:22:19 2003 From: nello at wald.ucdavis.edu (Nello Cristianini) Date: Sat, 1 Nov 2003 08:22:19 -0800 (PST) Subject: statistical learning and bioinfo positions at uc davis Message-ID: <20031101081137.N31138-100000@anson.ucdavis.edu> University of California at Davis Department of Statistics The Department of Statistics invites applications for two faculty positions that will start on July 1, 2004. Each position is either at the tenure-track Assistant Professor rank or at the tenured Associate Professor rank, depending on qualifications. Applicants must have a Ph.D. in Statistics or a related field. An outstanding research and teaching record is required for an appointment with tenure, and a demonstrated interest and ability to achieve such a record is required for a tenure-track appointment. Preferred research areas are computational statistics, statistical learning, bioinformatics/biostatistics, or time series/spatial statistics. Candidates with demonstrated research interest in statistical theory motivated by complex applications, such as methods for the analysis of high-dimensional data from longitudinal or imaging sources, or genomics/proteomics, are strongly encouraged to apply. The successful candidates will be expected to teach at both the undergraduate and graduate levels. UC Davis has launched Bioinformatics and Computational Science initiatives, and has recently established a graduate program in Biostatistics, in addition to the existing Ph.D./M.S. program in Statistics. Information about the department and programs offered can be found at http://www-stat.ucdavis.edu/ . Send a letter of application, including a statement of research interests, curriculum vitae with publication list, at least three letters of reference, relevant reprints/preprints, and transcripts (applicants with Ph.D.
obtained in 2002 or later) to: Chair, Search Committee Department of Statistics 1 Shields Avenue University of California Davis, CA 95616. All e-mail to: search at wald.ucdavis.edu Review of applications will begin on Dec. 1, 2003, and will continue until the positions are filled. The University of California is an affirmative action/equal opportunity employer with a strong institutional commitment to the achievement of diversity among its faculty and staff. From abr2001 at med.cornell.edu Mon Nov 3 10:02:25 2003 From: abr2001 at med.cornell.edu (Adrian B. Robert) Date: Mon, 3 Nov 2003 10:02:25 -0500 Subject: Position in computational neuroscience/informatics Message-ID: Computational Neuroscience Research Associate and Neuroinformatics Research Associate Starting 1 January 2004 At the Laboratory of Neuroinformatics in New York City, funded by the NIH's Human Brain Project. The Laboratory of Neuroinformatics at Cornell's Weill Medical College in New York City seeks researchers at the post-doc level or above to join a team developing an integrated suite of analytic algorithms, parallel computational resources, databases, tools, and standards for data and algorithm description and exchange. Successful candidates will have a background in computational neuroscience or neuroinformatics, and combine the ability to work in a team setting with creativity and initiative. We provide generous salary and benefits, the excitement of life in New York, and the opportunity to work with a dedicated group of neuroinformatic developers and neurophysiologists, including Daniel Gardner, Jonathan D. Victor, and distinguished collaborators in neural data acquisition, analysis, and algorithm development. Our work in computational neuroinformatics is a component of the NIH's Human Brain Project, funded by NIMH. 
To advance our understanding of neural coding, we are developing a suite of information-theoretic algorithms as public resources and for application to data in our neurophysiology databases via a linked dedicated computational array. The project complements our development at neurodatabase.org of searchable neurophysiology databases containing spike train and other microelectrode data and allied descriptive metadata including recording site, technique, and stimulus. To aid data sharing and interoperability among neurodatabases, we are creating BrainML, an XML-based multilevel data description suite for neuroscience. Computational Neuroscience candidates should have solid experience using information-theoretic or other statistical or analytic techniques to analyze neurophysiologic or similar experimental data. Programming skills in C and Matlab are essential, and experience in large-scale computation or numerical analysis is helpful. Neuroinformatics candidates will bring experience with informatics techniques including database design, Java programming, and/or XML, as well as a background in neuroscience, bioinformatics, or medical informatics. Those attending the Society for Neuroscience meeting may contact Daniel Gardner via the placement service (employer number 204240); otherwise, email CV, letter of interest, and the names of three references to: dan at med.cornell.edu. From bower at uthscsa.edu Mon Nov 3 19:17:09 2003 From: bower at uthscsa.edu (james Bower) Date: Mon, 03 Nov 2003 18:17:09 -0600 Subject: Assistant Professor in Cognitive and/or Systems Neuroscience. Message-ID: ASSISTANT FACULTY POSITION IN COGNITIVE AND/OR SYSTEMS NEUROSCIENCE AT THE UNIVERSITY OF TEXAS - SAN ANTONIO The Department of Biology at The University of Texas at San Antonio (UTSA) (http://www.utsa.edu) invites applications for a tenure track Assistant Professor faculty position in the general area of Cognitive and/or Systems Neurobiology, broadly defined.
Neuroscience is a rapidly growing emphasis at UTSA led by the Cajal Neuroscience Research Institute (http://bio.utsa.edu/Cajal/), an interdepartmental organization of neuroscience faculty. Applicants' research interests should include experimental and computational techniques but can apply to any area of neurobiological research. The Department of Biology consists of 30 faculty members, and offers a B.S. degree in Biology, M.S. degrees in Biology and Biotechnology, and a Ph.D. degree in Biology with an emphasis in Neurobiology. New research facilities will be an integral part of an $83 million, 228,000 square foot Biological Sciences Building currently under construction. Advanced neuroimaging facilities are also available through the Research Imaging Center at the University of Texas Health Science Center San Antonio. A competitive start-up package is available. We are especially interested in candidates committed to our mission of research and teaching within a diverse student body. Additional information on the search process, UTSA, as well as the benefits of living and working in 'Ol San Antone can be found at: http://bio.utsa.edu/faculty-recruitment -- James M. Bower Ph.D. Research Imaging Center University of Texas Health Science Center at San Antonio 7703 Floyd Curl Drive San Antonio, TX 78284-6240 Cajal Neuroscience Center University of Texas San Antonio Phone: 210 567 8080 Fax: 210 567 8152 From Mayank_Mehta at brown.edu Tue Nov 4 13:45:53 2003 From: Mayank_Mehta at brown.edu (Mayank Mehta) Date: Tue, 04 Nov 2003 13:45:53 -0500 Subject: Assistant or Associate Professor Tenure Track Faculty Message-ID: <5.2.1.1.2.20031104133552.00b97d90@postoffice.brown.edu> Neuroscientist Assistant or Associate Professor Tenure Track Faculty Brown University Medical School The Department of Neuroscience at Brown University announces a tenure-track position at the Assistant or Associate Professor level.
Research areas of particular interest to the Department include synaptic function and plasticity, molecular and cellular neurobiology, sensory processing, motor control, and development. The Ph.D. or M.D. degree and at least 2 years of relevant postdoctoral training are required. Neuroscience at Brown University is undergoing a significant expansion that includes additional positions, new research buildings and new facilities for transgenic mice. This expanded research infrastructure will complement existing state-of-the-art facilities for molecular biology, imaging, multielectrode recording and MRI. Criteria for Assistant Professor: Ph.D. or M.D.; two or more years of postdoctoral experience in research; ability for independent research and potential to secure external funding to support a scholarly research program; potential to be an effective teacher and mentor for undergraduate and graduate students. Criteria for Associate Professor: Ph.D. or M.D. Teaching: effectiveness as a lecturer and mentor of graduate students and postdoctoral fellows; ability to direct courses at the undergraduate and graduate level. Research: record of scholarly productivity through publication in peer-reviewed journals and a record of presentation at scientific meetings and research seminars; established program of research with continuity of funding and continuous productivity; national reputation for excellence in scholarship and a record of professional service to the scientific and academic community. Applications received by December 15, 2003 will be given full consideration. Please specify whether you are applying at the Assistant or Associate Professor level. Submit a curriculum vitae, a set of representative reprints, a concise description of research interests and goals, and arrange for three (Assistant Professor) or five (Associate Professor) letters of reference to be sent to: David Berson, Ph.D.
Department of Neuroscience Brown University Search Committee Box 1953 Providence, RI 02912 Brown University is an Affirmative Action/Equal Opportunity employer Mayank R. Mehta Brown University Department of Neuroscience Providence, RI 02912-1953 URL: http://neuroscience.brown.edu/mehta.html From pam_reinagel at hms.harvard.edu Wed Nov 5 14:38:40 2003 From: pam_reinagel at hms.harvard.edu (Pamela Reinagel) Date: Wed, 5 Nov 2003 11:38:40 -0800 Subject: UCSD Comp Neurosci Program: Call for Applications Message-ID: <01d501c3a3d4$6d96a7d0$ad9eef84@RLABCOMP2> Call for Applications: PhD in Computational Neurobiology The Computational Neurobiology Graduate Program at the University of California at San Diego is designed to train young scientists with the broad range of scientific and technical skills that are essential to understand the computational resources of neural systems. This program welcomes students with backgrounds in physics, chemistry, biology, psychology, computer science and mathematics, with courses and research programs that reflect the uniquely computational properties of nervous systems. Complete details of the program can be found at: http://www.biology.ucsd.edu/grad/CN_overview.html To apply to the Computational Neurobiology Program, fill out the Biology Pre-Application form, indicating "Computational Neurobiology" as your first area of interest: http://www.biology.ucsd.edu/grad/admissions/preapp_main.html If your pre-application is judged to be competitive, we will send you a complete application package. Pre-application by Dec 1, 2003 is recommended to meet the full application deadline of Jan 2, 2004.
From terry at salk.edu Thu Nov 6 14:00:32 2003 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 6 Nov 2003 11:00:32 -0800 (PST) Subject: NIPS Registration deadline Message-ID: <200311061900.hA6J0WL59653@purkinje.salk.edu> NIPS 2003 early registration deadline: Saturday, November 8, 2003 (midnight PST) REGISTRATION WEBSITE: http://www.nips.snl.salk.edu/ NIPS 2003, the 17th Annual Conference on Neural Information Processing Systems, will be held in Vancouver, British Columbia, Canada, December 8-11, 2003. The Post-Conference Workshops will be held in Whistler, B.C., December 12-13, 2003. From mzib at ee.technion.ac.il Sun Nov 9 13:34:41 2003 From: mzib at ee.technion.ac.il (Michael Zibulevsky) Date: Sun, 9 Nov 2003 20:34:41 +0200 (IST) Subject: new paper: Relative Optimization for Blind Deconvolution Message-ID: Dear Colleagues, we would like to announce the following paper: "Relative Optimization for Blind Deconvolution" by A. Bronstein, M. Bronstein and M. Zibulevsky Abstract - We propose a relative optimization framework for quasi-maximum likelihood (ML) blind deconvolution and the relative Newton method as its particular instance. Special Hessian structure allows fast approximate Hessian construction and inversion with complexity comparable to that of gradient methods. Sequential optimization with gradual change of the smoothing parameter makes the proposed algorithm very accurate for sparse or uniformly-like distributed signals. We also propose the use of rational IIR restoration kernels, which constitute a richer family of filters than the traditionally used FIR kernels. Simulation results demonstrate the efficiency of the proposed methods. URL of the pdf file: http://ie.technion.ac.il/~mcib/ or http://visl.technion.ac.il/bron/alex/publications.html =========================================================================== Michael Zibulevsky, Ph.D.
Email: mzib at ee.technion.ac.il Department of Electrical Engineering Phone: 972-4-829-4724 Technion - Israel Institute of Technology Haifa 32000, Israel http://ie.technion.ac.il/~mcib/ Fax: 972-4-829-4799 =========================================================================== From auke.ijspeert at epfl.ch Tue Nov 11 13:56:29 2003 From: auke.ijspeert at epfl.ch (Auke Ijspeert) Date: Tue, 11 Nov 2003 19:56:29 +0100 Subject: [2nd CFP] From Animals to Animats 8, SAB'04, 13-17 July 2004, Los Angeles. Message-ID: <3FB130DD.9040605@epfl.ch> ================================================================ We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties. ================================================================ SECOND CALL FOR PAPERS FROM ANIMALS TO ANIMATS 8 The Eighth International Conference on the SIMULATION OF ADAPTIVE BEHAVIOR (SAB'04) http://www.isab.org/sab04 An International Conference organized by The International Society for Adaptive Behavior (ISAB) 13-17 July 2004, Los Angeles, USA The objective of this interdisciplinary conference is to bring together researchers in computer science, artificial intelligence, alife, control, robotics, neurosciences, ethology, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow natural and artificial animals to adapt and survive in uncertain environments. The conference will focus on experiments with well-defined models --- robot models, computer simulation models, mathematical models --- designed to help characterize and compare various organizational principles or architectures underlying adaptive behavior in real animals and in synthetic agents, the animats. 
Contributions treating any of the following topics from the perspective of adaptive behavior will receive special emphasis:
The Animat approach
Characterization of agents and environments
Passive and active perception
Motor control
Visually-guided behaviors
Action selection
Behavioral sequencing
Navigation and mapping
Internal models and representation
Learning and development
Motivation and emotion
Collective and social behavior
Emergent structures and behaviors
Neural correlates of behavior
Evolutionary and co-evolutionary approaches
Autonomous robotics
Humanoid robotics
Software agents and virtual creatures
Applied adaptive behavior
Animats in education
Philosophical and psychological issues
Authors should make every effort to suggest implications of their work for both natural and artificial animals, and to distinguish the portions of their work which use simulation from those using a physical agent. Papers that do not deal explicitly with adaptive behavior will be rejected. Conference format Following the tradition of SAB conferences, the conference will be single track, with additional poster sessions. Each poster session will start with poster spotlights giving presenters the opportunity to orally present their main results. Submission Instructions Submission instructions can be found on the conference Web site. Submitted papers must not exceed 10 pages (double column). Because the review process relies heavily on electronic means, the organizers require electronic submission of PDF documents. Authors who are unable to deliver a PDF document should contact the program chairs (sab2004-program at isab.org) to discuss alternative ways of submitting. Computer, video, and robotic demonstrations are also invited for submission. Submit a 2-page proposal plus a title page to the program chairs. Indicate equipment requirements and relevance to the themes of the conference.
Call for workshop and tutorial proposals A separate call for workshop and tutorial proposals can be found on the conference web site at http://www.isab.org/sab04. The accepted workshops and tutorials will take place on the last day of the conference, July 17. Inquiries can be made to sab2004-workshops at isab.org IMPORTANT DATES (2004) JAN 09: Submissions must be received JUL 13-16: Conference dates JUL 17: Workshops and tutorials Program chairs: Stefan Schaal, University of Southern California (USC) Auke Ijspeert, Swiss Federal Institute of Technology, Lausanne & USC Aude Billard, Swiss Federal Institute of Technology, Lausanne & USC Sethu Vijayakumar, University of Edinburgh & USC General Chairs: John Hallam, Universities of Odense and Edinburgh Jean-Arcady Meyer, Laboratoire d'Informatique de Paris 6 Publisher: The MIT Press, Cambridge. Program queries to: sab2004-program at isab.org Workshops queries to: sab2004-workshops at isab.org General queries to: sab2004 at isab.org WWW Page: http://www.isab.org/sab04 From thomas.j.palmeri at vanderbilt.edu Wed Nov 12 10:25:16 2003 From: thomas.j.palmeri at vanderbilt.edu (Thomas Palmeri) Date: Wed, 12 Nov 2003 09:25:16 -0600 Subject: Postdoctoral Fellowship at Vanderbilt University Message-ID: <001401c3a931$2c8b3c80$71e63b81@PalmeriThinkPad> POSTDOCTORAL FELLOWSHIP LINKING COMPUTATIONAL MODELS AND SINGLE-CELL NEUROPHYSIOLOGY Members of the Psychology Department and the Center for Integrative and Cognitive Neuroscience at Vanderbilt University seek a highly qualified postdoctoral fellow to join an NSF-funded collaborative research project linking computational models of human cognition with single-cell neurophysiology. The aim is to elucidate how control over attention, categorization, and response selection are instantiated in neural processes underlying adaptive behavior. 
The project integrates separate programs of research in computational models of human cognition (Logan and Palmeri) and in single-cell neurophysiology (Schall). We are particularly interested in applicants with training in computational modeling (experience in mathematical modeling, neural network modeling, or dynamic systems modeling is equally desirable). Knowledge of theoretical and empirical research in attention, categorization, response selection, or related areas of cognition is desirable. The fellowship will pay according to the standard NIH scale, and will be for one or two years. Fellows will be expected to apply for individual funding within their first year. Applicants should send a current vita, relevant reprints and preprints, a personal letter describing their research interests, background, goals, and career plans, and reference letters from two individuals. Applications will be reviewed as they are received. Individuals who have recently completed their dissertation or who expect to defend their dissertation this winter or spring are encouraged to apply. We will also consider individuals currently in postdoctoral positions. Send materials to: Thomas Palmeri, Gordon Logan, or Jeffrey Schall Department of Psychology 301 Wilson Hall 111 21st Avenue South Nashville, TN 37203 For more information on Vanderbilt, the Psychology Department, and the Center for Integrative and Cognitive Neuroscience, see the following web pages: Vanderbilt University http://www.vanderbilt.edu/ Psychology Department http://sitemason.vanderbilt.edu/psychology Center for Integrative and Cognitive Neuroscience http://cicn.vanderbilt.edu Vanderbilt University is an Affirmative Action / Equal Opportunity employer.
-------------------------------------------- Thomas Palmeri Associate Professor Department of Psychology 301 Wilson Hall Vanderbilt University Nashville, TN 37240 tel: 615-343-7900 fax: 615-343-8449 thomas.j.palmeri at vanderbilt.edu From mozer at colorado.edu Wed Nov 12 14:56:47 2003 From: mozer at colorado.edu (Michael C. Mozer) Date: Wed, 12 Nov 2003 12:56:47 -0700 Subject: faculty position at University of Colorado Message-ID: <3FB2907F.70107@colorado.edu> UNIVERSITY OF COLORADO: The Department of Computer Science is seeking outstanding candidates for a tenure-track faculty position at the assistant professor level. This position is targeted for candidates whose research focuses on computational biology or bioinformatics, and whose interests overlap the department's core strengths in machine learning, high-performance computing, security, systems and software, and theory. Candidates must have a Ph.D. degree in computer science or a related discipline, enthusiasm for working with both undergraduate and graduate students, and the ability to develop an innovative interdisciplinary research program. This position is one of several that the University has committed to bioinformatics as part of a larger initiative in Molecular Biotechnology. It provides an unrivaled opportunity for a top computer scientist to join a critical mass of colleagues from many disciplines including life sciences and biological engineering. The University of Colorado is committed to diversity and equality in education and employment. Review of applications will begin immediately. 
Candidates should submit a curriculum vitae, research and teaching statements, and names of at least three references to: Elizabeth Bradley, Chair University of Colorado Department of Computer Science Campus Box 430 Boulder, CO 80309 From hasselmo at bu.edu Thu Nov 13 11:25:40 2003 From: hasselmo at bu.edu (Michael Hasselmo) Date: Thu, 13 Nov 2003 11:25:40 -0500 (EST) Subject: Call for Nominations for Awards from International Neural Network Society (INNS) Message-ID: The International Neural Network Society's (INNS) Awards Program has been established to recognize individuals who have made outstanding contributions in the field of Neural Networks. Up to three awards are presented annually to senior individuals for outstanding contributions made in the field of Neural Networks. In addition, two Young Investigator Awards are presented annually to individuals with no more than five years of postdoctoral experience who are under forty years of age, for significant contributions in the field of Neural Networks. The INNS Awards Committee is soliciting nominations for these awards. Details on the Awards Program, as well as how to make a nomination in practice, are available on the INNS Web page http://www.inns.org/ , which also contains a list of previous INNS Awards recipients. All nominations should be emailed to the chair of the Awards Committee, Prof. Erkki Oja, email address erkki.oja at hut.fi, by November 19, 2003.
From perdi at kzoo.edu Thu Nov 13 15:17:45 2003 From: perdi at kzoo.edu (Peter Erdi) Date: Thu, 13 Nov 2003 15:17:45 -0500 (EST) Subject: Call: IJCNN 2004 - Special Sessions organized by the ENNS In-Reply-To: <200311120806.KAA09123@james.hut.fi> References: <200308051104.OAA65498@james.hut.fi> <200308051104.OAA65498@james.hut.fi> <3.0.2.32.20031024085007.0127b830@pop-srv.mbfys.kun.nl> <200311120806.KAA09123@james.hut.fi> Message-ID: Call for organizing Special Sessions by the ENNS for the IJCNN'04 25-28 July 2004, Budapest, Hungary The annual International Joint Conference on Neural Networks (IJCNN) is one of the premier international conferences in the field. The European Neural Network Society (ENNS) will organize six special sessions. This is an open CALL for organizing Special Sessions for the IJCNN meeting supported by the ENNS. Each proposal should include a 10-line motivation and should contain 4-6 papers. Each paper should have a title, authors (with email addresses), a 10-line abstract, and a commitment from the authors. The deadline for applications is December 1. Applications should be sent by email to perdi at kzoo.edu. Acceptance/rejection information will be provided by December 20th. (Of course, authors whose special session is not accepted can send in their contribution as a submitted paper.) All authors of accepted special sessions are requested to submit their papers by January 29th using the same system as submitted papers but under the heading of special sessions (see http://www.conferences.hu/budapest2004/ ). Acceptance to a special session does not have any financial consequences. ENNS has a policy of supporting its young members (see http://www.snn.kun.nl/enns/). Travel grants for a number of students presenting papers at IJCNN will be available. Preference is given to participants who make presentations in the Special Sessions organized by the ENNS.
Peter Erdi program Co-Chair perdi at kzoo.edu From cimca at ise.canberra.edu.au Thu Nov 13 22:06:30 2003 From: cimca at ise.canberra.edu.au (cimca) Date: Fri, 14 Nov 2003 14:06:30 +1100 Subject: CFP: International Conference on Computational Intelligence for Modelling, Control and Automation Message-ID: <6.0.0.22.1.20031114140609.02558d18@mercury.ise.canberra.edu.au> CALL FOR PAPERS International Conference on Computational Intelligence for Modelling, Control and Automation 12-14 July 2004 Gold Coast, Australia http://www.ise.canberra.edu.au/conferences/cimca04/index.htm Jointly with International Conference on Intelligent Agents, Web Technologies and Internet Commerce 12-14 July 2004 Gold Coast, Australia http://www.ise.canberra.edu.au/conferences/iawtic04/index.htm The international conference on computational intelligence for modelling, control and automation will be held in Gold Coast, Australia on 12-14 July 2004. The conference provides a medium for the exchange of ideas between theoreticians and practitioners to address the important issues in computational intelligence, modelling, control and automation. The conference will consist of both plenary sessions and contributory sessions, focusing on theory, implementation and applications of computational intelligence techniques to modelling, control and automation. For contributory sessions, papers (4 pages or more) are being solicited. Several well-known keynote speakers will address the conference. 
Topics of the conference include, but are not limited to, the following areas:
Modern and Advanced Control Strategies: Neural Networks Control, Fuzzy Logic Control, Genetic Algorithms & Evolutionary Control, Model-Predictive Control, Adaptive and Optimal Control, Intelligent Control Systems, Robotics and Automation, Fault Diagnosis, Intelligent Agents, Industrial Automation
Hybrid Systems: Fuzzy Evolutionary Systems, Fuzzy Expert Systems, Fuzzy Neural Systems, Neural Genetic Systems, Neural-Fuzzy-Genetic Systems, Hybrid Systems for Optimisation
Data Analysis, Prediction and Model Identification: Signal Processing, Prediction & Time Series Analysis, System Identification, Data Fusion and Mining, Knowledge Discovery, Intelligent Information Systems, Image Processing, Image Understanding, Parallel Computing Applications in Identification & Control, Pattern Recognition, Clustering, Classification
Decision Making and Information Retrieval: Case-Based Reasoning, Decision Analysis, Intelligent Databases & Information Retrieval, Dynamic Systems Modelling, Decision Support Systems, Multi-criteria Decision Making, Qualitative and Approximate Reasoning
Paper Submission Papers will be selected based on their originality, significance, correctness, and clarity of presentation. Papers (4 pages or more) should be submitted to the following e-mail or postal address: CIMCA'2004 Secretariat School of Computing University of Canberra Canberra, 2601, ACT, Australia E-mail: cimca at ise.canberra.edu.au E-mail submission is preferred. Papers should present original work that has not been published and is not under review for other conferences. Important Dates 14 March 2004 Submission of papers 30 April 2004 Notification of acceptance 21 May 2004 Deadline for camera-ready copies of accepted papers 12-14 July 2004 Conference sessions Special Sessions and Tutorials Special sessions and tutorials will be organised at the conference.
The conference is calling for special session and tutorial proposals. All proposals should be sent to the conference chair on or before 27th February 2004. CIMCA'04 will also include a special poster session devoted to recent work and work-in-progress. Abstracts are solicited for this session. Abstracts (3-page limit) may be submitted up to 30 days before the conference date. Invited Sessions Keynote speakers from academia and industry will be addressing the main issues of the conference. Visits and social events Sightseeing visits will be arranged for the delegates and guests. A separate program will be arranged for companions during the conference. Further Information For further information either contact cimca at ise.canberra.edu.au or see the conference homepage at: http://www.ise.canberra.edu.au/conferences/cimca04/index.htm From Ronan.Reilly at may.ie Thu Nov 13 08:44:49 2003 From: Ronan.Reilly at may.ie (Ronan Reilly) Date: Thu, 13 Nov 2003 13:44:49 -0000 Subject: Associate Professorship in Computer Science at National University of Ireland, Maynooth Message-ID: <001201c3a9ec$4fdd6030$25f59d95@neuron> Readers of the list may be interested in the position advertised below. While it is not tied to a specific research area, applications from candidates with backgrounds in machine learning, computational neuroscience, cognitive science, and related areas are very welcome. Ronan === DEPARTMENT OF COMPUTER SCIENCE ASSOCIATE PROFESSOR POST NUI Maynooth invites applications for the position of Associate Professor in Computer Science. Applicants should have a strong track record in research and teaching. Applications are welcome in all areas of research. The person appointed will play an active role in all aspects of the Department's activities, pursuing high-quality research, providing research leadership, and supervising and teaching students at both undergraduate and postgraduate level. Salary Scale (new entrants): €68,495 - €91,555 p.a.
(8 points) Prior to application, further details of the post may be obtained by writing to the Personnel Officer, National University of Ireland, Maynooth, Maynooth, Co. Kildare. Confidential Fax: +353-1-7083940; Email: personnel at may.ie Department of Computer Science Website: www.cs.may.ie Applications (including a full CV; the names, email and postal addresses, telephone, and fax numbers of three referees; and a personal statement) should be forwarded to the Personnel Officer, to arrive no later than 27 February, 2004. === _________________________________ Prof. Ronan G. Reilly Department of Computer Science National University of Ireland, Maynooth Co. Kildare IRELAND v: +353-1-708 3847 f: +353-1-708 3848 w1: www.cs.may.ie/staff/ronan.html (homepage) w2: cortex.cs.may.ie (research group) e: ronan at cs.may.ie From sylee at ee.kaist.ac.kr Mon Nov 10 22:35:05 2003 From: sylee at ee.kaist.ac.kr (Soo-Young Lee at KAIST) Date: Tue, 11 Nov 2003 12:35:05 +0900 Subject: New Journal: Neural Information Processing - Letters and Reviews References: <3EC14B01.3040901@iub-psych.psych.indiana.edu> Message-ID: <002701c3a804$cc455fe0$ac1ff88f@kaistsylee> (Sorry if you received this multiple times.) The first issue of a new journal was published online in October 2003 at http://www.nip-lr.info and http://bsrc.kaist.ac.kr/nip-lr/. Neural Information Processing - Letters and Reviews A high-quality rapid publication with double-blind reviews Table of Contents Vol.1, No.1, October, 2003 Preface pp. i-ii Toward a New Journal with Timeliness, Accessibility, Quality, and Double-Blind Reviews Soo-Young Lee Review pp. 1-52 Independent Component Analysis and Extensions with Noise and Time: A Bayesian Ying-Yang Learning Perspective Lei Xu Letters pp. 53-59 Extraction and Optimization of Fuzzy Protein Sequences Classification Rules Using GRBF Neural Networks Dianhui Wang, Nung Kion Lee, and Tharam S. Dillon pp.
61-66 Phonological Approach to the Mapping of Semantic Space: Replication as a Basis for Language Storage in the Cortex Victor Vvedensky pp. 67-73 Artificial Neural Networks as Analytic Tools in an ERP Study of Face Memory Reiko Graham and Michael R.W. Dawson =============================================================== Facts of NIP-LR The goals of the new journal NIP-LR are (a) Timely Publication - 3 to 4 months to publication for Letters - up to 6 months to publication for Reviews (b) Connecting Neuroscience and Engineering - serving both system-level neuroscience and artificial neural network communities (c) Low Cost - free for online only - US$30 per year for hardcopy (d) High Quality - unbiased double-blind reviews - short papers (up to 10 single-column single-space published pages) for Letters (Letters may include preliminary results of promising ideas, and full papers may be published later in other journals.) - in-depth reviews of new and important topics for Reviews The topics include Cognitive neuroscience Computational neuroscience Neuroinformatics database and analysis tools Brain signal measurements and functional brain mapping Neural modeling and simulators Neural network architecture and learning algorithms Data representations in neural systems Information theory for neural systems Software implementations of neural networks Neuromorphic hardware implementations Biologically-motivated speech signal processing Biologically-motivated image processing Human-like inference systems and intelligent agents Human-like behavior and intelligent systems Artificial life Other applications of neural information processing mechanisms The journal will consist of monthly online publications and yearly paper publications. Authors retain all rights to their papers, and may publish extended versions of their Letters in other journals. All submission and review processes will be handled electronically with Adobe PDF, Postscript, or MS Word files.
For a rapid review process, only binary ("Accept" or "Reject") decisions will be made without revision requirements for Letters. Minor revisions may be requested for Review papers. (English editing services may be recommended.) We also use a double-blind review procedure. (The reviewers will not know the names of the authors.) All papers should be submitted by e-mail to nip-lr at neuron.kaist.ac.kr, addressed to Soo-Young Lee, Editor-in-Chief for NIP-LR Director, Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-8490 E-mail: nip-lr at neuron.kaist.ac.kr, sylee at kaist.ac.kr Publisher: KAIST Press Home Page: http://www.nip-lr.info and http://neuron.kaist.ac.kr/nip-lr/

From lss at cs.stir.ac.uk Mon Nov 17 11:05:00 2003 From: lss at cs.stir.ac.uk (Professor Leslie Smith) Date: Mon, 17 Nov 2003 16:05:00 +0000 Subject: CFP: Brain Inspired Cognitive Systems - BICS2004 Message-ID: <3FB8F1AC.6020000@cs.stir.ac.uk> Call for Papers: Brain Inspired Cognitive Systems - BICS2004 University of Stirling, Stirling, Scotland, UK August 29 - September 1, 2004 First International ICSC Symposium on Cognitive Neuro Science (CNS 2004) Cognitive neuroscience covers both computational models of the brain and brain-inspired algorithms and artifacts. Chair: Prof. Igor Aleksander, Imperial College London, U.K i.aleksander at imperial.ac.uk Second International ICSC Symposium on Biologically Inspired Systems (BIS 2004) Systems are inspired by many different aspects of biology. We are interested in systems at all levels, from VLSI-engineered to software to mathematical models. Chair: Prof. Leslie Smith, University of Stirling, U.K. lss at cs.stir.ac.uk Third International ICSC Symposium on Neural Computation (NC'2004) Neural Computation covers models, software and hardware implementations, together with applications. Chair: Dr.
Amir Hussain, University of Stirling, U.K. ahu at cs.stir.ac.uk Further Details: http://www.icsc-naiso.org/conferences/bics2004/bics-cfp.html Important dates: Submission deadline January 31, 2004 Notification March 31, 2004 Early registration May 15, 2004 Delivery of full papers and registration: May 31, 2004 Tutorials and Workshops August 29, 2004 Conference August 30 - September 1, 2004 -- Professor Leslie S. Smith, Dept of Computing Science and Mathematics, University of Stirling, Stirling FK9 4LA, Scotland l.s.smith at cs.stir.ac.uk Tel (44) 1786 467435 Fax (44) 1786 464551 www http://www.cs.stir.ac.uk/~lss/ UKRI IEEE NNS Chapter Chair: http://www.cs.stir.ac.uk/ieee-nns-ukri/ -- The University of Stirling is a university established in Scotland by charter at Stirling, FK9 4LA.

From orhan at aipllc.com Mon Nov 17 10:01:30 2003 From: orhan at aipllc.com (Orhan Karaali) Date: Mon, 17 Nov 2003 10:01:30 -0500 Subject: SVM and neural network research position Message-ID: <000501c3ad1b$af32a540$97eb63c7@Istanbul> ADVANCED INVESTMENT PARTNERS, LLC. www.aipllc.com Advanced Investment Partners, LLC. (AIP) is a registered investment advisor based in Clearwater, Florida focusing on institutional domestic equity asset management. Our partners include Boston-based State Street Global Advisors, a global leader in institutional financial services, and Amsterdam-based Stichting Pensioenfonds ABP, one of the world's largest pension plans.
AIP's reputation as an innovative entrepreneur within the asset management community is built upon the research and development of nontraditional quantitative stock valuation techniques, for which a patent was issued in 1998. POSITION: RESEARCH SCIENTIST This position will involve enhancing AIP's financial valuation algorithms for stock selection and portfolio management. Job responsibilities include applying new machine learning and regression algorithms, writing new algorithms, developing new factors, contributing to financial and algorithmic research projects, and developing applications in the area of multifactor stock models. AIP uses Windows XP and 2003 Server; Visual Studio .NET; MS SQL 2000; C++ STL; C#; OLE DB; XML; COM+; and parallel processing technologies. Minimum Qualifications: Ph.D. or Master's degree in CS, EE, Computer Engineering, or Mathematics. Thesis or dissertation concentration in SVMs. Understanding of neural networks and regression algorithms. Strong C++ background. Bonus Qualifications: Familiarity with the stock market. Knowledge of SQL and STL. Compensation includes a competitive salary, bonus and benefits package. US citizenship or US permanent resident status is required. To apply, please mail your resume to: Attn: Orhan Karaali 311 Park Place Blvd. Suite 250 Clearwater, FL 33759

From bio-adit2004-REMOVE at teuscher.ch Tue Nov 18 16:41:06 2003 From: bio-adit2004-REMOVE at teuscher.ch (Christof Teuscher) Date: Tue, 18 Nov 2003 22:41:06 +0100 Subject: [Bio-ADIT2004] - Call for Participation Message-ID: <11182241.TSMHFXFD@teuscher.ch> ================================================================ We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties.
================================================================ Bio-ADIT 2004 CALL FOR PARTICIPATION The First International Workshop on Biologically Inspired Approaches to Advanced Information Technology January 29 - 30, 2004 Swiss Federal Institute of Technology, Lausanne, Switzerland Website: http://lslwww.epfl.ch/bio-adit2004/ Sponsored by - Osaka University Forum, - Swiss Federal Institute of Technology, Lausanne, and - The 21st Century Center of Excellence Program of The Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under the Program Title "Opening Up New Information Technologies for Building Networked Symbiosis Environment" Biologically inspired approaches have already proved successful in achieving major breakthroughs in a wide variety of problems in information technology (IT). A more recent trend is to explore the applicability of bio-inspired approaches to the development of self-organizing, evolving, adaptive and autonomous information technologies, which will meet the requirements of next-generation information systems, such as diversity, scalability, robustness, and resilience. These new technologies will become a base on which to build a networked symbiotic environment for a pleasant, symbiotic society of human beings in the 21st century. Bio-ADIT 2004 will be the first international workshop to present original research results in the field of bio-inspired approaches to advanced information technologies. It will also serve to foster the connection between biological paradigms and solutions for building next-generation information systems. PROGRAM: The advance program for the conference can be viewed at: http://lslwww.epfl.ch/bio-adit2004/program.shtml The program includes 25 oral presentations and 15 poster presentations selected out of 85 submitted articles.
It also includes two keynote addresses: "The Architecture of Complexity: From the Internet to Metabolic Networks", Albert-Laszlo Barabasi, University of Notre Dame, USA. "www.siliconcell.net: Bringing Bits and Chips to Life", Hans V. Westerhoff, Free University, Amsterdam REGISTRATION: Thanks to the generous support of our sponsors, the total registration fees (including lunch, conference dinner and pre-proceedings) are only 90 Swiss Francs (approx. 70 US dollars). Please register through the conference web site, at http://lslwww.epfl.ch/bio-adit2004/registration.shtml and follow the instructions for credit-card payment. We look forward to seeing you in Lausanne. EXECUTIVE COMMITTEE: General Co-Chairs: - Daniel Mange (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Shojiro Nishio (Osaka University, Japan) Technical Program Committee Co-Chairs: - Auke Jan Ijspeert (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Masayuki Murata (Osaka University, Japan) Finance Chair: - Marlyse Taric (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Toshimitsu Masuzawa (Osaka University, Japan) Publicity Chair: - Christof Teuscher (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Takao Onoye (Osaka University, Japan) Publications Chair: - Naoki Wakamiya (Osaka University, Japan) Local Arrangements Chair: - Carlos Andres Pena-Reyes (Swiss Federal Institute of Technology, Lausanne, Switzerland) Internet Chair: - Jonas Buchli (Swiss Federal Institute of Technology, Lausanne, Switzerland) TECHNICAL PROGRAM COMMITTEE: Co-Chairs: - Auke Jan Ijspeert (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Masayuki Murata (Osaka University, Japan) Members: - Michael A. Arbib (University of Southern California, Los Angeles, USA) - Aude Billard (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Marco Dorigo (Université
Libre de Bruxelles, Belgium) - Takeshi Fukuda (IBM Tokyo Research Laboratory, Japan) - Katsuo Inoue (Osaka University, Japan) - Wolfgang Maass (Graz University of Technology, Austria) - Ian W. Marshall (BTexact Technologies, UK) - Toshimitsu Masuzawa (Osaka University, Japan) - Alberto Montresor (University of Bologna, Italy) - Chrystopher L. Nehaniv (University of Hertfordshire, U.K.) - Stefano Nolfi (Institute of Cognitive Sciences and Technology, CNR, Rome, Italy) - Takao Onoye (Osaka University, Japan) - Rolf Pfeifer (University of Zurich, Switzerland) - Eduardo Sanchez (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Hiroshi Shimizu (Osaka University, Japan) - Moshe Sipper (Ben-Gurion University, Israel) - Gregory Stephanopoulos (Massachusetts Institute of Technology, USA) - Adrian Stoica (Jet Propulsion Laboratory, USA) - Tim Taylor (University of Edinburgh, UK) - Gianluca Tempesti (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Naoki Wakamiya (Osaka University, Japan) - Hans V. Westerhoff (Vrije Universiteit Amsterdam, NL) - Xin Yao (University of Birmingham, UK)

From cindy at bu.edu Tue Nov 18 11:32:30 2003 From: cindy at bu.edu (Cynthia Bradford) Date: Tue, 18 Nov 2003 11:32:30 -0500 Subject: Neural Networks 16(10) Message-ID: <001b01c3adf1$9054dc70$903dc580@cnspc31> NEURAL NETWORKS 16(10) Contents - Volume 16, Number 10 - 2003 ------------------------------------------------------------------ ***** ERRATUM ***** "Delay-dependent exponential stability analysis of delayed neural networks: An LMI approach" Xiaofeng Liao, Guanrong Chen, and Edgar N.
Sanchez ***** MATHEMATICAL AND COMPUTATIONAL ANALYSIS ***** "Adaptive categorization of ART networks in robot behavior learning using game-theoretic formulation" Wai-keung Fung and Yun-hui Liu "Comparison of simulated annealing and mean field annealing as applied to the generation of block designs" Pau Bofill, Roger Guimera, and Carme Torras "The general inefficiency of batch training for gradient descent learning" D. Randall Wilson and Tony R. Martinez "Analyzing stability of equilibrium points in neural networks: A general approach" Wilson A. Truccolo, Govindan Rangarajan, Yonghong Chen, and Mingzhou Ding "A functions localized neural network with branch gates" Qingyu Xiong, Kotaro Hirasawa, Jinglu Hu, and Junichi Murata "Dynamical properties of strongly interacting Markov chains" Nihat Ay and Thomas Wennekers ***** ENGINEERING AND DESIGN ***** "The co-adaptive neural network approach to the Euclidean traveling salesman problem" E.M. Cochrane and J.E. Beasley "Polynomial harmonic GMDH learning networks for time series modeling" Nikolay Y. Nikolaev and Hitoshi Iba CURRENT EVENTS CONTENTS, VOLUME 16, 2003 AUTHOR INDEX, VOLUME 16, 2003 ------------------------------------------------------------------ Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. 
Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment. 
----------------------------------------------------------------------------
Membership Type       INNS            ENNS      JNNS
----------------------------------------------------------------------------
membership with       $80 (regular)   SEK 660   Y 13,000
Neural Networks                                 (plus Y 2,000 enrollment fee)
                      $20 (student)   SEK 460   Y 11,000
                                                (plus Y 2,000 enrollment fee)
----------------------------------------------------------------------------
membership without    $30             SEK 200   not available to non-students
Neural Networks                                 (subscribe through another
                                                society); Y 5,000 student
                                                (plus Y 2,000 enrollment fee)
----------------------------------------------------------------------------
---------------------------------
Name:    _____________________________________
Title:   _____________________________________
Address: _____________________________________
         _____________________________________
         _____________________________________
Phone:   _____________________________________
Fax:     _____________________________________
Email:   _____________________________________
Payment: [ ] Check or money order enclosed, payable to INNS or ENNS
     OR  [ ] Charge my VISA or MasterCard
             card number ____________________________
             expiration date ________________________

INNS Membership
19 Mantua Road
Mount Royal NJ 08061 USA
856 423 0162 (phone)
856 423 3420 (fax)
innshq at talley.com
http://www.inns.org

ENNS Membership
University of Skovde
P.O.
Box 408 531 28 Skovde Sweden 46 500 44 83 37 (phone) 46 500 44 83 99 (fax) enns at ida.his.se http://www.his.se/ida/enns JNNS Membership c/o Professor Shozo Yasui Kyushu Institute of Technology Graduate School of Life Science and Engineering 2-4 Hibikino, Wakamatsu-ku Kitakyushu 808-0196 Japan 81 93 695 6108 (phone and fax) jnns at brain.kyutech.ac.jp http://www.jnns.org/ ----------------------------------------------------------------------------

From erik at bbf.uia.ac.be Wed Nov 19 11:10:20 2003 From: erik at bbf.uia.ac.be (Erik De Schutter) Date: Wed, 19 Nov 2003 17:10:20 +0100 Subject: CNS*04 CALL FOR PAPERS Message-ID: <3C315C5C-1AAD-11D8-907A-000393452C9C@bbf.uia.ac.be> FIRST CALL FOR PAPERS: SUBMISSION DEADLINE: January 19, 2004 midnight; submission open December 1, 2003. Thirteenth Annual Computational Neuroscience Meeting CNS*2004 July 18 - July 20, 2004 (workshops: July 21-22, 2004) Baltimore, USA http://www.neuroinf.org/CNS.shtml Info at cp at bbf.uia.ac.be The annual Computational Neuroscience Meeting will be held at the historic Radisson Plaza Lord hotel in Baltimore, MD from July 18th - 20th, 2004. The main meeting will be followed by two days of workshops on July 21st and 22nd. In conjunction, the "2004 Annual Symposium, University of Maryland Program in Neuroscience: Computation in the Olfactory System" will be held as a satellite symposium to CNS*04 on Saturday, July 17th. NOTICE: NEW PAPER SUBMISSION PROCEDURE! As in previous years, papers presented at the CNS*04 meeting can be published in a special issue of the journal Neurocomputing and in a proceedings book. Authors who would like to see their CNS*04 presentation published will have to submit a COMPLETE manuscript for review during this call (deadline January 19, 2004). You will also have the option to submit an extended summary instead, but this cannot be included in the journal.
Both types of submissions will be reviewed, but full manuscripts will get back reviewers' comments and will have to be revised, with final submission due shortly after the meeting. The decision of who gets to speak at the conference is independent of the type of submission; both full manuscripts and extended summaries qualify. More details on the review process can be found below. Papers can include experimental, model-based, as well as more abstract theoretical approaches to understanding neurobiological computation. We especially encourage papers that mix experimental and theoretical studies. We also accept papers that describe new technical approaches to theoretical and experimental issues in computational neuroscience or relevant software packages. PAPER SUBMISSION The paper submission procedure is again completely electronic this year. Papers for the meeting can be submitted ONLY through the web site at http://www.neuroinf.org/CNS.shtml Papers can be submitted either as a full manuscript to be published in the journal Neurocomputing (max 6 typeset pages) or as an extended summary (1 to 6 pages). You will need to submit both types of papers in pdf format and the 100-word abstract as text. You will also need to select two categories which describe your paper and which will guide the selection of reviewers. All submissions will be acknowledged by email generated by the neuroinf.org web robot (which may be considered junk mail by a spam filter). THE REVIEW PROCESS All submitted papers will first be reviewed by the program committee. Submissions will be judged and accepted for the meeting based on the clarity with which the work is described and the biological relevance of the research. For this reason authors should be careful to make the connection to biology clear. We reject only a small fraction of the submissions (~ 5%), and this is usually based on absence of biological relevance (e.g. pure machine learning). We will notify authors of meeting acceptance at the beginning of March.
The second stage of review involves evaluation by two independent reviewers of all full manuscripts submitted to the journal Neurocomputing and of those extended summaries which requested an oral presentation. Full manuscripts will be reviewed as regular journal submissions: each paper will have an action editor and two independent reviewers. A paper may be rejected for publication if it contains no novel content or is considered to contain grave errors. We hope that this will apply to a small number of papers only, but we also need to respect a limit of at most 200 published papers, which may enforce stricter selection criteria. Paper rejection at this stage does not exclude poster presentation at the meeting itself, as we assume that these authors will benefit from the feedback they can receive at the meeting. Accepted papers will receive comments for improvements and corrections from the reviewers by e-mail. Submissions of the revised papers will be due in August. Criteria for selection as an oral presentation include perceived quality, the novelty of the research, and the diversity and coherence of the overall program. To ensure diversity, those who have given talks in the recent past will not be selected, and multiple oral presentations from the same lab will be discouraged. All accepted papers not selected for oral talks, as well as papers explicitly submitted as poster presentations, will be included in one of three evening poster sessions. Authors will be notified of the presentation format of their papers by the beginning of May. CONFERENCE PROCEEDINGS The proceedings volume is published each year as a special supplement to the journal Neurocomputing. In addition, the proceedings are published in a hardbound edition by Elsevier Press. Only 200 papers will be published in the proceedings volume. For reference, papers presented at CNS*02 can be found in volumes 52-54 of Neurocomputing (2003).
INVITED SPEAKERS: Mary Kennedy (California Institute of Technology, USA) Miguel Nicolelis (Duke University, USA) TBA ORGANIZING COMMITTEE: The CNS meeting is organized by the Organization for Computational Neurosciences (http://www.cnsorg.org), presided over by Christiane Linster (Cornell University, USA) Program chair: Erik De Schutter (University of Antwerp, Belgium) Local organizer: Asaf Keller (University of Maryland School of Medicine, USA) Workshop organizer: Adrienne Fairhall (Princeton University, USA) Government liaison: Dennis Glanzman (NIMH/NIH, USA) and Yuan Liu (NINDS/NIH, USA) Program committee: Nicolas Brunel (Universite Paris Rene Descartes, France) Alain Destexhe (CNRS Gif-sur-Yvette, France) Bill Holmes (Ohio University, USA) Hidetoshi Ikeno (Himeji Institute of Technology, Japan) Don H. Johnson (Rice University, USA) Leslie M. Kay (University of Chicago, USA) Barry Richmond (NIMH, USA) Eytan Ruppin (Tel Aviv University, Israel) Frances Skinner (Toronto Western Research Institute, Canada)

From dwang at cis.ohio-state.edu Thu Nov 20 10:29:07 2003 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Thu, 20 Nov 2003 10:29:07 -0500 Subject: Faculty position related to machine learning Message-ID: <3FBCDDB7.7C17EAF8@cis.ohio-state.edu> The Ohio State University Department of Computer and Information Science invites applications for a tenure-track position at the rank of assistant professor. Of particular interest are candidates who combine interests in one or more of the following fields: data mining, machine learning, and model checking as they relate to bioinformatics or security.
To apply, send a curriculum vitae (including names and addresses of at least three references) and a statement of research and teaching interests, by e-mail to: fsearch at cis.ohio-state.edu or by mail to: Chair, Faculty Search Committee Department of Computer and Information Science The Ohio State University 2015 Neil Avenue, DL395 Columbus, OH 43210-1277 Review of applications will begin immediately and will continue until the position is filled. For additional information please see http://www.cis.ohio-state.edu.

From calls at bbsonline.org Thu Nov 20 10:57:39 2003 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Thu, 20 Nov 2003 15:57:39 +0000 Subject: Walker/Sleep and memory formation: BBS Call for Commentators Message-ID: Below is a link to the forthcoming BBS target article A refined model of sleep and the time course of memory formation by Matthew P. Walker http://www.bbsonline.org/Preprints/Walker-12042002/Referees/ This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you.
All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html (please note that this list is being updated) If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= ** IMPORTANT ** ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. (Please note that we only request expertise information in order to simplify the selection process.) Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable at the URL that follows the abstract and keywords below. ======================================================================= ======================================================================= A refined model of sleep and the time course of memory formation Matthew P. Walker Department of Psychiatry Harvard Medical School ABSTRACT: Research in the neurosciences continues to provide evidence that sleep plays a role in the processes of learning and memory. 
There is less of a consensus, however, regarding the precise stage of memory development where sleep is considered a requirement, simply favorable, or not important. This article begins with an overview of recent studies regarding sleep and learning, predominantly in the procedural memory domain, and is measured against our current understanding of the mechanisms that govern memory formation. Based on these considerations, a new neurocognitive framework of procedural learning is offered, consisting firstly of acquisition, followed by two specific stages of consolidation, one involving a process of stabilization, the other involving enhancement, whereby delayed learning occurs. Psychophysiological evidence indicates that initial acquisition does not fundamentally rely on sleep. This also appears to be true for the stabilization phase of consolidation, with durable representations, resistant to interference, clearly developing in a successful manner during time awake (or just time per se). In contrast, the consolidation stage resulting in additional/enhanced learning in the absence of further rehearsal does appear to rely on the process of sleep, with evidence for specific sleep-stage dependencies across the procedural domain. Evaluations at a molecular, cellular and systems level currently offer several sleep specific candidates that could play a role in sleep-dependent learning. These include the up regulation of select plasticity-associated genes, increased protein synthesis, changes in neurotransmitter concentration, and specific electrical events in neuronal networks that modulate synaptic potentiation. 
KEYWORDS: Consolidation; Enhancement; Learning; Memory; Plasticity; Sleep; Stabilization http://www.bbsonline.org/Preprints/Walker-12042002/Referees/ ======================================================================= ======================================================================= *** SUPPLEMENTARY ANNOUNCEMENT *** (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. 
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Jeffrey Gray - Editor Paul Bloom - Editor Barbara Finlay - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org -------------------------------------------------------------------

From cns at cnsorg.org Thu Nov 20 20:38:46 2003 From: cns at cnsorg.org (CNS - Organization for Computational Neurosciences) Date: Thu, 20 Nov 2003 17:38:46 -0800 Subject: Call for Workshop proposals CNS*2004 Message-ID: <1069378726.3fbd6ca6af480@webmail.mydomain.com> The CNS*2004 committee calls for proposals for workshops, to be held on the final two days of CNS*2004, 21 and 22 July, at the Radisson Plaza Lord hotel in Baltimore, MD. Information can be found at www.cnsorg.org Workshops provide an informal forum within the CNS meeting for focused discussion of recent or speculative research, novel techniques, and open issues in computational neuroscience. Topics exploring theoretical interfaces to recent experimental work are particularly encouraged. Several formats are possible: Discussion Workshops (formal or informal); Tutorials; and Mini-symposia, or a combination of these formats. Discussion workshops, whether formal (i.e., held in a conference room with projection and writing media) or informal (held elsewhere), should stress interactive and open discussions in preference to sequential presentations. Tutorials and mini-symposia provide a format for a focused exploration of particular issues or techniques within a more traditional presentation framework; ample time should be reserved for questions and general discussion. The organizers of a workshop should endeavor to bring together as broad a range of pertinent viewpoints as possible. The length of a workshop may range from one (half-day) session to the full two days. Single-day workshops have been particularly successful in the past.
To propose a workshop, please submit the following information to the workshop coordinator at the address below:
1. the name(s) of the organizer(s)
2. the title of the workshop
3. a description of the subject matter, indicating clearly the range of topics to be discussed
4. a 200-word abstract of the subject matter
5. the format(s) of the workshop; if a discussion session, please specify whether you would like it to be held in a conference room or in a less formal setting
6. for tutorials and mini-symposia, a provisional list of speakers
7. the number of sessions for which the workshop is to run
Please submit proposals as early as possible by email to workshops at cnsorg.org or by post to:
Adrienne Fairhall
Department of Physiology and Biophysics
University of Washington
Box 357290
Seattle WA 98195-7290
The descriptions of accepted workshops will appear on the CNS*2004 web site as they are received. Attendees are encouraged to check this list, and to contact the organizers of any workshops in which they are interested in participating. ******************************************** Organization for Computational Neurosciences ******************************************** From terry at salk.edu Thu Nov 20 19:56:30 2003 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 20 Nov 2003 16:56:30 -0800 (PST) Subject: NEURAL COMPUTATION 15:12 In-Reply-To: <200309271643.h8RGhIA58569@purkinje.salk.edu> Message-ID: <200311210056.hAL0uUX94464@purkinje.salk.edu>
Neural Computation - Contents - Volume 15, Number 12 - December 1, 2003
REVIEW
General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
Jiri Sima and Pekka Orponen
LETTERS
Dynamics of Deterministic and Stochastic Paired Excitatory-Inhibitory Delayed Feedback
Carlo R. Laing and Andre Longtin
Differences in Spiking Patterns Among Cortical Neurons
Shigeru Shinomoto, Keisetsu Shima and Jun Tanji
Hybrid Integrate-and-Fire Model of a Bursting Neuron
Barbara J. Breen, William C. 
Gerken and Robert J. Butera, Jr.
Lateral Neural Model of Binocular Rivalry
Lars Stollenwerk and Mathias Bode
Suprathreshold Intrinsic Dynamics of the Human Visual System
Gopathy Purushothaman, Haluk Ogmen and Harold E. Bedell
Closed-Form Expressions of Some Stochastic Adapting Equations for Nonlinear Adaptive Activation Function Neurons
Simone Fiori
Selecting Informative Data for Developing Peptide-MHC Binding Predictors Using a Query By Committee Approach
Jens Kaae Christensen, Kasper Lamberth, Morten Nielsen, Claus Lundegaard, Peder Worning, Sanne Lise Lauemoller, Soren Buus, Soren Brunak and Ole Lund
-----
ON-LINE - http://neco.mitpress.org/
SUBSCRIPTIONS - 2003 - VOLUME 15 - 12 ISSUES
                 USA    Canada*   Other Countries
Student/Retired  $60    $64.20    $108
Individual       $95    $101.65   $143
Institution      $590   $631.30   $638
* includes 7% GST
MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 577-1545 journals-orders at mit.edu
-----
From baldassarre at www.ip.rm.cnr.it Fri Nov 21 11:38:57 2003 From: baldassarre at www.ip.rm.cnr.it (Gianluca Baldassarre) Date: Fri, 21 Nov 2003 17:38:57 +0100 Subject: PhD thesis and papers on reinforcement-learning based neural planner and basal ganglia Message-ID: Dear connectionists, you can find my PhD thesis, and downloadable preprints of some papers related to it, at the web-page: http://gral.ip.rm.cnr.it/baldassarre/publications/publications.html Thesis and papers are about a neural-network planner based on reinforcement learning (it builds on Sutton's Dyna-PI architectures (1990)). Some of the papers show the biological inspiration of the model and its possible relations with the brain (basal ganglia). Below you will find: - the list of the titles of the thesis and the papers - the same list with abstracts - the index of the thesis. Best regards, Gianluca Baldassarre |.CS...|.......|...............|..|......US.|||.|||||.||.||||..|...|.... Gianluca Baldassarre, Ph.D. 
Institute of Cognitive Sciences and Technologies National Research Council of Italy (ISTC-CNR) Viale Marx 15, 00137, Rome, Italy E-mail: baldassarre at ip.rm.cnr.it Web: http://gral.ip.rm.cnr.it/baldassarre Tel: ++39-06-86090227 Fax: ++39-06-824737 ..CS.|||.||.|||.||..|.......|........|...US.|.|....||..|..|......|...... **************************************************************************** ****************** TITLES **************************************************************************** ******************
Baldassarre G. (2002). Planning with Neural Networks and Reinforcement Learning. PhD Thesis. Colchester, UK: Computer Science Department, University of Essex.
Baldassarre G. (2001). Coarse Planning for Landmark Navigation in a Neural-Network Reinforcement Learning Robot. Proceedings of the International Conference on Intelligent Robots and Systems (IROS-2001). IEEE.
Baldassarre G. (2001). A Planning Modular Neural-Network Robot for Asynchronous Multi-Goal Navigation Tasks. In Arras K.O., Baerveldt A.-J., Balkenius C., Burgard W., Siegwart R. (eds.), Proceedings of the 2001 Fourth European Workshop on Advanced Mobile Robots - EUROBOT-2001, pp. 223-230. Lund, Sweden: Lund University Cognitive Studies.
Baldassarre G. (2003). Forward and Bidirectional Planning Based on Reinforcement Learning and Neural Networks in a Simulated Robot. In Butz M., Sigaud O., Gérard P. (eds.), Adaptive Behaviour in Anticipatory Learning Systems, pp. 179-200. Berlin: Springer Verlag.
Papers describing the biological inspiration of the model and its possible relations with the brain (not in the thesis):
Baldassarre G. (2002). A modular neural-network model of the basal ganglia's role in learning and selecting motor behaviours. Journal of Cognitive Systems Research. Vol. 3, pp. 5-13.
Baldassarre G. (2002). A biologically plausible model of human planning based on neural networks and Dyna-PI models. In Butz M., Sigaud O., Gérard P. 
(eds.), Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems - ABiALS-2002 (held within SAB-2002), pp. 40-60. Würzburg: University of Würzburg. **************************************************************************** ****************** TITLES WITH ABSTRACTS **************************************************************************** ****************** Baldassarre G. (2002). Planning with Neural Networks and Reinforcement Learning. PhD Thesis. Colchester, UK: Computer Science Department, University of Essex. Abstract This thesis presents the design, implementation and investigation of some predictive-planning controllers built with neural networks and inspired by Dyna-PI architectures (Sutton, 1990). Dyna-PI architectures are planning systems based on actor-critic reinforcement learning methods and a model of the environment. The controllers are tested with a simulated robot that solves a stochastic path-finding landmark navigation task. A critical review of ideas and models proposed by the literature on problem solving, planning, reinforcement learning, and neural networks precedes the presentation of the controllers. The review isolates ideas relevant to the design of planners based on neural networks. A "neural forward planner" is implemented that, unlike the Dyna-PI architectures, is taskable in a strong sense. This planner is capable of building a "partial policy" focused around efficient start-goal paths, and is capable of deciding to re-plan if "unexpected" states are encountered. Planning iteratively generates "chains of predictions" starting from the current state and using the model of the environment. This model is made up of neural networks trained to predict the next input when an action is executed. A "neural bidirectional planner" that generates trajectories backward from the goal and forward from the current state is also implemented. 
This planner exploits the knowledge (image) of the goal, further focuses planning around efficient start-goal paths, and produces a quicker updating of evaluations. In several experiments the generalisation capacity of neural networks proves important for learning, but it also causes problems of interference. To deal with these problems a modular neural architecture is implemented that uses a mixture of experts network for the critic, and a simple hierarchical modular network for the actor. The research also implements a simple form of neural abstract planning, named "coarse planning", and investigates its strengths in terms of exploration and the updating of evaluations. Some experiments with coarse planning and with other controllers suggest that discounted reinforcement learning may have problems dealing with long-lasting tasks. Baldassarre G. (2001). Coarse Planning for Landmark Navigation in a Neural-Network Reinforcement Learning Robot. Proceedings of the International Conference on Intelligent Robots and Systems (IROS-2001). IEEE. Abstract Is it possible to plan at a coarse level and act at a fine level with a neural-network (NN) reinforcement-learning (RL) planner? This work presents a NN planner, used to control a simulated robot in a stochastic landmark-navigation problem, which plans at an abstract level. The controller has both reactive components, based on actor-critic RL, and planning components inspired by the Dyna-PI architecture (this roughly corresponds to RL plus a model of the environment). Coarse planning is based on macro-actions, defined as sequences of identical primitive actions. It updates the evaluations and the action policy while generating simulated experience at the macro level with the model of the environment (a NN trained at the macro level). The simulations show how the controller works. 
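[Digest editor's aside: the thesis abstract above notes that discounted RL struggles with long-lasting tasks, and that coarse planning tunes the discount to the macro-action level. A back-of-the-envelope sketch makes the issue concrete; this is purely illustrative arithmetic, not code from the thesis, and the function name and numbers are made up.]

```python
# Illustrative sketch: why discounted RL struggles with long-lasting
# tasks, and how coarse (macro-action) planning alleviates this.
# A reward T primitive steps away contributes gamma**T to the value of
# the current state; for large T this signal effectively vanishes.

def discounted_signal(gamma: float, steps: int) -> float:
    """Contribution of a unit reward `steps` transitions away."""
    return gamma ** steps

gamma = 0.95
T = 300  # a "long-lasting" task, measured in primitive actions

fine = discounted_signal(gamma, T)        # about 2e-7: below any useful scale

# Coarse planning with macro-actions of k identical primitive actions
# plans over T/k macro-steps; applying the discount per macro-step
# (i.e., tuning it to the planning coarseness) keeps the goal signal usable.
k = 10
coarse = discounted_signal(gamma, T // k)  # about 0.21: still informative

assert coarse > fine
```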
They also show the advantages of using a discount coefficient tuned to the level of planning coarseness, and suggest that discounted RL has problems dealing with long periods of time. Baldassarre G. (2001). A Planning Modular Neural-Network Robot for Asynchronous Multi-Goal Navigation Tasks. In Arras K.O., Baerveldt A.-J., Balkenius C., Burgard W., Siegwart R. (eds.), Proceedings of the 2001 Fourth European Workshop on Advanced Mobile Robots - EUROBOT-2001, pp. 223-230. Lund, Sweden: Lund University Cognitive Studies. Abstract This paper focuses on two planning neural-network controllers, a "forward planner" and a "bidirectional planner". These have been developed within the framework of Sutton's Dyna-PI architectures (planning within reinforcement learning) and have already been presented in previous papers. The novelty of this paper is that the architecture of these planners is made modular in some of its components in order to deal with catastrophic interference. The controllers are tested through a simulated robot engaged in an asynchronous multi-goal path-planning problem that should exacerbate the interference problems. The results show that: (a) the modular planners can cope with multi-goal problems, allowing generalisation but avoiding interference; (b) when dealing with multi-goal problems the planners keep the advantages shown previously for one-goal problems vs. plain reinforcement learning; (c) the superiority of the bidirectional planner vs. the forward planner is confirmed for the multi-goal task. Baldassarre G. (2003). Forward and Bidirectional Planning Based on Reinforcement Learning and Neural Networks in a Simulated Robot. In Butz M., Sigaud O., Gérard P. (eds.), Adaptive Behaviour in Anticipatory Learning Systems, pp. 179-200. Berlin: Springer Verlag. Abstract Building intelligent systems that are capable of learning, acting reactively and planning actions before their execution is a major goal of artificial intelligence. 
This paper presents two reactive and planning systems that contain important novelties with respect to previous neural-network planners and reinforcement-learning based planners: (a) the introduction of a new component (the "matcher") allows both planners to execute genuinely taskable planning (while previous reinforcement-learning based models have used planning only for speeding up learning); (b) the planners show for the first time that trained neural-network models of the world can generate long prediction chains that have an interesting robustness with regard to noise; (c) two novel algorithms that generate chains of predictions in order to plan, and control the flows of information between the systems' different neural components, are presented; (d) one of the planners uses backward "predictions" to exploit the knowledge of the pursued goal; (e) the two systems presented nicely integrate reactive behaviour and planning on the basis of a measure of "confidence" in action. The soundness and potential of the two reactive and planning systems are tested and compared with a simulated robot engaged in a stochastic path-finding task. The paper also presents an extensive literature review on the relevant issues. Baldassarre G. (2002). A modular neural-network model of the basal ganglia's role in learning and selecting motor behaviours. Journal of Cognitive Systems Research. Vol. 3, pp. 5-13. Abstract This work presents a modular neural-network model (based on reinforcement-learning actor-critic methods) that tries to capture some of the most relevant known aspects of the role that the basal ganglia play in learning and selecting motor behaviour related to different goals. In particular, some simulations with the model show that the basal ganglia select "chunks" of behaviour whose "details" are specified by direct sensory-motor pathways, and how emergent modularity can help to deal with tasks with asynchronous multiple goals. 
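[Digest editor's aside: the "matcher" mentioned in the abstracts above — a component that self-generates a reward signal whenever a predicted state matches the currently pursued goal, making the planner taskable with any goal — can be sketched in a few lines. This is a hypothetical illustration of the idea only, not code from the thesis; the function name, the cosine-similarity measure, and the threshold are all assumptions.]

```python
import numpy as np

def matcher(predicted_obs, goal_image, threshold=0.95):
    """Self-generate a reward: 1.0 if the predicted observation is close
    enough to the supplied goal image, else 0.0 (illustrative measure)."""
    # cosine similarity between the predicted observation and the goal
    sim = float(np.dot(predicted_obs, goal_image) /
                (np.linalg.norm(predicted_obs) * np.linalg.norm(goal_image)))
    return 1.0 if sim >= threshold else 0.0

# Because the reward is derived from whatever goal image is handed in,
# the same planner can be "tasked" with a new goal without re-learning
# a hand-wired reward function.
goal = np.array([1.0, 0.0, 1.0, 0.0])
assert matcher(goal, goal) == 1.0                             # match: reward
assert matcher(np.array([0.0, 1.0, 0.0, 1.0]), goal) == 0.0   # mismatch: none
```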
A "top-down" approach is adopted, beginning with the analysis of the adaptive interaction of a (simulated) organism with the environment, and its capacity to learn. Then an attempt is made to implement these functions with neural architectures and mechanisms that have an empirical neuroanatomical and neurophysiological foundation. Baldassarre G. (2002). A biologically plausible model of human planning based on neural networks and Dyna-PI models. In Butz M., Sigaud O., Gérard P. (eds.), Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems - ABiALS-2002 (held within SAB-2002), pp. 40-60. Würzburg: University of Würzburg. Abstract Understanding the neural structures and physiological mechanisms underlying human planning is a difficult challenge. In fact, planning is the product of a sophisticated network of different brain components that interact in complex ways. However, some data produced by brain imaging, neuroanatomical and neurophysiological research are now beginning to make it possible to draw a first approximate picture of this network. This paper proposes such a picture in the form of a neural-network computational model inspired by the Dyna-PI models (Sutton, 1990). The model is based on the actor-critic reinforcement learning model, which has been shown to be a good representation of the anatomy and functioning of the basal ganglia. It is also based on a "predictor", a network capable of predicting the sensory consequences of actions, which may correspond to the lateral cerebellum-prefrontal and rostral premotor cortex pathways. All these neural structures have been shown to be involved in human planning by functional brain-imaging research. The model has been tested with an animat engaged in a landmark navigation task. 
In accordance with the brain imaging data, the simulations show that with repeated practice of the task, the complex planning processes, and the activity of the neural structures underlying them, fade away and leave the routine control of action to lower-level reactive components. The simulations also show the biological advantages offered by planning, and some interesting properties of the neural-network-based processing of "mental images" during planning. On the machine learning side, the model presented extends the Dyna-PI models with two important novelties: a "matcher" for the self-generation of a reward signal in correspondence to any possible goal, and an algorithm that focuses the exploration of the model of the world around important states and allows the animat to decide when to plan and when to act on the basis of a measure of its "confidence". The paper also offers a wide collection of references on the addressed issues. **************************************************************************** ****************** INDEX OF THE THESIS **************************************************************************** ******************
1 INTRODUCTION 12
1.1 The Objective of the Thesis 13
1.1.1 Why Neural-Network Planning Controllers? 13
1.1.2 Why a Robot and a Noisy Environment? Why a Simulated Robot? 15
1.1.3 Reinforcement Learning, Dynamic Programming and Dyna Architectures 16
1.1.4 Ideas from Problem Solving and Logical Planning 18
1.1.5 Why Dyna-PI Architectures (Reinforcement Learning + Model of the Environment)?
19
1.1.6 Stochastic Path-Finding Landmark Navigation Problems 20
1.2 Overview of the Controllers and Outline of the Thesis 22
1.2.1 Overview of the Controllers Implemented in this Research 22
1.2.2 Outline of the Thesis and Problems Addressed Chapter by Chapter 23
PART 1: CRITICAL LITERATURE REVIEW AND ANALYSIS OF CONCEPTS USEFUL FOR NEURAL PLANNING
2 PROBLEM SOLVING, SEARCH, AND STRIPS PLANNING 28
2.1 Planning as a Searching Process: Blind-Search Strategies 28
2.1.1 Critical Observations 29
2.2 Planning as a Searching Process: Heuristic-Search Strategies 29
2.2.1 Critical Observations 29
2.3 STRIPS Planning: Partial Order Planner 30
2.3.1 Situation Space and Plan Space 30
2.3.2 Partial Order Planner 31
2.3.3 Critical Observations 32
2.4 STRIPS Planning: Conditional Planning, Execution Monitoring, Abstract Planning 32
2.4.1 Conditional Planning 33
2.4.2 Execution Monitoring and Replanning 33
2.4.3 Abstract Planning 34
2.4.4 Critical Observations 34
2.5 STRIPS Planning: Probabilistic and Reactive Planning 34
2.5.1 BURIDAN Planning Algorithm 35
2.5.2 Reactive Planning and Universal Plans 35
2.5.3 Decision Theoretic Planning 35
2.5.4 Maes' Planner 37
2.5.5 Critical Observations 37
2.6 Navigation and Motion Planning Through Configuration Spaces 38
3 MARKOV DECISION PROCESSES AND DYNAMIC PROGRAMMING 40
3.1 The Problem Domain Considered Here: Stochastic Path-Finding Problems 40
3.2 Critical Observations on Dynamic Programming and Heuristic Search 42
3.3 Dyna Framework and Dyna-PI Architecture 43
3.3.1 Critical Observations 44
3.4 Prioritised Sweeping and Trajectory Sampling 45
3.4.1 Critical Observations 46
4 NEURAL NETWORKS 47
4.1 What is a Neural Network? 47
4.1.1 Critical Observations 48
4.2 Critical Observations: Feed-Forward Networks and Mixture of Experts Networks 48
4.3 Neural Networks for Prediction Learning 50
4.3.1 Critical Observations 51
4.4 Properties of Neural Networks and Planning 51
4.4.1 Generalisation, Noise Tolerance, and Catastrophic Interference 51
4.4.2 Prototype Extraction 52
4.4.3 Learning 53
4.5 Planning with Neural Networks 53
4.5.1 Activation Diffusion Planning 54
4.5.2 Neural Planners Based on Gradient Descent Methods 56
5 UNIFYING CONCEPTS 58
5.1 Learning, Planning, Prediction and Taskability 58
5.1.1 Learning of Behaviour 59
5.1.2 Taskable Planning 60
5.1.3 Taskability: Reactive and Planning Controllers 61
5.1.4 Taskability and Dyna-PI 63
5.2 A Unified View of Heuristic Search, Dynamic Programming, and Activation Diffusion 63
5.3 Policies and Plans 65
PART 2: DESIGNING AND TESTING NEURAL PLANNERS
6 NEURAL ACTOR-CRITIC REINFORCEMENT LEARNING 69
6.1 Introduction: Basic Neural Actor-Critic Controller and Simulations' Scenarios 69
6.2 Scenarios of Simulations and the Simulated Robot 70
6.3 Architectures and Algorithms 72
6.4 Results and Interpretations 76
6.4.1 Functioning of the Matcher 76
6.4.2 Performance of the Controller: The Critic and the Actor 77
6.4.3 Aliasing Problem and Parameters' Exploration 81
6.4.4 Parameter Exploration 83
6.4.5 Why the Contrasts? Why no more than the Contrasts? 84
6.5 Temporal Limitations of Discounted Reinforcement Learning 85
6.6 Conclusion 89
7 REINFORCEMENT LEARNING, MULTIPLE GOALS, MODULARITY 91
7.1 Introduction 91
7.2 Scenario of Simulations: An Asynchronous Multi-Goal Task 92
7.3 Architectures and Algorithms: Monolithic and Modular Neural-Networks 93
7.4 Results and Interpretation 96
7.5 Limitations of the Controllers 100
7.6 Conclusion 100
8 THE NEURAL FORWARD PLANNER 101
8.1 Introduction: Taskability, Planning and Acting, Focussing 101
8.2 Scenario of the Simulations 103
8.3 Architectures and Algorithms: Reactive and Planning Components 104
8.3.1 The Reactive Components of the Architecture 104
8.3.2 The Planning Components of the Architecture 105
8.4 Results and Interpretation 108
8.4.1 Taskable Planning vs. Reactive Behaviour 108
8.4.2 Focussing, Partial Policies and Replanning 111
8.4.3 Neural Networks for Prediction: "True" Images as Attractors? 112
8.5 Limitations of the Neural Forward Planner 115
8.6 Conclusion 115
9 THE NEURAL BIDIRECTIONAL PLANNER 117
9.1 Introduction: More Efficient Exploration 117
9.2 Scenario of Simulations 118
9.3 Architectures and Algorithms 119
9.3.1 The Reactive Components of the Architecture 119
9.3.2 The Planning Components of the Architecture: Forward Planning 119
9.3.3 The Planning Components of the Architecture: Bidirectional Planning 121
9.4 Results and Interpretation 123
9.4.1 Common Strengths of the Forward Planner and the Bidirectional Planner 123
9.4.2 The Forward Planner Versus the Bidirectional Planner 124
9.5 Limitations of the Neural Bidirectional Planner 126
9.6 A New "Goal Oriented Forward Planner" (Not Implemented) 126
9.7 Conclusion 127
10 NEURAL NETWORK PLANNERS AND MULTI-GOAL TASKS 128
10.1 Introduction: Neural Planners, Interference and Modularity 128
10.2 Scenario: Again the Asynchronous Multi-Goal Task 129
10.3 Architectures and Algorithms 129
10.3.1 Modular Reactive Components 129
10.3.2 Neural Modular Forward Planner 130
10.3.3 Neural Modular Bidirectional Planner 131
10.4 Results and Interpretation 132
10.4.1 Modularity and Interference 132
10.4.2 Taskability 134
10.4.3 From Planning To Reaction 134
10.4.4 The Forward Planner Versus the Bidirectional Planner 135
10.5 Limitations of the Modular Planners 137
10.6 Conclusion 137
11 COARSE PLANNING 138
11.1 Introduction: Abstraction, Macro-actions and Coarse Planning 138
11.2 Scenario of Simulations: A Simplified Navigation Task 139
11.3 Architectures and Algorithms: Coarse Planning with Macro-actions 140
11.4 Results and Interpretation 142
11.4.1 Reinforcement Learning at a Coarse Level 142
11.4.2 The Advantages of Coarse Planning 143
11.4.3 Predicting at a Coarse Level 145
11.4.4 Coarse Planning, Discount Coefficient and Time Limitations of Reinforcement Learning 146
11.5 Limitations of the Neural Coarse Planner 149
11.6 Conclusion 150
12 CONCLUSION AND FUTURE WORK 152
12.1 Conclusion: What Have We Learned from This Research? 152
12.1.1 Ideas for Neural-Network Reinforcement-Learning Planning 152
12.1.2 Landmark Navigation, Reinforcement Learning and Neural Networks 153
12.1.3 A New Neural Forward Planner 153
12.1.4 A New Neural Bidirectional Planner 155
12.1.5 Common Structure, Interference, and Modular Networks 156
12.1.6 Coarse Planning and Time Limits of Reinforcement Learning 157
12.2 A List of the Major "Usable" Insights Delivered 158
12.3 Future Work 159
13 APPENDICES 162
13.1 Blind-Search and Heuristic-Search Strategies 162
13.1.1 Blind-Search Strategies 162
13.1.2 Heuristic-Search Strategies 163
13.2 Markov Decision Processes, Reinforcement Learning and Dynamic Programming 165
13.2.1 Markov Decision Processes 165
13.2.2 Markov Property and Partially Observable Markov Decision Problems 167
13.2.3 Reinforcement Learning 168
13.2.4 Approximating the State or State-Action Evaluations 168
13.2.5 Searching the Policy with the Q'* and Q'p evaluations 170
13.2.6 Actor-Critic Model 171
13.2.7 Macro-actions and Options 172
13.2.8 Function Approximation and Reinforcement Learning 174
13.2.9 Dynamic Programming 174
13.2.10 Asynchronous Dynamic Programming 176
13.2.11 Trial-Based Real-Time Dynamic Programming and Heuristic Search 176
13.3 Feed-Forward Architectures and Mixture of Experts Networks 178
13.3.1 Feed-Forward Architectures and Error Backpropagation Algorithm 178
13.3.2 Mixture of Experts Neural Networks 179
13.3.3 The Generalisation Property of Neural Networks 181
14 REFERENCES 182
14.1 Candidate's Publications During the PhD Research 182
14.2 References 183
**************************************************************************** ****************** From Hualou.Liang at uth.tmc.edu Fri Nov 21 17:39:58 2003 From: Hualou.Liang at uth.tmc.edu (Hualou Liang) Date: Fri, 21 Nov 2003 16:39:58 -0600 Subject: POSTDOCTORAL POSITION AVAILABLE Message-ID: COMPUTATIONAL COGNITIVE NEUROSCIENCE POSTDOCTORAL POSITION AVAILABLE University of Texas Health Science Center at Houston A postdoctoral position is available starting Jan 1, 2004 in my laboratory (http://www.sahs.uth.tmc.edu/hliang/) at the University of Texas Health Science Center at Houston to participate in an ongoing research project studying the cortical dynamics of visual attention. The project involves the application of multivariate signal analysis techniques to cortical event-related potentials. 
Our current facilities include a 90-node (2 CPUs per node) Linux cluster and a 128-channel EEG system dedicated to research activities. The ideal candidate should have a Ph.D. in a relevant discipline with substantial mathematical/computational experience in neurophysiological signal processing and multivariate statistics. Programming skills in C and Matlab are essential. Interested individuals should send a curriculum vitae, representative publications, and the names and e-mail addresses of three references to Hualou Liang (hualou.liang at uth.tmc.edu).
--------------------------------
Hualou Liang, Ph.D.
Assistant Professor
The University of Texas at Houston
7000 Fannin, Suite 600
Houston, TX 77030
From ken at phy.ucsf.edu Mon Nov 24 12:47:46 2003 From: ken at phy.ucsf.edu (Ken Miller) Date: Mon, 24 Nov 2003 09:47:46 -0800 Subject: Two papers available: V1 circuitry and multiplicative gain modulation Message-ID: <16322.17474.404075.634061@coltrane.ucsf.edu> Reprints of the following two papers are available either from http://www.keck.ucsf.edu/~ken (Click on 'publications', then on 'Models of Neuronal Integration and Circuitry') or through the specific links below. ------------------------------------------- Lauritzen, T.Z. and K.D. Miller (2003). "Different roles for simple- and complex-cell inhibition in V1". Journal of Neuroscience 23, 10201-10213. ftp://ftp.keck.ucsf.edu/pub/ken/lauritzen_miller03.pdf Abstract: Previously, we proposed a model of the circuitry underlying simple-cell responses in cat primary visual cortex (V1) layer 4. We argued that the ordered arrangement of lateral geniculate nucleus inputs to a simple cell must be supplemented by a component of feedforward inhibition that is untuned for orientation and responds to high temporal frequencies to explain the sharp contrast-invariant orientation tuning and low-pass temporal frequency tuning of simple cells. 
The temporal tuning also requires a significant NMDA component in geniculocortical synapses. Recent experiments have revealed cat V1 layer 4 inhibitory neurons with two distinct types of receptive fields (RFs): complex RFs with mixed ON/OFF responses lacking orientation tuning, and simple RFs with normal, sharp orientation tuning (although some respond to all orientations). We show that complex inhibitory neurons can provide the inhibition needed to explain simple-cell response properties. Given this complex-cell inhibition, antiphase or "push-pull" inhibition from tuned simple inhibitory neurons acts to sharpen spatial frequency tuning, lower responses to low temporal frequency stimuli, and increase the stability of cortical activity. --------------------------------------- Murphy, B.K. and K.D. Miller (2003). "Multiplicative Gain Changes Are Induced by Excitation or Inhibition Alone". J. Neurosci., Nov 2003; 23: 10040-10051. ftp://ftp.keck.ucsf.edu/pub/ken/murphy_miller03.pdf Abstract: We model the effects of excitation and inhibition on the gain of cortical neurons. Previous theoretical work has concluded that excitation or inhibition alone will not cause a multiplicative gain change in the curve of firing rate versus input current. However, such gain changes in vivo are measured in the curve of firing rate versus stimulus parameter. We find that when this curve is considered, and when the nonlinear relationships between stimulus parameter and input current and between input current and firing rate in vivo are taken into account, then simple excitation or inhibition alone can induce a multiplicative gain change. In particular, the power-law relationship between voltage and firing rate that is induced by neuronal noise is critical to this result. This suggests an unexpectedly simple mechanism that may underlie the gain modulations commonly observed in cortex. 
More generally, it suggests that a smaller input will multiplicatively modulate the gain of a larger one when both converge on a common cortical target. Ken Kenneth D. Miller telephone: (415) 476-8217 Professor fax: (415) 476-4929 Dept. of Physiology, UCSF internet: ken at phy.ucsf.edu 513 Parnassus www: http://www.keck.ucsf.edu/~ken San Francisco, CA 94143-0444 From ken at phy.ucsf.edu Mon Nov 24 12:33:40 2003 From: ken at phy.ucsf.edu (Ken Miller) Date: Mon, 24 Nov 2003 09:33:40 -0800 Subject: UCSF Postdoctoral/Graduate Fellowships in Theoretical Neurobiology Message-ID: <16322.16628.911471.189310@coltrane.ucsf.edu> FULL INFO: http://www.sloan.ucsf.edu/sloan/sloan-info.html PLEASE DO NOT USE 'REPLY'; FOR MORE INFO USE ABOVE WEB SITE OR CONTACT ADDRESSES GIVEN BELOW The Sloan-Swartz Center for Theoretical Neurobiology at UCSF solicits applications for pre- and post-doctoral fellowships, with the goal of bringing theoretical approaches to bear on neuroscience. Applicants should have a strong background and education in a quantitative field such as mathematics, theoretical or experimental physics, or computer science, and commitment to a future research career in neuroscience. Prior biological or neuroscience training is not required. The Sloan-Swartz Center offers opportunities to combine theoretical and experimental approaches to understanding the operation of the intact brain. Young scientists with strong theoretical backgrounds will receive scientific training in experimental approaches to understanding the operation of the intact brain. They will learn to integrate their theoretical abilities with these experimental approaches to form a mature research program in integrative neuroscience. The research undertaken by the trainees may be theoretical, experimental, or a combination. 
Resident Faculty of the Sloan-Swartz Center and their research interests include: Michael Brainard: Mechanisms underlying vocal learning in the songbird; sensorimotor adaptation to alteration of performance-based feedback Allison Doupe: Development of song recognition and production in songbirds Loren Frank: The relationship between behavior and neural activity in the hippocampus and anatomically related cortical areas Stephen Lisberger: Learning and memory in a simple motor reflex, the vestibulo-ocular reflex, and visual guidance of smooth pursuit eye movements by the cerebral cortex Michael Merzenich: Experience-dependent plasticity underlying learning in the adult cerebral cortex, and the neurological bases of learning disabilities in children Kenneth Miller: Circuitry of the cerebral cortex: its structure, self-organization, and computational function (primarily using cat primary visual cortex as a model system) Philip Sabes: Sensorimotor coordination, adaptation and development of spatially guided behaviors, experience dependent cortical plasticity. Christoph Schreiner: Cortical mechanisms of perception of complex sounds such as speech in adults, and plasticity of speech recognition in children and adults Michael Stryker: Mechanisms that guide development of the visual cortex There are also a number of visiting faculty, including Larry Abbott, Brandeis University; Bill Bialek, Princeton University; Sebastian Seung, MIT; David Sparks, Baylor University; Steve Zucker, Yale University. TO APPLY for a POSTDOCTORAL position, please send a curriculum vitae, a statement of previous research and research goals, up to three relevant publications, and have two letters of recommendation sent to us. The application deadline is January 30, 2004. Send applications to: Sloan-Swartz Center 2004 Admissions Sloan-Swartz Center for Theoretical Neurobiology at UCSF Department of Physiology University of California 513 Parnassus Ave. 
San Francisco, CA 94143-0444

PRE-DOCTORAL applicants with strong theoretical training may seek admission into the UCSF Neuroscience Graduate Program as first-year students. Applicants seeking such admission must apply by Jan. 3, 2004 to be considered for fall 2004 admission. Application materials for the UCSF Neuroscience Program may be obtained from http://www.ucsf.edu/neurosc/neuro_admissions.html#application or from:

Pat Vietch
Neuroscience Graduate Program
Department of Physiology
University of California San Francisco
San Francisco, CA 94143-0444
neuroscience at phy.ucsf.edu

Be sure to include your surface-mail address. The procedure is: make a normal application to the UCSF Neuroscience program; but also alert the Sloan-Swartz Center of your application, by writing to sloan-info at phy.ucsf.edu. If you need more information:
-- Consult the Sloan-Swartz Center WWW Home Page: http://www.sloan.ucsf.edu/sloan
-- Send e-mail to sloan-info at phy.ucsf.edu
-- See also the home page for the W.M. Keck Foundation Center for Integrative Neuroscience, in which the Sloan-Swartz Center is housed: http://www.keck.ucsf.edu/

From D.Palmer-Brown at lmu.ac.uk Mon Nov 24 10:29:34 2003 From: D.Palmer-Brown at lmu.ac.uk (Palmer-Brown, Dominic [IES]) Date: Mon, 24 Nov 2003 15:29:34 -0000 Subject: No subject Message-ID: Dear Connectionists, I would be grateful if you would circulate the following information to potential postgraduates. PhD Research Bursary in Neurocomputing. Within the School of Computing at Leeds Metropolitan University we are investigating neural network algorithms for adaptive function networks applied to data analysis and natural language processing. The project is likely to involve collaboration with psychologists and environmental scientists, in addition to computer scientists.
The ideal candidate would possess a Master's or very good Bachelor's degree in a discipline that includes a significant level of computing, and would be able to demonstrate a keen interest in neural networks, cognitive science, and programming, together with the determination to successfully complete three years of PhD study. The bursary is worth £9,000 in the first year, in addition to EU/Home fees. The formal advert is on jobs.ac.uk at http://jobs.ac.uk/jobfiles/PI512.html and it includes details of how to apply. I'm happy to respond to any informal enquiries. Best wishes, Dominic

****************************************************************************
Dominic Palmer-Brown PhD
d.palmer-brown at lmu.ac.uk, d.palmer-brown at leedsmet.ac.uk
+44 (0)113 2837594
Professor of Neurocomputing
Leader of the Computational Intelligence Research Group
http://www.lmu.ac.uk/ies/comp/research/cig/
School of Computing
Faculty of Informatics and Creative Technologies
Leeds Metropolitan University
****************************************************************************

From nando at cs.ubc.ca Tue Nov 25 13:26:18 2003 From: nando at cs.ubc.ca (Nando de Freitas) Date: Tue, 25 Nov 2003 10:26:18 -0800 Subject: Machine Learning Jobs at UBC Message-ID: <3FC39ECA.3020001@cs.ubc.ca> Dear colleagues, The Department of Computer Science at the University of British Columbia is looking for outstanding candidates for a faculty position in machine learning and computational statistics (broadly construed). See http://www.cs.ubc.ca/career/Faculty/index.html UBC has a dynamic department in Vancouver, which is a great place to live (and near NIPS!). For more details on the department see http://www.cs.ubc.ca/ Note that this position is in addition to a corresponding position in the Department of Statistics. See: http://www.stat.ubc.ca/jobs/CRC_Sept_2003_approved.html If you have any questions about this position, don't hesitate to ask.
Nando From paul at santafe.edu Tue Nov 25 18:06:58 2003 From: paul at santafe.edu (Paul Brault) Date: Tue, 25 Nov 2003 16:06:58 -0700 Subject: SFI Complex Systems Summer Schools Message-ID: ANNOUNCING THE SANTA FE INSTITUTE'S 2004 COMPLEX SYSTEMS SUMMER SCHOOLS Santa Fe School: June 7 - July 2, 2004 in Santa Fe, New Mexico, USA Director: Melanie Mitchell, Oregon Health & Science University and Santa Fe Institute. Held at the campus of St. John's College. Administered by the Santa Fe Institute. China School: July 5 - 30, 2004 in Qingdao, Shandong Province, China. Co-Directors: Douglas Erwin, Smithsonian Institution and Santa Fe Institute; John Olsen, University of Arizona. Held at the campus of Qingdao University. Administered by Qingdao University and the Santa Fe Institute. GENERAL DESCRIPTION: An intensive four-week introduction to complex behavior in mathematical, physical, living, and social systems for graduate students and postdoctoral fellows in the physical, natural and social sciences. Open to students from all countries. Students are expected to attend one school for the full four weeks. Week one will consist of an intensive series of lectures and laboratories introducing the foundational ideas and tools of complex systems research. Topics will include nonlinear dynamics and pattern formation, information theory and computation theory, adaptation and evolution, network structure and dynamics, computer modeling tools, and specific applications of these core topics to various disciplines. Weeks two and three will consist of lectures and panel discussions on current research in complex systems. * Santa Fe: Lecture topics include cancer as a complex adaptive system; neuro-cognitive development; ecological dynamics and robustness; and interactions between physics and computation. * China: Lecture topics include defining principles and methods of complex systems, and specific case studies drawn from the physical, biological, and social sciences. 
Week four will be devoted to the completion and presentation of student projects. COSTS: No tuition is charged. Housing and meal costs are supported as follows: * Santa Fe--100% for graduate students and 50% for postdoctoral fellows (the remaining 50% share is $750, due at the beginning of the school). * China--100% for graduate students and postdoctoral fellows. Most students will provide their own travel funding. Some travel scholarships may be available based on demonstrated need, with preference given to international students. Housing and travel support for accompanying families is not available. ELIGIBILITY: Applications are solicited from graduate students and postdoctoral fellows in any discipline. Some background in science and mathematics at the undergraduate level, at least through calculus and linear algebra, is required. Students should indicate school location preference when applying. Placements may be influenced by recent increased restrictions in U.S. foreign visitor policies. APPLICATION INSTRUCTIONS: Chinese students who wish to apply to the Qingdao school should be alert for a call for nominations at their university or research institution and apply locally. For more information, please e-mail summerschool at santafe.edu. Applications for the Santa Fe school, or for non-China international participants in the Qingdao school, may be submitted using our online application form at http://www.santafe.edu/csss04.html. Application requirements include a current resume with publications list (if any), a statement of your current research interests and comments about why you want to attend the school, and two letters of recommendation from scholars who know your work. Applications sent via postal mail will also be accepted. Do not bind your application materials in any manner.
Send packages to:

Summer Schools
Santa Fe Institute
1399 Hyde Park Road
Santa Fe, NM 87501 USA

Deadline: All application materials must be postmarked or electronically submitted no later than January 23, 2004. Women, minorities, and students from developing countries are especially encouraged to apply. FOR FURTHER INFORMATION: Please visit http://www.santafe.edu/csss04.html, or e-mail summerschool at santafe.edu. From niebur at jhu.edu Tue Nov 25 13:45:04 2003 From: niebur at jhu.edu (niebur@jhu.edu) Date: Tue, 25 Nov 2003 13:45:04 -0500 Subject: Graduate studies in Systems Neuroscience at the Mind/Brain Institute of Johns Hopkins University Message-ID: <200311251845.hAPIj4B05658@russell.mindbrain> PLEASE DO NOT USE 'REPLY'; FOR MORE INFORMATION CONTACT ADDRESSES ON WEB PAGES GIVEN BELOW ******************************************************************* Graduate Training in Systems Neuroscience in the Zanvyl Krieger Mind/Brain Institute of Johns Hopkins University ******************************************************************* The Zanvyl Krieger Mind/Brain Institute is dedicated to the study of the neural mechanisms of higher brain functions using modern neurophysiological, anatomical, and computational techniques. Applications are invited for pre-doctoral fellowships from students with a strong interest in systems neuroscience. In addition to students with training in neuroscience or neurobiology, we particularly encourage students with a background in quantitative or computational sciences such as mathematics, physics, engineering or computer science. For those students, biological or neuroscience training is not required, but students must show a strong commitment to combining theoretical and experimental techniques to understand brain function.
Faculty in the Mind/Brain Institute include:

Guy McKhann (emeritus)
Vernon Mountcastle (emeritus)
Gian Poggio (emeritus)
Ken Johnson (Director): Neural Mechanisms of Tactile Perception and Object Recognition
Ed Connor: Shape Processing in Higher Level Visual Cortex
Stewart Hendry: Functional Organization of the Primate Visual System
Rudiger von der Heydt: Neural Mechanisms of Visual Perception
Steven Hsiao: Neurophysiology of Tactile Shape and Texture Perception
Alfredo Kirkwood: Mechanisms of Cortical Modification
Ernst Niebur: Computational Neuroscience
Takashi Yoshioka: Neural Mechanisms of Tactile Perception and Object Recognition

The neuroscience graduate program includes over sixty faculty members in both clinical and academic departments. In addition, students from other graduate programs including Biomedical Engineering, Electrical Engineering, Psychology and Biophysics are part of the Mind/Brain Institute. For more details about the Institute visit the webpage www.mb.jhu.edu Information about the neuroscience graduate program, including online and off-line application, is available from neuroscience.jhu.edu/gradprogram.asp

--
Dr. Ernst Niebur
Assoc. Prof. of Neuroscience
Krieger Mind/Brain Institute
Johns Hopkins University
3400 N. Charles Street
Baltimore, MD 21218
niebur at jhu.edu
http://cnslab.mb.jhu.edu
(410)516-8643, -8640 (secr), -8648 (fax), -3357 (lab)

From pmunro at mail.sis.pitt.edu Sun Nov 30 22:27:38 2003 From: pmunro at mail.sis.pitt.edu (Paul Munro) Date: Sun, 30 Nov 2003 22:27:38 -0500 (EST) Subject: ICCM 2004 Announcement July 29 - Aug 1, 2004 In-Reply-To: Message-ID: Sixth International Conference of Cognitive Modeling ICCM-2004 http://simon.lrdc.pitt.edu/~iccm To be held July 29 - August 1, 2004, in Pittsburgh, USA (jointly between Carnegie Mellon University and the University of Pittsburgh). THEME ICCM brings researchers together who develop computational models that explain/predict cognitive data.
The core theme of ICCM-2004 is Integrating Computational Models: models that integrate diverse data; integration across modeling approaches; and integration of teaching and modeling. ICCM-2004 seeks to grow the discipline of computational cognitive modeling. Towards this end, it will provide
- a sophisticated modeling audience for cutting-edge researchers
- critical information on the best computational modeling teaching resources for teachers of the next generation of modelers
- a forum for integrating insights across alternative modeling approaches (including connectionism, symbolic modeling, dynamical systems, Bayesian modeling, and cognitive architectures) in both basic research and applied settings, across a wide variety of domains, ranging from low-level perception and attention to higher-level problem-solving and learning
- a venue for planning the future growth of the discipline

INVITED SPEAKERS
Kenneth Forbus (Northwestern University)
Michael Mozer (University of Colorado at Boulder)

SUBMISSION CATEGORIES --- DEADLINE FOR SUBMISSIONS: April 1st 2004

Papers and Posters
Papers and posters will follow the 6-page 10-point double-column single-spaced US-letter format used by the Annual Cognitive Science Society Meeting. Formatting templates and examples will be made available on the website. The research being presented at ICCM-2004 will appear in the conference proceedings. The proceedings will contain 6-page extended descriptions for paper presentations and 2-page extended abstracts for poster presentations. There will also be an opportunity to attach model code and simulation results in an electronic form.

Comparative Symposia
Three to five participants submit a symposium in which they all present models relating to the same domain or phenomenon. The participants must agree upon a set of fundamental issues in their domain that all participants must address or discuss.
Parties interested in putting a comparative symposium proposal together are highly encouraged to do so well before the April 1st deadline and will be given feedback shortly after submission. Please see the website for additional information.

Newell Prize for Best Student Paper
Awarded to the paper first-authored by a student that provides the most innovative or complete account of cognition in a particular domain. The winner of the award will receive full reimbursement for the conference fees, lodging costs, and a $1,000 stipend.

The Best Applied Research Paper Award
To be eligible, 1) the paper should capture behavioral data not gathered in the psychology lab OR the paper should capture behavioral data in a task that has high external validity; 2) the best paper is the one from this category that provides the most innovative or complete solution to a real-world, practical problem.

Doctoral Consortium
A full-day session one day prior to the main conference for doctoral students to present dissertation proposal ideas to one another and receive feedback from experts from a variety of modeling approaches. Student participants receive complimentary conference registration as well as lodging and travel reimbursement---maximum amounts will be determined at a later date.

CONFERENCE CHAIRS
Marsha Lovett (lovett at cmu.edu)
Christian Schunn (schunn at pitt.edu)
Christian Lebiere (clebiere at maad.com)
Paul Munro (pmunro at mail.sis.pitt.edu)

Further information about the conference can be found at http://simon.lrdc.pitt.edu/~iccm or through email inquiries to iccm at pitt.edu.
From Bob.Williamson at anu.edu.au Sun Nov 30 18:44:15 2003 From: Bob.Williamson at anu.edu.au (Bob Williamson) Date: Mon, 1 Dec 2003 10:44:15 +1100 (AUS Eastern Daylight Time) Subject: Senior and Junior Research Positions in Machine Learning; Canberra, Australia Message-ID: Senior and Junior Research Positions in Machine Learning (4 positions) Canberra, Australia National ICT Australia (NICTA) is a newly formed research institute based in Canberra and Sydney focussing on Information and Communication Technology. Details of the centre can be found on its website http://nicta.com.au We are now hiring researchers at all levels from postdoctoral to the equivalent of full professor in machine learning. The junior positions are of 3-5 years' duration. The senior positions are continuing. All Canberra-based NICTA researchers are eligible to have an adjunct position at the Australian National University. There are at least four positions available, and at least one at level E (equivalent to full professor). Internationally competitive remuneration is offered. The formal job ad can be found at http://nicta.com.au/jobs/SML_B2.pdf which contains details on how to apply. The closing date is 20 January 2004. Please pass this on to any of your colleagues who you think may be interested. Regards
Professor Robert (Bob) Williamson
Director, Canberra Research Laboratory
National ICT Australia Ltd (NICTA)
Research School of Information Sciences and Engineering,
Australian National University, Canberra 0200 AUSTRALIA
Phone: +61 2 6125 0079
Office: +61 2 6125 8801
Fax: +61 2 6125 8623
Mobile: +61 2 0405 3877
Bob.Williamson at anu.edu.au
http://nicta.com.au
http://axiom.anu.edu.au/~williams

From abr2001 at med.cornell.edu Mon Nov 3 10:02:25 2003 From: abr2001 at med.cornell.edu (Adrian B. Robert) Date: Mon, 3 Nov 2003 10:02:25 -0500 Subject: Position in computational neuroscience/informatics Message-ID: Computational Neuroscience Research Associate and Neuroinformatics Research Associate Starting 1 January 2004 At the Laboratory of Neuroinformatics in New York City, funded by the NIH's Human Brain Project. The Laboratory of Neuroinformatics at Cornell's Weill Medical College in New York City seeks researchers at the post-doc level or above to join a team developing an integrated suite of analytic algorithms, parallel computational resources, databases, tools, and standards for data and algorithm description and exchange. Successful candidates will have a background in computational neuroscience or neuroinformatics, and combine the ability to work in a team setting with creativity and initiative.
We provide generous salary and benefits, the excitement of life in New York, and the opportunity to work with a dedicated group of neuroinformatic developers and neurophysiologists, including Daniel Gardner, Jonathan D. Victor, and distinguished collaborators in neural data acquisition, analysis, and algorithm development. Our work in computational neuroinformatics is a component of the NIH's Human Brain Project, funded by NIMH. To advance our understanding of neural coding, we are developing a suite of information-theoretic algorithms as public resources and for application to data in our neurophysiology databases via a linked dedicated computational array. The project complements our development at neurodatabase.org of searchable neurophysiology databases containing spike train and other microelectrode data and allied descriptive metadata including recording site, technique, and stimulus. To aid data sharing and interoperability among neurodatabases, we are creating BrainML, an XML-based multilevel data description suite for neuroscience. Computational Neuroscience candidates should have solid experience using information theoretic or other statistical or analytic techniques to analyze neurophysiologic or similar experimental data. Programming skills in C and Matlab are essential, and experience in large-scale computation or numerical analysis is helpful. Neuroinformatics candidates will bring experience with informatics techniques including database design, Java programming, and/or XML as well as a background in neuroscience, bioinformatics, or medical informatics. Those attending the Society for Neuroscience meeting may contact Daniel Gardner via the placement service (employer number 204240); otherwise, email CV, letter of interest, and the names of three references to: dan at med.cornell.edu. 
From bower at uthscsa.edu Mon Nov 3 19:17:09 2003 From: bower at uthscsa.edu (james Bower) Date: Mon, 03 Nov 2003 18:17:09 -0600 Subject: Assistant Professor in Cognitive and/or Systems Neuroscience. Message-ID: ASSISTANT FACULTY POSITION IN COGNITIVE AND/OR SYSTEMS NEUROSCIENCE AT THE UNIVERSITY OF TEXAS - SAN ANTONIO The Department of Biology at The University of Texas at San Antonio (UTSA) (http://www.utsa.edu) invites applications for a tenure-track Assistant Professor faculty position in the general area of Cognitive and/or Systems Neurobiology, broadly defined. Neuroscience is a rapidly growing emphasis at UTSA led by the Cajal Neuroscience Research Institute (http://bio.utsa.edu/Cajal/), an interdepartmental organization of neuroscience faculty. Applicants' research interests should include experimental and computational techniques but can apply to any area of neurobiological research. The Department of Biology consists of 30 faculty members, and offers a B.S. degree in Biology, M.S. degrees in Biology and Biotechnology, and a Ph.D. degree in Biology with an emphasis in Neurobiology. New research facilities will be an integral part of an $83 million, 228,000 square foot Biological Sciences Building currently under construction. Advanced Neuroimaging facilities are also available through the Research Imaging Center at the University of Texas Health Science Center San Antonio. A competitive start-up package is available. We are especially interested in candidates committed to our mission of research and teaching within a diverse student body. Additional information on the search process, UTSA, as well as the benefits of living and working in 'Ol San Antone can be found at: http://bio.utsa.edu/faculty-recruitment -- James M. Bower Ph.D.
Research Imaging Center
University of Texas Health Science Center at San Antonio
7703 Floyd Curl Drive
San Antonio, TX 78284-6240
Cajal Neuroscience Center
University of Texas San Antonio
Phone: 210 567 8080
Fax: 210 567 8152

From Mayank_Mehta at brown.edu Tue Nov 4 13:45:53 2003 From: Mayank_Mehta at brown.edu (Mayank Mehta) Date: Tue, 04 Nov 2003 13:45:53 -0500 Subject: Assistant or Associate Professor Tenure Track Faculty Message-ID: <5.2.1.1.2.20031104133552.00b97d90@postoffice.brown.edu> Neuroscientist Assistant or Associate Professor Tenure Track Faculty Brown University Medical School The Department of Neuroscience at Brown University announces a tenure-track position at the Assistant or Associate Professor level. Research areas of particular interest to the Department include synaptic function and plasticity, molecular and cellular neurobiology, sensory processing, motor control, and development. The Ph.D. or M.D. degree and at least 2 years of relevant postdoctoral training are required. Neuroscience at Brown University is undergoing a significant expansion that includes additional positions, new research buildings and new facilities for transgenic mice. This expanded research infrastructure will complement existing state-of-the-art facilities for molecular biology, imaging, multielectrode recording and MRI.

Criteria for Assistant Professor: Ph.D. or M.D. Two or more years of postdoctoral experience in research; ability for independent research and potential to secure external funding to support a scholarly research program; potential to be an effective teacher and mentor for undergraduate and graduate students.

Criteria for Associate Professor: Ph.D. or M.D.
Teaching - Effectiveness as a lecturer and mentor of graduate students and postdoctoral fellows. Ability to direct courses at the undergraduate and graduate level.
Research
- Record of scholarly productivity through publication in peer-reviewed journals and a record of presentation at scientific meetings and research seminars; established program of research with continuity of funding and continuous productivity; national reputation for excellence in scholarship and a record of professional service to the scientific and academic community.

Applications received by December 15, 2003 will be given full consideration. Please specify if you are applying at Assistant or Associate Professor level. Submit a curriculum vitae, a set of representative reprints, a concise description of research interests and goals, and arrange for three (Assistant Professor) or five (Associate Professor) letters of reference to be sent to:

David Berson, Ph.D.
Department of Neuroscience
Brown University Search Committee
Box 1953
Providence, RI 02912

Brown University is an Affirmative Action/Equal Opportunity employer.

Mayank R. Mehta
Brown University
Department of Neuroscience
Providence, RI 02912-1953
URL: http://neuroscience.brown.edu/mehta.html

From pam_reinagel at hms.harvard.edu Wed Nov 5 14:38:40 2003 From: pam_reinagel at hms.harvard.edu (Pamela Reinagel) Date: Wed, 5 Nov 2003 11:38:40 -0800 Subject: UCSD Comp Neurosci Program: Call for Applications Message-ID: <01d501c3a3d4$6d96a7d0$ad9eef84@RLABCOMP2> Call for Applications: PhD in Computational Neurobiology The Computational Neurobiology Graduate Program at the University of California at San Diego is designed to train young scientists with the broad range of scientific and technical skills that are essential to understand the computational resources of neural systems. This program welcomes students with backgrounds in physics, chemistry, biology, psychology, computer science and mathematics, with courses and research programs that reflect the uniquely computational properties of nervous systems.
Complete details of the program can be found at: http://www.biology.ucsd.edu/grad/CN_overview.html To apply to the Computational Neurobiology Program, fill out the Biology Pre-Application form, indicating "Computational Neurobiology" as your first area of interest. http://www.biology.ucsd.edu/grad/admissions/preapp_main.html If your pre-application is judged to be competitive, we will send you a complete application package. Pre-application by Dec 1, 2003 is recommended to meet the full application deadline of Jan 2, 2004. From terry at salk.edu Thu Nov 6 14:00:32 2003 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 6 Nov 2003 11:00:32 -0800 (PST) Subject: NIPS Registration deadline Message-ID: <200311061900.hA6J0WL59653@purkinje.salk.edu> NIPS 2003 early registration deadline: Saturday, November 8, 2003 (midnight PST) REGISTRATION WEBSITE: http://www.nips.snl.salk.edu/ The 17th Annual Conference of NIPS 2003, Neural Information Processing Systems, will be held in Vancouver, British Columbia, Canada December 8-11, 2003 The Post-Conference Workshops will be held in Whistler, B.C. December 12-13, 2003 From mzib at ee.technion.ac.il Sun Nov 9 13:34:41 2003 From: mzib at ee.technion.ac.il (Michael Zibulevsky) Date: Sun, 9 Nov 2003 20:34:41 +0200 (IST) Subject: new paper: Relative Optimization for Blind Deconvolution Message-ID: Dear Colleagues, we would like to announce the following paper: "Relative Optimization for Blind Deconvolution" by A. Bronstein, M. Bronstein and M. Zibulevsky Abstract- We propose a relative optimization framework for quasi-maximum likelihood (ML) blind deconvolution and the relative Newton method as its particular instance. Special Hessian structure allows fast approximate Hessian construction and inversion with complexity comparable to that of gradient methods. Sequential optimization with gradual change of the smoothing parameter makes the proposed algorithm very accurate for sparse or uniformly-like distributed signals.
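[As an illustration of the kind of quasi-ML contrast this abstract refers to, here is a minimal toy sketch. It is not the paper's relative Newton algorithm: it uses plain gradient descent with backtracking, a circular-convolution FIR restoration kernel, and an assumed smooth sparsity prior rho(y) = sqrt(y^2 + eps); all function names are illustrative only.]

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, eps = 4096, 16, 1e-3   # signal length, restoration-kernel length, smoothing

def circ_conv(a, x):
    # Circular convolution of kernel a (length <= N) with signal x (length N).
    return np.real(np.fft.ifft(np.fft.fft(a, N) * np.fft.fft(x)))

def cost_and_grad(w, x):
    # Quasi-ML contrast: mean(rho(y)) - mean_k log|W_k|, where y = w (*) x.
    W = np.fft.fft(w, N)
    y = np.real(np.fft.ifft(W * np.fft.fft(x)))
    rho = np.sqrt(y**2 + eps)            # smooth sparsity prior (an assumption)
    with np.errstate(divide='ignore', invalid='ignore'):
        cost = rho.mean() - np.log(np.abs(W)).mean()
        # d mean(rho)/dw_tau: circular correlation of rho'(y) = y/rho with x
        g_fit = np.real(np.fft.ifft(np.fft.fft(y / rho)
                                    * np.conj(np.fft.fft(x))))[:K] / N
        # d mean_k log|W_k| / dw_tau = Re IFFT(1/W) evaluated at lag -tau
        g_log = np.real(np.fft.ifft(1.0 / W))[(-np.arange(K)) % N]
    return cost, g_fit - g_log

def deconvolve(x, iters=50):
    # Gradient descent with backtracking, so the contrast never increases.
    w = np.zeros(K)
    w[0] = 1.0                           # start from the identity filter
    for _ in range(iters):
        c, g = cost_and_grad(w, x)
        step = 0.5
        while step > 1e-12:
            w_try = w - step * g
            c_try, _ = cost_and_grad(w_try, x)
            if c_try < c:                # accept only a strict decrease
                w = w_try
                break
            step *= 0.5
    return w
```

[The paper's relative Newton method replaces the gradient step above with a Newton step exploiting the special Hessian structure the abstract mentions, which is what gives it gradient-like per-iteration cost.]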
We also propose the use of rational IIR restoration kernels, which constitute a richer family of filters than the traditionally used FIR kernels. Simulation results demonstrate the efficiency of the proposed methods. URL of the pdf file: http://ie.technion.ac.il/~mcib/ or http://visl.technion.ac.il/bron/alex/publications.html

===========================================================================
Michael Zibulevsky, Ph.D.
Department of Electrical Engineering
Technion - Israel Institute of Technology
Haifa 32000, Israel
Email: mzib at ee.technion.ac.il
Phone: 972-4-829-4724
Fax: 972-4-829-4799
http://ie.technion.ac.il/~mcib/
===========================================================================

From auke.ijspeert at epfl.ch Tue Nov 11 13:56:29 2003 From: auke.ijspeert at epfl.ch (Auke Ijspeert) Date: Tue, 11 Nov 2003 19:56:29 +0100 Subject: [2nd CFP] From Animals to Animats 8, SAB'04, 13-17 July 2004, Los Angeles. Message-ID: <3FB130DD.9040605@epfl.ch> ================================================================ We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties. ================================================================ SECOND CALL FOR PAPERS FROM ANIMALS TO ANIMATS 8 The Eighth International Conference on the SIMULATION OF ADAPTIVE BEHAVIOR (SAB'04) http://www.isab.org/sab04 An International Conference organized by The International Society for Adaptive Behavior (ISAB) 13-17 July 2004, Los Angeles, USA The objective of this interdisciplinary conference is to bring together researchers in computer science, artificial intelligence, alife, control, robotics, neurosciences, ethology, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow natural and artificial animals to adapt and survive in uncertain environments.
The conference will focus on experiments with well-defined models --- robot models, computer simulation models, mathematical models --- designed to help characterize and compare various organizational principles or architectures underlying adaptive behavior in real animals and in synthetic agents, the animats. Contributions treating any of the following topics from the perspective of adaptive behavior will receive special emphasis:

The Animat approach
Characterization of agents and environments
Passive and active perception
Motor control
Visually-guided behaviors
Action selection
Behavioral sequencing
Navigation and mapping
Internal models and representation
Learning and development
Motivation and emotion
Collective and social behavior
Emergent structures and behaviors
Neural correlates of behavior
Evolutionary and co-evolutionary approaches
Autonomous robotics
Humanoid robotics
Software agents and virtual creatures
Applied adaptive behavior
Animats in education
Philosophical and psychological issues

Authors should make every effort to suggest implications of their work for both natural and artificial animals, and to distinguish the portions of their work which use simulation from those using a physical agent. Papers that do not deal explicitly with adaptive behavior will be rejected.

Conference format
Following the tradition of SAB conferences, the conference will be single track, with additional poster sessions. Each poster session will start with poster spotlights giving presenters the opportunity to orally present their main results.

Submission Instructions
Submission instructions can be found on the conference Web site. Submitted papers must not exceed 10 pages (double columns). Because the whole review process heavily relies on electronic means, the organizers strongly enforce electronic submissions of PDF documents.
Authors who are unable to deliver a PDF document should contact the program chairs (sab2004-program at isab.org) to discuss alternative ways of submitting. Computer, video, and robotic demonstrations are also invited for submission. Submit a 2-page proposal plus a title page to the program chairs. Indicate equipment requirements and relevance to the themes of the conference. Call for workshop and tutorial proposals A separate call for workshop and tutorial proposals can be found on the conference web site at http://www.isab.org/sab04. The accepted workshops and tutorials will take place on the last day of the conference, July 17. Inquiries can be made to sab2004-workshops at isab.org IMPORTANT DATES (2004) JAN 09: Submissions must be received JUL 13-16: Conference dates JUL 17: Workshops and tutorials Program chairs: Stefan Schaal, University of Southern California (USC) Auke Ijspeert, Swiss Federal Institute of Technology, Lausanne & USC Aude Billard, Swiss Federal Institute of Technology, Lausanne & USC Sethu Vijayakumar, University of Edinburgh & USC General Chairs: John Hallam, Universities of Odense and Edinburgh Jean-Arcady Meyer, Laboratoire d'Informatique de Paris 6 Publisher: The MIT Press, Cambridge. 
Program queries to: sab2004-program at isab.org Workshops queries to: sab2004-workshops at isab.org General queries to: sab2004 at isab.org WWW Page: http://www.isab.org/sab04 From thomas.j.palmeri at vanderbilt.edu Wed Nov 12 10:25:16 2003 From: thomas.j.palmeri at vanderbilt.edu (Thomas Palmeri) Date: Wed, 12 Nov 2003 09:25:16 -0600 Subject: Postdoctoral Fellowship at Vanderbilt University Message-ID: <001401c3a931$2c8b3c80$71e63b81@PalmeriThinkPad> POSTDOCTORAL FELLOWSHIP LINKING COMPUTATIONAL MODELS AND SINGLE-CELL NEUROPHYSIOLOGY Members of the Psychology Department and the Center for Integrative and Cognitive Neuroscience at Vanderbilt University seek a highly qualified postdoctoral fellow to join an NSF-funded collaborative research project linking computational models of human cognition with single-cell neurophysiology. The aim is to elucidate how control over attention, categorization, and response selection is instantiated in neural processes underlying adaptive behavior. The project integrates separate programs of research in computational models of human cognition (Logan and Palmeri) and in single-cell neurophysiology (Schall). We are particularly interested in applicants with training in computational modeling (experience in mathematical modeling, neural network modeling, and dynamic systems modeling are equally desirable). Knowledge of theoretical and empirical research in attention, categorization, response selection, or related areas of cognition is desirable. The fellowship will pay according to the standard NIH scale, and will be for one or two years. Fellows will be expected to apply for individual funding within their first year. Applicants should send a current vita, relevant reprints and preprints, a personal letter describing their research interests, background, goals, and career plans, and reference letters from two individuals. Applications will be reviewed as they are received. 
Individuals who have recently completed their dissertation or who expect to defend their dissertation this winter or spring are encouraged to apply. We will also consider individuals currently in postdoctoral positions. Send materials to: Thomas Palmeri, Gordon Logan, or Jeffrey Schall Department of Psychology 301 Wilson Hall 111 21st Avenue South Nashville, TN 37203 For more information on Vanderbilt, the Psychology Department, and the Center for Integrative and Cognitive Neuroscience, see the following web pages: Vanderbilt University http://www.vanderbilt.edu/ Psychology Department http://sitemason.vanderbilt.edu/psychology Center for Integrative and Cognitive Neuroscience http://cicn.vanderbilt.edu Vanderbilt University is an Affirmative Action / Equal Opportunity employer. -------------------------------------------- Thomas Palmeri Associate Professor Department of Psychology 301 Wilson Hall Vanderbilt University Nashville, TN 37240 tel: 615-343-7900 fax: 615-343-8449 thomas.j.palmeri at vanderbilt.edu From mozer at colorado.edu Wed Nov 12 14:56:47 2003 From: mozer at colorado.edu (Michael C. Mozer) Date: Wed, 12 Nov 2003 12:56:47 -0700 Subject: faculty position at University of Colorado Message-ID: <3FB2907F.70107@colorado.edu> UNIVERSITY OF COLORADO: The Department of Computer Science is seeking outstanding candidates for a tenure-track faculty position at the assistant professor level. This position is targeted for candidates whose research focuses on computational biology or bioinformatics, and whose interests overlap the department's core strengths in machine learning, high-performance computing, security, systems and software, and theory. Candidates must have a Ph.D. degree in computer science or a related discipline, enthusiasm for working with both undergraduate and graduate students, and the ability to develop an innovative interdisciplinary research program. 
This position is one of several that the University has committed to bioinformatics as part of a larger initiative in Molecular Biotechnology. It provides an unrivaled opportunity for a top computer scientist to join a critical mass of colleagues from many disciplines including life sciences and biological engineering. The University of Colorado is committed to diversity and equality in education and employment. Review of applications will begin immediately. Candidates should submit a curriculum vitae, research and teaching statements, and the names of at least three references to: Elizabeth Bradley, Chair University of Colorado Department of Computer Science Campus Box 430 Boulder, CO 80309 From hasselmo at bu.edu Thu Nov 13 11:25:40 2003 From: hasselmo at bu.edu (Michael Hasselmo) Date: Thu, 13 Nov 2003 11:25:40 -0500 (EST) Subject: Call for Nominations for Awards from International Neural Network Society (INNS) Message-ID: The International Neural Network Society's (INNS) Awards Program has been established to recognize individuals who have made outstanding contributions in the field of Neural Networks. Up to three awards are presented annually to senior individuals for outstanding contributions made in the field of Neural Networks. In addition, two Young Investigator Awards are presented annually to individuals with no more than five years of postdoctoral experience and who are under forty years of age, for significant contributions in the field of Neural Networks. The INNS Awards Committee is soliciting nominations for these awards. Details on the Awards Program, as well as how to make a nomination, are available on the INNS Web page http://www.inns.org/ . It also contains a list of previous INNS Award recipients. All nominations should be emailed to the chair of the Awards Committee, Prof. Erkki Oja, email address erkki.oja at hut.fi, by November 19, 2003. 
From perdi at kzoo.edu Thu Nov 13 15:17:45 2003 From: perdi at kzoo.edu (Peter Erdi) Date: Thu, 13 Nov 2003 15:17:45 -0500 (EST) Subject: Call: IJCNN 2004 - Special Sessions organized by the ENNS In-Reply-To: <200311120806.KAA09123@james.hut.fi> References: <200308051104.OAA65498@james.hut.fi> <3.0.2.32.20031024085007.0127b830@pop-srv.mbfys.kun.nl> <200311120806.KAA09123@james.hut.fi> Message-ID: Call for organizing Special Sessions by the ENNS for the IJCNN'04 25-28 July 2004, Budapest, Hungary The annual International Joint Conference on Neural Networks (IJCNN) is one of the premier international conferences in the field. The European Neural Network Society (ENNS) will organize six special sessions. This is an open CALL for organizing Special Sessions for the IJCNN meeting supported by the ENNS. Each proposal should include a 10-line motivation and should contain 4-6 papers. Each paper should have a title, authors (with email addresses), a 10-line abstract, and a commitment from the authors. The application deadline is December 1. Applications should be sent by email to perdi at kzoo.edu. Acceptance/rejection information will be provided by December 20th. (Of course, authors whose special session is not accepted can submit their contributions as regular papers.) All authors of accepted special sessions are requested to submit their papers by January 29th through the same system as submitted papers, but under the heading of special sessions. (see http://www.conferences.hu/budapest2004/ ) Acceptance to a special session does not have any financial consequences. The ENNS has a policy of supporting its young members. (see http://www.snn.kun.nl/enns/) Travel grants for a number of students presenting papers at IJCNN will be available. Preference will be given to those participants who make presentations in the Special Sessions organized by the ENNS. 
Peter Erdi program Co-Chair perdi at kzoo.edu From cimca at ise.canberra.edu.au Thu Nov 13 22:06:30 2003 From: cimca at ise.canberra.edu.au (cimca) Date: Fri, 14 Nov 2003 14:06:30 +1100 Subject: CFP: International Conference on Computational Intelligence for Modelling, Control and Automation Message-ID: <6.0.0.22.1.20031114140609.02558d18@mercury.ise.canberra.edu.au> CALL FOR PAPERS International Conference on Computational Intelligence for Modelling, Control and Automation 12-14 July 2004 Gold Coast, Australia http://www.ise.canberra.edu.au/conferences/cimca04/index.htm Jointly with International Conference on Intelligent Agents, Web Technologies and Internet Commerce 12-14 July 2004 Gold Coast, Australia http://www.ise.canberra.edu.au/conferences/iawtic04/index.htm The international conference on computational intelligence for modelling, control and automation will be held in Gold Coast, Australia on 12-14 July 2004. The conference provides a medium for the exchange of ideas between theoreticians and practitioners to address the important issues in computational intelligence, modelling, control and automation. The conference will consist of both plenary sessions and contributory sessions, focusing on theory, implementation and applications of computational intelligence techniques to modelling, control and automation. For contributory sessions, papers (4 pages or more) are being solicited. Several well-known keynote speakers will address the conference. 
Topics of the conference include, but are not limited to, the following areas: Modern and Advanced Control Strategies: Neural Networks Control, Fuzzy Logic Control, Genetic Algorithms & Evolutionary Control, Model-Predictive Control, Adaptive and Optimal Control, Intelligent Control Systems, Robotics and Automation, Fault Diagnosis, Intelligent Agents, Industrial Automation Hybrid Systems: Fuzzy Evolutionary Systems, Fuzzy Expert Systems, Fuzzy Neural Systems, Neural Genetic Systems, Neural-Fuzzy-Genetic Systems, Hybrid Systems for Optimisation Data Analysis, Prediction and Model Identification: Signal Processing, Prediction & Time Series Analysis, System Identification, Data Fusion and Mining, Knowledge Discovery, Intelligent Information Systems, Image Processing, Image Understanding, Parallel Computing applications in Identification & Control, Pattern Recognition, Clustering, Classification Decision Making and Information Retrieval: Case-Based Reasoning, Decision Analysis, Intelligent Databases & Information Retrieval, Dynamic Systems Modelling, Decision Support Systems, Multi-criteria Decision Making, Qualitative and Approximate Reasoning Paper Submission Papers will be selected based on their originality, significance, correctness, and clarity of presentation. Papers (4 pages or more) should be submitted to the following e-mail or postal address: CIMCA'2004 Secretariat School of Computing University of Canberra Canberra, 2601, ACT, Australia E-mail: cimca at ise.canberra.edu.au E-mail submission is preferred. Papers should present original work which has not been published and is not under review for another conference. Important Dates 14 March 2004 Submission of papers 30 April 2004 Notification of acceptance 21 May 2004 Deadline for camera-ready copies of accepted papers 12-14 July 2004 Conference sessions Special Sessions and Tutorials Special sessions and tutorials will be organised at the conference. 
The conference is calling for special session and tutorial proposals. All proposals should be sent to the conference chair on or before 27th February 2004. CIMCA'04 will also include a special poster session devoted to recent work and work-in-progress. Abstracts are solicited for this session. Abstracts (3-page limit) may be submitted up to 30 days before the conference date. Invited Sessions Keynote speakers from academia and industry will be addressing the main issues of the conference. Visits and social events Sightseeing visits will be arranged for the delegates and guests. A separate program will be arranged for companions during the conference. Further Information For further information either contact cimca at ise.canberra.edu.au or see the conference homepage at: http://www.ise.canberra.edu.au/conferences/cimca04/index.htm From Ronan.Reilly at may.ie Thu Nov 13 08:44:49 2003 From: Ronan.Reilly at may.ie (Ronan Reilly) Date: Thu, 13 Nov 2003 13:44:49 -0000 Subject: Associate Professorship in Computer Science at National University of Ireland, Maynooth Message-ID: <001201c3a9ec$4fdd6030$25f59d95@neuron> Readers of the list may be interested in the position advertised below. While it is not tied to a specific research area, applications from candidates with backgrounds in machine learning, computational neuroscience, cognitive science, and related areas are very welcome. Ronan === DEPARTMENT OF COMPUTER SCIENCE ASSOCIATE PROFESSOR POST NUI Maynooth invites applications for the position of Associate Professor in Computer Science. Applicants should have a strong track record in research and teaching. Applications are welcome in all areas of research. The person appointed will play an active role in all aspects of the Department's activities, pursuing high-quality research, providing research leadership, and supervising and teaching students at both undergraduate and postgraduate level. Salary Scale (new entrants): €68,495 - €91,555 p.a. 
(8 points) Prior to application, further details of the post may be obtained by writing to the Personnel Officer, National University of Ireland, Maynooth, Maynooth, Co. Kildare. Confidential Fax: +353-1-7083940; Email: personnel at may.ie Department of Computer Science Website: www.cs.may.ie Applications (including a full CV, the names, email and postal addresses, telephone, and fax numbers of three referees, and a personal statement) should be forwarded to the Personnel Officer, to arrive no later than 27 February, 2004. === _________________________________ Prof. Ronan G. Reilly Department of Computer Science National University of Ireland, Maynooth Co. Kildare IRELAND v: +353-1-708 3847 f: +353-1-708 3848 w1: www.cs.may.ie/staff/ronan.html (homepage) w2: cortex.cs.may.ie (research group) e: ronan at cs.may.ie From sylee at ee.kaist.ac.kr Mon Nov 10 22:35:05 2003 From: sylee at ee.kaist.ac.kr (Soo-Young Lee at KAIST) Date: Tue, 11 Nov 2003 12:35:05 +0900 Subject: New Journal: Neural Information Processing - Letters and Reviews References: <3EC14B01.3040901@iub-psych.psych.indiana.edu> Message-ID: <002701c3a804$cc455fe0$ac1ff88f@kaistsylee> (Sorry if you received this multiple times.) The first issue of a new journal was published online in October 2003 at http://www.nip-lr.info and http://bsrc.kaist.ac.kr/nip-lr/. Neural Information Processing - Letters and Reviews A high-quality, rapid publication with double-blind reviews Table of Contents Vol.1, No.1, October, 2003 Preface pp. i-ii Toward a New Journal with Timeliness, Accessibility, Quality, and Double-Blind Reviews Soo-Young Lee Review pp. 1-52 Independent Component Analysis and Extensions with Noise and Time: A Bayesian Ying-Yang Learning Perspective Lei Xu Letters pp. 53-59 Extraction and Optimization of Fuzzy Protein Sequences Classification Rules Using GRBF Neural Networks Dianhui Wang, Nung Kion Lee, and Tharam S. Dillon pp. 
61-66 Phonological Approach to the Mapping of Semantic Space: Replication as a Basis for Language Storage in the Cortex Victor Vvedensky pp. 67-73 Artificial Neural Networks as Analytic Tools in an ERP Study of Face Memory Reiko Graham and Michael R.W. Dawson =============================================================== Facts of NIP-LR The goals of the new journal NIP-LR are (a) Timely Publication - 3 to 4 months to publication for Letters - up to 6 months to publication for Reviews (b) Connecting Neuroscience and Engineering - serving both system-level neuroscience and artificial neural network communities (c) Low Cost - free for online only - US$30 per year for hardcopy (d) High Quality - unbiased double-blind reviews - short papers (up to 10 single-column single-space published pages) for Letters (Letters may include preliminary results of excellent ideas, and full papers may be published later in other journals.) - in-depth reviews of new and important topics for Reviews The topics include Cognitive neuroscience Computational neuroscience Neuroinformatics database and analysis tools Brain signal measurements and functional brain mapping Neural modeling and simulators Neural network architecture and learning algorithms Data representations in neural systems Information theory for neural systems Software implementations of neural networks Neuromorphic hardware implementations Biologically-motivated speech signal processing Biologically-motivated image processing Human-like inference systems and intelligent agents Human-like behavior and intelligent systems Artificial life Other applications of neural information processing mechanisms The journal will consist of monthly online publications and yearly paper publications. Authors retain all rights to their papers, and may publish extended versions of their Letters in other journals. All submission and review processes will be handled electronically with Adobe PDF, Postscript, or MS Word files. 
For a rapid review process, only binary ("Accept" or "Reject") decisions will be made for Letters, without revision requirements. Minor revisions may be requested for Review papers. (Mandatory English editing services may be recommended.) We also incorporate a double-blind review procedure. (The reviewers will not know the names of the authors.) All papers should be submitted by e-mail at nip-lr at neuron.kaist.ac.kr to Soo-Young Lee, Editor-in-Chief for NIP-LR Director, Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-8490 E-mail: nip-lr at neuron.kaist.ac.kr, sylee at kaist.ac.kr Publisher: KAIST Press Home Page: http://www.nip-lr.info and http://neuron.kaist.ac.kr/nip-lr/ From lss at cs.stir.ac.uk Mon Nov 17 11:05:00 2003 From: lss at cs.stir.ac.uk (Professor Leslie Smith) Date: Mon, 17 Nov 2003 16:05:00 +0000 Subject: CFP: Brain Inspired Cognitive Systems - BICS2004 Message-ID: <3FB8F1AC.6020000@cs.stir.ac.uk> Call for Papers: Brain Inspired Cognitive Systems - BICS2004 University of Stirling, Stirling, Scotland, UK August 29 - September 1, 2004 First International ICSC Symposium on Cognitive Neuro Science (CNS 2004) Cognitive neuroscience covers both computational models of the brain and brain inspired algorithms and artifacts. Chair: Prof. Igor Aleksander, Imperial College London, U.K. i.aleksander at imperial.ac.uk Second International ICSC Symposium on Biologically Inspired Systems (BIS 2004) Systems are inspired by many different aspects of biology. We are interested in systems at all levels, from engineered VLSI to software to mathematical models. Chair: Prof. Leslie Smith, University of Stirling, U.K. lss at cs.stir.ac.uk Third International ICSC Symposium on Neural Computation (NC'2004) Neural Computation covers models, software and hardware implementations together with applications. Chair: Dr. 
Amir Hussain, University of Stirling, U.K. ahu at cs.stir.ac.uk Further Details: http://www.icsc-naiso.org/conferences/bics2004/bics-cfp.html Important dates: Submission deadline January 31, 2004 Notification March 31, 2004 Early registration May 15, 2004 Delivery of full papers and registration: May 31, 2004 Tutorials and Workshops August 29, 2004 Conference August 30 - September 1, 2004 -- Professor Leslie S. Smith, Dept of Computing Science and Mathematics, University of Stirling, Stirling FK9 4LA, Scotland l.s.smith at cs.stir.ac.uk Tel (44) 1786 467435 Fax (44) 1786 464551 www http://www.cs.stir.ac.uk/~lss/ UKRI IEEE NNS Chapter Chair: http://www.cs.stir.ac.uk/ieee-nns-ukri/ -- The University of Stirling is a university established in Scotland by charter at Stirling, FK9 4LA. From orhan at aipllc.com Mon Nov 17 10:01:30 2003 From: orhan at aipllc.com (Orhan Karaali) Date: Mon, 17 Nov 2003 10:01:30 -0500 Subject: SVM and neural network research position Message-ID: <000501c3ad1b$af32a540$97eb63c7@Istanbul> ADVANCED INVESTMENT PARTNERS, LLC. www.aipllc.com Advanced Investment Partners, LLC. (AIP) is a registered investment advisor based in Clearwater, Florida focusing on institutional domestic equity asset management. Our partners include Boston-based State Street Global Advisors, a global leader in institutional financial services, and Amsterdam-based Stichting Pensioenfonds ABP, one of the world's largest pension plans. 
AIP's reputation as an innovative entrepreneur within the asset management community is built upon the research and development of nontraditional quantitative stock valuation techniques, for which a patent was issued in 1998. POSITION: RESEARCH SCIENTIST This position will involve enhancing AIP's financial valuation algorithms for stock selection and portfolio management. Job responsibilities include applying new machine learning and regression algorithms, writing new algorithms, developing new factors, contributing to financial and algorithmic research projects, and developing applications in the area of multifactor stock models. AIP uses Windows XP and 2003 Server; Visual Studio .NET; MS SQL 2000; C++ STL; C#; OLE DB; XML; COM+, and parallel processing technologies. Minimum Qualifications: Ph.D. or Master's degree in CS, EE, Computer Engineering, or Mathematics. Thesis or dissertation concentration in SVM. Understanding of neural networks and regression algorithms. Strong C++ background. Bonus Qualifications: Familiarity with the stock market. Knowledge of SQL and STL. Compensation includes a competitive salary, bonus, and benefits package. US citizenship or US permanent resident status is required. To apply, please mail your resume to: Attn: Orhan Karaali 311 Park Place Blvd. Suite 250 Clearwater, FL 33759 From bio-adit2004-REMOVE at teuscher.ch Tue Nov 18 16:41:06 2003 From: bio-adit2004-REMOVE at teuscher.ch (Christof Teuscher) Date: Tue, 18 Nov 2003 22:41:06 +0100 Subject: [Bio-ADIT2004] - Call for Participation Message-ID: <11182241.TSMHFXFD@teuscher.ch> ================================================================ We apologize if you receive multiple copies of this email. Please distribute this announcement to all interested parties. 
================================================================ Bio-ADIT 2004 CALL FOR PARTICIPATION The First International Workshop on Biologically Inspired Approaches to Advanced Information Technology January 29 - 30, 2004 Swiss Federal Institute of Technology, Lausanne, Switzerland Website: http://lslwww.epfl.ch/bio-adit2004/ Sponsored by - Osaka University Forum, - Swiss Federal Institute of Technology, Lausanne, and - The 21st Century Center of Excellence Program of The Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under the Program Title "Opening Up New Information Technologies for Building Networked Symbiosis Environment" Biologically inspired approaches have already proved successful in achieving major breakthroughs in a wide variety of problems in information technology (IT). A more recent trend is to explore the applicability of bio-inspired approaches to the development of self-organizing, evolving, adaptive and autonomous information technologies, which will meet the requirements of next-generation information systems, such as diversity, scalability, robustness, and resilience. These new technologies will become a base on which to build a networked symbiotic environment for a pleasant, symbiotic society of human beings in the 21st century. Bio-ADIT 2004 will be the first international workshop to present original research results in the field of bio-inspired approaches to advanced information technologies. It will also serve to foster the connection between biological paradigms and solutions for building next-generation information systems. PROGRAM: The advance program for the conference can be viewed at: http://lslwww.epfl.ch/bio-adit2004/program.shtml The program includes 25 oral presentations and 15 poster presentations selected from 85 submitted articles. 
It also includes two keynote addresses: "The Architecture of Complexity: From the Internet to Metabolic Networks", Albert-Laszlo Barabasi, University of Notre Dame, USA. "www.siliconcell.net: Bringing Bits and Chips to Life", Hans V. Westerhoff, Free University, Amsterdam REGISTRATION: Thanks to the generous support of our sponsors, the total registration fees (including lunch, conference dinner and pre-proceedings) are only 90 Swiss Francs (approx. 70 US dollars). Please register through the conference web site, at http://lslwww.epfl.ch/bio-adit2004/registration.shtml and follow the instructions for credit-card payment. We are looking forward to seeing you in Lausanne. EXECUTIVE COMMITTEE: General Co-Chairs: - Daniel Mange (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Shojiro Nishio (Osaka University, Japan) Technical Program Committee Co-Chairs: - Auke Jan Ijspeert (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Masayuki Murata (Osaka University, Japan) Finance Chairs: - Marlyse Taric (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Toshimitsu Masuzawa (Osaka University, Japan) Publicity Chairs: - Christof Teuscher (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Takao Onoye (Osaka University, Japan) Publications Chair: - Naoki Wakamiya (Osaka University, Japan) Local Arrangements Chair: - Carlos Andres Pena-Reyes (Swiss Federal Institute of Technology, Lausanne, Switzerland) Internet Chair: - Jonas Buchli (Swiss Federal Institute of Technology, Lausanne, Switzerland) TECHNICAL PROGRAM COMMITTEE: Co-Chairs: - Auke Jan Ijspeert (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Masayuki Murata (Osaka University, Japan) Members: - Michael A. Arbib (University of Southern California, Los Angeles, USA) - Aude Billard (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Marco Dorigo (Université 
Libre de Bruxelles, Belgium) - Takeshi Fukuda (IBM Tokyo Research Laboratory, Japan) - Katsuo Inoue (Osaka University, Japan) - Wolfgang Maass (Graz University of Technology, Austria) - Ian W. Marshall (BTexact Technologies, UK) - Toshimitsu Masuzawa (Osaka University, Japan) - Alberto Montresor (University of Bologna, Italy) - Chrystopher L. Nehaniv (University of Hertfordshire, U.K.) - Stefano Nolfi (Institute of Cognitive Sciences and Technology, CNR, Rome, Italy) - Takao Onoye (Osaka University, Japan) - Rolf Pfeifer (University of Zurich, Switzerland) - Eduardo Sanchez (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Hiroshi Shimizu (Osaka University, Japan) - Moshe Sipper (Ben-Gurion University, Israel) - Gregory Stephanopoulos (Massachusetts Institute of Technology, USA) - Adrian Stoica (Jet Propulsion Laboratory, USA) - Tim Taylor (University of Edinburgh, UK) - Gianluca Tempesti (Swiss Federal Institute of Technology, Lausanne, Switzerland) - Naoki Wakamiya (Osaka University, Japan) - Hans V. Westerhoff (Vrije Universiteit Amsterdam, NL) - Xin Yao (University of Birmingham, UK) From cindy at bu.edu Tue Nov 18 11:32:30 2003 From: cindy at bu.edu (Cynthia Bradford) Date: Tue, 18 Nov 2003 11:32:30 -0500 Subject: Neural Networks 16(10) Message-ID: <001b01c3adf1$9054dc70$903dc580@cnspc31> NEURAL NETWORKS 16(10) Contents - Volume 16, Number 10 - 2003 ------------------------------------------------------------------ ***** ERRATUM ***** "Delay-dependent exponential stability analysis of delayed neural networks: An LMI approach" Xiaofeng Liao, Guanrong Chen, and Edgar N. 
Sanchez ***** MATHEMATICAL AND COMPUTATIONAL ANALYSIS ***** "Adaptive categorization of ART networks in robot behavior learning using game-theoretic formulation" Wai-keung Fung and Yun-hui Liu "Comparison of simulated annealing and mean field annealing as applied to the generation of block designs" Pau Bofill, Roger Guimera, and Carme Torras "The general inefficiency of batch training for gradient descent learning" D. Randall Wilson and Tony R. Martinez "Analyzing stability of equilibrium points in neural networks: A general approach" Wilson A. Truccolo, Govindan Rangarajan, Yonghong Chen, and Mingzhou Ding "A functions localized neural network with branch gates" Qingyu Xiong, Kotaro Hirasawa, Jinglu Hu, and Junichi Murata "Dynamical properties of strongly interacting Markov chains" Nihat Ay and Thomas Wennekers ***** ENGINEERING AND DESIGN ***** "The co-adaptive neural network approach to the Euclidean traveling salesman problem" E.M. Cochrane and J.E. Beasley "Polynomial harmonic GMDH learning networks for time series modeling" Nikolay Y. Nikolaev and Hitoshi Iba CURRENT EVENTS CONTENTS, VOLUME 16, 2003 AUTHOR INDEX, VOLUME 16, 2003 ------------------------------------------------------------------ Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. 
Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment. 
----------------------------------------------------------------------------
Membership Type        INNS            ENNS      JNNS
----------------------------------------------------------------------------
Membership with        $80 (regular)   SEK 660   Y 13,000 (plus Y 2,000
Neural Networks                                  enrollment fee)
                       $20 (student)   SEK 460   Y 11,000 (plus Y 2,000
                                                 enrollment fee)
----------------------------------------------------------------------------
Membership without     $30             SEK 200   not available to
Neural Networks                                  non-students (subscribe
                                                 through another society);
                                                 Y 5,000 (student, plus
                                                 Y 2,000 enrollment fee)
----------------------------------------------------------------------------
---------------------------------
Name: _____________________________________
Title: _____________________________________
Address: _____________________________________
_____________________________________
_____________________________________
Phone: _____________________________________
Fax: _____________________________________
Email: _____________________________________
Payment: [ ] Check or money order enclosed, payable to INNS or ENNS
OR [ ] Charge my VISA or MasterCard
card number ____________________________
expiration date ________________________
INNS Membership 19 Mantua Road Mount Royal NJ 08061 USA 856 423 0162 (phone) 856 423 3420 (fax) innshq at talley.com http://www.inns.org ENNS Membership University of Skovde P.O. 
Box 408
531 28 Skovde
Sweden
46 500 44 83 37 (phone)
46 500 44 83 99 (fax)
enns at ida.his.se
http://www.his.se/ida/enns

JNNS Membership
c/o Professor Shozo Yasui
Kyushu Institute of Technology
Graduate School of Life Science and Engineering
2-4 Hibikino, Wakamatsu-ku
Kitakyushu 808-0196 Japan
81 93 695 6108 (phone and fax)
jnns at brain.kyutech.ac.jp
http://www.jnns.org/
----------------------------------------------------------------------------

From erik at bbf.uia.ac.be Wed Nov 19 11:10:20 2003
From: erik at bbf.uia.ac.be (Erik De Schutter)
Date: Wed, 19 Nov 2003 17:10:20 +0100
Subject: CNS*04 CALL FOR PAPERS
Message-ID: <3C315C5C-1AAD-11D8-907A-000393452C9C@bbf.uia.ac.be>

FIRST CALL FOR PAPERS
SUBMISSION DEADLINE: January 19, 2004 midnight; submission open December 1, 2003.

Thirteenth Annual Computational Neuroscience Meeting CNS*2004
July 18 - July 20, 2004 (workshops: July 21-22, 2004)
Baltimore, USA
http://www.neuroinf.org/CNS.shtml
Info at cp at bbf.uia.ac.be

The annual Computational Neuroscience Meeting will be held at the historic Radisson Plaza Lord hotel in Baltimore, MD from July 18th to 20th, 2004. The main meeting will be followed by two days of workshops on July 21st and 22nd. In conjunction, the "2004 Annual Symposium, University of Maryland Program in Neuroscience: Computation in the Olfactory System" will be held as a satellite symposium to CNS*04 on Saturday, July 17th.

NOTICE: NEW PAPER SUBMISSION PROCEDURE!

As in previous years, papers presented at the CNS*04 meeting can be published in a special issue of the journal Neurocomputing and in a proceedings book. Authors who would like to see their CNS*04 presentation published will have to submit a COMPLETE manuscript for review during this call (deadline January 19, 2004). You will also have the option to submit an extended summary instead, but this cannot be included in the journal.
Both types of submissions will be reviewed, but full manuscripts will receive reviewers' comments and will have to be revised, with final submission shortly after the meeting. The decision about who gets to speak at the conference is independent of the type of submission; both full manuscripts and extended summaries qualify. More details on the review process can be found below.

Papers can include experimental, model-based, as well as more abstract theoretical approaches to understanding neurobiological computation. We especially encourage papers that mix experimental and theoretical studies. We also accept papers that describe new technical approaches to theoretical and experimental issues in computational neuroscience, or relevant software packages.

PAPER SUBMISSION

The paper submission procedure is again completely electronic this year. Papers for the meeting can be submitted ONLY through the web site at http://www.neuroinf.org/CNS.shtml

Papers can be submitted either as a full manuscript to be published in the journal Neurocomputing (max 6 typeset pages) or as an extended summary (1 to 6 pages). You will need to submit both types of papers in pdf format and the 100 word abstract as text. You will also need to select two categories which describe your paper and which will guide the selection of reviewers. All submissions will be acknowledged by email generated by the neuroinf.org web robot (this may be considered junk mail by a spam filter).

THE REVIEW PROCESS

All submitted papers will first be reviewed by the program committee. Submissions will be judged and accepted for the meeting based on the clarity with which the work is described and the biological relevance of the research. For this reason authors should be careful to make the connection to biology clear. We reject only a small fraction of the submissions (~5%), and this is usually based on absence of biological relevance (e.g. pure machine learning). We will notify authors of meeting acceptance by the beginning of March.
The second stage of review involves evaluation by two independent reviewers of all full manuscripts submitted to the journal Neurocomputing, and of those extended summaries which requested an oral presentation. Full manuscripts will be reviewed as regular journal submissions: each paper will have an action editor and two independent reviewers. A paper may be rejected for publication if it contains no novel content or is considered to contain grave errors. We hope that this will apply to only a small number of papers, but we also need to respect a limit of at most 200 published papers, which may force stricter selection criteria. Rejection at this stage does not exclude poster presentation at the meeting itself, as we assume that these authors will benefit from the feedback they can receive at the meeting. Accepted papers will receive comments for improvements and corrections from the reviewers by e-mail. Submission of the revised papers will be due in August.

Criteria for selection as an oral presentation include perceived quality, the novelty of the research, and the diversity and coherence of the overall program. To ensure diversity, those who have given talks in the recent past will not be selected, and multiple oral presentations from the same lab will be discouraged. All accepted papers not selected for oral talks, as well as papers explicitly submitted as poster presentations, will be included in one of three evening poster sessions. Authors will be notified of the presentation format of their papers by the beginning of May.

CONFERENCE PROCEEDINGS

The proceedings volume is published each year as a special supplement to the journal Neurocomputing. In addition, the proceedings are published in a hardbound edition by Elsevier Press. Only 200 papers will be published in the proceedings volume. For reference, papers presented at CNS*02 can be found in volumes 52-54 of Neurocomputing (2003).
INVITED SPEAKERS:
Mary Kennedy (California Institute of Technology, USA)
Miguel Nicolelis (Duke University, USA)
TBA

ORGANIZING COMMITTEE:
The CNS meeting is organized by the Organization for Computational Neurosciences (http://www.cnsorg.org), presided over by Christiane Linster (Cornell University, USA).
Program chair: Erik De Schutter (University of Antwerp, Belgium)
Local organizer: Asaf Keller (University of Maryland School of Medicine, USA)
Workshop organizer: Adrienne Fairhall (Princeton University, USA)
Government liaison: Dennis Glanzman (NIMH/NIH, USA) and Yuan Liu (NINDS/NIH, USA)
Program committee:
Nicolas Brunel (Universite Paris Rene Descartes, France)
Alain Destexhe (CNRS Gif-sur-Yvette, France)
Bill Holmes (Ohio University, USA)
Hidetoshi Ikeno (Himeji Institute of Technology, Japan)
Don H. Johnson (Rice University, USA)
Leslie M. Kay (University of Chicago, USA)
Barry Richmond (NIMH, USA)
Eytan Ruppin (Tel Aviv University, Israel)
Frances Skinner (Toronto Western Research Institute, Canada)

From dwang at cis.ohio-state.edu Thu Nov 20 10:29:07 2003
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Thu, 20 Nov 2003 10:29:07 -0500
Subject: Faculty position related to machine learning
Message-ID: <3FBCDDB7.7C17EAF8@cis.ohio-state.edu>

The Ohio State University Department of Computer and Information Science invites applications for a tenure-track position at the rank of assistant professor. Of particular interest are candidates who combine interests in one or more of the following fields: data mining, machine learning, and model checking as they relate to bioinformatics or security.
To apply, send a curriculum vitae (including names and addresses of at least three references) and a statement of research and teaching interests, by e-mail to: fsearch at cis.ohio-state.edu or by mail to:

Chair, Faculty Search Committee
Department of Computer and Information Science
The Ohio State University
2015 Neil Avenue, DL395
Columbus, OH 43210-1277

Review of applications will begin immediately and will continue until the position is filled. For additional information please see http://www.cis.ohio-state.edu.

From calls at bbsonline.org Thu Nov 20 10:57:39 2003
From: calls at bbsonline.org (Behavioral & Brain Sciences)
Date: Thu, 20 Nov 2003 15:57:39 +0000
Subject: Walker/Sleep and memory formation: BBS Call for Commentators
Message-ID:

Below is a link to the forthcoming BBS target article

A refined model of sleep and the time course of memory formation
by Matthew P. Walker
http://www.bbsonline.org/Preprints/Walker-12042002/Referees/

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org

The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you.
All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html (please note that this list is being updated) If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= ** IMPORTANT ** ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. (Please note that we only request expertise information in order to simplify the selection process.) Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable at the URL that follows the abstract and keywords below. ======================================================================= ======================================================================= A refined model of sleep and the time course of memory formation Matthew P. Walker Department of Psychiatry Harvard Medical School ABSTRACT: Research in the neurosciences continues to provide evidence that sleep plays a role in the processes of learning and memory. 
There is less of a consensus, however, regarding the precise stage of memory development where sleep is considered a requirement, simply favorable, or not important. This article begins with an overview of recent studies of sleep and learning, predominantly in the procedural memory domain, which is then measured against our current understanding of the mechanisms that govern memory formation. Based on these considerations, a new neurocognitive framework of procedural learning is offered, consisting firstly of acquisition, followed by two specific stages of consolidation, one involving a process of stabilization, the other involving enhancement, whereby delayed learning occurs. Psychophysiological evidence indicates that initial acquisition does not fundamentally rely on sleep. This also appears to be true for the stabilization phase of consolidation, with durable representations, resistant to interference, clearly developing in a successful manner during time awake (or just time per se). In contrast, the consolidation stage resulting in additional/enhanced learning in the absence of further rehearsal does appear to rely on the process of sleep, with evidence for specific sleep-stage dependencies across the procedural domain. Evaluations at a molecular, cellular and systems level currently offer several sleep-specific candidates that could play a role in sleep-dependent learning. These include the up-regulation of select plasticity-associated genes, increased protein synthesis, changes in neurotransmitter concentration, and specific electrical events in neuronal networks that modulate synaptic potentiation.
KEYWORDS: Consolidation; Enhancement; Learning; Memory; Plasticity; Sleep; Stabilization http://www.bbsonline.org/Preprints/Walker-12042002/Referees/ ======================================================================= ======================================================================= *** SUPPLEMENTARY ANNOUNCEMENT *** (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. 
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

Jeffrey Gray - Editor
Paul Bloom - Editor
Barbara Finlay - Editor
Behavioral and Brain Sciences
bbs at bbsonline.org
http://www.bbsonline.org
-------------------------------------------------------------------

From cns at cnsorg.org Thu Nov 20 20:38:46 2003
From: cns at cnsorg.org (CNS - Organization for Computational Neurosciences)
Date: Thu, 20 Nov 2003 17:38:46 -0800
Subject: Call for Workshop proposals CNS*2004
Message-ID: <1069378726.3fbd6ca6af480@webmail.mydomain.com>

The CNS*2004 committee calls for proposals for workshops, to be held on the final two days of CNS*2004, 21 and 22 July, at the Radisson Plaza Lord hotel in Baltimore, MD. Information can be found at www.cnsorg.org

Workshops provide an informal forum within the CNS meeting for focused discussion of recent or speculative research, novel techniques, and open issues in computational neuroscience. Topics exploring theoretical interfaces to recent experimental work are particularly encouraged. Several formats are possible: discussion workshops (formal or informal), tutorials, and mini-symposia, or a combination of these formats. Discussion workshops, whether formal (i.e., held in a conference room with projection and writing media) or informal (held elsewhere), should stress interactive and open discussions in preference to sequential presentations. Tutorials and mini-symposia provide a format for a focused exploration of particular issues or techniques within a more traditional presentation framework; ample time should be reserved for questions and general discussion. The organizers of a workshop should endeavor to bring together as broad a range of pertinent viewpoints as possible. The length of a workshop may range from one (half-day) session to the full two days. Single-day workshops have been particularly successful in the past.
To propose a workshop, please submit the following information to the workshop coordinator at the address below:

1. the name(s) of the organizer(s)
2. the title of the workshop
3. a description of the subject matter, indicating clearly the range of topics to be discussed
4. a 200 word abstract of the subject matter
5. the format(s) of the workshop; if a discussion session, please specify whether you would like it to be held in a conference room or in a less formal setting
6. for tutorials and mini-symposia, a provisional list of speakers
7. the number of sessions for which the workshop is to run

Please submit proposals as early as possible by email to workshops at cnsorg.org or by post to:

Adrienne Fairhall
Department of Physiology and Biophysics
University of Washington
Box 357290
Seattle WA 98195-7290

The descriptions of accepted workshops will appear on the CNS*2004 web site as they are received. Attendees are encouraged to check this list, and to contact the organizers of any workshops in which they are interested in participating.

********************************************
Organization for Computational Neurosciences
********************************************

From terry at salk.edu Thu Nov 20 19:56:30 2003
From: terry at salk.edu (Terry Sejnowski)
Date: Thu, 20 Nov 2003 16:56:30 -0800 (PST)
Subject: NEURAL COMPUTATION 15:12
In-Reply-To: <200309271643.h8RGhIA58569@purkinje.salk.edu>
Message-ID: <200311210056.hAL0uUX94464@purkinje.salk.edu>

Neural Computation - Contents - Volume 15, Number 12 - December 1, 2003

REVIEW

General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
Jiri Sima and Pekka Orponen

LETTERS

Dynamics of Deterministic and Stochastic Paired Excitatory-Inhibitory Delayed Feedback
Carlo R. Laing and Andre Longtin

Differences in Spiking Patterns Among Cortical Neurons
Shigeru Shinomoto, Keisetsu Shima and Jun Tanji

Hybrid Integrate-and-Fire Model of a Bursting Neuron
Barbara J. Breen, William C.
Gerken and Robert J. Butera, Jr.

Lateral Neural Model of Binocular Rivalry
Lars Stollenwerk and Mathias Bode

Suprathreshold Intrinsic Dynamics of the Human Visual System
Gopathy Purushothaman, Haluk Ogmen and Harold E. Bedell

Closed-Form Expressions of Some Stochastic Adapting Equations for Nonlinear Adaptive Activation Function Neurons
Simone Fiori

Selecting Informative Data for Developing Peptide-MHC Binding Predictors Using a Query By Committee Approach
Jens Kaae Christensen, Kasper Lamberth, Morten Nielsen, Claus Lundegaard, Peder Worning, Sanne Lise Lauemoller, Soren Buus, Soren Brunak and Ole Lund

-----

ON-LINE - http://neco.mitpress.org/

SUBSCRIPTIONS - 2003 - VOLUME 15 - 12 ISSUES

                  USA    Canada*   Other Countries
Student/Retired   $60    $64.20    $108
Individual        $95    $101.65   $143
Institution       $590   $631.30   $638
* includes 7% GST

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902.
Tel: (617) 253-2889  FAX: (617) 577-1545
journals-orders at mit.edu

-----

From baldassarre at www.ip.rm.cnr.it Fri Nov 21 11:38:57 2003
From: baldassarre at www.ip.rm.cnr.it (Gianluca Baldassarre)
Date: Fri, 21 Nov 2003 17:38:57 +0100
Subject: PhD thesis and papers on reinforcement-learning based neural planner and basal ganglia
Message-ID:

Dear connectionists,

you can find my PhD thesis, and downloadable preprints of some papers related to it, at the web-page:
http://gral.ip.rm.cnr.it/baldassarre/publications/publications.html

The thesis and papers are about a neural-network planner based on reinforcement learning (it builds on Sutton's Dyna-PI architectures (1990)). Some of the papers show the biological inspiration of the model and its possible relations with the brain (basal ganglia). Below you will find:
- the list of the titles of the thesis and the papers
- the same list with abstracts
- the index of the thesis.

Best regards,
Gianluca Baldassarre

|.CS...|.......|...............|..|......US.|||.|||||.||.||||..|...|....
Gianluca Baldassarre, Ph.D.
Institute of Cognitive Sciences and Technologies
National Research Council of Italy (ISTC-CNR)
Viale Marx 15, 00137, Rome, Italy
E-mail: baldassarre at ip.rm.cnr.it
Web: http://gral.ip.rm.cnr.it/baldassarre
Tel: ++39-06-86090227
Fax: ++39-06-824737
..CS.|||.||.|||.||..|.......|........|...US.|.|....||..|..|......|......

**************************************************************************
TITLES
**************************************************************************

Baldassarre G. (2002). Planning with Neural Networks and Reinforcement Learning. PhD Thesis. Colchester, UK: Computer Science Department, University of Essex.

Baldassarre G. (2001). Coarse Planning for Landmark Navigation in a Neural-Network Reinforcement Learning Robot. Proceedings of the International Conference on Intelligent Robots and Systems (IROS-2001). IEEE.

Baldassarre G. (2001). A Planning Modular Neural-Network Robot for Asynchronous Multi-Goal Navigation Tasks. In Arras K.O., Baerveldt A.-J., Balkenius C., Burgard W., Siegwart R. (eds.), Proceedings of the 2001 Fourth European Workshop on Advanced Mobile Robots - EUROBOT-2001, pp. 223-230. Lund, Sweden: Lund University Cognitive Studies.

Baldassarre G. (2003). Forward and Bidirectional Planning Based on Reinforcement Learning and Neural Networks in a Simulated Robot. In Butz M., Sigaud O., Gérard P. (eds.), Adaptive Behaviour in Anticipatory Learning Systems, pp. 179-200. Berlin: Springer Verlag.

Papers describing the biological inspiration of the model and its possible relations with the brain (not in the thesis):

Baldassarre G. (2002). A modular neural-network model of the basal ganglia's role in learning and selecting motor behaviours. Journal of Cognitive Systems Research. Vol. 3, pp. 5-13.

Baldassarre G. (2002). A biologically plausible model of human planning based on neural networks and Dyna-PI models. In Butz M., Sigaud O., Gérard P.
(eds.), Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems - ABiALS-2002 (held within SAB-2002), pp. 40-60. Wurzburg: University of Wurzburg.

**************************************************************************
TITLES WITH ABSTRACTS
**************************************************************************

Baldassarre G. (2002). Planning with Neural Networks and Reinforcement Learning. PhD Thesis. Colchester, UK: Computer Science Department, University of Essex.

Abstract
This thesis presents the design, implementation and investigation of some predictive-planning controllers built with neural networks and inspired by Dyna-PI architectures (Sutton, 1990). Dyna-PI architectures are planning systems based on actor-critic reinforcement learning methods and a model of the environment. The controllers are tested with a simulated robot that solves a stochastic path-finding landmark navigation task. A critical review of ideas and models proposed by the literature on problem solving, planning, reinforcement learning, and neural networks precedes the presentation of the controllers. The review isolates ideas relevant to the design of planners based on neural networks. A "neural forward planner" is implemented that, unlike the Dyna-PI architectures, is taskable in a strong sense. This planner is capable of building a "partial policy" focused around efficient start-goal paths, and is capable of deciding to re-plan if "unexpected" states are encountered. Planning iteratively generates "chains of predictions" starting from the current state and using the model of the environment. This model is made up of some neural networks trained to predict the next input when an action is executed. A "neural bidirectional planner" that generates trajectories backward from the goal and forward from the current state is also implemented.
This planner exploits the knowledge (image) of the goal, further focuses planning around efficient start-goal paths, and produces a quicker updating of evaluations. In several experiments the generalisation capacity of neural networks proves important for learning, but it also causes problems of interference. To deal with these problems a modular neural architecture is implemented that uses a mixture of experts network for the critic, and a simple hierarchical modular network for the actor. The research also implements a simple form of neural abstract planning named "coarse planning", and investigates its strengths in terms of exploration and the updating of evaluations. Some experiments with coarse planning and with other controllers suggest that discounted reinforcement learning may have problems dealing with long-lasting tasks.

Baldassarre G. (2001). Coarse Planning for Landmark Navigation in a Neural-Network Reinforcement Learning Robot. Proceedings of the International Conference on Intelligent Robots and Systems (IROS-2001). IEEE.

Abstract
Is it possible to plan at a coarse level and act at a fine level with a neural-network (NN) reinforcement-learning (RL) planner? This work presents a NN planner, used to control a simulated robot in a stochastic landmark-navigation problem, which plans at an abstract level. The controller has both reactive components, based on actor-critic RL, and planning components inspired by the Dyna-PI architecture (this roughly corresponds to RL plus a model of the environment). Coarse planning is based on macro-actions, defined as sequences of identical primitive actions. It updates the evaluations and the action policy while generating simulated experience at the macro level with the model of the environment (a NN trained at the macro level). The simulations show how the controller works.
They also show the advantages of using a discount coefficient tuned to the level of planning coarseness, and suggest that discounted RL has problems dealing with long periods of time.

Baldassarre G. (2001). A Planning Modular Neural-Network Robot for Asynchronous Multi-Goal Navigation Tasks. In Arras K.O., Baerveldt A.-J., Balkenius C., Burgard W., Siegwart R. (eds.), Proceedings of the 2001 Fourth European Workshop on Advanced Mobile Robots - EUROBOT-2001, pp. 223-230. Lund, Sweden: Lund University Cognitive Studies.

Abstract
This paper focuses on two planning neural-network controllers, a "forward planner" and a "bidirectional planner". These have been developed within the framework of Sutton's Dyna-PI architectures (planning within reinforcement learning) and have already been presented in previous papers. The novelty of this paper is that the architecture of these planners is made modular in some of its components in order to deal with catastrophic interference. The controllers are tested through a simulated robot engaged in an asynchronous multi-goal path-planning problem that should exacerbate the interference problems. The results show that: (a) the modular planners can cope with multi-goal problems, allowing generalisation while avoiding interference; (b) when dealing with multi-goal problems the planners keep the advantages shown previously for one-goal problems vs. plain reinforcement learning; (c) the superiority of the bidirectional planner vs. the forward planner is confirmed for the multi-goal task.

Baldassarre G. (2003). Forward and Bidirectional Planning Based on Reinforcement Learning and Neural Networks in a Simulated Robot. In Butz M., Sigaud O., Gérard P. (eds.), Adaptive Behaviour in Anticipatory Learning Systems, pp. 179-200. Berlin: Springer Verlag.

Abstract
Building intelligent systems that are capable of learning, acting reactively and planning actions before their execution is a major goal of artificial intelligence.
This paper presents two reactive and planning systems that contain important novelties with respect to previous neural-network planners and reinforcement-learning based planners: (a) the introduction of a new component (a "matcher") allows both planners to execute genuine taskable planning (while previous reinforcement-learning based models have used planning only for speeding up learning); (b) the planners show for the first time that trained neural-network models of the world can generate long prediction chains that have an interesting robustness with regard to noise; (c) two novel algorithms that generate chains of predictions in order to plan, and control the flows of information between the systems' different neural components, are presented; (d) one of the planners uses backward "predictions" to exploit the knowledge of the pursued goal; (e) the two systems presented nicely integrate reactive behavior and planning on the basis of a measure of "confidence" in action. The soundness and potential of the two reactive and planning systems are tested and compared with a simulated robot engaged in a stochastic path-finding task. The paper also presents an extensive literature review on the relevant issues.

Baldassarre G. (2002). A modular neural-network model of the basal ganglia's role in learning and selecting motor behaviours. Journal of Cognitive Systems Research. Vol. 3, pp. 5-13.

Abstract
This work presents a modular neural-network model (based on reinforcement-learning actor-critic methods) that tries to capture some of the most relevant known aspects of the role that the basal ganglia play in learning and selecting motor behavior related to different goals. In particular, some simulations with the model show that the basal ganglia select "chunks" of behaviour whose "details" are specified by direct sensory-motor pathways, and how emergent modularity can help to deal with tasks with asynchronous multiple goals.
A "top-down" approach is adopted, beginning with the analysis of the adaptive interaction of a (simulated) organism with the environment, and its capacity to learn. Then an attempt is made to implement these functions with neural architectures and mechanisms that have an empirical neuroanatomical and neurophysiological foundation.

Baldassarre G. (2002). A biologically plausible model of human planning based on neural networks and Dyna-PI models. In Butz M., Sigaud O., Gérard P. (eds.), Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems - ABiALS-2002 (held within SAB-2002), pp. 40-60. Wurzburg: University of Wurzburg.

Abstract
Understanding the neural structures and physiological mechanisms underlying human planning is a difficult challenge. In fact, planning is the product of a sophisticated network of different brain components that interact in complex ways. However, some data produced by brain imaging, neuroanatomical and neurophysiological research are now beginning to make it possible to draw a first approximate picture of this network. This paper proposes such a picture in the form of a neural-network computational model inspired by the Dyna-PI models (Sutton, 1990). The model is based on the actor-critic reinforcement learning model, which has been shown to be a good representation of the anatomy and functioning of the basal ganglia. It is also based on a "predictor", a network capable of predicting the sensory consequences of actions, which may correspond to the lateral cerebellum-prefrontal and rostral premotor cortex pathways. All these neural structures have been shown to be involved in human planning by functional brain-imaging research. The model has been tested with an animat engaged in a landmark navigation task.
In accordance with the brain imaging data, the simulations show that with repeated practice performing the task, the complex planning processes, and the activity of the neural structures underlying them, fade away and leave the routine control of action to lower-level reactive components. The simulations also show the biological advantages offered by planning and some interesting properties of the processing of ?mental images?, based on neural networks, during planning. On the machine learning side, the model presented extends the Dyna-PI models with two important novelties: a ?matcher? for the self-generation of a reward signal in correspondence to any possible goal, and an algorithm that focuses the exploration of the model of the world around important states and allows the animat to decide when planning and when acting on the basis of a measure of its ?confidence?. The paper also offers a wide collection of references on the addressed issues. **************************************************************************** ****************** TITLES WITH ABSTRACTS **************************************************************************** ****************** 1 INTRODUCTION 12 1.1 The Objective of the Thesis 13 1.1.1 Why Neural-Network Planning Controllers? 13 1.1.2 Why a Robot and a Noisy Environment? Why a simulated robot? 15 1.1.3 Reinforcement Learning, Dynamic Programming and Dyna Architectures 16 1.1.4 Ideas from Problem Solving and Logical Planning 18 1.1.5 Why Dyna-PI Architectures (Reinforcement Learning + Model of the Environment)? 
19
1.1.6 Stochastic Path-Finding Landmark Navigation Problems 20
1.2 Overview of the Controllers and Outline of the Thesis 22
1.2.1 Overview of the Controllers Implemented in this Research 22
1.2.2 Outline of the Thesis and Problems Addressed Chapter by Chapter 23
PART 1: CRITICAL LITERATURE REVIEW AND ANALYSIS OF CONCEPTS USEFUL FOR NEURAL PLANNING
2 PROBLEM SOLVING, SEARCH, AND STRIPS PLANNING 28
2.1 Planning as a Searching Process: Blind-Search Strategies 28
2.1.1 Critical Observations 29
2.2 Planning as a Searching Process: Heuristic-Search Strategies 29
2.2.1 Critical Observations 29
2.3 STRIPS Planning: Partial Order Planner 30
2.3.1 Situation Space and Plan Space 30
2.3.2 Partial Order Planner 31
2.3.3 Critical Observations 32
2.4 STRIPS Planning: Conditional Planning, Execution Monitoring, Abstract Planning 32
2.4.1 Conditional Planning 33
2.4.2 Execution Monitoring and Replanning 33
2.4.3 Abstract Planning 34
2.4.4 Critical Observations 34
2.5 STRIPS Planning: Probabilistic and Reactive Planning 34
2.5.1 BURIDAN Planning Algorithm 35
2.5.2 Reactive Planning and Universal Plans 35
2.5.3 Decision Theoretic Planning 35
2.5.4 Maes' Planner 37
2.5.5 Critical Observations 37
2.6 Navigation and Motion Planning Through Configuration Spaces 38
3 MARKOV DECISION PROCESSES AND DYNAMIC PROGRAMMING 40
3.1 The Problem Domain Considered Here: Stochastic Path-Finding Problems 40
3.2 Critical Observations on Dynamic Programming and Heuristic Search 42
3.3 Dyna Framework and Dyna-PI Architecture 43
3.3.1 Critical Observations 44
3.4 Prioritised Sweeping and Trajectory Sampling 45
3.4.1 Critical Observations 46
4 NEURAL NETWORKS 47
4.1 What is a Neural Network?
47
4.1.1 Critical Observations 48
4.2 Critical Observations: Feed-Forward Networks and Mixture of Experts Networks 48
4.3 Neural Networks for Prediction Learning 50
4.3.1 Critical Observations 51
4.4 Properties of Neural Networks and Planning 51
4.4.1 Generalisation, Noise Tolerance, and Catastrophic Interference 51
4.4.2 Prototype Extraction 52
4.4.3 Learning 53
4.5 Planning with Neural Networks 53
4.5.1 Activation Diffusion Planning 54
4.5.2 Neural Planners Based on Gradient Descent Methods 56
5 UNIFYING CONCEPTS 58
5.1 Learning, Planning, Prediction and Taskability 58
5.1.1 Learning of Behaviour 59
5.1.2 Taskable Planning 60
5.1.3 Taskability: Reactive and Planning Controllers 61
5.1.4 Taskability and Dyna-PI 63
5.2 A Unified View of Heuristic Search, Dynamic Programming, and Activation Diffusion 63
5.3 Policies and Plans 65
PART 2: DESIGNING AND TESTING NEURAL PLANNERS
6 NEURAL ACTOR-CRITIC REINFORCEMENT LEARNING 69
6.1 Introduction: Basic Neural Actor-Critic Controller and Simulations' Scenarios 69
6.2 Scenarios of Simulations and the Simulated Robot 70
6.3 Architectures and Algorithms 72
6.4 Results and Interpretations 76
6.4.1 Functioning of the Matcher 76
6.4.2 Performance of the Controller: The Critic and the Actor 77
6.4.3 Aliasing Problem and Parameters' Exploration 81
6.4.4 Parameter Exploration 83
6.4.5 Why the Contrasts? Why no more than the Contrasts?
84
6.5 Temporal Limitations of Discounted Reinforcement Learning 85
6.6 Conclusion 89
7 REINFORCEMENT LEARNING, MULTIPLE GOALS, MODULARITY 91
7.1 Introduction 91
7.2 Scenario of Simulations: An Asynchronous Multi-Goal Task 92
7.3 Architectures and Algorithms: Monolithic and Modular Neural-Networks 93
7.4 Results and Interpretation 96
7.5 Limitations of the Controllers 100
7.6 Conclusion 100
8 THE NEURAL FORWARD PLANNER 101
8.1 Introduction: Taskability, Planning and Acting, Focussing 101
8.2 Scenario of the Simulations 103
8.3 Architectures and Algorithms: Reactive and Planning Components 104
8.3.1 The Reactive Components of the Architecture 104
8.3.2 The Planning Components of the Architecture 105
8.4 Results and Interpretation 108
8.4.1 Taskable Planning vs. Reactive Behaviour 108
8.4.2 Focussing, Partial Policies and Replanning 111
8.4.3 Neural Networks for Prediction: "True" Images as Attractors? 112
8.5 Limitations of the Neural Forward Planner 115
8.6 Conclusion 115
9 THE NEURAL BIDIRECTIONAL PLANNER 117
9.1 Introduction: More Efficient Exploration 117
9.2 Scenario of Simulations 118
9.3 Architectures and Algorithms 119
9.3.1 The Reactive Components of the Architecture 119
9.3.2 The Planning Components of the Architecture: Forward Planning 119
9.3.3 The Planning Components of the Architecture: Bidirectional Planning 121
9.4 Results and Interpretation 123
9.4.1 Common Strengths of the Forward-Planner and the Bidirectional Planner 123
9.4.2 The Forward Planner Versus the Bidirectional Planner 124
9.5 Limitations of the Neural Bidirectional Planner 126
9.6 A New "Goal Oriented Forward Planner"
(Not Implemented) 126
9.7 Conclusion 127
10 NEURAL NETWORK PLANNERS AND MULTI-GOAL TASKS 128
10.1 Introduction: Neural Planners, Interference and Modularity 128
10.2 Scenario: Again the Asynchronous Multi-Goal Task 129
10.3 Architectures and Algorithms 129
10.3.1 Modular Reactive Components 129
10.3.2 Neural Modular Forward Planner 130
10.3.3 Neural Modular Bidirectional Planner 131
10.4 Results and Interpretation 132
10.4.1 Modularity and Interference 132
10.4.2 Taskability 134
10.4.3 From Planning To Reaction 134
10.4.4 The Forward Planner Versus the Bidirectional Planner 135
10.5 Limitations of the Modular Planners 137
10.6 Conclusion 137
11 COARSE PLANNING 138
11.1 Introduction: Abstraction, Macro-actions and Coarse Planning 138
11.2 Scenario of Simulations: A Simplified Navigation Task 139
11.3 Architectures and Algorithms: Coarse Planning with Macro-actions 140
11.4 Results and Interpretation 142
11.4.1 Reinforcement Learning at a Coarse Level 142
11.4.2 The Advantages of Coarse Planning 143
11.4.3 Predicting at a Coarse Level 145
11.4.4 Coarse Planning, Discount Coefficient and Time Limitations of Reinforcement Learning 146
11.5 Limitations of the Neural Coarse Planner 149
11.6 Conclusion 150
12 CONCLUSION AND FUTURE WORK 152
12.1 Conclusion: What Have We Learned from This Research? 152
12.1.1 Ideas for Neural-Network Reinforcement-Learning Planning 152
12.1.2 Landmark Navigation, Reinforcement Learning and Neural Networks 153
12.1.3 A New Neural Forward Planner 153
12.1.4 A New Neural Bidirectional Planner 155
12.1.5 Common Structure, Interference, and Modular Networks 156
12.1.6 Coarse Planning and Time Limits of Reinforcement Learning 157
12.2 A List of the Major "Usable"
Insights Delivered 158
12.3 Future Work 159
13 APPENDICES 162
13.1 Blind-Search and Heuristic-Search Strategies 162
13.1.1 Blind-Search Strategies 162
13.1.2 Heuristic-Search Strategies 163
13.2 Markov Decision Processes, Reinforcement Learning and Dynamic Programming 165
13.2.1 Markov Decision Processes 165
13.2.2 Markov Property and Partially Observable Markov Decision Problems 167
13.2.3 Reinforcement Learning 168
13.2.4 Approximating the State or State-Action Evaluations 168
13.2.5 Searching the Policy with the Q* and Q^π Evaluations 170
13.2.6 Actor-Critic Model 171
13.2.7 Macro-actions and Options 172
13.2.8 Function Approximation and Reinforcement Learning 174
13.2.9 Dynamic Programming 174
13.2.10 Asynchronous Dynamic Programming 176
13.2.11 Trial-Based Real-Time Dynamic Programming and Heuristic Search 176
13.3 Feed-Forward Architectures and Mixture of Experts Networks 178
13.3.1 Feed-Forward Architectures and Error Backpropagation Algorithm 178
13.3.2 Mixture of Experts Neural Networks 179
13.3.3 The Generalisation Property of Neural Networks 181
14 REFERENCES 182
14.1 Candidate's Publications During the PhD Research 182
14.2 References 183

****************************************************************************

From Hualou.Liang at uth.tmc.edu Fri Nov 21 17:39:58 2003 From: Hualou.Liang at uth.tmc.edu (Hualou Liang) Date: Fri, 21 Nov 2003 16:39:58 -0600 Subject: POSTDOCTORAL POSITION AVAILABLE Message-ID: COMPUTATIONAL COGNITIVE NEUROSCIENCE POSTDOCTORAL POSITION AVAILABLE University of Texas Health Science Center at Houston A postdoctoral position is available starting Jan 1 2004 in my laboratory (http://www.sahs.uth.tmc.edu/hliang/) at University of Texas Health Science Center at Houston to participate in an ongoing research project studying the cortical dynamics of visual attention. The project involves the application of multivariate signal analysis techniques to cortical event-related potentials.
Our current facilities include a 90-node (2 CPUs per node) Linux cluster and a 128-channel EEG system dedicated to research activities. The ideal candidate should have a Ph.D. in a relevant discipline with substantial mathematical/computational experience in neurophysiological signal processing and multivariate statistics. Programming skills in C and Matlab are essential. Interested individuals should send a curriculum vitae, representative publications, and the names and e-mail addresses of three references to Hualou Liang (hualou.liang at uth.tmc.edu). -------------------------------- Hualou Liang, Ph.D. Assistant Professor The University of Texas at Houston 7000 Fannin, Suite 600 Houston, TX 77030 From ken at phy.ucsf.edu Mon Nov 24 12:47:46 2003 From: ken at phy.ucsf.edu (Ken Miller) Date: Mon, 24 Nov 2003 09:47:46 -0800 Subject: Two papers available: V1 circuitry and multiplicative gain modulation Message-ID: <16322.17474.404075.634061@coltrane.ucsf.edu> Reprints of the following two papers are available either from http://www.keck.ucsf.edu/~ken (Click on 'publications', then on 'Models of Neuronal Integration and Circuitry') or through the specific links below. ------------------------------------------- Lauritzen, T.Z. and K.D. Miller (2003). "Different roles for simple- and complex-cell inhibition in V1". Journal of Neuroscience 23, 10201-10213. ftp://ftp.keck.ucsf.edu/pub/ken/lauritzen_miller03.pdf Abstract: Previously, we proposed a model of the circuitry underlying simple-cell responses in cat primary visual cortex (V1) layer 4. We argued that the ordered arrangement of lateral geniculate nucleus inputs to a simple cell must be supplemented by a component of feedforward inhibition that is untuned for orientation and responds to high temporal frequencies to explain the sharp contrast-invariant orientation tuning and low-pass temporal frequency tuning of simple cells.
The temporal tuning also requires a significant NMDA component in geniculocortical synapses. Recent experiments have revealed cat V1 layer 4 inhibitory neurons with two distinct types of receptive fields (RFs): complex RFs with mixed ON/OFF responses lacking in orientation tuning, and simple RFs with normal, sharp orientation tuning (although some respond to all orientations). We show that complex inhibitory neurons can provide the inhibition needed to explain simple-cell response properties. Given this complex-cell inhibition, antiphase or "push-pull" inhibition from tuned simple inhibitory neurons acts to sharpen spatial frequency tuning, lower responses to low temporal frequency stimuli, and increase the stability of cortical activity. --------------------------------------- Murphy, B.K. and K.D. Miller (2003). "Multiplicative Gain Changes Are Induced by Excitation or Inhibition Alone". Journal of Neuroscience 23, 10040-10051. ftp://ftp.keck.ucsf.edu/pub/ken/murphy_miller03.pdf Abstract: We model the effects of excitation and inhibition on the gain of cortical neurons. Previous theoretical work has concluded that excitation or inhibition alone will not cause a multiplicative gain change in the curve of firing rate versus input current. However, such gain changes in vivo are measured in the curve of firing rate versus stimulus parameter. We find that when this curve is considered, and when the nonlinear relationships between stimulus parameter and input current and between input current and firing rate in vivo are taken into account, then simple excitation or inhibition alone can induce a multiplicative gain change. In particular, the power-law relationship between voltage and firing rate that is induced by neuronal noise is critical to this result. This suggests an unexpectedly simple mechanism that may underlie the gain modulations commonly observed in cortex.
More generally, it suggests that a smaller input will multiplicatively modulate the gain of a larger one when both converge on a common cortical target. Ken Kenneth D. Miller telephone: (415) 476-8217 Professor fax: (415) 476-4929 Dept. of Physiology, UCSF internet: ken at phy.ucsf.edu 513 Parnassus www: http://www.keck.ucsf.edu/~ken San Francisco, CA 94143-0444 From ken at phy.ucsf.edu Mon Nov 24 12:33:40 2003 From: ken at phy.ucsf.edu (Ken Miller) Date: Mon, 24 Nov 2003 09:33:40 -0800 Subject: UCSF Postdoctoral/Graduate Fellowships in Theoretical Neurobiology Message-ID: <16322.16628.911471.189310@coltrane.ucsf.edu> FULL INFO: http://www.sloan.ucsf.edu/sloan/sloan-info.html PLEASE DO NOT USE 'REPLY'; FOR MORE INFO USE ABOVE WEB SITE OR CONTACT ADDRESSES GIVEN BELOW The Sloan-Swartz Center for Theoretical Neurobiology at UCSF solicits applications for pre- and post-doctoral fellowships, with the goal of bringing theoretical approaches to bear on neuroscience. Applicants should have a strong background and education in a quantitative field such as mathematics, theoretical or experimental physics, or computer science, and commitment to a future research career in neuroscience. Prior biological or neuroscience training is not required. The Sloan-Swartz Center offers opportunities to combine theoretical and experimental approaches to understanding the operation of the intact brain. Young scientists with strong theoretical backgrounds will receive scientific training in experimental approaches to understanding the operation of the intact brain. They will learn to integrate their theoretical abilities with these experimental approaches to form a mature research program in integrative neuroscience. The research undertaken by the trainees may be theoretical, experimental, or a combination. 
Resident Faculty of the Sloan-Swartz Center and their research interests include: Michael Brainard: Mechanisms underlying vocal learning in the songbird; sensorimotor adaptation to alteration of performance-based feedback Allison Doupe: Development of song recognition and production in songbirds Loren Frank: The relationship between behavior and neural activity in the hippocampus and anatomically related cortical areas Stephen Lisberger: Learning and memory in a simple motor reflex, the vestibulo-ocular reflex, and visual guidance of smooth pursuit eye movements by the cerebral cortex Michael Merzenich: Experience-dependent plasticity underlying learning in the adult cerebral cortex, and the neurological bases of learning disabilities in children Kenneth Miller: Circuitry of the cerebral cortex: its structure, self-organization, and computational function (primarily using cat primary visual cortex as a model system) Philip Sabes: Sensorimotor coordination, adaptation and development of spatially guided behaviors, experience dependent cortical plasticity. Christoph Schreiner: Cortical mechanisms of perception of complex sounds such as speech in adults, and plasticity of speech recognition in children and adults Michael Stryker: Mechanisms that guide development of the visual cortex There are also a number of visiting faculty, including Larry Abbott, Brandeis University; Bill Bialek, Princeton University; Sebastian Seung, MIT; David Sparks, Baylor University; Steve Zucker, Yale University. TO APPLY for a POSTDOCTORAL position, please send a curriculum vitae, a statement of previous research and research goals, up to three relevant publications, and have two letters of recommendation sent to us. The application deadline is January 30, 2004. Send applications to: Sloan-Swartz Center 2004 Admissions Sloan-Swartz Center for Theoretical Neurobiology at UCSF Department of Physiology University of California 513 Parnassus Ave. 
San Francisco, CA 94143-0444 PRE-DOCTORAL applicants with strong theoretical training may seek admission into the UCSF Neuroscience Graduate Program as a first-year student. Applicants seeking such admission must apply by Jan. 3, 2004 to be considered for fall, 2004 admission. Application materials for the UCSF Neuroscience Program may be obtained from http://www.ucsf.edu/neurosc/neuro_admissions.html#application or from Pat Vietch Neuroscience Graduate Program Department of Physiology University of California San Francisco San Francisco, CA 94143-0444 neuroscience at phy.ucsf.edu Be sure to include your surface-mail address. The procedure is: make a normal application to the UCSF Neuroscience program; but also alert the Sloan-Swartz Center of your application, by writing to sloan-info at phy.ucsf.edu. If you need more information: -- Consult the Sloan-Swartz Center WWW Home Page: http://www.sloan.ucsf.edu/sloan -- Send e-mail to sloan-info at phy.ucsf.edu -- See also the home page for the W.M. Keck Foundation Center for Integrative Neuroscience, in which the Sloan-Swartz Center is housed: http://www.keck.ucsf.edu/ From D.Palmer-Brown at lmu.ac.uk Mon Nov 24 10:29:34 2003 From: D.Palmer-Brown at lmu.ac.uk (Palmer-Brown, Dominic [IES]) Date: Mon, 24 Nov 2003 15:29:34 -0000 Subject: No subject Message-ID: Dear Connectionists, I would be grateful if you would circulate the following information to potential postgraduates. PhD Research Bursary in Neurocomputing. Within the School of Computing at Leeds Metropolitan University we are investigating neural network algorithms for adaptive function networks applied to data analysis and natural language processing. The project is likely to involve collaboration with psychologists and environmental scientists, in addition to computer scientists. 
The ideal candidate would possess a Masters or very good Bachelors degree in a discipline that includes a significant level of computing, and would be able to demonstrate a keen interest in neural networks, cognitive science, and programming, together with the determination to successfully complete three years of PhD study. The bursary is worth £9000 in the first year, in addition to EU/Home fees. The formal advert is on jobs.ac.uk at http://jobs.ac.uk/jobfiles/PI512.html and it includes details of how to apply. I'm happy to respond to any informal enquiries. Best wishes, Dominic **************************************************************************** Dominic Palmer-Brown PhD d.palmer-brown at lmu.ac.uk , d.palmer-brown at leedsmet.ac.uk +44 (0)113 2837594 Professor of Neurocomputing Leader of the Computational Intelligence Research Group http://www.lmu.ac.uk/ies/comp/research/cig/ School of Computing Faculty of Informatics and Creative Technologies Leeds Metropolitan University **************************************************************************** From nando at cs.ubc.ca Tue Nov 25 13:26:18 2003 From: nando at cs.ubc.ca (Nando de Freitas) Date: Tue, 25 Nov 2003 10:26:18 -0800 Subject: Machine Learning Jobs at UBC Message-ID: <3FC39ECA.3020001@cs.ubc.ca> Dear colleagues, The Department of Computer Science at the University of British Columbia is looking for outstanding candidates for a faculty position in machine learning and computational statistics (broadly construed). See http://www.cs.ubc.ca/career/Faculty/index.html UBC has a dynamic department in Vancouver, which is a great place to live (and near NIPS!). For more details on the department see http://www.cs.ubc.ca/ Note that this position is in addition to a corresponding position in the Department of Statistics. See: http://www.stat.ubc.ca/jobs/CRC_Sept_2003_approved.html If you have any questions about this position, don't hesitate to ask.
Nando From paul at santafe.edu Tue Nov 25 18:06:58 2003 From: paul at santafe.edu (Paul Brault) Date: Tue, 25 Nov 2003 16:06:58 -0700 Subject: SFI Complex Systems Summer Schools Message-ID: ANNOUNCING THE SANTA FE INSTITUTE'S 2004 COMPLEX SYSTEMS SUMMER SCHOOLS Santa Fe School: June 7 - July 2, 2004 in Santa Fe, New Mexico, USA Director: Melanie Mitchell, Oregon Health & Science University and Santa Fe Institute. Held at the campus of St. John's College. Administered by the Santa Fe Institute. China School: July 5 - 30, 2004 in Qingdao, Shandong Province, China. Co-Directors: Douglas Erwin, Smithsonian Institution and Santa Fe Institute; John Olsen, University of Arizona. Held at the campus of Qingdao University. Administered by Qingdao University and the Santa Fe Institute. GENERAL DESCRIPTION: An intensive four-week introduction to complex behavior in mathematical, physical, living, and social systems for graduate students and postdoctoral fellows in the physical, natural and social sciences. Open to students from all countries. Students are expected to attend one school for the full four weeks. Week one will consist of an intensive series of lectures and laboratories introducing the foundational ideas and tools of complex systems research. Topics will include nonlinear dynamics and pattern formation, information theory and computation theory, adaptation and evolution, network structure and dynamics, computer modeling tools, and specific applications of these core topics to various disciplines. Weeks two and three will consist of lectures and panel discussions on current research in complex systems. * Santa Fe: Lecture topics include cancer as a complex adaptive system; neuro-cognitive development; ecological dynamics and robustness; and interactions between physics and computation. * China: Lecture topics include defining principles and methods of complex systems, and specific case studies drawn from the physical, biological, and social sciences. 
Week four will be devoted to the completion and presentation of student projects. COSTS: No tuition is charged. Housing and meal costs are supported as follows: * Santa Fe--100% for graduate students and 50% for postdoctoral fellows (the remaining 50% share is $750, due at the beginning of the school). * China--100% for graduate students and postdoctoral fellows. Most students will provide their own travel funding. Some travel scholarships may be available based on demonstrated need, with preference given to international students. Housing and travel support for accompanying families is not available. ELIGIBILITY: Applications are solicited from graduate students and postdoctoral fellows in any discipline. Some background in science and mathematics at the undergraduate level, at least through calculus and linear algebra, is required. Students should indicate school location preference when applying. Placements may be influenced by recent increased restrictions in U.S. foreign visitor policies. APPLICATION INSTRUCTIONS: Chinese students who wish to apply to the Qingdao school should be alert for a call for nominations at their university or research institution and apply locally. For more information, please e-mail summerschool at santafe.edu. Applications for the Santa Fe school, or for non-China international participants in the Qingdao school, may be submitted using our online application form at http://www.santafe.edu/csss04.html. Application requirements include a current resume with publications list (if any), a statement of your current research interests and comments about why you want to attend the school, and two letters of recommendation from scholars who know your work. Applications sent via postal mail will also be accepted. Do not bind your application materials in any manner.
Send packages to: Summer Schools Santa Fe Institute 1399 Hyde Park Road Santa Fe, NM 87501 USA Deadline: All application materials must be postmarked or electronically submitted no later than January 23, 2004. Women, minorities, and students from developing countries are especially encouraged to apply. FOR FURTHER INFORMATION: Please visit http://www.santafe.edu/csss04.html, or e-mail summerschool at santafe.edu. From niebur at jhu.edu Tue Nov 25 13:45:04 2003 From: niebur at jhu.edu (niebur@jhu.edu) Date: Tue, 25 Nov 2003 13:45:04 -0500 Subject: Graduate studies in Systems Neuroscience at the Mind/Brain Institute of Johns Hopkins University Message-ID: <200311251845.hAPIj4B05658@russell.mindbrain> PLEASE DO NOT USE 'REPLY'; FOR MORE INFORMATION CONTACT ADDRESSES ON WEB PAGES GIVEN BELOW ******************************************************************* Graduate Training in Systems Neuroscience in the Zanvyl Krieger Mind/Brain Institute of Johns Hopkins University ******************************************************************* The Zanvyl Krieger Mind/Brain Institute is dedicated to the study of the neural mechanisms of higher brain functions using modern neurophysiological, anatomical, and computational techniques. Applications are invited for pre-doctoral fellowships from students with a strong interest in systems neuroscience. In addition to students with training in neuroscience or neurobiology, we particularly encourage students with a background in quantitative or computational sciences such as mathematics, physics, engineering or computer science. For those students, biological or neuroscience training is not required, but students must show a strong commitment to combining theoretical and experimental techniques to understand brain function.
Faculty in the Mind/Brain Institute include: Guy McKhann (emeritus) Vernon Mountcastle (emeritus) Gian Poggio (emeritus) Ken Johnson (Director): Neural Mechanisms of Tactile Perception and Object Recognition Ed Connor: Shape Processing in Higher Level Visual Cortex Stewart Hendry: Functional Organization of the Primate Visual System Rudiger von der Heydt: Neural Mechanisms of Visual Perception Steven Hsiao: Neurophysiology of Tactile Shape and Texture Perception Alfredo Kirkwood: Mechanisms of Cortical Modification Ernst Niebur: Computational Neuroscience Takashi Yoshioka: Neural Mechanisms of Tactile Perception and Object Recognition The neuroscience graduate program includes over sixty faculty members in both clinical and academic departments. In addition, students from other graduate programs including Biomedical Engineering, Electrical Engineering, Psychology and Biophysics are part of the Mind/Brain Institute. For more details about the Institute visit the webpage www.mb.jhu.edu Information about the neuroscience graduate program, including online and off-line application, is available from neuroscience.jhu.edu/gradprogram.asp -- Dr. Ernst Niebur Krieger Mind/Brain Institute Assoc. Prof. of Neuroscience Johns Hopkins University niebur at jhu.edu http://cnslab.mb.jhu.edu 3400 N. Charles Street (410)516-8643, -8640 (secr), -8648 (fax), -3357 (lab) Baltimore, MD 21218 From pmunro at mail.sis.pitt.edu Sun Nov 30 22:27:38 2003 From: pmunro at mail.sis.pitt.edu (Paul Munro) Date: Sun, 30 Nov 2003 22:27:38 -0500 (EST) Subject: ICCM 2004 Announcement July 29 - Aug 1, 2004 In-Reply-To: Message-ID: Sixth International Conference of Cognitive Modeling ICCM-2004 http://simon.lrdc.pitt.edu/~iccm To be held July 29 - August 1, 2004, in Pittsburgh, USA (jointly between Carnegie Mellon University and the University of Pittsburgh). THEME ICCM brings researchers together who develop computational models that explain/predict cognitive data. 
The core theme of ICCM2004 is Integrating Computational Models: models that integrate diverse data; integration across modeling approaches; and integration of teaching and modeling. ICCM2004 seeks to grow the discipline of computational cognitive modeling. Towards this end, it will provide - a sophisticated modeling audience for cutting-edge researchers - critical information on the best computational modeling teaching resources for teachers of the next generation of modelers - a forum for integrating insights across alternative modeling approaches (including connectionism, symbolic modeling, dynamical systems, Bayesian modeling, and cognitive architectures) in both basic research and applied settings, across a wide variety of domains, ranging from low-level perception and attention to higher-level problem-solving and learning. - a venue for planning the future growth of the discipline INVITED SPEAKERS Kenneth Forbus (Northwestern University) Michael Mozer (University of Colorado at Boulder) SUBMISSION CATEGORIES --- DEADLINE FOR SUBMISSIONS: April 1st 2004 Papers and Posters Papers and posters will follow the 6-page 10-point double-column single-spaced US-letter format used by the Annual Cognitive Science Society Meeting. Formatting templates and examples will be made available on the website. The research being presented at ICCM-2004 will appear in the conference proceedings. The proceedings will contain 6-page extended descriptions for paper presentations and 2-page extended abstracts for poster presentations. There will also be an opportunity to attach model code and simulation results in an electronic form. Comparative Symposia Three to five participants submit a symposium in which they all present models relating to the same domain or phenomenon. The participants must agree upon a set of fundamental issues in their domain that all participants must address or discuss. 
Parties interested in putting a comparative symposium proposal together are highly encouraged to do so well before the April 1st deadline and will be given feedback shortly after submission. Please see the website for additional information. Newell Prize for Best Student Paper: Award given to the paper first-authored by a student that provides the most innovative or complete account of cognition in a particular domain. The winner of the award will receive full reimbursement for the conference fees, lodging costs, and a $1,000 stipend. The Best Applied Research Paper Award: To be eligible, 1) the paper should capture behavioral data not gathered in the psychology lab OR the paper should capture behavioral data in a task that has high external validity; 2) the best paper is the one from this category that provides the most innovative or complete solution to a real-world, practical problem. Doctoral Consortium: Full-day session 1 day prior to main conference for doctoral students to present dissertation proposal ideas to one another and receive feedback from experts from a variety of modeling approaches. Student participants receive complimentary conference registration as well as lodging and travel reimbursement---maximum amounts will be determined at a later date. CONFERENCE CHAIRS Marsha Lovett (lovett at cmu.edu) Christian Schunn (schunn at pitt.edu) Christian Lebiere (clebiere at maad.com) Paul Munro (pmunro at mail.sis.pitt.edu) Further information about the conference can be found at http://simon.lrdc.pitt.edu/~iccm or through email inquiries to iccm at pitt.edu.
From Bob.Williamson at anu.edu.au Sun Nov 30 18:44:15 2003 From: Bob.Williamson at anu.edu.au (Bob Williamson) Date: Mon, 1 Dec 2003 10:44:15 +1100 (AUS Eastern Daylight Time) Subject: Senior and Junior Research Positions in Machine Learning; Canberra, Australia Message-ID: Senior and Junior Research Positions in Machine Learning (4 positions) Canberra, Australia National ICT Australia (NICTA) is a newly formed research institute based in Canberra and Sydney focussing on Information and Communication Technology. Details of the centre can be found on its website http://nicta.com.au We are now hiring researchers at all levels from postdoctoral to the equivalent of full professor in machine learning. The junior positions are 3-5 years duration. The senior positions are continuing. All Canberra based NICTA researchers are eligible to have an adjunct position at the Australian National University. There are at least four positions available, and at least one at level E (equivalent to full professor). Internationally competitive remuneration is offered. The formal job ad can be found at http://nicta.com.au/jobs/SML_B2.pdf which contains details on how to apply. The closing date is 20 January 2004. Please pass this on to any of your colleagues who you think may be interested. Regards ------------------------------------------+-----------------------------. | Professor Robert (Bob) Williamson, // Phone: +61 2 6125 0079 | | Director, Canberra Research Laboratory // Office: +61 2 6125 8801 | | National ICT Australia Ltd (NICTA) // Fax: +61 2 6125 8623 | | Research School of Information // Mobile: +61 2 0405 3877 | | Sciences and Engineering, // Bob.Williamson at anu.edu.au | | Australian National University, // http://nicta.com.au | | Canberra 0200 AUSTRALIA // http://axiom.anu.edu.au/~williams | `-----------------------------------+------------------------------------'