From robtag at dia.unisa.it Tue Dec 2 06:00:45 1997
From: robtag at dia.unisa.it (Tagliaferri Roberto)
Date: Tue, 2 Dec 1997 12:00:45 +0100
Subject: WIRN 97 First call for papers
Message-ID: <9712021100.AA31469@udsab>

***************** CALL FOR PAPERS *****************

The 10th Italian Workshop on Neural Nets
WIRN VIETRI-98
May 21-23, 1998
Vietri Sul Mare, Salerno, ITALY

**************** FIRST ANNOUNCEMENT *****************

Organizing - Scientific Committee
--------------------------------------------------
B. Apolloni (Univ. Milano)
A. Bertoni (Univ. Milano)
N. A. Borghese (CNR Milano)
D. D. Caviglia (Univ. Genova)
P. Campadelli (Univ. Milano)
A. Colla (ELSAG Bailey Genova)
A. Esposito (I.I.A.S.S.)
M. Frixione (Univ. Salerno)
C. Furlanello (IRST Trento)
G. M. Guazzo (I.I.A.S.S.)
M. Gori (Univ. Siena)
F. Lauria (Univ. Napoli)
M. Marinaro (Univ. Salerno)
F. Masulli (Univ. Genova)
C. Morabito (Univ. Reggio Calabria)
P. Morasso (Univ. Genova)
G. Orlandi (Univ. Roma)
T. Parisini (Univ. Trieste)
E. Pasero (Politecnico Torino)
A. Petrosino (I.I.A.S.S.)
V. Piuri (Politecnico Milano)
M. Protasi (Univ. Roma II)
S. Rampone (Univ. Salerno)
R. Serra (Centro Ricerche Ambientali Montecatini Ravenna)
F. Sorbello (Univ. Palermo)
R. Tagliaferri (Univ. Salerno)

Topics
----------------------------------------------------
Mathematical Models
Architectures and Algorithms
Hardware and Software Design
Hybrid Systems
Pattern Recognition and Signal Processing
Industrial and Commercial Applications
Fuzzy Techniques for Neural Networks

Schedule
-----------------------
Papers Due: January 31, 1998
Replies to Authors: March 31, 1998
Revised Papers Due: May 24, 1998

Sponsors
------------------------------------------------------------------------------
International Institute for Advanced Scientific Studies (IIASS)
Dept. of Scienze Fisiche "E.R. Caianiello", University of Salerno
Dept. of Informatica e Applicazioni "R.M. Capocelli", University of Salerno
Dept. of Scienze dell'Informazione, University of Milano
Istituto per la Ricerca dei Sistemi Informatici Paralleli (IRSIP - CNR)
Societa' Italiana Reti Neuroniche (SIREN)
IEEE Neural Network Council
INNS/SIG Italy
Istituto Italiano per gli Studi Filosofici, Napoli

The 10th Italian Workshop on Neural Nets (WIRN VIETRI-98) will take place in Vietri Sul Mare, Salerno, ITALY, May 21-23, 1998. The conference will bring together scientists studying several topics related to neural networks. The three-day conference, to be held at the I.I.A.S.S., will feature both introductory tutorials and original, refereed papers, to be published by an international publisher. The official languages are Italian and English, but papers must be in English. Papers should be 6 pages, including title, figures, tables, and bibliography. The first page should give keywords, postal and electronic mailing addresses, telephone and FAX numbers, and indicate oral or poster presentation. Submit 3 copies and a 1-page abstract (containing keywords, postal and electronic mailing addresses, telephone and FAX numbers, with no more than 300 words) to the address shown (WIRN 98 c/o IIASS). An electronic copy of the abstract should be sent to the e-mail address below. The camera-ready format will be sent with the acceptance letter.

During the Workshop the "Premio E.R. Caianiello" will be awarded to the best Ph.D. thesis by an Italian researcher in the area of Neural Nets and related fields. The prize amounts to 2,000,000 Italian Lire. Interested researchers (who received the Ph.D. degree in 1995, 1996, 1997, or by February 28, 1998) must send 3 copies of a c.v. and of the thesis to "Premio Caianiello", WIRN 98 c/o IIASS, before February 28, 1998. It is possible to compete for the prize at most twice.

For more information, contact the Secretary of I.I.A.S.S.:

I.I.A.S.S.
Via G. Pellegrino, 19
84019 Vietri Sul Mare (SA)
ITALY
Tel.
+39 89 761167
Fax +39 89 761189
E-Mail: robtag at udsab.dia.unisa.it

or the WWW pages at the address below:
http://www-dsi.ing.unifi.it/neural

*****************************************************************

From bocz at btk.jpte.hu Tue Dec 2 19:02:03 1997
From: bocz at btk.jpte.hu (Bocz Andras)
Date: Wed, 3 Dec 1997 00:02:03 +0000
Subject: conference call
Message-ID:

FIRST CALL FOR PAPERS

We are happy to announce a conference and workshop on

Multidisciplinary Colloquium on Rules and Rule-Following:
Philosophy, Linguistics and Psychology

April 30 - May 2, 1998
at Janus Pannonius University, Pécs, Hungary

Keynote speakers (who have already accepted the invitation):
philosophy: György Kampis, Hungarian Academy of Sciences, Budapest
linguistics: Pierre-Yves Raccah, Idl-CNRS, Paris
psychology: Csaba Pléh, Dept. of General Psychology, Loránd Eötvös University, Budapest

Organizing Committee:
László Tarnay (JPTE, Dept. of Philosophy)
László I. Komlósi (ELTE, Dept. of Psychology)
András Bocz (JPTE, Dept. of English Studies)
e-mail: tarnay at btk.jpte.hu; komlosi at btk.jpte.hu; bocz at btk.jpte.hu

Advisory Board:
Gábor Forrai (Budapest)
György Kampis (Budapest)
Mike Harnish (Tucson)
András Kertész (Debrecen)
Kuno Lorenz (Saarbrücken)
Pierre-Yves Raccah (Paris)
János S. Petőfi (Macerata)

Aims and scope:

The main aim of the conference is to bring together scholars from the fields of cognitive linguistics, philosophy and psychology to investigate the concept of rule and to address various aspects of rule-following. Ever since Wittgenstein formulated in the Philosophical Investigations his famous §201 concerning a kind of rule-following which is not an interpretation, the concept of rule has become a key but elusive idea in almost every discipline and approach, and not only in the human sciences. No wonder, since without this idea the whole edifice of human (and possibly all other kinds of) rationality would surely collapse.
With the rise of cognitive science, and especially the appearance of connectionist models and networks, however, the classical concept of rule is once again seriously contested. To put it very generally, there is an ongoing debate between the classical conception, in which rules appear as a set of formalizable initial conditions or constraints on external operations linking different successive states of a given system (algorithms), and a dynamic conception, in which there is nothing that could be correlated with a prior idea of internal well-formedness of the system's states. The debate centers on the representability of rules: either they are conceived of as meta-representations, or they are a mere façon de parler concerning the development of complex systems. Idealizable on the one hand, token-oriented on the other; something to be implemented on the one hand, self-controlling, backpropagational processing on the other. There is, however, a common idea that almost all kinds of rule-conceptions address: the problem of learning. This idea reverberates from Wittgensteinian pragmatics to strategic non-verbal and rule-governed speech behavior, from perceiving similarities to mental processing. Here are some haunting questions:

- How do we acquire knowledge if there are no regularities in the world around us?
- But how can we perceive those regularities?
- And how do we reason on the basis of that knowledge if there are no observable constraints on inferring?
- But if there are, where do they come from and how are they actually implemented mentally?
- And finally: how do we come to act rationally, that is, in accordance with what we have perceived, processed and inferred?
We are interested in all ways of defining rules and in all aspects of rule-following, from the definition of law, rule, regularity, similarity and analogy to logical consequence, argumentational and other inferences, statistical and linguistic rules, practical and strategic reasoning, and pragmatic and praxeological activities. We expect contributions from the following research fields: game theory, action theory, argumentation theory, cognitive science, linguistics, philosophy of language, epistemology, pragmatics, psychology and semiotics. We would be happy to include some contributions from natural sciences such as neurobiology, physiology or the brain sciences.

The conference is organized in three major sections: philosophy, psychology and linguistics, with three keynote lectures. Then contributions of 30 minutes (20 for the paper and 10 for discussion) follow. We also plan to organize a workshop at the end of each section.

Abstracts:
Abstracts should be one page (maximum 23 lines), specifying the area of contribution and the particular aspect of rule-following to be addressed. Abstracts should be sent by e-mail to tarnay at btk.jpte.hu or bocz at btk.jpte.hu. Hard copies of abstracts may be sent to:

Laszlo Tarnay
Department of Philosophy
Janus Pannonius University
H-7624 Pecs, Hungary

Important dates:
Deadline for submission: Jan. 15, 1998
Notification of acceptance: Febr. 28, 1998
Conference: April 30 - May 2, 1998

*************************************
Bocz András
Department of English
Janus Pannonius University
Ifjúság u. 6.
H-7624 Pécs, Hungary
Tel/Fax: (36) (72) 314714

From terry at salk.edu Tue Dec 2 23:41:30 1997
From: terry at salk.edu (Terry Sejnowski)
Date: Tue, 2 Dec 1997 20:41:30 -0800 (PST)
Subject: Telluride Workshop 1998
Message-ID: <199712030441.UAA08703@helmholtz.salk.edu>

"NEUROMORPHIC ENGINEERING WORKSHOP"
JUNE 29 - JULY 19, 1998
TELLURIDE, COLORADO

Deadline for application is February 1, 1998.
Avis COHEN (University of Maryland)
Rodney DOUGLAS (University of Zurich and ETH, Zurich/Switzerland)
Christof KOCH (California Institute of Technology)
Terrence SEJNOWSKI (Salk Institute and UCSD)
Shihab SHAMMA (University of Maryland)

We invite applications for a three-week summer workshop that will be held in Telluride, Colorado from Monday, June 29 to Sunday, July 19, 1998. The 1997 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at the California Institute of Technology, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html We strongly encourage interested parties to browse through these reports and photo albums.

GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems.
However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems will benefit from closer attention to the principles found through experimental and theoretical studies of real biological nervous systems as whole systems.

FORMAT: The three-week summer workshop will include background lectures on systems neuroscience (in particular learning, oculo-motor and other motor systems, and attention), practical tutorials on analog VLSI design, small mobile robots (Koalas), hands-on projects, and special interest groups. Participants are required to take part in, and ideally complete, at least one of the proposed projects (soon to be defined). They are furthermore encouraged to become involved in as many of the other proposed activities as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons and after dinner.

The analog VLSI practical tutorials will cover all aspects of analog VLSI design, simulation, layout, and testing over the three weeks of the workshop. The first week covers the basics of transistors, simple circuit design, and simulation. This material is intended for participants who have no experience with analog VLSI.
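The first-week transistor material can be previewed numerically. As a rough illustration only (this is not workshop material, and the constants below are invented for the example), the following Python sketch computes the tanh-shaped transfer characteristic of a subthreshold differential pair, a standard building block of analog VLSI neural circuits:

```python
import math

# Illustrative constants (not from the workshop materials):
I_BIAS = 1e-8    # bias current in amperes
KAPPA = 0.7      # subthreshold slope factor (dimensionless)
U_T = 0.025      # thermal voltage at room temperature, in volts

def diff_pair_output(delta_v):
    """Differential output current I1 - I2 of a subthreshold
    differential pair, for input voltage difference delta_v.
    With both transistors in weak inversion the characteristic
    is a tanh, saturating at +/- I_BIAS."""
    return I_BIAS * math.tanh(KAPPA * delta_v / (2.0 * U_T))

for dv in (-0.2, -0.05, 0.0, 0.05, 0.2):
    print(f"dV = {dv:+.2f} V  ->  I1 - I2 = {diff_pair_output(dv):+.3e} A")
```

The tanh saturation is what makes such a circuit a natural sigmoid-like element in hardware neural networks.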
The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners to building the peripheral boards necessary for interfacing analog VLSI retinas to video output monitors. Retina chips will be provided. The third week will feature sessions on floating gates, including lectures on the physics of tunneling and injection, and on inter-chip communication systems. We will also feature a tutorial on the use of small, mobile robots, focusing on Koalas, as an ideal platform for vision, auditory and sensory-motor circuits.

Projects carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generators, robotics, multichip communication, analog VLSI, and learning.

The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three-degree-of-freedom binocular camera system that is fully programmable.

The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots.

The "robotics" group will use rovers, robot arms and working digital vision boards to investigate issues of sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.
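As a rough illustration of the kind of inverse-kinematics problem such a group might tackle (a hedged sketch, not workshop code; the link lengths, gain, and target below are invented for the example), here is an iterative Jacobian-transpose solver for a planar two-link arm:

```python
import math

L1, L2 = 1.0, 1.0  # link lengths, arbitrary units (illustrative values)

def forward(theta1, theta2):
    """End-effector (x, y) position of a planar two-link arm."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def jacobian_transpose_ik(target, theta=(0.3, 0.3), alpha=0.1, steps=2000):
    """Iteratively reduce end-effector error via gradient descent:
    delta_theta = alpha * J^T * error."""
    t1, t2 = theta
    for _ in range(steps):
        x, y = forward(t1, t2)
        ex, ey = target[0] - x, target[1] - y
        # Jacobian of (x, y) with respect to (t1, t2)
        j11 = -L1 * math.sin(t1) - L2 * math.sin(t1 + t2)
        j12 = -L2 * math.sin(t1 + t2)
        j21 = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
        j22 = L2 * math.cos(t1 + t2)
        t1 += alpha * (j11 * ex + j21 * ey)
        t2 += alpha * (j12 * ex + j22 * ey)
    return t1, t2

t1, t2 = jacobian_transpose_ik((1.2, 0.8))
print("reached:", forward(t1, t2))
```

A learning approach would replace the analytic Jacobian with a mapping estimated from motor babbling, but the error-driven update loop has the same shape.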
The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. LOCATION AND ARRANGEMENTS: The workshop will take place at the Telluride Elementary School located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles). Continental and United Airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. No cars are required. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, MACs and PCs running LINUX and Windows95. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three week workshop. 
FINANCIAL ARRANGEMENTS: We have several funding requests pending to pay for most of the costs associated with this workshop. Unlike in previous years, after notifications of acceptance have been mailed out around March 15, 1998, participants are expected to pay a $250 workshop fee. In cases of real hardship, this can be waived. Shared condominiums will be provided for all academic participants at no cost to them. We expect participants from National Laboratories and Industry to pay for these modestly priced condominiums. We expect to have funds to reimburse a small number of participants for travel (up to $500 for domestic travel and up to $800 for overseas travel). Please specify on the application whether such financial help is needed.

HOW TO APPLY: The deadline for receipt of applications is February 1, 1998. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply.

Applications should include:
1. Name, address, telephone, e-mail, FAX, and minority status (optional).
2. Curriculum Vitae.
3. One-page summary of background and interests relevant to the workshop.
4. Description of special equipment needed for demonstrations that could be brought to the workshop.
5. Two letters of recommendation.

Complete applications should be sent to:

Prof. Terrence Sejnowski
The Salk Institute
10010 North Torrey Pines Road
San Diego, CA 92037
email: terry at salk.edu
FAX: (619) 587 0417

Applicants will be notified around March 15, 1998.

From jfeldman at ICSI.Berkeley.EDU Wed Dec 3 12:25:23 1997
From: jfeldman at ICSI.Berkeley.EDU (Jerry Feldman)
Date: Wed, 03 Dec 1997 09:25:23 -0800
Subject: backprop w/o math?
Message-ID: <34859603.7B03@icsi.berkeley.edu> I would appreciate advice on how best to teach backprop to undergraduates who may have little or no math. We are planning to use Plunkett and Elman's Tlearn for exercises. This is part of a new course with George Lakoff, modestly called "The Neural Basis of Thought and Language". We would also welcome any more general comments on the course: http://www.icsi.berkeley.edu/~mbrodsky/cogsci110/ -- Jerry Feldman From shavlik at cs.wisc.edu Fri Dec 5 15:12:30 1997 From: shavlik at cs.wisc.edu (Jude Shavlik) Date: Fri, 5 Dec 1997 14:12:30 -0600 (CST) Subject: CFP: 1998 Machine Learning Conference Message-ID: <199712052012.OAA17873@jersey.cs.wisc.edu> Call for Papers THE FIFTEENTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING July 24-26, 1998 Madison, Wisconsin, USA The Fifteenth International Conference on Machine Learning (ICML-98) will be held at the University of Wisconsin, Madison from July 24 to July 26, 1998. ICML-98 will be collocated with the Eleventh Annual Conference on Computational Learning Theory (COLT-98) and the Fourteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-98). Seven additional conferences, including the Fifteenth National Conference on Artificial Intelligence (AAAI-98), will also be held in Madison (see http://www.cs.wisc.edu/icml98/ for a complete list). Submissions are invited that describe empirical, theoretical, and cognitive-modeling research in all areas of machine learning. Submissions that present algorithms for novel learning tasks, interdisciplinary research involving machine learning, or innovative applications of machine learning techniques to challenging, real-world problems are especially encouraged. The deadline for submissions is MARCH 2, 1998. (An electronic version of the title page is due February 27, 1998.) See http://www.cs.wisc.edu/icml98/callForPapers.html for submission details. 
There are also three joint ICML/AAAI workshops being held July 27, 1998:

Developing ML Applications: Problem Definition, Task Decomposition, and Technique Selection
Learning for Text Categorization
Predicting the Future: AI Approaches to Time-Series Analysis

The submission deadline for these WORKSHOPS is MARCH 11, 1998. Additional details about the workshops are available via http://www.cs.wisc.edu/icml98/

[My apologies if you receive multiple copies of this announcement.]

From lwh at montefiore.ulg.ac.be Fri Dec 5 02:43:56 1997
From: lwh at montefiore.ulg.ac.be (WEHENKEL Louis)
Date: Fri, 5 Dec 1997 08:43:56 +0100
Subject: Book announcement "Automatic learning techniques in power systems"
Message-ID:

Dear colleagues,

This is to announce the availability of a new book on "Automatic learning techniques in power systems". In the coming weeks, I will add some information on my own web site http://www.montefiore.ulg.ac.be/~lwh/.

Cordially,

--
*******************************************************************************
Louis WEHENKEL
Research Associate, National Fund for Scientific Research
Department of Electrical Engineering, University of Liege
Institut Montefiore - SART TILMAN B28, B-4000 LIEGE - BELGIUM
Tel. +32 4 366.26.84  Fax +32 4 366.29.84
Email: lwh at montefiore.ulg.ac.be
!!! New telephone numbers from September 15th 1996 onwards
*******************************************************************************

>> The web site for this book is http://www.wkap.nl/book.htm/0-7923-8068-1
>>
>> Automatic Learning Techniques in Power Systems
>>
>> by
>> Louis A. Wehenkel
>> University of Liège, Institut Montefiore, Belgium
>>
>> THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND
>> COMPUTER SCIENCE
>> Volume 429
>>
>> Automatic learning is a complex, multidisciplinary field of research
>> and development, involving theoretical and applied methods from
>> statistics, computer science, artificial intelligence, biology and
>> psychology.
Its applications to engineering problems, such as those >> encountered in electrical power systems, are therefore challenging, >> while extremely promising. More and more data have become available, >> collected from the field by systematic archiving, or generated through >> computer-based simulation. To handle this explosion of data, automatic >> learning can be used to provide systematic approaches, without which >> the increasing data amounts and computer power would be of little use. >> >> Automatic Learning Techniques in Power Systems is dedicated to the >> practical application of automatic learning to power systems. Power >> systems to which automatic learning can be applied are screened and >> the complementary aspects of automatic learning, with respect to >> analytical methods and numerical simulation, are investigated. >> >> This book presents a representative subset of automatic learning >> methods - basic and more sophisticated ones - available from >> statistics (both classical and modern), and from artificial >> intelligence (both hard and soft computing). The text also discusses >> appropriate methodologies for combining these methods to make the best >> use of available data in the context of real-life problems. >> >> Automatic Learning Techniques in Power Systems is a useful reference >> source for professionals and researchers developing automatic learning >> systems in the electrical power field. >> >> Contents >> >> List of Figures. >> List of Tables. >> Preface. >> 1. Introduction. >> Part I: Automatic Learning Methods. >> 2. Automatic Learning is Searching a Model Space. >> 3. Statistical Methods. >> 4. Artificial Neural Networks. >> 5. Machine Learning. >> 6. Auxiliary Tools and Hybrid Techniques. >> Part II: Application of Automatic Learning to Security Assessment. >> 7. Framework for Applying Automatic Learning to DSA. >> 8. Overview of Security Problems. >> 9. Security Information Data Bases. >> 10. A Sample of Real-Life Applications. 
>> 11. Added Value of Automatic Learning.
>> 12. Future Orientations.
>> Part III: Automatic Learning Applications in Power Systems.
>> 13. Overview of Applications by Type.
>> References.
>> Index.
>> Glossary.
>>
>> 1998, 320pp. ISBN 0-7923-8068-1 PRICE: US$ 122.00

-------
For information on how to subscribe / change address / delete your name from the Power Globe, see http://www.powerquality.com/powerglobe/

From rybaki at eplrx7.es.dupont.com Thu Dec 4 09:39:18 1997
From: rybaki at eplrx7.es.dupont.com (Ilya Rybak)
Date: Thu, 4 Dec 1997 09:39:18 -0500
Subject: No subject
Message-ID: <199712041439.JAA24126@pavlov>

Dear colleagues,

I have received a number of messages saying that the following web pages of mine are not available from many servers because of some problem (?) in VOICENET:

BMV: Behavioral model of active visual perception and recognition
http://www.voicenet.com/~rybak/vnc.html

Modeling neural mechanisms for orientation selectivity in the visual cortex:
http://www.voicenet.com/~rybak/iod.html

Modeling interacting populations of biological neurons:
http://www.voicenet.com/~rybak/pop.html

Modeling neural mechanisms for the respiratory rhythmogenesis:
http://www.voicenet.com/~rybak/resp.html

Modeling neural mechanisms for the baroreceptor vagal reflex:
http://www.voicenet.com/~rybak/baro.html

I thank all the people who informed me about the VOICENET restrictions. I also apologize for the inconvenience and would like to inform those of you who could not reach the above pages from your servers that YOU CAN VISIT THESE PAGES USING THE NUMERIC IP ADDRESS. In order to do this, please replace http://www.voicenet.com/~rybak/... with http://207.103.26.247/~rybak/...

The pages have been recently updated and may be of interest to people working in the fields of computational neuroscience and vision.

Sincerely,
Ilya Rybak

=======================================
Dr. Ilya Rybak
Neural Computation Program
E.I. du Pont de Nemours & Co.
Central Research Department
Experimental Station, E-328/B31
Wilmington, DE 19880-0328 USA
Tel.: (302) 695-3511
Fax: (302) 695-8901
E-mail: rybaki at eplrx7.es.dupont.com
URL: http://www.voicenet.com/~rybak/
     http://207.103.26.247/~rybak/
========================================

From sml%essex.ac.uk at seralph21.essex.ac.uk Thu Dec 4 11:30:39 1997
From: sml%essex.ac.uk at seralph21.essex.ac.uk (Simon Lucas)
Date: Thu, 04 Dec 1997 16:30:39 +0000
Subject: SOFM Demo Applet + source code
Message-ID: <3486DAAE.21C9@essex.ac.uk>

Dear All,

I've written a colourful self-organising map demo - follow the link on my homepage listed below. It runs as a Java applet, and allows one to experiment with the effects of changing the neighbourhood radius. This may be of interest to those who have to teach the stuff - I've also included a complete listing of the source code.

Best Regards,

Simon Lucas

------------------------------------------------
Dr. Simon Lucas
Department of Electronic Systems Engineering
University of Essex
Colchester CO4 3SQ
United Kingdom
Tel: (+44) 1206 872935
Fax: (+44) 1206 872900
Email: sml at essex.ac.uk
http://esewww.essex.ac.uk/~sml
secretary: Mrs Wendy Ryder (+44) 1206 872437
-------------------------------------------------

From sylee at eekaist.kaist.ac.kr Tue Dec 9 04:06:49 1997
From: sylee at eekaist.kaist.ac.kr (sylee)
Date: Tue, 09 Dec 1997 18:06:49 +0900
Subject: Special Invited Session on Neural Networks, SCI'98
Message-ID: <348D0A19.7CDE@ee.kaist.ac.kr>

I have been asked to organize a Special Invited Session on Neural Networks at the World Multiconference on Systemics, Cybernetics, and Informatics (SCI'98), to be held on July 12-16, 1998, in Orlando, Florida, US. SCI is a truly multi-disciplinary conference, where researchers come from many different disciplines. I believe this multi-disciplinary nature is quite unique and is helpful to neural networks researchers.
We all know that neural networks research is interdisciplinary and can contribute to many different applications. You are cordially invited to present your research at SCI'98 and enjoy interesting discussions with researchers from different disciplines. If interested, please inform me of your intention by e-mail. I need to fix the author(s) and paper titles by January 10th, 1998. A brief introductory remark on SCI'98 is attached, and more may be found at http://www.iiis.org.

Best regards,

Prof. Soo-Young Lee
Computation and Neural Systems Laboratory
Department of Electrical Engineering
Korea Advanced Institute of Science and Technology
373-1 Kusong-dong, Yusong-gu
Taejon 305-701
Korea (South)
Tel: +82-42-869-3431
Fax: +82-42-869-8570
E-mail: sylee at ee.kaist.ac.kr

------------------------------------------------------------------
SCI'98

The purpose of the Conference is to bring together university professors, corporate leaders, academic and professional leaders, consultants, scientists and engineers, theoreticians and practitioners from all over the world to discuss the themes of the conference and to participate with original ideas or innovations, knowledge or experience, theories or methodologies, in the areas of Systemics, Cybernetics and Informatics (SCI).

Systemics, Cybernetics and Informatics (SCI) are increasingly related to each other and to almost every scientific discipline and human activity. Their common transdisciplinarity characterizes and connects them, generating strong relations among them and with other disciplines. They interpenetrate each other, integrating a whole that is permeating human thinking and practice. This phenomenon induced the Organizing Committee to structure SCI'98 as a multiconference where participants may focus on an area, or on a discipline, while keeping open the possibility of attending conferences from other areas or disciplines.
This systemic approach stimulates cross-fertilization among different disciplines, inspiring scholars, generating analogies and provoking innovations; which, after all, is one of the very basic principles of the systems movement and a fundamental aim in cybernetics.

From sylee at eekaist.kaist.ac.kr Tue Dec 9 02:12:32 1997
From: sylee at eekaist.kaist.ac.kr (sylee)
Date: Tue, 09 Dec 1997 16:12:32 +0900
Subject: KAIST Brain Science Research Center
Message-ID: <348CEF60.24CB@ee.kaist.ac.kr>

Inauguration of the Brain Science Research Center at the Korea Advanced Institute of Science and Technology

On December 6th, 1997, the Brain Science Research Center (BSRC) had its inauguration ceremony at the Korea Advanced Institute of Science and Technology (KAIST). The Korean Ministry of Science and Technology announced a 10-year national program on "brain research" on September 30, 1997. The research program consists of two main streams, i.e., developing brain-like computing systems and overcoming brain disease. Starting from 1998, the total budget will reach about 1 billion US dollars. The KAIST BSRC will be the main research organization for brain-like computing systems, and 69 research members have joined the BSRC from 18 Korean education and research organizations. Its seven research areas are:

o neurobiology
o cognitive science
o neural network models
o hardware implementation of neural networks
o artificial vision and auditory systems
o inference technology
o intelligent control and communication systems

More information may be available from Prof.
Soo-Young Lee, Director Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-8570 E-mail: sylee at ee.kaist.ac.kr From moshe.sipper at di.epfl.ch Tue Dec 9 04:37:45 1997 From: moshe.sipper at di.epfl.ch (Moshe Sipper) Date: Tue, 09 Dec 1997 10:37:45 +0100 Subject: ICES98: Second Call for Papers Message-ID: <199712090937.KAA01605@lslsun7.epfl.ch> Second Call for Papers +-------------------------------------------------------+ | Second International Conference on Evolvable Systems: | | From Biology to Hardware (ICES98) | +-------------------------------------------------------+ The 1998 International EPFL-Latsis Foundation Conference Swiss Federal Institute of Technology, Lausanne, Switzerland September 23-26, 1998 +-------------------------------+ | http://lslwww.epfl.ch/ices98/ | +-------------------------------+ In Cooperation With: IEEE Neural Networks Council EvoNet - The European Network of Excellence in Evolutionary Computing Societe Suisse des Informaticiens, Section Romande (SISR) Societe des Informaticiens (SI) Centre Suisse d'Electronique et de Microtechnique SA (CSEM) Swiss Foundation for Research in Microtechnology Schweizerische Gesellschaft fur Nanowissenschaften und Nanotechnik (SGNT) The idea of evolving machines, whose origins can be traced to the cybernetics movement of the 1940s and the 1950s, has recently resurged in the form of the nascent field of bio-inspired systems and evolvable hardware. The inaugural workshop, Towards Evolvable Hardware, took place in Lausanne in October 1995, followed by the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), held in Japan in October 1996. 
Following the success of these past events, ICES98 will reunite this burgeoning community, presenting the latest developments in the field, bringing together researchers who use biologically inspired concepts to implement real systems in artificial intelligence, artificial life, robotics, VLSI design, and related domains. Topics to be covered will include, but not be limited to, the following list: * Evolving hardware systems. * Evolutionary hardware design methodologies. * Evolutionary design of electronic circuits. * Self-replicating hardware. * Self-repairing hardware. * Embryonic hardware. * Neural hardware. * Adaptive hardware platforms. * Autonomous robots. * Evolutionary robotics. * Bio-robotics. * Applications of nanotechnology. * Biological- and chemical-based systems. * DNA computing. o General Chair: Daniel Mange, Swiss Federal Institute of Technology - Lausanne o Program Chair: Moshe Sipper, Swiss Federal Institute of Technology - Lausanne o International Steering Committee * Tetsuya Higuchi, Electrotechnical Laboratory (Japan) * Hiroaki Kitano, Sony Computer Science Laboratory (Japan) * Daniel Mange, Swiss Federal Institute of Technology (Switzerland) * Moshe Sipper, Swiss Federal Institute of Technology (Switzerland) * Andres Perez-Uribe, Swiss Federal Institute of Technology (Switzerland) o Conference Secretariat Andres Perez-Uribe, Swiss Federal Institute of Technology - Lausanne Inquiries: Andres.Perez at di.epfl.ch, Tel.: +41-21-6932652, Fax: +41-21-6933705. o Important Dates * March 1, 1998: Submission deadline. * May 1, 1998: Notification of acceptance. * June 1, 1998: Camera-ready due. * September 23-26, 1998: Conference dates. o Publisher: Springer-Verlag Official Language: English o Submission Procedure Papers should not be longer than 10 pages (including figures and bibliography) in the Springer-Verlag llncs style (see Web page for complete instructions). 
Authors must submit five (5) complete copies of their paper (hardcopy only), received by March 1st, 1998, to the Program Chairman: Moshe Sipper - ICES98 Program Chair Logic Systems Laboratory Swiss Federal Institute of Technology CH-1015 Lausanne, Switzerland o The Self-Replication Contest * When: The self-replication contest will be held during the ICES98 conference. * Object: Demonstrate a self-replicating machine, implemented in some physical medium, e.g., mechanical, chemical, electronic, etc. * Important: The machine must be demonstrated AT THE CONFERENCE site. Paper submissions will not be considered. * Participation: The contest is open to all conference attendees (at least one member of any participating team must be a registered attendee). * Prize: + The most original design will be awarded a prize of $1000 (one thousand dollars). + The judgment shall be made by a special contest committee. + The committee's decision is final and incontestable. * WWW: Potential participants are advised to consult the self-replication page: http://lslwww.epfl.ch/~moshes/selfrep/ * If you intend to participate please inform the conference secretary Andres.Perez at di.epfl.ch. o Best-Paper Awards * Among the papers presented at ICES98, two will be chosen by a special committee and awarded, respectively, the best paper award and the best student paper award. * The committee's decision is final and incontestable. * All papers are eligible for the best paper award. To be eligible for the best student paper award, the first coauthor must be a full-time student. o Invited Speakers * Dr. David B. Fogel, Natural Selection, Inc. Editor-in-Chief, IEEE Transactions on Evolutionary Computation * Prof. Lewis Wolpert, University College London o Tutorials Four tutorials, delivered by experts in the field, will take place on Wednesday, September 23, 1998 (contingent upon a sufficient number of registrants). * "An Introduction to Molecular and DNA Computing," Prof. Max H. 
Garzon, University of Memphis * "An Introduction to Nanotechnology," Dr. James K. Gimzewski, IBM Zurich Research Laboratory * "Configurable Computing," Dr. Tetsuya Higuchi, Electrotechnical Laboratory * "An Introduction to Evolutionary Computation," Dr. David B. Fogel, Natural Selection, Inc. o Program * To be posted around May-June, 1998. o Local arrangements * See Web page. o Program Committee * Hojjat Adeli, Ohio State University (USA) * Igor Aleksander, Imperial College (UK) * David Andre, Stanford University (USA) * William W. Armstrong, University of Alberta (Canada) * Forrest H. Bennett III, Stanford University (USA) * Joan Cabestany, Universitat Politecnica de Catalunya (Spain) * Leon O. Chua, University of California at Berkeley (USA) * Russell J. Deaton, University of Memphis (USA) * Boi Faltings, Swiss Federal Institute of Technology (Switzerland) * Dario Floreano, Swiss Federal Institute of Technology (Switzerland) * Terry Fogarty, Napier University (UK) * David B. Fogel, Natural Selection, Inc. (USA) * Hugo de Garis, ATR Human Information Processing Laboratories (Japan) * Max H. Garzon, University of Memphis (USA) * Erol Gelenbe, Duke University (USA) * Wulfram Gerstner, Swiss Federal Institute of Technology (Switzerland) * Reiner W. Hartenstein, University of Kaiserslautern (Germany) * Inman Harvey, University of Sussex (UK) * Hitoshi Hemmi, NTT Human Interface Labs (Japan) * Jean-Claude Heudin, Pole Universitaire Leonard de Vinci (France) * Lishan Kang, Wuhan University (China) * John R. Koza, Stanford University (USA) * Pier L. Luisi, ETH Zentrum (Switzerland) * Bernard Manderick, Free University Brussels (Belgium) * Pierre Marchal, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * Juan J. Merelo, Universidad de Granada (Spain) * Julian Miller, Napier University (UK) * Francesco Mondada, Swiss Federal Institute of Technology (Switzerland) * J. 
Manuel Moreno, Universitat Politecnica de Catalunya (Spain) * Pascal Nussbaum, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * Christian Piguet, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * James Reggia, University of Maryland at College Park (USA) * Eytan Ruppin, Tel Aviv University (Israel) * Eduardo Sanchez, Swiss Federal Institute of Technology (Switzerland) * Andre Stauffer, Swiss Federal Institute of Technology (Switzerland) * Luc Steels, Vrije Universiteit Brussel (Belgium) * Daniel Thalmann, Swiss Federal Institute of Technology (Switzerland) * Adrian Thompson, University of Sussex (UK) * Marco Tomassini, University of Lausanne (Switzerland) * Goran Wendin, Chalmers University of Technology and Goeteborg University (Sweden) * Lewis Wolpert, University College London (UK) * Xin Yao, Australian Defense Force Academy (Australia) From Bill_Warren at Brown.edu Tue Dec 9 11:59:45 1997 From: Bill_Warren at Brown.edu (Bill Warren) Date: Tue, 9 Dec 1997 11:59:45 -0500 (EST) Subject: Graduate Traineeships: Visual Navigation in Humans and Robots Message-ID: Graduate Traineeships Visual Navigation in Humans and Robots Brown University The Department of Cognitive and Linguistic Sciences and the Department of Computer Science at Brown University are seeking graduate applicants interested in visual navigation. The project investigates situated learning of spatial knowledge used to guide navigation in humans and robots. Human experiments study active navigation and landmark recognition in virtual environments, whose structure is manipulated during learning and transfer. In conjunction, biologically-inspired navigation strategies are tested on a mobile robot platform. Computational modeling pursues (a) a neural net model of the hippocampus and (b) reinforcement learning and hidden Markov models for spatial navigation. 
Underlying questions include the geometric structure of the spatial knowledge that is used in active navigation, and how it interacts with the structure of the environment and the navigational task during learning. The project is under the direction of Leslie Kaelbling (Computer Science, www.cs.brown.edu), Michael Tarr and William Warren (Cognitive & Linguistic Sciences, www.cog.brown.edu). Three graduate traineeships are available, beginning in the Fall of 1998. Applicants should apply to either of these home departments. Application materials can be obtained from: The Graduate School, Brown University, Box 1867, Providence, RI 02912, phone (401) 863-2600, www.brown.edu. The application deadline is Jan. 1, 1998. -- Bill William H. Warren, Professor Dept. of Cognitive & Linguistic Sciences Box 1978 Brown University Providence, RI 02912 (401) 863-3980 ofc, 863-2255 FAX Bill_Warren at brown.edu From ptodd at hellbender.mpib-berlin.mpg.de Wed Dec 10 10:43:34 1997 From: ptodd at hellbender.mpib-berlin.mpg.de (ptodd@hellbender.mpib-berlin.mpg.de) Date: Wed, 10 Dec 97 16:43:34 +0100 Subject: pre/postdoc positions in modeling cognitive mechanisms Message-ID: <9712101543.AA21683@hellbender.mpib-berlin.mpg.de> Hello--we are seeking pre- and postdoctoral applicants for positions in our Center for Adaptive Behavior and Cognition to study simple cognitive mechanisms in humans (and other animals), both through simulation modeling techniques and experimentation. Please visit our website (http://www.mpib-berlin.mpg.de/abc) for more information on what we're up to here, and circulate the following ad to all appropriate parties. 
cheers, Peter Todd Center for Adaptive Behavior and Cognition Max Planck Institute for Human Development Berlin, Germany ****************************************************************************** The Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, Germany, is seeking applicants for 1 one-year Predoctoral Fellowship (tax-free stipend DM 21,600) and 1 two-year Postdoctoral Fellowship (tax-free stipend range DM 40,000-44,000) beginning in September 1998. Candidates should be interested in modeling bounded rationality in real-world domains, and should have expertise in one of the following areas: judgment and decision making, evolutionary psychology or biology, cognitive anthropology, experimental economics and social games, risk-taking. For a detailed description of our research projects and current researchers, please visit our WWW homepage at http://www.mpib-berlin.mpg.de/abc or write to Dr. Peter Todd at ptodd at mpib-berlin.mpg.de . The working language of the center is English. Send applications (curriculum vitae, letters of recommendation, and reprints) by February 28, 1998 to Professor Gerd Gigerenzer, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany. ****************************************************************************** From Rob.Callan at solent.ac.uk Thu Dec 11 06:12:27 1997 From: Rob.Callan at solent.ac.uk (Rob.Callan@solent.ac.uk) Date: Thu, 11 Dec 1997 11:12:27 +0000 Subject: PhD studentship Message-ID: <8025656A.003D7CDD.00@hercules.solent.ac.uk> A studentship is available for someone with an interest in traditional AI and connectionism (or 'new AI') to explore adapting a machine's behaviour by instruction. 
Further details can be found at: http://www.solent.ac.uk/syseng/create/html/ai/aires.html or from Academic Quality Service Southampton Institute East Park Terrace Southampton UK SO14 OYN Tel: (01703) 319901 Rob Callan From john at dcs.rhbnc.ac.uk Thu Dec 11 07:45:57 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Thu, 11 Dec 97 12:45:57 +0000 Subject: NEUROCOMPUTING Special Issue - note extended deadline January 19, 1998 Message-ID: <199712111245.MAA18713@platon.cs.rhbnc.ac.uk> NEUROCOMPUTING announces a Special Issue on Theoretical analysis of real-valued function classes. Manuscripts are solicited for a special issue of NEUROCOMPUTING on the topic of 'Theoretical analysis of real-valued function classes'. Analysis of Neural Networks both as approximators and classifiers frequently relies on viewing them as real-valued function classes. This perspective places Neural Network research in a far broader context linking it with approximation theory, complexity theory over the reals, statistical and computational learning theory, among others. The aim of the special issue is to bring these links into focus by inviting papers on the theoretical analysis of real-valued function classes where the results are relevant to the special case of Neural Networks. The following is a (nonexhaustive) list of possible topics of interest for the SPECIAL ISSUE: - Measures of Complexity - Approximation rates - Learning algorithms - Generalization estimation - Real valued complexity analysis - Novel neural functionality and its analysis, eg spiking neurons - Kernel and alternative representations - Algorithms for forming real-valued combinations of functions - On-line learning algorithms The following team of editors will be processing the submitted papers: John Shawe-Taylor (coordinating editor), Royal Holloway, Univ. 
of London Shai Ben-David, Technion, Israel Pascal Koiran, ENS, Lyon, France Rob Schapire, AT&T Labs, Florham Park, USA The deadline for submissions is January 19, 1998. Every effort will be made to ensure fast processing of the papers by editors and referees. For this reason electronic submission of postscript files (compressed and uuencoded) via email is encouraged. Submission should be to the following address: John Shawe-Taylor Dept of Computer Science Royal Holloway, University of London Egham, Surrey TW20 0EX UK or Email: jst at dcs.rhbnc.ac.uk From luo at late.e-technik.uni-erlangen.de Fri Dec 12 09:45:43 1997 From: luo at late.e-technik.uni-erlangen.de (Fa-Long Luo) Date: Fri, 12 Dec 1997 15:45:43 +0100 Subject: book announcement: Applied Neural Networks for Signal Processing Message-ID: <199712121445.PAA00140@late5.e-technik.uni-erlangen.de> New Book Applied Neural Networks for Signal Processing Authors: Fa-Long Luo and Rolf Unbehauen Publisher: Cambridge University Press ISBN: 0 521 56391 7 The use of neural networks in signal processing is becoming increasingly widespread, with applications in areas such as filtering, parameter estimation, signal detection, pattern recognition, signal reconstruction, system identification, signal compression, and signal transmission. The signals concerned include audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical and others. The key features of neural networks involved in signal processing are their asynchronous parallel and distributed processing, nonlinear dynamics, global interconnection of network elements, self-organization and high-speed computational capability. With these features, neural networks can provide very powerful means for solving many problems encountered in signal processing, especially, in nonlinear signal processing, real-time signal processing, adaptive signal processing and blind signal processing. 
From an engineering point of view, this book aims to provide a detailed treatment of neural networks for signal processing by covering basic principles, modelling, algorithms, architectures, implementation procedures and well-designed simulation examples. This book is organized into nine chapters:
Chap. 1: Fundamental Models of Neural Networks for Signal Processing
Chap. 2: Neural Networks for Filtering
Chap. 3: Neural Networks for Spectral Estimation
Chap. 4: Neural Networks for Signal Detection
Chap. 5: Neural Networks for Signal Reconstruction
Chap. 6: Neural Networks for Principal Components and Minor Components
Chap. 7: Neural Networks for Array Signal Processing
Chap. 8: Neural Networks for System Identification
Chap. 9: Neural Networks for Signal Compression
This book will be an invaluable reference for scientists and engineers working in communications, control or any other field related to signal processing. It can also be used as a textbook for graduate courses in electrical engineering and computer science.
Contact: Dr. Fa-Long Luo or Dr. Philip Meyler
Dr. Fa-Long Luo
Lehrstuhl fuer Allgemeine und Theoretische Elektrotechnik
Cauerstr. 7, 91058 Erlangen, Germany
Tel: +49 9131 857794
Fax: +49 9131 13435
Email: luo at late.e-technik.uni-erlangen.de
Dr. Philip Meyler
Cambridge University Press
40 West 20th Street, New York, NY 10011-4211, USA
Tel: +1 212 924 3900 ext. 472
Fax: +1 212 691 3239
E-mail: pmeyler at cup.org
http://www.cup.org/Titles/56/0521563917.html From harnad at coglit.soton.ac.uk Fri Dec 12 15:58:21 1997 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Fri, 12 Dec 1997 20:58:21 GMT Subject: Relational Complexity: BBS Call for Commentators Message-ID: <199712122058.UAA25682@amnesia.psy.soton.ac.uk> Errors-to: harnad1 at coglit.soton.ac.uk Reply-to: bbs at coglit.soton.ac.uk Below is the abstract of a forthcoming BBS target article on: PROCESSING CAPACITY DEFINED BY RELATIONAL COMPLEXITY: Implications for Comparative, Developmental, and Cognitive Psychology by Graeme S. Halford, William H. Wilson, and Steven Phillips This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. 
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. ____________________________________________________________________ PROCESSING CAPACITY DEFINED BY RELATIONAL COMPLEXITY: Implications for Comparative, Developmental, and Cognitive Psychology Graeme Halford Department of Psychology University of Queensland 4072 AUSTRALIA gsh at psy.uq.oz.au William H. Wilson School of Computer Science and Engineering University of New South Wales Sydney New South Wales 2052 AUSTRALIA billw at cse.unsw.edu.au http://www.cse.unsw.edu.au/~billw Steven Phillips Cognitive Science Section Electrotechnical Laboratory 1-1-4 Umezono Tsukuba Ibaraki 305 JAPAN stevep at etl.go.jp http://www.etl.go.jp/etl/ninchi/stevep at etl.go.jp/welcome.html KEYWORDS: Capacity, complexity, working memory, central executive, resource, cognitive development, comparative psychology, neural nets, representation of relations, chunking ABSTRACT: Working memory limitations are best defined in terms of the complexity of relations that can be processed in parallel. Relational complexity is related to processing loads in problem solving, and can distinguish between higher animal species as well as between children of different ages. Complexity is defined by the number of dimensions, or sources of variation, that are related. A unary relation has one argument and one source of variation because its argument can be instantiated in only one way at a time. A binary relation has two arguments and two sources of variation because two argument instantiations are possible at once. A ternary relation is three dimensional, a quaternary relation is four dimensional, and so on. 
Dimensionality is related to the number of "chunks," because both attributes on dimensions and chunks are independent units of information of arbitrary size. Empirical studies of working memory limitations indicate a soft limit which corresponds to processing one quaternary relation in parallel. More complex concepts are processed by segmentation or conceptual chunking. In segmentation, tasks are broken into components which do not exceed processing capacity and are processed serially. In conceptual chunking, representations are "collapsed" to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Parallel distributed implementations of relational representations show that relations with more arguments have a higher computational cost; this corresponds to empirical observations of higher processing loads in humans. Empirical evidence is presented that relational complexity (1) distinguishes between higher species and is related to (2) processing load in reasoning and in sentence comprehension and to (3) the age-related increase in the complexity of relations processed by children. Implications for neural net models and for theories of cognition and cognitive development are discussed. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp or gopher from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. 
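As an aside for readers of the abstract above, its central notion — wait, plainly: the idea that a relation's complexity is the number of argument slots that can vary independently can be made concrete in a small sketch. This is illustrative only, not from the target article; representing relations as sets of argument tuples is our assumption.

```python
# Illustrative sketch (not from the target article): treat a relation as
# a set of argument tuples; its dimensionality (relational complexity)
# is then the number of argument slots, i.e. the number of independent
# sources of variation that must be processed in parallel.

def dimensionality(relation):
    """Arity of a relation represented as a set of argument tuples."""
    arities = {len(instance) for instance in relation}
    assert len(arities) == 1, "all instances must share one arity"
    return arities.pop()

big = {("elephant",), ("whale",)}                      # unary: 1 dimension
larger_than = {("elephant", "dog"), ("dog", "mouse")}  # binary: 2 dimensions
addition = {(2, 3, 5), (1, 1, 2), (4, 0, 4)}           # ternary: add(a, b, c)

for r in (big, larger_than, addition):
    print(dimensionality(r))  # prints 1, then 2, then 3
```

On this reading, the abstract's "soft limit" of one quaternary relation corresponds to relations of dimensionality 4.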
The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.halford.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.halford ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.halford gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.howe When you have the file(s) you want, type: quit From atick at monaco.rockefeller.edu Sun Dec 14 11:15:40 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Sun, 14 Dec 1997 11:15:40 -0500 Subject: Research Positions in Computer Vision Message-ID: <9712141115.ZM10632@monaco.rockefeller.edu> Research Positions in Computer Vision and Pattern Recognition *****IMMEDIATE NEW OPENINGS******* Visionics Corporation announces the opening of four new positions for research scientists and engineers in the field of Computer Vision and Pattern Recognition. The positions are at various levels. Candidates are expected to have demonstrated research abilities in computer vision, artificial neural networks, image processing, computational neuroscience or pattern recognition. The successful candidates will join the growing R&D team of Visionics in developing real-world pattern recognition algorithms. This is the team that produced the face recognition technology that was recently awarded the prestigious PC WEEK's Best of Show AND Best New Technology at COMDEX Fall 97. The job openings are at Visionics' new R&D facility at Exchange Place on the Waterfront of Jersey City, New Jersey--right across the river from the World Trade Center in Manhattan. 
Visionics offers a competitive salary and benefits package that includes stock options, health insurance, retirement etc, and an excellent chance for rapid career advancement. ***For more information about Visionics please visit our webpage at http://www.faceit.com. FOR IMMEDIATE CONSIDERATION, FAX YOUR RESUME TO: (1) Fax: 201 332 9313 Dr. Norman Redlich--CTO Visionics Corporation 1 Exchange Place Jersey City, NJ 07302 or you can (2) Email it to jobs at faceit.com Visionics is an equal opportunity employer--women and minority candidates are encouraged to apply. Joseph J. Atick, CEO Visionics Corporation 201 332 2890 jja at faceit.com %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From terry at salk.edu Mon Dec 15 16:20:14 1997 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 15 Dec 1997 13:20:14 -0800 (PST) Subject: NEURAL COMPUTATION 10:1 Message-ID: <199712152120.NAA16748@helmholtz.salk.edu> Neural Computation - Contents Volume 10, Number 1 - January 1, 1998 ARTICLE Memory Maintenance via Neuronal Regulation David Horn, Nir Levy and Eytan Ruppin NOTE Ordering of Self-Organizing Maps in Multidimensional Cases Guang-Bin Huang, Haroon A. Babri, and Hua-Tian Li LETTERS Predicting the Distribution of Synaptic Strengths and Cell Firing Correlations in a Self-Organizing, Sequence Prediction Model Asohan Amarasingham, and William B. Levy State-Dependent Weights for Neural Associative Memories Ravi Kothari, Rohit Lotlikar, and Marc Cathay The Role of the Hippocampus in Solving the Morris Water Maze A. David Redish and David S. Touretzky Near-Saddle-Node Bifurcation Behavior as Dynamics in Working Memory for Goal-Directed Behavior Hiroyuki Nakahara and Kenji Doya A Canonical Form of Nonlinear Discrete-Time Models Gerard Dreyfus and Yizhak Idan A Low-Sensitivity Recurrent Neural Network Andrew D. 
Back and Ah Chung Tsoi Fixed-Point Attractor Analysis for a Class of Neurodynamics Jianfeng Feng and David Brown GTM: The Generative Topographic Mapping Christopher M. Bishop, Markus Svensen and Christopher K. I. Williams Identification Criteria and Lower Bounds for Perceptron-Like Learning Rules Michael Schmitt ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1998 - VOLUME 10 - 8 ISSUES
                 USA    Canada*   Other Countries
Student/Retired  $50    $53.50    $78
Individual       $82    $87.74    $110
Institution      $285   $304.95   $318
* includes 7% GST
(Back issues from Volumes 1-9 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From kia at particle.kth.se Mon Dec 15 17:13:07 1997 From: kia at particle.kth.se (Karina Waldemark) Date: Mon, 15 Dec 1997 23:13:07 +0100 Subject: VI-DYNN'98, Workshop on Virtual Intelligence and Dynamic Neural Networks Message-ID: <3495AB73.53CA8816@particle.kth.se> ------------------------------------------------------------------- Announcement and call for papers: VI-DYNN'98 Workshop on Virtual Intelligence - Dynamic Neural Networks Royal Institute of Technology, KTH Stockholm, Sweden June 22-26, 1998 ------------------------------------------------------------------ VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn Abstracts due: February 28, 1998 VI-DYNN'98 will combine the DYNN emphasis on biologically inspired neural network models, especially Pulse Coupled Neural Networks (PCNN), with the practical applications emphasis of the VI workshops. In particular we will focus on why, how, and where to use biologically inspired neural systems. 
For example, we will learn how to adapt such systems to sensors such as digital X-ray imaging devices, CCDs and SAR, and examine questions of accuracy, speed, etc. Developments in research on biological neural systems, such as the mammalian visual system, and how smart sensors can benefit from this knowledge will also be presented. Pulse Coupled Neural Networks (PCNN) have recently become among the most exciting new developments in the field of artificial neural networks (ANN), showing great promise for pattern recognition and other applications. PCNN-type models are related much more closely to real biological neural systems than most ANNs, and many researchers in the field of ANN pattern recognition are unfamiliar with them. VI-DYNN'98 will continue in this spirit and join the DYNN series with the Virtual Intelligence workshop series. ----------------------------------------------------------------- VI-DYNN'98 Topics:
Dynamic NN
Fuzzy Systems
Spiking Neurons
Rough Sets
Brain Image
Genetic Algorithms
Virtual Reality
------------------------------------------------------------------ Applications:
Medical
Defense & Space
Others
------------------------------------------------------------------- Special sessions:
PCNN - Pulse Coupled Neural Networks: exciting new artificial neural networks related to real biological neural systems
PCNN applications: pattern recognition, image processing, digital x-ray imaging devices, CCDs & SAR
Biologically inspired neural network models: why, how and where to use them
The mammalian visual system: smart sensors benefit from their knowledge
The Electronic Nose
------------------------------------------------------------------------ International Organizing Committee: John L. Johnson (MICOM, USA), Jason M. Kinser (George Mason U., USA) Thomas Lindblad (KTH, Sweden) Robert Lorenz (Univ. Wisconsin, USA) Mary Lou Padgett (Auburn U., USA), Robert T. 
Savely (NASA, Houston) Manuel Samuelides (CERT-ONERA, Toulouse, France) John Taylor (Kings College, UK) Simon Thorpe (CERI-CNRS, Toulouse, France) ------------------------------------------------------------------------ Local Organizing Committee: Thomas Lindblad (KTH) - Conf. Chairman Clark S. Lindsey (KTH) - Conf. Secretary Kenneth Agehed (KTH) Joakim Waldemark (KTH) Karina Waldemark (KTH) Nina Weil (KTH) Moyra Mann - registration officer --------------------------------------------------------------------- Contact: Thomas Lindblad (KTH) - Conf. Chairman email: lindblad at particle.kth.se Phone: [+46] - (0)8 - 16 11 09 Clark S. Lindsey (KTH) - Conf. Secretary email: lindsey at particle.kth.se Phone: [+46] - (0)8 - 16 10 74 Switchboard: [+46] - (0)8 - 16 10 00 Fax: [+46] - (0)8 - 15 86 74 VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn --------------------------------------------------------------------- From rafal at idsia.ch Tue Dec 16 11:00:07 1997 From: rafal at idsia.ch (Rafal Salustowicz) Date: Tue, 16 Dec 1997 17:00:07 +0100 (MET) Subject: Learning Team Strategies: Soccer Case Studies Message-ID: LEARNING TEAM STRATEGIES: SOCCER CASE STUDIES Rafal Salustowicz Marco Wiering Juergen Schmidhuber IDSIA, Lugano (Switzerland) Revised Technical Report IDSIA-29-97 To appear in the Machine Learning Journal (1998) We use simulated soccer to study multiagent learning. Each team's players (agents) share an action set and policy, but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare several learning algorithms: TD-Q learning with linear neural networks (TD-Q), Probabilistic Incremental Program Evolution (PIPE), and a PIPE version that learns by coevolution (CO-PIPE). TD-Q is based on learning evaluation functions (EFs) mapping input/action pairs to expected reward. PIPE and CO-PIPE search policy space directly. 
They use adaptive probability distributions to synthesize programs that calculate action probabilities from current inputs. Our results show that linear TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE and CO-PIPE, however, do not depend on EFs and find good policies faster and more reliably. This suggests that in some multiagent learning scenarios direct search in policy space can offer advantages over EF-based approaches. http://www.idsia.ch/~rafal/research.html ftp://ftp.idsia.ch/pub/rafal/soccer.ps.gz Related papers by the same authors: Evolving soccer strategies. In N. Kasabov, R. Kozma, K. Ko, R. O'Shea, G. Coghill, and T. Gedeon, editors, Progress in Connectionist-based Information Systems:Proc. of the 4th Intl. Conf. on Neural Information Processing ICONIP'97, pages 502-505, Springer-Verlag, Singapore, 1997. ftp://ftp.idsia.ch/pub/rafal/ICONIP_soccer.ps.gz On learning soccer strategies. In W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, editors, Proc. of the 7th Intl. Conf. on Artificial Neural Networks (ICANN'97), volume 1327 of Lecture Notes in Computer Science, pages 769-774, Springer-Verlag Berlin Heidelberg, 1997. 
ftp://ftp.idsia.ch/pub/rafal/ICANN_soccer.ps.gz ********************************************************************** Rafal Salustowicz Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA) Corso Elvezia 36 e-mail: rafal at idsia.ch 6900 Lugano ============== Switzerland raf at cs.tu-berlin.de Tel (IDSIA) : +41 91 91198-38 raf at psych.stanford.edu Tel (office): +41 91 91198-32 Fax : +41 91 91198-39 WWW: http://www.idsia.ch/~rafal ********************************************************************** From becker at curie.psychology.mcmaster.ca Tue Dec 16 13:26:22 1997 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Tue, 16 Dec 1997 13:26:22 -0500 (EST) Subject: Faculty Position at McMaster University Message-ID: BEHAVIORAL NEUROSCIENCE The Department of Psychology at McMaster University seeks applications for a position as Assistant Professor commencing July 1, 1998, subject to final budgetary approval. We seek a candidate who holds a Ph.D. and who is conducting research in an area of behavioral neuroscience that will complement one or more of our current strengths in neural plasticity, perceptual development, cognition, computational neuroscience, and animal behavior. Appropriate research interests include, for example, cognitive neuroscience/neuropsychology, the physiology of memory, sensory physiology, the physiology of motivation/emotion, neuroethology, and computational neuroscience. The candidate would be expected to teach in the area of neuropsychology, as well as a laboratory course in neuroscience. This is initially for a 3-year contractually-limited appointment. In accordance with Canadian immigration requirements, this advertisement is directed to Canadian citizens and permanent residents. McMaster University is committed to Employment Equity and encourages applications from all qualified candidates, including aboriginal peoples, persons with disabilities, members of visible minorities, and women. 
To apply, send a curriculum vitae, a short statement of research interests, a publication list with selected reprints, and three letters of reference to the following address: Prof. Ron Racine, Department of Psychology, McMaster University, Hamilton, Ontario, CANADA L8S 4K1. From rvicente at onsager.if.usp.br Wed Dec 17 09:01:00 1997 From: rvicente at onsager.if.usp.br (Renato Vicente) Date: Wed, 17 Dec 1997 12:01:00 -0200 (EDT) Subject: Paper: Learning a Spin Glass Message-ID: <199712171401.MAA08019@curie.if.usp.br> Dear Connectionists, We would like to announce our new paper on statistical physics of neural networks. The paper is available at: http://www.fma.if.usp.br/~rvicente/nnsp.html/NNGUSP13.ps.gz Comments are welcome. Learning a spin glass: determining Hamiltonians from metastable states. ------------------------------------------------------------------------ SILVIA KUVA, OSAME KINOUCHI and NESTOR CATICHA Abstract: We study the problem of determining the Hamiltonian of a fully connected Ising Spin Glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms.
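A minimal toy sketch of the setup described in the abstract (an illustration, not the authors' algorithm): metastable states of a teacher spin glass are collected by zero-temperature quenches, and a student Hamiltonian is fitted with a perceptron-style Hebbian rule. The system size, the quench dynamics, and the update rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20  # illustrative system size

# Teacher: a fully connected Ising spin glass with symmetric Gaussian
# couplings and zero self-coupling.
J_teacher = rng.standard_normal((N, N))
J_teacher = (J_teacher + J_teacher.T) / 2
np.fill_diagonal(J_teacher, 0.0)

def quench(J, s):
    """Zero-temperature single-spin-flip descent to a local minimum
    (metastable state) of the energy landscape of J."""
    s = s.copy()
    flipped = True
    while flipped:
        flipped = False
        for i in range(len(s)):
            if s[i] * (J[i] @ s) < 0:  # flipping spin i lowers the energy
                s[i] = -s[i]
                flipped = True
    return s

# "Measurements": a batch of metastable states of the teacher.
minima = [quench(J_teacher, rng.choice([-1.0, 1.0], size=N))
          for _ in range(40)]

# Student: Hebbian corrections on spins that violate the metastability
# condition s_i * h_i >= 0 under the student's own couplings.
J_student = np.zeros((N, N))
for _ in range(100):
    for s in minima:
        h = J_student @ s
        for i in np.where(s * h <= 0)[0]:
            J_student[i] += s[i] * s
            J_student[:, i] += s[i] * s  # keep the couplings symmetric
        np.fill_diagonal(J_student, 0.0)

satisfied = np.mean([(s * (J_student @ s) > 0).mean() for s in minima])
print(satisfied)  # fraction of stability constraints the student satisfies
```

With enough sweeps the student typically stabilizes most of the measured states; the report's analytical learning curves quantify how performance scales with the number of measurements.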
From HEINKED at psycho1.bham.ac.uk Wed Dec 17 19:48:10 1997 From: HEINKED at psycho1.bham.ac.uk (Mr Dietmar Heinke) Date: Wed, 17 Dec 1997 16:48:10 -0800 Subject: CALL FOR ABSTRACTS Message-ID: <349872CA.238@psycho1.bham.ac.uk> ***************** Call for Abstracts **************************** 5th Neural Computation and Psychology Workshop Connectionist Models in Cognitive Neuroscience University of Birmingham, England Tuesday 8th September - Thursday 10th September 1998 This workshop is the fifth in a series, following on from the first at the University of Wales, Bangor (with theme "Neurodynamics and Psychology"), the second at the University of Edinburgh, Scotland ("Memory and Language"), the third at the University of Stirling, Scotland ("Perception") and the fourth at University College London ("Connectionist Representations"). The general aim is to bring together researchers from such diverse disciplines as artificial intelligence, applied mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on the connectionist modelling of psychology. This year's workshop is to be hosted by the University of Birmingham. As in previous years there will be a theme to the workshop. The theme is to be: Connectionist Models in Cognitive Neuroscience. This covers many important issues ranging from modelling physiological structure to cognitive function and its disorders (in neuropsychological and psychiatric cases). As in previous years we aim to keep the workshop fairly small, informal and single track. As always, participants bringing expertise from outside the UK are particularly welcome. A one-page abstract should be sent to Professor Glyn W. Humphreys School of Psychology University of Birmingham Birmingham B15 2TT United Kingdom Deadline for abstracts: 31st of May, 1998 Registration, Food and Accommodation The workshop will be held at the University of Birmingham.
The conference registration fee (which includes lunch and morning and afternoon tea/coffee each day) is 60 pounds. A special conference dinner (optional) is planned for the Wednesday evening costing 20 pounds. Accommodation is available in university halls or local hotels. A special package price of 150 pounds covers three nights' accommodation in the halls of residence together with the registration fee. Organising Committee Dietmar Heinke Andrew Olson Professor Glyn W. Humphreys Contact Details Workshop Email address: e.fox at bham.ac.uk Dietmar Heinke, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 4920, Fax: +44 212 414 4897 Email: heinked at psycho1.bham.ac.uk Andrew Olson, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 3328, Fax: +44 212 414 4897 Email: olsona at psycho1.bham.ac.uk Glyn W. Humphreys, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 4930, Fax: +44 212 414 4897 Email: humphreg at psycho1.bham.ac.uk From jordan at ai.mit.edu Wed Dec 17 15:09:36 1997 From: jordan at ai.mit.edu (Michael I. Jordan) Date: Wed, 17 Dec 1997 15:09:36 -0500 (EST) Subject: graphical models and variational methods Message-ID: <9712172009.AA01909@carpentras.ai.mit.edu> A tutorial paper on graphical models and variational approximations is available at: ftp://psyche.mit.edu/pub/jordan/variational-intro.ps.gz http://www.ai.mit.edu/projects/jordan.html ------------------------------------------------------- AN INTRODUCTION TO VARIATIONAL METHODS FOR GRAPHICAL MODELS Michael I. Jordan Massachusetts Institute of Technology Zoubin Ghahramani University of Toronto Tommi S. Jaakkola University of California Santa Cruz Lawrence K. Saul AT&T Labs -- Research This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models.
We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, showing how upper and lower bounds can be found for local probabilities, and discussing methods for extending these bounds to bounds on global probabilities of interest. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case. From bruf at igi.tu-graz.ac.at Thu Dec 18 05:48:05 1997 From: bruf at igi.tu-graz.ac.at (Berthold Ruf) Date: Thu, 18 Dec 1997 11:48:05 +0100 Subject: Paper: Spatial and Temporal Pattern Analysis via Spiking Neurons Message-ID: <3498FF65.B9895E06@igi.tu-graz.ac.at> The following technical report is now available at http://www.cis.tu-graz.ac.at/igi/bruf/rbf-tr.ps.gz Thomas Natschlaeger and Berthold Ruf: Spatial and Temporal Pattern Analysis via Spiking Neurons Abstract: Spiking neurons, receiving temporally encoded inputs, can compute radial basis functions (RBFs) by storing the relevant information in their delays. In this paper we show how these delays can be learned using exclusively locally available information (basically the time difference between the pre- and postsynaptic spike). Our approach gives rise to a biologically plausible algorithm for finding clusters in a high dimensional input space with networks of spiking neurons, even if the environment is changing dynamically. Furthermore, we show that our learning mechanism makes it possible for such RBF neurons to perform a form of feature extraction, recognizing that only certain input coordinates carry relevant information. Finally we demonstrate that this model allows the recognition of temporal sequences even if they are distorted in various ways.
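As a rough illustration of the delay-learning idea (a sketch under assumptions, not the authors' algorithm): a neuron responds most strongly when its delayed input spikes arrive in coincidence, so each delay is nudged by the locally available difference between the delayed presynaptic arrival time and the postsynaptic spike time, here approximated by the mean arrival time. The pattern encoding, learning rate, and update rule are all made-up stand-ins.

```python
import random

random.seed(0)
n_inputs = 5
eta = 0.2  # illustrative learning rate

# A temporally encoded input pattern: one spike time (ms) per input line.
pattern = [random.uniform(0.0, 10.0) for _ in range(n_inputs)]
# Synaptic delays, initially random.
delays = [random.uniform(0.0, 10.0) for _ in range(n_inputs)]

def spread(times, delays):
    """Spread of the delayed arrival times t_j + d_j. A small spread means
    near-coincident arrivals, i.e. a strong (RBF-like) response."""
    arr = [t + d for t, d in zip(times, delays)]
    mean = sum(arr) / len(arr)
    return sum((a - mean) ** 2 for a in arr) / len(arr)

for _ in range(200):
    arr = [t + d for t, d in zip(pattern, delays)]
    t_post = sum(arr) / len(arr)  # stand-in for the postsynaptic spike time
    # Local rule: each delay moves by the pre/post spike-time difference.
    delays = [d + eta * (t_post - a) for d, a in zip(delays, arr)]

other = [random.uniform(0.0, 10.0) for _ in range(n_inputs)]
print(spread(pattern, delays), spread(other, delays))
```

After training, the spread on the stored pattern is essentially zero while a different pattern produces a larger spread: the neuron has become an RBF-like detector centred, in time, on the trained pattern.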
----------------------------------------------------
Berthold Ruf
Office: Institute for Theoretical Computer Science, TU Graz, Klosterwiesgasse 32/2, A-8010 Graz, AUSTRIA; Tel: +43 316/873-5824; FAX: +43 316/873-5805
Home: Vinzenz-Muchitsch-Strasse 56/9, A-8020 Graz, AUSTRIA; Tel: +43 316/274580
Email: bruf at igi.tu-graz.ac.at
Homepage: http://www.cis.tu-graz.ac.at/igi/bruf/
---------------------------------------------------- From giles at research.nj.nec.com Thu Dec 18 17:38:06 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 18 Dec 97 17:38:06 EST Subject: paper available: financial prediction Message-ID: <9712182238.AA13422@alta> The following technical report and related conference paper are now available at the WWW sites listed below: www.neci.nj.nec.com/homepages/lawrence/papers.html www.neci.nj.nec.com/homepages/giles/#recent-TRs www.cs.umd.edu/TRs/TRumiacs.html We apologize in advance for any multiple postings that may occur. ************************************************************************** Noisy Time Series Prediction using Symbolic Representation and Recurrent Neural Network Grammatical Inference U. of Maryland Technical Report CS-TR-3625 and UMIACS-TR-96-27 Steve Lawrence (1), Ah Chung Tsoi (2), C. Lee Giles (1,3) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA (2) Faculty of Informatics, University of Wollongong, NSW 2522 Australia (3) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA ABSTRACT Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high noise, small sample size signals.
We introduce a new intelligent signal processing method which addresses the difficulties. The method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. The symbolic representation aids the extraction of symbolic knowledge from the recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Rules related to well-known behavior such as trend following and mean reversal are extracted. Keywords - noisy time series prediction, recurrent neural networks, self-organizing map, efficient market hypothesis, foreign exchange rate, non-stationarity, rule extraction ********************************************************************** A short published conference version: C.L. Giles, S. Lawrence, A-C. Tsoi, "Rule Inference for Financial Prediction using Recurrent Neural Networks," Proceedings of the IEEE/IAFE Conf. on Computational Intelligence for Financial Engineering, p. 253, IEEE Press, 1997. can be found at www.neci.nj.nec.com/homepages/lawrence/papers.html www.neci.nj.nec.com/homepages/giles/papers/IEEE.CIFEr.ps.Z __ C.
Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From majidi at DI.Unipi.IT Fri Dec 19 11:34:04 1997 From: majidi at DI.Unipi.IT (Darya Majidi) Date: Fri, 19 Dec 1997 17:34:04 +0100 Subject: NNESMED '98 Message-ID: <199712191634.RAA23176@mailserver.di.unipi.it> 2-4 September 1998 Call for papers: 3rd International Conference on Neural Networks and Expert Systems in Medicine and Healthcare (NNESMED 98) Pisa, Italy Deadline: 13 February 1998, electronic file (4 pages containing an abstract of 200 words) Contact: Prof Antonina Starita Computer Science Department University of Pisa Corso Italia 40, 56126 Pisa Tel: +39-50-887215 Fax: +39-50-887226 E-mail: nnesmed at di.unipi.it WWW: http://www.di.unipi.it/~nnesmed/home.htm From takane at takane2.psych.mcgill.ca Fri Dec 19 11:53:23 1997 From: takane at takane2.psych.mcgill.ca (Yoshio Takane) Date: Fri, 19 Dec 1997 11:53:23 -0500 (EST) Subject: No subject Message-ID: <199712191653.LAA03407@takane2.psych.mcgill.ca> This will be the last call for papers for the special issue of Behaviormetrika. *****CALL FOR PAPERS***** BEHAVIORMETRIKA, an English journal published in Japan to promote the development and dissemination of quantitative methodology for analysis of human behavior, is planning to publish a special issue on ANALYSIS OF KNOWLEDGE REPRESENTATIONS IN NEURAL NETWORK (NN) MODELS broadly construed. I have been asked to serve as the guest editor for the special issue and would like to invite all potential contributors to submit high-quality articles for possible publication in the issue. In statistics, information extracted from the data is stored in estimates of model parameters. In regression analysis, for example, information in observed predictor variables useful in prediction is summarized in estimates of regression coefficients.
Due to the linearity of the regression model, interpretation of the estimated coefficients is relatively straightforward. In NN models knowledge acquired from training samples is represented by the weights indicating the strength of connections between neurons. However, due to the nonlinear nature of the model, interpretation of these weights is extremely difficult, if not impossible. Consequently, NN models have largely been treated as black boxes. This special issue is intended to break new ground by bringing together various attempts to understand internal representations of knowledge in NN models. Papers are invited on network analysis including: * Methods of analyzing basic mechanisms of NN models * Examples of successful network analysis * Comparison among different network architectures in their knowledge representation (e.g., BP vs Cascade Correlation) * Comparison with statistical approaches * Visualization of high dimensional functions * Regularization methods to improve the quality of knowledge representation * Model selection in NN models * Assessment of stability and generalizability of knowledge in NN models * Effects of network topology, data encoding scheme, algorithm, environmental bias, etc. on network performance * Implementing prior knowledge in NN models SUBMISSION: Deadline for submission: January 31, 1998 Deadline for the first round reviews: April 30, 1998 Deadline for submission of the final version: August 31, 1998 Number of copies of a manuscript to be submitted: four Format: no longer than 10,000 words; APA style ADDRESS FOR SUBMISSION: Professor Yoshio Takane Department of Psychology McGill University 1205 Dr.
Penfield Avenue Montreal QC H3A 1B1 CANADA email: takane at takane2.psych.mcgill.ca tel: 514 398 6125 fax: 514 398 4896 From harnad at coglit.soton.ac.uk Fri Dec 19 08:20:24 1997 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Fri, 19 Dec 1997 13:20:24 GMT Subject: PSYC Call for Commentators Message-ID: <199712191320.NAA14295@amnesia.psy.soton.ac.uk> Latimer/Stevens: PART-WHOLE PERCEPTION The target article whose abstract follows below has just appeared in PSYCOLOQUY, a refereed journal of Open Peer Commentary sponsored by the American Psychological Association. Qualified professional biobehavioral, neural or cognitive scientists are hereby invited to submit Open Peer Commentary on this article. Please write for Instructions if you are not familiar with PSYCOLOQUY format and acceptance criteria (all submissions are refereed). The article can be read or retrieved at this URL: ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1997.volume.8/psyc.97.8.13.part-whole-perception.1.latimer For submitting articles and commentaries and for information: EMAIL: psyc at pucc.princeton.edu URL: http://www.cogsci.soton.ac.uk/psycoloquy/ AUTHORS' RATIONALE FOR SOLICITING COMMENTARY: The topic of whole/part perception has a long history of controversy, and its ramifications still pervade research on perception and pattern recognition in psychology, artificial intelligence and cognitive science where controversies such as "global versus local precedence" and "holistic versus analytic processing" are still very much alive. We argue that, whereas the vast majority of studies on whole/part perception have been empirical, the major and largely unaddressed problems in the area are conceptual and concern how the terms "whole" and "part" are defined and how wholes and parts are related in each particular experimental context. Without some general theory of wholes and parts and their relations and some consensus on nomenclature, we feel that pseudo-controversies will persist.
One possible principle of unification and clarification is a formal analysis of the whole/part relationship by Nicholas Rescher and Paul Oppenheim (1955). We outline this formalism and demonstrate that not only does it have important implications for how we conceptualize wholes and parts and their relations, but it also has far reaching implications for the conduct of experiments on a wide range of perceptual phenomena. We challenge the well established view that whole/part perception is essentially an empirical problem and that its solution will be found in experimental investigations. We argue that there are many purely conceptual issues that require attention prior to empirical work. We question the logic of global-precedence theory and any interpretation of experimental data in terms of the extraction of global attributes without prior analysis of local elements. We also challenge theorists to provide precise, testable theories and working mechanisms that embody the whole/part processing they purport to explain. Although it deals mainly with vision and audition, our approach can be generalized to include tactile perception. Finally, we argue that apparently disparate theories, controversies, results and phenomena can all be considered under the three main conditions for wholes and parts proposed by Rescher and Oppenheim. Commentary and counterargument are sought on all these issues. In particular, we would like to hear arguments to the effect that wholes and parts (perceptual or otherwise) can exist in some absolute sense, and we would like to learn of machines, devices, programs or systems that are capable of extracting holistic properties without prior analysis of parts. 
----------------------------------------------------------------------- psycoloquy.97.8.13.part-whole-perception.1.latimer Wed 17 Dec 1997 ISSN 1055-0143 (39 paragraphs, 68 references, 923 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1997 Latimer & Stevens SOME REMARKS ON WHOLES, PARTS AND THEIR PERCEPTION Cyril Latimer Department of Psychology University of Sydney NSW 2006, Australia email: cyril at psych.su.oz.au URL: http://www.psych.su.oz.au/staff/cyril/ Catherine Stevens Department of Psychology/FASS University of Western Sydney, Macarthur PO Box 555, Campbelltown, NSW 2560, Australia email: kj.stevens at uws.edu.au URL: http://psy.uq.edu.au/CogPsych/Noetica/ ABSTRACT: We emphasize the relativity of wholes and parts in whole/part perception, and suggest that consideration must be given to what the terms "whole" and "part" mean, and how they relate in a particular context. A formal analysis of the part/whole relationship by Rescher & Oppenheim, (1955) is shown to have a unifying and clarifying role in many controversial issues including illusions, emergence, local/global precedence, holistic/analytic processing, schema/feature theories and "smart mechanisms". The logic of direct extraction of holistic properties is questioned, and attention drawn to vagueness of reference to wholes and parts which can refer to phenomenal units, physiological structures or theoretical units of perceptual analysis. 
KEYWORDS: analytic versus holistic processing, emergence, feature, gestalt, global versus local precedence, part, whole From rvicente at gibbs.if.usp.br Sat Dec 20 08:28:55 1997 From: rvicente at gibbs.if.usp.br (Renato Vicente) Date: Sat, 20 Dec 1997 13:28:55 -0000 Subject: Paper: Learning a Spin Glass - right link Message-ID: <199712201620.OAA17338@borges.uol.com.br> Dear Connectionists, The link for the paper previously announced was wrong, the right one is: http://www.fma.if.usp.br/~rvicente/NNGUSP13.ps.gz Sorry about that. Learning a spin glass: determining Hamiltonians from metastable states. --------------------------------------------------------------------------- SILVIA KUVA, OSAME KINOUCHI and NESTOR CATICHA Abstract: We study the problem of determining the Hamiltonian of a fully connected Ising Spin Glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms. From hinton at cs.toronto.edu Tue Dec 23 13:51:45 1997 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Tue, 23 Dec 1997 13:51:45 -0500 Subject: postdoc and student positions at new centre Message-ID: <97Dec23.135145edt.1329@neuron.ai.toronto.edu> THE GATSBY COMPUTATIONAL NEUROSCIENCE UNIT UNIVERSITY COLLEGE LONDON We are delighted to announce that the Gatsby Charitable Foundation has decided to create a centre for the study of Computational Neuroscience at University College London.
The centre will include: Geoffrey Hinton (director) Zhaoping Li Peter Dayan Zoubin Ghahramani 5 Postdoctoral Fellows 10 Graduate Students We will study computational theories of perception and action with an emphasis on learning. Our main current interests are: unsupervised learning and activity dependent development statistical models of hierarchical representations sensorimotor adaptation and integration visual and olfactory segmentation and recognition computation by non-linear neural dynamics reinforcement learning By establishing this centre, the Gatsby Charitable Foundation is providing a unique opportunity for a critical mass of theoreticians to interact closely with each other and with University College's world class research groups, including those in psychology, neurophysiology, anatomy, functional neuro-imaging, computer science and statistics. The unit has some laboratory space for theoretically motivated experimental studies in visual and motor psychophysics. For further information see our temporary web site http://www.cs.toronto.edu/~zoubin/GCNU/ http://hera.ucl.ac.uk/~gcnu/ (The UK mirror of site above) We are seeking to recruit 4 postdocs and 6 graduate students to start in the Fall of 1998. We are particularly interested in candidates with strong analytical skills as well as a keen interest in and knowledge of neuroscience and cognitive science. Candidates should submit a CV, a one or two page statement of research interests, and the names of 3 referees. Candidates may also submit a copy of one or two papers. We prefer email submissions (hinton at cs.toronto.edu). If you use email please send papers as URL addresses or separate messages in postscript format. The deadline for receipt of submissions is Tuesday January 13, 1998. 
Professor Geoffrey Hinton Department of Computer Science University of Toronto Room 283 6 King's College Road Toronto, Ontario M5S 3H5 CANADA phone: 416-978-3707 fax: 416-978-1455 From john at dcs.rhbnc.ac.uk Tue Dec 23 08:31:39 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Tue, 23 Dec 97 13:31:39 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199712231331.NAA11900@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-040: ---------------------------------------- Saturation and Stability in the Theory of Computation over the Reals by Olivier Chapuis, Universit\'e Lyon I, France Pascal Koiran, Ecole Normale Sup\'erieure de Lyon, France Abstract: This paper was motivated by the following two questions which arise in the theory of complexity for computation over ordered rings in the now famous computational model introduced by Blum, Shub and Smale: (i) is the answer to the question $\p$ $=$? $\np$ the same in every real-closed field? (ii) if $\p\neq\np$ for $\r$, does there exist a problem of $\r$ which is $\np$ but neither $\p$ nor $\np$-complete? Some unclassical complexity classes arise naturally in the study of these questions. They are still open, but we were able to obtain unconditional results of independent interest. Michaux introduced $/\const$ complexity classes in an effort to attack question~(i). We show that $\a_{\r}/ \const = \a_{\r}$, answering a question of his. Here $\a$ is the class of real problems which are algorithmic in bounded time.
We also prove the stronger result: $\parl_{\r}/ \const =\parl_{\r}$, where $\parl$ stands for parallel polynomial time. In our terminology, we say that $\r$ is $\a$-saturated and $\parl$-saturated. We also prove, at the nonuniform level, the above results for every real-closed field. It is not known whether $\r$ is $\p$-saturated. In the case of the reals with addition and order we obtain $\p$-saturation (and a positive answer to question (ii)). More generally, we show that an ordered $\q$-vector space is $\p$-saturated at the nonuniform level (this almost implies a positive answer to the analogue of question (i)). We also study a stronger notion that we call $\p$-stability. Blum, Cucker, Shub and Smale have (essentially) shown that fields of characteristic 0 are $\p$-stable. We show that the reals with addition and order are $\p$-stable, but real-closed fields are not. Questions (i) and (ii) and the $/\const$ complexity classes have some model theoretic flavor. This leads to the theory of complexity over ``arbitrary'' structures as introduced by Poizat. We give a summary of this theory with a special emphasis on the connections with model theory and we study $/\const$ complexity classes with this point of view. Note also that our proof of the $\parl$-saturation of $\r$ shows that an o-minimal structure which admits quantifier elimination is $\a$-saturated at the nonuniform level. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-041: ---------------------------------------- A survey on the Grzegorczyk Hierarchy and its extension through the BSS Model of Computability by Jean-Sylvestre Gakwaya, Universit\'e de Mons-Hainaut, Belgium Abstract: This paper concerns the Grzegorczyk classes defined from a particular sequence of computable functions. We provide some relevant properties and the main problems about the Grzegorczyk classes through two settings of computability. 
The first one is the usual setting of recursiveness and the second one is the BSS model introduced at the end of the 1980s. This model of computability makes it possible to define the concept of effectiveness over continuous domains such as the real numbers. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-042: ---------------------------------------- On the Effect of Analog Noise in Discrete-Time Analog Computations by Wolfgang Maass, Technische Universitaet Graz, Austria Pekka Orponen, University of Jyv\"askyl\"a, Finland Abstract: We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the VC-dimension of computational models with analog noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-043: ---------------------------------------- Analog Neural Nets with Gaussian or other Common Noise Distributions cannot Recognise Arbitrary Regular Languages by Wolfgang Maass, Technische Universitaet Graz, Austria Eduardo D. Sontag, Rutgers University, USA Abstract: We consider recurrent analog neural nets where the output of each gate is subject to Gaussian noise, or any other common noise distribution that is nonzero on a large set. We show that many regular languages cannot be recognised by networks of this type, and we give a precise characterization of those languages which can be recognised.
This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type. -------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-97-001 can be accessed and printed as follows % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-97-001.ps.Z ftp> bye % zcat nc-tr-97-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-97-002-title.ps.Z nc-tr-97-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-97-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt or directly to the archive: ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports Best wishes John Shawe-Taylor From lbl at nagoya.riken.go.jp Wed Dec 24 22:37:55 1997 From: lbl at nagoya.riken.go.jp (Bao-Liang Lu) Date: Thu, 25 Dec 1997 12:37:55 +0900 Subject: TR available:Inverting Neural Nets Using LP and NLP Message-ID: <9712250337.AA20262@xian> The following Technical Report is available via anonymous FTP. 
FTP-host: ftp.bmc.riken.go.jp FTP-file: pub/publish/Lu/lu-ieee-tnn-inversion.ps.gz ========================================================================== TITLE: Inverting Feedforward Neural Networks Using Linear and Nonlinear Programming BMC Technical Report BMC-TR-97-12 AUTHORS: Bao-Liang Lu (1) Hajime Kita (2) Yoshikazu Nishikawa (3) ORGANISATIONS: (1) The Institute of Physical and Chemical Research (RIKEN) (2) Tokyo Institute of Technology (3) Osaka Institute of Technology ABSTRACT: The problem of inverting trained feedforward neural networks is to find the inputs which yield a given output. In general, this problem is ill-posed because the mapping from the output space to the input space is a one-to-many mapping. In this paper, we present a method for dealing with this inverse problem by using mathematical programming techniques. The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem. An important advantage of the method is that multilayer perceptrons (MLPs) and radial basis function (RBF) neural networks can be inverted by solving the corresponding separable programming problems, which can be solved by a modified simplex method, a well-developed and efficient method for solving linear programming problems. As a result, large-scale MLPs and RBF networks can be inverted efficiently. We present several examples to demonstrate the proposed inversion algorithms and the applications of the network inversions to examining and improving the generalization performance of trained networks. The results show the effectiveness of the proposed method. (double space, 50 pages) Any comments are appreciated. 
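The report itself formulates the inversion as a separable programming problem solved by a modified simplex method; purely as a rough illustration of the underlying idea, here is a toy sketch that inverts a small fixed-weight network by plain gradient descent on the squared output error. All weights and parameters below are invented for the sketch and are not taken from the report:

```python
import math

# Toy fixed-weight 2-2-1 MLP; these weights are hypothetical,
# chosen only to give the inversion problem something to work on.
W1 = [[1.0, -1.0], [0.5, 1.5]]
b1 = [0.0, -0.5]
W2 = [1.0, -2.0]
b2 = 0.2

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # One hidden layer of sigmoids feeding a sigmoid output unit.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

def invert(y_target, x0=(0.0, 0.0), lr=1.0, steps=5000, eps=1e-5):
    """Search for an input whose output is close to y_target by
    minimizing the squared output error over the input space
    (a simple nonlinear-programming formulation of inversion)."""
    x = list(x0)
    for _ in range(steps):
        f0 = (forward(x) - y_target) ** 2
        grad = []
        for i in range(len(x)):
            # finite-difference estimate of the gradient w.r.t. x[i]
            x[i] += eps
            grad.append(((forward(x) - y_target) ** 2 - f0) / eps)
            x[i] -= eps
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

x_inv = invert(0.4)
print("target 0.4, recovered output:", round(forward(x_inv), 3))
```

Because the mapping is one-to-many, different starting points `x0` can land on different valid inverses, which is exactly the ill-posedness the abstract mentions; the report's separable-programming approach is what makes this tractable for large networks.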
Bao-Liang Lu ====================================== Bio-Mimetic Control Research Center The Institute of Physical and Chemical Research (RIKEN) Anagahora, Shimoshidami, Moriyama-ku Nagoya 463-0003, Japan Tel: +81-52-736-5870 Fax: +81-52-736-5871 Email: lbl at bmc.riken.go.jp From jkim at FIZ.HUJI.AC.IL Wed Dec 24 05:16:20 1997 From: jkim at FIZ.HUJI.AC.IL (Jai Won Kim) Date: Wed, 24 Dec 1997 12:16:20 +0200 Subject: two papers: On-Line Gibbs Learning Message-ID: <199712241016.AA02233@keter.fiz.huji.ac.il> ftp-host: keter.fiz.huji.ac.il ftp-filenames: pub/OLGA1.tar and pub/OLGA2.tar ______________________________________________________________________________ On-Line Gibbs Learning I: General Theory H. Sompolinsky and J. W. Kim Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel e-mails: jkim at fiz.huji.ac.il ; haim at fiz.huji.ac.il (Submitted to Physical Review E, Dec 97) ABSTRACT We study a new model of on-line learning, the On-Line Gibbs Algorithm (OLGA), which is particularly suitable for supervised learning of realizable and unrealizable discrete-valued functions. The learning algorithm is based on an on-line energy function, $E$, that balances the need to minimize the instantaneous error against the need to minimize the change in the weights. The relative importance of these terms in $E$ is determined by a parameter $\lambda$, the inverse of which plays a role similar to that of the learning rate parameters in other on-line learning algorithms. In the stochastic version of the algorithm, following each presentation of a labeled example the new weights are chosen from a Gibbs distribution with the on-line energy $E$ and a temperature parameter $T$. In the zero temperature version of OLGA, at each step, the new weights are those that minimize the on-line energy $E$. The generalization performance of OLGA is studied analytically in the limit of small learning rate, i.e. $\lambda \rightarrow \infty$. 
It is shown that at finite temperature OLGA converges to an equilibrium Gibbs distribution of weight vectors with an energy function which equals the generalization error function. In its deterministic version OLGA converges to a local minimum of the generalization error. The rate of convergence is studied for the case in which the parameter $\lambda$ increases as a power-law in time. It is shown that in a generic unrealizable dichotomy, the asymptotic rate of decrease of the generalization error is proportional to the inverse square root of the number of presented examples. This rate is similar to that of batch learning. The special cases of learning realizable dichotomies or dichotomies with output noise are also treated. The tar postscript file of the paper is placed on "anonymous ftp" at ftp-host: keter.fiz.huji.ac.il ftp-filename: pub/OLGA1.tar _______________________________________________________________________________ On-Line Gibbs Learning II: Application to Perceptron and Multi-layer Networks J. W. Kim and H. Sompolinsky Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel e-mails: jkim at fiz.huji.ac.il ; haim at fiz.huji.ac.il (Submitted to Physical Review E, Dec 97) ABSTRACT In a previous paper (On-line Gibbs Learning I: General Theory) we have presented the On-line Gibbs Algorithm (OLGA) and analytically studied its asymptotic convergence. In this paper we apply OLGA to online supervised learning in several network architectures: a single-layer perceptron, a two-layer committee machine, and a Winner-Takes-All (WTA) classifier. The behavior of OLGA for a single-layer perceptron is studied both analytically and numerically for a variety of rules: a realizable perceptron rule, a perceptron rule corrupted by output and input noise, and a rule generated by a committee machine. 
The two-layer committee machine is studied numerically for the cases of learning a realizable rule as well as a rule that is corrupted by output noise. The WTA network is studied numerically for the case of a realizable rule. The asymptotic results reported in this paper agree with the predictions of the general theory of OLGA presented in article I. In all the studied cases, OLGA converges to a set of weights that minimizes the generalization error. When the learning rate is chosen as a power law with an optimal power, OLGA converges with a power law that is the same as that of batch learning. The tar postscript file of the paper is placed on "anonymous ftp" at ftp-host: keter.fiz.huji.ac.il ftp-filename: pub/OLGA2.tar From biehl at physik.uni-wuerzburg.de Tue Dec 30 08:27:19 1997 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Tue, 30 Dec 1997 14:27:19 +0100 (MET) Subject: preprint available Message-ID: <199712301327.OAA17838@wptx08.physik.uni-wuerzburg.de> Subject: paper available: Adaptive Systems on Different Time Scales FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/WUE-ITP-97-053.ps.gz The following preprint is now available via anonymous ftp: (See below for the retrieval procedure) --------------------------------------------------------------------- Adaptive Systems on Different Time Scales by D. Endres and Peter Riegler Abstract: The special character of certain degrees of freedom in two-layered neural networks is investigated for on-line learning of realizable rules. Our analysis shows that the dynamics of these degrees of freedom can be put on a faster time scale than the remaining ones, thereby speeding up the overall adaptation process. This is shown for two groups of degrees of freedom: second-layer weights and bias weights. For the former case our analysis provides a theoretical explanation of phenomenological findings. 
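The preprint's analysis is of course the authoritative treatment; purely as a loose numerical illustration of the idea that second-layer weights can be run on a faster time scale, here is a toy two-layer student learning a realizable rule online, with a larger learning rate on the second layer. All dimensions, rates, and weights below are invented for the sketch, not taken from the preprint:

```python
import math, random

random.seed(0)
N, K = 8, 2  # input dimension and number of hidden units (hypothetical)

# Teacher: a realizable rule y = sum_k v*_k tanh(w*_k . x)
w_star = [[random.gauss(0, 1) for _ in range(N)] for _ in range(K)]
v_star = [1.0, -1.0]

def net(w, v, x):
    return sum(vk * math.tanh(sum(wi * xi for wi, xi in zip(wk, x)))
               for wk, vk in zip(w, v))

# Student, trained online by stochastic gradient descent; the
# second-layer rate eta_v is deliberately much larger than eta_w,
# putting those degrees of freedom on a faster time scale.
w = [[random.gauss(0, 0.1) for _ in range(N)] for _ in range(K)]
v = [0.1, 0.1]
eta_w, eta_v = 0.05 / N, 0.5 / N

def err():
    # Empirical generalization error on fresh random examples.
    total = 0.0
    for _ in range(200):
        x = [random.gauss(0, 1) for _ in range(N)]
        total += 0.5 * (net(w, v, x) - net(w_star, v_star, x)) ** 2
    return total / 200

e0 = err()
for _ in range(20000):
    x = [random.gauss(0, 1) for _ in range(N)]
    hs = [math.tanh(sum(wi * xi for wi, xi in zip(wk, x))) for wk in w]
    delta = sum(vk * hk for vk, hk in zip(v, hs)) - net(w_star, v_star, x)
    for k in range(K):
        dh = 1 - hs[k] ** 2  # tanh derivative at hidden unit k
        for i in range(N):
            w[k][i] -= eta_w * delta * v[k] * dh * x[i]
        v[k] -= eta_v * delta * hs[k]
print("error before/after:", round(e0, 3), round(err(), 3))
```

With the two rates set equal, the same sketch typically adapts more slowly; the preprint makes this separation of time scales, and the resulting speed-up, mathematically precise.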
--------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1997 ftp> get WUE-ITP-97-053.ps.gz ftp> quit unix> gunzip WUE-ITP-97-053.ps.gz e.g. unix> lp WUE-ITP-97-053.ps _____________________________________________________________________ _/_/_/_/_/_/_/_/_/_/_/_/ _/ _/ Peter Riegler _/ _/_/_/ _/ Institut fuer Theoretische Physik _/ _/ _/ _/ Universitaet Wuerzburg _/ _/_/_/ _/ Am Hubland _/ _/ _/ D-97074 Wuerzburg, Germany _/_/_/ _/_/_/ phone: (++49) (0)931 888-4908 fax: (++49) (0)931 888-5141 email: pr at physik.uni-wuerzburg.de www: http://theorie.physik.uni-wuerzburg.de/~pr ________________________________________________________________ From tgd at CS.ORST.EDU Tue Dec 30 20:09:13 1997 From: tgd at CS.ORST.EDU (Tom Dietterich) Date: Wed, 31 Dec 1997 01:09:13 GMT Subject: Learning with Probabilistic Representations (Machine Learning Special Issue) Message-ID: <199712310109.BAA00980@edison.CS.ORST.EDU> Machine Learning has published the following special issue on LEARNING WITH PROBABILISTIC REPRESENTATIONS Guest Editors: Pat Langley, Gregory M. Provan, and Padhraic Smyth Learning with Probabilistic Representations Pat Langley, Gregory Provan, and Padhraic Smyth On the Optimality of the Simple Bayesian Classifier under Zero-One Loss Pedro Domingos and Michael Pazzani Bayesian Network Classifiers Nir Friedman, Dan Geiger, and Moises Goldszmidt The Sample Complexity of Learning Fixed-Structure Bayesian Networks Sanjoy Dasgupta Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables David Maxwell Chickering and David Heckerman Adaptive Probabilistic Networks with Hidden Variables John Binder, Daphne Koller, Stuart Russell, and Keiji Kanazawa Factorial Hidden Markov Models Zoubin Ghahramani and Michael I. 
Jordan Predicting Protein Secondary Structure using Stochastic Tree Grammars Naoki Abe and Hiroshi Mamitsuka For more information, see http://www.cs.orst.edu/~tgd/mlj --Tom From kchen at cis.ohio-state.edu Wed Dec 31 09:58:15 1997 From: kchen at cis.ohio-state.edu (Ke CHEN) Date: Wed, 31 Dec 1997 09:58:15 -0500 (EST) Subject: TRs available. Message-ID: The following two papers are available online now. _________________________________________________________________________ Combining Linear Discriminant Functions with Neural Networks for Supervised Learning Ke Chen{1,2}, Xiang Yu {2} and Huisheng Chi {2} {1} Dept. of CIS and Center for Cognitive Science, Ohio State U. {2} National Lab of Machine Perception, Peking U., China appearing in Neural Computing & Applications 6(1): 19-41, 1997. ABSTRACT A novel supervised learning method is presented by combining linear discriminant functions with neural networks. The proposed method results in a tree-structured hybrid architecture. Due to constructive learning, the binary tree hierarchical architecture is automatically generated by a controlled growing process for a specific supervised learning task. Unlike the classic decision tree, the linear discriminant functions are employed only at the intermediate levels of the tree, to heuristically partition a large and complicated task into several smaller and simpler subtasks. These subtasks are dealt with by component neural networks at the leaves of the tree. For constructive learning, {\it growing} and {\it credit-assignment} algorithms are developed to serve the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g. the multi-layered perceptron) to solving a large-scale problem. We have already applied the proposed method to a universal approximation problem and several benchmark classification problems in order to evaluate its performance. 
Simulation results have shown that the proposed method yields better results and faster training in comparison with the multi-layered perceptron. URL: http://www.cis.ohio-state.edu/~kchen/nca.ps ftp://www.cis.ohio-state.edu/~kchen/nca.ps ___________________________________________________________________________ Methods of Combining Multiple Classifiers with Different Features and Their Applications to Text-Independent Speaker Identification Ke Chen{1,2}, Lan Wang {2} and Huisheng Chi {2} {1} Dept. of CIS and Center for Cognitive Science, Ohio State U. {2} National Lab of Machine Perception, Peking U., China appearing in International Journal of Pattern Recognition and Artificial Intelligence 11(3): 417-445, 1997. ABSTRACT In practical applications of pattern recognition, there are often different features extracted from the raw data that is to be recognized. Combining multiple classifiers with different features is viewed as a general problem in various application areas of pattern recognition. In this paper, a systematic investigation has been made and possible solutions are classified into three frameworks, i.e. linear opinion pools, winner-take-all and evidential reasoning. For combining multiple classifiers with different features, a novel method is presented in the framework of linear opinion pools, and a modified training algorithm for an associative switch is also proposed in the framework of winner-take-all. In the framework of evidential reasoning, several typical methods are briefly reviewed. All aforementioned methods have already been applied to text-independent speaker identification. The simulations show that the results yielded by the methods described in this paper are better not only than those of the individual classifiers but also than those obtained by combining multiple classifiers with the same feature. 
It indicates that combining multiple classifiers with different features is an effective way to attack the problem of text-independent speaker identification. URL: http://www.cis.ohio-state.edu/~kchen/ijprai.ps ftp://www.cis.ohio-state.edu/~kchen/ijprai.ps __________________________________________________________________________ ---------------------------------------------------- Dr. Ke CHEN Department of Computer and Information Science The Ohio State University 583 Dreese Laboratories 2015 Neil Avenue Columbus, Ohio 43210-1277, U.S.A. E-Mail: kchen at cis.ohio-state.edu WWW: http://www.cis.ohio-state.edu/~kchen ------------------------------------------------------ ***************************************************************** From bocz at btk.jpte.hu Tue Dec 2 19:02:03 1997 From: bocz at btk.jpte.hu (Bocz Andras) Date: Wed, 3 Dec 1997 00:02:03 +0000 Subject: conference call Message-ID: FIRST CALL FOR PAPERS We are happy to announce a conference and workshop on Multidisciplinary Colloquium on Rules and Rule-Following: Philosophy, Linguistics and Psychology between April 30-May 1-2, 1998 at Janus Pannonius University Pécs, Hungary Keynote speakers (who have already accepted the invitation): philosophy: György Kampis Hungarian Academy of Sciences, Budapest linguistics: Pierre-Yves Raccah Idl-CNRS, Paris psychology: Csaba Pléh Dept. of General Psychology Loránd Eötvös University, Budapest Organizing Committee: László Tarnay (JPTE, Dept. of Philosophy) László I. Komlósi (ELTE, Dept. of Psychology) András Bocz (JPTE, Dept. of English Studies) e-mail: tarnay at btk.jpte.hu; komlosi at btk.jpte.hu; bocz at btk.jpte.hu Advisory Board: Gábor Forrai (Budapest) György Kampis (Budapest) Mike Harnish (Tucson) András Kertész (Debrecen) Kuno Lorenz (Saarbrücken) Pierre-Yves Raccah (Paris) János S. 
Petőfi (Macerata) Aims and scope: The main aim of the conference is to bring together scholars from the fields of cognitive linguistics, philosophy and psychology to investigate the concept of rule and to address various aspects of rule-following. Ever since Wittgenstein formulated in the Philosophical Investigations his famous § 201 concerning a kind of rule-following which is not an interpretation, the concept of rule has become a key but elusive idea in almost every discipline and approach. And not only in the human sciences. No wonder, since without this idea the whole edifice of human (and possibly all other kinds of) rationality would surely collapse. With the rise of cognitive science, and especially the appearance of connectionist models and networks, however, the classical concept of rule is once again seriously contested. To put it very generally, there is an ongoing debate between the classical conception, in which rules appear as a set of formalizable initial conditions or constraints on external operations linking different successive states of a given system (algorithms), and a dynamic conception, in which there is nothing that could be correlated with a prior idea of the internal well-formedness of the system's states. The debate centers on the representability of rules: either they are conceived of as meta-representations, or they are mere façons de parler concerning the development of complex systems. Idealizable on the one hand, token-oriented on the other; something to be implemented on the one hand, self-controlling, backpropagational processing on the other. There is however a common idea that almost all kinds of rule-conceptions address: the problem of learning. This idea reverberates from Wittgensteinian pragmatics to strategic non-verbal and rule-governed speech behavior, from perceiving similarities to mental processing. Here are some haunting questions: - How do we acquire knowledge if there are no regularities in the world around us? 
- But how can we perceive those regularities? - And how do we reason on the basis of that knowledge if there are no observable constraints on inferring? - But if there are, where do they come from and how are they actually implemented mentally? - And finally: how do we come to act rationally, that is, in accordance with what we have perceived, processed and inferred? We are interested in all ways of defining rules and in all aspects of rule-following, from the definition of law, rule, regularity, similarity and analogy to logical consequence, argumentational and other inferences, statistical and linguistic rules, practical and strategic reasoning, pragmatic and praxeological activities. We expect contributions from the following research fields: game theory, action theory, argumentation theory, cognitive science, linguistics, philosophy of language, epistemology, pragmatics, psychology and semiotics. We would be happy to include some contributions from natural sciences such as neurobiology, physiology or the brain sciences. The conference is organized in three major sections: philosophy, psychology and linguistics, with three keynote lectures. Then contributions of 30 minutes (20 for the paper and 10 for discussion) follow. We also plan to organize a workshop at the end of each section. Abstracts: Abstracts should be one page (maximum 23 lines), specifying the area of contribution and the particular aspect of rule-following to be addressed. Abstracts should be sent by e-mail to tarnay at btk.jpte.hu or bocz at btk.jpte.hu. Hard copies of abstracts may be sent to: Laszlo Tarnay Department of Philosophy Janus Pannonius University H-7624 Pecs, Hungary. Important dates: Deadline for submission: Jan. 15, 1998 Notification of acceptance: Feb. 28, 1998 Conference: April 30-May 1-2, 1998 ************************************* Bocz András Department of English Janus Pannonius University Ifjúság u. 6. 
H-7624 Pécs, Hungary Tel/Fax: (36) (72) 314714 From terry at salk.edu Tue Dec 2 23:41:30 1997 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 2 Dec 1997 20:41:30 -0800 (PST) Subject: Telluride Workshop 1998 Message-ID: <199712030441.UAA08703@helmholtz.salk.edu> "NEUROMORPHIC ENGINEERING WORKSHOP" JUNE 29 - JULY 19, 1998 TELLURIDE, COLORADO Deadline for application is February 1, 1998. Avis COHEN (University of Maryland) Rodney DOUGLAS (University of Zurich and ETH, Zurich/Switzerland) Christof KOCH (California Institute of Technology) Terrence SEJNOWSKI (Salk Institute and UCSD) Shihab SHAMMA (University of Maryland) We invite applications for a three-week summer workshop that will be held in Telluride, Colorado from Monday, June 29 to Sunday, July 19, 1998. The 1997 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at the California Institute of Technology, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html We strongly encourage interested parties to browse through these reports and photo albums. GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants. 
Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of real biological nervous systems as whole systems. FORMAT: The three-week summer workshop will include background lectures on systems neuroscience (in particular learning, oculo-motor and other motor systems, and attention), practical tutorials on analog VLSI design, small mobile robots (Koalas), hands-on projects, and special interest groups. Participants are required to take part in and possibly complete at least one of the projects proposed (soon to be defined). They are furthermore encouraged to become involved in as many of the other activities proposed as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons and after dinner. The analog VLSI practical tutorials will cover all aspects of analog VLSI design, simulation, layout, and testing over the course of the three weeks. 
The first week covers the basics of transistors, simple circuit design and simulation. This material is intended for participants who have no experience with analog VLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners to building the peripheral boards necessary for interfacing analog VLSI retinas to video output monitors. Retina chips will be provided. The third week will feature sessions on floating gates, including lectures on the physics of tunneling and injection, and on inter-chip communication systems. We will also feature a tutorial on the use of small mobile robots, focusing on Koalas, as an ideal platform for vision, auditory and sensory-motor circuits. Projects that are carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generators, robotics, multichip communication, analog VLSI and learning. The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three-degree-of-freedom binocular camera system that is fully programmable. The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots. The "robotics" group will use rovers, robot arms and working digital vision boards to investigate issues of sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics. 
The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. LOCATION AND ARRANGEMENTS: The workshop will take place at the Telluride Elementary School located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles). Continental and United Airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. No cars are required. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, MACs and PCs running LINUX and Windows95. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three week workshop. 
FINANCIAL ARRANGEMENTS: We have several funding requests pending to pay for most of the costs associated with this workshop. Unlike in previous years, after notifications of acceptance have been mailed out around March 15, 1998, participants are expected to pay a $250 workshop fee. In cases of real hardship, this can be waived. Shared condominiums will be provided for all academic participants at no cost to them. We expect participants from national laboratories and industry to pay for these modestly priced condominiums. We expect to have funds to reimburse a small number of participants for travel (up to $500 for domestic travel and up to $800 for overseas travel). Please specify on the application whether such financial help is needed. HOW TO APPLY: The deadline for receipt of applications is February 1, 1998. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Applications should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One-page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around March 15, 1998. From jfeldman at ICSI.Berkeley.EDU Wed Dec 3 12:25:23 1997 From: jfeldman at ICSI.Berkeley.EDU (Jerry Feldman) Date: Wed, 03 Dec 1997 09:25:23 -0800 Subject: backprop w/o math? 
Message-ID: <34859603.7B03@icsi.berkeley.edu> I would appreciate advice on how best to teach backprop to undergraduates who may have little or no math. We are planning to use Plunkett and Elman's Tlearn for exercises. This is part of a new course with George Lakoff, modestly called "The Neural Basis of Thought and Language". We would also welcome any more general comments on the course: http://www.icsi.berkeley.edu/~mbrodsky/cogsci110/ -- Jerry Feldman From shavlik at cs.wisc.edu Fri Dec 5 15:12:30 1997 From: shavlik at cs.wisc.edu (Jude Shavlik) Date: Fri, 5 Dec 1997 14:12:30 -0600 (CST) Subject: CFP: 1998 Machine Learning Conference Message-ID: <199712052012.OAA17873@jersey.cs.wisc.edu> Call for Papers THE FIFTEENTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING July 24-26, 1998 Madison, Wisconsin, USA The Fifteenth International Conference on Machine Learning (ICML-98) will be held at the University of Wisconsin, Madison from July 24 to July 26, 1998. ICML-98 will be collocated with the Eleventh Annual Conference on Computational Learning Theory (COLT-98) and the Fourteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-98). Seven additional conferences, including the Fifteenth National Conference on Artificial Intelligence (AAAI-98), will also be held in Madison (see http://www.cs.wisc.edu/icml98/ for a complete list). Submissions are invited that describe empirical, theoretical, and cognitive-modeling research in all areas of machine learning. Submissions that present algorithms for novel learning tasks, interdisciplinary research involving machine learning, or innovative applications of machine learning techniques to challenging, real-world problems are especially encouraged. The deadline for submissions is MARCH 2, 1998. (An electronic version of the title page is due February 27, 1998.) See http://www.cs.wisc.edu/icml98/callForPapers.html for submission details. 
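One way to approach the "backprop without math" teaching question above is a worked example small enough to trace by hand, where the chain rule becomes "each unit's blame is its error times the slope of its squashing function, and each weight moves by blame times input". The sketch below is a hedged illustration only, in plain Python rather than the Tlearn package mentioned in the message; the 2-2-1 network size, random seed, learning rate, and epoch count are all illustrative assumptions, not anything prescribed by the course:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-input, 2-hidden, 1-output network; weights start small and random.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # each row: [w1, w2, bias]
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # [v1, v2, bias]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # the XOR problem
lr = 0.5  # learning rate (illustrative)

for epoch in range(20000):
    for (x1, x2), target in data:
        # Forward pass: weighted sums squashed by the sigmoid.
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
        # Backward pass: each unit's "blame" = its error times the sigmoid's slope.
        d_y = (target - y) * y * (1 - y)
        d_h = [d_y * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # Weight update: blame times input, scaled by the learning rate.
        w_o = [w_o[0] + lr * d_y * h[0], w_o[1] + lr * d_y * h[1], w_o[2] + lr * d_y]
        for i in range(2):
            w_h[i] = [w_h[i][0] + lr * d_h[i] * x1,
                      w_h[i][1] + lr * d_h[i] * x2,
                      w_h[i][2] + lr * d_h[i]]

for (x1, x2), target in data:
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    print(x1, x2, "->", round(y, 2))  # ideally close to the XOR target
```

Students can compute one forward and backward pass on paper with a calculator, which conveys the algorithm with nothing beyond multiplication and the sigmoid table.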
There are also three joint ICML/AAAI workshops being held July 27, 1998: Developing ML Applications: Problem Definition, Task Decomposition, and Technique Selection Learning for Text Categorization Predicting the Future: AI Approaches to Time-Series Analysis The submission deadline for these WORKSHOPS is MARCH 11, 1998. Additional details about the workshops are available via http://www.cs.wisc.edu/icml98/ [My apologies if you receive multiple copies of this announcement.] From lwh at montefiore.ulg.ac.be Fri Dec 5 02:43:56 1997 From: lwh at montefiore.ulg.ac.be (WEHENKEL Louis) Date: Fri, 5 Dec 1997 08:43:56 +0100 Subject: Book announcement "Automatic learning techniques in power systems" Message-ID: Dear colleagues, This is to announce the availability of a new book on "Automatic learning techniques in power systems". In the coming weeks, I will add some information on my own web-site http://www.montefiore.ulg.ac.be/~lwh/. Cordially, -- ******************************************************************************* Louis WEHENKEL Research Associate National Fund for Scientific Research Department of Electrical Engineering University of Liege Tel. + 32 4 366.26.84 Institut Montefiore - SART TILMAN B28, Fax. + 32 4 366.29.84 B-4000 LIEGE - BELGIUM Email. lwh at montefiore.ulg.ac.be !!! New telephone numbers from september 15th 1996 onwards ******************************************************************************* >> The web site for this book is http://www.wkap.nl/book.htm/0-7923-8068-1 >> >> >> Automatic Learning Techniques in Power Systems >> >> by >> Louis A. Wehenkel >> University of Lige, Institut Montefiore, Belgium >> >> THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND >> COMPUTER SCIENCE >> Volume 429 >> >> Automatic learning is a complex, multidisciplinary field of research >> and development, involving theoretical and applied methods from >> statistics, computer science, artificial intelligence, biology and >> psychology. 
Its applications to engineering problems, such as those >> encountered in electrical power systems, are therefore challenging, >> while extremely promising. More and more data have become available, >> collected from the field by systematic archiving, or generated through >> computer-based simulation. To handle this explosion of data, automatic >> learning can be used to provide systematic approaches, without which >> the increasing data amounts and computer power would be of little use. >> >> Automatic Learning Techniques in Power Systems is dedicated to the >> practical application of automatic learning to power systems. Power >> systems to which automatic learning can be applied are screened and >> the complementary aspects of automatic learning, with respect to >> analytical methods and numerical simulation, are investigated. >> >> This book presents a representative subset of automatic learning >> methods - basic and more sophisticated ones - available from >> statistics (both classical and modern), and from artificial >> intelligence (both hard and soft computing). The text also discusses >> appropriate methodologies for combining these methods to make the best >> use of available data in the context of real-life problems. >> >> Automatic Learning Techniques in Power Systems is a useful reference >> source for professionals and researchers developing automatic learning >> systems in the electrical power field. >> >> Contents >> >> List of Figures. >> List of Tables. >> Preface. >> 1. Introduction. >> Part I: Automatic Learning Methods. >> 2. Automatic Learning is Searching a Model Space. >> 3. Statistical Methods. >> 4. Artificial Neural Networks. >> 5. Machine Learning. >> 6. Auxiliary Tools and Hybrid Techniques. >> Part II: Application of Automatic Learning to Security Assessment. >> 7. Framework for Applying Automatic Learning to DSA. >> 8. Overview of Security Problems. >> 9. Security Information Data Bases. >> 10. A Sample of Real-Life Applications. 
>> 11. Added Value of Automatic Learning. >> 12. Future Orientations. >> Part III: Automatic Learning Applications in Power Systems. >> 13. Overview of Applications by Type. >> References. >> Index. >> Glossary. >> >> 1998, 320pp. ISBN 0-7923-8068-1 PRICE : US$ 122.00 ------- For information on how to subscribe / change address / delete your name from the Power Globe, see http://www.powerquality.com/powerglobe/ From rybaki at eplrx7.es.dupont.com Thu Dec 4 09:39:18 1997 From: rybaki at eplrx7.es.dupont.com (Ilya Rybak) Date: Thu, 4 Dec 1997 09:39:18 -0500 Subject: No subject Message-ID: <199712041439.JAA24126@pavlov> Dear colleagues, I have received a number of messages that the following WEB pages of mine are not available from many servers because of some problem (?) in VOICENET: BMV: Behavioral model of active visual perception and recognition http://www.voicenet.com/~rybak/vnc.html Modeling neural mechanisms for orientation selectivity in the visual cortex: http://www.voicenet.com/~rybak/iod.html Modeling interacting populations of biological neurons: http://www.voicenet.com/~rybak/pop.html Modeling neural mechanisms for the respiratory rhythmogenesis: http://www.voicenet.com/~rybak/resp.html Modeling neural mechanisms for the baroreceptor vagal reflex: http://www.voicenet.com/~rybak/baro.html I thank all the people who informed me about the VOICENET restrictions. I also apologize for the inconvenience and would like to inform those of you who could not reach the above pages from your servers that YOU CAN VISIT THESE PAGES USING THE NUMERIC IP ADDRESS. In order to do this, please replace http://www.voicenet.com/~rybak/.... with http://207.103.26.247/~rybak/... The pages have been recently updated and may be of interest to people working in the fields of computational neuroscience and vision. Sincerely, Ilya Rybak ======================================= Dr. Ilya Rybak Neural Computation Program E.I. du Pont de Nemours & Co. 
Central Research Department Experimental Station, E-328/B31 Wilmington, DE 19880-0328 USA Tel.: (302) 695-3511 Fax: (302) 695-8901 E-mail: rybaki at eplrx7.es.dupont.com URL: http://www.voicenet.com/~rybak/ http://207.103.26.247/~rybak/ ======================================== From sml%essex.ac.uk at seralph21.essex.ac.uk Thu Dec 4 11:30:39 1997 From: sml%essex.ac.uk at seralph21.essex.ac.uk (Simon Lucas) Date: Thu, 04 Dec 1997 16:30:39 +0000 Subject: SOFM Demo Applet + source code Message-ID: <3486DAAE.21C9@essex.ac.uk> Dear All, I've written a colourful self-organising map demo - follow the link on my homepage listed below. This runs as a Java applet, and allows one to experiment with the effects of changing the neighbourhood radius. This may be of interest to those who have to teach the stuff - I've also included a complete listing of the source code. Best Regards, Simon Lucas ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk http://esewww.essex.ac.uk/~sml secretary: Mrs Wendy Ryder (+44) 1206 872437 ------------------------------------------------- From sylee at eekaist.kaist.ac.kr Tue Dec 9 04:06:49 1997 From: sylee at eekaist.kaist.ac.kr (sylee) Date: Tue, 09 Dec 1997 18:06:49 +0900 Subject: Special Invited Session on Neural Networks, SCI'98 Message-ID: <348D0A19.7CDE@ee.kaist.ac.kr> I have been asked to organize a Special Invited Session on Neural Networks at the World Multiconference on Systemics, Cybernetics, and Informatics (SCI'98) to be held on July 12-16, 1998, in Orlando, Florida, US. The SCI is a truly multi-disciplinary conference, where researchers come from many different disciplines. I believe this multi-disciplinary nature is quite unique and is helpful to neural networks researchers. 
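[Returning to the SOFM demo announced above: the behaviour the applet animates comes from one simple update rule, in which each input pulls the best-matching unit and its grid neighbours toward it, with the neighbourhood radius shrinking over time. The following is a minimal hedged sketch in Python/NumPy rather than the applet's Java, and the grid size, decay schedules, and Gaussian neighbourhood function are illustrative assumptions, not a transcription of the applet's source:]

```python
import numpy as np

rng = np.random.default_rng(0)

# A 10x10 grid of units, each with a 3-d (RGB colour) weight vector.
grid_h, grid_w, dim = 10, 10, 3
weights = rng.random((grid_h, grid_w, dim))

def train(weights, samples, epochs=200, lr0=0.5, radius0=5.0):
    # (row, col) position of every unit on the grid, shape (grid_h, grid_w, 2).
    coords = np.stack(np.mgrid[0:grid_h, 0:grid_w], axis=-1)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)                     # learning rate decays over time
        radius = max(radius0 * (1 - frac), 0.5)   # neighbourhood radius shrinks over time
        for x in samples:
            # Best-matching unit: the unit whose weights are closest to the input.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU, measured on the grid.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * radius ** 2))[..., None]
            # Pull every unit toward the input, weighted by neighbourhood closeness.
            weights += lr * h * (x - weights)
    return weights

samples = rng.random((50, dim))  # random colours as training inputs
weights = train(weights, samples)
```

A large initial radius drags whole regions of the grid together and produces the smooth colour patches the demo shows; setting `radius0` near zero makes units compete individually and the map never organises, which is exactly the effect the applet lets one explore.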
We all know that neural networks are an interdisciplinary research area, able to contribute to many different applications. You are cordially invited to present your valuable research at SCI'98 and enjoy interesting discussions with researchers from different disciplines. If interested, please inform me of your intention by e-mail. I need to fix the author(s) and paper titles by January 10th, 1998. A brief introductory remark on SCI'98 is attached, and more may be found at http://www.iiis.org. Best regards, Prof. Soo-Young Lee Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-8570 E-mail: sylee at ee.kaist.ac.kr ------------------------------------------------------------------ SCI'98 The purpose of the Conference is to bring together university professors, corporate leaders, academic and professional leaders, consultants, scientists and engineers, theoreticians and practitioners from all over the world to discuss the themes of the conference and to participate with original ideas or innovations, knowledge or experience, theories or methodologies, in the areas of Systemics, Cybernetics and Informatics (SCI). Systemics, Cybernetics and Informatics are increasingly related to each other and to almost every scientific discipline and human activity. Their common transdisciplinarity characterizes them and links them together, generating strong relations among them and with other disciplines. They interpenetrate each other, integrating a whole that is permeating human thinking and practice. This phenomenon induced the Organization Committee to structure SCI'98 as a multiconference, where participants may focus on one area or discipline while keeping open the possibility of attending conferences from other areas or disciplines. 
This systemic approach stimulates cross-fertilization among different disciplines, inspiring scholars, generating analogies and provoking innovations, which, after all, is one of the very basic principles of the systems movement and a fundamental aim in cybernetics. From sylee at eekaist.kaist.ac.kr Tue Dec 9 02:12:32 1997 From: sylee at eekaist.kaist.ac.kr (sylee) Date: Tue, 09 Dec 1997 16:12:32 +0900 Subject: KAIST Brain Science Research Center Message-ID: <348CEF60.24CB@ee.kaist.ac.kr> Inauguration of Brain Science Research Center at Korea Advanced Institute of Science and Technology On December 6th, 1997, the Brain Science Research Center (BSRC) held its inauguration ceremony at Korea Advanced Institute of Science and Technology (KAIST). The Korean Ministry of Science and Technology announced a 10-year national program on "brain research" on September 30, 1997. The research program consists of two main streams, i.e., developing brain-like computing systems and overcoming brain disease. Starting from 1998, the total budget will reach about 1 billion US dollars. The KAIST BSRC will be the main research organization for brain-like computing systems, and 69 research members from 18 Korean education and research organizations have joined the BSRC. Its seven research areas are o neurobiology o cognitive science o neural network models o hardware implementation of neural networks o artificial vision and auditory systems o inference technology o intelligent control and communication systems More information may be obtained from Prof. 
Soo-Young Lee, Director Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-8570 E-mail: sylee at ee.kaist.ac.kr From moshe.sipper at di.epfl.ch Tue Dec 9 04:37:45 1997 From: moshe.sipper at di.epfl.ch (Moshe Sipper) Date: Tue, 09 Dec 1997 10:37:45 +0100 Subject: ICES98: Second Call for Papers Message-ID: <199712090937.KAA01605@lslsun7.epfl.ch> Second Call for Papers +-------------------------------------------------------+ | Second International Conference on Evolvable Systems: | | From Biology to Hardware (ICES98) | +-------------------------------------------------------+ The 1998 International EPFL-Latsis Foundation Conference Swiss Federal Institute of Technology, Lausanne, Switzerland September 23-26, 1998 +-------------------------------+ | http://lslwww.epfl.ch/ices98/ | +-------------------------------+ In Cooperation With: IEEE Neural Networks Council EvoNet - The European Network of Excellence in Evolutionary Computing Societe Suisse des Informaticiens, Section Romande (SISR) Societe des Informaticiens (SI) Centre Suisse d'Electronique et de Microtechnique SA (CSEM) Swiss Foundation for Research in Microtechnology Schweizerische Gesellschaft fur Nanowissenschaften und Nanotechnik (SGNT) The idea of evolving machines, whose origins can be traced to the cybernetics movement of the 1940s and the 1950s, has recently resurged in the form of the nascent field of bio-inspired systems and evolvable hardware. The inaugural workshop, Towards Evolvable Hardware, took place in Lausanne in October 1995, followed by the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), held in Japan in October 1996. 
Following the success of these past events, ICES98 will reunite this burgeoning community, presenting the latest developments in the field, bringing together researchers who use biologically inspired concepts to implement real systems in artificial intelligence, artificial life, robotics, VLSI design, and related domains. Topics to be covered will include, but not be limited to, the following list: * Evolving hardware systems. * Evolutionary hardware design methodologies. * Evolutionary design of electronic circuits. * Self-replicating hardware. * Self-repairing hardware. * Embryonic hardware. * Neural hardware. * Adaptive hardware platforms. * Autonomous robots. * Evolutionary robotics. * Bio-robotics. * Applications of nanotechnology. * Biological- and chemical-based systems. * DNA computing. o General Chair: Daniel Mange, Swiss Federal Institute of Technology - Lausanne o Program Chair: Moshe Sipper, Swiss Federal Institute of Technology - Lausanne o International Steering Committee * Tetsuya Higuchi, Electrotechnical Laboratory (Japan) * Hiroaki Kitano, Sony Computer Science Laboratory (Japan) * Daniel Mange, Swiss Federal Institute of Technology (Switzerland) * Moshe Sipper, Swiss Federal Institute of Technology (Switzerland) * Andres Perez-Uribe, Swiss Federal Institute of Technology (Switzerland) o Conference Secretariat Andres Perez-Uribe, Swiss Federal Institute of Technology - Lausanne Inquiries: Andres.Perez at di.epfl.ch, Tel.: +41-21-6932652, Fax: +41-21-6933705. o Important Dates * March 1, 1998: Submission deadline. * May 1, 1998: Notification of acceptance. * June 1, 1998: Camera-ready due. * September 23-26, 1998: Conference dates. o Publisher: Springer-Verlag Official Language: English o Submission Procedure Papers should not be longer than 10 pages (including figures and bibliography) in the Springer-Verlag llncs style (see Web page for complete instructions). 
Authors must submit five (5) complete copies of their paper (hardcopy only), received by March 1st, 1998, to the Program Chairman: Moshe Sipper - ICES98 Program Chair Logic Systems Laboratory Swiss Federal Institute of Technology CH-1015 Lausanne, Switzerland o The Self-Replication Contest * When: The self-replication contest will be held during the ICES98 conference. * Object: Demonstrate a self-replicating machine, implemented in some physical medium, e.g., mechanical, chemical, electronic, etc. * Important: The machine must be demonstrated AT THE CONFERENCE site. Paper submissions will not be considered. * Participation: The contest is open to all conference attendees (at least one member of any participating team must be a registered attendee). * Prize: + The most original design will be awarded a prize of $1000 (one thousand dollars). + The judgment shall be made by a special contest committee. + The committee's decision is final and incontestable. * WWW: Potential participants are advised to consult the self-replication page: http://lslwww.epfl.ch/~moshes/selfrep/ * If you intend to participate please inform the conference secretary Andres.Perez at di.epfl.ch. o Best-Paper Awards * Among the papers presented at ICES98, two will be chosen by a special committee and awarded, respectively, the best paper award and the best student paper award. * The committee's decision is final and incontestable. * All papers are eligible for the best paper award. To be eligible for the best student paper award, the first coauthor must be a full-time student. o Invited Speakers * Dr. David B. Fogel, Natural Selection, Inc. Editor-in-Chief, IEEE Transactions on Evolutionary Computation * Prof. Lewis Wolpert, University College London o Tutorials Four tutorials, delivered by experts in the field, will take place on Wednesday, September 23, 1998 (contingent upon a sufficient number of registrants). * "An Introduction to Molecular and DNA Computing," Prof. Max H. 
Garzon, University of Memphis * "An Introduction to Nanotechnology," Dr. James K. Gimzewski, IBM Zurich Research Laboratory * "Configurable Computing," Dr. Tetsuya Higuchi, Electrotechnical Laboratory * "An Introduction to Evolutionary Computation," Dr. David B. Fogel, Natural Selection, Inc. o Program * To be posted around May-June, 1998. o Local arrangements * See Web page. o Program Committee * Hojjat Adeli, Ohio State University (USA) * Igor Aleksander, Imperial College (UK) * David Andre, Stanford University (USA) * William W. Armstrong, University of Alberta (Canada) * Forrest H. Bennett III, Stanford University (USA) * Joan Cabestany, Universitat Politecnica de Catalunya (Spain) * Leon O. Chua, University of California at Berkeley (USA) * Russell J. Deaton, University of Memphis (USA) * Boi Faltings, Swiss Federal Institute of Technology (Switzerland) * Dario Floreano, Swiss Federal Institute of Technology (Switzerland) * Terry Fogarty, Napier University (UK) * David B. Fogel, Natural Selection, Inc. (USA) * Hugo de Garis, ATR Human Information Processing Laboratories (Japan) * Max H. Garzon, University of Memphis (USA) * Erol Gelenbe, Duke University (USA) * Wulfram Gerstner, Swiss Federal Institute of Technology (Switzerland) * Reiner W. Hartenstein, University of Kaiserslautern (Germany) * Inman Harvey, University of Sussex (UK) * Hitoshi Hemmi, NTT Human Interface Labs (Japan) * Jean-Claude Heudin, Pole Universitaire Leonard de Vinci (France) * Lishan Kang, Wuhan University (China) * John R. Koza, Stanford University (USA) * Pier L. Luisi, ETH Zentrum (Switzerland) * Bernard Manderick, Free University Brussels (Belgium) * Pierre Marchal, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * Juan J. Merelo, Universidad de Granada (Spain) * Julian Miller, Napier University (UK) * Francesco Mondada, Swiss Federal Institute of Technology (Switzerland) * J. 
Manuel Moreno, Universitat Politecnica de Catalunya (Spain) * Pascal Nussbaum, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * Christian Piguet, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * James Reggia, University of Maryland at College Park (USA) * Eytan Ruppin, Tel Aviv University (Israel) * Eduardo Sanchez, Swiss Federal Institute of Technology (Switzerland) * Andre Stauffer, Swiss Federal Institute of Technology (Switzerland) * Luc Steels, Vrije Universiteit Brussel (Belgium) * Daniel Thalmann, Swiss Federal Institute of Technology (Switzerland) * Adrian Thompson, University of Sussex (UK) * Marco Tomassini, University of Lausanne (Switzerland) * Goran Wendin, Chalmers University of Technology and Goeteborg University (Sweden) * Lewis Wolpert, University College London (UK) * Xin Yao, Australian Defense Force Academy (Australia) From Bill_Warren at Brown.edu Tue Dec 9 11:59:45 1997 From: Bill_Warren at Brown.edu (Bill Warren) Date: Tue, 9 Dec 1997 11:59:45 -0500 (EST) Subject: Graduate Traineeships: Visual Navigation in Humans and Robots Message-ID: Graduate Traineeships Visual Navigation in Humans and Robots Brown University The Department of Cognitive and Linguistic Sciences and the Department of Computer Science at Brown University are seeking graduate applicants interested in visual navigation. The project investigates situated learning of spatial knowledge used to guide navigation in humans and robots. Human experiments study active navigation and landmark recognition in virtual environments, whose structure is manipulated during learning and transfer. In conjunction, biologically-inspired navigation strategies are tested on a mobile robot platform. Computational modeling pursues (a) a neural net model of the hippocampus and (b) reinforcement learning and hidden Markov models for spatial navigation. 
Underlying questions include the geometric structure of the spatial knowledge that is used in active navigation, and how it interacts with the structure of the environment and the navigational task during learning. The project is under the direction of Leslie Kaelbling (Computer Science, www.cs.brown.edu), Michael Tarr and William Warren (Cognitive & Linguistic Sciences, www.cog.brown.edu). Three graduate traineeships are available, beginning in the Fall of 1998. Applicants should apply to either of these home departments. Application materials can be obtained from: The Graduate School, Brown University, Box 1867, Providence, RI 02912, phone (401) 863-2600, www.brown.edu. The application deadline is Jan. 1, 1998. -- Bill William H. Warren, Professor Dept. of Cognitive & Linguistic Sciences Box 1978 Brown University Providence, RI 02912 (401) 863-3980 ofc, 863-2255 FAX Bill_Warren at brown.edu From ptodd at hellbender.mpib-berlin.mpg.de Wed Dec 10 10:43:34 1997 From: ptodd at hellbender.mpib-berlin.mpg.de (ptodd@hellbender.mpib-berlin.mpg.de) Date: Wed, 10 Dec 97 16:43:34 +0100 Subject: pre/postdoc positions in modeling cognitive mechanisms Message-ID: <9712101543.AA21683@hellbender.mpib-berlin.mpg.de> Hello--we are seeking pre- and postdoctoral applicants for positions in our Center for Adaptive Behavior and Cognition to study simple cognitive mechanisms in humans (and other animals), both through simulation modeling techniques and experimentation. Please visit our website (http://www.mpib-berlin.mpg.de/abc) for more information on what we're up to here, and circulate the following ad to all appropriate parties. 
cheers, Peter Todd Center for Adaptive Behavior and Cognition Max Planck Institute for Human Development Berlin, Germany ****************************************************************************** The Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, Germany, is seeking applicants for 1 one-year Predoctoral Fellowship (tax-free stipend DM 21,600) and 1 two-year Postdoctoral Fellowship (tax-free stipend range DM 40,000-44,000) beginning in September 1998. Candidates should be interested in modeling bounded rationality in real-world domains, and should have expertise in one of the following areas: judgment and decision making, evolutionary psychology or biology, cognitive anthropology, experimental economics and social games, risk-taking. For a detailed description of our research projects and current researchers, please visit our WWW homepage at http://www.mpib-berlin.mpg.de/abc or write to Dr. Peter Todd at ptodd at mpib-berlin.mpg.de . The working language of the center is English. Send applications (curriculum vitae, letters of recommendation, and reprints) by February 28, 1998 to Professor Gerd Gigerenzer, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany. ****************************************************************************** From Rob.Callan at solent.ac.uk Thu Dec 11 06:12:27 1997 From: Rob.Callan at solent.ac.uk (Rob.Callan@solent.ac.uk) Date: Thu, 11 Dec 1997 11:12:27 +0000 Subject: PhD studentship Message-ID: <8025656A.003D7CDD.00@hercules.solent.ac.uk> A studentship is available for someone with an interest in traditional AI and connectionism (or 'new AI') to explore adapting a machine's behaviour by instruction. 
Further details can be found at: http://www.solent.ac.uk/syseng/create/html/ai/aires.html or from Academic Quality Service Southampton Institute East Park Terrace Southampton UK SO14 OYN Tel: (01703) 319901 Rob Callan From john at dcs.rhbnc.ac.uk Thu Dec 11 07:45:57 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Thu, 11 Dec 97 12:45:57 +0000 Subject: NEUROCOMPUTING Special Issue - note extended deadline January 19, 1998 Message-ID: <199712111245.MAA18713@platon.cs.rhbnc.ac.uk> NEUROCOMPUTING announces a Special Issue on Theoretical analysis of real-valued function classes. Manuscripts are solicited for a special issue of NEUROCOMPUTING on the topic of 'Theoretical analysis of real-valued function classes'. Analysis of Neural Networks both as approximators and classifiers frequently relies on viewing them as real-valued function classes. This perspective places Neural Network research in a far broader context linking it with approximation theory, complexity theory over the reals, statistical and computational learning theory, among others. The aim of the special issue is to bring these links into focus by inviting papers on the theoretical analysis of real-valued function classes where the results are relevant to the special case of Neural Networks. The following is a (nonexhaustive) list of possible topics of interest for the SPECIAL ISSUE: - Measures of Complexity - Approximation rates - Learning algorithms - Generalization estimation - Real valued complexity analysis - Novel neural functionality and its analysis, eg spiking neurons - Kernel and alternative representations - Algorithms for forming real-valued combinations of functions - On-line learning algorithms The following team of editors will be processing the submitted papers: John Shawe-Taylor (coordinating editor), Royal Holloway, Univ. 
of London Shai Ben-David, Technion, Israel Pascal Koiran, ENS, Lyon, France Rob Schapire, AT&T Labs, Florham Park, USA The deadline for submissions is January 19, 1998. Every effort will be made to ensure fast processing of the papers by editors and referees. For this reason electronic submission of postscript files (compressed and uuencoded) via email is encouraged. Submission should be to the following address: John Shawe-Taylor Dept of Computer Science Royal Holloway, University of London Egham, Surrey TW20 0EX UK or Email: jst at dcs.rhbnc.ac.uk From luo at late.e-technik.uni-erlangen.de Fri Dec 12 09:45:43 1997 From: luo at late.e-technik.uni-erlangen.de (Fa-Long Luo) Date: Fri, 12 Dec 1997 15:45:43 +0100 Subject: book announcement: Applied Neural Networks for Signal Processing Message-ID: <199712121445.PAA00140@late5.e-technik.uni-erlangen.de> New Book Applied Neural Networks for Signal Processing Authors: Fa-Long Luo and Rolf Unbehauen Publisher: Cambridge University Press ISBN: 0 521 56391 7 The use of neural networks in signal processing is becoming increasingly widespread, with applications in areas such as filtering, parameter estimation, signal detection, pattern recognition, signal reconstruction, system identification, signal compression, and signal transmission. The signals concerned include audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical and others. The key features of neural networks involved in signal processing are their asynchronous parallel and distributed processing, nonlinear dynamics, global interconnection of network elements, self-organization and high-speed computational capability. With these features, neural networks can provide very powerful means for solving many problems encountered in signal processing, especially, in nonlinear signal processing, real-time signal processing, adaptive signal processing and blind signal processing. 
From an engineering point of view, this book aims to provide a detailed treatment of neural networks for signal processing by covering basic principles, modelling, algorithms, architectures, implementation procedures and well-designed simulation examples. This book is organized into nine chapters: Chap. 1: Fundamental Models of Neural Networks for Signal Processing, Chap. 2: Neural Networks for Filtering, Chap. 3: Neural Networks for Spectral Estimation, Chap. 4: Neural Networks for Signal Detection, Chap. 5: Neural Networks for Signal Reconstruction, Chap. 6: Neural Networks for Principal Components and Minor Components, Chap. 7: Neural Networks for Array Signal Processing, Chap. 8: Neural Networks for System Identification, Chap. 9: Neural Networks for Signal Compression. This book will be an invaluable reference for scientists and engineers working in communications, control or any other field related to signal processing. It can also be used as a textbook for graduate courses in electrical engineering and computer science. Contact: Dr. Fa-Long Luo or Dr. Philip Meyler Lehrstuhl fuer Allgemeine und Cambridge University Press Theoretische Elektrotechnik 40 West 20th Street Cauerstr. 7, 91058 Erlangen New York, NY 10011-4211 Germany USA Tel: +49 9131 857794 Tel: +1 212 924 3900 ext. 
472 Fax: +49 9131 13435 Fax: +1 212 691 3239 Email: luo at late.e-technik.uni-erlangen.de E-mail: pmeyler at cup.org http://www.cup.org/Titles/56/0521563917.html From harnad at coglit.soton.ac.uk Fri Dec 12 15:58:21 1997 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Fri, 12 Dec 1997 20:58:21 GMT Subject: Relational Complexity: BBS Call for Commentators Message-ID: <199712122058.UAA25682@amnesia.psy.soton.ac.uk> Errors-to: harnad1 at coglit.soton.ac.uk Reply-to: bbs at coglit.soton.ac.uk Below is the abstract of a forthcoming BBS target article on: PROCESSING CAPACITY DEFINED BY RELATIONAL COMPLEXITY: Implications for Comparative, Developmental, and Cognitive Psychology by Graeme S. Halford, William H. Wilson, and Steven Phillips This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. 
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. ____________________________________________________________________ PROCESSING CAPACITY DEFINED BY RELATIONAL COMPLEXITY: Implications for Comparative, Developmental, and Cognitive Psychology Graeme Halford Department of Psychology University of Queensland 4072 AUSTRALIA gsh at psy.uq.oz.au William H. Wilson School of Computer Science and Engineering University of New South Wales Sydney New South Wales 2052 AUSTRALIA billw at cse.unsw.edu.au http://www.cse.unsw.edu.au/~billw Steven Phillips Cognitive Science Section Electrotechnical Laboratory 1-1-4 Umezono Tsukuba Ibaraki 305 JAPAN stevep at etl.go.jp http://www.etl.go.jp/etl/ninchi/stevep at etl.go.jp/welcome.html KEYWORDS: Capacity, complexity, working memory, central executive, resource, cognitive development, comparative psychology, neural nets, representation of relations, chunking ABSTRACT: Working memory limitations are best defined in terms of the complexity of relations that can be processed in parallel. Relational complexity is related to processing loads in problem solving, and can distinguish between higher animal species as well as between children of different ages. Complexity is defined by the number of dimensions, or sources of variation, that are related. A unary relation has one argument and one source of variation because its argument can be instantiated in only one way at a time. A binary relation has two arguments and two sources of variation because two argument instantiations are possible at once. A ternary relation is three dimensional, a quaternary relation is four dimensional, and so on.
Dimensionality is related to the number of "chunks," because both attributes on dimensions and chunks are independent units of information of arbitrary size. Empirical studies of working memory limitations indicate a soft limit which corresponds to processing one quaternary relation in parallel. More complex concepts are processed by segmentation or conceptual chunking. In segmentation, tasks are broken into components which do not exceed processing capacity and are processed serially. In conceptual chunking, representations are "collapsed" to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Parallel distributed implementations of relational representations show that relations with more arguments have a higher computational cost; this corresponds to empirical observations of higher processing loads in humans. Empirical evidence is presented that relational complexity (1) distinguishes between higher species, (2) is related to processing load in reasoning and in sentence comprehension, and (3) increases with age in the relations that children can process. Implications for neural net models and for theories of cognition and cognitive development are discussed. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp or gopher from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.
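The cost claim for parallel distributed implementations can be illustrated with a toy tensor-product binding (our sketch, not code from the target article: it assumes each predicate and argument is coded as a d-dimensional vector and bound by an outer product, so an n-ary relation needs d**(n+1) binding units, growing geometrically with arity):

```python
from functools import reduce

def outer(u, v):
    """Flattened outer product: one binding unit per pair of components."""
    return [a * b for a in u for b in v]

def tensor_bind(predicate, *arguments):
    """Bind a predicate vector to n argument vectors as a flattened
    rank-(n+1) tensor; its size is d**(n+1) for d-dimensional vectors."""
    return reduce(outer, arguments, predicate)

d = 8                      # illustrative vector dimension
v = [1.0] * d
for arity in range(1, 5):  # unary .. quaternary relations
    units = len(tensor_bind(v, *([v] * arity)))
    print(f"arity {arity}: {units} binding units")  # 64, 512, 4096, 32768
```

The exponential growth in units (and hence cost) with arity is one concrete reading of why a soft limit near quaternary relations is computationally plausible.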
The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.halford.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.halford ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.halford gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.halford When you have the file(s) you want, type: quit From atick at monaco.rockefeller.edu Sun Dec 14 11:15:40 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Sun, 14 Dec 1997 11:15:40 -0500 Subject: Research Positions in Computer Vision Message-ID: <9712141115.ZM10632@monaco.rockefeller.edu> Research Positions in Computer Vision and Pattern Recognition *****IMMEDIATE NEW OPENINGS******* Visionics Corporation announces the opening of four new positions for research scientists and engineers in the field of Computer Vision and Pattern Recognition. The positions are at various levels. Candidates are expected to have demonstrated research abilities in computer vision, artificial neural networks, image processing, computational neuroscience or pattern recognition. The successful candidates will join the growing R&D team of Visionics in developing real-world pattern recognition algorithms. This is the team that produced the face recognition technology that was recently awarded the prestigious PC WEEK's Best of Show AND Best New Technology at COMDEX Fall 97. The job openings are at Visionics' new R&D facility at Exchange Place on the Waterfront of Jersey City, New Jersey--right across the river from the World Trade Center in Manhattan.
Visionics offers a competitive salary and benefits package that includes stock options, health insurance, retirement etc, and an excellent chance for rapid career advancement. ***For more information about Visionics please visit our webpage at http://www.faceit.com. FOR IMMEDIATE CONSIDERATION, FAX YOUR RESUME TO: (1) Fax: 201 332 9313 Dr. Norman Redlich--CTO Visionics Corporation 1 Exchange Place Jersey City, NJ 07302 or you can (2) Email it to jobs at faceit.com Visionics is an equal opportunity employer--women and minority candidates are encouraged to apply. Joseph J. Atick, CEO Visionics Corporation 201 332 2890 jja at faceit.com %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From terry at salk.edu Mon Dec 15 16:20:14 1997 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 15 Dec 1997 13:20:14 -0800 (PST) Subject: NEURAL COMPUTATION 10:1 Message-ID: <199712152120.NAA16748@helmholtz.salk.edu> Neural Computation - Contents Volume 10, Number 1 - January 1, 1998 ARTICLE Memory Maintenance via Neuronal Regulation David Horn, Nir Levy and Eytan Ruppin NOTE Ordering of Self-Organizing Maps in Multidimensional Cases Guang-Bin Huang, Haroon A. Babri, and Hua-Tian Li LETTERS Predicting the Distribution of Synaptic Strengths and Cell Firing Correlations in a Self-Organizing, Sequence Prediction Model Asohan Amarasingham, and William B. Levy State-Dependent Weights for Neural Associative Memories Ravi Kothari, Rohit Lotlikar, and Marc Cathay The Role of the Hippocampus in Solving the Morris Water Maze A. David Redish and David S. Touretzky Near-Saddle-Node Bifurcation Behavior as Dynamics in Working Memory for Goal-Directed Behavior Hiroyuki Nakahara and Kenji Doya A Canonical Form of Nonlinear Discrete-Time Models Gerard Dreyfus and Yizhak Idan A Low-Sensitivity Recurrent Neural Network Andrew D. 
Back and Ah Chung Tsoi Fixed-Point Attractor Analysis for a Class of Neurodynamics Jianfeng Feng and David Brown GTM: The Generative Topographic Mapping Christopher M. Bishop, Markus Svensen and Christopher K. I. Williams Identification Criteria and Lower Bounds for Perceptron-Like Learning Rules Michael Schmitt ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1998 - VOLUME 10 - 8 ISSUES

                 USA    Canada*   Other Countries
Student/Retired  $50    $53.50    $78
Individual       $82    $87.74    $110
Institution      $285   $304.95   $318

* includes 7% GST (Back issues from Volumes 1-9 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From kia at particle.kth.se Mon Dec 15 17:13:07 1997 From: kia at particle.kth.se (Karina Waldemark) Date: Mon, 15 Dec 1997 23:13:07 +0100 Subject: VI-DYNN'98, Workshop on Virtual Intelligence and Dynamic Neural Networks Message-ID: <3495AB73.53CA8816@particle.kth.se> ------------------------------------------------------------------- Announcement and call for papers: VI-DYNN'98 Workshop on Virtual Intelligence - Dynamic Neural Networks Royal Institute of Technology, KTH Stockholm, Sweden June 22-26, 1998 ------------------------------------------------------------------ VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn Abstracts due: February 28, 1998 VI-DYNN'98 will combine the DYNN emphasis on biologically inspired neural network models, especially Pulse Coupled Neural Networks (PCNN), with the practical applications emphasis of the VI workshops. In particular we will focus on why, how, and where to use biologically inspired neural systems.
For example, we will learn how to adapt such systems to sensors such as digital X-Ray imaging devices, CCDs and SAR, etc. and examine questions of accuracy, speed, etc. Developments in research on biological neural systems, such as the mammalian visual systems, and how smart sensors can benefit from this knowledge will also be presented. Pulse Coupled Neural Networks (PCNN) have recently become among the most exciting new developments in the field of artificial neural networks (ANN), showing great promise for pattern recognition and other applications. The PCNN type models are related much more closely to real biological neural systems than most ANNs, and many researchers in the field of ANN pattern recognition are unfamiliar with them. VI-DYNN'98 will continue in the spirit of the DYNN workshops and join it with the Virtual Intelligence workshop series. ----------------------------------------------------------------- VI-DYNN'98 Topics: Dynamic NN Fuzzy Systems Spiking Neurons Rough Sets Brain Image Genetic Algorithms Virtual Reality ------------------------------------------------------------------ Applications: Medical Defense & Space Others ------------------------------------------------------------------- Special sessions: PCNN - Pulse Coupled Neural Networks exciting new artificial neural networks related to real biological neural systems PCNN applications: pattern recognition image processing digital x-ray imaging devices, CCDs & SAR Biologically inspired neural network models why, how and where to use them The mammalian visual system smart sensors benefit from their knowledge The Electronic Nose ------------------------------------------------------------------------ International Organizing Committee: John L. Johnson (MICOM, USA), Jason M. Kinser (George Mason U., USA) Thomas Lindblad (KTH, Sweden) Robert Lorenz (Univ. Wisconsin, USA) Mary Lou Padgett (Auburn U., USA), Robert T.
Savely (NASA, Houston) Manuel Samuelides (CERT-ONERA, Toulouse, France) John Taylor (King's College, UK) Simon Thorpe (CERI-CNRS, Toulouse, France) ------------------------------------------------------------------------ Local Organizing Committee: Thomas Lindblad (KTH) - Conf. Chairman Clark S. Lindsey (KTH) - Conf. Secretary Kenneth Agehed (KTH) Joakim Waldemark (KTH) Karina Waldemark (KTH) Nina Weil (KTH) Moyra Mann - registration officer --------------------------------------------------------------------- Contact: Thomas Lindblad (KTH) - Conf. Chairman email: lindblad at particle.kth.se Phone: [+46] - (0)8 - 16 11 09 Clark S. Lindsey (KTH) - Conf. Secretary email: lindsey at particle.kth.se Phone: [+46] - (0)8 - 16 10 74 Switchboard: [+46] - (0)8 - 16 10 00 Fax: [+46] - (0)8 - 15 86 74 VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn --------------------------------------------------------------------- From rafal at idsia.ch Tue Dec 16 11:00:07 1997 From: rafal at idsia.ch (Rafal Salustowicz) Date: Tue, 16 Dec 1997 17:00:07 +0100 (MET) Subject: Learning Team Strategies: Soccer Case Studies Message-ID: LEARNING TEAM STRATEGIES: SOCCER CASE STUDIES Rafal Salustowicz Marco Wiering Juergen Schmidhuber IDSIA, Lugano (Switzerland) Revised Technical Report IDSIA-29-97 To appear in the Machine Learning Journal (1998) We use simulated soccer to study multiagent learning. Each team's players (agents) share action set and policy, but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare several learning algorithms: TD-Q learning with linear neural networks (TD-Q), Probabilistic Incremental Program Evolution (PIPE), and a PIPE version that learns by coevolution (CO-PIPE). TD-Q is based on learning evaluation functions (EFs) mapping input/action pairs to expected reward. PIPE and CO-PIPE search policy space directly.
They use adaptive probability distributions to synthesize programs that calculate action probabilities from current inputs. Our results show that linear TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE and CO-PIPE, however, do not depend on EFs and find good policies faster and more reliably. This suggests that in some multiagent learning scenarios direct search in policy space can offer advantages over EF-based approaches. http://www.idsia.ch/~rafal/research.html ftp://ftp.idsia.ch/pub/rafal/soccer.ps.gz Related papers by the same authors: Evolving soccer strategies. In N. Kasabov, R. Kozma, K. Ko, R. O'Shea, G. Coghill, and T. Gedeon, editors, Progress in Connectionist-based Information Systems:Proc. of the 4th Intl. Conf. on Neural Information Processing ICONIP'97, pages 502-505, Springer-Verlag, Singapore, 1997. ftp://ftp.idsia.ch/pub/rafal/ICONIP_soccer.ps.gz On learning soccer strategies. In W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, editors, Proc. of the 7th Intl. Conf. on Artificial Neural Networks (ICANN'97), volume 1327 of Lecture Notes in Computer Science, pages 769-774, Springer-Verlag Berlin Heidelberg, 1997. 
ftp://ftp.idsia.ch/pub/rafal/ICANN_soccer.ps.gz
**********************************************************************
Rafal Salustowicz
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA)
Corso Elvezia 36, 6900 Lugano, Switzerland
e-mail: rafal at idsia.ch, raf at cs.tu-berlin.de, raf at psych.stanford.edu
Tel (IDSIA): +41 91 91198-38
Tel (office): +41 91 91198-32
Fax: +41 91 91198-39
WWW: http://www.idsia.ch/~rafal
**********************************************************************
From becker at curie.psychology.mcmaster.ca Tue Dec 16 13:26:22 1997 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Tue, 16 Dec 1997 13:26:22 -0500 (EST) Subject: Faculty Position at McMaster University Message-ID: BEHAVIORAL NEUROSCIENCE The Department of Psychology at McMaster University seeks applications for a position as Assistant Professor commencing July 1, 1998, subject to final budgetary approval. We seek a candidate who holds a Ph.D. and who is conducting research in an area of behavioral neuroscience that will complement one or more of our current strengths in neural plasticity, perceptual development, cognition, computational neuroscience, and animal behavior. Appropriate research interests include, for example, cognitive neuroscience/neuropsychology, the physiology of memory, sensory physiology, the physiology of motivation/emotion, neuroethology, and computational neuroscience. The candidate would be expected to teach in the area of neuropsychology, as well as a laboratory course in neuroscience. This is initially for a 3-year contractually-limited appointment. In accordance with Canadian immigration requirements, this advertisement is directed to Canadian citizens and permanent residents. McMaster University is committed to Employment Equity and encourages applications from all qualified candidates, including aboriginal peoples, persons with disabilities, members of visible minorities, and women.
To apply, send a curriculum vitae, a short statement of research interests, a publication list with selected reprints, and three letters of reference to the following address: Prof. Ron Racine, Department of Psychology, McMaster University, Hamilton, Ontario, CANADA L8S 4K1. From rvicente at onsager.if.usp.br Wed Dec 17 09:01:00 1997 From: rvicente at onsager.if.usp.br (Renato Vicente) Date: Wed, 17 Dec 1997 12:01:00 -0200 (EDT) Subject: Paper: Learning a Spin Glass Message-ID: <199712171401.MAA08019@curie.if.usp.br> Dear Connectionists, We would like to announce our new paper on statistical physics of neural networks. The paper is available at: http://www.fma.if.usp.br/~rvicente/nnsp.html/NNGUSP13.ps.gz Comments are welcome. Learning a spin glass: determining Hamiltonians from metastable states. ------------------------------------------------------------------------ SILVIA KUVA, OSAME KINOUCHI and NESTOR CATICHA Abstract: We study the problem of determining the Hamiltonian of a fully connected Ising Spin Glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms.
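For readers outside statistical physics: a "metastable state" here is a spin configuration that no single spin flip can improve. A minimal sketch of that condition for the Hamiltonian H = -1/2 * sum_ij J_ij s_i s_j (our illustration with a tiny ferromagnetic example, not code from the paper):

```python
def local_fields(J, s):
    """h_i = sum_{j != i} J[i][j] * s[j] for a symmetric coupling matrix J."""
    n = len(s)
    return [sum(J[i][j] * s[j] for j in range(n) if j != i) for i in range(n)]

def is_metastable(J, s):
    """s is a local minimum of H = -1/2 * sum_ij J_ij s_i s_j iff flipping
    any single spin i does not lower the energy: delta_E_i = 2*s_i*h_i >= 0."""
    return all(si * hi >= 0 for si, hi in zip(s, local_fields(J, s)))

# Tiny ferromagnet: all couplings +1, so the all-up state is metastable,
# while a state with one spin flipped against the rest is not.
n = 4
J = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
print(is_metastable(J, [1, 1, 1, 1]))   # True
print(is_metastable(J, [1, 1, 1, -1]))  # False
```

The learning problem the paper studies runs this in reverse: given configurations known to satisfy the metastability condition, infer couplings J consistent with them.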
From HEINKED at psycho1.bham.ac.uk Wed Dec 17 19:48:10 1997 From: HEINKED at psycho1.bham.ac.uk (Mr Dietmar Heinke) Date: Wed, 17 Dec 1997 16:48:10 -0800 Subject: CALL FOR ABSTRACTS Message-ID: <349872CA.238@psycho1.bham.ac.uk> ***************** Call for Abstracts **************************** 5th Neural Computation and Psychology Workshop Connectionist Models in Cognitive Neuroscience University of Birmingham, England Tuesday 8th September - Thursday 10th September 1998 This workshop is the fifth in a series, following on from the first at the University of Wales, Bangor (with theme "Neurodynamics and Psychology"), the second at the University of Edinburgh, Scotland ("Memory and Language"), the third at the University of Stirling, Scotland ("Perception") and the fourth at University College, London ("Connectionist Representations"). The general aim is to bring together researchers from such diverse disciplines as artificial intelligence, applied mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on the connectionist modelling of psychology. This year's workshop is to be hosted by the University of Birmingham. As in previous years there will be a theme to the workshop. The theme is to be: Connectionist Models in Cognitive Neuroscience. This covers many important issues ranging from modelling physiological structure to cognitive function and its disorders (in neuropsychological and psychiatric cases). As in previous years we aim to keep the workshop fairly small, informal and single track. As always, participants bringing expertise from outside the UK are particularly welcome. A one page abstract should be sent to Professor Glyn W. Humphreys School of Psychology University of Birmingham Birmingham B15 2TT United Kingdom Deadline for abstracts: 31st of May, 1998 Registration, Food and Accommodation The workshop will be held at the University of Birmingham.
The conference registration fee (which includes lunch and morning and afternoon tea/coffee each day) is 60 pounds. A special conference dinner (optional) is planned for the Wednesday evening costing 20 pounds. Accommodation is available in university halls or local hotels. A special combined rate of 150 pounds covers the registration fee and 3 nights' accommodation in the halls of residence. Organising Committee Dietmar Heinke Andrew Olson Professor Glyn W. Humphreys Contact Details Workshop Email address: e.fox at bham.ac.uk Dietmar Heinke, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 4920, Fax: +44 212 414 4897 Email: heinked at psycho1.bham.ac.uk Andrew Olson, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 3328, Fax: +44 212 414 4897 Email: olsona at psycho1.bham.ac.uk Glyn W. Humphreys, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 4930, Fax: +44 212 414 4897 Email: humphreg at psycho1.bham.ac.uk From jordan at ai.mit.edu Wed Dec 17 15:09:36 1997 From: jordan at ai.mit.edu (Michael I. Jordan) Date: Wed, 17 Dec 1997 15:09:36 -0500 (EST) Subject: graphical models and variational methods Message-ID: <9712172009.AA01909@carpentras.ai.mit.edu> A tutorial paper on graphical models and variational approximations is available at: ftp://psyche.mit.edu/pub/jordan/variational-intro.ps.gz http://www.ai.mit.edu/projects/jordan.html ------------------------------------------------------- AN INTRODUCTION TO VARIATIONAL METHODS FOR GRAPHICAL MODELS Michael I. Jordan Massachusetts Institute of Technology Zoubin Ghahramani University of Toronto Tommi S. Jaakkola University of California Santa Cruz Lawrence K. Saul AT&T Labs -- Research This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models.
We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, showing how upper and lower bounds can be found for local probabilities, and discussing methods for extending these bounds to bounds on global probabilities of interest. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case. From bruf at igi.tu-graz.ac.at Thu Dec 18 05:48:05 1997 From: bruf at igi.tu-graz.ac.at (Berthold Ruf) Date: Thu, 18 Dec 1997 11:48:05 +0100 Subject: Paper: Spatial and Temporal Pattern Analysis via Spiking Neurons Message-ID: <3498FF65.B9895E06@igi.tu-graz.ac.at> The following technical report is now available at http://www.cis.tu-graz.ac.at/igi/bruf/rbf-tr.ps.gz Thomas Natschlaeger and Berthold Ruf: Spatial and Temporal Pattern Analysis via Spiking Neurons Abstract: Spiking neurons, receiving temporally encoded inputs, can compute radial basis functions (RBFs) by storing the relevant information in their delays. In this paper we show how these delays can be learned using exclusively locally available information (basically the time difference between the pre- and postsynaptic spike). Our approach gives rise to a biologically plausible algorithm for finding clusters in a high dimensional input space with networks of spiking neurons, even if the environment is changing dynamically. Furthermore, we show that our learning mechanism makes it possible that such RBF neurons can perform some kind of feature extraction where they recognize that only certain input coordinates carry relevant information. Finally we demonstrate that this model allows the recognition of temporal sequences even if they are distorted in various ways. 
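The delay-learning idea in the Natschlaeger/Ruf abstract can be caricatured in a few lines. This is a toy sketch of our own, not the authors' algorithm: it only shows how a rule using nothing but the locally available pre/post timing difference can drive each synaptic delay toward t_post - t_pre, so the stored delays come to encode an input spike pattern the way an RBF centre encodes a point:

```python
def update_delays(delays, pre_spike_times, t_post, eta=0.2):
    """Nudge each delay so that arrival time t_pre + delay approaches the
    postsynaptic spike time t_post, using only local timing information."""
    return [d + eta * ((t_post - t_pre) - d)
            for d, t_pre in zip(delays, pre_spike_times)]

pattern = [1.0, 3.0, 2.0]   # temporally encoded input spike times
delays = [0.0, 0.0, 0.0]
t_post = 5.0                # postsynaptic firing time
for _ in range(50):
    delays = update_delays(delays, pattern, t_post)
print([round(d, 2) for d in delays])  # converges toward [4.0, 2.0, 3.0]
```

After learning, inputs matching the stored pattern arrive simultaneously at t_post and sum maximally, which is the RBF-like selectivity the abstract describes.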
----------------------------------------------------
Berthold Ruf
Office: Institute for Theoretical Computer Science, TU Graz, Klosterwiesgasse 32/2, A-8010 Graz, AUSTRIA
Tel: +43 316/873-5824, FAX: +43 316/873-5805
Home: Vinzenz-Muchitsch-Strasse 56/9, A-8020 Graz, AUSTRIA
Tel: +43 316/274580
Email: bruf at igi.tu-graz.ac.at
Homepage: http://www.cis.tu-graz.ac.at/igi/bruf/
----------------------------------------------------
From giles at research.nj.nec.com Thu Dec 18 17:38:06 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 18 Dec 97 17:38:06 EST Subject: paper available: financial prediction Message-ID: <9712182238.AA13422@alta> The following technical report and related conference paper is now available at the WWW sites listed below: www.neci.nj.nec.com/homepages/lawrence/papers.html www.neci.nj.nec.com/homepages/giles/#recent-TRs www.cs.umd.edu/TRs/TRumiacs.html We apologize in advance for any multiple postings that may occur. ************************************************************************** Noisy Time Series Prediction using Symbolic Representation and Recurrent Neural Network Grammatical Inference U. of Maryland Technical Report CS-TR-3625 and UMIACS-TR-96-27 Steve Lawrence (1), Ah Chung Tsoi (2), C. Lee Giles (1,3) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA (2) Faculty of Informatics, University of Wollongong, NSW 2522 Australia (3) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA ABSTRACT Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high noise, small sample size signals.
We introduce a new intelligent signal processing method which addresses the difficulties. The method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. The symbolic representation aids the extraction of symbolic knowledge from the recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Rules related to well known behavior such as trend following and mean reversal are extracted. Keywords - noisy time series prediction, recurrent neural networks, self-organizing map, efficient market hypothesis, foreign exchange rate, non-stationarity, rule extraction ********************************************************************** A short published conference version: C.L. Giles, S. Lawrence, A-C. Tsoi, "Rule Inference for Financial Prediction using Recurrent Neural Networks," Proceedings of the IEEE/IAFE Conf. on Computational Intelligence for Financial Engineering, p. 253, IEEE Press, 1997. can be found at www.neci.nj.nec.com/homepages/lawrence/papers.html www.neci.nj.nec.com/homepages/giles/papers/IEEE.CIFEr.ps.Z __ C.
Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From majidi at DI.Unipi.IT Fri Dec 19 11:34:04 1997 From: majidi at DI.Unipi.IT (Darya Majidi) Date: Fri, 19 Dec 1997 17:34:04 +0100 Subject: NNESMED '98 Message-ID: <199712191634.RAA23176@mailserver.di.unipi.it> 2-4 September 1998 Call for papers: 3rd International Conference on Neural Networks and Expert Systems in Medicine and Healthcare (NNESMED 98) Pisa, Italy Deadline: 13 February 1998, electronic file (4 pages containing an abstract of 200 words) Contact: Prof Antonina Starita Computer Science Department University of Pisa Corso Italia 40, 56126 Pisa Tel: +39-50-887215 Fax: +39-50-887226 E-mail: nnesmed at di.unipi.it WWW: http://www.di.unipi.it/~nnesmed/home.htm From takane at takane2.psych.mcgill.ca Fri Dec 19 11:53:23 1997 From: takane at takane2.psych.mcgill.ca (Yoshio Takane) Date: Fri, 19 Dec 1997 11:53:23 -0500 (EST) Subject: No subject Message-ID: <199712191653.LAA03407@takane2.psych.mcgill.ca> This will be the last call for papers for the special issue of Behaviormetrika. *****CALL FOR PAPERS***** BEHAVIORMETRIKA, an English journal published in Japan to promote the development and dissemination of quantitative methodology for analysis of human behavior, is planning to publish a special issue on ANALYSIS OF KNOWLEDGE REPRESENTATIONS IN NEURAL NETWORK (NN) MODELS broadly construed. I have been asked to serve as the guest editor for the special issue and would like to invite all potential contributors to submit high quality articles for possible publication in the issue. In statistics, information extracted from the data is stored in estimates of model parameters. In regression analysis, for example, information in observed predictor variables useful in prediction is summarized in estimates of regression coefficients.
Due to the linearity of the regression model, interpretation of the estimated coefficients is relatively straightforward. In NN models, knowledge acquired from training samples is represented by the weights indicating the strength of connections between neurons. However, due to the nonlinear nature of the model, interpretation of these weights is extremely difficult, if not impossible. Consequently, NN models have largely been treated as black boxes. This special issue is intended to break ground by bringing together various attempts to understand internal representations of knowledge in NN models. Papers are invited on network analysis including:
* Methods of analyzing basic mechanisms of NN models
* Examples of successful network analysis
* Comparison among different network architectures in their knowledge representation (e.g., BP vs Cascade Correlation)
* Comparison with statistical approaches
* Visualization of high dimensional functions
* Regularization methods to improve the quality of knowledge representation
* Model selection in NN models
* Assessment of stability and generalizability of knowledge in NN models
* Effects of network topology, data encoding scheme, algorithm, environmental bias, etc. on network performance
* Implementing prior knowledge in NN models
SUBMISSION: Deadline for submission: January 31, 1998 Deadline for the first round reviews: April 30, 1998 Deadline for submission of the final version: August 31, 1998 Number of copies of a manuscript to be submitted: four Format: no longer than 10,000 words; APA style ADDRESS FOR SUBMISSION: Professor Yoshio Takane Department of Psychology McGill University 1205 Dr.
Penfield Avenue Montreal QC H3A 1B1 CANADA email: takane at takane2.psych.mcgill.ca tel: 514 398 6125 fax: 514 398 4896 From harnad at coglit.soton.ac.uk Fri Dec 19 08:20:24 1997 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Fri, 19 Dec 1997 13:20:24 GMT Subject: PSYC Call for Commentators Message-ID: <199712191320.NAA14295@amnesia.psy.soton.ac.uk> Latimer/Stevens: PART-WHOLE PERCEPTION The target article whose abstract follows below has just appeared in PSYCOLOQUY, a refereed journal of Open Peer Commentary sponsored by the American Psychological Association. Qualified professional biobehavioral, neural or cognitive scientists are hereby invited to submit Open Peer Commentary on this article. Please write for Instructions if you are not familiar with PSYCOLOQUY format and acceptance criteria (all submissions are refereed). The article can be read or retrieved at this URL: ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1997.volume.8/psyc.97.8.13.part-whole-perception.1.latimer For submitting articles and commentaries and for information: EMAIL: psyc at pucc.princeton.edu URL: http://www.cogsci.soton.ac.uk/psycoloquy/ AUTHORS' RATIONALE FOR SOLICITING COMMENTARY: The topic of whole/part perception has a long history of controversy, and its ramifications still pervade research on perception and pattern recognition in psychology, artificial intelligence and cognitive science where controversies such as "global versus local precedence" and "holistic versus analytic processing" are still very much alive. We argue that, whereas the vast majority of studies on whole/part perception have been empirical, the major and largely unaddressed problems in the area are conceptual and concern how the terms "whole" and "part" are defined and how wholes and parts are related in each particular experimental context. Without some general theory of wholes and parts and their relations and some consensus on nomenclature, we feel that pseudo-controversies will persist.
One possible principle of unification and clarification is a formal analysis of the whole/part relationship by Nicholas Rescher and Paul Oppenheim (1955). We outline this formalism and demonstrate that not only does it have important implications for how we conceptualize wholes and parts and their relations, but it also has far-reaching implications for the conduct of experiments on a wide range of perceptual phenomena. We challenge the well-established view that whole/part perception is essentially an empirical problem and that its solution will be found in experimental investigations. We argue that there are many purely conceptual issues that require attention prior to empirical work. We question the logic of global-precedence theory and any interpretation of experimental data in terms of the extraction of global attributes without prior analysis of local elements. We also challenge theorists to provide precise, testable theories and working mechanisms that embody the whole/part processing they purport to explain. Although it deals mainly with vision and audition, our approach can be generalized to include tactile perception. Finally, we argue that apparently disparate theories, controversies, results and phenomena can all be considered under the three main conditions for wholes and parts proposed by Rescher and Oppenheim. Commentary and counterargument are sought on all these issues. In particular, we would like to hear arguments to the effect that wholes and parts (perceptual or otherwise) can exist in some absolute sense, and we would like to learn of machines, devices, programs or systems that are capable of extracting holistic properties without prior analysis of parts.
----------------------------------------------------------------------- psycoloquy.97.8.13.part-whole-perception.1.latimer Wed 17 Dec 1997 ISSN 1055-0143 (39 paragraphs, 68 references, 923 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1997 Latimer & Stevens SOME REMARKS ON WHOLES, PARTS AND THEIR PERCEPTION Cyril Latimer Department of Psychology University of Sydney NSW 2006, Australia email: cyril at psych.su.oz.au URL: http://www.psych.su.oz.au/staff/cyril/ Catherine Stevens Department of Psychology/FASS University of Western Sydney, Macarthur PO Box 555, Campbelltown, NSW 2560, Australia email: kj.stevens at uws.edu.au URL: http://psy.uq.edu.au/CogPsych/Noetica/ ABSTRACT: We emphasize the relativity of wholes and parts in whole/part perception, and suggest that consideration must be given to what the terms "whole" and "part" mean, and how they relate in a particular context. A formal analysis of the part/whole relationship by Rescher & Oppenheim (1955) is shown to have a unifying and clarifying role in many controversial issues including illusions, emergence, local/global precedence, holistic/analytic processing, schema/feature theories and "smart mechanisms". The logic of direct extraction of holistic properties is questioned, and attention is drawn to the vagueness of reference to wholes and parts, which can refer to phenomenal units, physiological structures or theoretical units of perceptual analysis.
KEYWORDS: analytic versus holistic processing, emergence, feature, gestalt, global versus local precedence, part, whole From rvicente at gibbs.if.usp.br Sat Dec 20 08:28:55 1997 From: rvicente at gibbs.if.usp.br (Renato Vicente) Date: Sat, 20 Dec 1997 13:28:55 -0000 Subject: Paper: Learning a Spin Glass - right link Message-ID: <199712201620.OAA17338@borges.uol.com.br> Dear Connectionists, The link for the paper previously announced was wrong; the right one is: http://www.fma.if.usp.br/~rvicente/NNGUSP13.ps.gz Sorry about that. Learning a spin glass: determining Hamiltonians from metastable states. --------------------------------------------------------------------------- SILVIA KUVA, OSAME KINOUCHI and NESTOR CATICHA Abstract: We study the problem of determining the Hamiltonian of a fully connected Ising Spin Glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms. From hinton at cs.toronto.edu Tue Dec 23 13:51:45 1997 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Tue, 23 Dec 1997 13:51:45 -0500 Subject: postdoc and student positions at new centre Message-ID: <97Dec23.135145edt.1329@neuron.ai.toronto.edu> THE GATSBY COMPUTATIONAL NEUROSCIENCE UNIT UNIVERSITY COLLEGE LONDON We are delighted to announce that the Gatsby Charitable Foundation has decided to create a centre for the study of Computational Neuroscience at University College London.
The centre will include:

Geoffrey Hinton (director)
Zhaoping Li
Peter Dayan
Zoubin Ghahramani
5 Postdoctoral Fellows
10 Graduate Students

We will study computational theories of perception and action with an emphasis on learning. Our main current interests are:

unsupervised learning and activity-dependent development
statistical models of hierarchical representations
sensorimotor adaptation and integration
visual and olfactory segmentation and recognition
computation by non-linear neural dynamics
reinforcement learning

By establishing this centre, the Gatsby Charitable Foundation is providing a unique opportunity for a critical mass of theoreticians to interact closely with each other and with University College's world-class research groups, including those in psychology, neurophysiology, anatomy, functional neuro-imaging, computer science and statistics. The unit has some laboratory space for theoretically motivated experimental studies in visual and motor psychophysics. For further information see our temporary web site:

http://www.cs.toronto.edu/~zoubin/GCNU/
http://hera.ucl.ac.uk/~gcnu/ (the UK mirror of the site above)

We are seeking to recruit 4 postdocs and 6 graduate students to start in the Fall of 1998. We are particularly interested in candidates with strong analytical skills as well as a keen interest in and knowledge of neuroscience and cognitive science. Candidates should submit a CV, a one- or two-page statement of research interests, and the names of 3 referees. Candidates may also submit a copy of one or two papers. We prefer email submissions (hinton at cs.toronto.edu). If you use email please send papers as URL addresses or separate messages in postscript format. The deadline for receipt of submissions is Tuesday January 13, 1998.
Professor Geoffrey Hinton Department of Computer Science University of Toronto Room 283 6 King's College Road Toronto, Ontario M5S 3H5 CANADA phone: 416-978-3707 fax: 416-978-1455 From john at dcs.rhbnc.ac.uk Tue Dec 23 08:31:39 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Tue, 23 Dec 97 13:31:39 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199712231331.NAA11900@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-040: ---------------------------------------- Saturation and Stability in the Theory of Computation over the Reals by Olivier Chapuis, Universit\'e Lyon I, France Pascal Koiran, Ecole Normale Sup\'erieure de Lyon, France Abstract: This paper was motivated by the following two questions which arise in the theory of complexity for computation over ordered rings in the now famous computational model introduced by Blum, Shub and Smale: (i) is the answer to the question ``$\p = \np$?'' the same in every real-closed field? (ii) if $\p\neq\np$ for $\r$, does there exist a problem over $\r$ which is in $\np$ but not $\np$-complete? Some unclassical complexity classes arise naturally in the study of these questions. These questions are still open, but we obtain unconditional results of independent interest. Michaux introduced $/\const$ complexity classes in an effort to attack question (i). We show that $\a_{\r}/ \const = \a_{\r}$, answering a question of his. Here $\a$ is the class of real problems which are algorithmic in bounded time.
We also prove the stronger result: $\parl_{\r}/ \const =\parl_{\r}$, where $\parl$ stands for parallel polynomial time. In our terminology, we say that $\r$ is $\a$-saturated and $\parl$-saturated. We also prove, at the nonuniform level, the above results for every real-closed field. It is not known whether $\r$ is $\p$-saturated. In the case of the reals with addition and order we obtain $\p$-saturation (and a positive answer to question (ii)). More generally, we show that an ordered $\q$-vector space is $\p$-saturated at the nonuniform level (this almost implies a positive answer to the analogue of question (i)). We also study a stronger notion that we call $\p$-stability. Blum, Cucker, Shub and Smale have (essentially) shown that fields of characteristic 0 are $\p$-stable. We show that the reals with addition and order are $\p$-stable, but real-closed fields are not. Questions (i) and (ii) and the $/\const$ complexity classes have some model theoretic flavor. This leads to the theory of complexity over ``arbitrary'' structures as introduced by Poizat. We give a summary of this theory with a special emphasis on the connections with model theory and we study $/\const$ complexity classes with this point of view. Note also that our proof of the $\parl$-saturation of $\r$ shows that an o-minimal structure which admits quantifier elimination is $\a$-saturated at the nonuniform level. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-041: ---------------------------------------- A survey on the Grzegorczyk Hierarchy and its extension through the BSS Model of Computability by Jean-Sylvestre Gakwaya, Universit\'e de Mons-Hainaut, Belgium Abstract: This paper concerns the Grzegorczyk classes defined from a particular sequence of computable functions. We provide some relevant properties and the main problems about the Grzegorczyk classes through two settings of computability. 
The first one is the usual setting of recursiveness and the second one is the BSS model introduced at the end of the '80s. This model of computability allows one to define the concept of effectiveness over continuous domains such as the real numbers. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-042: ---------------------------------------- On the Effect of Analog Noise in Discrete-Time Analog Computations by Wolfgang Maass, Technische Universitaet Graz, Austria Pekka Orponen, University of Jyväskylä, Finland Abstract: We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the VC-dimension of computational models with analog noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-043: ---------------------------------------- Analog Neural Nets with Gaussian or other Common Noise Distributions cannot Recognise Arbitrary Regular Languages by Wolfgang Maass, Technische Universitaet Graz, Austria Eduardo D. Sontag, Rutgers University, USA Abstract: We consider recurrent analog neural nets where the output of each gate is subject to Gaussian noise, or any other common noise distribution that is nonzero on a large set. We show that many regular languages cannot be recognised by networks of this type, and we give a precise characterization of those languages which can be recognised.
This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand, we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type.
--------------------------------------------------------------------
***************** ACCESS INSTRUCTIONS ******************
The Report NC-TR-97-001 can be accessed and printed as follows:

% ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-97-001.ps.Z
ftp> bye
% zcat nc-tr-97-001.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example,

nc-tr-97-002-title.ps.Z
nc-tr-97-002-body.ps.Z

The first contains the title page while the second contains the body of the report. The single command

ftp> mget nc-tr-97-002*

will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage:
http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt
or directly to the archive:
ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports

Best wishes
John Shawe-Taylor

From lbl at nagoya.riken.go.jp Wed Dec 24 22:37:55 1997 From: lbl at nagoya.riken.go.jp (Bao-Liang Lu) Date: Thu, 25 Dec 1997 12:37:55 +0900 Subject: TR available: Inverting Neural Nets Using LP and NLP Message-ID: <9712250337.AA20262@xian> The following Technical Report is available via anonymous FTP.
FTP-host: ftp.bmc.riken.go.jp FTP-file: pub/publish/Lu/lu-ieee-tnn-inversion.ps.gz ========================================================================== TITLE: Inverting Feedforward Neural Networks Using Linear and Nonlinear Programming BMC Technical Report BMC-TR-97-12 AUTHORS: Bao-Liang Lu (1) Hajime Kita (2) Yoshikazu Nishikawa (3) ORGANISATIONS: (1) The Institute of Physical and Chemical Research (RIKEN) (2) Tokyo Institute of Technology (3) Osaka Institute of Technology ABSTRACT: The problem of inverting trained feedforward neural networks is to find the inputs which yield a given output. In general, this problem is ill-posed because the mapping from the output space to the input space is a one-to-many mapping. In this paper, we present a method for dealing with this inverse problem by using mathematical programming techniques. The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem. An important advantage of the method is that multilayer perceptrons (MLPs) and radial basis function (RBF) neural networks can be inverted by solving the corresponding separable programming problems, which can be solved by a modified simplex method, a well-developed and efficient method for solving linear programming problems. As a result, large-scale MLPs and RBF networks can be inverted efficiently. We present several examples to demonstrate the proposed inversion algorithms and the applications of the network inversions to examining and improving the generalization performance of trained networks. The results show the effectiveness of the proposed method. (double space, 50 pages) Any comments are appreciated.
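As an aside for readers who want to experiment, the inversion task itself can be sketched in a few lines. Note that this toy sketch is not the report's method: the paper solves separable programming problems with a modified simplex method, whereas the sketch below simply runs multi-start gradient descent on the input of a small random MLP (the network sizes, weights and learning rate here are invented for illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)

# A small fixed MLP standing in for a trained network: 2 inputs -> 4 hidden -> 1 output.
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def invert(y_target, restarts=10, steps=3000, lr=0.1):
    """Search for an input whose output matches y_target by
    gradient descent on the squared output error (multi-start,
    since the one-to-many inverse mapping has many solutions)."""
    best_x, best_err = None, np.inf
    for _ in range(restarts):
        x = rng.normal(size=2)
        for _ in range(steps):
            h = np.tanh(W1 @ x + b1)
            err = (W2 @ h + b2) - y_target
            dh = (W2.T @ err) * (1.0 - h**2)   # backprop through tanh
            x = x - lr * (W1.T @ dh)           # descend on the input, not the weights
        e = float(np.sum((forward(x) - y_target) ** 2))
        if e < best_err:
            best_x, best_err = x, e
    return best_x

# Pick a target that is certainly attainable: the output at a known input.
y_target = forward(np.array([0.5, -1.0]))
x_inv = invert(y_target)
print(forward(x_inv), y_target)
```

The recovered input need not equal the original one; any preimage of the target output is an equally valid inverse, which is exactly the ill-posedness the abstract mentions.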
Bao-Liang Lu ====================================== Bio-Mimetic Control Research Center The Institute of Physical and Chemical Research (RIKEN) Anagahora, Shimoshidami, Moriyama-ku Nagoya 463-0003, Japan Tel: +81-52-736-5870 Fax: +81-52-736-5871 Email: lbl at bmc.riken.go.jp From jkim at FIZ.HUJI.AC.IL Wed Dec 24 05:16:20 1997 From: jkim at FIZ.HUJI.AC.IL (Jai Won Kim) Date: Wed, 24 Dec 1997 12:16:20 +0200 Subject: two papers: On-Line Gibbs Learning Message-ID: <199712241016.AA02233@keter.fiz.huji.ac.il> ftp-host: keter.fiz.huji.ac.il ftp-filenames: pub/OLGA1.tar and pub/OLGA2.tar ______________________________________________________________________________ On-Line Gibbs Learning I: General Theory H. Sompolinsky and J. W. Kim Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel e-mails: jkim at fiz.huji.ac.il ; haim at fiz.huji.ac.il (Submitted to Physical Review E, Dec 97) ABSTRACT We study a new model of on-line learning, the On-Line Gibbs Algorithm (OLGA), which is particularly suitable for supervised learning of realizable and unrealizable discrete valued functions. The learning algorithm is based on an on-line energy function, $E$, that balances the need to minimize the instantaneous error against the need to minimize the change in the weights. The relative importance of these terms in $E$ is determined by a parameter $\la$, the inverse of which plays a role similar to that of the learning rate parameters in other on-line learning algorithms. In the stochastic version of the algorithm, following each presentation of a labeled example the new weights are chosen from a Gibbs distribution with the on-line energy $E$ and a temperature parameter $T$. In the zero temperature version of OLGA, at each step, the new weights are those that minimize the on-line energy $E$. The generalization performance of OLGA is studied analytically in the limit of small learning rate, i.e. $\la \rightarrow \infty$.
It is shown that at finite temperature OLGA converges to an equilibrium Gibbs distribution of weight vectors with an energy function which equals the generalization error function. In its deterministic version, OLGA converges to a local minimum of the generalization error. The rate of convergence is studied for the case in which the parameter $\la$ increases as a power-law in time. It is shown that in a generic unrealizable dichotomy, the asymptotic rate of decrease of the generalization error is proportional to the inverse square root of the number of presented examples. This rate is similar to that of batch learning. The special cases of learning realizable dichotomies or dichotomies with output noise are also treated. The tar postscript file of the paper is placed on "anonymous ftp" at ftp-host: keter.fiz.huji.ac.il ftp-filename: pub/OLGA1.tar _______________________________________________________________________________ On-Line Gibbs Learning II: Application to Perceptron and Multi-layer Networks J. W. Kim and H. Sompolinsky Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel e-mails: jkim at fiz.huji.ac.il ; haim at fiz.huji.ac.il (Submitted to Physical Review E, Dec 97) ABSTRACT In a previous paper (On-line Gibbs Learning I: General Theory) we presented the On-line Gibbs Algorithm (OLGA) and studied analytically its asymptotic convergence. In this paper we apply OLGA to on-line supervised learning in several network architectures: a single-layer perceptron, a two-layer committee machine, and a Winner-Takes-All (WTA) classifier. The behavior of OLGA for a single-layer perceptron is studied both analytically and numerically for a variety of rules: a realizable perceptron rule, a perceptron rule corrupted by output and input noise, and a rule generated by a committee machine.
The two-layer committee machine is studied numerically for the cases of learning a realizable rule as well as a rule that is corrupted by output noise. The WTA network is studied numerically for the case of a realizable rule. The asymptotic results reported in this paper agree with the predictions of the general theory of OLGA presented in article I. In all the studied cases, OLGA converges to a set of weights that minimizes the generalization error. When the learning rate is chosen as a power law with an optimal power, OLGA converges at the same power-law rate as batch learning. The tar postscript file of the paper is placed on "anonymous ftp" at ftp-host: keter.fiz.huji.ac.il ftp-filename: pub/OLGA2.tar From biehl at physik.uni-wuerzburg.de Tue Dec 30 08:27:19 1997 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Tue, 30 Dec 1997 14:27:19 +0100 (MET) Subject: preprint available Message-ID: <199712301327.OAA17838@wptx08.physik.uni-wuerzburg.de> Subject: paper available: Adaptive Systems on Different Time Scales FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/WUE-ITP-97-053.ps.gz The following preprint is now available via anonymous ftp: (See below for the retrieval procedure) --------------------------------------------------------------------- Adaptive Systems on Different Time Scales by D. Endres and Peter Riegler Abstract: The special character of certain degrees of freedom in two-layered neural networks is investigated for on-line learning of realizable rules. Our analysis shows that the dynamics of these degrees of freedom can be put on a faster time scale than the remaining ones, thereby speeding up the overall adaptation process. This is shown for two groups of degrees of freedom: second-layer weights and bias weights. For the former case, our analysis provides a theoretical explanation of phenomenological findings.
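For readers who want a concrete picture of the two-time-scale idea, it can be sketched with a toy on-line learning loop. This is only an illustration, not the authors' actual analysis: the network sizes, learning rates, and the plain SGD dynamics below are invented for the sketch, and the faster time scale of the second-layer weights is modeled simply as a larger learning rate.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 10, 3                              # input dimension, hidden units

# Teacher defining a realizable rule (a fixed two-layer network).
W_t, v_t = rng.normal(size=(K, N)), rng.normal(size=K)

def out(W, v, x):
    return v @ np.tanh(W @ x)

def gen_error(W, v, n=2000):
    """Monte-Carlo estimate of the generalization error."""
    X = rng.normal(size=(n, N))
    d = np.tanh(X @ W.T) @ v - np.tanh(X @ W_t.T) @ v_t
    return 0.5 * float(np.mean(d**2))

# Student of the same architecture, small random initialization.
W, v = 0.1 * rng.normal(size=(K, N)), 0.1 * rng.normal(size=K)
eta_W, eta_v = 0.02, 0.2                  # slow first layer, fast second layer

e_before = gen_error(W, v)
for _ in range(50000):                    # one fresh example per step (on-line)
    x = rng.normal(size=N)
    h = np.tanh(W @ x)
    err = v @ h - out(W_t, v_t, x)
    v -= eta_v * err * h                                 # fast degrees of freedom
    W -= eta_W * np.outer(err * v * (1 - h**2), x)       # slow degrees of freedom
e_after = gen_error(W, v)
print(e_before, e_after)
```

The ratio eta_v/eta_W is the knob separating the two time scales; the paper's point is that letting the second-layer weights equilibrate faster can speed up the overall adaptation without destabilizing the slow dynamics.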
---------------------------------------------------------------------
Retrieval procedure:

unix> ftp ftp.physik.uni-wuerzburg.de
Name: anonymous
Password: {your e-mail address}
ftp> cd pub/preprint/1997
ftp> get WUE-ITP-97-053.ps.gz
ftp> quit
unix> gunzip WUE-ITP-97-053.ps.gz
e.g. unix> lp WUE-ITP-97-053.ps
_____________________________________________________________________

Peter Riegler
Institut fuer Theoretische Physik
Universitaet Wuerzburg
Am Hubland
D-97074 Wuerzburg, Germany
phone: (++49) (0)931 888-4908
fax: (++49) (0)931 888-5141
email: pr at physik.uni-wuerzburg.de
www: http://theorie.physik.uni-wuerzburg.de/~pr
________________________________________________________________

From tgd at CS.ORST.EDU Tue Dec 30 20:09:13 1997 From: tgd at CS.ORST.EDU (Tom Dietterich) Date: Wed, 31 Dec 1997 01:09:13 GMT Subject: Learning with Probabilistic Representations (Machine Learning Special Issue) Message-ID: <199712310109.BAA00980@edison.CS.ORST.EDU> Machine Learning has published the following special issue on LEARNING WITH PROBABILISTIC REPRESENTATIONS Guest Editors: Pat Langley, Gregory M. Provan, and Padhraic Smyth

Learning with Probabilistic Representations
    Pat Langley, Gregory Provan, and Padhraic Smyth
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
    Pedro Domingos and Michael Pazzani
Bayesian Network Classifiers
    Nir Friedman, Dan Geiger, and Moises Goldszmidt
The Sample Complexity of Learning Fixed-Structure Bayesian Networks
    Sanjoy Dasgupta
Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables
    David Maxwell Chickering and David Heckerman
Adaptive Probabilistic Networks with Hidden Variables
    John Binder, Daphne Koller, Stuart Russell, and Keiji Kanazawa
Factorial Hidden Markov Models
    Zoubin Ghahramani and Michael I.
Jordan
Predicting Protein Secondary Structure using Stochastic Tree Grammars
    Naoki Abe and Hiroshi Mamitsuka

For more information, see http://www.cs.orst.edu/~tgd/mlj

--Tom

From kchen at cis.ohio-state.edu Wed Dec 31 09:58:15 1997 From: kchen at cis.ohio-state.edu (Ke CHEN) Date: Wed, 31 Dec 1997 09:58:15 -0500 (EST) Subject: TRs available. Message-ID: The following two papers are available on line now. _________________________________________________________________________ Combining Linear Discriminant Functions with Neural Networks for Supervised Learning Ke Chen{1,2}, Xiang Yu {2} and Huisheng Chi {2} {1} Dept. of CIS and Center for Cognitive Science, Ohio State U. {2} National Lab of Machine Perception, Peking U., China appearing in Neural Computing & Applications 6(1): 19-41, 1997. ABSTRACT A novel supervised learning method is presented by combining linear discriminant functions with neural networks. The proposed method results in a tree-structured hybrid architecture. Due to constructive learning, the binary tree hierarchical architecture is automatically generated by a controlled growing process for a specific supervised learning task. Unlike the classic decision tree, the linear discriminant functions are merely employed in the intermediate levels of the tree for heuristically partitioning a large and complicated task into several smaller and simpler subtasks in the proposed method. These subtasks are dealt with by component neural networks at the leaves of the tree accordingly. For constructive learning, {\it growing} and {\it credit-assignment} algorithms are developed to serve the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g. multi-layered perceptron) for solving a large scale problem. We have already applied the proposed method to a universal approximation problem and several benchmark classification problems in order to evaluate its performance.
Simulation results have shown that the proposed method yields better results and faster training in comparison with the multi-layered perceptron. URL: http://www.cis.ohio-state.edu/~kchen/nca.ps ftp://www.cis.ohio-state.edu/~kchen/nca.ps ___________________________________________________________________________ ___________________________________________________________________________ Methods of Combining Multiple Classifiers with Different Features and Their Applications to Text-Independent Speaker Identification Ke Chen{1,2}, Lan Wang {2} and Huisheng Chi {2} {1} Dept. of CIS and Center for Cognitive Science, Ohio State U. {2} National Lab of Machine Perception, Peking U., China appearing in International Journal of Pattern Recognition and Artificial Intelligence 11(3): 417-445, 1997. ABSTRACT In practical applications of pattern recognition, there are often different features extracted from the raw data that needs to be recognized. Combining multiple classifiers with different features is viewed as a general problem in various application areas of pattern recognition. In this paper, a systematic investigation has been made and possible solutions are classified into three frameworks, i.e. linear opinion pools, winner-take-all and evidential reasoning. For combining multiple classifiers with different features, a novel method is presented in the framework of linear opinion pools, and a modified training algorithm for an associative switch is also proposed in the framework of winner-take-all. In the framework of evidential reasoning, several typical methods are briefly reviewed for use. All the aforementioned methods have already been applied to text-independent speaker identification. The simulations show that the results yielded by the methods described in this paper are better not only than those of the individual classifiers but also than those obtained by combining multiple classifiers with the same feature.
It indicates that combining multiple classifiers with different features is an effective way to attack the problem of text-independent speaker identification. URL: http://www.cis.ohio-state.edu/~kchen/ijprai.ps ftp://www.cis.ohio-state.edu/~kchen/ijprai.ps __________________________________________________________________________ ---------------------------------------------------- Dr. Ke CHEN Department of Computer and Information Science The Ohio State University 583 Dreese Laboratories 2015 Neil Avenue Columbus, Ohio 43210-1277, U.S.A. E-Mail: kchen at cis.ohio-state.edu WWW: http://www.cis.ohio-state.edu/~kchen ------------------------------------------------------ From bocz at btk.jpte.hu Tue Dec 2 19:02:03 1997 From: bocz at btk.jpte.hu (Bocz Andras) Date: Wed, 3 Dec 1997 00:02:03 +0000 Subject: conference call Message-ID: FIRST CALL FOR PAPERS We are happy to announce a conference and workshop, the Multidisciplinary Colloquium on Rules and Rule-Following: Philosophy, Linguistics and Psychology, April 30 - May 2, 1998, at Janus Pannonius University, Pécs, Hungary. Keynote speakers (who have already accepted the invitation): philosophy: György Kampis (Hungarian Academy of Sciences, Budapest); linguistics: Pierre-Yves Raccah (Idl-CNRS, Paris); psychology: Csaba Pléh (Dept. of General Psychology, Loránd Eötvös University, Budapest). Organizing Committee: László Tarnay (JPTE, Dept. of Philosophy), László I. Komlósi (ELTE, Dept. of Psychology), András Bocz (JPTE, Dept. of English Studies); e-mail: tarnay at btk.jpte.hu; komlosi at btk.jpte.hu; bocz at btk.jpte.hu. Advisory Board: Gábor Forrai (Budapest), György Kampis (Budapest), Mike Harnish (Tucson), András Kertész (Debrecen), Kuno Lorenz (Saarbrücken), Pierre-Yves Raccah (Paris), János S.
Petőfi (Macerata)

Aims and scope: The main aim of the conference is to bring together scholars from the fields of cognitive linguistics, philosophy and psychology to investigate the concept of rule and to address various aspects of rule-following. Ever since Wittgenstein formulated, in his famous §201 of the Philosophical Investigations, the idea of a kind of rule-following which is not an interpretation, the concept of rule has become a key but elusive idea in almost every discipline and approach, and not only in the human sciences. No wonder, since without this idea the whole edifice of human (and possibly all other kinds of) rationality would surely collapse. With the rise of cognitive science, and especially the appearance of connectionist models and networks, however, the classical concept of rule is once again seriously contested. To put it very generally, there is an ongoing debate between the classical conception, in which rules appear as a set of formalizable initial conditions or constraints on external operations linking successive states of a given system (algorithms), and a dynamic conception, in which there is nothing that could be correlated with a prior idea of internal well-formedness of the system's states. The debate centers on the representability of rules: either they are conceived of as meta-representations, or they are a mere façon de parler concerning the development of complex systems. Idealizable on the one hand, token-oriented on the other; something to be implemented on the one hand, self-controlling, backpropagational processing on the other. There is, however, a common idea that almost all kinds of rule-conceptions address: the problem of learning. This idea reverberates from Wittgensteinian pragmatics to strategic non-verbal and rule-governed speech behavior, from perceiving similarities to mental processing. Here are some haunting questions:

- How do we acquire knowledge if there are no regularities in the world around us?
- But how can we perceive those regularities?
- And how do we reason on the basis of that knowledge if there are no observable constraints on inferring?
- But if there are, where do they come from and how are they actually implemented mentally?
- And finally: how do we come to act rationally, that is, in accordance with what we have perceived, processed and inferred?

We are interested in all ways of defining rules and in all aspects of rule-following, from the definition of law, rule, regularity, similarity and analogy to logical consequence, argumentational and other inferences, statistical and linguistic rules, practical and strategic reasoning, and pragmatic and praxeological activities. We expect contributions from the following research fields: game theory, action theory, argumentation theory, cognitive science, linguistics, philosophy of language, epistemology, pragmatics, psychology and semiotics. We would be happy to include some contributions from natural sciences such as neurobiology, physiology or the brain sciences. The conference is organized in three major sections: philosophy, psychology and linguistics, with three keynote lectures. Contributions of 30 minutes (20 for the paper and 10 for discussion) then follow. We also plan to organize a workshop at the end of each section.

Abstracts: Abstracts should be one page (maximum 23 lines), specifying the area of contribution and the particular aspect of rule-following to be addressed. Abstracts should be sent by e-mail to tarnay at btk.jpte.hu or bocz at btk.jpte.hu. Hard copies of abstracts may be sent to:

Laszlo Tarnay
Department of Philosophy
Janus Pannonius University
H-7624 Pecs, Hungary

Important dates:
Deadline for submission: January 15, 1998
Notification of acceptance: February 28, 1998
Conference: April 30 - May 2, 1998

*************************************

Bocz András
Department of English
Janus Pannonius University
Ifjúság u. 6.
H-7624 Pécs, Hungary
Tel/Fax: (36) (72) 314714

From terry at salk.edu Tue Dec 2 23:41:30 1997
From: terry at salk.edu (Terry Sejnowski)
Date: Tue, 2 Dec 1997 20:41:30 -0800 (PST)
Subject: Telluride Workshop 1998
Message-ID: <199712030441.UAA08703@helmholtz.salk.edu>

"NEUROMORPHIC ENGINEERING WORKSHOP"
JUNE 29 - JULY 19, 1998
TELLURIDE, COLORADO

Deadline for application is February 1, 1998.

Avis COHEN (University of Maryland)
Rodney DOUGLAS (University of Zurich and ETH, Zurich/Switzerland)
Christof KOCH (California Institute of Technology)
Terrence SEJNOWSKI (Salk Institute and UCSD)
Shihab SHAMMA (University of Maryland)

We invite applications for a three-week summer workshop that will be held in Telluride, Colorado from Monday, June 29 to Sunday, July 19, 1998. The 1997 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at the California Institute of Technology, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html We strongly encourage interested parties to browse through these reports and photo albums.

GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both the neurobiological and engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants.
Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of real biological nervous systems as whole systems.

FORMAT: The three-week summer workshop will include background lectures on systems neuroscience (in particular learning, oculo-motor and other motor systems, and attention), practical tutorials on analog VLSI design, small mobile robots (Koalas), hands-on projects, and special interest groups. Participants are required to take part in, and possibly complete, at least one of the proposed projects (soon to be defined). They are furthermore encouraged to become involved in as many of the other activities as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons and after dinner. The analog VLSI practical tutorials will cover all aspects of analog VLSI design, simulation, layout, and testing over the three weeks of the workshop.
The first week covers the basics of transistors, simple circuit design, and simulation. This material is intended for participants who have no experience with analog VLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners to building the peripheral boards necessary for interfacing analog VLSI retinas to video output monitors. Retina chips will be provided. The third week will feature sessions on floating gates, including lectures on the physics of tunneling and injection, and on inter-chip communication systems. We will also feature a tutorial on the use of small mobile robots, focusing on Koalas, as an ideal platform for vision, auditory and sensory-motor circuits. Projects carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generators, robotics, multichip communication, analog VLSI, and learning. The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three-degree-of-freedom binocular camera system that is fully programmable. The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots. The "robotics" group will use rovers, robot arms and working digital vision boards to investigate issues of sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.
The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed.

LOCATION AND ARRANGEMENTS: The workshop will take place at the Telluride Elementary School, located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles). Continental and United Airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. No cars are required. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems-level neurophysiology, or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work, or bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshop will assist with software and hardware issues. We will have a network of Sun workstations running UNIX, Macs, and PCs running Linux and Windows 95. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three-week workshop.
FINANCIAL ARRANGEMENTS: We have several funding requests pending to pay for most of the costs associated with this workshop. Unlike in previous years, after notifications of acceptance have been mailed out around March 15, 1998, participants are expected to pay a $250 workshop fee. In cases of real hardship, this can be waived. Shared condominiums will be provided for all academic participants at no cost to them. We expect participants from national laboratories and industry to pay for these modestly priced condominiums. We expect to have funds to reimburse a small number of participants for travel (up to $500 for domestic travel and up to $800 for overseas travel). Please specify on the application whether such financial help is needed.

HOW TO APPLY: The deadline for receipt of applications is February 1, 1998. Applicants should be at the level of graduate students or above (i.e., post-doctoral fellows, faculty, research and engineering staff, and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply.

Applications should include:
1. Name, address, telephone, e-mail, FAX, and minority status (optional).
2. Curriculum vitae.
3. One-page summary of background and interests relevant to the workshop.
4. Description of special equipment needed for demonstrations that could be brought to the workshop.
5. Two letters of recommendation.

Complete applications should be sent to:

Prof. Terrence Sejnowski
The Salk Institute
10010 North Torrey Pines Road
San Diego, CA 92037
email: terry at salk.edu
FAX: (619) 587 0417

Applicants will be notified around March 15, 1998.

From jfeldman at ICSI.Berkeley.EDU Wed Dec 3 12:25:23 1997
From: jfeldman at ICSI.Berkeley.EDU (Jerry Feldman)
Date: Wed, 03 Dec 1997 09:25:23 -0800
Subject: backprop w/o math?
Message-ID: <34859603.7B03@icsi.berkeley.edu>

I would appreciate advice on how best to teach backprop to undergraduates who may have little or no math. We are planning to use Plunkett and Elman's Tlearn for exercises. This is part of a new course with George Lakoff, modestly called "The Neural Basis of Thought and Language". We would also welcome any more general comments on the course: http://www.icsi.berkeley.edu/~mbrodsky/cogsci110/

-- Jerry Feldman

From shavlik at cs.wisc.edu Fri Dec 5 15:12:30 1997
From: shavlik at cs.wisc.edu (Jude Shavlik)
Date: Fri, 5 Dec 1997 14:12:30 -0600 (CST)
Subject: CFP: 1998 Machine Learning Conference
Message-ID: <199712052012.OAA17873@jersey.cs.wisc.edu>

Call for Papers

THE FIFTEENTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING
July 24-26, 1998
Madison, Wisconsin, USA

The Fifteenth International Conference on Machine Learning (ICML-98) will be held at the University of Wisconsin, Madison from July 24 to July 26, 1998. ICML-98 will be collocated with the Eleventh Annual Conference on Computational Learning Theory (COLT-98) and the Fourteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-98). Seven additional conferences, including the Fifteenth National Conference on Artificial Intelligence (AAAI-98), will also be held in Madison (see http://www.cs.wisc.edu/icml98/ for a complete list). Submissions are invited that describe empirical, theoretical, and cognitive-modeling research in all areas of machine learning. Submissions that present algorithms for novel learning tasks, interdisciplinary research involving machine learning, or innovative applications of machine learning techniques to challenging, real-world problems are especially encouraged. The deadline for submissions is MARCH 2, 1998. (An electronic version of the title page is due February 27, 1998.) See http://www.cs.wisc.edu/icml98/callForPapers.html for submission details.
There are also three joint ICML/AAAI workshops being held July 27, 1998:

  Developing ML Applications: Problem Definition, Task Decomposition, and Technique Selection
  Learning for Text Categorization
  Predicting the Future: AI Approaches to Time-Series Analysis

The submission deadline for these WORKSHOPS is MARCH 11, 1998. Additional details about the workshops are available via http://www.cs.wisc.edu/icml98/

[My apologies if you receive multiple copies of this announcement.]

From lwh at montefiore.ulg.ac.be Fri Dec 5 02:43:56 1997
From: lwh at montefiore.ulg.ac.be (WEHENKEL Louis)
Date: Fri, 5 Dec 1997 08:43:56 +0100
Subject: Book announcement "Automatic learning techniques in power systems"
Message-ID: 

Dear colleagues,

This is to announce the availability of a new book on "Automatic learning techniques in power systems". In the coming weeks, I will add some information on my own web site http://www.montefiore.ulg.ac.be/~lwh/.

Cordially,

--
*******************************************************************************
Louis WEHENKEL
Research Associate, National Fund for Scientific Research
Department of Electrical Engineering, University of Liège
Institut Montefiore - SART TILMAN B28, B-4000 LIEGE - BELGIUM
Tel. + 32 4 366.26.84   Fax. + 32 4 366.29.84
Email. lwh at montefiore.ulg.ac.be
!!! New telephone numbers from September 15th, 1996 onwards
*******************************************************************************

>> The web site for this book is http://www.wkap.nl/book.htm/0-7923-8068-1
>>
>> Automatic Learning Techniques in Power Systems
>>
>> by Louis A. Wehenkel
>> University of Liège, Institut Montefiore, Belgium
>>
>> THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE
>> Volume 429
>>
>> Automatic learning is a complex, multidisciplinary field of research
>> and development, involving theoretical and applied methods from
>> statistics, computer science, artificial intelligence, biology and
>> psychology.
Its applications to engineering problems, such as those
>> encountered in electrical power systems, are therefore challenging
>> yet extremely promising. More and more data have become available,
>> collected from the field by systematic archiving, or generated through
>> computer-based simulation. To handle this explosion of data, automatic
>> learning can be used to provide systematic approaches, without which
>> the increasing amounts of data and computer power would be of little use.
>>
>> Automatic Learning Techniques in Power Systems is dedicated to the
>> practical application of automatic learning to power systems. Power
>> systems to which automatic learning can be applied are screened, and
>> the complementary aspects of automatic learning, with respect to
>> analytical methods and numerical simulation, are investigated.
>>
>> This book presents a representative subset of automatic learning
>> methods - basic and more sophisticated ones - available from
>> statistics (both classical and modern) and from artificial
>> intelligence (both hard and soft computing). The text also discusses
>> appropriate methodologies for combining these methods to make the best
>> use of available data in the context of real-life problems.
>>
>> Automatic Learning Techniques in Power Systems is a useful reference
>> source for professionals and researchers developing automatic learning
>> systems in the electrical power field.
>>
>> Contents
>>
>> List of Figures.
>> List of Tables.
>> Preface.
>> 1. Introduction.
>> Part I: Automatic Learning Methods.
>> 2. Automatic Learning is Searching a Model Space.
>> 3. Statistical Methods.
>> 4. Artificial Neural Networks.
>> 5. Machine Learning.
>> 6. Auxiliary Tools and Hybrid Techniques.
>> Part II: Application of Automatic Learning to Security Assessment.
>> 7. Framework for Applying Automatic Learning to DSA.
>> 8. Overview of Security Problems.
>> 9. Security Information Data Bases.
>> 10. A Sample of Real-Life Applications.
>> 11. Added Value of Automatic Learning.
>> 12. Future Orientations.
>> Part III: Automatic Learning Applications in Power Systems.
>> 13. Overview of Applications by Type.
>> References.
>> Index.
>> Glossary.
>>
>> 1998, 320pp. ISBN 0-7923-8068-1 PRICE : US$ 122.00

-------
For information on how to subscribe / change address / delete your name from the Power Globe, see http://www.powerquality.com/powerglobe/

From rybaki at eplrx7.es.dupont.com Thu Dec 4 09:39:18 1997
From: rybaki at eplrx7.es.dupont.com (Ilya Rybak)
Date: Thu, 4 Dec 1997 09:39:18 -0500
Subject: No subject
Message-ID: <199712041439.JAA24126@pavlov>

Dear colleagues,

I have received a number of messages saying that the following web pages of mine are not available from many servers because of some problem (?) in VOICENET:

BMV: Behavioral model of active visual perception and recognition
http://www.voicenet.com/~rybak/vnc.html

Modeling neural mechanisms for orientation selectivity in the visual cortex:
http://www.voicenet.com/~rybak/iod.html

Modeling interacting populations of biological neurons:
http://www.voicenet.com/~rybak/pop.html

Modeling neural mechanisms for respiratory rhythmogenesis:
http://www.voicenet.com/~rybak/resp.html

Modeling neural mechanisms for the baroreceptor vagal reflex:
http://www.voicenet.com/~rybak/baro.html

I thank everyone who informed me about the VOICENET restrictions. I also apologize for the inconvenience and would like to inform those of you who could not reach the above pages from your servers that YOU CAN VISIT THESE PAGES USING THE NUMERIC IP ADDRESS. To do this, please replace http://www.voicenet.com/~rybak/.... with http://207.103.26.247/~rybak/...

The pages have been recently updated and may be of interest to people working in the fields of computational neuroscience and vision.

Sincerely,
Ilya Rybak

=======================================
Dr. Ilya Rybak
Neural Computation Program
E.I. du Pont de Nemours & Co.
Central Research Department
Experimental Station, E-328/B31
Wilmington, DE 19880-0328 USA
Tel.: (302) 695-3511
Fax : (302) 695-8901
E-mail: rybaki at eplrx7.es.dupont.com
URL: http://www.voicenet.com/~rybak/
     http://207.103.26.247/~rybak/
========================================

From sml%essex.ac.uk at seralph21.essex.ac.uk Thu Dec 4 11:30:39 1997
From: sml%essex.ac.uk at seralph21.essex.ac.uk (Simon Lucas)
Date: Thu, 04 Dec 1997 16:30:39 +0000
Subject: SOFM Demo Applet + source code
Message-ID: <3486DAAE.21C9@essex.ac.uk>

Dear All,

I've written a colourful self-organising map demo - follow the link on my homepage listed below. This runs as a Java applet, and allows one to experiment with the effects of changing the neighbourhood radius. This may be of interest to those who have to teach the stuff - I've also included a complete listing of the source code.

Best Regards,

Simon Lucas

------------------------------------------------
Dr. Simon Lucas
Department of Electronic Systems Engineering
University of Essex
Colchester CO4 3SQ
United Kingdom
Tel: (+44) 1206 872935
Fax: (+44) 1206 872900
Email: sml at essex.ac.uk
http://esewww.essex.ac.uk/~sml
secretary: Mrs Wendy Ryder (+44) 1206 872437
-------------------------------------------------

From sylee at eekaist.kaist.ac.kr Tue Dec 9 04:06:49 1997
From: sylee at eekaist.kaist.ac.kr (sylee)
Date: Tue, 09 Dec 1997 18:06:49 +0900
Subject: Special Invited Session on Neural Networks, SCI'98
Message-ID: <348D0A19.7CDE@ee.kaist.ac.kr>

I have been asked to organize a Special Invited Session on Neural Networks at the World Multiconference on Systemics, Cybernetics, and Informatics (SCI'98), to be held on July 12-16, 1998, in Orlando, Florida, US. SCI is a truly multi-disciplinary conference, where researchers come from many different disciplines. I believe this multi-disciplinary nature is quite unique and is helpful to neural networks researchers.
We all know that neural networks research is interdisciplinary and can contribute to many different applications. You are cordially invited to present your valuable research at SCI'98 and enjoy interesting discussions with researchers from different disciplines. If interested, please inform me of your intention by e-mail. I need to fix the author(s) and paper titles by January 10th, 1998. A brief introductory remark on SCI'98 is attached, and more may be found at http://www.iiis.org.

Best regards,

Prof. Soo-Young Lee
Computation and Neural Systems Laboratory
Department of Electrical Engineering
Korea Advanced Institute of Science and Technology
373-1 Kusong-dong, Yusong-gu
Taejon 305-701
Korea (South)
Tel: +82-42-869-3431
Fax: +82-42-869-8570
E-mail: sylee at ee.kaist.ac.kr

------------------------------------------------------------------
SCI'98

The purpose of the Conference is to bring together university professors, corporate leaders, academic and professional leaders, consultants, scientists and engineers, theoreticians and practitioners, from all over the world, to discuss the themes of the conference and to participate with original ideas or innovations, knowledge or experience, theories or methodologies, in the areas of Systemics, Cybernetics and Informatics (SCI). Systemics, Cybernetics and Informatics are increasingly related to each other and to almost every scientific discipline and human activity. Their common transdisciplinarity characterizes and connects them, generating strong relations among them and with other disciplines. They interpenetrate each other, integrating a whole that is permeating human thinking and practice. This phenomenon induced the Organization Committee to structure SCI'98 as a multiconference where participants may focus on one area or discipline, while keeping open the possibility of attending conferences from other areas or disciplines.
This systemic approach stimulates cross-fertilization among different disciplines, inspiring scholars, generating analogies and provoking innovations; which, after all, is one of the very basic principles of the systems movement and a fundamental aim of cybernetics.

From sylee at eekaist.kaist.ac.kr Tue Dec 9 02:12:32 1997
From: sylee at eekaist.kaist.ac.kr (sylee)
Date: Tue, 09 Dec 1997 16:12:32 +0900
Subject: KAIST Brain Science Research Center
Message-ID: <348CEF60.24CB@ee.kaist.ac.kr>

Inauguration of the Brain Science Research Center at the Korea Advanced Institute of Science and Technology

On December 6th, 1997, the Brain Science Research Center (BSRC) had its inauguration ceremony at the Korea Advanced Institute of Science and Technology (KAIST). The Korean Ministry of Science and Technology announced a 10-year national program on "brain research" on September 30, 1997. The research program consists of two main streams, i.e., developing brain-like computing systems and overcoming brain diseases. Starting from 1998, the total budget will reach about 1 billion US dollars. The KAIST BSRC will be the main research organization for brain-like computing systems, and 69 researchers from 18 Korean education and research organizations have joined the BSRC. Its seven research areas are:

o neurobiology
o cognitive science
o neural network models
o hardware implementation of neural networks
o artificial vision and auditory systems
o inference technology
o intelligent control and communication systems

More information may be available from Prof.
Soo-Young Lee, Director
Brain Science Research Center
Korea Advanced Institute of Science and Technology
373-1 Kusong-dong, Yusong-gu
Taejon 305-701
Korea (South)
Tel: +82-42-869-3431
Fax: +82-42-869-8570
E-mail: sylee at ee.kaist.ac.kr

From moshe.sipper at di.epfl.ch Tue Dec 9 04:37:45 1997
From: moshe.sipper at di.epfl.ch (Moshe Sipper)
Date: Tue, 09 Dec 1997 10:37:45 +0100
Subject: ICES98: Second Call for Papers
Message-ID: <199712090937.KAA01605@lslsun7.epfl.ch>

Second Call for Papers

+-------------------------------------------------------+
| Second International Conference on Evolvable Systems: |
| From Biology to Hardware (ICES98)                     |
+-------------------------------------------------------+

The 1998 International EPFL-Latsis Foundation Conference
Swiss Federal Institute of Technology, Lausanne, Switzerland
September 23-26, 1998

+-------------------------------+
| http://lslwww.epfl.ch/ices98/ |
+-------------------------------+

In Cooperation With:
IEEE Neural Networks Council
EvoNet - The European Network of Excellence in Evolutionary Computing
Societe Suisse des Informaticiens, Section Romande (SISR)
Societe des Informaticiens (SI)
Centre Suisse d'Electronique et de Microtechnique SA (CSEM)
Swiss Foundation for Research in Microtechnology
Schweizerische Gesellschaft fur Nanowissenschaften und Nanotechnik (SGNT)

The idea of evolving machines, whose origins can be traced to the cybernetics movement of the 1940s and the 1950s, has recently resurged in the form of the nascent field of bio-inspired systems and evolvable hardware. The inaugural workshop, Towards Evolvable Hardware, took place in Lausanne in October 1995, followed by the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), held in Japan in October 1996.
Following the success of these past events, ICES98 will reunite this burgeoning community, presenting the latest developments in the field and bringing together researchers who use biologically inspired concepts to implement real systems in artificial intelligence, artificial life, robotics, VLSI design, and related domains. Topics to be covered will include, but not be limited to, the following list:

* Evolving hardware systems.
* Evolutionary hardware design methodologies.
* Evolutionary design of electronic circuits.
* Self-replicating hardware.
* Self-repairing hardware.
* Embryonic hardware.
* Neural hardware.
* Adaptive hardware platforms.
* Autonomous robots.
* Evolutionary robotics.
* Bio-robotics.
* Applications of nanotechnology.
* Biological- and chemical-based systems.
* DNA computing.

o General Chair: Daniel Mange, Swiss Federal Institute of Technology - Lausanne
o Program Chair: Moshe Sipper, Swiss Federal Institute of Technology - Lausanne
o International Steering Committee
  * Tetsuya Higuchi, Electrotechnical Laboratory (Japan)
  * Hiroaki Kitano, Sony Computer Science Laboratory (Japan)
  * Daniel Mange, Swiss Federal Institute of Technology (Switzerland)
  * Moshe Sipper, Swiss Federal Institute of Technology (Switzerland)
  * Andres Perez-Uribe, Swiss Federal Institute of Technology (Switzerland)
o Conference Secretariat
  Andres Perez-Uribe, Swiss Federal Institute of Technology - Lausanne
  Inquiries: Andres.Perez at di.epfl.ch, Tel.: +41-21-6932652, Fax: +41-21-6933705.
o Important Dates
  * March 1, 1998: Submission deadline.
  * May 1, 1998: Notification of acceptance.
  * June 1, 1998: Camera-ready due.
  * September 23-26, 1998: Conference dates.
o Publisher: Springer-Verlag
o Official Language: English
o Submission Procedure
  Papers should not be longer than 10 pages (including figures and bibliography) in the Springer-Verlag llncs style (see Web page for complete instructions).
Authors must submit five (5) complete copies of their paper (hardcopy only), received by March 1st, 1998, to the Program Chairman:

  Moshe Sipper - ICES98 Program Chair
  Logic Systems Laboratory
  Swiss Federal Institute of Technology
  CH-1015 Lausanne, Switzerland

o The Self-Replication Contest
  * When: The self-replication contest will be held during the ICES98 conference.
  * Object: Demonstrate a self-replicating machine, implemented in some physical medium, e.g., mechanical, chemical, electronic, etc.
  * Important: The machine must be demonstrated AT THE CONFERENCE site. Paper submissions will not be considered.
  * Participation: The contest is open to all conference attendees (at least one member of any participating team must be a registered attendee).
  * Prize:
    + The most original design will be awarded a prize of $1000 (one thousand dollars).
    + The judgment shall be made by a special contest committee.
    + The committee's decision is final and incontestable.
  * WWW: Potential participants are advised to consult the self-replication page: http://lslwww.epfl.ch/~moshes/selfrep/
  * If you intend to participate, please inform the conference secretary: Andres.Perez at di.epfl.ch.
o Best-Paper Awards
  * Among the papers presented at ICES98, two will be chosen by a special committee and awarded, respectively, the best paper award and the best student paper award.
  * The committee's decision is final and incontestable.
  * All papers are eligible for the best paper award. To be eligible for the best student paper award, the first coauthor must be a full-time student.
o Invited Speakers
  * Dr. David B. Fogel, Natural Selection, Inc.; Editor-in-Chief, IEEE Transactions on Evolutionary Computation
  * Prof. Lewis Wolpert, University College London
o Tutorials
  Four tutorials, delivered by experts in the field, will take place on Wednesday, September 23, 1998 (contingent upon a sufficient number of registrants).
  * "An Introduction to Molecular and DNA Computing," Prof. Max H.
Garzon, University of Memphis * "An Introduction to Nanotechnology," Dr. James K. Gimzewski, IBM Zurich Research Laboratory * "Configurable Computing," Dr. Tetsuya Higuchi, Electrotechnical Laboratory * "An Introduction to Evolutionary Computation," Dr. David B. Fogel, Natural Selection, Inc. o Program * To be posted around May-June, 1998. o Local arrangements * See Web page. o Program Committee * Hojjat Adeli, Ohio State University (USA) * Igor Aleksander, Imperial College (UK) * David Andre, Stanford University (USA) * William W. Armstrong, University of Alberta (Canada) * Forrest H. Bennett III, Stanford University (USA) * Joan Cabestany, Universitat Politecnica de Catalunya (Spain) * Leon O. Chua, University of California at Berkeley (USA) * Russell J. Deaton, University of Memphis (USA) * Boi Faltings, Swiss Federal Institute of Technology (Switzerland) * Dario Floreano, Swiss Federal Institute of Technology (Switzerland) * Terry Fogarty, Napier University (UK) * David B. Fogel, Natural Selection, Inc. (USA) * Hugo de Garis, ATR Human Information Processing Laboratories (Japan) * Max H. Garzon, University of Memphis (USA) * Erol Gelenbe, Duke University (USA) * Wulfram Gerstner, Swiss Federal Institute of Technology (Switzerland) * Reiner W. Hartenstein, University of Kaiserslautern (Germany) * Inman Harvey, University of Sussex (UK) * Hitoshi Hemmi, NTT Human Interface Labs (Japan) * Jean-Claude Heudin, Pole Universitaire Leonard de Vinci (France) * Lishan Kang, Wuhan University (China) * John R. Koza, Stanford University (USA) * Pier L. Luisi, ETH Zentrum (Switzerland) * Bernard Manderick, Free University Brussels (Belgium) * Pierre Marchal, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * Juan J. Merelo, Universidad de Granada (Spain) * Julian Miller, Napier University (UK) * Francesco Mondada, Swiss Federal Institute of Technology (Switzerland) * J. 
Manuel Moreno, Universitat Politecnica de Catalunya (Spain) * Pascal Nussbaum, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * Christian Piguet, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * James Reggia, University of Maryland at College Park (USA) * Eytan Ruppin, Tel Aviv University (Israel) * Eduardo Sanchez, Swiss Federal Institute of Technology (Switzerland) * Andre Stauffer, Swiss Federal Institute of Technology (Switzerland) * Luc Steels, Vrije Universiteit Brussel (Belgium) * Daniel Thalmann, Swiss Federal Institute of Technology (Switzerland) * Adrian Thompson, University of Sussex (UK) * Marco Tomassini, University of Lausanne (Switzerland) * Goran Wendin, Chalmers University of Technology and Goeteborg University (Sweden) * Lewis Wolpert, University College London (UK) * Xin Yao, Australian Defence Force Academy (Australia) From Bill_Warren at Brown.edu Tue Dec 9 11:59:45 1997 From: Bill_Warren at Brown.edu (Bill Warren) Date: Tue, 9 Dec 1997 11:59:45 -0500 (EST) Subject: Graduate Traineeships: Visual Navigation in Humans and Robots Message-ID: Graduate Traineeships Visual Navigation in Humans and Robots Brown University The Department of Cognitive and Linguistic Sciences and the Department of Computer Science at Brown University are seeking graduate applicants interested in visual navigation. The project investigates situated learning of spatial knowledge used to guide navigation in humans and robots. Human experiments study active navigation and landmark recognition in virtual environments, whose structure is manipulated during learning and transfer. In conjunction, biologically inspired navigation strategies are tested on a mobile robot platform. Computational modeling pursues (a) a neural net model of the hippocampus and (b) reinforcement learning and hidden Markov models for spatial navigation.
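The reinforcement-learning strand of such a project can be illustrated, in spirit only, by a minimal tabular Q-learning agent on a toy grid navigation task. The grid size, reward scheme, and hyperparameters below are arbitrary assumptions for the sketch, not details of the Brown project's actual models.

```python
import random

# Tabular Q-learning on a toy 4x4 grid: start (0,0), goal (3,3).
# A generic illustration of RL for navigation (assumed settings).
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
SIZE, GOAL = 4, (3, 3)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(state, a):
    """Deterministic move, clipped at the grid border; small step penalty."""
    x, y = state
    dx, dy = ACTIONS[a]
    nxt = (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else -0.01)

Q = {((x, y), a): 0.0 for x in range(SIZE) for y in range(SIZE) for a in range(4)}
random.seed(0)
for _ in range(500):                 # training episodes
    s = (0, 0)
    while s != GOAL:
        a = (random.randrange(4) if random.random() < EPS
             else max(range(4), key=lambda a: Q[(s, a)]))
        s2, r = step(s, a)
        target = r + GAMMA * max(Q[(s2, b)] for b in range(4))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# Follow the learned greedy policy; the shortest path needs 6 steps.
s, steps = (0, 0), 0
while s != GOAL and steps < 20:
    s, _ = step(s, max(range(4), key=lambda a: Q[(s, a)]))
    steps += 1
print(steps)
```

Replacing the table with a linear function approximator, or feeding the agent landmark observations, would move the sketch closer to the kinds of models mentioned above.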
Underlying questions include the geometric structure of the spatial knowledge that is used in active navigation, and how it interacts with the structure of the environment and the navigational task during learning. The project is under the direction of Leslie Kaelbling (Computer Science, www.cs.brown.edu), Michael Tarr and William Warren (Cognitive & Linguistic Sciences, www.cog.brown.edu). Three graduate traineeships are available, beginning in the Fall of 1998. Applicants should apply to either of these home departments. Application materials can be obtained from: The Graduate School, Brown University, Box 1867, Providence, RI 02912, phone (401) 863-2600, www.brown.edu. The application deadline is Jan. 1, 1998. -- Bill William H. Warren, Professor Dept. of Cognitive & Linguistic Sciences Box 1978 Brown University Providence, RI 02912 (401) 863-3980 ofc, 863-2255 FAX Bill_Warren at brown.edu From ptodd at hellbender.mpib-berlin.mpg.de Wed Dec 10 10:43:34 1997 From: ptodd at hellbender.mpib-berlin.mpg.de (ptodd@hellbender.mpib-berlin.mpg.de) Date: Wed, 10 Dec 97 16:43:34 +0100 Subject: pre/postdoc positions in modeling cognitive mechanisms Message-ID: <9712101543.AA21683@hellbender.mpib-berlin.mpg.de> Hello--we are seeking pre- and postdoctoral applicants for positions in our Center for Adaptive Behavior and Cognition to study simple cognitive mechanisms in humans (and other animals), both through simulation modeling techniques and experimentation. Please visit our website (http://www.mpib-berlin.mpg.de/abc) for more information on what we're up to here, and circulate the following ad to all appropriate parties. 
cheers, Peter Todd Center for Adaptive Behavior and Cognition Max Planck Institute for Human Development Berlin, Germany ****************************************************************************** The Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, Germany, is seeking applicants for 1 one-year Predoctoral Fellowship (tax-free stipend DM 21,600) and 1 two-year Postdoctoral Fellowship (tax-free stipend range DM 40,000-44,000) beginning in September 1998. Candidates should be interested in modeling bounded rationality in real-world domains, and should have expertise in one of the following areas: judgment and decision making, evolutionary psychology or biology, cognitive anthropology, experimental economics and social games, risk-taking. For a detailed description of our research projects and current researchers, please visit our WWW homepage at http://www.mpib-berlin.mpg.de/abc or write to Dr. Peter Todd at ptodd at mpib-berlin.mpg.de . The working language of the center is English. Send applications (curriculum vitae, letters of recommendation, and reprints) by February 28, 1998 to Professor Gerd Gigerenzer, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany. ****************************************************************************** From Rob.Callan at solent.ac.uk Thu Dec 11 06:12:27 1997 From: Rob.Callan at solent.ac.uk (Rob.Callan@solent.ac.uk) Date: Thu, 11 Dec 1997 11:12:27 +0000 Subject: PhD studentship Message-ID: <8025656A.003D7CDD.00@hercules.solent.ac.uk> A studentship is available for someone with an interest in traditional AI and connectionism (or 'new AI') to explore adapting a machine's behaviour by instruction. 
Further details can be found at: http://www.solent.ac.uk/syseng/create/html/ai/aires.html or from Academic Quality Service Southampton Institute East Park Terrace Southampton UK SO14 0YN Tel: (01703) 319901 Rob Callan From john at dcs.rhbnc.ac.uk Thu Dec 11 07:45:57 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Thu, 11 Dec 97 12:45:57 +0000 Subject: NEUROCOMPUTING Special Issue - note extended deadline January 19, 1998 Message-ID: <199712111245.MAA18713@platon.cs.rhbnc.ac.uk> NEUROCOMPUTING announces a Special Issue on Theoretical analysis of real-valued function classes. Manuscripts are solicited for a special issue of NEUROCOMPUTING on the topic of 'Theoretical analysis of real-valued function classes'. Analysis of Neural Networks both as approximators and classifiers frequently relies on viewing them as real-valued function classes. This perspective places Neural Network research in a far broader context linking it with approximation theory, complexity theory over the reals, statistical and computational learning theory, among others. The aim of the special issue is to bring these links into focus by inviting papers on the theoretical analysis of real-valued function classes where the results are relevant to the special case of Neural Networks. The following is a (non-exhaustive) list of possible topics of interest for the SPECIAL ISSUE: - Measures of Complexity - Approximation rates - Learning algorithms - Generalization estimation - Real valued complexity analysis - Novel neural functionality and its analysis, e.g. spiking neurons - Kernel and alternative representations - Algorithms for forming real-valued combinations of functions - On-line learning algorithms The following team of editors will be processing the submitted papers: John Shawe-Taylor (coordinating editor), Royal Holloway, Univ.
of London Shai Ben-David, Technion, Israel Pascal Koiran, ENS, Lyon, France Rob Schapire, AT&T Labs, Florham Park, USA The deadline for submissions is January 19, 1998. Every effort will be made to ensure fast processing of the papers by editors and referees. For this reason electronic submission of postscript files (compressed and uuencoded) via email is encouraged. Submission should be to the following address: John Shawe-Taylor Dept of Computer Science Royal Holloway, University of London Egham, Surrey TW20 0EX UK or Email: jst at dcs.rhbnc.ac.uk From luo at late.e-technik.uni-erlangen.de Fri Dec 12 09:45:43 1997 From: luo at late.e-technik.uni-erlangen.de (Fa-Long Luo) Date: Fri, 12 Dec 1997 15:45:43 +0100 Subject: book announcement: Applied Neural Networks for Signal Processing Message-ID: <199712121445.PAA00140@late5.e-technik.uni-erlangen.de> New Book Applied Neural Networks for Signal Processing Authors: Fa-Long Luo and Rolf Unbehauen Publisher: Cambridge University Press ISBN: 0 521 56391 7 The use of neural networks in signal processing is becoming increasingly widespread, with applications in areas such as filtering, parameter estimation, signal detection, pattern recognition, signal reconstruction, system identification, signal compression, and signal transmission. The signals concerned include audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical and others. The key features of neural networks involved in signal processing are their asynchronous parallel and distributed processing, nonlinear dynamics, global interconnection of network elements, self-organization and high-speed computational capability. With these features, neural networks can provide very powerful means for solving many problems encountered in signal processing, especially, in nonlinear signal processing, real-time signal processing, adaptive signal processing and blind signal processing. 
From an engineering point of view, this book aims to provide a detailed treatment of neural networks for signal processing by covering basic principles, modelling, algorithms, architectures, implementation procedures and well-designed simulation examples. This book is organized into nine chapters: Chap. 1: Fundamental Models of Neural Networks for Signal Processing, Chap. 2: Neural Networks for Filtering, Chap. 3: Neural Networks for Spectral Estimation, Chap. 4: Neural Networks for Signal Detection, Chap. 5: Neural Networks for Signal Reconstruction, Chap. 6: Neural Networks for Principal Components and Minor Components, Chap. 7: Neural Networks for Array Signal Processing, Chap. 8: Neural Networks for System Identification, Chap. 9: Neural Networks for Signal Compression. This book will be an invaluable reference for scientists and engineers working in communications, control or any other field related to signal processing. It can also be used as a textbook for graduate courses in electrical engineering and computer science. Contact:
Dr. Fa-Long Luo
Lehrstuhl fuer Allgemeine und Theoretische Elektrotechnik
Cauerstr. 7, 91058 Erlangen, Germany
Tel: +49 9131 857794 Fax: +49 9131 13435
Email: luo at late.e-technik.uni-erlangen.de
or
Dr. Philip Meyler
Cambridge University Press
40 West 20th Street, New York, NY 10011-4211, USA
Tel: +1 212 924 3900 ext. 472 Fax: +1 212 691 3239
E-mail: pmeyler at cup.org
http://www.cup.org/Titles/56/0521563917.html From harnad at coglit.soton.ac.uk Fri Dec 12 15:58:21 1997 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Fri, 12 Dec 1997 20:58:21 GMT Subject: Relational Complexity: BBS Call for Commentators Message-ID: <199712122058.UAA25682@amnesia.psy.soton.ac.uk> Errors-to: harnad1 at coglit.soton.ac.uk Reply-to: bbs at coglit.soton.ac.uk Below is the abstract of a forthcoming BBS target article on: PROCESSING CAPACITY DEFINED BY RELATIONAL COMPLEXITY: Implications for Comparative, Developmental, and Cognitive Psychology by Graeme S. Halford, William H. Wilson, and Steven Phillips This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates.
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. ____________________________________________________________________ PROCESSING CAPACITY DEFINED BY RELATIONAL COMPLEXITY: Implications for Comparative, Developmental, and Cognitive Psychology Graeme Halford Department of Psychology University of Queensland 4072 AUSTRALIA gsh at psy.uq.oz.au William H. Wilson School of Computer Science and Engineering University of New South Wales Sydney New South Wales 2052 AUSTRALIA billw at cse.unsw.edu.au http://www.cse.unsw.edu.au/~billw Steven Phillips Cognitive Science Section Electrotechnical Laboratory 1-1-4 Umezono Tsukuba Ibaraki 305 JAPAN stevep at etl.go.jp http://www.etl.go.jp/etl/ninchi/stevep at etl.go.jp/welcome.html KEYWORDS: Capacity, complexity, working memory, central executive, resource, cognitive development, comparative psychology, neural nets, representation of relations, chunking ABSTRACT: Working memory limitations are best defined in terms of the complexity of relations that can be processed in parallel. Relational complexity is related to processing loads in problem solving, and can distinguish between higher animal species as well as between children of different ages. Complexity is defined by the number of dimensions, or sources of variation, that are related. A unary relation has one argument and one source of variation because its argument can be instantiated in only one way at a time. A binary relation has two arguments and two sources of variation because two argument instantiations are possible at once. A ternary relation is three dimensional, a quaternary relation is four dimensional, and so on.
Dimensionality is related to the number of "chunks," because both attributes on dimensions and chunks are independent units of information of arbitrary size. Empirical studies of working memory limitations indicate a soft limit which corresponds to processing one quaternary relation in parallel. More complex concepts are processed by segmentation or conceptual chunking. In segmentation, tasks are broken into components which do not exceed processing capacity and are processed serially. In conceptual chunking, representations are "collapsed" to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Parallel distributed implementations of relational representations show that relations with more arguments have a higher computational cost; this corresponds to empirical observations of higher processing loads in humans. Empirical evidence is presented that relational complexity (1) distinguishes between higher species, (2) is related to processing load in reasoning and in sentence comprehension, and (3) increases with age in the relations that children can process. Implications for neural net models and for theories of cognition and cognitive development are discussed. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp or gopher from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.
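As a side note on the abstract's computational-cost claim: in tensor-product style implementations of relational representations, binding the arguments of an n-ary relation over d-dimensional argument vectors takes d**n components, so cost grows exponentially with arity. A toy sketch of that counting argument (illustrative only, not the representation analysed in the target article):

```python
from functools import reduce
from itertools import product

def tensor_bind(*vectors):
    """Rank-n outer product of argument vectors, stored as a flat dict."""
    return {idx: reduce(lambda a, b: a * b, (v[i] for v, i in zip(vectors, idx)))
            for idx in product(*(range(len(v)) for v in vectors))}

arg = [0.5, 1.0, 2.0]                 # a toy 3-dimensional argument encoding
unary = tensor_bind(arg)              # 3**1 = 3 components
binary = tensor_bind(arg, arg)        # 3**2 = 9 components
ternary = tensor_bind(arg, arg, arg)  # 3**3 = 27 components
print(len(unary), len(binary), len(ternary))  # 3 9 27
```

The exponential growth in components is one concrete sense in which higher-arity relations impose higher processing loads.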
The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.halford.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.halford ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.halford gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with: get bbs.halford When you have the file(s) you want, type: quit From atick at monaco.rockefeller.edu Sun Dec 14 11:15:40 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Sun, 14 Dec 1997 11:15:40 -0500 Subject: Research Positions in Computer Vision Message-ID: <9712141115.ZM10632@monaco.rockefeller.edu> Research Positions in Computer Vision and Pattern Recognition *****IMMEDIATE NEW OPENINGS******* Visionics Corporation announces the opening of four new positions for research scientists and engineers in the field of Computer Vision and Pattern Recognition. The positions are at various levels. Candidates are expected to have demonstrated research abilities in computer vision, artificial neural networks, image processing, computational neuroscience or pattern recognition. The successful candidates will join the growing R&D team of Visionics in developing real-world pattern recognition algorithms. This is the team that produced the face recognition technology that was recently awarded the prestigious PC WEEK's Best of Show AND Best New Technology at COMDEX Fall 97. The job openings are at Visionics' new R&D facility at Exchange Place on the Waterfront of Jersey City, New Jersey--right across the river from the World Trade Center in Manhattan.
Visionics offers a competitive salary and benefits package that includes stock options, health insurance, retirement, etc., and an excellent chance for rapid career advancement. ***For more information about Visionics please visit our webpage at http://www.faceit.com. FOR IMMEDIATE CONSIDERATION, FAX YOUR RESUME TO: (1) Fax: 201 332 9313 Dr. Norman Redlich--CTO Visionics Corporation 1 Exchange Place Jersey City, NJ 07302 or you can (2) Email it to jobs at faceit.com Visionics is an equal opportunity employer--women and minority candidates are encouraged to apply. Joseph J. Atick, CEO Visionics Corporation 201 332 2890 jja at faceit.com %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From terry at salk.edu Mon Dec 15 16:20:14 1997 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 15 Dec 1997 13:20:14 -0800 (PST) Subject: NEURAL COMPUTATION 10:1 Message-ID: <199712152120.NAA16748@helmholtz.salk.edu> Neural Computation - Contents Volume 10, Number 1 - January 1, 1998 ARTICLE Memory Maintenance via Neuronal Regulation David Horn, Nir Levy and Eytan Ruppin NOTE Ordering of Self-Organizing Maps in Multidimensional Cases Guang-Bin Huang, Haroon A. Babri, and Hua-Tian Li LETTERS Predicting the Distribution of Synaptic Strengths and Cell Firing Correlations in a Self-Organizing, Sequence Prediction Model Asohan Amarasingham and William B. Levy State-Dependent Weights for Neural Associative Memories Ravi Kothari, Rohit Lotlikar, and Marc Cathay The Role of the Hippocampus in Solving the Morris Water Maze A. David Redish and David S. Touretzky Near-Saddle-Node Bifurcation Behavior as Dynamics in Working Memory for Goal-Directed Behavior Hiroyuki Nakahara and Kenji Doya A Canonical Form of Nonlinear Discrete-Time Models Gerard Dreyfus and Yizhak Idan A Low-Sensitivity Recurrent Neural Network Andrew D.
Back and Ah Chung Tsoi Fixed-Point Attractor Analysis for a Class of Neurodynamics Jianfeng Feng and David Brown GTM: The Generative Topographic Mapping Christopher M. Bishop, Markus Svensen and Christopher K. I. Williams Identification Criteria and Lower Bounds for Perceptron-Like Learning Rules Michael Schmitt ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1998 - VOLUME 10 - 8 ISSUES
                 USA     Canada*   Other Countries
Student/Retired  $50     $53.50    $78
Individual       $82     $87.74    $110
Institution      $285    $304.95   $318
* includes 7% GST
(Back issues from Volumes 1-9 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From kia at particle.kth.se Mon Dec 15 17:13:07 1997 From: kia at particle.kth.se (Karina Waldemark) Date: Mon, 15 Dec 1997 23:13:07 +0100 Subject: VI-DYNN'98, Workshop on Virtual Intelligence and Dynamic Neural Networks Message-ID: <3495AB73.53CA8816@particle.kth.se> ------------------------------------------------------------------- Announcement and call for papers: VI-DYNN'98 Workshop on Virtual Intelligence - Dynamic Neural Networks Royal Institute of Technology, KTH Stockholm, Sweden June 22-26, 1998 ------------------------------------------------------------------ VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn Abstracts due: February 28, 1998 VI-DYNN'98 will combine the DYNN emphasis on biologically inspired neural network models, especially Pulse Coupled Neural Networks (PCNN), with the practical applications emphasis of the VI workshops. In particular we will focus on why, how, and where to use biologically inspired neural systems.
For example, we will learn how to adapt such systems to sensors such as digital X-ray imaging devices, CCDs, SAR, etc., and examine questions of accuracy, speed, etc. Developments in research on biological neural systems, such as the mammalian visual system, and how smart sensors can benefit from this knowledge will also be presented. Pulse Coupled Neural Networks (PCNN) have recently become among the most exciting new developments in the field of artificial neural networks (ANN), showing great promise for pattern recognition and other applications. PCNN-type models are related much more closely to real biological neural systems than most ANNs, and many researchers in the field of ANN pattern recognition are unfamiliar with them. VI-DYNN'98 will continue in this spirit, joining the DYNN emphasis with the Virtual Intelligence workshop series. ----------------------------------------------------------------- VI-DYNN'98 Topics: Dynamic NN, Fuzzy Systems, Spiking Neurons, Rough Sets, Brain Image, Genetic Algorithms, Virtual Reality ------------------------------------------------------------------ Applications: Medical, Defense & Space, Others ------------------------------------------------------------------- Special sessions:
PCNN - Pulse Coupled Neural Networks: exciting new artificial neural networks related to real biological neural systems
PCNN applications: pattern recognition, image processing, digital x-ray imaging devices, CCDs & SAR
Biologically inspired neural network models: why, how and where to use them
The mammalian visual system: how smart sensors can benefit from this knowledge
The Electronic Nose
------------------------------------------------------------------------ International Organizing Committee: John L. Johnson (MICOM, USA), Jason M. Kinser (George Mason U., USA), Thomas Lindblad (KTH, Sweden), Robert Lorenz (Univ. Wisconsin, USA), Mary Lou Padgett (Auburn U., USA), Robert T.
Savely (NASA, Houston), Manuel Samuelides (CERT-ONERA, Toulouse, France), John Taylor (Kings College, UK), Simon Thorpe (CERI-CNRS, Toulouse, France) ------------------------------------------------------------------------ Local Organizing Committee: Thomas Lindblad (KTH) - Conf. Chairman, Clark S. Lindsey (KTH) - Conf. Secretary, Kenneth Agehed (KTH), Joakim Waldemark (KTH), Karina Waldemark (KTH), Nina Weil (KTH), Moyra Mann - registration officer --------------------------------------------------------------------- Contact: Thomas Lindblad (KTH) - Conf. Chairman email: lindblad at particle.kth.se Phone: [+46] - (0)8 - 16 11 09 Clark S. Lindsey (KTH) - Conf. Secretary email: lindsey at particle.kth.se Phone: [+46] - (0)8 - 16 10 74 Switchboard: [+46] - (0)8 - 16 10 00 Fax: [+46] - (0)8 - 15 86 74 VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn --------------------------------------------------------------------- From rafal at idsia.ch Tue Dec 16 11:00:07 1997 From: rafal at idsia.ch (Rafal Salustowicz) Date: Tue, 16 Dec 1997 17:00:07 +0100 (MET) Subject: Learning Team Strategies: Soccer Case Studies Message-ID: LEARNING TEAM STRATEGIES: SOCCER CASE STUDIES Rafal Salustowicz Marco Wiering Juergen Schmidhuber IDSIA, Lugano (Switzerland) Revised Technical Report IDSIA-29-97 To appear in the Machine Learning Journal (1998) We use simulated soccer to study multiagent learning. Each team's players (agents) share action set and policy, but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare several learning algorithms: TD-Q learning with linear neural networks (TD-Q), Probabilistic Incremental Program Evolution (PIPE), and a PIPE version that learns by coevolution (CO-PIPE). TD-Q is based on learning evaluation functions (EFs) mapping input/action pairs to expected reward. PIPE and CO-PIPE search policy space directly.
They use adaptive probability distributions to synthesize programs that calculate action probabilities from current inputs. Our results show that linear TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE and CO-PIPE, however, do not depend on EFs and find good policies faster and more reliably. This suggests that in some multiagent learning scenarios direct search in policy space can offer advantages over EF-based approaches. http://www.idsia.ch/~rafal/research.html ftp://ftp.idsia.ch/pub/rafal/soccer.ps.gz Related papers by the same authors: Evolving soccer strategies. In N. Kasabov, R. Kozma, K. Ko, R. O'Shea, G. Coghill, and T. Gedeon, editors, Progress in Connectionist-based Information Systems:Proc. of the 4th Intl. Conf. on Neural Information Processing ICONIP'97, pages 502-505, Springer-Verlag, Singapore, 1997. ftp://ftp.idsia.ch/pub/rafal/ICONIP_soccer.ps.gz On learning soccer strategies. In W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, editors, Proc. of the 7th Intl. Conf. on Artificial Neural Networks (ICANN'97), volume 1327 of Lecture Notes in Computer Science, pages 769-774, Springer-Verlag Berlin Heidelberg, 1997. 
ftp://ftp.idsia.ch/pub/rafal/ICANN_soccer.ps.gz **********************************************************************
Rafal Salustowicz
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA)
Corso Elvezia 36, 6900 Lugano, Switzerland
e-mail: rafal at idsia.ch, raf at cs.tu-berlin.de, raf at psych.stanford.edu
Tel (IDSIA): +41 91 91198-38, Tel (office): +41 91 91198-32, Fax: +41 91 91198-39
WWW: http://www.idsia.ch/~rafal
********************************************************************** From becker at curie.psychology.mcmaster.ca Tue Dec 16 13:26:22 1997 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Tue, 16 Dec 1997 13:26:22 -0500 (EST) Subject: Faculty Position at McMaster University Message-ID: BEHAVIORAL NEUROSCIENCE The Department of Psychology at McMaster University seeks applications for a position as Assistant Professor commencing July 1, 1998, subject to final budgetary approval. We seek a candidate who holds a Ph.D. and who is conducting research in an area of behavioral neuroscience that will complement one or more of our current strengths in neural plasticity, perceptual development, cognition, computational neuroscience, and animal behavior. Appropriate research interests include, for example, cognitive neuroscience/neuropsychology, the physiology of memory, sensory physiology, the physiology of motivation/emotion, neuroethology, and computational neuroscience. The candidate would be expected to teach in the area of neuropsychology, as well as a laboratory course in neuroscience. This is initially for a 3-year contractually-limited appointment. In accordance with Canadian immigration requirements, this advertisement is directed to Canadian citizens and permanent residents. McMaster University is committed to Employment Equity and encourages applications from all qualified candidates, including aboriginal peoples, persons with disabilities, members of visible minorities, and women.
To apply, send a curriculum vitae, a short statement of research interests, a publication list with selected reprints, and three letters of reference to the following address: Prof. Ron Racine, Department of Psychology, McMaster University, Hamilton, Ontario, CANADA L8S 4K1. From rvicente at onsager.if.usp.br Wed Dec 17 09:01:00 1997 From: rvicente at onsager.if.usp.br (Renato Vicente) Date: Wed, 17 Dec 1997 12:01:00 -0200 (EDT) Subject: Paper: Learning a Spin Glass Message-ID: <199712171401.MAA08019@curie.if.usp.br> Dear Connectionists, We would like to announce our new paper on statistical physics of neural networks. The paper is available at: http://www.fma.if.usp.br/~rvicente/nnsp.html/NNGUSP13.ps.gz Comments are welcome. Learning a spin glass: determining Hamiltonians from metastable states. ------------------------------------------------------------------------ SILVIA KUVA, OSAME KINOUCHI and NESTOR CATICHA Abstract: We study the problem of determining the Hamiltonian of a fully connected Ising Spin Glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms.
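The measurements in the abstract (local minima of the energy landscape) have a simple operational form: a state s is metastable when every spin is aligned with its local field, i.e. s_i * sum_j J_ij s_j > 0 for all i. The sketch below generates one such state for a random teacher and fits student couplings with a simple Hebbian, perceptron-like update; this is an illustrative assumption, not one of the algorithms analysed in the paper.

```python
import random

N = 20
random.seed(1)

def local_fields(J, s):
    return [sum(J[i][j] * s[j] for j in range(N) if j != i) for i in range(N)]

def is_metastable(J, s):
    """Local energy minimum: every spin aligned with its local field."""
    return all(s[i] * h > 0 for i, h in enumerate(local_fields(J, s)))

# Random symmetric teacher couplings (fully connected Ising spin glass).
J_t = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J_t[i][j] = J_t[j][i] = random.gauss(0, 1)

# Zero-temperature relaxation: flip misaligned spins until none remain;
# each flip lowers the energy, so this terminates in a local minimum.
s = [random.choice([-1, 1]) for _ in range(N)]
changed = True
while changed:
    changed = False
    for i in range(N):
        h = sum(J_t[i][j] * s[j] for j in range(N) if j != i)
        if s[i] * h < 0:
            s[i] = -s[i]
            changed = True

# Student starts from J = 0 and updates couplings until s is metastable for it.
J_s = [[0.0] * N for _ in range(N)]
while not is_metastable(J_s, s):
    for i, h in enumerate(local_fields(J_s, s)):
        if s[i] * h <= 0:
            for j in range(N):
                if j != i:
                    J_s[i][j] += s[i] * s[j] / N
                    J_s[j][i] = J_s[i][j]
print(is_metastable(J_t, s), is_metastable(J_s, s))
```

One metastable state constrains the couplings only weakly; the O(N^2)-bit requirement in the abstract reflects how many such measurements are needed to pin down all N(N-1)/2 couplings.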
From HEINKED at psycho1.bham.ac.uk Wed Dec 17 19:48:10 1997 From: HEINKED at psycho1.bham.ac.uk (Mr Dietmar Heinke) Date: Wed, 17 Dec 1997 16:48:10 -0800 Subject: CALL FOR ABSTRACTS Message-ID: <349872CA.238@psycho1.bham.ac.uk> ***************** Call for Abstracts **************************** 5th Neural Computation and Psychology Workshop Connectionist Models in Cognitive Neuroscience University of Birmingham, England Tuesday 8th September - Thursday 10th September 1998 This workshop is the fifth in a series, following on from the first at the University of Wales, Bangor (with theme "Neurodynamics and Psychology"), the second at the University of Edinburgh, Scotland ("Memory and Language"), the third at the University of Stirling, Scotland ("Perception") and the fourth at University College, London ("Connectionist Representations"). The general aim is to bring together researchers from such diverse disciplines as artificial intelligence, applied mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on the connectionist modelling of psychology. This year's workshop is to be hosted by the University of Birmingham. As in previous years there will be a theme to the workshop. The theme is to be: Connectionist Models in Cognitive Neuroscience. This covers many important issues ranging from modelling physiological structure to cognitive function and its disorders (in neuropsychological and psychiatric cases). As in previous years we aim to keep the workshop fairly small, informal and single track. As always, participants bringing expertise from outside the UK are particularly welcome. A one page abstract should be sent to Professor Glyn W. Humphreys School of Psychology University of Birmingham Birmingham B15 2TT United Kingdom Deadline for abstracts: 31st of May, 1998 Registration, Food and Accommodation The workshop will be held at the University of Birmingham.
The conference registration fee (which includes lunch and morning and afternoon tea/coffee each day) is 60 pounds. A special conference dinner (optional) is planned for the Wednesday evening costing 20 pounds. Accommodation is available in university halls or local hotels. A special price of 150 pounds is available for 3 nights accommodation in the halls of residence and registration fee. Organising Committee Dietmar Heinke Andrew Olson Professor Glyn W. Humphreys Contact Details Workshop Email address: e.fox at bham.ac.uk Dietmar Heinke, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 4920, Fax: +44 212 414 4897 Email: heinked at psycho1.bham.ac.uk Andrew Olson, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 3328, Fax: +44 212 414 4897 Email: olsona at psycho1.bham.ac.uk Glyn W. Humphreys, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 4930, Fax: +44 212 414 4897 Email: humphreg at psycho1.bham.ac.uk From jordan at ai.mit.edu Wed Dec 17 15:09:36 1997 From: jordan at ai.mit.edu (Michael I. Jordan) Date: Wed, 17 Dec 1997 15:09:36 -0500 (EST) Subject: graphical models and variational methods Message-ID: <9712172009.AA01909@carpentras.ai.mit.edu> A tutorial paper on graphical models and variational approximations is available at: ftp://psyche.mit.edu/pub/jordan/variational-intro.ps.gz http://www.ai.mit.edu/projects/jordan.html ------------------------------------------------------- AN INTRODUCTION TO VARIATIONAL METHODS FOR GRAPHICAL MODELS Michael I. Jordan Massachusetts Institute of Technology Zoubin Ghahramani University of Toronto Tommi S. Jaakkola University of California Santa Cruz Lawrence K. Saul AT&T Labs -- Research This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models. 
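The tutorial announced above covers variational lower bounds for graphical models. As a toy illustration of the basic bounding idea (our own minimal sketch, not code or notation from the paper), the mean-field lower bound on the log partition function of a small Boltzmann machine can be computed and checked against exact enumeration:

```python
import itertools
import math
import random

random.seed(0)
n = 5
# toy Boltzmann machine: symmetric couplings with zero diagonal, plus biases
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        W[i][j] = W[j][i] = random.uniform(-1.0, 1.0)
b = [random.uniform(-1.0, 1.0) for _ in range(n)]

def neg_energy(s):
    # -E(s) = 0.5 * s^T W s + b^T s  (factor 0.5 because each pair is counted twice)
    return 0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n)) \
        + sum(b[i] * s[i] for i in range(n))

# exact log partition function by enumeration (feasible only for tiny n)
logZ = math.log(sum(math.exp(neg_energy(s))
                    for s in itertools.product([0, 1], repeat=n)))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# mean-field fixed-point iteration: mu_i = sigmoid(sum_j W_ij mu_j + b_i)
mu = [0.5] * n
for _ in range(200):
    mu = [sigmoid(sum(W[i][j] * mu[j] for j in range(n)) + b[i]) for i in range(n)]

def entropy(p):
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

# variational lower bound: E_q[-E(s)] + H(q) <= log Z, valid for ANY mu by Jensen
bound = 0.5 * sum(W[i][j] * mu[i] * mu[j] for i in range(n) for j in range(n)) \
    + sum(b[i] * mu[i] for i in range(n)) + sum(entropy(m) for m in mu)

assert bound <= logZ + 1e-9
```

The inequality holds for any factorized distribution; the fixed-point iteration merely tightens it. For larger models the enumeration is replaced by the bound itself, which is the point of the variational approach.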
We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, showing how upper and lower bounds can be found for local probabilities, and discussing methods for extending these bounds to bounds on global probabilities of interest. Finally, we return to the examples and demonstrate how variational algorithms can be formulated in each case. From bruf at igi.tu-graz.ac.at Thu Dec 18 05:48:05 1997 From: bruf at igi.tu-graz.ac.at (Berthold Ruf) Date: Thu, 18 Dec 1997 11:48:05 +0100 Subject: Paper: Spatial and Temporal Pattern Analysis via Spiking Neurons Message-ID: <3498FF65.B9895E06@igi.tu-graz.ac.at> The following technical report is now available at http://www.cis.tu-graz.ac.at/igi/bruf/rbf-tr.ps.gz Thomas Natschlaeger and Berthold Ruf: Spatial and Temporal Pattern Analysis via Spiking Neurons Abstract: Spiking neurons, receiving temporally encoded inputs, can compute radial basis functions (RBFs) by storing the relevant information in their delays. In this paper we show how these delays can be learned using exclusively locally available information (basically the time difference between the pre- and postsynaptic spike). Our approach gives rise to a biologically plausible algorithm for finding clusters in a high-dimensional input space with networks of spiking neurons, even if the environment is changing dynamically. Furthermore, we show that our learning mechanism makes it possible for such RBF neurons to perform a kind of feature extraction, recognizing that only certain input coordinates carry relevant information. Finally, we demonstrate that this model allows the recognition of temporal sequences even if they are distorted in various ways.
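The abstract above describes learning delays from the time difference between pre- and postsynaptic spikes. The following is a heavily simplified, hypothetical sketch of that idea for a single neuron (all names, the noise model, and the idealized postsynaptic spike time are our own assumptions, not the authors' algorithm): the delays adapt so that all delayed arrivals t_i + d_i coincide, which makes the neuron a radial-basis detector for one spike pattern.

```python
import random

random.seed(1)

def arrivals(t, d):
    # delayed arrival times t_i + d_i for spike pattern t and delays d
    return [ti + di for ti, di in zip(t, d)]

def spread(t, d):
    # variance-like spread of arrivals: small spread = good match to stored centre
    a = arrivals(t, d)
    m = sum(a) / len(a)
    return sum((x - m) ** 2 for x in a)

def learn(t, d, eta=0.2):
    # local rule: nudge each delay by the difference between the (idealized)
    # postsynaptic spike time and the delayed presynaptic arrival
    a = arrivals(t, d)
    t_post = sum(a) / len(a)  # stand-in for the postsynaptic spike time
    return [di + eta * (t_post - ai) for di, ai in zip(d, a)]

centre = [1.0, 4.0, 2.0]  # spike times (ms) of the pattern to be learned
d = [random.uniform(0.0, 5.0) for _ in centre]

for _ in range(300):  # noisy presentations of the same pattern
    t = [x + random.gauss(0.0, 0.1) for x in centre]
    d = learn(t, d)

# the delays now complement the pattern: t_i + d_i is nearly constant
assert spread(centre, d) < 0.1
```

With several such neurons competing (winner-takes-all), the same rule yields the clustering behaviour the abstract mentions; the single-neuron case shown here is just the alignment mechanism.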
---------------------------------------------------- Berthold Ruf | | Office: | Home: Institute for Theoretical | Vinzenz-Muchitsch- Computer Science | Strasse 56/9 TU Graz | Klosterwiesgasse 32/2 | A-8010 Graz | A-8020 Graz AUSTRIA | AUSTRIA | Tel:+43 316/873-5824 | Tel:+43 316/274580 FAX:+43 316/873-5805 | Email: bruf at igi.tu-graz.ac.at | Homepage: http://www.cis.tu-graz.ac.at/igi/bruf/ ---------------------------------------------------- From giles at research.nj.nec.com Thu Dec 18 17:38:06 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 18 Dec 97 17:38:06 EST Subject: paper available: financial prediction Message-ID: <9712182238.AA13422@alta> The following technical report and related conference paper is now available at the WWW sites listed below: www.neci.nj.nec.com/homepages/lawrence/papers.html www.neci.nj.nec.com/homepages/giles/#recent-TRs www.cs.umd.edu/TRs/TRumiacs.html We apologize in advance for any multiple postings that may occur. ************************************************************************** Noisy Time Series Prediction using Symbolic Representation and Recurrent Neural Network Grammatical Inference U. of Maryland Technical Report CS-TR-3625 and UMIACS-TR-96-27 Steve Lawrence (1), Ah Chung Tsoi (2), C. Lee Giles (1,3) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA (2) Faculty of Informatics, University of Wollongong, NSW 2522 Australia (3) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA ABSTRACT Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high noise, small sample size signals. 
We introduce a new intelligent signal processing method which addresses the difficulties. The method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. The symbolic representation aids the extraction of symbolic knowledge from the recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Rules related to well-known behavior such as trend following and mean reversal are extracted. Keywords - noisy time series prediction, recurrent neural networks, self-organizing map, efficient market hypothesis, foreign exchange rate, non-stationarity, rule extraction ********************************************************************** A short published conference version: C.L. Giles, S. Lawrence, A-C. Tsoi, "Rule Inference for Financial Prediction using Recurrent Neural Networks," Proceedings of the IEEE/IAFE Conf. on Computational Intelligence for Financial Engineering, p. 253, IEEE Press, 1997. can be found at www.neci.nj.nec.com/homepages/lawrence/papers.html www.neci.nj.nec.com/homepages/giles/papers/IEEE.CIFEr.ps.Z __ C.
Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From majidi at DI.Unipi.IT Fri Dec 19 11:34:04 1997 From: majidi at DI.Unipi.IT (Darya Majidi) Date: Fri, 19 Dec 1997 17:34:04 +0100 Subject: NNESMED '98 Message-ID: <199712191634.RAA23176@mailserver.di.unipi.it> 2-4 September 1998 Call for papers: 3rd International Conference on Neural Networks and Expert Systems in Medicine and Healthcare (NNESMED 98) Pisa, Italy Deadline: 13 February 1998, electronic file (4 pages containing an abstract of 200 words) Contact: Prof Antonina Starita Computer Science Department University of Pisa Corso Italia 40, 56126 Pisa Tel: +39-50-887215 Fax: +39-50-887226 E-mail: nnesmed at di.unipi.it WWW: http://www.di.unipi.it/~nnesmed/home.htm From takane at takane2.psych.mcgill.ca Fri Dec 19 11:53:23 1997 From: takane at takane2.psych.mcgill.ca (Yoshio Takane) Date: Fri, 19 Dec 1997 11:53:23 -0500 (EST) Subject: No subject Message-ID: <199712191653.LAA03407@takane2.psych.mcgill.ca> This will be the last call for papers for the special issue of Behaviormetrika. *****CALL FOR PAPERS***** BEHAVIORMETRIKA, an English journal published in Japan to promote the development and dissemination of quantitative methodology for analysis of human behavior, is planning to publish a special issue on ANALYSIS OF KNOWLEDGE REPRESENTATIONS IN NEURAL NETWORK (NN) MODELS broadly construed. I have been asked to serve as the guest editor for the special issue and would like to invite all potential contributors to submit high quality articles for possible publication in the issue. In statistics, information extracted from the data is stored in estimates of model parameters. In regression analysis, for example, information in observed predictor variables useful in prediction is summarized in estimates of regression coefficients.
Due to the linearity of the regression model interpretation of the estimated coefficients is relatively straightforward. In NN models knowledge acquired from training samples is represented by the weights indicating the strength of connections between neurons. However, due to the nonlinear nature of the model interpretation of these weights is extremely difficult, if not impossible. Consequently, NN models have largely been treated as black boxes. This special issue is intended to break the ground by bringing together various attempts to understand internal representations of knowledge in NN models. Papers are invited on network analysis including: * Methods of analyzing basic mechanisms of NN models * Examples of successful network analysis * Comparison among different network architectures in their knowledge representation (e.g., BP vs Cascade Correlation) * Comparison with statistical approaches * Visualization of high dimensional functions * Regularization methods to improve the quality of knowledge representation * Model selection in NN models * Assessment of stability and generalizability of knowledge in NN models * Effects of network topology, data encoding scheme, algorithm, environmental bias, etc. on network performance * Implementing prior knowledge in NN models SUBMISSION: Deadline for submission: January 31, 1998 Deadline for the first round reviews: April 30, 1998 Deadline for submission of the final version: August 31, 1998 Number of copies of a manuscript to be submitted: four Format: no longer than 10,000 words; APA style ADDRESS FOR SUBMISSION: Professor Yoshio Takane Department of Psychology McGill University 1205 Dr. 
Penfield Avenue Montreal QC H3A 1B1 CANADA email: takane at takane2.psych.mcgill.ca tel: 514 398 6125 fax: 514 398 4896 From harnad at coglit.soton.ac.uk Fri Dec 19 08:20:24 1997 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Fri, 19 Dec 1997 13:20:24 GMT Subject: PSYC Call for Commentators Message-ID: <199712191320.NAA14295@amnesia.psy.soton.ac.uk> Latimer/Stevens: PART-WHOLE PERCEPTION The target article whose abstract follows below has just appeared in PSYCOLOQUY, a refereed journal of Open Peer Commentary sponsored by the American Psychological Association. Qualified professional biobehavioral, neural or cognitive scientists are hereby invited to submit Open Peer Commentary on this article. Please write for Instructions if you are not familiar with PSYCOLOQUY format and acceptance criteria (all submissions are refereed). The article can be read or retrieved at this URL: ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1997.volume.8/psyc.97.8.13.part-whole-perception.1.latimer For submitting articles and commentaries and for information: EMAIL: psyc at pucc.princeton.edu URL: http://www.cogsci.soton.ac.uk/psycoloquy/ AUTHORS' RATIONALE FOR SOLICITING COMMENTARY: The topic of whole/part perception has a long history of controversy, and its ramifications still pervade research on perception and pattern recognition in psychology, artificial intelligence and cognitive science where controversies such as "global versus local precedence" and "holistic versus analytic processing" are still very much alive. We argue that, whereas the vast majority of studies on whole/part perception have been empirical, the major and largely unaddressed problems in the area are conceptual and concern how the terms "whole" and "part" are defined and how wholes and parts are related in each particular experimental context. Without some general theory of wholes and parts and their relations and some consensus on nomenclature, we feel that pseudo-controversies will persist.
One possible principle of unification and clarification is a formal analysis of the whole/part relationship by Nicholas Rescher and Paul Oppenheim (1955). We outline this formalism and demonstrate that not only does it have important implications for how we conceptualize wholes and parts and their relations, but it also has far reaching implications for the conduct of experiments on a wide range of perceptual phenomena. We challenge the well established view that whole/part perception is essentially an empirical problem and that its solution will be found in experimental investigations. We argue that there are many purely conceptual issues that require attention prior to empirical work. We question the logic of global-precedence theory and any interpretation of experimental data in terms of the extraction of global attributes without prior analysis of local elements. We also challenge theorists to provide precise, testable theories and working mechanisms that embody the whole/part processing they purport to explain. Although it deals mainly with vision and audition, our approach can be generalized to include tactile perception. Finally, we argue that apparently disparate theories, controversies, results and phenomena can all be considered under the three main conditions for wholes and parts proposed by Rescher and Oppenheim. Commentary and counterargument are sought on all these issues. In particular, we would like to hear arguments to the effect that wholes and parts (perceptual or otherwise) can exist in some absolute sense, and we would like to learn of machines, devices, programs or systems that are capable of extracting holistic properties without prior analysis of parts. 
----------------------------------------------------------------------- psycoloquy.97.8.13.part-whole-perception.1.latimer Wed 17 Dec 1997 ISSN 1055-0143 (39 paragraphs, 68 references, 923 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1997 Latimer & Stevens SOME REMARKS ON WHOLES, PARTS AND THEIR PERCEPTION Cyril Latimer Department of Psychology University of Sydney NSW 2006, Australia email: cyril at psych.su.oz.au URL: http://www.psych.su.oz.au/staff/cyril/ Catherine Stevens Department of Psychology/FASS University of Western Sydney, Macarthur PO Box 555, Campbelltown, NSW 2560, Australia email: kj.stevens at uws.edu.au URL: http://psy.uq.edu.au/CogPsych/Noetica/ ABSTRACT: We emphasize the relativity of wholes and parts in whole/part perception, and suggest that consideration must be given to what the terms "whole" and "part" mean, and how they relate in a particular context. A formal analysis of the part/whole relationship by Rescher & Oppenheim, (1955) is shown to have a unifying and clarifying role in many controversial issues including illusions, emergence, local/global precedence, holistic/analytic processing, schema/feature theories and "smart mechanisms". The logic of direct extraction of holistic properties is questioned, and attention drawn to vagueness of reference to wholes and parts which can refer to phenomenal units, physiological structures or theoretical units of perceptual analysis. 
KEYWORDS: analytic versus holistic processing, emergence, feature, gestalt, global versus local precedence, part, whole From rvicente at gibbs.if.usp.br Sat Dec 20 08:28:55 1997 From: rvicente at gibbs.if.usp.br (Renato Vicente) Date: Sat, 20 Dec 1997 13:28:55 -0000 Subject: Paper: Learning a Spin Glass - right link Message-ID: <199712201620.OAA17338@borges.uol.com.br> Dear Connectionists, The link for the paper previously announced was wrong, the right one is: http://www.fma.if.usp.br/~rvicente/NNGUSP13.ps.gz Sorry about that. Learning a spin glass: determining Hamiltonians from metastable states. --------------------------------------------------------------------------- SILVIA KUVA, OSAME KINOUCHI and NESTOR CATICHA Abstract: We study the problem of determining the Hamiltonian of a fully connected Ising Spin Glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms. From hinton at cs.toronto.edu Tue Dec 23 13:51:45 1997 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Tue, 23 Dec 1997 13:51:45 -0500 Subject: postdoc and student positions at new centre Message-ID: <97Dec23.135145edt.1329@neuron.ai.toronto.edu> THE GATSBY COMPUTATIONAL NEUROSCIENCE UNIT UNIVERSITY COLLEGE LONDON We are delighted to announce that the Gatsby Charitable Foundation has decided to create a centre for the study of Computational Neuroscience at University College London.
The centre will include: Geoffrey Hinton (director) Zhaoping Li Peter Dayan Zoubin Ghahramani 5 Postdoctoral Fellows 10 Graduate Students We will study computational theories of perception and action with an emphasis on learning. Our main current interests are: unsupervised learning and activity dependent development statistical models of hierarchical representations sensorimotor adaptation and integration visual and olfactory segmentation and recognition computation by non-linear neural dynamics reinforcement learning By establishing this centre, the Gatsby Charitable Foundation is providing a unique opportunity for a critical mass of theoreticians to interact closely with each other and with University College's world class research groups, including those in psychology, neurophysiology, anatomy, functional neuro-imaging, computer science and statistics. The unit has some laboratory space for theoretically motivated experimental studies in visual and motor psychophysics. For further information see our temporary web site http://www.cs.toronto.edu/~zoubin/GCNU/ http://hera.ucl.ac.uk/~gcnu/ (The UK mirror of site above) We are seeking to recruit 4 postdocs and 6 graduate students to start in the Fall of 1998. We are particularly interested in candidates with strong analytical skills as well as a keen interest in and knowledge of neuroscience and cognitive science. Candidates should submit a CV, a one or two page statement of research interests, and the names of 3 referees. Candidates may also submit a copy of one or two papers. We prefer email submissions (hinton at cs.toronto.edu). If you use email please send papers as URL addresses or separate messages in postscript format. The deadline for receipt of submissions is Tuesday January 13, 1998. 
Professor Geoffrey Hinton Department of Computer Science University of Toronto Room 283 6 King's College Road Toronto, Ontario M5S 3H5 CANADA phone: 416-978-3707 fax: 416-978-1455 From john at dcs.rhbnc.ac.uk Tue Dec 23 08:31:39 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Tue, 23 Dec 97 13:31:39 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199712231331.NAA11900@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-040: ---------------------------------------- Saturation and Stability in the Theory of Computation over the Reals by Olivier Chapuis, Universit\'e Lyon I, France Pascal Koiran, Ecole Normale Sup\'erieure de Lyon, France Abstract: This paper was motivated by the following two questions which arise in the theory of complexity for computation over ordered rings in the now famous computational model introduced by Blum, Shub and Smale: (i) is the answer to the question $\p$ $=$? $\np$ the same in every real-closed field? (ii) if $\p\neq\np$ for $\r$, does there exist a problem of $\r$ which is $\np$ but not $\np$-complete? Some unclassical complexity classes arise naturally in the study of these questions. They are still open, but we could obtain unconditional results of independent interest. Michaux introduced $/\const$ complexity classes in an effort to attack question~(i). We show that $\a_{\r}/ \const = \a_{\r}$, answering a question of his. Here $\a$ is the class of real problems which are algorithmic in bounded time. 
We also prove the stronger result: $\parl_{\r}/ \const =\parl_{\r}$, where $\parl$ stands for parallel polynomial time. In our terminology, we say that $\r$ is $\a$-saturated and $\parl$-saturated. We also prove, at the nonuniform level, the above results for every real-closed field. It is not known whether $\r$ is $\p$-saturated. In the case of the reals with addition and order we obtain $\p$-saturation (and a positive answer to question (ii)). More generally, we show that an ordered $\q$-vector space is $\p$-saturated at the nonuniform level (this almost implies a positive answer to the analogue of question (i)). We also study a stronger notion that we call $\p$-stability. Blum, Cucker, Shub and Smale have (essentially) shown that fields of characteristic 0 are $\p$-stable. We show that the reals with addition and order are $\p$-stable, but real-closed fields are not. Questions (i) and (ii) and the $/\const$ complexity classes have some model theoretic flavor. This leads to the theory of complexity over ``arbitrary'' structures as introduced by Poizat. We give a summary of this theory with a special emphasis on the connections with model theory and we study $/\const$ complexity classes with this point of view. Note also that our proof of the $\parl$-saturation of $\r$ shows that an o-minimal structure which admits quantifier elimination is $\a$-saturated at the nonuniform level. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-041: ---------------------------------------- A survey on the Grzegorczyk Hierarchy and its extension through the BSS Model of Computability by Jean-Sylvestre Gakwaya, Universit\'e de Mons-Hainaut, Belgium Abstract: This paper concerns the Grzegorczyk classes defined from a particular sequence of computable functions. We provide some relevant properties and the main problems about the Grzegorczyk classes through two settings of computability. 
The first one is the usual setting of recursiveness and the second one is the BSS model introduced at the end of the 1980s. This model of computability makes it possible to define the concept of effectiveness over continuous domains such as the real numbers. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-042: ---------------------------------------- On the Effect of Analog Noise in Discrete-Time Analog Computations by Wolfgang Maass, Technische Universitaet Graz, Austria Pekka Orponen, University of Jyv\"askyl\"a, Finland Abstract: We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the VC-dimension of computational models with analog noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-043: ---------------------------------------- Analog Neural Nets with Gaussian or other Common Noise Distributions cannot Recognise Arbitrary Regular Languages by Wolfgang Maass, Technische Universitaet Graz, Austria Eduardo D. Sontag, Rutgers University, USA Abstract: We consider recurrent analog neural nets where the output of each gate is subject to Gaussian noise, or any other common noise distribution that is nonzero on a large set. We show that many regular languages cannot be recognised by networks of this type, and we give a precise characterization of those languages which can be recognised.
This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type. -------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-97-001 can be accessed and printed as follows % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-97-001.ps.Z ftp> bye % zcat nc-tr-97-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-97-002-title.ps.Z nc-tr-97-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-97-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt or directly to the archive: ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports Best wishes John Shawe-Taylor From lbl at nagoya.riken.go.jp Wed Dec 24 22:37:55 1997 From: lbl at nagoya.riken.go.jp (Bao-Liang Lu) Date: Thu, 25 Dec 1997 12:37:55 +0900 Subject: TR available:Inverting Neural Nets Using LP and NLP Message-ID: <9712250337.AA20262@xian> The following Technical Report is available via anonymous FTP. 
FTP-host:ftp.bmc.riken.go.jp FTP-file:pub/publish/Lu/lu-ieee-tnn-inversion.ps.gz ========================================================================== TITLE: Inverting Feedforward Neural Networks Using Linear and Nonlinear Programming BMC Technical Report BMC-TR-97-12 AUTHORS: Bao-Liang Lu (1) Hajime Kita (2) Yoshikazu Nishikawa (3) ORGANISATIONS: (1) The Institute of Physical and Chemical Research (RIKEN) (2) Tokyo Institute of Technology (3) Osaka Institute of Technology ABSTRACT: The problem of inverting trained feedforward neural networks is to find the inputs which yield a given output. In general, this problem is an ill-posed problem because the mapping from the output space to the input space is a one-to-many mapping. In this paper, we present a method for dealing with this inverse problem by using mathematical programming techniques. The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem. An important advantage of the method is that multilayer perceptrons (MLPs) and radial basis function (RBF) neural networks can be inverted by solving the corresponding separable programming problems, which can be solved by a modified simplex method, a well-developed and efficient method for solving linear programming problems. As a result, large-scale MLPs and RBF networks can be inverted efficiently. We present several examples to demonstrate the proposed inversion algorithms and the applications of the network inversions to examining and improving the generalization performance of trained networks. The results show the effectiveness of the proposed method. (double space, 50 pages) Any comments are appreciated.
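The report above solves network inversion with separable and nonlinear programming via a modified simplex method. As a rough stand-in to illustrate the inverse problem itself (our substitution, not the authors' method; the tiny network and its weights are arbitrary), one can search for an input that yields a given output by gradient descent on the squared output error:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# fixed 2-2-1 multilayer perceptron with hand-set (arbitrary) weights
W1 = [[1.5, -2.0], [-1.0, 2.5]]
b1 = [0.1, -0.3]
W2 = [2.0, -1.5]
b2 = 0.2

def mlp(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

# inverse problem: find x such that mlp(x) is close to the target output
target = 0.7
x = [0.0, 0.0]
eps, eta = 1e-5, 0.5
for _ in range(2000):
    y = mlp(x)
    err = y - target
    # numerical gradient of 0.5 * err^2 with respect to the input x
    grad = [(mlp([x[j] + (eps if j == i else 0.0) for j in range(2)]) - y)
            / eps * err
            for i in range(2)]
    x = [xi - eta * g for xi, g in zip(x, grad)]

assert abs(mlp(x) - target) < 1e-3
```

Gradient descent finds only one of the many pre-images and offers none of the efficiency guarantees of the separable-programming formulation; it is shown here purely to make the one-to-many inverse problem concrete.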
Bao-Liang Lu ====================================== Bio-Mimetic Control Research Center The Institute of Physical and Chemical Research (RIKEN) Anagahora, Shimoshidami, Moriyama-ku Nagoya 463-0003, Japan Tel: +81-52-736-5870 Fax: +81-52-736-5871 Email: lbl at bmc.riken.go.jp From jkim at FIZ.HUJI.AC.IL Wed Dec 24 05:16:20 1997 From: jkim at FIZ.HUJI.AC.IL (Jai Won Kim) Date: Wed, 24 Dec 1997 12:16:20 +0200 Subject: two papers: On-Line Gibbs Learning Message-ID: <199712241016.AA02233@keter.fiz.huji.ac.il> ftp-host: keter.fiz.huji.ac.il ftp-filenames: pub/OLGA1.tar and pub/OLGA2.tar ______________________________________________________________________________o On-Line Gibbs Learning I: General Theory H. Sompolinsky and J. W. Kim Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel e-mails: jkim at fiz.huji.ac.il ; haim at fiz.huji.ac.il (Submitted to Physical Review E, Dec 97) ABSTRACT We study a new model of on-line learning, the On-Line Gibbs Algorithm (OLGA) which is particularly suitable for supervised learning of realizable and unrealizable discrete valued functions. The learning algorithm is based on an on-line energy function, $E$, that balances the need to minimize the instantaneous error against the need to minimize the change in the weights. The relative importance of these terms in $E$ is determined by a parameter $\la$, the inverse of which plays a similar role as the learning rate parameters in other on-line learning algorithms. In the stochastic version of the algorithm, following each presentation of a labeled example the new weights are chosen from a Gibbs distribution with the on-line energy $E$ and a temperature parameter $T$. In the zero temperature version of OLGA, at each step, the new weights are those that minimize the on-line energy $E$. The generalization performance of OLGA is studied analytically in the limit of small learning rate, \ie~ $\la \rightarrow \infty$. 
It is shown that at finite temperature OLGA converges to an equilibrium Gibbs distribution of weight vectors with an energy function which equals the generalization error function. In its deterministic version OLGA converges to a local minimum of the generalization error. The rate of convergence is studied for the case in which the parameter $\lambda$ increases as a power law in time. It is shown that in a generic unrealizable dichotomy, the asymptotic rate of decrease of the generalization error is proportional to the inverse square root of the number of presented examples. This rate is similar to that of batch learning. The special cases of learning realizable dichotomies or dichotomies with output noise are also treated. A tar file with the PostScript version of the paper is available by anonymous ftp at ftp-host: keter.fiz.huji.ac.il ftp-filename: pub/OLGA1.tar _______________________________________________________________________________ On-Line Gibbs Learning II: Application to Perceptron and Multi-layer Networks J. W. Kim and H. Sompolinsky Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel e-mails: jkim at fiz.huji.ac.il ; haim at fiz.huji.ac.il (Submitted to Physical Review E, Dec 97) ABSTRACT In a previous paper (On-Line Gibbs Learning I: General Theory) we presented the On-Line Gibbs Algorithm (OLGA) and analytically studied its asymptotic convergence. In this paper we apply OLGA to on-line supervised learning in several network architectures: a single-layer perceptron, a two-layer committee machine, and a Winner-Takes-All (WTA) classifier. The behavior of OLGA for a single-layer perceptron is studied both analytically and numerically for a variety of rules: a realizable perceptron rule, a perceptron rule corrupted by output and input noise, and a rule generated by a committee machine.
The two-layer committee machine is studied numerically for the cases of learning a realizable rule as well as a rule that is corrupted by output noise. The WTA network is studied numerically for the case of a realizable rule. The asymptotic results reported in this paper agree with the predictions of the general theory of OLGA presented in article I. In all the studied cases, OLGA converges to a set of weights that minimizes the generalization error. When the learning rate is chosen as a power law with an optimal power, OLGA converges with the same power law as batch learning. A tar file with the PostScript version of the paper is available by anonymous ftp at ftp-host: keter.fiz.huji.ac.il ftp-filename: pub/OLGA2.tar

From biehl at physik.uni-wuerzburg.de Tue Dec 30 08:27:19 1997 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Tue, 30 Dec 1997 14:27:19 +0100 (MET) Subject: preprint available Message-ID: <199712301327.OAA17838@wptx08.physik.uni-wuerzburg.de> Subject: paper available: Adaptive Systems on Different Time Scales FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/WUE-ITP-97-053.ps.gz The following preprint is now available via anonymous ftp: (See below for the retrieval procedure) --------------------------------------------------------------------- Adaptive Systems on Different Time Scales by D. Endres and Peter Riegler Abstract: The special character of certain degrees of freedom in two-layered neural networks is investigated for on-line learning of realizable rules. Our analysis shows that the dynamics of these degrees of freedom can be put on a faster time scale than that of the remaining ones, with the benefit of speeding up the overall adaptation process. This is shown for two groups of degrees of freedom: second-layer weights and bias weights. For the former case our analysis provides a theoretical explanation of phenomenological findings.
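The effect described in the abstract above can be illustrated schematically: in on-line teacher-student learning of a two-layer (soft committee) network, giving the second-layer weights a larger learning rate than the first-layer weights puts them on a faster time scale. The sketch below is a toy illustration under arbitrary assumptions (network sizes, learning rates, tanh units, plain on-line gradient descent); it is not the preprint's analysis:

```python
# Toy sketch: on-line learning of a realizable rule with two time scales.
# The second-layer weights get a larger learning rate (faster time scale)
# than the first-layer weights. All sizes and rates are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
N, K = 10, 2  # input dimension and number of hidden units

# Teacher network defining the realizable rule.
Wt = rng.normal(size=(K, N)) / np.sqrt(N)
vt = rng.normal(size=K)

# Student network with the same architecture, random initialization.
W = rng.normal(size=(K, N)) / np.sqrt(N)
v = rng.normal(size=K)

eta_w = 0.02   # slow time scale: first-layer weights
eta_v = 0.2    # fast time scale: second-layer weights

def gen_error(n=2000):
    """Monte Carlo estimate of the generalization error (squared loss)."""
    X = rng.normal(size=(n, N))
    y_teacher = np.tanh(X @ Wt.T) @ vt
    y_student = np.tanh(X @ W.T) @ v
    return float(np.mean((y_teacher - y_student) ** 2) / 2)

e_start = gen_error()
for _ in range(30000):                      # one fresh example per step
    x = rng.normal(size=N)
    h = np.tanh(W @ x)
    delta = v @ h - vt @ np.tanh(Wt @ x)    # instantaneous error
    v -= eta_v * delta * h                  # fast second-layer update
    W -= eta_w * delta * np.outer(v * (1.0 - h**2), x)  # slow first-layer update
print("generalization error:", e_start, "->", gen_error())
```

With the fast second layer, the linear output weights track the current hidden-unit features almost adiabatically while the first layer adapts slowly underneath, which is the separation of time scales the preprint exploits.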
--------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1997 ftp> get WUE-ITP-97-053.ps.gz ftp> quit unix> gunzip WUE-ITP-97-053.ps.gz e.g. unix> lp WUE-ITP-97-053.ps _____________________________________________________________________ _/_/_/_/_/_/_/_/_/_/_/_/ _/ _/ Peter Riegler _/ _/_/_/ _/ Institut fuer Theoretische Physik _/ _/ _/ _/ Universitaet Wuerzburg _/ _/_/_/ _/ Am Hubland _/ _/ _/ D-97074 Wuerzburg, Germany _/_/_/ _/_/_/ phone: (++49) (0)931 888-4908 fax: (++49) (0)931 888-5141 email: pr at physik.uni-wuerzburg.de www: http://theorie.physik.uni-wuerzburg.de/~pr ________________________________________________________________ From tgd at CS.ORST.EDU Tue Dec 30 20:09:13 1997 From: tgd at CS.ORST.EDU (Tom Dietterich) Date: Wed, 31 Dec 1997 01:09:13 GMT Subject: Learning with Probabilistic Representations (Machine Learning Special Issue) Message-ID: <199712310109.BAA00980@edison.CS.ORST.EDU> Machine Learning has published the following special issue on LEARNING WITH PROBABILISTIC REPRESENTATIONS Guest Editors: Pat Langley, Gregory M. Provan, and Padhraic Smyth Learning with Probabilistic Representations Pat Langley, Gregory Provan, and Padhraic Smyth On the Optimality of the Simple Bayesian Classifier under Zero-One Loss Pedro Domingos and Michael Pazzani Bayesian Network Classifiers Nir Friedman, Dan Geiger, and Moises Goldszmidt The Sample Complexity of Learning Fixed-Structure Bayesian Networks Sanjoy Dasgupta Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables David Maxwell Chickering and David Heckerman Adaptive Probabilistic Networks with Hidden Variables John Binder, Daphne Koller, Stuart Russell, and Keiji Kanazawa Factorial Hidden Markov Models Zoubin Ghahramani and Michael I. 
Jordan Predicting Protein Secondary Structure using Stochastic Tree Grammars Naoki Abe and Hiroshi Mamitsuka For more information, see http://www.cs.orst.edu/~tgd/mlj --Tom

From kchen at cis.ohio-state.edu Wed Dec 31 09:58:15 1997 From: kchen at cis.ohio-state.edu (Ke CHEN) Date: Wed, 31 Dec 1997 09:58:15 -0500 (EST) Subject: TRs available. Message-ID: The following two papers are available online now. _________________________________________________________________________ Combining Linear Discriminant Functions with Neural Networks for Supervised Learning Ke Chen{1,2}, Xiang Yu {2} and Huisheng Chi {2} {1} Dept. of CIS and Center for Cognitive Science, Ohio State U. {2} National Lab of Machine Perception, Peking U., China appearing in Neural Computing & Applications 6(1): 19-41, 1997. ABSTRACT A novel supervised learning method that combines linear discriminant functions with neural networks is presented. The proposed method results in a tree-structured hybrid architecture. Through constructive learning, the binary-tree hierarchical architecture is automatically generated by a controlled growing process for a specific supervised learning task. Unlike in a classic decision tree, in the proposed method the linear discriminant functions are employed only at the intermediate levels of the tree, to heuristically partition a large and complicated task into several smaller and simpler subtasks. These subtasks are then handled by component neural networks at the leaves of the tree. For constructive learning, "growing" and "credit-assignment" algorithms are developed to serve the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g. the multi-layered perceptron) to solve large-scale problems. We have applied the proposed method to a universal approximation problem and several benchmark classification problems in order to evaluate its performance.
Simulation results have shown that the proposed method yields better results and faster training compared with the multi-layered perceptron. URL: http://www.cis.ohio-state.edu/~kchen/nca.ps ftp://www.cis.ohio-state.edu/~kchen/nca.ps ___________________________________________________________________________ ___________________________________________________________________________ Methods of Combining Multiple Classifiers with Different Features and Their Applications to Text-Independent Speaker Identification Ke Chen{1,2}, Lan Wang {2} and Huisheng Chi {2} {1} Dept. of CIS and Center for Cognitive Science, Ohio State U. {2} National Lab of Machine Perception, Peking U., China appearing in International Journal of Pattern Recognition and Artificial Intelligence 11(3): 417-445, 1997. ABSTRACT In practical applications of pattern recognition, there are often different features extracted from the raw data to be recognized. Combining multiple classifiers with different features is viewed as a general problem in various application areas of pattern recognition. In this paper, a systematic investigation has been made and possible solutions are classified into three frameworks, namely linear opinion pools, winner-take-all, and evidential reasoning. For combining multiple classifiers with different features, a novel method is presented in the framework of linear opinion pools, and a modified training algorithm for the associative switch is also proposed in the framework of winner-take-all. In the framework of evidential reasoning, several typical methods are briefly reviewed. All the aforementioned methods have been applied to text-independent speaker identification. The simulations show that the methods described in this paper yield better results than both the individual classifiers and combinations of multiple classifiers based on the same feature.
This indicates that combining multiple classifiers with different features is an effective way to attack the problem of text-independent speaker identification. URL: http://www.cis.ohio-state.edu/~kchen/ijprai.ps ftp://www.cis.ohio-state.edu/~kchen/ijprai.ps __________________________________________________________________________ ---------------------------------------------------- Dr. Ke CHEN Department of Computer and Information Science The Ohio State University 583 Dreese Laboratories 2015 Neil Avenue Columbus, Ohio 43210-1277, U.S.A. E-Mail: kchen at cis.ohio-state.edu WWW: http://www.cis.ohio-state.edu/~kchen ------------------------------------------------------

From bocz at btk.jpte.hu Tue Dec 2 19:02:03 1997 From: bocz at btk.jpte.hu (Bocz Andras) Date: Wed, 3 Dec 1997 00:02:03 +0000 Subject: conference call Message-ID: FIRST CALL FOR PAPERS We are happy to announce a conference and workshop, the Multidisciplinary Colloquium on Rules and Rule-Following: Philosophy, Linguistics and Psychology, to be held April 30 - May 2, 1998 at Janus Pannonius University, Pécs, Hungary. Keynote speakers (who have already accepted the invitation): philosophy: György Kampis, Hungarian Academy of Sciences, Budapest; linguistics: Pierre-Yves Raccah, Idl-CNRS, Paris; psychology: Csaba Pléh, Dept. of General Psychology, Loránd Eötvös University, Budapest. Organizing Committee: László Tarnay (JPTE, Dept. of Philosophy), László I. Komlósi (ELTE, Dept. of Psychology), András Bocz (JPTE, Dept. of English Studies); e-mail: tarnay at btk.jpte.hu; komlosi at btk.jpte.hu; bocz at btk.jpte.hu Advisory Board: Gábor Forrai (Budapest), György Kampis (Budapest), Mike Harnish (Tucson), András Kertész (Debrecen), Kuno Lorenz (Saarbrücken), Pierre-Yves Raccah (Paris), János S.
Petőfi (Macerata) Aims and scopes: The main aim of the conference is to bring together scholars from the fields of cognitive linguistics, philosophy and psychology to investigate the concept of rule and to address various aspects of rule-following. Ever since Wittgenstein formulated, in his Philosophical Investigations, the famous §201 concerning a kind of rule-following which is not an interpretation, the concept of rule has become a key but elusive idea in almost every discipline and approach. And not only in the human sciences. No wonder, since without this idea the whole edifice of human (and possibly all other kinds of) rationality would surely collapse. With the rise of cognitive science, and especially the appearance of connectionist models and networks, however, the classical concept of rule is once again seriously contested. To put it very generally, there is an ongoing debate between the classical conception, in which rules appear as a set of formalizable initial conditions or constraints on external operations linking different successive states of a given system (algorithms), and a dynamic conception, in which there is nothing that could be correlated with a prior idea of internal well-formedness of the system's states. The debate centers on the representability of rules: either they are conceived of as meta-representations, or they are a mere façon de parler concerning the development of complex systems. Idealizable on the one hand, while token-oriented on the other. Something to be implemented on the one hand, while self-controlling, backpropagational processing, on the other. There is however a common idea that almost all kinds of rule-conceptions address: the problem of learning. This idea reverberates from Wittgensteinian pragmatics to strategic non-verbal and rule-governed speech behavior, from perceiving similarities to mental processing. Here are some haunting questions: - How do we acquire knowledge if there are no regularities in the world around us?
- But how can we perceive those regularities? - And how do we reason on the basis of that knowledge if there are no observable constraints on inferring? - But if there are, where do they come from and how are they actually implemented mentally? - And finally: how do we come to act rationally, that is, in accordance with what we have perceived, processed and inferred? We are interested in all ways of defining rules and in all aspects of rule-following, from the definition of law, rule, regularity, similarity and analogy to logical consequence, argumentational and other inferences, statistical and linguistic rules, practical and strategic reasoning, pragmatic and praxeological activities. We expect contributions from the following research fields: game theory, action theory, argumentation theory, cognitive science, linguistics, philosophy of language, epistemology, pragmatics, psychology and semiotics. We would be happy to include some contributions from natural sciences such as neurobiology, physiology or the brain sciences. The conference is organized in three major sections: philosophy, psychology and linguistics, with three keynote lectures. Contributions of 30 minutes (20 for the paper and 10 for discussion) then follow. We also plan to organize a workshop at the end of each section. Abstracts: Abstracts should be one page (maximum 23 lines), specifying the area of contribution and the particular aspect of rule-following to be addressed. Abstracts should be sent by e-mail to tarnay at btk.jpte.hu or bocz at btk.jpte.hu. Hard copies of abstracts may be sent to: Laszlo Tarnay, Department of Philosophy, Janus Pannonius University, H-7624 Pécs, Hungary. Important dates: Deadline for submission: Jan. 15, 1998 Notification of acceptance: Feb. 28, 1998 Conference: April 30 - May 2, 1998 ************************************* Bocz András Department of English Janus Pannonius University Ifjúság u. 6.
H-7624 Pécs, Hungary Tel/Fax: (36) (72) 314714

From terry at salk.edu Tue Dec 2 23:41:30 1997 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 2 Dec 1997 20:41:30 -0800 (PST) Subject: Telluride Workshop 1998 Message-ID: <199712030441.UAA08703@helmholtz.salk.edu> "NEUROMORPHIC ENGINEERING WORKSHOP" JUNE 29 - JULY 19, 1998 TELLURIDE, COLORADO Deadline for application is February 1, 1998. Avis COHEN (University of Maryland) Rodney DOUGLAS (University of Zurich and ETH, Zurich/Switzerland) Christof KOCH (California Institute of Technology) Terrence SEJNOWSKI (Salk Institute and UCSD) Shihab SHAMMA (University of Maryland) We invite applications for a three-week summer workshop that will be held in Telluride, Colorado from Monday, June 29 to Sunday, July 19, 1998. The 1997 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at the California Institute of Technology, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html We strongly encourage interested parties to browse through these reports and photo albums. GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological and engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants.
Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of real biological nervous systems as whole systems. FORMAT: The three-week summer workshop will include background lectures on systems neuroscience (in particular learning, oculo-motor and other motor systems, and attention), practical tutorials on analog VLSI design, small mobile robots (Khoalas), hands-on projects, and special interest groups. Participants are required to take part in, and possibly complete, at least one of the proposed projects (soon to be defined). They are furthermore encouraged to become involved in as many of the other proposed activities as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons, and after dinner. The analog VLSI practical tutorials will cover all aspects of analog VLSI design, simulation, layout, and testing over the three weeks of the workshop.
The first week covers the basics of transistors, simple circuit design, and simulation. This material is intended for participants who have no experience with analog VLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to building the peripheral boards necessary for interfacing analog VLSI retinas to video output monitors. Retina chips will be provided. The third week will feature sessions on floating gates, including lectures on the physics of tunneling and injection, and on inter-chip communication systems. We will also feature a tutorial on the use of small mobile robots, focusing on Khoalas as an ideal platform for vision, auditory and sensory-motor circuits. Projects that are carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generator, robotics, multichip communication, analog VLSI and learning. The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots. The "robotics" group will use rovers, robot arms and working digital vision boards to investigate issues of sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.
The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. LOCATION AND ARRANGEMENTS: The workshop will take place at the Telluride Elementary School located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles). Continental and United Airlines provide daily flights directly into Telluride. All facilities within the beautifully renovated public school building are fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking distance of the school. Participants are expected to share condominiums. No cars are required. Bring hiking boots, warm clothes and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to work on specific projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, MACs and PCs running LINUX and Windows95. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three week workshop. 
FINANCIAL ARRANGEMENTS: We have several funding requests pending to pay for most of the costs associated with this workshop. Unlike in previous years, after notifications of acceptance have been mailed out around March 15, 1998, participants are expected to pay a $250 workshop fee. In case of real hardship, this can be waived. Shared condominiums will be provided for all academic participants at no cost to them. We expect participants from national laboratories and industry to pay for these modestly priced condominiums. We expect to have funds to reimburse a small number of participants for travel (up to $500 for domestic travel and up to $800 for overseas travel). Please specify on the application whether such financial help is needed. HOW TO APPLY: The deadline for receipt of applications is February 1, 1998. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Applications should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum vitae. 3. One-page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around March 15, 1998.

From jfeldman at ICSI.Berkeley.EDU Wed Dec 3 12:25:23 1997 From: jfeldman at ICSI.Berkeley.EDU (Jerry Feldman) Date: Wed, 03 Dec 1997 09:25:23 -0800 Subject: backprop w/o math?
Message-ID: <34859603.7B03@icsi.berkeley.edu> I would appreciate advice on how best to teach backprop to undergraduates who may have little or no math. We are planning to use Plunkett and Elman's Tlearn for exercises. This is part of a new course with George Lakoff, modestly called "The Neural Basis of Thought and Language". We would also welcome any more general comments on the course: http://www.icsi.berkeley.edu/~mbrodsky/cogsci110/ -- Jerry Feldman From shavlik at cs.wisc.edu Fri Dec 5 15:12:30 1997 From: shavlik at cs.wisc.edu (Jude Shavlik) Date: Fri, 5 Dec 1997 14:12:30 -0600 (CST) Subject: CFP: 1998 Machine Learning Conference Message-ID: <199712052012.OAA17873@jersey.cs.wisc.edu> Call for Papers THE FIFTEENTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING July 24-26, 1998 Madison, Wisconsin, USA The Fifteenth International Conference on Machine Learning (ICML-98) will be held at the University of Wisconsin, Madison from July 24 to July 26, 1998. ICML-98 will be collocated with the Eleventh Annual Conference on Computational Learning Theory (COLT-98) and the Fourteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-98). Seven additional conferences, including the Fifteenth National Conference on Artificial Intelligence (AAAI-98), will also be held in Madison (see http://www.cs.wisc.edu/icml98/ for a complete list). Submissions are invited that describe empirical, theoretical, and cognitive-modeling research in all areas of machine learning. Submissions that present algorithms for novel learning tasks, interdisciplinary research involving machine learning, or innovative applications of machine learning techniques to challenging, real-world problems are especially encouraged. The deadline for submissions is MARCH 2, 1998. (An electronic version of the title page is due February 27, 1998.) See http://www.cs.wisc.edu/icml98/callForPapers.html for submission details. 
There are also three joint ICML/AAAI workshops being held July 27, 1998: Developing ML Applications: Problem Definition, Task Decomposition, and Technique Selection Learning for Text Categorization Predicting the Future: AI Approaches to Time-Series Analysis The submission deadline for these WORKSHOPS is MARCH 11, 1998. Additional details about the workshops are available via http://www.cs.wisc.edu/icml98/ [My apologies if you receive multiple copies of this announcement.]

From lwh at montefiore.ulg.ac.be Fri Dec 5 02:43:56 1997 From: lwh at montefiore.ulg.ac.be (WEHENKEL Louis) Date: Fri, 5 Dec 1997 08:43:56 +0100 Subject: Book announcement "Automatic learning techniques in power systems" Message-ID: Dear colleagues, This is to announce the availability of a new book on "Automatic learning techniques in power systems". In the coming weeks, I will add some information on my own web site http://www.montefiore.ulg.ac.be/~lwh/. Cordially, -- ******************************************************************************* Louis WEHENKEL Research Associate National Fund for Scientific Research Department of Electrical Engineering University of Liege Tel. + 32 4 366.26.84 Institut Montefiore - SART TILMAN B28, Fax. + 32 4 366.29.84 B-4000 LIEGE - BELGIUM Email. lwh at montefiore.ulg.ac.be !!! New telephone numbers from September 15th, 1996 onwards ******************************************************************************* >> The web site for this book is http://www.wkap.nl/book.htm/0-7923-8068-1 >> >> >> Automatic Learning Techniques in Power Systems >> >> by >> Louis A. Wehenkel >> University of Liège, Institut Montefiore, Belgium >> >> THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND >> COMPUTER SCIENCE >> Volume 429 >> >> Automatic learning is a complex, multidisciplinary field of research >> and development, involving theoretical and applied methods from >> statistics, computer science, artificial intelligence, biology and >> psychology.
Its applications to engineering problems, such as those >> encountered in electrical power systems, are therefore challenging >> yet extremely promising. More and more data have become available, >> collected from the field by systematic archiving, or generated through >> computer-based simulation. To handle this explosion of data, automatic >> learning can be used to provide systematic approaches, without which >> the increasing amounts of data and computer power would be of little use. >> >> Automatic Learning Techniques in Power Systems is dedicated to the >> practical application of automatic learning to power systems. Power >> system problems to which automatic learning can be applied are screened and >> the complementary aspects of automatic learning, with respect to >> analytical methods and numerical simulation, are investigated. >> >> This book presents a representative subset of automatic learning >> methods - basic and more sophisticated ones - available from >> statistics (both classical and modern), and from artificial >> intelligence (both hard and soft computing). The text also discusses >> appropriate methodologies for combining these methods to make the best >> use of available data in the context of real-life problems. >> >> Automatic Learning Techniques in Power Systems is a useful reference >> source for professionals and researchers developing automatic learning >> systems in the electrical power field. >> >> Contents >> >> List of Figures. >> List of Tables. >> Preface. >> 1. Introduction. >> Part I: Automatic Learning Methods. >> 2. Automatic Learning is Searching a Model Space. >> 3. Statistical Methods. >> 4. Artificial Neural Networks. >> 5. Machine Learning. >> 6. Auxiliary Tools and Hybrid Techniques. >> Part II: Application of Automatic Learning to Security Assessment. >> 7. Framework for Applying Automatic Learning to DSA. >> 8. Overview of Security Problems. >> 9. Security Information Data Bases. >> 10. A Sample of Real-Life Applications.
>> 11. Added Value of Automatic Learning. >> 12. Future Orientations. >> Part III: Automatic Learning Applications in Power Systems. >> 13. Overview of Applications by Type. >> References. >> Index. >> Glossary. >> >> 1998, 320pp. ISBN 0-7923-8068-1 PRICE: US$ 122.00 ------- For information on how to subscribe / change address / delete your name from the Power Globe, see http://www.powerquality.com/powerglobe/ From rybaki at eplrx7.es.dupont.com Thu Dec 4 09:39:18 1997 From: rybaki at eplrx7.es.dupont.com (Ilya Rybak) Date: Thu, 4 Dec 1997 09:39:18 -0500 Subject: No subject Message-ID: <199712041439.JAA24126@pavlov> Dear colleagues, I have received a number of messages that the following web pages of mine are not available from many servers because of some problem (?) in VOICENET: BMV: Behavioral model of active visual perception and recognition http://www.voicenet.com/~rybak/vnc.html Modeling neural mechanisms for orientation selectivity in the visual cortex: http://www.voicenet.com/~rybak/iod.html Modeling interacting populations of biological neurons: http://www.voicenet.com/~rybak/pop.html Modeling neural mechanisms for the respiratory rhythmogenesis: http://www.voicenet.com/~rybak/resp.html Modeling neural mechanisms for the baroreceptor vagal reflex: http://www.voicenet.com/~rybak/baro.html I thank all the people who informed me about the VOICENET restrictions. I also apologize for the inconvenience and would like to inform those of you who could not reach the above pages from your servers that YOU CAN VISIT THESE PAGES USING THE NUMERIC IP ADDRESS. In order to do this, please replace http://www.voicenet.com/~rybak/.... with http://207.103.26.247/~rybak/... The pages have been recently updated and may be of interest to people working in the fields of computational neuroscience and vision. Sincerely, Ilya Rybak ======================================= Dr. Ilya Rybak Neural Computation Program E.I. du Pont de Nemours & Co.
Central Research Department Experimental Station, E-328/B31 Wilmington, DE 19880-0328 USA Tel.: (302) 695-3511 Fax: (302) 695-8901 E-mail: rybaki at eplrx7.es.dupont.com URL: http://www.voicenet.com/~rybak/ http://207.103.26.247/~rybak/ ======================================== From sml%essex.ac.uk at seralph21.essex.ac.uk Thu Dec 4 11:30:39 1997 From: sml%essex.ac.uk at seralph21.essex.ac.uk (Simon Lucas) Date: Thu, 04 Dec 1997 16:30:39 +0000 Subject: SOFM Demo Applet + source code Message-ID: <3486DAAE.21C9@essex.ac.uk> Dear All, I've written a colourful self-organising map demo - follow the link on my homepage listed below. This runs as a Java applet, and allows one to experiment with the effects of changing the neighbourhood radius. This may be of interest to those who have to teach the stuff - I've also included a complete listing of the source code. Best Regards, Simon Lucas ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk http://esewww.essex.ac.uk/~sml secretary: Mrs Wendy Ryder (+44) 1206 872437 ------------------------------------------------- From sylee at eekaist.kaist.ac.kr Tue Dec 9 04:06:49 1997 From: sylee at eekaist.kaist.ac.kr (sylee) Date: Tue, 09 Dec 1997 18:06:49 +0900 Subject: Special Invited Session on Neural Networks, SCI'98 Message-ID: <348D0A19.7CDE@ee.kaist.ac.kr> I have been asked to organize a Special Invited Session on Neural Networks at the World Multiconference on Systemics, Cybernetics, and Informatics (SCI'98) to be held on July 12-16, 1998, at Orlando, Florida, US. The SCI is a truly multi-disciplinary conference, where researchers come from many different disciplines. I believe this multi-disciplinary nature is quite unique and is helpful to neural networks researchers.
We all know that neural network research is interdisciplinary, and can contribute to many different applications. You are cordially invited to present your valuable research at SCI'98 and enjoy interesting discussion with researchers from different disciplines. If interested, please inform me of your intention by e-mail. I need to fix the author(s) and paper titles by January 10th, 1998. A brief introductory remark is attached for SCI'98, and more may be found at http://www.iiis.org. Best regards, Prof. Soo-Young Lee Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-8570 E-mail: sylee at ee.kaist.ac.kr ------------------------------------------------------------------ SCI'98 The purpose of the Conference is to bring together university professors, Corporate Leaders, Academic and Professional Leaders, consultants, scientists and engineers, theoreticians and practitioners, from all over the world to discuss themes of the conference and to participate with original ideas or innovations, knowledge or experience, theories or methodologies, in the areas of Systemics, Cybernetics and Informatics (SCI). Systemics, Cybernetics and Informatics (SCI) are being increasingly related to each other and to almost every scientific discipline and human activity. Their common transdisciplinarity characterizes and communicates them, generating strong relations among them and with other disciplines. They interpenetrate each other, integrating a whole that permeates human thinking and practice. This phenomenon induced the Organization Committee to structure SCI'98 as a multiconference where participants may focus on an area, or on a discipline, while keeping open the possibility of attending conferences from other areas or disciplines.
This systemic approach stimulates cross-fertilization among different disciplines, inspiring scholars, generating analogies and provoking innovations, which, after all, is one of the very basic principles of the systems movement and a fundamental aim in cybernetics. From sylee at eekaist.kaist.ac.kr Tue Dec 9 02:12:32 1997 From: sylee at eekaist.kaist.ac.kr (sylee) Date: Tue, 09 Dec 1997 16:12:32 +0900 Subject: KAIST Brain Science Research Center Message-ID: <348CEF60.24CB@ee.kaist.ac.kr> Inauguration of Brain Science Research Center at Korea Advanced Institute of Science and Technology On December 6th, 1997, the Brain Science Research Center (BSRC) had its inauguration ceremony at Korea Advanced Institute of Science and Technology (KAIST). The Korean Ministry of Science and Technology announced a 10-year national program on "brain research" on September 30, 1997. The research program consists of two main streams, i.e., developing brain-like computing systems and overcoming brain disease. Starting from 1998, the total budget will reach about 1 billion US dollars. The KAIST BSRC will be the main research organization for brain-like computing systems, and 69 researchers have joined the BSRC from 18 Korean education and research organizations. Its seven research areas are o neurobiology o cognitive science o neural network models o hardware implementation of neural networks o artificial vision and auditory systems o inference technology o intelligent control and communication systems More information may be available from Prof.
Soo-Young Lee, Director Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) Tel: +82-42-869-3431 Fax: +82-42-869-8570 E-mail: sylee at ee.kaist.ac.kr From moshe.sipper at di.epfl.ch Tue Dec 9 04:37:45 1997 From: moshe.sipper at di.epfl.ch (Moshe Sipper) Date: Tue, 09 Dec 1997 10:37:45 +0100 Subject: ICES98: Second Call for Papers Message-ID: <199712090937.KAA01605@lslsun7.epfl.ch> Second Call for Papers +-------------------------------------------------------+ | Second International Conference on Evolvable Systems: | | From Biology to Hardware (ICES98) | +-------------------------------------------------------+ The 1998 International EPFL-Latsis Foundation Conference Swiss Federal Institute of Technology, Lausanne, Switzerland September 23-26, 1998 +-------------------------------+ | http://lslwww.epfl.ch/ices98/ | +-------------------------------+ In Cooperation With: IEEE Neural Networks Council EvoNet - The European Network of Excellence in Evolutionary Computing Societe Suisse des Informaticiens, Section Romande (SISR) Societe des Informaticiens (SI) Centre Suisse d'Electronique et de Microtechnique SA (CSEM) Swiss Foundation for Research in Microtechnology Schweizerische Gesellschaft fur Nanowissenschaften und Nanotechnik (SGNT) The idea of evolving machines, whose origins can be traced to the cybernetics movement of the 1940s and the 1950s, has recently resurged in the form of the nascent field of bio-inspired systems and evolvable hardware. The inaugural workshop, Towards Evolvable Hardware, took place in Lausanne in October 1995, followed by the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), held in Japan in October 1996. 
Following the success of these past events, ICES98 will reunite this burgeoning community, presenting the latest developments in the field, bringing together researchers who use biologically inspired concepts to implement real systems in artificial intelligence, artificial life, robotics, VLSI design, and related domains. Topics to be covered will include, but not be limited to, the following list: * Evolving hardware systems. * Evolutionary hardware design methodologies. * Evolutionary design of electronic circuits. * Self-replicating hardware. * Self-repairing hardware. * Embryonic hardware. * Neural hardware. * Adaptive hardware platforms. * Autonomous robots. * Evolutionary robotics. * Bio-robotics. * Applications of nanotechnology. * Biological- and chemical-based systems. * DNA computing. o General Chair: Daniel Mange, Swiss Federal Institute of Technology - Lausanne o Program Chair: Moshe Sipper, Swiss Federal Institute of Technology - Lausanne o International Steering Committee * Tetsuya Higuchi, Electrotechnical Laboratory (Japan) * Hiroaki Kitano, Sony Computer Science Laboratory (Japan) * Daniel Mange, Swiss Federal Institute of Technology (Switzerland) * Moshe Sipper, Swiss Federal Institute of Technology (Switzerland) * Andres Perez-Uribe, Swiss Federal Institute of Technology (Switzerland) o Conference Secretariat Andres Perez-Uribe, Swiss Federal Institute of Technology - Lausanne Inquiries: Andres.Perez at di.epfl.ch, Tel.: +41-21-6932652, Fax: +41-21-6933705. o Important Dates * March 1, 1998: Submission deadline. * May 1, 1998: Notification of acceptance. * June 1, 1998: Camera-ready due. * September 23-26, 1998: Conference dates. o Publisher: Springer-Verlag Official Language: English o Submission Procedure Papers should not be longer than 10 pages (including figures and bibliography) in the Springer-Verlag llncs style (see Web page for complete instructions). 
Authors must submit five (5) complete copies of their paper (hardcopy only), received by March 1st, 1998, to the Program Chairman: Moshe Sipper - ICES98 Program Chair Logic Systems Laboratory Swiss Federal Institute of Technology CH-1015 Lausanne, Switzerland o The Self-Replication Contest * When: The self-replication contest will be held during the ICES98 conference. * Object: Demonstrate a self-replicating machine, implemented in some physical medium, e.g., mechanical, chemical, electronic, etc. * Important: The machine must be demonstrated AT THE CONFERENCE site. Paper submissions will not be considered. * Participation: The contest is open to all conference attendees (at least one member of any participating team must be a registered attendee). * Prize: + The most original design will be awarded a prize of $1000 (one thousand dollars). + The judgment shall be made by a special contest committee. + The committee's decision is final and incontestable. * WWW: Potential participants are advised to consult the self-replication page: http://lslwww.epfl.ch/~moshes/selfrep/ * If you intend to participate please inform the conference secretary Andres.Perez at di.epfl.ch. o Best-Paper Awards * Among the papers presented at ICES98, two will be chosen by a special committee and awarded, respectively, the best paper award and the best student paper award. * The committee's decision is final and incontestable. * All papers are eligible for the best paper award. To be eligible for the best student paper award, the first coauthor must be a full-time student. o Invited Speakers * Dr. David B. Fogel, Natural Selection, Inc. Editor-in-Chief, IEEE Transactions on Evolutionary Computation * Prof. Lewis Wolpert, University College London o Tutorials Four tutorials, delivered by experts in the field, will take place on Wednesday, September 23, 1998 (contingent upon a sufficient number of registrants). * "An Introduction to Molecular and DNA Computing," Prof. Max H. 
Garzon, University of Memphis * "An Introduction to Nanotechnology," Dr. James K. Gimzewski, IBM Zurich Research Laboratory * "Configurable Computing," Dr. Tetsuya Higuchi, Electrotechnical Laboratory * "An Introduction to Evolutionary Computation," Dr. David B. Fogel, Natural Selection, Inc. o Program * To be posted around May-June, 1998. o Local arrangements * See Web page. o Program Committee * Hojjat Adeli, Ohio State University (USA) * Igor Aleksander, Imperial College (UK) * David Andre, Stanford University (USA) * William W. Armstrong, University of Alberta (Canada) * Forrest H. Bennett III, Stanford University (USA) * Joan Cabestany, Universitat Politecnica de Catalunya (Spain) * Leon O. Chua, University of California at Berkeley (USA) * Russell J. Deaton, University of Memphis (USA) * Boi Faltings, Swiss Federal Institute of Technology (Switzerland) * Dario Floreano, Swiss Federal Institute of Technology (Switzerland) * Terry Fogarty, Napier University (UK) * David B. Fogel, Natural Selection, Inc. (USA) * Hugo de Garis, ATR Human Information Processing Laboratories (Japan) * Max H. Garzon, University of Memphis (USA) * Erol Gelenbe, Duke University (USA) * Wulfram Gerstner, Swiss Federal Institute of Technology (Switzerland) * Reiner W. Hartenstein, University of Kaiserslautern (Germany) * Inman Harvey, University of Sussex (UK) * Hitoshi Hemmi, NTT Human Interface Labs (Japan) * Jean-Claude Heudin, Pole Universitaire Leonard de Vinci (France) * Lishan Kang, Wuhan University (China) * John R. Koza, Stanford University (USA) * Pier L. Luisi, ETH Zentrum (Switzerland) * Bernard Manderick, Free University Brussels (Belgium) * Pierre Marchal, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * Juan J. Merelo, Universidad de Granada (Spain) * Julian Miller, Napier University (UK) * Francesco Mondada, Swiss Federal Institute of Technology (Switzerland) * J. 
Manuel Moreno, Universitat Politecnica de Catalunya (Spain) * Pascal Nussbaum, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * Christian Piguet, Centre Suisse d'Electronique et de Microtechnique SA (Switzerland) * James Reggia, University of Maryland at College Park (USA) * Eytan Ruppin, Tel Aviv University (Israel) * Eduardo Sanchez, Swiss Federal Institute of Technology (Switzerland) * Andre Stauffer, Swiss Federal Institute of Technology (Switzerland) * Luc Steels, Vrije Universiteit Brussel (Belgium) * Daniel Thalmann, Swiss Federal Institute of Technology (Switzerland) * Adrian Thompson, University of Sussex (UK) * Marco Tomassini, University of Lausanne (Switzerland) * Goran Wendin, Chalmers University of Technology and Goeteborg University (Sweden) * Lewis Wolpert, University College London (UK) * Xin Yao, Australian Defense Force Academy (Australia) From Bill_Warren at Brown.edu Tue Dec 9 11:59:45 1997 From: Bill_Warren at Brown.edu (Bill Warren) Date: Tue, 9 Dec 1997 11:59:45 -0500 (EST) Subject: Graduate Traineeships: Visual Navigation in Humans and Robots Message-ID: Graduate Traineeships Visual Navigation in Humans and Robots Brown University The Department of Cognitive and Linguistic Sciences and the Department of Computer Science at Brown University are seeking graduate applicants interested in visual navigation. The project investigates situated learning of spatial knowledge used to guide navigation in humans and robots. Human experiments study active navigation and landmark recognition in virtual environments, whose structure is manipulated during learning and transfer. In conjunction, biologically-inspired navigation strategies are tested on a mobile robot platform. Computational modeling pursues (a) a neural net model of the hippocampus and (b) reinforcement learning and hidden Markov models for spatial navigation.
Underlying questions include the geometric structure of the spatial knowledge that is used in active navigation, and how it interacts with the structure of the environment and the navigational task during learning. The project is under the direction of Leslie Kaelbling (Computer Science, www.cs.brown.edu), Michael Tarr and William Warren (Cognitive & Linguistic Sciences, www.cog.brown.edu). Three graduate traineeships are available, beginning in the Fall of 1998. Applicants should apply to either of these home departments. Application materials can be obtained from: The Graduate School, Brown University, Box 1867, Providence, RI 02912, phone (401) 863-2600, www.brown.edu. The application deadline is Jan. 1, 1998. -- Bill William H. Warren, Professor Dept. of Cognitive & Linguistic Sciences Box 1978 Brown University Providence, RI 02912 (401) 863-3980 ofc, 863-2255 FAX Bill_Warren at brown.edu From ptodd at hellbender.mpib-berlin.mpg.de Wed Dec 10 10:43:34 1997 From: ptodd at hellbender.mpib-berlin.mpg.de (ptodd@hellbender.mpib-berlin.mpg.de) Date: Wed, 10 Dec 97 16:43:34 +0100 Subject: pre/postdoc positions in modeling cognitive mechanisms Message-ID: <9712101543.AA21683@hellbender.mpib-berlin.mpg.de> Hello--we are seeking pre- and postdoctoral applicants for positions in our Center for Adaptive Behavior and Cognition to study simple cognitive mechanisms in humans (and other animals), both through simulation modeling techniques and experimentation. Please visit our website (http://www.mpib-berlin.mpg.de/abc) for more information on what we're up to here, and circulate the following ad to all appropriate parties. 
cheers, Peter Todd Center for Adaptive Behavior and Cognition Max Planck Institute for Human Development Berlin, Germany ****************************************************************************** The Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, Germany, is seeking applicants for 1 one-year Predoctoral Fellowship (tax-free stipend DM 21,600) and 1 two-year Postdoctoral Fellowship (tax-free stipend range DM 40,000-44,000) beginning in September 1998. Candidates should be interested in modeling bounded rationality in real-world domains, and should have expertise in one of the following areas: judgment and decision making, evolutionary psychology or biology, cognitive anthropology, experimental economics and social games, risk-taking. For a detailed description of our research projects and current researchers, please visit our WWW homepage at http://www.mpib-berlin.mpg.de/abc or write to Dr. Peter Todd at ptodd at mpib-berlin.mpg.de . The working language of the center is English. Send applications (curriculum vitae, letters of recommendation, and reprints) by February 28, 1998 to Professor Gerd Gigerenzer, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany. ****************************************************************************** From Rob.Callan at solent.ac.uk Thu Dec 11 06:12:27 1997 From: Rob.Callan at solent.ac.uk (Rob.Callan@solent.ac.uk) Date: Thu, 11 Dec 1997 11:12:27 +0000 Subject: PhD studentship Message-ID: <8025656A.003D7CDD.00@hercules.solent.ac.uk> A studentship is available for someone with an interest in traditional AI and connectionism (or 'new AI') to explore adapting a machine's behaviour by instruction. 
Further details can be found at: http://www.solent.ac.uk/syseng/create/html/ai/aires.html or from Academic Quality Service Southampton Institute East Park Terrace Southampton UK SO14 0YN Tel: (01703) 319901 Rob Callan From john at dcs.rhbnc.ac.uk Thu Dec 11 07:45:57 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Thu, 11 Dec 97 12:45:57 +0000 Subject: NEUROCOMPUTING Special Issue - note extended deadline January 19, 1998 Message-ID: <199712111245.MAA18713@platon.cs.rhbnc.ac.uk> NEUROCOMPUTING announces a Special Issue on Theoretical analysis of real-valued function classes. Manuscripts are solicited for a special issue of NEUROCOMPUTING on the topic of 'Theoretical analysis of real-valued function classes'. Analysis of Neural Networks both as approximators and classifiers frequently relies on viewing them as real-valued function classes. This perspective places Neural Network research in a far broader context, linking it with approximation theory, complexity theory over the reals, and statistical and computational learning theory, among others. The aim of the special issue is to bring these links into focus by inviting papers on the theoretical analysis of real-valued function classes where the results are relevant to the special case of Neural Networks. The following is a (non-exhaustive) list of possible topics of interest for the SPECIAL ISSUE: - Measures of Complexity - Approximation rates - Learning algorithms - Generalization estimation - Real valued complexity analysis - Novel neural functionality and its analysis, e.g. spiking neurons - Kernel and alternative representations - Algorithms for forming real-valued combinations of functions - On-line learning algorithms The following team of editors will be processing the submitted papers: John Shawe-Taylor (coordinating editor), Royal Holloway, Univ.
of London Shai Ben-David, Technion, Israel Pascal Koiran, ENS, Lyon, France Rob Schapire, AT&T Labs, Florham Park, USA The deadline for submissions is January 19, 1998. Every effort will be made to ensure fast processing of the papers by editors and referees. For this reason, electronic submission of postscript files (compressed and uuencoded) via email is encouraged. Submission should be to the following address: John Shawe-Taylor Dept of Computer Science Royal Holloway, University of London Egham, Surrey TW20 0EX UK or Email: jst at dcs.rhbnc.ac.uk From luo at late.e-technik.uni-erlangen.de Fri Dec 12 09:45:43 1997 From: luo at late.e-technik.uni-erlangen.de (Fa-Long Luo) Date: Fri, 12 Dec 1997 15:45:43 +0100 Subject: book announcement: Applied Neural Networks for Signal Processing Message-ID: <199712121445.PAA00140@late5.e-technik.uni-erlangen.de> New Book Applied Neural Networks for Signal Processing Authors: Fa-Long Luo and Rolf Unbehauen Publisher: Cambridge University Press ISBN: 0 521 56391 7 The use of neural networks in signal processing is becoming increasingly widespread, with applications in areas such as filtering, parameter estimation, signal detection, pattern recognition, signal reconstruction, system identification, signal compression, and signal transmission. The signals concerned include audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical and others. The key features of neural networks involved in signal processing are their asynchronous parallel and distributed processing, nonlinear dynamics, global interconnection of network elements, self-organization and high-speed computational capability. With these features, neural networks can provide very powerful means for solving many problems encountered in signal processing, especially in nonlinear signal processing, real-time signal processing, adaptive signal processing and blind signal processing.
From an engineering point of view, this book aims to provide a detailed treatment of neural networks for signal processing by covering basic principles, modelling, algorithms, architectures, implementation procedures and well-designed simulation examples. This book is organized into nine chapters: Chap. 1: Fundamental Models of Neural Networks for Signal Processing, Chap. 2: Neural Networks for Filtering, Chap. 3: Neural Networks for Spectral Estimation, Chap. 4: Neural Networks for Signal Detection, Chap. 5: Neural Networks for Signal Reconstruction, Chap. 6: Neural Networks for Principal Components and Minor Components, Chap. 7: Neural Networks for Array Signal Processing, Chap. 8: Neural Networks for System Identification, Chap. 9: Neural Networks for Signal Compression. This book will be an invaluable reference for scientists and engineers working in communications, control or any other field related to signal processing. It can also be used as a textbook for graduate courses in electrical engineering and computer science. Contact: Dr. Fa-Long Luo, Lehrstuhl fuer Allgemeine und Theoretische Elektrotechnik, Cauerstr. 7, 91058 Erlangen, Germany, Tel: +49 9131 857794, Fax: +49 9131 13435, Email: luo at late.e-technik.uni-erlangen.de; or Dr. Philip Meyler, Cambridge University Press, 40 West 20th Street, New York, NY 10011-4211, USA, Tel: +1 212 924 3900 ext. 472, Fax: +1 212 691 3239, E-mail: pmeyler at cup.org. http://www.cup.org/Titles/56/0521563917.html From harnad at coglit.soton.ac.uk Fri Dec 12 15:58:21 1997 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Fri, 12 Dec 1997 20:58:21 GMT Subject: Relational Complexity: BBS Call for Commentators Message-ID: <199712122058.UAA25682@amnesia.psy.soton.ac.uk> Errors-to: harnad1 at coglit.soton.ac.uk Reply-to: bbs at coglit.soton.ac.uk Below is the abstract of a forthcoming BBS target article on: PROCESSING CAPACITY DEFINED BY RELATIONAL COMPLEXITY: Implications for Comparative, Developmental, and Cognitive Psychology by Graeme S. Halford, William H. Wilson, and Steven Phillips This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates.
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. ____________________________________________________________________ PROCESSING CAPACITY DEFINED BY RELATIONAL COMPLEXITY: Implications for Comparative, Developmental, and Cognitive Psychology Graeme Halford Department of Psychology University of Queensland 4072 AUSTRALIA gsh at psy.uq.oz.au William H. Wilson School of Computer Science and Engineering University of New South Wales Sydney New South Wales 2052 AUSTRALIA billw at cse.unsw.edu.au http://www.cse.unsw.edu.au/~billw Steven Phillips Cognitive Science Section Electrotechnical Laboratory 1-1-4 Umezono Tsukuba Ibaraki 305 JAPAN stevep at etl.go.jp http://www.etl.go.jp/etl/ninchi/stevep at etl.go.jp/welcome.html KEYWORDS: Capacity, complexity, working memory, central executive, resource, cognitive development, comparative psychology, neural nets, representation of relations, chunking ABSTRACT: Working memory limitations are best defined in terms of the complexity of relations that can be processed in parallel. Relational complexity is related to processing loads in problem solving, and can distinguish between higher animal species as well as between children of different ages. Complexity is defined by the number of dimensions, or sources of variation, that are related. A unary relation has one argument and one source of variation because its argument can be instantiated in only one way at a time. A binary relation has two arguments and two sources of variation because two argument instantiations are possible at once. A ternary relation is three dimensional, a quaternary relation is four dimensional, and so on.
Dimensionality is related to the number of "chunks," because both attributes on dimensions and chunks are independent units of information of arbitrary size. Empirical studies of working memory limitations indicate a soft limit which corresponds to processing one quaternary relation in parallel. More complex concepts are processed by segmentation or conceptual chunking. In segmentation, tasks are broken into components that do not exceed processing capacity and are processed serially. In conceptual chunking, representations are "collapsed" to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Parallel distributed implementations of relational representations show that relations with more arguments have a higher computational cost; this corresponds to empirical observations of higher processing loads in humans. Empirical evidence is presented that relational complexity (1) distinguishes between higher species, (2) is related to processing load in reasoning and in sentence comprehension, and (3) increases with age in the relations that children can process. Implications for neural net models and for theories of cognition and cognitive development are discussed. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp or gopher from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.
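The arity-as-dimensionality idea in the abstract above can be made concrete with a small sketch (a toy illustration written for this digest, not code from the target article; the relation names and example tuples are invented):

```python
# Toy illustration of relational complexity as arity (dimensionality).
# A relation is modeled as a set of argument tuples; its dimensionality
# is the number of argument slots, i.e. independent sources of variation.

def dimensionality(relation):
    """Arity of a relation given as a non-empty set of equal-length tuples."""
    arities = {len(t) for t in relation}
    assert len(arities) == 1, "all tuples in a relation must have the same arity"
    return arities.pop()

# Unary: TALL(x) -- one source of variation.
tall = {("Alice",), ("Bob",)}
# Binary: LARGER-THAN(x, y) -- two sources of variation.
larger_than = {("elephant", "dog"), ("dog", "mouse")}
# Ternary: ADDITION(a, b, c) with a + b = c -- three sources of variation.
addition = {(2, 3, 5), (1, 1, 2)}

assert dimensionality(tall) == 1
assert dimensionality(larger_than) == 2
assert dimensionality(addition) == 3
```

On this reading, TALL(x) has one argument slot and hence one dimension, LARGER-THAN(x, y) has two, and the addition triple (a, b, c) has three, matching the abstract's definition of dimensionality as the number of related sources of variation.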
The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.halford.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.halford ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.halford gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.halford When you have the file(s) you want, type: quit From atick at monaco.rockefeller.edu Sun Dec 14 11:15:40 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Sun, 14 Dec 1997 11:15:40 -0500 Subject: Research Positions in Computer Vision Message-ID: <9712141115.ZM10632@monaco.rockefeller.edu> Research Positions in Computer Vision and Pattern Recognition *****IMMEDIATE NEW OPENINGS******* Visionics Corporation announces the opening of four new positions for research scientists and engineers in the field of Computer Vision and Pattern Recognition. The positions are at various levels. Candidates are expected to have demonstrated research abilities in computer vision, artificial neural networks, image processing, computational neuroscience or pattern recognition. The successful candidates will join the growing R&D team of Visionics in developing real-world pattern recognition algorithms. This is the team that produced the face recognition technology that was recently awarded the prestigious PC WEEK's Best of Show AND Best New Technology at COMDEX Fall 97. The job openings are at Visionics' new R&D facility at Exchange Place on the Waterfront of Jersey City, New Jersey--right across the river from the World Trade Center in Manhattan.
Visionics offers a competitive salary and benefits package that includes stock options, health insurance, retirement, etc., and an excellent chance for rapid career advancement. ***For more information about Visionics please visit our webpage at http://www.faceit.com. FOR IMMEDIATE CONSIDERATION, FAX YOUR RESUME TO: (1) Fax: 201 332 9313 Dr. Norman Redlich--CTO Visionics Corporation 1 Exchange Place Jersey City, NJ 07302 or you can (2) Email it to jobs at faceit.com Visionics is an equal opportunity employer--women and minority candidates are encouraged to apply. Joseph J. Atick, CEO Visionics Corporation 201 332 2890 jja at faceit.com %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From terry at salk.edu Mon Dec 15 16:20:14 1997 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 15 Dec 1997 13:20:14 -0800 (PST) Subject: NEURAL COMPUTATION 10:1 Message-ID: <199712152120.NAA16748@helmholtz.salk.edu> Neural Computation - Contents Volume 10, Number 1 - January 1, 1998 ARTICLE Memory Maintenance via Neuronal Regulation David Horn, Nir Levy and Eytan Ruppin NOTE Ordering of Self-Organizing Maps in Multidimensional Cases Guang-Bin Huang, Haroon A. Babri, and Hua-Tian Li LETTERS Predicting the Distribution of Synaptic Strengths and Cell Firing Correlations in a Self-Organizing, Sequence Prediction Model Asohan Amarasingham, and William B. Levy State-Dependent Weights for Neural Associative Memories Ravi Kothari, Rohit Lotlikar, and Marc Cathay The Role of the Hippocampus in Solving the Morris Water Maze A. David Redish and David S. Touretzky Near-Saddle-Node Bifurcation Behavior as Dynamics in Working Memory for Goal-Directed Behavior Hiroyuki Nakahara and Kenji Doya A Canonical Form of Nonlinear Discrete-Time Models Gerard Dreyfus and Yizhak Idan A Low-Sensitivity Recurrent Neural Network Andrew D.
Back and Ah Chung Tsoi Fixed-Point Attractor Analysis for a Class of Neurodynamics Jianfeng Feng and David Brown GTM: The Generative Topographic Mapping Christopher M. Bishop, Markus Svensen and Christopher K. I. Williams Identification Criteria and Lower Bounds for Perceptron-Like Learning Rules Michael Schmitt ----- ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1998 - VOLUME 10 - 8 ISSUES
                 USA    Canada*   Other Countries
Student/Retired  $50    $53.50    $78
Individual       $82    $87.74    $110
Institution      $285   $304.95   $318
* includes 7% GST (Back issues from Volumes 1-9 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From kia at particle.kth.se Mon Dec 15 17:13:07 1997 From: kia at particle.kth.se (Karina Waldemark) Date: Mon, 15 Dec 1997 23:13:07 +0100 Subject: VI-DYNN'98, Workshop on Virtual Intelligence and Dynamic Neural Networks Message-ID: <3495AB73.53CA8816@particle.kth.se> ------------------------------------------------------------------- Announcement and call for papers: VI-DYNN'98 Workshop on Virtual Intelligence - Dynamic Neural Networks Royal Institute of Technology, KTH Stockholm, Sweden June 22-26, 1998 ------------------------------------------------------------------ VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn Abstracts due: February 28, 1998 VI-DYNN'98 will combine the DYNN emphasis on biologically inspired neural network models, especially Pulse Coupled Neural Networks (PCNN), with the practical applications emphasis of the VI workshops. In particular we will focus on why, how, and where to use biologically inspired neural systems.
For example, we will learn how to adapt such systems to sensors such as digital X-ray imaging devices, CCDs, and SAR, and examine questions of accuracy, speed, etc. Developments in research on biological neural systems, such as the mammalian visual system, and how smart sensors can benefit from this knowledge will also be presented. Pulse Coupled Neural Networks (PCNN) are among the most exciting recent developments in the field of artificial neural networks (ANN), showing great promise for pattern recognition and other applications. PCNN-type models are much more closely related to real biological neural systems than most ANNs, yet many researchers in the field of ANN pattern recognition are unfamiliar with them. VI-DYNN'98 will continue in that spirit and join it with the Virtual Intelligence workshop series. ----------------------------------------------------------------- VI-DYNN'98 Topics: Dynamic NN Fuzzy Systems Spiking Neurons Rough Sets Brain Image Genetic Algorithms Virtual Reality ------------------------------------------------------------------ Applications: Medical Defense & Space Others ------------------------------------------------------------------- Special sessions: PCNN - Pulse Coupled Neural Networks exciting new artificial neural networks related to real biological neural systems PCNN applications: pattern recognition image processing digital x-ray imaging devices, CCDs & SAR Biologically inspired neural network models why, how and where to use them The mammalian visual system how smart sensors can benefit from this knowledge The Electronic Nose ------------------------------------------------------------------------ International Organizing Committee: John L. Johnson (MICOM, USA), Jason M. Kinser (George Mason U., USA) Thomas Lindblad (KTH, Sweden) Robert Lorenz (Univ. Wisconsin, USA) Mary Lou Padgett (Auburn U., USA), Robert T.
Savely (NASA, Houston) Manuel Samuelides (CERT-ONERA, Toulouse, France) John Taylor (King's College, UK) Simon Thorpe (CERI-CNRS, Toulouse, France) ------------------------------------------------------------------------ Local Organizing Committee: Thomas Lindblad (KTH) - Conf. Chairman Clark S. Lindsey (KTH) - Conf. Secretary Kenneth Agehed (KTH) Joakim Waldemark (KTH) Karina Waldemark (KTH) Nina Weil (KTH) Moyra Mann - registration officer --------------------------------------------------------------------- Contact: Thomas Lindblad (KTH) - Conf. Chairman email: lindblad at particle.kth.se Phone: [+46] - (0)8 - 16 11 09 Clark S. Lindsey (KTH) - Conf. Secretary email: lindsey at particle.kth.se Phone: [+46] - (0)8 - 16 10 74 Switchboard: [+46] - (0)8 - 16 10 00 Fax: [+46] - (0)8 - 15 86 74 VI-DYNN'98 Web: http://msia02.msi.se/vi-dynn --------------------------------------------------------------------- From rafal at idsia.ch Tue Dec 16 11:00:07 1997 From: rafal at idsia.ch (Rafal Salustowicz) Date: Tue, 16 Dec 1997 17:00:07 +0100 (MET) Subject: Learning Team Strategies: Soccer Case Studies Message-ID: LEARNING TEAM STRATEGIES: SOCCER CASE STUDIES Rafal Salustowicz Marco Wiering Juergen Schmidhuber IDSIA, Lugano (Switzerland) Revised Technical Report IDSIA-29-97 To appear in the Machine Learning Journal (1998) We use simulated soccer to study multiagent learning. Each team's players (agents) share action set and policy, but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare several learning algorithms: TD-Q learning with linear neural networks (TD-Q), Probabilistic Incremental Program Evolution (PIPE), and a PIPE version that learns by coevolution (CO-PIPE). TD-Q is based on learning evaluation functions (EFs) mapping input/action pairs to expected reward. PIPE and CO-PIPE search policy space directly.
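As a hedged illustration of the TD-Q ingredient just described (a linear evaluation function mapping input/action features to expected reward), here is a minimal single-step update; the feature vectors, learning rate, and discount factor below are invented for the example and are not taken from the report:

```python
import numpy as np

def td_q_update(w, phi_sa, r, phi_next_best, alpha=0.1, gamma=0.95):
    """One TD-Q step with a linear evaluation function
    Q(s, a) = w . phi(s, a): move the weights along the
    TD error times the current (state, action) features."""
    td_error = r + gamma * np.dot(w, phi_next_best) - np.dot(w, phi_sa)
    return w + alpha * td_error * phi_sa

w = np.zeros(3)
phi = np.array([1.0, 0.0, 1.0])       # features of the taken (state, action) -- made up
phi_next = np.array([0.0, 1.0, 0.0])  # features of the best next action -- made up
w = td_q_update(w, phi, r=1.0, phi_next_best=phi_next)
print(w)  # weights move toward predicting the collective reward
```

With a shared weight vector like this, all of a team's agents would learn from the same collectively delivered reward, which hints at why the report finds shared EFs hard to learn.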
They use adaptive probability distributions to synthesize programs that calculate action probabilities from current inputs. Our results show that linear TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE and CO-PIPE, however, do not depend on EFs and find good policies faster and more reliably. This suggests that in some multiagent learning scenarios direct search in policy space can offer advantages over EF-based approaches. http://www.idsia.ch/~rafal/research.html ftp://ftp.idsia.ch/pub/rafal/soccer.ps.gz Related papers by the same authors: Evolving soccer strategies. In N. Kasabov, R. Kozma, K. Ko, R. O'Shea, G. Coghill, and T. Gedeon, editors, Progress in Connectionist-based Information Systems:Proc. of the 4th Intl. Conf. on Neural Information Processing ICONIP'97, pages 502-505, Springer-Verlag, Singapore, 1997. ftp://ftp.idsia.ch/pub/rafal/ICONIP_soccer.ps.gz On learning soccer strategies. In W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, editors, Proc. of the 7th Intl. Conf. on Artificial Neural Networks (ICANN'97), volume 1327 of Lecture Notes in Computer Science, pages 769-774, Springer-Verlag Berlin Heidelberg, 1997. 
ftp://ftp.idsia.ch/pub/rafal/ICANN_soccer.ps.gz **********************************************************************
Rafal Salustowicz
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA)
Corso Elvezia 36
6900 Lugano
Switzerland
e-mail: rafal at idsia.ch
        raf at cs.tu-berlin.de
        raf at psych.stanford.edu
Tel (IDSIA) : +41 91 91198-38
Tel (office): +41 91 91198-32
Fax         : +41 91 91198-39
WWW: http://www.idsia.ch/~rafal
********************************************************************** From becker at curie.psychology.mcmaster.ca Tue Dec 16 13:26:22 1997 From: becker at curie.psychology.mcmaster.ca (Sue Becker) Date: Tue, 16 Dec 1997 13:26:22 -0500 (EST) Subject: Faculty Position at McMaster University Message-ID: BEHAVIORAL NEUROSCIENCE The Department of Psychology at McMaster University seeks applications for a position as Assistant Professor commencing July 1, 1998, subject to final budgetary approval. We seek a candidate who holds a Ph.D. and who is conducting research in an area of behavioral neuroscience that will complement one or more of our current strengths in neural plasticity, perceptual development, cognition, computational neuroscience, and animal behavior. Appropriate research interests include, for example, cognitive neuroscience/neuropsychology, the physiology of memory, sensory physiology, the physiology of motivation/emotion, neuroethology, and computational neuroscience. The candidate would be expected to teach in the area of neuropsychology, as well as a laboratory course in neuroscience. This is initially for a 3-year contractually-limited appointment. In accordance with Canadian immigration requirements, this advertisement is directed to Canadian citizens and permanent residents. McMaster University is committed to Employment Equity and encourages applications from all qualified candidates, including aboriginal peoples, persons with disabilities, members of visible minorities, and women.
To apply, send a curriculum vitae, a short statement of research interests, a publication list with selected reprints, and three letters of reference to the following address: Prof. Ron Racine, Department of Psychology, McMaster University, Hamilton, Ontario, CANADA L8S 4K1. From rvicente at onsager.if.usp.br Wed Dec 17 09:01:00 1997 From: rvicente at onsager.if.usp.br (Renato Vicente) Date: Wed, 17 Dec 1997 12:01:00 -0200 (EDT) Subject: Paper: Learning a Spin Glass Message-ID: <199712171401.MAA08019@curie.if.usp.br> Dear Connectionists, We would like to announce our new paper on statistical physics of neural networks. The paper is available at: http://www.fma.if.usp.br/~rvicente/nnsp.html/NNGUSP13.ps.gz Comments are welcome. Learning a spin glass: determining Hamiltonians from metastable states. ------------------------------------------------------------------------ SILVIA KUVA, OSAME KINOUCHI and NESTOR CATICHA Abstract: We study the problem of determining the Hamiltonian of a fully connected Ising Spin Glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms.
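One way to picture the learning problem in the abstract above is the following sketch (our own toy construction, not necessarily one of the algorithms the paper compares): a perceptron-style rule adjusts student couplings until each given configuration is metastable, i.e. every spin is aligned with its local field.

```python
import numpy as np

def train_couplings(states, n_sweeps=500, lr=0.1):
    """Perceptron-style learning of couplings J (arbitrary couplings,
    as in the paper's setting) so that every given configuration s is a
    local minimum of the energy: s_i * (J @ s)_i > 0 for all spins i."""
    n = states.shape[1]
    J = np.zeros((n, n))
    for _ in range(n_sweeps):
        for s in states:
            h = J @ s                          # local fields
            for i in np.where(s * h <= 0)[0]:  # spin i not yet stable
                J[i] += lr * s[i] * s          # Hebbian-style correction
                J[i, i] = 0.0                  # no self-coupling
    return J

rng = np.random.default_rng(1)
states = rng.choice([-1.0, 1.0], size=(3, 12))  # target metastable states
J = train_couplings(states)
print(all((s * (J @ s) > 0).all() for s in states))
```

Each row of J is effectively an independent perceptron here; the paper's student-teacher analysis concerns how many such measurements are needed, which this sketch does not address.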
From HEINKED at psycho1.bham.ac.uk Wed Dec 17 19:48:10 1997 From: HEINKED at psycho1.bham.ac.uk (Mr Dietmar Heinke) Date: Wed, 17 Dec 1997 16:48:10 -0800 Subject: CALL FOR ABSTRACTS Message-ID: <349872CA.238@psycho1.bham.ac.uk> ***************** Call for Abstracts **************************** 5th Neural Computation and Psychology Workshop Connectionist Models in Cognitive Neuroscience University of Birmingham, England Tuesday 8th September - Thursday 10th September 1998 This workshop is the fifth in a series, following on from the first at the University of Wales, Bangor (with theme "Neurodynamics and Psychology"), the second at the University of Edinburgh, Scotland ("Memory and Language"), the third at the University of Stirling, Scotland ("Perception") and the fourth at University College, London ("Connectionist Representations"). The general aim is to bring together researchers from such diverse disciplines as artificial intelligence, applied mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on the connectionist modelling of psychology. This year's workshop is to be hosted by the University of Birmingham. As in previous years there will be a theme to the workshop. The theme is to be: Connectionist Models in Cognitive Neuroscience. This covers many important issues ranging from modelling physiological structure to cognitive function and its disorders (in neuropsychological and psychiatric cases). As in previous years we aim to keep the workshop fairly small, informal and single track. As always, participants bringing expertise from outside the UK are particularly welcome. A one page abstract should be sent to Professor Glyn W. Humphreys School of Psychology University of Birmingham Birmingham B15 2TT United Kingdom Deadline for abstracts: 31st of May, 1998 Registration, Food and Accommodation The workshop will be held at the University of Birmingham.
The conference registration fee (which includes lunch and morning and afternoon tea/coffee each day) is 60 pounds. A special conference dinner (optional) is planned for the Wednesday evening costing 20 pounds. Accommodation is available in university halls or local hotels. A special package price of 150 pounds covers 3 nights' accommodation in the halls of residence plus the registration fee. Organising Committee Dietmar Heinke Andrew Olson Professor Glyn W. Humphreys Contact Details Workshop Email address: e.fox at bham.ac.uk Dietmar Heinke, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 4920, Fax: +44 (0)121 414 4897 Email: heinked at psycho1.bham.ac.uk Andrew Olson, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 3328, Fax: +44 (0)121 414 4897 Email: olsona at psycho1.bham.ac.uk Glyn W. Humphreys, NCPW5, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK. Phone: +44 (0)121 414 4930, Fax: +44 (0)121 414 4897 Email: humphreg at psycho1.bham.ac.uk From jordan at ai.mit.edu Wed Dec 17 15:09:36 1997 From: jordan at ai.mit.edu (Michael I. Jordan) Date: Wed, 17 Dec 1997 15:09:36 -0500 (EST) Subject: graphical models and variational methods Message-ID: <9712172009.AA01909@carpentras.ai.mit.edu> A tutorial paper on graphical models and variational approximations is available at: ftp://psyche.mit.edu/pub/jordan/variational-intro.ps.gz http://www.ai.mit.edu/projects/jordan.html ------------------------------------------------------- AN INTRODUCTION TO VARIATIONAL METHODS FOR GRAPHICAL MODELS Michael I. Jordan Massachusetts Institute of Technology Zoubin Ghahramani University of Toronto Tommi S. Jaakkola University of California Santa Cruz Lawrence K. Saul AT&T Labs -- Research This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models.
We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, showing how upper and lower bounds can be found for local probabilities, and discussing methods for extending these bounds to bounds on global probabilities of interest. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case. From bruf at igi.tu-graz.ac.at Thu Dec 18 05:48:05 1997 From: bruf at igi.tu-graz.ac.at (Berthold Ruf) Date: Thu, 18 Dec 1997 11:48:05 +0100 Subject: Paper: Spatial and Temporal Pattern Analysis via Spiking Neurons Message-ID: <3498FF65.B9895E06@igi.tu-graz.ac.at> The following technical report is now available at http://www.cis.tu-graz.ac.at/igi/bruf/rbf-tr.ps.gz Thomas Natschlaeger and Berthold Ruf: Spatial and Temporal Pattern Analysis via Spiking Neurons Abstract: Spiking neurons, receiving temporally encoded inputs, can compute radial basis functions (RBFs) by storing the relevant information in their delays. In this paper we show how these delays can be learned using exclusively locally available information (basically the time difference between the pre- and postsynaptic spike). Our approach gives rise to a biologically plausible algorithm for finding clusters in a high dimensional input space with networks of spiking neurons, even if the environment is changing dynamically. Furthermore, we show that our learning mechanism makes it possible that such RBF neurons can perform some kind of feature extraction where they recognize that only certain input coordinates carry relevant information. Finally we demonstrate that this model allows the recognition of temporal sequences even if they are distorted in various ways. 
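The delay-learning idea in the abstract above can be caricatured in a few lines of toy code. This is a sketch under our own simplifying assumptions (not the authors' model): the "postsynaptic spike time" is stood in for by the mean arrival time of the delayed inputs, and each delay is nudged using only the local pre/post time difference, in keeping with the paper's locality requirement. Inputs that always fire with a fixed temporal offset end up arriving simultaneously, so the unit behaves like an RBF centre on that spike-time pattern.

```python
import numpy as np

def learn_delays(patterns, n_epochs=50, lr=0.2):
    """Toy delay learning for one RBF-like spiking unit: input i with
    spike time t[i] arrives at t[i] + d[i]; the unit responds most
    strongly when all delayed spikes coincide, so each delay moves
    toward the (local) pre/post time difference."""
    d = np.zeros(patterns.shape[1])            # synaptic delays
    for _ in range(n_epochs):
        for t in patterns:                     # t[i]: input spike times
            arrival = t + d
            post = arrival.mean()              # stand-in for the postsynaptic spike time
            d += lr * (post - arrival)         # purely local update
    return d

# Two inputs that always fire 3 ms apart: the learned delays compensate.
patterns = np.array([[0.0, 3.0], [5.0, 8.0]])
d = learn_delays(patterns)
print(np.round(d[0] - d[1], 2))  # -> 3.0 : the delay difference cancels the offset
```

Because the update rule is a contraction toward coincidence, the delay difference converges geometrically to the inputs' fixed offset, which is the toy analogue of a cluster centre forming in spike-time space.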
----------------------------------------------------
Berthold Ruf
Office:
  Institute for Theoretical Computer Science
  TU Graz
  Klosterwiesgasse 32/2
  A-8010 Graz, AUSTRIA
  Tel: +43 316/873-5824
  FAX: +43 316/873-5805
Home:
  Vinzenz-Muchitsch-Strasse 56/9
  A-8020 Graz, AUSTRIA
  Tel: +43 316/274580
Email: bruf at igi.tu-graz.ac.at
Homepage: http://www.cis.tu-graz.ac.at/igi/bruf/
---------------------------------------------------- From giles at research.nj.nec.com Thu Dec 18 17:38:06 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 18 Dec 97 17:38:06 EST Subject: paper available: financial prediction Message-ID: <9712182238.AA13422@alta> The following technical report and related conference paper are now available at the WWW sites listed below: www.neci.nj.nec.com/homepages/lawrence/papers.html www.neci.nj.nec.com/homepages/giles/#recent-TRs www.cs.umd.edu/TRs/TRumiacs.html We apologize in advance for any multiple postings that may occur. ************************************************************************** Noisy Time Series Prediction using Symbolic Representation and Recurrent Neural Network Grammatical Inference U. of Maryland Technical Report CS-TR-3625 and UMIACS-TR-96-27 Steve Lawrence (1), Ah Chung Tsoi (2), C. Lee Giles (1,3) (1) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA (2) Faculty of Informatics, University of Wollongong, NSW 2522 Australia (3) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA ABSTRACT Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high noise, small sample size signals.
We introduce a new intelligent signal processing method which addresses the difficulties. The method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. The symbolic representation aids the extraction of symbolic knowledge from the recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Rules related to well known behavior such as trend following and mean reversal are extracted. Keywords - noisy time series prediction, recurrent neural networks, self-organizing map, efficient market hypothesis, foreign exchange rate, non-stationarity, rule extraction ********************************************************************** A short published conference version: C.L. Giles, S. Lawrence, A-C. Tsoi, "Rule Inference for Financial Prediction using Recurrent Neural Networks," Proceedings of the IEEE/IAFE Conf. on Computational Intelligence for Financial Engineering, p. 253, IEEE Press, 1997. can be found at www.neci.nj.nec.com/homepages/lawrence/papers.html www.neci.nj.nec.com/homepages/giles/papers/IEEE.CIFEr.ps.Z __ C.
Lee Giles / Computer Science / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From majidi at DI.Unipi.IT Fri Dec 19 11:34:04 1997 From: majidi at DI.Unipi.IT (Darya Majidi) Date: Fri, 19 Dec 1997 17:34:04 +0100 Subject: NNESMED '98 Message-ID: <199712191634.RAA23176@mailserver.di.unipi.it> 2-4 September 1998 Call for papers: 3rd International Conference on Neural Networks and Expert Systems in Medicine and Healthcare (NNESMED 98) Pisa, Italy Deadline: 13 February 1998, electronic file (4 pages containing an abstract of 200 words) Contact: Prof Antonina Starita Computer Science Department University of Pisa Corso Italia 40, 56126 Pisa Tel: +39-50-887215 Fax: +39-50-887226 E-mail: nnesmed at di.unipi.it WWW: http://www.di.unipi.it/~nnesmed/home.htm From takane at takane2.psych.mcgill.ca Fri Dec 19 11:53:23 1997 From: takane at takane2.psych.mcgill.ca (Yoshio Takane) Date: Fri, 19 Dec 1997 11:53:23 -0500 (EST) Subject: No subject Message-ID: <199712191653.LAA03407@takane2.psych.mcgill.ca> This will be the last call for papers for the special issue of Behaviormetrika. *****CALL FOR PAPERS***** BEHAVIORMETRIKA, an English journal published in Japan to promote the development and dissemination of quantitative methodology for analysis of human behavior, is planning to publish a special issue on ANALYSIS OF KNOWLEDGE REPRESENTATIONS IN NEURAL NETWORK (NN) MODELS broadly construed. I have been asked to serve as the guest editor for the special issue and would like to invite all potential contributors to submit high quality articles for possible publication in the issue. In statistics, information extracted from the data is stored in estimates of model parameters. In regression analysis, for example, information in observed predictor variables useful in prediction is summarized in estimates of regression coefficients.
Due to the linearity of the regression model, interpretation of the estimated coefficients is relatively straightforward. In NN models knowledge acquired from training samples is represented by the weights indicating the strength of connections between neurons. However, due to the nonlinear nature of the model, interpretation of these weights is extremely difficult, if not impossible. Consequently, NN models have largely been treated as black boxes. This special issue is intended to break new ground by bringing together various attempts to understand internal representations of knowledge in NN models. Papers are invited on network analysis including: * Methods of analyzing basic mechanisms of NN models * Examples of successful network analysis * Comparison among different network architectures in their knowledge representation (e.g., BP vs Cascade Correlation) * Comparison with statistical approaches * Visualization of high dimensional functions * Regularization methods to improve the quality of knowledge representation * Model selection in NN models * Assessment of stability and generalizability of knowledge in NN models * Effects of network topology, data encoding scheme, algorithm, environmental bias, etc. on network performance * Implementing prior knowledge in NN models SUBMISSION: Deadline for submission: January 31, 1998 Deadline for the first round reviews: April 30, 1998 Deadline for submission of the final version: August 31, 1998 Number of copies of a manuscript to be submitted: four Format: no longer than 10,000 words; APA style ADDRESS FOR SUBMISSION: Professor Yoshio Takane Department of Psychology McGill University 1205 Dr.
Penfield Avenue Montreal QC H3A 1B1 CANADA email: takane at takane2.psych.mcgill.ca tel: 514 398 6125 fax: 514 398 4896 From harnad at coglit.soton.ac.uk Fri Dec 19 08:20:24 1997 From: harnad at coglit.soton.ac.uk (S.Harnad) Date: Fri, 19 Dec 1997 13:20:24 GMT Subject: PSYC Call for Commentators Message-ID: <199712191320.NAA14295@amnesia.psy.soton.ac.uk> Latimer/Stevens: PART-WHOLE PERCEPTION The target article whose abstract follows below has just appeared in PSYCOLOQUY, a refereed journal of Open Peer Commentary sponsored by the American Psychological Association. Qualified professional biobehavioral, neural or cognitive scientists are hereby invited to submit Open Peer Commentary on this article. Please write for Instructions if you are not familiar with PSYCOLOQUY format and acceptance criteria (all submissions are refereed). The article can be read or retrieved at this URL: ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1997.volume.8/psyc.97.8.13.part-whole-perception.1.latimer For submitting articles and commentaries and for information: EMAIL: psyc at pucc.princeton.edu URL: http://www.cogsci.soton.ac.uk/psycoloquy/ AUTHORS' RATIONALE FOR SOLICITING COMMENTARY: The topic of whole/part perception has a long history of controversy, and its ramifications still pervade research on perception and pattern recognition in psychology, artificial intelligence and cognitive science where controversies such as "global versus local precedence" and "holistic versus analytic processing" are still very much alive. We argue that, whereas the vast majority of studies on whole/part perception have been empirical, the major and largely unaddressed problems in the area are conceptual and concern how the terms "whole" and "part" are defined and how wholes and parts are related in each particular experimental context. Without some general theory of wholes and parts and their relations and some consensus on nomenclature, we feel that pseudo-controversies will persist.
One possible principle of unification and clarification is a formal analysis of the whole/part relationship by Nicholas Rescher and Paul Oppenheim (1955). We outline this formalism and demonstrate that not only does it have important implications for how we conceptualize wholes and parts and their relations, but it also has far reaching implications for the conduct of experiments on a wide range of perceptual phenomena. We challenge the well established view that whole/part perception is essentially an empirical problem and that its solution will be found in experimental investigations. We argue that there are many purely conceptual issues that require attention prior to empirical work. We question the logic of global-precedence theory and any interpretation of experimental data in terms of the extraction of global attributes without prior analysis of local elements. We also challenge theorists to provide precise, testable theories and working mechanisms that embody the whole/part processing they purport to explain. Although it deals mainly with vision and audition, our approach can be generalized to include tactile perception. Finally, we argue that apparently disparate theories, controversies, results and phenomena can all be considered under the three main conditions for wholes and parts proposed by Rescher and Oppenheim. Commentary and counterargument are sought on all these issues. In particular, we would like to hear arguments to the effect that wholes and parts (perceptual or otherwise) can exist in some absolute sense, and we would like to learn of machines, devices, programs or systems that are capable of extracting holistic properties without prior analysis of parts. 
----------------------------------------------------------------------- psycoloquy.97.8.13.part-whole-perception.1.latimer Wed 17 Dec 1997 ISSN 1055-0143 (39 paragraphs, 68 references, 923 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1997 Latimer & Stevens SOME REMARKS ON WHOLES, PARTS AND THEIR PERCEPTION Cyril Latimer Department of Psychology University of Sydney NSW 2006, Australia email: cyril at psych.su.oz.au URL: http://www.psych.su.oz.au/staff/cyril/ Catherine Stevens Department of Psychology/FASS University of Western Sydney, Macarthur PO Box 555, Campbelltown, NSW 2560, Australia email: kj.stevens at uws.edu.au URL: http://psy.uq.edu.au/CogPsych/Noetica/ ABSTRACT: We emphasize the relativity of wholes and parts in whole/part perception, and suggest that consideration must be given to what the terms "whole" and "part" mean, and how they relate in a particular context. A formal analysis of the part/whole relationship by Rescher & Oppenheim, (1955) is shown to have a unifying and clarifying role in many controversial issues including illusions, emergence, local/global precedence, holistic/analytic processing, schema/feature theories and "smart mechanisms". The logic of direct extraction of holistic properties is questioned, and attention drawn to vagueness of reference to wholes and parts which can refer to phenomenal units, physiological structures or theoretical units of perceptual analysis. 
KEYWORDS: analytic versus holistic processing, emergence, feature, gestalt, global versus local precedence, part, whole From rvicente at gibbs.if.usp.br Sat Dec 20 08:28:55 1997 From: rvicente at gibbs.if.usp.br (Renato Vicente) Date: Sat, 20 Dec 1997 13:28:55 -0000 Subject: Paper: Learning a Spin Glass - right link Message-ID: <199712201620.OAA17338@borges.uol.com.br> Dear Connectionists, The link for the paper previously announced was wrong, the right one is: http://www.fma.if.usp.br/~rvicente/NNGUSP13.ps.gz Sorry about that. Learning a spin glass: determining Hamiltonians from metastable states. --------------------------------------------------------------------------- SILVIA KUVA, OSAME KINOUCHI and NESTOR CATICHA Abstract: We study the problem of determining the Hamiltonian of a fully connected Ising Spin Glass of $N$ units from a set of measurements, whose size needs to be ${\cal O}(N^2)$ bits. The student-teacher scenario, used to study learning in feed-forward neural networks, is here extended to spin systems with arbitrary couplings. The set of measurements consists of data about the local minima of the rugged energy landscape. We compare simulations and analytical approximations for the resulting learning curves obtained by using different algorithms. From hinton at cs.toronto.edu Tue Dec 23 13:51:45 1997 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Tue, 23 Dec 1997 13:51:45 -0500 Subject: postdoc and student positions at new centre Message-ID: <97Dec23.135145edt.1329@neuron.ai.toronto.edu> THE GATSBY COMPUTATIONAL NEUROSCIENCE UNIT UNIVERSITY COLLEGE LONDON We are delighted to announce that the Gatsby Charitable Foundation has decided to create a centre for the study of Computational Neuroscience at University College London. 
The centre will include: Geoffrey Hinton (director) Zhaoping Li Peter Dayan Zoubin Ghahramani 5 Postdoctoral Fellows 10 Graduate Students We will study computational theories of perception and action with an emphasis on learning. Our main current interests are: unsupervised learning and activity-dependent development statistical models of hierarchical representations sensorimotor adaptation and integration visual and olfactory segmentation and recognition computation by non-linear neural dynamics reinforcement learning By establishing this centre, the Gatsby Charitable Foundation is providing a unique opportunity for a critical mass of theoreticians to interact closely with each other and with University College's world-class research groups, including those in psychology, neurophysiology, anatomy, functional neuro-imaging, computer science and statistics. The unit has some laboratory space for theoretically motivated experimental studies in visual and motor psychophysics. For further information see our temporary web site http://www.cs.toronto.edu/~zoubin/GCNU/ http://hera.ucl.ac.uk/~gcnu/ (The UK mirror of site above) We are seeking to recruit 4 postdocs and 6 graduate students to start in the Fall of 1998. We are particularly interested in candidates with strong analytical skills as well as a keen interest in and knowledge of neuroscience and cognitive science. Candidates should submit a CV, a one- or two-page statement of research interests, and the names of 3 referees. Candidates may also submit a copy of one or two papers. We prefer email submissions (hinton at cs.toronto.edu). If you use email please send papers as URL addresses or separate messages in postscript format. The deadline for receipt of submissions is Tuesday January 13, 1998. 
Professor Geoffrey Hinton Department of Computer Science University of Toronto Room 283 6 King's College Road Toronto, Ontario M5S 3H5 CANADA phone: 416-978-3707 fax: 416-978-1455 From john at dcs.rhbnc.ac.uk Tue Dec 23 08:31:39 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Tue, 23 Dec 97 13:31:39 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199712231331.NAA11900@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-040: ---------------------------------------- Saturation and Stability in the Theory of Computation over the Reals by Olivier Chapuis, Universit\'e Lyon I, France Pascal Koiran, Ecole Normale Sup\'erieure de Lyon, France Abstract: This paper was motivated by the following two questions which arise in the theory of complexity for computation over ordered rings in the now famous computational model introduced by Blum, Shub and Smale: (i) is the answer to the question $\p$ $=$? $\np$ the same in every real-closed field? (ii) if $\p\neq\np$ for $\r$, does there exist a problem of $\r$ which is $\np$ but not $\np$-complete? Some unclassical complexity classes arise naturally in the study of these questions. They are still open, but we could obtain unconditional results of independent interest. Michaux introduced $/\const$ complexity classes in an effort to attack question~(i). We show that $\a_{\r}/ \const = \a_{\r}$, answering a question of his. Here $\a$ is the class of real problems which are algorithmic in bounded time. 
We also prove the stronger result: $\parl_{\r}/ \const =\parl_{\r}$, where $\parl$ stands for parallel polynomial time. In our terminology, we say that $\r$ is $\a$-saturated and $\parl$-saturated. We also prove, at the nonuniform level, the above results for every real-closed field. It is not known whether $\r$ is $\p$-saturated. In the case of the reals with addition and order we obtain $\p$-saturation (and a positive answer to question (ii)). More generally, we show that an ordered $\q$-vector space is $\p$-saturated at the nonuniform level (this almost implies a positive answer to the analogue of question (i)). We also study a stronger notion that we call $\p$-stability. Blum, Cucker, Shub and Smale have (essentially) shown that fields of characteristic 0 are $\p$-stable. We show that the reals with addition and order are $\p$-stable, but real-closed fields are not. Questions (i) and (ii) and the $/\const$ complexity classes have some model theoretic flavor. This leads to the theory of complexity over ``arbitrary'' structures as introduced by Poizat. We give a summary of this theory with a special emphasis on the connections with model theory and we study $/\const$ complexity classes with this point of view. Note also that our proof of the $\parl$-saturation of $\r$ shows that an o-minimal structure which admits quantifier elimination is $\a$-saturated at the nonuniform level. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-041: ---------------------------------------- A survey on the Grzegorczyk Hierarchy and its extension through the BSS Model of Computability by Jean-Sylvestre Gakwaya, Universit\'e de Mons-Hainaut, Belgium Abstract: This paper concerns the Grzegorczyk classes defined from a particular sequence of computable functions. We provide some relevant properties and the main problems about the Grzegorczyk classes through two settings of computability. 
The first one is the usual setting of recursiveness and the second one is the BSS model introduced at the end of the '80s. This model of computability allows one to define the concept of effectiveness over continuous domains such as the real numbers. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-042: ---------------------------------------- On the Effect of Analog Noise in Discrete-Time Analog Computations by Wolfgang Maass, Technische Universitaet Graz, Austria Pekka Orponen, University of Jyv\"askyl\"a, Finland Abstract: We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the VC-dimension of computational models with analog noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-043: ---------------------------------------- Analog Neural Nets with Gaussian or other Common Noise Distributions cannot Recognise Arbitrary Regular Languages by Wolfgang Maass, Technische Universitaet Graz, Austria Eduardo D. Sontag, Rutgers University, USA Abstract: We consider recurrent analog neural nets where the output of each gate is subject to Gaussian noise, or any other common noise distribution that is nonzero on a large set. We show that many regular languages cannot be recognised by networks of this type, and we give a precise characterization of those languages which can be recognised. 
This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type. -------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-97-001 can be accessed and printed as follows % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-97-001.ps.Z ftp> bye % zcat nc-tr-97-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-97-002-title.ps.Z nc-tr-97-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-97-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt or directly to the archive: ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports Best wishes John Shawe-Taylor From lbl at nagoya.riken.go.jp Wed Dec 24 22:37:55 1997 From: lbl at nagoya.riken.go.jp (Bao-Liang Lu) Date: Thu, 25 Dec 1997 12:37:55 +0900 Subject: TR available:Inverting Neural Nets Using LP and NLP Message-ID: <9712250337.AA20262@xian> The following Technical Report is available via anonymous FTP. 
FTP-host:ftp.bmc.riken.go.jp FTP-file:pub/publish/Lu/lu-ieee-tnn-inversion.ps.gz ========================================================================== TITLE: Inverting Feedforward Neural Networks Using Linear and Nonlinear Programming BMC Technical Report BMC-TR-97-12 AUTHORS: Bao-Liang Lu (1) Hajime Kita (2) Yoshikazu Nishikawa (3) ORGANISATIONS: (1) The Institute of Physical and Chemical Research (RIKEN) (2) Tokyo Institute of Technology (3) Osaka Institute of Technology ABSTRACT: The problem of inverting trained feedforward neural networks is to find the inputs which yield a given output. In general, this problem is ill-posed because the mapping from the output space to the input space is one-to-many. In this paper, we present a method for dealing with this inverse problem by using mathematical programming techniques. The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem. An important advantage of the method is that multilayer perceptrons (MLPs) and radial basis function (RBF) neural networks can be inverted by solving the corresponding separable programming problems, which can be solved by a modified simplex method, a well-developed and efficient method for solving linear programming problems. As a result, large-scale MLPs and RBF networks can be inverted efficiently. We present several examples to demonstrate the proposed inversion algorithms and the applications of the network inversions to examining and improving the generalization performance of trained networks. The results show the effectiveness of the proposed method. (double space, 50 pages) Any comments are appreciated. 
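[Editorial illustration] The report formulates inversion as a separable programming problem solved by a modified simplex method. As a rough sketch of the underlying task only (finding an input that yields a given output of a fixed, trained network), the following Python/NumPy fragment uses plain gradient descent on the input instead; the 2-4-1 tanh network and all its weights are hypothetical, not the authors' method or data.

```python
# Minimal sketch of network inversion: given a FIXED trained net f and a
# target output y*, search for an input x with f(x) ~= y*.  Gradient
# descent on x is a simpler stand-in for the separable-programming /
# modified-simplex formulation of the report.
import numpy as np

rng = np.random.default_rng(0)
# A small hypothetical 2-4-1 tanh MLP with frozen (random) weights.
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    h = np.tanh(W1 @ x + b1)          # hidden activations
    return np.tanh(W2 @ h + b2)       # network output

def invert(y_target, steps=5000, lr=0.05):
    """Gradient descent on the INPUT; the weights stay fixed."""
    x = rng.normal(size=2)            # random starting point
    for _ in range(steps):
        h = np.tanh(W1 @ x + b1)
        y = np.tanh(W2 @ h + b2)
        dy = (y - y_target) * (1.0 - y**2)   # backprop through output tanh
        dh = (W2.T @ dy) * (1.0 - h**2)      # ... through the hidden tanh
        x -= lr * (W1.T @ dh)                # ... down to the input
    return x

y_star = forward(np.array([0.3, -0.7]))      # a target known to be reachable
x_inv = invert(y_star)
print(np.abs(forward(x_inv) - y_star).max()) # residual of the inversion
```

Because the inverse mapping is one-to-many, `x_inv` will in general differ from the input that produced `y_star`; any input reproducing the target output is an equally valid inversion.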
Bao-Liang Lu ====================================== Bio-Mimetic Control Research Center The Institute of Physical and Chemical Research (RIKEN) Anagahora, Shimoshidami, Moriyama-ku Nagoya 463-0003, Japan Tel: +81-52-736-5870 Fax: +81-52-736-5871 Email: lbl at bmc.riken.go.jp From jkim at FIZ.HUJI.AC.IL Wed Dec 24 05:16:20 1997 From: jkim at FIZ.HUJI.AC.IL (Jai Won Kim) Date: Wed, 24 Dec 1997 12:16:20 +0200 Subject: two papers: On-Line Gibbs Learning Message-ID: <199712241016.AA02233@keter.fiz.huji.ac.il> ftp-host: keter.fiz.huji.ac.il ftp-filenames: pub/OLGA1.tar and pub/OLGA2.tar ______________________________________________________________________________o On-Line Gibbs Learning I: General Theory H. Sompolinsky and J. W. Kim Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel e-mails: jkim at fiz.huji.ac.il ; haim at fiz.huji.ac.il (Submitted to Physical Review E, Dec 97) ABSTRACT We study a new model of on-line learning, the On-Line Gibbs Algorithm (OLGA) which is particularly suitable for supervised learning of realizable and unrealizable discrete valued functions. The learning algorithm is based on an on-line energy function, $E$, that balances the need to minimize the instantaneous error against the need to minimize the change in the weights. The relative importance of these terms in $E$ is determined by a parameter $\la$, the inverse of which plays a similar role as the learning rate parameters in other on-line learning algorithms. In the stochastic version of the algorithm, following each presentation of a labeled example the new weights are chosen from a Gibbs distribution with the on-line energy $E$ and a temperature parameter $T$. In the zero temperature version of OLGA, at each step, the new weights are those that minimize the on-line energy $E$. The generalization performance of OLGA is studied analytically in the limit of small learning rate, \ie~ $\la \rightarrow \infty$. 
It is shown that at finite temperature OLGA converges to an equilibrium Gibbs distribution of weight vectors with an energy function which equals the generalization error function. In its deterministic version OLGA converges to a local minimum of the generalization error. The rate of convergence is studied for the case in which the parameter $\la$ increases as a power-law in time. It is shown that in a generic unrealizable dichotomy, the asymptotic rate of decrease of the generalization error is proportional to the inverse square root of the number of presented examples. This rate is similar to that of batch learning. The special cases of learning realizable dichotomies or dichotomies with output noise are also treated. The tar postscript file of the paper is placed on "anonymous ftp" at ftp-host: keter.fiz.huji.ac.il ftp-filename: pub/OLGA1.tar _______________________________________________________________________________o On-Line Gibbs Learning II: Application to Perceptron and Multi-layer Networks J. W. Kim and H. Sompolinsky Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel e-mails: jkim at fiz.huji.ac.il ; haim at fiz.huji.ac.il (Submitted to Physical Review E, Dec 97) ABSTRACT In a previous paper (On-line Gibbs Learning I: General Theory) we have presented the On-line Gibbs Algorithm (OLGA) and studied analytically its asymptotic convergence. In this paper we apply OLGA to online supervised learning in several network architectures: a single-layer perceptron, two-layer committee machine, and a Winner-Takes-All (WTA) classifier. The behavior of OLGA for a single-layer perceptron is studied both analytically and numerically for a variety of rules: a realizable perceptron rule, a perceptron rule corrupted by output and input noise, and a rule generated by a committee machine. 
The two-layer committee machine is studied numerically for the cases of learning a realizable rule as well as a rule that is corrupted by output noise. The WTA network is studied numerically for the case of a realizable rule. The asymptotic results reported in this paper agree with the predictions of the general theory of OLGA presented in article I. In all the studied cases, OLGA converges to a set of weights that minimizes the generalization error. When the learning rate is chosen as a power law with an optimal power, OLGA converges with a power-law that is the same as that of batch learning. The tar postscript file of the paper is placed on "anonymous ftp" at ftp-host: keter.fiz.huji.ac.il ftp-filename: pub/OLGA2.tar From biehl at physik.uni-wuerzburg.de Tue Dec 30 08:27:19 1997 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Tue, 30 Dec 1997 14:27:19 +0100 (MET) Subject: preprint available Message-ID: <199712301327.OAA17838@wptx08.physik.uni-wuerzburg.de> Subject: paper available: Adaptive Systems on Different Time Scales FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/WUE-ITP-97-053.ps.gz The following preprint is now available via anonymous ftp: (See below for the retrieval procedure) --------------------------------------------------------------------- Adaptive Systems on Different Time Scales by D. Endres and Peter Riegler Abstract: The special character of certain degrees of freedom in two-layered neural networks is investigated for on-line learning of realizable rules. Our analysis shows that the dynamics of these degrees of freedom can be put on a faster time scale than the remaining ones, with the benefit of speeding up the overall adaptation process. This is shown for two groups of degrees of freedom: second layer weights and bias weights. For the former case our analysis provides a theoretical explanation of phenomenological findings. 
--------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1997 ftp> get WUE-ITP-97-053.ps.gz ftp> quit unix> gunzip WUE-ITP-97-053.ps.gz e.g. unix> lp WUE-ITP-97-053.ps _____________________________________________________________________ _/_/_/_/_/_/_/_/_/_/_/_/ _/ _/ Peter Riegler _/ _/_/_/ _/ Institut fuer Theoretische Physik _/ _/ _/ _/ Universitaet Wuerzburg _/ _/_/_/ _/ Am Hubland _/ _/ _/ D-97074 Wuerzburg, Germany _/_/_/ _/_/_/ phone: (++49) (0)931 888-4908 fax: (++49) (0)931 888-5141 email: pr at physik.uni-wuerzburg.de www: http://theorie.physik.uni-wuerzburg.de/~pr ________________________________________________________________ From tgd at CS.ORST.EDU Tue Dec 30 20:09:13 1997 From: tgd at CS.ORST.EDU (Tom Dietterich) Date: Wed, 31 Dec 1997 01:09:13 GMT Subject: Learning with Probabilistic Representations (Machine Learning Special Issue) Message-ID: <199712310109.BAA00980@edison.CS.ORST.EDU> Machine Learning has published the following special issue on LEARNING WITH PROBABILISTIC REPRESENTATIONS Guest Editors: Pat Langley, Gregory M. Provan, and Padhraic Smyth Learning with Probabilistic Representations Pat Langley, Gregory Provan, and Padhraic Smyth On the Optimality of the Simple Bayesian Classifier under Zero-One Loss Pedro Domingos and Michael Pazzani Bayesian Network Classifiers Nir Friedman, Dan Geiger, and Moises Goldszmidt The Sample Complexity of Learning Fixed-Structure Bayesian Networks Sanjoy Dasgupta Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables David Maxwell Chickering and David Heckerman Adaptive Probabilistic Networks with Hidden Variables John Binder, Daphne Koller, Stuart Russell, and Keiji Kanazawa Factorial Hidden Markov Models Zoubin Ghahramani and Michael I. 
Jordan Predicting Protein Secondary Structure using Stochastic Tree Grammars Naoki Abe and Hiroshi Mamitsuka For more information, see http://www.cs.orst.edu/~tgd/mlj --Tom From kchen at cis.ohio-state.edu Wed Dec 31 09:58:15 1997 From: kchen at cis.ohio-state.edu (Ke CHEN) Date: Wed, 31 Dec 1997 09:58:15 -0500 (EST) Subject: TRs available. Message-ID: The following two papers are available on line now. _________________________________________________________________________ Combining Linear Discriminant Functions with Neural Networks for Supervised Learning Ke Chen{1,2}, Xiang Yu {2} and Huisheng Chi {2} {1} Dept. of CIS and Center for Cognitive Science, Ohio State U. {2} National Lab of Machine Perception, Peking U., China appearing in Neural Computing & Applications 6(1): 19-41, 1997. ABSTRACT A novel supervised learning method is presented by combining linear discriminant functions with neural networks. The proposed method results in a tree-structured hybrid architecture. Due to constructive learning, the binary tree hierarchical architecture is automatically generated by a controlled growing process for a specific supervised learning task. Unlike the classic decision tree, the linear discriminant functions are employed only in the intermediate levels of the tree, for heuristically partitioning a large and complicated task into several smaller and simpler subtasks. These subtasks are dealt with by component neural networks at the leaves of the tree. For constructive learning, {\it growing} and {\it credit-assignment} algorithms are developed to serve the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g. multi-layered perceptron) to solving large-scale problems. We have already applied the proposed method to a universal approximation problem and several benchmark classification problems in order to evaluate its performance. 
Simulation results have shown that the proposed method yields better results and faster training in comparison with the multi-layered perceptron. URL: http://www.cis.ohio-state.edu/~kchen/nca.ps ftp://www.cis.ohio-state.edu/~kchen/nca.ps ___________________________________________________________________________ ___________________________________________________________________________ Methods of Combining Multiple Classifiers with Different Features and Their Applications to Text-Independent Speaker Identification Ke Chen{1,2}, Lan Wang {2} and Huisheng Chi {2} {1} Dept. of CIS and Center for Cognitive Science, Ohio State U. {2} National Lab of Machine Perception, Peking U., China appearing in International Journal of Pattern Recognition and Artificial Intelligence 11(3): 417-445, 1997. ABSTRACT In practical applications of pattern recognition, there are often different features extracted from the raw data to be recognized. Combining multiple classifiers with different features is viewed as a general problem in various application areas of pattern recognition. In this paper, a systematic investigation has been made and possible solutions are classified into three frameworks, i.e. linear opinion pools, winner-take-all and evidential reasoning. For combining multiple classifiers with different features, a novel method is presented in the framework of linear opinion pools, and a modified training algorithm for an associative switch is also proposed in the framework of winner-take-all. In the framework of evidential reasoning, several typical methods are briefly reviewed for use. All the aforementioned methods have already been applied to text-independent speaker identification. The simulations show that the results yielded by the methods described in this paper are better not only than those of the individual classifiers but also than those obtained by combining multiple classifiers with the same feature. 
This indicates that combining multiple classifiers with different features is an effective way to attack the problem of text-independent speaker identification. URL: http://www.cis.ohio-state.edu/~kchen/ijprai.ps ftp://www.cis.ohio-state.edu/~kchen/ijprai.ps __________________________________________________________________________ ---------------------------------------------------- Dr. Ke CHEN Department of Computer and Information Science The Ohio State University 583 Dreese Laboratories 2015 Neil Avenue Columbus, Ohio 43210-1277, U.S.A. E-Mail: kchen at cis.ohio-state.edu WWW: http://www.cis.ohio-state.edu/~kchen ------------------------------------------------------
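[Editorial illustration] The simplest member of the "linear opinion pool" framework named in the Chen, Wang & Chi abstract is a convex combination of the class posteriors produced by the individual classifiers. A minimal Python/NumPy sketch follows; the posterior numbers and the equal weights are hypothetical, not the trained combination weights of the paper.

```python
# Linear opinion pool: combine K classifiers' class-posterior vectors
# by a convex (weighted) sum.  All numbers here are made up.
import numpy as np

def opinion_pool(posteriors, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize -> convex combination
    return w @ np.asarray(posteriors)     # weighted average over classifiers

# Three classifiers, each using a different feature set, give
# posteriors over the same four speakers:
p = [[0.6, 0.2, 0.1, 0.1],
     [0.5, 0.3, 0.1, 0.1],
     [0.2, 0.5, 0.2, 0.1]]
combined = opinion_pool(p, weights=[1.0, 1.0, 1.0])
print(combined)            # still a proper distribution (sums to 1)
print(combined.argmax())   # index of the identified speaker
```

Because the weights are normalized to sum to one, the pooled vector remains a valid probability distribution, so downstream decision rules (argmax, rejection thresholds) apply unchanged.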