From harnad at coglit.soton.ac.uk Sat Nov 1 14:02:58 1997
From: harnad at coglit.soton.ac.uk (Stevan Harnad)
Date: Sat, 1 Nov 1997 19:02:58 +0000 (GMT)
Subject: Dynamical Hypothesis: BBS Call for Commentators
Message-ID:

Below is the abstract of a forthcoming BBS target article on:

THE DYNAMICAL HYPOTHESIS IN COGNITIVE SCIENCE by Tim van Gelder

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at soton.ac.uk or write to:

Behavioral and Brain Sciences
Department of Psychology
University of Southampton
Highfield, Southampton SO17 1BJ
UNITED KINGDOM

http://www.princeton.edu/~harnad/bbs.html
http://cogsci.soton.ac.uk/bbs
ftp://ftp.princeton.edu/pub/harnad/BBS
ftp://cogsci.soton.ac.uk/pub/harnad/BBS
gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract.

____________________________________________________________________

THE DYNAMICAL HYPOTHESIS IN COGNITIVE SCIENCE

Tim van Gelder
Department of Philosophy
University of Melbourne
Parkville VIC 3052 Australia
tgelder at ariel.unimelb.edu.au
http://ariel.its.unimelb.edu.au/~tgelder

KEYWORDS: cognition, systems, dynamical systems, computers, computational systems, computability, modeling, time.

ABSTRACT: Recent years have seen increasing use of dynamics in cognitive science. If the heart of the dominant computational approach is the hypothesis that cognitive agents are digital computers, the heart of the alternative dynamical approach is the hypothesis that cognitive agents are dynamical systems. This target article attempts to articulate the dynamical hypothesis and to defend it as an empirical alternative to the computational hypothesis. Digital computers and dynamical systems are characterized as specific kinds of systems. The dynamical hypothesis has two major components: the nature hypothesis (cognitive agents are dynamical systems) and the knowledge hypothesis (cognitive agents can be understood dynamically). A wide range of objections to the general hypothesis is then rebutted. The conclusion is that cognitive systems may well be dynamical systems, and only sustained empirical research in cognitive science will determine the extent to which that is true.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.vangelder).
Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.

-------------------------------------------------------------

These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive:

http://www.princeton.edu/~harnad/bbs.html
http://cogsci.soton.ac.uk/~harnad/bbs.html
ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.vangelder
ftp://cogsci.ecs.soton.ac.uk/pub/harnad/BBS/bbs.vangelder
gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

To retrieve a file by ftp from an Internet site, type either:
ftp ftp.princeton.edu
or
ftp 128.112.128.1
When you are asked for your login, type: anonymous
Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@")
cd /pub/harnad/BBS
To show the available files, type: ls
Next, retrieve the file you want with (for example): get bbs.vangelder
When you have the file(s) you want, type: quit

From rojas at inf.fu-berlin.de Sun Nov 2 18:07:00 1997
From: rojas at inf.fu-berlin.de (Raul Rojas)
Date: Sun, 2 Nov 97 18:07 MET
Subject: connectionist school
Message-ID:

Dear connectionists,

IK-98 is a one-week spring school on neural networks, neuroscience, AI and cognition which will be held for the second time in Guenne (a small town near Dortmund), Germany, from March 7 to March 14, 1998. The general theme of next year's IK is "Language and Communication". Most of the courses will be held in German, some in English. The announcement below may be of interest to members of this mailing list who live in German-speaking countries or who are fluent German speakers. There is a home page for IK-98 with additional information regarding registration, cost, and abstracts of the courses: http://www.tzi.informatik.uni-bremen.de/ik98

Raul Rojas
Freie Universitaet Berlin

%%%%%%%%%%%%%%%%%%%%%% ANNOUNCEMENT (translated from the German) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

>>> Interdisziplinaeres Kolleg 98 (IK-98) <<<
>>> March 7-14, 1998, Guenne am Moehnesee <<<
>> http://www.tzi.informatik.uni-bremen.de/ik98 <<

The Interdisciplinary College (IK) is an intensive spring school on the general theme "Intelligence and Brain". The umbrella disciplines of the IK are neuroscience, cognitive science, artificial intelligence, and neuroinformatics. Distinguished lecturers from these disciplines convey foundational knowledge, introduce methodological approaches, and discuss current research questions. A coordinated spectrum of foundation, theory, and special courses, together with cross-disciplinary events, some with practical exercises, is aimed at students and researchers from academia and industry.

The organizer is the Gesellschaft fuer Informatik (GI), with support from: FB 1 (AI) of the GI, Fachgruppe 0.0.2 (NN) of the GI, the European Neural Network Society (ENNS) and its German Chapter (GNNS), the DFG graduate program "Signalketten in lebenden Systemen", the Neurowissenschaftliche Gesellschaft, GMD, and the Gesellschaft fuer Kognitionswissenschaft.

** Venue **

The conference site is the Familienbildungsstaette "Heinrich-Luebke-Haus" in Guenne (Sauerland). The house lies secluded on the Moehnesee in the Arnsberger Wald nature park. Participants are accommodated on site.
All of this fosters concentrated and convivial exchange among the participants.

** Focus Theme **

IK-98 has as its special focus the theme "Language and Communication", which will be examined in several advanced courses from the perspectives of different disciplines.

** Courses and Lecturers **

>>>>>> Foundation courses
G1 Neurobiology (Gerhard Roth)
G2 Artificial Neural Networks (Guenther Palm)
G3 Introduction to AI (Ipke Wachsmuth)
G4 Cognitive Systems - An Introduction to Cognitive Science (Gerhard Strube)

>>>>>> Theory courses
T1 The Complex Real Neuron (Helmut Schwegler)
T2 Connectionist Speech Recognition (Herve Bourlard)
T3 Perception of Temporal Structures - Especially in Speech (Robert F. Port)
T4 Language Structure - Brain Architecture; Language Processing - Brain Processes (Helmut Schnelle)
T5 Optimization Strategies for Neural Learning Procedures (Helge Ritter)

>>>>>> Special courses
S1 Hybrid Connectionist and Symbolic Approaches to Natural Language Processing (Stefan Wermter)
S2 Intelligent Agents for Multimedia Interfaces (Wolfgang Wahlster, Elisabeth Andre)
S3 How Does the Brain Hear? Neurobiology of the Auditory System (Guenter Ehret)
S4 Language Production (Thomas Pechmann)

>>>>>> Cross-disciplinary courses
D1 Fuzzy and Neuro Systems (Rudolf Kruse)
D2 Temporal Cognition (Ernst Poeppel)
D3 The Origins and Evolution of Language and Meaning (Luc Steels)
D4 Control of Movement in Biological Systems and Navigation of Mobile Robots (Josef Schmitz, Thomas Christaller)
D5 Optimizing Neural Networks by Learning and Evolution (Heinz Braun)
D6 Coordination of Language and Action (Wolfgang Heydrich, Hannes Rieser)
D7 Dynamics of Spiking Neurons and Temporal Coding (Andreas Herz)

** Evening Program **

In visionary, fiery, and/or bold "after-dinner talks", outstanding researchers will invite controversy.

** Course Materials **

Written documentation will be prepared for all courses and handed out to all participants as a collected volume.

** Scientific Advisory Board **

Wolfgang Banzhaf, Wilfried Brauer, Armin B. Cremers, Christian Freksa, Otthein Herzog, Wolfgang Hoeppner, Hanspeter Mallot, Thomas Metzinger, Heiko Neumann, Hermann Ney, Guenther Palm, Ernst Poeppel, Wolfgang Prinz, Burghard Rieger, Helge Ritter, Claus Rollinger, Werner von Seelen, Hans Spada, Gerhard Strube, Helmut Schwegler, Ipke Wachsmuth, Wolfgang Wahlster.

** Organizing Committee **

Thomas Christaller, Bernhard Froetschl, Christopher Habel, Herbert Jaeger, Anthony Jameson, Frank Pasemann, Bjoern-Olaf Peters, Annegret Pfoh, Raul Rojas (general chair), Gerhard Roth, Kerstin Schill, Werner Tack.

** Conference Office and Registration **

Christine Harms, c/o GMD, Schloss Birlinghoven, D-53754 Sankt Augustin, phone 02241-14-2473, fax 02241-14-2472, email christine.harms at gmd.de

** Further Information **

Detailed information on the background and the program of IK-98 is available on its Internet home page at http://www.tzi.uni-bremen.de/ik98/.

From rsun at cs.ua.edu Tue Nov 4 08:58:46 1997
From: rsun at cs.ua.edu (Ron Sun)
Date: Tue, 4 Nov 1997 07:58:46 -0600
Subject: book on hybrid models
Message-ID: <199711041358.HAA22661@sun.cs.ua.edu>

Book announcement:

Title: ** CONNECTIONIST-SYMBOLIC INTEGRATION **
edited by Ron Sun and F. Alexandre
(information for ordering is at the end of this description)

This book is concerned with the development, analysis, and application of hybrid connectionist-symbolic models in artificial intelligence and cognitive science, drawing contributions from an international group of leading experts. It describes and compares a variety of models in this area. Thus, it serves as a well-balanced report on the state of the art in this area. This book is the outgrowth of the IJCAI Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches, a two-day workshop held on August 19-20, 1995, in Montreal, Canada, in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI'95).

TABLE OF CONTENTS
-----------------------
1. An Introduction to Connectionist Symbolic Integration R. Sun

Part 1: Reviews and Overviews
2. An overview of strategies for neurosymbolic integration M. Hilario
3. Task structure and computational level: architectural issues in symbolic-connectionist integration R. Khosla and T. Dillon
4. Cognitive aspects of neurosymbolic integration Y. Lallement and F. Alexandre
5. A first approach to a taxonomy of fuzzy-neural systems L. Magdalena

Part 2: Learning in Multi-Module Systems
6. A hybrid learning model of abductive reasoning T. Johnson and J. Zhang
7. A hybrid learning model of reactive sequential decision making R. Sun and T. Peterson
8. A preprocessing model for integrating CBR and prototype-based neural network M. Malek and B. Amy
9. A neurosymbolic system with 3 levels B. Orsier and A. Labbi
10. A distributed platform for symbolic-connectionist integration J. C. Gonzalez, J. R. Velasco, C. A. Iglesias

Part 3: Representing Symbolic Knowledge
11. Micro-level hybridization in DUAL B. Kokinov
12. An integrated symbolic/connectionist model of parsing S. Stevenson
13. A hybrid system framework for disambiguating word senses X. Wu, M. McTear, P. Ojha, H. Dai
14. A localist network architecture for logical inference N. Park and D. Robertson
15. Distributed associative memory J. Austin
16. Symbolic neural networks derived from stochastic grammar domain models E. Mjolsness

Part 4: Learning Distributed Representation
17. Holographic reduced representation T. Plate
18. Distributed representations for terms in hybrid reasoning systems A. Sperduti, A. Starita, C. Goller
19. Learning distributed representation R. Krosley and M. Misra

Conclusion
20. Conclusion F. Alexandre

---------------------------
To ORDER the book, call 1-800-9-books-9 or 201-236-9500
FAX: 201-236-0072
email: orders at erlbaum.com
ISBN 0-8058-2348-4 (hard cover) 0-8058-2349-2 (paper)
---------------------------

For a closely related book, "Computational Architectures Integrating Symbolic and Connectionist Processing" (edited by Ron Sun and Larry Bookman, published by Kluwer Academic Publishers), see my Web page for information. (This previous book also contains an extensive, annotated bibliography on hybrid neural network models.)

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Dr.
Ron Sun http://cs.ua.edu/faculty/sun/sun.html 101 Houser Hall ftp://ftp.cs.ua.edu/pub/tech-reports/ Department of Computer Science and Department of Psychology phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - From thrun+ at heaven.learning.cs.cmu.edu Tue Nov 4 21:59:46 1997 From: thrun+ at heaven.learning.cs.cmu.edu (thrun+@heaven.learning.cs.cmu.edu) Date: Tue, 4 Nov 97 21:59:46 EST Subject: Conference on Automated Learning and Discovery Message-ID: CONFERENCE ON AUTOMATED LEARNING AND DISCOVERY June 11-13, 1998 at Carnegie Mellon University, Pittsburgh, PA The Conference on Automated Learning and Discovery will bring together leading researchers from various scientific disciplines concerned with learning from data. It will cover scientific research at the intersection of statistics, computer science, artificial intelligence, databases, social sciences and language technologies. The goal of this meeting is to explore new, unified research directions in this cross-disciplinary field. The conference features eight one-day cross-disciplinary workshops, interleaved with seven invited plenary talks by renowned statisticians, computer scientists, and cognitive scientists. The workshops will address issues such as: what is the state of the art, what can we do and what is missing? what are promising research directions? what are the most promising opportunities for cross-disciplinary research? ___Plenary speakers________________________________________________ * Tom Dietterich * Stuart Geman * David Heckerman * Michael Jordan * Daryl Pregibon * Herb Simon * Robert Tibshirani ___Workshops_______________________________________________________ * Visual Methods for the Study of Massive Data Sets organized by Bill Eddy and Steve Eick * Learning Causal Bayesian Networks organized by Richard Scheines and Larry Wasserman * Discovery in Natural and Social Science organized by Raul Valdes-Perez * Mixed-Media Databases organized by Shumeet Baluja, Christos Faloutsos, Alex Hauptmann and Michael Witbrock * Learning from Text and the Web organized by Jaime Carbonell, Steve Fienberg, Tom Mitchell and Yi-Ming Yang * Robot Exploration and Learning organized by Howie Choset, Maja Mataric and Sebastian Thrun * Machine Learning and Reinforcement Learning for Manufacturing organized by Sridhar Mahadevan and Andrew Moore * Large-Scale Consumer Databases organized by Mike Meyer, Teddy Seidenfeld and Kannan Srinivasan ___Deadline_for_paper_submissions__________________________________ * February 15, 1998 ___More_information________________________________________________ * Web: http://www.cs.cmu.edu/~conald * E-mail: conald at cs.cmu.edu For submission instructions, consult our Web page or contact the organizers of the specific workshop. A limited number of travel stipends will be available. The conference will be sponsored by CMU's newly created Center for Automated Learning and Discovery. From dana at cs.rochester.edu Wed Nov 5 15:24:07 1997 From: dana at cs.rochester.edu (dana@cs.rochester.edu) Date: Wed, 5 Nov 1997 15:24:07 -0500 Subject: New Book from MIT Press Message-ID: <199711052024.PAA16698@gazelle.cs.rochester.edu> **********************NEW BOOK FROM MIT PRESS ************************* Dana H. Ballard An Introduction to Natural Computation It is now clear that the brain is unlikely to be understood without recourse to computational theories. 
The theme of An Introduction to Natural Computation is that ideas from diverse areas such as neuroscience, information theory, and optimization theory have recently been extended in ways that make them useful for describing the brain's program. The book provides a comprehensive introduction to the computational material that will form the underpinnings of the ultimate set of brain models. It stresses the broad spectrum of learning models--ranging from neural network learning through reinforcement learning to genetic learning--and situates the various models in their appropriate neural context.

Writing about models of the brain before the brain is fully understood is a delicate matter. At one extreme are very detailed models of the neural circuitry. Such models are in danger of losing track of the task the brain is trying to solve. At the other extreme are very abstract models representing cognitive constructs that can be readily tested. Such models can be so abstract that they lose all relationship to neurobiology. To avoid both dangers, An Introduction to Natural Computation takes the middle ground and stresses the computational task while staying near the neurobiology.

The material is accessible to advanced undergraduates as well as beginning graduate students. Dana Ballard is a Professor in the Department of Computer Science and Brain and Cognitive Sciences at the University of Rochester.

For more information see: http://mitpress.mit.edu/book-home.tcl?isbn=0262024209 and http://www.cs.rochester.edu:80/users/faculty/dana/

From Reimar.Hofmann at mchp.siemens.de Thu Nov 6 03:38:32 1997
From: Reimar.Hofmann at mchp.siemens.de (Reimar Hofmann)
Date: Thu, 06 Nov 1997 09:38:32 +0100
Subject: Nonlinear Markov Networks
Message-ID: <34618208.8D32271C@mchp.siemens.de>

*** The following NIPS*97 preprint is available ***

Nonlinear Markov Networks for Continuous Variables
Reimar Hofmann and Volker Tresp
SIEMENS AG, Corporate Technology

Abstract: In this paper we address the problem of learning the structure in nonlinear Markov networks with continuous variables. Markov networks are well suited to model relationships which do not exhibit a natural causal ordering. We use neural network structures to model the quantitative relationships between variables. Using a financial and a social data set we show that interesting structures can be found using our approach.

Available by ftp from: ftp://flop.informatik.tu-muenchen.de/pub/hofmannr/nips97prerl.ps.gz or from my homepage http://wwwbrauer.informatik.tu-muenchen.de/~hofmannr

Also of interest might be our NIPS*95 paper, which addresses the corresponding problem for Bayesian networks:

Discovering Structure in Continuous Variables Using Bayesian Networks
Reimar Hofmann and Volker Tresp
SIEMENS AG, Corporate Technology

Abstract: We study Bayesian networks for continuous variables using nonlinear conditional density estimators. We demonstrate that useful structures can be extracted from a data set in a self-organized way and we present sampling techniques for belief update based on Markov blanket conditional density models.

From sontag at control.rutgers.edu Thu Nov 6 19:33:43 1997
From: sontag at control.rutgers.edu (Eduardo Sontag)
Date: Thu, 6 Nov 1997 19:33:43 -0500
Subject: TR available - Noisy nets cannot recognize regular languages
Message-ID: <199711070033.TAA26247@control.rutgers.edu>

TR available:

ANALOG NEURAL NETS WITH GAUSSIAN OR OTHER COMMON NOISE DISTRIBUTIONS CANNOT RECOGNIZE ARBITRARY REGULAR LANGUAGES

Wolfgang Maass, Graz, Austria
Eduardo D. Sontag, Rutgers, USA

ABSTRACT: We consider recurrent analog neural nets where the output of each gate is subject to Gaussian noise, or any other common noise distribution that is nonzero on a large set. We show that many regular languages cannot be recognized by networks of this type, and we give a precise characterization of those languages which can be recognized. This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand, we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type.

The paper can be retrieved from http://www.math.rutgers.edu/~sontag (follow link to "online papers"). The file is a gzipped postscript file. If Web access is inconvenient, it is also possible to use anonymous FTP:

ftp math.rutgers.edu
login: anonymous
cd pub/sontag
bin
get noisy-nets.ps.gz
quit
gunzip noisy-nets.ps.gz
lpr noisy-nets.ps

From bert at mbfys.kun.nl Fri Nov 7 03:56:36 1997
From: bert at mbfys.kun.nl (Bert Kappen)
Date: Fri, 7 Nov 1997 09:56:36 +0100
Subject: paper available learning in BMs with linear response
Message-ID: <199711070856.JAA02945@vitellius.mbfys.kun.nl>

Dear Connectionists,

The following article

Title: Efficient learning in Boltzmann Machines using linear response theory
Authors: H.J. Kappen and F.B. Rodriguez

can now be downloaded from: ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NC.ps.Z

This article has been accepted for publication in the journal Neural Computation.

Abstract: The learning process in Boltzmann Machines is computationally very expensive. The computational complexity of the exact algorithm is exponential in the number of neurons. We present a new approximate learning algorithm for Boltzmann Machines, which is based on mean field theory and the linear response theorem. The computational complexity of the algorithm is cubic in the number of neurons. In the absence of hidden units, we show how the weights can be directly computed from the fixed point equation of the learning rules. Thus, in this case we do not need to use a gradient descent procedure for the learning process. We show that the solutions of this method are close to the optimal solutions and give a significant improvement when correlations play an important role. Finally, we apply the method to a pattern completion task and show good performance for networks up to 100 neurons.

Best Regards,
Bert Kappen

FTP INSTRUCTIONS

unix% ftp ftp.mbfys.kun.nl
Name: anonymous
Password: (use your e-mail address)
ftp> cd snn/pub/reports/
ftp> binary
ftp> get Kappen.LR_NC.ps.Z
ftp> bye
unix% uncompress Kappen.LR_NC.ps.Z
unix% lpr Kappen.LR_NC.ps
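For orientation, here is a minimal sketch of the linear response idea in generic mean-field notation, for units $s_i = \pm 1$ with weights $w_{ij}$ and biases $\theta_i$; the notation is assumed for illustration and is not necessarily the exact formulation of the paper. The mean-field magnetizations $m_i$ solve the fixed point equations

$$ m_i = \tanh\Big( \sum_{j \neq i} w_{ij} m_j + \theta_i \Big), $$

and the linear response theorem estimates correlations from the susceptibility matrix $C_{ij} = \partial m_i / \partial \theta_j$, which for this model satisfies

$$ \big(C^{-1}\big)_{ij} = \frac{\delta_{ij}}{1 - m_i^2} - w_{ij}, \qquad \langle s_i s_j \rangle \approx C_{ij} + m_i m_j. $$

Plugging this approximation into the standard Boltzmann Machine learning rule,

$$ \Delta w_{ij} \propto \langle s_i s_j \rangle_{\text{clamped}} - \langle s_i s_j \rangle_{\text{free}}, $$

replaces the free-phase Gibbs sampling by one fixed point computation plus one matrix inversion, which is where the cubic cost in the number of neurons comes from.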
From ericr at mech.gla.ac.uk Fri Nov 7 11:24:11 1997
From: ericr at mech.gla.ac.uk (Eric Ronco)
Date: Fri, 7 Nov 1997 16:24:11 GMT
Subject: No subject
Message-ID: <519.199711071624@googie.mech.gla.ac.uk>

From Sebastian_Thrun at heaven.learning.cs.cmu.edu Fri Nov 7 13:32:35 1997
From: Sebastian_Thrun at heaven.learning.cs.cmu.edu (Sebastian Thrun)
Date: Fri, 07 Nov 1997 13:32:35 -0500
Subject: New book: Learning to learn
Message-ID:

L E A R N I N G   T O   L E A R N

Sebastian Thrun and Lorien Pratt (eds.)
Kluwer Academic Publishers

Over the past three decades, research on machine learning and data mining has led to a wide variety of algorithms that induce general functions from examples. As machine learning is maturing, it has begun to make the successful transition from academic research to various practical applications. Generic techniques such as decision trees and artificial neural networks, for example, are now being used in various commercial and industrial applications.

Learning to learn is an exciting new research direction within machine learning. Similar to traditional machine learning algorithms, the methods described in LEARNING TO LEARN induce general functions from experience. However, the book investigates algorithms that can change the way they generalize, i.e., practice the task of learning itself, and improve on it.

To illustrate the utility of learning to learn, it is worthwhile to compare machine learning to human learning. Humans encounter a continual stream of learning tasks. They do not just learn concepts or motor skills, they also learn bias, i.e., they learn how to generalize. As a result, humans are often able to generalize correctly from extremely few examples---often just a single example suffices to teach us a new thing.

A deeper understanding of computer programs that improve their ability to learn can have large practical impact on the field of machine learning and beyond. In recent years, the field has made significant progress towards a theory of learning to learn along with practical new algorithms, some of which led to impressive results in real-world applications.

LEARNING TO LEARN provides a survey of some of the most exciting new research approaches, written by leading researchers in the field. Its objective is to investigate the utility and feasibility of computer programs that can learn how to learn, both from a practical and a theoretical point of view.

This book is organized into four parts:

Part I: Overview articles, in which basic taxonomies and the cognitive foundations for algorithms that "learn to learn" are introduced and discussed.
Chapter 1: Learning To Learn: Introduction and Overview Sebastian Thrun and Lorien Pratt
Chapter 2: A Survey of Connectionist Network Reuse Through Transfer Lorien Pratt and Barbara Jennings
Chapter 3: Transfer in Cognition Anthony Robins

Part II: Prediction/Supervised Learning, in which specific algorithms are presented that exploit information in multiple learning tasks in the context of supervised learning.
Chapter 4: Theoretical Models of Learning to Learn Jonathan Baxter
Chapter 5: Multitask Learning Rich Caruana
Chapter 6: Making a Low-Dimensional Representation Suitable for Diverse Tasks Nathan Intrator and Shimon Edelman
Chapter 7: The Canonical Distortion Measure for Vector Quantization and Function Approximation Jonathan Baxter
Chapter 8: Lifelong Learning Algorithms Sebastian Thrun

Part III: Relatedness, in which the issue of "task relatedness" is investigated and algorithms are described that selectively transfer knowledge across learning tasks.
Chapter 9: The Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Daniel L. Silver and Robert E. Mercer
Chapter 10: Clustering Learning Tasks and the Selective Cross-Task Transfer of Knowledge Sebastian Thrun and Joseph O'Sullivan

Part IV: Control, in which algorithms specifically designed for learning mappings from percepts to actions are presented.
Chapter 11: CHILD: A First Step Towards Continual Learning Mark B. Ring
Chapter 12: Reinforcement Learning With Self-Modifying Policies Juergen Schmidhuber, Jieyu Zhao, Nicol N. Schraudolph
Chapter 13: Creating Advice-Taking Reinforcement Learners Richard Maclin and Jude W. Shavlik
All contributions went through a journal-style reviewing process and are of journal quality (in fact, many of them were previously published in Machine Learning or Connection Science). The material is suited for advanced graduate classes in machine learning. 362 pages.

More information at http://www.cs.cmu.edu/~thrun/papers/thrun.book3.html

Please post.

From juuso at nucleus.hut.fi Fri Nov 7 12:57:00 1997
From: juuso at nucleus.hut.fi (Juha Vesanto)
Date: Fri, 07 Nov 1997 19:57:00 +0200
Subject: [Software] SOM Toolbox v1.0beta, freely available
Message-ID: <3463566C.3B73370A@nucleus.hut.fi>

SOM Toolbox for Matlab 5 version 1.0 beta now available in the WWW!

A GNU licensed Matlab 5 toolbox for using self-organizing maps in data analysis is now available for free at URL http://www.cis.hut.fi/projects/somtoolbox/

The web page includes snapshots, the codes as a zip file and full documentation. If you are interested in practical data analysis and/or self-organizing maps and have Matlab 5 in your computer, be sure to check this out! The version is 1.0beta, so comments, suggestions and bug reports are welcome to the address: somtlbx at mail.cis.hut.fi

Here are some SOM Toolbox features:
+ Modular programming style: the Toolbox code utilizes Matlab structures and the functions are constructed in a modular manner, which makes it convenient to tailor the code for each user's specific needs. (Note that you only need the basic Matlab 5 to run the SOM Toolbox - no other toolboxes are required.)
+ Batch and sequential training algorithms: in data analysis applications, the speed of training can be considerably improved by using the batch version.
+ Map dimension: maps may be N-dimensional.
+ Advanced graphics: based on Matlab's powerful graphics capabilities, illustrative figures can be easily produced.
+ Graphical user interface.
+ Compatibility with SOM_PAK: import/export functions for SOM_PAK codebook and data files are included in the package.
+ Basic preprocessing, labeling and validation tools.

SOM Toolbox team http://www.cis.hut.fi/projects/somtoolbox/
Esa Alhoniemi, Johan Himberg, Kimmo Kiviluoto, Jukka Parviainen and Juha Vesanto

--
IMHP Juha Vesanto juuso at mail.cis.hut.fi http://www.cis.hut.fi/~juuso
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
That's Dreddy for ya. Bastich makes us criminals! -Mean Machine Angel
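For readers who have not used self-organizing maps before, the sequential training algorithm behind such a toolbox is, in generic form, only a few lines. The sketch below is plain Python/NumPy; all names and parameter choices are illustrative assumptions, and it deliberately does not use (or imitate) the Toolbox's actual Matlab API:

import numpy as np

def train_som(data, grid_shape=(10, 10), n_iter=5000,
              alpha0=0.5, sigma0=3.0, seed=0):
    """Sequential SOM training: pull the best-matching unit and its grid
    neighbours toward each sample, with shrinking radius and step size."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_units, dim = rows * cols, data.shape[1]
    codebook = rng.standard_normal((n_units, dim))          # model vectors
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]                   # random sample
        bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))  # best-matching unit
        frac = t / n_iter
        alpha = alpha0 * (1.0 - frac)                       # decaying step size
        sigma = sigma0 * (1.0 - frac) + 0.5                 # decaying radius
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # grid distances
        h = np.exp(-d2 / (2.0 * sigma ** 2))                # neighbourhood kernel
        codebook += alpha * h[:, None] * (x - codebook)     # update all units
    return codebook.reshape(rows, cols, dim)

The batch algorithm mentioned in the feature list replaces the per-sample loop with one pass per epoch that assigns every sample to its BMU and recomputes each model vector as a neighbourhood-weighted mean, which is why it is typically much faster on large data sets.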
From simon.schultz at psy.ox.ac.uk Sun Nov 9 11:14:24 1997
From: simon.schultz at psy.ox.ac.uk (Simon Schultz)
Date: Sun, 09 Nov 1997 16:14:24 +0000
Subject: Paper available
Message-ID: <3465E160.345B@psy.ox.ac.uk>

Dear Connectionists,

Preprints of the following paper are available via WWW. It has been accepted for publication in Physical Review E.

STABILITY OF THE REPLICA SYMMETRIC SOLUTION FOR THE INFORMATION CONVEYED BY A NEURAL NETWORK

S. Schultz(1) and A. Treves(2)
(1) Department of Experimental Psychology, University of Oxford, UK.
(2) Programme in Neuroscience, SISSA, Trieste, Italy.

Abstract: The information that a pattern of firing in the output layer of a feedforward network of threshold-linear neurons conveys about the network's inputs is considered. A replica-symmetric solution is found to be stable for all but small amounts of noise. The region of instability depends on the contribution of the threshold and the sparseness: for distributed pattern distributions, the unstable region extends to higher noise variances than for very sparse distributions, for which it is almost nonexistent.

19 pages, 5 figures. http://www.mrc-bbc.ox.ac.uk/~schultz/rstab.ps.gz

Sincerely,
S. Schultz

-----------------------------------------------------------------------
Simon Schultz
Department of Experimental Psychology (also: Corpus Christi College, Oxford OX1 4JF)
University of Oxford, South Parks Rd., Oxford OX1 3UD
Phone: +44-1865-271419 Fax: +44-1865-310447
http://www.mrc-bbc.ox.ac.uk/~schultz/
-----------------------------------------------------------------------

From tibs at utstat.toronto.edu Mon Nov 10 11:53:00 1997
From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu)
Date: Mon, 10 Nov 97 11:53 EST
Subject: new paper on model selection
Message-ID:

The covariance inflation criterion for adaptive model selection

Rob Tibshirani and Keith Knight
Univ of Toronto

We propose a new criterion for model selection in prediction problems. The covariance inflation criterion adjusts the training error by the average covariance of the predictions and responses, when the prediction rule is applied to permuted versions of the dataset. This criterion can be applied to general prediction problems (for example regression or classification), and to general prediction rules (for example stepwise regression, tree-based models and neural nets). As a byproduct we obtain a measure of the effective number of parameters used by an adaptive procedure. We relate the covariance inflation criterion to other model selection procedures and illustrate its use in some regression and classification problems. We also revisit the conditional bootstrap approach to model selection.

Available at http://utstat.toronto.edu/tibs/research.html or ftp://utstat.toronto.edu/pub/tibs/cic.ps

Comments welcome!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Rob Tibshirani, Dept of Preventive Med & Biostats, and Dept of Statistics, Univ of Toronto, Toronto, Canada M5S 1A8.
Phone: 416-978-4642 (PMB), 416-978-0673 (stats). FAX: 416-978-8299; computer fax 416-978-1525 (please call or email me to inform).
tibs at utstat.toronto.edu
ftp://utstat.toronto.edu/pub/tibs
http://www.utstat.toronto.edu/~tibs
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
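To make the permutation idea in the abstract concrete, here is a rough sketch for squared-error regression. Everything in it is an illustrative assumption (the function names, the number of permutations, and the factor of 2, taken by analogy with Mallows' Cp); the paper should be consulted for the exact form of the criterion:

import numpy as np

def cic(fit_predict, X, y, n_perm=20, seed=0):
    """Sketch of a covariance-inflation-style criterion: training error
    plus a covariance penalty estimated on permuted versions of the data."""
    rng = np.random.default_rng(seed)
    train_err = np.mean((y - fit_predict(X, y)) ** 2)
    covs = []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)          # destroy the X-y relationship
        y_hat = fit_predict(X, y_perm)       # apply the same adaptive rule
        # covariance of predictions with the permuted responses measures how
        # much the rule can chase pure noise, i.e. its effective complexity
        covs.append(np.mean((y_perm - y_perm.mean()) * (y_hat - y_hat.mean())))
    return train_err + 2.0 * np.mean(covs)

# fit_predict(X, y) stands for any rule that fits on (X, y) and returns its
# in-sample predictions, e.g. stepwise regression or a small neural net.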
From mikkok at marconi.hut.fi Mon Nov 10 08:58:10 1997
From: mikkok at marconi.hut.fi (Mikko Kurimo)
Date: Mon, 10 Nov 1997 15:58:10 +0200
Subject: Thesis available on using SOM and LVQ for HMMs
Message-ID: <199711101358.PAA00240@marconi.hut.fi>

The following Dr.Tech. thesis is available at
http://www.cis.hut.fi/~mikkok/thesis/ (WWW home page)
http://www.cis.hut.fi/~mikkok/thesis/book/ (output of latex2html)
http://www.cis.hut.fi/~mikkok/intro.ps.gz (compressed postscript, 188K)
http://www.cis.hut.fi/~mikkok/intro.ps (postscript, 57 pages, 712K)

The articles that belong to the thesis can be accessed through the page http://www.cis.hut.fi/~mikkok/thesis/publications.html

---------------------------------------------------------------

Using Self-Organizing Maps and Learning Vector Quantization for Mixture Density Hidden Markov Models

Mikko Kurimo
Helsinki University of Technology
Neural Networks Research Centre
P.O.Box 2200, FIN-02015 HUT, Finland
Email: Mikko.Kurimo at hut.fi

Abstract
--------
This work presents experiments to recognize pattern sequences using hidden Markov models (HMMs). The pattern sequences in the experiments are computed from speech signals and the recognition task is to decode the corresponding phoneme sequences. The training of the HMMs of the phonemes using the collected speech samples is a difficult task because of the natural variation in the speech. Two neural computing paradigms, the Self-Organizing Map (SOM) and the Learning Vector Quantization (LVQ), are used in the experiments to improve the recognition performance of the models.

A HMM consists of sequential states which are trained to model the feature changes in the signal produced during the modeled process. The output densities applied in this work are mixtures of Gaussian density functions. SOMs are applied to initialize and train the mixtures to give a smooth and faithful presentation of the feature vector space defined by the corresponding training samples. The SOM maps similar feature vectors to nearby units, which is here exploited in experiments to improve the recognition speed of the system. LVQ provides simple but efficient stochastic learning algorithms to improve the classification accuracy in pattern recognition problems. Here, LVQ is applied to develop an iterative training method for mixture density HMMs, which increases both the modeling accuracy of the states and the discrimination between the models of different phonemes. Experiments are also made with LVQ based corrective tuning methods for the mixture density HMMs, which aim at improving the models by learning from the observed recognition errors in the training samples.

The suggested HMM training methods are tested using the Finnish speech database collected in the Neural Networks Research Centre at the Helsinki University of Technology. Statistically significant improvements compared to the best conventional HMM training methods are obtained using the speaker dependent but vocabulary independent phoneme models. The decrease in the average number of phoneme recognition errors for the tested speakers has been around 10 percent in the applied test material.

--
Email: Mikko.Kurimo at hut.fi
Office: Helsinki University of Technology, Neural Networks Research Centre
Mail: P.O.Box 2200, FIN-02015 HUT, Finland
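As background for readers unfamiliar with LVQ: in its simplest form (LVQ1) the stochastic update that such discriminative training builds on is only a few lines. The sketch below is generic and illustrative, not the thesis code, and the learning rate is an assumed placeholder:

import numpy as np

def lvq1_step(codebook, codebook_labels, x, label, alpha=0.05):
    """One LVQ1 update: attract the nearest codebook vector if its class
    matches the training sample's class, repel it otherwise."""
    winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
    sign = 1.0 if codebook_labels[winner] == label else -1.0
    codebook[winner] += sign * alpha * (x - codebook[winner])
    return winner

Roughly speaking, in the mixture-density HMM setting updates of this kind act on the Gaussian mean vectors of competing phoneme models, which is what increases the discrimination between the models of different phonemes.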
From lemm at LORENTZ.UNI-MUENSTER.DE Mon Nov 10 12:39:12 1997
From: lemm at LORENTZ.UNI-MUENSTER.DE (Joerg_Lemm)
Date: Mon, 10 Nov 1997 18:39:12 +0100 (MET)
Subject: TR available: Prior Information and Generalized Questions
Message-ID:

Dear colleagues,

the following TR is now available:

"Prior Information and Generalized Questions"
MIT AI Memo No. 1598 (C.B.C.L paper No. 141)
Joerg C. Lemm

Abstract: In learning problems available information is usually divided into two categories: examples of function values (or training data) and prior information (e.g. a smoothness constraint). This paper
1.) studies aspects on which these two categories usually differ, like their relevance for generalization and their role in the loss function,
2.) presents a unifying formalism, where both types of information are identified with answers to generalized questions,
3.) shows what kind of generalized information is necessary to enable learning,
4.) aims to put usual training data and prior information on a more equal footing by discussing possibilities and variants of measurement and control for generalized questions, including the examples of smoothness and symmetries,
5.) reviews shortly the measurement of linguistic concepts based on fuzzy priors, and principles to combine preprocessors,
6.) uses a Bayesian decision theoretic framework, contrasting parallel and inverse decision problems,
7.) proposes, for problems with non-approximation aspects, a Bayesian two step approximation consisting of posterior maximization and a subsequent risk minimization,
8.) analyses empirical risk minimization under the aspect of nonlocal information,
9.) compares the Bayesian two step approximation with empirical risk minimization, including their interpretations of Occam's razor,
10.) formulates examples of stationarity conditions for the maximum posterior approximation with nonlocal and nonconvex priors, leading to inhomogeneous nonlinear equations, similar for example to equations in scattering theory in physics.

In summary, the paper emphasizes the need of empirical measurement and control of prior information and of their explicit treatment in theory.

----------------------------------------------------------------------
Comments are welcome!

Download sites:
ftp://publications.ai.mit.edu/ai-publications/1500-1999/AIM-1598.ps
http://planck.uni-muenster.de:8080/~lemm/prior.ps.Z

=======================================================================
Joerg Lemm
Institute for Theoretical Physics I
Wilhelm-Klemm-Str. 9
D-48149 Muenster, Germany
Email: lemm at uni-muenster.de
Home page: http://planck.uni-muenster.de:8080/~lemm/
=======================================================================

From dror at coglit.soton.ac.uk Mon Nov 10 09:45:14 1997
From: dror at coglit.soton.ac.uk (Itiel Dror)
Date: Mon, 10 Nov 1997 14:45:14 +0000 (GMT)
Subject: Call for Papers
Message-ID:

CALL FOR PAPERS

Pragmatics & Cognition announces a special issue on

FACIAL INFORMATION PROCESSING: A MULTIDISCIPLINARY PERSPECTIVE

Guest Editors: Itiel E. Dror and Sarah V. Stevenage

In many senses, faces are at the center of human interaction. At a very basic level, faces indicate identity. However, faces are remarkably rich information carriers. For example, facial gestures may be used as means of conveying intentions. Faces may also permit a direct glimpse into the person's inner self (by unintentionally revealing, for example, aspects of character or mood). Given their salient role, the processing of the information conveyed by faces and its integration with other sources of interactional information raise important issues in cognition and pragmatics.

Research on facial information processing has investigated these (and other) issues utilizing a variety of approaches and methodologies, and developments in both computer and cognitive sciences have recently carried this research forward. The emerging picture is that there are cognitive subsystems which specialize in different aspects of facial processing. This has been supported by neuropsychological evidence suggesting that brain damaged patients show dissociations between the different aspects of face processing. In addition, research on the development of facial processing abilities, and on aspects of the face itself which affect these processing abilities, has contributed to our understanding of how facial information is perceived.

This special issue of Pragmatics and Cognition is intended to provide a common forum for a variety of the topics currently under investigation. Given the breadth of issues and approaches used to investigate faces, we encourage submissions from a wide range of disciplines. Our aim is that this special issue will tie together the diverse research on faces, and show their links and interdependencies.
Deadline for submission: August 1, 1998
Editorial decisions: November 1, 1998
Revised papers due: February 1, 1999
Expected publication: October 1999

Papers should be submitted according to the guidelines of the journal (see WWW URL: http://www.cogsci.soton.ac.uk/~dror/guideline.html). All submissions will be peer reviewed. Please send five copies of your submission either to:

Dr. Itiel Dror (dror at coglab.psy.soton.ac.uk)
or: Dr. Sarah Stevenage (svs1 at soton.ac.uk)
Dept. of Psychology
Southampton University
Highfield, Southampton SO17 1BJ
England

For additional and updated information see WWW URL: http://www.cogsci.soton.ac.uk/~dror/faces.html or contact either of the guest editors.

#======================================================================#
| Itiel E. Dror, Ph.D.        http://www.cogsci.soton.ac.uk/~dror/     |
| Department of Psychology    dror at coglab.psy.soton.ac.uk           |
| University of Southampton   Office 44 (0)1703 594519                 |
| Highfield, Southampton      Lab.   44 (0)1703 594518                 |
| England SO17 1BJ            Fax.   44 (0)1703 594597                 |
#======================================================================#

From wsenn at iam.unibe.ch Mon Nov 10 10:17:29 1997
From: wsenn at iam.unibe.ch (Walter Senn)
Date: Mon, 10 Nov 1997 16:17:29 +0100
Subject: Depressing synapses detect neural synchrony
Message-ID: <9711101517.AA22688@barney.unibe.ch>

New paper available (to appear in Neural Computation):

READING NEURONAL SYNCHRONY WITH DEPRESSING SYNAPSES
Walter Senn, Idan Segev, Misha Tsodyks

According to recent experiments of deCharms and Merzenich (Nature 381, 610-613 (1996)), neurons in the primary auditory cortex of the monkey do not change their mean firing rate during an ongoing tone stimulus. The only change which is measured during the tone is an enhanced correlation among the individual spike trains of the auditory cells. We show that this coherence information in the auditory cell population could easily be extracted by a postsynaptic neuron using depressing synapses. The idea is that a dynamically depressing synapse shows a high response at a burst onset and then gets depressed towards the burst end. If some of the auditory cells now synchronize their bursts, the high postsynaptic responses at the burst onset may be enough to activate a postsynaptic cell. Such a partial synchronization may be possible while the mean firing rate of the whole auditory cell population still remains constant before and during the tone stimulus. In this case the tone would never have been detected by a postsynaptic cell using static synapses with constant weights.

The manuscript (170 KB) can be downloaded from: http://iamwww.unibe.ch:80/~brainwww/publications/pub_walter.html
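The onset-sensitivity mechanism described above can be illustrated with a standard depressing-synapse resource model (in the spirit of Tsodyks-Markram dynamics); the function name and parameter values below are assumptions for illustration, not the model of the paper:

import numpy as np

def depressing_responses(spike_times, U=0.5, tau_rec=0.8, A=1.0):
    """Per-spike response of a depressing synapse: each spike consumes a
    fraction U of the recovered resource R, which relaxes back to 1 with
    time constant tau_rec between spikes."""
    R, last_t, responses = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # resource recovers exponentially during the inter-spike interval
            R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec)
        responses.append(A * U * R)  # large after a pause, small within a burst
        R -= U * R                   # depletion caused by this spike
        last_t = t
    return responses

After a silent period R is close to 1, so the first spike of a burst produces a large response; within the burst R is depleted and later spikes contribute little. If several presynaptic cells align their burst onsets, these large onset responses coincide and can drive the postsynaptic cell even though the population mean rate is unchanged, which is the effect the paper exploits.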
From xjwang at cicada.ccs.brandeis.edu Mon Nov 10 17:30:38 1997
From: xjwang at cicada.ccs.brandeis.edu (Xiao-Jing Wang)
Date: Mon, 10 Nov 1997 17:30:38 -0500
Subject: No subject
Message-ID: <199711102230.RAA08141@cicada.ccs.brandeis.edu>

Postdoctoral Positions
Sloan Center for Theoretical Neuroscience at Brandeis University

We anticipate making 3-4 new two-year postdoctoral appointments to the Sloan Center for Theoretical Neuroscience at Brandeis University between January 1, 1998 and September 1, 1998. These positions are intended to allow young scientists with Ph.D.s in physics, mathematics or computer science to enter the field of neuroscience. Interested candidates should send a complete curriculum vitae and statement of research interests, and arrange for three letters of recommendation to be sent to:

Dr. Larry Abbott
Volen Center for Complex Systems
Mailstop 013
Brandeis University
415 South Street
Waltham, MA 02254

Associated faculty include L. F. Abbott, J. Lisman, E. Marder, S. Nelson, G. Turrigiano and X.-J. Wang. Women and minorities are especially encouraged to apply. Brandeis University is an equal opportunity employer.

From keithm at cns.bu.edu Mon Nov 10 13:51:39 1997
From: keithm at cns.bu.edu (Keith McDuffee)
Date: Mon, 10 Nov 1997 13:51:39 -0500
Subject: CALL FOR PAPERS
Message-ID: <3.0.3.32.19971110135139.00773668@cns.bu.edu>

CALL FOR PAPERS and FINAL INVITED PROGRAM

SECOND INTERNATIONAL CONFERENCE ON COGNITIVE AND NEURAL SYSTEMS
May 27-30, 1998

Sponsored by Boston University's Center for Adaptive Systems and Department of Cognitive and Neural Systems with financial support from DARPA and ONR

How Does the Brain Control Behavior? How Can Technology Emulate Biological Intelligence?

The conference will include invited lectures and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems adapt to a changing world. The conference is aimed at researchers and students of computational neuroscience, connectionist cognitive science, artificial neural networks, neuromorphic engineering, and artificial intelligence. A single oral or poster session enables all presented work to be highly visible. Abstract submissions encourage presentation of the latest results. Costs are kept at a minimum without compromising the quality of meeting handouts and social events. Although Memorial Day falls on Saturday, May 30, it is observed on Monday, May 25, 1998.

CONFIRMED INVITED SPEAKERS

TUTORIALS
Wednesday, May 27, 1998:
Larry Abbott, Short-term synaptic plasticity: Mathematical description and computational function
George Cybenko, Understanding Q-learning and other adaptive learning methods
Ennio Mingolla, Neural models of biological vision
Alex Pentland, Visual recognition of people and their behavior
Each tutorial is 90 minutes long.

KEYNOTE SPEAKERS
Stephen Grossberg, Adaptive resonance theory: From biology to technology
Ken Nakayama, Psychological studies of visual attention

INVITED SPEAKERS
Thursday, May 28, 1998:
Azriel Rosenfeld, Understanding object motion
Takeo Kanade, Computational sensors: Further progress
Tomaso Poggio, Sparse representations for learning
Gail Carpenter, Applications of ART neural networks
Rodney Brooks, Experiments in developmental models for a neurally controlled humanoid robot
Lee Feldkamp, Recurrent networks: Promise and practice

Friday, May 29, 1998:
J.
Anthony Movshon, Contrast gain control in the visual cortex Hugh Wilson, Global processes at intermediate levels of form vision Mel Goodale, Biological teleassistance: Perception and action in the human visual system Ken Stevens, The categorical representation of speech and its traces in acoustics and articulation Carol Fowler, Production-perception links in speech Frank Guenther, A theoretical framework for speech acquisition and production Saturday, May 30, 1998: Howard Eichenbaum, The hippocampus and mechanisms of declarative memory Earl Miller, Neural mechanisms for working memory and cognition Bruce McNaughton, Neuronal population dynamics and the interpretation of dreams Richard Thompson, The cerebellar circuitry essential for classical conditioning of discrete behavioral responses Daniel Bullock, Cortical control of arm movements Andrew Barto, Reinforcement learning applied to large-scale dynamic optimization problems There will be contributed oral and poster sessions on each day of the conference. CALL FOR ABSTRACTS Contributors are requested to list a first and second choice from among the topics below in their cover letter, and to say whether it is biological (B) or technological (T) work, when they submit their abstract, as described below. * vision * spatial mapping and navigation * object recognition * neural circuit models * image understanding * neural system models * audition * mathematics of neural systems * speech and language * robotics * unsupervised learning * hybrid systems (fuzzy, evolutionary, digital) * supervised learning * neuromorphic VLSI * reinforcement and emotion * industrial applications * sensory-motor control * other * cognition, planning, and attention Example: first choice: vision (T); second choice: neural system models (B). Contributed Abstracts must be received, in English, by January 31, 1998. Notification of acceptance will be given by February 28, 1998. A meeting registration fee of $45 for regular attendees and $30 for students must accompany each Abstract. See Registration Information for details. The fee will be returned if the Abstract is not accepted for presentation and publication in the meeting proceedings. Registration fees of accepted abstracts will be returned on request only until April 15, 1998. Each Abstract should fit on one 8.5" x 11" white page with 1" margins on all sides, single-column format, single-spaced, Times Roman or similar font of 10 points or larger, printed on one side of the page only. Fax submissions will not be accepted. Abstract title, author name(s), affiliation(s), mailing, and email address(es) should begin each Abstract. An accompanying cover letter should include: Full title of Abstract; corresponding author and presenting author name, address, telephone, fax, and email address; and preference for oral or poster presentation. (Talks will be 15 minutes long. Posters will be up for a full day. Overhead, slide, and VCR facilities will be available for talks.) Abstracts which do not meet these requirements or which are submitted with insufficient funds will be returned. The original and 3 copies of each Abstract should be sent to: Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. REGISTRATION INFORMATION: Early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. 
If accompanied by an Abstract or if paying by check, mail to the address above. If paying by credit card, mail as above, or fax to (617) 353-7755, or email to cindy at cns.bu.edu. The registration fee will help to pay for a reception, 6 coffee breaks, and the meeting proceedings. STUDENT FELLOWSHIPS: Fellowships for PhD candidates and postdoctoral fellows are available to cover meeting travel and living costs. The deadline to apply for fellowship support is January 31, 1998. Applicants will be notified by February 28, 1998. Each application should include the applicant's CV, including name; mailing address; email address; current student status; faculty or PhD research advisor's name, address, and email address; relevant courses and other educational data; and a list of research articles. A letter from the listed faculty or PhD advisor on official institutional stationery should accompany the application and summarize how the candidate may benefit from the meeting. Students who also submit an Abstract need to include the registration fee with their Abstract. Reimbursement checks will be distributed after the meeting. REGISTRATION FORM Second International Conference on Cognitive and Neural Systems Department of Cognitive and Neural Systems Boston University 677 Beacon Street Boston, Massachusetts 02215 Tutorials: May 27, 1998 Meeting: May 28-30, 1998 FAX: (617) 353-7755 (Please Type or Print) Mr/Ms/Dr/Prof: _____________________________________________________ Name: ______________________________________________________________ Affiliation: _______________________________________________________ Address: ___________________________________________________________ City, State, Postal Code: __________________________________________ Phone and Fax: _____________________________________________________ Email: _____________________________________________________________ The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. The tutorial registration fee includes tutorial notes and two coffee breaks. CHECK ONE: ( ) $70 Conference plus Tutorial (Regular) ( ) $30 Conference Only (Student) ( ) $45 Conference plus Tutorial (Student) ( ) $25 Tutorial Only (Regular) ( ) $45 Conference Only (Regular) ( ) $15 Tutorial Only (Student) METHOD OF PAYMENT (please fax or mail): [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). Name as it appears on the card: _____________________________________ Type of card: _______________________________________________________ Account number: _____________________________________________________ Expiration date: ____________________________________________________ Signature: __________________________________________________________ Keith McDuffee keithm at cns.bu.edu From cns-cas at cns.bu.edu Mon Nov 10 12:43:09 1997 From: cns-cas at cns.bu.edu (Boston University - Cognitive and Neural Systems) Date: Mon, 10 Nov 1997 12:43:09 -0500 Subject: Graduate Training in Cognitive and Neural Systems at B.U. 
Message-ID: <3.0.3.32.19971110124309.00f6e3fc@cns.bu.edu> ******************************************************************* GRADUATE TRAINING IN THE DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS (CNS) AT BOSTON UNIVERSITY ******************************************************************* The Boston University Department of Cognitive and Neural Systems offers comprehensive graduate training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1998, admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS Boston University 677 Beacon Street Boston, MA 02215 617/353-9481 (phone) 617/353-7755 (fax) or send via e-mail your full name and mailing address to the attention of Mr. Robin Amos at: amos at cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores will decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of outstanding technological problems. Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self-organization; associative learning and long-term memory; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; and biological rhythms; as well as the mathematical and computational methods needed to support modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique features. It has developed a curriculum that consists of interdisciplinary graduate courses, each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Additional advanced courses, including research seminars, are also offered. 
Each course is typically taught once a week in the afternoon or evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as biology, computer science, engineering, mathematics, and psychology, in addition to courses in the CNS curriculum. The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems. Students interested in neural network hardware work with researchers in CNS, at the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River Campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the College of Engineering; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. The department is housed in its own new four-story building, which includes ample space for faculty and student offices and laboratories, as well as an auditorium, classroom and seminar rooms, a library, and a faculty-student lounge. LABORATORY AND COMPUTER FACILITIES The department is funded by grants and contracts from federal agencies that support research in life sciences, mathematics, artificial intelligence, and engineering. Facilities include laboratories for experimental research and computational modeling in visual perception, speech and language processing, and sensory-motor control and robotics. Data analysis and numerical simulations are carried out on a state-of-the-art computer network composed of Sun workstations, Silicon Graphics workstations, Macintoshes, and PCs. All students have access to X-terminals or UNIX workstation consoles, a selection of color systems and PCs, the Boston University Connection Machine and network of SGI machines, and standard modeling and mathematical simulation packages such as Mathematica, VisSim, Khoros, and Matlab. The department maintains a core collection of books and journals, and has access both to the Boston University Libraries and to the many other collections of the Boston Library Consortium. In addition, several specialized facilities and software packages are available for use. These include: Computer Vision/Computational Neuroscience Laboratory The Computer Vision/Computational Neuroscience Lab comprises an electronics workshop, including a surface-mount workstation, PCB fabrication tools, and an Altera EPLD design system; a light machine shop; an active vision lab including actuators and video hardware; and systems for computer-aided neuroanatomy and application of computer graphics and image processing to brain sections and MRI images.
Neurobotics Laboratory The Neurobotics Lab utilizes wheeled mobile robots to study potential applications of neural networks in several areas, including adaptive dynamics and kinematics, obstacle avoidance, path planning and navigation, visual object recognition, and conditioning and motivation. The lab currently has three Pioneer robots equipped with sonar and visual sensors; one B-14 robot with a movable camera, sonars, infrared, and bump sensors; and two Khepera miniature robots with infrared proximity detectors. Other platforms may be investigated in the future. Psychoacoustics Laboratory The Psychoacoustics Lab houses a newly installed, 8 ft. x 8 ft. sound-proof booth. The laboratory is extensively equipped to perform both traditional psychoacoustic experiments and experiments using interactive auditory virtual-reality stimuli. The major equipment dedicated to the psychoacoustics laboratory includes two Pentium-based personal computers; two PowerPC-based Macintosh computers; a 50-MHz array processor capable of generating auditory stimuli in real time; programmable attenuators; analog-to-digital converters; digital-to-analog converters; a real-time head tracking system; a special-purpose, signal-processing hardware system capable of generating "spatialized" stereo auditory signals in real time; a two-channel oscilloscope; a two-channel spectrum analyzer; various cables, headphones, and other miscellaneous electronics equipment; and software for signal generation, experimental control, data analysis, and word processing. Sensory-Motor Control Laboratory The Sensory-Motor Control Lab supports experimental studies of motor kinematics. An infrared WatSmart system allows measurement of large-scale movements, and a pressure-sensitive graphics tablet allows studies of handwriting and other fine-scale movements. Part of the equipment associated with the lab is shared with and housed in the Vision Lab. Equipment includes a 40-inch monitor that allows computer display of animations generated by an SGI workstation or a Pentium Pro (Windows NT) workstation. A second major component is a helmet-mounted, video-based, eye-head tracking system (ISCAN Corp, 1997). The latter's camera samples eye position at 240 Hz and also allows reconstruction of what subjects are attending to as they freely scan a scene under normal lighting. Thus the system affords a wide range of visuo-motor studies. Speech and Language Laboratory The Speech and Language Lab includes facilities for analog-to-digital and digital-to-analog conversion. The Ariel equipment allows reliable synthesis and playback of speech waveforms. An Entropic signal processing package provides facilities for detailed analysis, filtering, spectral construction, and formant tracking of the speech waveform. Various large databases, such as TIMIT and TIdigits, are available for testing speech recognition algorithms. For high-speed processing, the department provides supercomputer facilities that accelerate filtering and data analysis. Visual Psychophysics Laboratory The Visual Psychophysics Lab occupies an 800-square-foot suite, including three dedicated rooms for data collection, and houses a variety of computer-controlled display platforms, including Silicon Graphics, Inc. (SGI) Onyx RE2, SGI Indigo2 High Impact, SGI Indigo2 Extreme, Power Computing (Macintosh compatible) PowerTower Pro 225, and Macintosh 7100/66 workstations.
Ancillary resources for visual psychophysics include a computer-controlled video camera, stereo viewing glasses, prisms, a photometer, and a variety of display-generation, data-collection, and data-analysis software. Affiliated Laboratories Affiliated CAS/CNS faculty have additional laboratories ranging from visual and auditory psychophysics and neurophysiology, anatomy, and neuropsychology to engineering and chip design. These facilities can be used in the context of faculty/student collaborations. 1997-98 CAS MEMBERS and CNS FACULTY: Jelle Atema Professor of Biology Director, Boston University Marine Program (BUMP) PhD, University of Michigan Sensory physiology and behavior. Aijaz Baloch Research Assistant Professor of Cognitive and Neural Systems PhD, Electrical Engineering, Boston University Neural modeling of the role of visual attention in recognition, learning and motor control, computational vision, adaptive control systems, reinforcement learning. Helen Barbas Associate Professor, Department of Health Sciences PhD, Physiology/Neurophysiology, McGill University Organization of the prefrontal cortex, evolution of the neocortex. Jacob Beck Research Professor of Cognitive and Neural Systems PhD, Psychology, Cornell University Visual perception, psychophysics, computational models. Daniel H. Bullock Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, Stanford University Real-time neural systems, sensory-motor learning and control, evolution of intelligence, cognitive development. Gail A. Carpenter Professor of Cognitive and Neural Systems and Mathematics Director of Graduate Studies, Department of Cognitive and Neural Systems PhD, Mathematics, University of Wisconsin, Madison Pattern recognition, categorization, machine learning, differential equations. Laird Cermak Director, Memory Disorders Research Center, Boston Veterans Affairs Medical Center Professor of Neuropsychology, School of Medicine Professor of Occupational Therapy, Sargent College PhD, Ohio State University Memory disorders. Michael A. Cohen Associate Professor of Cognitive and Neural Systems and Computer Science PhD, Psychology, Harvard University Speech and language processing, measurement theory, neural modeling, dynamical systems. H. Steven Colburn Professor of Biomedical Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Audition, binaural interaction, signal processing models of hearing. Howard Eichenbaum Professor of Psychology PhD, Psychology, University of Michigan Neurophysiological studies of how the hippocampal system is involved in reinforcement learning, spatial orientation, and declarative memory. William D. Eldred III Associate Professor of Biology PhD, University of Colorado, Health Science Center Visual neural biology. Bruce Fischl Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Anisotropic diffusion and nonlinear image filtering, space-variant vision, computational models of early visual processing, and automated analysis of magnetic resonance images. Paolo Gaudiano Associate Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Computational and neural models of robotics, vision, adaptive sensory-motor control, and behavioral neurobiology. Jean Berko Gleason Professor of Psychology PhD, Harvard University Psycholinguistics.
Sucharita Gopal Associate Professor of Geography PhD, University of California at Santa Barbara Neural networks, computational modeling of behavior, geographical information systems, fuzzy sets, and spatial cognition. Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Chairman, Department of Cognitive and Neural Systems Director, Center for Adaptive Systems PhD, Mathematics, Rockefeller University Theoretical biology, theoretical psychology, dynamical systems, and applied mathematics. Frank Guenther Assistant Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Biological sensory-motor control, spatial representation, and speech production. Catherine L. Harris Assistant Professor of Psychology PhD, Cognitive Science and Psychology, University of California at San Diego Visual word recognition, psycholinguistics, cognitive semantics, second language acquisition, computational models. J. Pieter Jacobs Visiting Scholar, Cognitive and Neural Systems MMA, MM, Music, Yale University MMus, Music, University of Pretoria MEng, Electromagnetism, University of Pretoria Aspects of motor control in piano playing; the interface between psychophysical and cognitive phenomena in music perception. Thomas G. Kincaid Professor of Electrical, Computer and Systems Engineering, College of Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Signal and image processing, neural networks, non-destructive testing. Nancy Kopell Professor of Mathematics PhD, Mathematics, University of California at Berkeley Dynamical systems, mathematical physiology, pattern formation in biological/physical systems. Jacqueline A. Liederman Associate Professor of Psychology PhD, Psychology, University of Rochester Dynamics of interhemispheric cooperation; prenatal correlates of neurodevelopmental disorders. Ennio Mingolla Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, University of Connecticut Visual perception, mathematical modeling of visual processes. Joseph Perkell Adjunct Professor of Cognitive and Neural Systems Senior Research Scientist, Research Lab of Electronics and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology PhD, Massachusetts Institute of Technology Motor control of speech production. Alan Peters Chairman and Professor of Anatomy and Neurobiology, School of Medicine PhD, Zoology, Bristol University, United Kingdom Organization of neurons in the cerebral cortex, effects of aging on the primate brain, fine structure of the nervous system. Andrzej Przybyszewski Senior Research Associate of Cognitive and Neural Systems PhD, Warsaw Medical Academy Retinal physiology, mathematical and computer modeling of dynamical properties of neurons in the visual system. Adam Reeves Adjunct Professor of Cognitive and Neural Systems Professor of Psychology, Northeastern University PhD, Psychology, City University of New York Psychophysics, cognitive psychology, vision. Mark Reinitz Assistant Professor of Psychology PhD, University of Washington Cognitive psychology, attention, explicit and implicit memory, memory-perception interactions. Mark Rubin Research Assistant Professor of Cognitive and Neural Systems Research Physicist, Naval Air Warfare Center, China Lake, CA (on leave) PhD, Physics, University of Chicago Neural networks for vision, pattern recognition, and motor control. 
Elliot Saltzman Associate Professor of Physical Therapy, Sargent College Assistant Professor, Department of Psychology and Center for the Ecological Study of Perception and Action, University of Connecticut, Storrs Research Scientist, Haskins Laboratories, New Haven, CT PhD, Developmental Psychology, University of Minnesota Modeling and experimental studies of human speech production. Robert Savoy Adjunct Associate Professor of Cognitive and Neural Systems Scientist, Rowland Institute for Science PhD, Experimental Psychology, Harvard University Computational neuroscience; visual psychophysics of color, form, and motion perception. Eric Schwartz Professor of Cognitive and Neural Systems; Electrical, Computer and Systems Engineering; and Anatomy and Neurobiology PhD, High Energy Physics, Columbia University Computational neuroscience, machine vision, neuroanatomy, neural modeling. Robert Sekuler Adjunct Professor of Cognitive and Neural Systems Research Professor of Biomedical Engineering, College of Engineering, BioMolecular Engineering Research Center Jesse and Louis Salvage Professor of Psychology, Brandeis University PhD, Psychology, Brown University Visual motion, visual adaptation, relation of visual perception, memory, and movement. Barbara Shinn-Cunningham Assistant Professor of Cognitive and Neural Systems and Biomedical Engineering PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology Psychoacoustics, audition, auditory localization, binaural hearing, sensorimotor adaptation, mathematical models of human performance. Louis Tassinary Visiting Scholar, Cognitive and Neural Systems PhD, Psychology, Dartmouth College Dynamics of affective states as they relate to instigated and ongoing cognitive processes. Malvin Teich Professor of Electrical and Computer Systems Engineering and Biomedical Engineering PhD, Cornell University Quantum optics, photonics, fractal stochastic processes, information transmission in biological sensory systems. Takeo Watanabe Assistant Professor of Psychology PhD, Behavioral Sciences, University of Tokyo Perception of objects and motion and effects of attention on perception using psychophysics and brain imaging (fMRI). Allen Waxman Adjunct Associate Professor of Cognitive and Neural Systems Senior Staff Scientist, MIT Lincoln Laboratory PhD, Astrophysics, University of Chicago Visual system modeling, mobile robotic systems, parallel computing, optoelectronic hybrid architectures. James Williamson Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Image processing and object recognition. Particular interests: dynamic binding, self-organization, shape representation, and classification. Jeremy Wolfe Adjunct Associate Professor of Cognitive and Neural Systems Associate Professor of Ophthalmology, Harvard Medical School Psychophysicist, Brigham & Women's Hospital, Surgery Dept. Director of Psychophysical Studies, Center for Clinical Cataract Research PhD, Massachusetts Institute of Technology Visual attention, preattentive and attentive object representation.
Curtis Woodcock Associate Professor of Geography; Chairman, Department of Geography Director, Geographic Applications, Center for Remote Sensing PhD, University of California, Santa Barbara Biophysical remote sensing, particularly of forests and natural vegetation, canopy reflectance models and their inversion, spatial modeling, and change detection; biogeography; spatial analysis; geographic information systems; digital image processing. Other Boston University faculty affiliated with the CNS Department are listed at the end of the brochure. ******************************************************************* DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS GRADUATE TRAINING ANNOUNCEMENT Boston University 677 Beacon Street Boston, MA 02215 Phone: 617/353-9481 Fax: 617/353-7755 Email: inquiries at cns.bu.edu Web: http://cns-web.bu.edu/ ******************************************************************* From ericr at mech.gla.ac.uk Tue Nov 11 04:35:38 1997 From: ericr at mech.gla.ac.uk (Eric Ronco) Date: Tue, 11 Nov 1997 09:35:38 GMT Subject: No subject Message-ID: <3219.199711110935@googie.mech.gla.ac.uk> From bert at mbfys.kun.nl Tue Nov 11 03:28:47 1997 From: bert at mbfys.kun.nl (Bert Kappen) Date: Tue, 11 Nov 1997 09:28:47 +0100 Subject: correction on paper available learning in BMs with linear response Message-ID: <199711110828.JAA16200@vitellius.mbfys.kun.nl> Dear Connectionists, The following article Title: Efficient learning in Boltzmann Machines using linear response theory Authors: H.J. Kappen and F.B. Rodriguez was announced on the Connectionists mailing list on November 7. However, due to a failure at our server, the paper could not be downloaded. This problem has now been solved. The paper can now be downloaded from ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NC.ps.Z Sorry for the inconvenience. Best Regards, Bert Kappen FTP INSTRUCTIONS unix% ftp ftp.mbfys.kun.nl Name: anonymous Password: (use your e-mail address) ftp> cd snn/pub/reports/ ftp> binary ftp> get Kappen.LR_NC.ps.Z ftp> bye unix% uncompress Kappen.LR_NC.ps.Z unix% lpr Kappen.LR_NC.ps From ericr at mech.gla.ac.uk Wed Nov 12 08:06:30 1997 From: ericr at mech.gla.ac.uk (Eric Ronco) Date: Wed, 12 Nov 1997 13:06:30 GMT Subject: No subject Message-ID: <2133.199711121306@googie.mech.gla.ac.uk> From cia at hare.riken.go.jp Wed Nov 12 06:04:38 1997 From: cia at hare.riken.go.jp (cia@hare.riken.go.jp) Date: Wed, 12 Nov 97 20:04:38 +0900 Subject: Software for ICA and BSS Message-ID: <9711121104.AA11233@hare.brain.riken.go.jp> Dear all, Just to let you know of the availability via http of new software for Independent Component Analysis (ICA) and Blind Separation of Sources (BSS). The Laboratory for Open Information Systems in the Research Group of Professor S. AMARI (Brain-Style Information Processing Group), BRAIN SCIENCE INSTITUTE - RIKEN, JAPAN, announces the availability of OOLABSS (Object Oriented LAboratory for Blind Source Separation), an experimental laboratory for ICA and BSS. OOLABSS has been developed by Dr. A. CICHOCKI and Dr. B. ORSIER (both worked on the concept of the software and on the development and unification of the learning algorithms; Dr. B. ORSIER designed and implemented the software in C++ under Windows 95/NT). OOLABSS offers an interactive environment for experiments with a very wide family of recently developed on-line adaptive learning algorithms for Blind Separation of Sources and Independent Component Analysis. OOLABSS is free for non-commercial use.
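As a rough orientation only -- a toy sketch, not a description of OOLABSS's internals -- the general shape of an on-line adaptive BSS rule of the kind such a package provides can be written in a few lines, using the standard natural-gradient ICA update W <- W + eta (I - f(y) y^T) W. The sources, mixing matrix, activation function, and learning rate below are all assumed for illustration:

    # Toy natural-gradient ICA/BSS sketch; illustrative only -- OOLABSS
    # implements a much wider, self-adaptive family of algorithms.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    t = np.arange(n)
    S = np.vstack([np.sin(0.05 * t), rng.uniform(-1.0, 1.0, n)])  # assumed sources
    A = np.array([[1.0, 0.6], [0.4, 1.0]])   # "unknown" square mixing matrix
    X = A @ S                                # observed sensor signals

    W = np.eye(2)     # separating matrix, adapted sample by sample
    eta = 0.001       # fixed learning rate (OOLABSS adapts rates on-line)
    for x in X.T:
        y = W @ x
        fy = np.tanh(y)                      # one global activation function
        W += eta * (np.eye(2) - np.outer(fy, y)) @ W

    Y = W @ X         # recovered sources, up to scaling and permutation

Whether tanh is an appropriate activation depends on the source statistics; the self-adaptive activation functions described in feature 2 of the list below address exactly that issue.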
The current version is still experimental but is reasonably stable and robust. The program has the following features: 1. Users can define their own activation functions for each neuron (processing unit) or use a global activation function (e.g. hyperbolic tangent) for all neurons. 2. The program also enables automatic (self-adaptive) selection of quasi-optimal activation functions (time-variable or switching), depending on the stochastic distribution of the extracted source signals (the so-called extended ICA problem). 3. Users can add noise both to sensor signals and to synaptic weights. 4. The number of sources, sensors and outputs of the neural network can be arbitrarily defined by users. 5. When the number of source signals is completely unknown, one of the proposed approaches makes it possible not only to estimate the source signals but also to estimate their number correctly on-line, without any pre-processing such as pre-whitening or Principal Component Analysis (PCA). 6. The optimal updating of a learning rate (step) is a key problem encountered in a wide class of on-line adaptive learning algorithms. Relying on properties of nonlinear low-pass filters, a family of learning algorithms for self-adaptive (automatic) updating of learning rates (a global rate, or local rates individual to each synaptic weight) is implemented in the program. The learning rates can be self-adaptive, i.e. quasi-optimal annealing of learning rates is automatically provided in a stationary environment. In a non-stationary environment the learning rates adaptively change their value to provide good tracking abilities. Users can also define their own function for changing the learning rate. 7. The program enables users to compare the performance of several different algorithms. 8. Special emphasis is given to algorithms that are robust with respect to noise and outliers and that have the equivariance property (i.e. asymptotic performance independent of ill-conditioning of the mixing process). 9. Advanced graphics: illustrative figures are produced and can be easily printed. Encapsulated PostScript files can be produced for easy integration into word processors. Data can be pasted to the clipboard for post-processing using specialized software like Matlab or even spreadsheets. 10. Users can easily enter their own data (sensor signals, or sources and a mixing matrix, noise, a neural network model, etc.) in order to experiment with various kinds of algorithms. 11. Modular programming style: the program code is based on well-defined C++ classes and is very modular, which makes it possible to tailor the software to each user's specific needs. Please visit the OOLABSS home page at URL: http://www.bip.riken.go.jp/absl/orsier/OOLABSS The version is 1.0 beta, so comments, suggestions and bug reports are welcome at the address: oolabss at open.brain.riken.go.jp From ken at phy.ucsf.EDU Thu Nov 13 02:36:11 1997 From: ken at phy.ucsf.EDU (Ken Miller) Date: Wed, 12 Nov 1997 23:36:11 -0800 (PST) Subject: UCSF Postdoc and Graduate Positions in Theoretical Neurobiology Message-ID: <199711130736.XAA08018@coltrane.ucsf.edu> FULL INFO: http://www.sloan.ucsf.edu/sloan/sloan-info.html PLEASE DO NOT USE 'REPLY'; FOR MORE INFO USE ABOVE WEB SITE OR CONTACT ADDRESSES GIVEN BELOW The Sloan Center for Theoretical Neurobiology at UCSF solicits applications for pre- and post-doctoral fellowships, with the goal of bringing theoretical approaches to bear on neuroscience.
Applicants should have a strong background and education in mathematics, theoretical or experimental physics, or computer science, and a commitment to a future research career in neuroscience. Prior biological or neuroscience training is not required. The Sloan Center offers opportunities to combine theoretical and experimental approaches to understanding the operation of the intact brain. Young scientists with strong theoretical backgrounds will receive scientific training in experimental approaches to understanding the operation of the intact brain. They will learn to integrate their theoretical abilities with these experimental approaches to form a mature research program in integrative neuroscience. The research undertaken by the trainees may be theoretical, experimental, or a combination. TO APPLY, please send a curriculum vitae, a statement of previous research and research goals, and up to three relevant publications, and have two letters of recommendation sent to us. The application deadline is February 1, 1998. Send applications to: Steve Lisberger Sloan Center for Theoretical Neurobiology at UCSF Department of Physiology University of California 513 Parnassus Ave. San Francisco, CA 94143-0444 PRE-DOCTORAL applicants may be enrolled in a Ph.D. program in a theoretical discipline at another institution. In this case, the fellowship would support a cooperative training program between that institution and UCSF that is acceptable to both institutions. Applicants should name a faculty member at their home institution, whom we may contact, who would sponsor their research at the Sloan Center. Applications for such a cooperative program will be accepted at any time. Alternatively, PRE-DOCTORAL applicants with strong theoretical training may seek admission into the UCSF Neuroscience Graduate Program as a first-year student. Applicants seeking such admission must apply by Jan. 10, 1998, to be considered for fall 1998 admission. Application materials for the UCSF Neuroscience Program may be obtained from Cindy Kelly Neuroscience Graduate Program Department of Physiology University of California San Francisco San Francisco, CA 94143-0444 neuroscience at phy.ucsf.edu Be sure to include your surface-mail address. The procedure is: make a normal application to the UCSF Neuroscience program, but also alert the Sloan Center to your application by writing to Steve Lisberger at the address given above. If you need more information: -- Consult the Sloan Center WWW Home Page: http://www.sloan.ucsf.edu/sloan -- Send e-mail to sloan-info at phy.ucsf.edu -- See also the home page for the W.M. Keck Foundation Center for Integrative Neuroscience, in which the Sloan Center is housed: http://www.keck.ucsf.edu/ From cmb35 at newton.cam.ac.uk Thu Nov 13 09:55:13 1997 From: cmb35 at newton.cam.ac.uk (C.M. Bishop) Date: Thu, 13 Nov 1997 14:55:13 +0000 Subject: Information Geometry Message-ID: <199711131455.OAA23985@feynman> A Newton Institute Themed Week INFORMATION GEOMETRY 8 - 12 December Isaac Newton Institute, Cambridge, U.K. Organisers: S Amari (RIKEN) and C M Bishop (Microsoft) *** http://www.newton.cam.ac.uk/programs/nnm_info.html *** This themed week is aimed at bringing together researchers from different disciplines with a common interest in the field of information geometry. Registration: Since the week falls during the Cambridge academic term, the Newton Institute will unfortunately not be able to provide any assistance with accommodation.
Participants must therefore make their own arrangements for accommodation and evening meals. However, light lunches will be available for purchase in the Institute. In order to gauge numbers, participants are requested to complete and return the short registration form, attached below. Programme ========= Abstracts for each talk are also available from the web site. Monday 8 December ----------------- Informal discussion Tuesday 9 December ------------------ 10:00 - 11:00 S Amari (RIKEN) Introduction to information geometry and applications to neural networks 11:00 Coffee 11:30 Informal discussion 12:30 Lunch 14:30 - 15:30 G Pistone (Torino) Non-parametric Information Geometry 15:30 Tea 16:00 - 17:30 Informal discussion Wednesday 10 December --------------------- 10:00 - 11:00 J-F Cardoso (CNRS) Information geometry of blind source separation and ICA 11:30 Informal discussion 12:30 Lunch 14:30 - 15:30 S Eguchi (ISM Japan) Near parametric inference 15:30 Tea 16:00 - 17:30 Informal discussion Thursday 11 December -------------------- 10:00 - 11:00 N Murata (RIKEN) Characteristics of AIC type criteria in case of singular Fisher Information matrices 11:30 Informal discussion 12:30 Lunch 14:30 - 15:30 H Zhu (Santa Fe) Some consideration of information geometry on function spaces and non-parametric inference 15:30 Tea 16:00 - 17:30 Informal discussion Friday 12 December ------------------ Informal discussions ----------------------------------------------------------------------------- A Newton Institute Themed Week INFORMATION GEOMETRY 8 - 12 December, 1997 Isaac Newton Institute, Cambridge, U.K. Registration form ----------------- Name: Address: Tel: Fax: Email: Days on which you plan to attend and on which you plan to have lunch in the Institute: Please return to Heather Hughes (H.Hughes at newton.cam.ac.uk) ----------------------------------------------------------------------------- From yweiss at psyche.mit.edu Thu Nov 13 13:12:16 1997 From: yweiss at psyche.mit.edu (Yair Weiss) Date: Thu, 13 Nov 1997 13:12:16 -0500 (EST) Subject: paper available: belief propagation in networks with loops Message-ID: <199711131812.NAA26099@maxwell1> The following paper on belief propagation in networks with loops is available online via: http://www-bcs.mit.edu/~yweiss/cbcl.ps.gz This research will be presented at the NIPS*97 workshop on graphical models. The workshop will also feature talks by P. Smyth and B. Frey on a similar topic. Comments are most welcome. Yair -------------------------------------------------------------------------- Title: Belief Propagation and Revision in Networks with Loops Author: Yair Weiss Reference: MIT AI Memo 1616, MIT CBCL Paper 155. Abstract: Local belief propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the optimal beliefs for singly connected networks. Recently, a number of researchers have empirically demonstrated good performance of these same algorithms on networks with loops, but a theoretical understanding of this performance has yet to be achieved. Here we lay a foundation for an understanding of belief propagation in networks with loops. For networks with a single loop, we derive an analytical relationship between the steady state beliefs in the loopy network and the true posterior probability. Using this relationship we show a category of networks for which the MAP estimate obtained by belief update and by belief revision can be proven to be optimal (although the beliefs will be incorrect).
We show how nodes can use local information in the messages they receive in order to correct the steady state beliefs. Furthermore we prove that for all networks with a single loop, the MAP estimate obtained by belief revision at convergence is guaranteed to give the globally optimal sequence of states. The result is independent of the length of the cycle and the size of the state space. For networks with multiple loops, we introduce the concept of a ``balanced network'' and show simulation results comparing belief revision and update in such networks. We show that the Turbo code structure is balanced and present simulations on a toy Turbo code problem indicating that the decoding obtained by belief revision at convergence is significantly more likely to be correct. From payman at maxwell.ee.washington.edu Thu Nov 13 13:22:32 1997 From: payman at maxwell.ee.washington.edu (Payman Arabshahi 8834870) Date: Thu, 13 Nov 1997 10:22:32 PST Subject: CFP: CIFEr'98 - computational intelligence in finance Message-ID: <199711131822.KAA24554@compton.ee.washington.edu> IEEE/IAFE 1998 $$$$$$$$$$$ $$$$$$ $$$$$$$$$$$ $$$$$$$$$$ $$$$$$$$$$$ $$$$$$ $$$$$$$$$$$ $$$$$$$$$$ $$$$ $$ $$$$ $$$$ $$$ $$$ $$$$ $$$$ $$$$$$$ $$$$$$ $$$$$$$$$$ $$$$ $$$$ $$$$$$$ $$$$$$ $$$$$$$$$$ $$$$ $$ $$$$ $$$$ $$$ $$$ $$$ $$$$$$$$$$$ $$$$$$ $$$$ $$$$$$$$$$ $$$ $$$$$$$$$$$ $$$$$$ $$$$ $$$$$$$$$$ $$$ Visit us on the web at http://www.ieee.org/nnc/cifer98 ------------------------------------ ------------------------------------ Call for Papers: Conference on Computational Intelligence for Financial Engineering (CIFEr), Crowne Plaza Manhattan, New York City, March 29-31, 1998. Sponsors: The IEEE Neural Networks Council, The International Association of Financial Engineers, and the Institute for Operations Research and the Management Sciences. CIFEr is the 4th annual collaboration between the professional engineering and financial communities, and is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. Intelligent computational systems have become indispensable in virtually all financial applications, from portfolio selection to proprietary trading to risk management. Conference Topics: Topics in which papers, panel sessions, and tutorial proposals are invited include, but are not limited to, the following. Financial Engineering Applications: * Risk Management * Pricing of Structured Securities * Asset Allocation * Trading Systems * Forecasting * Risk Arbitrage * Exotic Options. Computer & Engineering Applications & Models: * Neural Networks * Probabilistic Modeling/Inference * Fuzzy Systems and Rough Sets * Genetic and Dynamic Optimization * Intelligent Trading Agents * Trading Room Simulation * Time Series Analysis * Non-linear Dynamics. ------------------------------------------------------------------------------ Instructions for Authors, Special Sessions, Tutorials, & Exhibits ------------------------------------------------------------------------------ All summaries and proposals for tutorials, panels and special sessions must be received by the conference Secretariat at the IAFE by December 14, 1997. We intend to publish a book containing a selection of the best accepted papers. Authors (For Conference Oral Sessions) One copy of the Extended Summary (not exceeding four pages of 8.5 inch by 11 inch size) must be received by Mark Larson at the IAFE by December 14, 1997.
Centered at the top of the first page should be the paper's complete title, author name(s), affiliation(s), and mailing address(es). Fonts no smaller than 10 pt should be used. Papers must report original work that has not been published previously, and is not under consideration for publication elsewhere. In the letter accompanying the submission, the following information should be included: * Topic(s) * Full title of paper * Corresponding Author's name * Mailing address * Telephone and fax * E-mail (if available) * Presenter (If different from corresponding author, please provide name, mailing address, etc.) ---------------------------------------------------------------------------- Special Sessions A limited number of special sessions will address subjects within the topical scope of the conference. Each special session will consist of four to six papers on a specific topic. Proposals for special sessions will be submitted by the session organizer and should include: * Topic(s) * Title of Special Session * Name, address, phone, fax, and email of the Session Organizer * List of paper titles with authors' names and addresses * One page of summaries of all papers ---------------------------------------------------------------------------- Panel Proposals Proposals for panels addressing topics within the technical scope of the conference will be considered. Panel organizers should describe, in two pages or less, the objective of the panel and the topic(s) to be addressed. Panel sessions should be interactive with panel members and the audience and should not be a sequence of paper presentations by the panel members. The participants in the panel should be identified. No papers will be published from panel activities. ---------------------------------------------------------------------------- Tutorial Proposals Proposals for tutorials addressing subjects within the topical scope of the conference will be considered. Proposals for tutorials should describe, in two pages or less, the objective of the tutorial and the topic(s) to be addressed. A detailed syllabus of the course contents should also be included. Most tutorials will be four hours, although proposals for longer tutorials will also be considered. ---------------------------------------------------------------------------- Exhibit Information Businesses with activities related to financial engineering, including software & hardware vendors, publishers and academic institutions, are invited to participate in CIFEr's exhibits. Further information about the exhibits can be obtained from the CIFEr Organizational Chair, Mark Larson. ---------------------------------------------------------------------------- Contact Information More information on registration and the program will be provided as soon as it becomes available. For further details, please contact Mark Larson, CIFEr'98 Organizational Chair, Meeting Management, IAFE Administrative Office, 646-16 Main Street, Port Jefferson, NY 11777-2230. Tel: (516) 331-8069 Fax: (516) 331-8044 Email: m.larson at iafe.org Web: http://www.ieee.org/nnc/cifer98 Sponsors Sponsorship for CIFEr'98 is being provided by the IAFE (International Association of Financial Engineers), the IEEE Neural Networks Council, and INFORMS (Institute for Operations Research and the Management Sciences). The IEEE (Institute of Electrical and Electronics Engineers) is the world's largest engineering and computer science professional non-profit association and sponsors hundreds of technical conferences and publications annually. The IAFE is a professional non-profit financial association with members worldwide specializing in new financial product design, derivative structures, risk management strategies, arbitrage techniques, and application of computational techniques to finance. INFORMS serves the scientific and professional needs of OR/MS investigators, scientists, students, educators, and managers. ---------------------------------------------------------------------------- From nimzo at cerisep1.diepa.unipa.it Thu Nov 13 07:22:13 1997 From: nimzo at cerisep1.diepa.unipa.it (Maurizio Cirrincione) Date: Thu, 13 Nov 1997 12:22:13 GMT Subject: Abstract of PhD thesis about NN and electrical drives Message-ID: Dear Connectionists: Please find herein the abstract of my PhD thesis, Diagnosis and Control of Electrical Drives Using Neural Networks, PhD in Electrical Engineering, University of Palermo, Italy. The thesis was successfully defended on 3 December 1996. On 23 May 1997 at Vietri sul Mare (Salerno), the SIREN (Societa' Italiana Reti Neuronali, the Italian Society of Neural Networks) and the IIASS (Istituto Internazionale per gli Alti Studi Scientifici, the International Institute for High Scientific Studies) awarded it the prize "Eduardo R. Caianiello '97" for the best Italian PhD thesis on neural networks. It is not yet available on the web, but I hope it will be soon. Meanwhile, if you want a copy, I can send you one by ordinary mail. ABSTRACT: Diagnosis and Control of Electrical Drives Using Neural Networks by Maurizio Cirrincione It is well known that the diagnosis of a system is a process consisting of the execution of suitable measurements and tests and, as a result, the recognition of the operating state and behaviour of the system itself, in order to fix the possible course of action to undertake for correcting this behaviour. The technique that develops the diagnosis is called diagnostics, while the one that develops the corrective actions is called maintenance, or also control. In particular, in an electrical drive connected to a load, automatic operation may require on-line closed-loop control in which an artificial-intelligence-based block interprets the load conditions and decides, on the basis of the recognition of the operating condition and behaviour, the control actions to undertake on the motor through the power converter. Both the processing of the measured data and the control action can contain neural network parts. This thesis therefore deals with the use of neural networks for controlling and diagnosing an electrical drive, describing some original applications. More importance has been given to the engineering and experimental aspects of these applications than to a deep theoretical approach, in order to prove the suitability of these neural network techniques in a particular domain of industrial applications. In chapter 1 the general problems of control in electrical drives are discussed. The need for adaptive on-line control is emphasized and a brief overview of innovative techniques, such as those based on expert systems, fuzzy logic and neural networks, is then presented. In chapter 2 the two neural networks used by the author for the control of electrical drives are described. The first is the well-known backpropagation neural network (BPN) and the second is a new neural network, called PLN (Progressive Learning Network).
In particular the latter is presented and compared with the former, and it is highlighted that the PLN is more suitable than the BPN for adaptive on-line real-time control, as it requires no separate training and production phases. Chapter 3 deals with the main neuro-control techniques and their problems. The complementarity and continuity of these methods with the traditional techniques are emphasized. Chapter 4 describes the BPN-based supervised control of a stepper motor. It is shown here that a BPN can work as a robust controller of a stepper motor, and this result has been verified experimentally. A suitable test-bed has been set up where the electrical drive is supervised by a neural network hosted on a PC. Moreover, a comparison with a traditional algorithm is carried out. Finally, the reliability of such a neural controller is verified in the presence of faults of some of its components. It is remarked that hardly any test-beds for verifying neuro-controllers of electrical drives have been realised, since most applications in this domain have mostly been carried out in simulation. Chapter 5 deals with the use of neural networks for realising a controller for high-performance dc drives. The target is the control of the rotational speed so as to follow the speed reference accurately. The innovation which is presented concerns the use of direct inverse control with generalised and specialised learning for identifying the inverse model of a DC motor with separate excitation through a BPN and a PLN. The suitability of the BPN is verified both in simulation and on an experimental test-bed, even in the presence of a speed-dependent load, resulting in a non-linear controlled system. Subsequently the PLN is applied for the on-line control based on specialised learning. It is shown that this approach can control the electrical drive without persistent excitation, even in the presence of variations of the load or of the parameters of the drive, and in a noisy environment. This new neuro-controller is capable of adapting on-line to any new working condition, as it is based on a neural network that varies the number of its hidden neurons to learn situations not previously encountered or to forget rare ones. Chapter 6 gives an overview of the diagnosis of electrical drives for fault protection, maintenance, fault detection and evaluation of performance. After showing traditional diagnosis techniques for each component of the drive, a brief survey of future trends in this field is described. The target of this chapter is to place the technique of neural networks in the framework of the diagnosis of electrical drives. Chapter 7 describes the self-organising neural networks used for the diagnosis, that is, the well-known SOM of Prof. Kohonen and the more recent VQP algorithm of Prof. Herault. The diagnosis is considered as a particular case of pattern recognition. Chapter 8 is dedicated to the application and implementation of the above neural networks in the diagnosis of electrical drives. In particular, the use of these networks for the real-time diagnosis of the working conditions of a three-phase converter and an induction motor (ac drive) is original. In this application the VQP proved to be more suitable than Kohonen's SOM for the projection of high-dimensional input data onto a reduced-dimension output space, also for visualisation. The conclusions present new problems that should be faced in the future. Best Regards Maurizio Cirrincione, PhD, C.Eng.
CERISEP - CNR c/o Department of Electrical Engineering University of Palermo Viale delle Scienze 90128 PALERMO ITALY tel. 0039 91 484686 fax 0039 91 485555 http://wwwcerisep.diepa.unipa.it/ From juergen at idsia.ch Fri Nov 14 04:55:40 1997 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Fri, 14 Nov 1997 10:55:40 +0100 Subject: 2 book chapters Message-ID: <199711140955.KAA10227@ruebe.idsia.ch> REINFORCEMENT LEARNING WITH SELF-MODIFYING POLICIES Juergen Schmidhuber & Jieyu Zhao & Nicol N. Schraudolph (IDSIA) In S. Thrun and L. Pratt, eds., Learning to learn, Kluwer, 1997 We apply the success-story algorithm to a reinforcement learner whose learning algorithm has modifiable components represented as part of its own policy. This is of interest in situations where the initial learning algorithm can be improved by experience (learning to learn). The system is tested in a complex partially observable environment. ftp://ftp.idsia.ch/pub/juergen/ssa.ps.gz _____________________________________________________________________ A COMPUTER SCIENTIST'S VIEW OF LIFE, THE UNIVERSE, AND EVERYTHING Juergen Schmidhuber (IDSIA) In C. Freksa, ed., Foundations of Computer Science: Potential - Theory - Cognition, Lecture Notes in Comp. Sci., Springer, 1997. Is the universe computable? If so, it may be much cheaper in terms of information requirements to compute all computable universes instead of just ours. I apply basic concepts of Kolmogorov complexity theory to the set of possible universes, and chat about perceived and true randomness, life, generalization, and learning in a given universe. ftp://ftp.idsia.ch/pub/juergen/everything.ps.gz _____________________________________________________________________ IDSIA, Corso Elvezia 36, 6900 Lugano, Switzerland www.idsia.ch From sml%essex.ac.uk at seralph21.essex.ac.uk Fri Nov 14 08:14:38 1997 From: sml%essex.ac.uk at seralph21.essex.ac.uk (Simon Lucas) Date: Fri, 14 Nov 1997 13:14:38 +0000 Subject: High-performance face recognition papers Message-ID: <346C4EBE.3937@essex.ac.uk> The following papers, describing the continuous n-tuple classifier and its application to face recognition, are available on-line from http://esewww.essex.ac.uk/~sml. The method offers high speed (it can match an unknown image against about 4,000 'face models' per second on a PC) and high accuracy. The BMVC '97 paper gives a more complete description of the system, while the Electronics Letters paper provides a more significant set of results. ------------------------------------------------ The continuous n-tuple classifier and its application to face recognition S.M. Lucas Electronics Letters, v33, pp 1676 - 1678 Abstract: This paper describes the continuous n-tuple classifier: a new type of n-tuple classifier that is well suited to problems where the input is continuous or multi-level rather than binary. Results on a widely used face database show the continuous n-tuple classifier to be as accurate as any method reported in the literature, while having the advantages of speed and simplicity over other methods. ------------------------------------------------ Face Recognition with the continuous n-tuple classifier S.M. Lucas In proceedings of British Machine Vision Conference-97 Abstract: Face recognition is an important field of research with many potential applications for suitably efficient systems, including biometric security and searching large face databases. This paper describes a new approach to the problem based on a new type of n-tuple classifier: the continuous n-tuple system.
Results indicate that the new method is faster and more accurate than previous methods reported in the literature on the widely used Olivetti Research Laboratories face database. Comments welcome. Simon Lucas ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk http://esewww.essex.ac.uk/~sml secretary: Mrs Wendy Ryder (+44) 1206 872437 ------------------------------------------------- From Dave_Touretzky at cs.cmu.edu Fri Nov 14 17:51:02 1997 From: Dave_Touretzky at cs.cmu.edu (Dave_Touretzky@cs.cmu.edu) Date: Fri, 14 Nov 1997 17:51:02 -0500 Subject: graduate training in cognitive/computational neuroscience Message-ID: <389.879547862@skinner.boltz.cs.cmu.edu> Graduate Training with the Center for the Neural Basis of Cognition The Center for the Neural Basis of Cognition offers interdisciplinary doctoral and postdoctoral training programs operated jointly with affiliated programs at Carnegie Mellon University and the University of Pittsburgh. Detailed information about these programs is available on our web site at http://www.cnbc.cmu.edu. The Center is dedicated to the study of the neural basis of cognitive processes including learning and memory, language and thought, perception, attention, and planning; to the study of the development of the neural substrate of these processes; to the study of disorders of these processes and their underlying neuropathology; and to the promotion of applications of the results of these studies to artificial intelligence, robotics, and medicine. CNBC students have access to some of the finest facilities for cognitive neuroscience research in the world: Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) scanners for functional brain imaging, neurophysiology laboratories for recording from brain slices and from anesthetized or awake, behaving animals, electron and confocal microscopes for structural imaging, high performance computing facilities including an in-house supercomputer for neural modeling and image analysis, and patient populations for neuropsychological studies. Students are admitted jointly to a home department and the CNBC Training Program. Applications are encouraged from students with interests in biology, neuroscience, psychology, engineering, physics, mathematics, computer science, or robotics. For a brochure describing the program and application materials, contact us at the following address: Center for the Neural Basis of Cognition 115 Mellon Institute 4400 Fifth Avenue Pittsburgh, PA 15213 Tel. (412) 268-4000. Fax: (412) 268-5060 email: cnbc-admissions at cnbc.cmu.edu Application materials are also available online. The affiliated PhD programs at the two universities are, at Carnegie Mellon: Biological Sciences, Computer Science, Psychology, and Robotics; and at the University of Pittsburgh: Mathematics, Neurobiology, Neuroscience, and Psychology. The CNBC training faculty includes: German Barrionuevo (Pitt Neuroscience): LTP in hippocampal slice Marlene Behrmann (CMU Psychology): spatial representations in parietal cortex Pat Carpenter (CMU Psychology): mental imagery, language, and problem solving B.J. Casey (Pitt Psychology): attention; developmental cognitive neuroscience Jonathan Cohen (CMU Psychology): schizophrenia; dopamine and attention Carol Colby (Pitt Neuroscience): spatial representations
in primate parietal cortex Bard Ermentrout (Pitt Mathematics): oscillations in neural systems Julie Fiez (Pitt Psychology): fMRI studies of language John Horn (Pitt Neurobiology): synaptic plasticity in autonomic ganglia Allen Humphrey (Pitt Neurobiology): motion processing in primary visual cortex Marcel Just (CMU Psychology): visual thinking, language comprehension Eric Klann (Pitt Neuroscience): hippocampal LTP and LTD Alan Koretsky (CMU Biological Sciences): new fMRI techniques for brain imaging Tai Sing Lee (CMU Comp. Sci.): primate visual cortex; computer vision David Lewis (Pitt Neuroscience): anatomy of frontal cortex James McClelland (CMU Psychology): connectionist models of cognition Carl Olson (CNBC): spatial representations in primate frontal cortex David Plaut (CMU Psychology): connectionist models of reading Michael Pogue-Geile (Pitt Psychology): development of schizophrenia John Pollock (CMU Biological Sci.): neurodevelopment of the fly visual system Walter Schneider (Pitt Psych.): fMRI, models of attention & skill acquisition Charles Scudder (Pitt Neurobiology): motor learning in cerebellum Susan Sesack (Pitt Neuroscience): anatomy of the dopaminergic system Dan Simons (Pitt Neurobiology): sensory physiology of the cerebral cortex William Skaggs (Pitt Neuroscience): representations in rodent hippocampus David Touretzky (CMU Comp. Sci.): hippocampus, rat navigation, animal learning See http://www.cnbc.cmu.edu for further details. From nkasabov at commerce.otago.ac.nz Tue Nov 18 01:19:49 1997 From: nkasabov at commerce.otago.ac.nz (Nikola Kasabov) Date: Mon, 17 Nov 1997 18:19:49 -1200 Subject: A new book on brain-like computing and intelligent systems In-Reply-To: <389.879547862@skinner.boltz.cs.cmu.edu> Message-ID: <27707066429@jupiter.otago.ac.nz> Springer Verlag 1997 BRAIN-LIKE COMPUTING AND INTELLIGENT INFORMATION SYSTEMS edited by S. Amari, RIKEN, Japan N. Kasabov, University of Otago, New Zealand Contents: PART I. COMPUTER VISION AND IMAGE PROCESSING: Active Vision: Neural Network Models (K. Fukushima); Image Recognition by Brains and Machines (E. Postma et al.); The Properties and Training of a Neural Network Based Universal Window Filter Developed for Image Processing Tasks (R.H. Pugmire et al.); PART II. SPEECH RECOGNITION AND LANGUAGE PROCESSING: A Computational Model of the Auditory Pathway to the Superior Colliculus (R. J. W. Wang and M. Jabri); A Framework for Intelligent 'Conscious' Machines Utilising Fuzzy Neural Networks and Spatio-Temporal Maps and a Case Study of Multilingual Speech Recognition (N. Kasabov); PART III. DYNAMIC SYSTEMS: STATISTICAL AND CHAOS MODELLING. BLIND SOURCE SEPARATION: Noise-Mediated Cooperative Behavior in Integrate-Fire Models of Neuron Dynamics (A.R. Bulsara); Blind Source Separation - Mathematical Foundations (S. Amari); Neural Independent Component Analysis - Approaches and Applications (E. Oja et al.); General Regression Techniques Based on Spherical Kernel Functions for Intelligent Processing (A. Zankich and Y. Attikiouzel); Chaos and Fractal Analysis of Irregular Time Series Embedded in a Connectionist Structure (R. Kozma and N. Kasabov); PART IV. LEARNING SYSTEMS AND EVOLUTIONARY COMPUTATION: Bayesian Ying-Yang System and Theory as a Unified Statistical Learning Approach (I): Unsupervised and Semi-Unsupervised Learning (Lei Xu); Evolutionary Computation: An Introduction, Some Current Applications, and Future Directions (D. B. Fogel); Biologically Inspired New Operations for Genetic Algorithms (A. Ghosh and S. K. Pal). PART V.
ADAPTIVE LEARNING FOR NAVIGATION, CONTROL AND DECISION MAKING: From nls at ra.abo.fi Sat Nov 15 10:21:26 1997 From: nls at ra.abo.fi (Nonlinear Solutions UPR) Date: Sat, 15 Nov 1997 17:21:26 +0200 (EET) Subject: EANN 98 CFP Message-ID: <199711151521.RAA13300@aton.abo.fi> International Conference on Engineering Applications of Neural Networks (EANN '98) Gibraltar 10-12 June 1998 First Call for Papers The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biomedical systems, and environmental engineering. Abstracts of one page (about 500 words) should be sent to eann98 at ctima.uma.es or NLS at abo.fi by 21 January 1998 by e-mail in plain text format. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Notification of acceptance will be sent around 15 February. Submissions will be reviewed and the number of full papers will be very limited. For information on earlier EANN conferences see the www pages at http://www.abo.fi/~abulsari/EANN97.html Proposals for special tracks are welcome. Among the special features of EANN '98 is a session on the Japanese state of the art in engineering applications of neural networks. Please contact Prof. Iwata (iwata at elcom.nitech.ac.jp) if you are interested. An industrial panel discussion may also be organised at this conference if there are enough participants from industry. Organising committee (to be extended) A. Bulsari, Nonlinear Solutions Oy, Finland J. Fernandez de Canete, University of Malaga, Spain J. Heikkonen, Helsinki University of Technology, Finland A. Ruano, University of Algarve, Portugal D. Tsaptsinos, Kingston University, UK E. Tulunay, Middle East Technical University, Turkey P. Zufiria, Polytechnic University of Madrid, Spain International program committee (to be confirmed, extended) C. Andersson, Colorado State University, USA G. Baier, University of Tubingen, Germany R. Baratti, University of Cagliari, Italy S. Canu, Compiegne University of Technology, France S. Cho, Pohang University of Science and Technology, Korea T. Clarkson, King's College, UK J. DeMott, John Mansfield Corporation, USA S. Draghici, Wayne State University, USA W. Duch, Nicholas Copernicus University, Poland G. Forsgren, Stora Corporate Research, Sweden P. Gallinari, University of Paris VI, France I. Grabec, University of Ljubljana, Slovenia A. Iwata, Nagoya Institute of Technology, Japan C. Kuroda, Tokyo Institute of Technology, Japan H. Liljenstrom, Royal Institute of Technology, Sweden L. Ludwig, University of Tubingen, Germany T. Nagano, Hosei University, Japan L. Niklasson, University of Skovde, Sweden F. Norlund, ABB Industrial Systems, Sweden R. Parenti, Ansaldo Ricerche, Italy V. Petridis, Aristotle University of Thessaloniki, Greece R. Rico-Martinez, Celaya Institute of Technology, Mexico J. Sa da Costa, Technical University of Lisbon, Portugal F. Sandoval, University of Malaga, Spain
Schizas, University of Cyprus, Cyprus Electronic mail is not absolutely reliable, so if you have not heard from the conference secretariat after sending your abstract, please contact us again. You should receive an abstract number within a couple of days of submission. From priel at alon.cc.biu.ac.il Mon Nov 17 08:05:23 1997 From: priel at alon.cc.biu.ac.il (Avner Priel) Date: Mon, 17 Nov 1997 15:05:23 +0200 (WET) Subject: New papers on time series generation Message-ID: Dear Connectionists, This is to announce the availability of 2 new papers on the subject of time series generation by feed-forward networks. The first paper will appear in the "Journal of Physics A" and the second in the NIPS-11 proceedings. The papers are available from my home-page: http://faculty.biu.ac.il/~priel/ Comments are welcome. *************** NO HARD COPIES ****************** ---------------------------------------------------------------------- Noisy time series generation by feed-forward networks ----------------------------------------------------- A Priel, I Kanter and D A Kessler Department of Physics, Bar Ilan University, 52900 Ramat Gan, Israel ABSTRACT: We study the properties of a noisy time series generated by a continuous-valued feed-forward network in which the next input vector is determined from past output values. Numerical simulations of a perceptron-type network exhibit the expected broadening of the noise-free attractor, without changing the attractor dimension. We show that the broadening of the attractor due to the noise scales inversely with the size of the system, $N$, as $1/\sqrt{N}$. We show both analytically and numerically that the diffusion constant for the phase along the attractor scales inversely with $N$. Hence, phase coherence holds up to a time that scales linearly with the size of the system. We find that the mean first passage time, $t$, to switch between attractors depends on $N$ and on the reduced distance from bifurcation, $\tau$, as $t = a {N \over \tau} \exp(b \tau N^{1/2})$, where $b$ is a constant which depends on the amplitude of the external noise. This result is obtained analytically for small $\tau$ and confirmed by numerical simulations. Analytical study of the interplay between architecture and predictability ------------------------------------------------------------------------- Avner Priel, Ido Kanter, D. A. Kessler Minerva Center and Department of Physics, Bar Ilan University, Ramat-Gan 52900, Israel. ABSTRACT: We study model feed-forward networks as time series predictors in the stationary limit. The focus is on complex, yet non-chaotic, behavior. The main question we address is whether the asymptotic behavior is governed by the architecture, regardless of the details of the weights. We find hierarchies among classes of architectures with respect to the attractor dimension of the long term sequence they are capable of generating; a larger number of hidden units can generate higher-dimensional attractors. In the case of a perceptron, we develop the stationary solution for a general weight vector, and show that the flow is typically one-dimensional. The relaxation time from an arbitrary initial condition to the stationary solution is found to scale linearly with the size of the network. In multilayer networks, the number of hidden units gives bounds on the number and dimension of the possible attractors. We conclude that long term prediction (in the non-chaotic regime) with such models is governed by attractor dynamics related to the architecture.
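As a rough illustration of the setup studied in both papers -- a continuous-valued feed-forward network whose next input vector is built from its own past outputs -- the following minimal Python sketch (not the authors' code; the network size, weights and noise amplitude are illustrative assumptions) generates such a noisy time series:

import numpy as np

# A perceptron-type generator: the next input window is the previous
# window shifted by one step, with the latest output fed back in front.
N = 50                                  # input window size ("system size"), assumed
rng = np.random.default_rng(0)
w = rng.normal(size=N) / np.sqrt(N)     # fixed perceptron weights, assumed
x = rng.normal(size=N)                  # initial input window
noise = 0.01                            # external noise amplitude, assumed

outputs = []
for t in range(1000):
    s = np.tanh(w @ x) + noise * rng.normal()   # noisy network output
    outputs.append(s)
    x = np.concatenate(([s], x[:-1]))           # feed output back as next input

The scaling results quoted above concern how the attractor traced out by such sequences broadens and diffuses as $N$ grows; the sketch only reproduces the generation mechanism, not the analysis.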
---------------------------------------------------- Priel Avner < priel at mail.cc.biu.ac.il > < http://faculty.biu.ac.il/~priel > Department of Physics, Bar-Ilan University. Ramat-Gan, 52900. Israel. From caironi at elet.polimi.it Tue Nov 18 08:00:25 1997 From: caironi at elet.polimi.it (Pierguido V.C. CAIRONI) Date: Tue, 18 Nov 1997 14:00:25 +0100 Subject: Technical Report Available Message-ID: <34719169.15A9BA6@elet.polimi.it> Please accept my apologies if you receive multiple copies of this message. The following technical report is available on the web at the page: http://www.elet.polimi.it/~caironi/listpub.html or directly at: ftp://www.elet.polimi.it/pub/data/Pierguido.Caironi/tr97_50.ps.gz ----------------------------------------------------------------------- Gradient-Based Reinforcement Learning: Learning Combinations of Control Policies Pierguido V.C. Caironi email: caironi at elet.polimi.it Technical Report 97.50 Dipartimento di Elettronica e Informazione Politecnico di Milano Abstract This report presents two innovative reinforcement learning algorithms for continuous state-action environments: Gradient REinforceMent LearnINg for Multiple control policies (GREMLIN-M) and Gradient REinforceMent LearnINg for Multiple and Single control policies (GREMLIN-MS). The two algorithms learn optimal combinations of control policies for autonomous agents. GREMLIN-M learns an optimal combination of fixed base control policies. GREMLIN-MS extends GREMLIN-M by enabling the agent to learn the base control policies simultaneously as well. GREMLIN-M and GREMLIN-MS optimize a performance function equal to the sum of the expected reinforcements in a sliding temporal window of finite length. The optimization is carried out through gradient ascent with respect to the parameter values of the control functions. While being natural extensions of previously existing supervised learning algorithms, GREMLIN-M and GREMLIN-MS improve the current state of the art of reinforcement learning by taking into account the temporal credit assignment problem for the on-line and real-time combination of control policies. Furthermore, GREMLIN-M and GREMLIN-MS lend themselves to a motivational interpretation. That is, the combination function resulting from learning may be seen as a representation of the motivations to apply any single base control policy in different environmental conditions. -- Name: Pierguido V. C. CAIRONI Job: Ph.D. Student at the Politecnico di Milano - ITALY e-mail: caironi at elet.polimi.it Address: Politecnico di Milano - Dip. di Elettronica e Informazione Piazza Leonardo da Vinci 32 20133 - Milano - ITALY Tel: +39-2-23993622 Fax: +39-2-23993411 WWW: http://www.elet.polimi.it/~caironi From Kim.Plunkett at psy.ox.ac.uk Tue Nov 18 13:34:35 1997 From: Kim.Plunkett at psy.ox.ac.uk (Kim Plunkett) Date: Tue, 18 Nov 1997 18:34:35 GMT Subject: No subject Message-ID: <199711181834.SAA22833@pegasus.psych.ox.ac.uk> To users of the tlearn neural network simulator: Version 1.0.1 of the tlearn simulator is now available for ftp at two sites: ftp://ftp.psych.ox.ac.uk/pub/tlearn/ (Old World Users) ftp://crl.ucsd.edu/pub/neuralnets/tlearn (New World Users) This version mostly involves bug fixes to the earlier version. A complete user manual for the software plus a set of tutorial exercises is available in: Plunkett and Elman (1997) "Exercises in Rethinking Innateness: A Handbook for Connectionist Simulations". MIT Press.
For WWW access, the San Diego tlearn page is http://crl.ucsd.edu/innate/tlearn.html This contains a link to the directory containing the binaries: ftp://crl.ucsd.edu/pub/neuralnets/tlearn Then click on the filename(s) to download. For direct ftp/fetch access via anonymous login: - ftp/fetch to crl.ucsd.edu (132.239.63.1) - login anonymous/email address - cd pub/neuralnets/tlearn At the Oxford site: ftp://ftp.psych.ox.ac.uk/pub/tlearn/wintlrn1.0.1.zip is a zip archive that contains the Windows 95 tlearn executable. ftp://ftp.psych.ox.ac.uk/pub/tlearn/wintlrn.zip is a link that always points to the latest version, in this case wintlrn1.0.1.zip. The Mac version is in the following location: ftp://ftp.psych.ox.ac.uk/pub/tlearn/mac_tlearn_1.0.1.sea.hqx From stephen at cns.ed.ac.uk Tue Nov 18 13:50:02 1997 From: stephen at cns.ed.ac.uk (Stephen Eglen) Date: Tue, 18 Nov 1997 18:50:02 GMT Subject: PhD thesis on the development of the retinogeniculate pathway Message-ID: <199711181850.SAA18103@mango.cns.ed.ac.uk> The following PhD thesis is available from http://www.cns.ed.ac.uk/people/stephen/pubs.html Modelling the development of the retinogeniculate pathway Stephen Eglen How does the visual system develop before the onset of visually-driven activity? By the time photoreceptors can respond to visual stimulation, some pathways, including the retinogeniculate pathway, have already reached a near-adult form. This rules out visually-driven activity guiding pathway development. During this period, however, spontaneous waves of activity travel across the retina, correlating the activity of neighbouring retinal cells. Activity-dependent mechanisms can exploit these correlations to guide retinogeniculate refinement. In this thesis I investigate, by means of computer simulation, the role of spontaneous retinal activity upon the development of ocular dominance and topography in the retinogeniculate pathway. Keesing, Stork and Shatz (1992) produced an initial model of retinogeniculate development driven by retinal waves. In this thesis, in addition to replicating their initial results, several new results are presented. First, the importance of presynaptic normalisation is highlighted. This is in contrast to most previous work on ocular dominance, which requires postsynaptic normalisation. Second, the covariance rule is adapted so that development can occur under conditions of sparse input activity. Third, the model is shown to replicate development under conditions of monocular deprivation. Fourth, model development is analysed using different spatio-temporal inputs including anticorrelations between on- and off-centre retinal units. The layered pattern of ocular dominance in the LGN is quite different to the stripe patterns found in the cortex. The factors controlling the patterns of ocular dominance are investigated using a feature-based model of map formation (Obermayer, Ritter, & Schulten, 1991). In common with other models, variance of the ocularity feature controls the pattern of stripes. The model is extended to a three-dimensional output array to show that ocular dominance layers form in this model, and that the retinotopic maps are organised into projection columns. Future work involves extending this three-dimensional model to receive retinal-based, rather than feature-based, inputs.
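As a concrete handle on the two ingredients the thesis emphasizes -- a covariance learning rule and presynaptic normalisation, driven by wave-like correlated input -- the following minimal Python sketch (a toy under assumed sizes and constants, not the thesis code) shows one way the combination can be implemented:

import numpy as np

# Wave-like correlated presynaptic activity drives a covariance rule;
# presynaptic normalisation then conserves each afferent's total weight.
# All sizes and constants below are illustrative assumptions.
rng = np.random.default_rng(1)
n_pre, n_post = 20, 10
W = rng.uniform(0.4, 0.6, size=(n_post, n_pre))  # initial weights, assumed
lr = 0.001                                       # learning rate, assumed
budget = W.sum(axis=0).mean()                    # per-afferent weight budget

for step in range(5000):
    centre = rng.integers(n_pre)                 # centre of a travelling "wave"
    pre = np.exp(-0.5 * ((np.arange(n_pre) - centre) / 2.0) ** 2)
    post = W @ pre                               # linear postsynaptic response
    # covariance rule: correlate deviations from mean activity
    W += lr * np.outer(post - post.mean(), pre - pre.mean())
    W = np.clip(W, 0.0, None)                    # keep weights non-negative
    # presynaptic normalisation: fix each afferent's (column's) total weight
    W *= budget / (W.sum(axis=0, keepdims=True) + 1e-12)

Normalising over the presynaptic index (the columns of W) rather than the postsynaptic one is exactly the contrast the thesis draws with earlier ocular dominance models.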
From wsenn at iam.unibe.ch Tue Nov 18 10:07:54 1997 From: wsenn at iam.unibe.ch (Walter Senn) Date: Tue, 18 Nov 1997 16:07:54 +0100 Subject: Coupled oscillations induced by synaptic depression Message-ID: <9711181507.AA04009@barney.unibe.ch> New paper available (to appear in Neural Computation): PATTERN GENERATION BY TWO COUPLED TIME-DISCRETE NEURAL NETWORKS WITH SYNAPTIC DEPRESSION W. Senn, Th. Wannier, J. Kleinle, H.-R. Luescher, L. Mueller, J. Streit, K. Wyler Spinal pattern generators are networks which produce rhythmic contractions alternating between different groups of muscles involved, e.g., in locomotion. These networks are generally thought to rely on pacemaker cells or well-designed circuits consisting of inhibitory and excitatory neurons. Recent experiments in organotypic cultures of embryonic rat spinal cord, however, have shown that neuronal networks with random and purely excitatory connections may oscillate as well, even without pacemaker cells. The cause of these oscillations was identified as a fast depression of the activated synapses. In this theoretical study, we explore the dynamical behavior emerging from weakly coupling two random excitatory networks with synaptic depression. We discuss a time-discrete mean field model describing the average activity and the average synaptic depression of the two networks. As a mathematical tool we adapt the Average Phase Difference (APD) theory, originally developed for flows, to the present case of maps. Depending on the parameter values of the depression, one may predict whether the oscillations will be in-phase, anti-phase, quasiperiodic or phase-trapped. We put forward the hypothesis that pattern generators may rely on activity-dependent tuning of the synaptic depression. The manuscript (262 KB) can be downloaded from: http://iamwww.unibe.ch:80/~brainwww/publications/pub_walter.html From weaveraj at helios.aston.ac.uk Wed Nov 19 11:06:33 1997 From: weaveraj at helios.aston.ac.uk (Andrew Weaver) Date: Wed, 19 Nov 1997 16:06:33 +0000 Subject: Studentship, Aston University, UK Message-ID: <194.199711191606@sun.aston.ac.uk> Neural Computing Research Group, Aston University, Birmingham, UK We would like to invite applications for a TMR postgraduate grant (eligibility criteria mean that only non-UK EU nationals can apply) for study in the following areas: Neural Nets for Control: Advancing the Theory (ITN) Statistical mechanics of error-correcting codes (DS) Average performance of support vector machines (DS) Image Understanding with Probabilistic Models (CKIW) Non-linear Signal Processing by Neural Networks (DL) Synergetics for computational networks (DL) The student would be assisted with their application to the EU for funding for a three-year project leading to a postgraduate qualification. Should the EU application be unsuccessful, there may be a possibility of awarding a Divisional Studentship. Applicants should have been (or expect to be) awarded an undergraduate degree in a numerate discipline. Please could you email (text only) a *brief* CV to include your name, address, email, nationality, qualifications (including marks) and any publications you may have, together with a description (the equivalent of one side of A4) of how you would approach research in one of the above topics (not forgetting the topic title!) to ncrg at aston.ac.uk by 9.00am on Thursday 27th November 1997. The closing date for TMR applications is 15th December 1997, so deadlines are tight.
Further details of the Research Group and some of the above projects can be found at http://www.ncrg.aston.ac.uk/ Further details of the TMR programme can be found at http://www.cordis.lu/tmr/home.html From hali at theophys.kth.se Wed Nov 19 13:52:39 1997 From: hali at theophys.kth.se (Hans Liljenstrom) Date: Wed, 19 Nov 1997 19:52:39 +0100 Subject: Workshop on Hippocampal Modeling Message-ID: <34733577.D88@theophys.kth.se> WORKSHOP ON HIPPOCAMPAL MODELING January 9-11, 1998 at Agora for Biosystems, Sigtuna, Sweden There are many modeling efforts regarding the hippocampus and its functions, in particular its learning capacities. In this meeting we will review some of these models and discuss their biological relevance, and in what direction future modeling efforts should go. The workshop is intended to promote an active dialogue between modelers and experimentalists. Some oral presentations will be given, but the main focus is on formal and informal discussions. Invited participants include Per Andersen (Oslo), Michael Hasselmo (Harvard), William Levy (Charlottesville), Sakire Pogun (Izmir), and Sven-Ove Ogren (Stockholm). The program and additional information will be available on the following web site: http://www.theophys.kth.se/~hali/agora/hippoc.html We welcome interested modellers and experimentalists to make early registrations with the organizers (the total number of participants is limited to 25). Please also indicate whether you would like to give a short oral presentation. Hans Liljenstrom Agora for Biosystems Box 57 Sigtuna, Sweden Phone: +46-(0)8-592 50901 +46-(0)8-790 7172 Email: hali at theophys.kth.se From bakker at research.nj.nec.com Wed Nov 19 14:52:59 1997 From: bakker at research.nj.nec.com (Rembrandt Bakker) Date: Wed, 19 Nov 1997 14:52:59 -0500 Subject: TR on learning chaotic dynamics Message-ID: <9711191452.ZM325@heavenly.nj.nec.com> The following manuscript (7 pages) is now available at the WWW sites listed below: www.neci.nj.nec.com/homepages/bakker/UMD-CS-TR-3843.ps.gz www.neci.nj.nec.com/homepages/giles/papers/UMD-CS-TR-3843.neural.learning.chaotic.dynamics.ps.Z We apologize in advance for any multiple postings that may occur. *********************************************************************** Neural Learning of Chaotic Dynamics: The Error Propagation Algorithm Rembrandt Bakker(1), Jaap C. Schouten(1), C. Lee Giles(2,3), C.M. van den Bleek (1) (1) Delft University of Technology, Dept. Chemical Process Technology, Julianalaan 136, 2628 BL Delft, The Netherlands. (2) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA. (3) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA. ABSTRACT An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time-series. The algorithm has four special features: 1. The state of the system is extracted from the time-series using delays, followed by weighted Principal Component Analysis (PCA) data reduction. 2. The prediction model consists of both a linear model and a Multi-Layer Perceptron (MLP). 3. The effective prediction horizon during training is user-adjustable, due to error propagation: prediction errors are partially propagated to the next time step. 4. A criterion is monitored during training to select the model that has a chaotic attractor most similar to the real system's attractor. The algorithm is applied to laser data from the Santa Fe time-series competition (set A).
The resulting model is not only useful for short-term predictions but also generates time series with chaotic characteristics similar to those of the measured data. Keywords - time series, neural networks, chaotic dynamics, laser data, Santa Fe time series competition, Lyapunov exponents, principal component analysis, error propagation. -- Rembrandt Bakker r.bakker at stm.tudelft.nl http://www.neci.nj.nec.com/homepages/bakker From skalak at us.ibm.com Wed Nov 19 15:11:05 1997 From: skalak at us.ibm.com (David Skalak) Date: Wed, 19 Nov 1997 15:11:05 -0500 Subject: Ph.D. thesis: combining nearest neighbor classifiers Message-ID: <5010400014648620000002L002*@MHS> The following Ph.D. dissertation is available: Prototype Selection for Composite Nearest Neighbor Classifiers David B. Skalak Dept. of Computer Science University of Massachusetts Amherst, MA 01003-4610 The dissertation and other publications can be obtained from my homepage: http://www.cs.cornell.edu/Info/People/skalak The dissertation can also be retrieved directly from http://www.cs.cornell.edu/Info/People/skalak/thesis-header-dspace.ps.gz Keywords: classifier combination, ensemble methods, stacked generalization, boosting, accuracy-diversity graphs, classifier sampling, k-nearest neighbor, prototype selection, reference selection, editing algorithms, instance taxonomy, coarse classification, deliberate misclassification Abstract: Combining the predictions of a set of classifiers has been shown to be an effective way to create composite classifiers that are more accurate than any of the component classifiers. Increased accuracy has been shown in a variety of real-world applications, ranging from protein sequence identification to determining the fat content of ground meat. Despite such individual successes, the answers to fundamental questions about classifier combination are not known, such as ``Can classifiers from any given model class be combined to create a composite classifier with higher accuracy?'' or ``Is it possible to increase the accuracy of a given classifier by combining its predictions with those of only a small number of other classifiers?''. The goal of this dissertation is to provide answers to these and closely related questions with respect to a particular model class, the class of nearest neighbor classifiers. We undertake the first study that investigates in depth the combination of nearest neighbor classifiers. Although previous research has questioned the utility of combining nearest neighbor classifiers, we introduce algorithms that combine a small number of component nearest neighbor classifiers, where each of the components stores a small number of prototypical instances. In a variety of domains, we show that these algorithms yield composite classifiers that are more accurate than a nearest neighbor classifier that stores all training instances as prototypes. The research presented in this dissertation also extends previous work on prototype selection for an independent nearest neighbor classifier. We show that in many domains, storing a very small number of prototypes can provide classification accuracy greater than or equal to that of a nearest neighbor classifier that stores all training instances. We extend previous work by demonstrating that algorithms that rely primarily on random sampling can effectively choose a small number of prototypes. Regards, David. David B. Skalak, Ph.D.
Senior Data Mining Analyst IBM Global Business Intelligence Solutions IBM tie-line: 320-9774; Telephone: 607 257-5510 Internet: skalak at us.ibm.com From michael.haft at mchp.siemens.de Thu Nov 20 05:48:27 1997 From: michael.haft at mchp.siemens.de (Michael Haft) Date: Thu, 20 Nov 1997 11:48:27 +0100 (MET) Subject: Model-Independent Mean Field Theory for Approximate Inference Message-ID: <199711201048.LAA25254@yin.mchp.siemens.de> The following paper on approximate propagation of information is available online: Model-Independent Mean Field Theory as a Local Method for Approximate Propagation of Information Michael Haft, Reimar Hofmann and Volker Tresp SIEMENS AG, Corporate Technology Abstract We present a systematic approach to mean field theory in a general probabilistic setting without assuming a particular model and avoiding physical notation. The mean field equations derived here may serve as a {\em local} and thus very simple method for approximate inference in graphical models. In general, there are multiple solutions to the mean field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean field solutions. We derive simple approximate expressions for the mixture weights, which can also be obtained by means of only {\em local} computations. The benefits of taking into account multiple solutions are demonstrated by using mean field theory for inference in a small `noisy-or network'. The paper is available online from: http://www7.informatik.tu-muenchen.de/~hofmannr/mf_abstr.html Comments are welcome. A modified version of this paper has been submitted for publication. ___________________________________________________________________________ Michael Haft ZT IK 4 Siemens AG, CR & D Email: Michael.Haft at mchp.siemens.de 81730 Muenchen Tel: +49/89/636-47953 Germany Fax: +49/89/636-49767 From georg at ai.univie.ac.at Thu Nov 20 10:41:54 1997 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Thu, 20 Nov 1997 16:41:54 +0100 Subject: 2 papers: Application of Bayesian inference in NN Message-ID: <34745A42.500F9F30@ai.univie.ac.at> Dear connectionists, the following two papers are available at the WWW sites listed below. ---------- Experiences with Bayesian learning in a real world application Sykacek P., Dorffner G., Rappelsberger P., Zeitlhofer J. to appear in: Advances in Neural Information Processing Systems, Vol.10, 1998. http://www.ai.univie.ac.at/~georg/papers/nips97.ps.Z Abstract: This paper reports on an application of Bayesian inferred neural network classifiers to the field of automatic sleep staging. The reason for using Bayesian learning for this task is two-fold. First, Bayesian inference is known to embody regularization automatically. Second, a side effect of Bayesian learning leads to larger variance of network outputs in regions without training data. This results in well-known moderation effects, which can be used to detect outliers. In a 5-fold cross-validation experiment the full Bayesian solution was not better than a single maximum a-posteriori (MAP) solution found with D.J. MacKay's evidence approximation (see MacKay 1992). In a second experiment we studied the properties of both solutions in rejecting classification of movement artefacts. ----------- Evaluating confidence measures in a neural network based sleep stager Sykacek P., Dorffner G., Rappelsberger P., Zeitlhofer J.
Austrian Research Institute for Artificial Intelligence, Vienna, Technical Report TR-97-21, 1997; submitted for publication http://www.ai.univie.ac.at/~georg/papers/ieee.ps.Z Abstract: In this paper we report on an extensive investigation of neural networks -- multilayer perceptrons (MLP), in particular -- in the task of automatic sleep staging based on electroencephalogram (EEG) and electrooculogram (EOG) signals. After the important first step of preprocessing and feature selection (for which a search-based selection technique could reduce the large number of features to a feature vector of size ten), the main focus was on evaluating the use of so-called ``doubt-levels'' and ``confidence intervals'' (``error bars'') in improving the results by rejecting uncertain cases and patterns not well represented by the training set. The main technique used here is that of Bayesian inference to arrive at probability distributions of network weights based on training data. We compare the results of the full-blown Bayesian method with a reduced method calculating only the maximum posterior solution and with an MLP trained with the more common gradient descent technique for minimizing an error measure (``backpropagation''). The results show that, while both the full-blown Bayesian technique and the maximum posterior solution significantly outperform the simpler backpropagation technique, only the application of doubt-levels to reject uncertain cases can lead to an improvement of results. Our conclusion is that the data set of five independent all-night recordings from five normal subjects represents the possible data space well enough. At the same time, we show that Bayesian inference, for which we have developed a useful extension for reliable calculation of error bars, can still be used for the rejection of artefacts. ------------------- This work was done in the context of the concerted action ANNDEE (http://www.ai.univie.ac.at/oefai/nn/anndee), investigating neural networks in EEG processing, sponsored by the European Union and the Austrian Federal Ministry of Science and Transport. -------------------- Georg Dorffner Austrian Research Institute for Artificial Intelligence Neural Networks Group Schottengasse 3, A-1010 Vienna, Austria http://www.ai.univie.ac.at/oefai/nn From cjcb at molson.ho.lucent.com Thu Nov 20 15:55:18 1997 From: cjcb at molson.ho.lucent.com (Chris Burges) Date: Thu, 20 Nov 1997 15:55:18 -0500 Subject: Two Announcements on Support Vector Machines Message-ID: <199711202055.PAA13835@cottontail.dsp> TUTORIAL: The following paper is available at http://svm.research.bell-labs.com/SVMdoc.html A Tutorial on Support Vector Machines for Pattern Recognition C.J.C. Burges, Bell Laboratories, Lucent Technologies Invited Paper for Data Mining and Knowledge Discovery The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are non-linear in the data.
We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels; we then show how SVMs nevertheless provide a natural mechanism for implementing structural risk minimization, often resulting in good generalization performance. Finally, we discuss the various bounds on the generalization performance of SVMs. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light. * * * SUPPORT VECTOR MACHINE WEB PAGE: The following page allows users to submit their data, and have a support vector machine (SVM) trained automatically on that data. The results, as well as automatically generated ANSI C code which instantiates their classifier, are then available to them via FTP. There are also other resources available (for example, an Applet which allows users to play with two-dimensional SVM pattern recognition). The URL is: http://svm.research.bell-labs.com From nimzo at cerisep1.diepa.unipa.it Sat Nov 22 12:33:02 1997 From: nimzo at cerisep1.diepa.unipa.it (Maurizio Cirrincione) Date: Sat, 22 Nov 1997 18:33:02 +0100 Subject: Abstract of PhD thesis about NN and electrical drives Message-ID: Dear Connectionists, Due to a crash of my system, I have lost the addresses of all those who had asked me for a hard copy of my PhD thesis. I therefore ask these people to send me an email again. As I have already written, at the moment the thesis is being translated by me from Italian into English, which means that I will be able to send a hard copy not earlier than Christmas. My email address is nimzo at vega.diepa.unipa.it Best regards and thank you for your attention and help Maurizio Cirrincione, PhD, C.Eng. CERISEP - CNR c/o Department of Electrical Engineering University of Palermo Viale delle Scienze 90128 PALERMO ITALY tel. 0039 91 484686 fax 0039 91 485555 http://wwwcerisep.diepa.unipa.it/ From niebur at russell.mb.jhu.edu Mon Nov 24 13:08:30 1997 From: niebur at russell.mb.jhu.edu (Ernst Niebur) Date: Mon, 24 Nov 1997 13:08:30 -0500 Subject: Graduate studies in systems and computational neuroscience at Johns Hopkins University Message-ID: <199711241808.NAA28289@russell.mb.jhu.edu> The Johns Hopkins University is a major private research university and its hospital and medical school have been rated consistently as the first or second in the nation.
Additional information on the research interests of the faculty in the Mind/Brain Institute and the Department of Neuroscience can be obtained at http://russell.mb.jhu.edu/mbi.html and at http://www.med.jhu.edu/neurosci/welcome.html, respectively. Applicants should have a B.S. or B.A. with a major in any of the biological or physical sciences. Applicants are required to take the Graduate Record Examination (GRE; both the aptitude tests and an advanced test), or the Medical College Admission Test. Further information on the admission procedure can be obtained from the Department of Neuroscience: Director of Graduate Studies Neuroscience Training Program Department of Neuroscience The Johns Hopkins University School of Medicine 725 Wolfe Street Baltimore, MD 21205 Completed applications (including three letters of recommendation and either GRE scores or Medical College Admission Test scores) must be received by January 1, 1998 at the above address. From Peter.Bartlett at keating.anu.edu.au Tue Nov 25 02:29:35 1997 From: Peter.Bartlett at keating.anu.edu.au (Peter Bartlett) Date: Tue, 25 Nov 1997 18:29:35 +1100 (EST) Subject: COLT98 call for papers Message-ID: <199711250729.SAA17927@reid.anu.edu.au> CALL FOR PAPERS: COLT '98 Eleventh Annual Conference on Computational Learning Theory University of Wisconsin-Madison July 24-26, 1998 The Eleventh Annual Conference on Computational Learning Theory (COLT '98) will be held at the University of Wisconsin-Madison from Friday, July 24 through Sunday, July 26, 1998. The conference will be co-located with the Fifteenth International Conference on Machine Learning (ICML '98) and the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI '98). Registrants to any of COLT, ICML, or UAI will be allowed to attend, without additional costs, the technical sessions of the other two conferences. Joint invited speakers, poster session, and a panel session are planned for the three conferences. The conferences will be directly followed by the Fifteenth National Conference on Artificial Intelligence (AAAI '98). The AAAI tutorial and workshop program will be held the day after the co-located conferences (Monday, July 27), and we anticipate that this program will include workshops and tutorials in the machine learning area. On the same day, UAI will offer a full day course on uncertain reasoning. There will be six other AI-related conferences held in Madison around this time. We invite papers in all areas that relate directly to the analysis of learning algorithms and the theory of machine learning. Some of the issues and topics that have been addressed in the past include: * design and analysis of learning algorithms; * sample and computational complexity of learning specific model classes; * frameworks modeling the interaction between the learner, teacher and the environment (such as learning with queries, learning control policies and inductive inference); * learning using complex models (such as neural networks and decision trees); * learning with minimal prior assumptions (such as mistake-bound models, universal prediction, and agnostic learning). We strongly encourage submissions from all disciplines engaged in research on these and related questions. Examples of such fields include computer science, statistics, information theory, pattern recognition, statistical physics, inductive logic programming, information retrieval and reinforcement learning. 
We also encourage the submission of papers describing experimental results that are supported by theoretical analysis. EXTENDED ABSTRACT SUBMISSION: Authors are encouraged to submit their extended abstracts electronically. Instructions for electronic submissions can be obtained by sending email to colt98 at anu.edu.au with subject "help". Alternatively, authors may submit fourteen copies (preferably two-sided) of an extended abstract to: Peter Bartlett -- COLT '98 Department of Systems Engineering RSISE Building 115 Australian National University Canberra 0200 Australia Telephone (for express mail): +61 2 6279 8681 Extended abstracts (whether hard-copy or electronic) must be received by 5:00pm Canberra time (= 1:00am Eastern Time) on FRIDAY, JANUARY 30, 1998. This deadline is firm. (We also will accept extended abstracts sent via air mail and postmarked by January 19.) Authors will be notified of acceptance or rejection on or before April 3, 1998. Final camera-ready versions will be due by May 1. Papers that have appeared in journals or other conferences, or that are being submitted to other conferences (including ICML and UAI), are not appropriate for submission to COLT. EXTENDED ABSTRACT FORMAT: The extended abstract should be accompanied by a cover page with title, authors' names, postal and email addresses, and a 200-word summary. The body of the extended abstract should be no longer than 10 pages in 12-point font. If it exceeds 10 pages, only the first 10 pages may be examined. The extended abstract should include a clear definition of the theoretical model used and a clear description of the results, as well as a discussion of their significance, including comparison to other work. Proofs or proof sketches should be included. PROGRAM FORMAT: All accepted papers will be presented orally, although some or all papers may also be included in a poster session. At the discretion of the program committee, the program may consist of both long and short talks, corresponding to longer and shorter papers in the proceedings. By default, all papers will be considered for both categories. Authors who do not want their papers considered for the short category should indicate that fact in a cover letter. PROGRAM CHAIRS: Peter Bartlett (Australian National University) Yishay Mansour (Tel-Aviv University) PROGRAM COMMITTEE: Dana Angluin (Yale University), Peter Auer (Technical University Graz), Jonathan Baxter (Australian National University), Avrim Blum (Carnegie Mellon University), Nicoló Cesa-Bianchi (University of Milan), William Cohen (AT&T Labs), Bill Gasarch (University of Maryland), Vijay Raghavan (Vanderbilt University), Dan Roth (University of Illinois, Urbana-Champaign), Ronitt Rubinfeld (Cornell University), Stuart Russell (University of California, Berkeley), Rolf Wiehagen (University of Kaiserslautern) LOCAL ARRANGEMENTS: John Case (University of Delaware), Jude Shavlik (University of Wisconsin, Madison), Bob Sloan (University of Illinois, Chicago). WEB: Dana Ron (MIT). STUDENT TRAVEL: We anticipate some funds will be available to partially support travel by student authors. Eligible authors who wish to apply for travel support should indicate this in a cover letter. STUDENT PAPER PRIZE: The Mark Fulk Award for the best paper authored or coauthored by a student is expected to be available for the first time this year. Eligible authors who wish to be considered for this prize should indicate this on the cover page. 
FOR MORE INFORMATION: Visit the COLT'98 web page at http://theory.lcs.mit.edu/COLT-98/, or send email to colt98 at anu.edu.au. From greiner at redwater.cs.ualberta.ca Tue Nov 25 19:19:32 1997 From: greiner at redwater.cs.ualberta.ca (Russ Greiner) Date: Tue, 25 Nov 1997 17:19:32 -0700 Subject: PostDoc - Learning, Bayesian Nets - UofAlberta Message-ID: <19971126001941Z15186-19530+70@scapa.cs.ualberta.ca> POST-DOCTORAL RESEARCH FELLOWSHIP IN COMPUTER SCIENCE University of Alberta Edmonton, Canada Applications are invited for a one-year (renewable) fellowship to work in the areas of * machine learning / learnability / datamining * knowledge representation, especially Bayesian networks and other probabilistic structures. Candidates should have a PhD in Computer Science or the equivalent, and will be required to carry out high quality research, to obtain both theoretical and empirical results. Previous research excellence and strong productivity in addition to good computing background is essential. Applications including * CV * statement of interests * 1 or 2 publications * list of references should be sent ASAP (but no later than 15 January 1998) to: Russell Greiner Department of Computing Science 615 General Service Bldg University of Alberta Edmonton, AB T6G 2H1 Email: greiner at cs.ualberta.ca Phone: 403 492 5461 Fax: 403 492 1071 Electronic submissions -- in plain text or postscript -- are encouraged, especially as there is currently a mail strike in Canada. See http://www.cs.ualberta.ca for more information about the department in general. From esann at dice.ucl.ac.be Wed Nov 26 03:28:05 1997 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Wed, 26 Nov 1997 10:28:05 +0200 Subject: extended deadline for ESANN 98 Message-ID: <199711260914.KAA07052@ns1.dice.ucl.ac.be> --------------------------------------------------- | 6th European Symposium | | on Artificial Neural Networks | | | | ESANN 98 | | | | Bruges - April 22-23-24, 1998 | | | | Final call for papers | --------------------------------------------------- We are pleased to announce the extended deadline for the submission of papers to ESANN'98: December 5th, 1997 (instead of December 1st, 1997). We encourage authors of late papers to announce their submission by sending the "author submission form" by fax (+ 32 10 47 25 98) before the deadline. The call for papers for the ESANN 98 conference (including the author submission form) is available on the Web: http://www.dice.ucl.ac.be/esann For any other question about the submission of papers, please send an email to esann at dice.ucl.ac.be. Sincerely yours, The ESANN'98 organizing committee. _____________________________ _____________________________ D facto publications - Michel Verleysen conference services Univ. Cath. de Louvain - DICE 45 rue Masui 3, pl. du Levant 1000 Brussels B-1348 Louvain-la-Neuve Belgium Belgium tel: +32 2 203 43 63 tel: +32 10 47 25 51 fax: +32 2 203 42 94 fax: +32 10 47 25 98 esann at dice.ucl.ac.be verleysen at dice.ucl.ac.be http://www.dice.ucl.ac.be/esann _____________________________ _____________________________ From Yves.Moreau at esat.kuleuven.ac.be Wed Nov 26 05:09:24 1997 From: Yves.Moreau at esat.kuleuven.ac.be (Yves Moreau) Date: Wed, 26 Nov 1997 11:09:24 +0100 Subject: International Workshop Announcement Message-ID: <347BF554.A2C8F121@esat.kuleuven.ac.be> International Workshop on *** ADVANCED BLACK-BOX TECHNIQUES FOR NONLINEAR MODELING: THEORY AND APPLICATIONS *** with !!! TIME-SERIES PREDICTION COMPETITION !!! 
Date: July 8-10, 1998 Place: Katholieke Universiteit Leuven, Belgium Info: http://www.esat.kuleuven.ac.be/sista/workshop/ Organized at the Department of Electrical Engineering (ESAT-SISTA) and the Interdisciplinary Center for Neural Networks (ICNN) in the framework of the project KIT and the Belgian Interuniversity Attraction Pole IUAP P4/02. * GENERAL SCOPE The rapid growth of the field of neural networks, fuzzy systems and wavelets offers a variety of new techniques for modeling nonlinear systems in the broad sense. These topics have been investigated from different points of view, including statistics, identification and control theory, approximation theory, signal processing, nonlinear dynamics, information theory, physics and optimization theory, among others. The aim of this workshop is to serve as an interdisciplinary forum for bringing together specialists in these research disciplines. Issues related to the fundamental theory as well as real-life applications will be addressed at the workshop. * TIME-SERIES PREDICTION COMPETITION Within the framework of this workshop a time-series prediction competition will be held. The results of the competition will be announced during the workshop, where the winner will receive an award. Participants in the competition are asked to submit their predicted data together with a short description of and references for the methods used. In order to stimulate wide participation in the competition, attendance at the workshop is not mandatory but is of course encouraged. * INVITED SPEAKERS (confirmed) L. Feldkamp (Ford Research, USA) - Extended Kalman filtering C. Micchelli (IBM T.J. Watson, USA) - Density estimation U. Parlitz (Gottingen, Germany) - Nonlinear time-series analysis J. Sjoberg (Goeteborg, Sweden) - Nonlinear system identification S. Tan (Beijing, China) - Wavelet-based system modeling M. Vidyasagar (Bangalore, India) - Statistical learning theory V. Wertz (Louvain-la-Neuve, Belgium) - Fuzzy modeling * TOPICS include but are not limited to: Nonlinear system identification; Backpropagation; Time series analysis; Learning and nonlinear optimization; Multilayer perceptrons; Recursive algorithms; Radial basis function networks; Extended Kalman filtering; Fuzzy modelling; Embedding dimension; Wavelets; Subspace methods; Piecewise linear models; Identifiability; Mixture of experts; Model selection and validation; Universal approximation; Simulated annealing; Recurrent networks; Genetic algorithms; Regularization; Forecasting; Bayesian estimation; Frequency domain identification; Density estimation; Classification; Information geometry; Real-life applications; Generalization; Software. * IMPORTANT DATES Deadline paper submission: April 2, 1998 Notification of acceptance: May 4, 1998 Workshop: July 8-10, 1998 Time-series competition: Deadline data submission: March 20, 1998 * Chairman: Johan Suykens Katholieke Universiteit Leuven Departement Elektrotechniek - ESAT/SISTA Kardinaal Mercierlaan 94 B-3001 Leuven (Heverlee), Belgium Tel: 32/16/32 18 02 Fax: 32/16/32 19 70 Email: Johan.Suykens at esat.kuleuven.ac.be Program Committee: B. De Moor, E. Deprettere, D. Roose, J. Schoukens, S. Tan, J. Vandewalle, V. Wertz, Y.
Yu From Jon.Baxter at keating.anu.edu.au Wed Nov 26 06:34:36 1997 From: Jon.Baxter at keating.anu.edu.au (Jonathan Baxter) Date: Wed, 26 Nov 1997 22:34:36 +1100 (EST) Subject: Technical report available on reinforcement learning and chess Message-ID: <199711261134.WAA21952@reid.anu.edu.au> Technical Report Available -------------------------- Title ----- KnightCap: A chess program that learns by combining TD($\lambda$) with minimax search. Authors ------- Jonathan Baxter, Andrew Tridgell and Lex Weaver. Department of Systems Engineering and Department of Computer Science, Australian National University. Abstract ------- In this paper we present TDLeaf($\lambda$), a variation on the TD($\lambda$) algorithm that enables it to be used in conjunction with minimax search. We present some experiments in which our chess program, ``KnightCap,'' used TDLeaf($\lambda$) to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games and 3 days of play (equivalent to improving from mediocre to expert for a human). A more recent version of KnightCap is currently playing on the "Non-Free" Internet Chess Server (ICC, chessclub.com) with a rating of around 2500. We discuss some of the reasons for this success and also the relationship between our results and Tesauro's results in backgammon. Download Instructions --------------------- You can ftp the paper directly from ftp://syseng.anu.edu.au/~jon/publish/papers/knightcap.tar.gz If you want to learn more about KnightCap, check out http://syseng.anu.edu.au/lsg and follow the knightcap link. You can retrieve the paper from there, the latest source code for KnightCap, and watch a version of KnightCap ("KnightC") playing on ICC with our chess applet. From nimzo at vega.cerisep.pa.cnr.it Wed Nov 26 04:57:04 1997 From: nimzo at vega.cerisep.pa.cnr.it (Maurizio Cirrincione) Date: Wed, 26 Nov 1997 10:57:04 +0100 Subject: Abstract of PhD thesis about NN and electrical drives (change of address) Message-ID: Dear Connectionists, Sorry again, but due to a final and definite crash of my system, my email address has changed to nimzo at vega.cerisep.pa.cnr.it. All those who have requested a copy of my PhD thesis, or who are interested in one, should send me an email at this NEW email address. Thank you for your patience Maurizio Cirrincione From Jon.Baxter at keating.anu.edu.au Wed Nov 26 14:37:30 1997 From: Jon.Baxter at keating.anu.edu.au (Jonathan Baxter) Date: Thu, 27 Nov 1997 06:37:30 +1100 (EST) Subject: Correction: Technical Report on RL and chess. Message-ID: <199711261937.GAA22467@reid.anu.edu.au> Whoops! My posting last night contained the wrong ftp address. The right one is included below. Also there was an incorrect link in the web page. Thanks to all those who pointed this out. Sorry for the inconvenience. Cheers, Jon ------------- Jonathan Baxter Research Fellow Department of Systems Engineering Research School of Information Science and Engineering Australian National University http://keating.anu.edu.au/~jon Tel: +61 2 6279 8678 Fax: +61 2 6279 8688 ----------------------------------------- Technical Report Available -------------------------- Title ----- KnightCap: A chess program that learns by combining TD($\lambda$) with minimax search. Authors ------- Jonathan Baxter, Andrew Tridgell and Lex Weaver. Department of Systems Engineering and Department of Computer Science, Australian National University.
Download Instructions --------------------- You can ftp the paper directly from ftp://syseng.anu.edu.au/~jon/papers/knightcap.ps.gz If you want to learn more about KnightCap, check out http://syseng.anu.edu.au/lsg and follow the knightcap link. You can retrieve the paper from there, the latest source code and watch a version of KnightCap ("KnightC") playing on ICC with our chess applet. From niranjan at eng.cam.ac.uk Wed Nov 26 23:23:49 1997 From: niranjan at eng.cam.ac.uk (niranjan@eng.cam.ac.uk) Date: Thu, 27 Nov 97 04:23:49 GMT Subject: Neural Nets for Signal Processing Message-ID: <9711270423.3329@baby.eng.cam.ac.uk> Provisional announcement of NNSP98 niranjan ---------------------------------------------- FIRST ANNOUNCEMENT AND CALL FOR PAPERS THE 1998 IEEE SIGNAL PROCESSING SOCIETY WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING August 31 - September 3, 1998 Isaac Newton Institute for Mathematical Sciences, Cambridge, England The 1998 IEEE Workshop on Neural Networks for Signal Processing is the seventh in the series of workshops. Cambridge is a historic town, housing one of the leading Universities and several research institutions. In the Summer it is a beautiful place and a large number of visitors come here. It is easily reached by train and road from the airports in London. The combination of these makes it an ideal setting to host this workshop. The Isaac Newton Institute for Mathematical Sciences is based in Cambridge, adjoining the University and the Colleges. It was founded in 1992, and is devoted to the study of all branches of Mathematics. The Institute runs programmes that last for up to six months on various topics in mathematical sciences. Past programmes of relevance to this proposal include Computer Vision, Financial Mathematics and the current programme on Neural Networks and Machine Learning (July - December, 1997). One of the programmes at the Institute in July-December 1998 is Nonlinear and Nonstationary Signal Processing. Hence hosting this conference at the Institute will benefit the participants in many ways. 4. Accommodations Accommodation will be at Robinson College, Cambridge. Robinson is one of the new Colleges in Cambridge, and uses its facilities to host conferences during the summer months. It can accommodate about 300 guests in comfortable rooms. The College is within walking distance of the Cambridge city center and the Newton Institute. 5. Organization General Chairs Prof. Tony CONSTANTINIDES (Imperial) Prof. Sun-Yuan KUNG (Princeton) Vice-Chair Dr Bill Fitzgerald (Cambridge) Finance Chair Dr Christophe Molina (Anglia) Proceedings Chair Elizabeth J. Wilson (Raytheon Co.)
Publicity Chairs Dr Aalbert De Vries (Sarnoff) Dr Jonathan Chambers (Imperial) Program Chair Dr Mahesan Niranjan (Cambridge) Program Committee Tulay ADALI Andrew BACK Jean-Francois CARDOSO Bert DEVRIES Lee GILES Federico GIROSI Yu Hen HU Jenq-Neng HWANG Jan LARSEN Yann LECUN David LOWE Christophe MOLINA Visakan KADIRKAMANATHAN Shigeru KATAGIRI Gary KUHN Elias MANOLAKOS Mahesan NIRANJAN Dragan OBRADOVIC Erkki OJA Kuldip PALIWAL Lionel TARASSENKO Volker TRESP Marc VAN HULLE Andreas WEIGEND Papers describing original research are solicited in the areas described below. All submitted papers will be reviewed by members of the Programme Committee. 6. Technical Areas Paradigms artificial neural networks, Markov models, graphical models, dynamical systems, nonlinear signal processing, and wavelets Application areas speech processing, image processing, OCR, robotics, adaptive filtering, communications, sensors, system identification, issues related to RWC, and other general signal processing and pattern recognition Theories generalization, design algorithms, optimization, probabilistic inference, parameter estimation, and network architectures Implementations parallel and distributed implementation, hardware design, and other general implementation technologies 7. Schedule Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers and email address, if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Dr Mahesan Niranjan, Cambridge University Engineering Department, Cambridge CB2 1PZ, England, (Tel.) +44 1223 332720, (Fax.) +44 1223 332662, (e-mail) niranjan at eng.cam.ac.uk. More information relating to the workshop will be available at http://www-svr.eng.cam.ac.uk/nnsp98. Submissions to: Dr Mahesan Niranjan IEEE NNSP'98 Cambridge University Engineering Department Trumpington Street, Cambridge CB2 1PZ England ***** Important Dates ****** Submission of extended summary : February 26, 1998 Notification of acceptance : April 6, 1998 Submission of photo-ready accepted paper : May 3, 1998 Advanced registration, before : June 30, 1998 ============================================================== From marco at idsia.ch Thu Nov 27 04:58:58 1997 From: marco at idsia.ch (Marco Wiering) Date: Thu, 27 Nov 1997 10:58:58 +0100 Subject: Paper Announcement Message-ID: <199711270958.KAA13031@zucca.idsia.ch> HQ-LEARNING Adaptive Behavior 6:2, 1997 (in press) Marco Wiering Juergen Schmidhuber marco at idsia.ch juergen at idsia.ch IDSIA, Corso Elvezia 36, 6900 Lugano, Switzerland HQ-learning is a hierarchical extension of Q(lambda)-learning designed to solve certain types of partially observable Markov decision problems (POMDPs). HQ automatically decomposes POMDPs into sequences of simpler subtasks that can be solved by memoryless policies learnable by reactive subagents. HQ solves partially observable mazes with more states than used in most previous POMDP work. FTP-host: ftp.idsia.ch FTP-files: /pub/marco/HQ-LEARNING.ps.gz http://www.idsia.ch/~marco/publications.html http://www.idsia.ch/~juergen/onlinepub.html Marco & Juergen, IDSIA www.idsia.ch From gluck at pavlov.rutgers.edu Thu Nov 27 10:05:10 1997 From: gluck at pavlov.rutgers.edu (Mark A.
Gluck) Date: Thu, 27 Nov 1997 10:05:10 -0500 Subject: Graduate Study in Neural Computation at RUTGERS-NEWARK Message-ID: -------------------------------------------------------------- Graduate Study in Neural Computation at Rutgers-Newark -------------------------------------------------------------- Students interested in doing graduate study and research in COMPUTATIONAL NEUROSCIENCE and CONNECTIONIST MODELLING IN COGNITIVE SCIENCE should be aware of a growing strength at Rutgers University-Newark in these areas. There are eight relevant faculty and research scientists available for advising students: Ben Martin Bly, Gyorgy Buzsaki, Mike Casey, Mark Gluck, Stephen Hanson, Catherine Myers, Michael Recce, and Ralph Siegel. Further information on their individual research interests and background is listed below. The Rutgers-Newark campus has a special strength in the use of these models as research tools when integrated with empirical studies of brain and behavior. Information on the relevant graduate programs and sources of further information are listed at the end of this email. Research Interests of Key Faculty and Research Scientists ---------------------------------------------------------------------------- BENJAMIN MARTIN BLY Ph.D., Stanford, Cognitive Psychology, 1993 Email: ben at psychology.rutgers.edu Web Page: http://psychology.rutgers.edu/~ben Research Interests: I want to understand how functional organization in the brain supports complex human behavior and cognition, in particular language use and the mental representation of conceptual information. To study the cerebral basis of these phenomena, I use cognitive psychological methods to investigate overt behavior that depends on language comprehension or production, concept formation, or inductive reasoning, and I use Magnetic Resonance Imaging (MRI) to measure behavior-dependent changes in brain function. Using mathematical modeling methods, I explore the consequences of these empirical results for theories of language function and conceptual representation. The primary goal of this research is to understand the physical basis of language and conceptual knowledge. Such understanding has broad consequences both for the scientific explanation of intelligent behavior and for the understanding and treatment of brain injuries that affect language and cognition. Selected Publications: Martin B. (1994). The Schema. In "Complexity: metaphors, models, and reality." Cowan G.A., Pines D., Meltzer D. (eds), Reading MA, Addison-Wesley, p. 263-286 Schlaug G., Martin B., Thangaraj V., Edelman R.R, Warach S.J. (1996). Functional anatomy of pitch perception and pitch memory in non-musicians and musicians. NeuroImage 3(3):S318. Siewert B., Bly B.M., Schlaug G., Thangaraj V., Warach S.J., Edelman R.R. (1996). Comparing the BOLD and EPISTAR techniques for functional brain imaging using Signal Detection Theory. The Journal of Magnetic Resonance in Medicine 36:249-255. Bly B.M., Kosslyn S.M. (1997). Functional anatomy of object recognition in humans: evidence from PET and FMRI. Current Opinion in Neurology 10, 1:5-9. ------------------------------------------------------------------------ GYORGY BUZSAKI M.D., University of Pecs, Hungary, 1974 Email: buzsaki at axon.rutgers.edu Web page: http://osiris.rutgers.edu/buzsaki.html Research Interests: Neurobiology of learning and memory. Experimental approaches are two-fold.
The first is the study of axonal connectivity of hippocampal principal cells and interneurons, characterized physiologically and filled in vivo, with the explicit goal of a complete reconstruction of the true connectivity of the hippocampus, to serve as building blocks for computational models. A complementary approach uses large scale recordings of neurons with silicon probes to reveal cooperative, emergent properties of neuronal assemblies during behavior. Computational methods are used to understand the complex interactions of neurons in real networks and to model oscillatory properties of interneuronal networks. Selected Publications: Buzsaki, G. A two-stage model of memory trace formation: A role for "noisy" brain states. Neuroscience 31: 551-570, 1989. Buzsaki, G. The hippocampo-neocortical dialogue. Cerebral Cortex 6: 81-92, 1996. Buzsaki, G., Horvath, Z., Urioste, R., Hetke, J., Wise, K. High frequency network oscillation in the hippocampus. Science 256: 1025-1027, 1992. Jando, G., Siegel, R. M., Horvath, Z., and Buzsaki, G. Pattern recognition of the electroencephalogram by artificial neural networks. Electroencephalography and clinical Neurophysiology 86: 100-109, 1993. Sik, A., Ylinen, A., Penttonen, M., and Buzsaki, G. Inhibitory CA1-CA3-hilar region feedback in the hippocampus. Science 264: 1722-1724, 1994. Traub, R. D., Miles, R., and Buzsaki, G. Computer simulation of carbachol-driven rhythmic population oscillations in the CA3 region of the in vitro rat hippocampus. Journal of Physiology 451: 653-672, 1992. ------------------------------------------------------------------------ MIKE CASEY Ph.D., UC San Diego, Mathematics, 1995 Email: mcasey at psychology.rutgers.edu Web Page: http://psychology.rutgers.edu/~mcasey Research Interests: My research interests are in the mathematical foundations of cognitive science, and abstract physical models of intelligent behavior. This interest is pursued through the study of recurrent neural networks and other dynamical models of cognitive and other neural processes. My current research is focussed on dynamical models of abstract knowledge acquisition and use. Selected Publications: Casey, M. (1996) "The Dynamics of Discrete-Time Computation, With Application to Recurrent Neural Networks and Finite State Machine Extraction," Neural Computation, 8:6, 1135-1178. ------------------------------------------------------------------------ MARK GLUCK Ph.D., Stanford University, Cognitive Psychology, 1987 Email: gluck at pavlov.rutgers.edu Web Page: www.gluck.edu Research Interests: Neurobiology of learning and memory, with emphasis on the role of the hippocampus in associative learning. Computational models of conditioning and human learning. Experimental studies include behavioral neuroscience studies of rabbit eyeblink conditioning under various lesion and drug manipulations. Human experimental research includes studies of conditioning, associative learning, and categorization in normals, aged, and medial temporal lobe amnesics. Applied work in neural networks for pattern classification and novelty detection for mechanical fault diagnosis. Selected Publications: Gluck, M.A. & Myers, C. E. (1997). Psychobiological models of hippocampal function in learning and memory. Annual Review of Psychology, 48, 481-514. Gluck, M. A., Ermita, B. R., Oliver, L. M., & Myers, C. E. (1996). Extending models of hippocampal function in animal conditioning to human amnesia. Memory. Knowlton, B. J., Squire, L. R., & Gluck, M. A. (1994).
Probabilistic category learning in amnesia. Learning and Memory, 1, 106-120. ------------------------------------------------------------------------ STEPHEN JOSE HANSON Ph.D., Arizona State University, Experimental and Mathematical Psychology, 1981 Email: jose at psychology.rutgers.edu, Web Page: http://www-psych.rutgers.edu Research Interests: I am examining the following general aspects of connectionist learning systems as they relate to human/animal cognition and learning. (1) Learnability Theory: the effects of representation on learning, prior knowledge on learning (trade-off), sample protocol on learning, and the presence of noise and errors on learning. (2) Studies of Generalization: the effects of sample size, ``pedagogy'' (ordering or organizing the training samples), analyses of classification or categorization complexity, and the ability of the learning system to correctly generalize. (3) Studies of ``scaling'' and complexity in task structure. Scaling involves ``realistic'' tasks that possess significant complexity. Scaling up the task may require increasing the dimensionality of the task (e.g. inverse dynamics with realistic degrees of freedom, 10 or 12), increasing task interactions (linguistic or language constraints arising from syntax, semantics, discourse etc.), increasing task memory requirements (as in grammar induction), or decreasing the supervision of the algorithm, as in reinforcement learning. (4) Studies of algorithms inspired by biophysical properties. Somehow the brain controls the degrees of freedom in its representation language and is able to induce complex ``rules'' for its conduct. What trick does it use? Are there simple principles that relate cell growth, death, noise, locality, parallelism, and network topology to seemingly more complex phenomena, like language use, problem solving, and reasoning? Selected Publications: Hanson, S. J. & Burr, D. J. (1990). What Connectionist Models Learn: Toward a theory of representation in Connectionist Networks, Behavioral and Brain Sciences, 13, 471-518. Hanson, S. J. (1990). A Stochastic Version of the Delta Rule, Physica D, 42, 265-272. Hanson, S. J. (1991). Behavioral Diversity, Search, and Stochastic Connectionist Systems, In Neural Network Models of Conditioning and Action, M. Commons, S. Grossberg & J. Staddon (Eds.), New Jersey: Erlbaum. Hanson, S. J., Petsche, T., Kearns, M. & Rivest, R. (1994). Computational Learning Theory and Natural Learning Systems. ------------------------------------------------------------------------ CATHERINE MYERS Ph.D., Imperial College, University of London, Artificial Neural Networks, 1990 Email: myers at pavlov.rutgers.edu Research Interests: 1. Computational Neuroscience: I am interested in building connectionist models of brain regions involved in learning and memory. These models are meant to capture functionality while also being consistent with known anatomical and physiological constraints. In particular, I am concerned with the role of the hippocampal region in associative learning, its interaction with the cerebellum, neocortex and amygdala, and its response to various pharmacological manipulations, particularly cholinergic drugs. 2. Experimental Neuropsychology: Anterograde amnesia is a syndrome which follows hippocampal-region damage via stroke, Alzheimer's dementia and other etiologies. I am interested in developing simple procedures to determine what kinds of learning and memory survive such damage, and whether this pattern matches the impairments seen in animal models.
One aim of this work is to develop discriminative/diagnostic procedures to differentiate amnesic etiologies, as well as to identify the locus of damage in patients for whom neuroimaging is contraindicated. 3. Experimental Psychology: I focus on underlying representational principles of learning and memory which may operate across many different paradigms and response systems, including classical conditioning, computer-based operant analogs of conditioning, and category learning. Selected Publications: Myers, C. & Gluck, M. (1994). Context, conditioning and hippocampal re-representation. Behavioral Neuroscience, 108(5), 835-847. Myers, C., Gluck, M. & Granger, R. (1995). Dissociation of hippocampal and entorhinal function in associative learning: A computational approach. Psychobiology, 23(2), 116-138. Myers, C., Ermita, B., Harris, K., Gluck, M. & Hasselmo, M. (1996). A computational model of the effects of septohippocampal disruption on classical eyeblink conditioning. Neurobiology of Learning and Memory, 66, 51-66. ------------------------------------------------------------------------ MICHAEL RECCE Ph.D., University College London, Neurophysiology, 1993 Email: recce at axon.rutgers.edu Research Interests: The spatial and memory function of the hippocampus and nearby brain structures. This is investigated using a wide range of methods, including neurophysiological recording from the hippocampus in freely moving rats and evaluation of the spatial abilities of human subjects using virtual reality. These and other data are then used to construct computational models, and the models are tested using computer simulation and on mobile robots. Selected Publications: Recce, M. and Harris, K.D. (1996) Memory for places: a navigational model in support of Marr's theory of hippocampal function. Hippocampus, vol 6: 735-748. Harris, K.D. and Recce, M. (1997) Absolute localization for a mobile robot using place cells. Robotics and Autonomous Systems 658, p 1-13. Hirase, H. and Recce, M. (1996) A search for the optimal thresholding sequence in an associative memory. Network, vol 7, pp 741-756. O'Keefe, J. and Recce, M. (1993) Phase relationship between hippocampal place units and EEG theta rhythm. Hippocampus, vol 3, pp 317-330. ------------------------------------------------------------------------ RALPH SIEGEL Ph.D., McGill University, Physiology, 1985 Email: axon at cortex.rutgers.edu Web Page: www.cmbn.rutgers.edu/cmbn/faculty/rsiegel.html Research Interests: We use a multidisciplinary approach to understand the physiology, psychophysics, theory and neurology underlying visual perception. The ultimate goal of this work is an understanding of the visual perceptual process and the application of this knowledge to assist persons who have suffered neurological damage. Selected Publications: Read, H.L. and Siegel, R.M. Modulation of Responses to Optic Flow in Area 7a by Retinotopic and Oculomotor Cues in Monkey. Cerebral Cortex, in press, 1997. Jando, G., Siegel, R.M., Horvath, Z. and Buzsaki, G., Pattern recognition of the electroencephalogram by artificial neural networks, Electroenceph. Clin. Neurophysiol. 86: 100-109 (1993). Siegel, R.M., Tresser, C. and Zettler, G., A coding problem in dynamics and number theory. Chaos 2: 473-494 (1992).
=================================================================== INFORMATION ON GRADUATE PROGRAMS: There are two graduate programs appropriate for the study of neural computation and connectionist modelling, depending on whether the students are more oriented towards brain (Neuroscience) or behavior (Psychology/Cognitive Science). Regardless of which graduate program students choose, they are free to take classes from, and do research with, faculty from both programs. There is an extensive computational infrastructure to support computational students in both programs, supported by a recent University strategic initiative in computational neuroscience. NEUROSCIENCE OPTION: Students whose interests are oriented towards neuroscience, including the study of basic molecular, cellular, systems, behavioral, and cognitive neuroscience, should apply to the BEHAVIORAL AND NEURAL SCIENCES graduate program at Rutgers-Newark. For more info, see the web page: www.bns.rutgers.edu For admissions applications, email: bns at cortex.rutgers.edu PSYCHOLOGY/COGNITIVE SCIENCE OPTION: Students whose interests are oriented towards behavior, including cognitive psychology, cognitive science, cognitive neuroscience, animal behavior, linguistics, and philosophy, should apply to the PSYCHOLOGY/COGNITIVE SCIENCE program at Rutgers-Newark. For more info, see the web page: www-psych.rutgers.edu For admissions applications, email: cogsci at psychology.rutgers.edu =================================================================== From N.Sharkey at dcs.shef.ac.uk Sat Nov 29 10:03:51 1997 From: N.Sharkey at dcs.shef.ac.uk (Noel Sharkey) Date: Sat, 29 Nov 1997 15:03:51 GMT Subject: Biologically inspired robotics - CALL FOR PARTICIPATION Message-ID: <199711291503.AA10850@gw.dcs.shef.ac.uk> Sorry if you receive this more than once **** SELF-LEARNING ROBOTS II: BIO-ROBOTICS **** An Institution of Electrical Engineers (IEE) Seminar Savoy Place, London: February 12th, 1998. Co-sponsors: Royal Institute of Navigation (RIN) Biotechnology and Biological Sciences Research Council (BBSRC) British Computer Society (BCS) Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB) Biologically inspired robotics, or bio-robotics, is an exciting trend in the integration of engineering and the life sciences. Although this has a long history dating back to the turn of the century, it is only within the last few years that it has picked up momentum, as many have realised that life is still the best model we have for intelligent behavior. This cross-fertilisation is beginning to bear fruit in robotics within specialist areas such as evolutionary methods, artificial life, neural computing, and navigation. It is now time to bring these threads together and ask the life scientists to assess the developments and also to discuss how and what the life sciences could learn from robotics. This one-day seminar aims to bring together some of Europe's leading researchers within the areas of animal and robot behavior to discuss the foundations and future directions of biologically inspired robotics. Each Speaker will be followed by a Discussant who will follow up on some of the issues raised in the paper and make general points about the field. 9.30-10.30 EMBODIED COGNITION Francisco Varela (Speaker) Biologist and Neuroscientist, France. Stevan Harnad (Discussant) Psychologist, UK. 10.30-10.45 COFFEE 10.45-11.45 EVOLUTIONARY LEARNING Stefano Nolfi (Speaker) Roboticist and Psychologist, Italy.
Richard Dawkins (Discussant) Evolutionary Zoologist, UK. 11.45-12.45 CONDITIONED LEARNING. Marco Dorigo (Speaker) Computer Scientist and Roboticist, Belgium. Tony Savage (Discussant) Animal Psychologist, N. Ireland. 12.45-2.00 LUNCH 2.00-3.00 NAVIGATION: THE INSECT MODEL Dimitrios Lambrinos (Speaker) Computer Scientist and Roboticist, Switzerland. Tom Collett (Discussant) Neurobiologist, UK. 3.00-4.00 NAVIGATION: THE MAMMALIAN MODEL Neil Burgess (Speaker) Neuroscientist, UK. Ariane Etienne (Discussant) Ethologist, Switzerland. 4.00-4.15 TEA PANEL: THE FUTURE OF BIO-ROBOTICS 4.15-5.45 Introduced and Chaired by Jean-Arcady Meyer, Computer Scientist and Ethologist, France. ORGANISERS Noel Sharkey, Computer Scientist, Psychologist, and Roboticist, University of Sheffield, UK. Tom Ziemke, Computer Scientist and Roboticist, Universities of Sheffield, UK and Skovde, Sweden. REGISTRATION. It would be advisable to register as early as possible since places will be limited. Please contact Jon Maddison jmaddison at iee.org.uk From smola at first.gmd.de Sat Nov 29 13:02:07 1997 From: smola at first.gmd.de (Alex Smola) Date: Sat, 29 Nov 1997 19:02:07 +0100 Subject: Homepage on Support Vectors and related topics References: <199711272012.VAA02453@viola.first.gmd.de> Message-ID: <3480589F.9A20D17B@first.gmd.de> Dear Connectionists, we would like to announce the availability of a webpage on Support Vectors and related topics at GMD FIRST, Berlin. It contains material on the upcoming NIPS SV workshop (including schedule and abstracts), as well as downloadable papers, links to people working on Support Vectors, related webpages and links to software and databases. The URL is: http://svm.first.gmd.de The goal of this webpage is to serve as a central switchboard and source of information about Support Vector machines and related topics. Researchers in this field are encouraged to contribute information (urls of papers, etc.) to this website. Alex Smola mailto:smola at first.gmd.de Bernhard Schoelkopf mailto:bs at first.gmd.de From wahba at stat.wisc.edu Sat Nov 29 23:12:56 1997 From: wahba at stat.wisc.edu (Grace Wahba) Date: Sat, 29 Nov 1997 22:12:56 -0600 (CST) Subject: NewTR: SVM's-RKHS-GACV Message-ID: <199711300412.WAA05864@hera.stat.wisc.edu> `Support Vector Machines, Reproducing Kernel Hilbert Spaces and the Randomized GACV', University of Wisconsin-Madison Statistics Department TR 984, Nov 1997 by Grace Wahba available at URL ftp://ftp.stat.wisc.edu/pub/wahba/nips97.ps.gz or via my home page http://www.stat.wisc.edu/~wahba -> TRLIST ...........Abstract............................ This report is intended as background material for a talk to be presented in the NIPS 97 Workshop on Support Vector Machines (SVM's). It consists of three parts: (1) A brief review of some old but relevant results on constrained optimization in Reproducing Kernel Hilbert Spaces (RKHS); and a review of the relationship between zero-mean Gaussian processes and RKHS. Application of tensor sums and products of RKHS, including smoothing spline ANOVA spaces, in the context of SVM's is also described. (2) Discussion of the relationship between penalized likelihood methods in RKHS for Bernoulli data when the goal is risk factor estimation, and SVM methods in RKHS when the goal is classification.
When the goal is classification, it is noted that replacing the likelihood functional of the logit [log odds ratio] with an appropriate SVM functional is a natural method for concentrating computational effort on estimating the logit near the classification boundary and ignoring data far away. Remarks concerning the potential of SVM's for variable selection as an efficient preprocessor for risk factor estimation are made. (3) A discussion of how the GACV for choosing smoothing parameters proposed in Xiang and Wahba (1996, 1997) may be implemented in the context of convex SVM's.
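To make the contrast in point (2) concrete, here is a minimal Python sketch. It is not taken from the report: the toy data, the kernel width gamma, and the smoothing parameter lam are all invented for illustration. It fits the same RKHS expansion f(x) = sum_i c_i K(x_i, x) twice, once under the penalized log-likelihood of the logit and once under the SVM (hinge) functional:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))                      # toy inputs (invented)
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=40))  # labels in {-1, +1}

def rbf_gram(X, gamma=1.0):
    # Gram matrix K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_gram(X)
lam = 0.1   # smoothing (regularization) parameter, arbitrary here

def fit(loss_grad, steps=2000, lr=0.01):
    # minimize (1/n) sum_i L(y_i f(x_i)) + lam * c'Kc  with  f = Kc
    c = np.zeros(len(y))
    for _ in range(steps):
        f = K @ c
        g = K @ (loss_grad(y * f) * y) / len(y) + 2 * lam * (K @ c)
        c -= lr * g
    return c

# penalized likelihood of the logit: L(t) = log(1 + exp(-t))
c_logit = fit(lambda t: -1.0 / (1.0 + np.exp(t)))
# SVM functional: hinge loss L(t) = max(0, 1 - t), subgradient used
c_svm = fit(lambda t: -(t < 1).astype(float))

Since the hinge loss is exactly zero for points with y*f(x) > 1, the second fit spends its effort on examples near the classification boundary and ignores data far away, which is the point made in the abstract.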
From rojas at inf.fu-berlin.de Sun Nov 2 18:07:00 1997 From: rojas at inf.fu-berlin.de (Raul Rojas) Date: Sun, 2 Nov 97 18:07 MET Subject: connectionist school Message-ID: Dear connectionists, IK-98 is a one-week spring school on neural networks, neuroscience, AI and cognition which will be held for the second time in Guenne (a small town near Dortmund), Germany, from March 7 to March 14, 1998. The general theme of next year's IK is "Language and Communication". Most of the courses will be held in German, some in English. The announcement below can be of interest for members of this mailing list living in German speaking countries or who are fluent German speakers. There is a home page for IK-98 with additional information regarding registration, cost, and abstracts of the courses: http://www.tzi.informatik.uni-bremen.de/ik98 Raul Rojas Freie Universitaet Berlin %%%%%%%%%%%%%%%%%%%%%% ANNOUNCEMENT in GERMAN %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% >>> Interdisziplinaeres Kolleg 98 (IK-98) <<< >>> 7-14 Maerz 1998, Guenne am Moehnesee <<< >> http://www.tzi.informatik.uni-bremen.de/ik98 << Das Interdisziplinaere Kolleg (IK) ist eine intensive Fruehjahrsschule zum Generalthema "Intelligenz und Gehirn". Die Schirmwissenschaften des IK sind die Neurowissenschaft, die Kognitionswissenschaft, die Kuenstliche Intelligenz und die Neuroinformatik. Angesehene Dozenten aus diesen Disziplinen vermitteln Grundlagenkenntnisse, fuehren in methodische Vorgehensweisen ein und erlaeutern aktuelle Forschungsfragen. Ein abgestimmtes Spektrum von Grund-, Theorie- und Spezialkursen, sowie disziplinuebergreifenden Veranstaltungen teilweise mit praktischen Uebungen richtet sich an Studenten und Forscher aus dem akademischen und industriellen Bereich.
Veranstalter ist die Gesellschaft fuer Informatik (GI) mit Unterstuetzung von: FB 1 (KI) der GI, Fachgruppe 0.0.2 (NN) der GI, European Neural Network Society (ENNS) und German Chapter (GNNS), DFG-Graduiertenkolleg "Signalketten in lebenden Systemen", Neurowissenschaftliche Gesellschaft, GMD, Gesellschaft fuer Kognitionswissenschaft. ** Veranstaltungsort ** Das Tagungsheim ist die Familienbildungsstaette "Heinrich-Luebke-Haus" in Guenne (Sauerland). Dies Haus liegt abgeschieden am Moehnesee im Naturpark Arnsberger Wald. Die Teilnehmer sind im Tagungsheim untergebracht. All dies foerdert einen konzentrierten und geselligen Austausch zwischen den Teilnehmern. ** Schwerpunktthema ** Das IK-98 hat als besonderen Schwerpunkt das Thema "Sprache und Kommunikation", das in mehreren weiterfuehrenden Kursen von unterschiedlichen Disziplinen her beleuchtet wird. ** Kurse und Dozenten ** >>>>>> Grundkurse G1 Neurobiologie (Gerhard Roth) G2 Kuenstliche Neuronale Netze (Guenther Palm) G3 Einfuehrung in die KI (Ipke Wachsmuth) G4 Kognitive Systeme - Eine Einfuehrung in die Kognitionswissenschaft (Gerhard Strube) >>>>>> Theoriekurse T1 Das komplexe reale Neuron (Helmut Schwegler) T2 Connectionist Speech Recognition (Herve Bourlard) T3 Perception of Temporal Structures - Especially in Speech (Robert F. Port) T4 Sprachstruktur - Hirnarchitektur; Sprachverarbeitung - Hirnprozesse (Helmut Schnelle) T5 Optimierungsstrategien fuer neuronale Lernverfahren (Helge Ritter) >>>>>> Spezialkurse S1 Hybride konnektionistische und symbolische Ansaetze zur Verarbeitung natuerlicher Sprache (Stefan Wermter) S2 Intelligente Agenten fuer Multimedia-Schnittstellen (Wolfgang Wahlster, Elisabeth Andre) S3 Wie hoert das Gehirn? Neurobiologie des Hoersystems (Guenter Ehret) S4 Sprachproduktion (Thomas Pechmann) >>>>>> Disziplinuebergreifende Kurse D1 Fuzzy und Neurosysteme (Rudolf Kruse) D2 Zeitliche Kognition (Ernst Poeppel) D3 The origins and evolution of language and meaning (Luc Steels) D4 Kontrolle von Bewegung in biologischen Systemen und Navigation mobiler Roboter (Josef Schmitz, Thomas Christaller) D5 Optimieren neuronaler Netze durch Lernen und Evolution (Heinz Braun) D6 Koordination von Sprache und Handlung (Wolfgang Heydrich, Hannes Rieser) D7 Dynamik spikender Neurone und zeitliche Codierung (Andreas Herz) ** Abendprogramm ** In visionaeren, feurigen und/oder kuehnen "after-dinner-talks" werden herausragende Forscher und Forscherinnen zu Kontroversen einladen. ** Kursunterlagen ** Zu allen Kursen wird es schriftliche Dokumentationen geben, welche als Sammelband allen Teilnehmern ausgehaendigt werden. ** Wissenschaftlicher Beirat ** Wolfgang Banzhaf, Wilfried Brauer, Armin B. Cremers, Christian Freksa, Otthein Herzog, Wolfgang Hoeppner, Hanspeter Mallot, Thomas Metzinger, Heiko Neumann, Hermann Ney, Guenther Palm, Ernst Poeppel, Wolfgang Prinz, Burghard Rieger, Helge Ritter, Claus Rollinger, Werner von Seelen, Hans Spada, Gerhard Strube, Helmut Schwegler, Ipke Wachsmuth, Wolfgang Wahlster. ** Organisationskomitee ** Thomas Christaller, Bernhard Froetschl, Christopher Habel, Herbert Jaeger, Anthony Jameson, Frank Pasemann, Bjoern-Olaf Peters, Annegret Pfoh, Raul Rojas (Gesamtleitung), Gerhard Roth, Kerstin Schill, Werner Tack.
** Tagungsbuero und Anmeldung ** Christine Harms, c/o GMD, Schloss Birlinghoven, D-53754 Sankt Augustin, Telefon 02241-14-2473, Fax 02241-14-2472, email christine.harms at gmd.de ** Weitere Informationen ** Detaillierte Infos zum Hintergrund und dem Tagungsprogramm des IK-98 sind auf dessen Internet-homepage http://www.tzi.uni-bremen.de/ik98/ abrufbar. From rsun at cs.ua.edu Tue Nov 4 08:58:46 1997 From: rsun at cs.ua.edu (Ron Sun) Date: Tue, 4 Nov 1997 07:58:46 -0600 Subject: book on hybrid models Message-ID: <199711041358.HAA22661@sun.cs.ua.edu> Book announcement: Title: ** CONNECTIONIST-SYMBOLIC INTEGRATION ** edited by Ron Sun and F. Alexandre (information for ordering is at the end of this description) This book is concerned with the development, analysis, and application of hybrid connectionist-symbolic models in artificial intelligence and cognitive science, drawing contributions from an international group of leading experts. It describes and compares a variety of models in this area. Thus, it serves as a well-balanced report on the state of the art in this area. This book is the outgrowth of the IJCAI Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches, which was held for two days during August 19-20 in Montreal, Canada, in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI'95). TABLE OF CONTENTS ----------------------- 1. An Introduction to Connectionist Symbolic Integration R. Sun Part 1: Reviews and Overviews 2. An overview of strategies for neurosymbolic integration M. Hilario 3. Task structure and computational level: architectural issues in symbolic-connectionist integration R. Khosla and T. Dillon 4. Cognitive aspects of neurosymbolic integration Y. Lallement and F. Alexandre 5. A first approach to a taxonomy of fuzzy-neural systems L. Magdalena Part 2: Learning in Multi-Module Systems 6. A hybrid learning model of abductive reasoning T. Johnson and J. Zhang 7. A hybrid learning model of reactive sequential decision making R. Sun and T. Peterson 8. A preprocessing model for integrating CBR and prototype-based neural networks M. Malek and B. Amy 9. A neurosymbolic system with 3 levels B. Orsier and A. Labbi 10. A distributed platform for symbolic-connectionist integration J. C. Gonzalez, J. R. Velasco, C. A. Iglesias Part 3: Representing Symbolic Knowledge 11. Micro-level hybridization in DUAL B. Kokinov 12. An integrated symbolic/connectionist model of parsing S. Stevenson 13. A hybrid system framework for disambiguating word senses X. Wu, M. McTear, P. Ojha, H. Dai 14. A localist network architecture for logical inference N. Park and D. Robertson 15. Distributed associative memory J. Austin 16. Symbolic neural networks derived from stochastic grammar domain models E. Mjolsness Part 4: Learning Distributed Representation 17. Holographic reduced representation T. Plate 18. Distributed representations for terms in hybrid reasoning systems A. Sperduti, A. Starita, C. Goller 19. Learning distributed representation R. Krosley and M. Misra Conclusion 20. Conclusion F. Alexandre --------------------------- To ORDER the book, call 1-800-9-books-9 201-236-9500 FAX: 201-236-0072 email: orders at erlbaum.com ISBN 0-8058-2348-4 (hard cover) 0-8058-2349-2 (paper) --------------------------- For a closely related book, "Computational Architectures Integrating Symbolic and Connectionist Processing" (edited by Ron Sun and Larry Bookman, published by Kluwer Academic Publishers), see my Web page for information.
(This previous book also contains an extensive, annotated bibliography on hybrid neural network models.) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Dr. Ron Sun http://cs.ua.edu/faculty/sun/sun.html 101 Houser Hall ftp://ftp.cs.ua.edu/pub/tech-reports/ Department of Computer Science and Department of Psychology phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - From thrun+ at heaven.learning.cs.cmu.edu Tue Nov 4 21:59:46 1997 From: thrun+ at heaven.learning.cs.cmu.edu (thrun+@heaven.learning.cs.cmu.edu) Date: Tue, 4 Nov 97 21:59:46 EST Subject: Conference on Automated Learning and Discovery Message-ID: CONFERENCE ON AUTOMATED LEARNING AND DISCOVERY June 11-13, 1998 at Carnegie Mellon University, Pittsburgh, PA The Conference on Automated Learning and Discovery will bring together leading researchers from various scientific disciplines concerned with learning from data. It will cover scientific research at the intersection of statistics, computer science, artificial intelligence, databases, social sciences and language technologies. The goal of this meeting is to explore new, unified research directions in this cross-disciplinary field. The conference features eight one-day cross-disciplinary workshops, interleaved with seven invited plenary talks by renowned statisticians, computer scientists, and cognitive scientists. The workshops will address issues such as: what is the state of the art, what can we do and what is missing? what are promising research directions? what are the most promising opportunities for cross-disciplinary research? ___Plenary speakers________________________________________________ * Tom Dietterich * Stuart Geman * David Heckerman * Michael Jordan * Daryl Pregibon * Herb Simon * Robert Tibshirani ___Workshops_______________________________________________________ * Visual Methods for the Study of Massive Data Sets organized by Bill Eddy and Steve Eick * Learning Causal Bayesian Networks organized by Richard Scheines and Larry Wasserman * Discovery in Natural and Social Science organized by Raul Valdes-Perez * Mixed-Media Databases organized by Shumeet Baluja, Christos Faloutsos, Alex Hauptmann and Michael Witbrock * Learning from Text and the Web organized by Jaime Carbonell, Steve Fienberg, Tom Mitchell and Yi-Ming Yang * Robot Exploration and Learning organized by Howie Choset, Maja Mataric and Sebastian Thrun * Machine Learning and Reinforcement Learning for Manufacturing organized by Sridhar Mahadevan and Andrew Moore * Large-Scale Consumer Databases organized by Mike Meyer, Teddy Seidenfeld and Kannan Srinivasan ___Deadline_for_paper_submissions__________________________________ * February 15, 1998 ___More_information________________________________________________ * Web: http://www.cs.cmu.edu/~conald * E-mail: conald at cs.cmu.edu For submission instructions, consult our Web page or contact the organizers of the specific workshop. A limited number of travel stipends will be available. The conference will be sponsored by CMU's newly created Center for Automated Learning and Discovery. From dana at cs.rochester.edu Wed Nov 5 15:24:07 1997 From: dana at cs.rochester.edu (dana@cs.rochester.edu) Date: Wed, 5 Nov 1997 15:24:07 -0500 Subject: New Book from MIT Press Message-ID: <199711052024.PAA16698@gazelle.cs.rochester.edu> **********************NEW BOOK FROM MIT PRESS ************************* Dana H. 
Ballard An Introduction to Natural Computation It is now clear that the brain is unlikely to be understood without recourse to computational theories. The theme of An Introduction to Natural Computation is that ideas from diverse areas such as neuroscience, information theory, and optimization theory have recently been extended in ways that make them useful for describing the brain's program. The book provides a comprehensive introduction to the computational material that will form the underpinnings of the ultimate set of brain models. It stresses the broad spectrum of learning models--ranging from neural network learning through reinforcement learning to genetic learning--and situates the various models in their appropriate neural context. Writing about models of the brain before the brain is fully understood is a delicate matter. At one extreme are very detailed models of the neural circuitry. Such models are in danger of losing track of the task the brain is trying to solve. At the other extreme are very abstract models representing cognitive constructs that can be readily tested. Such models can be so abstract that they lose all relationship to neurobiology. To avoid both dangers, An Introduction to Natural Computation takes the middle ground and stresses the computational task while staying near the neurobiology. The material is accessible to advanced undergraduates as well as beginning graduate students. Dana Ballard is a Professor in the Department of Computer Science and Brain and Cognitive Sciences at the University of Rochester. For more information see: http://mitpress.mit.edu/book-home.tcl?isbn=0262024209 and http://www.cs.rochester.edu:80/users/faculty/dana/ From Reimar.Hofmann at mchp.siemens.de Thu Nov 6 03:38:32 1997 From: Reimar.Hofmann at mchp.siemens.de (Reimar Hofmann) Date: Thu, 06 Nov 1997 09:38:32 +0100 Subject: Nonlinear Markov Networks Message-ID: <34618208.8D32271C@mchp.siemens.de> *** The following NIPS*97 preprint is available *** Nonlinear Markov Networks for Continuous Variables Reimar Hofmann and Volker Tresp SIEMENS AG, Corporate Technology Abstract In this paper we address the problem of learning the structure in nonlinear Markov networks with continuous variables. Markov networks are well suited to model relationships which do not exhibit a natural causal ordering. We use neural network structures to model the quantitative relationships between variables. Using a financial and a social data set we show that interesting structures can be found using our approach. Available by ftp from: ftp://flop.informatik.tu-muenchen.de/pub/hofmannr/nips97prerl.ps.gz or from my homepage http://wwwbrauer.informatik.tu-muenchen.de/~hofmannr Also of interest might be our NIPS*95 paper which addresses the corresponding problem for Bayesian networks: Discovering Structure in Continuous Variables Using Bayesian Networks Reimar Hofmann and Volker Tresp SIEMENS AG, Corporate Technology Abstract We study Bayesian networks for continuous variables using nonlinear conditional density estimators. We demonstrate that useful structures can be extracted from a data set in a self-organized way and we present sampling techniques for belief update based on Markov blanket conditional density models.
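As a small invented illustration of the structure-search idea shared by the two abstracts above: one can score a candidate neighbor set for each variable by the held-out log-likelihood of a nonlinear Gaussian conditional model. The Python sketch below is not the authors' algorithm (they use multilayer networks and their own scoring); it uses fixed RBF features so the fit reduces to a linear solve, and the data are made up so that x2 depends nonlinearly on x1 while x3 is unrelated noise:

import numpy as np

rng = np.random.default_rng(1)
n = 400
x1 = rng.normal(size=n)
x2 = np.sin(2 * x1) + 0.1 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
train, test = X[:300], X[300:]

def rbf_features(Z, centers, width=1.0):
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def cond_score(target, parents):
    # held-out log-likelihood of a Gaussian conditional model whose
    # mean is a nonlinear (RBF-feature) function of the parent variables
    if not parents:
        mu = train[:, target].mean()
        var = train[:, target].var()
        r = test[:, target] - mu
    else:
        Zt, Ze = train[:, parents], test[:, parents]
        centers = Zt[rng.choice(len(Zt), 20, replace=False)]
        Pt, Pe = rbf_features(Zt, centers), rbf_features(Ze, centers)
        w = np.linalg.solve(Pt.T @ Pt + 1e-3 * np.eye(20),
                            Pt.T @ train[:, target])
        var = np.var(train[:, target] - Pt @ w)
        r = test[:, target] - Pe @ w
    return -0.5 * np.mean(r ** 2) / var - 0.5 * np.log(2 * np.pi * var)

print(cond_score(1, []), cond_score(1, [0]))  # adding x1 should help x2
print(cond_score(2, []), cond_score(2, [0]))  # adding x1 should not help x3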
From sontag at control.rutgers.edu Thu Nov 6 19:33:43 1997 From: sontag at control.rutgers.edu (Eduardo Sontag) Date: Thu, 6 Nov 1997 19:33:43 -0500 Subject: TR available - Noisy nets cannot recognize regular languages Message-ID: <199711070033.TAA26247@control.rutgers.edu> TR available: ANALOG NEURAL NETS WITH GAUSSIAN OR OTHER COMMON NOISE DISTRIBUTIONS CANNOT RECOGNIZE ARBITRARY REGULAR LANGUAGES Wolfgang Maass, Graz, Austria Eduardo D. Sontag, Rutgers, USA ABSTRACT We consider recurrent analog neural nets where the output of each gate is subject to Gaussian noise, or any other common noise distribution that is nonzero on a large set. We show that many regular languages cannot be recognized by networks of this type, and we give a precise characterization of those languages which can be recognized. This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand, we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type. The paper can be retrieved from http://www.math.rutgers.edu/~sontag (follow link to "online papers"). The file is a gzipped postscript file. If Web access is inconvenient, it is also possible to use anonymous FTP: ftp math.rutgers.edu login: anonymous cd pub/sontag bin get noisy-nets.ps.gz quit gunzip noisy-nets.ps.gz lpr noisy-nets.ps From bert at mbfys.kun.nl Fri Nov 7 03:56:36 1997 From: bert at mbfys.kun.nl (Bert Kappen) Date: Fri, 7 Nov 1997 09:56:36 +0100 Subject: paper available learning in BMs with linear response Message-ID: <199711070856.JAA02945@vitellius.mbfys.kun.nl> Dear Connectionists, The following article Title: Efficient learning in Boltzmann Machines using linear response theory Authors: H.J. Kappen and F.B. Rodriguez can now be downloaded from ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NC.ps.Z This article has been accepted for publication in the journal Neural Computation. Abstract: The learning process in Boltzmann Machines is computationally very expensive. The computational complexity of the exact algorithm is exponential in the number of neurons. We present a new approximate learning algorithm for Boltzmann Machines, which is based on mean field theory and the linear response theorem. The computational complexity of the algorithm is cubic in the number of neurons. In the absence of hidden units, we show how the weights can be directly computed from the fixed point equation of the learning rules. Thus, in this case we do not need to use a gradient descent procedure for the learning process. We show that the solutions of this method are close to the optimal solutions and give a significant improvement when correlations play an important role. Finally, we apply the method to a pattern completion task and show good performance for networks up to 100 neurons.
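The "direct computation" mentioned in the abstract for the fully visible case can be conveyed in a few lines. The following Python sketch is a paraphrase of the idea with invented data; forcing the self-couplings to zero is a crude simplification here, not the paper's exact treatment of the diagonal terms:

import numpy as np

rng = np.random.default_rng(2)
# invented +/-1 data for a fully visible (no hidden units) Boltzmann
# machine; units 0 and 1 are made to agree 90% of the time
S = np.where(rng.random((5000, 8)) < 0.5, -1.0, 1.0)
S[:, 1] = np.where(rng.random(5000) < 0.9, S[:, 0], -S[:, 0])

m = S.mean(axis=0)              # clamped means <s_i>
C = np.cov(S, rowvar=False)     # clamped covariances <s_i s_j> - m_i m_j
Cinv = np.linalg.inv(C)

# mean-field equations:  m_i = tanh(sum_j w_ij m_j + theta_i)
# linear response:       (C^-1)_ij = delta_ij / (1 - m_i^2) - w_ij
# so the weights follow in one step, with no gradient descent:
w = np.diag(1.0 / (1.0 - m ** 2)) - Cinv
np.fill_diagonal(w, 0.0)        # simplification: force zero self-coupling
theta = np.arctanh(m) - w @ m   # thresholds from the mean-field equations

print(w[0, 1])                  # a clearly positive coupling should emerge

The clamped statistics m and C are measured from the data once, and inverting the linear-response relation between C and w yields the weights directly, which is what makes the approach cubic rather than exponential in the number of neurons.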
Best Regards, Bert Kappen FTP INSTRUCTIONS unix% ftp ftp.mbfys.kun.nl Name: anonymous Password: (use your e-mail address) ftp> cd snn/pub/reports/ ftp> binary ftp> get Kappen.LR_NC.ps.Z ftp> bye unix% uncompress Kappen.LR_NC.ps.Z unix% lpr Kappen.LR_NC.ps From ericr at mech.gla.ac.uk Fri Nov 7 11:24:11 1997 From: ericr at mech.gla.ac.uk (Eric Ronco) Date: Fri, 7 Nov 1997 16:24:11 GMT Subject: No subject Message-ID: <519.199711071624@googie.mech.gla.ac.uk> From Sebastian_Thrun at heaven.learning.cs.cmu.edu Fri Nov 7 13:32:35 1997 From: Sebastian_Thrun at heaven.learning.cs.cmu.edu (Sebastian Thrun) Date: Fri, 07 Nov 1997 13:32:35 -0500 Subject: New book: Learning to learn Message-ID: L E A R N I N G T O L E A R N Sebastian Thrun and Lorien Pratt (eds.) Kluwer Academic Publishers Over the past three decades, research on machine learning and data mining has led to a wide variety of algorithms that induce general functions from examples. As machine learning is maturing, it has begun to make the successful transition from academic research to various practical applications. Generic techniques such as decision trees and artificial neural networks, for example, are now being used in various commercial and industrial applications. Learning to learn is an exciting new research direction within machine learning. Similar to traditional machine learning algorithms, the methods described in LEARNING TO LEARN induce general functions from experience. However, the book investigates algorithms that can change the way they generalize, i.e., practice the task of learning itself, and improve on it. To illustrate the utility of learning to learn, it is worthwhile to compare machine learning to human learning. Humans encounter a continual stream of learning tasks. They do not just learn concepts or motor skills, they also learn bias, i.e., they learn how to generalize. As a result, humans are often able to generalize correctly from extremely few examples---often just a single example suffices to teach us a new thing. A deeper understanding of computer programs that improve their ability to learn can have large practical impact on the field of machine learning and beyond. In recent years, the field has made significant progress towards a theory of learning to learn along with practical new algorithms, some of which led to impressive results in real-world applications. LEARNING TO LEARN provides a survey of some of the most exciting new research approaches, written by leading researchers in the field. Its objective is to investigate the utility and feasibility of computer programs that can learn how to learn, both from a practical and a theoretical point of view.
This book is organized into four parts: Part I: Overview articles, in which basic taxonomies and the cognitive foundations for algorithms that "learn to learn" are introduced and discussed, Chapter 1: Learning To Learn: Introduction and Overview Sebastian Thrun and Lorien Pratt Chapter 2: A Survey of Connectionist Network Reuse Through Transfer Lorien Pratt and Barbara Jennings Chapter 3: Transfer in Cognition Anthony Robins Part II: Prediction/Supervised Learning, in which specific algorithms are presented that exploit information in multiple learning tasks in the context of supervised learning, Chapter 4: Theoretical Models of Learning to Learn Jonathan Baxter Chapter 5: Multitask Learning Rich Caruana Chapter 6: Making a Low-Dimensional Representation Suitable for Diverse Tasks Nathan Intrator and Shimon Edelman Chapter 7: The Canonical Distortion Measure for Vector Quantization and Function Approximation Jonathan Baxter Chapter 8: Lifelong Learning Algorithms Sebastian Thrun Part III: Relatedness, in which the issue of "task relatedness" is investigated and algorithms are described that selectively transfer knowledge across learning tasks, and Chapter 9: The Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Daniel L. Silver and Robert E. Mercer Chapter 10: Clustering Learning Tasks and the Selective Cross-Task Transfer of Knowledge Sebastian Thrun and Joseph O'Sullivan Part IV: Control, in which algorithms specifically designed for learning mappings from percepts to actions are presented. Chapter 11: CHILD: A First Step Towards Continual Learning Mark B. Ring Chapter 12: Reinforcement Learning With Self-Modifying Policies Juergen Schmidhuber, Jieyu Zhao, Nicol N. Schraudolph Chapter 13: Creating Advice-Taking Reinforcement Learners Richard Maclin and Jude W. Shavlik All contributions went through a journal-style reviewing process and are of journal quality (in fact, many of them were previously published in Machine Learning or Connection Science). The material is suited for advanced graduate classes in machine learning. 362 pages. More information at http://www.cs.cmu.edu/~thrun/papers/thrun.book3.html Please post. From juuso at nucleus.hut.fi Fri Nov 7 12:57:00 1997 From: juuso at nucleus.hut.fi (Juha Vesanto) Date: Fri, 07 Nov 1997 19:57:00 +0200 Subject: [Software] SOM Toolbox v1.0beta, freely available Message-ID: <3463566C.3B73370A@nucleus.hut.fi> SOM Toolbox for Matlab 5 version 1.0 beta now available in the WWW! A GNU-licensed Matlab 5 toolbox for using self-organizing maps in data analysis is now available for free at URL http://www.cis.hut.fi/projects/somtoolbox/ The web page includes snapshots, the codes as a zip file and full documentation. If you are interested in practical data analysis and/or self-organizing maps and have Matlab 5 on your computer, be sure to check this out! The version is 1.0beta, so comments, suggestions and bug reports are welcome to the address: somtlbx at mail.cis.hut.fi Here are some SOM Toolbox features: + Modular programming style: the Toolbox code utilizes Matlab structures and the functions are constructed in a modular manner, which makes it convenient to tailor the code for each user's specific needs. (Note that you only need the basic Matlab 5 to run the SOM Toolbox - no other toolboxes are required.) + Batch and sequential training algorithms: in data analysis applications, the speed of training can be considerably improved by using the batch version. + Map dimension: maps may be N-dimensional.
+ Advanced graphics: based on Matlab's powerful graphics capabilities, illustrative figures can be easily produced. + Graphical user interface. + Compatibility with SOM_PAK: import/export functions for SOM_PAK codebook and data files are included in the package. + Basic preprocessing, labeling and validation tools. SOM Toolbox team http://www.cis.hut.fi/projects/somtoolbox/ Esa Alhoniemi, Johan Himberg, Kimmo Kiviluoto, Jukka Parviainen and Juha Vesanto -- IMHP Juha Vesanto juuso at mail.cis.hut.fi http://www.cis.hut.fi/~juuso -*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* That's Dreddy for ya. Bastich makes us criminals! -Mean Machine Angel From simon.schultz at psy.ox.ac.uk Sun Nov 9 11:14:24 1997 From: simon.schultz at psy.ox.ac.uk (Simon Schultz) Date: Sun, 09 Nov 1997 16:14:24 +0000 Subject: Paper available Message-ID: <3465E160.345B@psy.ox.ac.uk> Dear Connectionists, Preprints of the following paper are available via WWW. It has been accepted for publication in Physical Review E. STABILITY OF THE REPLICA SYMMETRIC SOLUTION FOR THE INFORMATION CONVEYED BY A NEURAL NETWORK S. Schultz(1) and A. Treves(2) (1) Department of Experimental Psychology, University of Oxford, UK. (2) Programme in Neuroscience, SISSA, Trieste, Italy. Abstract: The information that a pattern of firing in the output layer of a feedforward network of threshold-linear neurons conveys about the network's inputs is considered. A replica-symmetric solution is found to be stable for all but small amounts of noise. The region of instability depends on the contribution of the threshold and the sparseness: for distributed pattern distributions, the unstable region extends to higher noise variances than for very sparse distributions, for which it is almost nonexistent. 19 pages, 5 figures. http://www.mrc-bbc.ox.ac.uk/~schultz/rstab.ps.gz Sincerely, S. Schultz -- ----------------------------------------------------------------------- Simon Schultz Department of Experimental Psychology also: University of Oxford Corpus Christi College South Parks Rd., Oxford OX1 3UD Oxford OX1 4JF Phone: +44-1865-271419 Fax: +44-1865-310447 http://www.mrc-bbc.ox.ac.uk/~schultz/ ----------------------------------------------------------------------- From tibs at utstat.toronto.edu Mon Nov 10 11:53:00 1997 From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu) Date: Mon, 10 Nov 97 11:53 EST Subject: new paper on model selection Message-ID: The covariance inflation criterion for adaptive model selection Rob Tibshirani and Keith Knight Univ of Toronto We propose a new criterion for model selection in prediction problems. The covariance inflation criterion adjusts the training error by the average covariance of the predictions and responses, when the prediction rule is applied to permuted versions of the dataset. This criterion can be applied to general prediction problems (for example regression or classification), and to general prediction rules (for example stepwise regression, tree-based models and neural nets). As a byproduct we obtain a measure of the effective number of parameters used by an adaptive procedure. We relate the covariance inflation criterion to other model selection procedures and illustrate its use in some regression and classification problems. We also revisit the conditional bootstrap approach to model selection. Available at http://utstat.toronto.edu/tibs/research.html or ftp://utstat.toronto.edu/pub/tibs/cic.ps Comments welcome!
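The permutation idea in the abstract above is simple enough to sketch. The Python fragment below is schematic rather than a faithful implementation of the paper: the centering and the factor of 2 are chosen here by analogy with Mallows' Cp, and the "best single predictor" rule, the data, and all constants are invented for illustration:

import numpy as np

rng = np.random.default_rng(4)

def cic(X, y, fit_predict, n_perm=20):
    # training error of the adaptive rule fit on the real data
    err = np.mean((y - fit_predict(X, y)) ** 2)
    # average covariance between permuted responses and the predictions
    # the rule produces when it is fit to those permuted responses
    covs = []
    for _ in range(n_perm):
        yp = rng.permutation(y)
        fp = fit_predict(X, yp)
        covs.append(np.mean((yp - yp.mean()) * (fp - fp.mean())))
    return err + 2.0 * np.mean(covs)

# example adaptive rule: pick the single best predictor, then least squares
def best_single_predictor(X, y):
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    j = int(np.argmax(scores))
    b = np.polyfit(X[:, j], y, 1)
    return np.polyval(b, X[:, j])

X = rng.normal(size=(50, 10))
y = X[:, 0] + rng.normal(size=50)
print(cic(X, y, best_single_predictor))

The covariance term grows with how aggressively the rule can chase noise (here, by searching over ten candidate predictors), so it penalizes the training error in proportion to the effective number of parameters the adaptive procedure uses.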
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Rob Tibshirani, Dept of Preventive Med & Biostats, and Dept of Statistics Univ of Toronto, Toronto, Canada M5S 1A8. Phone: 416-978-4642 (PMB), 416-978-0673 (stats). FAX: 416 978-8299 computer fax 416-978-1525 (please call or email me to inform) tibs at utstat.toronto.edu. ftp: //utstat.toronto.edu/pub/tibs http://www.utstat.toronto.edu/~tibs +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ From mikkok at marconi.hut.fi Mon Nov 10 08:58:10 1997 From: mikkok at marconi.hut.fi (Mikko Kurimo) Date: Mon, 10 Nov 1997 15:58:10 +0200 Subject: Thesis available on using SOM and LVQ for HMMs Message-ID: <199711101358.PAA00240@marconi.hut.fi> The following Dr.Tech. thesis is available at http://www.cis.hut.fi/~mikkok/thesis/ (WWW home page) http://www.cis.hut.fi/~mikkok/thesis/book/ (output of latex2html) http://www.cis.hut.fi/~mikkok/intro.ps.gz (compressed postscript, 188K) http://www.cis.hut.fi/~mikkok/intro.ps (postscript, 57 pages, 712K) The articles that belong to the thesis can be accessed through the page http://www.cis.hut.fi/~mikkok/thesis/publications.html --------------------------------------------------------------- Using Self-Organizing Maps and Learning Vector Quantization for Mixture Density Hidden Markov Models Mikko Kurimo Helsinki University of Technology Neural Networks Research Centre P.O.Box 2200, FIN-02015 HUT, Finland Email: Mikko.Kurimo at hut.fi Abstract -------- This work presents experiments to recognize pattern sequences using hidden Markov models (HMMs). The pattern sequences in the experiments are computed from speech signals and the recognition task is to decode the corresponding phoneme sequences. The training of the HMMs of the phonemes using the collected speech samples is a difficult task because of the natural variation in the speech. Two neural computing paradigms, the Self-Organizing Map (SOM) and the Learning Vector Quantization (LVQ), are used in the experiments to improve the recognition performance of the models. An HMM consists of sequential states which are trained to model the feature changes in the signal produced during the modeled process. The output densities applied in this work are mixtures of Gaussian density functions. SOMs are applied to initialize and train the mixtures to give a smooth and faithful representation of the feature vector space defined by the corresponding training samples. The SOM maps similar feature vectors to nearby units, which is here exploited in experiments to improve the recognition speed of the system. LVQ provides simple but efficient stochastic learning algorithms to improve the classification accuracy in pattern recognition problems. Here, LVQ is applied to develop an iterative training method for mixture density HMMs, which increases both the modeling accuracy of the states and the discrimination between the models of different phonemes. Experiments are also made with LVQ-based corrective tuning methods for the mixture density HMMs, which aim at improving the models by learning from the observed recognition errors in the training samples. The suggested HMM training methods are tested using the Finnish speech database collected in the Neural Networks Research Centre at the Helsinki University of Technology. Statistically significant improvements compared to the best conventional HMM training methods are obtained using the speaker dependent but vocabulary independent phoneme models.
The decrease in the average number of phoneme recognition errors for the tested speakers has been around 10 percent in the applied test material. -- Email: Mikko.Kurimo at hut.fi Office: Helsinki University of Technology, Neural Networks Research Centre Mail: P.O.Box 2200, FIN-02015 HUT, Finland From lemm at LORENTZ.UNI-MUENSTER.DE Mon Nov 10 12:39:12 1997 From: lemm at LORENTZ.UNI-MUENSTER.DE (Joerg_Lemm) Date: Mon, 10 Nov 1997 18:39:12 +0100 (MET) Subject: TR available: Prior Information and Generalized Questions Message-ID: Dear colleagues, the following TR is now available: "Prior Information and Generalized Questions" MIT AI Memo No. 1598 (C.B.C.L. Paper No. 141) Joerg C. Lemm Abstract In learning problems available information is usually divided into two categories: examples of function values (or training data) and prior information (e.g. a smoothness constraint). This paper 1.) studies aspects on which these two categories usually differ, like their relevance for generalization and their role in the loss function, 2.) presents a unifying formalism, where both types of information are identified with answers to generalized questions, 3.) shows what kind of generalized information is necessary to enable learning, 4.) aims to put usual training data and prior information on a more equal footing by discussing possibilities and variants of measurement and control for generalized questions, including the examples of smoothness and symmetries, 5.) briefly reviews the measurement of linguistic concepts based on fuzzy priors, and principles to combine preprocessors, 6.) uses a Bayesian decision theoretic framework, contrasting parallel and inverse decision problems, 7.) proposes, for problems with non-approximation aspects, a Bayesian two-step approximation consisting of posterior maximization and a subsequent risk minimization, 8.) analyses empirical risk minimization under the aspect of nonlocal information, 9.) compares the Bayesian two-step approximation with empirical risk minimization, including their interpretations of Occam's razor, 10.) formulates examples of stationarity conditions for the maximum posterior approximation with nonlocal and nonconvex priors, leading to inhomogeneous nonlinear equations, similar for example to equations in scattering theory in physics. In summary, the paper emphasizes the need for empirical measurement and control of prior information and for their explicit treatment in theory. ---------------------------------------------------------------------- Comments are welcome! Download sites: ftp://publications.ai.mit.edu/ai-publications/1500-1999/AIM-1598.ps http://planck.uni-muenster.de:8080/~lemm/prior.ps.Z ======================================================================= Joerg Lemm Institute for Theoretical Physics I Wilhelm-Klemm-Str. 9 D-48149 Muenster, Germany Email: lemm at uni-muenster.de Home page: http://planck.uni-muenster.de:8080/~lemm/ ======================================================================= From dror at coglit.soton.ac.uk Mon Nov 10 09:45:14 1997 From: dror at coglit.soton.ac.uk (Itiel Dror) Date: Mon, 10 Nov 1997 14:45:14 +0000 (GMT) Subject: Call for Papers Message-ID: CALL FOR PAPERS Pragmatics & Cognition announces a special issue on FACIAL INFORMATION PROCESSING: A MULTIDISCIPLINARY PERSPECTIVE Guest Editors Itiel E. Dror and Sarah V. Stevenage In many senses, faces are at the center of human interaction. At a very basic level, faces indicate identity. However, faces are remarkably rich information carriers.
For example, facial gestures may be used as a means of conveying intentions. Faces may also permit a direct glimpse into the person's inner self (by unintentionally revealing, for example, aspects of character or mood). Given their salient role, the processing of the information conveyed by faces, and its integration with other sources of interactional information, raises important issues in cognition and pragmatics.

Research on facial information processing has investigated these (and other) issues utilizing a variety of approaches and methodologies, and developments in both computer and cognitive sciences have recently carried this research forward. The emerging picture is that there are cognitive subsystems which specialize in different aspects of facial processing. This has been supported by neuropsychological evidence suggesting that brain-damaged patients show dissociations between the different aspects of face processing. In addition, research on the development of facial processing abilities, and on aspects of the face itself which affect these processing abilities, has contributed to our understanding of how facial information is perceived.

This special issue of Pragmatics and Cognition is intended to provide a common forum for a variety of the topics currently under investigation. Given the breadth of issues and approaches used to investigate faces, we encourage submissions from a wide range of disciplines. Our aim is that this special issue will tie together the diverse strands of research on faces, and show their links and interdependencies.

Deadline for submission: August 1, 1998
Editorial decisions: November 1, 1998
Revised papers due: February 1, 1999
Expected publication: October 1999

Papers should be submitted according to the guidelines of the journal (see WWW URL: http://www.cogsci.soton.ac.uk/~dror/guideline.html). All submissions will be peer reviewed. Please send five copies of your submission either to:

Dr. Itiel Dror (dror at coglab.psy.soton.ac.uk)
or: Dr. Sarah Stevenage (svs1 at soton.ac.uk)
Dept. of Psychology
Southampton University
Highfield, Southampton SO17 1BJ
England

For additional and updated information see WWW URL: http://www.cogsci.soton.ac.uk/~dror/faces.html or contact either of the guest editors.

#======================================================================#
| Itiel E. Dror, Ph.D.        http://www.cogsci.soton.ac.uk/~dror/     |
| Department of Psychology    dror at coglab.psy.soton.ac.uk          |
| University of Southampton   Office 44 (0)1703 594519                 |
| Highfield, Southampton      Lab. 44 (0)1703 594518                   |
| England SO17 1BJ            Fax. 44 (0)1703 594597                   |
#======================================================================#

*******************************************************************************

From wsenn at iam.unibe.ch Mon Nov 10 10:17:29 1997
From: wsenn at iam.unibe.ch (Walter Senn)
Date: Mon, 10 Nov 1997 16:17:29 +0100
Subject: Depressing synapses detect neural synchrony
Message-ID: <9711101517.AA22688 at barney.unibe.ch>

New paper available (to appear in Neural Computation):

READING NEURONAL SYNCHRONY WITH DEPRESSING SYNAPSES

Walter Senn, Idan Segev, Misha Tsodyks

According to recent experiments by deCharms and Merzenich (Nature 381, 610-613 (1996)), neurons in the primary auditory cortex of the monkey do not change their mean firing rate during an ongoing tone stimulus. The only change measured during the tone is an enhanced correlation among the individual spike trains of the auditory cells.
We show that this coherence information in the auditory cell population could easily be extracted by a postsynaptic neuron using depressing synapses. The idea is that a dynamically depressing synapse shows a high response at a burst onset and then gets depressed towards the burst end. If some of the auditory cells now synchronize their bursts, the high postsynaptic responses at the burst onset may be enough to activate a postsynaptic cell. Such a partial synchronization may be possible while the mean firing rate of the whole auditory cell population still remains constant before and during the tone stimulus. In this case, the tone would never be detected by a postsynaptic cell using static synapses with constant weights.

The manuscript (170 KB) can be downloaded from:
http://iamwww.unibe.ch:80/~brainwww/publications/pub_walter.html
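A minimal sketch of the mechanism (not the authors' model or code; this is a toy depression rule in the style of Tsodyks and Markram, with invented parameters): each spike transmits an amount proportional to the currently available synaptic resources, which are partly used up by every spike and recover slowly, so the first spikes of a synchronized burst produce much larger responses than later ones:

# Toy depressing-synapse model (illustrative only; invented parameters).
import numpy as np

def depressing_response(spike_times, u=0.5, tau_rec=0.8):
    """Postsynaptic efficacy of each spike in a train (arbitrary units)."""
    r = 1.0              # fraction of available synaptic resources
    last_t = None
    out = []
    for t in spike_times:
        if last_t is not None:
            # Resources recover exponentially toward 1 between spikes.
            r = 1.0 - (1.0 - r) * np.exp(-(t - last_t) / tau_rec)
        psp = u * r      # transmitted amount: large at burst onset
        r -= psp         # depletion: later spikes in the burst are weaker
        out.append(psp)
        last_t = t
    return np.array(out)

# Within a burst the response decays rapidly, so synchronized burst onsets
# across many afferents dominate the summed postsynaptic drive even when
# the population mean firing rate stays constant.
print(depressing_response([0.0, 0.01, 0.02, 0.03, 0.04]))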
From xjwang at cicada.ccs.brandeis.edu Mon Nov 10 17:30:38 1997
From: xjwang at cicada.ccs.brandeis.edu (Xiao-Jing Wang)
Date: Mon, 10 Nov 1997 17:30:38 -0500
Subject: No subject
Message-ID: <199711102230.RAA08141 at cicada.ccs.brandeis.edu>

Postdoctoral Positions
Sloan Center for Theoretical Neuroscience at Brandeis University

We anticipate making 3-4 new two-year postdoctoral appointments to the Sloan Center for Theoretical Neuroscience at Brandeis University between January 1, 1998 and September 1, 1998. These positions are intended to allow young scientists with Ph.D.s in physics, mathematics or computer science to enter the field of neuroscience. Interested candidates should send a complete curriculum vitae and statement of research interests, and arrange for three letters of recommendation to be sent to:

Dr. Larry Abbott
Volen Center for Complex Systems
Mailstop 013
Brandeis University
415 South Street
Waltham, MA 02254

Associated faculty include L. F. Abbott, J. Lisman, E. Marder, S. Nelson, G. Turrigiano and X.-J. Wang. Women and minorities are especially encouraged to apply. Brandeis University is an equal opportunity employer.

From keithm at cns.bu.edu Mon Nov 10 13:51:39 1997
From: keithm at cns.bu.edu (Keith McDuffee)
Date: Mon, 10 Nov 1997 13:51:39 -0500
Subject: CALL FOR PAPERS
Message-ID: <3.0.3.32.19971110135139.00773668 at cns.bu.edu>

CALL FOR PAPERS and FINAL INVITED PROGRAM

SECOND INTERNATIONAL CONFERENCE ON COGNITIVE AND NEURAL SYSTEMS
May 27-30, 1998

Sponsored by Boston University's Center for Adaptive Systems and Department of Cognitive and Neural Systems with financial support from DARPA and ONR

How Does the Brain Control Behavior?
How Can Technology Emulate Biological Intelligence?

The conference will include invited lectures and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems adapt to a changing world. The conference is aimed at researchers and students of computational neuroscience, connectionist cognitive science, artificial neural networks, neuromorphic engineering, and artificial intelligence. A single oral or poster session enables all presented work to be highly visible, and abstract submission encourages presentation of the latest results. Costs are kept at a minimum without compromising the quality of meeting handouts and social events. Although Memorial Day falls on Saturday, May 30, it is observed on Monday, May 25, 1998.

CONFIRMED INVITED SPEAKERS

TUTORIALS
Wednesday, May 27, 1998:
Larry Abbott, Short-term synaptic plasticity: Mathematical description and computational function
George Cybenko, Understanding Q-learning and other adaptive learning methods
Ennio Mingolla, Neural models of biological vision
Alex Pentland, Visual recognition of people and their behavior
Each tutorial is 90 minutes long.

KEYNOTE SPEAKERS
Stephen Grossberg, Adaptive resonance theory: From biology to technology
Ken Nakayama, Psychological studies of visual attention

INVITED SPEAKERS
Thursday, May 28, 1998:
Azriel Rosenfeld, Understanding object motion
Takeo Kanade, Computational sensors: Further progress
Tomaso Poggio, Sparse representations for learning
Gail Carpenter, Applications of ART neural networks
Rodney Brooks, Experiments in developmental models for a neurally controlled humanoid robot
Lee Feldkamp, Recurrent networks: Promise and practice

Friday, May 29, 1998:
J. Anthony Movshon, Contrast gain control in the visual cortex
Hugh Wilson, Global processes at intermediate levels of form vision
Mel Goodale, Biological teleassistance: Perception and action in the human visual system
Ken Stevens, The categorical representation of speech and its traces in acoustics and articulation
Carol Fowler, Production-perception links in speech
Frank Guenther, A theoretical framework for speech acquisition and production

Saturday, May 30, 1998:
Howard Eichenbaum, The hippocampus and mechanisms of declarative memory
Earl Miller, Neural mechanisms for working memory and cognition
Bruce McNaughton, Neuronal population dynamics and the interpretation of dreams
Richard Thompson, The cerebellar circuitry essential for classical conditioning of discrete behavioral responses
Daniel Bullock, Cortical control of arm movements
Andrew Barto, Reinforcement learning applied to large-scale dynamic optimization problems

There will be contributed oral and poster sessions on each day of the conference.

CALL FOR ABSTRACTS

Contributors are requested to list a first and second choice from among the topics below in their cover letter, and to say whether the work is biological (B) or technological (T), when they submit their abstract, as described below.

* vision
* spatial mapping and navigation
* object recognition
* neural circuit models
* image understanding
* neural system models
* audition
* mathematics of neural systems
* speech and language
* robotics
* unsupervised learning
* hybrid systems (fuzzy, evolutionary, digital)
* supervised learning
* neuromorphic VLSI
* reinforcement and emotion
* industrial applications
* sensory-motor control
* other
* cognition, planning, and attention

Example: first choice: vision (T); second choice: neural system models (B).

Contributed Abstracts must be received, in English, by January 31, 1998. Notification of acceptance will be given by February 28, 1998. A meeting registration fee of $45 for regular attendees and $30 for students must accompany each Abstract. See Registration Information for details. The fee will be returned if the Abstract is not accepted for presentation and publication in the meeting proceedings. Registration fees of accepted abstracts will be returned on request only until April 15, 1998.

Each Abstract should fit on one 8.5" x 11" white page with 1" margins on all sides, single-column format, single-spaced, Times Roman or similar font of 10 points or larger, printed on one side of the page only. Fax submissions will not be accepted.
Abstract title, author name(s), affiliation(s), and mailing and email address(es) should begin each Abstract. An accompanying cover letter should include: full title of Abstract; corresponding author and presenting author name, address, telephone, fax, and email address; and preference for oral or poster presentation. (Talks will be 15 minutes long. Posters will be up for a full day. Overhead, slide, and VCR facilities will be available for talks.) Abstracts which do not meet these requirements or which are submitted with insufficient funds will be returned. The original and 3 copies of each Abstract should be sent to: Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215.

REGISTRATION INFORMATION: Early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If accompanied by an Abstract or if paying by check, mail to the address above. If paying by credit card, mail as above, or fax to (617) 353-7755, or email to cindy at cns.bu.edu. The registration fee will help to pay for a reception, 6 coffee breaks, and the meeting proceedings.

STUDENT FELLOWSHIPS: Fellowships for PhD candidates and postdoctoral fellows are available to cover meeting travel and living costs. The deadline to apply for fellowship support is January 31, 1998. Applicants will be notified by February 28, 1998. Each application should include the applicant's CV, including name; mailing address; email address; current student status; faculty or PhD research advisor's name, address, and email address; relevant courses and other educational data; and a list of research articles. A letter from the listed faculty or PhD advisor on official institutional stationery should accompany the application and summarize how the candidate may benefit from the meeting. Students who also submit an Abstract need to include the registration fee with their Abstract. Reimbursement checks will be distributed after the meeting.

REGISTRATION FORM
Second International Conference on Cognitive and Neural Systems
Department of Cognitive and Neural Systems
Boston University
677 Beacon Street
Boston, Massachusetts 02215
Tutorials: May 27, 1998
Meeting: May 28-30, 1998
FAX: (617) 353-7755
(Please Type or Print)

Mr/Ms/Dr/Prof: _____________________________________________________
Name: ______________________________________________________________
Affiliation: _______________________________________________________
Address: ___________________________________________________________
City, State, Postal Code: __________________________________________
Phone and Fax: _____________________________________________________
Email: _____________________________________________________________

The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. The tutorial registration fee includes tutorial notes and two coffee breaks.

CHECK ONE:
( ) $70 Conference plus Tutorial (Regular)
( ) $45 Conference plus Tutorial (Student)
( ) $45 Conference Only (Regular)
( ) $30 Conference Only (Student)
( ) $25 Tutorial Only (Regular)
( ) $15 Tutorial Only (Student)

METHOD OF PAYMENT (please fax or mail):
[ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank.
Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). Name as it appears on the card: _____________________________________ Type of card: _______________________________________________________ Account number: _____________________________________________________ Expiration date: ____________________________________________________ Signature: __________________________________________________________ Keith McDuffee keithm at cns.bu.edu From cns-cas at cns.bu.edu Mon Nov 10 12:43:09 1997 From: cns-cas at cns.bu.edu (Boston University - Cognitive and Neural Systems) Date: Mon, 10 Nov 1997 12:43:09 -0500 Subject: Graduate Training in Cognitive and Neural Systems at B.U. Message-ID: <3.0.3.32.19971110124309.00f6e3fc@cns.bu.edu> ******************************************************************* GRADUATE TRAINING IN THE DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS (CNS) AT BOSTON UNIVERSITY ******************************************************************* The Boston University Department of Cognitive and Neural Systems offers comprehensive graduate training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1998, admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS Boston University 677 Beacon Street Boston, MA 02215 617/353-9481 (phone) 617/353-7755 (fax) or send via e-mail your full name and mailing address to the attention of Mr. Robin Amos at: amos at cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores will decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of outstanding technological problems. 
Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self-organization; associative learning and long-term memory; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; and biological rhythms; as well as the mathematical and computational methods needed to support modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees.

The CNS Department embodies a number of unique features. It has developed a curriculum that consists of interdisciplinary graduate courses, each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Additional advanced courses, including research seminars, are also offered. Each course is typically taught once a week in the afternoon or evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as biology, computer science, engineering, mathematics, and psychology, in addition to courses in the CNS curriculum.

The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems. Students interested in neural network hardware work with researchers in CNS, at the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River Campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the College of Engineering; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department.

In addition to its basic research and training program, the department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. The department is housed in its own new four-story building, which includes ample space for faculty and student offices and laboratories, as well as an auditorium, classroom and seminar rooms, a library, and a faculty-student lounge.

LABORATORY AND COMPUTER FACILITIES

The department is funded by grants and contracts from federal agencies which support research in life sciences, mathematics, artificial intelligence, and engineering. Facilities include laboratories for experimental research and computational modeling in visual perception, speech and language processing, and sensory-motor control and robotics. Data analysis and numerical simulations are carried out on a state-of-the-art computer network comprising Sun workstations, Silicon Graphics workstations, Macintoshes, and PCs.
All students have access to X-terminals or UNIX workstation consoles, a selection of color systems and PCs, the Boston University connection machine and network of SGI machines, and standard modeling and mathematical simulation packages such as Mathematica, VisSim, Khoros, and Matlab. The department maintains a core collection of books and journals, and has access both to the Boston University Libraries and to the many other collections of the Boston Library Consortium. In addition, several specialized facilities and software are available for use. These include:

Computer Vision/Computational Neuroscience Laboratory
The Computer Vision/Computational Neuroscience Lab comprises an electronics workshop, including a surface-mount workstation, PCB fabrication tools, and an Altera EPLD design system; a light machine shop; an active vision lab including actuators and video hardware; and systems for computer-aided neuroanatomy and application of computer graphics and image processing to brain sections and MRI images.

Neurobotics Laboratory
The Neurobotics Lab utilizes wheeled mobile robots to study potential applications of neural networks in several areas, including adaptive dynamics and kinematics, obstacle avoidance, path planning and navigation, visual object recognition, and conditioning and motivation. The lab currently has three Pioneer robots equipped with sonar and visual sensors; one B-14 robot with a moveable camera, sonars, infrared, and bump sensors; and two Khepera miniature robots with infrared proximity detectors. Other platforms may be investigated in the future.

Psychoacoustics Laboratory
The Psychoacoustics Lab houses a newly installed, 8 ft. x 8 ft. sound-proof booth. The laboratory is extensively equipped to perform both traditional psychoacoustic experiments and experiments using interactive auditory virtual-reality stimuli. The major equipment dedicated to the psychoacoustics laboratory includes two Pentium-based personal computers; two Power-PC-based Macintosh computers; a 50-MHz array processor capable of generating auditory stimuli in real time; programmable attenuators; analog-to-digital converters; digital-to-analog converters; a real-time head tracking system; a special-purpose, signal-processing hardware system capable of generating "spatialized" stereo auditory signals in real time; a two-channel oscilloscope; a two-channel spectrum analyzer; various cables, headphones, and other miscellaneous electronics equipment; and software for signal generation, experimental control, data analysis, and word processing.

Sensory-Motor Control Laboratory
The Sensory-Motor Control Lab supports experimental studies of motor kinematics. An infrared WatSmart system allows measurement of large-scale movements, and a pressure-sensitive graphics tablet allows studies of handwriting and other fine-scale movements. Part of the equipment associated with the lab is shared with and housed in the Vision Lab. Equipment includes a 40-inch monitor that allows computer display of animations generated by an SGI workstation or a Pentium Pro (Windows NT) workstation. A second major component is a helmet-mounted, video-based, eye-head tracking system (ISCAN Corp, 1997). The latter's camera samples eye position at 240 Hz and also allows reconstruction of what subjects are attending to as they freely scan a scene under normal lighting. Thus the system affords a wide range of visuo-motor studies.
Speech and Language Laboratory
The Speech and Language Lab includes facilities for analog-to-digital and digital-to-analog conversion and associated software. The Ariel equipment allows reliable synthesis and playback of speech waveforms. An Entropic signal processing package provides facilities for detailed analysis, filtering, spectral construction, and formant tracking of the speech waveform. Various large databases, such as TIMIT and TIdigits, are available for testing speech recognition algorithms. For high-speed processing, the department provides supercomputer facilities to accelerate filtering and data analysis.

Visual Psychophysics Laboratory
The Visual Psychophysics Lab occupies an 800-square-foot suite, including three dedicated rooms for data collection, and houses a variety of computer-controlled display platforms, including Silicon Graphics, Inc. (SGI) Onyx RE2, SGI Indigo2 High Impact, SGI Indigo2 Extreme, Power Computing (Macintosh compatible) PowerTower Pro 225, and Macintosh 7100/66 workstations. Ancillary resources for visual psychophysics include a computer-controlled video camera, stereo viewing glasses, prisms, a photometer, and a variety of display-generation, data-collection, and data-analysis software.

Affiliated Laboratories
Affiliated CAS/CNS faculty have additional laboratories ranging from visual and auditory psychophysics and neurophysiology, anatomy, and neuropsychology to engineering and chip design. These facilities can be used in the context of faculty/student collaborations.

1997-98 CAS MEMBERS and CNS FACULTY:

Jelle Atema Professor of Biology Director, Boston University Marine Program (BUMP) PhD, University of Michigan Sensory physiology and behavior. Aijaz Baloch Research Assistant Professor of Cognitive and Neural Systems PhD, Electrical Engineering, Boston University Neural modeling of the role of visual attention in recognition, learning and motor control, computational vision, adaptive control systems, reinforcement learning. Helen Barbas Associate Professor, Department of Health Sciences PhD, Physiology/Neurophysiology, McGill University Organization of the prefrontal cortex, evolution of the neocortex. Jacob Beck Research Professor of Cognitive and Neural Systems PhD, Psychology, Cornell University Visual perception, psychophysics, computational models. Daniel H. Bullock Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, Stanford University Real-time neural systems, sensory-motor learning and control, evolution of intelligence, cognitive development. Gail A. Carpenter Professor of Cognitive and Neural Systems and Mathematics Director of Graduate Studies, Department of Cognitive and Neural Systems PhD, Mathematics, University of Wisconsin, Madison Pattern recognition, categorization, machine learning, differential equations. Laird Cermak Director, Memory Disorders Research Center Boston Veterans Affairs Medical Center Professor of Neuropsychology, School of Medicine Professor of Occupational Therapy, Sargent College PhD, Ohio State University Memory disorders. Michael A. Cohen Associate Professor of Cognitive and Neural Systems and Computer Science PhD, Psychology, Harvard University Speech and language processing, measurement theory, neural modeling, dynamical systems. H. Steven Colburn Professor of Biomedical Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Audition, binaural interaction, signal processing models of hearing.
Howard Eichenbaum Professor of Psychology PhD, Psychology, University of Michigan Neurophysiological studies of how the hippocampal system is involved in reinforcement learning, spatial orientation, and declarative memory. William D. Eldred III Associate Professor of Biology PhD, University of Colorado, Health Science Center Visual neural biology. Bruce Fischl Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Anisotropic diffusion and nonlinear image filtering, space-variant vision, computational models of early visual processing, and automated analysis of magnetic resonance images. Paolo Gaudiano Associate Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Computational and neural models of robotics, vision, adaptive sensory-motor control, and behavioral neurobiology. Jean Berko Gleason Professor of Psychology PhD, Harvard University Psycholinguistics. Sucharita Gopal Associate Professor of Geography PhD, University of California at Santa Barbara Neural networks, computational modeling of behavior, geographical information systems, fuzzy sets, and spatial cognition. Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Chairman, Department of Cognitive and Neural Systems Director, Center for Adaptive Systems PhD, Mathematics, Rockefeller University Theoretical biology, theoretical psychology, dynamical systems, and applied mathematics. Frank Guenther Assistant Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Biological sensory-motor control, spatial representation, and speech production. Catherine L. Harris Assistant Professor of Psychology PhD, Cognitive Science and Psychology, University of California at San Diego Visual word recognition, psycholinguistics, cognitive semantics, second language acquisition, computational models. J. Pieter Jacobs Visiting Scholar, Cognitive and Neural Systems MMA, MM, Music, Yale University MMus, Music, University of Pretoria MEng, Electromagnetism, University of Pretoria Aspects of motor control in piano playing; the interface between psychophysical and cognitive phenomena in music perception. Thomas G. Kincaid Professor of Electrical, Computer and Systems Engineering, College of Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Signal and image processing, neural networks, non-destructive testing. Nancy Kopell Professor of Mathematics PhD, Mathematics, University of California at Berkeley Dynamical systems, mathematical physiology, pattern formation in biological/physical systems. Jacqueline A. Liederman Associate Professor of Psychology PhD, Psychology, University of Rochester Dynamics of interhemispheric cooperation; prenatal correlates of neurodevelopmental disorders. Ennio Mingolla Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, University of Connecticut Visual perception, mathematical modeling of visual processes. Joseph Perkell Adjunct Professor of Cognitive and Neural Systems Senior Research Scientist, Research Lab of Electronics and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology PhD, Massachusetts Institute of Technology Motor control of speech production. 
Alan Peters Chairman and Professor of Anatomy and Neurobiology, School of Medicine PhD, Zoology, Bristol University, United Kingdom Organization of neurons in the cerebral cortex, effects of aging on the primate brain, fine structure of the nervous system. Andrzej Przybyszewski Senior Research Associate of Cognitive and Neural Systems PhD, Warsaw Medical Academy Retinal physiology, mathematical and computer modeling of dynamical properties of neurons in the visual system. Adam Reeves Adjunct Professor of Cognitive and Neural Systems Professor of Psychology, Northeastern University PhD, Psychology, City University of New York Psychophysics, cognitive psychology, vision. Mark Reinitz Assistant Professor of Psychology PhD, University of Washington Cognitive psychology, attention, explicit and implicit memory, memory-perception interactions. Mark Rubin Research Assistant Professor of Cognitive and Neural Systems Research Physicist, Naval Air Warfare Center, China Lake, CA (on leave) PhD, Physics, University of Chicago Neural networks for vision, pattern recognition, and motor control. Elliot Saltzman Associate Professor of Physical Therapy, Sargent College Assistant Professor, Department of Psychology and Center for the Ecological Study of Perception and Action University of Connecticut, Storrs Research Scientist, Haskins Laboratories, New Haven, CT PhD, Developmental Psychology, University of Minnesota Modeling and experimental studies of human speech production. Robert Savoy Adjunct Associate Professor of Cognitive and Neural Systems Scientist, Rowland Institute for Science PhD, Experimental Psychology, Harvard University Computational neuroscience; visual psychophysics of color, form, and motion perception. Eric Schwartz Professor of Cognitive and Neural Systems; Electrical, Computer and Systems Engineering; and Anatomy and Neurobiology PhD, High Energy Physics, Columbia University Computational neuroscience, machine vision, neuroanatomy, neural modeling. Robert Sekuler Adjunct Professor of Cognitive and Neural Systems Research Professor of Biomedical Engineering, College of Engineering, BioMolecular Engineering Research Center Jesse and Louis Salvage Professor of Psychology, Brandeis University PhD, Psychology, Brown University Visual motion, visual adaptation, relation of visual perception, memory, and movement. Barbara Shinn-Cunningham Assistant Professor of Cognitive and Neural Systems and Biomedical Engineering PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology Psychoacoustics, audition, auditory localization, binaural hearing, sensorimotor adaptation, mathematical models of human performance. Louis Tassinary Visiting Scholar, Cognitive and Neural Systems PhD, Psychology, Dartmouth College Dynamics of affective states as they relate to instigated and ongoing cognitive processes. Malvin Teich Professor of Electrical and Computer Systems Engineering and Biomedical Engineering PhD, Cornell University Quantum optics, photonics, fractal stochastic processes, information transmission in biological sensory systems. Takeo Watanabe Assistant Professor of Psychology PhD, Behavioral Sciences, University of Tokyo Perception of objects and motion and effects of attention on perception using psychophysics and brain imaging (f-MRI). 
Allen Waxman Adjunct Associate Professor of Cognitive and Neural Systems Senior Staff Scientist, MIT Lincoln Laboratory PhD, Astrophysics, University of Chicago Visual system modeling, mobile robotic systems, parallel computing, optoelectronic hybrid architectures. James Williamson Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Image processing and object recognition. Particular interests: dynamic binding, self-organization, shape representation, and classification. Jeremy Wolfe Adjunct Associate Professor of Cognitive and Neural Systems Associate Professor of Ophthalmology, Harvard Medical School Psychophysicist, Brigham & Women's Hospital, Surgery Dept. Director of Psychophysical Studies, Center for Clinical Cataract Research PhD, Massachusetts Institute of Technology Visual attention, preattentive and attentive object representation. Curtis Woodcock Associate Professor of Geography; Chairman, Department of Geography Director, Geographic Applications, Center for Remote Sensing PhD, University of California, Santa Barbara Biophysical remote sensing, particularly of forests and natural vegetation, canopy reflectance models and their inversion, spatial modeling, and change detection; biogeography; spatial analysis; geographic information systems; digital image processing.

Other Boston University faculty affiliated with the CNS Department are listed at the end of the brochure.

*******************************************************************
DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS GRADUATE TRAINING ANNOUNCEMENT
Boston University
677 Beacon Street
Boston, MA 02215
Phone: 617/353-9481
Fax: 617/353-7755
Email: inquiries at cns.bu.edu
Web: http://cns-web.bu.edu/
*******************************************************************

From bert at mbfys.kun.nl Tue Nov 11 03:28:47 1997
From: bert at mbfys.kun.nl (Bert Kappen)
Date: Tue, 11 Nov 1997 09:28:47 +0100
Subject: correction on paper available learning in BMs with linear response
Message-ID: <199711110828.JAA16200 at vitellius.mbfys.kun.nl>

Dear Connectionists,

The following article

Title: Efficient learning in Boltzmann Machines using linear response theory
Authors: H.J. Kappen and F.B. Rodriguez

was announced on the Connectionists mailing list on November 7. However, due to a failure at our server, this paper could not be downloaded. This problem has now been solved. The paper can now be downloaded from
ftp://ftp.mbfys.kun.nl/snn/pub/reports/Kappen.LR_NC.ps.Z

Sorry for the inconvenience.
Best Regards,

Bert Kappen

FTP INSTRUCTIONS

unix% ftp ftp.mbfys.kun.nl
Name: anonymous
Password: (use your e-mail address)
ftp> cd snn/pub/reports/
ftp> binary
ftp> get Kappen.LR_NC.ps.Z
ftp> bye
unix% uncompress Kappen.LR_NC.ps.Z
unix% lpr Kappen.LR_NC.ps

From cia at hare.riken.go.jp Wed Nov 12 06:04:38 1997
From: cia at hare.riken.go.jp (cia at hare.riken.go.jp)
Date: Wed, 12 Nov 97 20:04:38 +0900
Subject: Software for ICA and BSS
Message-ID: <9711121104.AA11233 at hare.brain.riken.go.jp>

Dear all,

Just to let you know of the http availability of new software for Independent Component Analysis (ICA) and Blind Separation of Sources (BSS). The Laboratory for Open Information Systems in the research group of Professor S. Amari (Brain-Style Information Processing Group), Brain Science Institute, RIKEN, Japan, announces the availability of OOLABSS (Object Oriented LAboratory for Blind Source Separation), an experimental laboratory for ICA and BSS.

OOLABSS has been developed by Dr. A. Cichocki and Dr. B. Orsier (both worked on the concept of the software and on the development/unification of learning algorithms, while Dr. B. Orsier designed and implemented the software in C++ under Windows95/NT). OOLABSS offers an interactive environment for experiments with a very wide family of recently developed on-line adaptive learning algorithms for Blind Separation of Sources and Independent Component Analysis. OOLABSS is free for non-commercial use. The current version is still experimental but is reasonably stable and robust.

The program has the following features:

1. Users can define their own activation functions for each neuron (processing unit) or use a global activation function (e.g. hyperbolic tangent) for all neurons.
2. The program also enables automatic (self-adaptive) selection of quasi-optimal activation functions (time-variable or switching) depending on the stochastic distribution of the extracted source signals (the so-called extended ICA problem).
3. Users can add noise both to the sensor signals and to the synaptic weights.
4. The number of sources, sensors and outputs of the neural network can be arbitrarily defined by users.
5. In the case where the number of source signals is completely unknown, one of the proposed approaches estimates not only the source signals but also their number, on-line, without any pre-processing such as pre-whitening or Principal Component Analysis (PCA).
6. Optimal updating of the learning rate (step size) is a key problem encountered in a wide class of on-line adaptive learning algorithms. Relying on properties of nonlinear low-pass filters, the program implements a family of learning algorithms for self-adaptive (automatic) updating of learning rates (either a global rate or an individual rate for each synaptic weight). The learning rates can be self-adaptive, i.e. quasi-optimal annealing of the learning rates is provided automatically in a stationary environment, while in a non-stationary environment the learning rates adaptively change their values to provide good tracking. Users can also define their own function for changing the learning rate.
7. The program enables comparison of the performance of several different algorithms.
8. Special emphasis is given to algorithms that are robust with respect to noise and outliers and that have the equivariance property (i.e., asymptotic performance that is independent of ill-conditioning of the mixing process).
9. Advanced graphics: illustrative figures are produced and can be easily printed. Encapsulated PostScript files can be produced for easy integration into word processors. Data can be pasted to the clipboard for post-processing using specialized software like Matlab or even spreadsheets.
10. Users can easily enter their own data (sensor signals, or sources and a mixing matrix, noise, a neural network model, etc.) in order to experiment with various kinds of algorithms.
11. Modular programming style: the program code is based on well-defined C++ classes and is very modular, which makes it possible to tailor the software to each user's specific needs.

Please visit the OOLABSS home page at URL:
http://www.bip.riken.go.jp/absl/orsier/OOLABSS

The version is 1.0 beta, so comments, suggestions and bug reports are welcome at the address: oolabss at open.brain.riken.go.jp
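As a minimal sketch of the kind of on-line learning rule such packages implement (illustrative only, not OOLABSS code), the natural-gradient ICA rule of Amari, Cichocki and Yang updates an unmixing matrix W by dW = eta * (I - f(y) y^T) W, with y = W x and f a suitable nonlinearity:

# Minimal on-line natural-gradient ICA sketch (illustrative, not OOLABSS).
# Two independent sources are mixed linearly; W is adapted so that y = W x
# recovers them up to permutation and scaling.
import numpy as np

rng = np.random.default_rng(1)
n, T = 2, 20000
S = np.sign(rng.normal(size=(n, T))) * rng.exponential(size=(n, T))  # super-Gaussian sources
A = rng.normal(size=(n, n))                                          # unknown mixing matrix
X = A @ S                                                            # observed mixtures

W = np.eye(n)
eta = 0.001
f = np.tanh            # activation suitable for super-Gaussian sources
for t in range(T):
    x = X[:, t]
    y = W @ x
    # Natural-gradient update: dW = eta * (I - f(y) y^T) W
    W += eta * (np.eye(n) - np.outer(f(y), y)) @ W

print(np.round(W @ A, 2))   # approximately a scaled permutation matrix

At convergence W A approaches a scaled permutation matrix, which is the standard success criterion in blind source separation; the multiplication by W on the right is what makes the update equivariant with respect to the mixing matrix.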
From ken at phy.ucsf.EDU Thu Nov 13 02:36:11 1997
From: ken at phy.ucsf.EDU (Ken Miller)
Date: Wed, 12 Nov 1997 23:36:11 -0800 (PST)
Subject: UCSF Postdoc and Graduate Positions in Theoretical Neurobiology
Message-ID: <199711130736.XAA08018 at coltrane.ucsf.edu>

FULL INFO: http://www.sloan.ucsf.edu/sloan/sloan-info.html
PLEASE DO NOT USE 'REPLY'; FOR MORE INFO USE ABOVE WEB SITE OR CONTACT ADDRESSES GIVEN BELOW

The Sloan Center for Theoretical Neurobiology at UCSF solicits applications for pre- and post-doctoral fellowships, with the goal of bringing theoretical approaches to bear on neuroscience. Applicants should have a strong background and education in mathematics, theoretical or experimental physics, or computer science, and a commitment to a future research career in neuroscience. Prior biological or neuroscience training is not required.

The Sloan Center offers opportunities to combine theoretical and experimental approaches to understanding the operation of the intact brain: young scientists with strong theoretical backgrounds will receive scientific training in experimental approaches and will learn to integrate their theoretical abilities with those approaches to form a mature research program in integrative neuroscience. The research undertaken by the trainees may be theoretical, experimental, or a combination.

TO APPLY, please send a curriculum vitae, a statement of previous research and research goals, up to three relevant publications, and have two letters of recommendation sent to us. The application deadline is February 1, 1998. Send applications to:

Steve Lisberger
Sloan Center for Theoretical Neurobiology at UCSF
Department of Physiology
University of California
513 Parnassus Ave.
San Francisco, CA 94143-0444

PRE-DOCTORAL applicants may be enrolled in a Ph.D. program in a theoretical discipline at another institution. In this case, the fellowship would support a cooperative training program between that institution and UCSF that is acceptable to both institutions. Applicants should indicate a faculty member at their home institution whom we may contact and who would sponsor their research at the Sloan Center. Applications for such a cooperative program will be accepted at any time.

Alternatively, PRE-DOCTORAL applicants with strong theoretical training may seek admission into the UCSF Neuroscience Graduate Program as a first-year student. Applicants seeking such admission must apply by Jan. 10, 1998 to be considered for fall 1998 admission. Application materials for the UCSF Neuroscience Program may be obtained from:

Cindy Kelly
Neuroscience Graduate Program
Department of Physiology
University of California San Francisco
San Francisco, CA 94143-0444
neuroscience at phy.ucsf.edu

Be sure to include your surface-mail address. The procedure is: make a normal application to the UCSF Neuroscience program, but also alert the Sloan Center of your application by writing to Steve Lisberger at the address given above.

If you need more information:
-- Consult the Sloan Center WWW Home Page: http://www.sloan.ucsf.edu/sloan
-- Send e-mail to sloan-info at phy.ucsf.edu
-- See also the home page for the W.M. Keck Foundation Center for Integrative Neuroscience, in which the Sloan Center is housed: http://www.keck.ucsf.edu/

From cmb35 at newton.cam.ac.uk Thu Nov 13 09:55:13 1997
From: cmb35 at newton.cam.ac.uk (C.M. Bishop)
Date: Thu, 13 Nov 1997 14:55:13 +0000
Subject: Information Geometry
Message-ID: <199711131455.OAA23985 at feynman>

A Newton Institute Themed Week

INFORMATION GEOMETRY

8 - 12 December
Isaac Newton Institute, Cambridge, U.K.

Organisers: S Amari (RIKEN) and C M Bishop (Microsoft)

*** http://www.newton.cam.ac.uk/programs/nnm_info.html ***

This themed week is aimed at bringing together researchers from different disciplines with a common interest in the field of information geometry.

Registration: Since the week falls during the Cambridge academic term, the Newton Institute will unfortunately not be able to provide any assistance with accommodation. Participants must therefore make their own arrangements for accommodation and evening meals. However, light lunches will be available for purchase in the Institute. In order to gauge numbers, participants are requested to complete and return the short registration form, attached below.

Programme
=========

Abstracts for each talk are also available from the web site.

Monday 8 December
-----------------
Informal discussion

Tuesday 9 December
------------------
10:00 - 11:00 S Amari (RIKEN): Introduction to information geometry and applications to neural networks
11:00 Coffee
11:30 Informal discussion
12:30 Lunch
14:30 - 15:30 G Pistone (Torino): Non-parametric Information Geometry
15:30 Tea
16:00 - 17:30 Informal discussion

Wednesday 10 December
---------------------
10:00 - 11:00 J-F Cardoso (CNRS): Information geometry of blind source separation and ICA
11:30 Informal discussion
12:30 Lunch
14:30 - 15:30 S Eguchi (ISM Japan): Near parametric inference
15:30 Tea
16:00 - 17:30 Informal discussion

Thursday 11 December
--------------------
10:00 - 11:00 N Murata (RIKEN): Characteristics of AIC type criteria in case of singular Fisher Information matrices
11:30 Informal discussion
12:30 Lunch
14:30 - 15:30 H Zhu (Santa Fe): Some consideration of information geometry on function spaces and non-parametric inference
15:30 Tea
16:00 - 17:30 Informal discussion

Friday 12 December
------------------
Informal discussions

-----------------------------------------------------------------------------
A Newton Institute Themed Week
INFORMATION GEOMETRY
8 - 12 December, 1997
Isaac Newton Institute, Cambridge, U.K.
Registration form
-----------------

Name:
Address:
Tel:
Fax:
Email:

Days on which you plan to attend and on which you plan to have lunch in the Institute:

Please return to Heather Hughes (H.Hughes at newton.cam.ac.uk)

-----------------------------------------------------------------------------

From yweiss at psyche.mit.edu Thu Nov 13 13:12:16 1997
From: yweiss at psyche.mit.edu (Yair Weiss)
Date: Thu, 13 Nov 1997 13:12:16 -0500 (EST)
Subject: paper available: belief propagation in networks with loops
Message-ID: <199711131812.NAA26099 at maxwell1>

The following paper on belief propagation in networks with loops is available online via:
http://www-bcs.mit.edu/~yweiss/cbcl.ps.gz

This research will be presented at the NIPS*97 workshop on graphical models. The workshop will also feature talks by P. Smyth and B. Frey on a similar topic. Comments are most welcome.

Yair

--------------------------------------------------------------------------
Title: Belief Propagation and Revision in Networks with Loops
Author: Yair Weiss
Reference: MIT AI Memo 1616, MIT CBCL Paper 155.

Abstract: Local belief propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the optimal beliefs for singly connected networks. Recently, a number of researchers have empirically demonstrated good performance of these same algorithms on networks with loops, but a theoretical understanding of this performance has yet to be achieved. Here we lay a foundation for an understanding of belief propagation in networks with loops. For networks with a single loop, we derive an analytical relationship between the steady-state beliefs in the loopy network and the true posterior probability. Using this relationship we show a category of networks for which the MAP estimate obtained by belief update and by belief revision can be proven to be optimal (although the beliefs will be incorrect). We show how nodes can use local information in the messages they receive in order to correct the steady-state beliefs. Furthermore, we prove that for all networks with a single loop, the MAP estimate obtained by belief revision at convergence is guaranteed to give the globally optimal sequence of states. The result is independent of the length of the cycle and the size of the state space. For networks with multiple loops, we introduce the concept of a "balanced network" and show simulation results comparing belief revision and update in such networks. We show that the Turbo code structure is balanced and present simulations on a toy Turbo code problem indicating that the decoding obtained by belief revision at convergence is significantly more likely to be correct.
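As background, here is a minimal sketch of Pearl-style belief propagation on a three-node chain, where the local message-passing rules are exact (this example is illustrative, not from the paper; the paper asks what the same rules compute on graphs with loops):

# Belief propagation on a chain x1 - x2 - x3 of binary variables, where
# Pearl's local message passing gives exact marginals (illustrative only).
import numpy as np

phi = [np.array([0.6, 0.4]),   # local evidence for x1
       np.array([0.5, 0.5]),   # local evidence for x2
       np.array([0.3, 0.7])]   # local evidence for x3
psi = np.array([[1.2, 0.8],    # pairwise compatibility, used for both edges
                [0.8, 1.2]])

m12 = psi.T @ phi[0]           # message x1 -> x2
m32 = psi @ phi[2]             # message x3 -> x2
m21 = psi @ (phi[1] * m32)     # message x2 -> x1
m23 = psi.T @ (phi[1] * m12)   # message x2 -> x3

def normalize(p):
    return p / p.sum()

beliefs = [normalize(phi[0] * m21),
           normalize(phi[1] * m12 * m32),
           normalize(phi[2] * m23)]

# Brute-force check over all 8 joint configurations.
joint = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        for c in range(2):
            joint[a, b, c] = (phi[0][a] * phi[1][b] * phi[2][c]
                              * psi[a, b] * psi[b, c])
joint /= joint.sum()
print(beliefs[1], joint.sum(axis=(0, 2)))   # the two x2 marginals agree

On a graph with a cycle, the same messages would circulate indefinitely around the loop; characterizing the resulting steady-state beliefs is exactly the subject of the paper above.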
From payman at maxwell.ee.washington.edu Thu Nov 13 13:22:32 1997
From: payman at maxwell.ee.washington.edu (Payman Arabshahi 8834870)
Date: Thu, 13 Nov 1997 10:22:32 PST
Subject: CFP: CIFEr'98 - computational intelligence in finance
Message-ID: <199711131822.KAA24554 at compton.ee.washington.edu>

IEEE/IAFE 1998 CIFEr

Visit us on the web at http://www.ieee.org/nnc/cifer98

Call for Papers

Conference on Computational Intelligence for Financial Engineering (CIFEr)
Crowne Plaza Manhattan, New York City
March 29-31, 1998

Sponsors: The IEEE Neural Networks Council, The International Association of Financial Engineers, Institute for Operations Research and the Management Sciences

CIFEr is the 4th annual collaboration between the professional engineering and financial communities, and is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. Intelligent computational systems have become indispensable in virtually all financial applications, from portfolio selection to proprietary trading to risk management.

Conference Topics

Topics in which papers, panel sessions, and tutorial proposals are invited include, but are not limited to, the following:

Financial Engineering Applications:
* Risk Management
* Pricing of Structured Securities
* Asset Allocation
* Trading Systems
* Forecasting
* Risk Arbitrage
* Exotic Options

Computer & Engineering Applications & Models:
* Neural Networks
* Probabilistic Modeling/Inference
* Fuzzy Systems and Rough Sets
* Genetic and Dynamic Optimization
* Intelligent Trading Agents
* Trading Room Simulation
* Time Series Analysis
* Non-linear Dynamics

------------------------------------------------------------------------------
Instructions for Authors, Special Sessions, Tutorials, & Exhibits
------------------------------------------------------------------------------

All summaries and proposals for tutorials, panels and special sessions must be received by the conference Secretariat at the IAFE by December 14, 1997. We intend to publish a book with the best selection of accepted papers.

Authors (For Conference Oral Sessions)

One copy of the Extended Summary (not exceeding four pages of 8.5 inch by 11 inch size) must be received by Mark Larson at the IAFE by December 14, 1997. Centered at the top of the first page should be the paper's complete title, author name(s), affiliation(s), and mailing address(es). Fonts no smaller than 10 pt should be used. Papers must report original work that has not been published previously and is not under consideration for publication elsewhere. In the letter accompanying the submission, the following information should be included:

* Topic(s)
* Full title of paper
* Corresponding author's name
* Mailing address
* Telephone and fax
* E-mail (if available)
* Presenter (if different from corresponding author, please provide name, mailing address, etc.)

----------------------------------------------------------------------------

Special Sessions

A limited number of special sessions will address subjects within the topical scope of the conference.
Each special session will consist of from four to six papers on a specific topic. Proposals for special sessions will be submitted by the session organizer and should include:

* Topic(s)
* Title of Special Session
* Name, address, phone, fax, and email of the Session Organizer
* List of paper titles with authors' names and addresses
* One page of summaries of all papers

----------------------------------------------------------------------------

Panel Proposals

Proposals for panels addressing topics within the technical scope of the conference will be considered. Panel organizers should describe, in two pages or less, the objective of the panel and the topic(s) to be addressed. Panel sessions should be interactive with panel members and the audience and should not be a sequence of paper presentations by the panel members. The participants in the panel should be identified. No papers will be published from panel activities.

----------------------------------------------------------------------------

Tutorial Proposals

Proposals for tutorials addressing subjects within the topical scope of the conference will be considered. Proposals for tutorials should describe, in two pages or less, the objective of the tutorial and the topic(s) to be addressed. A detailed syllabus of the course contents should also be included. Most tutorials will be four hours, although proposals for longer tutorials will also be considered.

----------------------------------------------------------------------------

Exhibit Information

Businesses with activities related to financial engineering, including software & hardware vendors, publishers and academic institutions, are invited to participate in CIFEr's exhibits. Further information about the exhibits can be obtained from the CIFEr Organizational Chair, Mark Larson.

----------------------------------------------------------------------------

Contact Information

More information on registration and the program will be provided as soon as it becomes available. For further details, please contact:

Mark Larson
CIFEr'98 Organizational Chair
Meeting Management
IAFE Administrative Office
646-16 Main Street
Port Jefferson, NY 11777-2230

Tel: (516) 331-8069
Fax: (516) 331-8044
Email: m.larson at iafe.org
Web: http://www.ieee.org/nnc/cifer98

Sponsors

Sponsorship for CIFEr'98 is being provided by the IAFE (International Association of Financial Engineers), the IEEE Neural Networks Council, and INFORMS (Institute for Operations Research and the Management Sciences). The IEEE (Institute of Electrical and Electronics Engineers) is the world's largest engineering and computer science professional non-profit association and sponsors hundreds of technical conferences and publications annually. The IAFE is a professional non-profit financial association with members worldwide specializing in new financial product design, derivative structures, risk management strategies, arbitrage techniques, and application of computational techniques to finance. INFORMS serves the scientific and professional needs of OR/MS investigators, scientists, students, educators, and managers.
----------------------------------------------------------------------------

From nimzo at cerisep1.diepa.unipa.it Thu Nov 13 07:22:13 1997
From: nimzo at cerisep1.diepa.unipa.it (Maurizio Cirrincione)
Date: Thu, 13 Nov 1997 12:22:13 GMT
Subject: Abstract of PhD thesis about NN and electrical drives
Message-ID:

Dear Connectionists:

Please find herein the abstract of my PhD thesis

Diagnosis and Control of Electrical Drives Using Neural Networks
PhD in Electrical Engineering, University of Palermo, Italy.

This thesis was successfully defended on the 3rd of December 1996. On the 23rd of May 1997 at Vietri sul Mare (Salerno), the SIREN (Societa' Italiana Reti Neuronali, the Italian Society for Neural Networks) and the IIASS (Istituto Internazionale per gli Alti Studi Scientifici, the International Institute for Advanced Scientific Studies) awarded it the "Eduardo R. Caianiello '97" prize for the best Italian PhD thesis on neural networks.

It is not yet available on the web, but I hope it will be soon. Meanwhile, if you want a copy I can send you one by ordinary mail.

ABSTRACT:

Diagnosis and Control of Electrical Drives Using Neural Networks
by Maurizio Cirrincione

The diagnosis of a system is a process consisting of the execution of suitable measurements and tests and, as a result, the recognition of the operating state and behaviour of the system, in order to decide on the course of action to undertake for correcting this behaviour. The technique that produces the diagnosis is called diagnostics, while the one that develops the corrective actions is called maintenance, or control. In particular, in an electrical drive connected to a load, automatic operation may require on-line closed-loop control in which an artificial-intelligence-based block interprets the load conditions and decides, on the basis of the recognized operating condition and behaviour, the control actions to apply to the motor through the power converter. Both the processing of the measured data and the control action can contain neural network components.

This thesis therefore deals with the use of neural networks for controlling and diagnosing an electrical drive, and describes some original applications. More importance has been given to the engineering and experimental aspects of these applications than to a deep theoretical approach, in order to prove the suitability of these neural network techniques in a particular domain of industrial applications.

In chapter 1 the general problems of control in electrical drives are discussed. The need for adaptive on-line control is emphasized, and a brief overview of innovative techniques, such as those based on expert systems, fuzzy logic and neural networks, is then presented.

In chapter 2 the two neural networks used by the author for the control of electrical drives are described. The first is the well-known backpropagation neural network (BPN) and the second is a new neural network, called the PLN (Progressive Learning Network). In particular, the latter is presented and compared with the former, and it is highlighted that the PLN is more suitable than the BPN for adaptive on-line real-time control, as it requires no separate training and production phases.

Chapter 3 deals with the main neuro-control techniques and their problems. The complementarity and continuity of these methods with traditional techniques are emphasized.

Chapter 4 describes the BPN-based supervised control of a stepper motor.
It is shown here that a BPN can work as a robust controller of a stepper motor and this result has been verified experimentally. A suitable test-bed has been set up where the electrical drive is supervised by a neural network hosted on a PC. Moreover a comparison with a traditional algorithm is carried out. In the end the reliability of such neural controller is verified in the presence of faults of some of its components. It is remarked that hardly any test-beds for verifying neuro-controllers of electrical drives have been realised, since most applications in this domain of electrical drives have beed mostly carried out in simulation. Chapter 5 deals with the use of neural networks for realising a controller for high-performance dc drives. The target is the control of the rotational speed so as to follow the speed reference accurately. The innovation which is presented concerns the use of the direct inverse control with generalised and specialised learning for identifying the inverse model of a DC motor with separate excitation through a BPN and a PLN. The suitability of the BPN is verified both in simulation and on an experimental test-bed even in presence of a speed variable load, resulting in a non-linear controlled system. Subsequently the PLN is applied for the on-line control based on specialised learning. It is shown that this approach can control the electrical drive without a persistent excitation, in presence even of variations of the load or of the parameters of the drive, with a noisy environment. This new neuro-controller ia capable of adapting on-line to any new working condition as it is based on a neural network varying the number of its hidden neurons to learn situations non previously encountered or to forget rare ones. Chapter 6 gives an overview of the diagnosis of electrical drives for fault protection, maintenance, fault detection and evalution of performances. After showing traditional diagnosis techniques for each component of the drive, a brief survey of future trends in this field is described. The target of this chapter is to place the technique of neural networks in the framework of the diagnosis of electrical drives. Chapter 7 describes the self-organising neural networks used for the diagnosis, that is the well-known SOM of prof. Kohonen and the more recent VQP algorithm of prof. Herault. The diagnosis is considered as a particular case of pattern recognition. Chapter 8 is dedicated to the application and implementation of the above neural networks in the diagnosis of electrical drives. In particular it is original the use of these networks for the real-time diagnosis of the working conditions of a three-phase converter and an induction motor (ac drive). In this application the VQP proved to be more suitable that Kohonen's SOM for the projection of high dimensional input data onto a reduced dimension output space, also for visualisation. The conclusions present new problems that should be faced in the future. Best Regards Maurizio Cirrincione, PhD, C.Eng. CERISEP - CNR c/o Department of Electrical Engineering University of Palermo Viale delle Scienze 90128 PALERMO ITALY tel. 0039 91 484686 fax 0039 91 485555 http://wwwcerisep.diepa.unipa.it/ From juergen at idsia.ch Fri Nov 14 04:55:40 1997 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Fri, 14 Nov 1997 10:55:40 +0100 Subject: 2 book chapters Message-ID: <199711140955.KAA10227@ruebe.idsia.ch> REINFORCEMENT LEARNING WITH SELF-MODIFYING POLICIES Juergen Schmidhuber & Jieyu Zhao & Nicol N. 
Schraudolph (IDSIA)
In S. Thrun and L. Pratt, eds., Learning to Learn, Kluwer, 1997.

We apply the success-story algorithm to a reinforcement learner whose learning algorithm has modifiable components represented as part of its own policy. This is of interest in situations where the initial learning algorithm can be improved by experience (learning to learn). The system is tested in a complex partially observable environment.

ftp://ftp.idsia.ch/pub/juergen/ssa.ps.gz

_____________________________________________________________________

A COMPUTER SCIENTIST'S VIEW OF LIFE, THE UNIVERSE, AND EVERYTHING
Juergen Schmidhuber (IDSIA)
In C. Freksa, ed., Foundations of Computer Science: Potential - Theory - Cognition, Lecture Notes in Comp. Sci., Springer, 1997.

Is the universe computable? If so, it may be much cheaper in terms of information requirements to compute all computable universes instead of just ours. I apply basic concepts of Kolmogorov complexity theory to the set of possible universes, and chat about perceived and true randomness, life, generalization, and learning in a given universe.

ftp://ftp.idsia.ch/pub/juergen/everything.ps.gz

_____________________________________________________________________

IDSIA, Corso Elvezia 36, 6900 Lugano, Switzerland www.idsia.ch

From sml%essex.ac.uk at seralph21.essex.ac.uk Fri Nov 14 08:14:38 1997
From: sml%essex.ac.uk at seralph21.essex.ac.uk (Simon Lucas)
Date: Fri, 14 Nov 1997 13:14:38 +0000
Subject: High-performance face recognition papers
Message-ID: <346C4EBE.3937@essex.ac.uk>

The following papers, describing the continuous n-tuple classifier and its application to face recognition, are available on-line from http://esewww.essex.ac.uk/~sml. The method offers high speed (it can match an unknown image against about 4,000 'face-models' per second on a PC) and high accuracy. The BMVC '97 paper gives a more complete description of the system, while the Electronics Letters paper provides a more significant set of results.

------------------------------------------------

The continuous n-tuple classifier and its application to face recognition
S.M. Lucas
Electronics Letters, v33, pp 1676-1678

Abstract: This paper describes the continuous n-tuple classifier: a new type of n-tuple classifier that is well suited to problems where the input is continuous or multi-level rather than binary. Results on a widely used face database show the continuous n-tuple classifier to be as accurate as any method reported in the literature, while having the advantages of speed and simplicity over other methods.

------------------------------------------------

Face Recognition with the continuous n-tuple classifier
S.M. Lucas
In Proceedings of the British Machine Vision Conference 1997

Abstract: Face recognition is an important field of research with many potential applications for suitably efficient systems, including biometric security and searching large face databases. This paper describes a new approach to the problem based on a new type of n-tuple classifier: the continuous n-tuple system. Results indicate that the new method is faster and more accurate than previous methods reported in the literature on the widely used Olivetti Research Laboratories face database.

Comments welcome.

Simon Lucas

------------------------------------------------

Dr.
Simon Lucas
Department of Electronic Systems Engineering
University of Essex
Colchester CO4 3SQ
United Kingdom
Tel: (+44) 1206 872935 Fax: (+44) 1206 872900
Email: sml at essex.ac.uk
http://esewww.essex.ac.uk/~sml
secretary: Mrs Wendy Ryder (+44) 1206 872437

-------------------------------------------------

From Dave_Touretzky at cs.cmu.edu Fri Nov 14 17:51:02 1997
From: Dave_Touretzky at cs.cmu.edu (Dave_Touretzky@cs.cmu.edu)
Date: Fri, 14 Nov 1997 17:51:02 -0500
Subject: graduate training in cognitive/computational neuroscience
Message-ID: <389.879547862@skinner.boltz.cs.cmu.edu>

Graduate Training with the Center for the Neural Basis of Cognition

The Center for the Neural Basis of Cognition offers interdisciplinary doctoral and postdoctoral training programs operated jointly with affiliated programs at Carnegie Mellon University and the University of Pittsburgh. Detailed information about these programs is available on our web site at http://www.cnbc.cmu.edu.

The Center is dedicated to the study of the neural basis of cognitive processes including learning and memory, language and thought, perception, attention, and planning; to the study of the development of the neural substrate of these processes; to the study of disorders of these processes and their underlying neuropathology; and to the promotion of applications of the results of these studies to artificial intelligence, robotics, and medicine.

CNBC students have access to some of the finest facilities for cognitive neuroscience research in the world: Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) scanners for functional brain imaging, neurophysiology laboratories for recording from brain slices and from anesthetized or awake, behaving animals, electron and confocal microscopes for structural imaging, high performance computing facilities including an in-house supercomputer for neural modeling and image analysis, and patient populations for neuropsychological studies.

Students are admitted jointly to a home department and the CNBC Training Program. Applications are encouraged from students with interests in biology, neuroscience, psychology, engineering, physics, mathematics, computer science, or robotics. For a brochure describing the program and application materials, contact us at the following address:

Center for the Neural Basis of Cognition
115 Mellon Institute
4400 Fifth Avenue
Pittsburgh, PA 15213
Tel. (412) 268-4000. Fax: (412) 268-5060
email: cnbc-admissions at cnbc.cmu.edu

Application materials are also available online.

The affiliated PhD programs at the two universities are: at Carnegie Mellon, Biological Sciences, Computer Science, Psychology, and Robotics; at the University of Pittsburgh, Mathematics, Neurobiology, Neuroscience, and Psychology.

The CNBC training faculty includes:

German Barrionuevo (Pitt Neuroscience): LTP in hippocampal slice
Marlene Behrmann (CMU Psychology): spatial representations in parietal cortex
Pat Carpenter (CMU Psychology): mental imagery, language, and problem solving
B.J. Casey (Pitt Psychology): attention; developmental cognitive neuroscience
Jonathan Cohen (CMU Psychology): schizophrenia; dopamine and attention
Carol Colby (Pitt Neuroscience): spatial reps.
in primate parietal cortex
Bard Ermentrout (Pitt Mathematics): oscillations in neural systems
Julie Fiez (Pitt Psychology): fMRI studies of language
John Horn (Pitt Neurobiology): synaptic plasticity in autonomic ganglia
Allen Humphrey (Pitt Neurobiology): motion processing in primary visual cortex
Marcel Just (CMU Psychology): visual thinking, language comprehension
Eric Klann (Pitt Neuroscience): hippocampal LTP and LTD
Alan Koretsky (CMU Biological Sciences): new fMRI techniques for brain imaging
Tai Sing Lee (CMU Comp. Sci.): primate visual cortex; computer vision
David Lewis (Pitt Neuroscience): anatomy of frontal cortex
James McClelland (CMU Psychology): connectionist models of cognition
Carl Olson (CNBC): spatial representations in primate frontal cortex
David Plaut (CMU Psychology): connectionist models of reading
Michael Pogue-Geile (Pitt Psychology): development of schizophrenia
John Pollock (CMU Biological Sci.): neurodevelopment of the fly visual system
Walter Schneider (Pitt Psych.): fMRI, models of attention & skill acquisition
Charles Scudder (Pitt Neurobiology): motor learning in cerebellum
Susan Sesack (Pitt Neuroscience): anatomy of the dopaminergic system
Dan Simons (Pitt Neurobiology): sensory physiology of the cerebral cortex
William Skaggs (Pitt Neuroscience): representations in rodent hippocampus
David Touretzky (CMU Comp. Sci.): hippocampus, rat navigation, animal learning

See http://www.cnbc.cmu.edu for further details.

From nkasabov at commerce.otago.ac.nz Tue Nov 18 01:19:49 1997
From: nkasabov at commerce.otago.ac.nz (Nikola Kasabov)
Date: Mon, 17 Nov 1997 18:19:49 -1200
Subject: A new book on brain-like computing and intelligent systems
In-Reply-To: <389.879547862@skinner.boltz.cs.cmu.edu>
Message-ID: <27707066429@jupiter.otago.ac.nz>

Springer Verlag 1997

BRAIN-LIKE COMPUTING AND INTELLIGENT INFORMATION SYSTEMS

edited by S. Amari, RIKEN, Japan, and N. Kasabov, University of Otago, New Zealand

Contents:

PART I. COMPUTER VISION AND IMAGE PROCESSING: Active Vision: Neural Network Models (K. Fukushima); Image Recognition by Brains and Machines (E. Postma et al.); The Properties and Training of a Neural Network Based Universal Window Filter Developed for Image Processing Tasks (R.H. Pugmire et al.);

PART II. SPEECH RECOGNITION AND LANGUAGE PROCESSING: A Computational Model of the Auditory Pathway to the Superior Colliculus (R. J. W. Wang and M. Jabri); A Framework for Intelligent 'Conscious' Machines Utilising Fuzzy Neural Networks and Spatio-Temporal Maps and a Case Study of Multilingual Speech Recognition (N. Kasabov);

PART III. DYNAMIC SYSTEMS: STATISTICAL AND CHAOS MODELLING. BLIND SOURCE SEPARATION: Noise-Mediated Cooperative Behavior in Integrate-Fire Models of Neuron Dynamics (A.R. Bulsara); Blind Source Separation - Mathematical Foundations (S. Amari); Neural Independent Component Analysis - Approaches and Applications (E. Oja et al.); General Regression Techniques Based on Spherical Kernel Functions for Intelligent Processing (A. Zankich and Y. Attikiouzel); Chaos and Fractal Analysis of Irregular Time Series Embedded in a Connectionist Structure (R. Kozma and N. Kasabov);

PART IV. LEARNING SYSTEMS AND EVOLUTIONARY COMPUTATION: Bayesian Ying-Yang System and Theory as a Unified Statistical Learning Approach (I): Unsupervised and Semi-Unsupervised Learning (Lei Xu); Evolutionary Computation: An Introduction, Some Current Applications, and Future Directions (D. B. Fogel); Biologically Inspired New Operations for Genetic Algorithms (A. Ghosh and S. K. Pal).

PART V.
ADAPTIVE LEARNING FOR NAVIGATION, CONTROL AND DECISION MAKING:

From nls at ra.abo.fi Sat Nov 15 10:21:26 1997
From: nls at ra.abo.fi (Nonlinear Solutions UPR)
Date: Sat, 15 Nov 1997 17:21:26 +0200 (EET)
Subject: EANN 98 CFP
Message-ID: <199711151521.RAA13300@aton.abo.fi>

International Conference on Engineering Applications of Neural Networks (EANN '98)
Gibraltar, 10-12 June 1998

First Call for Papers

The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biomedical systems, and environmental engineering.

Abstracts of one page (about 500 words) should be sent to eann98 at ctima.uma.es or NLS at abo.fi by 21 January 1998 by e-mail in plain text format. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Notification of acceptance will be sent around 15 February. Submissions will be reviewed and the number of full papers will be very limited. For information on earlier EANN conferences see the www pages at http://www.abo.fi/~abulsari/EANN97.html

Proposals for special tracks are welcome. Among the special features of EANN '98 is a session on the Japanese state of the art in engineering applications of neural networks. Please contact Prof. Iwata (iwata at elcom.nitech.ac.jp) if you are interested. An industrial panel discussion may also be organised at this conference if there are several participants from industry.

Organising committee (to be extended)
A. Bulsari, Nonlinear Solutions Oy, Finland
J. Fernandez de Canete, University of Malaga, Spain
J. Heikkonen, Helsinki University of Technology, Finland
A. Ruano, University of Algarve, Portugal
D. Tsaptsinos, Kingston University, UK
E. Tulunay, Middle East Technical University, Turkey
P. Zufiria, Polytechnic University of Madrid, Spain

International program committee (to be confirmed, extended)
C. Andersson, Colorado State University, USA
G. Baier, University of Tubingen, Germany
R. Baratti, University of Cagliari, Italy
S. Canu, Compiegne University of Technology, France
S. Cho, Pohang University of Science and Technology, Korea
T. Clarkson, King's College, UK
J. DeMott, John Mansfield Corporation, USA
S. Draghici, Wayne State University, USA
W. Duch, Nicholas Copernicus University, Poland
G. Forsgren, Stora Corporate Research, Sweden
P. Gallinari, University of Paris VI, France
I. Grabec, University of Ljubljana, Slovenia
A. Iwata, Nagoya Institute of Technology, Japan
C. Kuroda, Tokyo Institute of Technology, Japan
H. Liljenstrom, Royal Institute of Technology, Sweden
L. Ludwig, University of Tubingen, Germany
T. Nagano, Hosei University, Japan
L. Niklasson, University of Skovde, Sweden
F. Norlund, ABB Industrial Systems, Sweden
R. Parenti, Ansaldo Ricerche, Italy
V. Petridis, Aristotle University of Thessaloniki, Greece
R. Rico-Martinez, Celaya Institute of Technology, Mexico
J. Sa da Costa, Technical University of Lisbon, Portugal
F. Sandoval, University of Malaga, Spain
C.
Schizas, University of Cyprus, Cyprus

Electronic mail is not absolutely reliable, so if you have not heard from the conference secretariat after sending your abstract, please contact us again. You should receive an abstract number within a couple of days of submission.

From priel at alon.cc.biu.ac.il Mon Nov 17 08:05:23 1997
From: priel at alon.cc.biu.ac.il (Avner Priel)
Date: Mon, 17 Nov 1997 15:05:23 +0200 (WET)
Subject: New papers on time series generation
Message-ID:

Dear Connectionists,

This is to announce the availability of 2 new papers on the subject of time series generation by feed-forward networks. The first paper will appear in the Journal of Physics A and the second in the NIPS-11 proceedings. The papers are available from my home-page: http://faculty.biu.ac.il/~priel/

Comments are welcome. *************** NO HARD COPIES ******************

----------------------------------------------------------------------

Noisy time series generation by feed-forward networks
A Priel, I Kanter and D A Kessler
Department of Physics, Bar Ilan University, 52900 Ramat Gan, Israel

ABSTRACT: We study the properties of a noisy time series generated by a continuous-valued feed-forward network in which the next input vector is determined from past output values. Numerical simulations of a perceptron-type network exhibit the expected broadening of the noise-free attractor, without changing the attractor dimension. We show that the broadening of the attractor due to the noise scales with the size of the system, $N$, as $1/\sqrt{N}$. We show both analytically and numerically that the diffusion constant for the phase along the attractor scales inversely with $N$. Hence, phase coherence holds up to a time that scales linearly with the size of the system. We find that the mean first passage time, $t$, to switch between attractors depends on $N$ and the reduced distance from bifurcation, $\tau$, as $t = a {N \over \tau} \exp(b \tau N^{1/2})$, where $b$ is a constant which depends on the amplitude of the external noise. This result is obtained analytically for small $\tau$ and confirmed by numerical simulations.

Analytical study of the interplay between architecture and predictability
Avner Priel, Ido Kanter, D.A. Kessler
Minerva Center and Department of Physics, Bar Ilan University, Ramat-Gan 52900, Israel.

ABSTRACT: We study model feed-forward networks as time series predictors in the stationary limit. The focus is on complex, yet non-chaotic, behavior. The main question we address is whether the asymptotic behavior is governed by the architecture, regardless of the details of the weights. We find hierarchies among classes of architectures with respect to the attractor dimension of the long-term sequence they are capable of generating; a larger number of hidden units can generate higher-dimensional attractors. In the case of a perceptron, we develop the stationary solution for a general weight vector, and show that the flow is typically one-dimensional. The relaxation time from an arbitrary initial condition to the stationary solution is found to scale linearly with the size of the network. In multilayer networks, the number of hidden units gives bounds on the number and dimension of the possible attractors. We conclude that long-term prediction (in the non-chaotic regime) with such models is governed by attractor dynamics related to the architecture.
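Readers who want to experiment with the kind of generator studied in the two abstracts above can start from the following minimal Python sketch: a perceptron-type network whose scalar output is fed back into a delay-line input vector, with optional external noise added to the output. This is only an illustration; the tanh nonlinearity, the uniform initial condition, and the random weight vector are assumptions of the sketch, not details taken from the papers.

----------------------------------------------------------------------
import numpy as np

def generate_series(w, n_steps, noise_amplitude=0.0, seed=0):
    """Iterate a perceptron-type generator: the output y_t = tanh(w . x_t)
    is shifted into the input delay line to form the next input x_{t+1}."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=len(w))  # arbitrary initial delay line
    series = []
    for _ in range(n_steps):
        y = np.tanh(np.dot(w, x))
        if noise_amplitude > 0.0:
            y += noise_amplitude * rng.normal()  # external noise broadens the attractor
        x = np.roll(x, 1)
        x[0] = y                                 # feedback: newest output becomes newest input
        series.append(y)
    return np.array(series)

# Example: a system of size N = 50 with weak output noise; the abstracts
# study how attractor broadening and phase diffusion scale with N.
w = np.random.default_rng(1).normal(size=50)
s = generate_series(w, n_steps=2000, noise_amplitude=0.01)
----------------------------------------------------------------------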
----------------------------------------------------

Priel Avner < priel at mail.cc.biu.ac.il > < http://faculty.biu.ac.il/~priel >
Department of Physics, Bar-Ilan University. Ramat-Gan, 52900. Israel.

From caironi at elet.polimi.it Tue Nov 18 08:00:25 1997
From: caironi at elet.polimi.it (Pierguido V.C. CAIRONI)
Date: Tue, 18 Nov 1997 14:00:25 +0100
Subject: Technical Report Available
Message-ID: <34719169.15A9BA6@elet.polimi.it>

Please accept my apologies if you receive multiple copies of this message. The following technical report is available on the web at the page: http://www.elet.polimi.it/~caironi/listpub.html or directly at: ftp://www.elet.polimi.it/pub/data/Pierguido.Caironi/tr97_50.ps.gz

-----------------------------------------------------------------------

Gradient-Based Reinforcement Learning: Learning Combinations of Control Policies

Pierguido V.C. Caironi
email: caironi at elet.polimi.it
Technical Report 97.50
Dipartimento di Elettronica e Informazione
Politecnico di Milano

Abstract

This report presents two innovative reinforcement learning algorithms for continuous state-action environments: Gradient REinforceMent LearnINg for Multiple control policies (GREMLIN-M) and Gradient REinforceMent LearnINg for Multiple and Single control policies (GREMLIN-MS). The two algorithms learn optimal combinations of control policies for autonomous agents. GREMLIN-M learns an optimal combination of fixed base control policies. GREMLIN-MS extends GREMLIN-M by enabling the agent to learn the base control policies simultaneously as well. GREMLIN-M and GREMLIN-MS optimize a performance function equal to the sum of the expected reinforcements in a sliding temporal window of finite length. The optimization is carried out through gradient ascent with respect to the parameter values of the control functions. While being natural extensions of previously existing supervised learning algorithms, GREMLIN-M and GREMLIN-MS improve the current state of the art of reinforcement learning by taking into account the temporal credit assignment problem for the on-line and real-time combination of control policies. Furthermore, GREMLIN-M and GREMLIN-MS lend themselves to a motivational interpretation: the combination function resulting from learning may be seen as a representation of the motivations to apply any single base control policy in different environmental conditions.

--
Name: Pierguido V. C. CAIRONI
Job: Ph.D. Student at the Politecnico di Milano - ITALY
e-mail: caironi at elet.polimi.it
Address: Politecnico di Milano - Dip. di Elettronica e Informazione, Piazza Leonardo da Vinci 32, 20133 - Milano - ITALY
Tel: +39-2-23993622 Fax: +39-2-23993411
WWW: http://www.elet.polimi.it/~caironi

From Kim.Plunkett at psy.ox.ac.uk Tue Nov 18 13:34:35 1997
From: Kim.Plunkett at psy.ox.ac.uk (Kim Plunkett)
Date: Tue, 18 Nov 1997 18:34:35 GMT
Subject: No subject
Message-ID: <199711181834.SAA22833@pegasus.psych.ox.ac.uk>

To users of the tlearn neural network simulator:

Version 1.0.1 of the tlearn simulator is now available for ftp at two sites:

ftp://ftp.psych.ox.ac.uk/pub/tlearn/ (Old World users)
ftp://crl.ucsd.edu/pub/neuralnets/tlearn (New World users)

This version mostly involves bug fixes to the earlier version. A complete user manual for the software plus a set of tutorial exercises is available in: Plunkett and Elman (1997) "Exercises in Rethinking Innateness: A Handbook for Connectionist Simulations". MIT Press.
For WWW access, the San Diego tlearn page is http://crl.ucsd.edu/innate/tlearn.html This contains a link to the directory containing the binaries: ftp://crl.ucsd.edu/pub/neuralnets/tlearn Then click on the filename(s) to download. For direct ftp/fetch access via anonymous login: - ftp/fetch to crl.ucsd.edu (132.239.63.1) - login anonymous/email address - cd pub/neuralnets/tlearn At the Oxford site: ftp://ftp.psych.ox.ac.uk/pub/tlearn/wintlrn1.0.1.zip is a zip archive that contains the windows 95 tlearn executable. ftp://ftp.psych.ox.ac.uk/pub/tlearn/wintlrn.zip is a link that always points to the latest version, in this case wintlrn1.0.1.zip. The mac version is in the following location: ftp://ftp.psych.ox.ac.uk/pub/tlearn/mac_tlearn_1.0.1.sea.hqx From stephen at cns.ed.ac.uk Tue Nov 18 13:50:02 1997 From: stephen at cns.ed.ac.uk (Stephen Eglen) Date: Tue, 18 Nov 1997 18:50:02 GMT Subject: PhD thesis on the development of the retinogeniculate pathway Message-ID: <199711181850.SAA18103@mango.cns.ed.ac.uk> The following PhD thesis is available from http://www.cns.ed.ac.uk/people/stephen/pubs.html Modelling the development of the retinogeniculate pathway Stephen Eglen How does the visual system develop before the onset of visually-driven activity? By the time photoreceptors can respond to visual stimulation, some pathways, including the retinogeniculate pathway, have already reached a near-adult form. This rules out visually-driven activity guiding pathway development. During this period however, spontaneous waves of activity travel across the retina, correlating the activity of neighbouring retinal cells. Activity-dependent mechanisms can exploit these correlations to guide retinogeniculate refinement. In this thesis I investigate, by means of computer simulation, the role of spontaneous retinal activity upon the development of ocular dominance and topography in the retinogeniculate pathway. Keesing, Stork and Shatz (1992) produced an initial model of retinogeniculate development driven by retinal waves. In this thesis, in addition to replicating their initial results, several new results are presented. First, the importance of presynaptic normalisation is highlighted. This is in contrast to most previous work on ocular dominance requiring postsynaptic normalisation. Second, the covariance rule is adapted so that development can occur under conditions of sparse input activity. Third, the model is shown to replicate development under conditions of monocular deprivation. Fourth, model development is analysed using different spatio-temporal inputs including anticorrelations between on- and off-centre retinal units. The layered pattern of ocular dominance in the LGN is quite different to the stripe patterns found in the cortex. The factors controlling the patterns of ocular dominance are investigated using a feature-based model of map formation (Obermayer, Ritter, & Schulten, 1991). In common with other models, variance of the ocularity feature controls the pattern of stripes. The model is extended to a three-dimensional output array to show that ocular dominance layers form in this model, and that the retinotopic maps are organised into projection columns. Future work involves extending this three-dimensional model to receive retinal-based, rather than feature-based, inputs. 
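Two of the ingredients highlighted in the thesis abstract above, the covariance learning rule and presynaptic weight normalisation, are easy to misread in prose, so here is a minimal Python sketch of how they might look in code. Everything concrete in it (the learning rate, the sparse binary input, the clipping to excitatory weights) is an assumption of the sketch, not a detail taken from the thesis.

----------------------------------------------------------------------
import numpy as np

def covariance_update(W, pre, post, pre_mean, post_mean, lr=0.01):
    """Covariance-style Hebbian update: weights grow when pre- and
    postsynaptic activity deviate from their mean levels in the same
    direction. W[i, j] connects presynaptic unit j to postsynaptic unit i."""
    return W + lr * np.outer(post - post_mean, pre - pre_mean)

def presynaptic_normalisation(W, total=1.0):
    """Keep the total outgoing weight of each presynaptic unit constant
    (column-wise), in contrast to the more common postsynaptic (row-wise)
    normalisation over each target cell's incoming weights."""
    col_sums = W.sum(axis=0)
    col_sums[col_sums == 0] = 1.0      # avoid division by zero
    return W * (total / col_sums)

# Toy usage: 20 retinal inputs projecting to 10 geniculate units, with
# sparse wave-like input activity (5% of units active at a time).
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 0.1, size=(10, 20))
pre = rng.binomial(1, 0.05, size=20).astype(float)
post = W @ pre
W = covariance_update(W, pre, post, pre_mean=0.05, post_mean=post.mean())
W = np.clip(W, 0.0, None)              # keep weights excitatory
W = presynaptic_normalisation(W)
----------------------------------------------------------------------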
From wsenn at iam.unibe.ch Tue Nov 18 10:07:54 1997
From: wsenn at iam.unibe.ch (Walter Senn)
Date: Tue, 18 Nov 1997 16:07:54 +0100
Subject: Coupled oscillations induced by synaptic depression
Message-ID: <9711181507.AA04009@barney.unibe.ch>

New paper available (to appear in Neural Computation):

PATTERN GENERATION BY TWO COUPLED TIME-DISCRETE NEURAL NETWORKS WITH SYNAPTIC DEPRESSION
W. Senn, Th. Wannier, J. Kleinle, H.-R. Luescher, L. Mueller, J. Streit, K. Wyler

Spinal pattern generators are networks which produce rhythmic contractions alternating between different groups of muscles involved, e.g., in locomotion. These networks are generally thought to rely on pacemaker cells or well-designed circuits consisting of inhibitory and excitatory neurons. Recent experiments in organotypic cultures of embryonic rat spinal cord, however, have shown that neuronal networks with random and purely excitatory connections may oscillate as well, even without pacemaker cells. The cause of these oscillations was identified as a fast depression of the activated synapses. In this theoretical study, we explore the dynamical behavior that emerges when two random excitatory networks with synaptic depression are weakly coupled. We discuss a time-discrete mean field model describing the average activity and the average synaptic depression of the two networks. As a mathematical tool we adapt the Average Phase Difference (APD) theory, originally developed for flows, to the present case of maps. Depending on the parameter values of the depression, one may predict whether the oscillations will be in-phase, anti-phase, quasiperiodic or phase-trapped. We put forward the hypothesis that pattern generators may rely on activity-dependent tuning of the synaptic depression.

The manuscript (262 KB) can be downloaded from: http://iamwww.unibe.ch:80/~brainwww/publications/pub_walter.html

From weaveraj at helios.aston.ac.uk Wed Nov 19 11:06:33 1997
From: weaveraj at helios.aston.ac.uk (Andrew Weaver)
Date: Wed, 19 Nov 1997 16:06:33 +0000
Subject: Studentship, Aston University, UK
Message-ID: <194.199711191606@sun.aston.ac.uk>

Neural Computing Research Group, Aston University, Birmingham, UK

We would like to invite applications for a TMR postgraduate grant (eligibility criteria mean that only non-UK EU nationals can apply) for study in the following areas:

Neural Nets for Control: Advancing the Theory (ITN)
Statistical mechanics of error-correcting codes (DS)
Average performance of support vector machines (DS)
Image Understanding with Probabilistic Models (CKIW)
Non-linear Signal Processing by Neural Networks (DL)
Synergetics for computational networks (DL)

The student would be assisted with their application to the EU for funding for a three-year project leading to a postgraduate qualification. Should the EU application be unsuccessful, there may be a possibility of awarding a Divisional Studentship. Applicants should have been (or expect to be) awarded an undergraduate degree in a numerate discipline.

Please email (text only) a *brief* CV, including your name, address, email, nationality, qualifications (including marks) and any publications you may have, together with a description (the equivalent of one side of A4) of how you would approach research in one of the above topics (not forgetting the topic title!), to ncrg at aston.ac.uk by 9.00am on Thursday 27th November 1997. The closing date for TMR applications is 15th December 1997, so deadlines are tight.
Further details of the Research Group and some of the above projects can be found at http://www.ncrg.aston.ac.uk/ Further details of the TMR programme can be found at http://www.cordis.lu/tmr/home.html

From hali at theophys.kth.se Wed Nov 19 13:52:39 1997
From: hali at theophys.kth.se (Hans Liljenstrom)
Date: Wed, 19 Nov 1997 19:52:39 +0100
Subject: Workshop on Hippocampal Modeling
Message-ID: <34733577.D88@theophys.kth.se>

WORKSHOP ON HIPPOCAMPAL MODELING
January 9-11, 1998 at Agora for Biosystems, Sigtuna, Sweden

There are many modeling efforts regarding the hippocampus and its functions, in particular its learning capacities. In this meeting we will review some of these models and discuss their biological relevance, and in what direction future modeling efforts should go. The workshop is intended to promote an active dialogue between modelers and experimentalists. Some oral presentations will be given, but the main focus is on formal and informal discussions. Invited participants include Per Andersen (Oslo), Michael Hasselmo (Harvard), William Levy (Charlottesville), Sakire Pogun (Izmir), and Sven-Ove Ogren (Stockholm).

The program and additional information will be available on the following web site: http://www.theophys.kth.se/~hali/agora/hippoc.html

We welcome interested modellers and experimentalists to make early registrations with the organizers (the total number of participants is limited to 25). Please also indicate whether you would like to give a short oral presentation.

Hans Liljenstrom
Agora for Biosystems
Box 57, Sigtuna, Sweden
Phone: +46-(0)8-592 50901 / +46-(0)8-790 7172
Email: hali at theophys.kth.se

From bakker at research.nj.nec.com Wed Nov 19 14:52:59 1997
From: bakker at research.nj.nec.com (Rembrandt Bakker)
Date: Wed, 19 Nov 1997 14:52:59 -0500
Subject: TR on learning chaotic dynamics
Message-ID: <9711191452.ZM325@heavenly.nj.nec.com>

The following manuscript (7 pages) is now available at the WWW sites listed below:

www.neci.nj.nec.com/homepages/bakker/UMD-CS-TR-3843.ps.gz
www.neci.nj.nec.com/homepages/giles/papers/UMD-CS-TR-3843.neural.learning.chaotic.dynamics.ps.Z

We apologize in advance for any multiple postings that may occur.

***********************************************************************

Neural Learning of Chaotic Dynamics: The Error Propagation Algorithm

Rembrandt Bakker(1), Jaap C. Schouten(1), C. Lee Giles(2,3), C.M. van den Bleek(1)

(1) Delft University of Technology, Dept. Chemical Process Technology, Julianalaan 136, 2628 BL Delft, The Netherlands.
(2) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA.
(3) Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA.

ABSTRACT

An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time-series. The algorithm has four special features: 1. The state of the system is extracted from the time-series using delays, followed by weighted Principal Component Analysis (PCA) data reduction. 2. The prediction model consists of both a linear model and a Multi-Layer Perceptron (MLP). 3. The effective prediction horizon during training is user-adjustable, due to error propagation: prediction errors are partially propagated to the next time step. 4. A criterion is monitored during training to select the model that has a chaotic attractor most similar to the real system's attractor. The algorithm is applied to laser data from the Santa Fe time-series competition (set A).
The resulting model is not only useful for short-term predictions, but it also generates time-series with chaotic characteristics similar to those of the measured data.

Keywords - time series, neural networks, chaotic dynamics, laser data, Santa Fe time series competition, Lyapunov exponents, principal component analysis, error propagation.

--
Rembrandt Bakker
r.bakker at stm.tudelft.nl
http://www.neci.nj.nec.com/homepages/bakker

From skalak at us.ibm.com Wed Nov 19 15:11:05 1997
From: skalak at us.ibm.com (David Skalak)
Date: Wed, 19 Nov 1997 15:11:05 -0500
Subject: Ph.D. thesis: combining nearest neighbor classifiers
Message-ID: <5010400014648620000002L002*@MHS>

The following Ph.D. dissertation is available:

Prototype Selection for Composite Nearest Neighbor Classifiers
David B. Skalak
Dept. of Computer Science
University of Massachusetts
Amherst, MA 01003-4610

The dissertation and other publications can be obtained from my homepage: http://www.cs.cornell.edu/Info/People/skalak

The dissertation can also be retrieved directly from http://www.cs.cornell.edu/Info/People/skalak/thesis-header-dspace.ps.gz

Keywords: classifier combination, ensemble methods, stacked generalization, boosting, accuracy-diversity graphs, classifier sampling, k-nearest neighbor, prototype selection, reference selection, editing algorithms, instance taxonomy, coarse classification, deliberate misclassification

Abstract:

Combining the predictions of a set of classifiers has been shown to be an effective way to create composite classifiers that are more accurate than any of the component classifiers. Increased accuracy has been shown in a variety of real-world applications, ranging from protein sequence identification to determining the fat content of ground meat. Despite such individual successes, the answers to fundamental questions about classifier combination are not known, such as ``Can classifiers from any given model class be combined to create a composite classifier with higher accuracy?'' or ``Is it possible to increase the accuracy of a given classifier by combining its predictions with those of only a small number of other classifiers?''. The goal of this dissertation is to provide answers to these and closely related questions with respect to a particular model class, the class of nearest neighbor classifiers. We undertake the first study that investigates in depth the combination of nearest neighbor classifiers. Although previous research has questioned the utility of combining nearest neighbor classifiers, we introduce algorithms that combine a small number of component nearest neighbor classifiers, where each of the components stores a small number of prototypical instances. In a variety of domains, we show that these algorithms yield composite classifiers that are more accurate than a nearest neighbor classifier that stores all training instances as prototypes. The research presented in this dissertation also extends previous work on prototype selection for an independent nearest neighbor classifier. We show that in many domains, storing a very small number of prototypes can provide classification accuracy greater than or equal to that of a nearest neighbor classifier that stores all training instances. We extend previous work by demonstrating that algorithms that rely primarily on random sampling can effectively choose a small number of prototypes.

Regards, David.

David B. Skalak, Ph.D.
Senior Data Mining Analyst
IBM Global Business Intelligence Solutions
IBM tie-line: 320-9774; Telephone: 607 257-5510
Internet: skalak at us.ibm.com

From michael.haft at mchp.siemens.de Thu Nov 20 05:48:27 1997
From: michael.haft at mchp.siemens.de (Michael Haft)
Date: Thu, 20 Nov 1997 11:48:27 +0100 (MET)
Subject: Model-Independent Mean Field Theory for Approximate Inference
Message-ID: <199711201048.LAA25254@yin.mchp.siemens.de>

The following paper on approximate propagation of information is available online:

Model-Independent Mean Field Theory as a Local Method for Approximate Propagation of Information
Michael Haft, Reimar Hofmann and Volker Tresp
SIEMENS AG, Corporate Technology

Abstract

We present a systematic approach to mean field theory in a general probabilistic setting, without assuming a particular model and avoiding physical notation. The mean field equations derived here may serve as a {\em local} and thus very simple method for approximate inference in graphical models. In general, there are multiple solutions to the mean field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean field solutions. We derive simple approximate expressions for the mixture weights, which can also be obtained by means of only {\em local} computations. The benefits of taking into account multiple solutions are demonstrated by using mean field theory for inference in a small `noisy-or network'.

The paper is available online from: http://www7.informatik.tu-muenchen.de/~hofmannr/mf_abstr.html

Comments are welcome. A modified version of this paper has been submitted for publication.

___________________________________________________________________________

Michael Haft
ZT IK 4, Siemens AG, CR & D
81730 Muenchen, Germany
Email: Michael.Haft at mchp.siemens.de
Tel: +49/89/636-47953 Fax: +49/89/636-49767

From georg at ai.univie.ac.at Thu Nov 20 10:41:54 1997
From: georg at ai.univie.ac.at (Georg Dorffner)
Date: Thu, 20 Nov 1997 16:41:54 +0100
Subject: 2 papers: Application of Bayesian inference in NN
Message-ID: <34745A42.500F9F30@ai.univie.ac.at>

Dear connectionists, the following two papers are available at the WWW sites listed below.

----------

Experiences with Bayesian learning in a real-world application
Sykacek P., Dorffner G., Rappelsberger P., Zeitlhofer J.
to appear in: Advances in Neural Information Processing Systems, Vol. 10, 1998.
http://www.ai.univie.ac.at/~georg/papers/nips97.ps.Z

Abstract: This paper reports on an application of Bayesian-inferred neural network classifiers to the field of automatic sleep staging. The reason for using Bayesian learning for this task is two-fold. First, Bayesian inference is known to embody regularization automatically. Second, a side effect of Bayesian learning leads to larger variance of network outputs in regions without training data. This results in well-known moderation effects, which can be used to detect outliers. In a 5-fold cross-validation experiment the full Bayesian solution was not better than a single maximum a-posteriori (MAP) solution found with D.J. MacKay's evidence approximation (see MacKay 1992). In a second experiment we studied the properties of both solutions in rejecting classification of movement artefacts.

-----------

Evaluating confidence measures in a neural network based sleep stager
Sykacek P., Dorffner G., Rappelsberger P., Zeitlhofer J.
Austrian Research Institute for Artificial Intelligence, Vienna, Technical Report TR-97-21, 1997; submitted for publication
http://www.ai.univie.ac.at/~georg/papers/ieee.ps.Z

Abstract: In this paper we report on an extensive investigation of neural networks -- multilayer perceptrons (MLP), in particular -- in the task of automatic sleep staging based on electroencephalogram (EEG) and electrooculogram (EOG) signals. After the important first step of preprocessing and feature selection (for which a search-based selection technique could reduce the large number of features to a feature vector of size ten), the main focus was on evaluating the use of so-called ``doubt-levels'' and ``confidence intervals'' (``error bars'') in improving the results by rejecting uncertain cases and patterns not well represented by the training set. The main technique used here is that of Bayesian inference to arrive at probability distributions of network weights based on training data. We compare the results of the full-blown Bayesian method with a reduced method calculating only the maximum posterior solution, and with an MLP trained with the more common gradient descent technique for minimizing an error measure (``backpropagation''). The results show that, while both the full-blown Bayesian technique and the maximum posterior solution significantly outperform the simpler backpropagation technique, only the application of doubt-levels to reject uncertain cases can lead to an improvement of results. Our conclusion is that the data set of five independent all-night recordings from five normal subjects represents the possible data space well enough. At the same time, we show that Bayesian inference, for which we have developed a useful extension for reliable calculation of error bars, can still be used for the rejection of artefacts.

-------------------

This work was done in the context of the concerted action ANNDEE (http://www.ai.univie.ac.at/oefai/nn/anndee), investigating neural networks in EEG processing, sponsored by the European Union and the Austrian Federal Ministry of Science and Transport.

--------------------

Georg Dorffner
Austrian Research Institute for Artificial Intelligence
Neural Networks Group
Schottengasse 3, A-1010 Vienna, Austria
http://www.ai.univie.ac.at/oefai/nn

From cjcb at molson.ho.lucent.com Thu Nov 20 15:55:18 1997
From: cjcb at molson.ho.lucent.com (Chris Burges)
Date: Thu, 20 Nov 1997 15:55:18 -0500
Subject: Two Announcements on Support Vector Machines
Message-ID: <199711202055.PAA13835@cottontail.dsp>

TUTORIAL: The following paper is available at http://svm.research.bell-labs.com/SVMdoc.html

A Tutorial on Support Vector Machines for Pattern Recognition
C.J.C. Burges, Bell Laboratories, Lucent Technologies
Invited Paper for Database Mining and Knowledge Discovery

The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are non-linear in the data.
We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels; we then show how SVMs nevertheless provide a natural mechanism for implementing structural risk minimization, often resulting in good generalization performance. Finally, we discuss the various bounds on the generalization performance of SVMs. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.

* * *

SUPPORT VECTOR MACHINE WEB PAGE: The following page allows users to submit their data and have a support vector machine (SVM) trained automatically on that data. The results, as well as automatically generated ANSI C code which instantiates their classifier, are then available to them via FTP. There are also other resources available (for example, an Applet which allows users to play with two-dimensional SVM pattern recognition). The URL is: http://svm.research.bell-labs.com

From nimzo at cerisep1.diepa.unipa.it Sat Nov 22 12:33:02 1997
From: nimzo at cerisep1.diepa.unipa.it (Maurizio Cirrincione)
Date: Sat, 22 Nov 1997 18:33:02 +0100
Subject: Abstract of PhD thesis about NN and electrical drives
Message-ID:

Dear Connectionists,

Due to a crash of my system, I have lost the addresses of all those who had asked me for a hard copy of my PhD thesis. I therefore ask these people to send me an email again. As I have already written, I am currently translating the thesis from Italian into English, which means that I will not be able to send a hard copy earlier than Christmas. My email address is nimzo at vega.diepa.unipa.it

Best regards, and thank you for your attention and help.

Maurizio Cirrincione, PhD, C.Eng.
CERISEP - CNR c/o Department of Electrical Engineering
University of Palermo
Viale delle Scienze, 90128 PALERMO, ITALY
tel. 0039 91 484686 fax 0039 91 485555
http://wwwcerisep.diepa.unipa.it/

From niebur at russell.mb.jhu.edu Mon Nov 24 13:08:30 1997
From: niebur at russell.mb.jhu.edu (Ernst Niebur)
Date: Mon, 24 Nov 1997 13:08:30 -0500
Subject: Graduate studies in systems and computational neuroscience at Johns Hopkins University
Message-ID: <199711241808.NAA28289@russell.mb.jhu.edu>

The Johns Hopkins University is a major private research university and its hospital and medical school have been rated consistently as the first or second in the nation. The Zanvyl Krieger Mind/Brain Institute at Johns Hopkins encourages students with interest in systems neuroscience, including computational neuroscience, to apply for the graduate program in the Neuroscience department. The Institute is an interdisciplinary research center devoted to the investigation of the neural mechanisms of mental function and particularly to the mechanisms of perception: How is complex information represented and processed in the brain, how is it stored and retrieved, and which brain centers are critical for these operations? Research opportunities exist in all of the laboratories of the Institute. Interdisciplinary projects, involving the student in more than one laboratory, are particularly encouraged. All students accepted to the PhD program of the Neuroscience department receive full tuition remission plus a stipend at or above the National Institutes of Health predoctoral level.
Additional information on the research interests of the faculty in the Mind/Brain Institute and the Department of Neuroscience can be obtained at http://russell.mb.jhu.edu/mbi.html and at http://www.med.jhu.edu/neurosci/welcome.html, respectively. Applicants should have a B.S. or B.A. with a major in any of the biological or physical sciences. Applicants are required to take the Graduate Record Examination (GRE; both the aptitude tests and an advanced test), or the Medical College Admission Test. Further information on the admission procedure can be obtained from the Department of Neuroscience: Director of Graduate Studies Neuroscience Training Program Department of Neuroscience The Johns Hopkins University School of Medicine 725 Wolfe Street Baltimore, MD 21205 Completed applications (including three letters of recommendation and either GRE scores or Medical College Admission Test scores) must be received by January 1, 1998 at the above address. From Peter.Bartlett at keating.anu.edu.au Tue Nov 25 02:29:35 1997 From: Peter.Bartlett at keating.anu.edu.au (Peter Bartlett) Date: Tue, 25 Nov 1997 18:29:35 +1100 (EST) Subject: COLT98 call for papers Message-ID: <199711250729.SAA17927@reid.anu.edu.au> CALL FOR PAPERS: COLT '98 Eleventh Annual Conference on Computational Learning Theory University of Wisconsin-Madison July 24-26, 1998 The Eleventh Annual Conference on Computational Learning Theory (COLT '98) will be held at the University of Wisconsin-Madison from Friday, July 24 through Sunday, July 26, 1998. The conference will be co-located with the Fifteenth International Conference on Machine Learning (ICML '98) and the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI '98). Registrants to any of COLT, ICML, or UAI will be allowed to attend, without additional costs, the technical sessions of the other two conferences. Joint invited speakers, poster session, and a panel session are planned for the three conferences. The conferences will be directly followed by the Fifteenth National Conference on Artificial Intelligence (AAAI '98). The AAAI tutorial and workshop program will be held the day after the co-located conferences (Monday, July 27), and we anticipate that this program will include workshops and tutorials in the machine learning area. On the same day, UAI will offer a full day course on uncertain reasoning. There will be six other AI-related conferences held in Madison around this time. We invite papers in all areas that relate directly to the analysis of learning algorithms and the theory of machine learning. Some of the issues and topics that have been addressed in the past include: * design and analysis of learning algorithms; * sample and computational complexity of learning specific model classes; * frameworks modeling the interaction between the learner, teacher and the environment (such as learning with queries, learning control policies and inductive inference); * learning using complex models (such as neural networks and decision trees); * learning with minimal prior assumptions (such as mistake-bound models, universal prediction, and agnostic learning). We strongly encourage submissions from all disciplines engaged in research on these and related questions. Examples of such fields include computer science, statistics, information theory, pattern recognition, statistical physics, inductive logic programming, information retrieval and reinforcement learning. 
We also encourage the submission of papers describing experimental results that are supported by theoretical analysis. EXTENDED ABSTRACT SUBMISSION: Authors are encouraged to submit their extended abstracts electronically. Instructions for electronic submissions can be obtained by sending email to colt98 at anu.edu.au with subject "help". Alternatively, authors may submit fourteen copies (preferably two-sided) of an extended abstract to: Peter Bartlett -- COLT '98 Department of Systems Engineering RSISE Building 115 Australian National University Canberra 0200 Australia Telephone (for express mail): +61 2 6279 8681 Extended abstracts (whether hard-copy or electronic) must be received by 5:00pm Canberra time (= 1:00am Eastern Time) on FRIDAY, JANUARY 30, 1998. This deadline is firm. (We also will accept extended abstracts sent via air mail and postmarked by January 19.) Authors will be notified of acceptance or rejection on or before April 3, 1998. Final camera-ready versions will be due by May 1. Papers that have appeared in journals or other conferences, or that are being submitted to other conferences (including ICML and UAI), are not appropriate for submission to COLT. EXTENDED ABSTRACT FORMAT: The extended abstract should be accompanied by a cover page with title, authors' names, postal and email addresses, and a 200-word summary. The body of the extended abstract should be no longer than 10 pages in 12-point font. If it exceeds 10 pages, only the first 10 pages may be examined. The extended abstract should include a clear definition of the theoretical model used and a clear description of the results, as well as a discussion of their significance, including comparison to other work. Proofs or proof sketches should be included. PROGRAM FORMAT: All accepted papers will be presented orally, although some or all papers may also be included in a poster session. At the discretion of the program committee, the program may consist of both long and short talks, corresponding to longer and shorter papers in the proceedings. By default, all papers will be considered for both categories. Authors who do not want their papers considered for the short category should indicate that fact in a cover letter. PROGRAM CHAIRS: Peter Bartlett (Australian National University) Yishay Mansour (Tel-Aviv University) PROGRAM COMMITTEE: Dana Angluin (Yale University), Peter Auer (Technical University Graz), Jonathan Baxter (Australian National University), Avrim Blum (Carnegie Mellon University), Nicoló Cesa-Bianchi (University of Milan), William Cohen (AT&T Labs), Bill Gasarch (University of Maryland), Vijay Raghavan (Vanderbilt University), Dan Roth (University of Illinois, Urbana-Champaign), Ronitt Rubinfeld (Cornell University), Stuart Russell (University of California, Berkeley), Rolf Wiehagen (University of Kaiserslautern) LOCAL ARRANGEMENTS: John Case (University of Delaware), Jude Shavlik (University of Wisconsin, Madison), Bob Sloan (University of Illinois, Chicago). WEB: Dana Ron (MIT). STUDENT TRAVEL: We anticipate some funds will be available to partially support travel by student authors. Eligible authors who wish to apply for travel support should indicate this in a cover letter. STUDENT PAPER PRIZE: The Mark Fulk Award for the best paper authored or coauthored by a student is expected to be available for the first time this year. Eligible authors who wish to be considered for this prize should indicate this on the cover page. 
FOR MORE INFORMATION: Visit the COLT'98 web page at http://theory.lcs.mit.edu/COLT-98/, or send email to colt98 at anu.edu.au. From greiner at redwater.cs.ualberta.ca Tue Nov 25 19:19:32 1997 From: greiner at redwater.cs.ualberta.ca (Russ Greiner) Date: Tue, 25 Nov 1997 17:19:32 -0700 Subject: PostDoc - Learning, Bayesian Nets - UofAlberta Message-ID: <19971126001941Z15186-19530+70@scapa.cs.ualberta.ca> POST-DOCTORAL RESEARCH FELLOWSHIP IN COMPUTER SCIENCE University of Alberta Edmonton, Canada Applications are invited for a one-year (renewable) fellowship to work in the areas of * machine learning / learnability / datamining * knowledge representation, especially Bayesian networks and other probabilistic structures. Candidates should have a PhD in Computer Science or the equivalent, and will be required to carry out high quality research, to obtain both theoretical and empirical results. Previous research excellence and strong productivity in addition to good computing background is essential. Applications including * CV * statement of interests * 1 or 2 publications * list of references should be sent ASAP (but no later than 15 January 1998) to: Russell Greiner Department of Computing Science 615 General Service Bldg University of Alberta Edmonton, AB T6G 2H1 Email: greiner at cs.ualberta.ca Phone: 403 492 5461 Fax: 403 492 1071 Electronic submissions -- in plain text or postscript -- are encouraged, especially as there is currently a mail strike in Canada. See http://www.cs.ualberta.ca for more information about the department in general. From esann at dice.ucl.ac.be Wed Nov 26 03:28:05 1997 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Wed, 26 Nov 1997 10:28:05 +0200 Subject: extended deadline for ESANN 98 Message-ID: <199711260914.KAA07052@ns1.dice.ucl.ac.be> --------------------------------------------------- | 6th European Symposium | | on Artificial Neural Networks | | | | ESANN 98 | | | | Bruges - April 22-23-24, 1998 | | | | Final call for papers | --------------------------------------------------- We are pleased to announce the extended deadline for the submission of papers to ESANN'98: December 5th, 1997 (instead of December 1st, 1997). We encourage authors of late papers to announce their submission by sending the "author submission form" by fax (+ 32 10 47 25 98) before the deadline. The call for papers for the ESANN 98 conference (including the author submission form) is available on the Web: http://www.dice.ucl.ac.be/esann For any other question about the submission of papers, please send an email to esann at dice.ucl.ac.be. Sincerely yours, The ESANN'98 organizing committee. _____________________________ _____________________________ D facto publications - Michel Verleysen conference services Univ. Cath. de Louvain - DICE 45 rue Masui 3, pl. du Levant 1000 Brussels B-1348 Louvain-la-Neuve Belgium Belgium tel: +32 2 203 43 63 tel: +32 10 47 25 51 fax: +32 2 203 42 94 fax: +32 10 47 25 98 esann at dice.ucl.ac.be verleysen at dice.ucl.ac.be http://www.dice.ucl.ac.be/esann _____________________________ _____________________________ From Yves.Moreau at esat.kuleuven.ac.be Wed Nov 26 05:09:24 1997 From: Yves.Moreau at esat.kuleuven.ac.be (Yves Moreau) Date: Wed, 26 Nov 1997 11:09:24 +0100 Subject: International Workshop Announcement Message-ID: <347BF554.A2C8F121@esat.kuleuven.ac.be> International Workshop on *** ADVANCED BLACK-BOX TECHNIQUES FOR NONLINEAR MODELING: THEORY AND APPLICATIONS *** with !!! TIME-SERIES PREDICTION COMPETITION !!! 
Date: July 8-10, 1998 Place: Katholieke Universiteit Leuven, Belgium Info: http://www.esat.kuleuven.ac.be/sista/workshop/ Organized at the Department of Electrical Engineering (ESAT-SISTA) and the Interdisciplinary Center for Neural Networks (ICNN) in the framework of the project KIT and the Belgian Interuniversity Attraction Pole IUAP P4/02. * GENERAL SCOPE The rapid growth of the field of neural networks, fuzzy systems and wavelets offers a variety of new techniques for the modeling of nonlinear systems in the broad sense. These topics have been investigated from different points of view, including statistics, identification and control theory, approximation theory, signal processing, nonlinear dynamics, information theory, physics and optimization theory, among others. The aim of this workshop is to serve as an interdisciplinary forum for bringing together specialists in these research disciplines. Issues related to the fundamental theory as well as real-life applications will be addressed at the workshop. * TIME-SERIES PREDICTION COMPETITION Within the framework of this workshop a time-series prediction competition will be held. The results of the competition will be announced during the workshop, where the winner will receive an award. Participants in the competition are asked to submit their predicted data together with a short description and references of the methods used. In order to stimulate wide participation in the competition, attendance at the workshop is not mandatory, but is of course encouraged. * INVITED SPEAKERS (confirmed) L. Feldkamp (Ford Research, USA) - Extended Kalman filtering C. Micchelli (IBM T.J. Watson, USA) - Density estimation U. Parlitz (Gottingen, Germany) - Nonlinear time-series analysis J. Sjoberg (Goeteborg, Sweden) - Nonlinear system identification S. Tan (Beijing, China) - Wavelet-based system modeling M. Vidyasagar (Bangalore, India) - Statistical learning theory V. Wertz (Louvain-la-Neuve, Belgium) - Fuzzy modeling * TOPICS include but are not limited to: nonlinear system identification, backpropagation, time series analysis, learning and nonlinear optimization, multilayer perceptrons, recursive algorithms, radial basis function networks, extended Kalman filtering, fuzzy modelling, embedding dimension, wavelets, subspace methods, piecewise linear models, identifiability, mixture of experts, model selection and validation, universal approximation, simulated annealing, recurrent networks, genetic algorithms, regularization, forecasting, Bayesian estimation, frequency domain identification, density estimation, classification, information geometry, real-life applications, generalization, software. * IMPORTANT DATES Deadline paper submission: April 2, 1998 Notification of acceptance: May 4, 1998 Workshop: July 8-10, 1998 Time-series competition: Deadline data submission: March 20, 1998 * Chairman: Johan Suykens Katholieke Universiteit Leuven Departement Elektrotechniek - ESAT/SISTA Kardinaal Mercierlaan 94 B-3001 Leuven (Heverlee), Belgium Tel: 32/16/32 18 02 Fax: 32/16/32 19 70 Email: Johan.Suykens at esat.kuleuven.ac.be Program Committee: B. De Moor, E. Deprettere, D. Roose, J. Schoukens, S. Tan, J. Vandewalle, V. Wertz, Y. Yu
From Jon.Baxter at keating.anu.edu.au Wed Nov 26 06:34:36 1997 From: Jon.Baxter at keating.anu.edu.au (Jonathan Baxter) Date: Wed, 26 Nov 1997 22:34:36 +1100 (EST) Subject: Technical report available on reinforcement learning and chess Message-ID: <199711261134.WAA21952@reid.anu.edu.au> Technical Report Available -------------------------- Title ----- KnightCap: A chess program that learns by combining TD($\lambda$) with minimax search. Authors ------- Jonathan Baxter, Andrew Tridgell and Lex Weaver. Department of Systems Engineering and Department of Computer Science, Australian National University. Abstract ------- In this paper we present TDLeaf($\lambda$), a variation on the TD($\lambda$) algorithm that enables it to be used in conjunction with minimax search. We present some experiments in which our chess program, ``KnightCap,'' used TDLeaf($\lambda$) to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games and 3 days of play (equivalent to improving from mediocre to expert for a human). A more recent version of KnightCap is currently playing on the "Non-Free" Internet Chess Server (ICC, chessclub.com) with a rating of around 2500. We discuss some of the reasons for this success and also the relationship between our results and Tesauro's results in backgammon. Download Instructions --------------------- You can ftp the paper directly from ftp://syseng.anu.edu.au/~jon/publish/papers/knightcap.tar.gz If you want to learn more about KnightCap, check out http://syseng.anu.edu.au/lsg and follow the knightcap link. From there you can retrieve the paper and the latest source code for KnightCap, and watch a version of KnightCap ("KnightC") playing on ICC with our chess applet.
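The core idea in the abstract -- apply the TD($\lambda$) update to the evaluations of the principal-variation leaves returned by minimax search, rather than to the raw position evaluations -- can be sketched in a few lines. The following toy Python fragment is only an illustration consistent with that description, not the KnightCap implementation: the linear evaluation function, the depth-0 search stub, and the step size are all assumptions.

import numpy as np

def features(state):
    # Hypothetical feature map from a position to a vector.
    return np.asarray(state, dtype=float)

def evaluate(state, w):
    # Linear evaluation function J(state, w) = w . phi(state).
    return float(w @ features(state))

def minimax_leaf(state, w, depth):
    # Stub: a real program would search to `depth` and return the
    # LEAF position of the principal variation; here, depth 0.
    return state

def tdleaf_update(states, w, alpha=1e-3, lam=0.7, depth=4):
    # One TDLeaf(lambda) weight update from the positions of one game:
    # temporal differences are taken between successive principal-
    # variation leaf evaluations, and the gradient of the evaluation
    # is taken at those leaves as well.
    leaves = [minimax_leaf(s, w, depth) for s in states]
    values = [evaluate(l, w) for l in leaves]
    d = [values[t + 1] - values[t] for t in range(len(values) - 1)]
    new_w = w.copy()
    for t in range(len(d)):
        td_sum = sum((lam ** (j - t)) * d[j] for j in range(t, len(d)))
        new_w += alpha * features(leaves[t]) * td_sum  # grad of linear J is phi
    return new_w

After each training game one would call, e.g., w = tdleaf_update(game_states, w); the report itself should be consulted for the actual algorithm and the details of its use during play.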
From nimzo at vega.cerisep.pa.cnr.it Wed Nov 26 04:57:04 1997 From: nimzo at vega.cerisep.pa.cnr.it (Maurizio Cirrincione) Date: Wed, 26 Nov 1997 10:57:04 +0100 Subject: Abstract of PhD thesis about NN and electrical drives (change of address) Message-ID: Dear Connectionists, sorry again, but due to a final and definite crash of my system, my email address has changed to nimzo at vega.cerisep.pa.cnr.it. All those who have requested a copy of my PhD thesis, or anyone interested in obtaining one, should send me an email at this NEW address. Thank you for your patience. Maurizio Cirrincione From Jon.Baxter at keating.anu.edu.au Wed Nov 26 14:37:30 1997 From: Jon.Baxter at keating.anu.edu.au (Jonathan Baxter) Date: Thu, 27 Nov 1997 06:34:36 +1100 (EST) Subject: Correction: Technical Report on RL and chess. Message-ID: <199711261937.GAA22467@reid.anu.edu.au> Whoops! My posting last night contained the wrong ftp address. The right one is included below. Also there was an incorrect link in the web page. Thanks to all those who pointed this out. Sorry for the inconvenience. Cheers, Jon ------------- Jonathan Baxter Research Fellow Department of Systems Engineering Research School of Information Science and Engineering Australian National University http://keating.anu.edu.au/~jon Tel: +61 2 6279 8678 Fax: +61 2 6279 8688 ----------------------------------------- Technical Report Available -------------------------- Title ----- KnightCap: A chess program that learns by combining TD($\lambda$) with minimax search. Authors ------- Jonathan Baxter, Andrew Tridgell and Lex Weaver. Department of Systems Engineering and Department of Computer Science, Australian National University. Abstract ------- In this paper we present TDLeaf($\lambda$), a variation on the TD($\lambda$) algorithm that enables it to be used in conjunction with minimax search. We present some experiments in which our chess program, ``KnightCap,'' used TDLeaf($\lambda$) to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games and 3 days of play (equivalent to improving from mediocre to expert for a human). A more recent version of KnightCap is currently playing on the "Non-Free" Internet Chess Server (ICC, chessclub.com) with a rating of around 2500. We discuss some of the reasons for this success and also the relationship between our results and Tesauro's results in backgammon. Download Instructions --------------------- You can ftp the paper directly from ftp://syseng.anu.edu.au/~jon/papers/knightcap.ps.gz If you want to learn more about KnightCap, check out http://syseng.anu.edu.au/lsg and follow the knightcap link. From there you can retrieve the paper and the latest source code, and watch a version of KnightCap ("KnightC") playing on ICC with our chess applet. From niranjan at eng.cam.ac.uk Wed Nov 26 23:23:49 1997 From: niranjan at eng.cam.ac.uk (niranjan@eng.cam.ac.uk) Date: Thu, 27 Nov 97 04:23:49 GMT Subject: Neural Nets for Signal Processing Message-ID: <9711270423.3329@baby.eng.cam.ac.uk> Provisional announcement of NNSP98 niranjan ---------------------------------------------- FIRST ANNOUNCEMENT AND CALL FOR PAPERS THE 1998 IEEE SIGNAL PROCESSING SOCIETY WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING August 31 - September 3, 1998 Isaac Newton Institute for Mathematical Sciences, Cambridge, England The 1998 IEEE Workshop on Neural Networks for Signal Processing is the seventh in the series of workshops. Cambridge is a historic town, home to one of the leading universities and several research institutions. In the summer it is a beautiful place, and a large number of visitors come here. It is easily reached by train and road from the airports in London. The combination of these makes it an ideal setting for hosting this workshop. The Isaac Newton Institute for Mathematical Sciences is based in Cambridge, adjoining the University and the Colleges. It was founded in 1992, and is devoted to the study of all branches of mathematics. The Institute runs programmes that last for up to six months on various topics in the mathematical sciences. Past programmes of relevance to this workshop include Computer Vision, Financial Mathematics and the current programme on Neural Networks and Machine Learning (July - December, 1997). One of the programmes at the Institute in July-December 1998 is Nonlinear and Nonstationary Signal Processing. Hence hosting this workshop at the Institute will benefit the participants in many ways. 4. Accommodations Accommodation will be at Robinson College, Cambridge. Robinson is one of the new Colleges in Cambridge, and uses its facilities to host conferences during the summer months. It can accommodate about 300 guests in comfortable rooms. The College is within walking distance of the Cambridge city center and the Newton Institute. 5. Organization General Chairs Prof. Tony CONSTANTINIDES (Imperial) Prof. Sun-Yuan KUNG (Princeton) Vice-Chair Dr Bill Fitzgerald (Cambridge) Finance Chair Dr Christophe Molina (Anglia) Proceedings Chair Elizabeth J. Wilson (Raytheon Co.)
Publicity Chairs Dr Aalbert De Vries (Sarnoff) Dr Jonathan Chambers (Imperial) Program Chair Dr Mahesan Niranjan (Cambridge) Program Committee Tulay ADALI Andrew BACK Jean-Francois CARDOSO Bert DEVRIES Lee GILES Federico GIROSI Yu Hen HU Jenq-Neng HWANG Jan LARSEN Yann LECUN David LOWE Christophe MOLINA Visakan KADIRKAMANATHAN Shigeru KATAGIRI Gary KUHN Elias MANOLAKOS Mahesan NIRANJAN Dragan OBRADOVIC Erkki OJA Kuldip PALIWAL Lionel TARASSENKO Volker TRESP Marc VAN HULLE Andreas WEIGEND Papers describing original research are solicited in the areas described below. All submitted papers will be reviewed by members of the Programme Committee. 6. Technical Areas Paradigms artificial neural networks, Markov models, graphical models, dynamical systems, nonlinear signal processing, and wavelets Application areas speech processing, image processing, OCR, robotics, adaptive filtering, communications, sensors, system identification, issues related to RWC, and other general signal processing and pattern recognition Theories generalization, design algorithms, optimization, probabilistic inference, parameter estimation, and network architectures Implementations parallel and distributed implementation, hardware design, and other general implementation technologies 7. Schedule Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers and email address, if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Dr Mahesan Niranjan, Cambridge University Engineering Department, Cambridge CB2 1PZ, England, (Tel.) +44 1223 332720, (Fax.) +44 1223 332662, (e-mail) niranjan at eng.cam.ac.uk. More information relating to the workshop will be available at http://www-svr.eng.cam.ac.uk/nnsp98. Submissions to: Dr Mahesan Niranjan IEEE NNSP'98 Cambridge University Engineering Department Trumpington Street, Cambridge CB2 1PZ England ***** Important Dates ****** Submission of extended summary : February 26, 1998 Notification of acceptance : April 6, 1998 Submission of photo-ready accepted paper : May 3, 1998 Advance registration, before : June 30, 1998 ============================================================== From marco at idsia.ch Thu Nov 27 04:58:58 1997 From: marco at idsia.ch (Marco Wiering) Date: Thu, 27 Nov 1997 10:58:58 +0100 Subject: Paper Announcement Message-ID: <199711270958.KAA13031@zucca.idsia.ch> HQ-LEARNING Adaptive Behavior 6:2, 1997 (in press) Marco Wiering Juergen Schmidhuber marco at idsia.ch juergen at idsia.ch IDSIA, Corso Elvezia 36, 6900 Lugano, Switzerland HQ-learning is a hierarchical extension of Q(lambda)-learning designed to solve certain types of partially observable Markov decision problems (POMDPs). HQ automatically decomposes POMDPs into sequences of simpler subtasks that can be solved by memoryless policies learnable by reactive subagents. HQ solves partially observable mazes with more states than used in most previous POMDP work. FTP-host: ftp.idsia.ch FTP-files: /pub/marco/HQ-LEARNING.ps.gz http://www.idsia.ch/~marco/publications.html http://www.idsia.ch/~juergen/onlinepub.html Marco & Juergen, IDSIA www.idsia.ch
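One way to picture the decomposition described in the abstract is as an ordered chain of reactive subagents, each learning a memoryless policy and handing control to its successor. The Python fragment below is a toy illustration of that control structure only, not the paper's algorithm: the environment interface, the epsilon-greedy choices, the one-step Q-learning update, and especially the crude end-of-episode update of the subgoal values HQ are all assumptions made for the sketch.

import random

class Subagent:
    def __init__(self, n_obs, n_actions):
        self.Q = [[0.0] * n_actions for _ in range(n_obs)]  # memoryless policy values
        self.HQ = [0.0] * n_obs  # value of adopting each observation as subgoal
        self.subgoal = None

    def choose_subgoal(self, epsilon=0.1):
        ids = range(len(self.HQ))
        self.subgoal = (random.randrange(len(self.HQ)) if random.random() < epsilon
                        else max(ids, key=self.HQ.__getitem__))

    def act(self, obs, epsilon=0.1):
        if random.random() < epsilon:
            return random.randrange(len(self.Q[obs]))
        return max(range(len(self.Q[obs])), key=self.Q[obs].__getitem__)

def run_episode(env, agents, alpha=0.1, gamma=0.95):
    # env is assumed to provide reset() -> obs and step(a) -> (obs, reward, done).
    obs, done, i, ret = env.reset(), False, 0, 0.0
    agents[i].choose_subgoal()
    used = [(agents[i], agents[i].subgoal)]
    while not done:
        agent = agents[i]
        a = agent.act(obs)
        obs2, r, done = env.step(a)
        ret += r
        # Ordinary memoryless Q-learning update for the active subagent.
        agent.Q[obs][a] += alpha * (r + gamma * max(agent.Q[obs2]) - agent.Q[obs][a])
        # Hand control to the next subagent when the subgoal is observed.
        if obs2 == agent.subgoal and i + 1 < len(agents):
            i += 1
            agents[i].choose_subgoal()
            used.append((agents[i], agents[i].subgoal))
        obs = obs2
    # Crude subgoal-value update: nudge HQ toward the episode return.
    for agent, g in used:
        agent.HQ[g] += alpha * (ret - agent.HQ[g])
    return ret

The point of the sketch is the division of labor: no subagent ever needs memory of past observations, because the sequencing itself carries the hidden state. The paper should be consulted for the actual HQ update rules.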
From gluck at pavlov.rutgers.edu Thu Nov 27 10:05:10 1997 From: gluck at pavlov.rutgers.edu (Mark A. Gluck) Date: Thu, 27 Nov 1997 10:05:10 -0500 Subject: Graduate Study in Neural Computation at RUTGERS-NEWARK Message-ID: -------------------------------------------------------------- Graduate Study in Neural Computation at Rutgers-Newark -------------------------------------------------------------- Students interested in doing graduate study and research in COMPUTATIONAL NEUROSCIENCE and CONNECTIONIST MODELLING IN COGNITIVE SCIENCE should be aware of a growing strength at Rutgers University-Newark in these areas. There are eight relevant faculty and research scientists available for advising students: Ben Martin Bly, Gyorgy Buzsaki, Mike Casey, Mark Gluck, Stephen Hanson, Catherine Myers, Michael Recce, and Ralph Siegel. Further information on their individual research interests and backgrounds is listed below. The Rutgers-Newark campus has a special strength in the use of these models as research tools when integrated with empirical studies of brain and behavior. Information on the relevant graduate programs and sources of further information is listed at the end of this email. Research Interests of Key Faculty and Research Scientists ---------------------------------------------------------------------------- BENJAMIN MARTIN BLY Ph.D., Stanford, Cognitive Psychology, 1993 Email: ben at psychology.rutgers.edu Web Page: http://psychology.rutgers.edu/~ben Research Interests: I want to understand how functional organization in the brain supports complex human behavior and cognition, in particular language use and the mental representation of conceptual information. To study the cerebral basis of these phenomena, I use cognitive psychological methods to investigate overt behavior that depends on language comprehension or production, concept formation, or inductive reasoning, and I use Magnetic Resonance Imaging (MRI) to measure behavior-dependent changes in brain function. Using mathematical modeling methods, I explore the consequences of these empirical results for theories of language function and conceptual representation. The primary goal of this research is to understand the physical basis of language and conceptual knowledge. Such understanding has broad consequences both for the scientific explanation of intelligent behavior and for the understanding and treatment of brain injuries that affect language and cognition. Selected Publications: Martin B. (1994). The Schema. In "Complexity: metaphors, models, and reality." Cowan G.A., Pines D., Meltzer D. (eds), Reading MA, Addison-Wesley, p. 263-286. Schlaug G., Martin B., Thangaraj V., Edelman R.R., Warach S.J. (1996). Functional anatomy of pitch perception and pitch memory in non-musicians and musicians. NeuroImage 3(3):S318. Siewert B., Bly B.M., Schlaug G., Thangaraj V., Warach S.J., Edelman R.R. (1996). Comparing the BOLD and EPISTAR techniques for functional brain imaging using Signal Detection Theory. The Journal of Magnetic Resonance in Medicine 36:249-255. Bly B.M., Kosslyn S.M. (1997). Functional anatomy of object recognition in humans: evidence from PET and FMRI. Current Opinion in Neurology 10(1):5-9. ------------------------------------------------------------------------ GYORGY BUZSAKI M.D., University of Pecs, Hungary, 1974 Email: buzsaki at axon.rutgers.edu Web page: http://osiris.rutgers.edu/buzsaki.html Research Interests: Neurobiology of learning and memory. Experimental approaches are two-fold.
The first is the study of axonal connectivity of hippocampal principal cells and interneurons, characterized physiologically and filled in vivo, with the explicit goal of a complete reconstruction of the true connectivity of the hippocampus, to serve as building blocks for computational models. A complementary approach uses large scale recordings of neurons with silicon probes to reveal cooperative, emergent properties of neuronal assemblies during behavior. Computational methods are used to understand the complex interactions of neurons in real networks and to model oscillatory properties of interneuronal networks. Selected Publications: Buzsáki, G. A two-stage model of memory trace formation: A role for "noisy" brain states. Neuroscience 31: 551-570, 1989. Buzsáki, G. The hippocampo-neocortical dialogue. Cerebral Cortex 6: 81-92, 1996. Buzsáki, G., Horváth, Z., Urioste, R., Hetke, J., Wise, K. High frequency network oscillation in the hippocampus. Science 256: 1025-1027, 1992. Jandó, G., Siegel, R. M., Horváth, Z., and Buzsáki, G. Pattern recognition of the electroencephalogram by artificial neural networks. Electroencephalography and Clinical Neurophysiology 86: 100-109, 1993. Sik, A., Ylinen, A., Penttonen, M., and Buzsáki, G. Inhibitory CA1-CA3-hilar region feedback in the hippocampus. Science 264: 1722-1724, 1994. Traub, R. D., Miles, R., and Buzsáki, G. Computer simulation of carbachol-driven rhythmic population oscillations in the CA3 region of the in vitro rat hippocampus. Journal of Physiology 451: 653-672, 1992. ------------------------------------------------------------------------ MIKE CASEY Ph.D., UC San Diego, Mathematics, 1995 Email: mcasey at psychology.rutgers.edu Web Page: http://psychology.rutgers.edu/~mcasey Research Interests: My research interests are in the mathematical foundations of cognitive science, and abstract physical models of intelligent behavior. This interest is pursued through the study of recurrent neural networks and other dynamical models of cognitive and other neural processes. My current research is focused on dynamical models of abstract knowledge acquisition and use. Selected Publications: Casey, M. (1996) "The Dynamics of Discrete-Time Computation, With Application to Recurrent Neural Networks and Finite State Machine Extraction," Neural Computation, 8:6, 1135-1178. ------------------------------------------------------------------------ MARK GLUCK Ph.D., Stanford University, Cognitive Psychology, 1987 Email: gluck at pavlov.rutgers.edu Web Page: www.gluck.edu Research Interests: Neurobiology of learning and memory, with emphasis on the role of the hippocampus in associative learning. Computational models of conditioning and human learning. Experimental studies include behavioral neuroscience studies of rabbit eyeblink conditioning under various lesion and drug manipulations. Human experimental research includes studies of conditioning, associative learning, and categorization in normals, aged, and medial temporal lobe amnesics. Applied work in neural networks for pattern classification and novelty detection for mechanical fault diagnosis. Selected Publications: Gluck, M.A. & Myers, C. E. (1997). Psychobiological models of hippocampal function in learning and memory. Annual Review of Psychology. 48. 481-514. Gluck, M. A., Ermita, B. R., Oliver, L. M., & Myers, C. E. (1996). Extending models of hippocampal function in animal conditioning to human amnesia. Memory. Knowlton, B. J., Squire, L. R., & Gluck, M. A. (1994).
Probabilistic category learning in amnesia. Learning and Memory. 1, 106-120. ------------------------------------------------------------------------ STEPHEN JOSE HANSON Ph.D. Arizona State University, Experimental and Mathematical Psychology, 1981 Email: jose at psychology.rutgers.edu Web Page: http://www-psych.rutgers.edu Research Interests: I am examining the following general aspects of connectionist learning systems as they relate to human/animal cognition and learning. (1) Learnability Theory: the effects of representation on learning, prior knowledge on learning (trade-off), sample protocol on learning, presence of noise and errors on learning. (2) Studies of Generalization: the effects of sample size, ``pedagogy'' (ordering or organizing the training samples), analyses of classification or categorization complexity, and the ability of the learning system to correctly generalize. (3) Studies of ``Scaling'' and complexity in task structure. Scaling involves ``realistic'' tasks that possess significant complexity. Scaling up the task may require increasing the dimensionality of the task (e.g. inverse dynamics with realistic degrees of freedom, 10 or 12), increasing task interactions (linguistic or language constraints arising from syntax, semantics, discourse etc.), increasing task memory requirements (as in grammar induction) or decreasing the supervision of the algorithm as in reinforcement learning. (4) Studies of Algorithms inspired by biophysical properties. Somehow the brain controls the degrees of freedom in its representation language and is able to induce complex ``rules'' for its conduct. What trick does it use? Are there simple principles that relate cell growth, death, noise, locality, parallelism, and network topology to seemingly more complex phenomena, like language use, problem solving, and reasoning? Selected Publications: Hanson S. J. & Burr, D. J. (1990). What Connectionist Models Learn: Toward a theory of representation in Connectionist Networks, Behavioral and Brain Sciences, 13, 471-518. Hanson, S. J. (1990). A Stochastic Version of the Delta Rule, Physica D, 42, 265-272. Hanson, S. J. (1991). Behavioral Diversity, Search, and Stochastic Connectionist Systems, In Neural Network Models of Conditioning and Action, M. Commons, S. Grossberg & J. Staddon (Eds.), New Jersey: Erlbaum. Hanson, S. J., Petsche, T., Kearns, M. & Rivest, R. (1994), Computational Learning Theory and Natural Learning ------------------------------------------------------------------------ CATHERINE MYERS Ph.D., Imperial College, University of London, Artificial Neural Networks, 1990 Email: myers at pavlov.rutgers.edu Research Interests: 1. Computational Neuroscience: I am interested in building connectionist models of brain regions involved in learning and memory. These models are meant to capture functionality but also be consistent with known anatomical and physiological constraints. In particular, I am concerned with the role of the hippocampal region in associative learning, its interaction with cerebellum, neocortex and amygdala, and its response to various pharmacological manipulations, particularly cholinergic drugs. 2. Experimental Neuropsychology: Anterograde amnesia is a syndrome which follows hippocampal-region damage via stroke, Alzheimer's dementia and other etiologies. I am interested in developing simple procedures to determine what kinds of learning and memory survive such damage, and whether this pattern matches the impairments seen in animal models.
One aim of this work is to develop discriminative/diagnostic procedures to differentiate amnesic etiologies, as well as to identify the locus of damage in patients for whom neuroimaging is contraindicated. 3. Experimental Psychology: I focus on underlying representational principles of learning and memory which may operate across many different paradigms and response systems, including classical conditioning, computer-based operant analogs of conditioning, and category learning. Selected Publications: Myers, C. & Gluck, M. (1994). Context, conditioning and hippocampal re-representation. Behavioral Neuroscience, 108(5), 835-847. Myers, C., Gluck, M. & Granger (1995). Dissociation of hippocampal and entorhinal function in associative learning: A computational approach. Psychobiology, 23(2), 116-138. Myers, C., Ermita, B., Harris, K., Gluck, M. & Hasselmo, M. (1996). A computational model of the effects of septohippocampal disruption on classical eyeblink conditioning. Neurobiology of Learning and Memory, 66, 51-66. ------------------------------------------------------------------------ MICHAEL RECCE Ph.D. University College London, Neurophysiology, 1993 Email: recce at axon.rutgers.edu Research Interests: The spatial and memory function of the hippocampus and nearby brain structures. This is investigated using a wide range of methods, including neurophysiological recording from the hippocampus in freely moving rats and evaluation of the spatial abilities of human subjects using virtual reality. These and other data are then used to construct computational models, and the models are tested using computer simulation and on mobile robots. Selected Publications: Recce, M. and Harris, K.D. (1996) Memory for places: a navigational model in support of Marr's theory of hippocampal function. Hippocampus. vol 6:735-748. Harris, K.D. and Recce, M. (1997) Absolute localization for a mobile robot using place cells. Robotics and Autonomous Systems 658 p 1-13. Hirase, H. and Recce, M. (1996) A search for the optimal thresholding sequence in an associative memory. Network. vol 7. pp 741-756. O'Keefe, J. and Recce, M. (1993) Phase relationship between hippocampal place units and EEG theta rhythm. Hippocampus. vol 3. pp 317-330. ------------------------------------------------------------------------ RALPH SIEGEL Ph.D., McGill University, Physiology, 1985 Email: axon at cortex.rutgers.edu Web Page: www.cmbn.rutgers.edu/cmbn/faculty/rsiegel.html Research Interests: We use a multidisciplinary approach to understand the physiology, psychophysics, theory and neurology underlying visual perception. The ultimate goal of this work is an understanding of the visual perceptual process and application of this knowledge to assist persons who have suffered neurological damage. Selected Publications: Read, H.L. and Siegel, R.M. Modulation of Responses to Optic Flow in Area 7a by Retinotopic and Oculomotor Cues in Monkey. Cerebral Cortex. In press, 1997. Jandó, G., Siegel, R.M., Horváth, Z. and Buzsáki, G., Pattern recognition of the electroencephalogram by artificial neural networks, Electroenceph. Clin. Neurophysiol. 86: 100-109 (1993). Siegel, R.M., Tresser, C. and Zettler, G., A coding problem in dynamics and number theory. Chaos 2:473-494 (1992).
=================================================================== INFORMATION ON GRADUATE PROGRAMS: There are two graduate programs appropriate for the study of neural computation and connectionist modelling, depending on whether the students are more oriented towards brain (Neuroscience) or behavior (Psychology/Cognitive Science). Regardless of which graduate program students choose, they are free to take classes from, and do research with, faculty from both programs. There is an extensive computational infrastructure to support computational students in both programs, supported by a recent University strategic initiative in computational neuroscience. NEUROSCIENCE OPTION: Students whose interests are oriented towards neuroscience, including the study of basic molecular, cellular, systems, behavioral, and cognitive neuroscience, should apply to the BEHAVIORAL AND NEURAL SCIENCES graduate program at Rutgers-Newark. For more info, see the web page: www.bns.rutgers.edu For admissions applications, email: bns at cortex.rutgers.edu PSYCHOLOGY/COGNITIVE SCIENCE OPTION: Students whose interests are oriented towards behavior, including cognitive psychology, cognitive science, cognitive neuroscience, animal behavior, linguistics, and philosophy, should apply to the PSYCHOLOGY/COGNITIVE SCIENCE program at Rutgers-Newark. For more info, see the web page: www-psych.rutgers.edu For admissions applications, email: cogsci at psychology.rutgers.edu =================================================================== From N.Sharkey at dcs.shef.ac.uk Sat Nov 29 10:03:51 1997 From: N.Sharkey at dcs.shef.ac.uk (Noel Sharkey) Date: Sat, 29 Nov 1997 15:03:51 GMT Subject: Biologically inspired robotics - CALL FOR PARTICIPATION Message-ID: <199711291503.AA10850@gw.dcs.shef.ac.uk> Sorry if you receive this more than once **** SELF-LEARNING ROBOTS II: BIO-ROBOTICS **** An Institution of Electrical Engineers (IEE) Seminar Savoy Place, London: February 12th, 1998. Co-sponsors: Royal Institute of Navigation (RIN) Biotechnology and Biological Sciences Research Council (BBSRC) British Computer Society (BCS) Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB) Biologically inspired robotics, or bio-robotics, is an exciting trend in the integration of engineering and the life sciences. Although this has a long history dating back to the turn of the century, it is only within the last few years that it has picked up momentum, as many have realised that life is still the best model we have for intelligent behavior. This cross-fertilisation is beginning to bear fruit in robotics within specialist areas such as evolutionary methods, artificial life, neural computing, and navigation. It is now time to bring these threads together and ask the life scientists to assess the developments, and also to discuss what, and how, the life sciences could learn from robotics. This one-day seminar aims to bring together some of Europe's leading researchers within the areas of animal and robot behavior to discuss the foundations and future directions of biologically inspired robotics. Each Speaker will be followed by a Discussant who will follow up on some of the issues raised in the paper and make general points about the field. 9.30-10.30 EMBODIED COGNITION Francisco Varela (Speaker) Biologist and Neuroscientist, France. Stevan Harnad (Discussant) Psychologist, UK. 10.30-10.45 COFFEE 10.45-11.45 EVOLUTIONARY LEARNING Stefano Nolfi (Speaker) Roboticist and Psychologist, Italy.
Richard Dawkins (Discussant) Evolutionary Zoologist, UK. 11.45-12.45 CONDITIONED LEARNING Marco Dorigo (Speaker) Computer Scientist and Roboticist, Belgium. Tony Savage (Discussant) Animal Psychologist, N. Ireland. 12.45-2.00 LUNCH 2.00-3.00 NAVIGATION: THE INSECT MODEL Dimitrios Lambrinos (Speaker) Computer Scientist and Roboticist, Switzerland. Tom Collett (Discussant) Neurobiologist, UK. 3.00-4.00 NAVIGATION: THE MAMMALIAN MODEL Neil Burgess (Speaker) Neuroscientist, UK. Ariane Etienne (Discussant) Ethologist, Switzerland. 4.00-4.15 TEA PANEL: THE FUTURE OF BIO-ROBOTICS 4.15-5.45 Introduced and Chaired by Jean-Arcady Meyer, Computer Scientist and Ethologist, France. ORGANISERS Noel Sharkey, Computer Scientist, Psychologist, and Roboticist, University of Sheffield, UK. Tom Ziemke, Computer Scientist and Roboticist, Universities of Sheffield, UK, and Skovde, Sweden. REGISTRATION. It would be advisable to register as early as possible since places will be limited. Please contact Jon Maddison jmaddison at iee.org.uk From smola at first.gmd.de Sat Nov 29 13:02:07 1997 From: smola at first.gmd.de (Alex Smola) Date: Sat, 29 Nov 1997 19:02:07 +0100 Subject: Homepage on Support Vectors and related topics References: <199711272012.VAA02453@viola.first.gmd.de> Message-ID: <3480589F.9A20D17B@first.gmd.de> Dear Connectionists, we would like to announce the availability of a webpage on Support Vectors and related topics at GMD FIRST, Berlin. It contains material on the upcoming NIPS SV workshop (including schedule and abstracts), as well as downloadable papers, links to people working on Support Vectors, related webpages, and links to software and databases. The URL is: http://svm.first.gmd.de The goal of this webpage is to serve as a central switchboard and source of information about Support Vector machines and related topics. Researchers in this field are encouraged to contribute information (URLs of papers, etc.) to this website. Alex Smola mailto:smola at first.gmd.de Bernhard Schoelkopf mailto:bs at first.gmd.de From wahba at stat.wisc.edu Sat Nov 29 23:12:56 1997 From: wahba at stat.wisc.edu (Grace Wahba) Date: Sat, 29 Nov 1997 22:12:56 -0600 (CST) Subject: NewTR: SVM's-RKHS-GACV Message-ID: <199711300412.WAA05864@hera.stat.wisc.edu> `Support Vector Machines, Reproducing Kernel Hilbert Spaces and the Randomized GACV', University of Wisconsin-Madison Statistics Department TR 984, Nov 1997, by Grace Wahba, available at URL ftp://ftp.stat.wisc.edu/pub/wahba/nips97.ps.gz or via my home page http://www.stat.wisc.edu/~wahba -> TRLIST ...........Abstract............................ This report is intended as background material for a talk to be presented in the NIPS 97 Workshop on Support Vector Machines (SVM's). It consists of three parts: (1) A brief review of some old but relevant results on constrained optimization in Reproducing Kernel Hilbert Spaces (RKHS), and a review of the relationship between zero-mean Gaussian processes and RKHS. Applications of tensor sums and products of RKHS, including smoothing spline ANOVA spaces, in the context of SVM's are also described. (2) A discussion of the relationship between penalized likelihood methods in RKHS for Bernoulli data when the goal is risk factor estimation, and SVM methods in RKHS when the goal is classification.
When the goal is classification, it is noted that replacing the likelihood functional of the logit [log odds ratio] with an appropriate SVM functional is a natural method for concentrating computational effort on estimating the logit near the classification boundary while ignoring data far away. Remarks concerning the potential of SVM's for variable selection as an efficient preprocessor for risk factor estimation are made. (3) A discussion of how the GACV for choosing smoothing parameters proposed in Xiang and Wahba (1996, 1997) may be implemented in the context of convex SVM's.
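To make the comparison in part (2) concrete, the two variational problems can be written side by side. The following is a sketch in standard RKHS penalized-risk notation and may differ in details (e.g., the scaling of the smoothing parameter, or an unpenalized constant term in the SVM case) from the TR itself. Given data $(x_i,y_i)$, $i=1,\dots,n$, a candidate function $f$ in an RKHS ${\cal H}_K$, and a smoothing parameter $\lambda > 0$, the penalized likelihood estimate of the logit for Bernoulli data ($y_i \in \{0,1\}$) solves $$ \min_{f \in {\cal H}_K} \ \sum_{i=1}^n \left[ \log\left(1 + e^{f(x_i)}\right) - y_i f(x_i) \right] + \lambda \|f\|^2_{{\cal H}_K}, $$ while the soft-margin SVM, with the labels recoded as $y_i \in \{-1,+1\}$ and $(\tau)_+ = \max(\tau,0)$ denoting the hinge loss, solves $$ \min_{f \in {\cal H}_K} \ \sum_{i=1}^n \left(1 - y_i f(x_i)\right)_+ + \lambda \|f\|^2_{{\cal H}_K}. $$ Both functionals are convex in $f$ and share the RKHS penalty; they differ only in the data-fit term. The hinge loss vanishes for points classified correctly with margin $y_i f(x_i) \ge 1$, which is one way to see how the SVM functional concentrates effort near the classification boundary $f = 0$ while ignoring data far away.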