From hinton at cs.toronto.edu Tue Apr 1 12:03:17 1997 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Tue, 1 Apr 1997 12:03:17 -0500 Subject: new paper available Message-ID: <97Apr1.120321edt.809@neuron.ai.toronto.edu> "Generative Models for Discovering Sparse Distributed Representations" Geoffrey E. Hinton and Zoubin Ghahramani Department of Computer Science University of Toronto ABSTRACT We describe a hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed, the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. The paper is available at http://www.cs.toronto.edu/~hinton/ftp/RGBN.ps.Z From qian at brahms.cpmc.columbia.edu Tue Apr 1 17:07:33 1997 From: qian at brahms.cpmc.columbia.edu (Ning Qian) Date: Tue, 1 Apr 1997 15:07:33 -0700 Subject: vision postdoc position at Columbia Message-ID: <199704012207.PAA19974@brahms.cpmc.columbia.edu> Postdoctoral Position in Visual Psychophysics Center for Neurobiology and Behavior Columbia University New York, NY A postdoctoral fellowship position in Visual Psychophysics is available immediately in my lab at Columbia University. The individual will participate in psychophysics projects that investigate the mechanisms of motion detection, stereoscopic depth perception and motion-stereo integration. There will be close interactions between these projects and the related computational modeling projects in the same lab. The details of our research interests and publications can be found at the web site listed below. The funding for the position is available for two years with the possibility of renewal. Applicants should have a strong background in visual psychophysics and should be able to write programs (or adapt our current psychophysics software package) for generating visual stimuli. Please send a CV, representative publications and two letters of recommendation to: Dr. Ning Qian Center for Neurobiology and Behavior Columbia University 722 W. 168th St., A730 New York, NY 10032 qian at brahms.cpmc.columbia.edu (email) 212-960-2213 (phone) 212-960-2561 (fax) I will attend the ARVO meeting in May. The phone number of my hotel is 954-525-3484. Please email me if you would like to arrange a meeting there. ********************************************************************* The details of our research interests and publications can be found at: http://brahms.cpmc.columbia.edu Selected Papers: Binocular Disparity and the Perception of Depth [Review], Ning Qian, Neuron, 1997, 18:359-368. The Effect of Complex Motion Pattern on Speed Perception, Bard J Geesaman and Ning Qian (submitted to Vision Research). A Novel Speed Illusion Involving Expansion and Rotation Patterns, Bard J Geesaman and Ning Qian, Vision Research, 1996, 36:3281-3292. Transparent Motion Perception as Detection of Unbalanced Motion Signals I: Psychophysics, Ning Qian, Richard A. Andersen and Edward H. Adelson, J. Neurosci., 1994, 14:7357--7366. A Physiological Model for Motion-stereo Integration and a Unified Explanation of the Pulfrich-like Phenomena, Ning Qian and Richard A. Andersen, Vision Research, 1997 (in press). 
Physiological Computation of Binocular Disparity, Ning Qian and Yudong Zhu, Vision Research, 1997 (in press). Binocular Receptive Field Profiles, Disparity Tuning and Characteristic Disparity, Yudong Zhu and Ning Qian, Neural Computation, 1996, 8:1647-1677. Computing Stereo Disparity and Motion with Known Binocular Cell Properties, Ning Qian, Neural Computation, 1994, 6:390-404. From cas-cns at cns.bu.edu Tue Apr 1 10:45:17 1997 From: cas-cns at cns.bu.edu (BU - Cognitive and Neural Systems) Date: Tue, 01 Apr 1997 10:45:17 -0500 Subject: VISION, RECOGNITION, ACTION: FINAL CALL Message-ID: <3.0.1.32.19970401104517.006b2534@cns.bu.edu> **** FINAL CALL FOR REGISTRATION ***** International Conference on VISION, RECOGNITION, ACTION: NEURAL MODELS OF MIND AND MACHINE May 28--31, 1997 Sponsored by the Center for Adaptive Systems and the Department of Cognitive and Neural Systems Boston University with financial support from the Defense Advanced Research Projects Agency and the Office of Naval Research This conference will include 21 invited lectures and 88 contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems see, understand, and act upon a changing world. The program is listed below. Since seating at the meeting is limited, early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If paying by check, mail to: Neural Models of Mind and Machine, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. If paying by credit card, mail to the above address, or fax to (617) 353-7755. The meeting registration fee will help to pay for a reception, 6 coffee breaks, and the meeting proceedings. A day of tutorials will be held on Wednesday, May 28. The tutorial registration fee helps to pay for 2 coffee breaks and a hard copy of the 7 hours of tutorial viewgraphs. See the meeting web page at http://cns-web.bu.edu/cns-meeting for further meeting information. **************************************** REGISTRATION FORM (Please Type or Print) Vision, Recognition, Action: Neural Models of Mind and Machine Boston University Boston, Massachusetts Tutorials: May 28, 1997 Meeting: May 29-31, 1997 Mr/Ms/Dr/Prof: Name: Affiliation: Address: City, State, Postal Code: Phone and Fax: Email: The conference registration fee includes the meeting program, reception, coffee breaks, and meeting proceedings. For registered participants in the conference, the regular tutorial registration fee is $20 and the student fee is $15. For attendees of only the tutorial, the regular registration fee is $30 and the student fee is $25. Two coffee breaks and a tutorial handout will be covered by the tutorial registration fee. CHECK ONE: [ ] $55 Conference plus Tutorial (Regular) [ ] $40 Conference plus Tutorial (Student) [ ] $35 Conference Only (Regular) [ ] $25 Conference Only (Student) [ ] $30 Tutorial Only (Regular) [ ] $25 Tutorial Only (Student) Method of Payment: [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). 
Type of card: Name as it appears on the card: Account number: Expiration date: Signature and date: **************************************** MEETING SCHEDULE (poster session details follow the oral session schedule) WEDNESDAY, MAY 28, 1997 (Tutorials): 7:30am---8:30am MEETING REGISTRATION 8:30am--10:00am Stephen Grossberg (Part I): "Vision, Brain, and Technology" 10:00am--10:30am COFFEE BREAK 10:30am--12:00pm Stephen Grossberg (Part II): "Vision, Brain, and Technology" 12:00pm---1:15pm LUNCH 1:15pm---3:15pm Gail Carpenter: "Self-Organizing Neural Networks for Learning, Recognition, and Prediction: ART Architectures and Applications" 3:15pm---3:45pm COFFEE BREAK 3:45pm---5:45pm Eric Schwartz: "Algorithms and Hardware for the Application of Space-Variant Active Vision to High Performance Machine Vision" THURSDAY, MAY 29, 1997 (Invited Lectures and Posters): 7:30am---8:30am MEETING REGISTRATION 8:30am---9:15am Robert Shapley: "Brain Mechanisms for Visual Perception of Occlusion" 9:15am--10:00am George Sperling: "An Integrated Theory for Attentional Processes in Vision, Recognition, and Memory" 10:00am--10:30am COFFEE BREAK AND POSTER SESSION I 10:30am--11:15am Patrick Cavanagh: "Direct Recognition" 11:15am--12:00pm Stephen Grossberg: "Perceptual Grouping during Neural Form and Motion Processing" 12:00pm---1:30pm LUNCH 1:30pm---2:15pm Robert Desimone: "Neuronal Mechanisms of Visual Attention" 2:15pm---3:00pm Ennio Mingolla: "Visual Search" 3:00pm---3:30pm COFFEE BREAK AND POSTER SESSION I 3:30pm---4:15pm Patricia Goldman-Rakic: "The Machinery of Mind: Models from Neurobiology" 4:15pm---5:00pm Larry Squire: "Brain Systems for Recognition Memory" 5:00pm---8:00pm POSTER SESSION I FRIDAY, MAY 30, 1997 (Invited and Contributed Lectures): 8:00am---8:30am MEETING REGISTRATION 8:30am---9:15am Lance Optican: "Neural Control of Rapid Eye Movements" 9:15am--10:00am John Kalaska: "Reaching to Visual Targets: Cerebral Cortical Neuronal Mechanisms" 10:00am--10:30am COFFEE BREAK 10:30am--11:15am Rodney Brooks: "Models of Vision-Based Human Interaction" 11:15am--12:00pm Alex Pentland: "Interpretation of Human Action" 12:00pm---1:30pm LUNCH 1:30pm---1:45pm Paolo Gaudiano: "Retinal Processing of IRFPA Imagery" 1:45pm---2:00pm Zili Liu: "2D Ideal Observers in 3D Object Recognition" 2:00pm---2:15pm Soheil Shams: "Object Segmentation and Recognition via a Network of Resonating Spiking Neurons" 2:15pm---2:30pm Wey-Shiuan Hwang and John Weng: "Autonomous Learning for Visual Attention Selection" 2:30pm---2:45pm Shane W. McWhorter, Theodore J. Doll, and Anthony A. Wasilewski: "Integration of Computational Vision Research Models for Visual Performance Prediction" 2:45pm---3:00pm Frank S. Holman III and Robert J. Marks II: "Platform Independent Geometry Verification Using Neural Networks Including Color Visualization" 3:00pm---3:30pm COFFEE BREAK 3:30pm---3:45pm Heiko Neumann and Wolfgang Sepp: "A Model of Cortico-Cortical Integration of Visual Information: Receptive Fields, Grouping, and Illusory Contours" 3:45pm---4:00pm Constance S. Royden: "A Biological Model for Computing Observer Motion in the Presence of Moving Objects" 4:00pm---4:15pm Michele Fabre-Thorpe, Ghislaine Richard, and Simon Thorpe: "Rapid Categorization of Natural Images in Rhesus Monkeys: Implications for Models of Visual Processing" 4:15pm---4:30pm Raju S. Bapi and Michael J. Denham: "Neural Network Model of Experiments on Set-Shifting Paradigm" 4:30pm---4:45pm Jose L. Contreras-Vidal and George E. 
Stelmach: "Adaptive Resonance Theory Computations in the Cortico-Striatal Circuits are Gated by Dopamine Activity during Reward-Related Learning of Approach Behavior" 4:45pm---5:00pm Mingui Sun, Murat Sonmez, Ching-Chung Li, and Robert J. Sclabassi: "Application of Time-Frequency Analysis, Artificial Neural Networks, and Decision Making Theory to Localization of Electrical Sources in the Brain Based on Multichannel EEG" 5:00pm---6:30pm MEETING RECEPTION 6:30pm---7:30pm Stuart Anstis Keynote Lecture: "Moving in Unexpected Directions" SATURDAY, MAY 31 (Invited Lectures and Posters): 8:00am---8:30am MEETING REGISTRATION 8:30am---9:15am Eric Schwartz: "Multi-Scale Vortex of the Brain: Anatomy as Architecture in Biological and Machine Vision" 9:15am--10:00am Terrence Boult: "Polarization Vision" 10:00am--10:30am COFFEE BREAK AND POSTER SESSION II 10:30am--11:15am Allen Waxman: "Opponent Color Models of Visible/IR Fusion for Color Night Vision" 11:15am--12:00pm Gail Carpenter: "Distributed Learning, Recognition, and Prediction in ART and ARTMAP Networks" 12:00pm---1:30pm LUNCH 1:30pm---2:15pm Tomaso Poggio: "Representing Images for Visual Learning" 2:15pm---3:00pm Michael Jordan: "Graphical Models, Neural Networks, and Variational Approximations" 3:00pm---3:30pm COFFEE BREAK AND POSTER SESSION II 3:30pm---4:15pm Andreas Andreou: "Mixed Analog/Digital Neuromorphic VLSI for Sensory Systems" 4:15pm---5:00pm Takeo Kanade: "Computational VLSI Sensors: Integrating Sensing and Processing" 5:00pm---8:00pm POSTER SESSION II POSTER SESSION I: Thursday, May 29, 1997 All posters will be displayed for the full day. Biological Vision Session: #1 Vlad Cardei, Brian Funt, and Kobus Barnard: "Modeling Color Constancy with Neural Networks" #2 E.J. Pauwels, P. Fiddelaers, and L. Van Gool: "Send in the DOGs: Robust Clustering using Center-Surround Receptive Fields" #3 Tony Vladusich and Jack Broerse: "Neural Networks for Adaptive Compensation of Ocular Chromatic Aberration and Discounting Variable Illumination" #4 Alexander Dimitrov and Jack D. Cowan: "Objects and Texture Need Different Cortical Representations" #5 Miguel Las-Heras, Jordi Saludes, and Josep Amat: "Adaptive Analysis of Singular Points Correspondence in Stereo Images" #6 Neil Middleton: "Properties of Receptive Fields in Radial Basis Function (RBF) Networks" #7 David Enke and Cihan Dagli: "Modeling the Bidirectional Interactions within and between the LGN and Area V1 Cells" #8 Scott Oddo, Jacob Beck, and Ennio Mingolla: "Texture Segregation in Chromatic Element-Arrangement Patterns" #9 David Alexander and Phil Sheridan: "Local from Global Geometry of Layers 2, 3, and 4C of the Macaque Striate Cortex" #10 Phil Sheridan and David Alexander: "Invariant Transformations on a Space-Variant Hexagonal Grid" #11 Irak Vicarte Mayer and Haruhisa Takahashi: "Simultaneous Edge Detection and Image Segmentation using Neural Networks and Color Theory" #12 Adam Reeves and Shuang Wu: "Visual Adaptation: Stochastic or Deterministic?" #13 Peter Kalocsai and Irving Biederman: "Biologically Inspired Recognition Model with Extension Fields" #14 Stephane J.M. Rainville, Frederick A.A. Kingdom, and Anthony Hayes: "Effects of Local Phase Structure on Motion Perception" #15 Alex Harner and Paolo Gaudiano: "A Neural Model of Attentive Visual Search" #16 Lars Liden, Ennio Mingolla, and Takeo Watanabe: "The Effects of Spatial Frequency, Contrast, Disparity, and Phase on Motion Integration between Different Areas of Visual Space" #17 Brett R. Fajen, Nam-Gyoon Kim, and Michael T. 
Turvey: "Robustness of Heading Perception Along Curvilinear Paths" #18 L.N. Podladchikova, I.A. Rybak, V.I. Gusakova, N.A. Shevtsova, and A.V. Golovan: "A Behavioral Model of Active Visual Perception" #19 Julie Epelboim and Patrick Suppes: "Models of Eye Movements during Geometrical Problem Solving" Biological Learning and Recognition Session: #20 George J. Kalarickal and Jonathan A. Marshall: "Visual Classical Rearing and Synaptic Plasticity: Comparison of EXIN and BCM Learning Rules" #21 Jean-Daniel Kant and Daniel S. Levine: "ARTCRITIC: An Adaptive Critic Model for Decision Making in Context" #22 L. Andrew Coward: "Electronic Simulation of Unguided Learning, Associative Memory, Dreaming, and Internally Generated Succession of Mental Images" #23 K. Torii, T. Kitsukawa, S. Kunifuji, and T. Matsuzawa: "A Synaptic Model by Temporal Coding" #24 Gabriel Robles-de-la-Torre and Robert Sekuler: "Learning a Virtual Object's Dynamics: Spectral Analysis of Human Subjects' Internal Representation" #25 Sheila R. Cooke, Robert Sekuler, Brendan Kitts, and Maja Mataric: "Delayed and Real-Time Imitation of Complex Visual `Gestures' " #26 Brendan Kitts, Sheila R. Cooke, Maja Mataric, and Robert Sekuler: "Improved Pattern Recognition by Combining Invariance Methods" #27 Gregory R. Mulhauser: "Can ART Dynamics Create a 'Centre of Cognitive Action' Capable of Supporting Phenomenal Consciousness?" #28 Bruce F. Katz: "The Pleasingness of Polygons" #29 Stephen L. Thaler: "Device for the Autonomous Generation of Useful Information" Control and Robotics Session: #30 John Demeris and Gillian Hayes: "Integrating Visual Perception and Action in a Robot Model of Imitation" #31 Danil V. Prokhorov and Donald C. Wunsch II: "A General Training Procedure for Stable Control with Adaptive Critic Designs" #32 Juan Cires and Pedro J. Zufiria: "Space Perception through a Self-Organizing Map for Mobile Robot Control" #33 Alex Guazzelli and Michael A. Arbib: "NeWG: The Neural World Graph" #34 Minh-Chinh Nguyen: "Robot Vision Without Calibration" #35 Erol Sahin and Paolo Gaudiano: "Real-Time Object Localization from Monocular Camera Motion" #36 Carolina Chang and Paolo Gaudiano: "A Neural Network for Obstacle Avoidance in Mobile Robots" #37 P. Gaussier, J.-P. Banquet, C. Joulain, A. Revel, and S. Zrehen: "Validation of a Hippocampal Model on a Mobile Robot" #38 J.-P. Banquet, P. Gaussier, C. Joulain, and A. Revel: "Learning, Recognition, and Generation of Tempero-Spatial Sequences by a Cortico-Hippocampal System: A Neural Network Model" POSTER SESSION II: Saturday, May 31, 1997 All posters will be displayed for the full day. Machine Vision Session: #1 Tyler C. Folsom: "Edge Detection by Sparse Sampling with Steerable Quadrature Filters" #2 Mario Aguilar and Allen M. Waxman: "Comparison of Opponent-Color Neural Processing and Principal Components Analysis in the Fusion of Visible and Thermal IR Imagery" #3 Magnus Snorrason: "A Multi-Resolution Feature Integration Model for the Next-Look Problem" #4 Charles B. Owen: "Application of Multiple Media Stream Correlation to Functional Imaging of the Brain" Machine Learning Session: #5 Mukund Balasubramanian and Stephen Grossberg: "A Neural Architecture for Recognizing 3-D Objects from Multiple 2-D Views" #6 Maartje E.J. Raijmakers and Peter C.M. Molenaar: "Exact ART: A Complete Implementation of an ART Network" #7 Danil V. Prokhorov and Lee A. Feldkamp: "On the Relationship between Derivative Adaptive Critics and Backpropagation through Time" #8 Tulay Yildirim and John S. 
Marsland: "Optimization by Back Propagation of Error in Conic Section Functions" #9 John M. Zachary, Jacob Barhen, Nageswara S. Rao, and Sitharama S. Iyengar: "A Dynamical Systems Approach to Neural Network Learning from Finite Examples" #10 Christos Orovas and James Austin: "Cellular Associative Neural Networks" #11 M.A. Grudin, P.J.G. Lisboa, and D.M. Harvey: "A Sparse Representation of Human Faces for Recognition" #12 Mike Y.W. Leung and David K.Y. Chiu: "Feature Selection for Two-Dimensional Shape Discrimination using Feedforward Neural Networks" #13 Robert Alan Brown: "The Creation of Order in a Self-Learning Duplex Network" #14 C.H. Chen: "Designing a Neural Network to Predict Human Responses" #15 Jean-Marc Fellous, Laurenz Wiskott, Norbert Kruger, and Christoph von der Malsburg: "Face Recognition by Elastic Bunch Graph Matching" #16 Gerard J. Rinkus: "A Monolithic Distributed Representation Supporting Multi-Scale Spatio-Temporal Pattern Recognition" #17 Harald Ruda and Magnus Snorrason: "Evaluating Automatically Constructed Hierarchies of Self-Organized Neural Network Classifiers" #18 Ken J. Tomita: "A Method for Building an Artificial Neural Network with 2/3 Dimensional Visualization of Input Data" #19 Fernando J. Corbacho and Michael A. Arbib: "Towards a Coherence Theory of the Brain and Adaptive Systems" #20 Gail A. Carpenter, Mark A. Rubin, and William W. Streilein: "ARTMAP-FD: Familiarity Discrimination of Radar Range Profiles" #21 James R. Williamson: "Multifield ARTMAP: A Network for Local, Incremental, Constructive Learning" #22 Marcos M. Campos: "Constructing Adaptive Orthogonal Wavelet Bases with Self-Organizing Feature Maps" #23 Sucharita Gopal, Curtis E. Woodcock, and Alan H. Strahler: "Fuzzy ARTMAP Classification of Global Land Cover from AVHRR Data Set" #24 A.F. Rocha and A. Serapiao: "Fuzzy Modeling of the Visual Mind" #25 Eun-Jin Kim and Yillbyung Lee: "Handwritten Hangul Recognition Based on Psychologically Motivated Model" #26 Jayadeva: "A Nonlinear Programming Based Approach to the Traveling Salesman Problem" #27 Haruhisa Takahashi: "Biologically Plausible Efficient Learning Via Local Delta Rule" #28 Raonak Zaman and Donald C. Wunsch II: "Prediction of Yarn Strength from Fiber Properties using Fuzzy ARTMAP" VLSI Session: #29 James Waskiewicz and Gert Cauwenberghs: "The Boundary Contour System on a Single Chip: Analog VLSI Architecture" #30 Marc Cohen, Pamela Abshire, and Gert Cauwenberghs: "Current Mode VLSI Fuzzy ART Processor with On-Chip Learning" #31 Shinji Karasawa, Senri Ikeda, Yong Hea Ku, and Jun Hum Chung: "Methodology of the Decision-Making Device" #32 Todd Hinck and Allyn E. Hubbard: "Circuits that Implement Shunting Neurons and Steerable Spatial Filters" Audition, Speech, and Language Session: #33 Colin Davis and Sally Andrews: "Competitive and Cooperative Effects of Similarity in Stationary and Self-Organizing Models of Visual Word Recognition" #34 Susan L. McCabe and Michael J. Denham: "Towards a Neurocomputational Model of Auditory Perception" #35 Dave Johnson: "A Wavelet-Based Auditory Planning Space for Production of Vowel Sounds" #36 Michael A. Cohen, Stephen Grossberg, and Christopher Myers: "A Neural Model of Context Effects in Variable-Rate Speech Perception" #37 Peter Cariani: "Neural Computation in the Time Domain" #38 N.K. Kasabov and R. 
Kozma: "Chaotic Adaptive Fuzzy Neural Networks and their Applications for Phoneme-Based Spoken Language Recognition" **************************************** MEETING HOTEL INFORMATION: For all hotels listed below, meeting attendees should make their own reservations directly with the hotel using the meeting name "Vision, Recognition, Action". 1. THE ELIOT HOTEL 370 Commonwealth Avenue Boston, MA 02215 (617) 267-1607 (800) 443-5468 Janet Brown, director of sales $130/night is the Boston University rate, and is the lowest rate that the Eliot will offer to anyone, whether individual or group. A block of 12 rooms is being held until April 28, 1997. This hotel is 3 or 4 blocks from the CNS Department. 2. HOWARD JOHNSONS 575 Commonwealth Avenue Boston, MA 02215 (617) 267-3100 (reservations) (617) 864-0300 (sales office) Eric Perryman, group sales office Rates: $115/night/single and $125/night/double. A block of 15 rooms is being held until April 28, 1997. This hotel is across the street from the CNS Department. 3. THE BUCKMINSTER 645 Beacon Street Boston, MA 02215 (617) 236-7050 (800) 727-2825 Dan Betro, group sales office A block of 29 rooms is being held until April 28, 1997. This hotel is a few steps away from the CNS Department. Pricing will vary depending on the kind of room; the range is $55/night up to $129/night. Please inquire directly with the hotel when making your reservation. 4. HOLIDAY INN, BROOKLINE 1200 Beacon Street Brookline, MA 02146 (617) 277-1200 (800) 465-4329 Lisa Pedulla, Director of Sales, x-320 $99/night single, $109/night double are the Boston University rates. A block of 25 rooms will be held for us until April 28, 1997. This hotel is within a mile of the CNS Department. There is a trolley stop directly outside the hotel that will take you to within a block of the CNS Department. For information about other Boston-area hotels, please see http://www.boston.com/travel/lodging.htm. From john at dcs.rhbnc.ac.uk Wed Apr 2 04:56:14 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Wed, 02 Apr 97 10:56:14 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199704020956.KAA31415@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-015: ---------------------------------------- Exact Learning of subclasses of CDNF formulas with membership queries by Carlos Domingo, Universitat Polit\`ecnica de Catalunya, Spain Abstract: We consider the exact learnability of subclasses of Boolean formulas from membership queries alone. We show how to combine known learning algorithms that use membership and equivalence queries to obtain new learning results only with memberships. In particular we show the exact learnability of read-$k$ monotone CDNF formulas, Sat-$k$ ${\cal O}(\log n)$-CDNF, and ${\cal O}(\sqrt{\log n})\mbox{-size CDNF}$ from membership queries only. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-016: ---------------------------------------- Decision Trees have Approximate Fingerprints by Victor Lavin, Universitat Polit\`ecnica de Catalunya, Spain Vijay Raghavan, Vanderbilt University, USA Abstract: We prove that decision trees exhibit the ``approximate fingerprint'' property, and therefore are not polynomially learnable using only equivalence queries. A slight modification of the proof extends this result to several other representation classes of boolean concepts which have been studied in computational learning theory. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-017: ---------------------------------------- Learning Monotone Term Decision Lists by David Guijarro, Victor Lavin, Universitat Polit\`ecnica de Catalunya, Spain Vijay Raghavan, Vanderbilt University, USA Abstract: We study the learnability of monotone term decision lists in the exact model of equivalence and membership queries. We show that, for any constant $k \ge 0$, $k$-term monotone decision lists are exactly and properly learnable with $n^{O(k)}$ membership queries in $O(n^{k^3})$ time. We also show that $n^{\Omega(k)}$ membership queries are necessary for exact learning. In contrast, both $k$-term monotone decision lists ($k \ge 2$) and general monotone decision lists are not learnable with equivalence queries alone. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-018: ---------------------------------------- Learning nearly monotone $k$-term DNF by Jorge Castro, David Guijarro, Victor Lavin, Universitat Polit\`ecnica de Catalunya, Spain Abstract: This note studies the learnability of the class $k$-term DNF with a bounded number of negations per term. We study the case of learning with membership queries alone, and give tight upper and lower bounds on the number of negations that makes the learning task feasible. We also prove a negative result for equivalence queries. Finally, we show that a slight modification in our algorithm proves that the considered class is also learnable in the Simple PAC model, extending Li and Vit\'anyi's result for monotone $k$-term DNF. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-019: ---------------------------------------- $\delta$-uniform BSS Machines by Paolo Boldi, Sebastiano Vigna, Universit\`a degli Studi di Milano, Italy Abstract: A $\delta$-uniform BSS machine is almost like a standard BSS machine, but the negativity test is replaced by a ``smaller than $-\delta$'' test, where the threshold $\delta\in(0,1)$ is not known: in this way we represent the impossibility of performing exact equality tests. We prove that, for any real closed archimedean field $R$, the $\delta$-uniform semi-decidable sets are exactly the interiors of BSS semi-decidable sets. Then, we show that the sets semi-decidable by Turing machines are the sets semi-decidable by $\delta$-uniform machines with coefficients in $Q$ or $T$, the field of Turing computable numbers. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-020: ---------------------------------------- The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials by Wolfgang Maass, Berthold Ruf, Technische Universitaet Graz, Austria Abstract: Recently one has started to investigate the computational power of spiking neurons (also called ``integrate and fire neurons''). 
These are neuron models that are substantially more realistic from the biological point of view than the ones which are traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In particular they have the ability to communicate and manipulate analog variables in spatio-temporal coding, i.e.~encoded in the time points when specific neurons ``fire'' (and thus send a ``spike'' to other neurons). These preceding results have motivated the question of which details of the firing mechanism of spiking neurons are essential for their computational power, and which details are ``accidental'' aspects of their realization in biological ``wetware''. Obviously this question becomes important if one wants to capture some of the advantages of computing and learning with spatio-temporal coding in a new generation of artificial neural nets, such as for example pulse stream VLSI. The firing mechanism of spiking neurons is defined in terms of their postsynaptic potentials or ``response functions'', which describe the change in their electric membrane potential as a result of the firing of another neuron. We consider in this article the case where the response functions of spiking neurons are assumed to be of the mathematically most elementary type: they are assumed to be step-functions (i.e. piecewise constant functions). This happens to be the functional form which has so far been adopted most frequently in pulse stream VLSI as the form of potential changes (``pulses'') that mimic the role of postsynaptic potentials in biological neural systems. We prove the rather surprising result that in models without noise the computational power of networks of spiking neurons with arbitrary piecewise constant response functions is strictly weaker than that of networks where the response functions of neurons also contain short segments where they increase or decrease in a linear fashion (which is in fact biologically more realistic). More precisely, we show, for example, that addition of analog numbers is impossible for a network of spiking neurons with piecewise constant response functions (with any bounded number of computation steps, i.e. spikes), whereas addition of analog numbers is easy if the response functions have linearly increasing segments. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-021: ---------------------------------------- On the Effect of Analog Noise in Discrete-Time Analog Computations by Wolfgang Maass, Technische Universitaet Graz, Austria Pekka Orponen, University of Jyv\"askyl\"a, Finland Abstract: We introduce a model for analog noise in analog computation with discrete time that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the VC-dimension of computational models with analog noise. 
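[A rough illustrative sketch, not taken from the reports above, of the point made in NC-TR-97-020: with linearly rising response functions, the threshold-crossing time of an integrate-and-fire unit is an affine combination of its input spike times, so analog values encoded in firing times can be summed. All names and constants below are invented for illustration.]

def firing_time_linear_epsp(spike_times, weights, theta):
    """Time at which sum_i w_i * max(0, t - t_i) first reaches theta.
    With linearly rising EPSPs this crossing time is an affine function
    of the input spike times, i.e. the unit computes a weighted sum of
    the analog values encoded in those times."""
    events = sorted(zip(spike_times, weights))
    slope, potential, t_prev = 0.0, 0.0, events[0][0]
    for t_i, w_i in events:
        # advance segment by segment of the piecewise-linear potential
        if slope > 0.0 and potential + slope * (t_i - t_prev) >= theta:
            return t_prev + (theta - potential) / slope
        potential += slope * (t_i - t_prev)
        slope += w_i
        t_prev = t_i
    return t_prev + (theta - potential) / slope  # crossing after the last onset

# Encode x1 = 0.2 and x2 = 0.5 as spike times relative to a reference T_in.
T_in = 10.0
t_spikes = [T_in - 0.2, T_in - 0.5]
print(firing_time_linear_epsp(t_spikes, [1.0, 1.0], theta=4.0))
# With step-function EPSPs the potential is piecewise constant, so the
# crossing time cannot vary continuously with the inputs in this way.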
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-022: ---------------------------------------- Networks of Spiking Neurons Can Emulate Arbitrary Hopfield Nets in Temporal Coding by Wolfgang Maass and Thomas Natschl\"ager, Technische Universitaet Graz, Austria Abstract: A theoretical model for analog computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. It turns out that the use of multiple synapses yields very noise robust mechanisms for analog computations via the timing of single spikes in networks of detailed compartmental neuron models. One arrives in this way at a method for emulating arbitrary Hopfield nets with spiking neurons in temporal coding, yielding new models for associative recall of spatio-temporal firing patterns. We also show that it suffices to store these patterns in the efficacies of \emph{excitatory} synapses. A corresponding \emph{layered} architecture yields a refinement of the synfire-chain model that can assume a fairly large set of different stable firing patterns for different inputs. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-023: ---------------------------------------- The Perceptron algorithm vs. Winnow: linear vs. logarithmic mistake bounds when few input variables are relevant by Jyrki Kivinen, University of Helsinki, Finland Manfred Warmuth, University of California, Santa Cruz, USA Peter Auer, Technische Universitaet Graz, Austria Abstract: We give an adversary strategy that forces the Perceptron algorithm to make $\Omega(k N)$ mistakes in learning monotone disjunctions over $N$ variables with at most $k$ literals. In contrast, Littlestone's algorithm Winnow makes at most $O(k\log N)$ mistakes for the same problem. Both algorithms use thresholded linear functions as their hypotheses. However, Winnow does multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm. The Perceptron algorithm is an example of {\em additive\/} algorithms, which have the property that their weight vector is always a sum of a fixed initial weight vector and some linear combination of already seen instances. We show that an adversary can force any additive algorithm to make $(N+k-1)/2$ mistakes in learning a monotone disjunction of at most $k$ literals. Simple experiments show that for $k\ll N$, Winnow clearly outperforms the Perceptron algorithm also on nonadversarial random data. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-024: ---------------------------------------- Approximating Hyper-Rectangles: Learning and Pseudo-random Sets by Peter Auer, Technische Universitaet Graz, Austria Philip Long, National University of Singapore, Singapore Aravind Srinivasan, National University of Singapore, Singapore Abstract: The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles have been actively studied recently because (i) they are a subproblem common to the derandomization of depth-2 (DNF) circuits and derandomizing Randomized Logspace, and (ii) they approximate the distribution of $n$ independent multivalued random variables. We present improved upper bounds for a class of such problems of ``approximating'' high-dimensional rectangles that arise in PAC learning and pseudorandomness. 
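[To make the additive-versus-multiplicative contrast in NC-TR-97-023 concrete, here is a small sketch, not taken from the report, of the two update rules on a monotone disjunction. The target, data distribution and parameters are illustrative assumptions, and the printed mistake counts are simply whatever a particular random run produces.]

import random

N, k = 100, 3
relevant = range(k)                                  # target: x_0 OR x_1 OR x_2
target = lambda x: int(any(x[i] for i in relevant))

def perceptron_mistakes(examples):
    w, b, mistakes = [0.0] * N, 0.0, 0
    for x, y in examples:
        pred = int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)
        if pred != y:
            mistakes += 1
            sign = 1 if y == 1 else -1               # additive update
            w = [wi + sign * xi for wi, xi in zip(w, x)]
            b += sign
    return mistakes

def winnow_mistakes(examples):
    w, theta, mistakes = [1.0] * N, float(N), 0
    for x, y in examples:
        pred = int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)
        if pred != y:
            mistakes += 1
            factor = 2.0 if y == 1 else 0.5          # multiplicative update
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return mistakes

random.seed(0)
examples = []
for _ in range(2000):
    x = [int(random.random() < 0.05) for _ in range(N)]  # sparse random inputs
    examples.append((x, target(x)))
print("Perceptron mistakes:", perceptron_mistakes(examples))
print("Winnow mistakes:    ", winnow_mistakes(examples))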
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-025: ---------------------------------------- On Learning from Multi-Instance Examples: Empirical Evaluation of a Theoretical Approach by Peter Auer, Technische Universitaet Graz, Austria Abstract: We describe a practical algorithm for learning axis-parallel high-dimensional boxes from multi-instance examples. The first solution to this practical learning problem arising in drug design was given by Dietterich, Lathrop, and Lozano-Perez. A theoretical analysis was performed by Auer, Long, Srinivasan, and Tan. In this work we derive a competitive algorithm from theoretical considerations which is completely different from the approach taken by Dietterich et al. Our algorithm uses for learning only simple statistics of the training data and avoids potentially hard computational problems which were solved by heuristics by Dietterich et al. In empirical experiments our algorithm performs quite well although it does not reach the performance of the fine-tuned algorithm of Dietterich et al. We conjecture that our approach can be fruitfully applied also to other learning problems where certain statistical assumptions are satisfied. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-026: ---------------------------------------- Computing Functions with Spiking Neurons in Temporal Coding by Berthold Ruf, Technische Universitaet Graz, Austria Abstract: For fast neural computations within the brain it is very likely that the timing of single firing events is relevant. Recently Maass has shown that under certain weak assumptions functions can be computed in temporal coding by leaky integrate-and-fire neurons. Here we demonstrate with the help of computer simulations using GENESIS that biologically more realistic neurons can compute linear functions in a natural and straightforward way based on the basic principles of the construction given by Maass. One only has to assume that a neuron receives all its inputs in a time interval of approximately the length of the rising segment of its excitatory postsynaptic potentials. We also show that under certain assumptions there exists within this construction a type of activation function computed by such neurons, which allows the fast computation of arbitrary continuous bounded functions. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-027: ---------------------------------------- Hebbian Learning in Networks of Spiking Neurons Using Temporal Coding by Berthold Ruf, Michael Schmitt, Technische Universitaet Graz, Austria Abstract: Computational tasks in biological systems that require short response times can be implemented in a straightforward way by networks of spiking neurons that encode analogue values in temporal coding. We investigate the question of how spiking neurons can learn on the basis of differences between firing times. In particular, we provide learning rules of the Hebbian type in terms of single spiking events of the pre- and postsynaptic neuron and show that the weights approach some value given by the difference between pre- and postsynaptic firing times with arbitrarily high precision. Our learning rules give rise to a straightforward possibility for realizing very fast pattern analysis tasks with spiking neurons. 
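[A toy illustration of the convergence property stated in NC-TR-97-027, not the report's actual learning rule: a Hebbian-style update driven by single pre- and postsynaptic spike times whose fixed point encodes their timing difference. The rule and constants are assumptions made for illustration only.]

def hebbian_timing_update(w, t_pre, t_post, eta=0.1):
    """Move the weight toward the pre/post firing-time difference."""
    return w + eta * ((t_post - t_pre) - w)

w = 0.0
for _ in range(200):                 # repeated pairings of the same spike pair
    w = hebbian_timing_update(w, t_pre=2.0, t_post=5.0)
print(w)                             # approaches 3.0 = t_post - t_pre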
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-028: ---------------------------------------- Overview of Learning Systems produced by NeuroCOLT Partners by NeuroCOLT Partners Abstract: This NeuroCOLT Technical Report documents a number of systems that have been produced within the NeuroCOLT partnership. It includes only a summary of each system, together with pointers to where the system is located and where more information about its performance and design can be found. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-029: ---------------------------------------- On Bayesian Case Matching by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri, University of Helsinki, Finland Abstract: In this paper we present a new probabilistic formalization of the case-based reasoning paradigm. In contrast to earlier Bayesian approaches, the new formalization does not need a transformation step between the original case space and the distribution space. We concentrate on applying this Bayesian framework to the case matching problem, and propose a probabilistic scoring metric for this task. In the experimental part of the paper, the Bayesian case matching score is evaluated empirically by using publicly available real-world case bases. The results show that when presented with cases in which some of the feature values have been removed, a relatively small number of remaining values is sufficient for retrieving the original case from the case base by using the proposed measure. The experiments also show that the approach is computationally very efficient. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-030: ---------------------------------------- Batch Classifications with Discrete Finite Mixtures by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri, University of Helsinki, Finland Abstract: In this paper we study batch classification problems where multiple predictions can be made simultaneously, instead of performing the classifications independently one at a time. For the predictions we use the model family of discrete finite mixtures, where, by introducing a hidden latent variable, we implicitly assume missing data that has to be estimated in order to be able to construct models from sample data. The main contribution of this paper is to demonstrate how the standard EM algorithm can be modified for estimating both the missing latent variable data and the batch classification data at the same time, thus allowing us to use the same algorithm both for constructing the models from training data and for making predictions. In our framework the amount of data available for making predictions is greater than with the traditional approach, as the algorithm can also exploit the information available in the query vectors. In the empirical part of the paper, the results obtained by the batch classification approach are compared to those obtained by standard (independent) predictions by using public domain classification data sets. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-031: ---------------------------------------- Bayes Optimal Lazy Learning by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri, University of Helsinki, Finland Abstract: In this paper we present a new probabilistic formalization of the lazy learning approach. 
In our Bayesian framework, moving from the construction of an explicit hypothesis to a lazy learning approach, where predictions are made by combining the training data at query time, is equivalent to integrating out all the model parameters. Hence in Bayesian Lazy Learning the predictions are made by using all the (infinitely many) models. We present the formalization of this general framework, and illustrate its use in practice in the case of the Naive Bayes classifier model family. The Bayesian lazy learning approach is validated empirically with public domain data sets and the results are compared to the performance of the traditional, single model Naive Bayes. The general framework described in this paper can be applied with any formal model family, and to any discrete prediction task where the number of simultaneously predicted attributes is small, which includes for example all classification tasks prevalent in the machine learning literature. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-032: ---------------------------------------- On Predictive Distributions and Bayesian Networks by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri, University of Helsinki, Finland Abstract: In this paper we are interested in discrete prediction problems for a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general framework, where we consider a set of probability distributions defined by some parametric model class. The standard Bayesian approach is to compute the posterior probability for the model parameters, given a prior distribution and sample data, and fix the parameters to the instantiation with the {\em maximum a posteriori} probability. A more accurate predictive distribution can be obtained by computing the {\em evidence}, i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of {\em stochastic complexity} for determining predictive distributions. We then describe how these predictive inference methods can be realized in the case of Bayesian networks. In particular, we demonstrate the use of Jeffreys' prior as the prior distribution for computing the evidence predictive distribution. It can be shown that the evidence predictive distribution with Jeffreys' prior approaches the new stochastic complexity predictive distribution in the limit with an increasing amount of sample data. For computational reasons, in the experimental part of the paper the three predictive distributions are compared by using the simple tree-structured Naive Bayes model. The experimentation with several public domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense, especially with small training sets. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-033: ---------------------------------------- Partial Occam's Razor and its Applications by Carlos Domingo, Tatsuie Tsukiji and Osamu Watanabe, Tokyo Institute of Technology, Japan Abstract: We introduce the notion of ``partial Occam algorithm''. A partial Occam algorithm produces a succinct hypothesis that is partially consistent with given examples, where the proportion of consistent examples is a bit more than half. 
By using this new notion, we propose one approach for obtaining a PAC learning algorithm. First, as shown in this paper, a partial Occam algorithm is equivalent to a weak PAC learning algorithm. Then, by using the boosting techniques of Schapire or Freund, we can obtain an ordinary PAC learning algorithm from this weak PAC learning algorithm. We demonstrate with some examples that some improvement is possible by this approach, in particular in the hypothesis size. First, we obtain a (non-proper) PAC learning algorithm for $k$-DNF, which has a sample complexity similar to that of Littlestone's Winnow, but produces a hypothesis of size polynomial in $d$ and $\log k$ for a $k$-DNF target with $n$ variables and $d$ terms ({\it Cf.}~ The hypothesis size of Winnow is $\CO(n^k)$). Next we show that 1-decision lists of length $d$ with $n$ variables are (non-proper) PAC learnable by using $\dsp{\CO\rpr{\frac{1}{\epsilon} \rpr{\log \frac{1}{\delta}+16^d\log n(d+\log \log n)^2}}}$ examples within polynomial time w.r.t.\ $n$, $2^d$, $1/\epsilon$, and $\log 1/\delta$. Again, we obtain a sample complexity similar to Winnow for the same problem but with a much smaller hypothesis size. We also show that our algorithms are robust against random classification noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-034: ---------------------------------------- Algorithms for Learning Finite Automata from Queries: A Unified View by Jos\'e Balc\'azar, Josep D\'iaz, Ricard Gavalda, Universitat Polit\`ecnica de Catalunya, Spain Osamu Watanabe, Tokyo Institute of Technology, Japan Abstract: In this survey we compare several known variants of the algorithm for learning deterministic finite automata via membership and equivalence queries. We believe that our presentation makes it easier to understand what is going on and what the differences between the various algorithms mean. We also include a comparative analysis of the algorithms, review some known lower bounds, prove a new one, and discuss the question of parallelizing this sort of algorithm. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-035: ---------------------------------------- Using Fewer Examples to Simulate Equivalence Queries by Ricard Gavalda, Universitat Polit\`ecnica de Catalunya, Spain Abstract: It is well known that an algorithm that learns exactly using Equivalence queries can be transformed into a PAC algorithm that asks for random labelled examples. The first transformation due to Angluin (1988) uses a number of examples quadratic in the number of queries. Later, Littlestone (1989) and Schuurmans and Greiner (1995) gave transformations using linearly many examples. We present here another analysis of Littlestone's transformation which is both simpler and gives better leading constants. Our constants are still worse than Schuurmans and Greiner's, but while ours is a worst-case bound on the number of examples to achieve PAC learning, theirs is only an expected one. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-036: ---------------------------------------- A Dichotomy Theorem for Learning Quantified Boolean Formulas by Victor Dalmau, Universitat Polit\`ecnica de Catalunya, Spain Abstract: We consider the following classes of quantified boolean formulas. Fix a finite set of basic boolean functions. Take conjunctions of these basic functions applied to variables and constants in an arbitrary way. Finally quantify existentially or universally some of the variables. 
We prove the following {\em dichotomy theorem}: For any set of basic boolean functions, the resulting set of formulas is either polynomially learnable from equivalence queries alone or else it is not PAC-predictable even with membership queries under cryptographic assumptions. Furthermore we identify precisely which sets of basic functions are in which of the two cases. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-037: ---------------------------------------- Discontinuities in Recurrent Neural Networks by Ricard Gavald\`a, Universitat Polit\`ecnica de Catalunya, Spain Hava Siegelmann, Technion, Israel Abstract: This paper studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net-function and a sigmoid-like continuous activation-function. The authors introduce ``arithmetic networks'' as ARNN augmented with a few simple discontinuous (e.g., threshold) neurons. They argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. A proof is provided to show that arithmetic networks are computationally equivalent to networks comprised of neurons that compute divisions and polynomial net-functions inside sigmoid-like continuous activation functions. Further, the authors prove that these arithmetic networks are equivalent to the Blum-Shub-Smale (BSS) model, when the latter is restricted to a bounded number of registers. With regard to implementation on digital computers, the authors demonstrate that arithmetic networks with rational weights require exponential precision; but even with very simple real weights arithmetic networks are not subject to precision bounds. As such, they cannot be approximated on digital machines. This is in contrast with the ARNN, which is known to demand only precision that is linear in the computation time. When complex periodic discontinuous neurons (e.g., sine, tangent, fractional parts) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, this highly discontinuous network can solve the presumably intractable class of PSPACE-complete problems in polynomial time. -------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-97-001 can be accessed and printed as follows % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-97-001.ps.Z ftp> bye % zcat nc-tr-97-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-97-002-title.ps.Z nc-tr-97-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-97-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. 
The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt or directly to the archive: ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports Best wishes John Shawe-Taylor From krogh at frame.cbs.dtu.dk Thu Apr 3 10:24:46 1997 From: krogh at frame.cbs.dtu.dk (Anders Krogh) Date: Thu, 3 Apr 1997 09:24:46 -0600 Subject: Papers on ensemble learning Message-ID: <9704030924.ZM24091@frame.cbs.dtu.dk> Dear connectionists, the following paper is now available from our web page (http://www.ph.ed.ac.uk/~pkso/papers/EnsemblePREVII.ps.gz) : STATISTICAL MECHANICS OF ENSEMBLE LEARNING Anders Krogh and Peter Sollich (Physical Review E, 55:811-825, 1997) Abstract Within the context of learning a rule from examples, we study the general characteristics of learning with ensembles. The generalization performance achieved by a simple model ensemble of linear students is calculated exactly in the thermodynamic limit of a large number of input components and shows a surprisingly rich behavior. Our main findings are the following. For learning in large ensembles, it is advantageous to use underregularized students, which actually overfit the training data. Globally optimal generalization performance can be obtained by choosing the training set sizes of the students optimally. For smaller ensembles, optimization of the ensemble weights can yield significant improvements in ensemble generalization performance, in particular if the individual students are subject to noise in the training process. Choosing students with a wide range of regularization parameters makes this improvement robust against changes in the unknown level of corruption of the training data. An abbreviated version of this paper appeared in NIPS 8 and is also available (http://www.ph.ed.ac.uk/~pkso/papers/EnsembleNIPSVI.ps.gz). Both papers are also available from http://www.cbs.dtu.dk/krogh/refs.html. Further papers of potential interest to readers of connectionists can be found on our home pages: * Peter Sollich (http://www.ph.ed.ac.uk/~pkso/publications): learning from queries, online learning, finite size effects in neural networks, ensemble learning * Anders Krogh (http://www.cbs.dtu.dk/krogh/): Most current work on using hidden Markov models in `computational biology.' All comments and suggestions are welcome - Anders Krogh and Peter Sollich -------------------------------------------------------------------------- Peter Sollich Department of Physics University of Edinburgh e-mail: P.Sollich at ed.ac.uk Kings Buildings phone: +44 - (0)131 - 650 5293 Mayfield Road fax: +44 - (0)131 - 650 5212 Edinburgh EH9 3JZ, U.K. -------------------------------------------------------------------------- Anders Krogh Center for Biological Sequence Analysis (CBS) Technical University of Denmark Building 206 DK-2800 Lyngby DENMARK Phone: +45 4525 2470 Fax: +45 4593 4808 E-mail: krogh at cbs.dtu.dk _____________________________________________ From nburgess at lbs.ac.uk Thu Apr 3 09:16:34 1997 From: nburgess at lbs.ac.uk (Neil Burgess) Date: Thu, 3 Apr 1997 09:16:34 BST Subject: Pre-prints available - Neural Networks in the Capital Markets Message-ID: <9B468B447F1@neptune.lbs.ac.uk> Neural Networks in the Capital Markets: The following NNCM-96 pre-prints are now available on request. 
Please send your postal address to: boguntula at lbs.ac.uk ===================================================== ASSET ALLOCATION ACROSS EUROPEAN EQUITY INDICES USING A PORTFOLIO OF DYNAMIC COINTEGRATION MODELS A. N. BURGESS Department of Decision Science London Business School Regents Park, London, NW1 4SA, UK In modelling financial time-series, the model selection process is complicated by the presence of noise and possible structural non-stationarity. Additionally, the near-efficiency of financial markets combined with the flexibility of advanced modelling techniques creates a significant risk of "data-snooping". These factors combine to make trading a single model a very risky proposition, particularly in a situation which allows for high leverage, such as futures trading. We believe that the risks inherent in relying on a given model can be reduced by combining a whole set of models and, to this end, describe a population-based methodology which involves building a portfolio of complementary models. We describe an application of the technique to the problem of modelling a set of European equity indices using a portfolio of cointegration-based models. ===================================================== FORECASTING VOLATILITY MISPRICING P. J. BOLLAND & A. N. BURGESS Department of Decision Science London Business School Regents Park, London, NW1 4SA, UK A simple strategy is employed to exploit volatility mispricing based on discrepancies between implied and actual market volatility. The strategy uses forward and Log contracts to either buy or sell volatility depending on whether volatility is over- or under-priced. As expected, buying volatility gives small profits on average but with occasional large losses in adverse market conditions. In this paper multivariate non-linear methods are used to forecast the returns of a Log contract portfolio. The explanatory power of implied volatility and the volatility term structure from several indices (FTSE, CAC, DAX) is investigated. Neural network methodologies are benchmarked against linear regression. The use of both multivariate data and non-linear techniques is shown to significantly improve the accuracy of predictions. Keywords: Options, Volatility Mispricing, Log contract, Volatility Term Structure ===================================================== From w.penny at ic.ac.uk Thu Apr 3 09:22:20 1997 From: w.penny at ic.ac.uk (w.penny@ic.ac.uk) Date: Thu, 3 Apr 1997 15:22:20 +0100 Subject: Research jobs in neural nets and pattern recognition Message-ID: <22983.199704031422@albert.ee.ic.ac.uk> THREE POST-DOCTORAL RESEARCH POSITIONS IN PATTERN RECOGNITION / NEURAL NETWORKS RESEARCH Three post-doctoral research positions are available within the Neural Systems Section of the Department of Electrical & Electronic Engineering to work on the theory and application of advanced pattern recognition techniques, in particular the use of Bayesian methods and neural networks. Two positions are funded for two years and the third nominally for three years with a yearly evaluation. All projects involve research in statistical pattern recognition with applications in the biomedical field. Experience in pattern recognition and Bayesian statistics would be an advantage. A good understanding of data processing (especially signal processing) techniques is desired as is experience of UNIX, C and Matlab. The positions are funded by the Jefferiss Research Trust, the European Commission and British Aerospace plc respectively. 
The salary scale will be RA1A, GBP 14,732 - 22,143 per annum (exclusive of London Allowance of GBP 2,134) depending on age and experience. Further information may be obtained from http://www.ee.ic.ac.uk/research/neural/positions.html or via e-mail to Dr Stephen Roberts (s.j.roberts at ic.ac.uk). The closing date for applications is April 11th 1997. In recent years, great interest has developed in the use of non-classical methods for statistical analysis of data as part of a general move towards the use of artificial intelligence methods. One genre which has shown itself to be particularly suitable is that of connectionist models, a subset of which are referred to as artificial neural networks (ANNs). Classical statistical methods rely upon the use of simple models, such as linear or logistic regression, in order to 'learn' relationships between variables and outcomes. ANNs offer a far more flexible model set; indeed, it has been shown that they have the property of universal approximation, so they are able, in principle, to estimate any set of arbitrary relationships between variables. Furthermore, they may model non-linear coupling between sets of variables. Part of the momentum of the recent development of ANNs for pattern recognition, regression and estimation problems must be attributed to the manner in which ANNs conform to many of the traditional statistical approaches, i.e. they may estimate Bayesian probabilities in the case of classification and conditional averages in the case of regression. 1) The use of Neural Networks to Predict the Development and Progression of Kaposi's Sarcoma (KS). This is a joint project funded by the Jefferiss Research Trust between the Department of Electrical and Electronic Engineering, Imperial College of Science, Technology & Medicine and the Department of Genito-urinary Medicine, St. Mary's Hospital. Kaposi's sarcoma (KS) is a vascular tumour, which is more common and often aggressive in patients with underlying immunosuppression (post-transplant KS and AIDS-associated KS). KS was first described by the Hungarian pathologist Moritz Kaposi in 1872, yet still remains something of a clinical enigma, being an unusual tumour of unknown origin. The aim of this research is to determine factors that influence the variable progression rate of KS in HIV-infected individuals. There is currently no means of predicting which patients will develop KS and no understanding of the relationship between the forms of the disease. The aim of the project is to carry out multi-variable analyses in order to define clinical end-points and provide guidelines for better patient management. A number of variables will be available to the system. The reliability and utility of each with regard to the prediction of patient outcome, however, is generally unknown. Classical regression analysis offers some powerful methods of selection and ranking within a subset of features or variables. Whilst such methods should be used for completeness and comparison, it is noted that recent developments in Bayesian learning theory have made it possible to assess the utility of variables from within the ANN structure. Each input variable has a separate weighting factor, or in Bayesian terminology, a hyper-prior, associated with it. This technique has become known as automatic relevance determination, or ARD. Such an assessment is devoid of the strong assumptions of independence and linearity of most of the classical regression methods.
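To make the ARD idea concrete, the following is a minimal sketch for a linear model rather than a full neural network: each input gets its own precision hyperparameter, re-estimated with MacKay-style evidence updates, and irrelevant inputs end up with very large precisions (weights forced towards zero). The synthetic data, variable names and iteration counts are invented for illustration and are not part of the software described in this posting.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: five candidate inputs, but only inputs 0 and 2 actually matter
N, D = 200, 5
X = rng.normal(size=(N, D))
t = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.3 * rng.normal(size=N)

alpha = np.ones(D)   # one precision hyperparameter ("hyper-prior") per input
beta = 1.0           # noise precision

for _ in range(50):
    # Gaussian posterior over the weights given the current hyperparameters
    Sigma = np.linalg.inv(np.diag(alpha) + beta * X.T @ X)
    m = beta * Sigma @ X.T @ t
    # MacKay-style evidence re-estimation of the hyperparameters
    gamma = 1.0 - alpha * np.diag(Sigma)       # how well-determined each weight is
    alpha = np.minimum(gamma / (m ** 2), 1e8)  # large alpha = input judged irrelevant
    beta = (N - gamma.sum()) / np.sum((t - X @ m) ** 2)

for i in range(D):
    print("input %d: alpha = %10.2f  weight = %+.3f" % (i, alpha[i], m[i]))

With this synthetic data the irrelevant inputs should be driven to very large alpha, which is the sense in which ARD ranks the utility of variables without the independence and linearity assumptions mentioned above.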
It is feasible for an ANN to produce not only a set of output variables (predictions or classifications, for example) but also an associated set of confidence or validation measures (describing the probable error on each output). This enables the tracking of predictions of future events in a more robust framework and furthermore allows for the accurate fusion of information from more than one source and the incorporation of temporal information, i.e. poor-quality information from the present time may be suppressed in favour of more reliable information from the past or future as it becomes available. If we may regard the system as aiming to produce a probability distribution in some 'outcome space', then several possible approaches to analysis are made available. As temporal information is retained (i.e. outcomes are based upon the entire course of the patient's history, not just present information) we may seek information regarding the effect of each piece of information (test result or partial diagnosis) on the probability distribution in the 'outcome space'. Two pieces of information may be obtained from this approach: how important a partial decision or test result is to the probability of certain outcomes, and how important it is to changing the uncertainty we have in the outcome results. Clearly, the goal will be to indicate tests and/or procedures which not only increase the survivability probabilities but also make the estimated outcomes less variant, so we have more confidence in the predictions (this means not only increasing the height of a favourable node in the posterior probability space, but also attempting to reduce the variance of the distribution). In order to accommodate multiple output hypotheses we propose to utilise a procedure similar to that detailed in (Bishop 1995) whereby the output distribution is modelled multi-modally. This has the added benefit that individual modes (possible outcomes) may be tracked separately. This representation is also similar to that taken in a mixture of experts approach. REFERENCES 1. Bishop CM. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, 1995. 2. Ripley BD. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996. 3. Roberts SJ and Penny W. Novelty, Confidence and Errors in Connectionist Systems. Proceedings of IEE colloquium on fault detection and intelligent sensors, IEE, September 1996. 4. Penny W and Roberts SJ. Neural Networks with Error Bars. Departmental report, also submitted to IEEE Transactions on Neural Networks, February 1997, available from http://www.ee.ic.ac.uk/staff/hp/sroberts.html 2) SIESTA (EU-funded project) SIESTA is an EU-funded project which involves Imperial College and 10 other European partners. The aim of the project is to define and produce a system which is capable of continuous evaluation of the state of the brain during the sleep-wake cycle. Such an automated system is of enormous value in the clinical field, and the research into multi-channel signal processing, fusion and pattern recognition poses a challenge to the most modern techniques. The state of the brain will, primarily, be monitored via its electrical activity (the EEG). One of the most well-known approaches from the literature to achieve a continuous description of EEG state is the system developed by Roberts & Tarassenko (1992a, 1992b). This approach will be used as a general basis for the research in SIESTA.
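Returning briefly to the confidence measures described under project 1: one simple way to obtain error bars of that sort (not the method of the Penny & Roberts report cited above) is to train a bootstrap committee of small networks and read the spread of their predictions as a crude confidence measure. A minimal sketch, assuming scikit-learn is available and using invented synthetic data:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Invented 1-d regression problem with additive noise
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=300)

# Bootstrap committee: each member is trained on a resampled training set
committee = []
for seed in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=seed)
    net.fit(X[idx], y[idx])
    committee.append(net)

# Predictive mean and spread (a simple "error bar") at a few test points
x_test = np.array([[-2.5], [0.0], [2.5]])
preds = np.stack([net.predict(x_test) for net in committee])   # (members, points)
for x, mu, sd in zip(x_test[:, 0], preds.mean(axis=0), preds.std(axis=0)):
    print(f"x = {x:+.1f}   prediction = {mu:+.3f} +/- {sd:.3f}")

The committee spread captures model uncertainty only; the mixture-of-modes treatment described above goes further by representing several possible outcomes explicitly.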
Roberts & Tarassenko (henceforth, 'R&T') used a self-organizing feature map (SOM) to perform unsupervised topographic mapping of feature vectors consisting of 10 coefficients of a Kalman filter algorithm applied to the raw EEG. This self-organizing network discovered eight distinct clusters in which the brain state remained preferentially. From biehl at physik.uni-wuerzburg.de Fri Apr 4 03:31:35 1997 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Fri, 4 Apr 1997 10:31:35 +0200 (MESZ) Subject: preprint on on-line unsupervised learning Message-ID: <199704040831.KAA29194@wptx08.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/1997/WUE-ITP-97-003.ps.gz The following manuscript is now available via anonymous ftp (see below for the retrieval procedure), or, alternatively from http://www.physik.uni-wuerzburg.de/~biehl ------------------------------------------------------------------ "Specialization processes in on-line unsupervised learning" Michael Biehl, Ansgar Freking, Georg Reents, and Enno Schlösser Contribution to the Minerva Workshop on Mesoscopics, Fractals and Neural Networks, Eilat, Israel, March 1997 Ref: WUE-ITP-97-003 Abstract
From sami at guillotin.hut.fi Fri Apr 4 07:56:49 1997 From: sami at guillotin.hut.fi (Sami Kaski) Date: Fri, 4 Apr 1997 14:56:49 +0200 Subject: Thesis on data exploration with SOMs available Message-ID: <199704041256.OAA01930@guillotin.hut.fi> The following Dr.Tech. thesis is available at http://nucleus.hut.fi/~sami/thesis/thesis.html (html-version) http://nucleus.hut.fi/~sami/thesis.ps.gz (compressed postscript, 300K) http://nucleus.hut.fi/~sami/thesis.ps (postscript, 2M) The articles that belong to the thesis can be accessed through the page http://nucleus.hut.fi/~sami/thesis/node3.html --------------------------------------------------------------- Data Exploration Using Self-Organizing Maps Samuel Kaski Helsinki University of Technology Neural Networks Research Centre P.O.Box 2200 (Rakentajanaukio 2C) FIN-02015 HUT, Finland Finding structures in vast multidimensional data sets, be they measurement data, statistics, or textual documents, is difficult and time-consuming. Interesting, novel relations between the data items may be hidden in the data. The self-organizing map (SOM) algorithm of Kohonen can be used to aid the exploration: the structures in the data sets can be illustrated on special map displays. In this work, the methodology of using SOMs for exploratory data analysis or data mining is reviewed and developed further. The properties of the maps are compared with the properties of related methods intended for visualizing high-dimensional multivariate data sets. In a set of case studies the SOM algorithm is applied to analyzing electroencephalograms, to illustrating structures of the standard of living in the world, and to organizing full-text document collections. Measures are proposed for evaluating the quality of different types of maps in representing a given data set, and for measuring the robustness of the illustrations the maps produce. The same measures may also be used for comparing the knowledge that different maps represent. Feature extraction must in general be tailored to the application, as is done in the case studies. There exists, however, an algorithm called the adaptive-subspace self-organizing map, recently developed by Kohonen, which may be of help. It extracts invariant features automatically from a data set. The algorithm is here characterized in terms of an objective function, and demonstrated to be able to identify input patterns subject to different transformations. Moreover, it could also aid in feature exploration: the kernels that the algorithm creates to achieve invariance can be illustrated on map displays similar to those that are used for illustrating the data sets.
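For readers who want to see the mechanics behind such map displays, here is a minimal self-organizing map in plain numpy: online Kohonen updates with a shrinking Gaussian neighbourhood on a small grid. The grid size, learning schedules and synthetic data below are arbitrary illustrative choices, not those used in the thesis.

import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: three Gaussian clusters in 2-d
data = np.concatenate([rng.normal(c, 0.15, size=(200, 2))
                       for c in ([0, 0], [1, 1], [1, 0])])

rows, cols = 8, 8
weights = rng.uniform(0, 1, size=(rows, cols, 2))   # codebook vectors on a grid
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_steps = 5000
for step in range(n_steps):
    x = data[rng.integers(len(data))]
    # Best-matching unit (BMU)
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Exponentially shrinking learning rate and neighbourhood radius
    frac = step / n_steps
    lr = 0.5 * (0.01 / 0.5) ** frac
    sigma = 3.0 * (0.5 / 3.0) ** frac
    # Gaussian neighbourhood on the grid, centred on the BMU
    grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

# Map each data point to its BMU; the hit counts give a crude map display
hits = np.zeros((rows, cols), dtype=int)
for x in data:
    bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (rows, cols))
    hits[bmu] += 1
print(hits)

The hit-count matrix is the simplest kind of map display: the three clusters should show up as separate regions of high counts on the grid.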
From riegler at ifi.unizh.ch Fri Apr 4 08:51:10 1997 From: riegler at ifi.unizh.ch (Alex Riegler) Date: Fri, 4 Apr 1997 15:51:10 +0200 Subject: NTCS-97 Call For Participation Message-ID: CALL FOR PARTICIPATION /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ International Workshop N E W T R E N D S I N C O G N I T I V E S C I E N C E NTCS '97 /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ "Does Representation need Reality?" Perspectives from Cognitive Science, Neuroscience, Epistemology, and Artificial Life Vienna, Austria, May 14 - 16, 1997 with plenary talks by: Larry Cauller, Georg Dorffner, Ernst von Glasersfeld, Stevan Harnad, Wolf Singer, and Sverre Sjoelander organized by the Austrian Society of Cognitive Science (ASoCS) =========================================================================== Latest information can be retrieved from the conference WWW-page =========================================================================== P u r p o s e ___________________________________________________________________________ The goal of this single-track conference is to investigate and discuss new approaches and movements in cognitive science in a workshop-like atmosphere. Among the topics which seem to have emerged in the last years are: embodiment of knowledge, system theoretic and computational neuroscience approaches to cognition, dynamics in recurrent neural architectures, evolutionary and artificial life approaches to cognition, and (epistemological) implications for perception and representation, constructivist concepts and the problem of knowledge representation, autopoiesis, implications for epistemology and philosophy (of science). Evidence for a failure of the traditional understanding of neural representation converges from several fields. Neuroscientific results in the last decade have shown that single cell representations with hierarchical processing towards representing units seems not the way the cortex represents environmental entities. Instead, distributed cell ensemble coding has become a popular concept for representation, both in computational and in empirical neuroscience. However, new problems arise from the new concepts. The problem of binding the distributed parts into a uniform percept can be "solved" by introducing synchronization of the member neurons. A deeper (epistemological) problem, however, is created by recurrent architectures within ensembles generating an internal dynamics in the network. The cortical response to an environmental stimulus is no longer dominated by stimulus properties themselves, but to a considerable degree by the internal state of the network. Thus, a clear and stable reference between a representational state (e.g. in a neuron, a Hebbian ensemble, an activation state, etc.) and the environmental state becomes questionable. Already learned experiences and expectancies might have an impact on the neural activity which is as strong as the stimulus itself. Since these internally stored experiences are constantly changing, the notion of (fixed) representations is challenged. At this point, system theory and constructivism, both investigating the interaction between environment and organism at an abstract level, come into the scene and turn out to provide helpful epistemological concepts. The goal of this conference is to discuss these phenomena and their implications for the understanding of representation, semantics, language, cognitive science, and artificial life. 
Contrary to many conferences in this field, the focus is on interdisciplinary cooperation and on conceptual and epistemological questions, rather than on technical details. We are trying to achieve this by giving more room to discussion and interaction between the participants (e.g., invited comments on papers, distribution of papers to the participants before the conference, etc.). According to the interdisciplinary character of cognitive science, we welcome papers/talks from the fields of artificial life, empirical, cognitive, and computational neuroscience, philosophy (of science), epistemology, anthropology, computer science, psychology, and linguistics. T a l k s ___________________________________________________________________________ NOTE: Names with * are invited speakers. Constructivism Ernst von Glasersfeld* Piaget's legacy: Cognition as adaptive activity Sverre Sjoelander* How animals handle reality: the adaptive aspect of representation Annika Wallin Is there a way to distinguish representation from Perception... Tom Routen Habitus and Animats General Epistemology and Methodology W.F.G. Haselager Is cognitive science advancing towards behaviorism? Michael Pauen Reality and representation William Robinson Representation and cognitive explanation Matthias Scheutz The ontological status of representations Anthony Chemero Two types of anti-representationism: a taxonomy Georg Schwarz Can representation get reality? Neuroscience Larry Cauller* NeuroInteractivism: Explaining emergence without representation Wolf Singer* The observer in the brain Steven Bressler The dynamic manifestation of cognitive structures in the cerebral cortex Erich Harth Sketchpads in and beyond the brain Marius Usher Active Neural representations: neurophysiological data and its implications Symbol Grounding and Communication Georg Dorffner* The connectionist route to embodiment and dynamicism Stevan Harnad* Keeping a grip on the real/virtual distinction in this representationalist age Mark Wexler Must mental representation be internal? Tom Ziemke Rethinking Grounding Christian Balkenius Explorations in synthetic pragmatics Horst Hendriks-Jansen Does natural cognition need internal knowledge structures? Nathan Chandler On the importance of reality in representations Peter Gaerdenfors Does semantics need reality? P o s t e r s ___________________________________________________________________________ Chris Browne Iconic Learning and Epistemology Mark Claessen RabbitWorld: the concept of space can be learned Valentin Constantinescu Interaction between perception & expectancy... Andrew Coward Unguided categorization, direct and symbolic representation, and evolution of cognition... David Davenport PAL: A constructivist model of cognitive activity Karl Diller Representation and reality: where are the rules of grammar Richard Eiser Representation and the social reality Robert French When coffe cups are like old elephants Maurita Harney Representation and its metaphysics Daniel Hutto Cognition without representation? Amy Ione Symbolic creation and re-representation of reality Sydney Lamb Top-Down Modeling, Bottom-up Learning Michael Luntley Real Representations Ralf Moeller Perception through anticipation Ken Mogi Response selectivity, neuron doctrine, and Mach's principle in perception Alfredo Pereira The term "representation" in cognitive neuroscience Michael Ramscar Judgement of association: problems with cognitive theories of analogy Hanna Risku Constructivist Consequences: does tramslation need reality? 
Sabine Weiss Cooperation of different neural networks during single word and sentence processing R e g i s t r a t i o n ___________________________________________________________________________ To register please fill out the registration form at the bottom of this CFP and send it by... o Email to franz-markus.peschl at univie.ac.at, or by o Fax to +43-1-408-8838 (attn. M.Peschl), or by o Mail to Markus Peschl, Dept.for Philosophy of Science (address below) Registration Fee (includes admission to talks, presentations, and proceedings): Member * 1300 ATS (about 118 US$) Non-Member 1800 ATS (about 163 US$) Student Member ** 500 ATS (about 45 US$) Student Non-Member 1300 ATS (about 118 US$) *) Members of the Austrian Society of Cognitive Science **) Requires proof of valid student ID C o n f e r e n c e S i t e a n d A c c o m o d a t i o n ___________________________________________________________________________ The conference takes place in a small beautiful baroque castle in the suburbs of Vienna; the address is: Schloss Neuwaldegg Waldegghofg. 5 A-1170 Wien Austria Tel: +43 1 485 3605 Fax: +43 1 485 3605-112 It is surrounded by a beautiful forest and a good (international and Viennese gastronomic) infrastructure. On the tram it takes only 20 minutes to the center of Vienna (see overview). (Limited) Accommodation is provided by the castle (about 41 US$ per night (single), 30 US$ per night, per person (double) including breakfast). Please contact the telephone number above. You can find more information about Vienna and accommodation at the Vienna Tourist Board or at the Intropa Travel agent Tel: +43-1-5151-242. Note: In case you want to stay over the weekend we refer you to the following hotel which is near the conference site (single about 75 US$ / 850 ATS per night): Hotel Jaeger Hernalser Hauptstrasse 187 A-1170 Wien Austria Tel: +43 1 486 6620 Fax: +43 1 486 6620 8 D e s t i n a t i o n V i e n n a ? ___________________________________________________________________________ Vienna, Austria, can be reached internationally by plane or train. The Vienna Schwechat airport is located about 16 km from the city center. From the airport, the city air-terminal can be reached by bus (ATS 60.- per person) or taxi (about ATS 400). Rail-passengers arrive at one of the main stations which are located almost in the city center. From the air-terminal and the railway stations the congress site and hotels can be reached easily by underground (U-Bahn), tramway, or bus. A detailed description will be given to the participants. In May the climate is mild in Vienna. It is the time when spring is at its climax and everything is blooming. The weather is warm with occasional (rare) showers. The temperature is about 18 to 24 degrees Celsius. More information about Vienna and Austria on the web: Welcome to Vienna Scene Vienna City Wiener Festwochen - Vienna Festival Public Transport in Vienna (subway) Welcome to Austria General information about Austria Austria Annoted S c i e n t i f i c C o m m i t t e e ___________________________________________________________________________ R. Born Univ. of Linz (A) R. Born Univ. of Linz (A) G. Dorffner Univ. of Vienna (A) E. v. Glasersfeld Univ. of Amherst, MA (USA) S. Harnad Univ. of Southampton (GB) M. Peschl Univ. of Vienna (A) A. Riegler Univ. of Zurich (CH) H. Risku Univ. of Skovde (S) M. Scheutz Univ. of Indiana (USA) W. Singer Max Planck Institut, Frankfurt (D) S. Sjoelander Linkoeping University (S) A. v. 
Stein Neuroscience Institute, La Jolla (USA) O r g a n i z i n g C o m m i t t e e ___________________________________________________________________________ M. Peschl Univ. of Vienna (A) A. Riegler Univ. of Zurich (CH) S p o n s o r i n g O r g a n i z a t i o n s ___________________________________________________________________________ o Christian Doppler Laboratory for Expert Systems (Vienna University of Technology) o Oesterreichische Forschgungsgemeinschaft o Austrian Federal Ministry of Science, Transport and the Arts o City of Vienna A d d i t i o n a l I n f o r m a t i o n ___________________________________________________________________________ For further information on the conference contact: Markus Peschl Dept. for Philosophy of Science University of Vienna Sensengasse 8/10 A-1090 Wien Austria Tel: +43-1-402-7601/41 Fax: +43-1-408-8838 Email: franz-markus.peschl at univie.ac.at General information about the Austrian Society for Cognitive Science can be found on the Society webpage or by contacting Alexander Riegler AILab, Dept. of Computer Science University of Zurich Winterthurerstr. 190 CH-8057 Zurich Switzerland Email: riegler at ifi.unizh.ch R e g i s t r a t i o n f o r m ___________________________________________________________________________ I participate at the Workshop "New Trends in Cognitive Science (NTCS'97)" Full Name ........................................................................ Full Postal Address: ........................................................................ ........................................................................ ........................................................................ Telephone Number (Voice): Fax: ..................................... .................................. Email address: ........................................................................ Payment in ATS (= Austrian Schillings; 1 US$ is currently about 11 ATS). This fee includes admission to talks, presentations, and proceedings: [ ] Member * 1300 ATS (about 118 US$) [ ] Non-Member 1800 ATS (about 163 US$) [ ] Student Member ** 500 ATS (about 45 US$) [ ] Student Non-Member 1300 ATS (about 118 US$) *) Members of the Austrian Society of Cognitive Science **) Requires proof of valid student ID Total: .................... ATS [ ] Visa [ ] Master-/Eurocard Name of Cardholder ........................................ Credit Card Number ........................................ Expiration Date ................. Date: ................ Signature: ........................................ Please send this form by... o Email to franz-markus.peschl at univie.ac.at, or by o Fax to +43-1-408-8838 (attn. M.Peschl), or by o Mail to Markus Peschl, Dept.for Philosophy of Science, Univ. of Vienna, Sensengasse 8/10, A-1090 Wien, Austria From movellan at ergo.ucsd.edu Fri Apr 4 17:27:09 1997 From: movellan at ergo.ucsd.edu (Javier R. Movellan) Date: Fri, 4 Apr 1997 14:27:09 -0800 Subject: UCSD Cogsci TR Message-ID: <199704042227.OAA31931@ergo.ucsd.edu> The following technical report is available online at http://cogsci.ucsd.edu (follow links to Tech Reports & Software ) Physical copies are also available (see the site for information). Analysis of Direction Selectivity Arising From Recurrent Cortical Interactions. 
Paul Mineiro and David Zipser UCSD Cogsci TR.97.03 The relative contributions of feedforward and recurrent connectivity to the direction-selective responses of cells in layer IVB of primary visual cortex are currently the subject of debate in the neuroscience community. Recently, biophysically detailed simulations have shown that realistic direction-selective responses can be achieved via recurrent cortical interactions between cells with non-direction-selective feedforward input [Koch:DS, Maex:SpikeMotion]. Unfortunately the complexity of these models, while desirable for detailed comparison with biology, makes them difficult to analyze mathematically. In this paper a relatively simple cortical dynamical model is used to analyze the emergence of direction-selective responses via recurrent interactions. A comparison between a model based on our analysis and physiological data is presented. The approach also allows analysis of the recurrently propagated signal, revealing the predictive nature of the implementation. From fritzke at neuroinformatik.ruhr-uni-bochum.de Mon Apr 7 13:35:31 1997 From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke) Date: Mon, 7 Apr 1997 19:35:31 +0200 (MET DST) Subject: Java software and TR available (competitive learning) Message-ID: <199704071735.TAA29955@urda.neuroinformatik.ruhr-uni-bochum.de> Dear connectionists, this is to announce the availability of version 1.3 of the "DemoGNG" Java applet and a new version of the accompanying technical report draft "Some Competitive Learning Methods". The TR describes in detail all methods implemented in DemoGNG as well as some others (such as k-means and growing cell structures). URLs and descriptions follow below. Enjoy, Bernd Fritzke and Hartmut Loos URLs: ======== DemoGNG, for immediate execution: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html DemoGNG, for download (972 kBytes): ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/DemoGNG-1.3.tar.gz TR, HTML: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper/ TR, Postscript, 45 pages, 376 kBytes: ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz DemoGNG 1.3 ======== DemoGNG is a Java applet which is distributed as free software under the GNU PUBLIC LICENSE and implements several methods related to competitive learning. It is possible to experiment with the methods using various (hardwired) data distributions and observe the learning process. DemoGNG is highly interactive (e.g. dragging of neurons during self-organization is possible) and has already been used for neural network courses in several countries. The following algorithms are now implemented: (new) LBG, Hard Competitive Learning (constant and exponentially decaying learning rate), Neural Gas, Competitive Hebbian Learning, Neural Gas with Competitive Hebbian Learning, Growing Neural Gas, (new) Self-Organizing Map, (new) Growing Grid. Features added since the previously released version include * display of Voronoi diagrams * display of Delaunay triangulations * additional probability distributions * a detailed manual * sound switched off by default 8v) Draft Report ======== Some Competitive Learning Methods Bernd Fritzke Systems Biophysics, Institute for Neural Computation, Ruhr-Universität Bochum This report has the purpose of describing several algorithms from the literature, all related to competitive learning. A uniform terminology is used for all methods.
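As a plain-Python illustration of the simplest method in that list, here is hard competitive learning (online winner-take-all vector quantization) with a constant learning rate. The ring-shaped data distribution and all parameters are invented, and this is a sketch in numpy rather than code from the DemoGNG Java applet itself.

import numpy as np

rng = np.random.default_rng(3)

# Invented input distribution: noisy points on a ring
theta = rng.uniform(0, 2 * np.pi, size=2000)
data = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(2000, 2))

K = 12                                   # number of reference (codebook) vectors
centres = rng.normal(scale=0.1, size=(K, 2))
eps = 0.05                               # constant learning rate

for _ in range(10):                      # a few passes over the data
    for x in rng.permutation(data):
        winner = np.argmin(np.linalg.norm(centres - x, axis=1))
        centres[winner] += eps * (x - centres[winner])   # move only the winner

# Mean quantization error after training
err = np.mean(np.min(np.linalg.norm(data[:, None, :] - centres[None], axis=2), axis=1))
print("mean quantization error:", round(float(err), 4))

The other methods in the list differ mainly in what else is adapted besides the winner: a ranked neighbourhood (Neural Gas), a fixed grid neighbourhood (SOM), or a learned topology (Competitive Hebbian Learning, Growing Neural Gas).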
Moreover, identical examples are provided to allow a qualitative comparisons of the methods. The complete Java source code as well as a postscript version of this document may be accessed by ftp. -- Bernd Fritzke * Institut f"ur Neuroinformatik Tel. +49-234 7007845 Ruhr-Universit"at Bochum * Germany FAX. +49-234 7094210 WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html From josh at eas.uccs.edu Mon Apr 7 18:00:09 1997 From: josh at eas.uccs.edu (Alspector) Date: Mon, 7 Apr 1997 15:00:09 -0700 Subject: IWANNT*97 program & registration info Message-ID: <199704072200.PAA13032@eas.uccs.edu> International Workshop on Applications of Neural Networks (and other intelligent systems) to Telecommunications (IWANNT*97) Melbourne, Australia June 9-11, 1997 You are invited to an international workshop on applications of neural networks and other intelligent systems to problems in telecommunications and information networking. This is the third workshop in a series that began in Princeton, New Jersey on October 18-20, 1993 and continued in Stockholm, Sweden on May 22-24, 1995. This conference will be at the University of Melbourne on the Monday through Wednesday (June 9 - 11, 1997) just before the Australian Conference on Neural Networks (ACNN) which will be at the same location on June 11 - 13 (Wednesday - Friday). There will be a hard cover proceedings available at the workshop. There is further information on the IWANNT home page at: http://ece-www.colorado.edu/~timxb/iwannt.html Organizing Committee General Chair Josh Alspector, U. of Colorado Program Chair Rod Goodman, Caltech Publications Chair Timothy X Brown,U. of Colorado Treasurer Suzana Brown, U. of Colorado Publicity Atul Chhabra, NYNEX Lee Giles, NEC Research Institute Local Arrangements Adam Kowalczyk, Telstra, Chair Chris Leckie, Telstra Andrew Jennings, RMIT M. Palaniswami, U. of Melbourne Robert Slaviero, Sig Proc Ass. Jacek Szymanski, Telstra Program Committee Nader Azarmi, British Telecom Miklos Boda, Ellemtel Harald Brandt, Ellemtel Tzi-Dar Chiueh, National Taiwan U Bruce Denby, U of Versailles Simon Field, Nortel Francoise Fogelman, SLIGOS Marwan A. Jabri, Sydney Univ. Thomas John, Southwestern Bell S Y Kung, Princeton University Tadashi Sone, ATR Scott Toborg, SBC TRI IEEE Liaison Steve Weinstein, NEC Conference Administrator Helen Alspector IWANNT Conference Administrator Univ. of Colorado at Col. Springs Dept. of Elec. & Comp. Eng. Colorado Springs, CO 80933-7150 (719) 262-3351 (719) 262-3589 (fax) neuranet at mail.uccs.edu Tentative Conference Program Monday, June 9, 1997: 7:00 Registration and Coffee Session 1: 8:30 J. Alspector, Welcome 8:45 Invited Speaker: TBA 9:30 Techniques for Telecommunications Fraud Management, S.D.H. Field, P.W.Hobson 10:00 Employing Remote Monitoring and Artificial Intelligence Techniques to Develop the Proactive Network Management, A.S.M. De Franceschi, M. A. da Rocha, H.L. Weber, C.B.Westphall 10:30 Break 11:00 Local Diagnosis for Real-Time Network Traffic Management, P. Leray, P.Gallinari, E. Didelet 11:30 Intelligent Capacity Evaluation/Planning with Neural Network Clustering Algorithms, L. Lewis, U. Datta, S. Sycamore 12:00 Neural Networks for Network Topological Design, D.B. Hoang 12:30 Lunch Session 2: 13:30 Self Adaptive Network Utilisation, S. Olafsson 14:00 Neural Networks for Computing Blocking Probabilities in ATM Virtual Subnetworks, J. Br, M. Boda, A. Farag, T. Henk 14:30 Fuzzy Mean Flow Estimation with Neural Networks for Multistage ATM Systems, A. 
Murgu 15:00 Break 15:30 Generation of ATM Video Traffic Using Neural Networks, A. Casilari, A.Reyes, A. Daz-Estrella, F. Sandoval 16:00 Model Generation of Aggregate ATM Traffic Using A Neural Control with Accelerated Self-Scaling, E. Casilari, A. Jurado, G.Pansard, A. Daz Estrella, F. Sandoval 16:30 Dynamic Routing in ATM Networks with Effective Bandwidth Estimation by Neural Networks, Z. Fan, P. Mars Tuesday, June 10, 1997: 8:00 Registration and Coffee Session 3: 8:30 Invited Speaker: Tadashi Sone: TBA 9:00 Neural Networks for Location Prediction in Mobile Networks, J.Biesterfeld, E. Ennigrou, K. Jobmann 9:30 Reinforcement Learning and Supervised Learning Control of Dynamic Channel Allocation for Mobile Radio Systems, E.J.Wilmes, K.T. Erickson 10:00 Equalisation of Rapidly Time-Varying Channels Using an Efficient RBF Neural Network, Q. Gan, N. Sundararajan, P. Saratchandran, R.Subramanian 10:30 Break 11:00 Equalization and the Impulsive MOOSE: Fast Adaptive Signal Recovery in Very Heavy Tailed Noise, E. Dubossarsky, T.R. Osborn, S. Reisenfeld 11:30 Neural Receiver Structures Based on Self-Organizing Maps in Nonlinear Multipath Channels, K. Raivio, J. Henriksson, O. Simula 12:00 Using Neural Networks for Alarm Correlation in Cellular Phone Networks, H.Wietgrefe K-D. Tuchs, K. Jobmann, G. Carls, P.Frhlich, W.Nejdl, S.Steinfeld 12:30 Lunch Session 4: 13:30 A Space-Based Radio Frequency Transient Event Classifier, K.R.Moore, P.C. Blain, M.P. Caffrey, R.C. Franz, K.M. Henneke, R.G.Jones 14:00 Traffic Trends Analysis using Neural Networks, T. Edwards, D.S.W.Tansley, R.J. Frank, N. Davey 14:30 Learning Customer Profiles to Generate Cash over the Internet, C.Giraud-Carrier, M. Ward 15:00 Break 15:30 Keyword Search in Handwritten Documents, A. Kolcz, J. Alspector, M.Augusteijn, R. Carlson, G. Viorel Popescu 16:00 Face Recognition Using Hierarchical Neural Networks, Y.-H.Huang,C.-J.Liou, S.-T. Wu, L.-G. Chen, T.-D. Chiueh 16:30 Query Word-Concept Clusters in a Legal Document Collection, T.D.Gedeon, B.J. Briedis, R.A. Bustos, G. Greenleaf, A. Mowbray Wednesday, June 10, 1997: 8:00 Registration and Coffee Session 5: 9:00 Invited Speaker: TBA 9:30 A Novel Microsatellite Control System, M.W. Tilden, J.R. Frigo, K.R.Moore 10:00 Overload Control for Distributed Call Processors Using Neural Networks, S.Wu, K.Y.M. Wong 10:00 Break 10:30 Neural Networks for Resource Allocation in Telecommunication Networks, A.Christiansen, A. Herschtal, M. Herzberg, A. Kowalczyk, J.Szymanski 11:00 New Q-routing Approaches to Adaptive Traffic Control, L. Hrault, D.Drou, M. Gordon 11:30 Neural Network Aided Soft Decision Decoding of Block Codes with Random Percentage Training, S.W. Mok, M.Z. Wang, K.C. Li 12:00 Lunch Session 6: 13:30 Control of Self-Similar ATM Call Traffic by Reinforcement Learning, J.Carlstrm, E. Nordstrm 14:00 Towards a Hardware Implementation of Reinforcement Learning for Call Admission Control in Networks for Integrated Services, K.Steenhaut, A.Now, M. Fakir, E. Dirkx 14:30 ATM Connection Admission Control using Modular Neural Networks, C.-K.Tham, W.-S. Soh 15:00 Break 15:30 Admission Control in ATM Networks using Fuzzy-ARTMAP, I.Mahadevan, C.S. Raghavendra 16:00 Bandwidth Dimensioning for Data Traffic, T. X Brown 16:30 ATM Traffic Policing using a Classifier System, K.C. Tsui, B. Azvine 15:00 Adjourn ACNN'97 & IWANNT'97 ACCOMMODATION LIST Victoria Market Backpackers / Global Backpackers 238 Victoria Street, North Melbourne, Victoria, 3051, Australia Phone: +61 3 9328 3728 Fax: +61 3 9329 8966 Dorm. 
bed: $14/night Single room: $25/night, shared facilities Double/twin room: $35/night, shared facilities Features: Social environment, bar downstairs, within city centre, 10 min. walk from conference venue. Melbourne YHA Hostels - Queensberry Hill 78 Howard Street, North Melbourne, Victoria, 3051, Australia Phone: +61 3 9329 8599 Fax: +61 3 9326 8427 Dorm. bed: $20/night Single bed: $45/night Twin bed: $58/night Single with bathroom: $55/night Twin with bathroom: $68/night Features: Bistro, within city centre, 10 min. walk from conference venue. College Accommodation University of Melbourne - Ormond College, Queen's College, Trinity College, Newman College Grattan Street, Parkville, Victoria, 3052, Australia Phone/fax: +61 3 9347 9320 Rates: $40-60/night Features: Situated within University grounds (conference venue), budget student accommodation comprises single bed, desk, wardrobe, shared facilities and breakfast included. Elizabeth Tower Motel 792 Elizabeth Street, Melbourne, Victoria, 3000, Australia Phone: +61 3 9347 9211 Fax: +61 3 9347 0396 Single or double: $95/night Features: Airconditioning, minibar, coffee & tea facilities, fully licensed restaurant & bar, free car parking & outdoor swimming pool. Opposite conference venue (Melb. Uni.), 5 min. tram ride to city centre. Royal Parade Irico Hotel 441 Royal Parade, Parkville, Victoria, 3052, Australia Phone: +61 3 9380 9222 Fax: +613 9387 6448 Single: $99.50/night (includes breakfast) Twin share: $113.50/night (includes breakfast) Features: Air conditioning, movies, minibar, 24 hour room service, valet service and coffee & tea making facilities, fully licensed restaurant and bar, swimming pool, room service & car parking. Situated 2.5 km (10 min. tram ride) from city centre, and 10 min. walk from conference venue. The Townhouse Melbourne 701 Swanston Street, Melbourne, Victoria, 3000, Australia Phone: +61 3 9347 7811 Fax: +61 3 9347 8225 Twin: $110/night (suites available) Features: Air conditioning, movies, minibar, coffee & tea facilities, writing desk, restaurant, valet laundry service, guest parking, bar, outdoor swimming pool, 5 min. tram ride to city centre, 5 min. walk to conference venue. Grand Hyatt Hotel 123 Collins Street, Melbourne, Victoria, 3000, Australia Phone: +61 3 9657 1234 Fax: +61 3 9650 3491 Deluxe twin or king: $270/night Regency Club twin or king: $320/night (includes breakfast & morn/afternoon refreshments) Features: Airconditioned, movies, minibar, 24hr room service, coffee & tea facilities, two restaurants & food court, indoor heated swimming pool, gym, tennis courts. Situated within city centre, 10-15 min. tram ride to conference venue. Sheraton Towers Hotel 1 Brown Street, Southbank, Victoria, 3006, Australia Phone: +61 3 9696 3100 Fax: +61 3 9690 5889 Double/twin: $270/night City view: $310/night Deluxe: $360/night - 30 days advanced booking required on all rooms Features: Full buffet breakfast included, airconditioned, king-sized beds, desk, marble bath & shower, minibar, movies, tea & coffee facilities, 24hr room service, 3 restaurants, 2 bars, 1 night club, health club access, indoor heated swimming pool, SPA & sauna. Situated within city centre, 15-20 min. tram ride to conference venue. 
-------------------------------------------------------------------------- REGISTRATION FORM -------------------------------------------------------------------------- International Workshop on Applications of Neural Networks (and other intelligent systems) to Telecommunications (IWANNT*97) Melbourne, Australia June 9-11, 1997 Name: Institution: Mailing Address: Telephone: Fax: E-mail: Make check ($400; $500 after May 1, 1997; $200 students) out to IWANNT*97. Please make sure your name is on the check. Registration includes breaks and proceedings available at the conference. Mail to: Helen Alspector IWANNT Conference Administrator Univ. of Colorado at Col. Springs Dept. of Elec. & Comp. Eng. P.O. Box 7150 Colorado Springs, CO 80933-7150 (719) 262-3351 (719) 262-3589 (fax) neuranet at mail.uccs.edu Site The conference will be held at the University of Melbourne. There are several good hotels within walking distance of the university. More information will be sent to registrants or upon request. From mackay at mrao.cam.ac.uk Wed Apr 9 16:46:00 1997 From: mackay at mrao.cam.ac.uk (David J.C. MacKay) Date: Wed, 9 Apr 97 16:46 BST Subject: Information Theory, Probability and Neural Networks Message-ID: The following *draft* book is available for anonymous ftp. Feedback from the information theory and neural networks communities would be warmly welcomed. ======================================================================== "Information Theory, Probability and Neural Networks" by David J.C. MacKay ------------------------------------------------------------------------- An undergraduate / graduate textbook. This book will feature: * lots of figures and demonstrations. * more than one hundred exercises with worked solutions. * up to date exposition of: . source coding - including arithmetic coding, `bits back' coding . channel coding - including Gallager codes, turbo codes . neural networks - including Gaussian processes . Monte Carlo methods - including Hybrid Monte Carlo, Overrelaxation The current draft (April 9th 1997) is Draft 1.2.3 (308 pages). (Estimated to be about 70% complete.) =================== COMPLETED CHAPTERS =============================== 1. Introduction to Information Theory --------- Data Compression ------------------------------------------- 2. The Source Coding Theorem 3. Data Compression II: Symbol Codes 4. Data Compression III: Stream Codes --------- Noisy Channel Coding --------------------------------------- 5. Communication over a noisy channel 6. The noisy channel coding theorem 7. Error correcting codes & real channels --------- Probabilities ---------------------------------------------- 8. Bayesian Inference 9. Ising Models 10. Variational Methods 11. Monte Carlo methods --------- Neural networks ----------------------------------------------- 12. Introduction to neural networks 13. The single neuron as a classifier 14. Capacity of a single neuron 15. Learning as Inference 16. The Hopfield network 17. From Hopfield networks to Boltzmann machines 18. 
Supervised learning in multilayer networks ==================== INCOMPLETE CHAPTERS ============================== ------- Unsupervised learning ----------------------------------------- Clustering Independent component analysis Helmholtz machines A single neuron as an unsupervised learning element ------- Probability, data modelling and supervised neural networks ---- Laplace's method Graphical models and belief propagation Complexity control and model comparison Gaussian processes ------- Unifying chapters --------------------------------------------- Hash codes: codes for efficient information retrieval `Bits back' source coding Low density parity check codes Turbo codes ======================================================================== downloading instructions: ------------------------------------------------------------------------ The book (1.1Mbytes) can be clicked from this web page in Cambridge, England: http://wol.ra.phy.cam.ac.uk/mackay/itprnn/#book or from this MIRROR in Toronto, Canada: http://www.cs.toronto.edu/~mackay/itprnn/#book If you prefer to use ftp, ftp wol.ra.phy.cam.ac.uk (131.111.48.24) anonymous your name cd pub/mackay/itprnn binary get book.ps2.gz (tree saving two pages to a page version) OR get book.ps.gz (ordinary version) quit gunzip book.* ========================================================================== David J.C. MacKay email: mackay at mrao.cam.ac.uk www: http://wol.ra.phy.cam.ac.uk/mackay/ Cavendish Laboratory, tel: (01223) 339852 fax: 354599 home: 276411 Madingley Road, international code: +44 1223 Cambridge CB3 0HE. U.K. room: 982 Rutherford Building From tommi at cse.ucsc.edu Tue Apr 8 13:48:29 1997 From: tommi at cse.ucsc.edu (Tommi Jaakkola) Date: Tue, 8 Apr 1997 10:48:29 -0700 Subject: Thesis and paper available: variational methods Message-ID: <199704081748.KAA26123@baa.cse.ucsc.edu> The following Ph.D. thesis is available on the web at ftp://psyche.mit.edu/pub/tommi/thesis.ps.gz -------------------------------------------------------------------- Variational Methods for Inference and Estimation in Graphical Models Tommi S. Jaakkola MIT Graphical models enhance the representational power of probability models through qualitative characterization of their properties. This also leads to greater efficiency in terms of the computational algorithms that empower such representations. The increasing complexity of these models, however, quickly renders exact probabilistic calculations infeasible. We propose a principled framework for approximating graphical models based on variational methods. We develop variational techniques from the perspective that unifies and expands their applicability to graphical models. These methods allow the (recursive) computation of upper and lower bounds on the quantities of interest. Such bounds yield considerably more information than mere approximations and provide an inherent error metric for tailoring the approximations individually to the cases considered. These desirable properties, concomitant to the variational methods, are unlikely to arise as a result of other deterministic or stochastic approximations. The thesis consists of the development of this variational methodology for probabilistic inference, Bayesian estimation, and towards efficient diagnostic reasoning in the domain of internal medicine. 
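The flavour of such variational bounds can be shown on a much smaller example than QMR-DT. The sketch below runs naive mean-field on a toy Ising-style network, where a fully factorized approximation yields a lower bound on the log partition function that can be checked against brute-force enumeration. This is only loosely related to the recursive bounds developed in the thesis, and the random couplings and fields are purely illustrative.

import numpy as np
from itertools import product

rng = np.random.default_rng(4)

n = 8
J = rng.normal(scale=0.3, size=(n, n))
J = np.triu(J, 1)
J = J + J.T                                    # symmetric couplings, zero diagonal
h = rng.normal(scale=0.5, size=n)              # external fields

# Naive mean-field fixed point: m_i = tanh(h_i + sum_j J_ij m_j)
m = np.zeros(n)
for _ in range(200):
    m = np.tanh(h + J @ m)

# Mean-field lower bound on log Z: expected "energy" plus entropy of the factorized q
p = (1 + m) / 2
entropy = -np.sum(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
energy = h @ m + 0.5 * m @ J @ m
bound = energy + entropy

# Exact log Z by brute force over the 2^n spin configurations
logZ = np.log(sum(np.exp(h @ s + 0.5 * s @ J @ s)
                  for s in (np.array(c) for c in product([-1, 1], repeat=n))))

print(f"mean-field lower bound: {bound:.4f}   exact log Z: {logZ:.4f}")

Any factorized q gives a valid lower bound here; the point of more refined variational methods, such as those in the thesis, is to tighten such bounds and to make them computable when brute-force enumeration is hopeless.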
================================================================ The following technical report is now available via ftp: ftp://psyche.mit.edu/pub/tommi/varqmr.ps (~400kb) ftp://psyche.mit.edu/pub/tommi/varqmr.ps.Z (~150kb) ftp://psyche.mit.edu/pub/tommi/varqmr.ps.gz (~ 95kb) ------------------------------------------------------------- Variational methods and the QMR-DT database Tommi S. Jaakkola and Michael I. Jordan MIT We describe variational approximation methods for efficient probabilistic reasoning, applying these methods to the problem of diagnostic inference in the QMR-DT database. The QMR-DT database is a large-scale belief network based on statistical and expert knowledge in internal medicine. The size and complexity of this network render exact probabilistic diagnosis infeasible for all but a small set of cases. This has hindered the development of the QMR-DT network as a practical diagnostic tool and has hindered researchers from exploring and critiquing the diagnostic behavior of QMR. In this paper we describe how variational approximation methods can be applied to the QMR network, resulting in fast diagnostic inference. We evaluate the accuracy of our methods on a set of standard diagnostic cases and compare to stochastic sampling methods. MIT Computational Cognitive Science Technical Report 9701. From Dragan.Obradovic at mchp.siemens.de Thu Apr 10 04:15:44 1997 From: Dragan.Obradovic at mchp.siemens.de (Dragan Obradovic) Date: Thu, 10 Apr 1997 10:15:44 +0200 (MET DST) Subject: Book: Information Theory and Neural Networks Message-ID: <199704100815.KAA08415@sava.mchp.siemens.de> Due to the high interest on applications of Information Theory in Neural Networks, we have installed a new WWW home page for the Springer Verlag book: "AN INFORMATION-THEORETIC APPROACH TO NEURAL COMPUTING" ------------------------------------------------------- by Gustavo Deco and Dragan Obradovic at the following address: http://www.siemens.de/research/NeuralNet/BOOK/Welcome.html The book covers, among others, the following topics: - Linear and Non-linear Independent Component Analysis (ICA), and - the statistical theory of supervised learning. The detailed description of the book including the foreword and the table of contents is available on the above WWW home page. From carmesin at schoner.physik.uni-bremen.de Thu Apr 10 06:58:24 1997 From: carmesin at schoner.physik.uni-bremen.de (Hans-Otto Carmesin) Date: Thu, 10 Apr 1997 12:58:24 +0200 Subject: paper on cortical functionality emergence Message-ID: <199704101058.MAA08804@schoner.physik.uni-bremen.de> Dear Connectionists! The following paper is now available via WWW http://schoner.physik.uni-bremen.de/~carmesin/docs/Gordon.ps; also few hardcopies are available. --- Title: Cortical Functionality Emergence: General Theory & Quantitative Results --- by Dr. Hans-Otto Carmesin Institute for Theoretical Physics and Center for Cognition Sciences University Bremen, 28334 Bremen, Germany Fax: 0049 421 218 4869, E-mail: Carmesin at theo.physik.uni-bremen-de WWW: http://schoner.physik.uni-bremen.de/~carmesin/ --- Appeared in: Frank Schweitzer (Ed.): Self-Organization of Complex Structures: From Individual to Collective Dynamics, vol. I, chapt. 18, 215-233, London: Gordon and Breach, 1996. --- Abstract: The human genotype represents at most ten billion binary informations, whereas the human brain contains more than a million times a billion synapses. So a differentiated brain structure is essentially due to self-organization. 
Such self-organization is relevant for areas ranging from medicine to the design of intelligent complex systems. Many brain structures emerge as collective phenomenon of a microscopic neurosynaptic dynamics: a stochastic dynamics mimics the neuronal action potentials, while the synaptic dynamics is modeled by a local coupling dynamics of type Hebb-rule, that is, a synaptic efficiency increases after coincident spiking of pre- and postsynaptic neuron. The microscopic dynamics is transformed to a collective dynamics reminiscent of hydrodynamics. The theory models empirical findings quantitatively: Topology preserving neuronal maps were assumed by Descartes in 1664; their self-organization was suggested by Weiss in 1928; their empirical observation was reported by Marshall in 1941; it is shown that they are neurosynaptically stable due to ubiquitous infinitesimal short range electrical or chemical leakage. In the visual cortex, neuronal stimulus orientation preference emerges; empirically measured orientation patterns are determined by the Poisson equation of electrostatics; this Poisson equation orientation pattern emergence is derived here. Complex cognitive abilities emerge when the basic local synaptic changes are regulated by valuation, emergent valuation, attention, attention focus or combination of subnetworks. Altogether a general theory is presented for the emergence of functionality from synaptic growth in neurobiological systems. The theory provides a transformation to a collective dynamics and is used for quantitative modeling of empirical data. From dario at lamisun9.epfl.ch Thu Apr 10 04:36:28 1997 From: dario at lamisun9.epfl.ch (Dario Floreano) Date: Thu, 10 Apr 97 10:36:28 +0200 Subject: 2 PhD positions/research assistants in neural computation Message-ID: <9704100836.AA14539@lamisun9.epfl.ch> ********************************************************* 2 PhD positions/research assistants in neural computation ********************************************************* available at the Center for Neural Computation (Centre Mantra pour les systemes neuro-mimetiques) Swiss Federal Institute of Technology at Lausanne (Ecole Polytechnique Federale de Lausanne) DI-EPFL, CH-1015, Lausanne, Switzerland We seek outstanding candidates for two PhD positions at the Mantra Center for Neural Computation, an interdisciplinary research unit formally attached to the Department of Computer Science. The positions will open up between April and August 1997. (i) The first position will be in the area of bio-inspired approaches to navigation and path planning. Strategies of neural network learning will be combined with evolutionary approaches and will be implemented on mobile robots. (ii) The second position will be in the field of computational neuroscience and should address the problems of temporal coding with spiking neurons. It will involve simulations studies and mathematical analysis. More informations available on the web page http://diwww.epfl.ch/lami/team/floreano/jobs.html We prefer candidates with a good theoretical/mathematical background who have had a prior exposure to neural networks, computational neuroscience, evolutionary computation, or robotics. Candidates should have the equivalent of a diploma or master degree in computer science, physics, engineering, or neural computation. Successful candidates will be hired as research assistants (75% of a full salary). 
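As purely illustrative background for the second topic (temporal coding with spiking neurons), the following is a minimal leaky integrate-and-fire simulation of the kind such simulation studies typically start from; all constants are invented values and this is not code from the Mantra group.

import numpy as np

rng = np.random.default_rng(5)

# Leaky integrate-and-fire parameters (arbitrary illustrative values)
tau_m   = 10.0     # membrane time constant (ms)
v_rest  = -70.0    # resting potential (mV)
v_thr   = -54.0    # firing threshold (mV)
v_reset = -80.0    # reset potential after a spike (mV)
dt      = 0.1      # time step (ms)
T       = 500.0    # total simulated time (ms)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    i_ext = 17.0 + 3.0 * rng.normal()          # noisy injected current (arbitrary units)
    dv = (-(v - v_rest) + i_ext) / tau_m       # leaky integration (Euler step)
    v += dt * dv
    if v >= v_thr:                             # threshold crossing: spike and reset
        spike_times.append(t)
        v = v_reset

isis = np.diff(spike_times)
print(f"{len(spike_times)} spikes; mean inter-spike interval = {isis.mean():.1f} ms")

Questions of temporal coding start where this sketch stops: what information is carried by the precise spike times and inter-spike intervals, rather than by firing rates alone.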
Interested candidates should send, at this stage, a cv and a short statement of research experience/research interest by email to wulfram.gerstner at di.epfl.ch or dario.floreano at di.epfl.ch with the subject line marked PhD-application ---------------------------------------------------- Dr. Wulfram Gerstner, Assistant Professor Dr. Dario Floreano, Researcher Swiss Federal Institure of Technolgy Center for Neural Computation Mantra-LAMI EPFL, IN-J 1015 Lausanne Tel. +41-21-693 6713 Fax. +41-21-693 5263 http://diwww.epfl.ch/mantra/ http://diwww.epfl.ch/lami/learning/ ---------------------------------------------------- From Wulfram.Gerstner at di.epfl.ch Thu Apr 10 04:16:34 1997 From: Wulfram.Gerstner at di.epfl.ch (Wulfram Gerstner) Date: Thu, 10 Apr 97 10:16:34 +0200 Subject: ICANN97-Lausanne-Oct7-10 Message-ID: <9704100816.AA14405@lamisun9.epfl.ch> --------- ICANN'97 ------ 7th Annual Conference of the European Neural Network Society ENNS I CCC A N N N N '' 999 77777 I C C A A NN N NN N 9 9 7 I C A A N N N N N N 9999 7 I C C AAAAA N NN N NN 9 7 I CCC A A N N N N 9999 7 International Conference on Neural Networks October 8-10 - Lausanne Switzerland Tutorials on Tuesday, October 7 -- The 1997 Latsis Conference -- http://www.epfl.ch/icann97 Email icann97 at epfl.ch Fax +41 21 693-5656 paper submission deadline: May 15, 1997 (6-page full papers) __________________________________________________________ Proceedings will be published by Springer Verlag, Lecture Note Serie, see layout instructions below. Papers will be evaluated on the basis of relevance, originality and clarity. Mathematical models Applications +++++++++++++++++++ ++++++++++++ Learning, Dynamical systems, Optimization, Prediction, Self-organization, Process control, Robotics, Cellular neural nets. Energy and Comm. Networks. Biological models Implementations +++++++++++++++++ +++++++++++++++ Neural codes, Spiking neurons, Bio-inspiration, Cortex modeling, Sensory processing, Sensory-motor areas Hardware accelerators, Analogue VLSI _____________________________________________________________ Conference structure """""""""""""""""""" The program will include plenary talks and 3 or 4 tracks of parallel sessions covering complementary fields of interest. Posters presentations will be complemented by short poster spotlights during oral presentations. Tutorials ^^^^^^^^^ Tutorials will take place on October 7, before the Conference. Y. Abu-Mostafa (USA), P. Refenes (GB) Finance Applications X. Arreguit (CH) Silicon Implementations J.L. van Hemmen (D), A. Kreiter (D) Cortical Oscillations M. Opper (D) Generalization Theories Invited speakers ^^^^^^^^^^^^^^^^ W. Bialek, Princeton, USA, Decoding Spike Trains H. Bourlard, Martigny, CH, Speech recognition S. Grossberg, Boston, USA, Visual Perception H. Markram, Rehovot, Israel, Synaptic Plasticity E. Oja, Espoo, Finland, Independent Comp. Analysis H. Ritter, Bielefeld, D, Robotics T. Roska, Budapest, HU, Cellular Neural Networks R. Sutton, Amherst, USA, Reinforcement Learning V. Vapnik, Holmdel, USA, Support Vector Machines E. Vittoz, Neuchatel, CH, Bioinspired Circuits Special Sessions are planned on ^^^^^^^^^^^^^^^^ Cortical Maps and receptive fields, Temporal Patterns and Brain Dynamics, Time Series Prediction, Financial Modeling, Adaptive Autonomous Agents, Applications in Power/communication networks. 
_____________________________________________________________ Instructions for authors """""""""""""""""""""""" Interested authors should: - Prepare a 6-page paper in English according to the Springer layout instructions (1-column book format, about 2000 char/page only) See http://www.epfl.ch/icann97/authors.html - Classify the paper according to our list of categories and keywords. See http://www.epfl.ch/icann97/cata.html - Fill-in the author submission form. See http://www.epfl.ch/icann97/sub.html - Mail 5 copies of the paper and the form before May 15 (do not include the original if it includes glued pictures or tables) Sorry, we do not accept electronic copies of papers (FTP, Web) All papers will be reviewed and the program committee will meet on June 19-21 for the selection. Authors will be informed by fax before June 25. Corrections and changes will be requested for July 10. Final conference program will be available early July. The principal author of an accepted paper must register before July 30. Student grants are available from Neuronet. Until May 5, the printed set of forms, layout instructions and examples can also be requested by fax or e-mail. Please indicate your postal address. All these documents are available on the Web. _____________________________________________________________ Registration information and fees """"""""""""""""""""""""""""""""" Registration fee includes admission to all sessions, one copy of the proceedings, coffee breaks and 3 lunches, welcome drinks and banquet. before August 30 -- after Regular registration fee 580 CHF -- 640 CHF Student (with lunch, no banquet, no proceedings) 270 CHF -- 330 CHF Tutorial day (October 7) 30 CHF -- 50 CHF Ask for a copy of the forms, for the program booklet, or see on the Web http://www.epfl.ch/icann97/reg.html Participant registration form http://www.epfl.ch/icann97/stu.html Student special condition form http://www.epfl.ch/icann97/hotel.html Hotel reservation form _____________________________________________________________ Conference location and accomodation """""""""""""""""""""""""""""""""""" The conference will be held at the EPFL, Ecublens, 5 km South-West of Lausanne. A tram provides easy access from the hotels. Lausanne is located on the lake of Geneva, with easy access by train and planes. Hotels are in the 50 to 150 CHF range. Reservation is not handled by the conference. Ask Fassbind Hotels, fax +41 21 323 0145 _____________________________________________________________ Organizers ^^^^^^^^^^ General Chairman: Prof Wulfram Gerstner, Mantra-EPFL Co-chairmen: Prof Alain Germond, Martin Hasler, J.D. Nicoud, EPFL Program committee secretariat: Monique Dubois, LAMI-EPFL, tel +41 21 693-6635 Registration secretariat: Andrii Moinat, LRC-EPFL, tel +41 21 693-2661 FAX: +41 21 693 5656 Technical Programm Committee ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Frangois Blayo Lyon . Marie Cottrell Paris F. de Viron Bruxelles . Christian Jutten Grenoble Cedex H. Mallot Berlin . Eddy Mayoraz Martigny K. Pawelzik Frankfurt . Eric Vittoz Neuchatel Advisory Board ^^^^^^^^^^^^^^ Larry Abbott Waltham . Moshe Abeles Jerusalem Sun-ichi Amari Saitama . Michael Arbib Los Angeles William Bialek Princeton . C.M. Bishop Birmigham Joan Cabestany Barcelona . J.P. Changeux Paris Holk Cruse Bielefeld . Emil Fieseler Martigny Frangoise Fogelman Clamart . Stan Gielen Nijmegen Karl Goser Dortmund . Stephen Grossberg Boston Klaus Hepp Zurich . Jeanny Herault Grenoble J.A. Hertz Kobenhaun . Michael Jordan Cambridge Christof Koch Pasadena . T. 
Kohonen Espoo Lennart Ljung Linkoping . Th. Martinetz Bochum Pietro Morasso Genova . A.F. Murray Edinburgh Dagmar Niebur Philadelphia . Erkki Oja Espoo G. Palm Ulm . Vincenzo Piuri Milano Alberto Prieto Granada . U. Ramacher Dresden Helge Ritter Bielfeld . Tamas Roska Budapest Terrence Sejnowski La Jolla . Sara A. Solla Holmdel John G. Taylor London . J.L. van Hemmen Garching V.N. Vapnik Holmdel . Michel Verleysen Louvain-la-Neuve C. von der Malsburg Bochum . W. von Seelen Bochum Chr. Wellekens Valbonne . David Willshaw Edinburgh _____________________________________________________________ From moshe.sipper at di.epfl.ch Thu Apr 10 02:55:32 1997 From: moshe.sipper at di.epfl.ch (Moshe Sipper) Date: Thu, 10 Apr 1997 08:55:32 +0200 Subject: Book Announcement: Evolution of Parallel Cellular Machines Message-ID: <199704100655.IAA05290@lslsun7.epfl.ch> The following book, recently published, may be of interest to subscribers of this list: "Evolution of Parallel Cellular Machines: The Cellular Programming Approach", Moshe Sipper, Springer-Verlag, 1997. Information is available at: http://lslwww.epfl.ch/~moshes/pcm.html Back-cover text: Nature abounds in systems involving the actions of simple, locally-interacting components, that give rise to coordinated global behavior. These collective systems have evolved by means of natural selection to exhibit striking problem-solving capacities, while functioning within a complex, dynamic environment. Employing simple yet versatile parallel cellular models, coupled with evolutionary computation techniques, this volume explores the issue of constructing man-made systems that exhibit characteristics such as those manifest by their natural counterparts. Parallel cellular machines hold potential both scientifically, as vehicles for studying phenomena of interest in areas such as complex adaptive systems and artificial life, as well as practically, enabling the construction of novel systems, endowed with evolutionary, reproductive, regenerative, and learning capabilities. This self-contained volume examines the behavior of such machines, the complex computation they exhibit, and the application of artificial evolution to attain such systems. From A.Sharkey at dcs.shef.ac.uk Fri Apr 11 11:37:58 1997 From: A.Sharkey at dcs.shef.ac.uk (Amanda Sharkey) Date: Fri, 11 Apr 97 11:37:58 BST Subject: Connection Science Message-ID: <9704111037.AA08540@gw.dcs.shef.ac.uk> Announcing: Connection Science Special Issue. 1997, 9,1. Combining Artificial Neural Nets: Modular Approaches. Special Issue Editor: Amanda Sharkey Editorial Board for Special Issue Leo Breiman, University of Berkeley, USA. Nathan Intrator, Tel-Aviv University, Israel. Robert Jacobs, University of Rochester, USA. Michael Jordan, MIT, USA. Paul Munro, University of Pittsburgh, USA. Michael Perrone, IBM, USA. David Wolpert, IBM, USA. Contents: Amanda J.C. Sharkey. Modularity, Combining and Artificial Neural Nets, 3-10 Stephen P. Luttrell. Self-organization of Multiple Winner-take-all Neural Networks. 11-30 Cesare Furlanello, Diego Giuliani, Edmondo Trentin and Stefano Merler. Speaker Normalization and Model Selection of Combined Neural Networks. 31-50. Thierry Catfolis and Kurt Meert. Hybridization and Specialization of Real-time Recurrent Learning-based Neural Networks. 51-70. Lucila Ohno-Machado and Mark A. Musen. Modular Neural Networs for Medical Prognosis: Quantifying the Benefits of Combining Neural Networks for Survival Prediction. 71-86. Guszti Bartfai and Roger White. 
Adaptive Resonance Theory-based Modular Networks for Incremental Learning of Hierarchical Clusterings 87-112. Research Notes: Alex Aussem and Fionn Murtagh. Combining Neural Network Forecasts on Wavelet transformed Time Series. 113-122. Colin McCormack. Adaptation of Learning Rule Parameters Using a Meta Neural Network. 123-136. ------------------------------------------------------------------------- See also Connection Science, 8, 3/4 Combining Artificial Neural Nets: Ensemble Approaches. Amanda J.C. Sharkey. On Combining Artificial Neural Nets. 299-314. Sherif Hashem. Effects of Collinearity on Combining Neural Networks. 315-336. David W. Opitz & Jude W. Shavlik. Actively Searching for an Effective Neural Network Ensemble. 337-354. Yuval Raviv & Nathan Intrator. Bootstrapping with Noise: An Effective Regularization Technique. 355-372. Bruce E. Rosen. Ensemble Learning Using Decorrelated Neural Networks. 373-384. Kagan Tumer & Joydeep Ghosh. Error Correlation and Error Reduction in Ensemble Classifiers. 385-404. Bambang Parmanto, Paul W. Munro & Howard R. Doyle. Reducing Variance of Committee Prediction with Resampling Techniques. 405-426. Peter A. Zhilkin & Ray L. Somorjai. Application of Several methods of Classification Fusion to Magnetic Resonance Spectra. 427-442. From biehl at physik.uni-wuerzburg.de Fri Apr 11 09:34:13 1997 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Fri, 11 Apr 1997 15:34:13 +0200 (MESZ) Subject: paper available: phase transitions in neural networks Message-ID: <199704111334.PAA05919@wptx08.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/1997/WUE-ITP-97-005.ps.gz The following manuscript is now available via anonymous ftp (See below for the retrieval procedure), or, alternatively from http://xxx.lanl.gov/abs/cond-mat/9704098 ---------------------------------------------------------------- Phase Transitions of Neural Networks Wolfgang Kinzel Plenary talk for MINERVA workshop on mesoscopics, fractals and neural networks, Eilat, March 1997 Ref.: WUE-ITP-97-005 Abstract The cooperative behaviour of interacting neurons and synapses is studied using models and methods from statistical physics. The competition between training error and entropy may lead to discontinuous properties of the neural network. This is demonstrated for a few examples: Perceptron, associative memory, learning from examples, generalization, multilayer networks, structure recognition, Bayesian estimate, on-line training, noise estimation and time series generation. --------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1997 ftp> binary ftp> get WUE-ITP-97-005.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-97-005.ps.gz e.g. unix> lp WUE-ITP-97-005.ps [33 pages] (*) can be replaced by "get WUE-ITP-97-005.ps". The file will then be uncompressed before transmission (slow!). _____________________________________________________________________ Prof. Dr. W. 
Kinzel Universit"at W"urzburg Institut f"ur Theoretische Physik Am Hubland D-97074 W"urzburg, Germany From pbolland at lbs.ac.uk Fri Apr 11 11:48:57 1997 From: pbolland at lbs.ac.uk (Peter Bolland) Date: Fri, 11 Apr 1997 15:48:57 UTC Subject: Call For Papers - Neural Networks in the Capital Markets 1997 Message-ID: <25667C95B0D@deimos.lbs.ac.uk> ANNOUNCEMENT AND CALL FOR PAPERS _________________________________________________________ COMPUTATIONAL FINANCE 1997 _________________________________________________________ The Fifth International Conference on NEURAL NETWORKS IN THE CAPITAL MARKETS Monday-Wednesday, December 15-17, 1997 London Business School, London, England. After four years of continuous success and evolution, NNCM has emerged as a truly multi-disciplinary international conference. Born out of neurotechnology, NNCM now provides an international focus for innovative research on the application of a multiplicity of advanced decision technologies to many areas of financial engineering. It draws upon theoretical advances in financial economics and robust methodological developments in the statistical, econometric and computer sciences. The fifth NNCM conference will be held in London December 15-17 1997 under the new title COMPUTATIONAL FINANCE 1997 to reflect its multi-disciplinary nature. COMPUTATIONAL FINANCE 1997 is a research meeting where original, high-quality contributions are presented and discussed. In addition, a day of introductory tutorials (Monday, December 15) will be included to familiarise participants of different backgrounds with the financial, and methodological aspects of the field. COMPUTATIONAL FINANCE 1997 invites research papers representing new and significant developments in methodology as well as applications of practical use and value in finance. In-depth analysis and comparison with established approaches is encouraged. Areas of interest include, but are not limited to: ______________________________________________ Methodologies ______________________________________________ Neural networks & Machine learning Fuzzy Logic & Expert systems Genetic algorithms & multi-criteria Optimisation Non-parametric statistics & Econometrics Non-linear time series & Cross-sectional analysis Adaptive/Kalman filtering techniques Hybrid models Model identification, selection and specification Hypothesis testing and confidence intervals Parameter sensitivity and prediction uncertainty Robust model estimation Stochastic Analysis, Monte Carlo ______________________________________________ Applications areas ______________________________________________ Portfolio management / asset allocation Derivative & term structure models Models for equity investment Bond and stock valuation and trading Currency models, forecasting & hedging Trading strategies Hedging and Arbitrage Strategies Cointegration Modelling & hedging correlation & volatility Portfolio replication: simulation & optimisation Retail finance Corporate distress & risk models Submission of Papers: Authors who wish to present a paper should mail three copies of their extended abstract (4 pages, single-sided, single-spaced) typed on A4 (or US 8.5" by 11") paper to the secretariat no later than June 28, 1997. Submissions will be refereed rigorously and authors will be notified on acceptance by 20 Sept. 1997. Location: The conference will be held at London Business School which is situated near Regent's Park, London and is a short walk from Baker Street Underground Station. 
Further directions including a map will be sent to all registries. Registration and Mailing List: if you wish to be added to the mailing list or register for COMPUTATIONAL FINANCE 1997, please send your postal address, e-mail address, and fax number to the secretariat. _________________________________________________ Programme Committee _________________________________________________ Dr A. Refenes, London Business School (Chairman) Dr Y. Abu-Mostafa, Caltech Dr A. Atiya, Cairo University Dr N. Biggs, London School of Economics Dr D. Bunn, London Business School Dr M. Jabri, Sydney University Dr B. LeBaron, University of Wisconsin Dr A. Lo, MIT Sloan School Dr J. Moody, Oregon Graduate Institute Dr C. Pedreira, Catholic University, PUC-Rio Dr M. Steiner, Augsburg Universitaet Dr A. Timermann, UCSD Dr A. Weigend, New York University Dr H. White, UCSD Dr L. Xu, Chinese University, Hong Kong Secretariat: Please submit your papers and further inquiries to the secretariat at the address below: Ms Busola Oguntula, London Business School, Sussex Place, Regent's Park, London NW1 4SA, UK. E-mail: boguntula at lbs.ac.uk. Phone (+44) (0171)-262 50 50, Fax (+44) (0171) 724 78 75. __________________________________________________ WEB PAGE __________________________________________________ For more information on COMPUTATIONAL FINANCE 1997, please visit the NNCM home page at London Business School, http://www.lbs.lon.ac.uk/desci/nncmhome.html For information on previous conference and the program of previous NNCM conferences, please visit the NNCM homepages; London Business School http://www.lbs.lon.ac.uk/desci/nncmhome.html Caltech http://www.cs.caltech.edu/~learn/nncm.html __________________________________________________ From Tom_Mitchell at daylily.learning.cs.cmu.edu Sun Apr 13 15:44:05 1997 From: Tom_Mitchell at daylily.learning.cs.cmu.edu (Tom Mitchell) Date: Sun, 13 Apr 1997 15:44:05 -0400 Subject: new Machine Learning book Message-ID: NEW COMPREHENSIVE TEXTBOOK: Machine Learning, Tom Mitchell, McGraw Hill McGraw Hill announces immediate availability of MACHINE LEARNING, a new textbook that provides a thorough, multi-disciplinary introduction to computer algorithms for automated learning. The chapter outline is: 1. Introduction 2. Concept Learning and the General-to-Specific Ordering 3. Decision Tree Learning 4. Artificial Neural Networks 5. Evaluating Hypotheses 6. Bayesian Learning 7. Computational Learning Theory 8. Instance-Based Learning 9. Genetic Algorithms 10. Learning Sets of Rules 11. Analytical Learning 12. Combining Inductive and Analytical Learning 13. Reinforcement Learning (414 pages) This book is intended for upper-level undergraduates, graduate students, and professionals working in the area of neural networks, machine learning, datamining, and statistics. It includes over a hundred homework exercises, along with web-accessible code and datasets (e.g., neural networks applied to face recognition, Bayesian learning applied to text classification). 
For further information and ordering instructions, see http://www.cs.cmu.edu/~tom/mlbook.html From amari at zoo.riken.go.jp Sun Apr 13 23:52:07 1997 From: amari at zoo.riken.go.jp (Shunichi Amari) Date: Mon, 14 Apr 1997 12:52:07 +0900 Subject: new papers (Natural gradient learning, Blind source separation, etc) Message-ID: <9704140352.AA28110@zoo.riken.go.jp> The following three papers are now available from my home page: http://www.bip.riken.go.jp/irl/amari/amari.html There are many other recent papers to be published on the same home page. There are some other joint papers on the home page of Prof. Cichocki. I am very bad at maintaining my home page, and I have renewed it. ********************** 1. Natural Gradient Works Efficiently in Learning ------submitted to Neural Computation for possible publication abstract When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation) and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient on-line learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon which appears in the backpropagation learning algorithm of multilayer perceptrons might disappear, or might not be so serious, when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed. ********************** title 2. STABILITY ANALYSIS OF ADAPTIVE BLIND SOURCE SEPARATION -------accepted for publication in Neural Networks abstract Recently a number of adaptive learning algorithms have been proposed for blind source separation. Although the underlying principles and approaches are different, most of them have very similar forms. Two important issues have remained to be elucidated further: the statistical efficiency and the stability of learning algorithms. The present letter analyzes a general form of statistically efficient algorithm and gives a necessary and sufficient condition for the separating solution to be a stable equilibrium of a general learning algorithm. Moreover, when the separating solution is unstable, a simple method is given for stabilizing the separating solution by modifying the algorithm. ************************* title 3. Superefficiency in Blind Source Separation ----------submitted to IEEE Tr. on Signal Processing abstract Blind source separation extracts independent component signals from their mixtures without knowing the mixing coefficients or the probability distributions of the source signals. It is known that some algorithms work surprisingly well. The present paper elucidates the superefficiency of such algorithms on the basis of statistical analysis. It is in general known from asymptotic statistical theory that the covariance of any two extracted independent signals converges to $0$ in the order of $1/t$ in the case of statistical estimation using $t$ examples. In the case of on-line learning, the theory of on-line dynamics shows that the covariances converge to $0$ in the order of $\eta$ when the learning rate $\eta$ is fixed to be a small constant. 
In contrast with the above general properties, the surprising superefficiency holds in blind source separation under a certain conditions. The superefficiency implies that the covariance decreases in the order of $1/t^2$ or of $\eta ^2$. The present paper uses the natural gradient learning algorithm and the method of estimating functions to obtain the superefficient procedures for both estimation and on-line learning. The superefficiency does not imply that the error variances of the extracted signals decrease in the order of $1/t^2$ or $\eta ^2$, but implies that their covariances do. From elman at crl.ucsd.edu Sat Apr 12 23:10:25 1997 From: elman at crl.ucsd.edu (Jeff Elman) Date: Sat, 12 Apr 1997 20:10:25 -0700 (PDT) Subject: new book announcement: Exercises in Rethinking Innateness Message-ID: <199704130310.UAA09634@crl.UCSD.EDU> EXERCISES IN RETHINKING INNATENESS A Handbook for Connectionist Simulations by Kim Plunkett and Jeffrey L. Elman This book is the companion volume to Rethinking Innateness: A Connectionist Perspective on Development (The MIT Press, 1996), which proposed a new theoretical framework to answer the question "What does it mean to say that a behavior is innate?" The new work provides concrete illustrations--in the form of computer simulations--of properties of connectionist models that are particularly relevant to cognitive development. This enables the reader to pursue in depth some of the practical and empirical issues raised in the first book. The authors' larger goal is to demonstrate the usefulness of neural network modeling as a research methodology. The book comes with a complete software package, including demonstration projects, for running neural network simulations on both Macintosh and Windows 95. It also contains a series of exercises in the use of the neural network simulator provided with the book. The software is also available to run on a variety of UNIX platforms. Neural Network Modeling and Connectionism series MIT Press/Bradford Books May 1997 ISBN 0-262-66105-5 254 pp. $40.00 (paper) MIT Press WWW page, with ordering information: http://mitpress.mit.edu:8000/mitp/recent-books/cog/pluep.html From john at dcs.rhbnc.ac.uk Mon Apr 14 10:24:48 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 14 Apr 97 15:24:48 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199704141424.PAA02779@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. The following technical report has been updated to include information about the system described in NC-TR-97-038: ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-028: ---------------------------------------- Overview of Learning Systems produced by NeuroCOLT Partners by NeuroCOLT Partners Abstract: This NeuroCOLT Technical Report documents a number of systems that have been produced withing the NeuroCOLT partnership. It only includes a summary of each system together with pointers to where the system is located and more information about its performance and design can be found. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-038: ---------------------------------------- Using Computational Learning Strategies as a Tool for Combinatorial Optimization by Andreas Birkendorf and Hans Ulrich Simon, Universität Dortmund, Germany Abstract: In this paper, we describe how a basic strategy from computational learning theory can be used to attack a class of NP-hard combinatorial optimization problems. It turns out that the learning strategy can be used as an iterative booster: given a solution to the combinatorial problem, we start an efficient simulation of a learning algorithm which has a ``good chance'' of outputting an improved solution. This boosting technique is a new and surprisingly simple application of an existing learning strategy. It yields a novel heuristic approach to attacking NP-hard optimization problems. It does not apply to every combinatorial problem, but we are able to formalize exactly some sufficient conditions. The new technique applies, for instance, to the problem of minimizing a deterministic finite automaton relative to a given domain, to the analogous problem for ordered binary decision diagrams, and to graph colouring. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-039: ---------------------------------------- A Unifying Framework for Invariant Pattern Recognition by Jeffrey Wood and John Shawe-Taylor, Royal Holloway, University of London, UK Abstract: We introduce a group-theoretic model of invariant pattern recognition, the {\em Group Representation Network}. We show that many standard invariance techniques can be viewed as GRNs, including the DFT power spectrum, the higher-order neural network and the fast translation-invariant transform. -------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-97-001 can be accessed and printed as follows % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-97-001.ps.Z ftp> bye % zcat nc-tr-97-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-97-002-title.ps.Z nc-tr-97-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-97-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt or directly to the archive: ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports Best wishes John Shawe-Taylor From movellan at ergo.ucsd.edu Mon Apr 14 17:08:45 1997 From: movellan at ergo.ucsd.edu (Javier R. Movellan) Date: Mon, 14 Apr 1997 14:08:45 -0700 Subject: TR announcement Message-ID: <199704142108.OAA13893@ergo.ucsd.edu> The following technical report is available online at http://cogsci.ucsd.edu (follow links to Tech Reports & Software ) Physical copies are also available (see the site for information). A Learning Theorem for Networks at Detailed Stochastic Equilibrium. Javier R. 
Movellan Department of Cognitive Science University of California San Diego The paper studies a stochastic extension of continuous recurrent neural networks and analyzes gradient descent learning rules to train their equilibrium solutions. A theorem is given that specifies sufficient conditions for the gradient descent learning rules to be local covariance statistics between two random variables: 1) an evaluator which is the same for all the network parameters, and 2) a system variable which is independent of the learning objective. The generality of the theorem suggests that instead of suppressing noise present in physical devices, a natural alternative is to use it to simplify the credit assignment problem. In deterministic networks, credit assignment requires an evaluation signal which is different for each node in the network. Surprisingly, when noise is not suppressed, all that is needed is an evaluator which is the same for the entire network, and a local Hebbian signal. This modularization of signals greatly simplifies hardware and software implementations. The paper shows how the theorem applies to four different learning objectives which span supervised, reinforcement and unsupervised problems: 1) regression, 2) density estimation, 3) risk minimization, 4) information maximization. Simulations, implementation issues and implications for computational neuroscience are discussed. From ajay_jain at metaxen.com Mon Apr 14 22:22:18 1997 From: ajay_jain at metaxen.com (Ajay N. Jain) Date: Mon, 14 Apr 97 18:22:18 -0800 Subject: Position Available Message-ID: <199704150122.SAA02230@InterJet.metaxen.com> [PLEASE FORWARD TO APPROPRIATE MAILING LISTS] Position: Scientist/Senior Scientist, Computational Sciences Requirements: PhD in Computer Science Company: MetaXen LLC (Palo Alto, CA) Hiring Manager: Ajay N. Jain MetaXen is a start-up biopharmaceutical company based in the San Francisco Bay Area. We emphasize an integrated parallel approach to drug discovery that seeks to optimize specific binding of small molecules to proteins simultaneously with pharmacological parameters such as oral absorption. We combine state-of-the-art computational technology for structure-based drug design with medicinal chemistry, molecular biology, X-ray crystallography, biochemistry, molecular pharmacology, and related fields in a stimulating multidisciplinary environment. Our current therapeutic research areas include cancer and thrombotic disease. We are looking for a computer scientist to join our growing group. The ideal candidate will have demonstrated success in applying sophisticated computation to real-world problems (e.g. drug discovery, object recognition, robotics, etc.). Experience in machine learning/neural networks, computational geometry, or physical modeling would be considered beneficial, as would formal training in chemistry, biology, or physics. The duties of the position are to develop, implement, and apply novel algorithms for structure-based drug design (e.g. molecular docking, 3D structure-activity prediction, etc.), genomic data analysis, and prediction of in vivo pharmacological parameters from in vitro pharmacology data as well as molecular structure. Qualified applicants should send a CV and cover letter to the address below. Ajay -------------------------------------------------------------------------- Dr. Ajay N. 
Jain: Principal Scientist/Group Leader, Computational Sciences Tel (415) 858-4942 MetaXen LLC Fax (415) 858-4931 3181 Porter Dr Email: ajay_jain at metaxen.com Palo Alto, CA 94304 -------------------------------------------------------------------------- From tho at nucleus.hut.fi Tue Apr 15 07:00:52 1997 From: tho at nucleus.hut.fi (Timo Honkela) Date: Tue, 15 Apr 1997 14:00:52 +0300 (EET DST) Subject: WSOM'97 Call For Participation Message-ID: ================== CALL FOR PARTICIPATION ======================= W O R K S H O P O N S E L F - O R G A N I Z I N G M A P S Helsinki University of Technology, Finland June 4-6, 1997 ------------------------- WWW: -------------------------- http://nucleus.hut.fi/wsom97/ ----------------------------------------------------------------- WSOM'97 is the first international meeting to be entirely dedicated to the theory and applications of the Self-Organizing Map (SOM). The SOM is a new powerful software tool for hard real-world problems listed below. Highlights of the program: - TUTORIAL SHORT COURSE: prof. Teuvo Kohonen - PLENARIES: prof. Helge Ritter: Learning with the parameterized self-organizing map prof. Karl Goser: Self-organizing map for intelligent process control prof. Marie Cottrell: Theoretical aspects of the SOM algorithm - INVITED: prof. Risto Miikkulainen: SOM research in the USA prof. Heizo Tokutaka: Condensed review of SOM and LVQ research in Japan - SESSIONS: Pattern recognition and Optimization signal processing Monitoring and data mining Financial analysis Temporal sequence processing Image analysis and vision Theory and extensions Probabilistic interpretations Text and document maps Hardware - PANEL: Document search - SOM CLINIC: Practical advice is given during the breaks - BANQUET: Dinner speech by Robert Hecht-Nielsen: A neural network saga - DEMONSTRATIONS WSOM'97 is a unique occasion for anyone who - is looking for efficient means for analyzing and visualizing complex real-world data, or - wishes to gain new insight into the newest applications of the SOM. REGISTRATION General registration fee is FIM 1200 for the workshop and FIM 700 for the tutorial. Reduced registration fees are available for students as well as for early registration before May 1. Please see http://nucleus.hut.fi/wsom97/ for the program. On the page there is also information on the registration, accomodation, travel, and other practical arrangements. For further information, please contact wsom97 at nucleus.hut.fi CO-OPERATING SOCIETIES WSOM'97 is a satellite workshop of the 10th Scandinavian Conference on Image Analysis (SCIA) to be held on June 9 to 11 in Lappeenranta, Finland, arranged by the Pattern Recognition Society of Finland. Other co-operating societies are the European Neural Network Society (ENNS), IEEE Finland Section, and the Finnish Artificial Intelligence Society. Teuvo Kohonen, WSOM'97 Chairman Erkki Oja, Program Chairman Olli Simula, Organization Chairman From E.Heit at csv.warwick.ac.uk Tue Apr 15 07:28:14 1997 From: E.Heit at csv.warwick.ac.uk (Evan Heit) Date: Tue, 15 Apr 1997 12:28:14 +0100 Subject: PhD Studentship In Cognitive Science (UK) Message-ID: Department of Psychology University of Warwick Coventry, United Kingdom The growing Cognitive Science group at Warwick has available a three-year BBSRC Special Studentship, to commence October 1997, leading to a PhD in Psychology. The general topic is applications of neural network modelling to experimental psychology results in learning, categorisation, and memory. 
Applicants should have or expect a first degree or MSc in a field such as Psychology, Cognitive Science, or Computer Science, and some experience in experimental psychology or computer modelling (but not necessarily both). The Research Committee Special Studentship is intended primarily for UK residents, and it attracts a higher level of stipend than other Research Council studentships. Further details are available at http://www.warwick.ac.uk/~pssak/st.html . Apply by sending a CV, names of two referees, and a statement of research interests and experience to Dr E Heit, Department of Psychology, University of Warwick, Coventry CV4 7AL, or E.Heit at warwick.ac.uk . Please direct informal enquiries to these same addresses. Closing date 9 May 1997. ------------------------------------------------------------------------- Evan Heit Email: E.Heit at warwick.ac.uk Department of Psychology Office: +44|0 1203 523183 University of Warwick Fax: +44|0 1203 524225 Coventry CV4 7AL, United Kingdom http://www.warwick.ac.uk/~pssak From philh at cogs.susx.ac.uk Tue Apr 15 12:05:13 1997 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Tue, 15 Apr 1997 17:05:13 +0100 (BST) Subject: Lectureship at Sussex University Message-ID: -------------------------------------------------------------- University of Sussex SCHOOL OF COGNITIVE AND COMPUTING SCIENCES Computer Science and Artificial Intelligence - 2 lectureships Applications are invited for two Lectureships in the Computer Science and Artificial Intelligence Subject Group with an expected start date of October 1997. Candidates for the first lectureship should be able to show evidence of significant research achievement in any aspect of the Foundations of Computation. Candidates for the second lectureship should be undertaking innovatory research in some aspect of Vision, Neural Computation or Evolutionary and Adaptive Systems. All candidates should be willing to teach in areas other than their research speciality. The appointments are planned on the Lecturer A scale, for which salaries run from 15,593 to 20,424 p.a. (under negotiation), though for exceptional candidates an appointment on the Lecturer B scale may be considered (21,277 to 27,196, under negotiation). The posts can be discussed informally with Dr Hilary Buxton, hilaryb at cogs.susx.ac.uk, tel. 01273 678569. Details of the School are available at http://www.cogs.susx.ac.uk/ Application forms and further particulars are available from and should be returned to Sandra Jenks, Staffing Services, University of Sussex, Falmer, Brighton, East Sussex, BN1 9RH. Tel: (01273) 606755, ext 3768. Email S.Jenks at sussex.ac.uk. Closing date: Monday 21st April 1997. -------------------------------------------------------------- From juergen at idsia.ch Wed Apr 16 03:27:54 1997 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 16 Apr 1997 09:27:54 +0200 Subject: Job openings Message-ID: <199704160727.JAA13718@ruebe.idsia.ch> 1 POSTDOC POSITION 1 PHD STUDENT POSITION RECURRENT NEURAL NETS The Swiss machine learning research institute IDSIA offers 2-year positions for one postdoc and one PhD student. Both will be funded by a research grant on recurrent neural networks. Intended application areas include speech recognition and music composition. The grant's initial focus will be on "Long Short-Term Memory" (LSTM) - see reference below. 
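For readers who have not seen the reference below: in simplified notation, the heart of LSTM is a memory cell whose internal state s_c is updated additively and read out through multiplicative gates, roughly s_c(t) = s_c(t-1) + y_in(t) * g(net_c(t)) and y_c(t) = y_out(t) * h(s_c(t)), where y_in and y_out are the input- and output-gate activations and g, h are squashing functions; the additive state update is what lets error signals persist over long time lags. (This is a simplified paraphrase; the cited paper defines the gates and net inputs precisely.)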
Ideal candidates should fully understand the LSTM paper, have strong mathematical and programming skills, outstanding research potential, experience with neural nets, excellent ability to communicate research results, and be willing to build on previous work. BEGIN: around October 1997. SWITZERLAND tends to be nice to scientists. It boasts the highest supercomputing capacity pc (per capita), the most Nobel prizes pc (4-5 times the US value), the highest GNP pc, and the best chocolate. IDSIA's research focuses on artificial neural nets, reinforcement learning in partially observable environments, MDL, complexity and generalization issues, unsupervised learning and information theory, forecasting, combinatorial optimization, evolutionary computation, and metalearning. IDSIA is small but visible, competitive, and influential. It employs about a dozen active researchers and supervises students at several European universities. Its 1997 scientific output so far includes 9 journal publications (published or in press). LOCATION: beautiful Lugano, the capital of Ticino, the scenic southernmost province of Switzerland (pictures on my home page). Milano, Italy's center of fashion and finance, is 1 hour away, Venice 3 hours. CSCS, the Ticino supercomputing center, is nearby - we have a direct connection. SALARY: commensurate with experience. Postdoc: SFR 60-70K/year (US$ 41-48K). PhD student: SFR 25-30K. Subtract ~20% for taxes & social security. There is travel funding in case of papers accepted at important conferences. INSTRUCTIONS: please send HARDCOPIES (no email!) of CV, list of publications, cover letter, and your 3 best papers (if applicable) to Juergen Schmidhuber, IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland. Also send a brief email message listing email addresses of three references to juergen at idsia.ch. Please use your full name as subject header. EXAMPLE: subject: John_Smith DEADLINE: MAY 31 1997. Earlier applications preferred. RECURRENT NET REFS (more in http://www.idsia.ch/~juergen/onlinepub.html) -S.Hochreiter & JS: Long Short-Term Memory. Neural Comp. (accepted 1997) See ftp://ftp.idsia.ch/pub/juergen/lstm.ps.gz -JS: A fixed size storage O(n^3) time complexity learning algorithm for fully recurrent continually running nets. Neural Comp.4(2):243-248,1992. -JS: Learning complex, extended sequences using the principle of history compression. Neural Comp.4(2):234-242,1992. -JS: Learning to control fast-weight memories: An alternative to recurrent nets. Neural Comp.4(1):131-139,1992. Juergen Schmidhuber, research director, IDSIA juergen at idsia.ch http://www.idsia.ch/~juergen From kpfleger at cs.stanford.edu Wed Apr 16 05:49:49 1997 From: kpfleger at cs.stanford.edu (Karl Pfleger) Date: Wed, 16 Apr 1997 02:49:49 -0700 (PDT) Subject: NIPS*97 workshop on NNs related to graphical models? Message-ID: <199704160949.CAA10184@HPP.Stanford.EDU> Would you like to see a NIPS*97 workshop on "Neural Network Models Related to Graphical Probabilistic Models"? Michael Jordan recently sent out the NIPS*97 Call for Post Conference Workshop Proposals. I don't feel qualified to organize and run a workshop on this topic myself, but I believe it is a topic of great interest to many people. Thus, I'm suggesting it here in the hopes that someone who is qualified might be inspired to organize such a workshop and draw up a proposal. 
If you would be interested only in attending or participating in such a workshop, send me a quick note anyway (or even an abstract of a potential submission) and I'll collect the responses to pass on to anyone who does volunteer to organize. Evidently, Mike Jordan and David Heckerman held a similar and extremely popular workshop 2 years ago at NIPS. However, opinions from a number of people, including Jordan, suggest that there is probably still sufficient interest for another such workshop. ---------------------------------------------------------------------------- NIPS Workshop idea: Neural Network Models Related to Graphical Probabilistic Models Graphical probabilistic models are currently a hot topic, especially Bayesian/Belief Networks, particularly in the AI communities. But there are neural network models that are intimately related to some of these models, and can be used in similar ways. For example, Boltzmann machines are essentially neural-net versions of Markov Networks, with properties closely related to probabilistically explicit Markov Networks and to Bayes nets. Extended example--Some parallels between BMs and BNs: - Both represent a joint prob. distribution over a set of random variables. - In both, network structure represents conditional independence assumptions amongst the variables, whether by d-separation or locally Markov properties. Based on Pearl (1988), neither is uniformly superior. - In both, exact prob. inference is possible but intractable, in general. - In both one can do Monte Carlo approx. inference. In both one can use Gibbs sampling, and this is exactly what BM settling corresponds to. - Both can be used to learn a joint distribution over visible units. - Both can use hidden variables. - In both you can do search over network structure to learn that as well, though this is almost completely unexplored for BMs. - Both can represent any joint distribution (the BM may need hidden units). - There are even mechanisms for converting from one to the other. There isn't much literature that draws this parallel clearly. Neal (1991a) comes closest. Probabilistically explicit Markov Network models are used in vision and graphics, and also in physics communities, and share the above properties. There are certainly many other such parallels. Particularly salient are those involving variations on BMs and BNs, as investigated by people like Radford Neal. Integrative views like this help increase everyone's understanding. (Obviously, the workshop is not meant to be a battleground for the formalisms!) There are also plenty of research issues here and plenty of suggestions that arise from understanding the parallels. 
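Since the parallels above lean on the equivalence between Boltzmann-machine settling and Gibbs sampling, here is a minimal illustrative sketch of that sampler for a small binary network (the weights, biases, and sweep count are invented for the example; this is not code from the proposal):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(W, b, n_sweeps=500, seed=0):
    """Repeated Gibbs sweeps over a binary Boltzmann machine.
    W: symmetric weight matrix with zero diagonal; b: bias vector."""
    rng = np.random.default_rng(seed)
    n = len(b)
    s = rng.integers(0, 2, size=n).astype(float)   # random initial state
    samples = []
    for _ in range(n_sweeps):
        for i in range(n):                         # resample one unit at a time
            p_on = sigmoid(W[i] @ s + b[i])        # P(s_i = 1 | all other units)
            s[i] = float(rng.random() < p_on)
        samples.append(s.copy())
    return np.array(samples)

# Toy example: two mutually excitatory units tend to be on together.
W = np.array([[0.0, 2.0],
              [2.0, 0.0]])
b = np.array([-1.0, -1.0])
print(gibbs_sample(W, b).mean(axis=0))             # long-run marginals

The long-run state frequencies produced this way are exactly the equilibrium statistics that both the Boltzmann-machine and the belief-network views of such a model are concerned with.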
---------------------------------------------------------------------------- -Karl ---------------------------------------------------------------------------- Karl Pfleger kpfleger at cs.stanford.edu http://www.stanford.edu/~kpfleger/ ---------------------------------------------------------------------------- From weaveraj at helios.aston.ac.uk Wed Apr 16 10:37:21 1997 From: weaveraj at helios.aston.ac.uk (Andrew Weaver) Date: Wed, 16 Apr 1997 15:37:21 +0100 Subject: Postdoctoral Research Fellowship Message-ID: <15738.199704161437@sun.aston.ac.uk> Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK POSTDOCTORAL RESEARCH FELLOWSHIP -------------------------------- Learning Fixed Example Sets in Multilayer Neural Networks --------------------------------------------------------- *** Full details at http://www.ncrg.aston.ac.uk/ *** The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 2 year postdoctoral research position in the area of `Learning Fixed Example Sets in Multilayer Neural Networks'. The emphasis of the research will be on applying a theoretically well-founded approach based on methods adopted from statistical mechanics to analyse learning from fixed example sets in multilayer networks. Potential candidates should have strong mathematical and computational skills, with a background in statistical mechanics and neural networks. Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 16,450 UK pounds. The salary scale is subject to annual increments. How to Apply ------------ If you wish to be considered for this Fellowship, please send a full CV and publications list, including full details and grades of academic qualifications, together with the names of 3 referees, to: Dr. David Saad Neural Computing Research Group Dept. of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 333 4631 Fax: 0121 333 6215 e-mail: D.Saad at aston.ac.uk e-mail submission of postscript files is welcome. Closing date: 30 May, 1997. From robtag at dia.unisa.it Thu Apr 17 06:02:37 1997 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Thu, 17 Apr 1997 12:02:37 +0200 Subject: WIRN Vietri 97 Preliminary Program Message-ID: <9704171002.AA20706@udsab.dia.unisa.it> WIRN VIETRI `97 IX ITALIAN WORKSHOP ON NEURAL NETS IIASS "Eduardo R. Caianiello", Vietri sul Mare (SA) ITALY 22 - 24 May 1997 PRELIMINARY PROGRAM Thursday 22 May 9:30 - Artificial Neural Networks in Pharmacology and Therapeutics M. Eandi & M. Costa (Review Talk) Applications 10:30 - Are Hybrid Fuzzy-Neural Systems Actually Useful in plasma Engineering? F.C. Morabito & M. Campolo 10:50 - The GTM as Predictor of Risk in Pregnancy B. Rosario, D.R. Lovell, M. Naranjan, R.W. Prager & K.J. Dalton 11:10 - An Application of the Bootstrap 632+ Rule to Ecological data C. Furlanello, S. Merler, C. Chemini & A. Rizzoli 11:30 - Coffee Break Mathematical models 12:00 - What size Needs Testing? B. Apolloni 12:20 - Sequences of Discrete Hopfield Networks for the Maximum Clique Problem G. Grossi 12:40 - Entropy Based Comparison of Neural Networks for Classification S. Draghici & V. Beiu 13:00 - Energy Functional and Fixed Points of a Neural Network L.B. 
Litinsky 13:20 - Lunch 15:00 - High light spotting of the posters 16:30 - Poster Session 18:00 - Tavola rotonda "I fondamenti delle reti neurali dopo 10 anni dal loro revival" Friday 23 May 9:30 - Title to be announced C.M. Bishop (Invited Talk) Pattern Recognition & Signal Processing 10:30 - Speeding up Neural Network Execution: an Application to Speech Recognition D. Albesano, F. Mana & R. Gemello 10:50 - Word Recognition by MLP-Based Character Spotting and Dynamic Programming F. Camastra, E. Cepollina & A.M. Colla 11:10 - Periodicity Analysis of Unevenly Spaced Data by means of Neural Networks M. Rasile, L. Milano, R. Tagliaferri & G. Longo 11:30 - Coffee Break 12:00 - Image Reconstruction Using a Hierarchical RBF Network Architecture N.A. Borghese, G. Ferrigno & S. Ferrari 12:20 - Fuzzy Neural Networks for Pattern Recognition A. Blonda & A. Petrosino (Review Talk) 13:20 - Lunch 16:00 - Eduardo R. Caianiello Lecture: - (The winner of the 1997 E.R. Caianiello Fellowship Award) 17:00 - Annual S.I.R.E.N. Meeting 20:00 - Conference Dinner Saturday 24 May 9:30 - Extracting Useful Information from Recurrent Neural Networks: Applications to Financial Time Series. C. Lee Giles & S. Lawrence (Invited Talk) Architectures and Algorithms 10:30 - Computational Maps for Articulatory Speech Synthesis V. Sanguineti & P. Morasso 10:50 - EM Algorithm: A Neural Network View A. Budillon & F. Palmieri 11:10 - Discriminative Least Squares Learning for Fast Adaptive Neural Equalization R. Parisi, E.D. Di Claudio & G. Orlandi 11:30 - Coffee Break 12:00 - Training Analog VLSI Multi Layer Perceptron Networks with Continuous Time Back Propagation G.M. Bo, D.D. Caviglia, H. Chible' & M. Valle 12:20 - A Unifying View of Gradient Calculation and Learning for Locally Recurrent Neural Networks P. Campolucci, A. Uncini & F. Piazza (Review Talk) POSTER SESSION - Hidden Recursive Models P. Frasconi, M. Gori & A. Sperduti - Attractor Neural Networks as Models of Semantic Memory E. Pessa & M. Pietronilla Penna - Plastic Tabu Search for Training Multilayer Perceptrons M. Battisti & P. Burrascano - Cluster Connections: a Visualization Technique to Reveal Cluster Boundaries in Self-Organizing Maps D. Merkl & A. Rauber - Geometrical Constructive Algorithms of Binary Neural Networks G. Martinelli, F.M. Frattale Mascioli & V. Catini - Classifying Magnetic Resonance Spectra of Brain Neoplasms Using Fuzzy and Robust Gold Standard Adjustments N. Pizzi - Application of Fuzzy Neural Networks on Financial Problems M. Rast - On the Cognitive Behaviour of a Multi-Layer Perceptron in Forecasting Meteorological Visibility A. Pasini, V. Pelino & S. Potesta' - Una Soluzione Neurale per la Navigazione di un Robot Mobile in Ambienti Non Strutturati e Non Noti Utilizzando Landmarks Visivi S. Vitabile, F. Bianco, G. Vassallo & F. Sorbello - Video Data Compression Using Multilayer Perceptrons S. Carrato - On the Adaptable Boolean Neural Net Paradigm F.E. Lauria, R. Prevete, M. Milo & S. Visco - Interval Arithmetic Perceptron with Pruning Capability G.P. Drago & S. Ridella - MAIA Neural Network. An Application to the Railway Antiskating System G. Pappalardo, M.N. Postorino, D. Rosaci & G.M.L. Sarne' - A Hebbian Model for Space Representation F. Frisone & P. Morasso - HW/SW Co-Design of a Complete Pre-Processing/Recognition System Based on Sgs-Thomson OCR Analog Chip M. Costa, D. Palmisano, E. Pasero & R. Tosco - Rates of Approximation of Multivariable Functions by One-Hidden-Layer Neural Networks V. Kurkova - Icarus Platform G. 
Russo - A Distribution-Free VC-Dimension-Based Performance Bound D. Mattera & F. Palmieri The registration is of 300.000 Italian Lire (275.000 Italian Lire for SIREN members) and can be made on site. More information can be found in the www pages at the address below: http:://www-dsi.ing.unifi.it/neural The workshop is the annual meeting of SIREN (Societa' Italiana Reti Neuroniche) Organized and Supported by IIASS (Istituto Internazionale Alti Studi Scientifici) "E.R. Caianiello" Istituto Italiano Studi Filosofici IEEE NNC Italian RIG Universita' degli Studi di Salerno and sponsored by : ELSAG BAILEY, Genova Dipartimento di Informatica ed Applicazioni "R.M. Capocelli", Univ. Salerno Dipartimento di Scienze dell'Informazione, Univ. Milano Dipartimento di Scienze Fisiche "E.R. Caianiello", Univ. Salerno IRSIP-CNR, Napoli From mpolycar at ececs.uc.edu Thu Apr 17 17:10:42 1997 From: mpolycar at ececs.uc.edu (Marios Polycarpou) Date: Thu, 17 Apr 1997 17:10:42 -0400 (EDT) Subject: Interdisciplinary Ph.D. Fellowships Message-ID: <199704172110.RAA07239@zoe.ece.uc.edu> REAL-TIME SPATIALLY-DISTRIBUTED DISINFECTANT CONTROL IN WATER DISTRIBUTION NETWORKS USING NEURAL NETWORKS The University of Cincinnati Earth System Science Program announces the availability of several Ph.D. graduate student fellowships, beginning Fall 1997. These Ph.D. fellowships are funded by the National Science Foundation. Each fellow will receive a full tuition scholarship and a monthly stipend ($1450/month for 12 months), plus funds for research supplies, travel to meetings, and support of a summer sabbatical at an off-campus location. Due to financial support requirements, all fellows must be either U.S. citizens or permanent residents. While the scope of research topics is broad and flexible, we are specifically seeking one or more individuals with an interest in studying spatially distributed real-time control of disinfectant residual in water distribution systems. The question we wish to answer is stated quite simply: How best to control the spatio-temporal distribution of disinfectant residual within a water distribution network? The actual problem is not simple, however, due to complex system dynamics and chemical kinetics. The looped distribution network (i.e., it is not a spanning tree) is a multiple-input multiple-output, spatially extended dynamic system with significant time delays. The network hydraulics, which govern disinfectant transport on a system scale, are driven by external consumer loads and time varying pump operations, and it is typical that network flows will change dramatically and frequently, both in magnitude and direction. Coupled with these dynamics are complex disinfectant kinetics that depend on the pipe material at a physical location, the water source(s), and the type of disinfectant addition at the source(s) (multiple disinfectants are sometimes used). Further, the reactions with chlorine (the most common disinfectant) produce byproducts that evidence suggests are carcinogenic. This evidence has led to tightening lower and upper limits on acceptable chlorine concentrations within the network, and thus to a real need for more advanced control approaches and for a better understanding of disinfectant decay kinetics. Our plans for this work include the development and adaptation of control-theoretic approaches, including neural network methodologies, for real-time spatially-distributed control of multiple simultaneous disinfectant additions. 
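As a purely illustrative sketch of the control setting (this is not the project's model: it assumes first-order bulk decay of chlorine in a single pipe segment and a simple integral-style feedback rule at one dosing point, with the rate constant, gain, and setpoint invented for the example):

def simulate(hours=48.0, dt=0.1, k=0.1, setpoint=0.5, gain=0.8, survive=0.7):
    """Toy chlorine-residual loop: the outlet concentration relaxes toward the
    decayed inlet dose, and the dose is nudged toward the residual setpoint."""
    c_out = 0.0          # outlet chlorine residual (mg/L)
    dose = 0.5           # current inlet dose (mg/L)
    history = []
    for step in range(int(hours / dt)):
        c_out += dt * (-k * c_out + k * survive * dose)          # first-order decay toward the decayed dose
        dose = max(0.0, dose + gain * (setpoint - c_out) * dt)   # raise the dose when the residual is low
        history.append((step * dt, dose, c_out))
    return history

for t, dose, c in simulate()[::120]:                             # print every 12 hours
    print("t=%5.1f h   dose=%.2f mg/L   residual=%.2f mg/L" % (t, dose, c))

A real distribution network couples many such segments through looped, time-varying hydraulics and multiple dosing points, which is what makes the spatially distributed version of the problem hard.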
Activities that fall within the project scope include: 1) development of appropriate modeling and simulation methods, 2) consideration of system robustness in the face of uncertain fluctuations in water demands, and 3) optimal location of disinfectant additions to minimize control effort. Applications are encouraged from individuals in any branch of engineering or the physical sciences; applicants should demonstrate a high degree of creativity along with strong quantitative and programming skills, and are expected to interact with chemists, electrical and environmental engineers, and utility personnel participating on the research team. For more information including application materials, contact Prof. Jim Uber (Environ. Hydrology, 513-556-3643, Jim.Uber at uc.edu) or Prof. Marios Polycarpou (Elec. & Comp. Eng. & Comp. Sci., 513-556-4763, Marios.Polycarpou at uc.edu). ************************************************************************** * Prof. Marios M. Polycarpou | TEL: (513) 556-4763 * * University of Cincinnati | FAX: (513) 556-7326 * * Dept. Electrical & Computer Engineering | * * Cincinnati, Ohio 45221-0030 | Email: polycarpou at uc.edu * ************************************************************************** From HECKATHO at B.PSC.EDU Thu Apr 17 13:52:23 1997 From: HECKATHO at B.PSC.EDU (HECKATHO@B.PSC.EDU) Date: Thu, 17 Apr 1997 13:52:23 -0400 Subject: Neural Workshop Announcement Message-ID: <970417135223.20408978@B.PSC.EDU> Simulations in Computational Neuroscience June 11-14, 1997 Pittsburgh Supercomputing Center Pittsburgh, PA Participants in this workshop will learn to use PGENESIS, a parallel version of the GENESIS simulator, and PNEURON (under development), a parallel version of the NEURON simulator. This course will be of interest to active modelers who perceive the need for large simulations which are beyond the effective capabilities of single-cpu workstations. Both PGENESIS and PNEURON are suitable for large scale parallel search of parameter space for single neuron and neuronal network models. PGENESIS is also suitable for parallel simulation of very large network models. Both of these packages run on single workstations, workstation networks, small-scale parallel computers and large massively parallel supercomputers, providing a natural scale-up path. For large simulations NSF funds four supercomputing centers for the use of US-based computational scientists. Familiarity with the non-parallel version of GENESIS or NEURON is preferred but not required. Techniques for parallel search of parameter space and for decomposition of network models will be two foci of the workshop. Participants are encouraged to bring their models to the workshop. Each participant is provided with an SGI Irix workstation and accounts on PSCs advanced computing resources including our 512-node Cray T3E. Each day lectures will be followed by hands-on computing sessions at which experienced instructors will be available to assist in using PGENESIS and PNEURON, and optimizing models. Hotel accommodations during the workshop for researchers affiliated with U.S. academic institutions will be paid by our NIH grant. Complimentary breakfast and lunches also will be provided. There is no registration fee for this workshop. All other costs incurred in attending (travel, other meals, etc.) are the responsibility of the individual participant. The deadline for submitting applications is May 3, 1997. Enrollment is limited to 20 participants. 
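The parameter-search theme can be illustrated generically (this is not PGENESIS or PNEURON syntax, just a hypothetical Python sketch of an embarrassingly parallel sweep; the conductance names and error surface are invented): each candidate parameter set is evaluated independently, which is the pattern both packages exploit on parallel machines.

from concurrent.futures import ProcessPoolExecutor
from itertools import product

def evaluate(params):
    """Stand-in for running one simulation and scoring its fit to data."""
    g_na, g_k = params
    return (g_na - 120.0) ** 2 + (g_k - 36.0) ** 2   # made-up error surface

if __name__ == "__main__":
    grid = list(product([100.0, 110.0, 120.0, 130.0],   # candidate g_Na values
                        [30.0, 36.0, 42.0]))            # candidate g_K values
    with ProcessPoolExecutor() as pool:                 # one worker per CPU by default
        scores = list(pool.map(evaluate, grid))
    best_score, best_params = min(zip(scores, grid))
    print("best parameters:", best_params, "error:", best_score)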
Further information and application materials can be found at: http://www.psc.edu/biomed/workshops/wk-97/neural.html Support for this workshop is from NIH under the NCRR program and from NSF under the Computational Activities in Biology program. From devin at psy.uq.edu.au Fri Apr 18 00:42:06 1997 From: devin at psy.uq.edu.au (Devin McAuley) Date: Fri, 18 Apr 1997 14:42:06 +1000 (EST) Subject: Connectionist Models of Cognition: A Workshop Message-ID: CONNECTIONIST MODELS OF COGNITION: A WORKSHOP Monday July 7th - Friday July 11th, 1997 University of Queensland Brisbane, Queensland 4072 Australia sponsored by the School of Psychology and the Cognitive Science Program Workshop Home Page: http://psy.uq.edu.au/~brainwav/Workshop/ BrainWave Home Page: http://psy.uq.edu.au/~brainwav/ This workshop provides an opportunity for faculty and students with teaching and research interests in connectionist modeling to gain hands-on modeling experience with the BrainWave neural network simulator during an intensive 5-day workshop on connectionist models of cognition at the University of Queensland. The workshop has three primary objectives: * to provide training in specific connectionist models of cognition. * to introduce instructors to the BrainWave simulator and course materials. * to support the development of new models by the participants. The first two days of the workshop will provide training in specific connectionist models of cognition and introduce participants to the BrainWave simulator. Day 3 will focus on how to develop a new model using BrainWave. On days 4 and 5, the instructors will be available to assist faculty and students in the development of new models and to discuss the development of teaching materials for undergraduate and postgraduate courses on connectionist modeling. Instructors: * Simon Dennis, School of Psychology * Devin McAuley, School of Psychology * Janet Wiles, Schools of Information Technology and Psychology Registration for the 5-day workshop includes: * Course materials: o The BrainWave Simulator for the Mac, Windows95, and Unix Platforms o An Introduction to Neural Networks and the BrainWave Simulator o Three chapters of a workbook on connectionist models of cognition * Morning and afternoon tea (Monday - Friday) * Lunch (Monday and Tuesday) The registration deadline for the workshop is June 6, 1997. ---------------------------------------------------------------------------- PROGRAM July 7 Session 1: Introduction to Connectionist Models and the BrainWave Simulator Session 2: Automatic and Controlled Processing: A Stroop Effect Model: Cohen, Dunbar, and McClelland (1990) July 8 Session 1: Language Disorders: A Deep Dyslexia Model: Hinton and Shallice (1991) Session 2: Episodic Memory: The Matrix Model: Humphreys, Bain, and Pike (1989) July 9 Introduction to Model Development in BrainWave July 10 Individual Model Development and Group Discussion July 11 Individual Model Development and Group Discussion ---------------------------------------------------------------------------- Send general inquiries and registration form to: email: brainwav at psy.uq.edu.au fax: +61-7-3365-4466 (attn: Dr. Devin McAuley) postal mail: BrainWave Workshop (c/o Dr. 
Devin McAuley) School of Psychology University of Queensland Brisbane, Queensland 4072 Australia ---------------------------------------------------------------------------- REGISTRATION FORM Name: ________________________________________ Email: ________________________________________ Address: ________________________________________ ________________________________________ ________________________________________ ________________________________________ Student (AUS$60) ____ Academic (AUS$95) ____ Industry (AUS$295) ____ ACCOMMODATION I would like accommodation at King's College ($45 per night - private room with ensuite shared between two, bed and breakfast) Yes ____ No ____ Arrival Date: ____ Departure Date: ____ Accommodation total: AUS $ ______ I would like to be billeted: Yes ____ No ____ Arrival Date: ____ Departure Date: ____ Total payment including registration: AUS $______ FORM OF PAYMENT Cheque or Money Order ____ Visa ____ Mastercard ____ Card # ____________________________ Expiration Date _____ Please debit my credit card for AUS$_______________ Signature _________________________________________ Cheques and money orders should be made out to University of Queensland, School of Psychology. From bishopc at helios.aston.ac.uk Fri Apr 18 02:27:49 1997 From: bishopc at helios.aston.ac.uk (Prof. Chris Bishop) Date: Fri, 18 Apr 1997 07:27:49 +0100 Subject: GTM paper and software available Message-ID: <21297.199704180627@sun.aston.ac.uk> GTM: The Generative Topographic Mapping ======================================= Christopher M. Bishop, Markus Svensen and Christopher K. I. Williams Accepted for publication in Neural Computation Abstract Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping, for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline. Available as a postscript file from the GTM home page: http://www.ncrg.aston.ac.uk/GTM/ This home page also provides a Matlab implementation of GTM as well as data sets used in its development. Related technical reports are available here too. To access other publications by the Neural Computing Research Group, go to the group home page: http://www.ncrg.aston.ac.uk/ and click on `Publications' -- you can then obtain a list of all online NCRG publications, or search by author, title or abstract.
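The abstract above can be made concrete with a small numerical sketch of the EM iteration for a GTM-style model: a grid of latent points, a radial-basis-function mapping into data space, spherical Gaussian noise, responsibilities in the E-step and a weighted least-squares solve in the M-step. The grid sizes, basis widths and toy data below are illustrative assumptions; the authors' reference implementation is the Matlab code at the GTM home page listed above.

import numpy as np

# Compact, illustrative EM loop for a GTM-style latent variable model on toy
# 2-D data.  Grid sizes, basis widths and the data are assumptions chosen for
# brevity, not the settings used in the paper.

rng = np.random.default_rng(0)

# Toy data: noisy 1-D curve embedded in 2-D.
N = 200
s = rng.uniform(-1.0, 1.0, N)
T = np.column_stack([s, s**2]) + 0.05 * rng.normal(size=(N, 2))   # N x D data
D = T.shape[1]

# Latent grid (K points in 1-D latent space) and M Gaussian basis functions.
K, M, sigma = 50, 10, 0.3
X = np.linspace(-1.0, 1.0, K)[:, None]               # K x 1 latent points
mu = np.linspace(-1.0, 1.0, M)[:, None]               # M x 1 basis centres
Phi = np.exp(-((X - mu.T) ** 2) / (2 * sigma**2))     # K x M design matrix

W = 0.1 * rng.normal(size=(M, D))                     # mapping weights
beta = 1.0                                            # inverse noise variance

for it in range(30):
    Y = Phi @ W                                       # K x D images of latent points
    # E-step: responsibility of each latent point for each data point.
    d2 = ((T[None, :, :] - Y[:, None, :]) ** 2).sum(-1)    # K x N squared distances
    logR = -0.5 * beta * d2
    R = np.exp(logR - logR.max(axis=0))
    R /= R.sum(axis=0, keepdims=True)
    # M-step: weighted least squares for W, then re-estimate the noise level.
    G = np.diag(R.sum(axis=1))
    W = np.linalg.solve(Phi.T @ G @ Phi + 1e-6 * np.eye(M), Phi.T @ R @ T)
    Y = Phi @ W
    d2 = ((T[None, :, :] - Y[:, None, :]) ** 2).sum(-1)
    beta = N * D / (R * d2).sum()

print("estimated noise standard deviation:", (1.0 / beta) ** 0.5)

The log-sum-exp shift in the E-step keeps the responsibilities numerically stable; everything else follows the two linear-algebra steps named in the lead-in.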
From bogner at argos.eleceng.adelaide.edu.au Fri Apr 18 09:14:50 1997 From: bogner at argos.eleceng.adelaide.edu.au (Robert E. Bogner) Date: Fri, 18 Apr 1997 22:44:50 +0930 Subject: Good Job in Australia Message-ID: <199704181314.WAA21872@argos.eleceng.adelaide.edu.au> POSTDOCTORAL OR RESEARCH FELLOW Signal Processing and Pattern Recognition at CSSIP and the University of Adelaide, South Australia The Cooperative Research Centre for Sensor Signal and Information Processing (CSSIP) is one of several cooperative research centres awarded by the Australian Government to establish excellence in research, development and industrial application of key technologies. The University of Adelaide, represented by the Electrical and Electronic Engineering Dept., is a partner in this cooperative research centre, together with the Defence Science and Technology Organisation, four other universities, and several companies. CSSIP consists of about 100 effective full-time researchers, and is well equipped with many UNIX workstations and a massively parallel machine. The aim of the position is to develop and investigate principles in the areas of sensor signal and image processing, classification and separation of signals, pattern recognition and data fusion. The Pattern Recognition program is in a formative phase. An appointment may be made for up to two years, with the possibility of renewal depending on funding. DUTIES: In consultation with task leaders and specialist researchers, to contribute to the formulation of research plans, investigate principles for algorithm design, design experiments, prepare data and software and carry out experiments, prepare or assist with the preparation of technical reports, and liaise with other groups and industrial contacts. QUALIFICATIONS: The successful candidate must have a Ph.D. or equivalent achievement, a proven research record, and excellent written and spoken English communication skills. CONDITIONS and PAY will be in accordance with University of Adelaide policies, and will depend on qualifications and experience. Appointments may be made on scales of A$36,285 to A$41,000 for a postdoctoral fellow, and A$42,538 to A$48,688 for a research fellow (A$1 is approx. US$0.78), plus superannuation contribution. The position may be offered as soon as an outstanding applicant is found. ENQUIRIES: Prof. R. E. Bogner, Electrical & Electronic Engineering Dept., The University of Adelaide, Adelaide, South Australia 5005, phone: (61)-8-8303-5589, Fax: (61)-8-8303-4360, Email: bogner at eleceng.adelaide.edu.au APPLICATIONS should include nationality, residence qualification, the date on which the applicant could take up duty in Adelaide, and the names of three referees. They should be sent to Mr. Tim Anderson, Luminis Pty. Ltd, Box 149, Rundle Mall, Adelaide, South Australia 5000. Email: luminis at luminis.adelaide.edu.au From piuri at elet.polimi.it Sat Apr 19 05:13:27 1997 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Sat, 19 Apr 1997 11:13:27 +0200 Subject: Call for Papers Message-ID: <1.5.4.32.19970419072717.006d2c10@elet.polimi.it> ICONIP'97, The Fourth International Conference on Neural Information Processing, November 24-28, 1997, Dunedin/Queenstown, New Zealand Special Session on System Monitoring, Modeling, and Analysis CALL FOR PAPERS Adaptive and intelligent systems based on neural computation and related techniques have successfully been applied in the analysis of various complex processes. This is due to the inherent learning capability of neural networks, which makes them well suited to analyzing systems that cannot be modeled analytically.
In addition to various fields of engineering, such as pattern recognition, industrial process monitoring, and telecommunications, practical applications include information retrieval, data analysis, and financial applications. A special session devoted to these areas of neural computation will be organized at ICONIP'97. The scope of the special session covers neural network methods and related techniques as well as applications in the following areas: - monitoring, modeling, and analysis of complex industrial processes - telecommunications applications, including resource management and optimization - data analysis and fusion, including financial applications - time series modeling and forecasting Prospective authors are invited to submit papers to the special session on any area of neural techniques for system monitoring, modeling, and analysis, including, but not limited to, the topics listed above. The submissions must be received by May 30, 1997. Please send five copies of your manuscript to Prof. Olli Simula, Special Session Organizer, Helsinki University of Technology, Laboratory of Computer and Information Science, Rakentajanaukio 2 C, FIN-02150 Espoo, Finland. More detailed instructions for the manuscript submission procedure can be found on the special session home page: http://nucleus.hut.fi/ICONIP97/ssmonitor/ For the most up-to-date information about ICONIP'97, please browse the conference home page: http://divcom.otago.ac.nz:800/com/infosci/kel/iconip97.htm Important dates: Papers due: May 30, 1997 Notification of acceptance: July 20, 1997 Final camera-ready papers due: August 20, 1997 ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- Olli Simula Professor of Computer Science Helsinki University of Technology Telephone: +358-9-4513271 Department of Computer Science and Engineering Mobile: +358-400-448412 Laboratory of Computer and Information Science Fax: +358-9-4513277 http://nucleus.hut.fi/~ollis/ Email: Olli.Simula at hut.fi From piero at matilde.laboratorium.dist.unige.it Tue Apr 22 19:52:50 1997 From: piero at matilde.laboratorium.dist.unige.it (Piero Morasso) Date: Tue, 22 Apr 97 19:52:50 MET DST Subject: Book announcement Message-ID: <9704221752.AA12961@matilde.laboratorium.dist.unige.it> ========================================================================= ANNOUNCEMENT OF A NEW BOOK ON COMPUTATIONAL NEUROSCIENCE SELF-ORGANIZATION, COMPUTATIONAL MAPS, AND MOTOR CONTROL edited by Pietro Morasso and Vittorio Sanguineti North Holland Elsevier - Advances in Psychology vol. 119 ISBN 0 444 823239, 1997, 635 pages In the study of the computational structure of biological/robotic sensorimotor systems, distributed models have gained center stage in recent years, with a range of issues including self-organization, non-linear dynamics, field computing, etc. This multidisciplinary research area is addressed by a multidisciplinary team of contributors, who provide a balanced set of articulated presentations including reviews, computational models, simulation studies, and psychophysical and neurophysiological experiments. For convenience, the book is divided into three parts, without clear-cut boundaries but with a slightly different focus in each.
The reader can find different approaches to controversial issues, such as the role and nature of force fields, the need for internal representations, the nature of invariant commands, the vexing question of coordinate transformations, the distinction between hierarchical and bidirectional modelling, and the influence of muscle stiffness. In Part I, the major theme concerns computational maps which typically model cortical areas, according to a view of the sensorimotor cortex as a "geometric engine" and the site of "internal models" of external spaces. Part II also addresses problems of self-organization and field computing, but in a simpler computational architecture which, although lacking specialized cortical machinery, can still behave in a very adaptive and surprising way by exploiting the interaction with the real world. Finally, Part III focuses on the motor control issues related to the physical properties of muscular actuators and the dynamic interactions with the world, attempting to complete the picture from planning to control. PART I Cortical Maps of Sensorimotor Spaces V. Sanguineti, P. Morasso, and F. Frisone Field Computation in Motor Control B. MacLennan A Probability Interpretation of Neural Population Coding for Movement T.D. Sanger Computational Models of Sensorimotor Integration Z. Ghahramani, D.M. Wolpert, and M.I. Jordan How Relevant are Subcortical Maps for the Cortical Machinery? An Hypothesis Based on Parametric Study of Extra-Relay Afferents to Primary Sensory Areas D. Minciacchi and A. Granato PART II Artificial Force-Field Based Methods in Robotics T. Tsuji, P. Morasso, V. Sanguineti, and M. Kaneko Learning Newtonian Mechanics F.A. Mussa Ivaldi and E. Bizzi Motor Intelligence in a Simple Distributed Control System: Walking Machines and Stick Insects H. Cruse and J. Dean The Dynamic Neural Field Theory of Motor Programming: Arm and Eye Movements G. Schöner, K. Kopecz, and W. Erlhagen Network Models in Motor Control and Music A. Camurri PART III Human Arm Impedance in Multi-Joint Movement T. Tsuji Neural Models for Flexible Control of Redundant Systems F.H. Guenther and D. Micci Barreca Models of Motor Adaptation and Impedance Control in Human Arm Movements T. Flash and I. Gurevich Control of Human Arm and Jaw Motion: Issues Related to Musculo-Skeletal Geometry P.L. Gribble, R. Laboissière, and D.J. Ostry Computational Maps and Target Fields for Reaching Movements V. Sanguineti and P. Morasso From Uwe.Zimmer at GMD.de Tue Apr 22 18:39:45 1997 From: Uwe.Zimmer at GMD.de (Uwe R. Zimmer) Date: Wed, 23 Apr 1997 00:39:45 +0200 Subject: Japanese Robotics Research - a report and more Message-ID: <335D3E30.77C@GMD.de> Dear all, a report discussing outstanding research topics in Japanese robotics laboratories as well as governmental issues has just been released. Some of the discussed groups deal with neural (biological) sensory-motor control, others are involved in biologically plausible sensory systems or redundant (humanoid) kinematics. Even creatures combining mechatronic and biological structures are investigated. ---------------------------------------------------------------------- Recent Developments in Japanese Robotics Research - Notes of a Japan Tour - Uwe R.
Zimmer, Thomas Christaller, Christfried Webers ---------------------------------------------------------------------- http://www.gmd.de/People/Uwe.Zimmer/Publications/abs.Japan-Report.html ---------------------------------------------------------------------- (containing links to .pdf, .ps.gz, and .ps.Z formats of the report) Abstract: Robotics appears to be a very lively and fruitful field of research in Japan. Some of the research topics cannot be found elsewhere at all, and some are significantly advanced. Discussing this impression, a collection of laboratories is introduced with their most outstanding topics. Moreover, some background information about research plans, politics, and organisations is given. ---------------------------------------------------------------------- For an extensive web presentation of robotics in Japan see also: ---------------------------------------------------------------------- http://www.gmd.de/People/Uwe.Zimmer/Lists/Robotics.in.Japan.html ---------------------------------------------------------------------- For more publications on robotics from GMD, please refer to: ---------------------------------------------------------------------- http://www.gmd.de/FIT/KI/CogRob/Publications/CogRob.Publications.html ---------------------------------------------------------------------- And for general information about scientific activities in Japan: ---------------------------------------------------------------------- http://www.gmd.de/Japan/ ---------------------------------------------------------------------- Uwe R. Zimmer, GMD - FIT-KI, Schloss Birlinghoven, 53754 St. Augustin, Germany. Voice: +49 2241 14 2373 - Fax: +49 2241 14 2384 - http://www.gmd.de/People/Uwe.Zimmer/ From meyer at wotan.ens.fr Wed Apr 23 08:57:09 1997 From: meyer at wotan.ens.fr (Jean-Arcady MEYER) Date: Wed, 23 Apr 1997 14:57:09 +0200 (MET DST) Subject: MODELS OF SPATIAL NAVIGATION Message-ID: <199704231257.OAA00763@eole.ens.fr> ================================================================== CALL FOR PAPERS ADAPTIVE BEHAVIOR Journal (The MIT Press) Special Issue on BIOLOGICALLY INSPIRED MODELS OF SPATIAL NAVIGATION Guest editor: Nestor Schmajuk Submission Deadline: August 31, 1997. In recent decades, computational models of animal and human spatial navigation have received increasing attention from computer scientists, engineers, psychologists, and neurophysiologists. At the same time, roboticists have devoted enormous effort to the design of robots capable of spatial navigation. The combined contribution of these fields to the study of spatial navigation promises rapid progress in this area. This special issue of Adaptive Behavior will focus on models of spatial navigation in both animals and robots. We are soliciting papers describing finished work on models and technology applied to maze navigation, search behavior, and exploration. Models of brain areas involved in navigation are also welcome. We encourage submissions that address the following topics: -spatial learning in animals or robots -maze navigation in animals or robots -exploratory and searching behavior by individual animals or robots -exploratory behavior by groups of animals or robots -learning from incremental and delayed feedback Submitted papers should be delivered by August 31, 1997.
Authors intending to submit a manuscript should contact the guest editor as soon as possible to discuss paper ideas and suitability for this issue. Use nestor at acpub.duke.edu or tel: (919) 660-5695 or fax: (919) 660-5726. Manuscripts should be typed or laser-printed in English (with American spelling preferred) and double-spaced. Copies of the complete Adaptive Behavior Instructions to Contributors are available on request--also see the Adaptive Behavior journal's home page at: http://www.biologie.ens.fr/AnimatLab/AB.html For paper submissions, send five (5) copies of submitted papers (hard-copy only) to: Dr. Nestor Schmajuk Department of Psychology Duke University Durham, NC 27708 ================================================================== From mel at quake.usc.edu Wed Apr 23 02:04:19 1997 From: mel at quake.usc.edu (Bartlett Mel) Date: Wed, 23 Apr 1997 14:04:19 +0800 Subject: Joint Symposium Registration and Program Message-ID: <9704232104.AA12648@quake.usc.edu> Please find REGISTRATION INFORMATION and PRELIMINARY PROGRAM for the upcoming 4th Annual Southern California JSNC: ----------------------------- --- 4th Annual Joint Symposium on Neural Computation --- Co-sponsored by Institute for Neural Computation University of California, San Diego and Biomedical Engineering Department and Neuroscience Program University of Southern California to be hosted at The University of Southern California University Park Campus Rm. 124, Seeley G. Mudd Building Saturday, May 17, 1997 8:00 a.m. to 5:30 p.m. 8:00 am Registration/Coffee 8:50 am Opening Remarks Session 1: "VISION" - Bartlett Mel, Chair 9:00 am Peter Kalocsai, USC "Using Extension Fields to Improve Performance of a Biologically Inspired Recognition Model" 9:15 am Kechen Zhang, The Salk Institute "A Conjugate Neural Representation of Visual Objects in Three Dimensions" 9:30 am Alexander Grunewald, Caltech "Detection of First and Second Order Motion" 9:45 am Zhong-Lin Lu, USC "Extracting Characteristic Structures from Natural Images Through Statistically Certified Unsupervised Learning" 10:00 am Don McCleod, UC San Diego "Optimal Nonlinear Codes" 10:15 am Lisa J. Croner, The Salk Institute "Segmentation by Color Influences Response of Motion-Sensitive Neurons in Cortical Area MT" 10:15 am - 10:30 am *** BREAK *** Session 2: "CODING in NEURAL SYSTEMS" - Christof Koch, Chair 10:30 am Dawei Dong, Caltech "How Efficient is Temporal Coding in the Early Visual System?" 10:45 am Martin Stemmler, Caltech "Entropy Maximization in Hodgkin-Huxley Models" 11:00 am Michael Wehr, Caltech "Temporal Coding with Oscillatory Sequences of Firing" 11:15 am Martin J. McKeown, The Salk Institute "Functional Magnetic Resonance Imaging Data Interpreted as Spatially Independent Mixtures" 11:30 am KEYNOTE SPEAKER: Prof. Irving Biederman, William M. Keck Professor of Cognitive Neuroscience Departments of Psychology and Computer Science and the Neuroscience Program, USC "Shape Representation in Mind and Brain" ------------- 12:30 pm - 2:30 pm *** LUNCH/POSTERS *** P1. Konstantinos Alataris, USC "Modeling of Neuronal Ensemble Dynamics" P2. George Barbastathis, Caltech "Awareness-Based Computation" P3. Marian Stewart Bartlett, UC San Diego "What are the Independent Components of Face Images?" P4. Maxim Bazhenov, The Salk Institute "A Computational Model of Intrathalamic Augmenting Responses" P5. Alan Bond, Caltech "A Computational Model for the Primate Brain Based on its Functional Architecture" P6.
Glen Brown, The Salk Institute "Output Sign Switching by Neurons is Mediated by a Novel Voltage-Dependent Sodium Current" P7. Martin Chian, USC "Characterization of Unobservable Neural Circuitry in the Hippocampus with Nonlinear Systems Analysis" P8. Carl Chiang, The Neuroscience Institute "Visual and Sensorimotor Intra- and Intercolumnar Synchronization in Awake Behaving Cat" P9. Matthew Dailey, UC San Diego "Learning a Specialization for Face Recognition" P10. Emmanuel Gillissen, Caltech "Comparative Studies of Callosal Specification in Mammals" P11. Michael Gray, The Salk Institute "Informative Features for Visual Speechreading" P12. Alex Guazzelli, USC "A Taxon-Affordances Model of Rat Navigation" P13. Marwan Jabri, The Salk Institute/Sydney University "A Neural Network Model for Saccades and Fixation on Superior Colliculus" P14. Mathew Lamb, USC "Depth Based Prey Capture in Frogs and Salamanders" P15. Te-Won Lee, The Salk Institute "Independent Component Analysis for Mixed Sub-Gaussian and Super-Gaussian Sources" P16. George Marnellos, The Salk Institute "A Gene Network of Early Neurogenesis in Drosophila" P17. Steve Potter, Caltech "Animat in a Petri Dish: Cultured Neural Networks for Studying Neural Computation" P18. James Prechtl, UC San Diego "Visual Stimuli Induce Propagating Waves of Electrical Activity in Turtle Cortex" P19. Raphael Ritz, The Salk Institute "Multiple Synfire Chains in Simultaneous Action Lead to Poisson-Like Neuronal Firing" P20. Adrian Robert, UC San Diego "A Model of the Effects of Lamination and Celltype Specialization in the Neocortex" P21. Joseph Sirosh, HNC Software Inc. "Large-Scale Neural Network Simulations Suggest a Single Mechanism for the Self-Organization of Orientation Maps, Lateral Connections and Dynamic Receptive Fields in the Primary Visual Cortex" P22. George Sperling, UC Irvine "A Proposed Architecture for Visual Motion Perception" P23. Adam Taylor, UC San Diego "Dynamics of a Recurrent Network of Two Bipolar Units" P24. Laurenz Wiskott, The Salk Institute "Objective Functions for Neural Map Formation" ------------------------------------------- Session 3: "HARDWARE" - Michael Arbib, Chair 2:30 pm Christof Born, Caltech "Real Time Ego-Motion Estimation with Neuromorphic Analog VLSI Sensors" 2:45 pm Anil Thakoor, JPL "High Speed Image Computation with 3D Analog Neural Hardware" Session 4: "VISUOMOTOR COORDINATION" - Michael Arbib, Chair 3:00 pm Marwan Jabri, The Salk Institute/Sydney University "A Computational Model of Auditory Space Neural Coding in the Superior Colliculus" 3:15 pm Amanda Bischoff, USC "Modeling the Basal Ganglia in a Reciprocal Aiming Task" 3:30 pm Jacob Spoelstra, USC "A Computational Model of the Role of the Cerebellum in Adapting to Throwing While Wearing Wedge Prism Glasses" 3:45 pm - 4:00 pm *** BREAK *** Session 5: "CHANNELS, SYNAPSES, and DENDRITES" - Terry Sejnowski, Chair 4:00 pm Akaysha C. Tang, The Salk Institute "Modeling the Effect of Neuromodulation of Spike Timing in Neocortical Neurons" 4:15 pm Michael Eisele, The Salk Institute "Reinforcement Learning by Pyramidal Neurons" 4:30 pm Sunil S. Dalal, USC "A Nonlinear Positive Feedback Model of Glutamatergic Synaptic Transmission in Dentate Gyrus" 4:45 pm Venkatesh Murthy, The Salk Institute "Are Neighboring Synapses Independent?" 5:00 pm Gary Holt, Caltech "Shunting Inhibition Does Not Have a Divisive Effect on Firing Rates" 5:15 pm Kevin Archie, USC "Binocular Disparity Tuning in Cortical 'Complex' Cells: Yet Another Role for Intradendritic Computation?"
5:30 pm Closing Remarks *** Adjourn for DINNER *** ------------------------------------------- ORGANIZERS Bartlett Mel, USC (Chair) Michael Arbib, USC Terry Sejnowski, Salk/UCSD PROGRAM COMMITTEE Michael Arbib, USC Bartlett Mel, USC (Chair) Christof Koch, Caltech Terry Sejnowski, UCSD ------------------------------------------- REGISTRATION INFORMATION If you have not already registered, you still have 10 days to do so before the Pre-Registration DEADLINE: Pre-Registration: $25 Late/On-site Registration: $35 (received after Friday, May 2) Registration fee includes coffee, snacks, lunch, and proceedings. Registration form and checks payable to the "Department of Biomedical Engineering, USC" should be mailed to: Joint Symposium, attn: Linda Yokote Biomedical Engineering Department USC, MC 1451 Los Angeles, CA 90089 Administrative questions can be addressed to Linda at: yokote at bmsrs.usc.edu (213)740-0840, (213)740-0343 fax ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1997 JSNC Attendees Registration Form Name: ________________________________________________________________________ Affiliation: _________________________________________________________________ Address: _____________________________________________________________________ _____________________________________________________________________ _____________________________________________________________________ Phone: _________________ Fax: ____________________ E-mail: __________________ Special Dietary Preference: __________________________________________________ Registration fee enclosed: _________ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ DIRECTIONS TO THE SYMPOSIUM From juergen at idsia.ch Thu Apr 24 04:06:39 1997 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Thu, 24 Apr 1997 10:06:39 +0200 Subject: IDSIA job interviews Message-ID: <199704240806.KAA01923@ruebe.idsia.ch> Concerning the recent IDSIA job openings in http://www.idsia.ch/~juergen/lstm.html : Between May 25 and June 1 I'll be in Hong Kong (for TANC-97). Candidates from Southeast Asia may be interested in arranging job interviews there. Juergen Schmidhuber, IDSIA From terry at salk.edu Thu Apr 24 21:31:57 1997 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 24 Apr 1997 18:31:57 -0700 (PDT) Subject: NEURAL COMPUTATION 9:4 Message-ID: <199704250131.SAA04124@helmholtz.salk.edu> Neural Computation - Contents Volume 9, Number 4 - May 15, 1997 Review Similarity, Connectionism, and the Problem of Representation in Vision Shimon Edelman and Sharon Duvdevani-Bar Article Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex Rajesh P. N. Rao and Dana H. Ballard Notes Correction to "Lower Bounds on the VC-Dimension of Smoothly Parametrized Function Classes Wee Sun Lee, Peter L. Bartlett and Robert C. Williamson Lower Bound on VC-Dimension by Local Shattering Yossi Erlich, Dan Chazan, Scott Petrack, and Avi Levi Letters SEEMORE: Combining Color, Shape and Texture Histogramming in a Neurally-Inspired Approach to Visual Object Recognition Bartlett Mel Image Segmentation Based on Oscillatory Correlation DeLiang Wang and David Terman Stochastic Completion Fields: A Neural Model of Illusory Contour Shape and Salience Lance R. Williams and David W. Jacobs Local Parallel Computation of Stochastic Completion Fields Lance R. Williams and David W. 
Jacobs Optimal, Unsupervised Learning in Invariant Object Recognition Guy Wallis and Roland Baddeley Activation Functions, Computational Goals and Learning Rules for Local Processors with Contextual Guidance Jim Kay and W. A. Phillips Marr's Theory of the Neocortex as a Self-Organizing Neural Network David Willshaw, John Hallam, Sarah Gingell and Soo Leng Lau ----- ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1997 - VOLUME 9 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $250 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-8 are regularly available for $28 each to institutions and $14 each for individuals Add $5 for postage per issue outside USA (+7% GST for Canada) mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 ----- From ataxr at IMAP1.ASU.EDU Wed Apr 23 21:12:46 1997 From: ataxr at IMAP1.ASU.EDU (Asim Roy) Date: Wed, 23 Apr 1997 21:12:46 -0400 (EDT) Subject: CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS? Message-ID: 1997 International Conference on Neural Networks (ICNN'97) Houston, Texas (June 8 -12, 1997) ---------------------------------------------------------------- Further information on the conference is available on the conference web page: http://www.eng.auburn.edu/department/ee/ICNN97 ------------------------------------------------------------------ PANEL DISCUSSION ON "CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?" ------------------------------------------------------------------- This is to announce that a panel will discuss the above question at ICNN'97 on Monday afternoon (June 9). Below is the abstract for the panel discussion broadly outlining the questions to be addressed. I am also attaching a slightly modified version of a subsequent note sent to the panelist. I think the issues are very broad and the questions are simple. The questions are not tied to any specific "algorithm" or "network architecture" or "task to be performed." However, the answers to these simple questions may have an enormous effect on the "nature of algorithms" that we would call "brain-like" and for the design and construction of autonomous learning systems and robots. I believe these questions also have a bearing on other brain related sciences such as neuroscience, neurobiology and cognitive science. Please send any comments on these issues directly to me (asim.roy at asu.edu). I will post the collection of responses to the newsgroups in a few weeks. All comments/criticisms/suggestions are welcome. All good science depends on vigorous debate. Asim Roy Arizona State University ------------------------- PANEL MEMBERS 1. Igor Aleksander 2. Shunichi Amari 3. Eric Baum 4. Jim Bezdek 5. Rolf Eckmiller 6. Lee Giles 7. Geoffrey Hinton 8. Dan Levine 9. Robert Marks 10. Jean Jacques Slotine 11. John G. Taylor 12. David Waltz 13. Paul Werbos 14. Nicolaos Karayiannis (Panel Moderator, ICNN'97 General Chair) 15. Asim Roy Six of the above members are plenary speakers at the meeting. ------------------------- PANEL TITLE: "CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?" ABSTRACT Classical connectionist learning is based on two key ideas. First, no training examples are to be stored by the learning algorithm in its memory (memoryless learning). 
It can use and perform whatever computations are needed on any particular training example, but must forget that example before examining others. The idea is to obviate the need for large amounts of memory to store a large number of training examples. The second key idea is that of local learning - that the nodes of a network are autonomous learners. Local learning embodies the viewpoint that simple, autonomous learners, such as the single nodes of a network, can in fact produce complex behavior in a collective fashion. This second idea, in its purest form, implies a predefined net being provided to the algorithm for learning, such as in multilayer perceptrons. Recently, some questions have been raised about the validity of these classical ideas. The arguments against classical ideas are simple and compelling. For example, it is a common fact that humans do remember and recall information that is provided to them as part of learning. And the task of learning is considerably easier when one remembers relevant facts and information than when one doesn’t. Second, strict local learning (e.g. back propagation type learning) is not a feasible idea for any system, biological or otherwise. It implies predefining a network "by the system" without having seen a single training example and without having any knowledge at all of the complexity of the problem. Again, there is no system that can do that in a meaningful way. The other fallacy of the local learning idea is that it acknowledges the existence of a "master" system that provides the design so that autonomous learners can learn. Recent work has shown that much better learning algorithms, in terms of computational properties (e.g. designing and training a network in polynomial time complexity, etc.) can be developed if we don’t constrain them with the restrictions of classical learning. It is, therefore, perhaps time to reexamine the ideas of what we call "brain-like learning." This panel will attempt to address some of the following questions on classical connectionists learning: 1. Should memory be used for learning? Is memoryless learning an unnecessary restriction on learning algorithms? 2. Is local learning a sensible idea? Can better learning algorithms be developed without this restriction? 3. Who designs the network inside an autonomous learning system such as the brain? ------------------------- A SUBSEQUENT NOTE SENT TO THE PANELIST The panel abstract was written to question the two pillars of classical connectionist learning - memoryless learning and pure local learning. With regards to memoryless learning, the basic argument against it is that humans do store information (remember facts/information) in order to learn. So memoryless learning, as far I understand, cannot be justified by any behavioral or biological observations/facts. That does not mean that humans store any and all information provided to them. They are definitely selective and parsimonious in the choice of information/facts to collect and store. We have been arguing that it is the "combination" of memoryless learning and pure local learning that is not feasible for any system, biological or otherwise. Pure local learning, in this context, implies that the system somehow puts together a set of "local learners" that start learning with each learning example given to it (e.g. in back propagation) without having seen a single training example before and without knowing anything about the complexity of the problem. 
Such a system can be demonstrated to do well in some cases, but would not work in general. Note that not all existing neural network algorithms are of this pure local learning type. For example, if I understand correctly, in constructive algorithms such as ART, RBF, RCE/hypersphere and others, a "decision" to create a new node is made by a "global decision-maker" based on evidence on the performance of the existing system. So there is quite a bit of global coordination and "decision-making" in those algorithms beyond the simple "local learning". Anyway, if we "accept" the idea that memory can indeed be used for the purpose of learning (Paul Werbos indicated so in one of his notes), the terms of the debate/discussion change dramatically. We then open the door to the development of far more robust and reliable learning algorithms with much nicer properties than before. We can then start to develop algorithms that are closer to "normal human learning processes". Normal human learning includes processes such as (1) collection and storage of information about a problem, (2) examination of the information at hand to determine the complexity of the problem, (3) development of trial solutions (nets) for the problem, (4) testing of trial solutions (nets), (5) discarding such trial solutions (nets) if they are not good enough, and (6) repetition of these processes until an acceptable solution is found. And these learning processes are implemented within the brain, without doubt, using local computing mechanisms of different types. But these learning processes cannot exist without allowing for storage of information about the problem. One of the "large" missing pieces in the neural network field is the definition or characterization of an autonomous learning system such as the brain. We have never defined the external behavioral characteristics of our learning algorithms. We have largely pursued algorithm development from an "internal mechanisms" point of view (local learning, memoryless learning) rather than from the point of view of "external behavior or characteristics" of these resulting algorithms. Some of these external characteristics of our learning algorithms might be: (1) the capability to design the net on their own, (2) polynomial time complexity of the algorithm in design and training of the net, (3) generalization capability, and (4) learning from as few examples as possible (quickness in learning). It is perhaps time to define a set of desirable external characteristics for our learning algorithms. We need to define characteristics that are "independent of": (1) a particular architecture, (2) the problem to be solved (function approximation, classification, memory, etc.), (3) local/global learning issues, and (4) issues of whether to use memory or not to learn. We should rather argue about these external properties than about issues of global/local learning and of memoryless learning.
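The distinction being debated can be pictured with a toy contrast (illustrative only, not an algorithm proposed by the panel): a "memoryless" learner that updates on each example and then discards it, versus a "memory-based" learner that stores the examples, tries several candidate models, tests them, and keeps the best.

import numpy as np

# Illustrative contrast between the two styles of learning discussed above.
# Both learners and the toy data are assumptions for illustration.

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # toy linearly separable labels

# Memoryless: a single pass of perceptron updates, no examples retained.
w = np.zeros(2)
for xi, yi in zip(X, y):
    pred = int(w @ xi > 0)
    w += (yi - pred) * xi                          # update, then "forget" (xi, yi)

# Memory-based: keep all examples, evaluate several trial models, keep the best.
train, test = slice(0, 150), slice(150, 200)
candidates = {f"k={k}": k for k in (1, 3, 7)}      # trial nearest-neighbour models

def knn_accuracy(k):
    d = ((X[test][:, None, :] - X[train][None, :, :]) ** 2).sum(-1)
    votes = y[train][np.argsort(d, axis=1)[:, :k]]
    return float(((votes.mean(axis=1) > 0.5).astype(int) == y[test]).mean())

scores = {name: knn_accuracy(k) for name, k in candidates.items()}
print("memoryless perceptron accuracy:", float(((X @ w > 0).astype(int) == y).mean()))
print("memory-based trial models:", scores, "-> keep", max(scores, key=scores.get))

Neither learner is meant as a serious proposal; the point is only that the second one needs the stored examples in order to try, test, and discard candidate solutions.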
With best regards, Asim Roy Arizona State University From atick at monaco.rockefeller.edu Mon Apr 28 08:01:02 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Mon, 28 Apr 1997 08:01:02 -0400 Subject: Network:CNS 8:2 Message-ID: <9704280801.ZM7641@monaco.rockefeller.edu> Network: Computation in Neural Systems 8: 2, 1997 Table of Contents TOPICAL REVIEW R33 Plasticity in adult sensory cortex: a review A Das PAPERS 107 A neuronal model of stroboscopic alternative motion A Bartsch and J L van Hemmen 127 Metric-space analysis of spike trains: theory, algorithms and application J D Victor and K P Purpura 165 Dynamic transitions in global network activity influenced by the balance of excitation and inhibition S L Hill and A E P Villa 185 Spontaneous origin of topological complexity in self-organizing neural networks G Chapline 195 Hebbian learning and the development of direction selectivity: the role of geniculate response timings J C Feidler, A B Saul, A Murthy and A L Humphrey 215 Self-organized feature maps and information theory K Holthausen and O Breidbach 229 Topological singularities in cortical orientation maps: the sign theorem correctly predicts orientation column patterns in primate striate cortex D Tal and E L Schwartz The journal is fully online--those with institutional subscription can access it directly from the web: http:/www.iop.org/Journals/ne -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From bert at mbfys.kun.nl Tue Apr 29 07:07:49 1997 From: bert at mbfys.kun.nl (Bert Kappen) Date: Tue, 29 Apr 1997 13:07:49 +0200 Subject: job opening Message-ID: <199704291107.NAA16975@anthemius.mbfys.kun.nl> Post doc position available at SNN, University of Nijmegen, the Netherlands. Background: The group consists of 8 researchers and PhD students and conducts theoretical and applied research on neural networks. The group is part of the Laboratory of Biophysics which is involved in experimental brain science. The group will significantly expand in the coming years due to a recently received PIONIER research grant. Recent research of the group has focussed on theoretical description of learning processes using the theory of stochastic processes and the design of efficient learning rules for Boltzmann machines using techniques from statistical mechanics; the extraction of rules from data and the integration of knowledge and data for modeling; the design of robust methods for confidence estimation with neural networks. (See also http://www.mbfys.kun.nl/SNN) Job specification: The tasks of the post-doc will be to conduct independent research in one of the above areas. In addition, it is expected that the post-doc will initiate novel research and will assist in the supervision of PhD students. The postdoc should have a PhD in physics, mathematics or computer science and a strong theoretical background in neural networks. The post-doc salary will be maximally Dfl. 7178 per month, depending on experience. The position is available for 3 years with possible extension to 5 years. Applications: Interested candidates should send a letter with a CV and list of publications before June 1 1997 to dr. H.J. Kappen, Stichting Neurale Netwerken, University of Nijmegen, Geert Grooteplein 21, 6525 EZ Nijmegen. For information contact dr. H.J. Kappen, +31 24 3614241. 
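As background to the Boltzmann machine work mentioned in the posting above, the classical (sampling-based) learning rule adjusts each weight in proportion to the difference between pairwise correlations measured with the data clamped and correlations generated freely by the model. The sketch below implements that textbook rule on a tiny fully visible network; it is not the mean-field method referred to in the posting, and the network size and data are made up.

import numpy as np

# Classical learning rule for a fully visible Boltzmann machine:
# dW_ij ~ <s_i s_j>_data - <s_i s_j>_model, with the model average estimated
# by Gibbs sampling.  All sizes and data are illustrative assumptions.

rng = np.random.default_rng(1)
n = 6                                      # number of units, +/-1 states
data = rng.choice([-1, 1], size=(100, n))  # toy "training" patterns
data[:, 1] = data[:, 0]                    # build in one strong pairwise correlation

W = np.zeros((n, n))
eta = 0.05

def gibbs_sample(W, steps=500):
    s = rng.choice([-1, 1], size=n)
    samples = []
    for t in range(steps):
        i = rng.integers(n)
        p_on = 1.0 / (1.0 + np.exp(-2.0 * W[i] @ s))   # P(s_i = +1 | rest)
        s[i] = 1 if rng.random() < p_on else -1
        if t >= steps // 2:                             # discard burn-in
            samples.append(s.copy())
    return np.array(samples)

for epoch in range(50):
    clamped = data.T @ data / len(data)                          # <s_i s_j> under the data
    free_samples = gibbs_sample(W)
    free = free_samples.T @ free_samples / len(free_samples)     # <s_i s_j> under the model
    dW = eta * (clamped - free)
    np.fill_diagonal(dW, 0.0)                                    # no self-connections
    W += dW

print("learned coupling between units 0 and 1:", round(W[0, 1], 3))

With units 0 and 1 made identical in the toy data, the learned coupling W[0, 1] should grow clearly positive after a few tens of epochs.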
From wiskott at salk.edu Tue Apr 29 13:17:46 1997 From: wiskott at salk.edu (Laurenz Wiskott) Date: Tue, 29 Apr 1997 10:17:46 -0700 Subject: technical report on Neural Map Formation available Message-ID: <199704291717.KAA04638@katz.salk.edu> The following technical report is now available by anonymous ftp from http://www.cnl.salk.edu/~wiskott/Abstracts/WisSej97a.html ftp://ftp.cnl.salk.edu/pub/wiskott/publications/WisSej97a-NeuralMapFormation-TRINC9701.ps.gz Comments are welcome! Laurenz Wiskott. Objective Functions for Neural Map Formation Laurenz Wiskott and Terrence Sejnowski Abstract: Computational models of neural map formation can be considered on at least three different levels of abstraction: detailed models including neural activity dynamics, weight dynamics which abstract from the neural activity dynamics by an adiabatic approximation, and objective functions from which weight dynamics may be derived as gradient flows. In this paper we present an example of how an objective function can be derived from detailed non-linear neural dynamics. A systematic investigation reveals how different weight dynamics introduced previously can be derived from objective functions generated from a few prototypical terms. This includes dynamic link matching as a special case of neural map formation. We focus in particular on the role of coordinate transformations to derive different weight dynamics from the same objective function. Coordinate transformations are also important in deriving normalization rules from constraints. Several examples illustrate how objective functions can help in understanding, generating, and comparing different models of neural map formation. The techniques used in this analysis may also be useful in investigating other types of neural dynamics. Dr. Laurenz Wiskott, CNL, Salk Institute, San Diego - wiskott at salk.edu, http://www.cnl.salk.edu/~wiskott/
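The central idea of the abstract, deriving weight dynamics as the gradient flow of an objective function and normalization rules from constraints, can be illustrated with a deliberately simple example: a Hebbian-style growth term plus a soft sum-to-one constraint on each row of the weight matrix. The particular objective below is an assumption chosen for brevity, not one of the objectives analyzed in the report.

import numpy as np

# Toy illustration: write down an objective over the weight matrix and obtain
# the weight dynamics as its gradient flow.  Objective, constraint and sizes
# are illustrative assumptions.

rng = np.random.default_rng(2)
n_in, n_out = 8, 8
C = rng.random((n_out, n_in))          # assumed input-output correlations

lam, dt = 5.0, 0.01

def objective(W):
    growth = np.sum(W * C)                                        # Hebbian growth term
    penalty = 0.5 * lam * np.sum((W.sum(axis=1) - 1.0) ** 2)      # soft sum-to-one constraint
    return growth - penalty

W = 0.1 * rng.random((n_out, n_in))
for step in range(2000):
    grad = C - lam * (W.sum(axis=1, keepdims=True) - 1.0)         # dH/dW
    W = np.clip(W + dt * grad, 0.0, None)                         # gradient flow + non-negativity
    if step % 500 == 0:
        print(f"step {step:4d}: objective H(W) = {objective(W):.3f}")

print("row sums after convergence:", np.round(W.sum(axis=1), 2))

Because the update is projected gradient ascent with a small step, the printed objective does not decrease, and the soft constraint keeps the row sums close to the target of one (offset slightly by the growth term, since the constraint is soft rather than hard).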
From cas-cns at cns.bu.edu Tue Apr 1 10:45:17 1997 From: cas-cns at cns.bu.edu (BU - Cognitive and Neural Systems) Date: Tue, 01 Apr 1997 10:45:17 -0500 Subject: VISION, RECOGNITION, ACTION: FINAL CALL Message-ID: <3.0.1.32.19970401104517.006b2534@cns.bu.edu> **** FINAL CALL FOR REGISTRATION ***** International Conference on VISION, RECOGNITION, ACTION: NEURAL MODELS OF MIND AND MACHINE May 28--31, 1997 Sponsored by the Center for Adaptive Systems and the Department of Cognitive and Neural Systems Boston University with financial support from the Defense Advanced Research Projects Agency and the Office of Naval Research This conference will include 21 invited lectures and 88 contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems see, understand, and act upon a changing world. The program is listed below. Since seating at the meeting is limited, early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If paying by check, mail to: Neural Models of Mind and Machine, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. If paying by credit card, mail to the above address, or fax to (617) 353-7755. The meeting registration fee will help to pay for a reception, 6 coffee breaks, and the meeting proceedings. A day of tutorials will be held on Wednesday, May 28. The tutorial registration fee helps to pay for 2 coffee breaks and a hard copy of the 7 hours of tutorial viewgraphs. See the meeting web page at http://cns-web.bu.edu/cns-meeting for further meeting information. **************************************** REGISTRATION FORM (Please Type or Print) Vision, Recognition, Action: Neural Models of Mind and Machine Boston University Boston, Massachusetts Tutorials: May 28, 1997 Meeting: May 29-31, 1997 Mr/Ms/Dr/Prof: Name: Affiliation: Address: City, State, Postal Code: Phone and Fax: Email: The conference registration fee includes the meeting program, reception, coffee breaks, and meeting proceedings. For registered participants in the conference, the regular tutorial registration fee is $20 and the student fee is $15. For attendees of only the tutorial, the regular registration fee is $30 and the student fee is $25. Two coffee breaks and a tutorial handout will be covered by the tutorial registration fee. CHECK ONE: [ ] $55 Conference plus Tutorial (Regular) [ ] $40 Conference plus Tutorial (Student) [ ] $35 Conference Only (Regular) [ ] $25 Conference Only (Student) [ ] $30 Tutorial Only (Regular) [ ] $25 Tutorial Only (Student) Method of Payment: [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). 
Type of card: Name as it appears on the card: Account number: Expiration date: Signature and date: **************************************** MEETING SCHEDULE (poster session details follow the oral session schedule) WEDNESDAY, MAY 28, 1997 (Tutorials): 7:30am---8:30am MEETING REGISTRATION 8:30am--10:00am Stephen Grossberg (Part I): "Vision, Brain, and Technology" 10:00am--10:30am COFFEE BREAK 10:30am--12:00pm Stephen Grossberg (Part II): "Vision, Brain, and Technology" 12:00pm---1:15pm LUNCH 1:15pm---3:15pm Gail Carpenter: "Self-Organizing Neural Networks for Learning, Recognition, and Prediction: ART Architectures and Applications" 3:15pm---3:45pm COFFEE BREAK 3:45pm---5:45pm Eric Schwartz: "Algorithms and Hardware for the Application of Space-Variant Active Vision to High Performance Machine Vision" THURSDAY, MAY 29, 1997 (Invited Lectures and Posters): 7:30am---8:30am MEETING REGISTRATION 8:30am---9:15am Robert Shapley: "Brain Mechanisms for Visual Perception of Occlusion" 9:15am--10:00am George Sperling: "An Integrated Theory for Attentional Processes in Vision, Recognition, and Memory" 10:00am--10:30am COFFEE BREAK AND POSTER SESSION I 10:30am--11:15am Patrick Cavanagh: "Direct Recognition" 11:15am--12:00pm Stephen Grossberg: "Perceptual Grouping during Neural Form and Motion Processing" 12:00pm---1:30pm LUNCH 1:30pm---2:15pm Robert Desimone: "Neuronal Mechanisms of Visual Attention" 2:15pm---3:00pm Ennio Mingolla: "Visual Search" 3:00pm---3:30pm COFFEE BREAK AND POSTER SESSION I 3:30pm---4:15pm Patricia Goldman-Rakic: "The Machinery of Mind: Models from Neurobiology" 4:15pm---5:00pm Larry Squire: "Brain Systems for Recognition Memory" 5:00pm---8:00pm POSTER SESSION I FRIDAY, MAY 30, 1997 (Invited and Contributed Lectures): 8:00am---8:30am MEETING REGISTRATION 8:30am---9:15am Lance Optican: "Neural Control of Rapid Eye Movements" 9:15am--10:00am John Kalaska: "Reaching to Visual Targets: Cerebral Cortical Neuronal Mechanisms" 10:00am--10:30am COFFEE BREAK 10:30am--11:15am Rodney Brooks: "Models of Vision-Based Human Interaction" 11:15am--12:00pm Alex Pentland: "Interpretation of Human Action" 12:00pm---1:30pm LUNCH 1:30pm---1:45pm Paolo Gaudiano: "Retinal Processing of IRFPA Imagery" 1:45pm---2:00pm Zili Liu: "2D Ideal Observers in 3D Object Recognition" 2:00pm---2:15pm Soheil Shams: "Object Segmentation and Recognition via a Network of Resonating Spiking Neurons" 2:15pm---2:30pm Wey-Shiuan Hwang and John Weng: "Autonomous Learning for Visual Attention Selection" 2:30pm---2:45pm Shane W. McWhorter, Theodore J. Doll, and Anthony A. Wasilewski: "Integration of Computational Vision Research Models for Visual Performance Prediction" 2:45pm---3:00pm Frank S. Holman III and Robert J. Marks II: "Platform Independent Geometry Verification Using Neural Networks Including Color Visualization" 3:00pm---3:30pm COFFEE BREAK 3:30pm---3:45pm Heiko Neumann and Wolfgang Sepp: "A Model of Cortico-Cortical Integration of Visual Information: Receptive Fields, Grouping, and Illusory Contours" 3:45pm---4:00pm Constance S. Royden: "A Biological Model for Computing Observer Motion in the Presence of Moving Objects" 4:00pm---4:15pm Michele Fabre-Thorpe, Ghislaine Richard, and Simon Thorpe: "Rapid Categorization of Natural Images in Rhesus Monkeys: Implications for Models of Visual Processing" 4:15pm---4:30pm Raju S. Bapi and Michael J. Denham: "Neural Network Model of Experiments on Set-Shifting Paradigm" 4:30pm---4:45pm Jose L. Contreras-Vidal and George E. 
Stelmach: "Adaptive Resonance Theory Computations in the Cortico-Striatal Circuits are Gated by Dopamine Activity during Reward-Related Learning of Approach Behavior" 4:45pm---5:00pm Mingui Sun, Murat Sonmez, Ching-Chung Li, and Robert J. Sclabassi: "Application of Time-Frequency Analysis, Artificial Neural Networks, and Decision Making Theory to Localization of Electrical Sources in the Brain Based on Multichannel EEG" 5:00pm---6:30pm MEETING RECEPTION 6:30pm---7:30pm Stuart Anstis Keynote Lecture: "Moving in Unexpected Directions" SATURDAY, MAY 31 (Invited Lectures and Posters): 8:00am---8:30am MEETING REGISTRATION 8:30am---9:15am Eric Schwartz: "Multi-Scale Vortex of the Brain: Anatomy as Architecture in Biological and Machine Vision" 9:15am--10:00am Terrence Boult: "Polarization Vision" 10:00am--10:30am COFFEE BREAK AND POSTER SESSION II 10:30am--11:15am Allen Waxman: "Opponent Color Models of Visible/IR Fusion for Color Night Vision" 11:15am--12:00pm Gail Carpenter: "Distributed Learning, Recognition, and Prediction in ART and ARTMAP Networks" 12:00pm---1:30pm LUNCH 1:30pm---2:15pm Tomaso Poggio: "Representing Images for Visual Learning" 2:15pm---3:00pm Michael Jordan: "Graphical Models, Neural Networks, and Variational Approximations" 3:00pm---3:30pm COFFEE BREAK AND POSTER SESSION II 3:30pm---4:15pm Andreas Andreou: "Mixed Analog/Digital Neuromorphic VLSI for Sensory Systems" 4:15pm---5:00pm Takeo Kanade: "Computational VLSI Sensors: Integrating Sensing and Processing" 5:00pm---8:00pm POSTER SESSION II POSTER SESSION I: Thursday, May 29, 1997 All posters will be displayed for the full day. Biological Vision Session: #1 Vlad Cardei, Brian Funt, and Kobus Barnard: "Modeling Color Constancy with Neural Networks" #2 E.J. Pauwels, P. Fiddelaers, and L. Van Gool: "Send in the DOGs: Robust Clustering using Center-Surround Receptive Fields" #3 Tony Vladusich and Jack Broerse: "Neural Networks for Adaptive Compensation of Ocular Chromatic Aberration and Discounting Variable Illumination" #4 Alexander Dimitrov and Jack D. Cowan: "Objects and Texture Need Different Cortical Representations" #5 Miguel Las-Heras, Jordi Saludes, and Josep Amat: "Adaptive Analysis of Singular Points Correspondence in Stereo Images" #6 Neil Middleton: "Properties of Receptive Fields in Radial Basis Function (RBF) Networks" #7 David Enke and Cihan Dagli: "Modeling the Bidirectional Interactions within and between the LGN and Area V1 Cells" #8 Scott Oddo, Jacob Beck, and Ennio Mingolla: "Texture Segregation in Chromatic Element-Arrangement Patterns" #9 David Alexander and Phil Sheridan: "Local from Global Geometry of Layers 2, 3, and 4C of the Macaque Striate Cortex" #10 Phil Sheridan and David Alexander: "Invariant Transformations on a Space-Variant Hexagonal Grid" #11 Irak Vicarte Mayer and Haruhisa Takahashi: "Simultaneous Edge Detection and Image Segmentation using Neural Networks and Color Theory" #12 Adam Reeves and Shuang Wu: "Visual Adaptation: Stochastic or Deterministic?" #13 Peter Kalocsai and Irving Biederman: "Biologically Inspired Recognition Model with Extension Fields" #14 Stephane J.M. Rainville, Frederick A.A. Kingdom, and Anthony Hayes: "Effects of Local Phase Structure on Motion Perception" #15 Alex Harner and Paolo Gaudiano: "A Neural Model of Attentive Visual Search" #16 Lars Liden, Ennio Mingolla, and Takeo Watanabe: "The Effects of Spatial Frequency, Contrast, Disparity, and Phase on Motion Integration between Different Areas of Visual Space" #17 Brett R. Fajen, Nam-Gyoon Kim, and Michael T. 
Turvey: "Robustness of Heading Perception Along Curvilinear Paths" #18 L.N. Podladchikova, I.A. Rybak, V.I. Gusakova, N.A. Shevtsova, and A.V. Golovan: "A Behavioral Model of Active Visual Perception" #19 Julie Epelboim and Patrick Suppes: "Models of Eye Movements during Geometrical Problem Solving" Biological Learning and Recognition Session: #20 George J. Kalarickal and Jonathan A. Marshall: "Visual Classical Rearing and Synaptic Plasticity: Comparison of EXIN and BCM Learning Rules" #21 Jean-Daniel Kant and Daniel S. Levine: "ARTCRITIC: An Adaptive Critic Model for Decision Making in Context" #22 L. Andrew Coward: "Electronic Simulation of Unguided Learning, Associative Memory, Dreaming, and Internally Generated Succession of Mental Images" #23 K. Torii, T. Kitsukawa, S. Kunifuji, and T. Matsuzawa: "A Synaptic Model by Temporal Coding" #24 Gabriel Robles-de-la-Torre and Robert Sekuler: "Learning a Virtual Object's Dynamics: Spectral Analysis of Human Subjects' Internal Representation" #25 Sheila R. Cooke, Robert Sekuler, Brendan Kitts, and Maja Mataric: "Delayed and Real-Time Imitation of Complex Visual `Gestures' " #26 Brendan Kitts, Sheila R. Cooke, Maja Mataric, and Robert Sekuler: "Improved Pattern Recognition by Combining Invariance Methods" #27 Gregory R. Mulhauser: "Can ART Dynamics Create a 'Centre of Cognitive Action' Capable of Supporting Phenomenal Consciousness?" #28 Bruce F. Katz: "The Pleasingness of Polygons" #29 Stephen L. Thaler: "Device for the Autonomous Generation of Useful Information" Control and Robotics Session: #30 John Demeris and Gillian Hayes: "Integrating Visual Perception and Action in a Robot Model of Imitation" #31 Danil V. Prokhorov and Donald C. Wunsch II: "A General Training Procedure for Stable Control with Adaptive Critic Designs" #32 Juan Cires and Pedro J. Zufiria: "Space Perception through a Self-Organizing Map for Mobile Robot Control" #33 Alex Guazzelli and Michael A. Arbib: "NeWG: The Neural World Graph" #34 Minh-Chinh Nguyen: "Robot Vision Without Calibration" #35 Erol Sahin and Paolo Gaudiano: "Real-Time Object Localization from Monocular Camera Motion" #36 Carolina Chang and Paolo Gaudiano: "A Neural Network for Obstacle Avoidance in Mobile Robots" #37 P. Gaussier, J.-P. Banquet, C. Joulain, A. Revel, and S. Zrehen: "Validation of a Hippocampal Model on a Mobile Robot" #38 J.-P. Banquet, P. Gaussier, C. Joulain, and A. Revel: "Learning, Recognition, and Generation of Tempero-Spatial Sequences by a Cortico-Hippocampal System: A Neural Network Model" POSTER SESSION II: Saturday, May 31, 1997 All posters will be displayed for the full day. Machine Vision Session: #1 Tyler C. Folsom: "Edge Detection by Sparse Sampling with Steerable Quadrature Filters" #2 Mario Aguilar and Allen M. Waxman: "Comparison of Opponent-Color Neural Processing and Principal Components Analysis in the Fusion of Visible and Thermal IR Imagery" #3 Magnus Snorrason: "A Multi-Resolution Feature Integration Model for the Next-Look Problem" #4 Charles B. Owen: "Application of Multiple Media Stream Correlation to Functional Imaging of the Brain" Machine Learning Session: #5 Mukund Balasubramanian and Stephen Grossberg: "A Neural Architecture for Recognizing 3-D Objects from Multiple 2-D Views" #6 Maartje E.J. Raijmakers and Peter C.M. Molenaar: "Exact ART: A Complete Implementation of an ART Network" #7 Danil V. Prokhorov and Lee A. Feldkamp: "On the Relationship between Derivative Adaptive Critics and Backpropagation through Time" #8 Tulay Yildirim and John S. 
Marsland: "Optimization by Back Propagation of Error in Conic Section Functions" #9 John M. Zachary, Jacob Barhen, Nageswara S. Rao, and Sitharama S. Iyengar: "A Dynamical Systems Approach to Neural Network Learning from Finite Examples" #10 Christos Orovas and James Austin: "Cellular Associative Neural Networks" #11 M.A. Grudin, P.J.G. Lisboa, and D.M. Harvey: "A Sparse Representation of Human Faces for Recognition" #12 Mike Y.W. Leung and David K.Y. Chiu: "Feature Selection for Two-Dimensional Shape Discrimination using Feedforward Neural Networks" #13 Robert Alan Brown: "The Creation of Order in a Self-Learning Duplex Network" #14 C.H. Chen: "Designing a Neural Network to Predict Human Responses" #15 Jean-Marc Fellous, Laurenz Wiskott, Norbert Kruger, and Christoph von der Malsburg: "Face Recognition by Elastic Bunch Graph Matching" #16 Gerard J. Rinkus: "A Monolithic Distributed Representation Supporting Multi-Scale Spatio-Temporal Pattern Recognition" #17 Harald Ruda and Magnus Snorrason: "Evaluating Automatically Constructed Hierarchies of Self-Organized Neural Network Classifiers" #18 Ken J. Tomita: "A Method for Building an Artificial Neural Network with 2/3 Dimensional Visualization of Input Data" #19 Fernando J. Corbacho and Michael A. Arbib: "Towards a Coherence Theory of the Brain and Adaptive Systems" #20 Gail A. Carpenter, Mark A. Rubin, and William W. Streilein: "ARTMAP-FD: Familiarity Discrimination of Radar Range Profiles" #21 James R. Williamson: "Multifield ARTMAP: A Network for Local, Incremental, Constructive Learning" #22 Marcos M. Campos: "Constructing Adaptive Orthogonal Wavelet Bases with Self-Organizing Feature Maps" #23 Sucharita Gopal, Curtis E. Woodcock, and Alan H. Strahler: "Fuzzy ARTMAP Classification of Global Land Cover from AVHRR Data Set" #24 A.F. Rocha and A. Serapiao: "Fuzzy Modeling of the Visual Mind" #25 Eun-Jin Kim and Yillbyung Lee: "Handwritten Hangul Recognition Based on Psychologically Motivated Model" #26 Jayadeva: "A Nonlinear Programming Based Approach to the Traveling Salesman Problem" #27 Haruhisa Takahashi: "Biologically Plausible Efficient Learning Via Local Delta Rule" #28 Raonak Zaman and Donald C. Wunsch II: "Prediction of Yarn Strength from Fiber Properties using Fuzzy ARTMAP" VLSI Session: #29 James Waskiewicz and Gert Cauwenberghs: "The Boundary Contour System on a Single Chip: Analog VLSI Architecture" #30 Marc Cohen, Pamela Abshire, and Gert Cauwenberghs: "Current Mode VLSI Fuzzy ART Processor with On-Chip Learning" #31 Shinji Karasawa, Senri Ikeda, Yong Hea Ku, and Jun Hum Chung: "Methodology of the Decision-Making Device" #32 Todd Hinck and Allyn E. Hubbard: "Circuits that Implement Shunting Neurons and Steerable Spatial Filters" Audition, Speech, and Language Session: #33 Colin Davis and Sally Andrews: "Competitive and Cooperative Effects of Similarity in Stationary and Self-Organizing Models of Visual Word Recognition" #34 Susan L. McCabe and Michael J. Denham: "Towards a Neurocomputational Model of Auditory Perception" #35 Dave Johnson: "A Wavelet-Based Auditory Planning Space for Production of Vowel Sounds" #36 Michael A. Cohen, Stephen Grossberg, and Christopher Myers: "A Neural Model of Context Effects in Variable-Rate Speech Perception" #37 Peter Cariani: "Neural Computation in the Time Domain" #38 N.K. Kasabov and R. 
Kozma: "Chaotic Adaptive Fuzzy Neural Networks and their Applications for Phoneme-Based Spoken Language Recognition" **************************************** MEETING HOTEL INFORMATION: For all hotels listed below, meeting attendees should make their own reservations directly with the hotel using the meeting name "Vision, Recognition, Action". 1. THE ELIOT HOTEL 370 Commonwealth Avenue Boston, MA 02215 (617) 267-1607 (800) 443-5468 Janet Brown, director of sales $130/night is the Boston University rate, and is the lowest rate that the Eliot will offer to anyone, whether individual or group. A block of 12 rooms is being held until April 28, 1997. This hotel is 3 or 4 blocks from the CNS Department. 2. HOWARD JOHNSONS 575 Commonwealth Avenue Boston, MA 02215 (617) 267-3100 (reservations) (617) 864-0300 (sales office) Eric Perryman, group sales office Rates: $115/night/single and $125/night/double. A block of 15 rooms is being held until April 28, 1997. This hotel is across the street from the CNS Department. 3. THE BUCKMINSTER 645 Beacon Street Boston, MA 02215 (617) 236-7050 (800) 727-2825 Dan Betro, group sales office A block of 29 rooms is being held until April 28, 1997. This hotel is a few steps away from the CNS Department. Pricing will vary depending on the kind of room; the range is $55/night up to $129/night. Please inquire directly with the hotel when making your reservation. 4. HOLIDAY INN, BROOKLINE 1200 Beacon Street Brookline, MA 02146 (617) 277-1200 (800) 465-4329 Lisa Pedulla, Director of Sales, x-320 $99/night single, $109/night double are the Boston University rates. A block of 25 rooms will be held for us until April 28, 1997. This hotel is within a mile of the CNS Department. There is a trolley stop directly outside the hotel that will take you to within a block of the CNS Department. For information about other Boston-area hotels, please see http://www.boston.com/travel/lodging.htm. From john at dcs.rhbnc.ac.uk Wed Apr 2 04:56:14 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Wed, 02 Apr 97 10:56:14 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199704020956.KAA31415@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-015: ---------------------------------------- Exact Learning of subclasses of CDNF formulas with membership queries by Carlos Domingo, Universitat Polit\`ecnica de Catalunya, Spain Abstract: We consider the exact learnability of subclasses of Boolean formulas from membership queries alone. We show how to combine known learning algorithms that use membership and equivalence queries to obtain new learning results only with memberships. In particular we show the exact learnability of read-$k$ monotone CDNF formulas, Sat-$k$ ${\cal O}(\log n)$-CDNF, and ${\cal O}(\sqrt{\log n})\mbox{-size CDNF}$ from membership queries only. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-016: ---------------------------------------- Decision Trees have Approximate Fingerprints by Victor Lavin, Universitat Polit\`ecnica de Catalunya, Spain Vijay Raghavan, Vanderbilt University, USA Abstract: We prove that decision trees exhibit the ``approximate fingerprint'' property, and therefore are not polynomially learnable using only equivalence queries. A slight modification of the proof extends this result to several other representation classes of boolean concepts which have been studied in computational learning theory. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-017: ---------------------------------------- Learning Monotone Term Decision Lists by David Guijarro, Victor Lavin, Universitat Polit\`ecnica de Catalunya, Spain Vijay Raghavan, Vanderbilt University, USA Abstract: We study the learnability of monotone term decision lists in the exact model of equivalence and membership queries. We show that, for any constant $k \ge 0$, $k$-term monotone decision lists are exactly and properly learnable with $n^{O(k)}$ membership queries in O($n^{k^3}$) time. We also show $n^{\Omega (k)}$ membership queries are necessary for exact learning. In contrast, both $k$-term monotone decision lists ($k \ge 2$) and general monotone decision lists are not learnable with equivalence queries alone. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-018: ---------------------------------------- Learning nearly monotone $k$-term DNF by Jorge Castro, David Guijarro, Victor Lavin, Universitat Polit\`ecnica de Catalunya, Spain Abstract: This note studies the learnability of the class $k$-term DNF with a bounded number of negations per term. We study the case of learning with membership queries alone, and give tight upper and lower bounds on the number of negations that makes the learning task feasible. We also prove a negative result for equivalence queries. Finally, we show that a slight modification in our algorithm proves that the considered class is also learnable in the Simple PAC model, extending Li and Vit\'anyi result for monotone $k$-term DNF. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-019: ---------------------------------------- $\delta$-uniform BSS Machines by Paolo Boldi, Sebastiano Vigna, Universit\`a degli Studi di Milano, Italy Abstract: A $\delta$-uniform BSS machine is almost like a standard BSS machine, but the negativity test is replaced by a ``smaller than $-\delta$'' test, where the threshold $\delta\in(0,1)$ is not known: in this way we represent the impossibility of performing exact equality tests. We prove that, for any real closed archimedean field $R$, the $\delta$-uniform semi-decidable sets are exactly the interiors of BSS semi-decidable sets. Then, we show that the sets semi-decidable by Turing machines are the sets semi-decidable by $\delta$-uniform machines with coefficients in $Q$ or $T$, the field of Turing computable numbers. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-020: ---------------------------------------- The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials by Wolfgang Maass, Berthold Ruf, Technische Universitaet Graz, Austria Abstract: Recently one has started to investigate the computational power of spiking neurons (also called ``integrate and fire neurons''). 
These are neuron models that are substantially more realistic from the biological point of view than the ones which are traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In particular they have the ability to communicate and manipulate analog variables in spatio-temporal coding, i.e.~encoded in the time points when specific neurons ``fire'' (and thus send a ``spike'' to other neurons). These preceding results have motivated the question which details of the firing mechanism of spiking neurons are essential for their computational power, and which details are ``accidental'' aspects of their realization in biological ``wetware''. Obviously this question becomes important if one wants to capture some of the advantages of computing and learning with spatio-temporal coding in a new generation of artificial neural nets, such as for example pulse stream VLSI. The firing mechanism of spiking neurons is defined in terms of their postsynaptic potentials or ``response functions'', which describe the change in their electric membrane potential as a result of the firing of another neuron. We consider in this article the case where the response functions of spiking neurons are assumed to be of the mathematically most elementary type: they are assumed to be step-functions (i.e. piecewise constant functions). This happens to be the functional form which has so far been adapted most frequently in pulse stream VLSI as the form of potential changes (``pulses'') that mimic the role of postsynaptic potentials in biological neural systems. We prove the rather surprising result that in models without noise the computational power of networks of spiking neurons with arbitrary piecewise constant response functions is strictly weaker than that of networks where the response functions of neurons also contain short segments where they increase respectively decrease in a linear fashion (which is in fact biologically more realistic). More precisely we show for example that an addition of analog numbers is impossible for a network of spiking neurons with piecewise constant response functions (with any bounded number of computation steps, i.e. spikes), whereas addition of analog numbers is easy if the response functions have linearly increasing segments. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-021: ---------------------------------------- On the Effect of Analog Noise in Discrete-Time Analog Computations by Wolfgang Maass, Technische Universitaet Graz, Austria Pekka Orponen, University of Jyv\"askyl\"a, Finland Abstract: We introduce a model for analog noise in analog computation with discrete time that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the VC-dimension of computational models with analog noise. 
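To make the contrast drawn in NC-TR-97-020 above more concrete, the following minimal Python sketch (an editorial illustration, not the construction from the report; all constants are arbitrary and no saturation or noise is modelled) shows why linearly rising postsynaptic potentials make addition of analog values in temporal coding straightforward: with piecewise-linear rising potentials, the threshold-crossing time of the output neuron is an affine function of the input spike times.

# Editorial toy illustration: with linearly rising PSPs, the firing time of a
# threshold neuron is an affine function of its input firing times, so analog
# values encoded as spike times can be summed/averaged.
import numpy as np

def firing_time(t_in, w, theta):
    """Potential V(t) = sum_i w_i * max(0, t - t_i); return the first grid time
    with V(t) >= theta (assumes the crossing happens before any PSP saturates)."""
    grid = np.linspace(0.0, 10.0, 100001)
    V = sum(wi * np.clip(grid - ti, 0.0, None) for wi, ti in zip(w, t_in))
    return grid[np.argmax(V >= theta)]

t_in = np.array([1.2, 0.7, 1.9])     # analog inputs coded as spike times
w = np.array([1.0, 1.0, 1.0])
theta = 6.0
# Closed form, valid once all PSPs are rising: t_out = (theta + sum_i w_i t_i) / sum_i w_i
print(firing_time(t_in, w, theta), (theta + np.dot(w, t_in)) / w.sum())   # agree up to grid resolution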
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-022: ---------------------------------------- Networks of Spiking Neurons Can Emulate Arbitrary Hopfield Nets in Temporal Coding by Wolfgang Maass and Thomas Natschl"ager, Technische Universitaet Graz, Austria Abstract: A theoretical model for analog computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. It turns out that the use of multiple synapses yields very noise robust mechanisms for analog computations via the timing of single spikes in networks of detailed compartmental neuron models. One arrives in this way at a method for emulating arbitrary Hopfield nets with spiking neurons in temporal coding, yielding new models for associative recall of spatio-temporal firing patterns. We also show that it suffices to store these patterns in the efficacies of \emph{excitatory} synapses. A corresponding \emph{layered} architecture yields a refinement of the synfire-chain model that can assume a fairly large set of different stable firing patterns for different inputs. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-023: ---------------------------------------- The Perceptron algorithm vs. Winnow: linear vs. logarithmic mistake bounds when few input variables are relevant by Jyrki Kivinen, University of Helsinki, Finland Manfred Warmuth, University of California, Santa Cruz, USA Peter Auer, Technische Universitaet Graz, Austria Abstract: We give an adversary strategy that forces the Perceptron algorithm to make $\Omega(k N)$ mistakes in learning monotone disjunctions over $N$ variables with at most $k$ literals. In contrast, Littlestone's algorithm Winnow makes at most $O(k\log N)$ mistakes for the same problem. Both algorithms use thresholded linear functions as their hypotheses. However, Winnow does multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm. The Perceptron algorithm is an example of {\em additive\/} algorithms, which have the property that their weight vector is always a sum of a fixed initial weight vector and some linear combination of already seen instances. We show that an adversary can force any additive algorithm to make $(N+k-1)/2$ mistakes in learning a monotone disjunction of at most $k$ literals. Simple experiments show that for $k\ll N$, Winnow clearly outperforms the Perceptron algorithm also on nonadversarial random data. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-024: ---------------------------------------- Approximating Hyper-Rectangles: Learning and Pseudo-random Sets by Peter Auer, Technische Universitaet Graz, Austria Philip Long, National University of Singapore, Singapore Aravind Srinivasan, National University of Singapore, Singapore Abstract: The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles have been actively studied recently because (i) they are a subproblem common to the derandomization of depth-2 (DNF) circuits and derandomizing Randomized Logspace, and (ii) they approximate the distribution of $n$ independent multivalued random variables. We present improved upper bounds for a class of such problems of ``approximating'' high-dimensional rectangles that arise in PAC learning and pseudorandomness. 
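The additive/multiplicative distinction at the heart of NC-TR-97-023 is easy to reproduce in a few lines. The Python sketch below is an editorial illustration with arbitrary parameters, not the adversary construction from the report; it runs both update rules on random data labelled by a monotone k-literal disjunction, and for k much smaller than N Winnow typically makes far fewer mistakes, in line with the abstract's remark about nonadversarial random data.

# Editorial sketch of the two mistake-driven update rules: Perceptron updates
# weights additively, Winnow multiplicatively; threshold and rates are
# illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, k, T = 200, 3, 5000
relevant = rng.choice(N, size=k, replace=False)
X = rng.integers(0, 2, size=(T, N))
y = (X[:, relevant].sum(axis=1) > 0).astype(int)      # monotone disjunction labels

def run(update):
    w, theta, mistakes = np.ones(N), float(N), 0
    for x, target in zip(X, y):
        pred = int(np.dot(w, x) >= theta)
        if pred != target:
            mistakes += 1
            update(w, x, target)
    return mistakes

def perceptron_update(w, x, target):                  # additive update
    w += (1 if target else -1) * x

def winnow_update(w, x, target, alpha=2.0):           # multiplicative update
    w *= alpha ** ((1 if target else -1) * x)

print("Perceptron mistakes:", run(perceptron_update))
print("Winnow mistakes:    ", run(winnow_update))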
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-025: ---------------------------------------- On Learning from Multi-Instance Examples: Empirical Evaluation of a Theoretical Approach by Peter Auer, Technische Universitaet Graz, Austria Abstract: We describe a practical algorithm for learning axis-parallel high-dimensional boxes from multi-instance examples. The first solution to this practical learning problem arising in drug design was given by Dietterich, Lathrop, and Lozano-Perez. A theoretical analysis was performed by Auer, Long, Srinivasan, and Tan. In this work we derive a competitive algorithm from theoretical considerations which is completely different from the approach taken by Dietterich et al. Our algorithm uses only simple statistics of the training data for learning and avoids potentially hard computational problems which were solved by heuristics by Dietterich et al. In empirical experiments our algorithm performs quite well, although it does not reach the performance of the fine-tuned algorithm of Dietterich et al. We conjecture that our approach can also be fruitfully applied to other learning problems where certain statistical assumptions are satisfied. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-026: ---------------------------------------- Computing Functions with Spiking Neurons in Temporal Coding by Berthold Ruf, Technische Universitaet Graz, Austria Abstract: For fast neural computations within the brain it is very likely that the timing of single firing events is relevant. Recently Maass has shown that under certain weak assumptions functions can be computed in temporal coding by leaky integrate-and-fire neurons. Here we demonstrate with the help of computer simulations using GENESIS that biologically more realistic neurons can compute linear functions in a natural and straightforward way based on the basic principles of the construction given by Maass. One only has to assume that a neuron receives all its inputs in a time interval of approximately the length of the rising segment of its excitatory postsynaptic potentials. We also show that under certain assumptions there exists within this construction some type of activation function being computed by such neurons, which allows the fast computation of arbitrary continuous bounded functions. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-027: ---------------------------------------- Hebbian Learning in Networks of Spiking Neurons Using Temporal Coding by Berthold Ruf, Michael Schmitt, Technische Universitaet Graz, Austria Abstract: Computational tasks in biological systems that require short response times can be implemented in a straightforward way by networks of spiking neurons that encode analogue values in temporal coding. We investigate the question of how spiking neurons can learn on the basis of differences between firing times. In particular, we provide learning rules of the Hebbian type in terms of single spiking events of the pre- and postsynaptic neuron and show that the weights approach some value given by the difference between pre- and postsynaptic firing times with arbitrarily high precision. Our learning rules give rise to a straightforward possibility for realizing very fast pattern analysis tasks with spiking neurons.
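A minimal editorial sketch of the kind of fixed point described in NC-TR-97-027 follows; the rule and constants below are illustrative assumptions, not the learning rule given in the report. A weight that is nudged toward the difference between post- and presynaptic firing times converges to exactly that difference under repeated pairings.

# Editorial sketch: a Hebbian-style timing rule whose fixed point is the
# pre/post firing-time difference; eta and the spike times are arbitrary.
def update(w, t_pre, t_post, eta=0.2):
    """Move the weight toward the firing-time difference (t_post - t_pre)."""
    return w + eta * ((t_post - t_pre) - w)

w = 0.0
for _ in range(50):                 # repeated pairings of the same spike pair
    w = update(w, t_pre=2.0, t_post=5.5)
print(w)                            # -> approaches 3.5 = t_post - t_pre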
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-028: ---------------------------------------- Overview of Learning Systems produced by NeuroCOLT Partners by NeuroCOLT Partners Abstract: This NeuroCOLT Technical Report documents a number of systems that have been produced within the NeuroCOLT partnership. It includes only a summary of each system, together with pointers to where the system is located and where more information about its performance and design can be found. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-029: ---------------------------------------- On Bayesian Case Matching by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri, University of Helsinki, Finland Abstract: In this paper we present a new probabilistic formalization of the case-based reasoning paradigm. In contrast to earlier Bayesian approaches, the new formalization does not need a transformation step between the original case space and the distribution space. We concentrate on applying this Bayesian framework to the case matching problem, and propose a probabilistic scoring metric for this task. In the experimental part of the paper, the Bayesian case matching score is evaluated empirically by using publicly available real-world case bases. The results show that when presented with cases from which some of the feature values have been removed, a relatively small number of remaining values is sufficient for retrieving the original case from the case base by using the proposed measure. The experiments also show that the approach is computationally very efficient. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-030: ---------------------------------------- Batch Classifications with Discrete Finite Mixtures by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri, University of Helsinki, Finland Abstract: In this paper we study batch classification problems where multiple predictions can be made simultaneously, instead of performing the classifications independently one at a time. For the predictions we use the model family of discrete finite mixtures, where, by introducing a hidden latent variable, we implicitly assume missing data that has to be estimated in order to be able to construct models from sample data. The main contribution of this paper is to demonstrate how the standard EM algorithm can be modified for estimating both the missing latent variable data and the batch classification data at the same time, thus allowing us to use the same algorithm both for constructing the models from training data and for making predictions. In our framework the amount of data available for making predictions is greater than with the traditional approach, as the algorithm can also exploit the information available in the query vectors. In the empirical part of the paper, the results obtained by the batch classification approach are compared to those obtained by standard (independent) predictions by using public domain classification data sets. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-031: ---------------------------------------- Bayes Optimal Lazy Learning by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri, University of Helsinki, Finland Abstract: In this paper we present a new probabilistic formalization of the lazy learning approach.
In our Bayesian framework, moving from the construction of an explicit hypothesis to a lazy learning approach, where predictions are made by combining the training data at query time, is equivalent to integrating out all the model parameters. Hence in Bayesian Lazy Learning the predictions are made by using all the (infinitely many) models. We present the formalization of this general framework, and illustrate its use in practice in the case of the Naive Bayes classifier model family. The Bayesian lazy learning approach is validated empirically with public domain data sets and the results are compared to the performance of the traditional, single model Naive Bayes. The general framework described in this paper can be applied with any formal model family, and to any discrete prediction task where the number of simultaneously predicted attributes is small, which includes for example all classification tasks prevalent in the machine learning literature. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-032: ---------------------------------------- On Predictive Distributions and Bayesian Networks by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri, University of Helsinki, Finland Abstract: In this paper we are interested in discrete prediction problems in a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general framework, where we consider a set of probability distributions defined by some parametric model class. The standard Bayesian approach is to compute the posterior probability for the model parameters, given a prior distribution and sample data, and fix the parameters to the instantiation with the {\em maximum a posteriori} probability. A more accurate predictive distribution can be obtained by computing the {\em evidence}, i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of {\em stochastic complexity} for determining predictive distributions. We then describe how these predictive inference methods can be realized in the case of Bayesian networks. In particular, we demonstrate the use of Jeffreys' prior as the prior distribution for computing the evidence predictive distribution. It can be shown that the evidence predictive distribution with Jeffreys' prior approaches the new stochastic complexity predictive distribution in the limit as the amount of sample data increases. For computational reasons, in the experimental part of the paper the three predictive distributions are compared by using the tree-structured simple Naive Bayes model. The experimentation with several public domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense, especially with small training sets. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-033: ---------------------------------------- Partial Occam's Razor and its Applications by Carlos Domingo, Tatsuie Tsukiji and Osamu Watanabe, Tokyo Institute of Technology, Japan Abstract: We introduce the notion of ``partial Occam algorithm''. A partial Occam algorithm produces a succinct hypothesis that is partially consistent with given examples, where the proportion of consistent examples is a bit more than half.
By using this new notion, we propose one approach for obtaining a PAC learning algorithm. First, as shown in this paper, a partial Occam algorithm is equivalent to a weak PAC learning algorithm. Then by using boosting techniques of Schapire or Freund, we can obtain an ordinary PAC learning algorithm from this weak PAC learning algorithm. We demonstrate with some examples that some improvement is possible by this approach, in particular in the hypothesis size. First, we obtain a (non-proper) PAC learning algorithm for $k$-DNF, which has similar sample complexity as Littlestone's Winnow, but produces hypothesis of size polynomial in $d$ and $\log k$ for a $k$-DNF target with $n$ variables and $d$ terms ({\it Cf.}~ The hypothesis size of Winnow is $\CO(n^k)$). Next we show that 1-decision lists of length $d$ with $n$ variables are (non-proper) PAC learnable by using $\dsp{\CO\rpr{\frac{1}{\epsilon} \rpr{\log \frac{1}{\delta}+16^d\log n(d+\log \log n)^2}}}$ examples within polynomial time w.r.t.\ $n$, $2^d$, $1/\epsilon$, and $\log 1/\delta$. Again, we obtain a sample complexity similar to Winnow for the same problem but with a much smaller hypothesis size. We also show that our algorithms are robust against random classification noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-034: ---------------------------------------- Algorithms for Learning Finite Automata from Queries: A Unified View by Jos\'e Balc\'azar, Josep D\'iaz, Ricard Gavalda Universitat Polit\`ecnica de Catalunya, Spain Osamu Watanabe, Tokyo Institute of Technology, Japan Abstract: In this survey we compare several known variants of the algorithm for learning deterministic finite automata via membership and equivalence queries. We believe that our presentation makes it easier to understand what is going on and what the differences between the various algorithms mean. We also include the comparative analysis of the algorithms, review some known lower bounds, prove a new one, and discuss the question of parallelizing this sort of algorithms. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-035: ---------------------------------------- Using Fewer Examples to Simulate Equivalence Queries by Ricard Gavalda, Universitat Polit\`ecnica de Catalunya, Spain Abstract: It is well known that an algorithm that learns exactly using Equivalence queries can be transformed into a PAC algorithm that asks for random labelled examples. The first transformation due to Angluin (1988) uses a number of examples quadratic in the number of queries. Later, Littlestone (1989) and Schuurmans and Greiner (1995) gave transformations using linearly many examples. We present here another analysis of Littlestone's transformation which is both simpler and gives better leading constants. Our constants are still worse than Schuurmans and Greiner's, but while ours is a worst-case bound on the number of examples to achieve PAC learning, theirs is only an expected one. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-036: ---------------------------------------- A Dichotomy Theorem for Learning Quantified Boolean Formulas by Victor Dalmau, Universitat Polit\`ecnica de Catalunya, Spain Abstract: We consider the following classes of quantified boolean formulas. Fix a finite set of basic boolean functions. Take conjunctions of these basic functions applied to variables and constants in arbitrary way. Finally quantify existentially or universally some of the variables. 
We prove the following {\em dichotomy theorem}: For any set of basic boolean functions, the resulting set of formulas is either polynomially learnable from equivalence queries alone or else it is not PAC-predictable even with membership queries under cryptographic assumptions. Furthermore we identify precisely which sets of basic functions are in which of the two cases. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-037: ---------------------------------------- Discontinuities in Recurrent Neural Networks by Ricard Gavald\`a, Universitat Polit\`ecnica de Catalunya, Spain Hava Siegelmann, Technion, Israel Abstract: This paper studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net-function and a sigmoid-like continuous activation-function. The authors introduce ``arithmetic networks'' as ARNN augmented with a few simple discontinuous (e.g., threshold) neurons. They argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. A proof is provided to show that arithmetic networks are computationally equivalent to networks comprised of neurons that compute divisions and polynomial net-functions inside sigmoid-like continuous activation functions. Further, the authors prove that these arithmetic networks are equivalent to the Blum-Shub-Smale (BSS) model, when the latter is restricted to a bounded number of registers. With regard to implementation on digital computers, the authors demonstrate that arithmetic networks with rational weights require exponential precision; but even with very simple real weights arithmetic networks are not subject to precision bounds. As such, they cannot be approximated on digital machines. This is in contrast with the ARNN, which are known to demand only precision that is linear in the computation time. When complex periodic discontinuous neurons (e.g., sine, tangent, fractional parts) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, this highly discontinuous network can solve the presumably intractable class of PSPACE-complete problems in polynomial time. -------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-97-001 can be accessed and printed as follows: % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-97-001.ps.Z ftp> bye % zcat nc-tr-97-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-97-002-title.ps.Z nc-tr-97-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-97-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.
The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt or directly to the archive: ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports Best wishes John Shawe-Taylor From krogh at frame.cbs.dtu.dk Thu Apr 3 10:24:46 1997 From: krogh at frame.cbs.dtu.dk (Anders Krogh) Date: Thu, 3 Apr 1997 09:24:46 -0600 Subject: Papers on ensemble learning Message-ID: <9704030924.ZM24091@frame.cbs.dtu.dk> Dear connectionists, the following paper is now available from our web page (http://www.ph.ed.ac.uk/~pkso/papers/EnsemblePREVII.ps.gz) : STATISTICAL MECHANICS OF ENSEMBLE LEARNING Anders Krogh and Peter Sollich (Physical Review E, 55:811-825, 1997) Abstract Within the context of learning a rule from examples, we study the general characteristics of learning with ensembles. The generalization performance achieved by a simple model ensemble of linear students is calculated exactly in the thermodynamic limit of a large number of input components and shows a surprisingly rich behavior. Our main findings are the following. For learning in large ensembles, it is advantageous to use underregularized students, which actually overfit the training data. Globally optimal generalization performance can be obtained by choosing the training set sizes of the students optimally. For smaller ensembles, optimization of the ensemble weights can yield significant improvements in ensemble generalization performance, in particular if the individual students are subject to noise in the training process. Choosing students with a wide range of regularization parameters makes this improvement robust against changes in the unknown level of corruption of the training data. An abbreviated version of this paper appeared in NIPS 8 and is also available (http://www.ph.ed.ac.uk/~pkso/papers/EnsembleNIPSVI.ps.gz). Both papers are also available from http://www.cbs.dtu.dk/krogh/refs.html. Further papers of potential interest to readers of connectionists can be found on our home pages: * Peter Sollich (http://www.ph.ed.ac.uk/~pkso/publications): learning from queries, online learning, finite size effects in neural networks, ensemble learning * Anders Krogh (http://www.cbs.dtu.dk/krogh/): Most current work on using hidden Markov models in `computational biology.' All comments and suggestions are welcome - Anders Krogh and Peter Sollich -------------------------------------------------------------------------- Peter Sollich Department of Physics University of Edinburgh e-mail: P.Sollich at ed.ac.uk Kings Buildings phone: +44 - (0)131 - 650 5293 Mayfield Road fax: +44 - (0)131 - 650 5212 Edinburgh EH9 3JZ, U.K. -------------------------------------------------------------------------- Anders Krogh Center for Biological Sequence Analysis (CBS) Technical University of Denmark Building 206 DK-2800 Lyngby DENMARK Phone: +45 4525 2470 Fax: +45 4593 4808 E-mail: krogh at cbs.dtu.dk _____________________________________________ From nburgess at lbs.ac.uk Thu Apr 3 09:16:34 1997 From: nburgess at lbs.ac.uk (Neil Burgess) Date: Thu, 3 Apr 1997 09:16:34 BST Subject: Pre-prints available - Neural Networks in the Capital Markets Message-ID: <9B468B447F1@neptune.lbs.ac.uk> Neural Networks in the Capital Markets: The following NNCM-96 pre-prints are now available on request. 
Please send your postal address to: boguntula at lbs.ac.uk ===================================================== ASSET ALLOCATION ACROSS EUROPEAN EQUITY INDICES USING A PORTFOLIO OF DYNAMIC COINTEGRATION MODELS A. N. BURGESS Department of Decision Science London Business School Regents Park, London, NW1 4SA, UK In modelling financial time-series, the model selection process is complicated by the presence of noise and possible structural non-stationarity. Additionally the near-efficiency of financial markets combined with the flexibility of advanced modelling techniques creates a significant risk of "data-snooping". These factors combine to make trading a single model a very risky proposition, particularly in a situation which allows for high leverage, such as futures trading. We believe that the risks inherent in relying on a given model can be reduced by combining a whole set of models and, to this end, describe a population-based methodology which involves building a portfolio of complementary models. We describe an application of the technique to the problem of modelling a set of European equity indices using a portfolio of cointegration-based models. ===================================================== FORECASTING VOLATILITY MISPRICING P. J. BOLLAND & A. N. BURGESS Department of Decision Science London Business School Regents Park, London, NW1 4SA, UK A simple strategy is employed to exploit volatility mispricing based on discrepancies between implied and actual market volatility. The strategy uses forward and Log contracts to either buy or sell volatility depending on whether volatility is over or under priced. As expected, buying volatility gives small profits on average but with occasional large losses in adverse market conditions. In this paper multivariate non-linear methods are used to forecast the returns of a Log contract portfolio. The explanatory power of implied volatility and the volatility term structure from several indices (FTSE, CAC, DAX) are investigated. Neural network methodologies are benchmarked against linear regression. The use of both multivariate data and non-linear techniques are shown to significantly improve the accuracy of predictions. Keywords: Options, Volatility Mispricing, Log contract, Volatility Term Structure, ===================================================== From w.penny at ic.ac.uk Thu Apr 3 09:22:20 1997 From: w.penny at ic.ac.uk (w.penny@ic.ac.uk) Date: Thu, 3 Apr 1997 15:22:20 +0100 Subject: Research jobs in neural nets and pattern recognition Message-ID: <22983.199704031422@albert.ee.ic.ac.uk> THREE POST-DOCTORAL RESEARCH POSITIONS IN PATTERN RECOGNITION / NEURAL NETWORKS RESEARCH Three post-doctoral research positions are available within the Neural Systems Section of the Department of Electrical & Electronic Engineering to work on the theory and application of advanced pattern recognition techniques, in particular the use of Bayesian methods and neural networks. Two positions are funded for two years and the third nominally for three years with a yearly evaluation. All projects involve research in statistical pattern recognition with applications in the biomedical field. Experience in pattern recognition and Bayesian statistics would be an advantage. A good understanding of data processing (especially signal processing) techniques is desired as is experience of UNIX, C and Matlab. The positions are funded by the Jefferiss Research Trust, the European Commission and British Aerospace plc respectively. 
The salary scale will be RA1A, GBP 14,732 - 22,143 per annum (exclusive of London Allowance of GBP 2,134) depending on age and experience. Further information may be obtained from http://www.ee.ic.ac.uk/research/neural/positions.html or via e-mail to Dr Stephen Roberts (s.j.roberts at ic.ac.uk). The closing date for applications is April 11th 1997. In recent years, great interest has developed in the use of non-classical methods for statistical analysis of data as part of a general move towards the use of artificial intelligence methods. One genre which has shown itself to be particularly suitable is that of connectionist models, a subset of which are referred to as artificial neural networks (ANNs). Classical statistical methods rely upon the use of simple models, such as linear or logistic regression, in order to 'learn' relationships between variables and outcomes. ANNs offer a far more flexible model set; indeed, it has been shown that they have the property of universal approximation, so they are able, in principle, to estimate any set of arbitrary relationships between variables. Furthermore, they may model non-linear coupling between sets of variables. Part of the momentum of the recent development of ANNs for pattern recognition, regression and estimation problems must be attributed to the manner in which ANNs conform to many of the traditional statistical approaches, i.e. they may estimate Bayesian probabilities in the case of classification and conditional averages in the case of regression. 1) The use of Neural Networks to Predict the Development and Progression of Kaposi's Sarcoma (KS). This is a joint project funded by the Jefferiss Research Trust between the Department of Electrical and Electronic Engineering, Imperial College of Science, Technology & Medicine and the Department of Genito-urinary Medicine, St. Mary's Hospital. Kaposi's sarcoma (KS) is a vascular tumour, which is more common and often aggressive in patients with underlying immunosuppression (post-transplant KS and AIDS-associated KS). KS was first described by the Hungarian pathologist Moritz Kaposi in 1872, yet still remains something of a clinical enigma, being an unusual tumour of unknown origin. The aim of this research is to determine factors that influence the variable progression rate of KS in HIV infected individuals. There is currently no means of predicting which patients will develop KS and no understanding of the relationship between the forms of the disease. The aim of the project is to carry out multi-variable analyses in order to define clinical end-points and provide guidelines for better patient management. A number of variables will be available to the system. The reliability and utility of each with regard to the prediction of patient outcome, however, is generally unknown. Classical regression analysis offers some powerful methods of selection and ranking within a subset of features or variables. Whilst such methods should be used for completeness and comparison, it is noted that recent developments in Bayesian learning theory have offered the possibility of assessing the utility of variables from within the ANN structure. Each input variable has a separate weighting factor, or, in Bayesian terminology, a hyper-prior, associated with it. This technique has become known as automatic relevance determination or ARD. Such an assessment is devoid of the strong assumptions of independence and linearity of most of the classical regression methods.
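For readers unfamiliar with ARD, the following Python sketch shows the idea in its simplest setting, Bayesian linear regression with MacKay-style evidence updates. The project above applies the idea inside neural networks, so this is only an editorial illustration with arbitrary constants, not the scheme that will be used; it shows how a per-input prior precision can switch irrelevant inputs off.

# Editorial ARD sketch for Bayesian linear regression (evidence updates);
# beta (noise precision), iteration count and data are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=n)   # inputs 1, 3, 4 are irrelevant

alpha = np.ones(d)        # one prior precision ("relevance") per input weight
beta = 100.0              # assumed known noise precision
for _ in range(50):
    A = np.diag(alpha) + beta * X.T @ X          # posterior precision matrix
    Sigma = np.linalg.inv(A)
    m = beta * Sigma @ X.T @ y                   # posterior mean of the weights
    gamma = 1.0 - alpha * np.diag(Sigma)         # how well-determined each weight is
    alpha = gamma / (m ** 2 + 1e-12)             # evidence update; large alpha => irrelevant input

print(np.round(m, 3))      # weights of irrelevant inputs shrink toward zero
print(np.round(alpha, 1))  # their precisions (alphas) grow large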
It is feasible for an ANN to produce not only a set of output variables (predictions or classifications, for example) but also an associated set of confidence or validation measures (describing the probable error on each output). This enables the tracking of predictions of future events in a more robust framework and furthermore allows for the accurate fusion of information from more than one source and the incorporation of temporal information, i.e., poor-quality information from the present time may be suppressed in favour of more reliable information from past or future as it becomes available. If we may regard the system as aiming to produce a probability distribution in some 'outcome space', then several possible approaches to analysis are made available. As temporal information is retained (i.e. outcomes are based upon the entire course of the patient's history, not just present information) we may seek information regarding the effect of each piece of information (test result or partial diagnosis) on the probability distribution in the 'outcome space'. Two pieces of information may be obtained from this approach: how important a partial decision or test result is to the probability of certain outcomes, and how important it is to changing the uncertainty we have in the outcome results. Clearly, the goal will be to indicate tests and/or procedures which not only increase the survivability probabilities but also make the estimated outcomes less variant, so we have more confidence in the predictions (this means not only increasing the height of a favourable node in the posterior probability space, but also attempting to reduce the variance of the distribution). In order to accommodate multiple output hypotheses we propose to utilise a procedure similar to that detailed in (Bishop 1995) whereby the output distribution is modelled multi-modally. This has the added benefit that individual modes (possible outcomes) may be tracked separately. This representation is also similar to that taken in a mixture of experts approach. REFERENCES 1. Bishop CM. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, 1995. 2. Ripley BD. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996. 3. Roberts SJ and Penny W. Novelty, Confidence and Errors in Connectionist Systems. Proceedings of IEE colloquium on fault detection and intelligent sensors, IEE, September 1996. 4. Penny W and Roberts SJ. Neural Networks with Error Bars. Departmental report, also submitted to IEEE transactions on neural networks, February 1997, available from http://www.ee.ic.ac.uk/staff/hp/sroberts.html 2) SIESTA (EU funded project) SIESTA is an EU-funded project which involves Imperial College and 10 other European partners. The aim of the project is to define and produce a system which is capable of continuous evaluation of the state of the brain during the sleep-wake cycle. Such an automated system is of enormous value in the clinical field, and the research into multi-channel signal processing, fusion and pattern recognition forms a challenge to the most modern techniques. The state of the brain will, primarily, be monitored via its electrical activity (the EEG). One of the most well-known approaches from the literature to achieve a continuous description of EEG state is the system developed by Roberts & Tarassenko (1992a, 1992b). This approach will be used as a general basis for the research in SIESTA.
Roberts & Tarassenko (henceforth, R&T) used a self-organizing feature map (SOM) to perform unsupervised topographic mapping of feature vectors consisting of 10 coefficients of a Kalman filter algorithm applied to the raw EEG. This self-organizing network discovered eight distinct clusters in which the brain state remained preferentially. From biehl at physik.uni-wuerzburg.de Fri Apr 4 03:31:35 1997 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Fri, 4 Apr 1997 10:31:35 +0200 (MESZ) Subject: preprint on on-line unsupervised learning Message-ID: <199704040831.KAA29194@wptx08.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/1997/WUE-ITP-97-003.ps.gz The following manuscript is now available via anonymous ftp (see below for the retrieval procedure), or, alternatively from http://www.physik.uni-wuerzburg.de/~biehl ------------------------------------------------------------------ "Specialization processes in on-line unsupervised learning" Michael Biehl, Ansgar Freking, Georg Reents, and Enno Schl"osser Contribution to the Minerva Workshop on Mesoscopics, Fractals and Neural Networks Eilat, Israel, March 1997 Ref: WUE-ITP-97-003 Abstract
From sami at guillotin.hut.fi Fri Apr 4 07:56:49 1997 From: sami at guillotin.hut.fi (Sami Kaski) Date: Fri, 4 Apr 1997 14:56:49 +0200 Subject: Thesis on data exploration with SOMs available Message-ID: <199704041256.OAA01930@guillotin.hut.fi> The following Dr.Tech. thesis is available at http://nucleus.hut.fi/~sami/thesis/thesis.html (html-version) http://nucleus.hut.fi/~sami/thesis.ps.gz (compressed postscript, 300K) http://nucleus.hut.fi/~sami/thesis.ps (postscript, 2M) The articles that belong to the thesis can be accessed through the page http://nucleus.hut.fi/~sami/thesis/node3.html --------------------------------------------------------------- Data Exploration Using Self-Organizing Maps Samuel Kaski Helsinki University of Technology Neural Networks Research Centre P.O.Box 2200 (Rakentajanaukio 2C) FIN-02015 HUT, Finland Finding structures in vast multidimensional data sets, be they measurement data, statistics, or textual documents, is difficult and time-consuming. Interesting, novel relations between the data items may be hidden in the data. The self-organizing map (SOM) algorithm of Kohonen can be used to aid the exploration: the structures in the data sets can be illustrated on special map displays. In this work, the methodology of using SOMs for exploratory data analysis or data mining is reviewed and developed further. The properties of the maps are compared with the properties of related methods intended for visualizing high-dimensional multivariate data sets. In a set of case studies the SOM algorithm is applied to analyzing electroencephalograms, to illustrating structures of the standard of living in the world, and to organizing full-text document collections. Measures are proposed for evaluating the quality of different types of maps in representing a given data set, and for measuring the robustness of the illustrations the maps produce. The same measures may also be used for comparing the knowledge that different maps represent. Feature extraction must in general be tailored to the application, as is done in the case studies. There exists, however, an algorithm called the adaptive-subspace self-organizing map, recently developed by Kohonen, which may be of help. It extracts invariant features automatically from a data set. The algorithm is here characterized in terms of an objective function, and demonstrated to be able to identify input patterns subject to different transformations. Moreover, it could also aid in feature exploration: the kernels that the algorithm creates to achieve invariance can be illustrated on map displays similar to those that are used for illustrating the data sets.
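For reference, the basic sequential SOM algorithm that the thesis builds on can be stated in a few lines of Python. The sketch below is only an editorial reminder of the standard algorithm, with arbitrary grid size, rates and schedules, and does not reproduce any of the thesis experiments.

# Editorial sketch of Kohonen's SOM (sequential version, Gaussian neighbourhood).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 3))                  # toy 3-dimensional data
rows, cols, T = 10, 10, 20000

# grid coordinates and randomly initialised codebook (prototype) vectors
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
W = rng.normal(size=(rows * cols, data.shape[1]))

for t in range(T):
    x = data[rng.integers(len(data))]
    frac = t / T
    eta = 0.5 * (1.0 - frac)                       # decaying learning rate
    sigma = max(0.5, 5.0 * (1.0 - frac))           # shrinking neighbourhood radius
    winner = np.argmin(((W - x) ** 2).sum(axis=1)) # best-matching unit
    d2 = ((grid - grid[winner]) ** 2).sum(axis=1)  # grid distance to the winner
    h = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian neighbourhood function
    W += eta * h[:, None] * (x - W)                # move units toward the sample

print(W.shape)   # (100, 3): each map unit is now a prototype in data space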
From riegler at ifi.unizh.ch Fri Apr 4 08:51:10 1997 From: riegler at ifi.unizh.ch (Alex Riegler) Date: Fri, 4 Apr 1997 15:51:10 +0200 Subject: NTCS-97 Call For Participation Message-ID: CALL FOR PARTICIPATION /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ International Workshop N E W T R E N D S I N C O G N I T I V E S C I E N C E NTCS '97 /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ "Does Representation need Reality?" Perspectives from Cognitive Science, Neuroscience, Epistemology, and Artificial Life Vienna, Austria, May 14 - 16, 1997 with plenary talks by: Larry Cauller, Georg Dorffner, Ernst von Glasersfeld, Stevan Harnad, Wolf Singer, and Sverre Sjoelander organized by the Austrian Society of Cognitive Science (ASoCS) =========================================================================== Latest information can be retrieved from the conference WWW-page =========================================================================== P u r p o s e ___________________________________________________________________________ The goal of this single-track conference is to investigate and discuss new approaches and movements in cognitive science in a workshop-like atmosphere. Among the topics which seem to have emerged in the last years are: embodiment of knowledge, system theoretic and computational neuroscience approaches to cognition, dynamics in recurrent neural architectures, evolutionary and artificial life approaches to cognition, and (epistemological) implications for perception and representation, constructivist concepts and the problem of knowledge representation, autopoiesis, implications for epistemology and philosophy (of science). Evidence for a failure of the traditional understanding of neural representation converges from several fields. Neuroscientific results in the last decade have shown that single cell representations with hierarchical processing towards representing units seems not the way the cortex represents environmental entities. Instead, distributed cell ensemble coding has become a popular concept for representation, both in computational and in empirical neuroscience. However, new problems arise from the new concepts. The problem of binding the distributed parts into a uniform percept can be "solved" by introducing synchronization of the member neurons. A deeper (epistemological) problem, however, is created by recurrent architectures within ensembles generating an internal dynamics in the network. The cortical response to an environmental stimulus is no longer dominated by stimulus properties themselves, but to a considerable degree by the internal state of the network. Thus, a clear and stable reference between a representational state (e.g. in a neuron, a Hebbian ensemble, an activation state, etc.) and the environmental state becomes questionable. Already learned experiences and expectancies might have an impact on the neural activity which is as strong as the stimulus itself. Since these internally stored experiences are constantly changing, the notion of (fixed) representations is challenged. At this point, system theory and constructivism, both investigating the interaction between environment and organism at an abstract level, come into the scene and turn out to provide helpful epistemological concepts. The goal of this conference is to discuss these phenomena and their implications for the understanding of representation, semantics, language, cognitive science, and artificial life. 
Unlike many conferences in this field, the focus is on interdisciplinary cooperation and on conceptual and epistemological questions, rather than on technical details. We are trying to achieve this by giving more room to discussion and interaction between the participants (e.g., invited comments on papers, distribution of papers to the participants before the conference, etc.). In keeping with the interdisciplinary character of cognitive science, we welcome papers/talks from the fields of artificial life, empirical, cognitive, and computational neuroscience, philosophy (of science), epistemology, anthropology, computer science, psychology, and linguistics. T a l k s ___________________________________________________________________________ NOTE: Names with * are invited speakers. Constructivism Ernst von Glasersfeld* Piaget's legacy: Cognition as adaptive activity Sverre Sjoelander* How animals handle reality: the adaptive aspect of representation Annika Wallin Is there a way to distinguish representation from perception... Tom Routen Habitus and Animats General Epistemology and Methodology W.F.G. Haselager Is cognitive science advancing towards behaviorism? Michael Pauen Reality and representation William Robinson Representation and cognitive explanation Matthias Scheutz The ontological status of representations Anthony Chemero Two types of anti-representationism: a taxonomy Georg Schwarz Can representation get reality? Neuroscience Larry Cauller* NeuroInteractivism: Explaining emergence without representation Wolf Singer* The observer in the brain Steven Bressler The dynamic manifestation of cognitive structures in the cerebral cortex Erich Harth Sketchpads in and beyond the brain Marius Usher Active Neural representations: neurophysiological data and its implications Symbol Grounding and Communication Georg Dorffner* The connectionist route to embodiment and dynamicism Stevan Harnad* Keeping a grip on the real/virtual distinction in this representationalist age Mark Wexler Must mental representation be internal? Tom Ziemke Rethinking Grounding Christian Balkenius Explorations in synthetic pragmatics Horst Hendriks-Jansen Does natural cognition need internal knowledge structures? Nathan Chandler On the importance of reality in representations Peter Gaerdenfors Does semantics need reality? P o s t e r s ___________________________________________________________________________ Chris Browne Iconic Learning and Epistemology Mark Claessen RabbitWorld: the concept of space can be learned Valentin Constantinescu Interaction between perception & expectancy... Andrew Coward Unguided categorization, direct and symbolic representation, and evolution of cognition... David Davenport PAL: A constructivist model of cognitive activity Karl Diller Representation and reality: where are the rules of grammar Richard Eiser Representation and the social reality Robert French When coffee cups are like old elephants Maurita Harney Representation and its metaphysics Daniel Hutto Cognition without representation? Amy Ione Symbolic creation and re-representation of reality Sydney Lamb Top-Down Modeling, Bottom-up Learning Michael Luntley Real Representations Ralf Moeller Perception through anticipation Ken Mogi Response selectivity, neuron doctrine, and Mach's principle in perception Alfredo Pereira The term "representation" in cognitive neuroscience Michael Ramscar Judgement of association: problems with cognitive theories of analogy Hanna Risku Constructivist Consequences: does translation need reality?
Sabine Weiss Cooperation of different neural networks during single word and sentence processing R e g i s t r a t i o n ___________________________________________________________________________ To register please fill out the registration form at the bottom of this CFP and send it by... o Email to franz-markus.peschl at univie.ac.at, or by o Fax to +43-1-408-8838 (attn. M.Peschl), or by o Mail to Markus Peschl, Dept.for Philosophy of Science (address below) Registration Fee (includes admission to talks, presentations, and proceedings): Member * 1300 ATS (about 118 US$) Non-Member 1800 ATS (about 163 US$) Student Member ** 500 ATS (about 45 US$) Student Non-Member 1300 ATS (about 118 US$) *) Members of the Austrian Society of Cognitive Science **) Requires proof of valid student ID C o n f e r e n c e S i t e a n d A c c o m o d a t i o n ___________________________________________________________________________ The conference takes place in a small beautiful baroque castle in the suburbs of Vienna; the address is: Schloss Neuwaldegg Waldegghofg. 5 A-1170 Wien Austria Tel: +43 1 485 3605 Fax: +43 1 485 3605-112 It is surrounded by a beautiful forest and a good (international and Viennese gastronomic) infrastructure. On the tram it takes only 20 minutes to the center of Vienna (see overview). (Limited) Accommodation is provided by the castle (about 41 US$ per night (single), 30 US$ per night, per person (double) including breakfast). Please contact the telephone number above. You can find more information about Vienna and accommodation at the Vienna Tourist Board or at the Intropa Travel agent Tel: +43-1-5151-242. Note: In case you want to stay over the weekend we refer you to the following hotel which is near the conference site (single about 75 US$ / 850 ATS per night): Hotel Jaeger Hernalser Hauptstrasse 187 A-1170 Wien Austria Tel: +43 1 486 6620 Fax: +43 1 486 6620 8 D e s t i n a t i o n V i e n n a ? ___________________________________________________________________________ Vienna, Austria, can be reached internationally by plane or train. The Vienna Schwechat airport is located about 16 km from the city center. From the airport, the city air-terminal can be reached by bus (ATS 60.- per person) or taxi (about ATS 400). Rail-passengers arrive at one of the main stations which are located almost in the city center. From the air-terminal and the railway stations the congress site and hotels can be reached easily by underground (U-Bahn), tramway, or bus. A detailed description will be given to the participants. In May the climate is mild in Vienna. It is the time when spring is at its climax and everything is blooming. The weather is warm with occasional (rare) showers. The temperature is about 18 to 24 degrees Celsius. More information about Vienna and Austria on the web: Welcome to Vienna Scene Vienna City Wiener Festwochen - Vienna Festival Public Transport in Vienna (subway) Welcome to Austria General information about Austria Austria Annoted S c i e n t i f i c C o m m i t t e e ___________________________________________________________________________ R. Born Univ. of Linz (A) R. Born Univ. of Linz (A) G. Dorffner Univ. of Vienna (A) E. v. Glasersfeld Univ. of Amherst, MA (USA) S. Harnad Univ. of Southampton (GB) M. Peschl Univ. of Vienna (A) A. Riegler Univ. of Zurich (CH) H. Risku Univ. of Skovde (S) M. Scheutz Univ. of Indiana (USA) W. Singer Max Planck Institut, Frankfurt (D) S. Sjoelander Linkoeping University (S) A. v. 
Stein Neuroscience Institute, La Jolla (USA) O r g a n i z i n g C o m m i t t e e ___________________________________________________________________________ M. Peschl Univ. of Vienna (A) A. Riegler Univ. of Zurich (CH) S p o n s o r i n g O r g a n i z a t i o n s ___________________________________________________________________________ o Christian Doppler Laboratory for Expert Systems (Vienna University of Technology) o Oesterreichische Forschgungsgemeinschaft o Austrian Federal Ministry of Science, Transport and the Arts o City of Vienna A d d i t i o n a l I n f o r m a t i o n ___________________________________________________________________________ For further information on the conference contact: Markus Peschl Dept. for Philosophy of Science University of Vienna Sensengasse 8/10 A-1090 Wien Austria Tel: +43-1-402-7601/41 Fax: +43-1-408-8838 Email: franz-markus.peschl at univie.ac.at General information about the Austrian Society for Cognitive Science can be found on the Society webpage or by contacting Alexander Riegler AILab, Dept. of Computer Science University of Zurich Winterthurerstr. 190 CH-8057 Zurich Switzerland Email: riegler at ifi.unizh.ch R e g i s t r a t i o n f o r m ___________________________________________________________________________ I participate at the Workshop "New Trends in Cognitive Science (NTCS'97)" Full Name ........................................................................ Full Postal Address: ........................................................................ ........................................................................ ........................................................................ Telephone Number (Voice): Fax: ..................................... .................................. Email address: ........................................................................ Payment in ATS (= Austrian Schillings; 1 US$ is currently about 11 ATS). This fee includes admission to talks, presentations, and proceedings: [ ] Member * 1300 ATS (about 118 US$) [ ] Non-Member 1800 ATS (about 163 US$) [ ] Student Member ** 500 ATS (about 45 US$) [ ] Student Non-Member 1300 ATS (about 118 US$) *) Members of the Austrian Society of Cognitive Science **) Requires proof of valid student ID Total: .................... ATS [ ] Visa [ ] Master-/Eurocard Name of Cardholder ........................................ Credit Card Number ........................................ Expiration Date ................. Date: ................ Signature: ........................................ Please send this form by... o Email to franz-markus.peschl at univie.ac.at, or by o Fax to +43-1-408-8838 (attn. M.Peschl), or by o Mail to Markus Peschl, Dept.for Philosophy of Science, Univ. of Vienna, Sensengasse 8/10, A-1090 Wien, Austria From movellan at ergo.ucsd.edu Fri Apr 4 17:27:09 1997 From: movellan at ergo.ucsd.edu (Javier R. Movellan) Date: Fri, 4 Apr 1997 14:27:09 -0800 Subject: UCSD Cogsci TR Message-ID: <199704042227.OAA31931@ergo.ucsd.edu> The following technical report is available online at http://cogsci.ucsd.edu (follow links to Tech Reports & Software ) Physical copies are also available (see the site for information). Analysis of Direction Selectivity Arising From Recurrent Cortical Interactions. 
Paul Mineiro and David Zipser UCSD Cogsci TR.97.03 The relative contributions of feedforward and recurrent connectivity to the direction selective responses of cells in layer IVB of primary visual cortex are currently the subject of debate in the neuroscience community. Recently, biophysically detailed simulations have shown that realistic direction selective responses can be achieved via recurrent cortical interactions between cells with non-direction selective feedforward input \cite{Koch:DS,Maex:SpikeMotion}. Unfortunately, the complexity of these models, while desirable for detailed comparison with biology, makes them difficult to analyze mathematically. In this paper a relatively simple cortical dynamical model is used to analyze the emergence of direction selective responses via recurrent interactions. A comparison between a model based on our analysis and physiological data is presented. The approach also allows analysis of the recurrently propagated signal, revealing the predictive nature of the implementation. From fritzke at neuroinformatik.ruhr-uni-bochum.de Mon Apr 7 13:35:31 1997 From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke) Date: Mon, 7 Apr 1997 19:35:31 +0200 (MET DST) Subject: Java software and TR available (competitive learning) Message-ID: <199704071735.TAA29955@urda.neuroinformatik.ruhr-uni-bochum.de> Dear connectionists, this is to announce the availability of version 1.3 of the "DemoGNG" Java applet and a new version of the accompanying technical report draft "Some Competitive Learning Methods". The TR describes in detail all methods implemented in DemoGNG as well as some others (such as $k$-means and growing cell structures). URLs and descriptions follow below. Enjoy, Bernd Fritzke and Hartmut Loos URLs: ======== DemoGNG, for immediate execution: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html DemoGNG, for download (972 kBytes): ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/DemoGNG-1.3.tar.gz TR, HTML: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper/ TR, Postscript, 45 pages, 376 kBytes: ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/software/NN/DemoGNG/sclm.ps.gz DemoGNG 1.3 ======== DemoGNG is a Java applet which is distributed as free software under the GNU PUBLIC LICENSE and implements several methods related to competitive learning. It is possible to experiment with the methods using various (hardwired) data distributions and observe the learning process. DemoGNG is highly interactive (e.g. dragging of neurons during self-organization is possible) and has already been used for neural network courses in several countries. The following algorithms are now implemented: (new) LBG, Hard Competitive Learning (constant and exponentially decaying learning rate), Neural Gas, Competitive Hebbian Learning, Neural Gas with Competitive Hebbian Learning, Growing Neural Gas, (new) Self-Organizing Map, (new) Growing Grid. Features added since the previously released version include * display of Voronoi diagrams * display of Delaunay triangulations * additional probability distributions * a detailed manual * sound switched off by default 8v) Draft Report ======== Some Competitive Learning Methods Bernd Fritzke Systems Biophysics Institute for Neural Computation Ruhr-Universit"at Bochum This report describes several algorithms from the literature, all related to competitive learning. A uniform terminology is used for all methods.
Moreover, identical examples are provided to allow a qualitative comparisons of the methods. The complete Java source code as well as a postscript version of this document may be accessed by ftp. -- Bernd Fritzke * Institut f"ur Neuroinformatik Tel. +49-234 7007845 Ruhr-Universit"at Bochum * Germany FAX. +49-234 7094210 WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html From josh at eas.uccs.edu Mon Apr 7 18:00:09 1997 From: josh at eas.uccs.edu (Alspector) Date: Mon, 7 Apr 1997 15:00:09 -0700 Subject: IWANNT*97 program & registration info Message-ID: <199704072200.PAA13032@eas.uccs.edu> International Workshop on Applications of Neural Networks (and other intelligent systems) to Telecommunications (IWANNT*97) Melbourne, Australia June 9-11, 1997 You are invited to an international workshop on applications of neural networks and other intelligent systems to problems in telecommunications and information networking. This is the third workshop in a series that began in Princeton, New Jersey on October 18-20, 1993 and continued in Stockholm, Sweden on May 22-24, 1995. This conference will be at the University of Melbourne on the Monday through Wednesday (June 9 - 11, 1997) just before the Australian Conference on Neural Networks (ACNN) which will be at the same location on June 11 - 13 (Wednesday - Friday). There will be a hard cover proceedings available at the workshop. There is further information on the IWANNT home page at: http://ece-www.colorado.edu/~timxb/iwannt.html Organizing Committee General Chair Josh Alspector, U. of Colorado Program Chair Rod Goodman, Caltech Publications Chair Timothy X Brown,U. of Colorado Treasurer Suzana Brown, U. of Colorado Publicity Atul Chhabra, NYNEX Lee Giles, NEC Research Institute Local Arrangements Adam Kowalczyk, Telstra, Chair Chris Leckie, Telstra Andrew Jennings, RMIT M. Palaniswami, U. of Melbourne Robert Slaviero, Sig Proc Ass. Jacek Szymanski, Telstra Program Committee Nader Azarmi, British Telecom Miklos Boda, Ellemtel Harald Brandt, Ellemtel Tzi-Dar Chiueh, National Taiwan U Bruce Denby, U of Versailles Simon Field, Nortel Francoise Fogelman, SLIGOS Marwan A. Jabri, Sydney Univ. Thomas John, Southwestern Bell S Y Kung, Princeton University Tadashi Sone, ATR Scott Toborg, SBC TRI IEEE Liaison Steve Weinstein, NEC Conference Administrator Helen Alspector IWANNT Conference Administrator Univ. of Colorado at Col. Springs Dept. of Elec. & Comp. Eng. Colorado Springs, CO 80933-7150 (719) 262-3351 (719) 262-3589 (fax) neuranet at mail.uccs.edu Tentative Conference Program Monday, June 9, 1997: 7:00 Registration and Coffee Session 1: 8:30 J. Alspector, Welcome 8:45 Invited Speaker: TBA 9:30 Techniques for Telecommunications Fraud Management, S.D.H. Field, P.W.Hobson 10:00 Employing Remote Monitoring and Artificial Intelligence Techniques to Develop the Proactive Network Management, A.S.M. De Franceschi, M. A. da Rocha, H.L. Weber, C.B.Westphall 10:30 Break 11:00 Local Diagnosis for Real-Time Network Traffic Management, P. Leray, P.Gallinari, E. Didelet 11:30 Intelligent Capacity Evaluation/Planning with Neural Network Clustering Algorithms, L. Lewis, U. Datta, S. Sycamore 12:00 Neural Networks for Network Topological Design, D.B. Hoang 12:30 Lunch Session 2: 13:30 Self Adaptive Network Utilisation, S. Olafsson 14:00 Neural Networks for Computing Blocking Probabilities in ATM Virtual Subnetworks, J. Br, M. Boda, A. Farag, T. Henk 14:30 Fuzzy Mean Flow Estimation with Neural Networks for Multistage ATM Systems, A. 
Murgu 15:00 Break 15:30 Generation of ATM Video Traffic Using Neural Networks, A. Casilari, A.Reyes, A. Daz-Estrella, F. Sandoval 16:00 Model Generation of Aggregate ATM Traffic Using A Neural Control with Accelerated Self-Scaling, E. Casilari, A. Jurado, G.Pansard, A. Daz Estrella, F. Sandoval 16:30 Dynamic Routing in ATM Networks with Effective Bandwidth Estimation by Neural Networks, Z. Fan, P. Mars Tuesday, June 10, 1997: 8:00 Registration and Coffee Session 3: 8:30 Invited Speaker: Tadashi Sone: TBA 9:00 Neural Networks for Location Prediction in Mobile Networks, J.Biesterfeld, E. Ennigrou, K. Jobmann 9:30 Reinforcement Learning and Supervised Learning Control of Dynamic Channel Allocation for Mobile Radio Systems, E.J.Wilmes, K.T. Erickson 10:00 Equalisation of Rapidly Time-Varying Channels Using an Efficient RBF Neural Network, Q. Gan, N. Sundararajan, P. Saratchandran, R.Subramanian 10:30 Break 11:00 Equalization and the Impulsive MOOSE: Fast Adaptive Signal Recovery in Very Heavy Tailed Noise, E. Dubossarsky, T.R. Osborn, S. Reisenfeld 11:30 Neural Receiver Structures Based on Self-Organizing Maps in Nonlinear Multipath Channels, K. Raivio, J. Henriksson, O. Simula 12:00 Using Neural Networks for Alarm Correlation in Cellular Phone Networks, H.Wietgrefe K-D. Tuchs, K. Jobmann, G. Carls, P.Frhlich, W.Nejdl, S.Steinfeld 12:30 Lunch Session 4: 13:30 A Space-Based Radio Frequency Transient Event Classifier, K.R.Moore, P.C. Blain, M.P. Caffrey, R.C. Franz, K.M. Henneke, R.G.Jones 14:00 Traffic Trends Analysis using Neural Networks, T. Edwards, D.S.W.Tansley, R.J. Frank, N. Davey 14:30 Learning Customer Profiles to Generate Cash over the Internet, C.Giraud-Carrier, M. Ward 15:00 Break 15:30 Keyword Search in Handwritten Documents, A. Kolcz, J. Alspector, M.Augusteijn, R. Carlson, G. Viorel Popescu 16:00 Face Recognition Using Hierarchical Neural Networks, Y.-H.Huang,C.-J.Liou, S.-T. Wu, L.-G. Chen, T.-D. Chiueh 16:30 Query Word-Concept Clusters in a Legal Document Collection, T.D.Gedeon, B.J. Briedis, R.A. Bustos, G. Greenleaf, A. Mowbray Wednesday, June 10, 1997: 8:00 Registration and Coffee Session 5: 9:00 Invited Speaker: TBA 9:30 A Novel Microsatellite Control System, M.W. Tilden, J.R. Frigo, K.R.Moore 10:00 Overload Control for Distributed Call Processors Using Neural Networks, S.Wu, K.Y.M. Wong 10:00 Break 10:30 Neural Networks for Resource Allocation in Telecommunication Networks, A.Christiansen, A. Herschtal, M. Herzberg, A. Kowalczyk, J.Szymanski 11:00 New Q-routing Approaches to Adaptive Traffic Control, L. Hrault, D.Drou, M. Gordon 11:30 Neural Network Aided Soft Decision Decoding of Block Codes with Random Percentage Training, S.W. Mok, M.Z. Wang, K.C. Li 12:00 Lunch Session 6: 13:30 Control of Self-Similar ATM Call Traffic by Reinforcement Learning, J.Carlstrm, E. Nordstrm 14:00 Towards a Hardware Implementation of Reinforcement Learning for Call Admission Control in Networks for Integrated Services, K.Steenhaut, A.Now, M. Fakir, E. Dirkx 14:30 ATM Connection Admission Control using Modular Neural Networks, C.-K.Tham, W.-S. Soh 15:00 Break 15:30 Admission Control in ATM Networks using Fuzzy-ARTMAP, I.Mahadevan, C.S. Raghavendra 16:00 Bandwidth Dimensioning for Data Traffic, T. X Brown 16:30 ATM Traffic Policing using a Classifier System, K.C. Tsui, B. Azvine 15:00 Adjourn ACNN'97 & IWANNT'97 ACCOMMODATION LIST Victoria Market Backpackers / Global Backpackers 238 Victoria Street, North Melbourne, Victoria, 3051, Australia Phone: +61 3 9328 3728 Fax: +61 3 9329 8966 Dorm. 
bed: $14/night Single room: $25/night, shared facilities Double/twin room: $35/night, shared facilities Features: Social environment, bar downstairs, within city centre, 10 min. walk from conference venue. Melbourne YHA Hostels - Queensberry Hill 78 Howard Street, North Melbourne, Victoria, 3051, Australia Phone: +61 3 9329 8599 Fax: +61 3 9326 8427 Dorm. bed: $20/night Single bed: $45/night Twin bed: $58/night Single with bathroom: $55/night Twin with bathroom: $68/night Features: Bistro, within city centre, 10 min. walk from conference venue. College Accommodation University of Melbourne - Ormond College, Queen's College, Trinity College, Newman College Grattan Street, Parkville, Victoria, 3052, Australia Phone/fax: +61 3 9347 9320 Rates: $40-60/night Features: Situated within University grounds (conference venue), budget student accommodation comprises single bed, desk, wardrobe, shared facilities and breakfast included. Elizabeth Tower Motel 792 Elizabeth Street, Melbourne, Victoria, 3000, Australia Phone: +61 3 9347 9211 Fax: +61 3 9347 0396 Single or double: $95/night Features: Airconditioning, minibar, coffee & tea facilities, fully licensed restaurant & bar, free car parking & outdoor swimming pool. Opposite conference venue (Melb. Uni.), 5 min. tram ride to city centre. Royal Parade Irico Hotel 441 Royal Parade, Parkville, Victoria, 3052, Australia Phone: +61 3 9380 9222 Fax: +613 9387 6448 Single: $99.50/night (includes breakfast) Twin share: $113.50/night (includes breakfast) Features: Air conditioning, movies, minibar, 24 hour room service, valet service and coffee & tea making facilities, fully licensed restaurant and bar, swimming pool, room service & car parking. Situated 2.5 km (10 min. tram ride) from city centre, and 10 min. walk from conference venue. The Townhouse Melbourne 701 Swanston Street, Melbourne, Victoria, 3000, Australia Phone: +61 3 9347 7811 Fax: +61 3 9347 8225 Twin: $110/night (suites available) Features: Air conditioning, movies, minibar, coffee & tea facilities, writing desk, restaurant, valet laundry service, guest parking, bar, outdoor swimming pool, 5 min. tram ride to city centre, 5 min. walk to conference venue. Grand Hyatt Hotel 123 Collins Street, Melbourne, Victoria, 3000, Australia Phone: +61 3 9657 1234 Fax: +61 3 9650 3491 Deluxe twin or king: $270/night Regency Club twin or king: $320/night (includes breakfast & morn/afternoon refreshments) Features: Airconditioned, movies, minibar, 24hr room service, coffee & tea facilities, two restaurants & food court, indoor heated swimming pool, gym, tennis courts. Situated within city centre, 10-15 min. tram ride to conference venue. Sheraton Towers Hotel 1 Brown Street, Southbank, Victoria, 3006, Australia Phone: +61 3 9696 3100 Fax: +61 3 9690 5889 Double/twin: $270/night City view: $310/night Deluxe: $360/night - 30 days advanced booking required on all rooms Features: Full buffet breakfast included, airconditioned, king-sized beds, desk, marble bath & shower, minibar, movies, tea & coffee facilities, 24hr room service, 3 restaurants, 2 bars, 1 night club, health club access, indoor heated swimming pool, SPA & sauna. Situated within city centre, 15-20 min. tram ride to conference venue. 
-------------------------------------------------------------------------- REGISTRATION FORM -------------------------------------------------------------------------- International Workshop on Applications of Neural Networks (and other intelligent systems) to Telecommunications (IWANNT*97) Melbourne, Australia June 9-11, 1997 Name: Institution: Mailing Address: Telephone: Fax: E-mail: Make check ($400; $500 after May 1, 1997; $200 students) out to IWANNT*97. Please make sure your name is on the check. Registration includes breaks and proceedings available at the conference. Mail to: Helen Alspector IWANNT Conference Administrator Univ. of Colorado at Col. Springs Dept. of Elec. & Comp. Eng. P.O. Box 7150 Colorado Springs, CO 80933-7150 (719) 262-3351 (719) 262-3589 (fax) neuranet at mail.uccs.edu Site The conference will be held at the University of Melbourne. There are several good hotels within walking distance of the university. More information will be sent to registrants or upon request. From mackay at mrao.cam.ac.uk Wed Apr 9 16:46:00 1997 From: mackay at mrao.cam.ac.uk (David J.C. MacKay) Date: Wed, 9 Apr 97 16:46 BST Subject: Information Theory, Probability and Neural Networks Message-ID: The following *draft* book is available for anonymous ftp. Feedback from the information theory and neural networks communities would be warmly welcomed. ======================================================================== "Information Theory, Probability and Neural Networks" by David J.C. MacKay ------------------------------------------------------------------------- An undergraduate / graduate textbook. This book will feature: * lots of figures and demonstrations. * more than one hundred exercises with worked solutions. * up to date exposition of: . source coding - including arithmetic coding, `bits back' coding . channel coding - including Gallager codes, turbo codes . neural networks - including Gaussian processes . Monte Carlo methods - including Hybrid Monte Carlo, Overrelaxation The current draft (April 9th 1997) is Draft 1.2.3 (308 pages). (Estimated to be about 70% complete.) =================== COMPLETED CHAPTERS =============================== 1. Introduction to Information Theory --------- Data Compression ------------------------------------------- 2. The Source Coding Theorem 3. Data Compression II: Symbol Codes 4. Data Compression III: Stream Codes --------- Noisy Channel Coding --------------------------------------- 5. Communication over a noisy channel 6. The noisy channel coding theorem 7. Error correcting codes & real channels --------- Probabilities ---------------------------------------------- 8. Bayesian Inference 9. Ising Models 10. Variational Methods 11. Monte Carlo methods --------- Neural networks ----------------------------------------------- 12. Introduction to neural networks 13. The single neuron as a classifier 14. Capacity of a single neuron 15. Learning as Inference 16. The Hopfield network 17. From Hopfield networks to Boltzmann machines 18. 
Supervised learning in multilayer networks ==================== INCOMPLETE CHAPTERS ============================== ------- Unsupervised learning ----------------------------------------- Clustering Independent component analysis Helmholtz machines A single neuron as an unsupervised learning element ------- Probability, data modelling and supervised neural networks ---- Laplace's method Graphical models and belief propagation Complexity control and model comparison Gaussian processes ------- Unifying chapters --------------------------------------------- Hash codes: codes for efficient information retrieval `Bits back' source coding Low density parity check codes Turbo codes ======================================================================== downloading instructions: ------------------------------------------------------------------------ The book (1.1Mbytes) can be clicked from this web page in Cambridge, England: http://wol.ra.phy.cam.ac.uk/mackay/itprnn/#book or from this MIRROR in Toronto, Canada: http://www.cs.toronto.edu/~mackay/itprnn/#book If you prefer to use ftp, ftp wol.ra.phy.cam.ac.uk (131.111.48.24) anonymous your name cd pub/mackay/itprnn binary get book.ps2.gz (tree saving two pages to a page version) OR get book.ps.gz (ordinary version) quit gunzip book.* ========================================================================== David J.C. MacKay email: mackay at mrao.cam.ac.uk www: http://wol.ra.phy.cam.ac.uk/mackay/ Cavendish Laboratory, tel: (01223) 339852 fax: 354599 home: 276411 Madingley Road, international code: +44 1223 Cambridge CB3 0HE. U.K. room: 982 Rutherford Building From tommi at cse.ucsc.edu Tue Apr 8 13:48:29 1997 From: tommi at cse.ucsc.edu (Tommi Jaakkola) Date: Tue, 8 Apr 1997 10:48:29 -0700 Subject: Thesis and paper available: variational methods Message-ID: <199704081748.KAA26123@baa.cse.ucsc.edu> The following Ph.D. thesis is available on the web at ftp://psyche.mit.edu/pub/tommi/thesis.ps.gz -------------------------------------------------------------------- Variational Methods for Inference and Estimation in Graphical Models Tommi S. Jaakkola MIT Graphical models enhance the representational power of probability models through qualitative characterization of their properties. This also leads to greater efficiency in terms of the computational algorithms that empower such representations. The increasing complexity of these models, however, quickly renders exact probabilistic calculations infeasible. We propose a principled framework for approximating graphical models based on variational methods. We develop variational techniques from the perspective that unifies and expands their applicability to graphical models. These methods allow the (recursive) computation of upper and lower bounds on the quantities of interest. Such bounds yield considerably more information than mere approximations and provide an inherent error metric for tailoring the approximations individually to the cases considered. These desirable properties, concomitant to the variational methods, are unlikely to arise as a result of other deterministic or stochastic approximations. The thesis consists of the development of this variational methodology for probabilistic inference, Bayesian estimation, and towards efficient diagnostic reasoning in the domain of internal medicine. 
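The flavour of the bounds involved can be illustrated, in generic form rather than as the specific transformations developed in the thesis, by the standard Jensen lower bound on a marginal likelihood: for any distribution $Q(z)$ over the hidden variables $z$,

$$ \log P(x) \;=\; \log \sum_z Q(z)\,\frac{P(x,z)}{Q(z)} \;\ge\; \sum_z Q(z) \log \frac{P(x,z)}{Q(z)}, $$

with equality when $Q(z) = P(z|x)$. Optimizing the right-hand side over a tractable family of $Q$ gives a computable lower bound of the kind that, together with corresponding upper bounds, provides the error metric mentioned in the abstract.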
================================================================ The following technical report is now available via ftp: ftp://psyche.mit.edu/pub/tommi/varqmr.ps (~400kb) ftp://psyche.mit.edu/pub/tommi/varqmr.ps.Z (~150kb) ftp://psyche.mit.edu/pub/tommi/varqmr.ps.gz (~ 95kb) ------------------------------------------------------------- Variational methods and the QMR-DT database Tommi S. Jaakkola and Michael I. Jordan MIT We describe variational approximation methods for efficient probabilistic reasoning, applying these methods to the problem of diagnostic inference in the QMR-DT database. The QMR-DT database is a large-scale belief network based on statistical and expert knowledge in internal medicine. The size and complexity of this network render exact probabilistic diagnosis infeasible for all but a small set of cases. This has hindered the development of the QMR-DT network as a practical diagnostic tool and has hindered researchers from exploring and critiquing the diagnostic behavior of QMR. In this paper we describe how variational approximation methods can be applied to the QMR network, resulting in fast diagnostic inference. We evaluate the accuracy of our methods on a set of standard diagnostic cases and compare to stochastic sampling methods. MIT Computational Cognitive Science Technical Report 9701. From Dragan.Obradovic at mchp.siemens.de Thu Apr 10 04:15:44 1997 From: Dragan.Obradovic at mchp.siemens.de (Dragan Obradovic) Date: Thu, 10 Apr 1997 10:15:44 +0200 (MET DST) Subject: Book: Information Theory and Neural Networks Message-ID: <199704100815.KAA08415@sava.mchp.siemens.de> Due to the high interest on applications of Information Theory in Neural Networks, we have installed a new WWW home page for the Springer Verlag book: "AN INFORMATION-THEORETIC APPROACH TO NEURAL COMPUTING" ------------------------------------------------------- by Gustavo Deco and Dragan Obradovic at the following address: http://www.siemens.de/research/NeuralNet/BOOK/Welcome.html The book covers, among others, the following topics: - Linear and Non-linear Independent Component Analysis (ICA), and - the statistical theory of supervised learning. The detailed description of the book including the foreword and the table of contents is available on the above WWW home page. From carmesin at schoner.physik.uni-bremen.de Thu Apr 10 06:58:24 1997 From: carmesin at schoner.physik.uni-bremen.de (Hans-Otto Carmesin) Date: Thu, 10 Apr 1997 12:58:24 +0200 Subject: paper on cortical functionality emergence Message-ID: <199704101058.MAA08804@schoner.physik.uni-bremen.de> Dear Connectionists! The following paper is now available via WWW http://schoner.physik.uni-bremen.de/~carmesin/docs/Gordon.ps; also few hardcopies are available. --- Title: Cortical Functionality Emergence: General Theory & Quantitative Results --- by Dr. Hans-Otto Carmesin Institute for Theoretical Physics and Center for Cognition Sciences University Bremen, 28334 Bremen, Germany Fax: 0049 421 218 4869, E-mail: Carmesin at theo.physik.uni-bremen-de WWW: http://schoner.physik.uni-bremen.de/~carmesin/ --- Appeared in: Frank Schweitzer (Ed.): Self-Organization of Complex Structures: From Individual to Collective Dynamics, vol. I, chapt. 18, 215-233, London: Gordon and Breach, 1996. --- Abstract: The human genotype represents at most ten billion binary informations, whereas the human brain contains more than a million times a billion synapses. So a differentiated brain structure is essentially due to self-organization. 
Such self-organization is relevant for areas ranging from medicine to the design of intelligent complex systems. Many brain structures emerge as a collective phenomenon of a microscopic neurosynaptic dynamics: a stochastic dynamics mimics the neuronal action potentials, while the synaptic dynamics is modeled by a local coupling dynamics of Hebb-rule type, that is, a synaptic efficiency increases after coincident spiking of the pre- and postsynaptic neurons. The microscopic dynamics is transformed to a collective dynamics reminiscent of hydrodynamics. The theory models empirical findings quantitatively: Topology preserving neuronal maps were assumed by Descartes in 1664; their self-organization was suggested by Weiss in 1928; their empirical observation was reported by Marshall in 1941; it is shown that they are neurosynaptically stable due to ubiquitous infinitesimal short range electrical or chemical leakage. In the visual cortex, neuronal stimulus orientation preference emerges; empirically measured orientation patterns are determined by the Poisson equation of electrostatics; this Poisson equation orientation pattern emergence is derived here. Complex cognitive abilities emerge when the basic local synaptic changes are regulated by valuation, emergent valuation, attention, attention focus, or combinations of subnetworks. Altogether, a general theory is presented for the emergence of functionality from synaptic growth in neurobiological systems. The theory provides a transformation to a collective dynamics and is used for quantitative modeling of empirical data. From dario at lamisun9.epfl.ch Thu Apr 10 04:36:28 1997 From: dario at lamisun9.epfl.ch (Dario Floreano) Date: Thu, 10 Apr 97 10:36:28 +0200 Subject: 2 PhD positions/research assistants in neural computation Message-ID: <9704100836.AA14539@lamisun9.epfl.ch> ********************************************************* 2 PhD positions/research assistants in neural computation ********************************************************* available at the Center for Neural Computation (Centre Mantra pour les systemes neuro-mimetiques) Swiss Federal Institute of Technology at Lausanne (Ecole Polytechnique Federale de Lausanne) DI-EPFL, CH-1015, Lausanne, Switzerland We seek outstanding candidates for two PhD positions at the Mantra Center for Neural Computation, an interdisciplinary research unit formally attached to the Department of Computer Science. The positions will open up between April and August 1997. (i) The first position will be in the area of bio-inspired approaches to navigation and path planning. Strategies of neural network learning will be combined with evolutionary approaches and will be implemented on mobile robots. (ii) The second position will be in the field of computational neuroscience and should address the problems of temporal coding with spiking neurons. It will involve simulation studies and mathematical analysis. More information is available on the web page http://diwww.epfl.ch/lami/team/floreano/jobs.html We prefer candidates with a good theoretical/mathematical background who have had prior exposure to neural networks, computational neuroscience, evolutionary computation, or robotics. Candidates should have the equivalent of a diploma or master's degree in computer science, physics, engineering, or neural computation. Successful candidates will be hired as research assistants (75% of a full salary).
Interested candidates should send, at this stage, a cv and a short statement of research experience/research interest by email to wulfram.gerstner at di.epfl.ch or dario.floreano at di.epfl.ch with the subject line marked PhD-application ---------------------------------------------------- Dr. Wulfram Gerstner, Assistant Professor Dr. Dario Floreano, Researcher Swiss Federal Institure of Technolgy Center for Neural Computation Mantra-LAMI EPFL, IN-J 1015 Lausanne Tel. +41-21-693 6713 Fax. +41-21-693 5263 http://diwww.epfl.ch/mantra/ http://diwww.epfl.ch/lami/learning/ ---------------------------------------------------- From Wulfram.Gerstner at di.epfl.ch Thu Apr 10 04:16:34 1997 From: Wulfram.Gerstner at di.epfl.ch (Wulfram Gerstner) Date: Thu, 10 Apr 97 10:16:34 +0200 Subject: ICANN97-Lausanne-Oct7-10 Message-ID: <9704100816.AA14405@lamisun9.epfl.ch> --------- ICANN'97 ------ 7th Annual Conference of the European Neural Network Society ENNS I CCC A N N N N '' 999 77777 I C C A A NN N NN N 9 9 7 I C A A N N N N N N 9999 7 I C C AAAAA N NN N NN 9 7 I CCC A A N N N N 9999 7 International Conference on Neural Networks October 8-10 - Lausanne Switzerland Tutorials on Tuesday, October 7 -- The 1997 Latsis Conference -- http://www.epfl.ch/icann97 Email icann97 at epfl.ch Fax +41 21 693-5656 paper submission deadline: May 15, 1997 (6-page full papers) __________________________________________________________ Proceedings will be published by Springer Verlag, Lecture Note Serie, see layout instructions below. Papers will be evaluated on the basis of relevance, originality and clarity. Mathematical models Applications +++++++++++++++++++ ++++++++++++ Learning, Dynamical systems, Optimization, Prediction, Self-organization, Process control, Robotics, Cellular neural nets. Energy and Comm. Networks. Biological models Implementations +++++++++++++++++ +++++++++++++++ Neural codes, Spiking neurons, Bio-inspiration, Cortex modeling, Sensory processing, Sensory-motor areas Hardware accelerators, Analogue VLSI _____________________________________________________________ Conference structure """""""""""""""""""" The program will include plenary talks and 3 or 4 tracks of parallel sessions covering complementary fields of interest. Posters presentations will be complemented by short poster spotlights during oral presentations. Tutorials ^^^^^^^^^ Tutorials will take place on October 7, before the Conference. Y. Abu-Mostafa (USA), P. Refenes (GB) Finance Applications X. Arreguit (CH) Silicon Implementations J.L. van Hemmen (D), A. Kreiter (D) Cortical Oscillations M. Opper (D) Generalization Theories Invited speakers ^^^^^^^^^^^^^^^^ W. Bialek, Princeton, USA, Decoding Spike Trains H. Bourlard, Martigny, CH, Speech recognition S. Grossberg, Boston, USA, Visual Perception H. Markram, Rehovot, Israel, Synaptic Plasticity E. Oja, Espoo, Finland, Independent Comp. Analysis H. Ritter, Bielefeld, D, Robotics T. Roska, Budapest, HU, Cellular Neural Networks R. Sutton, Amherst, USA, Reinforcement Learning V. Vapnik, Holmdel, USA, Support Vector Machines E. Vittoz, Neuchatel, CH, Bioinspired Circuits Special Sessions are planned on ^^^^^^^^^^^^^^^^ Cortical Maps and receptive fields, Temporal Patterns and Brain Dynamics, Time Series Prediction, Financial Modeling, Adaptive Autonomous Agents, Applications in Power/communication networks. 
_____________________________________________________________ Instructions for authors """""""""""""""""""""""" Interested authors should: - Prepare a 6-page paper in English according to the Springer layout instructions (1-column book format, about 2000 char/page only) See http://www.epfl.ch/icann97/authors.html - Classify the paper according to our list of categories and keywords. See http://www.epfl.ch/icann97/cata.html - Fill-in the author submission form. See http://www.epfl.ch/icann97/sub.html - Mail 5 copies of the paper and the form before May 15 (do not include the original if it includes glued pictures or tables) Sorry, we do not accept electronic copies of papers (FTP, Web) All papers will be reviewed and the program committee will meet on June 19-21 for the selection. Authors will be informed by fax before June 25. Corrections and changes will be requested for July 10. Final conference program will be available early July. The principal author of an accepted paper must register before July 30. Student grants are available from Neuronet. Until May 5, the printed set of forms, layout instructions and examples can also be requested by fax or e-mail. Please indicate your postal address. All these documents are available on the Web. _____________________________________________________________ Registration information and fees """"""""""""""""""""""""""""""""" Registration fee includes admission to all sessions, one copy of the proceedings, coffee breaks and 3 lunches, welcome drinks and banquet. before August 30 -- after Regular registration fee 580 CHF -- 640 CHF Student (with lunch, no banquet, no proceedings) 270 CHF -- 330 CHF Tutorial day (October 7) 30 CHF -- 50 CHF Ask for a copy of the forms, for the program booklet, or see on the Web http://www.epfl.ch/icann97/reg.html Participant registration form http://www.epfl.ch/icann97/stu.html Student special condition form http://www.epfl.ch/icann97/hotel.html Hotel reservation form _____________________________________________________________ Conference location and accomodation """""""""""""""""""""""""""""""""""" The conference will be held at the EPFL, Ecublens, 5 km South-West of Lausanne. A tram provides easy access from the hotels. Lausanne is located on the lake of Geneva, with easy access by train and planes. Hotels are in the 50 to 150 CHF range. Reservation is not handled by the conference. Ask Fassbind Hotels, fax +41 21 323 0145 _____________________________________________________________ Organizers ^^^^^^^^^^ General Chairman: Prof Wulfram Gerstner, Mantra-EPFL Co-chairmen: Prof Alain Germond, Martin Hasler, J.D. Nicoud, EPFL Program committee secretariat: Monique Dubois, LAMI-EPFL, tel +41 21 693-6635 Registration secretariat: Andrii Moinat, LRC-EPFL, tel +41 21 693-2661 FAX: +41 21 693 5656 Technical Programm Committee ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Frangois Blayo Lyon . Marie Cottrell Paris F. de Viron Bruxelles . Christian Jutten Grenoble Cedex H. Mallot Berlin . Eddy Mayoraz Martigny K. Pawelzik Frankfurt . Eric Vittoz Neuchatel Advisory Board ^^^^^^^^^^^^^^ Larry Abbott Waltham . Moshe Abeles Jerusalem Sun-ichi Amari Saitama . Michael Arbib Los Angeles William Bialek Princeton . C.M. Bishop Birmigham Joan Cabestany Barcelona . J.P. Changeux Paris Holk Cruse Bielefeld . Emil Fieseler Martigny Frangoise Fogelman Clamart . Stan Gielen Nijmegen Karl Goser Dortmund . Stephen Grossberg Boston Klaus Hepp Zurich . Jeanny Herault Grenoble J.A. Hertz Kobenhaun . Michael Jordan Cambridge Christof Koch Pasadena . T. 
Kohonen Espoo Lennart Ljung Linkoping . Th. Martinetz Bochum Pietro Morasso Genova . A.F. Murray Edinburgh Dagmar Niebur Philadelphia . Erkki Oja Espoo G. Palm Ulm . Vincenzo Piuri Milano Alberto Prieto Granada . U. Ramacher Dresden Helge Ritter Bielfeld . Tamas Roska Budapest Terrence Sejnowski La Jolla . Sara A. Solla Holmdel John G. Taylor London . J.L. van Hemmen Garching V.N. Vapnik Holmdel . Michel Verleysen Louvain-la-Neuve C. von der Malsburg Bochum . W. von Seelen Bochum Chr. Wellekens Valbonne . David Willshaw Edinburgh _____________________________________________________________ From moshe.sipper at di.epfl.ch Thu Apr 10 02:55:32 1997 From: moshe.sipper at di.epfl.ch (Moshe Sipper) Date: Thu, 10 Apr 1997 08:55:32 +0200 Subject: Book Announcement: Evolution of Parallel Cellular Machines Message-ID: <199704100655.IAA05290@lslsun7.epfl.ch> The following book, recently published, may be of interest to subscribers of this list: "Evolution of Parallel Cellular Machines: The Cellular Programming Approach", Moshe Sipper, Springer-Verlag, 1997. Information is available at: http://lslwww.epfl.ch/~moshes/pcm.html Back-cover text: Nature abounds in systems involving the actions of simple, locally-interacting components, that give rise to coordinated global behavior. These collective systems have evolved by means of natural selection to exhibit striking problem-solving capacities, while functioning within a complex, dynamic environment. Employing simple yet versatile parallel cellular models, coupled with evolutionary computation techniques, this volume explores the issue of constructing man-made systems that exhibit characteristics such as those manifest by their natural counterparts. Parallel cellular machines hold potential both scientifically, as vehicles for studying phenomena of interest in areas such as complex adaptive systems and artificial life, as well as practically, enabling the construction of novel systems, endowed with evolutionary, reproductive, regenerative, and learning capabilities. This self-contained volume examines the behavior of such machines, the complex computation they exhibit, and the application of artificial evolution to attain such systems. From A.Sharkey at dcs.shef.ac.uk Fri Apr 11 11:37:58 1997 From: A.Sharkey at dcs.shef.ac.uk (Amanda Sharkey) Date: Fri, 11 Apr 97 11:37:58 BST Subject: Connection Science Message-ID: <9704111037.AA08540@gw.dcs.shef.ac.uk> Announcing: Connection Science Special Issue. 1997, 9,1. Combining Artificial Neural Nets: Modular Approaches. Special Issue Editor: Amanda Sharkey Editorial Board for Special Issue Leo Breiman, University of Berkeley, USA. Nathan Intrator, Tel-Aviv University, Israel. Robert Jacobs, University of Rochester, USA. Michael Jordan, MIT, USA. Paul Munro, University of Pittsburgh, USA. Michael Perrone, IBM, USA. David Wolpert, IBM, USA. Contents: Amanda J.C. Sharkey. Modularity, Combining and Artificial Neural Nets, 3-10 Stephen P. Luttrell. Self-organization of Multiple Winner-take-all Neural Networks. 11-30 Cesare Furlanello, Diego Giuliani, Edmondo Trentin and Stefano Merler. Speaker Normalization and Model Selection of Combined Neural Networks. 31-50. Thierry Catfolis and Kurt Meert. Hybridization and Specialization of Real-time Recurrent Learning-based Neural Networks. 51-70. Lucila Ohno-Machado and Mark A. Musen. Modular Neural Networs for Medical Prognosis: Quantifying the Benefits of Combining Neural Networks for Survival Prediction. 71-86. Guszti Bartfai and Roger White. 
Adaptive Resonance Theory-based Modular Networks for Incremental Learning of Hierarchical Clusterings 87-112. Research Notes: Alex Aussem and Fionn Murtagh. Combining Neural Network Forecasts on Wavelet transformed Time Series. 113-122. Colin McCormack. Adaptation of Learning Rule Parameters Using a Meta Neural Network. 123-136. ------------------------------------------------------------------------- See also Connection Science, 8, 3/4 Combining Artificial Neural Nets: Ensemble Approaches. Amanda J.C. Sharkey. On Combining Artificial Neural Nets. 299-314. Sherif Hashem. Effects of Collinearity on Combining Neural Networks. 315-336. David W. Opitz & Jude W. Shavlik. Actively Searching for an Effective Neural Network Ensemble. 337-354. Yuval Raviv & Nathan Intrator. Bootstrapping with Noise: An Effective Regularization Technique. 355-372. Bruce E. Rosen. Ensemble Learning Using Decorrelated Neural Networks. 373-384. Kagan Tumer & Joydeep Ghosh. Error Correlation and Error Reduction in Ensemble Classifiers. 385-404. Bambang Parmanto, Paul W. Munro & Howard R. Doyle. Reducing Variance of Committee Prediction with Resampling Techniques. 405-426. Peter A. Zhilkin & Ray L. Somorjai. Application of Several methods of Classification Fusion to Magnetic Resonance Spectra. 427-442. From biehl at physik.uni-wuerzburg.de Fri Apr 11 09:34:13 1997 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Fri, 11 Apr 1997 15:34:13 +0200 (MESZ) Subject: paper available: phase transitions in neural networks Message-ID: <199704111334.PAA05919@wptx08.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/1997/WUE-ITP-97-005.ps.gz The following manuscript is now available via anonymous ftp (See below for the retrieval procedure), or, alternatively from http://xxx.lanl.gov/abs/cond-mat/9704098 ---------------------------------------------------------------- Phase Transitions of Neural Networks Wolfgang Kinzel Plenary talk for MINERVA workshop on mesoscopics, fractals and neural networks, Eilat, March 1997 Ref.: WUE-ITP-97-005 Abstract The cooperative behaviour of interacting neurons and synapses is studied using models and methods from statistical physics. The competition between training error and entropy may lead to discontinuous properties of the neural network. This is demonstrated for a few examples: Perceptron, associative memory, learning from examples, generalization, multilayer networks, structure recognition, Bayesian estimate, on-line training, noise estimation and time series generation. --------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1997 ftp> binary ftp> get WUE-ITP-97-005.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-97-005.ps.gz e.g. unix> lp WUE-ITP-97-005.ps [33 pages] (*) can be replaced by "get WUE-ITP-97-005.ps". The file will then be uncompressed before transmission (slow!). _____________________________________________________________________ Prof. Dr. W. 
Kinzel Universit"at W"urzburg Institut f"ur Theoretische Physik Am Hubland D-97074 W"urzburg, Germany From pbolland at lbs.ac.uk Fri Apr 11 11:48:57 1997 From: pbolland at lbs.ac.uk (Peter Bolland) Date: Fri, 11 Apr 1997 15:48:57 UTC Subject: Call For Papers - Neural Networks in the Capital Markets 1997 Message-ID: <25667C95B0D@deimos.lbs.ac.uk> ANNOUNCEMENT AND CALL FOR PAPERS _________________________________________________________ COMPUTATIONAL FINANCE 1997 _________________________________________________________ The Fifth International Conference on NEURAL NETWORKS IN THE CAPITAL MARKETS Monday-Wednesday, December 15-17, 1997 London Business School, London, England. After four years of continuous success and evolution, NNCM has emerged as a truly multi-disciplinary international conference. Born out of neurotechnology, NNCM now provides an international focus for innovative research on the application of a multiplicity of advanced decision technologies to many areas of financial engineering. It draws upon theoretical advances in financial economics and robust methodological developments in the statistical, econometric and computer sciences. The fifth NNCM conference will be held in London December 15-17 1997 under the new title COMPUTATIONAL FINANCE 1997 to reflect its multi-disciplinary nature. COMPUTATIONAL FINANCE 1997 is a research meeting where original, high-quality contributions are presented and discussed. In addition, a day of introductory tutorials (Monday, December 15) will be included to familiarise participants of different backgrounds with the financial, and methodological aspects of the field. COMPUTATIONAL FINANCE 1997 invites research papers representing new and significant developments in methodology as well as applications of practical use and value in finance. In-depth analysis and comparison with established approaches is encouraged. Areas of interest include, but are not limited to: ______________________________________________ Methodologies ______________________________________________ Neural networks & Machine learning Fuzzy Logic & Expert systems Genetic algorithms & multi-criteria Optimisation Non-parametric statistics & Econometrics Non-linear time series & Cross-sectional analysis Adaptive/Kalman filtering techniques Hybrid models Model identification, selection and specification Hypothesis testing and confidence intervals Parameter sensitivity and prediction uncertainty Robust model estimation Stochastic Analysis, Monte Carlo ______________________________________________ Applications areas ______________________________________________ Portfolio management / asset allocation Derivative & term structure models Models for equity investment Bond and stock valuation and trading Currency models, forecasting & hedging Trading strategies Hedging and Arbitrage Strategies Cointegration Modelling & hedging correlation & volatility Portfolio replication: simulation & optimisation Retail finance Corporate distress & risk models Submission of Papers: Authors who wish to present a paper should mail three copies of their extended abstract (4 pages, single-sided, single-spaced) typed on A4 (or US 8.5" by 11") paper to the secretariat no later than June 28, 1997. Submissions will be refereed rigorously and authors will be notified on acceptance by 20 Sept. 1997. Location: The conference will be held at London Business School which is situated near Regent's Park, London and is a short walk from Baker Street Underground Station. 
Further directions including a map will be sent to all registries. Registration and Mailing List: if you wish to be added to the mailing list or register for COMPUTATIONAL FINANCE 1997, please send your postal address, e-mail address, and fax number to the secretariat. _________________________________________________ Programme Committee _________________________________________________ Dr A. Refenes, London Business School (Chairman) Dr Y. Abu-Mostafa, Caltech Dr A. Atiya, Cairo University Dr N. Biggs, London School of Economics Dr D. Bunn, London Business School Dr M. Jabri, Sydney University Dr B. LeBaron, University of Wisconsin Dr A. Lo, MIT Sloan School Dr J. Moody, Oregon Graduate Institute Dr C. Pedreira, Catholic University, PUC-Rio Dr M. Steiner, Augsburg Universitaet Dr A. Timermann, UCSD Dr A. Weigend, New York University Dr H. White, UCSD Dr L. Xu, Chinese University, Hong Kong Secretariat: Please submit your papers and further inquiries to the secretariat at the address below: Ms Busola Oguntula, London Business School, Sussex Place, Regent's Park, London NW1 4SA, UK. E-mail: boguntula at lbs.ac.uk. Phone (+44) (0171)-262 50 50, Fax (+44) (0171) 724 78 75. __________________________________________________ WEB PAGE __________________________________________________ For more information on COMPUTATIONAL FINANCE 1997, please visit the NNCM home page at London Business School, http://www.lbs.lon.ac.uk/desci/nncmhome.html For information on previous conference and the program of previous NNCM conferences, please visit the NNCM homepages; London Business School http://www.lbs.lon.ac.uk/desci/nncmhome.html Caltech http://www.cs.caltech.edu/~learn/nncm.html __________________________________________________ From Tom_Mitchell at daylily.learning.cs.cmu.edu Sun Apr 13 15:44:05 1997 From: Tom_Mitchell at daylily.learning.cs.cmu.edu (Tom Mitchell) Date: Sun, 13 Apr 1997 15:44:05 -0400 Subject: new Machine Learning book Message-ID: NEW COMPREHENSIVE TEXTBOOK: Machine Learning, Tom Mitchell, McGraw Hill McGraw Hill announces immediate availability of MACHINE LEARNING, a new textbook that provides a thorough, multi-disciplinary introduction to computer algorithms for automated learning. The chapter outline is: 1. Introduction 2. Concept Learning and the General-to-Specific Ordering 3. Decision Tree Learning 4. Artificial Neural Networks 5. Evaluating Hypotheses 6. Bayesian Learning 7. Computational Learning Theory 8. Instance-Based Learning 9. Genetic Algorithms 10. Learning Sets of Rules 11. Analytical Learning 12. Combining Inductive and Analytical Learning 13. Reinforcement Learning (414 pages) This book is intended for upper-level undergraduates, graduate students, and professionals working in the area of neural networks, machine learning, datamining, and statistics. It includes over a hundred homework exercises, along with web-accessible code and datasets (e.g., neural networks applied to face recognition, Bayesian learning applied to text classification). 
For further information and ordering instructions, see http://www.cs.cmu.edu/~tom/mlbook.html From amari at zoo.riken.go.jp Sun Apr 13 23:52:07 1997 From: amari at zoo.riken.go.jp (Shunichi Amari) Date: Mon, 14 Apr 1997 12:52:07 +0900 Subject: new papers (Natural gradient learning, Blind source separation, etc) Message-ID: <9704140352.AA28110@zoo.riken.go.jp> The following three papers are now available from my home page: http://www.bip.riken.go.jp/irl/amari/amari.html There are many other recent papers to be published on the same home page. There are some other joint papers on the home page of Prof. Cichocki. I am very bad at maintaining my home page, and I have renewed it. ********************** 1. Natural Gradient Works Efficiently in Learning ------submitted to Neural Computation for possible publication abstract When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation) and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient on-line learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon which appears in the backpropagation learning algorithm of multilayer perceptrons might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed. ********************** title 2. STABILITY ANALYSIS OF ADAPTIVE BLIND SOURCE SEPARATION -------accepted for publication in Neural Networks abstract Recently a number of adaptive learning algorithms have been proposed for blind source separation. Although the underlying principles and approaches are different, most of them have very similar forms. Two important issues have remained to be elucidated further: the statistical efficiency and the stability of learning algorithms. The present letter analyzes a general form of statistically efficient algorithm and gives a necessary and sufficient condition for the separating solution to be a stable equilibrium of a general learning algorithm. Moreover, when the separating solution is unstable, a simple method is given for stabilizing the separating solution by modifying the algorithm. ************************* title 3. Superefficiency in Blind Source Separation ----------submitted to IEEE Tr. on Signal Processing abstract Blind source separation extracts independent component signals from their mixtures without knowing either the mixing coefficients or the probability distributions of the source signals. It is known that some algorithms work surprisingly well. The present paper elucidates the superefficiency of such algorithms based on statistical analysis. It is in general known from the asymptotic theory of statistical analysis that the covariance of any two extracted independent signals converges to $0$ in the order of $1/t$ in the case of statistical estimation using $t$ examples. In the case of on-line learning, the theory of on-line dynamics shows that the covariances converge to $0$ in the order of $\eta$ when the learning rate $\eta$ is fixed to be a small constant.
In contrast with the above general properties, the surprising superefficiency holds in blind source separation under certain conditions. The superefficiency implies that the covariance decreases in the order of $1/t^2$ or of $\eta ^2$. The present paper uses the natural gradient learning algorithm and the method of estimating functions to obtain superefficient procedures for both estimation and on-line learning. The superefficiency does not imply that the error variances of the extracted signals decrease in the order of $1/t^2$ or $\eta ^2$, but implies that their covariances do. From elman at crl.ucsd.edu Sat Apr 12 23:10:25 1997 From: elman at crl.ucsd.edu (Jeff Elman) Date: Sat, 12 Apr 1997 20:10:25 -0700 (PDT) Subject: new book announcement: Exercises in Rethinking Innateness Message-ID: <199704130310.UAA09634@crl.UCSD.EDU> EXERCISES IN RETHINKING INNATENESS A Handbook for Connectionist Simulations by Kim Plunkett and Jeffrey L. Elman This book is the companion volume to Rethinking Innateness: A Connectionist Perspective on Development (The MIT Press, 1996), which proposed a new theoretical framework to answer the question "What does it mean to say that a behavior is innate?" The new work provides concrete illustrations--in the form of computer simulations--of properties of connectionist models that are particularly relevant to cognitive development. This enables the reader to pursue in depth some of the practical and empirical issues raised in the first book. The authors' larger goal is to demonstrate the usefulness of neural network modeling as a research methodology. The book comes with a complete software package, including demonstration projects, for running neural network simulations on both Macintosh and Windows 95. It also contains a series of exercises in the use of the neural network simulator provided with the book. The software is also available to run on a variety of UNIX platforms. Neural Network Modeling and Connectionism series MIT Press/Bradford Books May 1997 ISBN 0-262-66105-5 254 pp. $40.00 (paper) MIT Press WWW page, with ordering information: http://mitpress.mit.edu:8000/mitp/recent-books/cog/pluep.html From john at dcs.rhbnc.ac.uk Mon Apr 14 10:24:48 1997 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 14 Apr 97 15:24:48 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199704141424.PAA02779@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. The following technical report has been updated to include information about the system described in NC-TR-97-038: ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-028: ---------------------------------------- Overview of Learning Systems produced by NeuroCOLT Partners by NeuroCOLT Partners Abstract: This NeuroCOLT Technical Report documents a number of systems that have been produced within the NeuroCOLT partnership. It only includes a summary of each system, together with pointers to where each system is located and where more information about its performance and design can be found.
---------------------------------------- NeuroCOLT Technical Report NC-TR-97-038: ---------------------------------------- Using Computational Learning Strategies as a Tool for Combinatorial Optimization by Andreas Birkendorf and Hans Ulrich Simon, Universit"at Dortmund, Germany Abstract: In this paper, we describe how a basic strategy from computational learning theory can be used to attack a class of NP-hard combinatorial optimization problems. It turns out that the learning strategy can be used as an iterative booster: given a solution to the combinatorial problem, we start an efficient simulation of a learning algorithm which has a ``good chance'' of outputting an improved solution. This boosting technique is a new and surprisingly simple application of an existing learning strategy. It yields a novel heuristic approach to attacking NP-hard optimization problems. It does not apply to every combinatorial problem, but we are able to exactly formalize some sufficient conditions. The new technique applies, for instance, to the problems of minimizing a deterministic finite automaton relative to a given domain, the analogous problem for ordered binary decision diagrams, and graph colouring. ---------------------------------------- NeuroCOLT Technical Report NC-TR-97-039: ---------------------------------------- A Unifying Framework for Invariant Pattern Recognition by Jeffrey Wood and John Shawe-Taylor, Royal Holloway, University of London, UK Abstract: We introduce a group-theoretic model of invariant pattern recognition, the {\em Group Representation Network}. We show that many standard invariance techniques can be viewed as GRNs, including the DFT power spectrum, higher-order neural networks and the fast translation-invariant transform. -------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-97-001 can be accessed and printed as follows % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-97-001.ps.Z ftp> bye % zcat nc-tr-97-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-97-002-title.ps.Z nc-tr-97-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-97-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage: http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt or directly to the archive: ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports Best wishes John Shawe-Taylor From movellan at ergo.ucsd.edu Mon Apr 14 17:08:45 1997 From: movellan at ergo.ucsd.edu (Javier R. Movellan) Date: Mon, 14 Apr 1997 14:08:45 -0700 Subject: TR announcement Message-ID: <199704142108.OAA13893@ergo.ucsd.edu> The following technical report is available online at http://cogsci.ucsd.edu (follow links to Tech Reports & Software ) Physical copies are also available (see the site for information). A Learning Theorem for Networks at Detailed Stochastic Equilibrium. Javier R.
Movellan Department of Cognitive Science University of California San Diego The paper studies a stochastic extension of continuous recurrent neural networks and analyzes gradient descent learning rules to train their equilibrium solutions. A theorem is given that specifies sufficient conditions for the gradient descent learning rules to be local covariance statistics between two random variables: 1) an evaluator which is the same for all the network parameters, and 2) a system variable which is independent of the learning objective. The generality of the theorem suggests that instead of suppressing noise present in physical devices, a natural alternative is to use it to simplify the credit assignment problem. In deterministic networks credit assignment requires an evaluation signal which is different for each node in the network. Surprisingly, when noise is not suppressed, all that is needed is an evaluator which is the same for the entire network, and a local Hebbian signal. This modularization of signals greatly simplifies hardware and software implementations. The paper shows how the theorem applies to four different learning objectives which span supervised, reinforcement and unsupervised problems: 1) regression, 2) density estimation, 3) risk minimization, 4) information maximization. Simulations, implementation issues and implications for computational neuroscience are discussed. From ajay_jain at metaxen.com Mon Apr 14 22:22:18 1997 From: ajay_jain at metaxen.com (Ajay N. Jain) Date: Mon, 14 Apr 97 18:22:18 -0800 Subject: Position Available Message-ID: <199704150122.SAA02230@InterJet.metaxen.com> [PLEASE FORWARD TO APPROPRIATE MAILING LISTS] Position: Scientist/Senior Scientist, Computational Sciences Requirements: PhD in Computer Science Company: MetaXen LLC (Palo Alto, CA) Hiring Manager: Ajay N. Jain MetaXen is a start-up biopharmaceutical company based in the San Francisco Bay Area. We emphasize an integrated parallel approach to drug discovery that seeks to optimize specific binding of small molecules to proteins simultaneously with pharmacological parameters such as oral absorption. We combine state-of-the-art computational technology for structure-based drug design with medicinal chemistry, molecular biology, X-ray crystallography, biochemistry, molecular pharmacology, and related fields in a stimulating multidisciplinary environment. Our current therapeutic research areas include cancer and thrombotic disease. We are looking for a computer scientist to join our growing group. The ideal candidate will have demonstrated success in applying sophisticated computation to real-world problems (e.g. drug discovery, object recognition, robotics, etc.). Experience in machine learning/neural networks, computational geometry, or physical modeling would be considered beneficial, as would formal training in chemistry, biology, or physics. The duties for the position are to develop, implement, and apply novel algorithms for structure-based drug design (e.g. molecular docking, 3D structure-activity prediction, etc.), genomic data analysis, and prediction of in vivo pharmacological parameters from in vitro pharmacology data as well as molecular structure. Qualified applicants should send a CV and cover letter to the address below. Ajay -------------------------------------------------------------------------- Dr. Ajay N.
Jain: Principal Scientist/Group Leader, Computational Sciences Tel (415) 858-4942 MetaXen LLC Fax (415) 858-4931 3181 Porter Dr Email: ajay_jain at metaxen.com Palo Alto, CA 94304 -------------------------------------------------------------------------- From tho at nucleus.hut.fi Tue Apr 15 07:00:52 1997 From: tho at nucleus.hut.fi (Timo Honkela) Date: Tue, 15 Apr 1997 14:00:52 +0300 (EET DST) Subject: WSOM'97 Call For Participation Message-ID: ================== CALL FOR PARTICIPATION ======================= W O R K S H O P O N S E L F - O R G A N I Z I N G M A P S Helsinki University of Technology, Finland June 4-6, 1997 ------------------------- WWW: -------------------------- http://nucleus.hut.fi/wsom97/ ----------------------------------------------------------------- WSOM'97 is the first international meeting to be entirely dedicated to the theory and applications of the Self-Organizing Map (SOM). The SOM is a new powerful software tool for hard real-world problems listed below. Highlights of the program: - TUTORIAL SHORT COURSE: prof. Teuvo Kohonen - PLENARIES: prof. Helge Ritter: Learning with the parameterized self-organizing map prof. Karl Goser: Self-organizing map for intelligent process control prof. Marie Cottrell: Theoretical aspects of the SOM algorithm - INVITED: prof. Risto Miikkulainen: SOM research in the USA prof. Heizo Tokutaka: Condensed review of SOM and LVQ research in Japan - SESSIONS: Pattern recognition and Optimization signal processing Monitoring and data mining Financial analysis Temporal sequence processing Image analysis and vision Theory and extensions Probabilistic interpretations Text and document maps Hardware - PANEL: Document search - SOM CLINIC: Practical advice is given during the breaks - BANQUET: Dinner speech by Robert Hecht-Nielsen: A neural network saga - DEMONSTRATIONS WSOM'97 is a unique occasion for anyone who - is looking for efficient means for analyzing and visualizing complex real-world data, or - wishes to gain new insight into the newest applications of the SOM. REGISTRATION General registration fee is FIM 1200 for the workshop and FIM 700 for the tutorial. Reduced registration fees are available for students as well as for early registration before May 1. Please see http://nucleus.hut.fi/wsom97/ for the program. On the page there is also information on the registration, accomodation, travel, and other practical arrangements. For further information, please contact wsom97 at nucleus.hut.fi CO-OPERATING SOCIETIES WSOM'97 is a satellite workshop of the 10th Scandinavian Conference on Image Analysis (SCIA) to be held on June 9 to 11 in Lappeenranta, Finland, arranged by the Pattern Recognition Society of Finland. Other co-operating societies are the European Neural Network Society (ENNS), IEEE Finland Section, and the Finnish Artificial Intelligence Society. Teuvo Kohonen, WSOM'97 Chairman Erkki Oja, Program Chairman Olli Simula, Organization Chairman From E.Heit at csv.warwick.ac.uk Tue Apr 15 07:28:14 1997 From: E.Heit at csv.warwick.ac.uk (Evan Heit) Date: Tue, 15 Apr 1997 12:28:14 +0100 Subject: PhD Studentship In Cognitive Science (UK) Message-ID: Department of Psychology University of Warwick Coventry, United Kingdom The growing Cognitive Science group at Warwick has available a three-year BBSRC Special Studentship, to commence October 1997, leading to a PhD in Psychology. The general topic is applications of neural network modelling to experimental psychology results in learning, categorisation, and memory. 
Applicants should have or expect a first degree or MSc in a field such as Psychology, Cognitive Science, or Computer Science, and some experience in experimental psychology or computer modelling (but not necessarily both). The Research Committee Special Studentship is intended primarily for UK residents, and it attracts a higher level of stipend than other Research Council studentships. Further details are available at http://www.warwick.ac.uk/~pssak/st.html . Apply by sending a CV, names of two referees, and a statement of research interests and experience to Dr E Heit, Department of Psychology, University of Warwick, Coventry CV4 7AL, or E.Heit at warwick.ac.uk . Please direct informal enquiries to these same addresses. Closing date 9 May 1997. ------------------------------------------------------------------------- Evan Heit Email: E.Heit at warwick.ac.uk Department of Psychology Office: +44|0 1203 523183 University of Warwick Fax: +44|0 1203 524225 Coventry CV4 7AL, United Kingdom http://www.warwick.ac.uk/~pssak From philh at cogs.susx.ac.uk Tue Apr 15 12:05:13 1997 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Tue, 15 Apr 1997 17:05:13 +0100 (BST) Subject: Lectureship at Sussex University Message-ID: -------------------------------------------------------------- University of Sussex SCHOOL OF COGNITIVE AND COMPUTING SCIENCES Computer Science and Artificial Intelligence - 2 lectureships Applications are invited for two Lectureships in the Computer Science and Artificial Intelligence Subject Group with an expected start date of October 1997. Candidates for the first lectureship should be able to show evidence of significant research achievement in any aspect of the Foundations of Computation. Candidates for the second lectureship should be undertaking innovatory research in some aspect of Vision, Neural Computation or Evolutionary and Adaptive Systems. All candidates should be willing to teach in areas other than their research speciality. The appointments are planned on the Lecturer A scale, for which salaries run from 15,593 to 20,424 p.a. (under negotiation), though for exceptional candidates an appointment on the Lecturer B scale may be considered (21,277 to 27,196, under negotiation). The posts can be discussed informally with Dr Hilary Buxton, hilaryb at cogs.susx.ac.uk, tel. 01273 678569. Details of the School are available at http://www.cogs.susx.ac.uk/ Application forms and further particulars are available from and should be returned to Sandra Jenks, Staffing Services, University of Sussex, Falmer, Brighton, East Sussex, BN1 9RH. Tel: (01273) 606755, ext 3768. Email S.Jenks at sussex.ac.uk. Closing date: Monday 21st April 1997. -------------------------------------------------------------- From juergen at idsia.ch Wed Apr 16 03:27:54 1997 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 16 Apr 1997 09:27:54 +0200 Subject: Job openings Message-ID: <199704160727.JAA13718@ruebe.idsia.ch> 1 POSTDOC POSITION 1 PHD STUDENT POSITION RECURRENT NEURAL NETS The Swiss machine learning research institute IDSIA offers 2-year positions for one postdoc and one PhD student. Both will be funded by a research grant on recurrent neural networks. Intended application areas include speech recognition and music composition. The grant's initial focus will be on "Long Short-Term Memory" (LSTM) - see reference below. 
Ideal candidates should fully understand the LSTM paper, have strong mathematical and programming skills, outstanding research potential, experience with neural nets, excellent ability to communicate research results, and be willing to build on previous work. BEGIN: around October 1997. SWITZERLAND tends to be nice to scientists. It boasts the highest supercomputing capacity pc (per capita), the most Nobel prizes pc (4-5 times the US value), the highest GNP pc, and the best chocolate. IDSIA's research focuses on artificial neural nets, reinforcement learning in partially observable environments, MDL, complexity and generalization issues, unsupervised learning and information theory, forecasting, combinatorial optimization, evolutionary computation, and metalearning. IDSIA is small but visible, competitive, and influential. It employs about a dozen active researchers and supervises students at several European universities. Its 1997 scientific output so far includes 9 journal publications (published or in press). LOCATION: beautiful Lugano, the capital of Ticino, the scenic southernmost province of Switzerland (pictures on my home page). Milano, Italy's center of fashion and finance, is 1 hour away, Venice 3 hours. CSCS, the Ticino supercomputing center, is nearby - we have a direct connection. SALARY: commensurate with experience. Postdoc: SFR 60-70K/year (US$ 41-48K). PhD student: SFR 25-30K. Subtract ~20% for taxes & social security. There is travel funding in case of papers accepted at important conferences. INSTRUCTIONS: please send HARDCOPIES (no email!) of CV, list of publications, cover letter, and your 3 best papers (if applicable) to Juergen Schmidhuber, IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland. Also send a brief email message listing email addresses of three references to juergen at idsia.ch. Please use your full name as subject header. EXAMPLE: subject: John_Smith DEADLINE: MAY 31 1997. Earlier applications preferred. RECURRENT NET REFS (more in http://www.idsia.ch/~juergen/onlinepub.html) -S.Hochreiter & JS: Long Short-Term Memory. Neural Comp. (accepted 1997) See ftp://ftp.idsia.ch/pub/juergen/lstm.ps.gz -JS: A fixed size storage O(n^3) time complexity learning algorithm for fully recurrent continually running nets. Neural Comp.4(2):243-248,1992. -JS: Learning complex, extended sequences using the principle of history compression. Neural Comp.4(2):234-242,1992. -JS: Learning to control fast-weight memories: An alternative to recurrent nets. Neural Comp.4(1):131-139,1992. Juergen Schmidhuber, research director, IDSIA juergen at idsia.ch http://www.idsia.ch/~juergen From kpfleger at cs.stanford.edu Wed Apr 16 05:49:49 1997 From: kpfleger at cs.stanford.edu (Karl Pfleger) Date: Wed, 16 Apr 1997 02:49:49 -0700 (PDT) Subject: NIPS*97 workshop on NNs related to graphical models? Message-ID: <199704160949.CAA10184@HPP.Stanford.EDU> Would you like to see a NIPS*97 workshop on "Neural Network Models Related to Graphical Probabilistic Models"? Michael Jordan recently sent out the NIPS*97 Call for Post Conference Workshop Proposals. I don't feel qualified to organize and run a workshop on this topic myself, but I believe it is a topic of great interest to many people. Thus, I'm suggesting it here in the hopes that someone who is qualified might be inspired to organize such a workshop and draw up a proposal.
If you would be interested only in attending or participating in such a workshop, send me a quick note anyway (or even an abstract of a potential submission) and I'll collect the responses to pass on to anyone who does volunteer to organize. Evidently, Mike Jordan and David Heckerman held a similar and extremely popular workshop 2 years ago at NIPS. However, opinions from a number of people, including Jordan, suggest that there is probably still sufficient interest for another such workshop. ---------------------------------------------------------------------------- NIPS Workshop idea: Neural Network Models Related to Graphical Probabilistic Models Graphical probabilistic models are currently a hot topic, especially Bayesian/Belief Networks, particularly in the AI communities. But there are neural network models that are intimately related to some of these models, and can be used in similar ways. For example, Boltzmann machines are essentially neural-net versions of Markov Networks, with properties closely related to probabilistically explicit Markov Networks and to Bayes nets. Extended example--Some parallels between BMs and BNs: - Both represent a joint prob. distribution over a set of random variables. - In both, network structure represents conditional independence assumptions amongst the variables, whether by d-separation or locally Markov properties. Based on Pearl (1988) neither is uniformly superior. - In both, exact prob. inference is possible but intractable, in general. - In both one can do Monte Carlo approx. inference. In both one can use Gibbs sampling, and this is exactly what BM settling corresponds to. - Both can be used to learn a joint distribution over visible units. - Both can use hidden variables. - In both you can do search over network structure to learn that as well, though this is almost completely unexplored for BMs. - Both can represent any joint distribution (the BM may need hidden units). - There are even mechanisms for converting from one to the other. There isn't much literature that draws this parallel clearly. Neal (1991a) comes closest. Probabilistically explicit Markov Network models are used in vision and graphics, and also in physics communities, and also share the above properties. There are certainly many other such parallels. Particularly salient are those involving variations on BMs and BNs, as investigated by people like Radford Neal. Integrative views like this help increase everyone's understanding. (Obviously, the workshop is not meant to be a battleground for the formalisms!) There are also plenty of research issues here and plenty of suggestions that arise from understanding the parallels.
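To make the Gibbs-sampling parallel above concrete, here is a minimal sketch (added here for illustration, not part of the original posting) of stochastic "settling" in a small binary Boltzmann machine: with symmetric weights W (zero diagonal) and biases b, each unit is resampled from p(s_i = 1 | rest) = sigmoid(W_i . s + b_i), which is one step of Gibbs sampling on the joint distribution the machine defines. The weight matrix, biases, and number of units below are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
n_units = 5
W = rng.normal(scale=0.5, size=(n_units, n_units))
W = (W + W.T) / 2.0          # symmetric couplings, as in a Markov network
np.fill_diagonal(W, 0.0)     # no self-connections
b = rng.normal(scale=0.1, size=n_units)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sweep(s):
    # One full sweep: resample each unit from p(s_i = 1 | all other units).
    for i in range(n_units):
        s[i] = 1.0 if rng.random() < sigmoid(W[i] @ s + b[i]) else 0.0
    return s

# After a burn-in period the visited states are approximate samples from
# p(s) proportional to exp(0.5 * s'Ws + b's), the Boltzmann distribution.
s = rng.integers(0, 2, size=n_units).astype(float)
for _ in range(1000):
    s = gibbs_sweep(s)
print("final state:", s)

For a handful of units one can check the sampler against the exact distribution by enumerating all 2^n states, which is a convenient way to see the correspondence between settling and Gibbs sampling.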
---------------------------------------------------------------------------- -Karl ---------------------------------------------------------------------------- Karl Pfleger kpfleger at cs.stanford.edu http://www.stanford.edu/~kpfleger/ ---------------------------------------------------------------------------- From weaveraj at helios.aston.ac.uk Wed Apr 16 10:37:21 1997 From: weaveraj at helios.aston.ac.uk (Andrew Weaver) Date: Wed, 16 Apr 1997 15:37:21 +0100 Subject: Postdoctoral Research Fellowship Message-ID: <15738.199704161437@sun.aston.ac.uk> Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK POSTDOCTORAL RESEARCH FELLOWSHIP -------------------------------- Learning Fixed Example Sets in Multilayer Neural Networks --------------------------------------------------------- *** Full details at http://www.ncrg.aston.ac.uk/ *** The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 2 year postdoctoral research position in the area of `Learning Fixed Example Sets in Multilayer Neural Networks'. The emphasis of the research will be on applying a theoretically well-founded approach based on methods adopted from statistical mechanics to analyse learning from fixed example sets in multilayer networks. Potential candidates should have strong mathematical and computational skills, with a background in statistical mechanics and neural networks. Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 16,450 UK pounds. The salary scale is subject to annual increments. How to Apply ------------ If you wish to be considered for this Fellowship, please send a full CV and publications list, including full details and grades of academic qualifications, together with the names of 3 referees, to: Dr. David Saad Neural Computing Research Group Dept. of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 333 4631 Fax: 0121 333 6215 e-mail: D.Saad at aston.ac.uk e-mail submission of postscript files is welcome. Closing date: 30 May, 1997. From robtag at dia.unisa.it Thu Apr 17 06:02:37 1997 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Thu, 17 Apr 1997 12:02:37 +0200 Subject: WIRN Vietri 97 Preliminary Program Message-ID: <9704171002.AA20706@udsab.dia.unisa.it> WIRN VIETRI `97 IX ITALIAN WORKSHOP ON NEURAL NETS IIASS "Eduardo R. Caianiello", Vietri sul Mare (SA) ITALY 22 - 24 May 1997 PRELIMINARY PROGRAM Thursday 22 May 9:30 - Artificial Neural Networks in Pharmacology and Therapeutics M. Eandi & M. Costa (Review Talk) Applications 10:30 - Are Hybrid Fuzzy-Neural Systems Actually Useful in plasma Engineering? F.C. Morabito & M. Campolo 10:50 - The GTM as Predictor of Risk in Pregnancy B. Rosario, D.R. Lovell, M. Naranjan, R.W. Prager & K.J. Dalton 11:10 - An Application of the Bootstrap 632+ Rule to Ecological data C. Furlanello, S. Merler, C. Chemini & A. Rizzoli 11:30 - Coffee Break Mathematical models 12:00 - What size Needs Testing? B. Apolloni 12:20 - Sequences of Discrete Hopfield Networks for the Maximum Clique Problem G. Grossi 12:40 - Entropy Based Comparison of Neural Networks for Classification S. Draghici & V. Beiu 13:00 - Energy Functional and Fixed Points of a Neural Network L.B. 
Litinsky 13:20 - Lunch 15:00 - High light spotting of the posters 16:30 - Poster Session 18:00 - Tavola rotonda "I fondamenti delle reti neurali dopo 10 anni dal loro revival" Friday 23 May 9:30 - Title to be announced C.M. Bishop (Invited Talk) Pattern Recognition & Signal Processing 10:30 - Speeding up Neural Network Execution: an Application to Speech Recognition D. Albesano, F. Mana & R. Gemello 10:50 - Word Recognition by MLP-Based Character Spotting and Dynamic Programming F. Camastra, E. Cepollina & A.M. Colla 11:10 - Periodicity Analysis of Unevenly Spaced Data by means of Neural Networks M. Rasile, L. Milano, R. Tagliaferri & G. Longo 11:30 - Coffee Break 12:00 - Image Reconstruction Using a Hierarchical RBF Network Architecture N.A. Borghese, G. Ferrigno & S. Ferrari 12:20 - Fuzzy Neural Networks for Pattern Recognition A. Blonda & A. Petrosino (Review Talk) 13:20 - Lunch 16:00 - Eduardo R. Caianiello Lecture: - (The winner of the 1997 E.R. Caianiello Fellowship Award) 17:00 - Annual S.I.R.E.N. Meeting 20:00 - Conference Dinner Saturday 24 May 9:30 - Extracting Useful Information from Recurrent Neural Networks: Applications to Financial Time Series. C. Lee Giles & S. Lawrence (Invited Talk) Architectures and Algorithms 10:30 - Computational Maps for Articulatory Speech Synthesis V. Sanguineti & P. Morasso 10:50 - EM Algorithm: A Neural Network View A. Budillon & F. Palmieri 11:10 - Discriminative Least Squares Learning for Fast Adaptive Neural Equalization R. Parisi, E.D. Di Claudio & G. Orlandi 11:30 - Coffee Break 12:00 - Training Analog VLSI Multi Layer Perceptron Networks with Continuous Time Back Propagation G.M. Bo, D.D. Caviglia, H. Chible' & M. Valle 12:20 - A Unifying View of Gradient Calculation and Learning for Locally Recurrent Neural Networks P. Campolucci, A. Uncini & F. Piazza (Review Talk) POSTER SESSION - Hidden Recursive Models P. Frasconi, M. Gori & A. Sperduti - Attractor Neural Networks as Models of Semantic Memory E. Pessa & M. Pietronilla Penna - Plastic Tabu Search for Training Multilayer Perceptrons M. Battisti & P. Burrascano - Cluster Connections: a Visualization Technique to Reveal Cluster Boundaries in Self-Organizing Maps D. Merkl & A. Rauber - Geometrical Constructive Algorithms of Binary Neural Networks G. Martinelli, F.M. Frattale Mascioli & V. Catini - Classifying Magnetic Resonance Spectra of Brain Neoplasms Using Fuzzy and Robust Gold Standard Adjustments N. Pizzi - Application of Fuzzy Neural Networks on Financial Problems M. Rast - On the Cognitive Behaviour of a Multi-Layer Perceptron in Forecasting Meteorological Visibility A. Pasini, V. Pelino & S. Potesta' - Una Soluzione Neurale per la Navigazione di un Robot Mobile in Ambienti Non Strutturati e Non Noti Utilizzando Landmarks Visivi S. Vitabile, F. Bianco, G. Vassallo & F. Sorbello - Video Data Compression Using Multilayer Perceptrons S. Carrato - On the Adaptable Boolean Neural Net Paradigm F.E. Lauria, R. Prevete, M. Milo & S. Visco - Interval Arithmetic Perceptron with Pruning Capability G.P. Drago & S. Ridella - MAIA Neural Network. An Application to the Railway Antiskating System G. Pappalardo, M.N. Postorino, D. Rosaci & G.M.L. Sarne' - A Hebbian Model for Space Representation F. Frisone & P. Morasso - HW/SW Co-Design of a Complete Pre-Processing/Recognition System Based on Sgs-Thomson OCR Analog Chip M. Costa, D. Palmisano, E. Pasero & R. Tosco - Rates of Approximation of Multivariable Functions by One-Hidden-Layer Neural Networks V. Kurkova - Icarus Platform G. 
Russo - A Distribution-Free VC-Dimension-Based Performance Bound D. Mattera & F. Palmieri The registration fee is 300.000 Italian Lire (275.000 Italian Lire for SIREN members) and can be paid on site. More information can be found on the web pages at the address below: http://www-dsi.ing.unifi.it/neural The workshop is the annual meeting of SIREN (Societa' Italiana Reti Neuroniche) Organized and Supported by IIASS (Istituto Internazionale Alti Studi Scientifici) "E.R. Caianiello" Istituto Italiano Studi Filosofici IEEE NNC Italian RIG Universita' degli Studi di Salerno and sponsored by : ELSAG BAILEY, Genova Dipartimento di Informatica ed Applicazioni "R.M. Capocelli", Univ. Salerno Dipartimento di Scienze dell'Informazione, Univ. Milano Dipartimento di Scienze Fisiche "E.R. Caianiello", Univ. Salerno IRSIP-CNR, Napoli From mpolycar at ececs.uc.edu Thu Apr 17 17:10:42 1997 From: mpolycar at ececs.uc.edu (Marios Polycarpou) Date: Thu, 17 Apr 1997 17:10:42 -0400 (EDT) Subject: Interdisciplinary Ph.D. Fellowships Message-ID: <199704172110.RAA07239@zoe.ece.uc.edu> REAL-TIME SPATIALLY-DISTRIBUTED DISINFECTANT CONTROL IN WATER DISTRIBUTION NETWORKS USING NEURAL NETWORKS The University of Cincinnati Earth System Science Program announces the availability of several Ph.D. graduate student fellowships, beginning Fall 1997. These Ph.D. fellowships are funded by the National Science Foundation. Each fellow will receive a full tuition scholarship and a monthly stipend ($1450/month for 12 months), plus funds for research supplies, travel to meetings, and support of a summer sabbatical at an off-campus location. Due to financial support requirements, all fellows must be either U.S. citizens or permanent residents. While the scope of research topics is broad and flexible, we are specifically seeking one or more individuals with an interest in studying spatially distributed real-time control of disinfectant residual in water distribution systems. The question we wish to answer is stated quite simply: How best to control the spatio-temporal distribution of disinfectant residual within a water distribution network? The actual problem is not simple, however, due to complex system dynamics and chemical kinetics. The looped distribution network (i.e., it is not a spanning tree) is a multiple-input multiple-output, spatially extended dynamic system with significant time delays. The network hydraulics, which govern disinfectant transport on a system scale, are driven by external consumer loads and time varying pump operations, and it is typical that network flows will change dramatically and frequently, both in magnitude and direction. Coupled with these dynamics are complex disinfectant kinetics that depend on the pipe material at a physical location, the water source(s), and the type of disinfectant addition at the source(s) (multiple disinfectants are sometimes used). Further, the reactions with chlorine (the most common disinfectant) produce byproducts that evidence suggests are carcinogenic. This evidence has led to tightening lower and upper limits on acceptable chlorine concentrations within the network, and thus to a real need for more advanced control approaches and for a better understanding of disinfectant decay kinetics. Our plans for this work include the development and adaptation of control-theoretic approaches, including neural network methodologies, for real-time spatially-distributed control of multiple simultaneous disinfectant additions.
Activities that fall within the project scope include: 1) development of appropriate modeling and simulation methods, 2) consideration of system robustness in the face of uncertain fluctuations in water demands, and 3) optimal location of disinfectant additions to minimize control effort. Applications are encouraged from individuals in any branch of engineering or the physical sciences; applicants should demonstrate a high degree of creativity along with strong quantitative and programming skills, and are expected to interact with chemists, electrical and environmental engineers, and utility personnel participating on the research team. For more information including application materials, contact Prof. Jim Uber (Environ. Hydrology, 513-556-3643, Jim.Uber at uc.edu) or Prof. Marios Polycarpou (Elec. & Comp. Eng. & Comp. Sci., 513-556-4763, Marios.Polycarpou at uc.edu). ************************************************************************** * Prof. Marios M. Polycarpou | TEL: (513) 556-4763 * * University of Cincinnati | FAX: (513) 556-7326 * * Dept. Electrical & Computer Engineering | * * Cincinnati, Ohio 45221-0030 | Email: polycarpou at uc.edu * ************************************************************************** From HECKATHO at B.PSC.EDU Thu Apr 17 13:52:23 1997 From: HECKATHO at B.PSC.EDU (HECKATHO@B.PSC.EDU) Date: Thu, 17 Apr 1997 13:52:23 -0400 Subject: Neural Workshop Announcement Message-ID: <970417135223.20408978@B.PSC.EDU> Simulations in Computational Neuroscience June 11-14, 1997 Pittsburgh Supercomputing Center Pittsburgh, PA Participants in this workshop will learn to use PGENESIS, a parallel version of the GENESIS simulator, and PNEURON (under development), a parallel version of the NEURON simulator. This course will be of interest to active modelers who perceive the need for large simulations which are beyond the effective capabilities of single-cpu workstations. Both PGENESIS and PNEURON are suitable for large scale parallel search of parameter space for single neuron and neuronal network models. PGENESIS is also suitable for parallel simulation of very large network models. Both of these packages run on single workstations, workstation networks, small-scale parallel computers and large massively parallel supercomputers, providing a natural scale-up path. For large simulations NSF funds four supercomputing centers for the use of US-based computational scientists. Familiarity with the non-parallel version of GENESIS or NEURON is preferred but not required. Techniques for parallel search of parameter space and for decomposition of network models will be two foci of the workshop. Participants are encouraged to bring their models to the workshop. Each participant is provided with an SGI Irix workstation and accounts on PSCs advanced computing resources including our 512-node Cray T3E. Each day lectures will be followed by hands-on computing sessions at which experienced instructors will be available to assist in using PGENESIS and PNEURON, and optimizing models. Hotel accommodations during the workshop for researchers affiliated with U.S. academic institutions will be paid by our NIH grant. Complimentary breakfast and lunches also will be provided. There is no registration fee for this workshop. All other costs incurred in attending (travel, other meals, etc.) are the responsibility of the individual participant. The deadline for submitting applications is May 3, 1997. Enrollment is limited to 20 participants. 
Further information and application materials can be found at: http://www.psc.edu/biomed/workshops/wk-97/neural.html Support for this workshop is from NIH under the NCRR program and from NSF under the Computational Activities in Biology program. From devin at psy.uq.edu.au Fri Apr 18 00:42:06 1997 From: devin at psy.uq.edu.au (Devin McAuley) Date: Fri, 18 Apr 1997 14:42:06 +1000 (EST) Subject: Connectionist Models of Cognition: A Workshop Message-ID: CONNECTIONIST MODELS OF COGNITION: A WORKSHOP Monday July 7th - Friday July 11th, 1997 University of Queensland Brisbane, Queensland 4072 Australia sponsored by the School of Psychology and the Cognitive Science Program Workshop Home Page: http://psy.uq.edu.au/~brainwav/Workshop/ BrainWave Home Page: http://psy.uq.edu.au/~brainwav/ This workshop provides an opportunity for faculty and students with teaching and research interests in connectionist modeling to gain hands-on modeling experience with the BrainWave neural network simulator during an intensive 5-day workshop on connectionist models of cognition at the University of Queensland. The workshop has three primary objectives: * to provide training in specific connectionist models of cognition. * to introduce instructors to the BrainWave simulator and course materials. * to support the development of new models by the participants. The first two days of the workshop will provide training in specific connectionist models of cognition and introduce participants to the BrainWave simulator. Day 3 will focus on how to develop a new model using BrainWave. On days 4 and 5, the instructors will be available to assist faculty and students in the development of new models and to discuss the development of teaching materials for undergraduate and postgraduate courses on connectionist modeling. Instructors: * Simon Dennis, School of Psychology * Devin McAuley, School of Psychology * Janet Wiles, Schools of Information Technology and Psychology Registration for the 5-day workshop includes: * Course materials: o The BrainWave Simulator for the Mac, Windows95, and Unix Platforms o An Introduction to Neural Networks and the BrainWave Simulator o Three chapters of a workbook on connectionist models of cognition * Morning and afternoon tea (Monday - Friday) * Lunch (Monday and Tuesday) The registration deadline for the workshop is June 6, 1997. ---------------------------------------------------------------------------- PROGRAM July 7 Session 1: Introduction to Connectionist Models and the BrainWave Simulator Session 2: Automatic and Controlled Processing: A Stroop Effect Model: Cohen, Dunbar, and McClelland (1990) July 8 Session 1: Language Disorders: A Deep Dyslexia Model: Hinton and Shallice (1991) Session 2: Episodic Memory: The Matrix Model: Humphreys, Bain, and Pike (1989) July 9 Introduction to Model Development in BrainWave July 10 Individual Model Development and Group Discussion July 11 Individual Model Development and Group Discussion ---------------------------------------------------------------------------- Send general inquiries and registration form to: email: brainwav at psy.uq.edu.au fax: +61-7-3365-4466 (attn: Dr. Devin McAuley) postal mail: BrainWave Workshop (c/o Dr. 
Devin McAuley) School of Psychology University of Queensland Brisbane, Queensland 4072 Australia ---------------------------------------------------------------------------- REGISTRATION FORM Name: ________________________________________ Email: ________________________________________ Address: ________________________________________ ________________________________________ ________________________________________ ________________________________________ Student (AUS$60) ____ Academic (AUS$95) ____ Industry (AUS$295) ____ ACCOMODATION I would like accomodation at King's College ($45 per night - private room with ensuite shared between two, bed and breakfast) Yes ____ No ____ Arrival Date: ____ Departure Date: ____ Accomodation total: AUS $ ______ I would like to be billeted: Yes ____ No ____ Arrival Date: ____ Departure Date: ____ Total payment including registration: AUS $______ FORM OF PAYMENT Cheque or Money Order ____ Visa ____ Mastercard ____ Card # ____________________________ Expiration Date _____ Please debit my credit card for AUS$_______________ Signature _________________________________________ Cheques and money orders should be made out to University of Queensland, School of Psychology. From bishopc at helios.aston.ac.uk Fri Apr 18 02:27:49 1997 From: bishopc at helios.aston.ac.uk (Prof. Chris Bishop) Date: Fri, 18 Apr 1997 07:27:49 +0100 Subject: GTM paper and software available Message-ID: <21297.199704180627@sun.aston.ac.uk> GTM: The Generative Topographic Mapping ======================================= Christopher M. Bishop, Markus Svensen and Christopher K. I. Williams Accepted for publication in Neural Computation Abstract Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis which is based on a linear transformations between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline. Available as a postscript file from the GTM home page: http://www.ncrg.aston.ac.uk/GTM/ This home page also provides a Matlab implementation of GTM as well as data sets used in its development. Related technical reports are available here too. To access other publications by the Neural Computing Research Group, go to the group home page: http://www.ncrg.aston.ac.uk/ and click on `Publications' -- you can then obtain a list of all online NCRG publications, or search by author, title or abstract. From bogner at argos.eleceng.adelaide.edu.au Fri Apr 18 09:14:50 1997 From: bogner at argos.eleceng.adelaide.edu.au (Robert E. 
Bogner) Date: Fri, 18 Apr 1997 22:44:50 +0930 Subject: Good Job in Australia Message-ID: <199704181314.WAA21872@argos.eleceng.adelaide.edu.au> POSTDOCTORAL OR RESEARCH FELLOW Signal Processing and Pattern Recognition at CSSIP and the University of Adelaide, South Australia The Cooperative Research Centre for Sensor Signal and Information (CSSIP) is one of several cooperative research centres awarded by the Australian Government to establish excellence in research, development and industrial application of key technologies. The University of Adelaide, represented by the Electrical and Electronic Engineering Dept., is a partner in this cooperative research centre, together with the Defence Science and Technology Organisation, four other universities, and several companies. CSSIP consists of about 100 effective full time researchers, and is well equipped with many UNIX Workstations and a massively parallel machine. The aim of the position is to develop and investigate principles in the areas of sensor signal and image processing, classification and separation of signals, pattern recognition and data fusion. The Pattern Recognition program is in a formative phase. An appointment may be made for up to two years, with possibility of renewal depending on funding. DUTIES: In consultation with task leaders and specialist researchers to contribute to the formulation of research plans, investigate principles for algorithm design, design experiments, prepare data and software and carry out experiments, prepare or assist with the preparation of technical reports, and liaise with other groups and industrial contacts. QUALIFICATIONS: The successful candidate must have a Ph.D. or equivalent achievement, a proven research record, and a demonstrated ability to communicate excellently in written and spoken English. CONDITIONS and PAY will be in accordance with University of Adelaide policies, and will depend on the qualifications and experience. Appointments may be made in scales A$36285 to A$41000 for a postdoc., and A$42538 to A$48688 for a research fellow (A$1 is approx. US$0.78.) plus superannuation contribution. The position may be offered as soon as an outstanding applicant is found. ENQUIRIES: Prof. R. E. Bogner, Electrical & Electronic Engineering, Dept., The University of Adelaide, Adelaide South Australia 5005, phone: (61)-8-8303-5589, Fax: (61)-8-8303 4360 Email: bogner at eleceng.adelaide.edu.au APPLICATIONS should include nationality, residence qualification, date on which the applicant could take up duty in Adelaide, and the names of three referees. They should be sent to Mr. Tim Anderson, Luminis Pty. Ltd, Box 149, Rundle Mall, Adelaide, South Australia 5000. Email: luminis at luminis.adelaide.edu.au From piuri at elet.polimi.it Sat Apr 19 05:13:27 1997 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Sat, 19 Apr 1997 11:13:27 +0200 Subject: Call for Papers Message-ID: <1.5.4.32.19970419072717.006d2c10@elet.polimi.it> ICONIP'97, The Fourth International Conference on Neural Information Processing, November 24-28, 1997. Dunedin/Queenstown, New Zealand Special Session on System Monitoring, Modeling, and Analysis CALL FOR PAPERS Adaptive and intelligent systems based on neural computation and related techniques have successfully been applied in the analysis of various complex processes. This is due to the inherent learning capability of neural networks which is superior in analyzing systems that cannot be modeled analytically. 
In addition to various fields of engineering, like pattern recognition, industrial process monitoring, and telecommunications, practical applications include information retrieval, data analysis, and financial applications. A special session devoted to these areas of neural computation will be organized at ICONIP'97. The scope of the special session covers neural networks methods and related techniques as well as applications in the following areas: - monitoring, modeling, and analysis, of complex industrial processes - telecommunications applications, including resource management and optimization - data analysis and fusion, including financial applications - time series modeling and forecasting Prospective authors are invited to submit papers to the special session on any area of neural techniques on system monitoring, modeling, and analysis including, but not limited to the topics listed above. The submissions must be received by May 30, 1997. Please, send five copies of your manuscript to Prof. Olli Simula, Special Session Organizer Helsinki University of Technology, Laboratory of Computer and Information Science, Rakentajanaukio 2 C, FIN-02150 Espoo, Finland. More detailed instructions for manuscript submission procedure can be found at WWW, on the special session home page: http://nucleus.hut.fi/ICONIP97/ssmonitor/ For the most up-to-date information about ICONIP'97, please browse the conference home page: http://divcom.otago.ac.nz:800/com/infosci/kel/iconip97.htm Important dates: Papers due: May 30, 1997 Notification of acceptance: July 20, 1997 Final camera-ready papers due: August 20, 1997 ----------------------------------------------------------------------------- ----------------------------------------------------------------------------- Olli Simula Professor of Computer Science Helsinki University of Technology Telephone: +358-9-4513271 Department of Computer Science and Engineering Mobile: +358-400-448412 Laboratory of Computer and Information Science Fax: +358-9-4513277 http://nucleus.hut.fi/~ollis/ Email: Olli.Simula at hut.fi From piero at matilde.laboratorium.dist.unige.it Tue Apr 22 19:52:50 1997 From: piero at matilde.laboratorium.dist.unige.it (Piero Morasso) Date: Tue, 22 Apr 97 19:52:50 MET DST Subject: Book announcement Message-ID: <9704221752.AA12961@matilde.laboratorium.dist.unige.it> ========================================================================= ANNOUNCEMENT OF A NEW BOOK OF COMPUTATIONAL NEUROSCIENCE SELF-ORGANIZATION, COMPUTATIONAL MAPS, AND MOTOR CONTROL edited by Pietro Morasso and Vittorio SanguinetI North Holland Elsevier - Advances in Psychology vol. 119 ISBN 0 444 823239, 1997, 635 pages In the study of the computational structure of biological/robotic sensorimotor systems, distributed models have gained center stage in recent years, with a range of issues including self-organization, non-linear dynamics, field computing, etc. This multidisciplinary research area is addressed by a multidisciplinary team of contributors, who provide a balanced set of articulated presentations which include reviews, computational models, simulation studies, psychophysical and neurophysiological experiments. For convenience, the book is divided into three parts, without a clearcut boundary but a slightly different focus. 
The reader can find different approaches on controversial issues, such as the role and nature of force fields, the need of internal representations, the nature of invariant commands, the vexing question about coordinate transformations, the distinction between hierarchical and bidirectional modelling, and the influence of muscle stiffness. In Part I, the major theme concerns computational maps which typically model cortical areas, according to a view of the sensorimotor cortex as a "geometric engine" and the site of "internal models" of external spaces. Part II also addresses problems of self-organization and field-computing but in a simpler computational architecture which, although lacking a specialized cortical machinery, can still behave in a very adaptive and surprising way by exploiting the interaction with the real world. Finally, Part III is focused on the motor control issues related to the physical properties of muscular actuators and the dynamic interactions with the world, attempting to complete the picture from planning to control. PART I Cortical Maps of Sensorimotor Spaces V. Sanguineti, P. Morasso, and F. Frisone Field Computation in Motor Control B. MacLennan A Probability Interpretation of Neural Population Coding for Movement T.D. Sanger Computational Models of Sensorimotor integration Z. Ghahramani, D.M. Wolpert, and M.I. Jordan How Relevant are Subcortical Maps for the Cortical Machinery? An Hypothesis Based on Parametric Study of Extra-Relay Afferents to Primary Sensory Areas D. Minciacchi and A. Granato PART II Artificial Force-Field Based Methods in Robotics T. Tsuji, P. Morasso, V. Sanguineti, and M. Kaneko Learning Newtonian Mechanics F.A. Mussa Ivaldi and E. Bizzi Motor Intelligence in a Simple Distributed Control System: Walking Machines and Stick Insects H. Cruse and J. Dean The Dynamic Neural Field Theory of Motor Programming: Arm and Eye Movements G. Schner, K. Kopecz, and W. Erlhagen Network Models in Motor Control and Music A. Camurri PART III Human Arm Impedance in Multi-Joint Movement T. Tsuji Neural Models for Flexible Control of Redundant Systems F.H. Guenther and D. Micci Barreca Models of Motor Adaptation and Impedance Control in Human Arm Movements T. Flash and I. Gurevich Control of Human Arm and Jaw Motion: Issues Related to Musculo-Skeletal Geometry P.L. Gribble, R. Laboissire, and D.J. Ostry Computational Maps and Target Fields for Reaching Movements V. Sanguineti and P. Morasso From Uwe.Zimmer at GMD.de Tue Apr 22 18:39:45 1997 From: Uwe.Zimmer at GMD.de (Uwe R. Zimmer) Date: Wed, 23 Apr 1997 00:39:45 +0200 Subject: Japanese Robotics Research - a report and more Message-ID: <335D3E30.77C@GMD.de> Dear all, a report discussing outstanding research topics in Japanese robotics laboratories as well as governmental issues is just released. Some of the discussed groups deal with neural (biological) sensory-motor control, others are involved in biologically plausible sensory systems or redundant (humanoid) kinematics. Even creatures combining mechatronic and biological structures are investigated. ---------------------------------------------------------------------- Recent Developments in Japanese Robotics Research - Notes of a Japan Tour - Uwe R. 
Uwe R. Zimmer, Thomas Christaller, Christfried Webers
----------------------------------------------------------------------
http://www.gmd.de/People/Uwe.Zimmer/Publications/abs.Japan-Report.html
----------------------------------------------------------------------
(containing links to .pdf, .ps.gz, and .ps.Z formats of the report)

Abstract: Robotics appears to be a very lively and fruitful field of research in Japan. Some of the research topics cannot be found elsewhere at all, and some are significantly advanced. Discussing this impression, a collection of laboratories is introduced with their most outstanding topics. Moreover, some background information about research plans, politics, and organisations is given.

----------------------------------------------------------------------
For an extensive web presentation of robotics in Japan, see also:
----------------------------------------------------------------------
http://www.gmd.de/People/Uwe.Zimmer/Lists/Robotics.in.Japan.html
----------------------------------------------------------------------
For more publications on robotics from GMD, please refer to:
----------------------------------------------------------------------
http://www.gmd.de/FIT/KI/CogRob/Publications/CogRob.Publications.html
----------------------------------------------------------------------
And for general information about scientific activities in Japan:
----------------------------------------------------------------------
http://www.gmd.de/Japan/
----------------------------------------------------------------------

Uwe R. Zimmer, GMD - FIT-KI
Schloss Birlinghoven, 53754 St. Augustin, Germany
Voice: +49 2241 14 2373 - Fax: +49 2241 14 2384
http://www.gmd.de/People/Uwe.Zimmer/

From meyer at wotan.ens.fr Wed Apr 23 08:57:09 1997
From: meyer at wotan.ens.fr (Jean-Arcady MEYER)
Date: Wed, 23 Apr 1997 14:57:09 +0200 (MET DST)
Subject: MODELS OF SPATIAL NAVIGATION
Message-ID: <199704231257.OAA00763@eole.ens.fr>

==================================================================
CALL FOR PAPERS
ADAPTIVE BEHAVIOR Journal (The MIT Press)
Special Issue on BIOLOGICALLY INSPIRED MODELS OF SPATIAL NAVIGATION
Guest editor: Nestor Schmajuk
Submission Deadline: August 31, 1997.

In recent decades, computational models of animal and human spatial navigation have received increasing attention from computer scientists, engineers, psychologists, and neurophysiologists. At the same time, roboticists have devoted enormous effort to the design of robots capable of spatial navigation. The combined contribution of these fields to the study of spatial navigation promises rapid progress in this area. This special issue of Adaptive Behavior will focus on models of spatial navigation in both animals and robots. We are soliciting papers describing finished work on models and technology applied to maze navigation, search behavior, and exploration. Models of brain areas involved in navigation are also welcome. We encourage submissions that address the following topics:

- spatial learning in animals or robots
- maze navigation in animals or robots
- exploratory and searching behavior by individual animals or robots
- exploratory behavior by groups of animals or robots
- learning from incremental and delayed feedback

Submitted papers should be delivered by August 31, 1997.
Authors intending to submit a manuscript should contact the guest editor as soon as possible to discuss paper ideas and suitability for this issue. Use nestor at acpub.duke.edu or tel: (919) 660-5695 or fax: (919) 660-5726. Manuscripts should be typed or laser-printed in English (with American spelling preferred) and double-spaced. Copies of the complete Adaptive Behavior Instructions to Contributors are available on request--also see the Adaptive Behavior journal's home page at: http://www.biologie.ens.fr/AnimatLab/AB.html

For paper submissions, send five (5) copies of submitted papers (hard copy only) to:
Dr. Nestor Schmajuk
Department of Psychology
Duke University
Durham, NC 27708
==================================================================

From mel at quake.usc.edu Wed Apr 23 02:04:19 1997
From: mel at quake.usc.edu (Bartlett Mel)
Date: Wed, 23 Apr 1997 14:04:19 +0800
Subject: Joint Symposium Registration and Program
Message-ID: <9704232104.AA12648@quake.usc.edu>

Please find REGISTRATION INFORMATION and PRELIMINARY PROGRAM for the upcoming 4th Annual Southern California JSNC:
-----------------------------
--- 4th Annual Joint Symposium on Neural Computation ---

Co-sponsored by
Institute for Neural Computation, University of California, San Diego
and
Biomedical Engineering Department and Neuroscience Program, University of Southern California

to be hosted at
The University of Southern California, University Park Campus
Rm. 124, Seeley G. Mudd Building
Saturday, May 17, 1997, 8:00 a.m. to 5:30 p.m.

8:00 am Registration/Coffee
8:50 am Opening Remarks

Session 1: "VISION" - Bartlett Mel, Chair
9:00 am Peter Kalocsai, USC, "Using Extension Fields to Improve Performance of a Biologically Inspired Recognition Model"
9:15 am Kechen Zhang, The Salk Institute, "A Conjugate Neural Representation of Visual Objects in Three Dimensions"
9:30 am Alexander Grunewald, Caltech, "Detection of First and Second Order Motion"
9:45 am Zhong-Lin Lu, USC, "Extracting Characteristic Structures from Natural Images Through Statistically Certified Unsupervised Learning"
10:00 am Don McCleod, UC San Diego, "Optimal Nonlinear Codes"
10:15 am Lisa J. Croner, The Salk Institute, "Segmentation by Color Influences Response of Motion-Sensitive Neurons in Cortical Area MT"

10:15 am - 10:30 am *** BREAK ***

Session 2: "CODING in NEURAL SYSTEMS" - Christof Koch, Chair
10:30 am Dawei Dong, Caltech, "How Efficient is Temporal Coding in the Early Visual System?"
10:45 am Martin Stemmler, Caltech, "Entropy Maximization in Hodgkin-Huxley Models"
11:00 am Michael Wehr, Caltech, "Temporal Coding with Oscillatory Sequences of Firing"
11:15 am Martin J. McKeown, The Salk Institute, "Functional Magnetic Resonance Imaging Data Interpreted as Spatially Independent Mixtures"
11:30 am KEYNOTE SPEAKER: Prof. Irving Biederman, William M. Keck Professor of Cognitive Neuroscience, Departments of Psychology and Computer Science and the Neuroscience Program, USC, "Shape Representation in Mind and Brain"
-------------
12:30 pm - 2:30 pm *** LUNCH/POSTERS ***

P1. Konstantinos Alataris, USC, "Modeling of Neuronal Ensemble Dynamics"
P2. George Barbastathis, Caltech, "Awareness-Based Computation"
P3. Marian Stewart Bartlett, UC San Diego, "What are the Independent Components of Face Images?"
P4. Maxim Bazhenov, The Salk Institute, "A Computational Model of Intrathalamic Augmenting Responses"
P5. Alan Bond, Caltech, "A Computational Model for the Primate Brain Based on its Functional Architecture"
P6. Glen Brown, The Salk Institute, "Output Sign Switching by Neurons is Mediated by a Novel Voltage-Dependent Sodium Current"
P7. Martin Chian, USC, "Characterization of Unobservable Neural Circuitry in the Hippocampus with Nonlinear Systems Analysis"
P8. Carl Chiang, The Neuroscience Institute, "Visual and Sensorimotor Intra- and Intercolumnar Synchronization in Awake Behaving Cat"
P9. Matthew Dailey, UC San Diego, "Learning a Specialization for Face Recognition"
P10. Emmanuel Gillissen, Caltech, "Comparative Studies of Callosal Specification in Mammals"
P11. Michael Gray, The Salk Institute, "Informative Features for Visual Speechreading"
P12. Alex Guazzelli, USC, "A Taxon-Affordances Model of Rat Navigation"
P13. Marwan Jabri, The Salk Institute/Sydney University, "A Neural Network Model for Saccades and Fixation on Superior Colliculus"
P14. Mathew Lamb, USC, "Depth Based Prey Capture in Frogs and Salamanders"
P15. Te-Won Lee, The Salk Institute, "Independent Component Analysis for Mixed Sub-Gaussian and Super-Gaussian Sources"
P16. George Marnellos, The Salk Institute, "A Gene Network of Early Neurogenesis in Drosophila"
P17. Steve Potter, Caltech, "Animat in a Petri Dish: Cultured Neural Networks for Studying Neural Computation"
P18. James Prechtl, UC San Diego, "Visual Stimuli Induce Propagating Waves of Electrical Activity in Turtle Cortex"
P19. Raphael Ritz, The Salk Institute, "Multiple Synfire Chains in Simultaneous Action Lead to Poisson-Like Neuronal Firing"
P20. Adrian Robert, UC San Diego, "A Model of the Effects of Lamination and Cell-Type Specialization in the Neocortex"
P21. Joseph Sirosh, HNC Software Inc., "Large-Scale Neural Network Simulations Suggest a Single Mechanism for the Self-Organization of Orientation Maps, Lateral Connections and Dynamic Receptive Fields in the Primary Visual Cortex"
P22. George Sperling, UC Irvine, "A Proposed Architecture for Visual Motion Perception"
P23. Adam Taylor, UC San Diego, "Dynamics of a Recurrent Network of Two Bipolar Units"
P24. Laurenz Wiskott, The Salk Institute, "Objective Functions for Neural Map Formation"
-------------------------------------------

Session 3: "HARDWARE" - Michael Arbib, Chair
2:30 pm Christof Born, Caltech, "Real Time Ego-Motion Estimation with Neuromorphic Analog VLSI Sensors"
2:45 pm Anil Thakoor, JPL, "High Speed Image Computation with 3D Analog Neural Hardware"

Session 4: "VISUOMOTOR COORDINATION" - Michael Arbib, Chair
3:00 pm Marwan Jabri, The Salk Institute/Sydney University, "A Computational Model of Auditory Space Neural Coding in the Superior Colliculus"
3:15 pm Amanda Bischoff, USC, "Modeling the Basal Ganglia in a Reciprocal Aiming Task"
3:30 pm Jacob Spoelstra, USC, "A Computational Model of the Role of the Cerebellum in Adapting to Throwing While Wearing Wedge Prism Glasses"

3:45 pm - 4:00 pm *** BREAK ***

Session 5: "CHANNELS, SYNAPSES, and DENDRITES" - Terry Sejnowski, Chair
4:00 pm Akaysha C. Tang, The Salk Institute, "Modeling the Effect of Neuromodulation of Spike Timing in Neocortical Neurons"
4:15 pm Michael Eisele, The Salk Institute, "Reinforcement Learning by Pyramidal Neurons"
4:30 pm Sunil S. Dalal, USC, "A Nonlinear Positive Feedback Model of Glutamatergic Synaptic Transmission in Dentate Gyrus"
4:45 pm Venkatesh Murthy, The Salk Institute, "Are Neighboring Synapses Independent?"
5:00 pm Gary Holt, Caltech, "Shunting Inhibition Does Not Have a Divisive Effect on Firing Rates"
5:15 pm Kevin Archie, USC, "Binocular Disparity Tuning in Cortical 'Complex' Cells: Yet Another Role for Intradendritic Computation?"
5:30 pm Closing Remarks *** Adjourn for DINNER *** ------------------------------------------- ORGANIZERS Bartlett Mel, USC (Chair) Michael Arbib, USC Terry Sejnowski, Salk/UCSD PROGRAM COMMITTEE Michael Arbib, USC Bartlett Mel, USC (Chair) Christof Koch, Caltech Terry Sejnowski, UCSD ------------------------------------------- REGISTRATION INFORMATION If you have not already registered, you still have 10 days to do so before the Pre-Registration DEADLINE: Pre-Registration: $25 Late/On-site Registration: $35 (received after Friday, May 2) Registration fee includes coffee, snacks, lunch, and proceedings. Registration form and checks payable to the "Department of Biomedical Engineering, USC" should be mailed to: Joint Symposium, attn: Linda Yokote Biomedical Engineering Department USC, MC 1451 Los Angeles, CA 90089 Administrative questions can be addressed to Linda at: yokote at bmsrs.usc.edu (213)740-0840, (213)740-0343 fax ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1997 JSNC Attendees Registration Form Name: ________________________________________________________________________ Affiliation: _________________________________________________________________ Address: _____________________________________________________________________ _____________________________________________________________________ _____________________________________________________________________ Phone: _________________ Fax: ____________________ E-mail: __________________ Special Dietary Preference: __________________________________________________ Registration fee enclosed: _________ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ DIRECTIONS TO THE SYMPOSIUM From juergen at idsia.ch Thu Apr 24 04:06:39 1997 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Thu, 24 Apr 1997 10:06:39 +0200 Subject: IDSIA job interviews Message-ID: <199704240806.KAA01923@ruebe.idsia.ch> Concerning the recent IDSIA job openings in http://www.idsia.ch/~juergen/lstm.html : Between May 25 and June 1 I'll be in Hong Kong (for TANC-97). Candidates from Southeast Asia may be interested in arranging job interviews there. Juergen Schmidhuber, IDSIA From terry at salk.edu Thu Apr 24 21:31:57 1997 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 24 Apr 1997 18:31:57 -0700 (PDT) Subject: NEURAL COMPUTATION 9:4 Message-ID: <199704250131.SAA04124@helmholtz.salk.edu> Neural Computation - Contents Volume 9, Number 4 - May 15, 1997 Review Similarity, Connectionism, and the Problem of Representation in Vision Shimon Edelman and Sharon Duvdevani-Bar Article Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex Rajesh P. N. Rao and Dana H. Ballard Notes Correction to "Lower Bounds on the VC-Dimension of Smoothly Parametrized Function Classes Wee Sun Lee, Peter L. Bartlett and Robert C. Williamson Lower Bound on VC-Dimension by Local Shattering Yossi Erlich, Dan Chazan, Scott Petrack, and Avi Levi Letters SEEMORE: Combining Color, Shape and Texture Histogramming in a Neurally-Inspired Approach to Visual Object Recognition Bartlett Mel Image Segmentation Based on Oscillatory Correlation DeLiang Wang and David Terman Stochastic Completion Fields: A Neural Model of Illusory Contour Shape and Salience Lance R. Williams and David W. Jacobs Local Parallel Computation of Stochastic Completion Fields Lance R. Williams and David W. 
Jacobs Optimal, Unsupervised Learning in Invariant Object Recognition Guy Wallis and Roland Baddeley Activation Functions, Computational Goals and Learning Rules for Local Processors with Contextual Guidance Jim Kay and W. A. Phillips Marr's Theory of the Neocortex as a Self-Organizing Neural Network David Willshaw, John Hallam, Sarah Gingell and Soo Leng Lau ----- ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1997 - VOLUME 9 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $250 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-8 are regularly available for $28 each to institutions and $14 each for individuals Add $5 for postage per issue outside USA (+7% GST for Canada) mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 ----- From ataxr at IMAP1.ASU.EDU Wed Apr 23 21:12:46 1997 From: ataxr at IMAP1.ASU.EDU (Asim Roy) Date: Wed, 23 Apr 1997 21:12:46 -0400 (EDT) Subject: CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS? Message-ID: 1997 International Conference on Neural Networks (ICNN'97) Houston, Texas (June 8 -12, 1997) ---------------------------------------------------------------- Further information on the conference is available on the conference web page: http://www.eng.auburn.edu/department/ee/ICNN97 ------------------------------------------------------------------ PANEL DISCUSSION ON "CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?" ------------------------------------------------------------------- This is to announce that a panel will discuss the above question at ICNN'97 on Monday afternoon (June 9). Below is the abstract for the panel discussion broadly outlining the questions to be addressed. I am also attaching a slightly modified version of a subsequent note sent to the panelist. I think the issues are very broad and the questions are simple. The questions are not tied to any specific "algorithm" or "network architecture" or "task to be performed." However, the answers to these simple questions may have an enormous effect on the "nature of algorithms" that we would call "brain-like" and for the design and construction of autonomous learning systems and robots. I believe these questions also have a bearing on other brain related sciences such as neuroscience, neurobiology and cognitive science. Please send any comments on these issues directly to me (asim.roy at asu.edu). I will post the collection of responses to the newsgroups in a few weeks. All comments/criticisms/suggestions are welcome. All good science depends on vigorous debate. Asim Roy Arizona State University ------------------------- PANEL MEMBERS 1. Igor Aleksander 2. Shunichi Amari 3. Eric Baum 4. Jim Bezdek 5. Rolf Eckmiller 6. Lee Giles 7. Geoffrey Hinton 8. Dan Levine 9. Robert Marks 10. Jean Jacques Slotine 11. John G. Taylor 12. David Waltz 13. Paul Werbos 14. Nicolaos Karayiannis (Panel Moderator, ICNN'97 General Chair) 15. Asim Roy Six of the above members are plenary speakers at the meeting. ------------------------- PANEL TITLE: "CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?" ABSTRACT Classical connectionist learning is based on two key ideas. First, no training examples are to be stored by the learning algorithm in its memory (memoryless learning). 
It can use and perform whatever computations are needed on any particular training example, but must forget that example before examining others. The idea is to obviate the need for large amounts of memory to store a large number of training examples. The second key idea is that of local learning - that the nodes of a network are autonomous learners. Local learning embodies the viewpoint that simple, autonomous learners, such as the single nodes of a network, can in fact produce complex behavior in a collective fashion. This second idea, in its purest form, implies a predefined net being provided to the algorithm for learning, such as in multilayer perceptrons.

Recently, some questions have been raised about the validity of these classical ideas. The arguments against these classical ideas are simple and compelling. For example, it is a common fact that humans do remember and recall information that is provided to them as part of learning. And the task of learning is considerably easier when one remembers relevant facts and information than when one doesn't. Second, strict local learning (e.g. back propagation type learning) is not a feasible idea for any system, biological or otherwise. It implies predefining a network "by the system" without having seen a single training example and without having any knowledge at all of the complexity of the problem. Again, there is no system that can do that in a meaningful way. The other fallacy of the local learning idea is that it acknowledges the existence of a "master" system that provides the design so that autonomous learners can learn. Recent work has shown that much better learning algorithms, in terms of computational properties (e.g. designing and training a network in polynomial time complexity, etc.), can be developed if we don't constrain them with the restrictions of classical learning. It is, therefore, perhaps time to reexamine the ideas of what we call "brain-like learning." This panel will attempt to address some of the following questions on classical connectionist learning:

1. Should memory be used for learning? Is memoryless learning an unnecessary restriction on learning algorithms?
2. Is local learning a sensible idea? Can better learning algorithms be developed without this restriction?
3. Who designs the network inside an autonomous learning system such as the brain?

-------------------------
A SUBSEQUENT NOTE SENT TO THE PANELIST

The panel abstract was written to question the two pillars of classical connectionist learning - memoryless learning and pure local learning. With regard to memoryless learning, the basic argument against it is that humans do store information (remember facts/information) in order to learn. So memoryless learning, as far as I understand, cannot be justified by any behavioral or biological observations/facts. That does not mean that humans store any and all information provided to them. They are definitely selective and parsimonious in the choice of information/facts to collect and store. We have been arguing that it is the "combination" of memoryless learning and pure local learning that is not feasible for any system, biological or otherwise. Pure local learning, in this context, implies that the system somehow puts together a set of "local learners" that start learning with each learning example given to it (e.g. in back propagation) without having seen a single training example before and without knowing anything about the complexity of the problem.
Such a system can be demonstrated to do well in some cases, but would not work in general. Note that not all existing neural network algorithms are of this pure local learning type. For example, if I understand correctly, in constructive algorithms such as ART, RBF, RCE/hypersphere and others, a "decision" to create a new node is made by a "global decision-maker" based on evidence about the performance of the existing system. So there is quite a bit of global coordination and "decision-making" in those algorithms beyond the simple "local learning". Anyway, if we "accept" the idea that memory can indeed be used for the purpose of learning (Paul Werbos indicated so in one of his notes), the terms of the debate/discussion change dramatically. We then open the door to the development of far more robust and reliable learning algorithms with much nicer properties than before. We can then start to develop algorithms that are closer to "normal human learning processes". Normal human learning includes processes such as (1) collection and storage of information about a problem, (2) examination of the information at hand to determine the complexity of the problem, (3) development of trial solutions (nets) for the problem, (4) testing of trial solutions (nets), (5) discarding such trial solutions (nets) if they are not good enough, and (6) repetition of these processes until an acceptable solution is found. And these learning processes are implemented within the brain, without doubt, using local computing mechanisms of different types. But these learning processes cannot exist without allowing for storage of information about the problem.

One of the "large" missing pieces in the neural network field is the definition or characterization of an autonomous learning system such as the brain. We have never defined the external behavioral characteristics of our learning algorithms. We have largely pursued algorithm development from an "internal mechanisms" point of view (local learning, memoryless learning) rather than from the point of view of "external behavior or characteristics" of these resulting algorithms. Some of these external characteristics of our learning algorithms might be: (1) the capability to design the net on their own, (2) polynomial time complexity of the algorithm in design and training of the net, (3) generalization capability, and (4) learning from as few examples as possible (quickness in learning). It is perhaps time to define a set of desirable external characteristics for our learning algorithms. We need to define characteristics that are "independent of": (1) a particular architecture, (2) the problem to be solved (function approximation, classification, memory, etc.), (3) local/global learning issues, and (4) issues of whether to use memory or not to learn. We should argue about these external properties rather than about issues of global/local learning and of memoryless learning.

With best regards,
Asim Roy
Arizona State University
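A minimal sketch, in Python, of the distinction drawn in the note above: memoryless learning uses each training example for a single update and then forgets it, whereas memory-based learning first stores the examples and then fits a model to the stored set. The linear model, the LMS-style single-pass update, and the least-squares fit are assumptions chosen for illustration; they are not algorithms proposed by the panel.

import numpy as np

def memoryless_learning(stream, dim, lr=0.1):
    # "Memoryless" learning: each (x, y) pair drives one LMS-style update
    # and is then discarded; no training examples are stored.
    w = np.zeros(dim)
    for x, y in stream:
        w += lr * (y - w @ x) * x
    return w

def memory_based_learning(stream):
    # Memory-based learning: store the whole training set first, then
    # inspect it (here, an ordinary least-squares fit over all examples).
    X, Y = [], []
    for x, y in stream:
        X.append(x)
        Y.append(y)
    w, _, _, _ = np.linalg.lstsq(np.asarray(X), np.asarray(Y), rcond=None)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_w = np.array([1.0, -2.0, 0.5])
    data = [(x, float(x @ true_w)) for x in rng.normal(size=(200, 3))]
    print(memoryless_learning(iter(data), dim=3))  # approximate after one pass
    print(memory_based_learning(iter(data)))       # exact fit on the stored set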
From atick at monaco.rockefeller.edu Mon Apr 28 08:01:02 1997
From: atick at monaco.rockefeller.edu (Joseph Atick)
Date: Mon, 28 Apr 1997 08:01:02 -0400
Subject: Network:CNS 8:2
Message-ID: <9704280801.ZM7641@monaco.rockefeller.edu>

Network: Computation in Neural Systems 8:2, 1997
Table of Contents

TOPICAL REVIEW
R33 Plasticity in adult sensory cortex: a review, A Das

PAPERS
107 A neuronal model of stroboscopic alternative motion, A Bartsch and J L van Hemmen
127 Metric-space analysis of spike trains: theory, algorithms and application, J D Victor and K P Purpura
165 Dynamic transitions in global network activity influenced by the balance of excitation and inhibition, S L Hill and A E P Villa
185 Spontaneous origin of topological complexity in self-organizing neural networks, G Chapline
195 Hebbian learning and the development of direction selectivity: the role of geniculate response timings, J C Feidler, A B Saul, A Murthy and A L Humphrey
215 Self-organized feature maps and information theory, K Holthausen and O Breidbach
229 Topological singularities in cortical orientation maps: the sign theorem correctly predicts orientation column patterns in primate striate cortex, D Tal and E L Schwartz

The journal is fully online--those with an institutional subscription can access it directly from the web: http://www.iop.org/Journals/ne

--
Joseph J. Atick
Rockefeller University
1230 York Avenue
New York, NY 10021
Tel: 212 327 7421 Fax: 212 327 7422

From bert at mbfys.kun.nl Tue Apr 29 07:07:49 1997
From: bert at mbfys.kun.nl (Bert Kappen)
Date: Tue, 29 Apr 1997 13:07:49 +0200
Subject: job opening
Message-ID: <199704291107.NAA16975@anthemius.mbfys.kun.nl>

Post-doc position available at SNN, University of Nijmegen, the Netherlands.

Background: The group consists of 8 researchers and PhD students and conducts theoretical and applied research on neural networks. The group is part of the Laboratory of Biophysics, which is involved in experimental brain science. The group will significantly expand in the coming years due to a recently received PIONIER research grant. Recent research of the group has focussed on the theoretical description of learning processes using the theory of stochastic processes and the design of efficient learning rules for Boltzmann machines using techniques from statistical mechanics; the extraction of rules from data and the integration of knowledge and data for modeling; and the design of robust methods for confidence estimation with neural networks. (See also http://www.mbfys.kun.nl/SNN)

Job specification: The tasks of the post-doc will be to conduct independent research in one of the above areas. In addition, it is expected that the post-doc will initiate novel research and will assist in the supervision of PhD students. The post-doc should have a PhD in physics, mathematics or computer science and a strong theoretical background in neural networks. The post-doc salary will be maximally Dfl. 7178 per month, depending on experience. The position is available for 3 years with a possible extension to 5 years.

Applications: Interested candidates should send a letter with a CV and list of publications before June 1, 1997 to dr. H.J. Kappen, Stichting Neurale Netwerken, University of Nijmegen, Geert Grooteplein 21, 6525 EZ Nijmegen. For information contact dr. H.J. Kappen, +31 24 3614241.
From wiskott at salk.edu Tue Apr 29 13:17:46 1997
From: wiskott at salk.edu (Laurenz Wiskott)
Date: Tue, 29 Apr 1997 10:17:46 -0700
Subject: technical report on Neural Map Formation available
Message-ID: <199704291717.KAA04638@katz.salk.edu>

The following technical report is now available at
http://www.cnl.salk.edu/~wiskott/Abstracts/WisSej97a.html
ftp://ftp.cnl.salk.edu/pub/wiskott/publications/WisSej97a-NeuralMapFormation-TRINC9701.ps.gz
Comments are welcome!
Laurenz Wiskott.

Objective Functions for Neural Map Formation
Laurenz Wiskott and Terrence Sejnowski

Abstract: Computational models of neural map formation can be considered on at least three different levels of abstraction: detailed models including neural activity dynamics, weight dynamics which abstract from the neural activity dynamics by an adiabatic approximation, and objective functions from which weight dynamics may be derived as gradient flows. In this paper we present an example of how an objective function can be derived from detailed non-linear neural dynamics. A systematic investigation reveals how different weight dynamics introduced previously can be derived from objective functions generated from a few prototypical terms. This includes dynamic link matching as a special case of neural map formation. We focus in particular on the role of coordinate transformations to derive different weight dynamics from the same objective function. Coordinate transformations are also important in deriving normalization rules from constraints. Several examples illustrate how objective functions can help in understanding, generating, and comparing different models of neural map formation. The techniques used in this analysis may also be useful in investigating other types of neural dynamics.

Dr. Laurenz Wiskott, CNL, Salk Institute, San Diego
wiskott at salk.edu, http://www.cnl.salk.edu/~wiskott/
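A minimal sketch of the gradient-flow relationship described in the abstract: a toy objective function whose negative gradient yields weight dynamics consisting of a Hebbian-like growth term and a decay term that plays the role of a normalization constraint. The particular quadratic-plus-quartic objective below is an assumption chosen for illustration and is not the objective function analyzed in the report.

import numpy as np

# Toy objective (illustrative assumption, not the report's model):
#   E(W) = -1/2 tr(W^T C W) + 1/4 sum_ij W_ij^4
# The first term rewards weights aligned with a symmetric correlation
# matrix C; the second acts as a soft normalization constraint.
def objective(W, C):
    return -0.5 * np.trace(W.T @ C @ W) + 0.25 * np.sum(W ** 4)

def weight_dynamics(W, C, dt=0.01, steps=2000):
    # Gradient flow dW/dt = -dE/dW: a Hebbian-like growth term C @ W plus
    # a cubic decay term derived from the quartic constraint, integrated
    # with a simple Euler step.
    for _ in range(steps):
        W = W + dt * (C @ W - W ** 3)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 6))
    C = (A + A.T) / 2                  # symmetric "correlation" matrix
    W0 = 0.1 * rng.uniform(size=(6, 6))
    W = weight_dynamics(W0, C)
    print("E(W0) =", objective(W0, C), " E(W) =", objective(W, C))  # E decreases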