From brause at informatik.uni-frankfurt.de Fri Oct 1 10:15:15 1993
From: brause at informatik.uni-frankfurt.de (brause@informatik.uni-frankfurt.de)
Date: Fri, 1 Oct 93 15:15:15 +0100
Subject: Tools for Art. Intelligence TAI-93, Advance Program
Message-ID: <9310011415.AA03441@ilos.rbi.informatik.uni-frankfurt.de>

                         ADVANCE PROGRAM
                         ===============

                5th International Conference on
               TOOLS WITH ARTIFICIAL INTELLIGENCE

        November 8-11, 1993, Cambridge (Boston), Massachusetts
               Sponsored by IEEE Computer Society

This conference encompasses the technical aspects of specifying,
designing, implementing, and evaluating tools with artificial
intelligence and tools for artificial intelligence applications.
The topics of interest include the following:

 o Machine Learning
 o AI and Software Engineering
 o Logic and Intelligent Database
 o AI Knowledge Base Architectures
 o Parallel Processing and Hardware Support
 o Artificial Neural Networks
 o AI Applications
 o Expert Systems and Environments
 o Natural Language Processing
 o AI Algorithms
 o Intelligent Multimedia Systems
 o AI and Object-Oriented Systems
 o Reasoning Under Uncertainty, Fuzzy Logic

Steering Committee
------------------
N. G. Bourbakis, SUNY-Binghamton
C. V. Ramamoorthy, University of California-Berkeley
H. E. Stephanou, Rensselaer Polytechnic Institute
W. T. Tsai, University of Minnesota
B. W. Wah, University of Illinois-Urbana

Treasurer
---------
N. G. Bourbakis, SUNY-Binghamton

Registration and Publication Chair
----------------------------------
C. Koutsougeras, Tulane University

Publicity Chairs
----------------
M. Perlin, Carnegie Mellon University
M. Aoyama, Fujitsu Limited
A. Delis, University of Maryland
J. Y. Juang, National Taiwan University
E. Kounalis, Universite de Nice

General Chair
-------------
J. Mylopoulos
Department of Computer Science
University of Toronto
6 King's College Road
Toronto, Ontario
Canada M5S 1A4
Tel: (416) 978-5180
jm at cs.toronto.ca

Program Chair
-------------
J. J. P. Tsai, University of Illinois-Chicago

Vice-Program Chairs
-------------------
R. Brause, J.W. Goethe-University
F. Golshani, Arizona State University
F. Gomez, University of Central Florida
J. Gu, University of Calgary
M. H. Ibrahim, EDS Corporation
M. Jarke, Technical University of Aachen
T. Lewis, Naval Postgraduate School
K. Nakamura, Fujitsu Limited
R. Reynolds, Wayne State University
P. Sheu, University of California-Irvine
B. Silver, GTE Labs
J. Yen, Texas A&M University
C. Yu, University of Illinois-Chicago

Local Arrangement Committee
---------------------------
Chair: J. Vittal, GTE Labs
J. Gattiker, SUNY Binghamton
S. Mertoguno, SUNY Binghamton
M. Mortazavi, SUNY Binghamton

---------------------------------------------------------------------

Monday, November 8

WORKSHOP: Intelligent Tools & Their Applications
------------------------------------------------

Invited speakers from industry, academia, and government will address
important issues in knowledge engineering, AI languages, and perception
systems.

Keynote Speaker: C. V. Ramamoorthy, University of California, Berkeley

Participants from NTT, NIS Labs, Japan; University of Connecticut;
Tokai University, Japan; Philips Labs, NY; Gensys, MA; GMG, PA;
AAAI Lab, NY; US Air Force; etc.

The Workshop features an exhibition of AI tools from several
industrial agencies.
-----------------------------------------------------------------------

TAI'93 ADVANCE PROGRAM
======================

Tuesday, November 9

9:00 AM - 10:20 AM  OPENING SESSION

Welcome and Introduction: John Mylopoulos, University of Toronto
Additional Greetings: Nikolaos G. Bourbakis, SUNY-Binghamton
Program Overview: Jeffrey J. P. Tsai, University of Illinois-Chicago

KEYNOTE ADDRESS: The Architecture of Intelligent Agents
  Raj Reddy, Carnegie Mellon University

10:40 AM - 12:20 PM  PARALLEL SESSIONS

SESSION A1  Artificial Neural Networks I
Session Chair: Nikolaos G. Bourbakis, SUNY-Binghamton

Transform coding by lateral inhibited neural nets,
  Rudiger W. Brause, J.W. Goethe University, Germany.
Data transformation for learning in feedforward neural nets,
  Cris Koutsougeras and R. Srikanth, Tulane University.
Logical and linear dependencies extraction from trained neural networks,
  Raqui Kane and Maurice Milgram, Universite Pierre et Marie Curie, France.
Neural-logic belief networks -- a tool for knowledge representation and reasoning,
  Boon Toh Low, University of Sydney, Australia.

SESSION B1  AI Algorithms I
Session Chair: Jun Gu, University of Calgary

The implementation of a first-order logic AGM belief revision system,
  Simon Dixon and Wayne Wobcke, University of Sydney, Australia.
Nogood recording for static and dynamic CSP,
  Thomas Schiex and Gerard Verfaillie, CERT-ONERA, France.
Constraint relaxation in distributed constraint satisfaction problems,
  Makoto Yokoo, NTT Communication Science Lab., Japan.
Genetic algorithms in industrial design,
  Jakob Axelsson, Linkoping University, Sweden.
  Stefan Menth, ABB Corporate Research Center, Switzerland.
  Klaus Semmler, ABB Kraftwerke AG, Switzerland.

SESSION C1  AI and Object-Oriented Systems I
Session Chair: Mamdouh H. Ibrahim, EDS Systems

Using the active object model to implement multi-agent systems,
  Eleri Cardozo, UNICAMP, Brazil.
  Jaime Simao Sichman and Yves Demazeau, LIFIA - Institut IMAG, France.
Principled animation of artificial intelligence algorithms,
  Mark Perlin, Carnegie Mellon University.
A method for translating CLP(R) rules into objects,
  Ta-Cheng Yu, Northwestern University.
  Jie-Yong Juang, National Taiwan University, Taiwan.
Object-oriented programming and frame-based knowledge representation,
  Christian Rathke, Universitat Stuttgart, Germany.

12:20 PM - 1:40 PM  LUNCH

1:40 PM - 2:50 PM  KEYNOTE ADDRESS: What is the Trend of Information Technology?
  Alan Salisbury, Learning Group International

3:10 PM - 4:40 PM  PARALLEL SESSIONS

SESSION A2  PANEL 1: Will Symbolic AI be Replaced by Neural Networks?
Moderator: Rudiger W. Brause, J. W. Goethe University
Panelists: Gail Carpenter, Center for Adaptive Syst., Boston University,
  Tomaso Poggio, Art. Int. Lab., MIT,
  Rudiger W. Brause, J.W. Goethe-University (TBA).

SESSION B2  AI and Software Engineering
Session Chair: Imran A. Zualkernan, Pennsylvania State University

An interactive truth maintenance system and its logical framework,
  Wei Li, Beijing University of Aeronautics and Astronautics, China.
Enhancing reuse of Smalltalk methods by conceptual clustering,
  R. Jetzelsperger, Software Kinetics Ltd., Canada.
  S. Matwin, University of Ottawa, Canada.
  F. Oppacher, Carleton University, Canada.
Using analogy to determine program modification based on specification changes,
  Jun-Jang Jeng and Betty H.C. Cheng, Michigan State University.
SESSION C2  AI Knowledge Base Architectures I
Session Chair: Robert Reynolds, Wayne State University

An intelligent tool for Unix performance tuning,
  Raul Velez, NCR Mexico, Mexico.
  Du Zhang and James Kho, California State University.
Task based modeling for problem solving strategies,
  P. Uvietta, J. Willamowski, and D. Ziebelin, INRIA -- Rhone-Alpes -- LIFIA, France.
Integrating constraints, composite objects and tasks in a knowledge representation system,
  Jerome Gensel, Pierre Girard, and Olivier Schmeltzer, INRIA -- Rhone-Alpes -- LIFIA, France.

5:00 PM - 6:30 PM  PARALLEL SESSIONS

SESSION A3  PANEL 2: Integration of AI, Database, and Software Engineering:
            Research Issues, Practical Problems
Moderators: Matthias Jarke, Informatik V, RWTH Aachen,
  and Robert Reynolds, Wayne State University
Panelists: Mike Brodie, GTE Labs,
  John Mylopoulos, University of Toronto,
  Matthias Jarke, Informatik V, RWTH Aachen,
  Robert Reynolds, Wayne State University

SESSION B3  Machine Learning I
Session Chair: Bernard Silver, GTE Labs

An empirical evaluation of beam search and pre- and post-pruning in BEXA,
  Hendrik Theron and Ian Cloete, University of Stellenbosch, South Africa.
Probabilistic induction of decision trees and disjunctive normal forms,
  Xiao-Jia M. Zhou and Tharam S. Dillon, La Trobe University, Australia.
The use of a machine learning toolbox on industrial applications,
  N.J. Puzey, T.J. Parsons and P.F. Sims, British Aerospace, Sowerby Research Center, United Kingdom.
  M. Green and T. Brookes, British Aerospace (Systems and Equipment) Ltd., United Kingdom.

SESSION C3  AI and Object-Oriented Systems II
Session Chair: Betty H.C. Cheng, Michigan State University

A combined object-oriented and logic programming tool for AI,
  Marcelo Jenkins and Daniel Chester, University of Delaware.
Knowledge representation and reasoning in a system integrating logic in objects,
  Ioannis Hatzilygeroudis, University of Patras, Greece.
On the semantics of an object-oriented logic programming language: SCKE,
  Jin Zhi, Academia Sinica, China.

----------------------------------------------------------------------------

Wednesday, November 10

9:00 AM - 10:10 AM  KEYNOTE ADDRESS: How Can Knowledge-Based Techniques Help Software Development?
  Sam DiNitto, USAF Rome Laboratory

10:30 AM - 12:10 PM  PARALLEL SESSIONS

SESSION A4  Reasoning Under Uncertainty, Fuzzy Logic
Session Chair: John Yen, Texas A&M University

A fast hill-climbing approach without an energy function for probabilistic reasoning,
  Eugene Santos Jr., Air Force Institute of Technology.
Real-time value-driven diagnosis,
  Bruce D'Ambrosio, Oregon State University.
Generalizing evidence theory to lattices to manage uncertainty,
  Sheng Guan, University of Texas.
Networked bubble propagation method as a polynomial-time hypothetical reasoning for computing quasi-optimal solution,
  Yukio Ohsawa and Mitsuru Ishizuka, University of Tokyo, Japan.

SESSION B4  Expert Systems and Environments
Session Chair: Philip Sheu, University of California, Irvine

Experimental evaluation of output-based partition testing for expert systems,
  I.A. Zualkernan and Yuan-Jing Lin, The Pennsylvania State University.
Illustration of a decision table tool for specifying and implementing knowledge based systems,
  Jan Vanthienen and Elke Dries, Katholieke Universiteit Leuven, Belgium.
Elastic version space: a knowledge acquisition method with background knowledge adjustment,
  Ken-ichi Hagiwara, Fuji Electric Corporate Research and Development Ltd., Japan.
A simple and efficient method for diagnosing equipment faults using equations representing the steady state,
  Hisashi Shimodaira, Nihon MECCS Co., Ltd., Japan.

SESSION C4  AI Algorithms II
Session Chair: Mark Perlin, Carnegie Mellon University

Scaling up version spaces by using domain specific search algorithms,
  William Sverdlik, Lawrence Technological University.
  Robert G. Reynolds, Wayne State University.
Self-adjusting real-time search: a summary of results,
  Shashi Shekhar and Babak Hamidzadeh, University of Minnesota.
Fast hypothetical reasoning using analogy on inference-path networks,
  Mitsuru Ishizuka, University of Tokyo, Japan.
  Akinori Abe, NTT Communication Science Lab., Japan.
Short term unit-commitment using genetic algorithms,
  Dipankar Dasgupta and Douglas R. McGregor, University of Strathclyde, United Kingdom.

12:10 PM - 1:40 PM  LUNCH

1:40 PM - 3:10 PM  PARALLEL SESSIONS

SESSION A5  PANEL 3: Quality of Heuristic Programs
Moderators: Wei-Tek Tsai, University of Minnesota,
  and Imran A. Zualkernan, Pennsylvania State University
Panelists: Scott French, IBM, Houston,
  Du Zhang, California State University,
  C. Mathews, IBM, Sommers,
  Wei Li, Beijing University of Aeronautics and Astronautics, China,
  Alun Preece, Concordia University,
  John Yen, Texas A&M University

SESSION B5  Natural Language Processing I
Session Chair: Fernando Gomez, University of Central Florida

A marker-passing algorithm for reference resolution,
  Seungho Cha and Dan Moldovan, University of Southern California.
CARAMEL: a step towards reflection in natural language understanding systems,
  Gerard Sabah and Xavier Briffault, LIMSI - CNRS, France.
Quixote as a Tool for Natural Language Processing,
  Satoshi Tojo, Hiroshi Tsuda, Hideki Yasukawa, Kazumasa Yokota, and Yukihiro Morita,
  Institute for New Generation Computer Technology (ICOT), Japan.

SESSION C5  Artificial Neural Networks II
Session Chair: Rudiger W. Brause, J. W. Goethe University

Management of graphical symbols in a CAD environment: a neural network approach,
  DerShung Yang and Larry A. Rendell, University of Illinois.
  Julie L. Webster and Doris S. Shaw, U.S. Army Construction Engineering Research Laboratories.
  James H. Garrett, Jr., Carnegie Mellon University.
On features used for handwritten character recognition in a neural network environment,
  Akhtar Jameel and Cris Koutsougeras, Tulane University.
An architecture of neural network for fuzzy teaching inputs,
  Hahn-Ming Lee and Weng-Tang Wang, National Taiwan Institute of Technology, Taiwan.

3:30 PM - 5:00 PM  PARALLEL SESSIONS

SESSION A6  PANEL 4: Real-Time and AI
Moderator: Shashi Shekhar, University of Minnesota
Panelists: Krithi Ramamritham, University of Mass. Amherst,
  Ashok Agrawala, Univ. of Maryland,
  Tom Dean, Brown University,
  R. Brooks, MIT

SESSION B6  Natural Language Processing II
Session Chair: Barrett Bryant, University of Alabama

A Language Model For Parsing Very Long Chinese Sentences,
  Hsin-Hsi Chen, National Taiwan University, Taiwan.
Interval constraint satisfaction tool INC++,
  Eero Hyvonen, Stefano De Pascale, and Aarno Lehtola, VTT -- Technical Research Center of Finland, Finland.
Meaning Description by SD-Forms and a Prototype of a Conversational-Text Retrieval,
  Eiji Kawaguchi and Marilyn Lee, Kyushu Institute of Technology, Japan.
  Koichi Nozaki, Nagasaki University, Japan.
SESSION C6  Logic and Intelligent Database I
Session Chair: Guojie Li, Academia Sinica, China

Beta-Prolog: an extended Prolog with boolean tables for combinatorial searching,
  Neng-fa Zhou and Isao Nagasawa, Kyushu Institute of Technology, Japan.
Evaluating logical queries by means of communicating processes,
  Du Zhang, California State University.
The semantic approach to developing multi-modal non-monotonic logics,
  Hua Shu, University of Karlskrona/Ronneby, Sweden.

5:30 PM - 6:30 PM  POSTER SESSION

7:00 PM - 10:00 PM  BANQUET
KEYNOTE ADDRESS: Knowledge-Based Computer-Aided Design
  Steve Szygenda, University of Texas at Austin

-------------------------------------------------------------------------

Thursday, November 11

9:00 AM - 10:10 AM  KEYNOTE ADDRESS: Integrating T&E in the Acquisition Process to Reduce Cost
  Raymond A. Paul, U.S. Army

10:30 AM - 12:00 PM  PLENARY PANEL: The Future Direction of AI Tools
Moderator: John Mylopoulos, University of Toronto
Panelists: Farokh B. Bastani, University of Houston,
  Nikolaos G. Bourbakis, SUNY-Binghamton,
  Mike Brodie, GTE Labs,
  Guojie Li, Academia Sinica, China,
  Matthias Jarke, Informatik V, RWTH Aachen,
  Raymond A. Paul, U.S. Army,
  C.V. Ramamoorthy, University of California at Berkeley

12:00 PM - 1:40 PM  LUNCH

1:40 PM - 3:00 PM  PARALLEL SESSIONS

SESSION A7  PANEL 5: Tools for Constraint Satisfaction
Moderator: Eugene C. Freuder, University of New Hampshire
Panelists: Simon Kasif, Johns Hopkins University,
  David Allen McAllester, MIT,
  Bart Selman, AT&T Bell Laboratories,
  Pascal Van Hentenryck, Brown University

SESSION B7  Artificial Neural Networks III
Session Chair: Cris Koutsougeras, Tulane University

A connectionist shell for developing expert decision support systems,
  Tong-Seng Quah, Chew-Lim Tan, and Hoon-Heng Teh, National University of Singapore, Singapore.
Neural network optimization tool based on predictive MDL principle for time series prediction,
  Mikko Lehtokangas, Jukka Saarinen, and Kimmo Kaski, Tampere University of Technology, Finland.
  Pentti Huuhtanen, University of Tampere, Finland.
Paper web quality profile analysis tool based on artificial neural network,
  Jukka Vanhala and Kimmo Kaski, Tampere University of Technology, Finland.
  Pekka Pakarinen, Technical Research Centre of Finland, Finland.

SESSION C7  Machine Learning II
Session Chair: Prasad Gavaskar, Motorola Inc.

The analysis of cost error parallel simulated annealing,
  Chul-Eui Hong, Il-Yong Chung and Hee-Il Ahn, Electronics and Telecommunications Research Institute, Korea.
Robust feature selection algorithms,
  Haleh Vafaie and Kenneth De Jong, George Mason University.
Knowledge Based Tools for Risk Assessment in Software Development and Reuse (invited paper),
  C.V. Ramamoorthy, University of California at Berkeley.

3:30 PM - 5:10 PM  PARALLEL SESSIONS

SESSION A8  Parallel Processing and Hardware Support
Session Chair: Farokh B. Bastani, University of Houston

PARTES: a partitioning scheme for parallel matching,
  Stefano Gallucci and Jack Tan, University of Houston.
A parallel search-and-learn technique for solving large scale TSP,
  C.P. Ravikumar, Indian Institute of Technology, India.
An operating context-sensitive approach to fault detection of mechatronic systems,
  Matti Kurki and Jarmo Hirvinen, Technical Research Center of Finland (VTT), Finland.

SESSION B8  AI Knowledge Base Architectures II
Session Chair: Kenneth De Jong, George Mason University

ANTISTROFEAS: a knowledge-based expert system for automatic visual VLSI reverse-engineering: the layout version,
  N. G. Bourbakis, SUNY-Binghamton.
New techniques for inference in assumption-based truth maintenance systems,
  C. Cayrol, M. Cayrol, O. Palmade, Universite Paul Sabatier, France.
A research for visual reasoning,
  Jianxiang Wang and Shenquan Liu, Academia Sinica, China.
Modeling autonomous agents in a knowledge based simulation environment,
  M. Zeller and R. Mock-Hecker, University of Ulm, Germany.

SESSION C8  Logic and Intelligent Database II
Session Chair: Du Zhang, California State University

HML - an approach for managing/refining knowledge discovered from database,
  Ning Zhong and Setsuo Ohsuga, The University of Tokyo, Japan.
Absorption by decomposition: a more powerful form of absorption,
  Sukhamay Kundu, Louisiana State University.
A tool for classifying office documents,
  Xiaolong Hao, Jason T.L. Wang, Michael P. Bieber, and Peter A. Ng, New Jersey Institute of Technology.
Sampling issues in generating rules from database,
  Changhwan Lee, University of Connecticut.

From ptodd at spo.rowland.org Sat Oct 2 18:05:02 1993
From: ptodd at spo.rowland.org (Peter M. Todd)
Date: Sat, 2 Oct 93 18:05:02 EDT
Subject: Deadline reminder: Music/arts special issue
Message-ID: <9310022205.AA09991@spo.rowland.org>

                **** PLEASE DISTRIBUTE ****

                   MUSIC AND CREATIVITY
  Call for Papers for a Special Issue of Connection Science --
             Reminder of approaching deadline

The October 15 deadline for submissions to the special issue of
Connection Science on network applications in music, arts, and
creativity is fast approaching.

We seek full-length papers on empirical or theoretical work in the
areas of modelling musical cognition; network composition,
choreography, or visual creation; integration of high- and low-level
musical or artistic knowledge; cross-modal integration (e.g. rhythm
and tonality); developmental models; cross-cultural models;
psychoacoustic models; relationships between music and language; and
connections to cognitive neuroscience. We also welcome shorter
research notes up to 4000 words in length covering ongoing research
projects.

For a complete call for papers and author guidelines, or to submit a
paper (five copies), contact the Special Issue Editors:

Niall Griffith
Department of Computer Science, University of Exeter,
Prince of Wales Road, Exeter, EX4 4PT, England.
E-mail: ngr at dcs.exeter.ac.uk

Peter M. Todd
The Rowland Institute for Science
100 Edwin H. Land Boulevard
Cambridge, MA 02142 USA
E-mail: ptodd at spo.rowland.org

From Christian.Lehmann at di.epfl.ch Mon Oct 4 05:39:26 1993
From: Christian.Lehmann at di.epfl.ch (Christian Lehmann)
Date: Mon, 4 Oct 93 10:39:26 +0100
Subject: research job
Message-ID: <9310040939.AA09484@lamisun.epfl.ch>

________________________________________________________________________
              * * * * * * * * * * * * *
________________________________________________________________________

University of Lausanne:
Graduate student position (Doctorant) available in October 1993
at the Institute of Physiology

Topic: Spatial and temporal processing in neural networks

The student will be integrated in an electrophysiology group working
with simultaneous single unit recordings. A tight collaboration with
the Swiss Federal School of Technology (EPFL) will provide the latest
technical facilities. It is expected that she/he will acquire in-depth
knowledge in the fast growing field of neural networks in order to
develop and test original ideas on information processing in the brain.
A good background in mathematics, physics, and biology, as well as
knowledge of at least one higher programming language, is recommended.
Our Ph.D. program extends over a duration of three years minimum. The
salary ranges between US$19,000 and 24,000/year.

Please send applications (curriculum vitae and letters of
recommendation from two academic referees) to, or get further
information from:

Dr. Alessandro Villa or Dr. Yves de Ribaupierre,
UNIL Institute of Physiology, Rue du Bugnon 7,
CH-1005 Lausanne, Switzerland.
Tel. ++41-21-313.2809   FAX ++41-21-313.2865
E-mail: villa at ulmed.unil.ch

________________________________________________________________________
              * * * * * * * * * * * * *
________________________________________________________________________

From jordan at psyche.mit.edu Mon Oct 4 15:36:46 1993
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Mon, 4 Oct 93 15:36:46 EDT
Subject: technical report
Message-ID:

The following paper is now available on the neuroprose archive as
"jordan.convergence.ps.Z".

        Convergence results for the EM approach to
           mixtures of experts architectures

              Michael I. Jordan    Lei Xu
      Department of Brain and Cognitive Sciences
         Massachusetts Institute of Technology

The Expectation-Maximization (EM) algorithm is an iterative approach
to maximum likelihood parameter estimation. Jordan and Jacobs (1993)
recently proposed an EM algorithm for the mixture of experts
architecture of Jacobs, Jordan, Nowlan and Hinton (1991) and the
hierarchical mixture of experts architecture of Jordan and Jacobs
(1992). They showed empirically that the EM algorithm for these
architectures yields significantly faster convergence than gradient
ascent. In the current paper we provide a theoretical analysis of this
algorithm. We show that the algorithm can be regarded as a variable
metric algorithm whose search direction has a positive projection on
the gradient of the log likelihood. We also analyze the convergence of
the algorithm and provide an explicit expression for the convergence
rate. In addition, we describe an acceleration technique that yields a
significant speedup in simulation experiments.
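As a rough illustration of the E- and M-steps the abstract refers to, here
is a minimal numpy sketch of EM for a one-level mixture of two linear
experts with a softmax gate. It is an editor's toy reconstruction from the
abstract alone, not the authors' code; the synthetic data, the ridge
constant, and the single gradient-ascent step used in place of the exact
IRLS gate update are all assumptions made for the illustration.

  import numpy as np

  rng = np.random.default_rng(0)

  # Synthetic data: two linear regimes, switched on the sign of the input.
  N, d, K = 200, 2, 2
  X = np.hstack([rng.normal(size=(N, 1)), np.ones((N, 1))])   # input + bias
  W_true = np.array([[2.0, 1.0], [-1.0, 0.5]])
  z = (X[:, 0] > 0).astype(int)
  y = (X @ W_true.T)[np.arange(N), z] + 0.1 * rng.normal(size=N)

  W = rng.normal(size=(K, d))   # expert weights (linear experts)
  V = np.zeros((K, d))          # gating weights (softmax gate)
  sigma2 = 1.0                  # shared Gaussian noise variance

  for it in range(100):
      # E-step: posterior responsibility h[n, k] of expert k for datum n.
      g = np.exp(X @ V.T); g /= g.sum(1, keepdims=True)   # gate priors
      lik = np.exp(-0.5 * (y[:, None] - X @ W.T) ** 2 / sigma2)
      h = g * lik                # shared Gaussian normalizer cancels below
      h /= h.sum(1, keepdims=True)
      # M-step for the experts: responsibility-weighted least squares.
      for k in range(K):
          Xw = X * h[:, k:k + 1]
          W[k] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(d), Xw.T @ y)
      sigma2 = (h * (y[:, None] - X @ W.T) ** 2).sum() / N
      # M-step for the gate, approximated here by one gradient-ascent step
      # on sum_nk h[n,k]*log g[n,k]; the exact M-step would use IRLS.
      V += (h - g).T @ X / N

  print(W)   # rows approach the two true regimes, up to permutation

The paper's point about search directions can be read off this loop: each
pass reuses the current posterior h to reweight a convex subproblem, so
the update climbs the log likelihood without an explicit step-size schedule.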
From mel at cns.caltech.edu Mon Oct 4 17:41:11 1993
From: mel at cns.caltech.edu (Bartlett Mel)
Date: Mon, 4 Oct 93 14:41:11 PDT
Subject: NIPS*93 program
Message-ID: <9310042141.AA16230@plato.klab.caltech.edu>

        NIPS*93 MEETING PROGRAM and REGISTRATION REMINDER

The 1993 Neural Information Processing Systems (NIPS*93) meeting is
the seventh meeting of an inter-disciplinary conference which brings
together neuroscientists, engineers, computer scientists, cognitive
scientists, physicists, and mathematicians interested in all aspects
of neural processing and computation. There will be an afternoon of
tutorial presentations (Nov. 29), two and a half days of regular
meeting sessions (Nov. 30 - Dec. 2), and two days of focused workshops
at a nearby ski area (Dec. 3-4).

An electronic copy of the 1993 NIPS registration brochure is available
in postscript format via anonymous ftp at helper.systems.caltech.edu
in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure or
other information, please send a request to nips93 at
systems.caltech.edu or to: NIPS Foundation, P.O. Box 60035, Pasadena,
CA 91116-6035.

EARLY REGISTRATION DEADLINE (for $100 discount): Oct. 30

_________________

NIPS*93 ORAL PRESENTATIONS PROGRAM

Tues. AM: Cognitive Science

 8:30  Invited Talk: Jeff Elman, UC San Diego: From Weared to Wore: A
       Connectionist Account of the History of the Past Tense
 9:00  Richard O. Duda, San Jose State Univ.: Connectionist Models for
       Auditory Scene Analysis
 9:20  Reza Shadmehr and Ferdinando A. Mussa-Ivaldi, MIT: Computational
       Elements of the Adaptive Controller of the Human Arm
 9:40  Catherine Stevens and Janet Wiles, University of Queensland: Tonal
       Music as a Componential Code: Learning Temporal Relationships
       Between and Within Pitch and Timing Components
10:00  Poster Spotlights:
       Thea B. Ghiselli-Crispa and Paul Munro, Univ. of Pittsburgh:
       Emergence of Global Structure from Local Associations
       Tony A. Plate, University of Toronto: Estimating Structural
       Similarity by Vector Dot Products of Holographic Reduced
       Representations
10:10  BREAK

Speech Recognition

10:40  Jose C. Principe, Hui-H. Hsu and Jyh-M. Kuo, Univ. of Florida:
       Analysis of Short Term Neural Memory Structures for Nonlinear
       Prediction
11:00  Eric I. Chang and Richard P. Lippmann, MIT Lincoln Laboratory:
       Figure of Merit Training for Detection and Spotting
11:20  Gregory J. Wolff, K. Venkatesh Prasad, David G. Stork and Marcus
       Hennecke, Ricoh California Research Center: Lipreading by Neural
       Networks: Visual Preprocessing, Learning and Sensory Integration
11:40  Poster Spotlights:
       Steve Renals, Mike Hochberg and Tony Robinson, Cambridge
       University: Learning Temporal Dependencies In Large-Scale
       Connectionist Speech Recognition
       Ying Zhao, John Makhoul, Richard Schwartz and George Zavaliagkos,
       BBN Systems and Technologies: Segmental Neural Net Optimization
       for Continuous Speech Recognition
11:50  Rod Goodman, Caltech: Posner Memorial Lecture

Tues. PM: Temporal Prediction and Control

 2:00  Invited Talk: Doyne Farmer, Prediction Co.: Time Series Analysis
       of Nonlinear and Chaotic Time Series: State Space Reconstruction
       and the Curse of Dimensionality
 2:30  Kenneth M. Buckland and Peter D. Lawrence, Univ. of British
       Columbia: Transition Point Dynamic Programming
 2:50  Gary W. Flake, Guo-Zhen Sun, Yee-Chun Lee and Hsing-Hen Chen,
       University of Maryland: Exploiting Chaos to Control The Future
 3:10  Satinder P. Singh, Andrew G. Barto, Roderic Grupen and Christopher
       Connolly, University of Massachusetts: Robust Reinforcement
       Learning in Motion Planning
 3:30  BREAK

Theoretical Analysis

 4:00  Scott Kirkpatrick, Naftali Tishby, Lidror Troyansky, The Hebrew
       Univ. of Jerusalem, and Geza Gyorgi, Eotvos Univ.: The Statistical
       Mechanics of K-Satisfaction
 4:20  Santosh S. Venkatesh, Changfeng Wang, Univ. of Pennsylvania, and
       Stephen Judd, Siemens Corporate Research: When To Stop: On Optimal
       Stopping And Effective Machine Size In Learning
 4:40  Wolfgang Maass, Technische Univ. Graz: Agnostic PAC-Learning
       Functions on Analog Neural Nets
 5:00  H.N. Mhaskar, California State Univ. and Charles A. Micchelli,
       IBM: How To Choose An Activation Function
 5:20  Poster Spotlights:
       Iris Ginzburg, Tel Aviv Univ. and Haim Sompolinsky, Hebrew Univ.:
       Correlation Functions on a Large Stochastic Neural Network
       Xin Wang, Qingnan Li and Edward K. Blum, USC: Asynchronous
       Dynamics of Continuous-Time Neural Networks
       Tal Grossman and Alan Lapedes, Los Alamos National Laboratory:
       Use of Bad Training Data for Better Predictions

Wed. AM: Learning Algorithms

 8:30  Invited Talk: Geoff Hinton, Univ. of Toronto: Using the Minimum
       Description Length Principle to Discover Factorial Codes
 9:00  Richard S. Zemel, Salk Institute, and G. Hinton, Univ. of Toronto:
       Developing Population Codes By Minimizing Description Length
 9:20  Sreerupa Das and Michael C. Mozer, University of Colorado: A Hybrid
       Gradient-Descent/Clustering Technique for Finite State Machine
       Induction
 9:40  Eric Saund, Xerox Palo Alto Research Center: Unsupervised Learning
       of Mixtures of Multiple Causes in Binary Data
10:00  BREAK
10:30  A. Uzi Levin and Todd Leen, Oregon Graduate Institute: Fast Pruning
       Using Principal Components
10:50  Christoph Bregler and Stephen Omohundro, ICSI: Surface Learning
       with Applications to Lip Reading
11:10  Melanie Mitchell, Santa Fe Inst. and John H. Holland, Univ.
       Michigan: When Will a Genetic Algorithm Outperform Hill Climbing
11:30  Oded Maron and Andrew W. Moore, MIT: Hoeffding Races: Accelerating
       Model Selection Search for Classification and Function
       Approximation
11:50  Poster Spotlights:
       Zoubin Ghahramani and Michael I. Jordan, MIT: Supervised Learning
       from Incomplete Data via an EM Approach
       Mats Osterberg and Reiner Lenz, Linkoping Univ.: Unsupervised
       Parallel Feature Extraction from First Principles
       Terence D. Sanger, LAC-USC Medical Center: Two Iterative
       Algorithms for Computing the Singular Value Decomposition from
       Input/Output Samples
       Patrice Y. Simard and Edi Sackinger, AT&T Bell Laboratories:
       Efficient Computation of Complex Distance Metrics Using
       Hierarchical Filtering

Wed. PM: Neuroscience

 2:00  Invited Talk: Eve Marder, Brandeis Univ.: Dynamic Modulation of
       Neurons and Networks
 2:30  Ojvind Bernander, Rodney Douglas and Christof Koch, Caltech:
       Amplifying and Linearizing Apical Synaptic Inputs to Cortical
       Pyramidal Cells
 2:50  Christiane Linster and David Marsan, ESPCI, Claudine Masson and
       Michel Kerzberg, CNRS: Odor Processing in the Bee: a Preliminary
       Study of the Role of Central Input to the Antennal Lobe
 3:10  M.G. Maltenfort, R. E. Druzinsky, C. J. Heckman and W. Z. Rymer,
       Northwestern Univ.: Lower Boundaries of Motoneuron
       Desynchronization Via Renshaw Interneurons
 3:30  BREAK

Visual Processing

 4:00  K. Obermayer, The Salk Institute, L. Kiorpes, NYU and Gary G.
       Blasdel, Harvard Medical School: Development of Orientation and
       Ocular Dominance Columns in Infant Macaques
 4:20  Yoshua Bengio, Yann Le Cun and Donnie Henderson, AT&T Bell Labs:
       Globally Trained Handwritten Word Recognizer using Spatial
       Representation, Spatial Displacement Neural Networks and Hidden
       Markov Models
 4:40  Trevor Darrell and A. P. Pentland, MIT: Classification of Hand
       Gestures using a View-based Distributed Representation
 5:00  Ko Sakai and Leif H. Finkel, Univ. of Pennsylvania: A Network
       Mechanism for the Determination of Shape-from-Texture
 5:20  Video Poster Spotlights (to be announced)

Thurs. AM: Implementations and Applications

 8:30  Invited Talk: Dan Seligson, Intel: A Radial Basis Function
       Classifier with On-chip Learning
 9:00  Michael A. Glover, Current Technology, Inc. and W. Thomas Miller
       III, University of New Hampshire: A Massively-Parallel SIMD
       Processor for Neural Network and Machine Vision Application
 9:20  Steven S. Watkins, Paul M. Chau, and Mark Plutowski, UCSD, Raoul
       Tawel and Bjorn Lambrigsten, JPL: A Hybrid Radial Basis Function
       Neurocomputer
 9:40  Gert Cauwenberghs, Caltech: A Learning Analog Neural Network Chip
       with Continuous-Time Recurrent Dynamics
10:00  BREAK
10:30  Invited Talk: Paul Refenes, University College London: Neural
       Network Applications in the Capital Markets
11:00  Jane Bromley, Isabelle Guyon, Yann Le Cun, Eduard Sackinger and
       Roopak Shah, AT&T Bell Laboratories: Signature Verification using
       a "Siamese" Time Delay Neural Network
11:20  John Platt and Ralph Wolf, Synaptics, Inc.: Postal Address Block
       Location Using a Convolutional Locator Network
11:40  Shumeet Baluja and Dean Pomerleau, Carnegie Mellon University:
       Non-Intrusive Gaze Tracking Using Artificial Neural Networks
12:00  Adjourn to Vail for Workshops

_____________________

NIPS*93 POSTER PROGRAM

Tues. PM Posters:

Cognitive Science (CS)
CS-1  Blasig: Using Backpropagation to Automatically Generate Symbolic
      Classification Rules
CS-2  Munro, Ghiselli-Crispa: Emergence of Global Structure from Local
      Associations
CS-3  Plate: Estimating structural similarity by vector dot products of
      Holographic Reduced Representations
CS-4  Shultz, Elman: Analyzing Cross Connected Networks
CS-5  Sperduti: Encoding of Labeled Graphs by Labeling RAAM

Speech Processing (SP)
SP-1  Farrell, Mammone: Speaker Recognition Using Neural Tree Networks
SP-2  Hirayama, Vatikiotis-Bateson, Kawato: Inverse Dynamics of Speech
      Motor Control
SP-3  Renals, Hochberg, Robinson: Learning Temporal Dependencies In
      Large-Scale Connectionist Speech Recognition
SP-4  Zhao, Makhoul, Schwartz, Zavaliagkos: Segmental Neural Net
      Optimization for Continuous Speech Recognition

Control, Navigation and Planning (CT)
CT-1  Atkeson: Using Local Trajectory Optimizers To Speed Up Global
      Optimization In Dynamic Programming
CT-2  Boyan, Littman: A Reinforcement Learning Scheme for Packet Routing
      Using a Network of Neural Networks
CT-3  Cohn: Queries and Exploration Using Optimal Experiment Design
CT-4  Duff, Barto: Monte Carlo Matrix Inversion and Reinforcement Learning
CT-5  Gullapalli, Barto: Convergence of Indirect Adaptive Asynchronous
      Dynamic Programming Algorithms
CT-6  Jaakkola, Jordan, Singh: Stochastic Convergence Of Iterative DP
      Algorithms
CT-7  Moore: The Parti-game Algorithm for Variable Resolution
      Reinforcement Learning in Multidimensional State-spaces
CT-8  Nowlan, Cacciatore: Mixtures of Controllers for Jump Linear and
      Non-linear Plants
CT-9  Wada, Koike, Vatikiotis-Bateson, Kawato: A Computational Model for
      Cursive Handwriting Based on the Minimization Principle

Learning Theory, Generalization and Complexity (LT)
LT-01  Cortes, Jackel, Solla, Vapnik, Denker: Learning Curves: Asymptotic
       Values and Rates of Convergence
LT-02  Fefferman: Recovering A Feed-Forward Net From Its Output
LT-03  Grossman, Lapedes: Use of Bad Training Data for Better Predictions
LT-04  Hassibi, Sayed, Kailath: H-inf Optimality Criteria for LMS and
       Backpropagation
LT-05  Hush, Horne: Bounds on the complexity of recurrent neural network
       implementations of finite state machines
LT-06  Ji: A Bound on Generalization Error Using
       Network-Parameter-Dependent Information and Its Applications
LT-07  Kowalczyk: Counting function theorem for multi-layer networks
LT-08  Mangasarian, Solodov: Backpropagation Convergence Via Deterministic
       Nonmonotone Perturbed Minimization
LT-09  Plutowski, White: Delete-1 Cross-Validation Estimates IMSE
LT-10  Schwarze, Hertz: Discontinuous Generalization in Large Committee
       Machines
LT-11  Shapiro, Prugel-Bennett: Non-Linear Statistical Analysis and
       Self-Organizing Competitive Networks
LT-12  Wahba: Structured Machine Learning for 'Soft' Classification, with
       Smoothing Spline ANOVA Models and Stacked Tuning, Testing and
       Evaluation
LT-13  Watanabe: Solvable models of artificial neural networks
LT-14  Wiklicky: On the Non-Existence of a Universal Learning Algorithm
       for Recurrent Neural Networks

Dynamics/Statistical Analysis (DS)
DS-1  Coolen, Penney, Sherrington: Coupled Dynamics of Fast Neurons and
      Slow Interactions
DS-2  Garzon, Botelho: Observability of neural network behavior
DS-3  Gerstner, van Hemmen: How to Describe Neuronal Activity: Spikes,
      Rates, or Assemblies?
DS-4  Ginzburg, Sompolinsky: Correlation Functions on a Large Stochastic
      Neural Network
DS-5  Leen, Orr: Momentum and Optimal Stochastic Search
DS-6  Ruppin, Meilijson: Optimal signalling in Attractor Neural Networks
DS-7  Wang, Li, Blum: Asynchronous Dynamics of Continuous-Time Neural
      Networks

Recurrent Networks (RN)
RN-1  Baird, Troyer, Eeckman: Grammatical Inference by Attentional Control
      of Synchronization in an Oscillating Elman Net
RN-2  Bengio, Frasconi: Credit Assignment through Time: Alternatives to
      Backpropagation
RN-3  Kolen: Fool's Gold: Extracting Finite State Machines From Recurrent
      Network Dynamics
RN-4  Movellan: A Reinforcement Algorithm to Learn Trajectories with
      Stochastic Neural Networks
RN-5  Saunders, Angeline, Pollack: Structural and behavioral evolution of
      recurrent networks

Applications (AP)
AP-01  Baldi, Brunak, Chauvin, Krogh: Hidden Markov Models in Molecular
       Biology: Parsing the Human Genome
AP-02  Eeckman, Buhmann, Lades: A Silicon Retina for Face Recognition
AP-03  Flann: A Hierarchical Approach to Recognizing On-line Cursive
       Handwriting
AP-04  Graf, Cosatto, Ting: Locating Address Blocks with a Neural Net
       System
AP-05  Karunanithi: Identifying Fault-Prone Software Modules Using
       Feed-Forward Networks: A Case Study
AP-06  Keymeulen: Comparison Training for a Rescheduling Problem in Neural
       Networks
AP-07  Lapedes, Steeg: Use of Adaptive Networks to Find Highly Predictable
       Protein Structure Classes
AP-08  Schraudolph, Dayan, Sejnowski: Using the TD(lambda) Algorithm to
       Learn an Evaluation Function for the Game of Go
AP-09  Smyth: Probabilistic Anomaly Detection in Dynamic Systems
AP-10  Tishby, Singer: Decoding Cursive Scripts
Wed. PM Posters:

Learning Algorithms (LA)
LA-01  Gold, Mjolsness: Clustering with a Domain-Specific Distance Metric
LA-02  Buhmann: Central and Pairwise Data Clustering by Competitive Neural
       Networks
LA-03  de Sa: Learning Classification without Labeled Data
LA-04  Ghahramani, Jordan: Supervised learning from incomplete data via an
       EM approach
LA-05  Tresp, Ahmad, Neuneier: Training Neural Networks with Deficient Data
LA-06  Osterberg, Lenz: Unsupervised Parallel Feature Extraction from
       First Principles
LA-07  Sanger: Two Iterative Algorithms for Computing the Singular Value
       Decomposition from Input/Output Samples
LA-08  Leen, Kambhatla: Fast Non-Linear Dimension Reduction
LA-09  Schaal, Atkeson: Assessing The Quality of Learned Local Models
LA-10  Simard, Sackinger: Efficient computation of complex distance
       metrics using hierarchical filtering
LA-11  Tishby, Ron, Singer: The Power of Amnesia
LA-12  Wettschereck, Dietterich: Locally Adaptive Nearest Neighbor
       Algorithms
LA-13  Liu: Robust Parameter Estimation and Model Selection for Neural
       Network Regression
LA-14  Wolpert: Bayesian Backpropagation Over Functions Rather Than Weights
LA-15  Thodberg: Bayesian Backprop in Action: Pruning, Ensembles, Error
       Bars and Application to Spectroscopy
LA-16  Dietterich, Jain, Lathrop: Dynamic Reposing for Drug Activity
       Prediction
LA-17  Ginzburg, Horn: Combined Neural Networks For Time Series Analysis
LA-18  Graf, Simard: Backpropagation without Multiplication
LA-19  Harget, Bostock: A Comparative Study of the Performance of a
       Modified Bumptree with Radial Basis Function Networks and the
       Standard Multi-Layer Perceptron
LA-20  Najafi, Cherkassky: Adaptive Knot Placement Based on Estimated
       Second Derivative of Regression Surface

Constructive/Pruning Algorithms (CP)
CP-1  Fritzke: Supervised Learning with Growing Cell Structures
CP-2  Hassibi, Stork, Wolff: Optimal Brain Surgeon: Extensions,
      streamlining and performance comparisons
CP-3  Kamimura: Generation of Internal Representations by
      alpha-transformation
CP-4  Leerink, Jabri: Constructive Learning Using Internal Representation
      Conflicts
CP-5  Utans: Learning in Compositional Hierarchies: Inducing the Structure
      of Objects from Data
CP-6  Watanabe: An Optimization Method of Layered Neural Networks Based on
      the Modified Information Criterion

Neuroscience (NS)
NS-01  Bialek, Ruderman: Statistics of Natural Images: Scaling in the Woods
NS-02  Boussard, Vibert: Dopaminergic neuromodulation brings a dynamical
       plasticity to the retina
NS-03  Doya, Selverston, Rowat: A Hodgkin-Huxley Type Neuron Model that
       Learns Slow Non-Spike Oscillation
NS-04  Guzik, Eaton: Directional Hearing by the Mauthner System
NS-05  Horiuchi, Bishofberger, Koch: Building an Analog VLSI Saccadic Eye
       Movement System
NS-06  Lewicki: Bayesian Modeling and Classification of Neural Signals
NS-07  Montague, Dayan, Sejnowski: Foraging in an Uncertain Environment
       Using Predictive Hebbian Learning
NS-08  Rosen, Rumelhart, Knudsen: A Connectionist Model of the Owl's Sound
       Localization System
NS-09  Sanger: Optimal Unsupervised Motor Learning Predicts the Internal
       Representation of Barn Owl Head Movements
NS-10  Siegal: An Analog VLSI Model Of Central Pattern Generation In The
       Medicinal Leech
NS-11  Usher, Stemmler, Koch: High spike rate variability as a consequence
       of network amplification of local fluctuations

Visual Processing (VP)
VP-1  Ahmad: Feature Densities are Required for Computing Feature
      Correspondences
VP-2  Buracas, Albright: Proposed function of MT neurons' receptive field
      surrounds: computing shapes of objects from velocity fields
VP-3  Geiger, Diamantaras: Resolving motion ambiguities
VP-4  Mjolsness: Two-Dimensional Object Localization by Coarse-to-fine
      Correlation Matching
VP-5  Sajda, Finkel: Dual Mechanisms for Neural Binding and Segmentation
      and Their Role in Cortical Integration
VP-6  Yuille, Smirnakis, Xu: Bayesian Self-Organization

Implementations (IM)
IM-01  Andreou, Edwards: VLSI Phase Locking Architecture for Feature
       Linking in Multiple Target Tracking Systems
IM-02  Coggins, Jabri: WATTLE: A Trainable Gain Analogue VLSI Neural
       Network
IM-03  Elfadel, Wyatt: The "Softmax" Nonlinearity: Derivation Using
       Statistical Mechanics and Useful Properties as a Multiterminal
       Analog Circuit Element
IM-04  Muller, Kocheisen, Gunzinger: High Performance Neural Net
       Simulation on a Multiprocessor System with "Intelligent"
       Communication
IM-05  Murray, Burr, Stork, et al.: Digital Boltzmann VLSI for constraint
       satisfaction and learning
IM-06  Niebur, Brettle: Efficient Simulation of Biological Neural Networks
       on Massively Parallel Supercomputers with Hypercube Architecture
IM-07  Oliveira, Sangiovanni-Vincentelli: Learning Complex Boolean
       Functions: Algorithms and Applications
IM-08  Shibata, Kotani, Yamashita et al.: Implementing Intelligence on
       Silicon Using Neuron-Like Functional MOS Transistors
IM-09  Watts: Event-Driven Simulation of Networks of Spiking Neurons

From wong at redhook.llnl.gov Mon Oct 4 17:17:22 1993
From: wong at redhook.llnl.gov (Issac Wong)
Date: Mon, 4 Oct 93 14:17:22 PDT
Subject: reprint available: nonlinear scale-space filtering
Message-ID: <9310042117.AA20057@redhook.llnl.gov>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/wong.scale_space.ps.Z

The file wong.scale_space.ps.Z is now available for copying from the
Neuroprose repository:

   A Nonlinear Scale-Space Filter by Physical Computation

                      Yiu-fai Wong
   Institute for Scientific Computing Research, L-426
         Lawrence Livermore National Laboratory
                  Livermore, CA 94551
            E-mail: wong at redhook.llnl.gov

Abstract --- Using the maximum entropy principle and statistical
mechanics, we derive and demonstrate a nonlinear scale-space filter.
For each datum in a signal, a neighborhood of weighted data is used
for scale-space clustering. The cluster center becomes the filter
output. The filter is governed by a single scale parameter which
dictates the spatial extent of nearby data used for clustering. This,
together with the local characteristics of the signal, determines the
scale parameter in the output space, which dictates the influences of
these data on the output. This filter is thus completely unsupervised
and data-driven. It provides a mechanism for a) removing noise, b)
preserving edges and c) improved smoothing of nonimpulsive noise. This
filter presents a new mechanism for detecting discontinuities,
differing from techniques based on local gradients and line processes.
We demonstrate the filter using real images. This work shows that
scale-space filtering, nonlinear filtering and scale-space clustering
are closely related and provides a framework within which further
image processing, image coding and computer vision problems can be
investigated.

This work has been presented at IEEE Conf. Computer Vision and Pattern
Recognition and IEEE Workshop on Neural Networks for Signal
Processing, 1993.
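The abstract does not spell the update out, so the following numpy sketch
is only one plausible reading of it (an editor's toy, not the published
filter): each output value is the converged center of a weighted
clustering over a Gaussian spatial neighborhood, with the value-space
scale estimated from the local weighted variance, as the text suggests.
The window width, iteration count, and test signal are arbitrary choices.

  import numpy as np

  def scale_space_filter(x, spatial_scale=3.0, n_iter=10):
      # Gaussian spatial weights: the single scale parameter of the filter.
      n = len(x)
      idx = np.arange(n)
      w_sp = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / spatial_scale) ** 2)
      y = x.astype(float).copy()
      for _ in range(n_iter):
          # Output-space scale from the local weighted variance of the data.
          var = (w_sp * (x[None, :] - y[:, None]) ** 2).sum(1) / w_sp.sum(1) + 1e-12
          # Cluster in value space: nearby data with similar values dominate.
          w = w_sp * np.exp(-0.5 * (x[None, :] - y[:, None]) ** 2 / var[:, None])
          y = (w * x[None, :]).sum(1) / w.sum(1)   # move to the cluster center
      return y

  # Noisy step edge: the noise is smoothed while the edge survives,
  # the edge-preserving behavior claimed in the abstract.
  t = np.linspace(0.0, 1.0, 200)
  sig = (t > 0.5).astype(float) + 0.05 * np.random.default_rng(1).normal(size=200)
  out = scale_space_filter(sig)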
--Isaac Wong from Lawrence Livermore Lab

From P.Refenes at cs.ucl.ac.uk Tue Oct 5 11:48:08 1993
From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk)
Date: Tue, 05 Oct 93 16:48:08 +0100
Subject: No subject
Message-ID:

                  CALL FOR PARTICIPATION

               1ST INTERNATIONAL WORKSHOP
         NEURAL NETWORKS IN THE CAPITAL MARKETS
     LONDON BUSINESS SCHOOL, NOVEMBER 18-19 1993

Neural networks have now been applied to a number of live systems in
the capital markets and in many cases have demonstrated better
performance than competing approaches. Now is the time to take a
critical look at their successes and limitations and to assess their
capabilities, research issues and future directions. This workshop
presents original papers which represent new and significant research,
development and applications in finance and investment and which cover
key areas of time series forecasting, multivariate dataset analysis,
classification and pattern recognition. Application areas include:

- Bond and Stock Valuation and Trading
- Univariate time series analysis
- Asset allocation and risk management
- Multivariate data analysis
- Foreign exchange rate prediction
- Classification and ranking
- Commodity price forecasting
- Pattern Recognition
- Portfolio management
- Hybrid systems

PROGRAMME COMMITTEE

Prof. N. Biggs - London School of Economics
Prof. D. Bunn - London Business School
Dr J. Moody - Oregon Graduate Institute
Dr A. Refenes - London Business School
Prof. M. Steiner - Universitaet Muenster
Dr A. Weigend - University of Colorado

VENUE

All sessions will be held at the London Business School which is
situated overlooking Regents Park and is a short walk from Baker
Street Underground Station. Further directions including a map will be
sent to all registrees.

PROVISIONAL PROGRAMME

November 18

 8.30  Registration
 9.00  Session 1: ADVANCES IN NEURAL NETWORKS
       Chair: D. Bunn, London Business School
       "Predicting the future and understanding the past"
         A. Weigend, University of Colorado
       "Non-linear behaviour of financial markets"
         A. Antoniou, Brunel University
       "Neural networks for financial engineering"
         A. Refenes, London Business School
       "Designing neural networks: Computational Learning Theory"
         N. Biggs, London School of Economics
12.00  Lunch
 2.00  Session 2: FOREIGN EXCHANGE: PREDICTION AND TRADING
       Chair: B. Davies, BZW
       Invited talk: "Learning and forecasting from hints"
         Y. Abu-Mostafa, California Institute of Technology
 5.00  Poster Session

November 19

 9.00  Session 3: BONDS AND DERIVATIVES
       Chair: P. Sondhi, CitiBank
       Invited talk: "Bond rating using neural networks"
         J. Moody, Oregon Graduate Institute
12.00  Lunch
 2.00  Session 4: EQUITIES
       Chair: S. Lamoine, Societe Generale
       Invited talk: "Neural networks as an alternative market model"
         M. Steiner, Universitaet Muenster
 5.00  Panel Session
 6.00  End of workshop

Submitted papers include:

- An investigation into the use of simulated artificial neural networks
  for forecasting the movement of foreign currency exchange
    Thomas M. Seiler & Jay E. Aronson, Nova University, Florida, USA
- Short-Term Forecasting of the USD/DM-Exchange Rate
    Dr. Thorsten Poddig, Universitaet Bamberg, Germany
- Estimation of implied volatilities using a neural network approach
    Fernando Gonzalez Miranda, University of Madrid, Spain
- Estimating Tax Inflows at a Public Institution
    D. E. Baestaens, W. M. van den Bergh & H. Vaudrey,
    Erasmus University Rotterdam, The Netherlands
- Genetic Programming for Strategy Acquisition in the Financial Markets
    Martin Andrews, Cambridge University, U.K.
- Bond Rating with Neural Networks
    J. Clay Singleton & Alvin J. Surkan, University of North Texas, USA
- Feedforward Neural Network and Canonical Correlation Models as
  Approximators with an Application to One-Year Ahead Forecasting
    Dr P. W. Otter, Faculty of Economics, Groningen, The Netherlands
- Dependency Analysis and Neural Network Modeling of Currency Exchange Rates
    Hong Pi, Lund University, Sweden
- Neural Networks in Financial Forecasting - How to develop Forecasting Models
    Prof. Dr. W. Gerke & S. Baun, Friedrich-Alexander-Universitaet,
    Nuernberg, Germany
- Results of a simple trading scheme based on an Artificial Neural
  Network on the Austrian Stock Market
    Christian Haefke, Institute for Advanced Studies, Vienna, Austria
- Topology-Preserving Neural Architectures and Multidimensional Scaling
  for Multivariate Data Analysis
    C. Serrano-Cinca, C. Mar-Molinero & B. Martin-Del-Brio,
    University of Zaragoza, Spain
- Important factors in Neural Networks - Forecasts of Gold Futures Prices
    Gary Grudnitski, San Diego State University, USA
- Economic Forecasting with Neural Nets: a Computational Learning Theory View
    Martin Anthony & Norman L. Biggs, London School of Economics
- Application of Neural Networks in Short-Term Stock Price Forecasting
    G. M. Papadourakis, G. Spanoudakis & A. Gotsias,
    Institute of Computer Science, Heraklion, Greece
- Artificial Neural Networks for Treasury Bills Rate Forecasting
    Leonardo Landi & Emilio Barucci, Universita di Firenze, Italy
- Forecasting the German Inflation Rate
    Wietske van Zwol & Albert Bolts, Tilburg Institute for Applied
    Economic Research, The Netherlands
- Predicting Gold Prices With Neural Networks: Multivariate vs
  Univariate Analysis
    M. Pacheco, M. Vellasco & A. Abelim,
    Pontificia Universidade Catolica do Rio de Janeiro, Brazil
- Is mean-reversion on stock indices a linear effect?
    D. C. Meir, R. Pfeifer, R. Demostene & C. Scheier,
    Universitaet Zuerich, Switzerland
- Neural Nets for Time Series Forecasting: Criteria for Performance with
  an Application in Gilt Futures Pricing
    Jason Kingdon, Department of Computer Science, University College London
- Financial Time Series Forecasting of Recurrent Artificial Neural
  Network Techniques
    Dr. Ah Chung Tsoi, Clarence N.W. Tan & Stephen Lawrence,
    Bond University, Australia
- Application of Sensitivity Analysis techniques to Neural Network Bond
  Forecasting
    U. Bilge, A.N. Refenes, C. Diamond & J. Shalbolt,
    Department of Computer Science, University College London
- Multivariate Prediction of financial time series using recent
  developments in chaos theory
    Andrew Edmonds, Prophecy Systems, England
- Hybrid Technologies for Far East Markets or "The Persistence of
  Memory" from Salvador Dali
    Lee Chay Tiam, Smith Barney, Singapore
- Nonlinearities in financial markets
    A. Antoniou & V. Bekos, Brunel University
- Using Neural Networks for modelling the French Stock Market
    A. Zapranis, Y. Bentz & A.N. Refenes, London Business School

HOTEL DETAILS

Convenient hotels include:

London Regents Park Hilton
18 Lodge Road, St. Johns Wood, London NW8 7JT
Tel: (071) 722 7722   Fax: (071) 483 2408

Sherlock Holmes Hotel
108 Baker Street, London NW1 1LB
Tel: (071) 486 6161   Fax: (071) 486 0884

The White House Hotel
Albany St, Regents Park, London NW1
Tel: (071) 387 1200   Fax: (071) 388 0091

REGISTRATION

To register, complete the form and mail to: Helen Tracey, London
Business School, Sussex Place, Regents Park, London NW1 4SA, UK.
Please note that places are limited and will be allocated on a
"first-come first-served" basis.
For additional information call: (44)-71-262-5050 ext. 3507,
Fax: (44)-71-724-7875

-----------------------------------------------------------

                    REGISTRATION FORM
First International Workshop on Neural Networks in the Capital Markets
                  November 18-19, 1993

Name:__________________________________________
Affiliation:___________________________________
Address:_______________________________________
_______________________________________________
Telephone:_____________________________________

Workshop Fee: 200 pounds sterling

Payment may be made by: (please tick)
[ ] Cheque payable to London Business School
[ ] VISA  [ ] Access  [ ] American Express

Card number: ___________________________________

-----------------------------------------------------------

From hwang at pierce.ee.washington.edu Tue Oct 5 18:34:57 1993
From: hwang at pierce.ee.washington.edu (Jenq-Neng Hwang)
Date: Tue, 5 Oct 93 15:34:57 PDT
Subject: No subject
Message-ID: <9310052234.AA18754@pierce.ee.washington.edu.>

Technical Report available from neuroprose:
26 single spaced pages (13 pages of text and 13 pages of figures)

    WHAT'S WRONG WITH A CASCADED CORRELATION LEARNING NETWORK:
           A PROJECTION PURSUIT LEARNING PERSPECTIVE

    Jenq-Neng Hwang, Shih-Shien You, Shyh-Rong Lay, I-Chang Jou

    Information Processing Laboratory
    Department of Electrical Engineering, FT-10,
    University of Washington, Seattle, WA 98195.

    Telecommunication Laboratories
    Ministry of Transportation and Communications
    P.O. Box 71, Chung-Li, Taiwan 320, R.O.C.

ABSTRACT:

Cascaded correlation is a popular supervised learning architecture
that dynamically grows layers of hidden neurons of fixed nonlinear
activations (e.g., sigmoids), so that the network topology (size,
depth) can be efficiently determined. Similar to a cascaded
correlation learning network (CCLN), a projection pursuit learning
network (PPLN) also dynamically grows the hidden neurons. Unlike a
CCLN, where cascaded connections from the existing hidden units to the
new candidate hidden unit are required to establish high-order
nonlinearity in approximating the residual error, a PPLN approximates
the high-order nonlinearity by using (more flexible) trainable
nonlinear nodal activation functions. Moreover, the maximum
correlation training criterion used in a CCLN results in a poorer
estimate of hidden weights when compared with the minimum mean squared
error criterion used in a PPLN. The CCLN is thus excluded for most
regression applications, where smooth interpolation of functional
values is highly desired. Furthermore, it is shown that the PPLN can
also achieve much better performance in solving the two-spiral
classification benchmark using a comparable number of weight
parameters.

================

To obtain copies of the postscript file, please use Jordan Pollack's
service (no hardcopies will be provided):

Example:
unix> ftp archive.cis.ohio-state.edu (or ftp 128.146.8.52)
Name (archive.cis.ohio-state.edu): anonymous
Password (archive.cis.ohio-state.edu:anonymous): <your email address>
ftp> cd pub/neuroprose
ftp> binary
ftp> get hwang.cclppl.ps.Z
ftp> quit
unix> uncompress hwang.cclppl.ps

Now print "hwang.cclppl.ps" as you would any other (postscript) file.
In case your printer has limited memory, you can divide this file into
two smaller files after the uncompress:

unix> head -42429 hwang.cclppl.ps > file1.ps
unix> tail +42430 hwang.cclppl.ps > file2.ps

Then print "file1.ps" and "file2.ps" separately.
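The abstract's criticism of the maximum-correlation criterion is easy to
reproduce on a toy problem. The numpy experiment below (an editor's
illustration constructed from the abstract's argument, not code from the
report) fits a single tanh hidden unit to a residual signal in two ways:
by maximizing the cascade-correlation covariance criterion, and by
minimizing the squared error left after the unit receives its optimal
linear output weight. The test function and the crude random-search
optimizer are arbitrary choices.

  import numpy as np

  rng = np.random.default_rng(0)
  x = np.linspace(-2.0, 2.0, 200)[:, None]
  X = np.hstack([x, np.ones_like(x)])      # input plus bias
  resid = np.sin(2.0 * x[:, 0])            # residual error to be absorbed

  def unit(w):                             # fixed sigmoid unit, as in a CCLN
      return np.tanh(X @ w)

  def corr_obj(w):                         # CCLN: maximize |covariance|
      h = unit(w)
      return -abs(((h - h.mean()) * (resid - resid.mean())).sum())

  def mse_obj(w):                          # PPLN-style: minimize residual MSE
      h = unit(w)
      a = (h @ resid) / (h @ h + 1e-12)    # optimal output weight for h
      return ((resid - a * h) ** 2).mean()

  def random_search(f, iters=3000):        # crude optimizer, enough for a toy
      best_w, best = None, np.inf
      for _ in range(iters):
          w = rng.normal(scale=3.0, size=2)
          v = f(w)
          if v < best:
              best, best_w = v, w
      return best_w

  w_corr, w_mse = random_search(corr_obj), random_search(mse_obj)
  print("residual MSE after correlation-trained unit:", mse_obj(w_corr))
  print("residual MSE after MSE-trained unit:        ", mse_obj(w_mse))

On runs like this the MSE-trained unit typically leaves a smaller
residual, which is the abstract's point: a unit that merely covaries
strongly with the residual need not be the unit that best cancels it.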
From thildebr at aragorn.csee.lehigh.edu Tue Oct 5 14:08:11 1993
From: thildebr at aragorn.csee.lehigh.edu (Thomas Hildebrandt)
Date: Tue, 5 Oct 93 14:08:11 -0400
Subject: NIPS Workshop: Selective Attention
Message-ID: <9310051808.AA05643@aragorn.csee.lehigh.edu>

I wish to call your attention to a workshop on selective attention
which I will be hosting at this year's NIPS conference.

===================================================================

                NIPS*93 Postconference Workshop

   Functional Models of Selective Attention and Context Dependency

                      December 4, 1993

Intended Audience: Those applying NNs to vision and speech analysis
and pattern recognition tasks, as well as computational
neurobiologists modelling attentional mechanisms.

Organizer: Thomas H. Hildebrandt
           thildebr at athos.eecs.lehigh.edu

ABSTRACT:

Classification based on trainable models still fails to achieve the
current ideal of human-like performance. One identifiable reason for
this failure is the disparity between the number of training examples
needed to achieve good performance (large) and the number of labelled
samples available for training (small). On certain tasks, humans are
able to generalize well when given only one exemplar. Clearly, a
different mechanism is at work.

In human behavior, there are numerous examples of selective attention
improving a person's recognition capabilities. Models using context or
selective attention seek to improve classification performance by
modifying the behavior of a classifier based on the current (and
possibly recent) input data. Because they treat learning and
contextual adaptation as two different processes, these models solve
the memory/plasticity dilemma by incorporating both. In other words,
they differ fundamentally from models which attempt to provide
contextual adaptation by allowing all the weights in the network to
continue evolving while the system is in operation.

Schedule                                        December 4, 1993
========                                        ================

7:30 - 7:35  Opening Remarks
7:35 - 8:00  Current Research in Selective Attention
             Thomas H. Hildebrandt, Lehigh University
8:00 - 8:30  Context-varying Preferences and Traits in a Class of
             Neural Networks
             Daniel S. Levine, University of Texas at Arlington
             Samuel J. Leven, For a New Social Science
8:30 - 9:00  ETS - A Formal Model of an Evolving Learning Machine
             L. Goldfarb, J. Abela, V. Kamat, University of New Brunswick
9:00 - 9:30  Recognizing Handwritten Digits Using a Selective
             Attention Mechanism
             Ethem Alpaydin, Bogazici University, Istanbul TURKEY
9:30 - 4:30  FREE TIME
4:30 - 5:00  Context and Selective Attention in the Capital Markets
             P. N. Refenes, London Business School
5:00 - 5:30  The Global Context-Sensitive Constraint Satisfaction
             Property in Adaptive Perceptual Pattern Recognition
             Jonathan A. Marshall, University of North Carolina
5:30 - 6:00  Neural Networks for Context Sensitive Representation of
             Synonymous and Homonymic Patterns
             Albert Nigrin, American University
6:00 - 6:30  Learn to Pay Attention, Young Network!
             Barak A. Pearlmutter, Siemens Corp. Research Ctr., Princeton NJ
6:30 - 6:35  Closing Remarks
7:00         Workshop Wrap-Up (common to all sessions)

=====================================================================

The topic to be covered differs from that recently announced by Ernst
Niebur and Bruno Olshausen, in that "functional" models are not
necessarily tied to neurophysiological structures. Thanks to the
Workshop Chair, Mike Mozer, the two workshops were scheduled on
different days, so that it is possible for interested parties to
attend both.
An electronic copy of the 1993 NIPS registration brochure is available in postscript format via anonymous ftp at helper.systems.caltech.edu in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure or other information, please send a request to nips93 at systems.caltech.edu or to: NIPS Foundation, P.O. Box 60035, Pasadena, CA 91116-6035. Feel free to contact me for more information on the workshop. Thomas H. Hildebrandt Electrical Engineering & Computer Science Lehigh University Bethlehem, PA 18015 Work: (215) 758-4063 FAX: (215) 758-6279 thildebr at athos.eecs.lehigh.edu  From brunak at cbs.dth.dk Tue Oct 5 07:04:07 1993 From: brunak at cbs.dth.dk (Soren Brunak) Date: Tue, 5 Oct 93 12:04:07 +0100 Subject: Positions in BIOCOMPUTING Message-ID: Positions in BIOCOMPUTING at the Danish Center for Biological Sequence Analysis, Department of Physical Chemistry, The Technical University of Denmark (Lyngby). A number of pre- and post-doctoral positions are available at the newly formed Center for Biological Sequence Analysis. They have a duration of one, two and three years, starting late 1993 or early 1994. The center is funded by a five-year grant from the Danish National Research Foundation and conducts an active research program in biomolecular sequence and structure analysis with emphasis on novel adaptive computational strategies. The Technical University of Denmark is situated in Lyngby just outside Copenhagen. The center offers employment to researchers with a background primarily in the natural sciences, molecular biology, genetics, chemistry and physics. We seek individuals with additional competence and interest in areas of computer science, but not with this area as the main subject of expertise. Priority will be given to younger scientists with experience in some of the following areas (in alphabetical order): experimental molecular biology, information theory and statistics, mathematical analysis, neural computation, protein folding, physics of computation and complex systems, and sequence analysis. In a wide range of projects the center collaborates with national and foreign groups using novel adaptive computational methods many of which have received attention in the biocomputing context only recently. The center is characterized by the use of new approaches both regarding the algorithmic aspect of the simulation methods as well as the use of advanced hardware. A wide range of parallel and cluster computing environments is available locally at the center; a nearby supercomputer center offers easy access to CRAY and Connection Machine facilities. Among the research topics are pre--mRNA splicing, recognition of vertebrate promoters, RNA folding, protein structure prediction, proteolytic processing of polyproteins, signal peptide recognition, phylogenies, global multiple sequence alignment and dedicated sequence analysis hardware. The results are evaluated through intensive exchange with experimentalists. For further information, feel free to contact us at the address below. Applicants should send their resumes to Soren Brunak Center director Center for Biological Sequence Analysis Department of Physical Chemistry The Technical University of Denmark Building 206 DK-2800 Lyngby Denmark Tel: +45-42882222, ext. 
2477 Fax: +45-45934808 Email: brunak at cbs.dth.dk

From B.DASGUPTA at fs3.mbs.ac.uk Wed Oct 6 11:09:39 1993
From: B.DASGUPTA at fs3.mbs.ac.uk (BHASKAR DASGUPTA ALIAS BD)
Date: 6 Oct 93 11:09:39 BST
Subject: references neural networks and time series
Message-ID: <55A134A2613@fs3.mbs.ac.uk>

Thanks to all who replied to my request for references on applications of neural networks to time series forecasting, and apologies for the delay. I have now done a preliminary compilation; it contains more than 100 references. I frankly did not expect that many! Well, anyway, I do not have access to an FTP site, so if anyone requires a copy of this file, please email me and I shall send the references immediately.

Cheers and Thanks
===================================================================
Bhaskar Dasgupta              |\    /|  //====\   /=======
Manchester Business School    ||\\ //|| ||    ||  ||
Booth Street West,            || \/ || ||    ||  ||
Manchester M15 6PB,           ||    || ||=====<<  \======\
UK                            ||    || ||    ||         ||
Phone::+61-275-6547           ||    || ||    ||         ||
Fax::+67-273-7732.            ||    || ||======/  =======/
===================================================================
Chaos is the rule of Nature
Order is the dream of Man
===================================================================

From reza at ai.mit.edu Wed Oct 6 09:21:03 1993
From: reza at ai.mit.edu (Reza Shadmehr)
Date: Wed, 6 Oct 93 09:21:03 EDT
Subject: Tech Report from CBCL at MIT
Message-ID: <9310061321.AA01574@corpus-callosum.ai.mit.edu>

The following technical report from the Center for Biological and Computational Learning at M.I.T. is now available via anonymous ftp.
-------------
:CBCL Paper #78/AI Memo #1405
:author Amnon Shashua (amnon at ai.mit.edu)
:title On Geometric and Algebraic Aspects of 3D Affine and Projective Structures from Perspective 2D Views
:date July 1993
:pages 14

Part I of this paper investigates the differences --- conceptually and algorithmically --- between affine and projective frameworks for the tasks of visual recognition and reconstruction from perspective views. It is shown that an affine invariant exists between any view and a fixed view chosen as a reference view. This implies that for tasks for which a reference view can be chosen, such as in alignment schemes for visual recognition, projective invariants are not really necessary. The projective extension is then derived, showing that it is necessary only for tasks for which a reference view is not available --- such as happens when updating scene structure from a moving stereo rig. In Part II we use the affine invariant to derive new algebraic connections between perspective views. It is shown that three perspective views of an object are connected by certain algebraic functions of image coordinates alone (no structure or camera geometry needs to be involved). In the general case, three views satisfy a trilinear function of image coordinates. In the case where two of the views are orthographic and the third is perspective, the function reduces to a bilinear form. In the case where all three views are orthographic, the function reduces further to a linear form (the ``linear combination of views'' of \cite{Ullman-Basri89}). These functions are shown to be useful for recognition, among other applications.
--------------
How to get a copy of this report: The files are in compressed postscript format and are named by their AI memo number. They are put in a directory named after the year in which the paper was written.
Here is the procedure for ftp-ing:

unix> ftp publications.ai.mit.edu (128.52.32.22, log in as anonymous)
ftp> cd ai-publications/1993
ftp> binary
ftp> get AIM-number.ps.Z
ftp> quit
unix> zcat AIM-number.ps.Z | lpr

Best wishes,
Reza Shadmehr
Center for Biological and Computational Learning
M. I. T.
Cambridge, MA 02139

From kolen-j at cis.ohio-state.edu Wed Oct 6 12:01:50 1993
From: kolen-j at cis.ohio-state.edu (john kolen)
Date: Wed, 6 Oct 93 12:01:50 -0400
Subject: Reprint Announcement
Message-ID: <9310061601.AA05113@pons.cis.ohio-state.edu>

This is an announcement of a newly available paper in neuroprose:

RECURRENT NETWORKS: STATE MACHINES OR ITERATED FUNCTION SYSTEMS?

John F. Kolen
Laboratory for AI Research
Department of Computer and Information Science
The Ohio State University
Columbus, OH 43210
kolen-j at cis.ohio-state.edu

Feedforward neural networks process information by performing fixed transformations from one representation space to another. Recurrent networks, on the other hand, process information quite differently. To understand recurrent networks one must confront the notion of state, since recurrent networks perform iterated transformations on state representations. Many researchers have recognized this difference and have suggested parallels between recurrent networks and various automata. First, I will demonstrate how the common notion of deterministic information processing does not necessarily hold for deterministic recurrent neural networks whose dynamics are sensitive to initial conditions. Second, I will link the mathematics of recurrent neural network models with that of iterated function systems. This link points to model-independent constraints on recurrent network state dynamics that explain universal behaviors of recurrent networks, such as internal state clustering.

This paper will appear in The Proceedings of the 1993 Connectionist Models Summer School.

************************ How to obtain a copy ************************

Via Anonymous FTP:
unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: (type your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get kolen.rnifs.ps.Z
ftp> quit
unix> uncompress kolen.rnifs.ps.Z
unix> lpr kolen.rnifs.ps (or what you normally do to print PostScript)

From kruschke at pallas.psych.indiana.edu Wed Oct 6 12:13:58 1993
From: kruschke at pallas.psych.indiana.edu (John Kruschke)
Date: Wed, 6 Oct 1993 11:13:58 -0500 (EST)
Subject: job opening at Indiana: Cognitive Science/Psychology

From smieja at nathan.gmd.de Wed Oct 6 12:35:54 1993
From: smieja at nathan.gmd.de (Frank Smieja)
Date: Wed, 6 Oct 1993 17:35:54 +0100
Subject: TR announcement
Message-ID: <199310061635.AA14313@trillian.gmd.de>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/smieja.pandemonium.ps.Z

The file smieja.pandemonium.ps.Z is now available for copying from the Neuroprose repository:

The Pandemonium System of Reflective Agents (17 pages)
by Frank Smieja
GMD, Bonn, Germany

ABSTRACT: The Pandemonium system of reflective MINOS agents solves problems by automatic dynamic modularization of the input space. The agents contain feed-forward neural networks which adapt using the back-propagation algorithm. We demonstrate the performance of Pandemonium on various categories of problems.
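To make the iterated-function-system reading of the Kolen abstract above concrete, here is a small illustrative sketch (mine, not code from the paper): a one-unit recurrent network driven by a random binary input symbol is exactly an IFS with two maps, and the states it visits cluster on a Cantor-like subset of the interval rather than filling it. All constants are arbitrary.

import numpy as np

# A one-neuron recurrent net h <- tanh(w*h + u*x + b) driven by a random
# binary symbol x is an IFS with two maps f0(h) and f1(h); iterating it
# and recording the visited states shows the clustered, fractal-like
# state sets the paper analyzes.
w, u, b = 0.4, 2.0, 0.0
f = lambda h, x: np.tanh(w * h + u * x + b)

rng = np.random.default_rng(2)
h, states = 0.0, []
for _ in range(10000):
    h = f(h, rng.integers(0, 2))   # pick one of the two maps at random
    states.append(h)

hist, _ = np.histogram(states, bins=50, range=(-1.0, 1.0))
print(hist)   # many empty bins: states cluster rather than fill the interval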
The test categories include learning continuous functions with discontinuities, separating two spirals, learning the parity function, and optical character recognition. It is shown how strongly the advantages gained from using a modularization technique depend on the nature of the problem. The superiority of the Pandemonium method over a single net on the first two test categories is contrasted with its limited advantages for the second two categories. In the first case the system converges more quickly with modularization and is seen to lead to simpler solutions. For the second case the problem is not significantly simplified through flat decomposition of the input space, although convergence is still faster.

-Frank Smieja
Gesellschaft fuer Mathematik und Datenverarbeitung (GMD)
GMD-FIT.KI.AS, Schloss Birlinghoven, 53757 St Augustin, Germany.
Tel: +49 2241-142214   email: smieja at gmd.de

From Announce at PARK.BU.EDU Thu Oct 7 16:09:31 1993
From: Announce at PARK.BU.EDU (Announce@PARK.BU.EDU)
Date: Thu, 7 Oct 93 16:09:31 -0400
Subject: Graduate study in Cognitive and Neural Systems at Boston University
Message-ID: <9310072009.AA04230@retina.bu.edu>

(please post)

***********************************************
*                                             *
*               DEPARTMENT OF                 *
*     COGNITIVE AND NEURAL SYSTEMS (CNS)      *
*            AT BOSTON UNIVERSITY             *
*                                             *
***********************************************

Stephen Grossberg, Chairman
Gail A. Carpenter, Director of Graduate Studies

The Boston University Department of Cognitive and Neural Systems offers comprehensive advanced training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems.

Applications for Fall 1994 admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax:

Department of Cognitive & Neural Systems
Boston University
111 Cummington Street, Room 240
Boston, MA 02215
617/353-9481 (phone)
617/353-7755 (fax)

or send via email your full name and mailing address to: rll at cns.bu.edu

Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis.

Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems.
Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self-organization; associative learning and long-term memory; computational neuroscience; nerve cell biophysics; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; active vision; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees.

The CNS Department embodies a number of unique features. It has developed a curriculum that consists of twelve interdisciplinary graduate courses, each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Nine additional advanced courses, including research seminars, are also offered. Each course is typically taught once a week in the evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as Biology, Computer Science, Engineering, Mathematics, and Psychology, in addition to courses in the CNS curriculum.

The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems (CAS). Students interested in neural network hardware work with researchers in CNS, the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the Department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines.

1993-94 CAS MEMBERS and CNS FACULTY:
Jacob Beck           Daniel H. Bullock     Gail A. Carpenter
Chan-Sup Chung       Michael A. Cohen      H. Steven Colburn
Paolo Gaudiano       Stephen Grossberg     Frank H. Guenther
Thomas G. Kincaid    Nancy Kopell          Ennio Mingolla
Heiko Neumann        Alan Peters           Adam Reeves
Eric L. Schwartz     Allen Waxman          Jeremy Wolfe

From tajchman at ICSI.Berkeley.EDU Thu Oct 7 20:32:57 1993
From: tajchman at ICSI.Berkeley.EDU (Gary Tajchman)
Date: Thu, 7 Oct 93 17:32:57 PDT
Subject: New Book Announcement
Message-ID: <9310080032.AA05515@icsib28.ICSI.Berkeley.EDU>

I thought this might be of interest to folks on connectionists. Kluwer Academic has just published a book by H. Bourlard and N. Morgan called ``CONNECTIONIST SPEECH RECOGNITION: A Hybrid Approach''.
In the words of the back cover description, this book ``describes the theory and implementation of a method to incorporate neural network approaches into state-of-the-art continuous speech recognition systems based on Hidden Markov Models (HMMs) to improve their performance.'' The book is based on work done in a 5-year trans-Atlantic collaboration between Bourlard and Morgan, and puts together in one place what is otherwise scattered over many conference and journal papers.

If you would like more information, please send email to N. Morgan at morgan at icsi.berkeley.edu, or reply to this message.
__________________________________________________________________________________
Gary Tajchman                              tajchman at icsi.berkeley.edu
International Computer Science Institute   TEL: (510)642-4274
1947 Center St., Suite 600                 FAX: (510)643-7684
Berkeley, CA

From rmeir at ee.technion.ac.il Fri Oct 8 08:03:01 1993
From: rmeir at ee.technion.ac.il (Ron Meir)
Date: Fri, 8 Oct 93 10:03:01 -0200
Subject: Paper announcement
Message-ID: <9310081203.AA26234@ee.technion.ac.il>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/meir.compress.ps.Z
FTP-filename: /pub/neuroprose/meir.learn.ps.Z

**PLEASE DO NOT FORWARD TO OTHER GROUPS**

The following two papers are now available in the neuroprose directory. The papers are 10 and 11 pages long, respectively. Sorry, but no hardcopies are available.

Data Compression and Prediction in Neural Networks

Ronny Meir                      Jose F. Fontanari
Department of EE                Department of Physics
Technion                        University of Sao Paulo
Haifa 32000, Israel             13560 Sao Carlos, Brazil
rmeir at ee.technion.ac.il     fontanari at uspfsc.ifqsc.usp.ansp.br

We study the relationship between data compression and prediction in single-layer neural networks of limited complexity. Quantifying the intuitive notion of Occam's razor using Rissanen's minimum complexity framework, we investigate the model-selection criterion advocated by this principle. While we find that the criterion works well for large sample sizes (as it must for consistency), the behavior for finite sample sizes is rather complex, depending intricately on the relationship between the complexity of the hypothesis space and the target space. We also show that the limited networks studied perform efficient data compression, even in the error-full regime.
------------------------------------------------------------------------------
Learning Algorithms, Input Distributions and Generalization

Ronny Meir
Department of Electrical Engineering
Technion
Haifa 32000, Israel
rmeir at ee.technion.ac.il

We study the interaction between input distributions, learning algorithms and finite sample sizes in the case of learning classification tasks. Focusing on the case of normal input distributions, we use statistical mechanics techniques to calculate the empirical and expected (or generalization) errors for several well-known algorithms learning the weights of a single-layer perceptron. In the case of spherically symmetric distributions within each class we find that the simple Hebb algorithm is optimal. Moreover, we show that in the regime where the overlap between the classes is large, algorithms with low empirical error do worse in terms of generalization, a phenomenon known as over-training.
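For readers unfamiliar with the Rissanen framework cited in the first abstract above, the criterion in its standard two-part textbook form (a generic statement, not a formula quoted from the paper) is

\[ \mathrm{DL}(D,\theta) \;=\; -\log_2 P(D \mid \theta) \;+\; L(\theta), \qquad L(\theta) \approx \tfrac{k}{2}\log_2 n , \]

where the first term is the code length of the data under the model, L(\theta) is the code length of the model itself (roughly (k/2) log2 n bits for a smooth model with k parameters and n samples), and model selection picks the hypothesis minimizing the total description length.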
--------------------------------------------------------------------------
To obtain copies:

ftp cheops.cis.ohio-state.edu
login: anonymous
password: (your email address)
cd pub/neuroprose
binary
get meir.compress.ps.Z
get meir.learn.ps.Z
quit

Then at your system:
uncompress meir.*.ps.Z
lpr meir.*.ps

From smieja at nathan.gmd.de Mon Oct 11 09:32:55 1993
From: smieja at nathan.gmd.de (Frank Smieja)
Date: Mon, 11 Oct 1993 14:32:55 +0100
Subject: TR Announcement: exploration
Message-ID: <199310111332.AA20912@trillian.gmd.de>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/beyer.explore.ps.Z

The file beyer.explore.ps.Z is now available for copying from the Neuroprose repository:

Learning from Examples using Reflective Exploration (17 pages)
U. Beyer and F. Smieja
GMD (Germany)

ABSTRACT: An important property of models constructed through the process of learning from examples is the manipulation and control of the data itself. When the data is actively selected or generated, the process is known as {\it exploration}. Reflection about the internal model allows exploration to be more than just a random choice in the input space. In this paper we identify two basic forms of reflective exploration: density-based and error-based. We demonstrate the applicability of exploration processes, and the advantages of using them in open systems, on the task of learning a 2-dimensional continuous function.

- Frank Smieja
Gesellschaft fuer Mathematik und Datenverarbeitung (GMD)
GMD-FIT.KI.AS, Schloss Birlinghoven, 53757 St Augustin, Germany.
Tel: +49 2241-142214   email: smieja at gmd.de

From seifert at psych.lsa.umich.edu Mon Oct 11 11:53:17 1993
From: seifert at psych.lsa.umich.edu (Colleen Seifert)
Date: Mon, 11 Oct 93 10:53:17 -0500
Subject: position
In-Reply-To: Your message of Wed, 8 Sep 93 11:10:27 -0500

Position in Cognitive Psychology
University of Michigan

The University of Michigan Department of Psychology invites applications for a tenure-track position in the area of Cognition, beginning September 1, 1994. The appointment will most likely be made at the Assistant Professor level, but it may be possible at other ranks. We seek candidates with primary interests and technical skills in cognitive psychology. Our primary goal is to hire an outstanding cognitive psychologist, and thus we will look at candidates with any specific research interest. We have a preference for candidates interested in higher mental processes or for candidates with computational modeling skills (including connectionism) or an interest in cognitive neuroscience. Responsibilities include graduate and undergraduate teaching, as well as research and research supervision. Send curriculum vitae, letters of reference, copies of recent publications, and a statement of research and teaching interests no later than January 7, 1994 to: Gary Olson, Chair, Cognitive Processes Search Committee, Department of Psychology, University of Michigan, 330 Packard Road, Ann Arbor, Michigan 48104. The University of Michigan is an Equal Opportunity/Affirmative Action employer.
From lpratt at slate.Mines.Colorado.EDU Mon Oct 11 13:46:41 1993 From: lpratt at slate.Mines.Colorado.EDU (Lorien Pratt) Date: Mon, 11 Oct 1993 11:46:41 -0600 Subject: Motif version of hyperplane animator available Message-ID: <9310111746.AA53873@slate.Mines.Colorado.EDU> ----------------------------------- Announcing the availability of an X-based neural network hyperplane animator Version 1.01 October 10, 1993 ----------------------------------- Lori Pratt and Steve Nicodemus Department of Mathematical and Computer Sciences Colorado School of Mines Golden, CO 80401 USA lpratt at mines.colorado.edu Understanding neural network behavior is an important goal of many research efforts. Although several projects have sought to translate neural network weights into symbolic representations, an alternative approach is to understand trained networks graphically. Many researchers have used a display of hyperplanes defined by the weights in a single layer of a back-propagation neural network. In contrast to some network visualization schemes, this approach shows both the training data and the network parameters that attempt to fit those data. At NIPS 1990, Paul Munro presented a video which demonstrated the dynamics of hyperplanes as a network changes during learning. The program displayed ran on a Stardent 4000 graphics engine, and was implemented at Siemens. At NIPS 1991, we demonstrated an X-based hyperplane animator, similar in appearance to Paul Munro's, but with extensions to allow for interaction during training. The user may speed up, slow down, or freeze animation, and set various other parameters. Also, since it runs under X, this program should be more generally usable. An openwindows version of this program was made available to the public domain in 1992. This announcement describes a version of the hyperplane animator that has been rewritten for Motif. It was developed on an IBM RS/6000 platform, and so is written in ANSI C. The remainder of this message contains more details of the hyperplane animator and ftp information. ------------------------------------------------------------------------------ 1. What is the Hyperplane Animator? The Hyperplane Animator is a program that allows easy graphical display of Back-Propagation training data and weights in a Back-Propagation neural network [Rumelhart, 1987]. It implements only some of the functionality that we eventually hope to include. In particular, it only animates hyperplanes representing input-to-hidden weights. Back-Propagation neural networks consist of processing nodes interconnected by adjustable, or ``weighted'' connections. Neural network learning consists of adjusting weights in response to a set of training data. The weights w1,w2,...wn on the connections into any one node can be viewed as the coefficients in the equation of an (n-1)-dimensional plane. Each non-input node in the neural net is thus associated with its own plane. These hyperplanes are graphically portrayed by the hyperplane animator. On the same graph it also shows the training data. 2. Why use it? As learning progresses and the weights in a neural net alter, hyperplane positions move. At the end of the training they are in positions that roughly divide training data into partitions, each of which contains only one class of data. Observations of hyperplane movement can yield valuable insights into neural network learning. 3. Platform information The Animator was developed using the Motif toolkit on an IBM RS6000 with X-Windows. 
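As a quick picture of what the animator draws, the sketch below (an illustration written for this announcement's readers, not code from the distribution) plots the line defined by one hidden unit's incoming weights in a 2-D input space, together with some stand-in training data; matplotlib and the particular weight values are assumptions.

import numpy as np
import matplotlib.pyplot as plt

# One hidden unit's incoming weights define the hyperplane
# w1*x1 + w2*x2 + b = 0; in 2-D input space that is the line
# x2 = -(w1*x1 + b) / w2, which the animator redraws as the
# weights change during training.
w1, w2, b = 1.5, -1.0, 0.2            # illustrative weights
x1 = np.linspace(-2, 2, 100)
x2 = -(w1 * x1 + b) / w2

rng = np.random.default_rng(4)
data = rng.normal(size=(40, 2))       # stand-in training data
labels = (w1 * data[:, 0] + w2 * data[:, 1] + b > 0)

plt.plot(x1, x2, "k-")
plt.scatter(data[:, 0], data[:, 1], c=labels)
plt.title("Hyperplane defined by one hidden unit")
plt.show()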
The animator appears to be stable on the IBM RS/6000, and has not been compiled on other platforms. However, DEC 5000 and SGI workstations have been successfully used as graphics servers for the animator.

How to install the hyperplane animator:

You will need a machine which has X-Windows and the Motif libraries.

1. Copy the file hyperplane-animator.tar to your machine via ftp as follows:
   ftp mines.colorado.edu (138.67.1.3)
   Name: anonymous
   Password: (your ID)
   ftp> cd pub/software/hyperplane-animator
   ftp> binary
   ftp> get hyperplane-animator.tar
   ftp> quit
2. Extract files from hyperplane-animator.tar with:
   tar -xvf hyperplane-animator.tar
3. Read the README file there. It includes information about compiling. It also includes instructions for running a number of demonstration networks that are included with this distribution.

DISCLAIMER: This software is distributed as shareware, and comes with no warranties whatsoever for the software itself or systems that include it. The authors deny responsibility for errors, misstatements, or omissions that may or may not lead to injuries or loss of property. This code may not be sold for profit, but may be distributed and copied free of charge as long as the credits window, copyright statement in the program, and this notice remain intact.
-------------------------------------------------------------------------------

From tishby at CS.HUJI.AC.IL Tue Oct 12 12:20:30 1993
From: tishby at CS.HUJI.AC.IL (Tali Tishby)
Date: Tue, 12 Oct 1993 18:20:30 +0200
Subject: ICPR 94 in Jerusalem: Call for Papers
Message-ID: <199310121620.AA21278@irs01.cs.huji.ac.il>

% ************* CALL FOR PAPERS - PLEASE DISTRIBUTE ***************************
%
%   CALL FOR PAPERS - 12th ICPR - PATTERN RECOGNITION AND NEURAL NETWORKS
%   Oct 9-13, 1994, Jerusalem, Israel
%
% CONFERENCE TOPICS:
%   Statistical pattern recognition;
%   temporal pattern recognition;
%   neural network models and algorithms;
%   machine learning in pattern recognition;
%   theoretical models and analysis of neural networks;
%   models of biological pattern recognition;
%   adaptive models;
%   fuzzy systems;
%   applications to biological sequence analysis,
%   applications to handwriting, speech, motor control, and active vision.
%
% PROGRAM COMMITTEE:
%   Naftali Tishby (Chair) - Hebrew University (tishby at cs.huji.ac.il)
%
%   Henry Baird            Eric Baum            Victor Brailovsky
%   Alfred Bruckstein      Pierre A. Devijver   Robert P.W. Duin
%   Isak Gath                                   Geoffrey E. Hinton
%   Nathan Intrator        Anil Jain            Chuanyi Ji
%   Michael Jordan         Junichi Kanai        Rangachar Kasturi
%   Josef Kittler          Yann LeCun           Mike Mozer
%   Erkki Oja              Sarunas Raudys       Gabriella Sanniti di Baja
%   Eric Schwartz          Haim Sompolinsky     Vladimir Vapnik
%   Harry Wechsler         Daphna Weinshall     Haim Wolfson
%
% This conference is one of four conferences in the 12th ICPR. Each submitted
% paper will be carefully reviewed by members of the program committee.
% Papers describing applications are encouraged, and will be reviewed by a
% special Applications Committee.
% The conference proceedings are published by the IEEE Computer Society Press.
%
% 12-ICPR CO-CHAIRS: S. Ullman - Weizmann Inst. (shimon at wisdom.weizmann.ac.il)
%                    S. Peleg - The Hebrew University (peleg at cs.huji.ac.il)
% LOCAL ARRANGEMENTS: Y. Yeshurun - Tel-Aviv University (hezy at math.tau.ac.il)
% INDUSTRIAL & APPLICATIONS LIAISON: M. Ejiri - Hitachi (ejiri at crl.hitachi.co.jp)
%
% PAPER SUBMISSION DEADLINE: February 1, 1994.
% Notification of Acceptance: May 1994.   Camera-Ready Copy: June 1994.
%
% Send four copies of the paper to: 12th ICPR, c/o International, 10 Rothschild Blvd,
% 65121 Tel Aviv, ISRAEL. Tel. +972(3)510-2538, Fax +972(3)660-604
%
% Each manuscript should include the following:
% 1. A Summary Page addressing these topics:
%    - To which of the four conferences is the paper submitted?
%    - What is the paper about? - What is the original contribution of this work?
%    - Does the paper mainly describe an application, and should be reviewed by
%      the applications committee?
% 2. Papers should be limited in length to 4000 words, the estimated length of
%    the proceedings version.
%
% For further information on all ICPR conferences contact the secretariat at the
% above address, or use E-mail: icpr at math.tau.ac.il .
%%=======
%% ICPR CALL FOR PAPERS in LaTeX format
\documentstyle [11pt]{article}
\pagestyle{empty}
\setlength{\textheight}{10.0in}
\setlength{\topmargin}{-.75in}
\setlength{\textwidth}{7.0in}
\setlength{\oddsidemargin}{-.25in}
\begin{document}
\centerline{\bf \Large CALL FOR PAPERS}
\vspace{0.15in}
\centerline{\bf \Large 12th ICPR}
\vspace{0.15in}
\centerline{\bf \Large PATTERN RECOGNITION AND NEURAL NETWORKS}
\vspace{0.07in}
\centerline{\bf \Large Oct 9-13, 1994, Jerusalem, Israel}
\null
\vspace{0.1in}
\centerline{\bf CONFERENCE TOPICS:}
\smallskip
\begin{center}
\addtolength{\baselineskip}{-4pt}
$\bullet$ Statistical pattern recognition \hspace{0.05in} $\bullet$ temporal pattern recognition\\
\hspace{0.05in} $\bullet$ neural network models and algorithms \hspace{0.05in} $\bullet$ machine learning in pattern recognition\\
\hspace{0.05in} $\bullet$ theoretical models and analysis of neural networks\\
\hspace{0.05in} $\bullet$ models of biological pattern recognition \hspace{0.05in} $\bullet$ adaptive models \hspace{0.05in} $\bullet$ fuzzy systems\\
$\bullet$ applications to biological sequence analysis, handwriting, speech, motor control, and active vision.
\addtolength{\baselineskip}{+4pt}
\end{center}
\small
\vspace{0.1in}
\centerline{\bf PROGRAM COMMITTEE:}
\vspace{0.1in}
\centerline{Naftali Tishby (Chair) - Hebrew University ({\tt tishby at cs.huji.ac.il})}
\begin{tabbing}
\hspace*{0.85in} \= \hspace*{2.0in} \= \hspace*{2.0in} \= \kill
\>Henry Baird \> Eric Baum \> Victor Brailovsky \\
\>Alfred Bruckstein \> Pierre A. Devijver \> Robert P.W. Duin \\
\> Isak Gath \> \> Geoffrey E. Hinton \\
\>Nathan Intrator \> Anil Jain \> Chuanyi Ji \\
\>Michael Jordan \> Junichi Kanai \> Rangachar Kasturi \\
\>Josef Kittler \> Yann LeCun \> Mike Mozer \\
\>Erkki Oja \> Sarunas Raudys \> Gabriella Sanniti di Baja\\
\>Eric Schwartz \> Haim Sompolinsky \> Vladimir Vapnik \\
\>Harry Wechsler \> Daphna Weinshall \> Haim Wolfson \\
\end{tabbing}
\medskip
\noindent
This conference is one of four conferences in the 12th ICPR. Each submitted paper will be reviewed by members of the program committee. Papers describing applications are encouraged, and will be reviewed by a special Applications Committee. The conference proceedings are published by the IEEE Computer Society Press.
\vspace{0.16in}
\begin{tabbing}
\hspace*{3.0in} \= \kill
12-ICPR CO-CHAIRS: \> S. Ullman - Weizmann Inst. ({\tt shimon at wisdom.weizmann.ac.il})\\
\> S. Peleg - The Hebrew University ({\tt peleg at cs.huji.ac.il})\\
LOCAL ARRANGEMENTS: \> Y. Yeshurun - Tel-Aviv University ({\tt hezy at math.tau.ac.il})\\
INDUSTRIAL \& APPLICATIONS LIAISON: \> M. Ejiri - Hitachi ({\tt ejiri at crl.hitachi.co.jp})\\
\end{tabbing}
%\vspace{0.06in}
\smallskip
\noindent
{\bf PAPER SUBMISSION DEADLINE: ~~~ February 1, 1994.}
\medskip
\noindent
{\bf Notification of Acceptance:} May 1994. ~~~{\bf Camera-Ready Copy:~~ June 1994}.
\medskip
\noindent
Send four copies of the paper to: 12th ICPR, \\
c/o International, 10 Rothschild Blvd,\\
65121 Tel Aviv, ISRAEL. Tel. +972(3)510-2538, Fax +972(3)660-604\\
\medskip
\noindent
Each manuscript should include the following:
\begin{enumerate}
\addtolength{\baselineskip}{-4pt}
\item A Summary Page addressing these topics:
\begin{itemize}
\item To which of the four conferences is the paper submitted?
\item What is the paper about? - What is the original contribution of this work?
\item Does the paper mainly describe an application, and should be reviewed by the applications committee?
\end{itemize}
\item Papers should be limited to 4000 words, the estimated length of the proceedings version.
\addtolength{\baselineskip}{+4pt}
\end{enumerate}
\noindent
For further information on all ICPR conferences contact the secretariat at the above address, or use \\
E-mail: {\tt icpr at math.tau.ac.il} .
\end{document}

From shrager at xerox.com Tue Oct 12 23:17:59 1993
From: shrager at xerox.com (Jeff Shrager)
Date: Tue, 12 Oct 1993 20:17:59 PDT
Subject: Plasticity and Cortical Development Paper available
Message-ID: <93Oct12.201804pdt.38019@huh.parc.xerox.com>

The following paper is available in hardcopy upon request. (Please send a message in which your name and address appear in a format that can be cut-and-pasted onto an envelope.)

Shrager, J. & Johnson, M. H. (in press). Modeling the development of cortical function. To appear in I. Kovacs & B. Julesz (Eds.), Maturational Windows and Cortical Plasticity [working title and editors list]. Santa Fe, NM: The Santa Fe Institute.

Our goal in this work is to investigate the factors that give rise to the functional organization of the mammalian cerebral cortex during early brain development. We hypothesize that the cortex is organized through a combination of endogenous and exogenous influences, including subcortical structuring, maturational timing, and the information structure of the organism's early environment. In this paper we demonstrate, via computational neural modeling, the way in which these influences can lead to differential cortical function, and to the differential distribution of function over the cortical sheet. In three computational studies, using a modified version of a model of cortical development due originally to Kerszberg, Dehaene, and Changeux, we demonstrate that stimulus correlations, structural targeting (of subcortex to cortex), spatial structure in the stimulus, and, most importantly, waves of neural trophic factor have predictable effects upon the modular structure and degree of functionality represented in the resulting cortical sheet.

From robbie at prodigal.psych.rochester.edu Wed Oct 13 14:19:02 1993
From: robbie at prodigal.psych.rochester.edu (Robbie Jacobs)
Date: Wed, 13 Oct 93 14:19:02 EDT
Subject: Assistant Professor opening
Message-ID: <9310131819.AA15914@prodigal.psych.rochester.edu>

Dear Colleague:

The attached advertisement describes an Assistant Professor position for a Behavioral Neuroscientist in the Department of Psychology at the University of Rochester. It is anticipated that this position will be available July 1, 1994 or 1995.
We hope to attract a scientist who will interact productively with existing faculty whose research interests are in developmental psychobiology and/or learning and memory. Also, the candidate would be part of a university-wide community of over 60 neuroscientists contributing to inter-departmental graduate and undergraduate programs in neuroscience. I would appreciate it if you could bring this position to the attention of suitable candidates.

Sincerely,
Ernie Nordeen
Associate Professor of Psychology and of Neurobiology & Anatomy

BEHAVIORAL NEUROSCIENTIST. The Department of Psychology at the University of Rochester anticipates an Assistant Professor position in neuroscience. We are particularly interested in persons investigating relationships between brain and behavioral plasticity at the level of neural systems. Individuals whose research emphasizes either i) neural mechanisms of learning and memory, or ii) development/reorganization in perceptual or motor systems are especially encouraged to apply, but persons interested in related areas of behavioral neuroscience will also be considered. The successful candidate is expected to develop an active research program, and participate in teaching within graduate and undergraduate programs in neuroscience. Applicants should submit curriculum vitae, a brief statement of research interests, and three letters of reference by 1 February 1994 to: Chair, Biopsychology Search Committee, Dept. of Psychology, University of Rochester, Rochester, NY, 14627. An Affirmative Action/Equal Opportunity Employer.

From hzhu at liverpool.ac.uk Thu Oct 14 13:55:43 1993
From: hzhu at liverpool.ac.uk (Mr. H. Zhu)
Date: Thu, 14 Oct 93 13:55:43 BST
Subject: PhD Thesis available for FTP in neuroprose
Message-ID: <9310141255.AA04958@yew-13.liv.ac.uk>

FTP-host: archive.cis.ohio-state.edu (128.146.8.52)
FTP-file: pub/neuroprose/zhu.thesis.ps.Z

PhD Thesis (222 pages) available in neuroprose repository. (An index entry and a sample ftp procedure follow the abstract.)

NEURAL NETWORKS AND ADAPTIVE COMPUTERS:
Theory and Methods of Stochastic Adaptive Computation

Huaiyu Zhu
Department of Statistics and Computational Mathematics
Liverpool University, Liverpool L69 3BX, UK

ABSTRACT: This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects --- it can be continuous, stochastic and adaptive --- and retains TM computation as a subclass called ``data processing''. The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of ``trainable information processor'' (TIP) --- a parameterised stochastic mapping with a rule to change the parameters --- is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as the back propagation NN, the Boltzmann machine and the Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF) and are thus susceptible to local optima in weight space.
Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of ``global search'' in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than the back propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network. Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision processes to allow adaptive learning, resulting in a computational system called a ``basic adaptive computer'', which has the advantage over earlier reinforcement learning systems, such as Sutton's ``Dyna'', in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
___________________________________________________________________

INDEX entry:

zhu.thesis.ps.Z hzhu at liverpool.ac.uk
222 pages. Foundation of stochastic adaptive computation based on neural networks. Simulated annealing learning rule superior to backpropagation and Boltzmann machine learning rules. Reinforcement learning for combinatorial state space and action space. (Mathematics with simulation results plus philosophy.)
---------------------
Sample ftp procedure:

unix$ ftp archive.cis.ohio-state.edu
Name (archive.cis.ohio-state.edu:name): ftp (or anonymous)
Password: (your email address including @)
ftp> cd pub/neuroprose
ftp> binary
ftp> get zhu.thesis.ps.Z
ftp> quit
unix$ uncompress zhu.thesis.ps.Z
unix$ lpr zhu.thesis.ps

The last two steps can also be combined to:

unix$ zcat zhu.thesis.ps.Z | lpr

which will save some space.
----------------------
Note: This announcement is simultaneously sent to the following three mailing lists: connectionists at cs.cmu.edu, anneal at sti.com, reinforce at cs.uwa.edu.au
My apology to those who subscribe to more than one of them. I'm sorry that there is no hard copy available.
--
Huaiyu Zhu   hzhu at liverpool.ac.uk
Dept. of Stat. & Comp. Math., University of Liverpool, L69 3BX, UK
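The annealed learning rule itself is not reproduced in the announcement above, but the following generic sketch (an illustrative stand-in, not the thesis's actual algorithm; the toy task, network shape and schedule are all assumptions) shows the basic mechanism the abstract describes: propose a random weight perturbation, accept it with the Metropolis rule driven only by the scalar training error, and lower the temperature.

import numpy as np

rng = np.random.default_rng(3)

X = rng.normal(size=(32, 2))
y = np.sign(X[:, 0] * X[:, 1])          # a toy XOR-like regression target

def error(w):
    # Tiny 2-3-1 feedforward net; returns the scalar training error,
    # the only quantity the annealed search needs.
    W1, b1, w2 = w[:6].reshape(2, 3), w[6:9], w[9:12]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ w2 - y) ** 2)

w = rng.normal(size=12)
T = 1.0
for step in range(5000):
    w_new = w + rng.normal(scale=0.1, size=12)      # random proposal
    dE = error(w_new) - error(w)
    if dE < 0 or rng.random() < np.exp(-dE / T):    # Metropolis acceptance
        w = w_new
    T *= 0.999                                      # annealing schedule
print(error(w))

Unlike pure gradient following, occasional uphill moves let the search leave local optima, at the price of many more error evaluations.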
From lmr at pimac2.iet.unipi.it Thu Oct 14 08:16:05 1993
From: lmr at pimac2.iet.unipi.it (Leonardo Reyneri)
Date: Thu, 14 Oct 93 13:16:05 +0100
Subject: No subject
Message-ID: <9310141216.AA22985@pimac2.iet.unipi.it>

Please find below the Call For Papers of MICRONEURO '94:

**************************************************************************

MICRONEURO 94
The Fourth International Conference on Microelectronics for Neural Networks and Fuzzy Systems
Torino (I), September 26-28, 1994

FIRST CALL FOR PAPERS

This conference is the fourth in a series of international conferences dedicated to all aspects of hardware implementations of Neural Networks and Fuzzy Systems. MICRONEURO has emerged as the only international forum devoted specifically to all hardware implementation aspects, giving particular weight to those interdisciplinary issues which affect the design of Neural and Fuzzy hardware directly.

TOPICS

The conference program will focus upon all aspects of hardware implementations of Neural Networks and Fuzzy Systems and their applications in the real world. Topics will concentrate upon the following fields:

- Analog and mixed-mode implementations
- Digital implementations
- Optical systems
- Pulse-Stream computation
- Weightless Neural systems
- Neural and Fuzzy hardware systems
- Interfaces with external world
- Applications of dedicated hardware
- VLSI-friendly Neural algorithms
- New technologies for Neural and Fuzzy Systems

Selection criteria will also be based on technical relevance, novelty of the approach, and availability of performance measurements for the system/device.

INFORMATION FOR AUTHORS

All submitted material (written in English) will be refereed and should be typed on A4 paper, 1-1/2 spaced, 12 point font, 160x220 mm text size. All accepted material will appear in the proceedings.

PAPERS should not exceed 10 pages including figures and text. Also reports on EARLY INNOVATIVE IDEAS will be considered for presentation. In this case the submission should be a short description of the novel idea, not exceeding 6 pages in length, and it must be clearly marked ``Innovative Idea''. The most interesting papers and ideas will be published in a special issue of IEEE MICRO.

SUBMISSIONS

Six copies of final manuscripts, written according to the above requirements, shall be submitted to the Program Chairman. Submissions arriving late or significantly departing from length guidelines, or papers published elsewhere, will be returned without review. Electronic versions of the submission (possibly in LATEX format) are welcome.

DEADLINES

Submission of papers and/or ideas   May 30, 1994
Notification of acceptance          July 15, 1994

THE WORKSHOP VENUE

The venue of MICRONEURO '94 is Torino, the historic and beautiful center of Piemonte. The town is surrounded by the highest mountains in Europe and by beautiful hills and landscapes. The region is also famous for its excellent wines. MICRONEURO '94 will be held at the Politecnico di Torino. The venue is conveniently located close to the town centre, with many restaurants and cafes close by.

General Chair:
H.P. Graf
AT&T Bell Laboratories
Room 4 G 320
HOLMDEL, NJ 07733 - USA
Tel. +1 908 949 0183
Fax. +1 908 949 7722

Program Chair:
L.M. Reyneri
Dip. Ingegneria Informazione
Universita' di Pisa
Via Diotisalvi, 2
56126 PISA - ITALY
Tel. +39 50 568 511
Fax. +39 50 568 522
E-mail: lmr at pimac2.iet.unipi.it

Organisation:
COREP, Segr. MICRONEURO '94
C.so Duca d. Abruzzi, 24
10129 TORINO - ITALY
Tel. +39 11 564 5108
Fax. +39 11 564 5199

Steering Committee:
K. Goser (D)
J. Herault (F)
W. Moore (UK)
A.F. Murray (UK)
U. Ramacher (D)
M. Sami (I)

Program Committee:
E. Bruun (DK)
H.C. Card (CA)
D. Del Corso (I)
P. Garda (F)
M. Jabri (AU)
S.R. Jones (UK)
C. Jutten (F)
H. Klar (D)
J.A. Nossek (D)
A. Prieto (E)
U. Rueckert (D)
L. Spaanenburg (NL)
L. Tarassenko (UK)
M. Verleysen (B)
E. Vittoz (CH)
J. Wawrzynek (USA)
W. Yang (USA)

**************************************************************************
From olivier at dendrite.cs.colorado.edu Thu Oct 14 15:06:56 1993
From: olivier at dendrite.cs.colorado.edu (Olivier Brousse)
Date: Thu, 14 Oct 1993 13:06:56 -0600
Subject: Generativity and systematicity, learning: Report announcement
Message-ID: <199310141907.AA00719@dendrite.cs.Colorado.EDU>

The following report is now available via anonymous ftp on the cis.ohio-state.edu ftp server, directory pub/neuroprose, file brousse.sysgen.ps.Z
Pages: 180, size: 893720 bytes.

Title: Generativity and Systematicity in Neural Network Combinatorial Learning

It is also available via surface mail, as Technical Report CU-CS-676-93, for a small fee ($5, I believe), from:
Attn: Vicki Emken
Department of Computer Science, Box 430
University of Colorado at Boulder
Boulder, CO 80309-0430, U.S.A.

Abstract: This thesis addresses a set of problems faced by connectionist learning that have originated from the observation that connectionist cognitive models lack two fundamental properties of the mind: generativity, stemming from the boundless cognitive competence one can exhibit, and systematicity, due to the existence of symmetries within them. Such properties have seldom been seen in neural network models, which have typically suffered from problems of inadequate generalization, as exemplified both by small numbers of generalizations relative to training set sizes and by heavy interference between newly learned items and previously learned information. Symbolic theories, arguing that mental representations have syntactic and semantic structure built from structured combinations of symbolic constituents, can in principle account for these properties (both arise from the sensitivity of structured semantic content with a generative and systematic syntax). This thesis studies the question of whether connectionism, arguing that symbolic theories can only provide approximate cognitive descriptions which can only be made precise at a sub-symbolic level, can also account for these properties. Taking a cue from the domains in which human learning most dramatically displays generativity and systematicity, the answer is hypothesized to be positive for domains with combinatorial structure. A study of such domains is performed, and a measure of combinatorial complexity in terms of information/entropy is used. Experiments are then designed to confirm the hypothesis. It is found that a basic connectionist model trained on a very small percentage of a simple combinatorial domain of recognizing letter sequences can correctly generalize to large numbers of novel sequences. These numbers are found to grow exponentially when the combinatorial complexity of the domain grows. The same behavior is even more dramatically obtained with virtual generalizations: new items which, although not correctly generalized, can be learned in a few presentations while leaving performance on the previously learned items intact. Experiments are repeated with fully-distributed representations, and results imply that performance is not degraded. When weight elimination is added, perfect systematicity is obtained. A formal analysis is then attempted in a simpler case. The more general case is treated with contribution analysis.

To retrieve and print:

unix> ftp archive.cis.ohio-state.edu
Name: anonymous
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
ftp> binary
ftp> get brousse.sysgen.ps.Z
200 PORT command successful.
ftp> quit
unix> zcat brousse.sysgen.ps.Z | lpr
or
unix> zcat brousse.sysgen.ps.Z | lpr -s

- Olivier Brousse
olivier at cs.colorado.edu

From Frances.T.Perillo at Dartmouth.EDU Thu Oct 14 09:21:13 1993
From: Frances.T.Perillo at Dartmouth.EDU (Frances T. Perillo)
Date: 14 Oct 93 09:21:13 EDT
Subject: POSITION AVAILABLE
Message-ID: <6728344@prancer.Dartmouth.EDU>

Cognitive Position: The Department of Psychology at Dartmouth College has a junior, tenure-track position available in the area of Cognition -- broadly construed to include any area of research within Cognitive Psychology, Cognitive Science, and/or Cognitive Neuroscience. Candidates must be able to establish a strong research program and must have a commitment to undergraduate and graduate instruction. Supervision of both graduate and undergraduate research will be expected. Please send a letter of application, vita, and three letters of recommendation to: Chair, Cognitive Search Committee, Department of Psychology, 6207 Gerry Hall, Dartmouth College, Hanover, NH 03755-3459. Review of applications will begin on February 15, 1994. Dartmouth College is an equal opportunity employer with an affirmative action plan. Women and members of minority groups are encouraged to apply.

From fellous at rana.usc.edu Wed Oct 13 20:36:19 1993
From: fellous at rana.usc.edu (Jean-Marc Fellous)
Date: Wed, 13 Oct 93 17:36:19 PDT
Subject: CNE WORKSHOP - PROGRAM
Message-ID: <9310140036.AA01375@rana.usc.edu>

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

The Center for Neural Engineering
University of Southern California
Los Angeles CA 90089-2520

Announces a Workshop - Oct 19-20, 1993

Neural Architectures and Distributed AI: From Schema Assemblages to Neural Networks
October 19-20, 1993

Program Committee: Michael Arbib (Organizer), George Bekey, Damian Lyons, and Ron Sun

This message contains the PROGRAM for the meeting and comes with a warm invitation to participate in the Workshop. Registration materials are provided at the end. Please plan to join us in Los Angeles in October for the workshop - and do consider coming a day early to take part in the CNE Review.

Scope of the Workshop: To design complex technological systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence and perceptual robotics which is "in the style of the brain", but at a relatively high level of abstraction relative to neural networks. We seek (both at the level of schema assemblages, and in terms of "modular" neural networks) a distributed model of computation, supporting many concurrent activities for recognition of objects, and the planning and control of different activities. The use, representation, and recall of knowledge is mediated through the activity of a network of interacting computing agents which between them provide processes for going from a particular situation and a particular structure of goals and tasks to a suitable course of action. This action may involve passing of messages, changes of state, instantiation to add new schema instances to the network, deinstantiation to remove instances, and may involve self-modification and self-organization.
Schemas provide a form of knowledge representation which differs from frames and scripts by being of a finer granularity. Schema theory is generative: schemas may well be linked to others to provide yet more comprehensive schemas, whereas frames tend to "build in" from the overall framework. The analysis of interacting computing agents (the schema instances) is intermediate between the overall specification of some behavior and the neural networks that subserve it. The Workshop will focus on different facets of this multi-level methodology. Abstracts will be collected in a CNE Technical Report which will be made available to registrants at the start of the meeting. Monday: The meeting will start at 6:30pm on Monday evening (for those who have formally registered): Evening at the University Hilton No-Host Bar followed by Dinner Note: Members of the USC community are welcome to attend the presentations (but not the dinner) free of charge - please obtain your free registration between 8:30 and 9:00am on Tuesday. USC registrants may purchase the Workshop Proceedings for $10. ---------------------------- TUESDAY All talks except for the last session will be given in the Hedco Neurosciences Building Auditorium 8:30am Registration. Hedco Neurosciences Building Lobby - Introductory Overview > 9:00am Schemas and Neural Networks: A Multi-Level Approach to Natural and Artificial Intelligence Michael A. Arbib - University of Southern California - Schemas for Robotics > 10:00am Reactive Schema-based Robotic Systems: Principles and Practice. Ronald C. Arkin - Georgia Institute of Technology > 10:30 Coffee > 11:00am A Schema-Theory Approach to Building and Analysing the Behavior of Robot Systems D. M. Lyons - North American Philips Corporation > 11:30am Visually Guided Multi-Fingered Robot Hand Grasping as Defined by Schemas and a Reactive System. T. G. Murphy - University of Massachusetts, Lowell D. M. Lyons & A.J. Hendricks - North American Philips Corporation > 12 Noon Reinforcement Learning for Robotic Reaching and Grasping Andrew H. Fagg - University of Southern California > 12:30pm Lunch > 1:30pm A Knowledge Base for Neural Guidance System Ramon Krosley & Manavendra Misra - Colorado School of Mines > 2:00pm Multiresolutional Schemata for Motion Control A. Meystel - Drexel University > 2:30pm Baby Sub: Using Schemata for Conceptual Learning Alberto Lacaze & Michael Meystel - Drexel University > 3:00pm Refreshments > 3:30pm A Real-Time Neural Implementation of a Schema Driven Toy-Car. Jan N. H. Heenskerk & Fred Keijzer - Leiden University, The Netherlands - Schemas, NNs, Vision, and Visuomotor Coordination > 4:00pm Representing and Learning Visual Schemas in Neural Networks for Scene Analysis Wee Kheng Leow & Risto Miikkulainen - University of Texas at Austin > 4:30pm Integration of Connectionist and Symbolic Modules in a Vision Task Masumi Ishikawa, Kengo Matsuo & Kenichi Yoshino Kyushu Institute of Technology, Japan -------------------- WEDNESDAY > 9:00am A Schema-Theoretic Approach to Study the "Chantlitaxia" Behavior in the Praying Mantis Francisco Cervantes Perez, Arturo Franco, Susana Velazquez and Nydia Lara - ITAM and UNAM, Mexico > 9:30am Schema Based Learning and Anuran Detour Behavior Fernando J. Corbacho and Hyun Bong Lee University of Southern California > 10:00am "What", "Where", and the Architecture of Action-Oriented Perception Michael A. 
Arbib - University of Southern California > 10:30am Coffee - Programming Environments for Schemas and NNs > 11:00am ASL: Hierarchy, Composition, Heterogeneity, and Multi-Granularity in Concurrent Object-Oriented Programming A. Weitzenfeld - University of Southern California > 11:30am A Message Passing Based Approach to the Design of Modular Neural Network Systems Lawrence Gunn - MacDonald Dettwiler Associates, Canada > 12 Noon A Paradigm for Handling Neural Networks in Databases Erich Schikuta - University of Vienna > 12:30pm Lunch - Schemas and Connectionism > 1:30pm Feeling-Based Schemas Peter H. Greene & Greg T.H. Chien - Illinois Institute of Technology > 2:00pm Neural Schemas and Connectionist Logics: A Synthesis of the Symbolic and the Subsymbolic Ron Sun - The University of Alabama > 2:30pm Distributed Knowledge Representation in Adaptive Self-Organizing Concurrent Systems Andrew Bartczak - The University of Rhode Island > 3:00pm Refreshments **Refreshments and the concluding session will take place at the Auditorium of the Andrus Gerontology Center.** > 3:30pm A Connectionist Model of Semantic Memory for Metaphor Interpretation Tony Veale & Mark Keane - Trinity College, Ireland > 4:00pm Schema-based Modeling of Commonsense Understanding of Causal Narratives Srinivas Narayanan - University of California, Berkeley > 4:30pm Dynamic Schema Instances in the Conposit Framework John A. Barnden - New Mexico State University ************** BONUS EVENT: CNE RESEARCH REVIEW **************** Registrants for the Workshop are invited to attend, at no extra charge, the CNE Research Review to be held on Monday, October 18, 1993. The Review will present a day-long sampling of CNE research. In particular, the meeting will celebrate the opening of two new CNE Laboratories: The Autonomous Robotics Laboratory and the Neuro-Optical Computing Laboratory, which join the Brain Simulation Laboratory in the Hedco Neurosciences Building. During the day, George Bekey will present an overview of our research on autonomous robots, while Keith Jenkins and Armand Tanguay will review the state of the art in our research on optical implementation of neural networks. Related talks will include those by Bing Sheu on VLSI for Neural Networks and by Alfredo Weitzenfeld on neural simulation tools. Another major development we will celebrate is the ever-strengthening cooperation between CNE and USC's Program in Neural, Informational, and Behavioral Sciences (NIBS) in bringing information technology to bear in catalyzing new insights into the complexity of the brain. Scott Grafton will review our use of PET scans to gain new insight into human brain mechanisms of vision, action and memory; while Dennis McLeod will present our approach to the construction of federated databases for neuroscience and other scientific applications. The Program will be rounded out by talks by other faculty and students, student posters, and demonstrations of hardware and software. Accommodation Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "CNE Workshop" attendees to avail themselves of the above rates. Information on student accommodation may be obtained from the Student Chair, Jean-Marc Fellous, jfellous at pollux.usc.edu.
REGISTRATION The registration fee of $150 ($40 for qualified students who include a "certificate of student status" from their advisor and for CNE Members) includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of October 18th. Those wishing to register should send a check payable to "Center for Neural Engineering, USC" for $150 ($40 for students and CNE members) together with the following information to: Marietta Pobst Center for Neural Engineering University of Southern California Los Angeles, CA 90089-2520, USA. mpobst at pollux.usc.edu Tel: (213) 740-1176; Fax: (213) 740-5687 SCHEMAS AND NEURAL NETWORKS Center for Neural Engineering; University of Southern California October 19-20, 1993 NAME: ___________________________________________ ADDRESS: _________________________________________ _________________________________________ _________________________________________ PHONE NO.:_______________ FAX:___________________ EMAIL:___________________________________________ I intend to attend the CNE Research Review on October 18, 1993: YES [ ] NO [ ] Note: Late registrations will be accepted on the Monday morning, but places at the dinner are limited, so advance email to mpobst at pollux.usc.edu would be appreciated, even if you choose to bring your check on Monday morning. From harnad at Princeton.EDU Thu Oct 14 20:23:58 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 14 Oct 93 20:23:58 EDT Subject: Hippocampus and Memory: BBS Call for Commentators Message-ID: <9310150023.AA25022@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by H. EICHENBAUM et al. on TWO COMPONENT FUNCTIONS OF THE HIPPOCAMPAL MEMORY SYSTEM that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ TWO COMPONENT FUNCTIONS OF THE HIPPOCAMPAL MEMORY SYSTEM Howard Eichenbaum Center for Behavioral Neuroscience State University of New York at Stony Brook Stony Brook, NY 11794 (516) 632-9482 heichen at neuro.som.sunysb.edu Tim Otto Department of Psychology Busch Campus Rutgers University New Brunswick, NJ 08903 Neal J. Cohen Beckman Institute & Department of Psychology University of Illinois at Urbana-Champaign 405 N. Mathews Avenue Urbana, IL 61801 KEY WORDS: Amnesia, Hippocampus, Parahippocampal Region, Entorhinal Cortex, Learning, Memory, Representation. ABSTRACT: The hippocampal system contributes to (1) the temporary maintenance of memories and (2) the processing of a particular type of memory representation.
The evidence from amnesia suggests that these two hippocampus-dependent memory functions are orthogonal. Neuropsychological, anatomical and physiological evidence supports a two-component model of cortico-hippocampal interactions: Neocortical association areas maintain short-term memories for specific items and events prior to hippocampal processing and they provide the final repositories of long-term memory. The parahippocampal region supports intermediate-term storage of individual items and the hippocampal formation itself organizes memories according to relevant relationships among items. Hippocampal-cortical interactions lead to strong and persistent memories for events and their constituent elements and interrelations, together with a capacity for flexibly producing memories across a wide range of circumstances. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.eichenbaum). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.eichenbaum When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. A more elaborate version of these instructions for the U.K. is available on request (thanks to Brian Josephson). ---------- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). ------------------------------------------------------------- From fellous at rana.usc.edu Wed Oct 13 20:23:34 1993 From: fellous at rana.usc.edu (Jean-Marc Fellous) Date: Wed, 13 Oct 93 17:23:34 PDT Subject: CNE Review Announcement Message-ID: <9310140023.AA01325@rana.usc.edu> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The Center for Neural Engineering University of Southern California Announces The CNE Research Review Monday, October 18, 1993 The University of Southern California has established itself as a leader in linking research on the brain to innovations in neurally based artificial intelligence. The Center for Neural Engineering (CNE) has more than forty faculty members in such disciplines as Biomedical Engineering, Computer Science, Electrical Engineering, Neurobiology, Neurology, Linguistics and Psychology engaged in studies of neural networks and the computing style of the brain, and the design of a new generation of computers and robotic devices inspired by them. These professors supervise a large number of Ph.D.
candidates, offer a broad range of graduate courses, and conduct a strong research program supported by federal funds, foundations, and corporations. Research teams involving faculty, students and industrial colleagues ensure a healthy flow of ideas among several interrelated facets of neural science and engineering, including fundamental research on the brain, the simulation and mathematical analysis of neural networks, the development of novel engineering systems based in part on neural concepts, and the practical application of these systems. The CNE Research Review is designed for all who wish to benefit from USC's activities in Neural Engineering. Our computational analysis of the brain leads us to new strategies for human learning and new weapons in the fight against disease. These insights also enable us to chart new computer architectures, and to develop new forms of artificial intelligence to act as intelligent assistants to human decision-makers. Optical computing and microelectronics are leading us to the high bandwidth communication and massively parallel computation that can make these new tools effective and affordable on a grand scale. The CNE Review will present a day-long sampling of our research. In particular, the meeting will celebrate the opening of two new CNE Laboratories: The Autonomous Robotics Laboratory and the Neuro-Optical Computing Laboratory, which join the Brain Simulation Laboratory in the Hedco Neurosciences Building. During the day, George Bekey will present an overview of our research on autonomous robots, while Keith Jenkins and Armand Tanguay will review the state of the art in our research on optical implementation of neural networks. Related talks will include those by Bing Sheu on VLSI for Neural Networks and by Alfredo Weitzenfeld on neural simulation tools. Another major development we will celebrate is the ever-strengthening cooperation between CNE and USC's Program in Neural, Informational, and Behavioral Sciences (NIBS) in bringing information technology to bear in catalyzing new insights into the complexity of the brain. Scott Grafton will review our use of PET scans to gain new insight into human brain mechanisms of vision, action and memory; while Dennis McLeod will present our approach to the construction of federated databases for neuroscience and other scientific applications. The Program will be rounded out by talks by other faculty and students, student posters, and demonstrations of hardware and software in the CNE's Laboratories. Members, and potential members, of the CNE Industrial Affiliates Program will have the chance to meet with individual faculty members to discuss specific topics for research collaboration. The day will conclude with a Dinner which will give CNE members and visitors from Industry and other Universities a chance to reflect on the day's many presentations and discuss areas of mutual interest in a relaxed and convivial setting. Program All talks will be given in the Hedco Neurosciences Building Auditorium > 8:30am: Registration. Hedco Neurosciences Building Lobby > 9:00 am: Michael A.
Arbib: Welcome to the CNE > 9:30 am: George Bekey: Research on Autonomous Robots > 10:00am: Keith Jenkins: Optical Implementation of Neural Networks - Emphasis Computing > 10:30am: Coffee > 11:00am: Armand Tanguay: Optical Implementation of Neural Networks - Emphasis Devices > 11:30am: Bing Sheu: VLSI for Neural Networks > 12:00pm: Lunch > 1:30pm: Alfredo Weitzenfeld: The Neural Simulation Language NSL > 2:00pm: Scott Grafton: PET Scans, Functional MRI, and Human Brain Mechanisms. > 2:30pm: Dennis McLeod: Federated Databases for Neuroscience Research > 3:00pm: Coffee > 3:30pm: Laboratory Demonstrations: Autonomous Robotics Laboratory; Neuro-Optical Computing Laboratory; Brain Simulation Laboratory. > 6:30pm (For those who have formally registered): Evening at the University Hilton No-Host Bar followed by Dinner Bonus Event: Workshop on Neural Architectures and Distributed AI For an extra $50, fully paid registrants may have their registration extended to include a two-day Workshop sponsored by the CNE to be held on the two days following the CNE Review. The Workshop, "Neural Architectures and Distributed AI: From Schema Assemblages to Neural Networks" will be held on October 19 and 20, 1993. (The total fee for CNE Review and Workshop is $40 for CNE members and qualified students who include a "certificate of student status" from their advisor.) Note: Members of the USC community are welcome to attend the day's presentations (but not the dinner) free of charge - please obtain your free registration between 8:30 and 9:00am. ***** Industrial Affiliates in Neural Engineering ***** The Industrial Affiliates Program of the Center for Neural Engineering enables Industrial Affiliates to stay informed of the latest research in Neural Engineering and Computational Neurobiology at USC, and to take part in that research, ensuring that activities at the University of Southern California are responsive to the research and development needs of corporations, and contribute to technological competitiveness and technology transfer. Industrial Affiliates are involved in the work of the CNE through the provision of funding for both general and targeted research projects, through participation in research, seminars and educational programs at USC, and through a Visiting Scientists Program which enables corporate personnel to actively participate in research at USC as well as providing access for USC faculty and student researchers to specialized corporate facilities and industrial R&D programs. Over the years, membership has included General Dynamics, General Motors, Hitachi, Hughes, IBM, Lockheed, Matsushita, Nissan Motor Company, NTT Data, Ricoh Corporation, and Rockwell International. Basic membership is designed to help companies monitor USC's latest contributions to neural network technology and relate them to their business area and products. General funds are used to support workshops and the CNE seminar series, to contribute to administrative costs, and to provide small amounts of seed money for research projects. Support of personnel for training at USC is also encouraged. Funding of workshops is one way to advance this educational function. Going further, organizations that wish to undertake projects coordinating USC research with their own ongoing research and development efforts in neural engineering make a much larger commitment. Typically, we design a research project that combines an application of interest to the company with basic research issues of interest to the university.
The company pays an engineer to spend a year in the CNE working on the project, and provides funds to cover release time for a faculty member to supervise the project, the stipend for Ph.D. graduate students to act as research assistants for the project, and general operating expenses. Questions about the opportunities for research cooperation with the CNE should be addressed to: Michael A. Arbib, Director Center for Neural Engineering University of Southern California Los Angeles, CA 90089-2520 (213) 740-9220 FAX (213) 740-5687 arbib at pollux.usc.edu Accommodation Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "CNE Review" attendees to avail themselves of the above rates. Information on student accommodation may be obtained from the Student Chair, Jean-Marc Fellous, jfellous at pollux.usc.edu. Registration The registration fee of $100 for the CNE Review includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of October 18th. (Students may attend the Review for free, but will not be entitled to attend the dinner unless they register for the Workshop.) Those wishing to register should send a check payable to "Center for Neural Engineering, USC" for $100 ($150 for those also wishing to attend the Workshop; $40 for students and CNE members) together with the following information to Marietta Pobst, Center for Neural Engineering, University of Southern California, University Park, Los Angeles, CA 90089-2520, USA. ------------------------------------------------------------------- CNE Review Center for Neural Engineering, USC October 18, 1993 NAME: ___________________________________________ ADDRESS: _________________________________________ PHONE NO.: _______________ FAX:___________________ EMAIL: ___________________________________________ Please register me: for the Workshop as well as the Review: YES [ ] NO [ ] Note: Late registrations will be accepted on the Monday morning, but places at the dinner are limited, so advance email to mpobst at pollux.usc.edu would be appreciated, even if you choose to bring your check on Monday morning. From janetw at cs.uq.oz.au Fri Oct 15 02:15:51 1993 From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au) Date: Fri, 15 Oct 93 16:15:51 +1000 Subject: Symposium on Connectionist Models and Psychology (Australia) Message-ID: <9310150615.AA25251@client> First Announcement: Call to participants SYMPOSIUM ON CONNECTIONIST MODELS AND PSYCHOLOGY University of Queensland Brisbane Australia Saturday, 29 January 1994 (back-to-back with the Australian Conference on Neural Networks) The symposium is aimed at psychologists and psychological modelers, specifically those who are studying or questioning the relevance of neural networks to experimental psychology. We are aiming to provide a structured forum for discussion of issues. The symposium is structured into three sessions, focussed on the following themes: TENTATIVE PROGRAMME (8.10.93) SESSION 1. The rationale for psychologists using models: What benefits (if any) are there to be gained from using neural nets and other computational devices as models of human perception and cognition?
Chair: Peter Slezak Target Address: Danny Latimer Discussants: Max Coltheart; Sally Andrews; Margaret Charles SESSION 2. Correspondence between human and network performance: What methods and measures are available for comparing network variables and human experimental data? Chair: Danny Latimer Speakers: Kate Stevens; Graeme Halford; Simon Dennis SESSION 3. Basic computational processes: Psychological theories of cognition assume the operation of basic processes such as comparison, storage, search etc. What do the connectionist and symbol-manipulating approaches provide as means for modeling these processes? Chair: Steve Schwartz Target Address: Janet Wiles Discussants: Mike Johnson; Zoltan Schreter; George Oliphant For the first and third sessions the role of the speaker is to raise issues and, possibly, defend a position. The discussants will provide a commentary on the issues raised by the target speaker. For the second session, the role of the speakers is to raise and discuss conceptual and empirical issues. In all sessions the chair will lead the discussion. We're aiming to circulate a pre-proceedings two weeks in advance of the symposium to registered participants. _______________________________________________________________________ Details ------- Date: Saturday 29th January, 1994 Time: 9.00am - 6pm Location: Dept of Psychology, Room 304, University of Queensland St Lucia, Brisbane. _______________________________________________________________________ Please indicate your interest in attending the Symposium by returning this form or by contacting one of the organizers by email, telephone or fax by January 3, 1994. I will be attending the Symposium. Please keep me informed of developments. Name: ................................................................ Address:................................................................ ................................................................ Phone: ................................................................ Fax: ................................................................ Email: ................................................................ _______________________________________________________________________ Janet Wiles, Dept of Comp Sci, Uni of Queensland, St Lucia 4072. Phone: 07 365-2902. Fax: 07 365-1999 (International: +61-7-365-1999). Email: janetw at cs.uq.oz.au Kate Stevens, Dept of Psychology, Uni of Queensland, St Lucia 4072. Phone: 07 365-6203. Email: kates at psych.psy.uq.oz.au Danny Latimer, Dept of Psychology, Uni of Sydney, NSW 2006. Phone: 02 692-2481. Email: cyril at psychvax.psych.su.oz.au From BRAIN1 at taunivm.tau.ac.il Fri Oct 15 17:33:28 1993 From: BRAIN1 at taunivm.tau.ac.il (BRAIN1@taunivm.tau.ac.il) Date: Fri, 15 Oct 93 17:33:28 IST Subject: Bat-Sheva seminar on functional brain imaging Message-ID: % Dear Colleague, % % here follows the first announcement (plain TeX file) of the % % % "BAT-SHEVA SEMINAR ON FUNCTIONAL BRAIN IMAGING" % % which will take place in Tel-Aviv % June 9 to 16, 1994 % % May we ask you to post the announcement? % % Many thanks and best regards, % % D. Horn G. Navon % \nopagenumbers \magnification=1200 \def\sk{\vskip .2cm} \hsize=13cm \centerline{\bf BAT-SHEVA SEMINAR ON FUNCTIONAL BRAIN IMAGING} \sk \centerline{\bf Tel-Aviv, Israel, June 9 to 16, 1994} \vskip 3cm \centerline{\bf FIRST ANNOUNCEMENT} \sk The seminar will bring together experts on various techniques of functional brain imaging (PET, EEG, MEG, Optical, with particular emphasis on MRI).
It will start with a day of tutorials at Tel-Aviv University. These will serve as technical and scientific introductions for participants from different disciplines. It will continue in a resort hotel at the seashore with plenary lectures, describing recent advances in all the different techniques and comparing their merits and scientific results. Speakers include: M. Abeles, J. W. Belliveau, A. S. Gevins, A. Grinvald, M. H\"am\"al\"ainen, S. Ogawa, H. Pratt, M. Raichle, R. G. Shulman, D. Weinberger. The number of participants in the workshop will be limited. \sk \vskip 1cm Information and registration: Dan Knassim Ltd., P.O.B. 57005, Tel-Aviv 61570, Israel. Tel: 972-3-562 6470 Fax: 972-3-561 2303 \sk \vskip 2cm \centerline {D. Horn~~~~~~~~G. Navon} \centerline {ADAMS SUPER-CENTER FOR BRAIN STUDIES} \centerline {TEL-AVIV UNIVERSITY, TEL-AVIV, ISRAEL} \centerline{ e-mail: brain1 at taunivm.tau.ac.il } \vskip 2cm \sk \vfill\eject\end From arantza at cogs.susx.ac.uk Mon Oct 18 10:48:42 1993 From: arantza at cogs.susx.ac.uk (Arantza Etxeberria) Date: Mon, 18 Oct 93 10:48:42 BST Subject: Artificial Life Workshop Announcement Message-ID: "Artificial Life: a Bridge towards a New Artificial Intelligence" Palacio de Miramar (San Sebastian, Spain) December 10th and 11th, 1993 Workshop organised by the Department of Logic and Philosophy of Science, Faculty of Computer Science & Institute of Logic, Cognition, Language and Information (ILCLI) of the University of the Basque Country (UPV/EHU) Directors: Alvaro Moreno (University of the Basque Country) Francisco Varela (CREA, Paris) This Workshop will be dedicated to a discussion of the impact that work on Artificial Life is having on Artificial Intelligence. Artificial Intelligence (AI) has traditionally attempted to study cognition as an abstract phenomenon using formal tools, that is, as a disembodied process that can be grasped through formal operations, independent of the nature of the system that displays it. Cognition appears as an abstract representation of reality. After several decades of research in this direction the field has encountered several problems that have taken it to what many consider a "dead end": difficulties in understanding autonomous and situated agencies, in relating behaviour to a real environment, in studying the nature and evolution of perception, in finding a pragmatic approach to explain the operation of most cognitive capacities such as natural language, context dependent action, etc. Artificial Life (AL) has recently emerged as a confluence of very different fields trying to study different kinds of phenomena of living systems using computers as a modelling tool, and, ultimately, trying to artificially (re)produce a living system or a population of living systems in real or computational media. Examples of such phenomena are prebiotic systems and their evolution, growth and development, self-reproduction, adaptation to an environment, evolution of ecosystems and natural selection, formation of sensory-motor loops, autonomous robots. Thus, AL is having an impact not only on the classic life sciences but also on the conceptual foundations of AI, and it is contributing new methodological ideas to Cognitive Science. The aim of this Workshop is to focus on the last two points and to evaluate the influence of the methodology and concepts appearing in AL on the development of new ideas about cognition that could eventually give birth to a new Artificial Intelligence.
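Among the main topics listed below is the modelling of neural networks through genetic algorithms. Purely as an illustrative sketch of that combination (the XOR task, the tiny 2-2-1 network, the mutation-only operators, and all parameters here are invented for this example and are not drawn from the workshop), a minimal evolutionary loop over network weights might look like this:

    import math, random

    # Toy genetic-style search over the weights of a 2-2-1 feedforward net.
    CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # XOR

    def net(w, x1, x2):
        # w holds 9 numbers: weights and biases of two hidden units and one output.
        h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
        h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
        return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

    def fitness(w):
        # Negative sum-squared error over the four XOR cases.
        return -sum((net(w, x1, x2) - t) ** 2 for x1, x2, t in CASES)

    def evolve(pop_size=50, generations=200, sigma=0.3):
        pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)      # selection: keep the fittest half
            parents = pop[:pop_size // 2]
            children = [[wi + random.gauss(0, sigma) for wi in p] for p in parents]
            pop = parents + children                 # mutated copies replace the rest
        return max(pop, key=fitness)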
Some of the sessions consist of presentations and replies on a specific subject by invited speakers, while others will be debates open to all participants in the workshop. MAIN TOPICS: * A review of the problems of FUNCTIONALISM in Cognitive Science and Artificial Life. * Modelling Neural Networks through Genetic Algorithms. * Autonomy and Robotics. * Consequences of the crisis of the representational models of cognition. * Minimal Living System and Minimal Cognitive System * Artificial Life systems as problem solvers * Emergence and evolution in artificial systems SPEAKERS S. Harnad P. Husbands G. Kampis B. Mac Mullin D. Parisi T. Smithers E. Thompson F. Varela Further Information: Alvaro Moreno Apartado 1249 20080 DONOSTIA SPAIN E. Mail: biziart at si.ehu.es Fax: 34 43 311056 Phone: 34 43 310600 (extension 221) 34 43 218000 (extension 209) From terry at helmholtz.sdsc.edu Mon Oct 18 15:58:48 1993 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Mon, 18 Oct 93 12:58:48 PDT Subject: NEURAL COMPUTATION, 5:6 Nov 93 Message-ID: <9310181958.AA27586@helmholtz.sdsc.edu> NEURAL COMPUTATION Volume 5, Number 6, 1993 Articles: Analysis of Neuron Models with Dynamically Regulated Conductances L. F. Abbott and Gwendal Le Masson Letters: Limitations of the Hodgkin-Huxley Formalism: Effects of Single Channel Kinetics upon Transmembrane Voltage Dynamics Adam F. Strassberg and Louis J. DeFelice Two-dimensional Motion Perception in Flies A. Borst, M. Egelhaaf, and H. S. Seung Neural Representations of Space Using Sinusoidal Arrays David S. Touretzky, A. David Redish and Hank S. Wan Fast Recognition of Noisy Digits Jeffrey Kidder and Daniel Seligson Local Algorithms for Pattern Recognition and Dependencies Estimation V. Vapnik and L. Bottou On the Geometry of Feedforward Neural Network Error Surfaces An Mei Chen, Haw-minn Lu and Robert Hecht-Nielsen Rational Function Neural Network Henry Leung and Simon Haykin On an Unsupervised Learning Rule for Scalar Quantization Following the Maximum Entropy Principle Marc M. Van Hulle and Dominique Martinez A Function Estimation Approach to Sequential Learning with Neural Networks Visakan Kadirkamanathan and Mahesan Niranjan Learning Finite State Machines with Self-Clustering Recurrent Networks Zheng Zeng, Rodney Goodman and Padhraic Smyth ----- SUBSCRIPTIONS - VOLUME 6 - BIMONTHLY (6 issues) ______ $40 Student and Retired ______ $65 Individual ______ $166 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu ----- From cabestan at eel.upc.es Tue Oct 19 05:59:16 1993 From: cabestan at eel.upc.es (Joan Cabestany) Date: Tue, 19 Oct 1993 9:59:16 UTC Subject: Proceedings available Message-ID: Proceedings available _____________________ After the last edition of IWANN'93 (International Workshop on Artificial Neural Networks) held in Spain (Sitges) during June 1993, some books with the Proceedings are still available at a special price.
Reference: New Trends in Neural Computation (IWANN'93 Proceedings) J.Mira, J.Cabestany, A.Prieto editors Lecture Notes in Computer Science number 686 SPRINGER VERLAG 1993 Price: 9000 pesetas (Spanish currency) Method of payment: VISA Card number _________________________ Expiration date _______________ Name of card holder ______________________________________________________ Date ____________ Signature ___________________________________ Send this form to ULTRAMAR CONGRESS Att. Mr. J.Balada Diputacio, 238, 3 08007 BARCELONA Spain Fax + 34.3.412.03.19 From simon at dcs.ex.ac.uk Tue Oct 19 11:23:58 1993 From: simon at dcs.ex.ac.uk (simon@dcs.ex.ac.uk) Date: Tue, 19 Oct 93 16:23:58 +0100 Subject: UK PhD connectionist studentship available Message-ID: <16831.9310191523@kaos.dcs.exeter.ac.uk> PhD Studentship -- full-time, SERC funded The Department of Computer Science at the University of Exeter has a SERC quota award available for a suitable candidate to pursue full-time research for a PhD degree. Applicants should have a good first degree in Computer Science (or a closely related discipline) with a sound knowledge of neural computing and/or software engineering. The successful applicant will join a research group exploring the use of neural computing as a novel software technology. Potential projects range from formal analysis of network implementations of well-defined problems to development of visualization techniques to facilitate efficient network training as well as to provide support for a conceptual understanding of neural net implementations. Application forms and further information can be obtained from: Lyn Shackleton, Department of Computer Science, University of Exeter, Exeter EX4 4PT. email: lyn at dcs.exeter.ac.uk; tel: 0392 264066; FAX: 0392 264067 Informal enquiries and requests for further details of the research group's activities may be made to: Professor Derek Partridge, Department of Computer Science, University of Exeter, Exeter EX4 4PT, email: derek at dcs.exeter.ac.uk tel: 0392 264061, FAX: 0392 264067. The closing date for applications is November 19th, 1993. Interviews will be conducted in the week beginning November 22, 1993. It is expected that the award will be taken up in January 1994. -- Simon Klyne Connection Science Laboratory email: simon at dcs.exeter.ac.uk Department of Computer Science phone: (+44) 392 264066 University of Exeter EX4 4QE, UK. From mozer at dendrite.cs.colorado.edu Tue Oct 19 14:40:14 1993 From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer) Date: Tue, 19 Oct 1993 12:40:14 -0600 Subject: information on NIPS*93 workshop accommodations Message-ID: <199310191840.AA22297@neuron.cs.colorado.edu> The NIPS*93 brochure is a bit sketchy concerning accommodations at the NIPS workshops, to be held at the Radisson Resort Vail December 2-4. To make reservations at the Radisson, call (800) 648-0720. For general information on the resort, the central number is (303) 476-4444. Reservations can also be made by fax: (303) 476-1647. And if you would like to let the glowing power of a live psychic answer your very personal questions, the number is (900) 820-7131. Note that rooms will be held for us only until the beginning of November, and last year many participants had to sleep in the snow due to lack of foresight in making reservations. Concerning lift tickets: Unfortunately, the NIPS brochure was published before we were able to obtain this year's lift ticket prices. The prices have increased roughly $5/day over those published in the brochure.
If you wish to purchase tickets in advance, though, we ask that you send in the amounts published in the brochure. We will collect the difference on site. (Sorry, it's the only feasible way to do recordkeeping at this point.) Lift tickets may also be purchased on site at an additional expense of roughly $1/day. Very sorry for the inconvenience. Mike Mozer NIPS*93 Workshop Chair From fulkersw at smtplink.de.deere.com Tue Oct 19 10:11:36 1993 From: fulkersw at smtplink.de.deere.com (William Fulkerson) Date: Tue, 19 Oct 93 09:11:36 CDT Subject: Possible position in Finance Message-ID: <9310190911.A02154@smtplink.de.deere.com> A position is anticipated in the Finance department, Deere & Company, Moline, Illinois. The purpose of this message is to survey the interest in such a position and to determine the skills available among the likely candidates. The successful candidate will have a theoretical background and 1 to 3 years of professional experience in applications of neural networks, evolutionary computing, and/or fuzzy logic. Although professional experience in applications is required, it need not be in finance. Applicants with advanced degrees in engineering, statistics, or computer science are preferred. The applicant must want to apply the above technologies to financial problems and be willing to pursue these financial applications for an extended period of years. The initial assignment will be to develop trading systems for foreign exchange and commercial paper. Future assignments will be within the Finance department and may include pension fund management. If your interest, application experience, training, and skills match this description, please send a short description of your qualifications via e-mail to: fulkersw at smtplink.de.deere.com. Receipt of your e-mail will be acknowledged. From srx014 at cck.coventry.ac.uk Wed Oct 20 11:36:42 1993 From: srx014 at cck.coventry.ac.uk (CRReeves) Date: Wed, 20 Oct 93 11:36:42 WET DST Subject: ICSE94 - Call for Papers Message-ID: <7831.9310201036@cck.coventry.ac.uk> The following may be of interest to connectionists working on control engineering applications: ****************************************************************************** ICSE 94 Tenth International Conference on Systems Engineering First Announcement Call for papers 6-8 September 1994 C O V E N T R Y U N I V E R S I T Y Held at Coventry University UK Organised by the Control Theory and Applications Centre International Conference on Systems Engineering The 10th International Conference on Systems Engineering, ICSE'94, will take place at Coventry University and will be organised through the Control Theory and Applications Centre, an interdisciplinary research centre established by drawing together staff from the School of Engineering and the School of Mathematical and Information Sciences. Scope of Conference The Conference will cover the general area of Systems Engineering, with particular emphasis being placed on applications.
It is expected to include sessions on the following themes: - Adaptive Control and System Identification - Algorithms and Architectures - Control Theory and Industrial Applications - Educational Developments in Systems Engineering - Energy Efficiency and Environmental Systems - Image and Signal Processing - Manufacturing Systems - Modelling and Simulation - Rule Based Control and Fuzzy Decision Making - Neural Networks and Genetic Algorithms in Control and Identification Call for Papers Authors wishing to contribute to the Conference should submit an abstract (three copies) of their proposed contribution before 15 February 1994. The abstract should be typed and written in English. Refereeing of abstracts submitted before the deadline date will take place on a regular basis. This will allow early decisions to be taken and should assist authors in their planning arrangements. The Organising Committee would also welcome proposals for arranged specialist sessions on a focused theme relevant to the Conference, each session consisting of up to six papers. All papers presented will be considered for publication in the Journal 'Systems Science', published in Poland (in English). Deadlines - Submission of abstracts 15 February 1994 - Acceptance of papers 7 March 1994 - Submission of full papers 1 June 1994 It is intended to have the Conference Proceedings available for participants. Consequently, deadlines for submission of papers should be strictly respected. Preliminary Arrangements - The Conference fee, provisionally estimated at 325 Pounds Sterling, includes a copy of the Conference Proceedings, lunches on the 6th, 7th and 8th, the Conference Banquet on the 6th, and a Civic Reception followed by the Conference Dinner on the 7th. - Participants will have the option of being accommodated in the University Halls of Residence overlooking Coventry Cathedral or in local hotels or guest houses. The Conference fee is exclusive of accommodation charges. - The working language of the Conference is English, which will be used for all presentations, discussions and printed material. - The Conference Banquet is to be of the 'Olde English Mediaeval' style and will be held at the historical Coombe Abbey just outside Coventry. Abstracts, papers and requests for further details should be sent to: Dr Keith Burnham Conference Secretary ICSE94 Control Theory and Applications Centre Coventry University Priory Street Coventry CV1 5FB United Kingdom Telephone: 0203 838972 (International +44 203 838972) Telex: 9312102228 (CPG) Fax: 0203 838585 (International +44 203 838585) Email: mtx062 at uk.ac.cov From marwan at sedal.sedal.su.OZ.AU Wed Oct 20 22:47:19 1993 From: marwan at sedal.sedal.su.OZ.AU (Marwan Jabri) Date: Wed, 20 Oct 93 21:47:19 EST Subject: software Message-ID: <9310201147.AA21214@sedal.sedal.su.OZ.AU> MUME version 0.73 has been released. With respect to the earlier released version of MUME (version 0.6), it provides: o Xmume, an X Windows based editor/visualisation tool for MUME o support for the scheduling of multiple training algorithms/strategies for the simultaneous training of multiple networks Both binaries and sources are freely available. The binaries are available via anonymous ftp, and the sources are available following the return of a signed license, available as a postscript file in the anonymous ftp directory. The machine is mickey.sedal.su.oz.au (129.78.24.170); log in as anonymous for the binaries or to fetch the license file (license.ps).
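For those who prefer a scripted transfer, the anonymous fetch just described might look as follows (a sketch using Python's ftplib; the announcement does not say in which directory the files live, so the script simply lists the login directory rather than assuming a path):

    from ftplib import FTP

    # Sketch of the anonymous fetch described above.
    ftp = FTP("mickey.sedal.su.oz.au")      # 129.78.24.170
    ftp.login()                             # anonymous login
    print(ftp.nlst())                       # list files to locate license.ps
    with open("license.ps", "wb") as f:
        ftp.retrbinary("RETR license.ps", f.write)
    ftp.quit()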
The file MUME-README can be fetched and provides instructions/information. MUME can be compiled on Unix and DOS machines. Xmume, of course, only works on Unix boxes. From gary at cs.ucsd.edu Wed Oct 20 08:50:33 1993 From: gary at cs.ucsd.edu (Gary Cottrell) Date: Wed, 20 Oct 93 05:50:33 -0700 Subject: Virtual Festschrift for Jellybean Message-ID: <9310201250.AA16310@odin.ucsd.edu> ***********************REMINDER********************************* The Petschrift Papers are DUE NOVEMBER FIRST!!! "Computer! Funny bone ON!!!" Here is the original announcement: Dear Connectionists, On a sad day this spring, my longtime collaborator, friend, and inspiration for the field of Dognitive Science, Jellybean, died at the ripe old age of 16. His age (for a golden retriever/samoyed cross) at his death is a testament to modern veterinary medicine. Alas, we still must all go sometime. The purpose of this message is to invite the humorists among us to contribute a piece to a collection I am editing of humor in Jellybean's memory. As you may know, a "festschrift" is a volume of articles presented as a tribute or memorial to an academic. I have no plans to publish this except "virtually", through the auspices of the neuroprose archive. I already have several contributions that were privately solicited. This is a public solicitation for humor for this purpose. Your piece does not have to be in the "Dognitive Science" vein, but may be anything having to do with neural nets, Cognitive Science, or nearby fields. I reserve editorial right to accept, edit, and/or reject any material submitted that I deem either inappropriate, too long (I am expecting pieces to be on the order of 1-8 pages), or simply, not funny. Any editing will be with the agreement of the author. Latex files are probably best. Remember, brevity is the mother of wit. The deadline for submission will be Nov. 1, 1993. Email submissions only to gary at cs.ucsd.edu. Thanks for your attention. Gary Cottrell 619-534-6640 Reception: 619-534-6005 FAX: 619-534-7029 Computer Science and Engineering 0114 University of California San Diego La Jolla, Ca. 92093 gary at cs.ucsd.edu (INTERNET) gcottrell at ucsd.edu (BITNET, almost anything) ..!uunet!ucsd!gcottrell (UUCP) From lksaul at cmt6.mit.edu Thu Oct 21 11:13:48 1993 From: lksaul at cmt6.mit.edu (Lawrence K. Saul) Date: Thu, 21 Oct 93 11:13:48 -0400 Subject: Paper Announcement --- Learning in Boltzmann Trees Message-ID: <9310211513.AA10103@cmt6.mit.edu> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/saul.boltzmann.ps.Z The file saul.boltzmann.ps.Z is now available for copying in the Neuroprose repository: Learning in Boltzmann Trees (11 pages) Lawrence Saul and Michael Jordan Massachusetts Institute of Technology ABSTRACT: We introduce a family of hierarchical Boltzmann machines that can be trained using standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity. We show how to implement the learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. Stochastic averages are computed by the technique of decimation. We present results on the problems of N-bit parity and the detection of hidden symmetries.
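As a rough illustration of the decimation technique mentioned in the abstract (a generic reconstruction for a tree of +/-1 units with pairwise weights and biases, using the standard identity for summing out a leaf; this is not code from the paper, whose trained, layered networks are more elaborate): eliminating a leaf unit exactly replaces it by an effective bias on its parent plus a constant factor, so quantities like the partition function can be computed leaf by leaf without any annealing:

    import math

    def log_partition_tree(parent, weight, bias, order):
        # Exact log Z for a Boltzmann machine with +/-1 units on a tree.
        # parent[j], weight[j]: the edge from non-root unit j to its parent;
        # bias: dict of unit biases; order: non-root units, children first.
        b = dict(bias)
        log_z = 0.0
        for j in order:                  # j is a leaf of the remaining tree
            i, w, bj = parent[j], weight[j], b.pop(j)
            # sum over s_j = +/-1 of exp(w*s_i*s_j + bj*s_j)
            #   = 2*cosh(w*s_i + bj) = C * exp(b_eff * s_i)
            lp = math.log(math.cosh(w + bj))
            lm = math.log(math.cosh(w - bj))
            b[i] += 0.5 * (lp - lm)      # effective bias passed to the parent
            log_z += math.log(2.0) + 0.5 * (lp + lm)
        (_, b_root), = b.items()         # only the root remains
        return log_z + math.log(2.0 * math.cosh(b_root))

As a sanity check under these assumptions: for a single pair of units with coupling w and zero biases, this returns log(4 cosh w), which matches summing the four joint states directly.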
Lawrence Saul lksaul at cmt6.mit.edu  From fdoyle at ecn.purdue.edu Fri Oct 22 13:12:30 1993 From: fdoyle at ecn.purdue.edu (Frank Doyle) Date: Fri, 22 Oct 93 12:12:30 -0500 Subject: No subject Message-ID: <9310221712.AA18339@volterra.ecn.purdue.edu> Postdoctoral position available in : NEURO-MODELING in the Department of Chemical Engineering, Purdue University Position for 2 years (beginning Fall 1993; salary: $25,000 per year). Subject: Neuro-modeling of blood pressure control This project is part of an interdisciplinary program involving industrial and academic participants from DuPont, Purdue University, the University of Pennsylvannia, and Louisiana State University. The program encompasses the disciplines of chemical engineering, automatic control, and neuroscience. Active interactions with engineers and nonlinear control and modeling community at Purdue and DuPont as well as with the neuroscientists at DuPont and Penn will be necessary for the success of the project. A strong background in neuro-modeling is required. The facilities at Purdue include state-of-the art computational workstations (HP 735s and Sun 10/41s). The postdoctoral candidate will work on the development of models of the control mechanisms responsible for blood pressure regulation. The neural system under investigation is the cardiorespiratory control system, which integrates sensory information on respiratory and cardiovascular variables to regulate and coordinate cardiac, vascular and respiratory activity. In order to better understand this system our program does neurobiolgical research and computational modeling. In effect, these results reverse engineer neuronal and systems function, which can have implications for engineering application; and the engineering applications of our first interest are in chemical engineering. The overall effort involves neurobiologists, chemical engineers, computer scientists, bioengineers and neural systems modelers. The present position is meant to contribute to the neural systems modeling - chemical engineering interaction. The neural computational-modeling work is progressing at several levels: (1) systems-level modeling modeling of the closed-loop cardiorespiratory system, (2) cellular level modeling of nonlinear computation in Hodgkin-Huxley style neuron models, and (3) network modeling of networks built-up from HH-style neurons incorporating channel kinetics and synaptic conductances to capture the mechanisms in the baroreceptor vagal reflex. The macroscopic model will be used (in conjunction with experimental data from the literature and from the laboratory of Dr. Schwaber) in developing structures to represent the control functions. The synaptic level modeling activities will be used in developing the building blocks which achieve the control function. The present position will focus towards research goals, under the supervision of Dr. Frank Doyle, that include the identification of novel control and modeling techniques. Interested candidates should send their curriculum vitae to BOTH: Prof. Francis J. Doyle III School of Chemical Engineering Purdue University West Lafayette, IN 47907-1283 (317) 497-9228 E-mail: fdoyle at ecn.purdue.edu & Dr. James Schwaber Neural Computation Group E.I. DuPont deNemours & Co., Inc. P.O. 
Box 80352 Wilmington, DE 19880-0352 (302) 695-7136 E-mail: schwaber at eplrx7.es.duPont.com From RAMPO at SALERNO.INFN.IT Fri Oct 22 10:35:00 1993 From: RAMPO at SALERNO.INFN.IT (RAMPO@SALERNO.INFN.IT) Date: Fri, 22 OCT 93 14:35 GMT Subject: E.R.Caianiello Message-ID: <2163@SALERNO.INFN.IT> Prof. E. R. Caianiello suddenly died this morning at 8.00 in his home in Naples. From weigend at sabai.cs.colorado.edu Fri Oct 22 03:37:55 1993 From: weigend at sabai.cs.colorado.edu (weigend@sabai.cs.colorado.edu) Date: Fri, 22 Oct 93 01:37:55 MDT Subject: Santa Fe Time Series Competition book out Message-ID: <199310220737.AA24728@sabai.cs.colorado.edu> Announcing a book on the results of the Santa Fe Time Series Competition: ____________________________________________________________________ Title: TIME SERIES PREDICTION: Forecasting the Future and Understanding the Past. Editors: Andreas S. Weigend and Neil A. Gershenfeld Publisher: Addison-Wesley, September 1993. Paperback ISBN 0-201-62602-0 US$32.25 (672 pages) Hardcover ISBN 0-201-62601-2 US$49.50 (672 pages) The rest of this message gives some background, ordering information, and the table of contents. ____________________________________________________________________ Most observational disciplines, such as physics, biology, and finance, try to infer properties of an unfamiliar system from the analysis of a measured time record of its behavior. There are many mature techniques associated with traditional time series analysis. However, during the last decade, several new and innovative approaches have emerged (such as neural networks and time-delay embedding), promising insights not available with these standard methods (a toy sketch of delay-coordinate prediction appears below, after the ordering information). Unfortunately, the realization of this promise has been difficult. Adequate benchmarks have been lacking, and much of the literature has been fragmentary and anecdotal. This volume addresses these shortcomings by presenting the results of a careful comparison of different methods for time series prediction and characterization. This breadth and depth were achieved through the Santa Fe Time Series Prediction and Analysis Competition, which brought together an international group of time series experts from a wide variety of fields to analyze data from the following common data sets: - A physics laboratory experiment (NH3 laser) - Physiological data from a patient with sleep apnea - Tick-by-tick currency exchange rate data - A computer-generated series designed specifically for the Competition - Astrophysical data from a variable white dwarf star - J. S. Bach's last (unfinished) fugue from "Die Kunst der Fuge." In bringing together the results of this unique competition, this volume serves as a much-needed survey of the latest techniques in time series analysis. Andreas Weigend received his Ph.D. from Stanford University and was a postdoc at Xerox PARC. He is Assistant Professor in the Computer Science Department and at the Institute of Cognitive Science at the University of Colorado at Boulder. Neil Gershenfeld received his Ph.D. from Cornell University and was a Junior Fellow at Harvard University. He is Assistant Professor at the Media Lab at MIT. ____________________________________________________________________ Order it through your bookstore, or directly from the publisher by - calling the Addison-Wesley Order Department at 1-800-358-4566, - faxing 1-800-333-3328, - emailing , or - writing to Advanced Book Marketing Addison-Wesley Publishing One Jacob Way Reading, MA 01867, USA.
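Here is the toy sketch promised above (everything in it is invented for illustration and is unrelated to the book's actual analyses): delay-coordinate embedding turns a scalar series into vectors of lagged values, and prediction by analogues, one of the methods compared in the volume, returns the value that followed the most similar past state:

    # Illustrative sketch only: delay-coordinate embedding plus a
    # one-nearest-neighbor "analogue" prediction; parameters invented.
    def embed(series, d, tau=1):
        # Pair each delay vector with the value that followed it.
        pairs = []
        for t in range((d - 1) * tau, len(series) - 1):
            v = tuple(series[t - k * tau] for k in range(d))
            pairs.append((v, series[t + 1]))
        return pairs

    def predict_next(series, d=3, tau=1):
        pairs = embed(series, d, tau)
        t = len(series) - 1
        query = tuple(series[t - k * tau] for k in range(d))
        dist = lambda u, v: sum((a - c) ** 2 for a, c in zip(u, v))
        # Return the value that followed the most similar past state.
        _, nxt = min(pairs, key=lambda p: dist(p[0], query))
        return nxt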
VISA, Mastercard, and American Express and checks are accepted. When you prepay by check, Addison-Wesley pays shipping and handling charges. If payment does not accompany your order, shipping charges will be added to your invoice. Addison-Wesley is required to remit sales tax to the following states: AZ, AR, CA, CO, CT, FL, GA, IL, IN, LA, ME, MA, MI, MN, NY, NC, OH, PA, RI, SD, TN, TX, UT, VT, WA, WV, WI. _____________________________________________________________________ TABLE OF CONTENTS xv Preface Andreas S. Weigend and Neil A. Gershenfeld 1 The Future of Time Series: Learning and Understanding Neil A. Gershenfeld and Andreas S. Weigend Section I. DESCRIPTION OF THE DATA SETS__________________________________ 73 Lorenz-Like Chaos in NH3-FIR Lasers Udo Huebner, Carl-Otto Weiss, Neal Broadus Abraham, and Dingyuan Tang 105 Multi-Channel Physiological Data: Description and Analysis David R. Rigney, Ary L. Goldberger, Wendell C. Ocasio, Yuhei Ichimaru, George B. Moody, and Roger G. Mark 131 Foreign Currency Dealing: A Brief Introduction Jean Y. Lequarre 139 Whole Earth Telescope Observations of the White Dwarf Star (PG1159-035) J. Christopher Clemens 151 Baroque Forecasting: On Completing J.S. Bach's Last Fugue Matthew Dirst and Andreas S. Weigend Section II. TIME SERIES PREDICTION________________________________________ 175 Time Series Prediction by Using Delay Coordinate Embedding Tim Sauer 195 Time Series Prediction by Using a Connectionist Network with Internal Delay Lines Eric A. Wan 219 Simple Architectures on Fast Machines: Practical Issues in Nonlinear Time Series Prediction Xiru Zhang and Jim Hutchinson 243 Neural Net Architectures for Temporal Sequence Processing Michael C. Mozer 265 Forecasting Probability Densities by Using Hidden Markov Models with Mixed States Andrew M. Fraser and Alexis Dimitriadis 283 Time Series Prediction by Using the Method of Analogues Eric J. Kostelich and Daniel P. Lathrop 297 Modeling Time Series by Using Multivariate Adaptive Regression Splines (MARS) P.A.W. Lewis, B.K. Ray, and J.G. Stevens 319 Visual Fitting and Extrapolation George G. Lendaris and Andrew M. Fraser 323 Does a Meeting in Santa Fe Imply Chaos? Leonard A. Smith Section III. TIME SERIES ANALYSIS AND CHARACTERIZATION___________________ 347 Exploring the Continuum Between Deterministic and Stochastic Modeling Martin C. Casdagli and Andreas S. Weigend 367 Estimating Generalized Dimensions and Choosing Time Delays: A Fast Algorithm Fernando J. Pineda and John C. Sommerer 387 Identifying and Quantifying Chaos by Using Information-Theoretic Functionals Milan Palus 415 A Geometrical Statistic for Detecting Deterministic Dynamics Daniel T. Kaplan 429 Detecting Nonlinearity in Data with Long Coherence Times James Theiler, Paul S. Linsay, and David M. Rubin 457 Nonlinear Diagnostics and Simple Trading Rules for High-Frequency Foreign Exchange Rates Blake LeBaron 475 Noise Reduction by Local Reconstruction of the Dynamics Holger Kantz Section IV. PRACTICE AND PROMISE_________________________________________ 493 Large-Scale Linear Methods for Interpolation, Realization, and Reconstruction of Noisy, Irregularly Sampled Data William H. Press and George B. Rybicki 513 Complex Dynamics in Physiology and Medicine Leon Glass and Daniel T. Kaplan 529 Forecasting in Economics Clive W.J. Granger 539 Finite-Dimensional Spatial Disorder: Description and Analysis V.S. Afraimovich, M.I. Rabinovich, and A.L. Zheleznyak 557 Spatio-Temporal Patterns: Observations and Analysis Harry L. 
Swinney 569 Appendix: Accessing the Server 571 Bibliography (800 references) 631 Index From weigend at sabai.cs.colorado.edu Fri Oct 22 03:33:09 1993 From: weigend at sabai.cs.colorado.edu (weigend@sabai.cs.colorado.edu) Date: Fri, 22 Oct 93 01:33:09 MDT Subject: Music and Audition at NIPS (1st day at Vail) Message-ID: <199310220733.AA24718@sabai.cs.colorado.edu> Call for abstracts and announcement: NIPS workshop: Connectionism for Music and Audition Date: December 3 (this is the first of the two Vail days) Organizers: Dick Duda Andreas Weigend San Jose State University University of Colorado at Boulder duda at sri.com weigend at cs.colorado.edu If you are interested in presenting at this NIPS workshop in Vail, please send an abstract to both organizers **before November 1st**. We will review the abstracts and then decide on the schedule. We also invite suggestions for further topics of discussion. ____________________________________________________________________________ CONTENTS: While speech and language dominate our auditory experience, the human auditory system is constantly processing a much broader world of sound events. Some of the most fundamental questions concerning music and general sound perception are still largely unanswered; the range extends from the separation and organization of sound streams to the problem of a hierarchy of time scales. In this workshop, we want to explore the development and application of connectionist methods to music and audition. At this NIPS workshop, we plan to address both topics of music and audition. Topic 1: Music In recent years, NIPS has seen (and heard) neural networks generate tunes and harmonize chorales. With a large amount of music becoming available in computer-readable form, real data can be used to build connectionist models. The time has come to think about questions, tasks, and goals in areas ranging from connectionist modeling of expectations to automated music analysis and composition. One feature of music that makes it interesting to model is the hierarchy of time scales involved. Which connectionist architectures are suited for several orders of magnitude in time? How can temporal information important for music be integrated efficiently? Particular attention will be paid to the advantages of different types of recurrent networks, and to architectures that try to incorporate invariances of the problem domain (e.g., TDNN). Topic 2: Audition While the human auditory system is exquisitely sensitive to speech and music, it is based on a mammalian auditory system that is equally adept at solving more fundamental tasks. These tasks include (a) separating multiple sound sources, (b) localizing the sources in space and time, (c) characterizing the sources in terms of identifying qualities such as pitch and timbre, (d) suppressing the effects of early reflections and room reverberation, and (e) characterizing the acoustic environment. To date, most of the work on auditory scene analysis has focussed on the auditory periphery, as represented by cochlear models and networks for detecting onsets, harmonicity, modulation, and interaural time and intensity differences. The major unanswered questions concern the nature of structures and processes that can integrate this information into stable and valid percepts. How should different sound objects be represented in a connectionist architecture? What will stabilize these representations as the sources and the listener move? How will these representations support other tasks?
____________________________________________________________________________

General NIPS information and registration: An electronic copy of the 1993 NIPS registration brochure is available in postscript format via anonymous ftp at helper.systems.caltech.edu in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure, you can write to nips93 at systems.caltech.edu or NIPS Foundation, P.O. Box 60035, Pasadena, CA 91116-6035.
____________________________________________________________________________

From harnad at Princeton.EDU Sat Oct 23 20:50:29 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Sat, 23 Oct 93 20:50:29 EDT Subject: Learning - Implicit vs. Explicit: BBS Call for Commentators Message-ID: <9310240050.AA29897@clarity.Princeton.EDU>

Below is the abstract of a forthcoming target article by D.R. SHANKS and M.F. ST. JOHN on IMPLICIT VS. EXPLICIT LEARNING that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract.
____________________________________________________________________

CHARACTERISTICS OF DISSOCIABLE HUMAN LEARNING SYSTEMS

David R. Shanks, Department of Psychology, University College London, London WC1E 6BT, England, david.shanks at psychol.ucl.ac.uk
Mark F. St. John, Department of Cognitive Science, University of California at San Diego, La Jolla, CA 92093, mstjohn at cogsci.ucsd.edu

KEYWORDS: learning; memory; consciousness; explicit/implicit processes; rules; instances; unconscious processes

ABSTRACT: The proposal that there exist independent explicit and implicit learning systems is based on two further distinctions: (i) learning that takes place with versus without concurrent awareness, and (ii) learning that involves the encoding of instances (or fragments) versus the induction of abstract rules or hypotheses. Implicit learning is assumed to involve unconscious rule learning. We examine the implicit learning evidence from subliminal learning, conditioning, artificial grammar learning, instrumental learning, and reaction times in sequence learning. Unconscious learning has not been satisfactorily established in any of these areas.
The assumption that learning in some of these tasks (e.g., artificial grammar learning) is predominantly based on rule abstraction is questionable. When subjects cannot report the "implicitly learned" rules that govern stimulus selection, this is often because their knowledge consists of instances or fragments of the training stimuli rather than rules. In contrast to the distinction between conscious and unconscious learning, the distinction between instance and rule learning is a sound and meaningful way of taxonomizing human learning. We discuss various computational models of these two forms of learning.
--------------------------------------------------------------
To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.shanks). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.
-------------------------------------------------------------
To retrieve a file by ftp from an Internet site, type either: ftp princeton.edu or ftp 128.112.128.1. When you are asked for your login, type: anonymous. Enter the password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS. To show the available files, type: ls. Next, retrieve the file you want with (for example): get bbs.shanks. When you have the file(s) you want, type: quit. In case of doubt or difficulty, consult your system manager.
----------
Where the above procedure is not available, there are two fileservers, ftpmail at decwrl.dec.com and bitftp at pucc.bitnet, that will do the transfer for you. To one or the other of them, send the one-line message: help for instructions (which will be similar to the above, but will take the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). JANET users without ftp can instead utilise the file transfer facilities at sites uk.ac.ft-relay or uk.ac.nsf.sun. Full details are available on request.
-------------------------------------------------------------

From piero at dist.dist.unige.it Sun Oct 24 16:43:32 1993 From: piero at dist.dist.unige.it (Piero Morasso) Date: Sun, 24 Oct 93 16:43:32 MET Subject: farewell to E.R. Caianiello Message-ID: <9310241543.AA21642@dist.dist.unige.it>

----------------------------------------------------------------
FAREWELL TO A PIONEER: Eduardo R. Caianiello
----------------------------------------------------------------
A physicist by training, Eduardo R. Caianiello was one of the brave few who dared to start the field of neural networks more than 30 years ago. In 1991 he was one of the founders of the European Neural Network Society, and he was chairing the organization of the 1994 conference ICANN'94 in Sorrento when he died, suddenly and peacefully, on October 22, 1993, at his home in Naples. Let us remember and perhaps re-read one of his seminal papers: E.R. Caianiello, Outline of a theory of thought process and thinking machines. J. Theor. Biol. 2, 204-235, 1961.
----------------------------------------------------------------

From large at cis.ohio-state.edu Mon Oct 25 09:43:53 1993 From: large at cis.ohio-state.edu (E. Large)
Date: Mon, 25 Oct 93 09:43:53 -0400 Subject: Music and Audition at NIPS (1st day at Vail) In-Reply-To: weigend@sabai.cs.colorado.edu's message of Fri, 22 Oct 93 01:33:09 MDT <199310220733.AA24718@sabai.cs.colorado.edu> Message-ID: <9310251343.AA03835@cerebellum.cis.ohio-state.edu>

Resonance and the Perception of Musical Meter
Edward W. Large and John F. Kolen

The perception of musical rhythm is traditionally described as involving, among other things, the assignment of metrical structure to rhythmic patterns. In our view, the perception of metrical structure is best described as a dynamic process in which the temporal organization of musical events entrains the listener in much the same way that two pendulum clocks hanging on the same wall synchronize their motions so that they tick in lock step. In this talk, we re-assess the notion of musical meter and show how the perception of this sort of temporal organization can be modeled as a system of non-linearly coupled oscillators responding to musical rhythms. Individual oscillators phase- and frequency-lock to components of rhythmic patterns, embodying the notion of musical pulse, or beat. The collective behavior of a system of oscillators represents a self-organized response to rhythmic patterns, embodying a "perception" of metrical structure. When exposed to performed musical rhythms, the system shows the ability to simultaneously perform quantization (categorization of temporal intervals) and assignment of metrical structure in real time. We discuss implications for psychological theories of temporal expectancy, "categorical" perception of temporal intervals, and the perception of metrical structure.

From mpp at cns.brown.edu Mon Oct 25 10:23:54 1993 From: mpp at cns.brown.edu (Michael P. Perrone) Date: Mon, 25 Oct 1993 10:23:54 -0400 (EDT) Subject: NIPS*93 Hybrid Systems Workshop Message-ID: <9310251423.AA14415@cns.brown.edu>

A non-text attachment was scrubbed... Name: not available Type: text Size: 4146 bytes Desc: not available Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/ff410a1b/attachment.ksh

From french at willamette.edu Mon Oct 25 12:07:29 1993 From: french at willamette.edu (Bob French) Date: Mon, 25 Oct 93 09:07:29 PDT Subject: NIPS-93 Workshop on catastrophic interference Message-ID: <9310251607.AA14952@willamette.edu>

NIPS-93 Workshop:
================
CATASTROPHIC INTERFERENCE IN CONNECTIONIST NETWORKS: CAN IT BE PREDICTED, CAN IT BE PREVENTED?

Date: Saturday, December 4, 1993, at Vail, Colorado

Intended audience: Connectionists, cognitive scientists and applications-oriented users of connectionist networks interested in a better understanding of: i) when and why their networks can suddenly and completely forget previously learned information; ii) how it is possible to reduce or even eliminate this phenomenon.

Organizer: Bob French, Computer Science Department, Willamette University, Salem OR, french at willamette.edu

Program:
========
When connectionist networks learn new information, they can suddenly and completely forget everything they had previously learned. This problem is called catastrophic forgetting or catastrophic interference. Given the demonstrated severity of the problem, it is intriguing to note that it has to date received very little attention.
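The phenomenon is easy to reproduce. The following minimal demonstration is illustrative only (plain Python with NumPy; the network, data, and all names are invented for the example and are not from any speaker): a small backpropagation network is trained on one set of patterns, then trained further on a second set alone, and its accuracy on the first set typically collapses.

    import numpy as np
    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(W1, W2, X, y, epochs=2000, lr=0.5):
        # plain batch backpropagation with squared error
        for _ in range(epochs):
            h = sigmoid(X @ W1)                      # hidden layer
            out = sigmoid(h @ W2)                    # output layer
            delta = (out - y) * out * (1 - out)      # output error signal
            dW2 = h.T @ delta
            dW1 = X.T @ ((delta @ W2.T) * h * (1 - h))
            W1 -= lr * dW1
            W2 -= lr * dW2

    def accuracy(W1, W2, X, y):
        return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) > 0.5) == y))

    X = np.array([[i >> b & 1 for b in range(4)] for i in range(16)], float)
    y = rng.integers(0, 2, size=(16, 1)).astype(float)  # random target function
    A, B = slice(0, 8), slice(8, 16)                    # "task A", then "task B"

    W1 = rng.normal(0, 1, (4, 8))
    W2 = rng.normal(0, 1, (8, 1))
    train(W1, W2, X[A], y[A])                    # learn task A
    print("task A after A:", accuracy(W1, W2, X[A], y[A]))  # usually high
    train(W1, W2, X[B], y[B])                    # sequential training on B only
    print("task A after B:", accuracy(W1, W2, X[A], y[A]))  # often near chance

Relearning speed (the savings method mentioned in Hetherington's abstract below) and interleaved training are two of the probes the speakers discuss.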
When new information must be added to an already-trained connectionist network, it is currently taken for granted that the network will simply cycle through all of the old data again. Since relearning all of the old data is both psychologically implausible and impractical for very large data sets, is it possible to do otherwise? Can connectionist networks be developed that do not forget catastrophically -- or perhaps that do not forget at all -- in the presence of new information? Or is catastrophic forgetting perhaps the inevitable price for using fully distributed representations? Under what circumstances will a network forget or not forget? Further, can the amount of forgetting be predicted with any reliability? These questions are of particular interest to anyone who intends to use connectionist networks as a memory/generalization device.

This workshop will focus on:
- the theoretical reasons for catastrophic interference;
- the techniques that have been developed to eliminate it or to reduce its severity;
- the side-effects of catastrophic interference;
- the degree to which a priori prediction of catastrophic forgetting is or is not possible.

As connectionist networks become more and more a part of applications packages, the problem of catastrophic interference will have to be addressed. This workshop will bring the audience up to date on current research on catastrophic interference.

Speakers:
========
Stephan Lewandowsky (lewan at constellation.ecn.uoknor.edu), Department of Psychology, University of Oklahoma
Phil A. Hetherington (het at blaise.psych.mcgill.ca), Department of Psychology, McGill University
Noel Sharkey (noel at dcs.exeter.ac.uk), Connection Science Laboratory, Dept. of Computer Science, University of Exeter, U.K.
Bob French (french at willamette.edu), Computer Science Department, Willamette University

Morning session:
---------------
7:30 - 7:45  Bob French: An Introduction to the Problem of Catastrophic Interference in Connectionist Networks
7:45 - 8:15  Stephan Lewandowsky: Catastrophic Interference: Causes, Solutions, and Side-Effects
8:15 - 8:30  Brief discussion
8:30 - 9:00  Phil Hetherington: Sequential Learning in Connectionist Networks: A Problem for Whom?
9:00 - 9:30  General discussion

Afternoon session
-----------------
4:30 - 5:00  Noel Sharkey: Catastrophic Interference and Discrimination
5:00 - 5:15  Brief discussion
5:15 - 5:45  Bob French: Prototype Biasing and the Problem of Prediction
5:45 - 6:30  General discussion and closing remarks

Below are the abstracts for the talks to be presented in this workshop:

CATASTROPHIC INTERFERENCE: CAUSES, SOLUTIONS, AND SIDE-EFFECTS
Stephan Lewandowsky, Department of Psychology, University of Oklahoma

I briefly review the causes of catastrophic interference in connectionist models and summarize some existing solutions. I then focus on possible trade-offs between resolutions to catastrophic interference and other desirable network properties. For example, it has been suggested that reduced interference might impair generalization or prototype formation. I suggest that these trade-offs occur only if interference is reduced by altering the response surfaces of hidden units.
--------------------------------------------------------------------------
SEQUENTIAL LEARNING IN CONNECTIONIST NETWORKS: A PROBLEM FOR WHOM?
Phil A. Hetherington,
Department of Psychology, McGill University

Training networks in a strictly blocked, sequential manner normally results in poor performance because new items overlap with old items at the hidden unit layer. However, catastrophic interference is not a necessary consequence of using distributed representations. First, examination by the method of savings demonstrates that much of the early information is still retained: items thought lost can be relearned within a couple of trials. Second, when items are learned in a windowed, or overlapped, fashion, less interference obtains. And third, when items are presented in a strictly blocked, sequential manner to a network that already possesses a relevant knowledge base, interference may not occur at all. Thus, when modeling normal human learning there is no catastrophic interference problem. Nor is there a problem when modeling strictly sequential human memory experiments with a network that has a relevant knowledge base. There is only a problem when simple, unstructured, tabula rasa networks are expected to model the intricacies of human memory.
--------------------------------------------------------------------------
CATASTROPHIC INTERFERENCE AND DISCRIMINATION
Noel Sharkey, Connection Science Laboratory, Dept. of Computer Science, University of Exeter, Exeter, U.K.

Connectionist learning techniques, such as backpropagation, have been used increasingly for modelling psychological phenomena. However, a number of recent simulation studies have shown that when a connectionist net is trained, using backpropagation, to memorize sets of items in sequence and without negative exemplars, newly learned information seriously interferes with old. Three converging methods were employed to show why and under what circumstances such retroactive interference arises. First, a geometrical analysis technique, derived from perceptron research, was introduced and employed to determine the computational and representational properties of feedforward nets with one and two layers of weights. This analysis showed that the elimination of interference always resulted in a breakdown of old-new discrimination. Second, a formally guaranteed solution to the problems of interference and discrimination was presented as the HARM model and used to assess the relative merits of other proposed solutions. Third, two simulation studies were reported that assessed the effects of providing nets with experience of the experimental task. Prior knowledge of the encoding task was provided to the nets either by block training them in advance or by allowing them to extract the knowledge through sequential training. The overall conclusion was that the interference and discrimination problems are closely related: sequentially trained nets employing the backpropagation learning algorithm will unavoidably suffer from either one or the other.
--------------------------------------------------------------------------
PROTOTYPE BIASING IN CONNECTIONIST NETWORKS
Bob French, Computer Science Dept., Willamette University

Previously learned representations bias new representations. If subjects are told that a newly encountered object X belongs to an already familiar category P, they will tend to emphasize in their representation of X features of the prototype they have for the category P. This is the basis of prototype biasing, a technique that appears to significantly reduce the effects of catastrophic forgetting.
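Read as an update rule, the core idea can be sketched in a few lines. The sketch is illustrative only (plain Python with NumPy; all names and data are invented here, and French's actual implementation, summarized next, learns the prototypes with a second backpropagation network rather than by direct averaging): a class prototype is an average of stored hidden-layer representations, and each new item's representation is pulled part of the way toward its class prototype.

    import numpy as np
    rng = np.random.default_rng(1)

    # Stored hidden-layer activations for items of each class (toy data).
    hidden_reps = {"democrat": rng.random((30, 8)),
                   "republican": rng.random((25, 8))}

    # A class prototype: the mean of that class's hidden representations.
    prototypes = {c: h.mean(axis=0) for c, h in hidden_reps.items()}

    def bias_toward_prototype(h_new, label, lam=0.3):
        """Pull a new item's hidden representation part of the way
        toward its class prototype; lam sets the bias strength."""
        return (1 - lam) * h_new + lam * prototypes[label]

    h_new = rng.random(8)                 # representation of a new Democrat
    print(bias_toward_prototype(h_new, "democrat"))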
The 1984 Congressional Voting Records database is used to illustrate prototype biasing. This database contains the yes-no voting records of Republican and Democratic members of Congress in 1984 on 16 separate issues. This database lends itself conveniently to the use of a network having 16 "yes-no" input units, a hidden layer and one "Republican/Democrat" output node. A "Republican" prototype and a "Democrat" prototype are built, essentially by separately averaging over Republican and Democrat hidden-layer representations. These prototypes then "bias" subsequent representations of new Democrats towards the Democrat prototype and of new Republicans towards the Republican prototype. Prototypes are learned by a second, separate backpropagation network that associates teacher patterns with their respective prototypes. Thus, ideally, when the "Republican" teacher pattern is fed into it, it produces the "Republican" prototype on output. The output from this network is continually fed back to the hidden layer of the primary network and is used to bias new representations. Also discussed in this paper are the problems involved in predicting the severity of catastrophic forgetting.

From tgd at chert.CS.ORST.EDU Mon Oct 25 18:06:15 1993 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Mon, 25 Oct 93 15:06:15 PDT Subject: Articles of interest in Machine Learning Message-ID: <9310252206.AA28485@curie.CS.ORST.EDU>

The most recent issue of Machine Learning (Volume 13, Number 1) contains articles of interest to readers of this list. Here is a partial table of contents:

Cost-Sensitive learning of classification knowledge and its applications in robotics -- Ming Tan
Extracting refined rules from knowledge-based neural networks -- Geoffrey Towell and Jude Shavlik
Prioritized sweeping: reinforcement learning with less data and less time -- Andrew Moore and Chris Atkeson
Technical Note: Selecting a Classification Method by Cross-validation -- Cullen Schaffer

For subscription information, contact Kluwer at world.std.com. Tom Dietterich, Executive Editor, Machine Learning

From terry at helmholtz.sdsc.edu Tue Oct 26 00:15:43 1993 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Mon, 25 Oct 93 21:15:43 PDT Subject: NIPS Workshop on Spatial Perception Message-ID: <9310260415.AA22226@helmholtz.sdsc.edu>

NIPS*93 WORKSHOP ANNOUNCEMENT

Title: Processing of visual and auditory space and its modification by experience.

Intended Audience: Researchers interested in spatial perception, sensory fusion and learning.

Organizers: Josef P. Rauschecker (josef at helix.nih.gov) and Terrence Sejnowski (terry at helmholtz.sdsc.edu)

Program: This workshop will address the questions of how spatial information is represented in the brain, how it is matched and compared by the visual and auditory systems, and how early sensory experience influences the development of these space representations. We will discuss neurophysiological and computational data from cats, monkeys, and owls that suggest how the convergence of different sensory space representations may be handled by the brain. In particular, we will look at the role of early experience and learning in establishing these representations. Lack of visual experience affects space processing in cats and owls differently. We will therefore discuss various kinds of plasticity in different spatial representations. Half the available time has been reserved for discussion and informal presentations. We will encourage lively audience participation.
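As a toy illustration of one workshop theme -- registration of visual and auditory space maps through coactivation-driven (Hebbian) learning -- consider the following sketch (plain Python with NumPy; the model and every name in it are invented for the example and belong to none of the speakers): when the same object is repeatedly both seen and heard, plain Hebbian updates between two one-dimensional place-coded maps concentrate the weights on the diagonal, i.e., the maps become registered.

    import numpy as np
    rng = np.random.default_rng(3)

    n = 20                                  # units per 1-D spatial map
    W = np.zeros((n, n))                    # auditory -> visual weights

    def place_code(pos, n, width=1.5):
        """Gaussian bump of activity centered on a position."""
        u = np.arange(n)
        return np.exp(-0.5 * ((u - pos) / width) ** 2)

    for _ in range(2000):
        pos = rng.uniform(0, n - 1)         # one object, seen and heard
        v = place_code(pos, n)              # visual map activity
        a = place_code(pos, n)              # auditory map activity
        W += 0.01 * np.outer(v, a)          # Hebb: coactivation strengthens

    print(np.argmax(W, axis=1)[:10])        # approx. [0 1 2 ...]: registered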
Morning Session (7:30 - 8:30) Presentations:
  Predictive Hebbian learning and sensory fusion (Terry Sejnowski)
  A connectionist model of the owl's sound localization system (Dan Rosen)
  Intermodal compensatory plasticity of sensory systems (Josef Rauschecker)
8:30 - 9:30 Discussion

Afternoon Session (4:30 - 5:30) Presentations:
  Neurophysiological processing of visual and auditory space in monkeys (Richard Andersen)
  Learning map registration in the superior colliculus with predictive Hebbian learning (Alex Pouget)
  A neural network model for the detection of heading direction from optic flow in the cat's visual system (Markus Lappe)
5:30 - 6:30 Discussion
=====================================================================
General NIPS information and registration: An electronic copy of the 1993 NIPS registration brochure is available in postscript format via anonymous ftp at helper.systems.caltech.edu in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure or other information, please send a request to nips93 at systems.caltech.edu or to: NIPS Foundation, P.O. Box 60035, Pasadena, CA 91116-6035
=====================================================================

From cohn at psyche.mit.edu Tue Oct 26 13:53:43 1993 From: cohn at psyche.mit.edu (David Cohn) Date: Tue, 26 Oct 93 13:53:43 EDT Subject: Post-NIPS Workshop on Robot Learning Message-ID: <9310261753.AA18987@psyche.mit.edu>

The following workshop will be held on Friday, December 3rd in Vail, CO as one of the Post-NIPS workshops. To be added to a mailing list for further information about the workshop, send electronic mail to "robot-learning-request at psyche.mit.edu".
---------------------------------------------------------------------------
NIPS*93 Workshop: Robot Learning II: Exploration and Continuous Domains

Intended Audience: Researchers interested in robot learning, exploration, and active learning systems in general

Organizer: David Cohn (cohn at psyche.mit.edu), Dept. of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139

Overview:
=========
The goal of this workshop will be to provide a forum for researchers active in the area of robot learning and related fields. Due to the limited time available, we will focus on two major issues: efficient exploration of a learner's state space, and learning in continuous domains.

Robot learning is characterized by sensor noise, control error, dynamically changing environments and the opportunity for learning by experimentation. A number of approaches, such as Q-learning, have shown great practical utility for learning under these difficult conditions. However, these approaches have only been proven to converge to a solution if all states of a system are visited infinitely often (a short sketch of the update in question appears below). What has yet to be determined is whether we can efficiently explore a state space so that we can learn without having to visit every state an infinite number of times, and how we are to address problems in continuous domains, where there are effectively an infinite number of states to be visited.

This workshop is intended to serve as a follow-up to last year's post-NIPS workshop on machine learning. The two problems to be addressed this year were identified as two (of the many) crucial issues facing robot learning. The morning session of the workshop will consist of short presentations discussing theoretical approaches to exploration and to learning in continuous domains, followed by general discussion guided by a moderator. The afternoon session will center on practical and/or heuristic approaches to these problems in the same format.
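For readers unfamiliar with the update mentioned in the overview, here is the tabular Q-learning rule in a minimal sketch (plain Python; the toy chain world and all names are invented for the example). The epsilon-greedy exploration is what, in the limit, visits every state infinitely often and so meets the classical convergence condition:

    import random
    random.seed(0)

    n_states, actions = 5, (-1, +1)         # move left / move right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, eps = 0.1, 0.9, 0.2

    def step(s, a):
        """Deterministic chain: reward 1 only at the right end."""
        s2 = min(max(s + a, 0), n_states - 1)
        return s2, (1.0 if s2 == n_states - 1 else 0.0)

    s = 0
    for _ in range(5000):
        if random.random() < eps:                     # explore
            a = random.choice(actions)
        else:                                         # exploit
            a = max(actions, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = 0 if s2 == n_states - 1 else s2           # restart at the goal

    print({st: max(Q[(st, act)] for act in actions) for st in range(n_states)})

On a continuous state space the table above no longer exists, which is exactly the second issue the workshop takes up.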
As time permits, we may also attempt to create an updated "Where do we go from here?" list, like that drawn up in last year's workshop. Video demos will be encouraged. If feasible, we will attempt to have a VCR set up after the workshop to allow for informal demos. Preparatory readings from the presenters will be ready by early November. To be placed on a list to receive continuing information about the workshop (such as where and when the readings appear on-line), send email to "robot-learning-request at psyche.mit.edu".

Tentative Program:
==================
December 3, 1993

Morning Session: Theoretical Approaches
---------------------------------------
7:30-8:30  Synopses of different approaches (20 min each):
           Andrew Moore, CMU: "The Parti-game approach to exploration"
           Leemon Baird, USAF: "Reinforcement learning in continuous domains"
           Juergen Schmidhuber, TUM: "Reinforcement-directed information acquisition in Markov Environments"
8:30-9:30  Open discussion

Afternoon Session: Heuristic Approaches
---------------------------------------
4:30-5:50  Synopses of different approaches (20 min each):
           Long-Ji Lin, Siemens: "RatBot: A mail-delivery robot"
           Stephan Schaal, MIT: "Efficiently exploring high-dimensional spaces"
           Terry Sanger, MIT/JPL: "Trajectory extension learning"
           Jeff Schneider, Rochester: "Learning robot skills in high-dimensional action spaces"
5:50-6:30  Open discussion

From cowan at synapse.uchicago.edu Tue Oct 26 13:06:56 1993 From: cowan at synapse.uchicago.edu (Jack Cowan) Date: Tue, 26 Oct 93 10:06:56 -0700 Subject: Caianiello Message-ID: <9310261706.AA01028@synapse>

I was very sorry to hear of the death of Eduardo Caianiello. Eduardo was one of the early workers in neural networks. I met him first in 1959 at MIT when he visited the Communications Biophysics Group, of which I was then a graduate student member. It was Caianiello who first tried to study discrete-state, continuous-time models of spiking neurons in networks with modifiable couplings. His 1961 paper in the Journal of Theoretical Biology was an important early contribution to the field. Of course Caianiello did a lot more than neural networks: he was a top-notch theoretical physicist who made a number of important contributions to quantum field theory. When I first visited him in 1964 he was running the Institute of Theoretical Physics in Naples. He later set up the Laboratory for Cybernetics, and ran numerous very stimulating summer schools in Southern Italy on Physics, Cybernetics and Automata Theory, and on Neural Networks. He therefore was very influential in keeping alive work on Neural Networks, not just in Italy, but in Europe, North America, and also in Japan, where his work was very well received. His passing breaks yet another link with the early generation of workers in Neural Networks, and he will be missed, but not forgotten. Jack Cowan

From P.Refenes at cs.ucl.ac.uk Tue Oct 26 17:14:23 1993 From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk) Date: Tue, 26 Oct 93 21:14:23 +0000 Subject: PostDoctoral Fellowship. Message-ID:

Postdoctoral Fellowship: CALL FOR APPLICATIONS for a postdoctoral research fellowship on NONLINEAR MODELLING IN FINANCIAL ENGINEERING at London Business School, Department of Decision Science. Position: for up to 2 years (beginning Fall 1994; stipend: $50,000 pa).
London Business School has been selected as one of the European Host Institutes for the CEC Human Capital and Mobility Programme and has been awarded a number of postdoctoral fellowships. The NeuroForecasting Unit at the faculty of Decision Sciences has a strong involvement in the application of neural networks to financial engineering, including asset pricing, tactical asset allocation, equity investment, forex, etc., and would like to put forward a candidate with a research proposal in neural network analysis, including parameter significance estimation in multi-variate datasets, sensitivity analysis, and/or non-linear dimensionality reduction in the context of factor models for equity investment. Candidates must hold a PhD in non-linear modelling or related areas and have a proven research record. Normal HCM rules apply, i.e., only CEC nationals (excluding UK residents) are eligible; CEC nationals who have been working overseas for the past two years also qualify. Interested candidates should send their curriculum vitae and a summary of their research interests to: Dr A. N. Refenes, NeuroForecasting Unit, Department of Decision Science, London Business School, Sussex Place, Regents Park, London NW1 4SA, UK. Tel: ++44 (71) 262 50 50, Fax: ++44 (71) 724 78 75.

From VAINA at buenga.bu.edu Tue Oct 26 21:21:14 1993 From: VAINA at buenga.bu.edu (Lucia M. Vaina) Date: Tue, 26 Oct 1993 21:21:14 -0400 (EDT) Subject: No subject Message-ID:

From smagt at fwi.uva.nl Wed Oct 27 09:01:52 1993 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Wed, 27 Oct 1993 14:01:52 +0100 Subject: CFP: Neural Systems for Robotics Message-ID: <199310271301.AA03104@carol.fwi.uva.nl>

PROGRESS IN NEURAL NETWORKS, series editor O. M. Omidvar

CALL FOR PAPERS -- Special Volume: NEURAL SYSTEMS FOR ROBOTICS, Editor: P. Patrick van der Smagt

This series will review state-of-the-art research in neural networks, natural and synthetic. Contributions from leading researchers and practitioners will be sought. This series will help shape and define academic and professional programs in this area. This series is intended for a wide audience: those professionally involved in neural network research, such as lecturers and primary investigators in neural computing, neural modeling, neural learning, neural memory, and neurocomputers. The upcoming volume, Neural Systems for Robotics, will focus on research in natural and artificial neural systems directly related to robotics and robot control. Authors are invited to submit original manuscripts describing recent progress in neural network research directly applicable to robotics. Manuscripts may be survey or tutorial in nature. Suggested topics include, but are not limited to:

* Neural control systems for visually guided robots
* Manipulator trajectory control
* Obstacle avoidance
* Sensor feedback systems
* Biologically inspired robot systems
* Identification of kinematics and dynamics

The papers will be refereed and uniformly typeset. Ablex and the Progress Series editors invite you to submit an abstract, extended summary or manuscript proposal directly to the Special Volume Editor: P. Patrick van der Smagt, Dept. of Computer Systems, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, THE NETHERLANDS. Tel: +31 20 525-7524, FAX: +31 20 525-7490, Email: smagt at fwi.uva.nl; or to the Series Editor: Dr. Omid M. Omidvar, Computer Science Dept., University of the District of Columbia, Washington DC 20008. Tel: (202)282-7345, FAX: (202)282-3677, Email: OOMIDVAR at UDCVAX.BITNET.
The Publisher is Ablex Publishing Corporation, Norwood, NJ. Other volumes:
  Neural Networks for Control, ed. by D. L. Elliott
  Neural Networks Hardware Implementations, ed. by M. E. Zaghiloul
  Motion Detection & Temporal Pattern Recognition, ed. by J. Dayoff
  Biological Neural Networks, ed. by D. Tam
  Mathematical Foundations, ed. by M. Garzon

From GARZONM at hermes.msci.memst.edu Wed Oct 27 16:29:32 1993 From: GARZONM at hermes.msci.memst.edu (GARZONM@hermes.msci.memst.edu) Date: 27 Oct 93 15:29:32 CDT Subject: NIPS*93 Workshop on Stability and Observability/program Message-ID:

A day at NIPS*93 on STABILITY AND OBSERVABILITY
3 December 1993 at Vail, Colorado

Intended Audience: neuroscientists, computer and cognitive scientists, neurobiologists, mathematicians/dynamical systems, electrical engineers, and anyone interested in questions such as:

* what effects can noise, bounded precision and uncertainty in inputs, weights and/or transfer functions have on the i/o behavior of a neural network?
* what is missed and what is observable in computer simulations of the networks they purport to simulate?
* how much architecture can be observed in the behavior of a network-in-a-box?
* what can be done to improve and/or accelerate convergence to stable equilibria during learning and network updates while preserving the intended dynamics of the process?

Organizers: Fernanda Botelho (botelhof at hermes.msci.memst.edu) and Max Garzon (garzonm at hermes.msci.memst.edu), Mathematical Sciences / Institute for Intelligent Systems, Memphis State University, Memphis, TN 38152 U.S.A. [botelhof,garzonm]@hermes.msci.memst.edu

Program:
=======
Following is a (virtually) final schedule. Each talk is scheduled for 15 minutes, with 5 minutes in between for questions and comments. One or two contributed talks might still be added to the schedule (and will cut into the panel discussion in the afternoon).

Morning Session:
---------------
7:30-7:50  M. Garzon, Memphis State University, Tennessee: Introduction and Overview
7:50-8:10  S. Kak, Louisiana State University, Baton Rouge: Stability and Observability in Feedback Networks
8:10-8:30  S. Piche, Microelectronics Technology Co., Austin, Texas: Sensitivity of Neural Networks to Errors
8:30-8:50  R. Rojas, Int. Computer Science Institute UCB and Freie Universit\"at Berlin: Stability of Learning in Neural Networks
8:50-9:10  G. Chauvet and P. Chauvet, Institut de Biologie Th\'eorique, U. d'Angers, France: Stability of Purkinje Units in the Cerebellar Cortex
9:10-9:30  N. Peterfreund and Y. Baram, Technion, Israel: Trajectory Control of Convergent Networks

Afternoon Session:
------------------
4:30-4:50  X. Wang, U. of Southern California and UCLA: Consistencies of Stability and Bifurcation
4:50-5:10  M. Casey, UCSD, San Diego, California: Computation Dynamics in Discrete-time Recurrent Nets
5:10-5:30  M. Cohen, Boston University, Massachusetts: Synthesis of Decision Regions in Dynamical Systems
5:30-5:50  F. Botelho, Memphis State University, Tennessee: Observability of Discrete and Analog Networks
5:50-6:10  U. Levin and K. Narendra, OGI/CSE Portland/Oregon and Yale University:
           Recursive Identification Using Feedforward Nets
6:10-6:30  Panel Discussion
7:00       All-workshop wrap-up

Max Garzon, Math Sciences, Memphis State University, Memphis, TN 38152 USA. garzonm at hermes.msci.memst.edu (preferred) or garzonm at memstvx1.memst.edu. Phone: (901) 678-3138/-2482. Fax: (901) 678-2480/3299.

From kk at ee.umr.edu Fri Oct 29 13:23:40 1993 From: kk at ee.umr.edu (kk@ee.umr.edu) Date: Fri, 29 Oct 93 12:23:40 CDT Subject: Convolution Software (Educational Tool) Message-ID: <9310291723.AA03815@lamarr.ee.umr.edu>

Contributed by: Kurt Kosbar

FREE EDUCATIONAL SOFTWARE PACKAGE: P.C. CONVOLUTION

P.C. Convolution is an educational software package that graphically demonstrates the convolution operation. It runs on IBM PC type computers using DOS 4.0 or later. It is currently being used at over 70 schools throughout the world in departments of Electrical Engineering, Physics, Mathematics, Chemical Engineering, Chemistry, Crystallography, Geography, Geophysics, Earth Science, Acoustics, Phonetics & Linguistics, Biology, Astronomy, Ophthalmology, Communication Sciences, Business, Aeronautics, Biomechanics, Hydrology and Experimental Psychology. Anyone may download a demonstration version of this software via anonymous ftp from 131.151.4.11 (file name /pub/pc_conv.zip). University instructors may obtain a free, fully operational version by contacting Dr. Kurt Kosbar at the address listed below: Dr. Kurt Kosbar, 117 Electrical Engineering Building, University of Missouri - Rolla, Rolla, Missouri, USA 65401, phone: (314) 341-4894, e-mail: kk at ee.umr.edu

From rohwerrj at sun.aston.ac.uk Fri Oct 29 08:34:11 1993 From: rohwerrj at sun.aston.ac.uk (rohwerrj) Date: Fri, 29 Oct 93 12:34:11 GMT Subject: PhD Thesis available for FTP in neuroprose Message-ID: <8108.9310291234@sun.aston.ac.uk>

FTP-host: archive.cis.ohio-state.edu (128.146.8.52)
FTP-file: pub/neuroprose/zhu.thesis.ps.Z

PhD Thesis (222 pages) available in neuroprose repository. (An index entry and sample ftp procedure follow the abstract.)

NEURAL NETWORKS AND ADAPTIVE COMPUTERS: Theory and Methods of Stochastic Adaptive Computation
Huaiyu Zhu, Department of Statistics and Computational Mathematics, Liverpool University, Liverpool L69 3BX, UK

ABSTRACT: This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects --- it can be continuous, stochastic and adaptive --- and retains TM computation as a subclass called ``data processing''. The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of a ``trainable information processor'' (TIP) --- a parameterised stochastic mapping with a rule to change the parameters --- is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as back propagation NN, Boltzmann machine and Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which is susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of ``global search'' in the learning process.
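The flavor of the argument can be seen in a small sketch. The following is NOT the thesis's actual learning rule -- it is a generic Metropolis-style annealed weight search, written in plain Python with NumPy, with every name and constant invented for the example -- but it shares the property claimed above: only the network's scalar error is needed, and occasional uphill moves keep the search from freezing in a local optimum.

    import numpy as np
    rng = np.random.default_rng(2)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)       # XOR, a classic trap

    def error(w):
        """Scalar error of a tiny 2-3-1 tanh network with weights w."""
        W1, W2 = w[:6].reshape(2, 3), w[6:].reshape(3, 1)
        return float(np.mean((np.tanh(np.tanh(X @ W1) @ W2) - y) ** 2))

    w, T = rng.normal(0, 1, 9), 1.0
    for _ in range(20000):
        w_new = w + rng.normal(0, 0.1, 9)           # random proposal
        dE = error(w_new) - error(w)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            w = w_new                               # accept, maybe uphill
        T = max(0.999 * T, 1e-3)                    # cooling schedule
    print(error(w))                                 # typically near zero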
Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than back propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network. Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision processes to allow adaptive learning, resulting in a computational system called a ``basic adaptive computer'', which has the advantage over earlier reinforcement learning systems, such as Sutton's ``Dyna'', in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
___________________________________________________________________
INDEX entry: zhu.thesis.ps.Z hzhu at liverpool.ac.uk 222 pages. Foundation of stochastic adaptive computation based on neural networks. Simulated annealing learning rule superior to backpropagation and Boltzmann machine learning rules. Reinforcement learning for combinatorial state space and action space. (Mathematics with simulation results plus philosophy.)
---------------------
Sample ftp procedure:

    unix$ ftp archive.cis.ohio-state.edu
    Name (archive.cis.ohio-state.edu:name): ftp (or anonymous)
    Password: (your email address including @)
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get zhu.thesis.ps.Z
    ftp> quit
    unix$ uncompress zhu.thesis.ps.Z
    unix$ lpr zhu.thesis.ps

The last two steps can also be combined to

    unix$ zcat zhu.thesis.ps.Z | lpr

which will save some space.
----------------------
Note: This announcement is simultaneously sent to the following three mailing lists: connectionists at cs.cmu.edu, anneal at sti.com, reinforce at cs.uwa.edu.au. My apologies to those who subscribe to more than one of them. I'm sorry that there is no hard copy available. -- Huaiyu Zhu hzhu at liverpool.ac.uk Dept. of Stat. & Comp. Math., University of Liverpool, L69 3BX, UK

From ronen at wisdom.weizmann.ac.il Sun Oct 31 09:57:25 1993 From: ronen at wisdom.weizmann.ac.il (Ronen Basri) Date: Sun, 31 Oct 93 16:57:25 +0200 Subject: IAICVNN-93, Dec. 27-28, 1993, Tel Aviv: Final program and Registration Information Message-ID: <9310311457.AA03856@wisdom.weizmann.ac.il>

10th Israeli Conference on Artificial Intelligence, Computer Vision and Neural Networks
Tel-Aviv, December 27-28, 1993
FINAL PROGRAM and REGISTRATION INFORMATION

Monday, December 27, 1993
-------------------------
(8:30-9:15) REGISTRATION

Session 1.1 AI VISION NN (9:15-11:15) PLENARY SESSION
----------------------------
AI Keynote Speaker: D. Gabbay, Imperial College of Science and Technology, London:
  LENA - An automated car saleswoman capable of belief revision, abduction, action and small lies.
Vision Keynote Speaker: Yiannis Aloimonos, University of Maryland, College Park MD:
  Interpretation of visual patterns: Navigation & Manipulation.

(11:15-11:45) COFFEE BREAK

Session 1.2 AI (11:45-13:00) FORMAL TECHNIQUES
-------------------------------
Roccetti Marco, Teolis Antonio G.B. (University of Bologna): The Use of Moebius Function in Probabilistic Rule-Based Expert Systems
Meisels Amnon, Ell-sana' Jihad, Gudes Ehud (Ben-Gurion University): Comments on CSP Algorithms Applied to Timetabling
Shechory On, Kraus Sarit (Bar-Ilan University): Coalition Formation Among Autonomous Agents: Strategies and Complexity

Session 1.3 NN (11:45 - 13:00) SPEECH AND SIGNAL PROCESSING
--------------------------------------------
I Voitovetsky, S Dahan, Y Menashe, H Guterman (Ben Gurion University): Speaker Independent Vowel Recognition Using Neural Networks
Zvi Boger (Negev Nuclear Research Center): ANNs for Quantitative Stationary Spectrometric Measurements
Yaakov Stein (Efrat Future Technology): False Alarm Rate Reduction for ASR and OCR

Session 1.4 VISION (11:45 - 13:00) SHAPE DESCRIPTION
---------------------------------
Ilan Shimshoni, Jean Ponce (University of Illinois): Finite resolution aspect graphs of polyhedral objects.
R.L. Ogniewicz, G. Szekely, O. Kubler (Communication Technology Laboratory, Zurich): Detection of prominent boundary points based on structural characteristics of the hierarchic medial axis transform.
Daphna Weinshall, Michael Werman, Naftali Tishbi (Hebrew University): Stability and likelihood of two dimensional views of three dimensional objects.

(13:00-14:30) LUNCH BREAK

Session 1.5 AI (14:30-15:45) DESIGN METHODOLOGY
--------------------------------
Ndjatou Gilbert (CUNY): Modelling Objects and Distributed Object-Based Systems
Nygate Yossi (AT&T-Bell), Sterling Leon (Case Western Reserve University): ASPEN - Designing Complex Knowledge Based Systems
Weiss Richard J., Tamir D. E., Schneider Moti (Florida Institute of Technology): Efficient Resource Allocation for Parallel Prolog Interpretation

Session 1.6 NN (14:30 - 15:45) OPTICAL CHARACTER RECOGNITION AND RELATED TOPICS
----------------------------------------------------------------
Jacob (Yak) Shaya, Jonathan Eran Kali, Gideon Y Ben-Zvi (Ligatura): A Stochastic Approach -- Advantages in Character Recognition
Oded Comay, Nathan Intrator (Tel Aviv University): Ensemble Training: Some recent experiments with Postal Zip data
B Lerner, H Guterman, I Dinstein (Ben Gurion University): Global Features and Simple Transformation to Chromosome Classification

Session 1.7 VISION (14:30-15:45) MODELS FOR HUMAN VISION
-------------------------------------
Yael Moses, Shimon Ullman, Shimon Edelman (Weizmann Institute): Generalization to novel images in upright and inverted faces.
Ehud Grunfeld, Hedva Spitzer (Tel Aviv University): Spatio-temporal model for subjective colours based on colour coded cells.
Haya Ruchvarger (Bar Ilan University): A mathematical derivation on possible aid of eye movements to depth perception.

(15:45-16:00) COFFEE BREAK

Session 1.8 AI (16:00-16:45) LOGIC
-------------------
Ben-Eliyahu Rachel (UCLA): Back to the Future: Program Completion Revisited
Slobodova Anna (Slovak Academy of Science): On a Special Case of Probabilistic Logic

Session 1.9 NN (16:00-16:50) HARDWARE IMPLEMENTATIONS
--------------------------------------
Ronny Agranat (Hebrew University): An Electroholographic Artificial Neural Network
U Sandler (Jerusalem College of Technology): Multimode Laser As a Multi-Neuron

Session 1.10 VISION (16:00 - 17:15) FEATURES FOR RECOGNITION
----------------------------------------
Dieter Koller (University of California at Berkeley): Moving object recognition and classification based on recursive shape parameter estimation.
Antti Yla-Jaaski, F. Ade (Mapvision Ltd., Finland): Grouping symmetrical structures for image segmentation and description.
Yui-Liang Chen, D.C. Hung (New Jersey Institute of Technology): Shape recognition using hypothetic feature.

Session 1.11 AI (16:45-17:45) TUTORIAL
----------------------
S. Engelson (Yale University): "Where am I, and where am I headed?" (Map Learning for Mobile Robots)

Session 1.12 NN (16:50 - 18:00) BIOLOGICAL AND MEDICAL APPLICATIONS
---------------------------------------------------
Itiel E Dror (Harvard University): Neural Network Models of Bat Sonar
H Guterman (Ben Gurion University), A Yarkoni (Soroka Hospital): Classification of Labor Contractions by Neural Networks and Fuzzy Logic
P Tandeitnik, H Guterman (Ben Gurion University): System Identification of Engineering and Biological Processes

Tuesday, December 28, 1993
--------------------------
(8:00-9:00) REGISTRATION

Session 2.1 AI VISION NN (9:00-11:00) PLENARY SESSION
----------------------------
NN Keynote Speaker: S. Kirkpatrick, IBM Yorktown Heights and HU:
  Satisfaction Thresholds and other Disorderly Things
Vision Keynote Speaker: Davi Geiger, Siemens Corporate Research, NJ:
  Perceptual mechanisms for images and stereo pairs

(11:00-11:30) COFFEE BREAK

Session 2.2 AI (11:30-13:15) LEARNING
----------------------
Sprotte-Kleiber Lucia (Zurich University): GAAR - A System for Solving Geometric Analogy Problems with Analogical Representations
Koppel M., Feldman R. (Bar-Ilan University), Segre A. (Cornell University): Theory Revision Using Noisy Examples
Biberman Yoram (Hebrew University): A Context Similarity Measure for Nominal Variables
Reich Yoram, Karni Reuven, Fournier Fabiana (Technion): An Investigation of Machine Learning Approaches to Knowledge Extraction from Databases

Session 2.3 NN (11:30 - 13:15) ARCHITECTURES AND LEARNING
------------------------------------------
Nathan Intrator (Tel Aviv University), Orna Intrator (Hebrew University): Interpreting Neural-Network Models
Orly Yadid-Pecht, Moshe Gur (Technion): Modified MAXNET with Fast Convergence Rate
Lev Tsitolovsky (Bar Ilan University), Maxim Kovalenko (Weizmann Institute): Structure Independent Learning in Neural Networks

Session 2.4 VISION (11:30 - 13:15) RECOVERY AND GEOMETRY
-------------------------------------
Patrick Gros, Long Quan (LIFIA/INRIA, France): 3D projective invariants from two images.
Ron Kimmel, Guillermo Sapiro (Technion): Shortening three dimensional curves via two dimensional flows.
Michal Irani, Benny Rousso, Shmuel Peleg (Hebrew University): Recovery of ego-motion using image stabilization.
Quang-Tuan Luong, Rachid Deriche, Olivier Faugeras, Theodore Papadopoulo (INRIA, France): On determining the fundamental matrix: analysis of different methods and experimental results.

(13:15-14:30) LUNCH BREAK

Session 2.5 AI (14:30-15:45) APPLICATIONS
--------------------------
Tomer Amir (Rafael): An Implementation Methodology with Constructive Inheritance
Stilman Boris (University of Colorado): Hierarchical Networks for Systems Control
Tabakman T, Exman I. (Hebrew University): Towards Real-Time Self-Organising Maps with Parallel and Noisy Inputs

Session 2.6 NN (14:30 - 15:45) FUZZY LOGIC, EXPERT SYSTEMS, PATTERN RECOGNITION
----------------------------------------------------------------
Alon Cohen (Bar Ilan University): A Legal Expert Neural Network
Usiel Sandler (Jerusalem College of Technology): Fuzzy Dynamics
Yaakov Stein (Efrat Future Technology): A Hypersphere Classifier Which Trains Like A Hyperplane One

Session 2.7 VISION (14:30-15:45) GROUPING AND SEGMENTATION
---------------------------------------
Gady Agam, Its'hak Dinstein (Ben Gurion University): Pre-processing of metaphase images for automatic chromosome classification.
Antti Yla-Jaaski (Mapvision Ltd., Finland), Nahum Kiryati (Technion): Adaptive termination of voting in probabilistic Hough algorithms.
Victor Brailovsky, Yulia Kempner (Tel Aviv University): Restoring the original range image structure using probabilistic estimates.

(15:45-16:00) COFFEE BREAK

Session 2.8 AI (16:00-17:30) TUTORIAL
----------------------
Dan Geiger (Technion): Probabilistic Reasoning: Learning and Inference

Session 2.9 NN (16:00 - 17:00) IMAGE PROCESSING
--------------------------------
Sorin Costiner, Maxim Kovalenko (Weizmann Institute): A Multigrid Neural Network with Applications to Image Processing
S Greenberg, H Guterman (Ben Gurion University): Rotation and Shift Invariant Image Classifier using NN

Session 2.10 VISION (16:00-17:00) NON RIGID OBJECTS
-------------------------------
Eyal Shavit, Allan Jepson (University of Toronto): The pose function: an intermediate level representation for motion understanding.
Yaacov Hel-or (Weizmann Institute) and Michael Werman (Hebrew University): Pose estimation of articulated and constrained models.

Session 2.11 NN (17:00 - 18:00) PANEL DISCUSSION
--------------------------------
Industry and Academia Interaction
-------------------------------------------------------------------------

REGISTRATION INFORMATION

Registration forms should be sent by December 6, 1993, to the following address: Ms. Ruth Cooperman, 10th IAICVNN Secretariat, IPA, Kfar Maccabiah, Ramat Gan 52109, ISRAEL.

The registration fee for the conference (including lunch, coffee, and proceedings) is as follows:

  IPA members (*) (before Dec. 6):    $200 or 350 NIS
  IPA members (*) (after Dec. 6):     $250 or 400 NIS
  Non-IPA members (before Dec. 6):    375 NIS
  Non-IPA members (after Dec. 6):     425 NIS
  Students (**) (before Dec. 6):      $100 or 200 NIS
  Students (**) (after Dec. 6):       $125 or 225 NIS

(*) Visitors from abroad are entitled to the IPA member rate. (Payment in US$ only.)
(**) The student rate will be approved only for students who are enrolled in a full-time study program, with documents confirmed by the university. The student registration fee does not include lunch. Lunch tickets may be obtained for an additional $25/50 NIS (for two tickets).

A block of rooms has been reserved at Kfar Maccabiah Hotel, a 4-star hotel at the conference site. Prices, including breakfast and service charge, are $77 for a double room and $60 for a single room per night (+ VAT for Israeli residents). A deposit of $50 per person is required with this reservation. The accommodation form should be sent to IPA together with the registration form.

REGISTRATION FORM

Name _________________________________________________
        Last              First             Title
Affiliation __________________________________________
Position/Department __________________________________
Address ______________________________________________
______________________________________________________
______________________________________________________
        Country           Telephone
Home address _________________________________________
______________________________________________________
______________________________________________________
Preferred mailing address  ___ Home  ___ Business

Registration fees:
____ IPA member $200/350 NIS
____ non-IPA member 375 NIS
____ Student $100/200 NIS
____ Late registration (after December 6, add as specified above)
____ Total

Payment can be in US$ or NIS only, by bankers draft or personal check payable to: IPA.
Signature _______________________  Date ___________
________________________________________________________________________

ACCOMMODATION FORM

Name _________________________________________________
        Last              First             Title
Address ______________________________________________
______________________________________________________
______________________________________________________
        Country           Telephone
I wish to reserve a single/double room from __________ to __________ for a total of _______ nights.
Payment can be in US$ or NIS only, by bankers draft or personal check payable to: IPA.
Signature _______________________  Date ___________

Note: The FAX number of IPA is +972-3-574 4374. The FAX number of Kfar Maccabiah Hotel is +972-3-574 4678.
-------------------------------------------------------------------------

From gary at cs.ucsd.edu Sun Oct 31 15:18:05 1993 From: gary at cs.ucsd.edu (Gary Cottrell) Date: Sun, 31 Oct 93 12:18:05 -0800 Subject: ACL-94 Call for papers Message-ID: <9310312018.AA26653@desi>

Hi folks,

Guess what - I'm going to be on the ACL program committee again this year (I guess they're gluttons for punishment). For those of you unfamiliar with this conference, it is the Association for Computational Linguistics. This is the premier conference for Computational Linguistics research, with refereeing as tight as NIPS. In recent years there has been an upsurge of interest in statistical approaches to large text corpora, and concomitantly, in neural network approaches to NLP. Last year I asked the people on this list to submit to this conference and the response was underwhelming. Please consider submitting to this conference if you do neural network or statistical approaches to NLP, and I will guarantee you a fair hearing. It is time the rift between symbolic and connectionist approaches was healed - we can learn from one another!

PAPERS ARE DUE JANUARY 6, 1994!!!
gary Here is the CFP: From walker at bellcore.com Wed Oct 13 11:17:12 1993 From: walker at bellcore.com (Don Walker) Date: Wed, 13 Oct 93 11:17:12 -0400 Subject: ACL-94 CALL FOR PAPERS Message-ID: ACL-94 CALL FOR PAPERS 32nd Annual Meeting of the Association for Computational Linguistics 27 June - 1 July 1994 New Mexico State University Las Cruces, New Mexico, USA TOPICS OF INTEREST: Papers are invited on substantial, original, and unpublished research on all aspects of computational linguistics, including, but not limited to, pragmatics, discourse, semantics, syntax, and the lexicon; phonetics, phonology, and morphology; interpreting and generating spoken and written language; linguistic, mathematical, and psychological models of language; language-oriented information retrieval; corpus-based language modeling; machine translation and translation aids; natural language interfaces and dialogue systems; message and narrative understanding systems; and theoretical and applications papers of every kind. REQUIREMENTS: Papers should describe unique work; they should emphasize completed work rather than intended work; and they should indicate clearly the state of completion of the reported results. A paper accepted for presentation at the ACL Meeting cannot be presented or have been presented at any other meeting with publicly available published proceedings. Papers that are being submitted to other conferences must reflect this fact on the title page. FORMAT FOR SUBMISSION: Authors should submit preliminary versions of their papers, not to exceed 3200 words (exclusive of references). Papers outside the specified length and formatting requirements are subject to rejection without review. Papers should be headed by a title page containing the paper title, a short (5 line) summary and a specification of the subject area. Since reviewing will be ``blind'', the title page of the paper should omit author names and addresses. Furthermore, self-references that reveal the authors' identity (e.g., ``We previously showed (Smith, 1991) . . .'') should be avoided. Instead, use references like ``Smith previously showed (1991) . . .'' To identify each paper, a separate identification page should be supplied, containing the paper's title, the name(s) of the author(s), complete addresses, a short (5 line) summary, a word count, and a specification of the topic area. SUBMISSION MEDIA: Papers should be submitted electronically or in hard copy to the Program Chair: James Pustejovsky +1-617-736-2709 Brandeis University +1-617-736-2741 fax Computer Science, Ford Hall Waltham, MA 02254, USA jamesp at cs.brandeis.edu Electronic submissions should be either self-contained LaTeX source or plain text. LaTeX submissions must use the ACL submission style (aclsub.sty) retrievable from the ACL LISTSERV server (access to which is described below) and should not refer to any external files or styles except for the standard styles for TeX 3.14 and LaTeX 2.09. A model submission modelsub.tex is also provided in the archive, as well as a bibliography style acl.bst. (Note however that the bibliography for a submission cannot be submitted as separate .bib file; the actual bibliography entries must be inserted in the submitted LaTeX source file.) Hard copy submissions should consist of four (4) copies of the paper and one (1) copy of the identification page. 
For both kinds of submissions, if at all possible, a plain text version of the identification page should be sent separately by electronic mail, using the following format:

title:
author: <name of first author>
address: <address of first author>
...
author: <name of last author>
address: <address of last author>
abstract: <abstract>
content areas: <first area>, ..., <last area>
word count:

SCHEDULE: Authors must submit their papers by 6 January 1994. Late papers will not be considered. Notification of receipt will be mailed to the first author (or designated author) soon after receipt. Authors will be notified of acceptance by 15 March 1994. Camera-ready copies of final papers prepared in a double-column format, preferably using a laser printer, must be received by 1 May 1994, along with a signed copyright release statement. The ACL LaTeX proceedings format is available through the ACL LISTSERV. STUDENT SESSIONS: There will again be special Student Sessions organized by a committee of ACL graduate student members. ACL student members are invited to submit short papers describing innovative WORK IN PROGRESS in any of the topics listed above. Papers are limited to 3 pages plus a title page and an identification page in the format described above and must be submitted by hard copy or both e-mail AND hard copy to Beryl Hoffman at the address below by 1 FEBRUARY 1994. The papers will be reviewed by a committee of students and faculty members for presentation in workshop-style sessions and publication in a special section of the conference proceedings. There is a separate Call for Papers, available from the ACL LISTSERV (see below); or from Beryl Hoffman, University of Pennsylvania, Computer Science, 3401 Walnut Street, Philadelphia, PA 19104, USA; +1-215-898-5868; 0587 fax; hoffman at linc.cis.upenn.edu; or Rebecca Passonneau, Columbia University, Computer Science, New York, NY 10027, USA; +1-212-939-7120; 666-0140 fax; becky at cs.columbia.edu. OTHER ACTIVITIES: The meeting will include a program of tutorials coordinated by Lynette Hirschman, MITRE Corporation, 202 Burlington Road, MS K329, Bedford, MA 01730, USA; +1-617-271-7789; 2352 fax; lynette at linus.mitre.org. Some of the ACL Special Interest Groups may arrange workshops or other activities. Further information may be available from the ACL LISTSERV. CONFERENCE INFORMATION: The Local Arrangements Committee is chaired by: Janyce M. Wiebe +1-505-646-6228 New Mexico State University +1-505-646-6218 fax Computing Research Laboratory PO Box 30001/3CRL Las Cruces, NM 88003, USA wiebe at nmsu.edu Anyone wishing to arrange an exhibit or present a demonstration should send a brief description together with a specification of physical requirements (space, power, telephone connections, tables, etc.) to Ted Dunning, New Mexico State University, Computing Research Laboratory, Box 30001/3CRL, Las Cruces, NM 88003, USA; +1-505-646-6221; 6218 fax; ted at nmsu.edu. ACL INFORMATION: For other information on the conference and on the ACL more generally, contact Judith Klavans (ACL), Columbia University, Computer Science, New York, NY 10027, USA; +1-914-478-1802 phone/fax; acl at cs.columbia.edu. General information about the ACL AND electronic membership and order forms are available from the ACL LISTSERV. ACL LISTSERV: LISTSERV is a facility to allow access to an electronic document archive by electronic mail. The ACL LISTSERV has been set up at Columbia University's Department of Computer Science.
Requests for files from the archive should be sent as e-mail messages to listserv at cs.columbia.edu with an empty subject field and the message body containing the request command. The most useful requests are "help" for general help on using LISTSERV, "index acl-l" for the current contents of the ACL archive and "get acl-l <file>" to get a particular file named <file> from the archive. For example, to get an ACL membership form, a message with the following body should be sent:

get acl-l membership-form.txt

Answers to requests are returned by e-mail. Since the server may have many requests for different archives to process, requests are queued up and may take a while (say, overnight) to be fulfilled. The ACL archive can also be accessed by anonymous FTP. Here is an example of how to get the same file by FTP (user input follows each prompt; the password is not echoed):

$ ftp cs.columbia.edu
Name (cs.columbia.edu:pereira): anonymous
Password: pereira at research.att.com
ftp> cd acl-l
ftp> get membership-form.txt.Z
ftp> quit
$ uncompress membership-form.txt.Z

From arantza at cogs.susx.ac.uk Mon Oct 18 10:48:42 1993 From: arantza at cogs.susx.ac.uk (Arantza Etxeberria) Date: Mon, 18 Oct 93 10:48:42 BST Subject: Artificial Life Workshop Announcement Message-ID: <mailman.639.1149540269.24850.connectionists@cs.cmu.edu>

ARTIFICIAL LIFE: A BRIDGE TOWARDS A NEW ARTIFICIAL INTELLIGENCE Palacio de Miramar (San Sebastian, Spain) December 10th and 11th, 1993 Workshop organised by the Department of Logic and Philosophy of Science, Faculty of Computer Science & Institute of Logic, Cognition, Language and Information (ILCLI) of the University of the Basque Country (UPV/EHU) Directors: Alvaro Moreno (University of the Basque Country) Francisco Varela (CREA, Paris)

This Workshop will be devoted to a discussion of the impact of work on Artificial Life on Artificial Intelligence. Artificial Intelligence (AI) has traditionally attempted to study cognition as an abstract phenomenon using formal tools, that is, as a disembodied process that can be grasped through formal operations, independent of the nature of the system that displays it. Cognition is treated as an abstract representation of reality. After several decades of research in this direction the field has encountered several problems that have taken it to what many consider a "dead end": difficulties in understanding autonomous and situated agencies, in relating to behaviour in a real environment, in studying the nature and evolution of perception, in finding a practical explanation for the operation of most cognitive capacities such as natural language, context-dependent action, etc. Artificial Life (AL) has recently emerged as a confluence of very different fields trying to study different kinds of features of living systems using computers as a modelling tool, and, ultimately, trying to artificially (re)produce a living system (or a population of them) in real or computational media. Examples of such phenomena are prebiotic systems and their evolution, growth and development, self-reproduction, adaptation to an environment, evolution of ecosystems and natural selection, formation of sensory-motor loops, autonomous robots. Thus, AL is having an impact on classic life sciences but also on the conceptual foundations of AI and new methodological ideas in Cognitive Science.
The aim of this Workshop is to focus on the last two points and to evaluate the influence of the methodology and concepts appearing in AL on the development of new ideas about cognition that could eventually give birth to a new Artificial Intelligence. Some of the sessions consist of presentations and replies on a specific subject by invited speakers while others will be debates open to all participants in the workshop.

MAIN TOPICS:
* A review of the problems of FUNCTIONALISM in Cognitive Science and Artificial Life.
* Modelling Neural Networks through Genetic Algorithms.
* Autonomy and Robotics.
* Consequences of the crisis of the representational models of cognition.
* Minimal Living System and Minimal Cognitive System.
* Artificial Life systems as problem solvers.
* Emergence and evolution in artificial systems.

SPEAKERS
S. Harnad, P. Husbands, G. Kampis, B. McMullin, D. Parisi, T. Smithers, E. Thompson, F. Varela

Further Information:
Alvaro Moreno
Apartado 1249
20080 DONOSTIA, SPAIN
E-mail: biziart at si.ehu.es
Fax: 34 43 311056
Phone: 34 43 310600 (extension 221) / 34 43 218000 (extension 209)
-----------------------------------------------------------------------
LEVELS OF FUNCTIONAL EQUIVALENCE IN REVERSE BIOENGINEERING: THE DARWINIAN TURING TEST FOR ARTIFICIAL LIFE

Stevan Harnad, Laboratoire Cognition et Mouvement, URA CNRS 1166, I.B.H.O.P., Universite d'Aix Marseille II, 13388 Marseille cedex 13, France. harnad at princeton.edu

ABSTRACT: Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications; reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more BE alive than a virtual forest fire can be hot. In itself, a computational model is just an ungrounded symbol system; no matter how closely it matches the properties of what is being modelled, it matches them only formally, with the mediation of an interpretation. Synthetic life is not open to this objection, but it is still an open question how close a functional equivalence is needed in order to capture life. Close enough to fool the Blind Watchmaker is probably close enough, but would that require molecular indistinguishability, and if so, do we really need to go that far?
-----------------------------------------------------------------------
Phil Husbands, School of Cognitive and Computing Sciences, University of Sussex, BRIGHTON BN1 9QH, U.K. philh at cogs.susx.ac.uk

ABSTRACT: We discuss the methodological foundations for our work on the development of cognitive architectures, or control systems, for situated autonomous agents. We focus on the problems of developing sensory-motor systems for mobile robots, but we also discuss the applicability of our approach to the study of biological systems. We argue that, for agents required to exhibit sophisticated interactions with their environments, complex sensory-motor processing is necessary, and the design by hand of control systems capable of this is likely to become prohibitively difficult as complexity increases. We propose an automatic design process involving artificial evolution, where the basic building blocks used for evolving cognitive architectures are noise-tolerant dynamical networks. These networks may be recurrent, and should operate in real time.
The evolution should be incremental, using an extended and modified version of a genetic algorithm. Practical constraints suggest that initial architecture evaluations should be done largely in simulation. To support our claims and proposals, we summarize results from some preliminary simulation experiments where visually guided robots are evolved to operate in simple environments. Significantly, our results demonstrate that robust visually-guided control systems evolve from evaluation functions which do not explicitly require monitoring visual input. We outline the difficulties involved in continuing with simulations, and conclude by describing specialized visuo-robotic equipment, designed to eliminate the need to simulate sensors and actuators.
-----------------------------------------------------------------------
Barry McMullin, School of Electronic Engineering, Dublin City University. McMullinB at DCU.IE

ABSTRACT: I reconsider the status of computationalism (or, in a weak sense, functionalism): the claim that being a realisation of some (as yet unspecified) class of abstract machine is both necessary and sufficient for having genuine, full-blooded, mentality. This doctrine is now quite widely (though by no means universally) seen as discredited. My position is that, though it is undoubtedly an unsatisfactory (perhaps even repugnant) thesis, the arguments against it are still rather weak. In particular, I critically reassess John Searle's infamous Chinese Room Argument, and also some relevant aspects of Karl Popper's theory of the Open Universe. I conclude that the status of computationalism must still be regarded as undecided, and that it may still provide a satisfactory framework for research.
-----------------------------------------------------------------------
Domenico Parisi, Institute of Psychology, National Research Council, Rome. e-mail: domenico at irmkant.bitnet

ABSTRACT: Genetic algorithms are methods of parallel search for optimal solutions to tasks which are inspired by biological evolution and are based on selective reproduction and the addition of variation through mutations or crossover. As models of real biological and behavioral phenomena, however, genetic algorithms suffer from many limitations. Some of these limitations are discussed under the rubrics of (a) environment, (b) variation, and (c) fitness, and ways are suggested to overcome them. Various simulations using genetic algorithms and neural networks are briefly described which incorporate a more biologically realistic notion of evolution.
-----------------------------------------------------------------------
Tim Smithers, Facultad de Informatica, Apartado 649, 20080 San Sebastian. smithers at si.ehu.es

ABSTRACT: Traditionally, autonomous systems research has been a domain of Artificial Intelligence. We argue that, as a consequence, it has been heavily influenced, often tacitly, by folk psychological notions. We believe that much of the widely acknowledged failure of this research to produce reliable and robust artificial autonomous systems can be attributed to its use of and dependence upon folk psychological constructs. As an alternative we propose taking seriously the Eliminative Materialism of Paul Churchland. In this paper we present our reasons for adopting this radical alternative approach and briefly describe the bottom-up methodology that goes with it. We illustrate the discussion with examples from our work on autonomous systems.
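(Aside for readers new to the technique the Husbands and Parisi abstracts above both build on: below is a minimal Python sketch of the selective-reproduction/mutation/crossover loop, here applied to a vector of network weights. The population size, tournament selection, Gaussian mutation, and toy fitness function are illustrative assumptions, not details taken from either paper.)

import random

def evolve(fitness, n_weights, pop_size=50, n_gen=100, p_cross=0.7, mut_sigma=0.1):
    # Toy genetic algorithm over real-valued weight vectors: selective
    # reproduction (tournament selection), one-point crossover, and
    # Gaussian mutation -- the three ingredients Parisi's abstract lists.
    pop = [[random.gauss(0, 1) for _ in range(n_weights)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = [(fitness(ind), ind) for ind in pop]
        def pick():
            # Tournament selection: the fitter of two random individuals breeds.
            a, b = random.sample(scored, 2)
            return (a if a[0] > b[0] else b)[1]
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = pick(), pick()
            if random.random() < p_cross:
                cut = random.randrange(1, n_weights)   # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Gaussian mutation adds small variation to every weight.
            new_pop.append([w + random.gauss(0, mut_sigma) for w in child])
        pop = new_pop
    return max(pop, key=fitness)

# Illustrative use: evolve an 8-weight vector toward an arbitrary target.
target = [0.5] * 8
best = evolve(lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target)), n_weights=8)

(The incremental evolution Husbands proposes would correspond to gradually making the fitness function harder across generations, rather than keeping it fixed as this sketch does.)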
[Rest of abstracts not yet available]
From ptodd at spo.rowland.org Sat Oct 2 18:05:02 1993 From: ptodd at spo.rowland.org (Peter M. Todd) Date: Sat, 2 Oct 93 18:05:02 EDT Subject: Deadline reminder: Music/arts special issue Message-ID: <9310022205.AA09991@spo.rowland.org> **** PLEASE DISTRIBUTE **** MUSIC AND CREATIVITY Call for Papers for a Special Issue of Connection Science-- Reminder of approaching deadline The October 15 deadline for submissions to the special issue of Connection Science on network applications in music, arts, and creativity, is fast approaching. We seek full-length papers on empirical or theoretical work in the areas of modelling musical cognition; network composition, choreography, or visual creation; integration of high- and low-level musical or artistic knowledge; cross-modal integration (e.g. rhythm and tonality); developmental models; cross-cultural models; psychoacoustic models; relationships between music and language; and connections to cognitive neuroscience. We also welcome shorter research notes up to 4000 words in length covering ongoing research projects. For a complete call for papers and author guidelines, or to submit a paper (five copies), contact the Special Issue Editors: Niall Griffith Department of Computer Science, University of Exeter, Prince of Wales Road, Exeter, EX4 4PT, England. E-mail: ngr at dcs.exeter.ac.uk Peter M. Todd The Rowland Institute for Science 100 Edwin H. Land Boulevard Cambridge, MA 02142 USA E-mail: ptodd at spo.rowland.org  From Christian.Lehmann at di.epfl.ch Mon Oct 4 05:39:26 1993 From: Christian.Lehmann at di.epfl.ch (Christian Lehmann) Date: Mon, 4 Oct 93 10:39:26 +0100 Subject: research job Message-ID: <9310040939.AA09484@lamisun.epfl.ch> ________________________________________________________________________________ * * * * * * * * * * * * * ________________________________________________________________________________ University of Lausanne: Graduate student position (Doctorant) available in October 1993 at the Institute of Physiology Topic: Spatial and temporal processing in neural networks The student will be integrated in an electrophysiology group working with simultaneous single unit recordings. A tight collaboration with the Swiss Federal School of Technology (EPFL) will provide the latest technical facilities. It is expected that she/he will acquire in-depth knowledge in the fast growing field of neural networks in order to develop and test original ideas on information processing in the brain.
A good background in mathematics, physics, and biology as well as knowledge of at least one higher programming language is recommended. Our Ph.D. program extends over a duration of three years minimum. The minimum salary ranges between US$19,000 and 24,000/year. Please send applications (curriculum vitae and letters of recommendation from two academic referees) to or get further information from: Dr. Alessandro Villa or Dr. Yves de Ribaupierre, UNIL Institute of Physiology, Rue du Bugnon 7, CH-1005 Lausanne, Switzerland. Tel. ++41-21-313.2809 FAX ++41-21-313.2865 E-mail: villa at ulmed.unil.ch ________________________________________________________________________________ * * * * * * * * * * * * * ________________________________________________________________________________  From jordan at psyche.mit.edu Mon Oct 4 15:36:46 1993 From: jordan at psyche.mit.edu (Michael Jordan) Date: Mon, 4 Oct 93 15:36:46 EDT Subject: technical report Message-ID: <CMM.0.90.0.749763406.jordan@psyche.mit.edu> The following paper is now available on the neuroprose archive as "jordan.convergence.ps.Z". Convergence results for the EM approach to mixtures of experts architectures Michael I. Jordan Lei Xu Department of Brain and Cognitive Sciences Massachusetts Institute of Technology The Expectation-Maximization (EM) algorithm is an iterative approach to maximum likelihood parameter estimation. Jordan and Jacobs (1993) recently proposed an EM algorithm for the mixture of experts architecture of Jacobs, Jordan, Nowlan and Hinton (1991) and the hierarchical mixture of experts architecture of Jordan and Jacobs (1992). They showed empirically that the EM algorithm for these architectures yields significantly faster convergence than gradient ascent. In the current paper we provide a theoretical analysis of this algorithm. We show that the algorithm can be regarded as a variable metric algorithm with its searching direction having a positive projection on the gradient of the log likelihood. We also analyze the convergence of the algorithm and provide an explicit expression for the convergence rate. In addition, we describe an acceleration technique that yields a significant speedup in simulation experiments.  From mel at cns.caltech.edu Mon Oct 4 17:41:11 1993 From: mel at cns.caltech.edu (Bartlett Mel) Date: Mon, 4 Oct 93 14:41:11 PDT Subject: NIPS*93 program Message-ID: <9310042141.AA16230@plato.klab.caltech.edu> NIPS*93 MEETING PROGRAM and REGISTRATION REMINDER The 1993 Neural Information Processing Systems (NIPS*93) meeting is the seventh meeting of an inter-disciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. There will be an afternoon of tutorial presentations (Nov. 29), two and a half days of regular meeting sessions (Nov. 30 - Dec. 2), and two days of focused workshops at a nearby ski area (Dec. 3-4). An electronic copy of the 1993 NIPS registration brochure is available in postscript format via anonymous ftp at helper.systems.caltech.edu in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure or other information, please send a request to nips93 at systems.caltech.edu or to: NIPS Foundation, P.O. Box 60035, Pasadena, CA 91116-6035. EARLY REGISTRATION DEADLINE (for $100 discount): Oct. 30 _________________ NIPS*93 ORAL PRESENTATIONS PROGRAM Tues.
AM: Cognitive Science 8:30 Invited Talk: Jeff Elman, UC San Diego: From Weared to Wore: A Connectionist Account of the History of the Past Tense 9:00 Richard O. Duda, San Jose State Univ.: Connectionist Models for Auditory Scene Analysis 9:20 Reza Shadmehr and Ferdinando A. Mussa-Ivaldi, MIT: Computational Elements of the Adaptive Controller of the Human Arm 9:40 Catherine Stevens and Janet Wiles, University of Queensland: Tonal Music as a Componential Code: Learning Temporal Relationships Between and Within Pitch and Timing Components 10:00 Poster Spotlights: Thea B. Ghiselli-Crispa and Paul Munro, Univ. of Pittsburgh: Emergence of Global Structure from Local Associations Tony A. Plate, University of Toronto: Estimating Structural Similarity by Vector Dot Products of Holographic Reduced Representations 10:10 BREAK Speech Recognition 10:40 Jose C. Principe, Hui-H. Hsu and Jyh-M. Kuo, Univ. of Florida: Analysis of Short Term Neural Memory Structures for Nonlinear Prediction 11:00 Eric I. Chang and Richard P. Lippmann, MIT Lincoln Laboratory: Figure of Merit Training for Detection and Spotting 11:20 Gregory J. Wolff, K. Venkatesh Prasad, David G. Stork and Marcus Hennecke, Ricoh California Research Center: Lipreading by Neural Networks: Visual Preprocessing, Learning and Sensory Integration 11:40 Poster Spotlights: Steve Renals, Mike Hochberg and Tony Robinson, Cambridge University: Learning Temporal Dependencies In Large-Scale Connectionist Speech Recognition Ying Zhao, John Makhoul, Richard Schwartz and George Zavaliagkos, BBN Systems and Technologies: Segmental Neural Net Optimization for Continuous Speech Recognition 11:50 Rod Goodman, Caltech: Posner Memorial Lecture Tues. PM: Temporal Prediction and Control 2:00 Invited Talk: Doyne Farmer, Prediction Co.: Time Series Analysis of Nonlinear and Chaotic Time Series: State Space Reconstruction and the Curse of Dimensionality 2:30 Kenneth M. Buckland and Peter D. Lawrence, Univ. of British Columbia: Transition Point Dynamic Programming 2:50 Gary W. Flake, Guo-Zhen Sun, Yee-Chun Lee and Hsing-Hen Chen, University of Maryland: Exploiting Chaos to Control The Future 3:10 Satinder P. Singh, Andrew G. Barto, Roderic Grupen and Christopher Connolly, University of Massachusetts: Robust Reinforcement Learning in Motion Planning 3:30 BREAK Theoretical Analysis 4:00 Scott Kirkpatrick, Naftali Tishby, Lidror Troyansky, The Hebrew Univ. of Jerusalem, and Geza Gyorgi, Eotvos Univ.: The Statistical Mechanics of K-Satisfaction 4:20 Santosh S. Venkatesh, Changfeng Wang, Univ. of Pennsylvania, and Stephen Judd, Siemens Corporate Research: When To Stop: On Optimal Stopping And Effective Machine Size In Learning 4:40 Wolfgang Maass, Technische Univ. Graz: Agnostic PAC-Learning Functions on Analog Neural Nets 5:00 H.N. Mhaskar, California State Univ. and Charles A. Micchelli, IBM: How To Choose An Activation Function 5:20 Poster Spotlights Iris Ginzburg, Tel Aviv Univ. and Haim Sompolinsky, Hebrew Univ.: Correlation Functions on a Large Stochastic Neural Network Xin Wang, Qingnan Li and Edward K. Blum, USC: Asynchronous Dynamics of Continuous-Time Neural Networks Tal Grossman and Alan Lapedes, Los Alamos National Laboratory: Use of Bad Training Data for Better Predictions Wed. AM: Learning Algorithms 8:30 Invited Talk: Geoff Hinton, Univ. of Toronto: Using the Minimum Description Length Principle to Discover Factorial Codes 9:00 Richard S. Zemel, Salk Institute, and G. Hinton, Univ. 
of Toronto: Developing Population Codes By Minimizing Description Length 9:20 Sreerupa Das and Michael C. Mozer, University of Colorado: A Hybrid Gradient-Descent/Clustering Technique for Finite State Machine Induction 9:40 Eric Saund, Xerox Palo Alto Research Center: Unsupervised Learning of Mixtures of Multiple Causes in Binary Data 10:00 BREAK 10:30 A. Uzi Levin and Todd Leen, Oregon Graduate Institute: Fast Pruning Using Principal Components 10:50 Christoph Bregler and Stephen Omohundro, ICSI: Surface Learning with Applications to Lip Reading 11:10 Melanie Mitchell, Santa Fe Inst. and John H. Holland, Univ. Michigan: When Will a Genetic Algorithm Outperform Hill Climbing 11:30 Oded Maron and Andrew W. Moore, MIT: Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation 11:50 Poster Spotlights: Zoubin Ghahramani and Michael I. Jordan, MIT: Supervised Learning from Incomplete Data via an EM Approach Mats Osterberg and Reiner Lenz, Linkoping Univ. Unsupervised Parallel Feature Extraction from First Principles Terence D. Sanger, LAC-USC Medical Center: Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples Patrice Y. Simard and Edi Sackinger, AT&T Bell Laboratories: Efficient Computation of Complex Distance Metrics Using Hierarchical Filtering Wed. PM: Neuroscience 2:00 Invited Talk: Eve Marder, Brandeis Univ.: Dynamic Modulation of Neurons and Networks 2:30 Ojvind Bernander, Rodney Douglas and Christof Koch, Caltech: Amplifying and Linearizing Apical Synaptic Inputs to Cortical Pyramidal Cells 2:50 Christiane Linster and David Marsan, ESPCI, Claudine Masson and Michel Kerzberg, CNRS: Odor Processing in the Bee: a Preliminary Study of the Role of Central Input to the Antennal Lobe 3:10 M.G. Maltenfort, R. E. Druzinsky, C. J. Heckman and W. Z. Rymer, Northwestern Univ.: Lower Boundaries of Motoneuron Desynchronization Via Renshaw Interneurons 3:30 BREAK Visual Processing 4:00 K. Obermayer, The Salk Institute, L. Kiorpes, NYU and Gary G. Blasdel, Harvard Medical School: Development of Orientation and Ocular Dominance Columns in Infant Macaques 4:20 Yoshua Bengio, Yann Le Cun and Donnie Henderson, AT&T Bell Labs: Globally Trained Handwritten Word Recognizer using Spatial Representation, Spatial Displacement Neural Networks and Hidden Markov Models 4:40 Trevor Darrell and A. P. Pentland, MIT: Classification of Hand Gestures using a View-based Distributed Representation 5:00 Ko Sakai and Leif H. Finkel, Univ. of Pennsylvania: A Network Mechanism for the Determination of Shape-from-Texture 5:20 Video Poster Spotlights (to be announced) Thurs. AM: Implementations and Applications 8:30 Invited Talk: Dan Seligson, Intel: A Radial Basis Function Classifier with On-chip Learning 9:00 Michael A. Glover, Current Technology, Inc. and W. Thomas Miller III, University of New Hampshire: A Massively-Parallel SIMD Processor for Neural Network and Machine Vision Application 9:20 Steven S. Watkins, Paul M. 
Chau, and Mark Plutowski, UCSD, Raoul Tawel and Bjorn Lambrigsten, JPL: A Hybrid Radial Basis Function Neurocomputer 9:40 Gert Cauwenberghs, Caltech: A Learning Analog Neural Network Chip with Continuous-Time Recurrent Dynamics 10:00 BREAK 10:30 Invited Talk: Paul Refenes, University College London: Neural Network Applications in the Capital Markets 11:00 Jane Bromley, Isabelle Guyon, Yann Le Cun, Eduard Sackinger and Roopak Shah, AT&T Bell Laboratories: Signature Verification using a "Siamese" Time Delay Neural Network 11:20 John Platt and Ralph Wolf, Synaptics, Inc.: Postal Address Block Location Using a Convolutional Locator Network 11:40 Shumeet Baluja and Dean Pomerleau, Carnegie Mellon University: Non-Intrusive Gaze Tracking Using Artificial Neural Networks 12:00 Adjourn to Vail for Workshops _____________________ NIPS*93 POSTER PROGRAM Tues. PM Posters: Cognitive Science (CS) CS-1 Blasig Using Backpropagation to Automatically Generate Symbolic Classification Rules CS-2 Munro, Ghiselli-Crispa Emergence of Global Structure from Local Associations CS-3 Plate Estimating structural similarity by vector dot products of Holographic Reduced Representations CS-4 Shultz, Elman Analyzing Cross Connected Networks CS-5 Sperduti Encoding of Labeled Graphs by Labeling RAAM Speech Processing (SP) SP-1 Farrell, Mammone Speaker Recognition Using Neural Tree Networks SP-2 Hirayama, Vatikiotis-Bateson, Kawato Inverse Dynamics of Speech Motor Control SP-3 Renals, Hochberg, Robinson Learning Temporal Dependencies In Large-Scale Connectionist Speech Recognition SP-4 Zhao, Makhoul, Schwartz, Zavaliagkos Segmental Neural Net Optimization for Continuous Speech Recognition Control, Navigation and Planning (CT) CT-1 Atkeson Using Local Trajectory Optimizers To Speed Up Global Optimization In Dynamic Programming CT-2 Boyan, Littman A Reinforcement Learning Scheme for Packet Routing Using a Network of Neural Networks CT-3 Cohn Queries and Exploration Using Optimal Experiment Design CT-4 Duff, Barto Monte Carlo Matrix Inversion and Reinforcement Learning CT-5 Gullapalli, Barto Convergence of Indirect Adaptive Asynchronous Dynamic Programming Algorithms CT-6 Jaakkola, Jordan, Singh Stochastic Convergence Of Iterative DP Algorithms CT-7 Moore The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-spaces CT-8 Nowlan, Cacciatore Mixtures of Controllers for Jump Linear and Non-linear Plants CT-9 Wada, Koike, Vatikiotis-Bateson, Kawato A Computational Model for Cursive Handwriting Based on the Minimization Principle Learning Theory, Generalization and Complexity (LT) LT-01 Cortes, Jackel, Solla, Vapnik, Denker Learning Curves: Asymptotic Values and Rates of Convergence LT-02 Fefferman Recovering A Feed-Forward Net From Its Output LT-03 Grossman, Lapedes Use of Bad Training Data for Better Predictions LT-04 Hassibi, Sayed, Kailath H-inf Optimality Criteria for LMS and Backpropagation LT-05 Hush, Horne Bounds on the complexity of recurrent neural network implementations of finite state machines LT-06 Ji A Bound on Generalization Error Using Network-Parameter-Dependent Information and Its Applications LT-07 Kowalczyk Counting function theorem for multi-layer networks LT-08 Mangasarian, Solodov Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization LT-09 Plutowski, White Delete-1 Cross-Validation Estimates IMSE LT-10 Schwarze, Hertz Discontinuous Generalization in Large Committee Machines LT-11 Shapiro, Prugel-Bennett Non-Linear Statistical Analysis and
Self-Organizing Competitive Networks LT-12 Wahba Structured Machine Learning for 'Soft' Classification, with Smoothing Spline ANOVA Models and Stacked Tuning, Testing and Evaluation LT-13 Watanabe Solvable models of artificial neural networks LT-14 Wiklicky On the Non-Existence of a Universal Learning Algorithm for Recurrent Neural Networks Dynamics/Statistical Analysis (DS) DS-1 Coolen, Penney, Sherrington Coupled Dynamics of Fast Neurons and Slow Interactions DS-2 Garzon, Botelho Observability of neural network behavior DS-3 Gerstner, van Hemmen How to Describe Neuronal Activity: Spikes, Rates, or Assemblies? DS-4 Ginzburg, Sompolinsky Correlation Functions on a Large Stochastic Neural Network DS-5 Leen, Orr Momentum and Optimal Stochastic Search DS-6 Ruppin, Meilijson Optimal signalling in Attractor Neural Networks DS-7 Wang, Li, Blum Asynchronous Dynamics of Continuous-Time Neural Networks Recurrent Networks (RN) RN-1 Baird, Troyer, Eeckman Grammatical Inference by Attentional Control of Synchronization in an Oscillating Elman Net RN-2 Bengio, Frasconi Credit Assignment through Time: Alternatives to Backpropagation RN-3 Kolen Fool's Gold: Extracting Finite State Machines From Recurrent Network Dynamics RN-4 Movellan A Reinforcement Algorithm to Learn Trajectories with Stochastic Neural Networks RN-5 Saunders, Angeline, Pollack Structural and behavioral evolution of recurrent networks Applications (AP) AP-01 Baldi, Brunak, Chauvin, Krogh Hidden Markov Models in Molecular Biology: Parsing the Human Genome AP-02 Eeckman, Buhmann, Lades A Silicon Retina for Face Recognition AP-03 Flann A Hierarchical Approach to Recognizing On-line Cursive Handwriting AP-04 Graf, Cosatto, Ting Locating Address Blocks with a Neural Net System AP-05 Karunanithi Identifying Fault-Prone Software Modules Using Feed-Forward Networks: A Case Study AP-06 Keymeulen Comparison Training for a Rescheduling Problem in Neural Networks AP-07 Lapedes, Steeg Use of Adaptive Networks to Find Highly Predictable Protein Structure Classes AP-08 Schraudolph, Dayan, Sejnowski Using the TD(lambda) Algorithm to Learn an Evaluation Function for the Game of Go AP-09 Smyth Probabilistic Anomaly Detection in Dynamic Systems AP-10 Tishby, Singer Decoding Cursive Scripts Wed.
PM posters: Learning Algorithms (LA) LA-01 Gold, Mjolsness Clustering with a Domain-Specific Distance Metric LA-02 Buhmann Central and Pairwise Data Clustering by Competitive Neural Networks LA-03 de Sa Learning Classification without Labeled Data LA-04 Ghahramani, Jordan Supervised learning from incomplete data via an EM approach LA-05 Tresp, Ahmad, Neuneier Training Neural Networks with Deficient Data LA-06 Osterberg, Lenz Unsupervised Parallel Feature Extraction from First Principles LA-07 Sanger Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples LA-08 Leen, Kambhatla Fast Non-Linear Dimension Reduction LA-09 Schaal, Atkeson Assessing The Quality of Learned Local Models LA-10 Simard, Sackinger Efficient computation of complex distance metrics using hierarchical filtering LA-11 Tishby, Ron, Singer The Power of Amnesia LA-12 Wettschereck, Dietterich Locally Adaptive Nearest Neighbor Algorithms LA-13 Liu Robust Parameter Estimation and Model Selection for Neural Network Regression LA-14 Wolpert Bayesian Backpropagation Over Functions Rather Than Weights LA-15 Thodberg Bayesian Backprop in Action: Pruning, Ensembles, Error Bars and Application to Spectroscopy LA-16 Dietterich, Jain, Lathrop Dynamic Reposing for Drug Activity Prediction LA-17 Ginzburg, Horn Combined Neural Networks For Time Series Analysis LA-18 Graf, Simard Backpropagation without Multiplication LA-19 Harget, Bostock A Comparative Study of the Performance of a Modified Bumptree with Radial Basis Function Networks and the Standard Multi-Layer Perceptron LA-20 Najafi, Cherkassky Adaptive Knot Placement Based on Estimated Second Derivative of Regression Surface Constructive/Pruning Algorithms (CP) CP-1 Fritzke Supervised Learning with Growing Cell Structures CP-2 Hassibi, Stork, Wolff Optimal Brain Surgeon: Extensions, streamlining and performance comparisons CP-3 Kamimura Generation of Internal Representations by alpha-transformation CP-4 Leerink, Jabri Constructive Learning Using Internal Representation Conflicts CP-5 Utans Learning in Compositional Hierarchies: Inducing the Structure of Objects from Data CP-6 Watanabe An Optimization Method of Layered Neural Networks Based on the Modified Information Criterion Neuroscience (NS) NS-01 Bialek, Ruderman Statistics of Natural Images: Scaling in the Woods NS-02 Boussard, Vibert Dopaminergic neuromodulation brings a dynamical plasticity to the retina NS-03 Doya, Selverston, Rowat A Hodgkin-Huxley Type Neuron Model that Learns Slow Non-Spike Oscillation NS-04 Gusik, Eaton Directional Hearing by the Mauthner System NS-05 Horiuchi, Bishofberger, Koch Building an Analog VLSI, Saccadic Eye Movement System NS-06 Lewicki Bayesian Modeling and Classification of Neural Signals NS-07 Montague, Dayan, Sejnowski Foraging in an Uncertain Environment Using Predictive Hebbian Learning NS-08 Rosen, Rumelhart, Knudsen A Connectionist Model of the Owl's Sound Localization System NS-09 Sanger Optimal Unsupervised Motor Learning Predicts the Internal Representation of Barn Owl Head Movements NS-10 Siegal An Analog VLSI Model Of Central Pattern Generation In The Medicinal Leech NS-11 Usher, Stemmler, Koch High spike rate variability as a consequence of network amplification of local fluctuations Visual Processing (VP) VP-1 Ahmad Feature Densities are Required for Computing Feature Correspondences VP-2 Buracas, Albright Proposed function of MT neurons' receptive field surrounds: computing shapes of objects from velocity fields VP-3 Geiger, Diamantaras
Resolving motion ambiguities VP-4 Mjolsness Two-Dimensional Object Localization by Coarse-to-fine Correlation Matching VP-5 Sajda, Finkel Dual Mechanisms for Neural Binding and Segmentation and Their Role in Cortical Integration VP-6 Yuille, Smirnakis, Xu Bayesian Self-Organization Implementations (IM) IM-01 Andreou, Edwards VLSI Phase Locking Architecture for Feature Linking in Multiple Target Tracking Systems IM-02 Coggins, Jabri WATTLE: A Trainable Gain Analogue VLSI Neural Network IM-03 Elfadel, Wyatt The "Softmax" Nonlinearity: Derivation Using Statistical Mechanics and Useful Properties as a Multiterminal Analog Circuit Element IM-04 Muller, Kocheisen, Gunzinger High Performance Neural Net Simulation on a Multiprocessor System with "Intelligent" Communication IM-05 Murray, Burr, Stork, et al. Digital Boltzmann VLSI for constraint satisfaction and learning IM-06 Niebur, Brettle Efficient Simulation of Biological Neural Networks on Massively Parallel Supercomputers with Hypercube Architecture IM-07 Oliveira, Sangiovanni-Vincentelli Learning Complex Boolean Functions: Algorithms and Applications IM-08 Shibata, Kotani, Yamashita et al. Implementing Intelligence on Silicon Using Neuron-Like Functional MOS Transistors IM-09 Watts Event-Driven Simulation of Networks of Spiking Neurons  From wong at redhook.llnl.gov Mon Oct 4 17:17:22 1993 From: wong at redhook.llnl.gov (Issac Wong) Date: Mon, 4 Oct 93 14:17:22 PDT Subject: reprint available: nonlinear scale-space filtering Message-ID: <9310042117.AA20057@redhook.llnl.gov> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/wong.scale_space.ps.Z The file wong.scale_space.ps.Z is now available for copying from the Neuroprose repository: A Nonlinear Scale-Space Filter by Physical Computation Yiu-fai Wong Institute for Scientific Computing Research, L-426 Lawrence Livermore National Laboratory Livermore, CA 94551 E-mail: wong at redhook.llnl.gov Abstract--- Using the maximum entropy principle and statistical mechanics, we derive and demonstrate a nonlinear scale-space filter. For each datum in a signal, a neighborhood of weighted data is used for scale-space clustering. The cluster center becomes the filter output. The filter is governed by a single scale parameter which dictates the spatial extent of nearby data used for clustering. This, together with the local characteristics of the signal, determines the scale parameter in the output space, which dictates the influence of these data on the output. This filter is thus completely unsupervised and data-driven. It provides a mechanism for a) removing noise, b) preserving edges, and c) improved smoothing of nonimpulsive noise. This filter presents a new mechanism for detecting discontinuities, differing from techniques based on local gradients and line processes. We demonstrate the filter using real images. This work shows that scale-space filtering, nonlinear filtering and scale-space clustering are closely related and provides a framework within which further image processing, image coding and computer vision problems can be investigated. This work has been presented at IEEE Conf. Computer Vision and Pattern Recognition and IEEE Workshop on Neural Networks for Signal Processing, 1993.
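(Aside: the procedure the abstract describes, iterating a weighted local mean until it settles on a cluster center, can be sketched in a few lines of Python/NumPy for a 1-D signal. The Gaussian weighting, the reuse of the single scale h for the output-space weighting, and the convergence tolerance here are assumptions for illustration; the paper derives the output-space scale from local signal characteristics rather than reusing h.)

import numpy as np

def scale_space_filter(signal, h=3.0, n_iter=20, tol=1e-6):
    # For each datum, iterate a Gaussian-weighted local mean (mean-shift
    # style) until it settles on a nearby cluster center; that center
    # becomes the filter output. h is the single scale parameter.
    signal = np.asarray(signal, dtype=float)
    pos = np.arange(signal.size, dtype=float)
    out = np.empty_like(signal)
    for i in range(signal.size):
        m = signal[i]                # start the cluster estimate at the datum
        for _ in range(n_iter):
            w_space = np.exp(-0.5 * ((pos - pos[i]) / h) ** 2)  # spatial extent
            w_range = np.exp(-0.5 * ((signal - m) / h) ** 2)    # output-space weight
            w = w_space * w_range
            m_new = np.dot(w, signal) / w.sum()
            if abs(m_new - m) < tol:
                break
            m = m_new
        out[i] = m
    return out

(Running this on a noisy step, e.g. scale_space_filter(np.r_[np.zeros(50), np.ones(50)] + 0.1 * np.random.randn(100), h=4.0), smooths each plateau while leaving the edge sharp, which is the noise-removal-with-edge-preservation behaviour the abstract claims.)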
--Isaac Wong from Lawrence Livermore Lab  From P.Refenes at cs.ucl.ac.uk Tue Oct 5 11:48:08 1993 From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk) Date: Tue, 05 Oct 93 16:48:08 +0100 Subject: No subject Message-ID: <mailman.626.1149591290.29955.connectionists@cs.cmu.edu> CALL FOR PARTICIPATION 1ST INTERNATIONAL WORKSHOP NEURAL NETWORKS IN THE CAPITAL MARKETS LONDON BUSINESS SCHOOL, NOVEMBER 18-19 1993 Neural networks have now been applied to a number of live systems in the capital markets and in many cases have demonstrated better performance than competing approaches. Now is the time to take a critical look at their successes and limitations and to assess their capabilities, research issues and future directions. This workshop presents original papers which represent new and significant research, development and applications in finance and investment and which cover key areas of time series forecasting, multivariate dataset analysis, classification and pattern recognition. Application areas include: - Bond and Stock Valuation and Trading - Univariate time series analysis - Asset allocation and risk management - Multivariate data analysis - Foreign exchange rate prediction - Classification and ranking - Commodity price forecasting - Pattern Recognition - Portfolio management - Hybrid systems PROGRAMME COMMITTEE Prof. N. Biggs - London School of Economics Prof. D. Bunn - London Business School Dr J. Moody - Oregon Graduate Institute Dr A. Refenes - London Business School Prof. M. Steiner - Universitaet Munster Dr A. Weigend - University of Colorado VENUE All sessions will be held at the London Business School which is situated overlooking Regents Park and is a short walk from Baker Street Underground Station. Further directions including a map will be sent to all registrants. PROVISIONAL PROGRAMME November 18 8.30 Registration 9.00 Session 1 ADVANCES IN NEURAL NETWORKS Chair: D. Bunn, London Business School "Predicting the future and understanding the past" A. Weigend, University of Colorado "Non-linear behaviour of financial markets" A. Antoniou, Brunel University "Neural networks for financial engineering" A. Refenes, London Business School "Designing neural networks: Computational Learning Theory" N. Biggs, London School of Economics 12.00 Lunch 2.00 Session 2 FOREIGN EXCHANGE: PREDICTION AND TRADING Chair: B. Davies, BZW Invited talk: "Learning and forecasting from hints" Y. Abu-Mostafa, California Institute of Technology 5.00 Poster Session November 19 9.00 Session 3 BONDS AND DERIVATIVES Chair: P. Sondhi, CitiBank Invited talk: "Bond rating using neural networks" J. Moody, Oregon Graduate Institute 12.00 Lunch 2.00 Session 4 EQUITIES Chair: S. Lamoine, Societe Generale Invited talk: "Neural networks as an alternative market model" M. Steiner, Universitaet Munster 5.00 Panel Session 6.00 End of workshop Submitted Papers include: - An investigation into the use of simulated artificial neural networks for forecasting the movement of foreign currency exchange Thomas M. Seiler & Jay E. Aronson Nova University, Florida, USA - Short-Term Forecasting of the USD/DM-Exchange Rate Dr. Thorsten Poddig Universität Bamberg, Germany - Estimation of implied volatilities using a neural network approach Fernando Gonzalez Miranda University of Madrid, Spain - Estimating Tax Inflows at a Public Institution D. E. Baestaens, W. M. van den Bergh & H.
Vaudrey Erasmus University Rotterdam, The Netherlands - Genetic Programming for Strategy Acquisition in the Financial Markets Martin Andrews Cambridge University, U.K. - Bond Rating with Neural Networks J. Clay Singleton & Alvin J. Surkan University of North Texas, USA - Feedforward Neural Network and Canonical Correlation Models as Approximators with an Application to One-Year Ahead Forecasting Dr P. W. Otter Faculty of Economics, Groningen, The Netherlands - Dependency Analysis and Neural Network Modeling of Currency Exchange Rates Hong Pi Lund University, Sweden - Neural Networks in Financial Forecasting - How to develop Forecasting Models Prof. Dr. W. Gerke & S. Baun Friedrich-Alexander-Universität, Nürnberg, Germany - Results of a simple trading scheme based on an Artificial Neural Network on the Austrian Stock Market Christian Haefke Institute for Advanced Studies, Vienna, Austria - Topology-Preserving Neural Architectures and Multidimensional Scaling for Multivariate Data Analysis C. Serrano-Cinca, C. Mar-Molinero & B. Martin-Del-Brio University of Zaragoza, Spain - Important factors in Neural Networks- Forecasts of Gold Futures Prices Gary Grudnitski San Diego State University, USA - Economic Forecasting with Neural Nets: a Computational Learning Theory View Martin Anthony & Norman L. Biggs London School of Economics - Application of Neural Networks in Short-Term Stock Price Forecasting G. M. Papadourakis, G. Spanoudakis & A. Gotsias Institute of Computer Science, Heraklion, Greece - Artificial Neural Networks for Treasury Bills Rate Forecasting Leonardo Landi & Emilio Barucci Universita di Firenze, Italy - Forecasting the German Inflation Rate Wietske van Zwol & Albert Bolts Tilburg Institute for Applied Economic Research, The Netherlands - Predicting Gold Prices With Neural Networks: Multivariate vs Univariate Analysis M. Pacheco, M. Vellasco & A. Abelim Pontificia Universidade Catolica do Rio de Janeiro, Brazil - Is mean-reversion on stock indices a linear effect? D. C. Meir, R. Pfeifer, R. Demostene & C. Sheier Universität Zürich, Switzerland - Neural Nets for Time Series Forecasting: Criteria for Performance with an Application in Gilt Futures Pricing Jason Kingdon Department of Computer Science, University College London - Financial Time Series Forecasting of Recurrent Artificial Neural Network Techniques Dr. Ah Chung Tsoi, Clarence N.W. Tan & Stephen Lawrence Bond University, Australia - Application of Sensitivity Analysis techniques to Neural Network Bond Forecasting U. Bilge, A.N. Refenes, C. Diamond & J. Shalbolt Department of Computer Science, University College London - Multivariate Prediction of financial time series using recent developments in chaos theory Andrew Edmonds Prophecy Systems, England - Hybrid Technologies for Far East Markets or "The Persistence of Memory" from Salvador Dali Lee Chay Tiam Smith Barney, Singapore - Nonlinearities in financial markets A. Antoniou & V. Bekos Brunel University - Using Neural Networks for modelling the French Stock Market A. Zapranis, Y. Bentz & A.N. Refenes London Business School HOTEL DETAILS Convenient hotels include: London Regents Park Hilton 18 Lodge Road, St.
Johns Wood, London NW8 7JT Tel: (071) 722 7722 Fax: (071) 483 2408 Sherlock Holmes Hotel 108 Baker Street, London NW1 1LB Tel: (071) 486 6161 Fax: (071) 486 0884 The White House Hotel Albany St, Regents Park, London NW1 Tel: (071) 387 1200 Fax: (071) 388 0091 REGISTRATION To register, complete the form and mail to: Helen Tracey, London Business School, Sussex Place, Regents Park, London NW1 4SA, UK. Please note that places are limited and will be allocated on a "first-come first-served" basis. For additional information call: (44)-71-262-5050 ext. 3507 Fax: (44)-71-724-7875 ----------------------------------------------------------- REGISTRATION FORM First International Workshop on Neural Networks in the Capital Markets November 18-19, 1993 Name:__________________________________________ Affiliation:___________________________________ Address:_______________________________________ _______________________________________________ Telephone:_____________________________________ Workshop Fee: 200 pounds sterling Payment may be made by: (please tick) [ ] Cheque payable to London Business School [ ] VISA [ ] Access [ ] American Express Card number: ___________________________________ -----------------------------------------------------------  From hwang at pierce.ee.washington.edu Tue Oct 5 18:34:57 1993 From: hwang at pierce.ee.washington.edu (Jenq-Neng Hwang) Date: Tue, 5 Oct 93 15:34:57 PDT Subject: No subject Message-ID: <9310052234.AA18754@pierce.ee.washington.edu.> Technical Report available from neuroprose: 26 single spaced pages (13 pages of text and 13 pages of figures) WHAT'S WRONG WITH A CASCADED CORRELATION LEARNING NETWORK: A PROJECTION PURSUIT LEARNING PERSPECTIVE Jenq-Neng Hwang, Shih-Shien You, Shyh-Rong Lay, I-Chang Jou Information Processing Laboratory Department of Electrical Engineering, FT-10, University of Washington, Seattle, WA 98195. Telecommunication Laboratories Ministry of Transportation and Communications P.O. Box 71, Chung-Li, Taiwan 320, R.O.C. ABSTRACT: Cascaded correlation is a popular supervised learning architecture that dynamically grows layers of hidden neurons of fixed nonlinear activations (e.g., sigmoids), so that the network topology (size, depth) can be efficiently determined. Similar to a cascaded correlation learning network (CCLN), a projection pursuit learning network (PPLN) also dynamically grows the hidden neurons. Unlike a CCLN, where cascaded connections from the existing hidden units to the new candidate hidden unit are required to establish high-order nonlinearity in approximating the residual error, a PPLN approximates the high-order nonlinearity by using (more flexible) trainable nonlinear nodal activation functions. Moreover, the maximum correlation training criterion used in a CCLN results in a poorer estimate of hidden weights when compared with the minimum mean squared error criterion used in a PPLN. The CCLN is thus ruled out for most regression applications, where smooth interpolation of functional values is highly desired. Furthermore, it is shown that the PPLN can also achieve much better performance in solving the two-spiral classification benchmark using a comparable number of weight parameters.
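For readers unfamiliar with the two training criteria the abstract contrasts: a cascade-correlation candidate unit is trained to maximize the magnitude of the covariance between its output and the residual errors, while a projection-pursuit unit fits the residuals by least squares. A minimal sketch in Python, with array shapes and names chosen for illustration rather than taken from the report:

import numpy as np

def cascade_correlation_score(v, E):
    # v: candidate-unit outputs over P training patterns, shape (P,).
    # E: residual errors at the network outputs, shape (P, n_outputs).
    # A CCLN trains the candidate to MAXIMIZE this covariance magnitude,
    # summed over output units.
    v_c = v - v.mean()
    E_c = E - E.mean(axis=0)
    return np.abs(v_c @ E_c).sum()

def mse_score(v, E, w):
    # PPLN-style criterion: squared error remaining after the candidate,
    # scaled by output weights w of shape (n_outputs,), fits the residuals.
    # A PPLN trains the candidate (and w) to MINIMIZE this.
    return ((E - np.outer(v, w)) ** 2).mean()

Maximizing the covariance drives the candidate toward extreme, saturated responses; that is one way to see why the abstract finds the correlation criterion a poorer estimator of hidden weights for smooth regression than the least-squares fit.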
================ To obtain copies of the postscript file, please use Jordan Pollack's service (no hardcopies will be provided): Example: unix> ftp archive.cis.ohio-state.edu (or ftp 128.146.8.52) Name (archive.cis.ohio-state.edu): anonymous Password (archive.cis.ohio-state.edu:anonymous): <ret> ftp> cd pub/neuroprose ftp> binary ftp> get hwang.cclppl.ps.Z ftp> quit unix> uncompress hwang.cclppl.ps Now print "hwang.cclppl.ps" as you would any other (postscript) file. In case your printer has limited memory, you can divide this file into two smaller files after the uncompress: unix>> head -42429 hwang.cclppl.ps > file1.ps unix>> tail +42430 hwang.cclppl.ps > file2.ps Then print "file1.ps" and "file2.ps" separately.  From thildebr at aragorn.csee.lehigh.edu Tue Oct 5 14:08:11 1993 From: thildebr at aragorn.csee.lehigh.edu (Thomas Hildebrandt) Date: Tue, 5 Oct 93 14:08:11 -0400 Subject: NIPS Workshop: Selective Attention Message-ID: <9310051808.AA05643@aragorn.csee.lehigh.edu> I wish to call your attention to a workshop on selective attention which I will be hosting at this year's NIPS conference. =================================================================== NIPS*93 Postconference Workshop Functional Models of Selective Attention and Context Dependency December 4, 1993 Intended Audience: Those applying NNs to vision and speech analysis and pattern recognition tasks, as well as computational neurobiologists modelling attentional mechanisms. Organizer: Thomas H. Hildebrandt thildebr at athos.eecs.lehigh.edu ABSTRACT: Classification based on trainable models still fails to achieve the current ideal of human-like performance. One identifiable reason for this failure is the disparity between the number of training examples needed to achieve good performance (large) and the number of labelled samples available for training (small). On certain tasks, humans are able to generalize well when given only one exemplar. Clearly, a different mechanism is at work. In human behavior, there are numerous examples of selective attention improving a person's recognition capabilities. Models using context or selective attention seek to improve classification performance by modifying the behavior of a classifier based on the current (and possibly recent) input data. Because they treat learning and contextual adaptation as two different processes, these models solve the memory/plasticity dilemma by incorporating both. In other words, they differ fundamentally from models which attempt to provide contextual adaptation by allowing all the weights in the network to continue evolving while the system is in operation. Schedule December 4, 1993 ======== ================ 7:30 - 7:35 Opening Remarks 7:35 - 8:00 Current Research in Selective Attention Thomas H. Hildebrandt, Lehigh University 8:00 - 8:30 Context-varying Preferences and Traits in a Class of Neural Networks Daniel S. Levine, University of Texas at Arlington Samuel J. Leven, For a New Social Science 8:30 - 9:00 ETS - A Formal Model of an Evolving Learning Machine L.Goldfarb, J.Abela, V.Kamat, University of New Brunswick 9:00 - 9:30 Recognizing Handwritten Digits Using a Selective Attention Mechanism Ethem Alpaydin, Bogazici University, Istanbul TURKEY 9:30 - 4:30 FREE TIME 4:30 - 5:00 Context and Selective Attention in the Capital Markets P. N. Refenes, London Business School 5:00 - 5:30 The Global Context-Sensitive Constraint Satisfaction Property in Adaptive Perceptual Pattern Recognition Jonathan A. 
Marshall, University of North Carolina 5:30 - 6:00 Neural Networks for Context Sensitive Representation of Synonymous and Homonymic Patterns Albert Nigrin, American University 6:00 - 6:30 Learn to Pay Attention, Young Network! Barak A. Pearlmutter, Siemens Corp. Research Ctr., Princeton NJ 6:30 - 6:35 Closing Remarks 7:00 Workshop Wrap-Up (common to all sessions) ===================================================================== The topic to be covered differs from that recently announced by Ernst Niebur and Bruno Olshausen, in that "functional" models are not necessarily tied to neurophysiological structures. Thanks to the Workshop Chair, Mike Mozer, the two workshops were scheduled on different days, so that it is possible for interested parties to attend both. An electronic copy of the 1993 NIPS registration brochure is available in postscript format via anonymous ftp at helper.systems.caltech.edu in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure or other information, please send a request to nips93 at systems.caltech.edu or to: NIPS Foundation, P.O. Box 60035, Pasadena, CA 91116-6035. Feel free to contact me for more information on the workshop. Thomas H. Hildebrandt Electrical Engineering & Computer Science Lehigh University Bethlehem, PA 18015 Work: (215) 758-4063 FAX: (215) 758-6279 thildebr at athos.eecs.lehigh.edu  From brunak at cbs.dth.dk Tue Oct 5 07:04:07 1993 From: brunak at cbs.dth.dk (Soren Brunak) Date: Tue, 5 Oct 93 12:04:07 +0100 Subject: Positions in BIOCOMPUTING Message-ID: <mailman.627.1149591290.29955.connectionists@cs.cmu.edu> Positions in BIOCOMPUTING at the Danish Center for Biological Sequence Analysis, Department of Physical Chemistry, The Technical University of Denmark (Lyngby). A number of pre- and post-doctoral positions are available at the newly formed Center for Biological Sequence Analysis. They have a duration of one, two and three years, starting late 1993 or early 1994. The center is funded by a five-year grant from the Danish National Research Foundation and conducts an active research program in biomolecular sequence and structure analysis with emphasis on novel adaptive computational strategies. The Technical University of Denmark is situated in Lyngby just outside Copenhagen. The center offers employment to researchers with a background primarily in the natural sciences, molecular biology, genetics, chemistry and physics. We seek individuals with additional competence and interest in areas of computer science, but not with this area as the main subject of expertise. Priority will be given to younger scientists with experience in some of the following areas (in alphabetical order): experimental molecular biology, information theory and statistics, mathematical analysis, neural computation, protein folding, physics of computation and complex systems, and sequence analysis. In a wide range of projects the center collaborates with national and foreign groups using novel adaptive computational methods many of which have received attention in the biocomputing context only recently. The center is characterized by the use of new approaches both regarding the algorithmic aspect of the simulation methods as well as the use of advanced hardware. A wide range of parallel and cluster computing environments is available locally at the center; a nearby supercomputer center offers easy access to CRAY and Connection Machine facilities. 
Among the research topics are pre--mRNA splicing, recognition of vertebrate promoters, RNA folding, protein structure prediction, proteolytic processing of polyproteins, signal peptide recognition, phylogenies, global multiple sequence alignment and dedicated sequence analysis hardware. The results are evaluated through intensive exchange with experimentalists. For further information, feel free to contact us at the address below. Applicants should send their resumes to Soren Brunak Center director Center for Biological Sequence Analysis Department of Physical Chemistry The Technical University of Denmark Building 206 DK-2800 Lyngby Denmark Tel: +45-42882222, ext. 2477 Fax: +45-45934808 Email: brunak at cbs.dth.dk  From B.DASGUPTA at fs3.mbs.ac.uk Wed Oct 6 11:09:39 1993 From: B.DASGUPTA at fs3.mbs.ac.uk (BHASKAR DASGUPTA ALIAS BD) Date: 6 Oct 93 11:09:39 BST Subject: references neural networks and time series Message-ID: <55A134A2613@fs3.mbs.ac.uk> Thanks to all who replied to my request for references to applications of neural networks to time series forecasting and apologies for the delay. I have now done a preliminary compilation, its more than 100 references. I frankly did not know that!. Well, anyway, I do not have access to an FTP so, if anyone requires a copy of this file, please email me, and I shall send the references immediately. Cheers and Thanks =================================================================== Bhaskar Dasgupta |\ /| //====\ /======= Manchester Business School ||\\ //|| || || || Booth Street West, || \/ || || || || Manchester M15 6PB, || || ||=====<< \======\ UK || || || || || Phone::+61-275-6547 || || || || || Fax::+67-273-7732. || || ||======/ =======/ =================================================================== Chaos is the rule of Nature Order is the dream of Man ===================================================================  From reza at ai.mit.edu Wed Oct 6 09:21:03 1993 From: reza at ai.mit.edu (Reza Shadmehr) Date: Wed, 6 Oct 93 09:21:03 EDT Subject: Tech Report from CBCL at MIT Message-ID: <9310061321.AA01574@corpus-callosum.ai.mit.edu> The following technical report from the Center for Biological and Computational Learning at M.I.T. is now available via anonymous ftp. ------------- :CBCL Paper #78/AI Memo #1405 :author Amnon Shashua (amnon at ai.mit.edu) :title On Geometric and Algebraic Aspects of 3D Affine and Projective Structures from Perspective 2D Views :date July 1993 :pages 14 Part I of this paper investigates the differences --- conceptually and algorithmically --- between affine and projective frameworks for the tasks of visual recognition and reconstruction from perspective views. It is shown that an affine invariant exists between any view and a fixed view chosen as a reference view. This implies that for tasks for which a reference view can be chosen, such as in alignment schemes for visual recognition, projective invariants are not really necessary. The projective extension is then derived, showing that it is necessary only for tasks for which a reference view is not available --- such as happens when updating scene structure from a moving stereo rig. In part II we use the affine invariant to derive new algebraic connections between perspective views. It is shown that three perspective views of an object are connected by certain algebraic functions of image coordinates alone (no structure or camera geometry needs to be involved). In the general case, three views satisfy a trilinear function of image coordinates. 
In the case where two of the views are orthographic and the third is perspective, the function reduces to a bilinear form. In the case where all three views are orthographic, the function reduces further to a linear form (the ``linear combination of views'' of \cite{Ullman-Basri89}). These functions are shown to be useful for recognition, among other applications. -------------- How to get a copy of this report: The files are in compressed postscript format and are named by their AI memo number. They are put in a directory named as the year in which the paper was written. Here is the procedure for ftp-ing: unix> ftp publications.ai.mit.edu (128.52.32.22, log-in as anonymous) ftp> cd ai-publications/1993 ftp> binary ftp> get AIM-number.ps.Z ftp> quit unix> zcat AIM-number.ps.Z | lpr Best wishes, Reza Shadmehr Center for Biological and Computational Learning M. I. T. Cambridge, MA 02139  From kolen-j at cis.ohio-state.edu Wed Oct 6 12:01:50 1993 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Wed, 6 Oct 93 12:01:50 -0400 Subject: Reprint Announcement Message-ID: <9310061601.AA05113@pons.cis.ohio-state.edu> This is an announcement of a newly available paper in neuroprose: RECURRENT NETWORKS: STATE MACHINES OR ITERATED FUNCTION SYSTEMS? John F. Kolen Laboratory for AI Research Department of Computer and Information Science The Ohio State University Columbus, OH 43210 kolen-j at cis.ohio-state.edu Feedforward neural networks process information by performing fixed transformations from one representation space to another. Recurrent networks, on the other hand, process information quite differently. To understand recurrent networks one must confront the notion of state, since recurrent networks perform iterated transformations on state representations. Many researchers have recognized this difference and have suggested parallels between recurrent networks and various automata. First, I will demonstrate how the common notion of deterministic information processing does not necessarily hold for deterministic recurrent neural networks whose dynamics are sensitive to initial conditions. Second, I will link the mathematics of recurrent neural network models with that of iterated function systems. This link points to model-independent constraints on the recurrent network state dynamics that explain universal behaviors of recurrent networks like internal state clustering. This paper will appear in The Proceedings of the 1993 Connectionist Models Summer School. ************************ How to obtain a copy ************************ Via Anonymous FTP: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose ftp> binary ftp> get kolen.rnifs.ps.Z ftp> quit unix> uncompress kolen.rnifs.ps.Z unix> lpr kolen.rnifs.ps (or what you normally do to print PostScript)  From kruschke at pallas.psych.indiana.edu Wed Oct 6 12:13:58 1993 From: kruschke at pallas.psych.indiana.edu (John Kruschke) Date: Wed, 6 Oct 1993 11:13:58 -0500 (EST) Subject: job opening at Indiana: Cognitive Science/Psychology Message-ID: <mailman.628.1149591290.29955.connectionists@cs.cmu.edu>
From smieja at nathan.gmd.de Wed Oct 6 12:35:54 1993 From: smieja at nathan.gmd.de (Frank Smieja) Date: Wed, 6 Oct 1993 17:35:54 +0100 Subject: TR announcement Message-ID: <199310061635.AA14313@trillian.gmd.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/smieja.pandemonium.ps.Z The file smieja.pandemonium.ps.Z is now available for copying from the Neuroprose repository: The Pandemonium System of Reflective Agents (17 pages) by Frank Smieja GMD, Bonn, Germany ABSTRACT: The Pandemonium system of reflective MINOS agents solves problems by automatic dynamic modularization of the input space. The agents contain feed-forward neural networks which adapt using the back-propagation algorithm. We demonstrate the performance of Pandemonium on various categories of problems. These include learning continuous functions with discontinuities, separating two spirals, learning the parity function, and optical character recognition. It is shown how strongly the advantages gained from using a modularization technique depend on the nature of the problem. The superiority of the Pandemonium method over a single net on the first two test categories is contrasted with its limited advantages for the second two categories. In the first case the system converges more quickly with modularization and is seen to lead to simpler solutions. For the second case the problem is not significantly simplified through flat decomposition of the input space, although convergence is still faster. -Frank Smieja Gesellschaft fuer Mathematik und Datenverarbeitung (GMD) GMD-FIT.KI.AS, Schloss Birlinghoven, 53757 St Augustin, Germany. Tel: +49 2241-142214 email: smieja at gmd.de  From Announce at PARK.BU.EDU Thu Oct 7 16:09:31 1993 From: Announce at PARK.BU.EDU (Announce@PARK.BU.EDU) Date: Thu, 7 Oct 93 16:09:31 -0400 Subject: Graduate study in Cognitive and Neural Systems at Boston University Message-ID: <9310072009.AA04230@retina.bu.edu> (please post) *********************************************** * * * DEPARTMENT OF * * COGNITIVE AND NEURAL SYSTEMS (CNS) * * AT BOSTON UNIVERSITY * * * *********************************************** Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies The Boston University Department of Cognitive and Neural Systems offers comprehensive advanced training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall 1994 admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: Department of Cognitive & Neural Systems Boston University 111 Cummington Street, Room 240 Boston, MA 02215 617/353-9481 (phone) 617/353-7755 (fax) or send via email your full name and mailing address to: rll at cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases.
Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self- organization; associative learning and long-term memory; computational neuroscience; nerve cell biophysics; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; active vision; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique features. It has developed a curriculum that consists of twelve interdisciplinary graduate courses each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Nine additional advanced courses, including research seminars, are also offered. Each course is typically taught once a week in the evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as Biology, Computer Science, Engineering, Mathematics, and Psychology, in addition to courses in the CNS curriculum. The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems (CAS). Students interested in neural network hardware work with researchers in CNS, the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. 
In addition to its basic research and training program, the Department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. 1993-94 CAS MEMBERS and CNS FACULTY: Jacob Beck Daniel H. Bullock Gail A. Carpenter Chan-Sup Chung Michael A. Cohen H. Steven Colburn Paolo Gaudiano Stephen Grossberg Frank H. Guenther Thomas G. Kincaid Nancy Kopell Ennio Mingolla Heiko Neumann Alan Peters Adam Reeves Eric L. Schwartz Allen Waxman Jeremy Wolfe  From tajchman at ICSI.Berkeley.EDU Thu Oct 7 20:32:57 1993 From: tajchman at ICSI.Berkeley.EDU (Gary Tajchman) Date: Thu, 7 Oct 93 17:32:57 PDT Subject: New Book Announcement Message-ID: <9310080032.AA05515@icsib28.ICSI.Berkeley.EDU> I thought this might be of interest to folks on connectionists. Kluwer Academic has just published a book by H. Bourlard and N. Morgan called ``CONNECTIONIST SPEECH RECOGNITION: A Hybrid Approach''. In the words of the back cover description, this book ``describes the theory and implementation of a method to incorporate neural network approaches into state-of-the-art continuous speech recognition systems based on Hidden Markov Models (HMMs) to improve their performance.'' The book is based on work done in a 5-year trans-Atlantic collaboration between Bourlard and Morgan, and puts together in one place what is otherwise scattered over a bunch of conference and journal papers. If you would like more information please send email to N. Morgan at morgan at icsi.berkeley.edu, or reply to this message. __________________________________________________________________________________ Gary Tajchman tajchman at icsi.berkeley.edu International Computer Science Institute TEL: (510)642-4274 1947 Center St., Suite 600 FAX: (510)643-7684 Berkeley, CA  From rmeir at ee.technion.ac.il Fri Oct 8 08:03:01 1993 From: rmeir at ee.technion.ac.il (Ron Meir) Date: Fri, 8 Oct 93 10:03:01 -0200 Subject: Paper announcement Message-ID: <9310081203.AA26234@ee.technion.ac.il> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/meir.compress.ps.Z FTP-filename: /pub/neuroprose/meir.learn.ps.Z **PLEASE DO NOT FORWARD TO OTHER GROUPS** The following two papers are now available in the neuroprose directory. The papers are 10 and 11 pages long, respectively. Sorry, but no hardcopies are available. Data Compression and Prediction in Neural Networks Ronny Meir, Department of EE, Technion, Haifa 32000, Israel (rmeir at ee.technion.ac.il) and Jose F. Fontanari, Department of Physics, University of Sao Paulo, 13560 Sao Carlos, Brazil (fontanari at uspfsc.ifqsc.usp.ansp.br) We study the relationship between data compression and prediction in single-layer neural networks of limited complexity. Quantifying the intuitive notion of Occam's razor using Rissanen's minimum complexity framework, we investigate the model-selection criterion advocated by this principle. While we find that the criterion works well for large sample sizes (as it must for consistency), the behavior for finite sample sizes is rather complex, depending intricately on the relationship between the complexity of the hypothesis space and the target space. We also show that the limited networks studied perform efficient data compression, even in the error-full regime.
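The minimum complexity framework invoked in this abstract is usually written as a two-part description length; in LaTeX, the standard asymptotic form (not necessarily the exact criterion analyzed in the paper) is

\[
  \mathrm{DL}(M; D) \;=\; -\log P(D \mid \hat{\theta}_M) \;+\; \frac{k_M}{2} \log P ,
\]

where $\hat{\theta}_M$ are the fitted parameters of model $M$, $k_M$ is its number of free parameters, and $P$ is the sample size; model selection picks the $M$ that minimizes $\mathrm{DL}$, trading fit to the data against model complexity.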
------------------------------------------------------------------------------ Learning Algorithms, Input Distributions and Generalization Ronny Meir Department of Electrical Engineering Technion Haifa 32000, Israel rmeir at ee.technion.ac.il We study the interaction between input distributions, learning algorithms and finite sample sizes in the case of learning classification tasks. Focusing on the case of normal input distributions, we use statistical mechanics techniques to calculate the empirical and expected (or generalization) errors for several well-known algorithms learning the weights of a single-layer perceptron. In the case of spherically symmetric distributions within each class we find that the simple Hebb algorithm is optimal. Moreover, we show that in the regime where the overlap between the classes is large, algorithms with low empirical error do worse in terms of generalization, a phenomenon known as over-training. -------------------------------------------------------------------------- To obtain copies: ftp cheops.cis.ohio-state.edu login: anonymous password: <your email address> cd pub/neuroprose binary get meir.compress.ps.Z get meir.learn.ps.Z quit Then at your system: uncompress meir.*.ps.Z lpr -P<printer-name> meir.*.ps  From smieja at nathan.gmd.de Mon Oct 11 09:32:55 1993 From: smieja at nathan.gmd.de (Frank Smieja) Date: Mon, 11 Oct 1993 14:32:55 +0100 Subject: TR Anouncement: exploration Message-ID: <199310111332.AA20912@trillian.gmd.de> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/beyer.explore.ps.Z The file beyer.explore.ps.Z is now available for copying from the Neuroprose repository: Learning from Examples using Reflective Exploration (17 pages) U. Beyer and F. Smieja GMD (Germany) ABSTRACT: An important property of models constructed through the process of learning from examples is the manipulation and control of the data itself. When the data is actively selected or generated the process is known as {\it exploration}. Reflection about the internal model allows exploration to be more than just a random choice in the input space. In this paper we identify two basic forms of reflective exploration: density-based and error-based. We demonstrate the applicability of exploration processes and the advantages of using them in open systems using the task of learning a 2-dimensional continuous function. - Frank Smieja Gesellschaft fuer Mathematik und Datenverarbeitung (GMD) GMD-FIT.KI.AS, Schloss Birlinghoven, 53757 St Augustin, Germany. Tel: +49 2241-142214 email: smieja at gmd.de  From seifert at psych.lsa.umich.edu Mon Oct 11 11:53:17 1993 From: seifert at psych.lsa.umich.edu (Colleen Seifert) Date: Mon, 11 Oct 93 10:53:17 -0500 Subject: position In-Reply-To: Your message <Mailstrom.1.03.28195.-15445.seifert@psych.lsa.umich.edu> of Wed, 8 Sep 93 11:10:27 -0500 Message-ID: <Mailstrom.1.03.60317.-9246.seifert@psych.lsa.umich.edu> Position in Cognitive Psychology University of Michigan The University of Michigan Department of Psychology invites applications for a tenure-track position in the area of Cognition, beginning September 1, 1994. The appointment will most likely be made at the Assistant Professor level, but it may be possible at other ranks. We seek candidates with primary interests and technical skills in cognitive psychology. Our primary goal is to hire an outstanding cognitive psychologist, and thus we will look at candidates with any specific research interest. 
We have a preference for candidates interested in higher mental processes or for candidates with computational modeling skills (including connectionism) or an interest in cognitive neuroscience. Responsibilities include graduate and undergraduate teaching, as well as research and research supervision. Send curriculum vitae, letters of reference,copies of recent publications, and a statement of research and teaching interests no later than January 7, 1994 to: Gary Olson, Chair, Cognitive Processes Search Committee, Department of Psychology, University of Michigan, 330 Packard Road, Ann Arbor, Michigan 48104. The University of Michigan is an Equal Opportunity/Affirmative Action employer.  From lpratt at slate.Mines.Colorado.EDU Mon Oct 11 13:46:41 1993 From: lpratt at slate.Mines.Colorado.EDU (Lorien Pratt) Date: Mon, 11 Oct 1993 11:46:41 -0600 Subject: Motif version of hyperplane animator available Message-ID: <9310111746.AA53873@slate.Mines.Colorado.EDU> ----------------------------------- Announcing the availability of an X-based neural network hyperplane animator Version 1.01 October 10, 1993 ----------------------------------- Lori Pratt and Steve Nicodemus Department of Mathematical and Computer Sciences Colorado School of Mines Golden, CO 80401 USA lpratt at mines.colorado.edu Understanding neural network behavior is an important goal of many research efforts. Although several projects have sought to translate neural network weights into symbolic representations, an alternative approach is to understand trained networks graphically. Many researchers have used a display of hyperplanes defined by the weights in a single layer of a back-propagation neural network. In contrast to some network visualization schemes, this approach shows both the training data and the network parameters that attempt to fit those data. At NIPS 1990, Paul Munro presented a video which demonstrated the dynamics of hyperplanes as a network changes during learning. The program displayed ran on a Stardent 4000 graphics engine, and was implemented at Siemens. At NIPS 1991, we demonstrated an X-based hyperplane animator, similar in appearance to Paul Munro's, but with extensions to allow for interaction during training. The user may speed up, slow down, or freeze animation, and set various other parameters. Also, since it runs under X, this program should be more generally usable. An openwindows version of this program was made available to the public domain in 1992. This announcement describes a version of the hyperplane animator that has been rewritten for Motif. It was developed on an IBM RS/6000 platform, and so is written in ANSI C. The remainder of this message contains more details of the hyperplane animator and ftp information. ------------------------------------------------------------------------------ 1. What is the Hyperplane Animator? The Hyperplane Animator is a program that allows easy graphical display of Back-Propagation training data and weights in a Back-Propagation neural network [Rumelhart, 1987]. It implements only some of the functionality that we eventually hope to include. In particular, it only animates hyperplanes representing input-to-hidden weights. Back-Propagation neural networks consist of processing nodes interconnected by adjustable, or ``weighted'' connections. Neural network learning consists of adjusting weights in response to a set of training data. 
The weights w1,w2,...wn on the connections into any one node can be viewed as the coefficients in the equation of an (n-1)-dimensional plane. Each non-input node in the neural net is thus associated with its own plane. These hyperplanes are graphically portrayed by the hyperplane animator. On the same graph it also shows the training data. 2. Why use it? As learning progresses and the weights in a neural net alter, hyperplane positions move. At the end of the training they are in positions that roughly divide training data into partitions, each of which contains only one class of data. Observations of hyperplane movement can yield valuable insights into neural network learning. 3. Platform information The Animator was developed using the Motif toolkit on an IBM RS6000 with X-Windows. It appears to be stable on this platform, and has not been compiled on other platforms. However, Dec5000 and SGI workstations have been successfully used as graphics servers for the animator. How to install the hyperplane animator: You will need a machine which has X-Windows, and the Motif libraries. 1. Copy the file hyperplane-animator.tar to your machine via ftp as follows: ftp mines.colorado.edu (138.67.1.3) Name: anonymous Password: (your ID) ftp> cd pub/software/hyperplane-animator ftp> binary ftp> get hyperplane-animator.tar ftp> quit 2. Extract files from hyperplane-animator.tar with: tar -xvf hyperplane-animator.tar 3. Read the README file there. It includes information about compiling. It also includes instructions for running a number of demonstration networks that are included with this distribution. DISCLAIMER: This software is distributed as shareware, and comes with no warranties whatsoever for the software itself or systems that include it. The authors deny responsibility for errors, misstatements, or omissions that may or may not lead to injuries or loss of property. This code may not be sold for profit, but may be distributed and copied free of charge as long as the credits window, copyright statement in the program, and this notice remain intact. -------------------------------------------------------------------------------  From tishby at CS.HUJI.AC.IL Tue Oct 12 12:20:30 1993 From: tishby at CS.HUJI.AC.IL (Tali Tishby) Date: Tue, 12 Oct 1993 18:20:30 +0200 Subject: ICPR 94 in Jerusalem: Call for Papers Message-ID: <199310121620.AA21278@irs01.cs.huji.ac.il> % ************* CALL FOR PAPERS - PLEASE DISTRIBUTE *************************** % % CALL FOR PAPERS - 12th ICPR - PATTERN RECOGNITION AND NEURAL NETWORKS % Oct 9-13, 1994, Jerusalem, Israel % % CONFERENCE TOPICS: % Statistical pattern recognition; % temporal pattern recognition; % neural network models and algorithms; % machine learning in pattern recognition; % theoretical models and analysis of neural networks; % models of biological pattern recognition; % adaptive models; % fuzzy systems; % applications to biological sequence analysis, % applications to handwriting, speech, motor control, and active vision. % % PROGRAM COMMITTEE: % Naftali Tishby (Chair) - Hebrew University (tishby at cs.huji.ac.il) % % Henry Baird Eric Baum Victor Brailovsky % Alfred Bruckstein Pierre A. Devijver Robert P.W. Duin % Isak Gath Geoffrey E.
Hinton % Nathan Intrator Anil Jain Chuanyi Ji % Michael Jordan Junichi Kanai Rangachar Kasturi % Josef Kittler Yann LeCun Mike Mozer % Erkki Oja Sarunas Raudys Gabriella Sanniti di Baja % Eric Schwartz Haim Sompolinsky Vladimir Vapnik % Harry Wechsler Daphna Weinshall Haim Wolfson % % % This conference is one of Four conferences in the 12th ICPR. Each submitted % paper will be carefully reviewed by members of the program committee. % Papers describing applications are encouraged, and will be reviewed by a % special Applications Committee. % The conference proceedings are published by the IEEE Computer Society Press. % % % 12-ICPR CO-CHAIRS: S. Ullman - Weizmann Inst. (shimon at wisdom.weizmann.ac.il) % S. Peleg - The Hebrew University (peleg at cs.huji.ac.il) % LOCAL ARRANGEMENTS: Y. Yeshurun - Tel-Aviv University (hezy at math.tau.ac.il) % INDUSTRIAL & APPLICATIONS LIAISON: M. Ejiri - Hitachi (ejiri at crl.hitachi.co.jp) % % % PAPER SUBMISSION DEADLINE: February 1, 1994. % Notification of Acceptance: May 1994. Camera-Ready Copy: June 1994. % % Send four copies of paper to: 12th ICPR, c/o International, 10 Rothschild Blvd, % 65121 Tel Aviv, ISRAEL. Tel. +972(3)510-2538, Fax +972(3)660-604 % % Each manuscript should include the following: % 1. A Summary Page addressing these topics: % - To which of the four conference is the paper submitted? % - What is the paper about? - What is the original contribution of this work? % - Does the paper mainly describe an application, and should be reviewed by % the applications committee? % 2. Paper should be limited in length to 4000 words, the estimated length of % the proceedings version. % % For further information on all ICPR conferences contact the secretariat at the % above address, or use E-mail: icpr at math.tau.ac.il . %%======= %% ICPR CALL FOR PAPERS in LaTeX format \documentstyle [11pt]{article} \pagestyle{empty} \setlength{\textheight}{10.0in} \setlength{\topmargin}{-.75in} \setlength{\textwidth}{7.0in} \setlength{\oddsidemargin}{-.25in} \begin{document} \centerline{\bf \Large CALL FOR PAPERS} \vspace{0.15in} \centerline{\bf \Large 12th ICPR} \vspace{0.15in} \centerline{\bf \Large PATTERN RECOGNITION AND NEURAL NETWORKS} \vspace{0.07in} \centerline{\bf \Large Oct 9-13, 1994, Jerusalem, Israel} \null \vspace{0.1in} \centerline{\bf CONFERENCE TOPICS:} \smallskip \begin{center} \addtolength{\baselineskip}{-4pt} $\bullet$ Statistical pattern recognition \hspace{0.05in} $\bullet$ temporal pattern recognition\\ \hspace{0.05in} $\bullet$ neural network models and algorithms \hspace{0.05in} $\bullet$ machine learning in pattern recognition\\ \hspace{0.05in} $\bullet$ theoretical models and analysis of neural networks\\ \hspace{0.05in} $\bullet$ models of biological pattern recognition \hspace{0.05in} $\bullet$ adaptive models \hspace{0.05in} $\bullet$ fuzzy systems\\ $\bullet$ applications to biological sequence analysis, handwriting, speech, motor control, and active vision. \addtolength{\baselineskip}{+4pt} \end{center} \small \vspace{0.1in} \centerline{\bf PROGRAM COMMITTEE:} \vspace{0.1in} \centerline{Naftali Tishby (Chair) - Hebrew University ({\tt tishby at cs.huji.ac.il})} \begin{tabbing} \hspace*{0.85in} \= \hspace*{2.0in} \= \hspace*{2.0in} \= \kill \>Henry Baird \> Eric Baum \> Victor Brailovsky \\ \>Alfred Bruckstein \> Pierre A. Devijver \> Robert P.W. Duin \\ \> Isak Gath \> \> Geoffrey E. 
Hinton \\ \>Nathan Intrator \> Anil Jain \> Chuanyi Ji \\ \>Michael Jordan \> Junichi Kanai \> Rangachar Kasturi \\ \>Josef Kittler \> Yann LeCun \> Mike Mozer \\ \>Erkki Oja \> Sarunas Raudys \> Gabriella Sanniti di Baja\\ \>Eric Schwartz \> Haim Sompolinsky \> Vladimir Vapnik \\ \>Harry Wechsler \> Daphna Weinshall \> Haim Wolfson \\ \end{tabbing} \medskip \noindent This conference is one of four conferences in the 12th ICPR. Each submitted paper will be reviewed by members of the program committee. Papers describing applications are encouraged, and will be reviewed by a special Applications Committee. The conference proceedings are published by the IEEE Computer Society Press. \vspace{0.16in} \begin{tabbing} \hspace*{3.0in} \= \kill 12-ICPR CO-CHAIRS: \> S. Ullman - Weizmann Inst. ({\tt shimon at wisdom.weizmann.ac.il})\\ \> S. Peleg - The Hebrew University ({\tt peleg at cs.huji.ac.il})\\ LOCAL ARRANGEMENTS: \> Y. Yeshurun - Tel-Aviv University ({\tt hezy at math.tau.ac.il})\\ INDUSTRIAL \& APPLICATIONS LIAISON: \> M. Ejiri - Hitachi ({\tt ejiri at crl.hitachi.co.jp})\\ \end{tabbing} %\vspace{0.06in} \smallskip \noindent {\bf PAPER SUBMISSION DEADLINE: ~~~ February 1, 1994.} \medskip \noindent {\bf Notification of Acceptance:} May 1994. ~~~{\bf Camera-Ready Copy:~~ June 1994}. \medskip \noindent Send four copies of paper to: 12th ICPR, \\ c/o International, 10 Rothschild Blvd,\\ 65121 Tel Aviv, ISRAEL. Tel. +972(3)510-2538, Fax +972(3)660-604\\ \medskip \noindent Each manuscript should include the following: \begin{enumerate} \addtolength{\baselineskip}{-4pt} \item A Summary Page addressing these topics: \begin{itemize} \item To which of the four conference is the paper submitted? \item What is the paper about? - What is the original contribution of this work? \item Does the paper mainly describe an application, and should be reviewed by the applications committee? \end{itemize} \item Papers should be limited to 4000 words, the estimated length of the proceedings version. \addtolength{\baselineskip}{+4pt} \end{enumerate} \noindent For further information on all ICPR conferences contact the secretariat at the above address, or use \\ E-mail: {\tt icpr at math.tau.ac.il} . \end{document}  From shrager at xerox.com Tue Oct 12 23:17:59 1993 From: shrager at xerox.com (Jeff Shrager) Date: Tue, 12 Oct 1993 20:17:59 PDT Subject: Plasticity and Cortical Development Paper available Message-ID: <93Oct12.201804pdt.38019@huh.parc.xerox.com> The following paper is available in hardcopy upon request. (Please send a message in which your name and address appear in a format that can be cut-and-pasted onto an envelope.) Shrager, J. & Johnson, M. H. (in press). Modeling the development of cortical function. To appear in I. Kovacs & B. Julesz (Eds.), Maturational Windows and Cortical Plasticity [working title and editors list]. Santa Fe, NM: The Santa Fe Institute. Our goal in this work is to investigate the factors that give rise to the functional organization of the mammalian cerebral cortex during early brain development. We hypothesize that the cortex is organized through a combination of endogenous and exogenous influences including subcortical structuring, maturational timing, and the information structure of the organism's early environment. In this paper we demonstrate, via computational neural modeling, the way in which these influences can lead to differential cortical function, and to the differential distribution of function over the cortical sheet. 
In three computational studies, using a modified version of a model of cortical development due originally to Kerszberg, Dehaene, and Changeux, we demonstrate that stimulus correlations, structural targeting (of subcortex to cortex), spatial structure in the stimulus, and, most importantly, waves of neural trophic factor have predictable effects upon the modular structure and degree of functionality represented in the resulting cortical sheet.  From robbie at prodigal.psych.rochester.edu Wed Oct 13 14:19:02 1993 From: robbie at prodigal.psych.rochester.edu (Robbie Jacobs) Date: Wed, 13 Oct 93 14:19:02 EDT Subject: Assistant Professor opening Message-ID: <9310131819.AA15914@prodigal.psych.rochester.edu> Dear Colleague: The attached advertisement describes an Assistant Professor position for a Behavioral Neuroscientist in the Department of Psychology at the University of Rochester. It is anticipated that this position will be available July 1, 1994 or 1995. We hope to attract a scientist who will interact productively with existing faculty whose research interests are in developmental psychobiology and/or learning and memory. Also, the candidate would be part of a university-wide community of over 60 neuroscientists contributing to inter-departmental graduate and undergraduate programs in neuroscience. I would appreciate if you could bring this position to the attention of suitable candidates. Sincerely, Ernie Nordeen Associate Professor of Psychology and of Neurobiology & Anatomy BEHAVIORAL NEUROSCIENTIST. The Department of Psychology at the University of Rochester anticipates an Assistant Professor position in neuroscience. We are particularly interested in persons investigating relationships between brain and behavioral plasticity at the level of neural systems. Individuals whose research emphasizes either i) neural mechanisms of learning and memory, or ii) development/reorganization in perceptual or motor systems are especially encouraged to apply, but persons interested in related areas of behavioral neuroscience will also be considered. The successful candidate is expected to develop an active research program, and participate in teaching within graduate and undergraduate programs in neuroscience. Applicants should submit curriculum vitae, a brief statement of research interests, and three letters of reference by 1 February 1994 to: Chair, Biopsychology Search Committee, Dept. of Psychology, University of Rochester, Rochester, NY, 14627. An Affirmative Action/Equal Opportunity Employer.  From hzhu at liverpool.ac.uk Thu Oct 14 13:55:43 1993 From: hzhu at liverpool.ac.uk (Mr. H. Zhu) Date: Thu, 14 Oct 93 13:55:43 BST Subject: PhD Thesis available for FTP in neuroprose Message-ID: <9310141255.AA04958@yew-13.liv.ac.uk> FTP-host: archive.cis.ohio-state.edu (128.146.8.52) FTP-file: pub/neuroprose/zhu.thesis.ps.Z PhD Thesis (222 pages) available in neuroprose repository. (An index entry, and sample ftp procedure follows abstract) NEURAL NETWORKS AND ADAPTIVE COMPUTERS: Theory and Methods of Stochastic Adaptive Computation Huaiyu Zhu Department of Statistics and Computational Mathematics Liverpool University, Liverpool L69 3BX, UK ABSTRACT: This thesis studies the theory of stochastic adaptive computation based on neural networks. 
A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects --- It can be continuous, stochastic and adaptive --- and retains the TM computation as a subclass called ``data processing''. The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of ``trainable information processor'' (TIP) --- parameterised stochastic mapping with a rule to change the parameters --- is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as back propagation NN, Boltzmann machine and Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which are susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains adequate amount of ``global search'' in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to be implemented and more biologically plausible than back propagation and Boltzmann machine learning rules: Only a scalar needs to be back-propagated for the whole network. Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision process to allow adaptive learning, resulting in a computational system called a ``basic adaptive computer', which has the advantage over earlier reinforcement learning systems, such as Sutton's ``Dyna'', in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve. ___________________________________________________________________ INDEX entry: zhu.thesis.ps.Z hzhu at liverpool.ac.uk 222 pages. Foundation of stochastic adaptive computation based on neural networks. Simulated annealing learning rule superior to backpropagation and Boltzmann machine learning rules. Reinforcement learning for combinatorial state space and action space. (Mathematics with simulation results plus philosophy.) 
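For readers who want the flavor of an annealed weight search, a generic Metropolis-style sketch in Python follows; the proposal scale, the cooling schedule, and all names are assumptions for illustration, and the thesis's actual learning rule (which back-propagates a single scalar through the network) is more structured than this:

import numpy as np

def anneal_weights(loss, w0, T0=1.0, decay=0.999, steps=20000, seed=0):
    # Simulated annealing over a weight vector.  loss maps a weight
    # vector to a scalar training error.  Uphill moves are accepted with
    # probability exp(-dE/T), which lets the search escape the local
    # optima that trap pure gradient following.
    rng = np.random.default_rng(seed)
    w, E, T = w0.copy(), loss(w0), T0
    best_w, best_E = w.copy(), E
    for _ in range(steps):
        w_new = w + 0.1 * np.sqrt(T) * rng.standard_normal(w.shape)
        E_new = loss(w_new)
        if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
            w, E = w_new, E_new
            if E < best_E:
                best_w, best_E = w.copy(), E
        T *= decay   # geometric cooling: broad "global search" early, local refinement late
    return best_w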
--------------------- Sample ftp procedure: unix$ ftp archive.cis.ohio-state.edu Name (archive.cis.ohio-state.edu:name): ftp (or anonymous) Password: (your email address including @) ftp> cd pub/neuroprose ftp> binary ftp> get zhu.thesis.ps.Z ftp> quit unix$ uncompress zhu.thesis.ps.Z unix$ lpr -P<printer_name> zhu.thesis.ps The last two steps can also be combined into unix$ zcat zhu.thesis.ps.Z | lpr -P<printer_name> which will save some space. ---------------------- Note: This announcement is simultaneously sent to the following three mailing lists: connectionists at cs.cmu.edu, anneal at sti.com, reinforce at cs.uwa.edu.au My apologies to those who subscribe to more than one of them. I'm sorry that there is no hard copy available. -- Huaiyu Zhu hzhu at liverpool.ac.uk Dept. of Stat. & Comp. Math., University of Liverpool, L69 3BX, UK  From lmr at pimac2.iet.unipi.it Thu Oct 14 08:16:05 1993 From: lmr at pimac2.iet.unipi.it (Leonardo Reyneri) Date: Thu, 14 Oct 93 13:16:05 +0100 Subject: No subject Message-ID: <9310141216.AA22985@pimac2.iet.unipi.it> Please find below the Call For Papers of MICRONEURO '94: ************************************************************************** MICRONEURO 94 The Fourth International Conference on Microelectronics for Neural Networks and Fuzzy Systems Torino (I), September 26-28, 1994 FIRST CALL FOR PAPERS This conference is the fourth in a series of international conferences dedicated to all aspects of hardware implementations of Neural Networks and Fuzzy Systems. MICRONEURO has emerged as the only international forum devoted specifically to all hardware implementation aspects, giving particular weight to those interdisciplinary issues which affect the design of Neural and Fuzzy hardware directly. TOPICS The conference program will focus upon all aspects of hardware implementations of Neural Networks and Fuzzy Systems and their applications in the real world. Topics will concentrate upon the following fields: - Analog and mixed-mode implementations - Digital implementations - Optical systems - Pulse-Stream computation - Weightless Neural systems - Neural and Fuzzy hardware systems - Interfaces with the external world - Applications of dedicated hardware - VLSI-friendly Neural algorithms - New technologies for Neural and Fuzzy Systems Selection will also be based on technical relevance, novelty of the approach, and availability of performance measurements for the system/device. INFORMATION FOR AUTHORS All submitted material (written in English) will be refereed and should be typed on A4 paper, 1-1/2 spaced, 12 point font, 160x220 mm text size. All accepted material will appear in the proceedings. PAPERS should not exceed 10 pages including figures and text. Also reports on EARLY INNOVATIVE IDEAS will be considered for presentation. In this case the submission should be a short description of the novel idea, not exceeding 6 pages in length, and it must be clearly marked ``Innovative Idea''. The most interesting papers and ideas will be published in a special issue of IEEE MICRO. SUBMISSIONS Six copies of final manuscripts, written according to the above requirements, shall be submitted to the Program Chairman. Submissions arriving late or significantly departing from length guidelines, or papers published elsewhere, will be returned without review. Electronic versions of the submission (possibly in LATEX format) are welcome.
DEADLINES Submission of paper and/or ideas May 30, 1994 Notification of acceptance July 15, 1994 THE WORKSHOP VENUE The venue of MICRONEURO '94 is Torino, the historic and beautiful center of Piemonte. The town is surrounded by the highest mountains in Europe and by beautiful hills and landscapes. The region is also famous for its excellent wines. MICRONEURO '94 will be held at the Politecnico di Torino. The venue is conveniently located close to the town centre, with many restaurants and cafes close by. General Chair: H.P. Graf AT&T Bell Laboratories Room 4 G 320 HOLMDEL, NJ 07733 - USA Tel. +1 908 949 0183 Fax. +1 908 949 7722 Program Chair: L.M. Reyneri Dip. Ingegneria Informazione Universita' di Pisa Via Diotisalvi, 2 56126 PISA - ITALY Tel. +39 50 568 511 Fax. +39 50 568 522 E.mail lmr at pimac2.iet.unipi.it Organisation: COREP Segr. MICRONEURO '94 C.so Duca d. Abruzzi, 24 10129 TORINO - ITALY Tel. +39 11 564 5108 Fax. +39 11 564 5199 Steering Committee: K. Goser (D) J. Herault (F) W. Moore (UK) A.F. Murray (UK) U. Ramacher (D) M. Sami (I) Program Committee: E. Bruun (DK) H.C. Card (CA) D. Del Corso (I) P. Garda (F) M. Jabri (AU) S.R. Jones (UK) C. Jutten (F) H. Klar (D) J.A. Nossek (D) A. Prieto (E) U. Rueckert (D) L. Spaanenburg (NL) L. Tarassenko (UK) M. Verleysen (B) E. Vittoz (CH) J. Wawrzynek (USA) W. Yang (USA) **************************************************************************  From olivier at dendrite.cs.colorado.edu Thu Oct 14 15:06:56 1993 From: olivier at dendrite.cs.colorado.edu (Olivier Brousse) Date: Thu, 14 Oct 1993 13:06:56 -0600 Subject: Generativity and systematicity, learning: Report announcement Message-ID: <199310141907.AA00719@dendrite.cs.Colorado.EDU> The following report is now available via anonymous ftp on the cis.ohio-state.edu ftp server, directory pub/neuroprose, file brousse.sysgen.ps.Z Pages: 180, size: 893720 bytes. Title: Generativity and Systematicity in Neural Network Combinatorial Learning It is also available via surface mail, as Technical Report CU-CS-676-93, for a small fee ($5, I believe) from: Attn: Vicki Emken Department of Computer Science, Box 430 University of Colorado at Boulder Boulder, CO 80309-0430, U.S.A. Abstract: This thesis addresses a set of problems faced by connectionist learning that have originated from the observation that connectionist cognitive models lack two fundamental properties of the mind: Generativity, stemming from the boundless cognitive competence one can exhibit, and systematicity, due to the existence of symmetries within them. Such properties have seldom been seen in neural network models, which have typically suffered from problems of inadequate generalization, as exemplified both by the small number of generalizations relative to training set sizes and by heavy interference between newly learned items and previously learned information. Symbolic theories, arguing that mental representations have syntactic and semantic structure built from structured combinations of symbolic constituents, can in principle account for these properties (both arise from the sensitivity of structured semantic content with a generative and systematic syntax). This thesis studies the question of whether connectionism, arguing that symbolic theories can only provide approximative cognitive descriptions which can only be made precise at a sub-symbolic level, can also account for these properties.
Taking a cue from the domains in which human learning most dramatically displays generativity and systematicity, the answer is hypothesized to be positive for domains with combinatorial structure. A study of such domains is performed, and a measure of combinatorial complexity in terms of information/entropy is used. Experiments are then designed to confirm the hypothesis. It is found that a basic connectionist model trained on a very small percentage of a simple combinatorial domain of recognizing letter sequences can correctly generalize to large numbers of novel sequences. These numbers are found to grow exponentially when the combinatorial complexity of the domain grows. The same behavior is even more dramatically obtained with virtual generalizations: new items which, although not correctly generalized, can be learned in a few presentations while leaving performance on the previously learned items intact. Experiments are repeated with fully-distributed representations, and results imply that performance is not degraded. When weight elimination is added, perfect systematicity is obtained. A formal analysis is then attempted in a simpler case. The more general case is treated with contribution analysis. To retrieve and print: unix> ftp archive.cis.ohio-state.edu Name: anonymous 230 Guest login ok, access restrictions apply. ftp> cd pub/neuroprose ftp> binary ftp> get brousse.sysgen.ps.Z 200 PORT command successful. ftp> quit unix> zcat brousse.sysgen.ps.Z | lpr or unix> zcat brousse.sysgen.ps.Z | lpr -s - Olivier Brousse olivier at cs.colorado.edu  From Frances.T.Perillo at Dartmouth.EDU Thu Oct 14 09:21:13 1993 From: Frances.T.Perillo at Dartmouth.EDU (Frances T. Perillo) Date: 14 Oct 93 09:21:13 EDT Subject: POSITION AVAILABLE Message-ID: <6728344@prancer.Dartmouth.EDU> Cognitive Position: The Department of Psychology at Dartmouth College has a junior, tenure-track position available in the area of Cognition -- broadly construed to include any area of research within Cognitive Psychology, Cognitive Science, and/or Cognitive Neuroscience. Candidates must be able to establish a strong research program and must have a commitment to undergraduate and graduate instruction. Supervision of both graduate and undergraduate research will be expected. Please send a letter of application, vita and three letters of recommendation to: Chair, Cognitive Search Committee, Department of Psychology, 6207 Gerry Hall, Dartmouth College, Hanover, NH 03755-3459. Review of applications will begin on February 15, 1994. Dartmouth College is an equal opportunity employer with an affirmative action plan. Women and members of minority groups are encouraged to apply.  From fellous at rana.usc.edu Wed Oct 13 20:36:19 1993 From: fellous at rana.usc.edu (Jean-Marc Fellous) Date: Wed, 13 Oct 93 17:36:19 PDT Subject: CNE WORKSHOP - PROGRAM Message-ID: <9310140036.AA01375@rana.usc.edu> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< The Center for Neural Engineering University of Southern California Los Angeles CA 90089-2520 Announces a Workshop - Oct 19-20, 1993 Neural Architectures and Distributed AI: From Schema Assemblages to Neural Networks October 19-20, 1993 Program Committee: Michael Arbib (Organizer), George Bekey, Damian Lyons, and Ron Sun This message contains the PROGRAM for the meeting and comes with a warm invitation to participate in the Workshop. Registration materials are provided at the end. 
Please plan to join us in Los Angeles in October for the workshop - and do consider coming a day early to take part in the CNE Review. Scope of the Workshop: To design complex technological systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence and perceptual robotics which is "in the style of the brain", but at a relatively high level of abstraction compared to neural networks. We seek (both at the level of schema assemblages, and in terms of "modular" neural networks) a distributed model of computation, supporting many concurrent activities for recognition of objects, and the planning and control of different activities. The use, representation, and recall of knowledge are mediated through the activity of a network of interacting computing agents which between them provide processes for going from a particular situation and a particular structure of goals and tasks to a suitable course of action. This action may involve passing of messages, changes of state, instantiation to add new schema instances to the network, deinstantiation to remove instances, and may involve self-modification and self-organization. Schemas provide a form of knowledge representation which differs from frames and scripts by being of a finer granularity. Schema theory is generative: schemas may well be linked to others to provide yet more comprehensive schemas, whereas frames tend to "build in" from the overall framework. The analysis of interacting computing agents (the schema instances) is intermediate between the overall specification of some behavior and the neural networks that subserve it. The Workshop will focus on different facets of this multi-level methodology. Abstracts will be collected in a CNE Technical Report which will be made available to registrants at the start of the meeting. Monday: The meeting will start at 6:30pm on Monday evening (for those who have formally registered): Evening at the University Hilton No-Host Bar followed by Dinner Note: Members of the USC community are welcome to attend the presentations (but not the dinner) free of charge - please obtain your free registration between 8:30 and 9:00am on Tuesday. USC registrants may purchase the Workshop Proceedings for $10. ---------------------------- TUESDAY All talks except for the last session will be given in the Hedco Neurosciences Building Auditorium 8:30am Registration. Hedco Neurosciences Building Lobby - Introductory Overview > 9:00am Schemas and Neural Networks: A Multi-Level Approach to Natural and Artificial Intelligence Michael A. Arbib - University of Southern California - Schemas for Robotics > 10:00am Reactive Schema-based Robotic Systems: Principles and Practice. Ronald C. Arkin - Georgia Institute of Technology > 10:30 Coffee > 11:00am A Schema-Theory Approach to Building and Analysing the Behavior of Robot Systems D. M. Lyons - North American Philips Corporation > 11:30am Visually Guided Multi-Fingered Robot Hand Grasping as Defined by Schemas and a Reactive System. T. G. Murphy - University of Massachusetts, Lowell D. M. Lyons & A.J. Hendricks - North American Philips Corporation > 12 Noon Reinforcement Learning for Robotic Reaching and Grasping Andrew H.
Fagg - University of Southern California > 12:30pm Lunch > 1:30pm A Knowledge Base for Neural Guidance System Ramon Krosley & Manavendra Misra - Colorado School of Mines > 2:00pm Multiresolutional Schemata for Motion Control A. Meystel - Drexel University > 2:30pm Baby Sub: Using Schemata for Conceptual Learning Alberto Lacaze & Michael Meystel - Drexel University > 3:00pm Refreshments > 3:30pm A Real-Time Neural Implementation of a Schema Driven Toy-Car. Jan N. H. Heenskerk & Fred Keijzer - Leiden University, The Netherlands - Schemas, NNs, Vision, and Visuomotor Coordination > 4:00pm Representing and Learning Visual Schemas in Neural Networks for Scene Analysis Wee Kheng Leow & Risto Miikkulainen - University of Texas at Austin > 4:30pm Integration of Connectionist and Symbolic Modules in a Vision Task Masumi Ishikawa, Kengo Matsuo & Kenichi Yoshino Kyushu Institute of Technology, Japan -------------------- WEDNESDAY > 9:00am A Schema-Theoretic Approach to Study the "Chantlitaxia" Behavior in the Praying Mantis Francisco Cervantes Perez, Arturo Franco, Susana Velazquez and Nydia Lara - ITAM and UNAM, Mexico > 9:30am Schema Based Learning and Anuran Detour Behavior Fernando J. Corbacho and Hyun Bong Lee University of Southern California > 10:00am "What", "Where", and the Architecture of Action-Oriented Perception Michael A. Arbib - University of Southern California > 10:30am Coffee - Programming Environments for Schemas and NNs > 11:00am ASL: Hierarchy, Composition, Heterogeneity, and Multi-Granularity in Concurrent Object-Oriented Programming A. Weitzenfeld - University of Southern California > 11:30am A Message Passing Based Approach to the Design of Modular Neural Network Systems Lawrence Gunn - MacDonald Dettwiler Associates, Canada > 12 Noon A Paradigm for Handling Neural Networks in Databases Erich Schikuta - University of Vienna > 12:30pm Lunch - Schemas and Connectionism > 1:30pm Feeling-Based Schemas Peter H. Greene & Greg T.H. Chien - Illinois Institute of Technology > 2:00pm Neural Schemas and Connectionist Logics: A Synthesis of the Symbolic and the Subsymbolic Ron Sun - The University of Alabama > 2:30pm Distributed Knowledge Representation in Adaptive Self-Organizing Concurrent Systems Andrew Bartczak - The University of Rhode Island > 3:00pm Refreshments **Refreshments and the concluding session will take place at the Auditorium of the Andrus Gerontology Center.** > 3:30pm A Connectionist Model of Semantic Memory for Metaphor Interpretation Tony Veale & Mark Keane - Trinity College, Ireland > 4:00pm Schema-based Modeling of Commonsense Understanding of Causal Narratives Srinivas Narayanan - University of California, Berkeley > 4:30pm Dynamic Schema Instances in the Conposit Framework John A. Barnden - New Mexico State University ************** BONUS EVENT: CNE RESEARCH REVIEW **************** Registrants for the Workshop are invited to attend, at no extra charge, the CNE Research Review to be held on Monday, October 18, 1993. The Review will present a day-long sampling of CNE research. In particular, the meeting will celebrate the opening of two new CNE Laboratories: The Autonomous Robotics Laboratory and the Neuro-Optical Computing Laboratory, which join the Brain Simulation Laboratory in the Hedco Neurosciences Building. During the day, George Bekey will present an overview of our research on autonomous robots, while Keith Jenkins and Armand Tanguay will review the state of the art in our research on optical implementation of neural networks.
Related talks will include those by Bing Sheu on VLSI for Neural Networks and by Alfredo Weitzenfeld on neural simulation tools. Another major development we will celebrate is the ever-strengthening cooperation between CNE and USC's Program in Neural, Informational, and Behavioral Sciences (NIBS) in bringing information technology to bear in catalyzing new insights into the complexity of the brain. Scott Grafton will review our use of PET scans to gain new insight into human brain mechanisms of vision, action and memory; while Dennis McLeod will present our approach to the construction of federated databases for neuroscience and other scientific applications. The Program will be rounded out by talks by other faculty and students, student posters, and demonstrations of hardware and software. Accommodation Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "CNE Workshop" attendees to avail themselves of the above rates. Information on student accommodation may be obtained from the Student Chair, Jean-Marc Fellous, jfellous at pollux.usc.edu. REGISTRATION The registration fee of $150 ($40 for qualified students who include a "certificate of student status" from their advisor and for CNE Members) includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of October 18th. Those wishing to register should send a check payable to "Center for Neural Engineering, USC" for $150 ($40 for students and CNE members) together with the following information to: Marietta Pobst Center for Neural Engineering University of Southern California Los Angeles, CA 90089-2520, USA. mpobst at pollux.usc.edu Tel: (213) 740-1176; Fax: (213) 740-5687 SCHEMAS AND NEURAL NETWORKS Center for Neural Engineering; University of Southern California October 19-20, 1993 NAME: ___________________________________________ ADDRESS: _________________________________________ _________________________________________ _________________________________________ PHONE NO.:_______________ FAX:___________________ EMAIL:___________________________________________ I intend to attend the CNE Research Review on October 18, 1993: YES [ ] NO [ ] Note: Late registrations will be accepted on the Monday morning, but places at the dinner are limited, so advance email to mpobst at pollux.usc.edu would be appreciated, even if you choose to bring your check on Monday morning.  From harnad at Princeton.EDU Thu Oct 14 20:23:58 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 14 Oct 93 20:23:58 EDT Subject: Hippocampus and Memory: BBS Call for Commentators Message-ID: <9310150023.AA25022@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by H. EICHENBAUM et al. on TWO COMPONENT FUNCTIONS OF THE HIPPOCAMPAL MEMORY SYSTEM that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate.
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ TWO COMPONENT FUNCTIONS OF THE HIPPOCAMPAL MEMORY SYSTEM Howard Eichenbaum Center for Behavioral Neuroscience State University of New York at Stony Brook Stony Brook, NY 11794 (516) 632-9482 heichen at neuro.som.sunysb.edu Tim Otto Department of Psychology Busch Campus Rutgers University New Brunswick, NJ 08903 Neal J. Cohen Beckman Institute & Department of Psychology University of Illinois at Urbana-Champaign 405 N. Mathews Avenue Urbana, IL 61801 KEY WORDS: Amnesia, Hippocampus, Parahippocampal Region, Entorhinal Cortex, Learning, Memory, Representation. ABSTRACT: The hippocampal system contributes to (1) the temporary maintenance of memories and (2) the processing of a particular type of memory representation. The evidence from amnesia suggests that these two hippocampus-dependent memory functions are orthogonal. Neuropsychological, anatomical and physiological evidence supports a two-component model of cortico-hippocampal interactions: Neocortical association areas maintain short-term memories for specific items and events prior to hippocampal processing and they provide the final repositories of long-term memory. The parahippocampal region supports intermediate-term storage of individual items and the hippocampal formation itself organizes memories according to relevant relationships among items. Hippocampal-cortical interactions lead to strong and persistent memories for events and their constituent elements and interrelations, together with a capacity for flexibly producing memories across a wide range of circumstances. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.eichenbaum). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.eichenbaum When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. A more elaborate version of these instructions for the U.K. is available on request (thanks to Brian Josephson). ---------- Where the above procedures are not available (e.g.
from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From fellous at rana.usc.edu Wed Oct 13 20:23:34 1993 From: fellous at rana.usc.edu (Jean-Marc Fellous) Date: Wed, 13 Oct 93 17:23:34 PDT Subject: CNE Review Announcement Message-ID: <9310140023.AA01325@rana.usc.edu> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The Center for Neural Engineering University of Southern California Announces The CNE Research Review Monday, October 18, 1993 The University of Southern California has established itself as a leader in linking research on the brain to innovations in neurally based artificial intelligence. The Center for Neural Engineering (CNE) has more than forty faculty members in such disciplines as Biomedical Engineering, Computer Science, Electrical Engineering, Neurobiology, Neurology, Linguistics and Psychology engaged in studies of neural networks and the computing style of the brain, and the design of a new generation of computers and robotic devices inspired by them. These professors supervise a large number of Ph.D. candidates, offer a broad range of graduate courses, and conduct a strong research program supported by federal funds, foundations, and corporations. Research teams involving faculty, students and industrial colleagues ensure a healthy flow of ideas among several interrelated facets of neural science and engineering, including fundamental research on the brain, the simulation and mathematical analysis of neural networks, the development of novel engineering systems based in part on neural concepts, and the practical application of these systems. The CNE Research Review is designed for all who wish to benefit from USC's activities in Neural Engineering. Our computational analysis of the brain leads us to new strategies for human learning and new weapons in the fight against disease. These insights also enable us to chart new computer architectures, and to develop new forms of artificial intelligence to act as intelligent assistants to human decision-makers. Optical computing and microelectronics are leading us to the high bandwidth communication and massively parallel computation that can make these new tools effective and affordable on a grand scale. The CNE Review will present a day-long sampling of our research. In particular, the meeting will celebrate the opening of two new CNE Laboratories: The Autonomous Robotics Laboratory and the Neuro-Optical Computing Laboratory, which join the Brain Simulation Laboratory in the Hedco Neurosciences Building. During the day, George Bekey will present an overview of our research on autonomous robots, while Keith Jenkins and Armand Tanguay will review the state of the art in our research on optical implementation of neural networks.
Another major development we will celebrate is the ever-strengthening cooperation between CNE and USC's Program in Neural, Informational, and Behavioral Sciences (NIBS) in bringing information technology to bear in catalyzing new insights into the complexity of the brain. Scott Grafton will review our use of PET scans to gain new insight into human brain mechanisms of vision, action and memory; while Dennis McLeod will present our approach to the construction of federated databases for neuroscience and other scientific applications. The Program will be rounded out by talks by other faculty and students, student posters, and demonstrations of hardware and software in the CNE's Laboratories. Members, and potential members, of the CNE Industrial Affiliates Program will have the chance to meet with individual faculty members to discuss specific topics for research collaboration. The day will conclude with a Dinner which will give CNE members and visitors from Industry and other Universities a chance to reflect on the day's many presentations and discuss areas of mutual interest in a relaxed and convivial setting. Program All talks will be given in the Hedco Neurosciences Building Auditorium > 8:30am: Registration. Hedco Neurosciences Building Lobby > 9:00 am: Michael A. Arbib: Welcome to the CNE > 9:30 am: George Bekey: Research on Autonomous Robots > 10:00am: Keith Jenkins: Optical Implementation of Neural Networks - Emphasis Computing > 10:30am: Coffee > 11:00am: Armand Tanguay: Optical Implementation of Neural Networks - Emphasis Devices > 11:30am: Bing Sheu: VLSI for Neural Networks > 12:00am: Lunch > 1:30pm: Alfredo Weitzenfeld: The Neural Simulation Language NSL > 2:00pm: Scott Grafton: PET Scans, Functional MRI, and Human Brain Mechanisms. > 2:30pm: Dennis McLeod: Federated Databases for Neuroscience Research > 3:00pm: Coffee > 3:30pm: Laboratory Demonstrations: Autonomous Robotics Laboratory; Neuro-Optical Computing Laboratory; Brain Simulation Laboratory. > 6:30pm (For those who have formally registered): Evening at the University Hilton No-Host Bar followed by Dinner Bonus Event: Workshop on Neural Architectures and Distributed AI For an extra $50, fully paid registrants may have their registration extended to include a two-day Workshop sponsored by the CNE to be held on the two days following the CNE Review. The Workshop, "Neural Architectures and Distributed AI: From Schema Assemblages to Neural Networks" will be held on October 19 and 20, 1993. (The total fee for CNE Review and Workshop is $40 for CNE members and qualified students who include a "certificate of student status" from their advisor.) Note: Members of the USC community are welcome to attend the day's presentations (but not the dinner) free of charge - please obtain your free registration between 8:30 and 9:00am. ***** Industrial Affiliates in Neural Engineering ***** The Industrial Affiliates Program of the Center for Neural Engineering enables Industrial Affiliates to stay informed of the latest research in Neural Engineering and Computational Neurobiology at USC, and to take part in that research, ensuring that activities at the University of Southern California are responsive to the research and development needs of corporations, and contribute to technological competitiveness and technology transfer. 
Industrial Affiliates are involved in the work of the CNE through the provision of funding for both general and targeted research projects, through participation in research, seminars and educational programs at USC, and through a Visiting Scientists Program which enables corporate personnel to actively participate in research at USC as well as providing access for USC faculty and student researchers to specialized corporate facilities and industrial R&D programs. Over the years, membership has included General Dynamics, General Motors, Hitachi, Hughes, IBM, Lockheed, Matsushita, Nissan Motor Company, NTT Data, Ricoh Corporation, and Rockwell International. Basic membership is designed to help companies monitor USC's latest contributions to neural network technology and relate them to their business area and products. General funds are used to support workshops and the CNE seminar series, to contribute to administrative costs, and to provide small amounts of seed money for research projects. Support of personnel for training at USC is also encouraged. Funding of workshops is one way to advance this educational function. Going further, organizations that wish to undertake projects coordinating USC research with their own ongoing research and development efforts in neural engineering make a much larger commitment. Typically, we design a research project that combines an application of interest to the company with basic research issues of interest to the university. The company pays an engineer to spend a year in the CNE working on the project, and provides funds to cover release time for a faculty member to supervise the project, the stipend for Ph.D. graduate students to act as research assistants for the project, and general operating expenses. Questions about the opportunities for research cooperation with the CNE should be addressed to: Michael A. Arbib, Director Center for Neural Engineering University of Southern California Los Angeles, CA 90089-2520 (213) 740-9220 FAX (213) 740-5687 arbib at pollux.usc.edu Accommodation Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "CNE Review" attendees to avail themselves of the above rates. Information on student accommodation may be obtained from the Student Chair, Jean-Marc Fellous, jfellous at pollux.usc.edu. Registration The registration fee of $100 for the CNE Review includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of October 18th. (Students may attend the Review for free, but will not be entitled to attend the dinner unless they register for the Workshop.) Those wishing to register should send a check payable to "Center for Neural Engineering, USC" for $100 ($150 for those also wishing to attend the Workshop; $40 for students and CNE members) together with the following information to Marietta Pobst, Center for Neural Engineering, University of Southern California, University Park, Los Angeles, CA 90089-2520, USA.
------------------------------------------------------------------- CNE Review Center for Neural Engineering, USC October 18, 1993 NAME: ___________________________________________ ADDRESS: _________________________________________ PHONE NO.: _______________ FAX:___________________ EMAIL: ___________________________________________ Please register me: for the Workshop as well as the Review: YES [ ] NO [ ] Note: Late registrations will be accepted on the Monday morning, but places at the dinner are limited, so advance email to mpobst at pollux.usc.edu would be appreciated, even if you choose to bring your check on Monday morning.  From janetw at cs.uq.oz.au Fri Oct 15 02:15:51 1993 From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au) Date: Fri, 15 Oct 93 16:15:51 +1000 Subject: Symposium on Connectionist Models and Psychology (Australia) Message-ID: <9310150615.AA25251@client> First Announcement: Call to participants SYMPOSIUM ON CONNECTIONIST MODELS AND PSYCHOLOGY University of Queensland Brisbane Australia Saturday, 29 January 1994 (back-to-back with the Australian Conference on Neural Networks) The symposium is aimed at psychologists and psychological modelers, specifically those who are studying or questioning the relevance of neural networks to experimental psychology. We are aiming to provide a structured forum for discussion of issues. The symposium is structured into three sessions, focussed on the following themes: TENTATIVE PROGRAMME (8.10.93) SESSION 1. The rationale for psychologists using models: What benefits (if any) are there to be gained from using neural nets and other computational devices as models of human perception and cognition? Chair: Peter Slezak Target Address: Danny Latimer Discussants: Max Coltheart; Sally Andrews; Margaret Charles SESSION 2. Correspondence between human and network performance: What methods and measures are available for comparing network variables and human experimental data? Chair: Danny Latimer Speakers: Kate Stevens; Graeme Halford; Simon Dennis SESSION 3. Basic computational processes: Psychological theories of cognition assume the operation of basic processes such as comparison, storage, search etc. What do the connectionist and symbol-manipulating approaches provide as means for modeling these processes? Chair: Steve Schwartz Target Address: Janet Wiles Discussants: Mike Johnson; Zoltan Schreter; George Oliphant For the first and third sessions the role of the speaker is to raise issues and, possibly, defend a position. The reviewers will provide a commentary on the issues raised by the target speaker. For the second session, the role of the speakers is to raise and discuss conceptual and empirical issues. In all sessions the chair will lead the discussion. We're aiming to circulate a pre-proceedings two weeks in advance of the symposium to registered participants. _______________________________________________________________________ Details ------- Date: Saturday 29th January, 1994 Time: 9.00am - 6pm Location: Dept of Psychology, Room 304, University of Queensland St Lucia, Brisbane. _______________________________________________________________________ Please indicate your interest in attending the Symposium by returning this form or by contacting one of the organizers by email, telephone or fax by January 3, 1994. I will be attending the Symposium. Please keep me informed of developments Name: ................................................................ Address:................................................................
................................................................ Phone: ................................................................ Fax: ................................................................ Email: ................................................................ _______________________________________________________________________ Janet Wiles Kate Stevens Danny Latimer Dept of Comp Sci Dept of Psychology Dept of Psychology Uni of Queensland Uni of Queensland Uni of Sydney St Lucia 4072 St Lucia 4072 NSW 2006 Phone: 07 365-2902 Phone: 07 365-6203 Phone: 02 692-2481 Fax: 07 365-1999 (International: +61-7-365-1999) Email: janetw at cs.uq.oz.au Email: kates at psych.psy.uq.oz.au Email: cyril at psychvax.psych.su.oz.au  From BRAIN1 at taunivm.tau.ac.il Fri Oct 15 17:33:28 1993 From: BRAIN1 at taunivm.tau.ac.il (BRAIN1@taunivm.tau.ac.il) Date: Fri, 15 Oct 93 17:33:28 IST Subject: Bat-Sheva seminar on functional brain imaging Message-ID: <mailman.629.1149591290.29955.connectionists@cs.cmu.edu> % Dear Colleague, % % here follows the first announcement (plain TeX file) of the % % % "BAT-SHEVA SEMINAR ON FUNCTIONAL BRAIN IMAGING" % % which will take place in Tel-Aviv % June 9 to 16, 1994 % % May we ask you to post the announcement? % % Many thanks and best regards, % % D. Horn G. Navon % \nopagenumbers \magnification=1200 \def\sk{\vskip .2cm} \hsize=13cm \centerline{\bf BAT-SHEVA SEMINAR ON FUNCTIONAL BRAIN IMAGING} \sk \centerline{\bf Tel-Aviv, Israel, June 9 to 16, 1994} \vskip 3cm \centerline{\bf FIRST ANNOUNCEMENT} \sk The seminar will bring together experts on various techniques of functional brain imaging (PET, EEG, MEG, Optical, and particular emphasis on MRI). It will start with a day of tutorials at Tel-Aviv University. These will serve as technical and scientific introductions for participants from different disciplines. It will continue in a resort hotel at the seashore with plenary lectures, describing recent advances in all different techniques and comparing their merits and scientific results. Speakers include: M. Abeles, J. W. Belliveau, A. S. Gevins, A. Grinvald, M. H\"am\"al\"ainen, S. Ogawa, H. Pratt, M. Raichle, R. G. Shulman, D. Weinberger. The number of participants in the workshop will be limited. \sk \vskip 1cm Information and registration: Dan Knassim Ltd., P.O.B. 57005, Tel-Aviv 61570, Israel. Tel: 972-3-562 6470 Fax: 972-3-561 2303 \sk \vskip 2cm \centerline {D. Horn~~~~~~~~G. Navon} \centerline {ADAMS SUPER-CENTER FOR BRAIN STUDIES} \centerline {TEL-AVIV UNIVERSITY, TEL-AVIV, ISRAEL} \centerline{ e-mail: brain1 at taunivm.tau.ac.il } \vskip 2cm \sk \vfill\eject\end  From arantza at cogs.susx.ac.uk Mon Oct 18 10:48:42 1993 From: arantza at cogs.susx.ac.uk (Arantza Etxeberria) Date: Mon, 18 Oct 93 10:48:42 BST Subject: Artificial Life Workshop Announcement Message-ID: <m0oorCk-0000FCC@ticsuna.crn.cogs.susx.ac.uk> "Artificial Life: a Bridge towards a New Artificial Intelligence" Palacio de Miramar (San Sebastian, Spain) December 10th and 11th, 1993 Workshop organised by the Department of Logic and Philosophy of Science, Faculty of Computer Science & Institute of Logic, Cognition, Language and Information (ILCLI) of the University of the Basque Country (UPV/EHU) Directors: Alvaro Moreno (University of the Basque Country) Francisco Varela (CREA, Paris) This Workshop will be dedicated to a discussion of the impact of work on Artificial Life on Artificial Intelligence.
Artificial Intelligence (AI) has traditionally attempted to study cognition as an abstract phenomenon using formal tools, that is, as a disembodied process that can be grasped through formal operations, independent of the nature of the system that displays it. Cognition appears as an abstract representation of reality. After several decades of research in this direction the field has encountered several problems that have taken it to what many consider a "dead end": difficulties in understanding autonomous and situated agencies, in relating behaviour to a real environment, in studying the nature and evolution of perception, in finding a pragmatic approach to explain the operation of most cognitive capacities such as natural language, context dependent action, etc. Artificial Life (AL) has recently emerged as a confluence of very different fields trying to study different kinds of phenomena of living systems using computers as a modelling tool, and, ultimately, trying to artificially (re)produce a living system or a population of living systems in real or computational media. Examples of such phenomena are prebiotic systems and their evolution, growth and development, self-reproduction, adaptation to an environment, evolution of ecosystems and natural selection, formation of sensory-motor loops, autonomous robots. Thus, AL is having an impact not only on the classic life sciences but also on the conceptual foundations of AI, and it is contributing new methodological ideas to Cognitive Science. The aim of this Workshop is to focus on the last two points and to evaluate the influence of the methodology and concepts appearing in AL on the development of new ideas about cognition that could eventually give birth to a new Artificial Intelligence. Some of the sessions consist of presentations and replies on a specific subject by invited speakers while others will be debates open to all participants in the workshop. MAIN TOPICS: * A review of the problems of FUNCTIONALISM in Cognitive Science and Artificial Life. * Modelling Neural Networks through Genetic Algorithms. * Autonomy and Robotics. * Consequences of the crisis of the representational models of cognition. * Minimal Living System and Minimal Cognitive System * Artificial Life systems as problem solvers * Emergence and evolution in artificial systems SPEAKERS S. Harnad P. Husbands G. Kampis B. Mac Mullin D. Parisi T. Smithers E. Thompson F. Varela Further Information: Alvaro Moreno Apartado 1249 20080 DONOSTIA SPAIN E. Mail: biziart at si.ehu.es Fax: 34 43 311056 Phone: 34 43 310600 (extension 221) 34 43 218000 (extension 209)  From terry at helmholtz.sdsc.edu Mon Oct 18 15:58:48 1993 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Mon, 18 Oct 93 12:58:48 PDT Subject: NEURAL COMPUTATION, 5:6 Nov 93 Message-ID: <9310181958.AA27586@helmholtz.sdsc.edu> NEURAL COMPUTATION Volume 5, Number 6, 1993 Articles: Analysis of Neuron Models with Dynamically Regulated Conductances L. F. Abbott and Gwendal Le Masson Letters: Limitations of the Hodgkin-Huxley Formalism: Effects of Single Channel Kinetics upon Transmembrane Voltage Dynamics Adam F. Strassberg and Louis J. DeFelice Two-dimensional Motion Perception in Flies A. Borst, M. Egelhaaf, and H. S. Seung Neural Representations of Space Using Sinusoidal Arrays David S. Touretzky, A. David Redish and Hank S. Wan Fast Recognition of Noisy Digits Jeffrey Kidder and Daniel Seligson Local Algorithms for Pattern Recognition and Dependencies Estimation V. Vapnik and L.
Bottou On the Geometry of Feedforward Neural Network Error Surfaces An Mei Chen, Haw-minn Lu and Robert Hecht-Nielsen Rational Function Neural Network Henry Leung and Simon Haykin On an Unsupervised Learning Rule for Scalar Quantization Following the Maximum Entropy Principle Marc M. Van Hulle and Dominique Martinez A Function Estimation Approach to Sequential Learning with Neural Networks Visakan Kadirkamanathan and Mahesan Niranjan Learning Finite State Machines with Self-Clustering Recurrent Networks Zheng Zeng, Rodney Goodman and Padhraic Smyth ----- SUBSCRIPTIONS - VOLUME 6 - BIMONTHLY (6 issues) ______ $40 Student and Retired ______ $65 Individual ______ $166 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu -----  From cabestan at eel.upc.es Tue Oct 19 05:59:16 1993 From: cabestan at eel.upc.es (Joan Cabestany) Date: Tue, 19 Oct 1993 9:59:16 UTC Subject: Proceedings available Message-ID: <mailman.630.1149591290.29955.connectionists@cs.cmu.edu> Proceedings available _____________________ After the last edition of IWANN'93 (International Workshop on Artificial Neural Networks) held in Spain (Sitges) during June 1993, some books with the Proceedings are still available at a special price. Reference: New Trends in Neural Computation (IWANN'93 Proceedings) J.Mira, J.Cabestany, A.Prieto editors Lecture Notes in Computer Science number 686 SPRINGER VERLAG 1993 Price: 9000 pesetas (spanish currency) Method of payment: VISA Card number _________________________ Expiration date _______________ Name of card holder ______________________________________________________ Date ____________ Signature ___________________________________ Send this form to ULTRAMAR CONGRESS Att. Mr. J.Balada Diputacio, 238, 3 08007 BARCELONA Spain Fax + 34.3.412.03.19  From simon at dcs.ex.ac.uk Tue Oct 19 11:23:58 1993 From: simon at dcs.ex.ac.uk (simon@dcs.ex.ac.uk) Date: Tue, 19 Oct 93 16:23:58 +0100 Subject: UK PhD connectionist studentship available Message-ID: <16831.9310191523@kaos.dcs.exeter.ac.uk> PhD Studentship -- full-time, SERC funded The Department of Computer Science at the University of Exeter has a SERC quota award available for a suitable candidate to pursue full-time research for a PhD degree. Applicants should have a good first degree in Computer Science (or a closely related discipline) with a sound knowledge of neural computing and/or software engineering. The successful applicant will join a research group exploring the use of neural computing as a novel software technology. Potential projects range from formal analysis of network implementations of well-defined problems to development of visualization techniques to facilitate efficient network training as well as to provide support for a conceptual understanding of neural net implementations. Application forms and further information can be obtained from: Lyn Shackleton, Department of Computer Science, University of Exeter, Exeter EX4 4PT.
email: lyn at dcs.exeter.ac.uk; tel: 0392 264066; FAX: 0392 264067 Informal enquiries and requests for further details of the research group's activities may be made to: Professor Derek Partridge, Department of Computer Science, University of Exeter, Exeter EX4 4PT, email: derek at dcs.exeter.ac.uk tel: 0392 264061, FAX: 0392 264067, The closing date for applications is November 19th, 1993. Interviews will be conducted in the week beginning November 22, 1993. It is expected that the award will be taken up in January 1994. -- Simon Klyne Connection Science Laboratory email: simon at dcs.exeter.ac.uk Department of Computer Science phone: (+44) 392 264066 University of Exeter EX4 4QE, UK.  From mozer at dendrite.cs.colorado.edu Tue Oct 19 14:40:14 1993 From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer) Date: Tue, 19 Oct 1993 12:40:14 -0600 Subject: information on NIPS*93 workshop accommodations Message-ID: <199310191840.AA22297@neuron.cs.colorado.edu> The NIPS*93 brochure is a bit sketchy concerning accommodations at the NIPS workshops, to be held at the Radisson Resort Vail December 2-4. To make reservations at the Radisson, call (800) 648-0720. For general information on the resort, the central number is (303) 476-4444. Reservations can also be made by fax: (303) 476-1647. And if you would like to let the glowing power of a live psychic answer your very personal questions, the number is (900) 820-7131. Note that rooms will be held for us only until the beginning of November, and last year many participants had to sleep in the snow due to lack of foresight in making reservations. Concerning lift tickets: Unfortunately, the NIPS brochure was published before we were able to obtain this year's lift ticket prices. The prices have increased roughly $5/day over those published in the brochure. If you wish to advance purchase tickets, though, we ask that you send in the amounts published in the brochure. We will collect the difference on site. (Sorry, it's the only feasible way to do recordkeeping at this point.) Lift tickets may also be purchased on site at an additional expense of roughly $1/day. Very sorry for the inconvenience. Mike Mozer NIPS*93 Workshop Chair  From fulkersw at smtplink.de.deere.com Tue Oct 19 10:11:36 1993 From: fulkersw at smtplink.de.deere.com (William Fulkerson) Date: Tue, 19 Oct 93 09:11:36 CDT Subject: Possible position in Finance Message-ID: <9310190911.A02154@smtplink.de.deere.com> A position is anticipated in the Finance department, Deere & Company, Moline, Illinois. The purpose of this message is to survey the interest in such a position and to determine the skills available among the likely candidates. The successful candidate will have a theoretical background and 1 to 3 years' professional experience in applications of neural networks, evolutionary computing, and/or fuzzy logic. Although professional experience in applications is required, it need not be in finance. Applicants with advanced degrees in engineering, statistics, or computer science are preferred. The applicant must want to apply the above technologies to financial problems and be willing to pursue these financial applications for an extended period of years. The initial assignment will be to develop trading systems for foreign exchange and commercial paper. Future assignments will be within the Finance department and may include pension fund management.
If your interest, application experience, training, and skills match this description, please send a short description of your qualifications via e-mail to: fulkersw at smtplink.de.deere.com. Receipt of your e-mail will be acknowledged.  From srx014 at cck.coventry.ac.uk Wed Oct 20 11:36:42 1993 From: srx014 at cck.coventry.ac.uk (CRReeves) Date: Wed, 20 Oct 93 11:36:42 WET DST Subject: ICSE94 - Call for Papers Message-ID: <7831.9310201036@cck.coventry.ac.uk> The following may be of interest to connectionists working on control engineering applications: ****************************************************************************** ICSE 94 Tenth International Conference on Systems Engineering First Announcement Call for papers 6-8 September 1994 C O V E N T R Y U N I V E R S I T Y Held at Coventry University UK Organised by the Control Theory and Applications Centre International Conference on Systems Engineering The 10th International Conference on Systems Engineering, ICSE'94, will take place at Coventry University and is organised by the Control Theory and Applications Centre, an interdisciplinary research centre established by drawing together staff from the School of Engineering and the School of Mathematical and Information Sciences. Scope of Conference The Conference will cover the general area of Systems Engineering, with particular emphasis being placed on applications. It is expected to include sessions on the following themes: - Adaptive Control and System Identification - Algorithms and Architectures - Control Theory and Industrial Applications - Educational Developments in Systems Engineering - Energy Efficiency and Environmental Systems - Image and Signal Processing - Manufacturing Systems - Modelling and Simulation - Rule Based Control and Fuzzy Decision Making - Neural Networks and Genetic Algorithms in Control and Identification Call for Papers Authors wishing to contribute to the Conference should submit an abstract (three copies) of their proposed contribution before 15 February 1994. The abstract should be typed and written in English. Refereeing of abstracts submitted before the deadline date will take place on a regular basis. This will allow early decisions to be taken and should assist authors in their planning arrangements. The Organising Committee would also welcome proposals for arranged specialist sessions on a focused theme relevant to the Conference, each session consisting of up to six papers. All papers presented will be considered for publication in the Journal 'Systems Science', published in Poland (in English). Deadlines - Submission of abstracts 15 February 1994 - Acceptance of papers 7 March 1994 - Submission of full papers 1 June 1994 It is intended to have the Conference Proceedings available for participants. Consequently, deadlines for submission of papers should be strictly respected. Preliminary Arrangements - The Conference fee, provisionally estimated at 325 Pounds Sterling, includes a copy of the Conference Proceedings, lunches on the 6th, 7th and 8th, the Conference Banquet on the 6th, and a Civic Reception followed by the Conference Dinner on the 7th. - Participants will have the option of being accommodated in the University Halls of Residence overlooking Coventry Cathedral or in local hotels or guest houses. The Conference fee is exclusive of accommodation charges. - The working language of the Conference is English, which will be used for all presentations, discussions and printed material.
- The Conference Banquet is to be of the 'Olde English Mediaeval' style and will be held at the historic Coombe Abbey just outside Coventry. Abstracts, papers and requests for further details should be sent to: Dr Keith Burnham Conference Secretary ICSE94 Control Theory and Applications Centre Coventry University Priory Street Coventry CV1 5FB United Kingdom Telephone: 0203 838972 (International +44 203 838972) Telex: 9312102228 (CPG) Fax: 0203 838585 (International +44 203 838585) Email: mtx062 at uk.ac.cov  From marwan at sedal.sedal.su.OZ.AU Wed Oct 20 22:47:19 1993 From: marwan at sedal.sedal.su.OZ.AU (Marwan Jabri) Date: Wed, 20 Oct 93 21:47:19 EST Subject: software Message-ID: <9310201147.AA21214@sedal.sedal.su.OZ.AU> MUME version 0.73 has been released. With respect to the earlier released version of MUME (version 0.6), it provides: o Xmume is an X Windows-based editor/visualisation tool for MUME o support for scheduling of multiple training algorithms/strategies for the simultaneous training of multiple networks Both binaries and sources are freely available. The binaries are available via anonymous ftp and the sources are available following the return of a signed license, available as a postscript file in the anonymous ftp directory. The machine is mickey.sedal.su.oz.au (129.78.24.170); login as anonymous for the binaries or to fetch the license file (license.ps). The file MUME-README can be fetched and provides instructions/information. MUME can be compiled on Unix and DOS machines. Xmume, of course, only works on Unix boxes.  From gary at cs.ucsd.edu Wed Oct 20 08:50:33 1993 From: gary at cs.ucsd.edu (Gary Cottrell) Date: Wed, 20 Oct 93 05:50:33 -0700 Subject: Virtual Festschrift for Jellybean Message-ID: <9310201250.AA16310@odin.ucsd.edu> ***********************REMINDER********************************* The Petschrift Papers are DUE NOVEMBER FIRST!!! "Computer! Funny bone ON!!!" Here is the original announcement: Dear Connectionists, On a sad day this spring, my longtime collaborator, friend, and inspiration for the field of Dognitive Science, Jellybean, died at the ripe old age of 16. His age (for a golden retriever/samoyed cross) at his death is a testament to modern veterinary medicine. Alas, we still must all go sometime. The purpose of this message is to invite the humorists among us to contribute a piece to a collection I am editing of humor in Jellybean's memory. As you may know, a "festschrift" is a volume of articles presented as a tribute or memorial to an academic. I have no plans to publish this except "virtually", through the auspices of the neuroprose archive. I already have several contributions that were privately solicited. This is a public solicitation for humor for this purpose. Your piece does not have to be in the "Dognitive Science" vein, but may be anything having to do with neural nets, Cognitive Science, or nearby fields. I reserve editorial right to accept, edit, and/or reject any material submitted that I deem either inappropriate, too long (I am expecting pieces to be on the order of 1-8 pages), or simply, not funny. Any editing will be with the agreement of the author. Latex files are probably best. Remember, brevity is the mother of wit. The deadline for submission will be Nov. 1, 1993. Email submissions only to gary at cs.ucsd.edu. Thanks for your attention. Gary Cottrell 619-534-6640 Reception: 619-534-6005 FAX: 619-534-7029 Computer Science and Engineering 0114 University of California San Diego La Jolla, Ca.
92093 gary at cs.ucsd.edu (INTERNET) gcottrell at ucsd.edu (BITNET, almost anything) ..!uunet!ucsd!gcottrell (UUCP)  From lksaul at cmt6.mit.edu Thu Oct 21 11:13:48 1993 From: lksaul at cmt6.mit.edu (Lawrence K. Saul) Date: Thu, 21 Oct 93 11:13:48 -0400 Subject: Paper Announcement --- Learning in Boltzmann Trees Message-ID: <9310211513.AA10103@cmt6.mit.edu> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/saul.boltzmann.ps.Z The file saul.boltzmann.ps.Z is now available for copying in the Neuroprose repository: Learning in Boltzmann Trees (11 pages) Lawrence Saul and Michael Jordan Massachusetts Institute of Technology ABSTRACT: We introduce a family of hierarchical Boltzmann machines that can be trained using standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity. We show how to implement the learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. Stochastic averages are computed by the technique of decimation. We present results on the problems of N-bit parity and the detection of hidden symmetries. Lawrence Saul lksaul at cmt6.mit.edu  From fdoyle at ecn.purdue.edu Fri Oct 22 13:12:30 1993 From: fdoyle at ecn.purdue.edu (Frank Doyle) Date: Fri, 22 Oct 93 12:12:30 -0500 Subject: No subject Message-ID: <9310221712.AA18339@volterra.ecn.purdue.edu> Postdoctoral position available in: NEURO-MODELING in the Department of Chemical Engineering, Purdue University Position for 2 years (beginning Fall 1993; salary: $25,000 per year). Subject: Neuro-modeling of blood pressure control This project is part of an interdisciplinary program involving industrial and academic participants from DuPont, Purdue University, the University of Pennsylvania, and Louisiana State University. The program encompasses the disciplines of chemical engineering, automatic control, and neuroscience. Active interactions with engineers and the nonlinear control and modeling community at Purdue and DuPont, as well as with the neuroscientists at DuPont and Penn, will be necessary for the success of the project. A strong background in neuro-modeling is required. The facilities at Purdue include state-of-the-art computational workstations (HP 735s and Sun 10/41s). The postdoctoral candidate will work on the development of models of the control mechanisms responsible for blood pressure regulation. The neural system under investigation is the cardiorespiratory control system, which integrates sensory information on respiratory and cardiovascular variables to regulate and coordinate cardiac, vascular and respiratory activity. In order to better understand this system, our program does neurobiological research and computational modeling. In effect, these results reverse-engineer neuronal and systems function, which can have implications for engineering applications; the applications of first interest to us are in chemical engineering. The overall effort involves neurobiologists, chemical engineers, computer scientists, bioengineers and neural systems modelers. The present position is meant to contribute to the neural systems modeling - chemical engineering interaction.
The neural computational-modeling work is progressing at several levels: (1) systems-level modeling of the closed-loop cardiorespiratory system, (2) cellular-level modeling of nonlinear computation in Hodgkin-Huxley style neuron models, and (3) network modeling of networks built up from HH-style neurons incorporating channel kinetics and synaptic conductances to capture the mechanisms in the baroreceptor vagal reflex. The macroscopic model will be used (in conjunction with experimental data from the literature and from the laboratory of Dr. Schwaber) in developing structures to represent the control functions. The synaptic-level modeling activities will be used in developing the building blocks which achieve the control function. The present position will focus on research goals, under the supervision of Dr. Frank Doyle, that include the identification of novel control and modeling techniques. Interested candidates should send their curriculum vitae to BOTH: Prof. Francis J. Doyle III School of Chemical Engineering Purdue University West Lafayette, IN 47907-1283 (317) 497-9228 E-mail: fdoyle at ecn.purdue.edu & Dr. James Schwaber Neural Computation Group E.I. DuPont deNemours & Co., Inc. P.O. Box 80352 Wilmington, DE 19880-0352 (302) 695-7136 E-mail: schwaber at eplrx7.es.duPont.com  From RAMPO at SALERNO.INFN.IT Fri Oct 22 10:35:00 1993 From: RAMPO at SALERNO.INFN.IT (RAMPO@SALERNO.INFN.IT) Date: Fri, 22 OCT 93 14:35 GMT Subject: E.R. Caianiello Message-ID: <2163@SALERNO.INFN.IT> Prof. E. R. Caianiello suddenly died this morning at 8.00 in his home in Naples.  From weigend at sabai.cs.colorado.edu Fri Oct 22 03:37:55 1993 From: weigend at sabai.cs.colorado.edu (weigend@sabai.cs.colorado.edu) Date: Fri, 22 Oct 93 01:37:55 MDT Subject: Santa Fe Time Series Competition book out Message-ID: <199310220737.AA24728@sabai.cs.colorado.edu> Announcing book on the results of the Santa Fe Time Series Competition: ____________________________________________________________________ Title: TIME SERIES PREDICTION: Forecasting the Future and Understanding the Past. Editors: Andreas S. Weigend and Neil A. Gershenfeld Publisher: Addison-Wesley, September 1993. Paperback ISBN 0-201-62602-0 US$32.25 (672 pages) Hardcover ISBN 0-201-62601-2 US$49.50 (672 pages) The rest of this message gives some background, ordering information, and the table of contents. ____________________________________________________________________ Most observational disciplines, such as physics, biology, and finance, try to infer properties of an unfamiliar system from the analysis of a measured time record of its behavior. There are many mature techniques associated with traditional time series analysis. However, during the last decade, several new and innovative approaches have emerged (such as neural networks and time-delay embedding), promising insights not available with these standard methods. Unfortunately, the realization of this promise has been difficult. Adequate benchmarks have been lacking, and much of the literature has been fragmentary and anecdotal. This volume addresses these shortcomings by presenting the results of a careful comparison of different methods for time series prediction and characterization.
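As a quick illustration of the time-delay embedding idea mentioned above, here is a minimal sketch in Python/numpy -- generic code, not code from the book; the embedding dimension, lag, and test signal are invented for illustration. A scalar series is forecast by reconstructing state vectors from lagged samples and copying forward the successor of the nearest historical state (the so-called method of analogues, which also appears in the chapter list below):

    import numpy as np

    def delay_embed(x, dim=3, lag=1):
        # stack lagged copies of the scalar series into state vectors
        n = len(x) - (dim - 1) * lag
        return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

    def nn_forecast(x, dim=3, lag=1):
        # predict the next value by finding the past state vector closest
        # to the current one and returning its observed successor
        states = delay_embed(np.asarray(x, dtype=float), dim, lag)
        query, past = states[-1], states[:-1]
        j = int(np.argmin(np.linalg.norm(past - query, axis=1)))
        return x[j + (dim - 1) * lag + 1]

    # toy usage: one-step-ahead forecast of a noisy sine wave
    t = np.linspace(0, 20 * np.pi, 2000)
    x = np.sin(t) + 0.01 * np.random.randn(len(t))
    print(nn_forecast(x, dim=3, lag=5))

The embedded vectors serve as a crude reconstruction of the underlying state space; neural network forecasters typically learn a mapping from such lag vectors to the next value instead of copying a single neighbour.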
Such breadth and depth were achieved through the Santa Fe Time Series Prediction and Analysis Competition, which brought together an international group of time series experts from a wide variety of fields to analyze data from the following common data sets: - A physics laboratory experiment (NH3 laser) - Physiological data from a patient with sleep apnea - Tick-by-tick currency exchange rate data - A computer-generated series designed specifically for the Competition - Astrophysical data from a variable white dwarf star - J. S. Bach's last (unfinished) fugue from "Die Kunst der Fuge." In bringing together the results of this unique competition, this volume serves as a much-needed survey of the latest techniques in time series analysis. Andreas Weigend received his Ph.D. from Stanford University and was a postdoc at Xerox PARC. He is Assistant Professor in the Computer Science Department and at the Institute of Cognitive Science at the University of Colorado at Boulder. Neil Gershenfeld received his Ph.D. from Cornell University and was a Junior Fellow at Harvard University. He is Assistant Professor at the Media Lab at MIT. ____________________________________________________________________ Order it through your bookstore, or directly from the publisher by - calling the Addison-Wesley Order Department at 1-800-358-4566, - faxing 1-800-333-3328, - emailing <marcuss at world.std.com>, or - writing to Advanced Book Marketing Addison-Wesley Publishing One Jacob Way Reading, MA 01867, USA. VISA, MasterCard, American Express, and checks are accepted. When you prepay by check, Addison-Wesley pays shipping and handling charges. If payment does not accompany your order, shipping charges will be added to your invoice. Addison-Wesley is required to remit sales tax to the following states: AZ, AR, CA, CO, CT, FL, GA, IL, IN, LA, ME, MA, MI, MN, NY, NC, OH, PA, RI, SD, TN, TX, UT, VT, WA, WV, WI. _____________________________________________________________________ TABLE OF CONTENTS xv Preface Andreas S. Weigend and Neil A. Gershenfeld 1 The Future of Time Series: Learning and Understanding Neil A. Gershenfeld and Andreas S. Weigend Section I. DESCRIPTION OF THE DATA SETS__________________________________ 73 Lorenz-Like Chaos in NH3-FIR Lasers Udo Huebner, Carl-Otto Weiss, Neal Broadus Abraham, and Dingyuan Tang 105 Multi-Channel Physiological Data: Description and Analysis David R. Rigney, Ary L. Goldberger, Wendell C. Ocasio, Yuhei Ichimaru, George B. Moody, and Roger G. Mark 131 Foreign Currency Dealing: A Brief Introduction Jean Y. Lequarre 139 Whole Earth Telescope Observations of the White Dwarf Star (PG1159-035) J. Christopher Clemens 151 Baroque Forecasting: On Completing J.S. Bach's Last Fugue Matthew Dirst and Andreas S. Weigend Section II. TIME SERIES PREDICTION________________________________________ 175 Time Series Prediction by Using Delay Coordinate Embedding Tim Sauer 195 Time Series Prediction by Using a Connectionist Network with Internal Delay Lines Eric A. Wan 219 Simple Architectures on Fast Machines: Practical Issues in Nonlinear Time Series Prediction Xiru Zhang and Jim Hutchinson 243 Neural Net Architectures for Temporal Sequence Processing Michael C. Mozer 265 Forecasting Probability Densities by Using Hidden Markov Models with Mixed States Andrew M. Fraser and Alexis Dimitriadis 283 Time Series Prediction by Using the Method of Analogues Eric J. Kostelich and Daniel P. Lathrop 297 Modeling Time Series by Using Multivariate Adaptive Regression Splines (MARS) P.A.W.
Lewis, B.K. Ray, and J.G. Stevens 319 Visual Fitting and Extrapolation George G. Lendaris and Andrew M. Fraser 323 Does a Meeting in Santa Fe Imply Chaos? Leonard A. Smith Section III. TIME SERIES ANALYSIS AND CHARACTERIZATION___________________ 347 Exploring the Continuum Between Deterministic and Stochastic Modeling Martin C. Casdagli and Andreas S. Weigend 367 Estimating Generalized Dimensions and Choosing Time Delays: A Fast Algorithm Fernando J. Pineda and John C. Sommerer 387 Identifying and Quantifying Chaos by Using Information-Theoretic Functionals Milan Palus 415 A Geometrical Statistic for Detecting Deterministic Dynamics Daniel T. Kaplan 429 Detecting Nonlinearity in Data with Long Coherence Times James Theiler, Paul S. Linsay, and David M. Rubin 457 Nonlinear Diagnostics and Simple Trading Rules for High-Frequency Foreign Exchange Rates Blake LeBaron 475 Noise Reduction by Local Reconstruction of the Dynamics Holger Kantz Section IV. PRACTICE AND PROMISE_________________________________________ 493 Large-Scale Linear Methods for Interpolation, Realization, and Reconstruction of Noisy, Irregularly Sampled Data William H. Press and George B. Rybicki 513 Complex Dynamics in Physiology and Medicine Leon Glass and Daniel T. Kaplan 529 Forecasting in Economics Clive W.J. Granger 539 Finite-Dimensional Spatial Disorder: Description and Analysis V.S. Afraimovich, M.I. Rabinovich, and A.L. Zheleznyak 557 Spatio-Temporal Patterns: Observations and Analysis Harry L. Swinney 569 Appendix: Accessing the Server 571 Bibliography (800 references) 631 Index  From weigend at sabai.cs.colorado.edu Fri Oct 22 03:33:09 1993 From: weigend at sabai.cs.colorado.edu (weigend@sabai.cs.colorado.edu) Date: Fri, 22 Oct 93 01:33:09 MDT Subject: Music and Audition at NIPS (1st day at Vail) Message-ID: <199310220733.AA24718@sabai.cs.colorado.edu> Call for abstracts and announcement: NIPS workshop: Connectionism for Music and Audition Date: December 3 (this is the first of the two Vail days) Organizers: Dick Duda (San Jose State University, duda at sri.com) and Andreas Weigend (University of Colorado at Boulder, weigend at cs.colorado.edu) If you are interested in presenting at this NIPS workshop in Vail, please send an abstract to both organizers **before November 1st**. We will review the abstracts and then decide on the schedule. We also invite suggestions for further topics of discussion. ____________________________________________________________________________ CONTENTS: While speech and language dominate our auditory experience, the human auditory system is constantly processing a much broader world of sound events. Some of the most fundamental questions concerning music and general sound perception are still largely unanswered; the range extends from the separation and organization of sound streams to the problem of a hierarchy of time scales. In this workshop, we want to explore the development and application of connectionist methods to music and audition. At this NIPS workshop, we plan to address both topics of music and audition. Topic 1: Music In recent years, NIPS has seen (and heard) neural networks generate tunes and harmonize chorales. With a large amount of music becoming available in computer-readable form, real data can be used to build connectionist models. The time has come to think about questions, tasks, and goals in areas ranging from connectionist modeling of expectations to automated music analysis and composition.
One feature of music that makes it interesting to model is the hierarchy of time scales involved. Which connectionist architectures are suited for several orders of magnitude in time? How can temporal information important for music be integrated efficiently? Particular attention will be paid to the advantages of different types of recurrent networks, and to architectures that try to incorporate invariances of the problem domain (e.g., TDNN). Topic 2: Audition While the human auditory system is exquisitely sensitive to speech and music, it is based on a mammalian auditory system that is equally adept at solving more fundamental tasks. These tasks include (a) separating multiple sound sources, (b) localizing the sources in space and time, (c) characterizing the sources in terms of identifying qualities such as pitch and timbre, (d) suppressing the effects of early reflections and room reverberation, and (e) characterizing the acoustic environment. To date, most of the work on auditory scene analysis has focussed on the auditory periphery, as represented by cochlear models and networks for detecting onsets, harmonicity, modulation, and interaural time and intensity differences. The major unanswered questions concern the nature of structures and processes that can integrate this information into stable and valid percepts. How should different sound objects be represented in a connectionist architecture? What will stabilize these representations as the sources and the listener move? How will these representations support other tasks? What is the role of expectations or other forms of bi-directional information flow? How can cross-modal information be included? What is the role of world constraints and environmental factors, and how are they reflected in the network architecture? Finally, what is a scientifically appropriate methodology for evaluating the performance of proposed connectionist solutions to these problems? ____________________________________________________________________________ General NIPS information and registration: An electronic copy of the 1993 NIPS registration brochure is available in postscript format via anonymous ftp at helper.systems.caltech.edu in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure, you can write to nips93 at systems.caltech.edu or NIPS Foundation P.O. Box 60035 Pasadena, CA 91116-6035 ____________________________________________________________________________  From harnad at Princeton.EDU Sat Oct 23 20:50:29 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Sat, 23 Oct 93 20:50:29 EDT Subject: Learning - Implicit vs. Explicit: BBS Call for Commentators Message-ID: <9310240050.AA29897@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by D.R. SHANKS and M.F. ST. JOHN on IMPLICIT VS. EXPLICIT LEARNING that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate.
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ CHARACTERISTICS OF DISSOCIABLE HUMAN LEARNING SYSTEMS David R. Shanks Department of Psychology University College London London WC1E 6BT, England david.shanks at psychol.ucl.ac.uk Mark F. St. John Department of Cognitive Science University of California at San Diego La Jolla, CA 92093 mstjohn at cogsci.ucsd.edu KEYWORDS: learning; memory; consciousness; explicit/implicit processes; rules; instances; unconscious processes ABSTRACT: The proposal that there exist independent explicit and implicit learning systems is based on two further distinctions: (i) learning that takes place with versus without concurrent awareness, and (ii) learning that involves the encoding of instances (or fragments) versus the induction of abstract rules or hypotheses. Implicit learning is assumed to involve unconscious rule learning. We examine the implicit learning evidence from subliminal learning, conditioning, artificial grammar learning, instrumental learning, and reaction times in sequence learning. Unconscious learning has not been satisfactorily established in any of these areas. The assumption that learning in some of these tasks (e.g., artificial grammar learning) is predominantly based on rule abstraction is questionable. When subjects cannot report the "implicitly learned" rules that govern stimulus selection, this is often because their knowledge consists of instances or fragments of the training stimuli rather than rules. In contrast to the distinction between conscious and unconscious learning, the distinction between instance and rule learning is a sound and meaningful way of taxonomizing human learning. We discuss various computational models of these two forms of learning. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.shanks). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from an Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.shanks When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. 
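Condensed, the whole session described above looks like this (a sketch only; the login banner and exact prompts will vary from site to site):

    unix> ftp princeton.edu
    Name: anonymous
    Password: yourname@yoursite.edu
    ftp> cd /pub/harnad/BBS
    ftp> get bbs.shanks
    ftp> quit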
---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). JANET users without ftp can instead utilise the file transfer facilities at sites uk.ac.ft-relay or uk.ac.nsf.sun. Full details are available on request. -------------------------------------------------------------  From piero at dist.dist.unige.it Sun Oct 24 16:43:32 1993 From: piero at dist.dist.unige.it (Piero Morasso) Date: Sun, 24 Oct 93 16:43:32 MET Subject: farewell to E.R. Caianiello Message-ID: <9310241543.AA21642@dist.dist.unige.it> ---------------------------------------------------------------- FAREWELL TO A PIONEER: Eduardo R. Caianiello ---------------------------------------------------------------- A physicist by training, Eduardo R. Caianiello was one of the brave few who dared to start the field of neural networks more than 30 years ago. In 1991 he was one of the founders of the European Neural Network Society and was chairing the organization of the 1994 Conference ICANN'94 in Sorrento until he suddenly and peacefully died, on October 22, 1993, in his home, in Naples. Let us remember and perhaps re-read one of his seminal papers: E.R. Caianiello, Outline of a theory of thought process and thinking machines. J. Theor. Biol. 2, 204-235, 1961. ----------------------------------------------------------------  From large at cis.ohio-state.edu Mon Oct 25 09:43:53 1993 From: large at cis.ohio-state.edu (E. large) Date: Mon, 25 Oct 93 09:43:53 -0400 Subject: Music and Audition at NIPS (1st day at Vail) In-Reply-To: weigend@sabai.cs.colorado.edu's message of Fri, 22 Oct 93 01:33:09 MDT <199310220733.AA24718@sabai.cs.colorado.edu> Message-ID: <9310251343.AA03835@cerebellum.cis.ohio-state.edu> Resonance and the Perception of Musical Meter Edward W. Large and John F. Kolen The perception of musical rhythm is traditionally described as involving, among other things, the assignment of metrical structure to rhythmic patterns. In our view, the perception of metrical structure is best described as a dynamic process in which the temporal organization of musical events entrains the listener in much the same way that two pendulum clocks hanging on the same wall synchronize their motions so that they tick in lock step. In this talk, we re-assess the notion of musical meter, and show how the perception of this sort of temporal organization can be modeled as a system of non-linearly coupled oscillators responding to musical rhythms. Individual oscillators phase- and frequency-lock to components of rhythmic patterns, embodying the notion of musical pulse, or beat. The collective behavior of a system of oscillators represents a self-organized response to rhythmic patterns, embodying a "perception" of metrical structure. When exposed to performed musical rhythms, the system shows the ability to simultaneously perform quantization (categorization of temporal intervals) and assignment of metrical structure in real time. We discuss implications for psychological theories of temporal expectancy, "categorical" perception of temporal intervals, and the perception of metrical structure.
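As a rough illustration of the entrainment idea in the abstract above -- a Kuramoto-style simplification in Python, not Large & Kolen's actual model, with all parameter values invented -- a single phase oscillator driven by a periodic beat locks onto the beat whenever the frequency mismatch is small relative to the coupling strength:

    import numpy as np

    dt = 0.001      # integration step, seconds
    f_osc = 1.9     # oscillator's intrinsic frequency, Hz
    f_beat = 2.0    # frequency of the rhythmic stimulus, Hz
    K = 2.0         # coupling strength, rad/s

    phase = 0.0
    for step in range(int(10.0 / dt)):          # simulate 10 seconds
        t = step * dt
        stim = 2 * np.pi * f_beat * t           # phase of the beat
        # the oscillator runs at its own rate but is pulled toward the
        # stimulus; it locks when |2*pi*(f_beat - f_osc)| < K
        phase += dt * (2 * np.pi * f_osc + K * np.sin(stim - phase))

    # after locking, the oscillator trails the beat by a constant lag
    print("final phase lag: %.3f rad" % ((stim - phase) % (2 * np.pi)))

A bank of such oscillators with intrinsic frequencies spread over several octaves gives one crude way to picture a self-organized, multi-scale metrical response.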
From mpp at cns.brown.edu Mon Oct 25 10:23:54 1993 From: mpp at cns.brown.edu (Michael P. Perrone) Date: Mon, 25 Oct 1993 10:23:54 -0400 (EDT) Subject: NIPS*93 Hybrid Systems Workshop Message-ID: <9310251423.AA14415@cns.brown.edu>  From french at willamette.edu Mon Oct 25 12:07:29 1993 From: french at willamette.edu (Bob French) Date: Mon, 25 Oct 93 09:07:29 PDT Subject: NIPS-93 Workshop on catastrophic interference Message-ID: <9310251607.AA14952@willamette.edu> NIPS-93 Workshop: ================ CATASTROPHIC INTERFERENCE IN CONNECTIONIST NETWORKS: CAN IT BE PREDICTED, CAN IT BE PREVENTED? Date: Saturday, December 4, 1993, at Vail, Colorado ==== Intended audience: Connectionists, cognitive scientists and applications-oriented users of connectionist networks interested in a better understanding of: i) when and why their networks can suddenly and completely forget previously learned information; ii) how it is possible to reduce or even eliminate this phenomenon. Organizer: Bob French ========= Computer Science Department Willamette University, Salem OR french at willamette.edu Program: ======== When connectionist networks learn new information, they can suddenly and completely forget everything they had previously learned. This problem is called catastrophic forgetting or catastrophic interference. Given the demonstrated severity of the problem, it is intriguing to note that this problem has to date received very little attention. When new information must be added to an already-trained connectionist network, it is currently taken for granted that the network will simply cycle through all of the old data again. Since relearning all of the old data is both psychologically implausible and impractical for very large data sets, is it possible to do otherwise? Can connectionist networks be developed that do not forget catastrophically -- or perhaps that do not forget at all -- in the presence of new information? Or is catastrophic forgetting perhaps the inevitable price for using fully distributed representations? Under what circumstances will a network forget or not forget? Further, can the amount of forgetting be predicted with any reliability? These questions are of particular interest to anyone who intends to use connectionist networks as a memory/generalization device. This workshop will focus on: - the theoretical reasons for catastrophic interference; - the techniques that have been developed to eliminate it or to reduce its severity; - the side-effects of catastrophic interference; - the degree to which a priori prediction of catastrophic forgetting is or is not possible. As connectionist networks become more and more a part of applications packages, the problem of catastrophic interference will have to be addressed. This workshop will bring the audience up to date on current research on catastrophic interference. Speakers: ======== Stephan Lewandowsky (lewan at constellation.ecn.uoknor.edu) Department of Psychology University of Oklahoma Phil A. Hetherington (het at blaise.psych.mcgill.ca) Department of Psychology McGill University Noel Sharkey (noel at dcs.exeter.ac.uk) Connection Science Laboratory Dept. of Computer Science University of Exeter, U.K.
Bob French (french at willamette.edu) Computer Science Department Willamette University Morning session: --------------- 7:30 - 7:45 Bob French: An Introduction to the Problem of Catastrophic Interference in Connectionist Networks 7:45 - 8:15 Stephan Lewandowsky: Catastrophic Interference: Causes, Solutions, and Side-Effects 8:15 - 8:30 Brief discussion 8:30 - 9:00 Phil Hetherington: Sequential Learning in Connectionist Networks: A Problem for Whom? 9:00 - 9:30 General discussion Afternoon session ----------------- 4:30 - 5:00 Noel Sharkey: Catastrophic Interference and Discrimination. 5:00 - 5:15 Brief discussion 5:15 - 5:45 Bob French: Prototype Biasing and the Problem of Prediction 5:45 - 6:30 General discussion and closing remarks Below are the abstracts for the talks to be presented in this workshop: CATASTROPHIC INTERFERENCE: CAUSES, SOLUTIONS, AND SIDE-EFFECTS Stephan Lewandowsky Department of Psychology University of Oklahoma I briefly review the causes for catastrophic interference in connectionist models and summarize some existing solutions. I then focus on possible trade-offs between resolutions to catastrophic interference and other desirable network properties. For example, it has been suggested that reduced interference might impair generalization or prototype formation. I suggest that these trade-offs occur only if interference is reduced by altering the response surfaces of hidden units. -------------------------------------------------------------------------- SEQUENTIAL LEARNING IN CONNECTIONIST NETWORKS: A PROBLEM FOR WHOM? Phil A. Hetherington Department of Psychology McGill University Training networks in a strictly blocked, sequential manner normally results in poor performance because new items overlap with old items at the hidden unit layer. However, catastrophic interference is not a necessary consequence of using distributed representations. First, examination by the method of savings demonstrates that much of the early information is still retained: Items thought lost can be relearned within a couple of trials. Second, when items are learned in a windowed, or overlapped fashion, less interference obtains. And third, when items are presented in a strictly blocked, sequential manner to a network that already possesses a relevant knowledge base, interference may not occur at all. Thus, when modeling normal human learning there is no catastrophic interference problem. Nor is there a problem when modeling strictly sequential human memory experiments with a network that has a relevant knowledge base. There is only a problem when simple, unstructured, tabula rasa networks are expected to model the intricacies of human memory. -------------------------------------------------------------------------- CATASTROPHIC INTERFERENCE AND DISCRIMINATION Noel Sharkey Connection Science Laboratory Dept. of Computer Science University of Exeter Exeter, U.K. Connectionist learning techniques, such as backpropagation, have been used increasingly for modelling psychological phenomena. However, a number of recent simulation studies have shown that when a connectionist net is trained, using backpropagation, to memorize sets of items in sequence and without negative exemplars, newly learned information seriously interferes with old. Three converging methods were employed to show why and under what circumstances such retroactive interference arises. 
First, a geometrical analysis technique, derived from perceptron research, was introduced and employed to determine the computational and representational properties of feedforward nets with one and two layers of weights. This analysis showed that the elimination of interference always resulted in a breakdown of old-new discrimination. Second, a formally guaranteed solution to the problems of interference and discrimination was presented as the HARM model and used to assess the relative merits of other proposed solutions. Third, two simulation studies were reported that assessed the effects of providing nets with experience of the experimental task. Prior knowledge of the encoding task was provided to the nets either by block training them in advance or by allowing them to extract the knowledge through sequential training. The overall conclusion was that the interference and discrimination problems are closely related. Sequentially trained nets employing the backpropagation learning algorithm will unavoidably suffer from either one or the other. -------------------------------------------------------------------------- PROTOTYPE BIASING IN CONNECTIONIST NETWORKS Bob French Computer Science Dept. Willamette University Previously learned representations bias new representations. If subjects are told that a newly encountered object X belongs to an already familiar category P, they will tend to emphasize in their representation of X features of the prototype they have for the category P. This is the basis of prototype biasing, a technique that appears to significantly reduce the effects of catastrophic forgetting. The 1984 Congressional Voting Records database is used to illustrate prototype biasing. This database contains the yes-no voting records of Republican and Democratic members of Congress in 1984 on 16 separate issues. This database lends itself conveniently to the use of a network having 16 "yes-no" input units, a hidden layer and one "Republican/Democrat" output node. A "Republican" prototype and a "Democrat" prototype are built, essentially by separately averaging over Republican and Democrat hidden-layer representations. These prototypes then "bias" subsequent representations of new Democrats towards the Democrat prototype and of new Republicans towards the Republican prototype. Prototypes are learned by a second, separate backpropagation network that associates teacher patterns with their respective prototypes. Thus, ideally, when the "Republican" teacher pattern is fed into it, it produces the "Republican" prototype on output. The output from this network is continually fed back to the hidden layer of the primary network and is used to bias new representations. Also discussed in this paper are the problems involved in predicting the severity of catastrophic forgetting.  From tgd at chert.CS.ORST.EDU Mon Oct 25 18:06:15 1993 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Mon, 25 Oct 93 15:06:15 PDT Subject: Articles of interest in Machine Learning Message-ID: <9310252206.AA28485@curie.CS.ORST.EDU> The most recent issue of Machine Learning (Volume 13, Number 1) contains articles of interest to readers of this list.
Here is a partial table of contents: Cost-Sensitive Learning of Classification Knowledge and Its Applications in Robotics Ming Tan Extracting Refined Rules from Knowledge-Based Neural Networks Geoffrey Towell and Jude Shavlik Prioritized Sweeping: Reinforcement Learning with Less Data and Less Time Andrew Moore and Chris Atkeson Technical Note: Selecting a Classification Method by Cross-validation Cullen Schaffer For subscription information, contact Kluwer at world.std.com. Tom Dietterich, Executive Editor, Machine Learning  From terry at helmholtz.sdsc.edu Tue Oct 26 00:15:43 1993 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Mon, 25 Oct 93 21:15:43 PDT Subject: NIPS Workshop on Spatial Perception Message-ID: <9310260415.AA22226@helmholtz.sdsc.edu> NIPS*93 WORKSHOP ANNOUNCEMENT Title: Processing of visual and auditory space and its modification by experience. Intended Audience: Researchers interested in spatial perception, sensory fusion and learning. Organizers: Josef P. Rauschecker (josef at helix.nih.gov) and Terrence Sejnowski (terry at helmholtz.sdsc.edu) Program: This workshop will address the question of how spatial information is represented in the brain, how it is matched and compared by the visual and auditory systems, and how early sensory experience influences the development of these space representations. We will discuss neurophysiological and computational data from cats, monkeys, and owls that suggest how the convergence of different sensory space representations may be handled by the brain. In particular, we will look at the role of early experience and learning in establishing these representations. Lack of visual experience affects space processing in cats and owls differently. We will therefore discuss various kinds of plasticity in different spatial representations. Half the available time has been reserved for discussion and informal presentations. We will encourage lively audience participation. Morning Session (7:30 - 8:30) Presentations Predictive Hebbian learning and sensory fusion (Terry Sejnowski) A connectionist model of the owl's sound localization system (Dan Rosen) Intermodal compensatory plasticity of sensory systems (Josef Rauschecker) 8:30 - 9:30 Discussion Afternoon Session (4:30 - 5:30) Presentations Neurophysiological processing of visual and auditory space in monkeys (Richard Andersen) Learning map registration in the superior colliculus with predictive Hebbian learning (Alex Pouget) A neural network model for the detection of heading direction from optic flow in the cat's visual system (Markus Lappe) 5:30 - 6:30 Discussion ===================================================================== General NIPS information and registration: An electronic copy of the 1993 NIPS registration brochure is available in postscript format via anonymous ftp at helper.systems.caltech.edu in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure or other information, please send a request to nips93 at systems.caltech.edu or to: NIPS Foundation, P.O. Box 60035, Pasadena, CA 91116-6035 =====================================================================  From cohn at psyche.mit.edu Tue Oct 26 13:53:43 1993 From: cohn at psyche.mit.edu (David Cohn) Date: Tue, 26 Oct 93 13:53:43 EDT Subject: Post-NIPS Workshop on Robot Learning Message-ID: <9310261753.AA18987@psyche.mit.edu> The following workshop will be held on Friday, December 3rd in Vail, CO as one of the Post-NIPS workshops.
To be added to a mailing list for further information about the workshop, send electronic mail to "robot-learning-request at psyche.mit.edu". --------------------------------------------------------------------------- NIPS*93 Workshop: Robot Learning II: Exploration and Continuous Domains ================= Intended Audience: Researchers interested in robot learning, exploration, and active learning systems in general Organizer: David Cohn (cohn at psyche.mit.edu) Dept. of Brain and Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 Overview: ========= The goal of this workshop will be to provide a forum for researchers active in the area of robot learning and related fields. Due to the limited time available, we will focus on two major issues: efficient exploration of a learner's state space, and learning in continuous domains. Robot learning is characterized by sensor noise, control error, dynamically changing environments and the opportunity for learning by experimentation. A number of approaches, such as Q-learning, have shown great practical utility when learning under these difficult conditions. However, these approaches have only been proven to converge to a solution if all states of a system are visited infinitely often. What has yet to be determined is whether we can efficiently explore a state space so that we can learn without having to visit every state an infinite number of times, and how we are to address problems in continuous domains, where there are effectively an infinite number of states to be visited. This workshop is intended to serve as a follow-up to last year's post-NIPS workshop on machine learning. The two problems to be addressed this year were identified as two (of the many) crucial issues facing robot learning. The morning session of the workshop will consist of short presentations discussing theoretical approaches to exploration and to learning in continuous domains, followed by general discussion guided by a moderator. The afternoon session will center on practical and/or heuristic approaches to these problems in the same format. As time permits, we may also attempt to create an updated "Where do we go from here?" list, like that drawn up in last year's workshop. Video demos will be encouraged. If feasible, we will attempt to have a VCR set up after the workshop to allow for informal demos. Preparatory readings from the presenters will be ready by early November. To be placed on a list to receive continuing information about the workshop (such as where and when the readings appear on-line), send email to "robot-learning-request at psyche.mit.edu".
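For readers unfamiliar with the Q-learning update mentioned above, here is a minimal, generic tabular sketch in Python (the toy environment and all parameter values are invented; this is the textbook update rule, not any workshop participant's system):

    import random

    N_STATES, N_ACTIONS = 10, 2          # toy chain; actions: 0=left, 1=right
    alpha, gamma, epsilon = 0.1, 0.95, 0.1
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def step(s, a):
        # deterministic chain with a single rewarded state at the right end
        s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
        return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

    s = 0
    for _ in range(50000):
        # epsilon-greedy exploration: without some such scheme, states may
        # go unvisited and their Q-values are never corrected -- exactly
        # the exploration issue discussed above
        if random.random() < epsilon:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda b: Q[s][b])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = 0 if s2 == N_STATES - 1 else s2    # restart episode at the goal

    print(Q[0])   # after training, Q[0][1] should clearly exceed Q[0][0]

The convergence guarantee referred to above applies to exactly this finite table; with continuous state variables the table disappears, and the guarantee with it, which is the second issue the workshop takes up.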
Tentative Program: ================== December 3, 1993 Morning Session: Theoretical Approaches --------------------------------------- 7:30-8:30 Synopses of different approaches (20 min each): Andrew Moore, CMU, "The Parti-game approach to exploration"; Leemon Baird, USAF, "Reinforcement learning in continuous domains"; Juergen Schmidhuber, TUM, "Reinforcement-directed information acquisition in Markov environments" 8:30-9:30 Open discussion Afternoon Session: Heuristic Approaches --------------------------------------- 4:30-5:50 Synopses of different approaches (20 min each): Long-Ji Lin, Siemens, "RatBot: A mail-delivery robot"; Stephan Schaal, MIT, "Efficiently exploring high-dimensional spaces"; Terry Sanger, MIT/JPL, "Trajectory extension learning"; Jeff Schneider, Rochester, "Learning robot skills in high-dimensional action spaces" 5:50-6:30 Open discussion  From cowan at synapse.uchicago.edu Tue Oct 26 13:06:56 1993 From: cowan at synapse.uchicago.edu (Jack Cowan) Date: Tue, 26 Oct 93 10:06:56 -0700 Subject: Caianiello Message-ID: <9310261706.AA01028@synapse> I was very sorry to hear of the death of Eduardo Caianiello. Eduardo was one of the early workers in neural networks. I met him first in 1959 at MIT when he visited the Communications Biophysics Group, of which I was then a graduate student member. It was Caianiello who first tried to study discrete-state continuous-time models of spiking neurons in networks with modifiable couplings. His 1961 paper in the Journal of Theoretical Biology was an important early contribution to the field. Of course Caianiello did a lot more than neural networks: he was a top-notch theoretical physicist who made a number of important contributions to quantum field theory. When I first visited him in 1964 he was running the Institute of Theoretical Physics in Naples. He later set up the Laboratory for Cybernetics, and ran numerous very stimulating summer schools in Southern Italy on Physics, Cybernetics and Automata Theory, and on Neural Networks. He therefore was very influential in keeping alive work on Neural Networks, not just in Italy, but in Europe, North America, and also in Japan, where his work was very well received. His passing breaks yet another link with the early generation of workers in Neural Networks, and he will be missed, but not forgotten. Jack Cowan  From P.Refenes at cs.ucl.ac.uk Tue Oct 26 17:14:23 1993 From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk) Date: Tue, 26 Oct 93 21:14:23 +0000 Subject: PostDoctoral Fellowship. Message-ID: <mailman.631.1149591291.29955.connectionists@cs.cmu.edu> Postdoctoral Fellowship CALL FOR APPLICATIONS for a post doctoral research fellowship on NONLINEAR MODELLING IN FINANCIAL ENGINEERING at: London Business School, Department of Decision Science. Position: for up to 2 years (beginning Fall 1994; stipend: $50,000 pa). London Business School has been selected as one of the European Host Institutes for the CEC Human Capital and Mobility Programme and has been awarded a number of postdoctoral fellowships. The NeuroForecasting Unit at the faculty of Decision Sciences has a strong involvement in the application of neural networks to financial engineering including asset pricing, tactical asset allocation, equity investment, forex, etc.
and would like to put forward a candidate with a research proposal in neural network analysis including parameter significance estimation in multi-variate datasets, sensitivity analysis, and/or non-linear dimensionality reduction in the context of factor models for equity investment. Candidates must hold a PhD in non-linear modelling or related areas and have a proven research record. Normal HCM rules apply, i.e. only CEC nationals (excluding UK residents) are eligible. CEC nationals who have been working overseas for the past two years also qualify. Interested candidates should send their curriculum vitae and a summary of their research interests to: Dr A. N. Refenes NeuroForecasting Unit Department of Decision Science London Business School Sussex Place, Regents Park, London NW1 4SA, UK Tel: ++ 44 (71) 262 50 50 Fax: ++ 44 (71) 724 78 75  From smagt at fwi.uva.nl Wed Oct 27 09:01:52 1993 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Wed, 27 Oct 1993 14:01:52 +0100 Subject: CFP: Neural Systems for Robotics Message-ID: <199310271301.AA03104@carol.fwi.uva.nl> PROGRESS IN NEURAL NETWORKS series editor O. M. Omidvar CALL FOR PAPERS Special Volume: NEURAL SYSTEMS FOR ROBOTICS Editor: P. Patrick van der Smagt This series will review state-of-the-art research in neural networks, natural and synthetic. Contributions from leading researchers and practitioners will be sought. This series will help shape and define academic and professional programs in this area. This series is intended for a wide audience; those professionally involved in neural network research, such as lecturers and primary investigators in neural computing, neural modeling, neural learning, neural memory, and neurocomputers. The upcoming volume, Neural Systems for Robotics, will focus on research in natural and artificial neural systems directly related to robotics and robot control. Authors are invited to submit original manuscripts describing recent progress in neural network research directly applicable to robotics. Manuscripts may be survey or tutorial in nature. Suggested topics include, but are not limited to: * Neural control systems for visually guided robots * Manipulator trajectory control * Obstacle avoidance * Sensor feedback systems * Biologically inspired robot systems * Identification of kinematics and dynamics The papers will be refereed and uniformly typeset. Ablex and the Progress Series editors invite you to submit an abstract, extended summary or manuscript proposal, directly to the Special Volume Editor: P. Patrick van der Smagt, Dept. of Computer Systems, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, THE NETHERLANDS Tel: +31 20 525-7524 FAX: +31 20 525-7490 Email: smagt at fwi.uva.nl or to the Series Editor: Dr. Omid M. Omidvar, Computer Science Dept., University of the District of Columbia, Washington DC 20008 Tel: (202)282-7345 FAX: (202)282-3677 Email: OOMIDVAR at UDCVAX.BITNET The Publisher is Ablex Publishing Corporation, Norwood, NJ. Other volumes: Neural Networks for Control, ed. by D. L. Elliott Neural Networks Hardware Implementations, ed. by M. E. Zaghiloul Motion Detection & Temporal Pattern Recognition, ed. by J. Dayoff Biological Neural Networks, ed. by D. Tam Mathematical Foundations, ed. by M. Garzon
From GARZONM at hermes.msci.memst.edu Wed Oct 27 16:29:32 1993 From: GARZONM at hermes.msci.memst.edu (GARZONM@hermes.msci.memst.edu) Date: 27 Oct 93 15:29:32 CDT Subject: NIPS*93 Workshop on Stability and Observability/program Message-ID: <MAILQUEUE-101.931027152932.384@mathsci.msci.memst.edu> A day at NIPS*93 on STABILITY AND OBSERVABILITY 3 December 1993 at Vail, Colorado Intended Audience: neuroscientists, computer and cognitive scientists, neurobiologists, mathematicians/dynamical systems, electrical engineers, and anyone interested in questions such as: * what effects can noise, bounded precision and uncertainty in inputs, weights and/or transfer functions have on the i/o behavior of a neural network? * what is missed and what is observable in computer simulations of the networks they purport to simulate? * how much architecture can be observed in the behavior of a network-in-a-box? * what can be done to improve and/or accelerate convergence to stable equilibria during learning and network updates while preserving the intended dynamics of the process? Organizers: ========== Fernanda Botelho Max Garzon botelhof at hermes.msci.memst.edu garzonm at hermes.msci.memst.edu Mathematical Sciences Institute for Intelligent Systems Memphis State University Memphis, TN 38152 U.S.A. [botelhof,garzonm]@hermes.msci.memst.edu Program: ======= Following is a (virtually) final schedule. Each talk is scheduled for 15 minutes, with 5 minutes between talks for questions and comments. One or two contributed talks might still be added to the schedule (and will cut into the panel discussion in the afternoon). Morning Session: --------------- 7:30-7:50 M. Garzon, Memphis State University, Tennessee Introduction and Overview 7:50-8:10 S. Kak, Louisiana State University, Baton Rouge Stability and Observability in Feedback Networks 8:10-8:30 S. Piche, Microelectronics Technology Co., Austin, Texas Sensitivity of Neural Networks to Errors 8:30-8:50 R. Rojas, Int. Computer Science Institute UCB and Freie Universit\"at Berlin Stability of Learning in Neural Networks 8:50-9:10 G. Chauvet and P. Chauvet, Institut de Biologie Th\'eorique, U. d'Angers, France Stability of Purkinje Units in the Cerebellar Cortex 9:10-9:30 N. Peterfreund and Y. Baram, Technion, Israel Trajectory Control of Convergent Networks Afternoon Session: ------------------ 4:30-4:50 X. Wang, U. of Southern California and UCLA Consistencies of Stability and Bifurcation 4:50-5:10 M. Casey, UCSD, San Diego, California Computation Dynamics in Discrete-time Recurrent Nets 5:10-5:30 M. Cohen, Boston University, Massachusetts Synthesis of Decision Regions in Dynamical Systems 5:30-5:50 F. Botelho, Memphis State University, Tennessee Observability of Discrete and Analog Networks 5:50-6:10 U. Levin and K. Narendra, OGI/CSE Portland/Oregon and Yale University, Recursive Identification Using Feedforward Nets 6:10-6:30 Panel Discussion 7:00 All-workshop wrap-up Max Garzon (preferred) garzonm at hermes.msci.memst.edu Math Sciences garzonm at memstvx1.memst.edu Memphis State University Phone: (901) 678-3138/-2482 Memphis, TN 38152 USA Fax: (901) 678-2480/3299  From kk at ee.umr.edu Fri Oct 29 13:23:40 1993 From: kk at ee.umr.edu (kk@ee.umr.edu) Date: Fri, 29 Oct 93 12:23:40 CDT Subject: Convolution Software (Educational Tool) Message-ID: <9310291723.AA03815@lamarr.ee.umr.edu> Contributed by: Kurt Kosbar <kk at ee.umr.edu> FREE EDUCATIONAL SOFTWARE PACKAGE P. C. CONVOLUTION
P.C. convolution is an educational software package that graphically demonstrates the convolution operation. It runs on IBM PC type computers using DOS 4.0 or later. It is currently being used at over 70 schools throughout the world in departments of Electrical Engineering, Physics, Mathematics, Chemical Engineering, Chemistry, Crystallography, Geography, Geophysics, Earth Science, Acoustics, Phonetics & Linguistics, Biology, Astronomy, Ophthalmology, Communication Sciences, Business, Aeronautics, Biomechanics, Hydrology and Experimental Psychology. Anyone may download a demonstration version of this software via anonymous ftp from 131.151.4.11 (file name /pub/pc_conv.zip). University instructors may obtain a free, fully operational version by contacting Dr. Kurt Kosbar at the address listed below. Dr. Kurt Kosbar 117 Electrical Engineering Building, University of Missouri - Rolla Rolla, Missouri, USA 65401, phone: (314) 341-4894 e-mail: kk at ee.umr.edu  From rohwerrj at sun.aston.ac.uk Fri Oct 29 08:34:11 1993 From: rohwerrj at sun.aston.ac.uk (rohwerrj) Date: Fri, 29 Oct 93 12:34:11 GMT Subject: PhD Thesis available for FTP in neuroprose Message-ID: <8108.9310291234@sun.aston.ac.uk> FTP-host: archive.cis.ohio-state.edu (128.146.8.52) FTP-file: pub/neuroprose/zhu.thesis.ps.Z PhD Thesis (222 pages) available in neuroprose repository. (An index entry and sample ftp procedure follow the abstract.) NEURAL NETWORKS AND ADAPTIVE COMPUTERS: Theory and Methods of Stochastic Adaptive Computation Huaiyu Zhu Department of Statistics and Computational Mathematics Liverpool University, Liverpool L69 3BX, UK ABSTRACT: This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects --- it can be continuous, stochastic and adaptive --- and retains the TM computation as a subclass called ``data processing''. The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of ``trainable information processor'' (TIP) --- parameterised stochastic mapping with a rule to change the parameters --- is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as back propagation NN, Boltzmann machine and Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which are susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of ``global search'' in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than the back propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network.
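For readers new to the annealing step referred to above, the standard Metropolis-style acceptance rule (a generic statement, not necessarily the exact rule used in the thesis) accepts a candidate weight change that alters the training error by \Delta E with probability

    P(accept) = min( 1, exp( - \Delta E / T ) ),

where the temperature T is lowered gradually during learning: at high T uphill moves are frequently accepted, giving the ``global search'' mentioned above, while as T -> 0 the rule reduces to accepting only error-decreasing moves, i.e. ordinary descent.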
Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision processes to allow adaptive learning, resulting in a computational system called a ``basic adaptive computer'', which has the advantage over earlier reinforcement learning systems, such as Sutton's ``Dyna'', in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve. ___________________________________________________________________ INDEX entry: zhu.thesis.ps.Z hzhu at liverpool.ac.uk 222 pages. Foundation of stochastic adaptive computation based on neural networks. Simulated annealing learning rule superior to backpropagation and Boltzmann machine learning rules. Reinforcement learning for combinatorial state space and action space. (Mathematics with simulation results plus philosophy.) --------------------- Sample ftp procedure: unix$ ftp archive.cis.ohio-state.edu Name (archive.cis.ohio-state.edu:name): ftp (or anonymous) Password: (your email address including @) ftp> cd pub/neuroprose ftp> binary ftp> get zhu.thesis.ps.Z ftp> quit unix$ uncompress zhu.thesis.ps.Z unix$ lpr -P<printer_name> zhu.thesis.ps The last two steps can also be combined to unix$ zcat zhu.thesis.ps.Z | lpr -P<printer_name> which will save some space. ---------------------- Note: This announcement is simultaneously sent to the following three mailing lists: connectionists at cs.cmu.edu, anneal at sti.com, reinforce at cs.uwa.edu.au My apologies to those who subscribe to more than one of them. I'm sorry that there is no hard copy available. -- Huaiyu Zhu hzhu at liverpool.ac.uk Dept. of Stat. & Comp. Math., University of Liverpool, L69 3BX, UK  From ronen at wisdom.weizmann.ac.il Sun Oct 31 09:57:25 1993 From: ronen at wisdom.weizmann.ac.il (Ronen Basri) Date: Sun, 31 Oct 93 16:57:25 +0200 Subject: IAICVNN-93, Dec. 27-28, 1993, Tel Aviv: Final program and Registration Information Message-ID: <9310311457.AA03856@wisdom.weizmann.ac.il> 10th Israeli Conference on Artificial Intelligence, Computer Vision and Neural Networks Tel-Aviv, December 27-28, 1993 FINAL PROGRAM and REGISTRATION INFORMATION Monday, December 27, 1993 ------------------------- (8:30-9:15) REGISTRATION ------------------------ Session 1.1 AI VISION NN (9:15-11:15) PLENARY SESSION ---------------------------- AI Keynote Speaker: D. Gabbay, Imperial College of Science and Technology, London. LENA - An automated car saleswoman capable of belief revision, abduction action and small lies. Vision Keynote Speaker: Yiannis Aloimonos, University of Maryland, College Park MD. Interpretation of visual patterns: Navigation & Manipulation. (11:15-11:45) COFFEE BREAK -------------------------- Session 1.2 AI (11:45-13:00) FORMAL TECHNIQUES ------------------------------- Roccetti Marco, Teolis Antonio G.B.
(University of Bologna) The Use of Moebius Function in Probabilistic Rule-Based Expert Systems Meisels Amnon, Ell-sana' Jihad, Gudes Ehud (Ben-Gurion University) Comments on CSP Algorithms Applied to Timetabling Shechory On, Kraus Sarit (Bar-Ilan University) Coalition Formation Among Autonomous Agents: Strategies and Complexity Session 1.3 NN (11:45 - 13:00) SPEECH AND SIGNAL PROCESSING -------------------------------------------- I Voitovetsky, S Dahan, Y Menashe, H Guterman (Ben Gurion University) Speaker Independent Vowel Recognition Using Neural Networks Zvi Boger (Negev Nuclear Research Center) ANNs for Quantitative Stationary Spectrometric Measurements Yaakov Stein (Efrat Future Technology) False Alarm Rate Reduction for ASR and OCR Session 1.4 VISION (11:45 - 13:00) SHAPE DESCRIPTION --------------------------------- Ilan Shimshoni, Jean Ponce (University of Illinois) Finite resolution aspect graphs of polyhedral objects. R.L. Ogniewicz, G. Szekely, O. Kubler (Communication Technology Laboratory, Zurich) Detection of prominent boundary points based on structural characteristics of the hierarchic medial axis transform. Daphna Weinshall, Michael Werman, Naftali Tishbi (Hebrew University) Stability and likelihood of two dimensional views of three dimensional objects. (13:00-14:30) LUNCH BREAK ------------------------- Session 1.5 AI (14:30-15:45) DESIGN METHODOLOGY -------------------------------- Ndjatou Gilbert (CUNY) Modelling Objects and Distributed Object-Based Systems Nygate Yossi (AT&T-Bell), Sterling Leon (Case Western Reserve University) ASPEN - Designing Complex Knowledge Based Systems Weiss Richard J., Tamir D. E., Schneider Moti (Florida Institute of Technology) Efficient Resource Allocation for Parallel Prolog Interpretation Session 1.6 NN (14:30 - 15:45) OPTICAL CHARACTER RECOGNITION AND RELATED TOPICS ---------------------------------------------------------------- Jacob (Yak) Shaya, Jonathan Eran Kali, Gideon Y Ben-Zvi (Ligatura) A Stochastic Approach -- Advantages in Character Recognition Oded Comay, Nathan Intrator (Tel Aviv University) Ensemble Training: Some recent experiments with Postal Zip data B Lerner, H Guterman, I Dinstein (Ben Gurion University) Global Features and Simple Transformation to Chromosome Classification Session 1.7 VISION (14:30-15:45) MODELS FOR HUMAN VISION ------------------------------------- Yael Moses, Shimon Ullman, Shimon Edelman (Weizmann Institute) Generalization to novel images in upright and inverted faces. Ehud Grunfeld, Hedva Spitzer (Tel Aviv University) Spatio-temporal model for subjective colours based on colour coded cells. Haya Ruchvarger (Bar Ilan University) A mathematical derivation on possible aid of eye movements to depth perception. (15:45-16:00) COFFEE BREAK -------------------------- Session 1.8 AI (16:00-16:45) LOGIC ------------------- Ben-Eliyahu Rachel (UCLA) Back to the Future: Program Completion Revisited Slobodova Anna (Slovak Academy of Science) On a Special Case of Probabilistic Logic Session 1.9 NN (16:00-16:50) HARDWARE IMPLEMENTATIONS -------------------------------------- Ronny Agranat (Hebrew University) An Electroholographic Artificial Neural Network U Sandler (Jerusalem College of Technology) Multimode Laser As a Multi-Neuron Session 1.10 VISION (16:00 - 17:15) FEATURES FOR RECOGNITION ---------------------------------------- Dieter Koller (University of California at Berkeley) Moving object recognition and classification based on recursive shape parameter estimation. Antti Yla-Jaaski, F.
Ade (Mapvision Ltd., Finland) Grouping symmetrical structures for image segmentation and description. Yui-Liang Chen, D.C. Hung (New Jersey Institute of Technology) Shape recognition using hypothetic feature. Session 1.11 AI (16:45-17:45) TUTORIAL ---------------------- S. Engelson (Yale University) "Where am I, and where am I headed?" (Map Learning for Mobile Robots) Session 1.12 NN (16:50 - 18:00) BIOLOGICAL AND MEDICAL APPLICATIONS --------------------------------------------------- Itiel E Dror (Harvard University) Neural Network Models of Bat Sonar H Guterman (Ben Gurion University), A Yarkoni (Soroka Hospital) Classification of Labor Contractions by Neural Networks and Fuzzy Logic P Tandeitnik, H Guterman (Ben Gurion University) System Identification of Engineering and Biological Processes Tuesday, December 28, 1993 -------------------------- (8:00-9:00) REGISTRATION ------------------------- Session 2.1 AI VISION NN (9:00-11:00) PLENARY SESSION ---------------------------- NN Keynote Speaker: S. Kirkpatrick, IBM Yorktown Heights and HU Satisfaction Thresholds and other Disorderly Things Vision Keynote Speaker: Davi Geiger, Siemens Corporate Research, NJ. Perceptual mechanisms for images and stereo pairs (11:00-11:30) COFFEE BREAK -------------------------- Session 2.2 AI (11:30-13:15) LEARNING ---------------------- Sprotte-Kleiber Lucia (Zurich University) GAAR - A System for Solving Geometric Analogy Problems with Analogical Representations Koppel M., Feldman R. (Bar-Ilan University), Segre A. (Cornell University) Theory Revision Using Noisy Examples Biberman Yoram (Hebrew University) A Context Similarity Measure for Nominal Variables Reich Yoram, Karni Reuven, Fournier Fabiana (Technion) An Investigation of Machine Learning Approaches to Knowledge Extraction from Databases Session 2.3 NN (11:30 - 13:15) ARCHITECTURES AND LEARNING ------------------------------------------ Nathan Intrator (Tel Aviv University), Orna Intrator (Hebrew University) Interpreting Neural-Network Models Orly Yadid-Pecht, Moshe Gur (Technion) Modified MAXNET with Fast Convergence Rate Lev Tsitolovsky (Bar Ilan University), Maxim Kovalenko (Weizmann Institute) Structure Independent Learning in Neural Networks Session 2.4 VISION (11:30 - 13:15) RECOVERY AND GEOMETRY ------------------------------------- Patrick Gros, Long Quan (LIFIA/INRIA, France) 3D projective invariants from two images. Ron Kimmel, Guillermo Sapiro (Technion) Shortening three dimensional curves via two dimensional flows. Michal Irani, Benny Rousso, Shmuel Peleg (Hebrew University) Recovery of ego-motion using image stabilization. Quang-Tuan Luong, Rachid Deriche, Olivier Faugeras, Theodore Papadopoulo (INRIA, France) On determining the fundamental matrix: analysis of different methods and experimental results. (13:15-14:30) LUNCH BREAK ------------------------- Session 2.5 AI (14:30-15:45) APPLICATIONS -------------------------- Tomer Amir (Rafael) An Implementation Methodology with Constructive Inheritance Stilman Boris (University of Colorado) Hierarchical Networks for Systems Control Tabakman T, Exman I.
(Hebrew University) Towards Real-Time Self-Organising Maps with Parallel and Noisy Inputs Session 2.6 NN (14:30 - 15:45) FUZZY LOGIC, EXPERT SYSTEMS, PATTERN RECOGNITION ---------------------------------------------------------------- Alon Cohen (Bar Ilan University) A Legal Expert Neural Network Usiel Sandler (Jerusalem College of Technology) Fuzzy Dynamics Yaakov Stein (Efrat Future Technology) A Hypersphere Classifier Which Trains Like A Hyperplane One Session 2.7 VISION (14:30-15:45) GROUPING AND SEGMENTATION --------------------------------------- Gady Agam, Its'hak Dinstein (Ben Gurion University) Pre-processing of metaphase images for automatic chromosome classification. Antti Yla-Jaaski (Mapvision Ltd., Finland), Nahum Kiryati (Technion) Adaptive termination of voting in probabilistic Hough algorithms. Victor Brailovsky, Yulia Kempner (Tel Aviv University) Restoring the original range image structure using probabilistic estimates. (15:45-16:00) COFFEE BREAK -------------------------- Session 2.8 AI (16:00-17:30) TUTORIAL ---------------------- Dan Geiger (Technion) Probabilistic Reasoning: Learning and Inference Session 2.9 NN (16:00 - 17:00) IMAGE PROCESSING -------------------------------- Sorin Costiner, Maxim Kovalenko (Weizmann Institute) A Multigrid Neural Network with Applications to Image Processing S Greenberg, H Guterman (Ben Gurion University) Rotation and Shift Invariant Image Classifier using NN Session 2.10 VISION (16:00-17:00) NON RIGID OBJECTS ------------------------------- Eyal Shavit, Allan Jepson (University of Toronto) The pose function: an intermediate level representation for motion understanding. Yaacov Hel-or (Weizmann Institute) and Michael Werman (Hebrew University) Pose estimation of articulated and constrained models. Session 2.11 NN (17:00 - 18:00) PANEL DISCUSSION -------------------------------- Industry and Academia Interaction ------------------------------------------------------------------------- REGISTRATION INFORMATION Registration forms should be sent by December 6, 1993, to the following address: Ms. Ruth Cooperman 10th IAICVNN Secretariat IPA, Kfar Maccabiah Ramat Gan 52109 ISRAEL The registration fee for the conference (including lunch, coffee, and proceedings) is as follows: IPA members (*) (before Dec. 6) : $200 or 350 NIS IPA members (*) (after Dec. 6) : $250 or 400 NIS Non-IPA members (before Dec. 6) : 375 NIS Non-IPA members (after Dec. 6) : 425 NIS Students (**) (before Dec. 6) : $100 or 200 NIS Students (**) (after Dec. 6) : $125 or 225 NIS (*) Visitors from abroad are entitled to the IPA member rate. (Payment in US$ only.) (**) The student rate will be approved only for students who are enrolled in a full-time study program and can provide confirming documents from the university. The student registration fee does not include lunch. Lunch-tickets may be obtained for an additional $25/50 NIS (for two tickets). A block of rooms has been reserved at Kfar Maccabiah Hotel, a 4-star hotel at the conference site. Prices, including breakfast and service charge, are $77 for a double room, $60 for a single room per night (+ VAT for Israeli residents). A deposit of $50 per person is required with this reservation. The accommodation form should be sent to IPA together with the registration form.
REGISTRATION FORM Name _________________________________________________ Last First Title Affiliation __________________________________________ Position/Department __________________________________ Address ______________________________________________ ______________________________________________________ ______________________________________________________ Country Telephone Home address _________________________________________ ______________________________________________________ ______________________________________________________ Preferred mailing address ___ Home ___ Business Registration fees: ____ IPA member $200/350 NIS ____ non-IPA member 375 NIS ____ Student $100/200 NIS ____ Late registration (after December 6, add as specified above) ____ Total Payment can be in US$ or NIS only, by banker's draft or personal check payable to: IPA. Signature _______________________ Date ___________ ________________________________________________________________________ ACCOMMODATION FORM Name _________________________________________________ Last First Title Address ______________________________________________ ______________________________________________________ ______________________________________________________ Country Telephone I wish to reserve a single/double room from __________ to __________ for a total of _______ nights. Payment can be in US$ or NIS only, by banker's draft or personal check payable to: IPA. Signature _______________________ Date ___________ Note: The FAX number of IPA is +972-3-574 4374. The FAX number of Kfar Maccabiah Hotel is +972-3-574 4678. ------------------------------------------------------------------------- From gary at cs.ucsd.edu Sun Oct 31 15:18:05 1993 From: gary at cs.ucsd.edu (Gary Cottrell) Date: Sun, 31 Oct 93 12:18:05 -0800 Subject: ACL-94 Call for papers Message-ID: <9310312018.AA26653@desi> Hi folks, Guess what - I'm going to be on the ACL program committee again this year (I guess they're gluttons for punishment). For those of you unfamiliar with this conference, it is the annual meeting of the Association for Computational Linguistics. This is the premier conference for Computational Linguistics research, with refereeing as tight as NIPS. In recent years there has been an upsurge of interest in statistical approaches to large text corpora, and concomitantly, in neural network approaches to NLP. Last year I asked the people on this list to submit to this conference and the response was underwhelming. Please consider submitting to this conference if you work on neural network or statistical approaches to NLP, and I will guarantee you a fair hearing. It is time the rift between symbolic and connectionist approaches was healed - we can learn from one another! PAPERS ARE DUE JANUARY 6, 1994!!!
gary Here is the CFP: From walker at bellcore.com Wed Oct 13 11:17:12 1993 From: walker at bellcore.com (Don Walker) Date: Wed, 13 Oct 93 11:17:12 -0400 Subject: ACL-94 CALL FOR PAPERS Message-ID: <mailman.634.1149591291.29955.connectionists@cs.cmu.edu> ACL-94 CALL FOR PAPERS 32nd Annual Meeting of the Association for Computational Linguistics 27 June - 1 July 1994 New Mexico State University Las Cruces, New Mexico, USA TOPICS OF INTEREST: Papers are invited on substantial, original, and unpublished research on all aspects of computational linguistics, including, but not limited to, pragmatics, discourse, semantics, syntax, and the lexicon; phonetics, phonology, and morphology; interpreting and generating spoken and written language; linguistic, mathematical, and psychological models of language; language-oriented information retrieval; corpus-based language modeling; machine translation and translation aids; natural language interfaces and dialogue systems; message and narrative understanding systems; and theoretical and applications papers of every kind. REQUIREMENTS: Papers should describe unique work; they should emphasize completed work rather than intended work; and they should indicate clearly the state of completion of the reported results. A paper accepted for presentation at the ACL Meeting cannot be presented or have been presented at any other meeting with publicly available published proceedings. Papers that are being submitted to other conferences must reflect this fact on the title page. FORMAT FOR SUBMISSION: Authors should submit preliminary versions of their papers, not to exceed 3200 words (exclusive of references). Papers outside the specified length and formatting requirements are subject to rejection without review. Papers should be headed by a title page containing the paper title, a short (5 line) summary and a specification of the subject area. Since reviewing will be ``blind'', the title page of the paper should omit author names and addresses. Furthermore, self-references that reveal the authors' identity (e.g., ``We previously showed (Smith, 1991) . . .'') should be avoided. Instead, use references like ``Smith previously showed (1991) . . .'' To identify each paper, a separate identification page should be supplied, containing the paper's title, the name(s) of the author(s), complete addresses, a short (5 line) summary, a word count, and a specification of the topic area. SUBMISSION MEDIA: Papers should be submitted electronically or in hard copy to the Program Chair: James Pustejovsky, Brandeis University, Computer Science, Ford Hall, Waltham, MA 02254, USA; +1-617-736-2709; +1-617-736-2741 fax; jamesp at cs.brandeis.edu. Electronic submissions should be either self-contained LaTeX source or plain text. LaTeX submissions must use the ACL submission style (aclsub.sty) retrievable from the ACL LISTSERV server (access to which is described below) and should not refer to any external files or styles except for the standard styles for TeX 3.14 and LaTeX 2.09. A model submission modelsub.tex is also provided in the archive, as well as a bibliography style acl.bst. (Note however that the bibliography for a submission cannot be submitted as a separate .bib file; the actual bibliography entries must be inserted in the submitted LaTeX source file.) Hard copy submissions should consist of four (4) copies of the paper and one (1) copy of the identification page.
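For orientation, the sketch below suggests the overall shape of a self-contained LaTeX 2.09 submission file. It is only a guess at the expected structure: the actual commands are defined by aclsub.sty, and the modelsub.tex file in the archive is the authoritative template.

% Hypothetical skeleton of an ACL-94 submission (LaTeX 2.09 syntax).
% Assumes aclsub.sty has been retrieved from the ACL LISTSERV and sits
% in the same directory; no other external files or styles may be used.
\documentstyle[aclsub]{article}
\begin{document}
\title{Paper Title Only (reviewing is blind, so no author names)}
\maketitle
\begin{abstract}
The short (5 line) summary, repeated on the separate identification page.
\end{abstract}
% Body of the paper: at most 3200 words, exclusive of references.
% Bibliography entries must appear inline in this file; a separate
% .bib file cannot be submitted.
\end{document}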
For both kinds of submissions, if at all possible, a plain text version of the identification page should be sent separately by electronic mail, using the following format: title: <title> author: <name of first author> address: <address of first author> ... author: <name of last author> address: <address of last author> abstract: <abstract> content areas: <first area>, ..., <last area> word count: <word count> SCHEDULE: Authors must submit their papers by 6 January 1994. Late papers will not be considered. Notification of receipt will be mailed to the first author (or designated author) soon after receipt. Authors will be notified of acceptance by 15 March 1994. Camera-ready copies of final papers prepared in a double-column format, preferably using a laser printer, must be received by 1 May 1994, along with a signed copyright release statement. The ACL LaTeX proceedings format is available through the ACL LISTSERV. STUDENT SESSIONS: There will again be special Student Sessions organized by a committee of ACL graduate student members. ACL student members are invited to submit short papers describing innovative WORK IN PROGRESS in any of the topics listed above. Papers are limited to 3 pages plus a title page and an identification page in the format described above and must be submitted by hard copy or both e-mail AND hard copy to Beryl Hoffman at the address below by 1 FEBRUARY 1994. The papers will be reviewed by a committee of students and faculty members for presentation in workshop-style sessions and publication in a special section of the conference proceedings. There is a separate Call for Papers, available from the ACL LISTSERV (see below); or from Beryl Hoffman, University of Pennsylvania, Computer Science, 3401 Walnut Street, Philadelphia, PA 19104, USA; +1-215-898-5868; 0587 fax; hoffman at linc.cis.upenn.edu; or Rebecca Passonneau, Columbia University, Computer Science, New York, NY 10027, USA; +1-212-939-7120; 666-0140 fax; becky at cs.columbia.edu. OTHER ACTIVITIES: The meeting will include a program of tutorials coordinated by Lynette Hirschman, MITRE Corporation, 202 Burlington Road, MS K329, Bedford, MA 01730, USA; +1-617-271-7789; 2352 fax; lynette at linus.mitre.org. Some of the ACL Special Interest Groups may arrange workshops or other activities. Further information may be available from the ACL LISTSERV. CONFERENCE INFORMATION: The Local Arrangements Committee is chaired by: Janyce M. Wiebe, New Mexico State University, Computing Research Laboratory, PO Box 30001/3CRL, Las Cruces, NM 88003, USA; +1-505-646-6228; +1-505-646-6218 fax; wiebe at nmsu.edu. Anyone wishing to arrange an exhibit or present a demonstration should send a brief description together with a specification of physical requirements (space, power, telephone connections, tables, etc.) to Ted Dunning, New Mexico State University, Computing Research Laboratory, Box 30001/3CRL, Las Cruces, NM 88003, USA; +1-505-646-6221; 6218 fax; ted at nmsu.edu. ACL INFORMATION: For other information on the conference and on the ACL more generally, contact Judith Klavans (ACL), Columbia University, Computer Science, New York, NY 10027, USA; +1-914-478-1802 phone/fax; acl at cs.columbia.edu. General information about the ACL AND electronic membership and order forms are available from the ACL LISTSERV. ACL LISTSERV: LISTSERV is a facility to allow access to an electronic document archive by electronic mail. The ACL LISTSERV has been set up at Columbia University's Department of Computer Science.
Requests for files from the archive should be sent as e-mail messages to listserv at cs.columbia.edu with an empty subject field and the message body containing the request command. The most useful requests are "help" for general help on using LISTSERV, "index acl-l" for the current contents of the ACL archive and "get acl-l <file>" to get a particular file named <file> from the archive. For example, to get an ACL membership form, a message with the following body should be sent: get acl-l membership-form.txt Answers to requests are returned by e-mail. Since the server may have many requests for different archives to process, requests are queued up and may take a while (say, overnight) to be fulfilled. The ACL archive can also be accessed by anonymous FTP. Here is an example of how to get the same file by FTP (user input follows each prompt): $ ftp cs.columbia.edu Name (cs.columbia.edu:pereira): anonymous Password: pereira at research.att.com << not echoed ftp> cd acl-l ftp> get membership-form.txt.Z ftp> quit $ uncompress membership-form.txt.Z From arantza at cogs.susx.ac.uk Mon Oct 18 10:48:42 1993 From: arantza at cogs.susx.ac.uk (Arantza Etxeberria) Date: Mon, 18 Oct 93 10:48:42 BST Subject: Artificial Life Workshop Announcement Message-ID: <mailman.639.1149591293.29955.connectionists@cs.cmu.edu> ARTIFICIAL LIFE: A BRIDGE TOWARDS A NEW ARTIFICIAL INTELLIGENCE Palacio de Miramar (San Sebastian, Spain) December 10th and 11th, 1993 Workshop organised by the Department of Logic and Philosophy of Science, Faculty of Computer Science & Institute of Logic, Cognition, Language and Information (ILCLI) of the University of the Basque Country (UPV/EHU) Directors: Alvaro Moreno (University of the Basque Country) Francisco Varela (CREA, Paris) This Workshop will be devoted to a discussion of the impact of work on Artificial Life on Artificial Intelligence. Artificial Intelligence (AI) has traditionally attempted to study cognition as an abstract phenomenon using formal tools, that is, as a disembodied process that can be grasped through formal operations, independent of the nature of the system that displays it. Cognition is treated as an abstract representation of reality. After several decades of research in this direction the field has encountered several problems that have taken it to what many consider a "dead end": difficulties in understanding autonomous and situated agencies, in relating to behaviour in a real environment, in studying the nature and evolution of perception, in finding a practical explanation for the operation of most cognitive capacities such as natural language, context dependent action, etc. Artificial Life (AL) has recently emerged as a confluence of very different fields trying to study various features of living systems using computers as a modelling tool and, ultimately, trying to artificially (re)produce a living system (or a population of them) in real or computational media. Examples of such phenomena are prebiotic systems and their evolution, growth and development, self-reproduction, adaptation to an environment, evolution of ecosystems and natural selection, formation of sensory-motor loops, autonomous robots. Thus, AL is having an impact not only on the classic life sciences but also on the conceptual foundations of AI and on new methodological ideas in Cognitive Science.
The aim of this Workshop is to focus on the last two points and to evaluate the influence of the methodology and concepts appearing in AL on the development of new ideas about cognition that could eventually give birth to a new Artificial Intelligence. Some of the sessions consist of presentations and replies on a specific subject by invited speakers, while others will be debates open to all participants in the workshop. MAIN TOPICS: * A review of the problems of FUNCTIONALISM in Cognitive Science and Artificial Life. * Modelling Neural Networks through Genetic Algorithms. * Autonomy and Robotics. * Consequences of the crisis of the representational models of cognition. * Minimal Living Systems and Minimal Cognitive Systems. * Artificial Life systems as problem solvers. * Emergence and evolution in artificial systems. SPEAKERS S. Harnad P. Husbands G. Kampis B. Mac Mullin D. Parisi T. Smithers E. Thompson F. Varela Further Information: Alvaro Moreno Apartado 1249 20080 DONOSTIA SPAIN E. Mail: biziart at si.ehu.es Fax: 34 43 311056 Phone: 34 43 310600 (extension 221) 34 43 218000 (extension 209) ----------------------------------------------------------------------- LEVELS OF FUNCTIONAL EQUIVALENCE IN REVERSE BIOENGINEERING: THE DARWINIAN TURING TEST FOR ARTIFICIAL LIFE Stevan Harnad Laboratoire Cognition et Mouvement URA CNRS 1166 I.B.H.O.P. Universite d'Aix Marseille II 13388 Marseille cedex 13, France harnad at princeton.edu ABSTRACT: Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications; reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more BE alive than a virtual forest fire can be hot. In itself, a computational model is just an ungrounded symbol system; no matter how closely it matches the properties of what is being modelled, it matches them only formally, with the mediation of an interpretation. Synthetic life is not open to this objection, but it is still an open question how close a functional equivalence is needed in order to capture life. Close enough to fool the Blind Watchmaker is probably close enough, but would that require molecular indistinguishability, and if so, do we really need to go that far? ----------------------------------------------------------------------- Phil Husbands School of Cognitive and Computing Sciences University of Sussex, BRIGHTON BN1 9QH, U.K. philh at cogs.susx.ac.uk ABSTRACT: We discuss the methodological foundations for our work on the development of cognitive architectures, or control systems, for situated autonomous agents. We focus on the problems of developing sensory-motor systems for mobile robots, but we also discuss the applicability of our approach to the study of biological systems. We argue that, for agents required to exhibit sophisticated interactions with their environments, complex sensory-motor processing is necessary, and the design by hand of control systems capable of this is likely to become prohibitively difficult as complexity increases. We propose an automatic design process involving artificial evolution, where the basic building blocks used for evolving cognitive architectures are noise-tolerant dynamical networks. These networks may be recurrent, and should operate in real time.
The evolution should be incremental, using an extended and modified version of a genetic algorithm. Practical constraints suggest that initial architecture evaluations should be done largely in simulation. To support our claims and proposals, we summarize results from some preliminary simulation experiments where visually guided robots are evolved to operate in simple environments. Significantly, our results demonstrate that robust visually-guided control systems evolve from evaluation functions which do not explicitly require monitoring visual input. We outline the difficulties involved in continuing with simulations, and conclude by describing specialized visuo-robotic equipment, designed to eliminate the need to simulate sensors and actuators. ----------------------------------------------------------------------- Barry MacMullin School of Electronic Engineering Dublin City University McMullinB at DCU.IE ABSTRACT: I reconsider the status of computationalism (or, in a weak sense, functionalism): the claim that being a realisation of some (as yet unspecified) class of abstract machine is both necessary and sufficient for having genuine, full-blooded, mentality. This doctrine is now quite widely (though by no means universally) seen as discredited. My position is that, though it is undoubtedly an unsatisfactory (perhaps even repugnant) thesis, the arguments against it are still rather weak. In particular, I critically reassess John Searle's infamous Chinese Room Argument, and also some relevant aspects of Karl Popper's theory of the Open Universe. I conclude that the status of computationalism must still be regarded as undecided, and that it may still provide a satisfactory framework for research. ----------------------------------------------------------------------- Domenico Parisi Institute of Psychology National Research Council, Rome e-mail: domenico at irmkant.bitnet ABSTRACT: Genetic algorithms are methods of parallel search for optimal solutions to tasks which are inspired by biological evolution and are based on selective reproduction and the addition of variation through mutations or crossover. As models of real biological and behavioral phenomena, however, genetic algorithms suffer from many limitations. Some of these limitations are discussed under the rubrics of (a) environment, (b) variation, and (c) fitness, and ways are suggested to overcome them. Various simulations using genetic algorithms and neural networks are briefly described which incorporate a more biologically realistic notion of evolution. ----------------------------------------------------------------------- Tim Smithers Facultad de Informatica Apartado 649 20080 San Sebastian smithers at si.ehu.es ABSTRACT: Traditionally, autonomous systems research has been a domain of Artificial Intelligence. We argue that, as a consequence, it has been heavily influenced, often tacitly, by folk psychological notions. We believe that much of the widely acknowledged failure of this research to produce reliable and robust artificial autonomous systems can be attributed to its use of and dependence upon folk psychological constructs. As an alternative we propose taking seriously the Eliminative Materialism of Paul Churchland. In this paper we present our reasons for adopting this radical alternative approach and briefly describe the bottom-up methodology that goes with it. We illustrate the discussion with examples from our work on autonomous systems. [Rest of abstracts not yet available]
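-----------------------------------------------------------------------
The Husbands and Parisi abstracts above both build on the same underlying genetic-algorithm loop: evaluate a population of candidate solutions, select the fitter ones, and generate variants by mutation or crossover. The minimal Python sketch below illustrates that loop by evolving the weights of a tiny fixed-topology 2-2-1 feedforward network on the XOR task. Everything in it (the topology, population size, mutation scale, truncation selection) is an illustrative assumption, not part of the speakers' systems, which use much richer ingredients such as recurrent noise-tolerant networks and incremental evolution.

import math, random

INPUTS  = [(0, 0), (0, 1), (1, 0), (1, 1)]
TARGETS = [0, 1, 1, 0]
N_WEIGHTS = 9   # 2-2-1 net: two hidden units and one output unit, 3 weights each

def act(x):
    # Logistic squashing function.
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x1, x2):
    # One feedforward pass through the assumed 2-2-1 topology.
    h1 = act(w[0]*x1 + w[1]*x2 + w[2])
    h2 = act(w[3]*x1 + w[4]*x2 + w[5])
    return act(w[6]*h1 + w[7]*h2 + w[8])

def fitness(w):
    # Higher is better: negative summed squared error over the task.
    return -sum((forward(w, a, b) - t) ** 2
                for (a, b), t in zip(INPUTS, TARGETS))

def mutate(w, sigma=0.3):
    # Mutation-only variation: Gaussian perturbation of every gene.
    return [g + random.gauss(0.0, sigma) for g in w]

random.seed(0)
pop = [[random.uniform(-1.0, 1.0) for _ in range(N_WEIGHTS)]
       for _ in range(50)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
print("outputs:", [round(forward(best, a, b), 2) for a, b in INPUTS])

Crossover, incremental evaluation schedules, and the real-robot evaluations discussed above would all slot into this same select-and-vary loop; only the representation and the fitness function change.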