From terry at salk.edu Thu Jan 2 15:07:24 1997
From: terry at salk.edu (Terry Sejnowski)
Date: Thu, 2 Jan 1997 12:07:24 -0800 (PST)
Subject: Neural Computation 9:1
Message-ID: <199701022007.MAA22177@helmholtz.salk.edu>

Neural Computation - Contents Volume 9, Number 1 - January 1, 1997

Article

Flat Minima
Sepp Hochreiter and Juergen Schmidhuber

Note

Lyapunov Functions for Neural Nets with Nondifferentiable Input-Output Characteristics
Jianfeng Feng

Letters

Detecting Synchronous Cell Assemblies with Limited Data and Overlapping Assemblies
Gary Strangman

A Simple Neural Network Exhibiting Selective Activation of Neuronal Ensembles: From Winner-Take-All to Winners-Share-All
Tomoki Fukai and Shigeru Tanaka

Playing Billiard in Version Space
Pal Rujan

Partial BFGS Update and Efficient Step-Length Calculation for Three-Layer Neural Networks
Kazumi Saito and Ryohei Nakano

Neural Networks for Functional Approximation and System Identification
H. N. Mhaskar and Nahmwoo Hahm

Selecting Optimal Experiments for Multiple Output Multilayer Perceptrons
Lisa M. Belue, Kenneth W. Bauer, Jr., and Dennis W. Ruck

A Penalty-Function Approach for Pruning Feedforward Neural Networks
Rudy Setiono

Extracting Rules from Neural Networks by Pruning and Hidden-Unit Splitting
Rudy Setiono

-----

ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html

SUBSCRIPTIONS - 1997 - VOLUME 9 - 8 ISSUES

______ $50 Student and Retired
______ $78 Individual
______ $250 Institution

Add $28 for postage and handling outside USA (+7% GST for Canada). Back issues from Volumes 1-8 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).

mitpress-orders at mit.edu
MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889  FAX: (617) 258-6779

-----

From plaut at cmu.edu Thu Jan 2 15:01:23 1997
From: plaut at cmu.edu (David Plaut)
Date: Thu, 02 Jan 1997 15:01:23 -0500
Subject: Preprint: Models of Word Reading and Lexical Decision
Message-ID: <7924.852235283@crab.psy.cmu.edu>

The following preprint is available via anonymous ftp and the web:

Structure and function in the lexical system: Insights from distributed models of word reading and lexical decision

David C. Plaut
Departments of Psychology and Computer Science, Carnegie Mellon University, and the Center for the Neural Basis of Cognition, Pittsburgh PA, USA

To appear in Language and Cognitive Processes

The traditional view of the lexical system stipulates word-specific representations and separate pathways for regular and exception words. An alternative approach views lexical knowledge as developing from general learning principles applied to mappings among distributed representations of written and spoken words and their meanings. On this distributed account, distinctions among words, and between words and nonwords, are not reified in the structure of the system but reflect the sensitivity of learning to the relative systematicity in the various mappings. Two simulation experiments address findings that have seemed problematic for the distributed approach. Both involve a consideration of the role of semantics in normal and impaired lexical processing. The first experiment accounts for patients with impaired comprehension but intact reading in terms of individual differences in the division of labor between the semantic and phonological pathways.
The second experiment demonstrates that a distributed network can reliably distinguish words from nonwords based on a measure of familiarity defined over semantics. The results underscore the importance of relating function to structure in the lexical system within the context of an explicit computational framework.

ftp-host: cnbc.cmu.edu [128.2.244.1]
ftp-file: pub/user/plaut/papers/PlautINPRESSLCP.structure.ps.Z
OR pub/user/plaut/papers/uncompressed/PlautINPRESSLCP.structure.ps
ftp://cnbc.cmu.edu:/pub/user/plaut/papers/PlautINPRESSLCP.structure.ps.Z

19 pages; 183Kb compressed; 498Kb uncompressed

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
David Plaut
Center for the Neural Basis of Cognition and
Departments of Psychology and Computer Science
Carnegie Mellon University
Mellon Institute 115, CNBC / MI 115I
4400 Fifth Ave., Pittsburgh PA 15213-2683
412/268-5145 (fax -5060)
http://www.cnbc.cmu.edu/~plaut

"Doubt is not a pleasant condition but certainty is an absurd one." -Voltaire
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
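For readers who want a concrete picture of the lexical-decision mechanism described in the abstract, here is a minimal sketch in Python. It is not Plaut's network: the stored semantic patterns, the overlap-based familiarity measure, and the decision threshold are all invented for illustration; the preprint defines familiarity over the semantics computed by a trained distributed network.

import numpy as np

# Hypothetical sketch: lexical decision by thresholding a familiarity
# measure defined over a semantic layer. All patterns and parameters
# here are invented for illustration.
rng = np.random.default_rng(0)
n_words, n_sem = 50, 64
word_semantics = rng.choice([0.0, 1.0], size=(n_words, n_sem), p=[0.9, 0.1])

def familiarity(sem):
    # best normalized overlap with any stored word's semantic pattern
    overlaps = word_semantics @ sem
    norms = word_semantics.sum(axis=1) + 1e-9
    return float((overlaps / norms).max())

def lexical_decision(sem, threshold=0.8):
    return "word" if familiarity(sem) >= threshold else "nonword"

print(lexical_decision(word_semantics[3]))   # a known word: high familiarity
print(lexical_decision(rng.choice([0.0, 1.0], size=n_sem, p=[0.9, 0.1])))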
From adali at engr.umbc.edu Thu Jan 2 16:16:00 1997
From: adali at engr.umbc.edu (Tulay Adali)
Date: Thu, 2 Jan 1997 16:16:00 -0500 (EST)
Subject: CFP: NNSP'97 Special Session - Apps of NNs in Biomedical SP
Message-ID: <199701022116.VAA18802@akdeniz.engr.umbc.edu>

CALL FOR PAPERS

Special Session on Applications of Neural Networks in Biomedical Signal Processing

NNSP'97 - IEEE WORKSHOP ON NEURAL NETWORKS FOR SIGNAL PROCESSING

Session Organizer: Tulay Adali

Biomedical signal processing problems, with their particular features, challenges, and definitions of objectives, have provided a unique platform for the application of artificial neural networks (ANNs) and the exploration of their relationship to other techniques. In this special session, organized within NNSP'97, our aim is to bring researchers in the field together in a forum to present and discuss promising applications of ANNs in the biomedical domain and their relationship to more conventional techniques in terms of performance, cost, and implementation. Contributions are sought dealing with both one-dimensional (EEG, ECG, sensory data, etc.) and multi-dimensional (MR, PET, CT, functional MR, etc.) biomedical signals. Some of the possible areas of application are:

* pattern recognition and feature extraction for computer-aided diagnosis and prognosis
* signal analysis (quantification, segmentation, etc.)
* signal enhancement/restoration
* data compression
* reconstruction
* data fusion/registration
* biological system modeling

NNSP'97 is the seventh in a series of IEEE Workshops on Neural Networks for Signal Processing and will be held this year at the Amelia Island Plantation, Florida, 24-26 September 1997. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. More information about the workshop is available at http://www.cnel.ufl.edu/nnsp97/

Submissions for the special session should follow the general guidelines for papers submitted to NNSP'97, and 5 copies of extended summaries of no more than 6 pages should be sent to:

IEEE NNSP'97
Attn: Special Session ANN-BSP
444 CSE Bldg #42
P.O. Box 116130
University of Florida
Gainesville, FL 32611

IMPORTANT DATES:

* Submission of extended summary: January 27, 1997
* Notification of acceptance: March 31, 1997
* Submission of photo-ready accepted paper: April 26, 1997
* Advanced registration: before July 1, 1997

For further information about the special session, contact:

Tulay Adali
Department of Computer Science and Electrical Engineering
University of Maryland Baltimore County
1000 Hilltop Circle, Baltimore, MD 21250
(410) 455-3521
adali at engr.umbc.edu
http://engr.umbc.edu/~adali

From dwang at cis.ohio-state.edu Fri Jan 3 12:20:24 1997
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Fri, 3 Jan 1997 12:20:24 -0500 (EST)
Subject: Tech report on Object Selection
Message-ID: <199701031720.MAA08676@shirt.cis.ohio-state.edu>

The following technical report is available via FTP/WWW:

Object Selection Based on Oscillatory Correlation

DeLiang Wang
Technical Report: OSU-CISRC-12/96-TR67, 1996
OSU Department of Computer and Information Science

One of the classical topics in neural networks is winner-take-all (WTA), which has been widely used in unsupervised (competitive) learning, cortical processing, and attentional control. Because of their global connectivity, however, WTA networks do not encode spatial relations in the input, and thus cannot support sensory and perceptual processing where spatial relations are important. We propose a new architecture that maintains spatial relations between input features. This selection network builds on LEGION (Locally Excitatory Globally Inhibitory Oscillator Networks) dynamics and slow inhibition. In an input scene with many objects (patterns), the network selects the largest object. The system can easily be adjusted to select the several largest objects, which then alternate in time. We further show that a two-stage selection network gains efficiency by combining selection with parallel removal of noisy regions. The network is applied to select the most salient object in real images. As a special case, the selection network without local excitation gives rise to a new form of oscillatory WTA.
(20 pages + one figure; 1.3 MB compressed)

For anonymous ftp:
FTP-HOST: ftp.cis.ohio-state.edu
Directory: /pub/leon/Wang96
FTP-filenames: wang.tech96.ps.Z, fig4.ps.Z

or for WWW: http://www.cis.ohio-state.edu/~dwang

Comments are most welcome - please send to DeLiang Wang (dwang at cis.ohio-state.edu)

FTP instructions: To retrieve and print the files, use the following commands:

unix> ftp ftp.cis.ohio-state.edu
Name: anonymous
Password: (your email address)
ftp> binary
ftp> cd /pub/leon/Wang96
ftp> get wang.tech96.ps.Z
ftp> get fig4.ps.Z
ftp> quit
unix> uncompress wang.tech96.ps.Z
unix> uncompress fig4.ps.Z
unix> lpr {each of the two postscript files}

(wang.tech96.ps may not ghostview well - some figures do not show up in ghostview - but it should print OK)

----------------------------------------------------------------------------
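The report above builds on winner-take-all dynamics, so a toy example may help readers unfamiliar with the idea. Below is a minimal sketch of a generic discrete-time WTA with global inhibition; it is not the LEGION-based selection network of the report, and the gains are chosen ad hoc. In the report's network, by contrast, oscillators with local excitation preserve spatial relations, so selection favors the largest object rather than merely the most active unit.

import numpy as np

# Generic winner-take-all: each unit receives its input plus
# self-excitation minus inhibition from all other units; only the most
# strongly driven unit survives. A textbook toy, not Wang's network.
def winner_take_all(inputs, steps=60, self_gain=0.5, inhibition=1.0):
    I = np.asarray(inputs, dtype=float)
    x = I.copy()
    for _ in range(steps):
        others = x.sum() - x               # total activity of the other units
        x = np.maximum(0.0, I + self_gain * x - inhibition * others)
    return x

print(winner_take_all([0.9, 1.0, 0.3]))    # only the second unit stays active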
From philh at cogs.susx.ac.uk Fri Jan 3 11:54:25 1997
From: philh at cogs.susx.ac.uk (Phil Husbands)
Date: Fri, 3 Jan 1997 16:54:25 +0000 (GMT)
Subject: Workshop: Autonomous Behaviour in Animals and Robots
Message-ID:

ONE DAY WORKSHOP
UNIVERSITY OF SUSSEX, BRIGHTON, UK

Autonomous Behaviour in Animals and Robots: Perspectives from Neuroscience, AI and Philosophy

To mark the recent opening of the Sussex Centre for Computational Neuroscience and Robotics, we are holding a one-day workshop on 24 January 1997. The aims of this workshop are: to foster interdisciplinary discussion and debate on key issues relating to biological and artificial behaviour-generating mechanisms; to highlight the value and potential of collaborations at the new interface between biology, computer science and engineering; to consider the philosophical issues arising from the notion of "autonomy", especially in artificial systems; to consider emerging industrial and commercial applications for which this interface is likely to be an essential enabling factor; to discuss how a UK community interested in these issues may be established and whether new mechanisms and initiatives from funding bodies may be appropriate; and to consider how this field in the UK can be promoted to ensure that we remain at the forefront worldwide.

The workshop is divided into three sections (Setting the Stage; Vision, Behaviour and Robots; Complexity). Each session will be divided into two parts: a series of ten-minute presentations in which speakers raise key issues and pose open questions, followed by a panel discussion. Contributions from members of the audience will also be encouraged.

------------------------------------------------------------------------------

One-Day Workshop, 24 January 1997
University of Sussex, Brighton, UK
Autonomous Behaviour in Animals and Robots: Perspectives from Neuroscience, Engineering and Philosophy

P R O G R A M M E

10.00-10.30 Reception; Coffee
10.30-10.45 Welcome and Introduction - Phil Husbands and Michael O'Shea

SESSION A: SETTING THE STAGE (CHAIR: MICHAEL O'SHEA)
10.45-10.55 Understanding the Nervous System - Danny Osorio
10.55-11.05 Artificial Autonomous Agents - Dave Cliff
11.05-11.15 Philosophical Perspectives - Mike Wheeler
11.15-12.30 Discussion of Issues Arising - Chair: Maggie Boden, with Panel of Discussants (David McFarland; Brendan McGonnigle; Geoffrey Miller; Steven Rose; Aaron Sloman)
12.30-2.00 Lunch

SESSION B: VISION, BEHAVIOUR AND ROBOTS (CHAIR: PHIL HUSBANDS)
2.00-2.10 What is Natural Vision For? - Mike Land
2.10-2.20 Animal Navigation - Tom Collett
2.20-2.30 Robotic Navigation - Barbara Webb
2.30-2.40 Models or Processes for Vision? - Dave Hogg
2.40-2.50 Cognitive Robots? - Andy Clark
2.50-4.00 Discussion of Issues Arising - Chair: Tim Smithers, with Panel of Discussants (Colin Blakemore; John Hallam; Claire Rind; Julie Rutkowsja; M. Srinivasan)
4.00-4.20 Coffee Break

SESSION C: COMPLEXITY (CHAIR: BRIAN GOODWIN)
4.20-4.30 Complexity of the "Simple" Nervous System - Paul Benjamin
4.30-4.40 Can Biologically Inspired Engineering Cross the "Complexity Gap"? - Adrian Thompson
4.40-4.50 Evolution and Modularity in Complex Systems - John Maynard Smith
4.50-6.00 Discussion of Issues Arising - Chair: Chris Winter, with Panel of Discussants (Malcolm Burrows; Ron Chrisley; Jeffrey Dean; Brian Goodwin; Misha Mahowald; Edmund Rolls)

SESSION D: WHERE DO WE GO FROM HERE? (CHAIR: PHIL HUSBANDS and MICHAEL O'SHEA)
6.00-6.30 Input and comment from BBSRC/EPSRC

SESSION E: CASH BAR and light refreshments

----------------------------------------------------------------------------------

R E G I S T R A T I O N   F O R M

One-Day Workshop, 24 January 1997
University of Sussex, Brighton, UK
Autonomous Behaviour in Animals and Robots: Perspectives from Neuroscience, AI and Philosophy

The registration fee of 15 pounds for postgraduate students and 35 pounds for others (to be paid by cheque) includes light refreshments and lunch. NB: attendance is limited to 150 delegates, and registration is subject to availability. Forms and accompanying cheques submitted after all places are filled will be returned immediately.

Name
Address
Tel
Email
Postgraduate student/Other (please delete as applicable)
Registration fee enclosed (please make cheque payable to CCNR, University of Sussex)

Please return to Annie Bacon, CCNR Workshop, School of Biological Sciences, University of Sussex, Brighton, East Sussex BN1 9QG. Details of the workshop location etc. will be forwarded on receipt of the conference fee.

--------------------------------------------------------------------------------------

From publicity at MIT.EDU Mon Jan 6 17:42:48 1997
From: publicity at MIT.EDU (MITP Publicity)
Date: Mon, 6 Jan 97 17:42:48 EST
Subject: Book Announcement
Message-ID:

The following is a book which readers of this list might find of interest. For more information please see http://www-mitpress.mit.edu/mitp/recent-books/linguistics/klabp.html

_The Balancing Act: Combining Symbolic and Statistical Approaches to Language_
edited by Judith Klavans and Philip Resnik

Symbolic and statistical approaches to language have historically been at odds - the former viewed as difficult to test and therefore perhaps impossible to define, and the latter as descriptive but possibly inadequate. At the heart of the debate are fundamental questions concerning the nature of language, the role of data in building a model or theory, and the impact of the competence-performance distinction on the field of computational linguistics. Currently, there is an increasing realization in both camps that the two approaches have something to offer in achieving common goals. The eight contributions in this book explore the inevitable "balancing act" that must take place when symbolic and statistical approaches are brought together - including basic choices about what knowledge will be represented symbolically and how it will be obtained, what assumptions underlie the statistical model, what principles motivate the symbolic model, and what the researcher gains by combining approaches.
The topics covered include: an examination of the relationship between traditional linguistics and statistical methods; qualitative and quantitative methods of speech translation; the study and implementation of combined techniques for automatic extraction of terminology; a comparative analysis of the contributions of linguistic cues to a statistical word-grouping system; the automatic construction of a symbolic parser via statistical techniques; combining linguistic with statistical methods in automatic speech understanding; an exploration of the nature of transformation-based learning; and a hybrid symbolic/statistical approach to recovering from parser failures.

Language, Speech, and Communication series. A Bradford Book.
November 1996, 140 pp., 30 illus. ISBN 0-262-61122-8, $17.50 paper

MIT Press * 55 Hayward Street * Cambridge, MA 02142 * (617) 625-8569

From movellan at ergo.ucsd.edu Mon Jan 6 21:39:12 1997
From: movellan at ergo.ucsd.edu (Javier R. Movellan)
Date: Mon, 6 Jan 1997 18:39:12 -0800
Subject: UCSD Cogsci Tech Report Announcement
Message-ID: <199701070239.SAA12418@ergo.ucsd.edu>

UCSD Cognitive Science Tech Report

Author: Sohie Lee
Communicated by: David Zipser
Title: The Representation, Storage and Retrieval of Reaching Movement Information in Motor Cortex

Electronic copies: http://cogsci.ucsd.edu and click on "Tech Reports and Software"

Physical copies: Available for $7.00 within the US, $10.00 outside the US. For physical copies send a check or money order payable to UC Regents and mail it to:

TR Request
Javier R. Movellan
Department of Cognitive Science
University of California San Diego
La Jolla, CA 92093-0515

ABSTRACT

This report describes the use of analytical techniques and recurrent neural networks to investigate the representation and storage of reaching movement information. A key feature of reaching movement representation revealed by single-cell recording is the firing of individual neurons to preferred movement directions. The preferred directions of motor cortical neurons change with starting hand position during reaching. I confirm that the precise nature of the tuning parameters' spatial modulation depends upon afferent format. I also show that nonlinear coordinate systems produce the spatially dependent tuning parameters of the general form required by experimental observation. A model that investigates the dynamics of movement representation in motor cortex is described. A fully recurrent neural network was trained to continually output the direction and magnitude of movements required to reach randomly changing targets. Model neurons developed preferred directions and other properties similar to real motor cortical neurons. The key finding is that when the target for a reaching movement changes location, the ensemble representation of the movement changes nearly monotonically, while the individual neurons comprising the representation exhibit strong, nonmonotonic transients. These transients serve as internal recurrent signals that force the ensemble representation to change more rapidly than if it were limited by the time constants of individual neurons. These transients can be tested for experimentally. A second model investigates how recurrent networks might implement the storage, retrieval and matching functions observed when monkeys are trained to perform delayed match-to-sample reaching tasks with distractors. A fully recurrent network was trained to perform the task. The model learns a storage mechanism that relies on fixed-point attractors.
A minimal-sized network is comprised of units that correspond to the various task components, whereas larger networks exhibit more distributed solutions and have neuron properties that more closely resemble single-cell behavior in the brain.

From S.Holden at cs.ucl.ac.uk Wed Jan 8 09:56:29 1997
From: S.Holden at cs.ucl.ac.uk (Sean Holden)
Date: Wed, 08 Jan 1997 14:56:29 +0000
Subject: New paper
Message-ID: <1246.852735389@cs.ucl.ac.uk>

The following paper does not specifically address connectionist networks; however, it may be of interest to readers of this list.

The following research note is now available:

Cross-Validation and the PAC Learning Model

Sean B. Holden
Research Note RN/96/64
Department of Computer Science
University College London
Gower Street, London WC1E 6BT, U.K.

Abstract

A large body of research exists within the general field of computational learning theory which, informally speaking, addresses the following question: how many examples are required so that, with `high probability', after training a supervised learner we can expect the error on the training set to be `close' to the actual probability of error (the {\em generalization error\/}) of the learner? Theoretical frameworks inspired by {\em probably approximately correct (PAC) learning\/} formalise what is meant by `high probability' and `close' in the above statement. A statistician might recognize this problem as that of knowing under what conditions the `resubstitution estimate'---as the error on the training set is often referred to---provides in a particular sense a good estimate of the generalization error. It is well known that, in fact, the resubstitution estimate usually provides a rather bad estimate of this quantity, and that several better estimates exist. In this paper we study two of the latter estimates---the {\em holdout estimate\/} and the {\em cross-validation estimate\/}---within a framework inspired by PAC learning theory. We derive upper and lower bounds on the sample complexity of the error estimation problem for these estimates. Our bounds apply for {\em any\/} consistent supervised learner.

A copy can be obtained as follows:

a) By anonymous ftp:
address: cs.ucl.ac.uk
file: research/rn/rn-96-64.ps.Z

b) From my Web page:
http://www.cs.ucl.ac.uk/staff/S.Holden/

c) By postal mail. A limited number of paper copies is available. Request a copy from:

Dr. Sean B. Holden
Department of Computer Science
University College London
Gower Street, London WC1E 6BT, U.K.

or make a request by email: s.holden at cs.ucl.ac.uk
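Since the note above compares the resubstitution, holdout and cross-validation estimates, a small sketch of how the three estimates are computed in practice may be useful. The fit/predict interface below is an assumption made for illustration; the research note itself concerns sample-complexity bounds, not code.

import numpy as np

# Three estimates of a classifier's generalization error. `model` is any
# object with fit(X, y) and predict(X); this interface is assumed.
def error(model, X, y):
    return float(np.mean(model.predict(X) != y))

def resubstitution_estimate(model, X, y):
    model.fit(X, y)
    return error(model, X, y)                # error on the training set itself

def holdout_estimate(model, X, y, train_frac=0.7, seed=0):
    idx = np.random.default_rng(seed).permutation(len(y))
    cut = int(train_frac * len(y))
    tr, te = idx[:cut], idx[cut:]
    model.fit(X[tr], y[tr])
    return error(model, X[te], y[te])        # error on unseen holdout data

def cross_validation_estimate(model, X, y, k=5, seed=0):
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        model.fit(X[tr], y[tr])
        errs.append(error(model, X[folds[i]], y[folds[i]]))
    return float(np.mean(errs))              # average over the k test folds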
From erik at bbf.uia.ac.be Thu Jan 9 11:21:25 1997
From: erik at bbf.uia.ac.be (Erik De Schutter)
Date: Thu, 9 Jan 1997 16:21:25 GMT
Subject: Crete Course in Computational Neuroscience
Message-ID: <199701091621.QAA16824@kuifje>

FIRST CALL

CRETE COURSE IN COMPUTATIONAL NEUROSCIENCE
SEPTEMBER 7 - OCTOBER 3, 1997
UNIVERSITY OF CRETE, GREECE

DIRECTORS:
Erik De Schutter (University of Antwerp, Belgium)
Idan Segev (Hebrew University, Jerusalem, Israel)
Jim Bower (California Institute of Technology, USA)
Adonis Moschovakis (University of Crete, Greece)

The Crete Course in Computational Neuroscience introduces students to the practical application of computational methods in neuroscience, in particular how to create biologically realistic models of neurons and networks. The course consists of two complementary parts. A distinguished international faculty gives morning lectures on topics in experimental and computational neuroscience. The rest of the day is spent learning how to use simulation software and how to implement a model of the system the student wishes to study.

The first week of the course introduces students to the most important techniques in modeling single cells, networks and neural systems. Students learn how to use the GENESIS, NEURON, XPP and other software packages on their individual Unix workstations. During the following three weeks the lectures will be more general, but each week will cover topics ranging from modeling single cells and subcellular processes through the simulation of simple circuits, to large neuronal networks and system-level models of the brain. The course ends with a presentation of the students' modeling projects.

The Crete Course in Computational Neuroscience is designed for advanced graduate students and postdoctoral fellows in a variety of disciplines, including neuroscience, physics, electrical engineering, computer science and psychology. Students are expected to have a basic background in neurobiology as well as some computer experience. A total of 28 students will be accepted, with an age limit of 35 years. We will accept students of any nationality, but the majority will be from the European Union and affiliated countries (Iceland, Israel, Liechtenstein and Norway). We specifically encourage applications from researchers who work in less-favoured regions of the EU, from women, and from researchers from industry.

Every student will be charged a tuition fee of 500 ECU (approx. US$630). In the case of students with a nationality from the EU, affiliated countries or Japan, the tuition fee covers lodging, local travel and all course-related expenses. All applicants with other nationalities will be charged an ADDITIONAL fee of 1000 ECU (approx. US$1260), which covers lodging, local travel and course-related expenses. For nationals from EU and affiliated countries, economy travel from an EU country to Crete will be refunded after the course. A limited number of students from less-favoured regions world-wide will get their fees and travel refunded.

More information and application forms can be obtained:
- WWW access: http://bbf-www.uia.ac.be/Crete_index.html (Please apply electronically using a web browser if possible.)
- email: crete_course at bbf.uia.ac.be
- by mail: Prof. E. De Schutter, Born-Bunge Foundation, University of Antwerp - UIA, Universiteitsplein 1, B2610 Antwerp, Belgium. FAX: +32-3-8202669

APPLICATION DEADLINE: April 5, 1997. Applicants will be notified of the results of the selection procedures by May 5.

FACULTY: L. Abbott (Brandeis University, USA), D. Beeman (University of Colorado, Boulder, USA), A. Borst (Max Planck Institute Tuebingen, Germany), R. Calabrese (Emory University, USA), A. Destexhe (Universite Laval, Canada), M. Hines (Yale University, USA), J.J.B. Jack (Oxford University, England), C. Koch (California Institute of Technology, USA), R. Kotter (Heinrich Heine University Dusseldorf, Germany), G. LeMasson (University of Bordeaux, France), K. Martin (Institute of Neuroinformatics, Zurich), M. Nicolelis (Duke University, USA), S. Redman (Australia National University Canberra), J.M. Rinzel (NIH, USA), S.A. Shamma (University of Maryland, USA), H. Sompolinsky (Hebrew University Jerusalem, Israel), S. Tanaka (RIKEN, Japan), A.M. Thomson (Royal Free Hospital, England), T.L. Williams (St George Hospital, London, England),
Y. Yarom (Hebrew University Jerusalem, Israel), and others to be named.

The Crete Course in Computational Neuroscience is supported by the European Commission (4th Framework Training and Mobility of Researchers program), by The Brain Science Foundation (Tokyo) and by UNESCO. Local administrative organization: the Institute of Applied and Computational Mathematics of FORTH (Crete, GR).

From fritzke at neuroinformatik.ruhr-uni-bochum.de Thu Jan 9 12:49:47 1997
From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke)
Date: Thu, 9 Jan 1997 18:49:47 +0100 (MET)
Subject: paper available on LBG-U
Message-ID: <199701091749.SAA00439@urda.neuroinformatik.ruhr-uni-bochum.de>

ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/manuscripts/IRINI/irini97-01/irini97-01.ps.gz

The following TR/preprint is available via ftp (93 KB, 10 pages):

The LBG-U method for vector quantization - an improvement over LBG inspired from neural networks

Bernd Fritzke
Systembiophysik, Institut f"ur Neuroinformatik
Ruhr-Universit"at Bochum, Germany

(to appear in: Neural Processing Letters, 1997, Vol. 5, No. 1)

Keywords: codebook construction, data compression, growing neural networks, LBG, vector quantization

Abstract: A new vector quantization method -- denoted LBG-U -- is presented which is closely related to a particular class of neural network models (growing self-organizing networks). LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG has converged, however, a novel measure of utility is assigned to each codebook vector. Thereafter, the vector with minimum utility is moved to a new location, LBG is run on the resulting modified codebook until convergence, another vector is moved, and so on. Since a strictly monotonous improvement of the LBG-generated codebooks is enforced, it can be proved that LBG-U terminates in a finite number of steps. Experiments with artificial data demonstrate significant improvements in terms of RMSE over LBG, combined with only modestly higher computational costs.

Comments are welcome,
Bernd Fritzke

PS: Sorry for the long and obviously redundant URL. In some cases our TRs are provided as LaTeX source with all figures in separate files; therefore, each TR has its own directory. It's a German system 8v).

--
Bernd Fritzke * Institut f"ur Neuroinformatik   Tel. +49-234 7007845
Ruhr-Universit"at Bochum * Germany              FAX. +49-234 7094210
WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html
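The abstract above specifies the outer loop of LBG-U but not the utility measure or the relocation rule, so the sketch below fills those in with plausible stand-ins: utility as the extra distortion incurred if a vector were deleted, and relocation next to the worst-represented data point. Both choices are assumptions; the paper defines the actual measure.

import numpy as np

def quantize(data, codebook):
    # squared distance of every point to every codebook vector
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1), d.min(1)

def lbg(data, codebook, iters=30):
    # standard LBG / Lloyd iteration: assign points, recompute centroids
    for _ in range(iters):
        nearest, _ = quantize(data, codebook)
        for k in range(len(codebook)):
            members = data[nearest == k]
            if len(members):
                codebook[k] = members.mean(0)
    return codebook

def lbg_u(data, codebook, moves=10, seed=0):
    rng = np.random.default_rng(seed)
    best = lbg(data, codebook.copy())
    best_err = quantize(data, best)[1].sum()
    for _ in range(moves):
        trial = best.copy()
        # ASSUMED utility: extra distortion if vector k were removed
        util = [quantize(data, np.delete(trial, k, 0))[1].sum() - best_err
                for k in range(len(trial))]
        # ASSUMED relocation: move the least useful vector next to the
        # data point that is currently represented worst
        worst = quantize(data, trial)[1].argmax()
        trial[int(np.argmin(util))] = data[worst] + 1e-3 * rng.standard_normal(data.shape[1])
        trial = lbg(data, trial)
        err = quantize(data, trial)[1].sum()
        if err < best_err:                 # accept strict improvements only,
            best, best_err = trial, err    # which forces termination
    return best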
From villmann at informatik.uni-leipzig.d400.de Thu Jan 9 13:27:06 1997
From: villmann at informatik.uni-leipzig.d400.de
Date: Thu, 9 Jan 1997 19:27:06 +0100
Subject: BOOK ANNOUNCEMENT
Message-ID: <970109192706*/S=villmann/OU=informatik/PRMD=UNI-LEIPZIG/ADMD=D400/C=DE/@MHS>

BOOK ANNOUNCEMENT

The following book is now available:

"Topologieerhaltung in selbstorganisierenden neuronalen Merkmalskarten"
(Topology Preservation in Self-Organizing Neural Feature Maps)

author: Thomas Villmann
publisher: Verlag Harri Deutsch, Frankfurt/M.
ISBN: 3-8171-1523-7
116 pages, 42 figures

The book, which is based on my PhD thesis, describes the current state of research on measuring topology preservation in Kohonen's self-organizing feature maps (SOFMs). After an introduction to SOFMs and considerations of their dynamics (as an extension of the work done by H. Ritter et al.), existing measures of topology preservation are analyzed and their limits are shown. Taking these considerations into account, and following an intuitive understanding of topology preservation, a rigorous mathematical approach is developed, based on the mathematical theory of (discrete) topology. Using this theory, a new measure, the so-called topographic function, is developed, which measures topology preservation according to its mathematically exact definition. In the last chapter some applications of the topographic function are presented. One of these is the evaluation of growing self-organizing feature maps with respect to their topology preservation; for this purpose a growing algorithm is presented which takes the principal components of the receptive fields of the neurons into account for the growing step.

Please note that the language of the book is German. Sorry, hardcopies of the PhD thesis itself are not available.

With best regards,
Thomas Villmann
email: villmann at informatik.uni-leipzig.de

From gbugmann at soc.plym.ac.uk Fri Jan 10 09:33:24 1997
From: gbugmann at soc.plym.ac.uk (Guido Bugmann xtn 2566)
Date: Fri, 10 Jan 1997 14:33:24 +0000 (GMT)
Subject: Studentship in Autonomous Mobile Robotics
Message-ID:

Research Studentship in Autonomous Mobile Robotics
Neural and Adaptive Systems Group
School of Computing, University of Plymouth

The Neural and Adaptive Systems Group is offering a studentship aimed at investigating autonomous mobile robotics. The topics to be investigated cover subjects such as biologically inspired models of spatial memory, the use of vision in navigation tasks and for object recognition, goal-directed action plans, exploration, and others.

A research student is sought to: (i) take part in the set-up of the lab and provide technical support, and (ii) perform experiments with mobile robots.

The ideal candidate should be a good all-rounder, with good programming skills (C/C++), knowledge of interfacing PCs with the outside world, some knowledge of the design of electronic circuits, some skill in building and assembling electromechanical equipment, and a strong interest in biological and artificial neurocomputing and its application in "useful" robots.

The position is initially for two years, enabling the student to work towards an MPhil, with a possible extension of one year to complete a PhD. The studentship is approximately 5400 pounds per year, tax free, which may be supplemented with part-time teaching.

To apply for this position please send, before February the 15th, a completed postgraduate application form, which you can obtain from Carole Watson [Phone (+44) 1752 23 25 41; Fax (+44) 1752 23 25 40; email: carole at soc.plym.ac.uk] or by writing to the address below.

For further information contact:
+--------------------------------------------------------------------+
Dr. Guido Bugmann
Neural and Adaptive Systems Group
School of Computing
University of Plymouth
Plymouth PL4 8AA, United Kingdom
tel (+44) 1752 23 25 66
fax (+44) 1752 23 25 40
email: gbugmann at soc.plym.ac.uk or gbugmann at plymouth.ac.uk
Home page: http://www.tech.plym.ac.uk/soc/staff/guidbugm/bugmann.html
+--------------------------------------------------------------------+

From ken at phy.ucsf.edu Thu Jan 9 22:54:19 1997
From: ken at phy.ucsf.edu (Ken Miller)
Date: Thu, 9 Jan 1997 19:54:19 -0800
Subject: Postdoctoral and Predoctoral Positions in Theoretical Neurobiology
Message-ID: <9701100354.AA06234@coltrane.ucsf.edu>

POSTDOCTORAL AND PREDOCTORAL POSITIONS
SLOAN CENTER FOR THEORETICAL NEUROBIOLOGY
UNIVERSITY OF CALIFORNIA, SAN FRANCISCO

INFORMATION ON THE UCSF SLOAN CENTER AND FACULTY AND THE POSTDOCTORAL AND PREDOCTORAL POSITIONS IS AVAILABLE THROUGH OUR WWW SITE: http://www.sloan.ucsf.edu/sloan. E-mail inquiries should be sent to sloan-info at phy.ucsf.edu. Below is basic information on the program.

The Sloan Center for Theoretical Neurobiology at UCSF solicits applications for pre- and post-doctoral fellowships, with the goal of bringing theoretical approaches to bear on neuroscience. Applicants should have a strong background and education in a theoretical discipline, such as physics, mathematics, or computer science, and a commitment to a future research career in neuroscience. Prior biological or neuroscience training is not required.

The Sloan Center will offer opportunities to combine theoretical and experimental approaches to understanding the operation of the intact brain. The research undertaken by the trainees may be theoretical, experimental, or a combination.

The RESIDENT FACULTY of the Sloan Center and their research interests are:

Allison Doupe: Development of song recognition and production in songbirds.

Stephen Lisberger: Learning and memory in a simple motor reflex, the vestibulo-ocular reflex, and visual guidance of smooth pursuit eye movements by the cerebral cortex.

Michael Merzenich: Experience-dependent plasticity underlying learning in the adult cerebral cortex, and the neurological bases of learning disabilities in children.

Kenneth Miller: Mechanisms of self-organization of the cerebral cortex; circuitry and computational mechanisms underlying cortical function; computational neuroscience.

Roger Nicoll: Synaptic and cellular mechanisms of learning and memory in the hippocampus.

Christoph Schreiner: Cortical mechanisms of perception of complex sounds such as speech in adults, and plasticity of speech recognition in children and adults.

Michael Stryker: Mechanisms that guide development of the visual cortex.

All of these resident faculty are members of UCSF's W.M. Keck Foundation Center for Integrative Neuroscience, a new center (opened January 1994) for systems neuroscience that includes extensive shared research resources within newly renovated space designed to promote interaction and collaboration. The unusually collaborative and interactive nature of the Keck Center will facilitate the training of theorists in a variety of approaches to systems neuroscience. In addition to the resident faculty, there are a series of VISITING FACULTY who are in residence at UCSF for times ranging from 1-8 weeks each year.
These faculty, and their research interests, include:

Laurence Abbott, Brandeis University: Neural coding, relations between firing-rate models and biophysical models, self-organization at the cellular level.

William Bialek, NEC Research Institute: Physical limits to sensory signal processing, reliability and information capacity in neural coding.

Sebastian Seung, ATT Bell Labs: Models of collective computation in neural systems.

David Sparks, University of Pennsylvania: Understanding the superior colliculus as a "model cortex" that guides eye movements.

Steven Zucker, McGill University: Neurally based models of vision, visual psychophysics, mathematical characterization of neuroanatomical complexity.

PREDOCTORAL applicants with strong theoretical training seeking to BEGIN a Ph.D. program should apply directly to the UCSF Neuroscience Ph.D. program. Contact Patricia Arrandale, patricia at phy.ucsf.edu, to obtain application materials; be sure to include your surface-mail address. The APPLICATION DEADLINE for Sloan Center applicants is Feb. 10, 1997 for fall 1997 admission. Sloan Center applicants must also alert the Sloan Center of their application, by writing to Steve Lisberger at the address given below.

POSTDOCTORAL applicants, or PREDOCTORAL applicants seeking to do research at the Sloan Center as part of a Ph.D. program in progress in a theoretical discipline elsewhere, should apply as follows: send a curriculum vitae, a statement of previous research and research goals, and up to three relevant publications, and have two letters of recommendation sent to us. THE APPLICATION DEADLINE IS February 10, 1997.

UC San Francisco is an Equal Opportunity Employer.

Send applications to:
Steve Lisberger
Sloan Center for Theoretical Neurobiology at UCSF
Department of Physiology
University of California
513 Parnassus Ave.
San Francisco, CA 94143-0444

From terry at salk.edu Sat Jan 11 02:02:36 1997
From: terry at salk.edu (Terry Sejnowski)
Date: Fri, 10 Jan 1997 23:02:36 -0800 (PST)
Subject: Telluride Workshop
Message-ID: <199701110702.XAA26267@helmholtz.salk.edu>

"NEUROMORPHIC ENGINEERING WORKSHOP"
JUNE 23 - JULY 13, 1997
TELLURIDE, COLORADO

Deadline for application is April 1, 1997.

Christof Koch (Caltech), Terry Sejnowski (Salk Institute/UCSD) and Rodney Douglas (Zurich, Switzerland) invite applications for a three-week summer workshop that will be held in Telluride, Colorado in 1997. The 1996 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at Caltech, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html

GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants.
Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems.

FORMAT: The three-week summer workshop will include background lectures, practical tutorials on aVLSI design, hands-on projects, and special interest groups. Participants are encouraged to get involved in as many of these activities as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons and after dinner.

The aVLSI practical tutorials will cover all aspects of aVLSI design, simulation, layout, and testing over the three weeks of the workshop. The first week covers the basics of transistors, simple circuit design, and simulation. This material is intended for participants who have no experience with aVLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners to building the peripheral boards necessary for interfacing aVLSI retinas to video output monitors. Retina chips will be provided. The third week will feature a session on floating gates, including lectures on the physics of tunneling and injection, and experimentation with test chips.

Projects carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generators, robotics, multichip communication, analog VLSI, and learning.

The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three-degree-of-freedom binocular camera system that is fully programmable.

The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple aVLSI sensors for autonomous robots.

The "robotics" group will use robot arms and working digital vision boards to investigate issues of sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.
The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. PARTIAL LIST OF INVITED LECTURERS: Andreas Andreou, Johns Hopkins. Richard Andersen, Caltech. Dana Ballard, Rochester. Avis Cohen, Maryland. Tobi Delbruck, Arithmos. Steve DeWeerth, Georgia Tech Rodney Douglas, Zurich. Christof Koch, Caltech. John Kauer, Tufts. Shih-Chii Liu, Caltech and Rockwell. Stefan Schaal, Georgia Tech Terrence Sejnowski, UCSD and Salk. Shihab Shamma, Maryland. Mark Tilden, Los Alamos. Paul Viola, MIT. LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 5 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains (several mountains are in the 14,000 range). The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, MACs and PCs running LINUX (and windows). We have funds to reimburse some participants for up to $500 of domestic travel and for all housing expenses. Please specify on the application whether such financial help is needed. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three week workshop. HOW TO APPLY: The deadline for receipt of applications is April 1, 1997. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Application should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around May 1, 1997. From giles at research.nj.nec.com Mon Jan 13 10:36:48 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 13 Jan 97 10:36:48 EST Subject: TR available Message-ID: <9701131536.AA26649@alta> The following TR is available from the NEC Research Institute and University of Maryland UMIACS archives. 
_______________________________________________________________________

A Delay Damage Model Selection Algorithm for NARX Neural Networks

Tsungnan Lin{1,2}, C. Lee Giles{1,3}, Bill G. Horne{1}, Sun-Yang Kung{2}

{1} NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
{2} Dept. of Electrical Engineering, Princeton U., Princeton, NJ 08540
{3} UMIACS, University of Maryland, College Park, MD 20742

U. of Maryland Technical Report CS-TR-3707 and UMIACS-TR-96-77

ABSTRACT

Recurrent neural networks have become popular models for system identification and time series prediction. NARX (Nonlinear AutoRegressive models with eXogenous inputs) neural network models are a popular subclass of recurrent networks and have been used in many applications. Though embedded memory can be found in all recurrent network models, it is particularly prominent in NARX models. We show that using intelligent memory order selection through pruning and good initial heuristics significantly improves the generalization and predictive performance of these nonlinear systems on problems as diverse as grammatical inference and time series prediction.

Keywords: Recurrent neural networks, tapped-delay lines, long-term dependencies, time series, automata, memory, temporal sequences, gradient descent training, latching, NARX networks, auto-regressive, pruning, embedding theory.

_________________________________________________________________________

http://www.neci.nj.nec.com/homepages/giles.html
http://www.cs.umd.edu/TRs/TR-no-abs.html

--
C. Lee Giles / Computer Sciences / NEC Research Institute /
4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482
www.neci.nj.nec.com/homepages/giles.html
==
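For readers unfamiliar with the model class: a NARX network predicts the next output from tapped-delay lines holding past outputs and past exogenous inputs. The sketch below shows that prediction loop only; the function f stands in for a trained feedforward network, and the memory orders ny and nu are exactly the quantities that the paper's pruning-based selection would tune. All names here are illustrative, not from the TR.

import numpy as np

# Generic NARX prediction loop: y(t+1) = f(y(t-ny+1..t), u(t-nu+1..t)).
# f is an arbitrary stand-in for a trained network; len(y_init) must be
# at least max(ny, nu) so the first taps are filled.
def narx_predict(f, y_init, u, ny=3, nu=3):
    y = list(y_init)
    for t in range(len(y) - 1, len(u) - 1):
        y_taps = y[t - ny + 1 : t + 1]        # embedded memory of past outputs
        u_taps = list(u[t - nu + 1 : t + 1])  # memory of past exogenous inputs
        y.append(f(np.array(y_taps + u_taps)))
    return np.array(y)

# usage with a toy stand-in for the network:
u = np.sin(np.linspace(0, 6, 40))
print(narx_predict(lambda v: 0.3 * v.sum(), [0.0, 0.0, 0.0], u)[:5])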
From harmonme at aa.wpafb.af.mil Mon Jan 13 16:06:16 1997
From: harmonme at aa.wpafb.af.mil (Mance E. Harmon)
Date: Mon, 13 Jan 97 16:06:16 -0500
Subject: On-Line, Interactive RL Tutorial
Message-ID: <970113160614.648@ethel.aa.wpafb.af.mil.0>

Reinforcement Learning: An On-Line, Interactive Tutorial
by Mance E. Harmon
http://eureka1.aa.wpafb.af.mil/rltutorial
(hardcopy length: 19 pages)

Scope of Tutorial

The purpose of this tutorial is to provide an introduction to reinforcement learning (RL) at a level easily understood by students and researchers in a wide range of disciplines. The intent is not to present a rigorous mathematical discussion that requires a great deal of effort on the part of the reader, but rather to present a conceptual framework that might serve as an introduction to a more rigorous study of RL. The fundamental principles and techniques used to solve RL problems are presented. The most popular RL algorithms are presented and interactively demonstrated using WebSim, a Java-based simulation development environment.

Section 1 presents an overview of RL and provides a simple example to develop intuition of the underlying dynamic programming mechanism. In Section 2 the parts of a reinforcement learning problem are discussed. These include the environment, reinforcement function, and value function. Section 3 gives a description of the most widely used reinforcement learning algorithms. These include TD(lambda) and both the residual and direct forms of value iteration, Q-learning, and advantage learning. In Section 4 some of the ancillary issues in RL are briefly discussed, such as choosing an exploration strategy and an appropriate discount factor. The conclusion is given in Section 5. Finally, Section 6 is a glossary of commonly used terms, followed by references in Section 7 and a bibliography of RL applications in Section 8.

It is assumed that the reader has some knowledge of learning algorithms that rely on gradient descent (such as the backpropagation of errors algorithm).

Mance Harmon
harmonme at aa.wpafb.af.mil
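Of the algorithms listed in Section 3 of the tutorial above, tabular Q-learning is the easiest to show compactly. The sketch below is a generic version for illustration, not code from the tutorial; the environment interface (reset, actions, step) is assumed.

import random

# Minimal tabular Q-learning with an epsilon-greedy exploration strategy.
# env.reset() -> state; env.actions(s) -> list of actions;
# env.step(a) -> (next_state, reward, done). This interface is assumed.
def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = {}                                  # (state, action) -> value estimate
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            if random.random() < epsilon:   # explore (see Section 4)
                a = random.choice(acts)
            else:                           # exploit current estimates
                a = max(acts, key=lambda b: Q.get((s, b), 0.0))
            s2, r, done = env.step(a)
            best_next = 0.0 if done else max(Q.get((s2, b), 0.0)
                                             for b in env.actions(s2))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r + gamma * best_next - q)
            s = s2
    return Q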
From lazzaro at CS.Berkeley.EDU Mon Jan 13 19:13:02 1997
From: lazzaro at CS.Berkeley.EDU (John Lazzaro)
Date: Mon, 13 Jan 1997 16:13:02 -0800 (PST)
Subject: New silicon audition papers online ...
Message-ID: <199701140013.QAA16004@snap.CS.Berkeley.EDU>

Two new papers on silicon audition are available online ...

--john lazzaro

(presented at NIPS*96)

A Micropower Analog VLSI HMM State Decoder for Wordspotting

John Lazzaro and John Wawrzynek, CS Division, UC Berkeley
Richard Lippmann, MIT Lincoln Laboratory

ABSTRACT

We describe the implementation of a hidden Markov model state decoding system, a component for a wordspotting speech recognition system. The key specification for this state decoder design is microwatt power dissipation; this requirement led to a continuous-time, analog circuit implementation. We describe the tradeoffs inherent in the choice of an analog design and explain the mapping of the discrete-time state decoding algorithm into the continuous domain. We characterize the operation of a 10-word (81-state) state decoder test chip.

Available on the Web at:
http://http.cs.berkeley.edu/~lazzaro/biblio/decoder.ps.gz

(forthcoming UCB Technical Report)

Anawake: Signal-Based Power Management For Digital Signal Processing Systems

John Lazzaro and John Wawrzynek, CS Division, UC Berkeley
Richard Lippmann, MIT Lincoln Laboratory

ABSTRACT

Single-chip, low-power, programmable digital signal processing systems are capable of hosting complete speech processing applications while consuming a few milliwatts of average power. We present a power management architecture that decreases the average power consumption of these systems to 3--10 microwatts, in applications where speech signals are present with a sufficiently low duty cycle. In this architecture, a micropower analog signal processing system, {\it Anawake,} continuously analyzes the incoming signal and controls the power consumption of the DSP system in a signal-dependent way. We estimate system power consumption for Anawake designs optimized for different peak-speech-signal to average-background-noise ratios.

Available on the Web at:
http://http.cs.berkeley.edu/~lazzaro/biblio/anawake.ps.gz

From movellan at ergo.ucsd.edu Tue Jan 14 15:15:13 1997
From: movellan at ergo.ucsd.edu (Javier R. Movellan)
Date: Tue, 14 Jan 1997 12:15:13 -0800
Subject: UCSD Cogsci Tech Report 97.01
Message-ID: <199701142015.MAA00743@ergo.ucsd.edu>

UCSD.Cogsci.TR.97.01

AUTHORS: Javier R. Movellan and Paul Mineiro
TITLE: Modularity and Catastrophic Fusion: A Bayesian Approach with Applications to Audiovisual Speech Recognition

ABSTRACT: While modular architectures have desirable properties, integrating the outputs of many modules into a unified representation is not a trivial issue. In this paper we examine catastrophic fusion, a problem that occurs when modules are fused in incorrect context conditions. This problem has become especially apparent in current research on automatic recognition of multimodal signals and has practical as well as theoretical relevance. Catastrophic fusion arises because modules make implicit assumptions and thus operate correctly only within a certain context. Practice shows that when modules are tested in contexts inconsistent with their assumptions, their influence on the fused product tends to increase, with catastrophic results. We propose a principled solution to this problem based upon Bayesian ideas of competitive models. We study the approach analytically on a classic Gaussian discrimination task and then apply it to a realistic problem in audiovisual speech recognition (AVSR), with excellent results. For concreteness our emphasis is on applications to AVSR, but the problems at hand are very general and touch fundamental issues about cognitive architectures.

ELECTRONIC COPIES: http://cogsci.ucsd.edu and follow the link to "Tech Reports"

PHYSICAL COPIES: Available for $7.00 within the US, $10.00 outside the US. For physical copies send a check or money order payable to UC Regents and mail it to the following address:

TR Request
Javier R. Movellan
Department of Cognitive Science
University of California San Diego
La Jolla, CA 92093-0515
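The abstract's remedy for catastrophic fusion can be caricatured in a few lines: rather than multiplying module likelihoods blindly, each module's contribution is weighted by how credible that module's modeling assumptions are in the current context. The weighting scheme below (a credibility-weighted geometric combination) is an illustrative stand-in, not the report's actual derivation.

import numpy as np

# Schematic competitive fusion. class_likelihoods[m, c] is module m's
# likelihood for class c; context_evidence[m] is how well module m's
# model explains the current raw input. Both the inputs and the
# weighting rule are illustrative assumptions.
def fuse(class_likelihoods, context_evidence):
    w = context_evidence / context_evidence.sum()      # module credibility
    log_post = (w[:, None] * np.log(class_likelihoods + 1e-12)).sum(0)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# a degraded module (low context evidence) barely influences the result:
audio = np.array([0.8, 0.2])          # audio module favors class 0
video = np.array([0.3, 0.7])          # noisy video module favors class 1
print(fuse(np.vstack([audio, video]), np.array([0.9, 0.1])))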
TEMECOR-II achieves the continuity property by computing, on each time slice, t, the degree of match, G(t), between its expected and actual inputs and then adding an amount of noise, inversely proportional to G(t), into the process of choosing a final internal representation at t. This generally leads to reactivation of old traces (i.e., greater pattern completion) in proportion to the familiarity of inputs, and establishment of new traces (i.e., greater pattern separation) in proportion to the novelty of inputs. Simulation results are given for TEMECOR-II, demonstrating the embedding of similarity relationships in the model's adaptive mappings between inputs and internal representations, and the model's ability to co-categorize similar spatio-temporal events. The model is monolithic in that all three types of memory are explained by a single local circuit architecture, instantiating a winner-take-all network, that is proposed as an analog of the cortical minicolumn. Thus, episodic (i.e., exemplar-specific) and semantic (i.e., general, category-level) information coexist in the same physical substrate. A principle/mechanism is described whereby the model's instantaneous level of memory access, along the spectrum from highly specific (based on the details of a single exemplar) to highly generic (based on the general properties of a class of exemplars), can be controlled by modulation of various threshold parameters. =========================================================== FTP instructions: (e.g. to retrieve chapter 1)
unix> ftp cns-ftp.bu.edu
Name: anonymous
Password: your full email address
ftp> cd pub/rinkus
ftp> get thesis_chap1.ps.Z
ftp> bye
unix> uncompress thesis_chap1.ps.Z
then send to a PostScript printer or previewer.
Note: the file names and page lengths are:
thesis_chap1.ps.Z ch. 1 49 pages (incl. prelim pages)
thesis_chap2.ps.Z ch. 2 24 pages
thesis_chap3.ps.Z ch. 3 29 pages
thesis_chap4.ps.Z ch. 4 113 pages
thesis_chap5.ps.Z ch. 5 5 pages
thesis_refs.ps.Z refs. 11 pages
From marks at u.washington.edu Tue Jan 14 18:45:23 1997 From: marks at u.washington.edu (Robert Marks) Date: Tue, 14 Jan 97 15:45:23 -0800 Subject: IEEE TNN now on-line Message-ID: <9701142345.AA20188@carson.u.washington.edu> The IEEE Transactions on Neural Networks is now on line. It is currently free to IEEE members. The web address is: http://www.opera.ieee.org/jolly/ Have your IEEE number handy. IEEE membership information is available on the NNC home page. http://engine.ieee.org/nnc/ Robert J. Marks II, Editor-in-Chief IEEE Transactions on Neural Networks r.marks at ieee.org From moody at chianti.cse.ogi.edu Wed Jan 15 01:54:15 1997 From: moody at chianti.cse.ogi.edu (John Moody) Date: Tue, 14 Jan 97 22:54:15 -0800 Subject: Research Position in Statistical Learning Message-ID: <9701150654.AA10106@chianti.cse.ogi.edu> Research Position in Nonparametric Statistics, Neural Networks and Machine Learning at Department of Computer Science & Engineering Oregon Graduate Institute of Science & Technology I am seeking a highly qualified researcher to take a leading role on a project involving the development and testing of new model selection and input variable subset selection algorithms for classification, regression, and time series prediction applications. Candidates should have a PhD in Statistics, EE, CS, or a related field, have experience in neural network modeling, nonparametric statistics or machine learning, have strong C programming skills, and preferably have experience with S-Plus and Matlab.
The compensation and level of appointment (Postdoctoral Research Associate or Senior Research Associate) will depend upon experience. The initial appointment will be for one year, but may be extended depending upon the availability of funding. Candidates who can start by April 1, 1997 or before will be given preference, although an extremely qualified candidate who is available by June 1 may also be considered. If you are interested in applying for this position, please mail, fax, or email your CV (ascii text or postscript only), a letter of application, and a list of at least three references (names, addresses, emails, phone numbers) to: Ms. Sheri Dhuyvetter Computer Science & Engineering Oregon Graduate Institute PO Box 91000 Portland, OR 97291-1000 Phone: (503) 690-1476 FAX: (503) 690-1548 Email: sherid at cse.ogi.edu Please do not send applications to me directly. I will consider all applications received by Sheri on or before January 31. OGI (Oregon Graduate Institute of Science and Technology) has over a dozen faculty, senior research staff, and postdocs doing research in Neural Networks, Machine Learning, Signal Processing, Time Series, Control, Speech, Language, Vision, and Computational Finance. Short descriptions of our research interests are appended below. Additional information is available on the Web at http://www.cse.ogi.edu/Neural/ and http://www.cse.ogi.edu/CompFin/ . OGI is a young, but rapidly growing, private research institute located in the Silicon Forest area west of downtown Portland, Oregon. OGI offers Masters and PhD programs in Computer Science and Engineering, Electrical Engineering, Applied Physics, Materials Science and Engineering, Environmental Science and Engineering, Chemistry, Biochemistry, Molecular Biology, Management, and Computational Finance. The Portland area has a high concentration of high tech companies that includes major firms like Intel, Hewlett Packard, Tektronix, Sequent Computer, Mentor Graphics, Wacker Siltronics, and numerous smaller companies like Planar Systems, FLIR Systems, Flight Dynamics, and Adaptive Solutions (an OGI spin-off that manufactures high performance parallel computers for neural network and signal processing applications). +++++++++++++++++++++++++++++++++++++++++++++++++++++++ Oregon Graduate Institute of Science & Technology Department of Computer Science & Engineering Department of Electrical Engineering Research Interests of Faculty, Research Staff, and Postdocs in Neural Networks, Machine Learning, Signal Processing, Control, Speech, Language, Vision, Time Series, and Computational Finance Etienne Barnard (Associate Professor, EE): Etienne Barnard is interested in the theory, design and implementation of pattern-recognition systems, classifiers, and neural networks. He is also interested in adaptive control systems -- specifically, the design of near-optimal controllers for real-world problems such as robotics. Ron Cole (Professor, CSE): Ron Cole is director of the Center for Spoken Language Understanding at OGI. Research in the Center currently focuses on speaker-independent recognition of continuous speech over the telephone and automatic language identification for English and ten other languages. The approach combines knowledge of hearing, speech perception, acoustic phonetics, prosody and linguistics with neural networks to produce systems that work in the real world.
Mark Fanty (Research Assistant Professor, CSE): Mark Fanty's research interests include continuous speech recognition for the telephone; natural language and dialog for spoken language systems; neural networks for speech recognition; and voice control of computers. Dan Hammerstrom (Associate Professor, CSE): Based on research performed at the Institute, Dan Hammerstrom and several of his students have spun out a company, Adaptive Solutions Inc., which is creating massively parallel computer hardware for the acceleration of neural network and pattern recognition applications. There are close ties between OGI and Adaptive Solutions. Dan is still on the faculty of the Oregon Graduate Institute and continues to study next generation VLSI neurocomputer architectures. Hynek Hermansky (Associate Professor, EE): Hynek Hermansky is interested in speech processing by humans and machines with engineering applications in speech and speaker recognition, speech coding, enhancement, and synthesis. His main research interest is in practical engineering models of human information processing. Todd K. Leen (Associate Professor, CSE): Todd Leen's research spans theory of neural network models, architecture and algorithm design and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on algorithm design is focused on fast algorithms for non-linear data modeling. John Moody (Associate Professor, CSE): John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, economics, and computational finance. David Novick (Associate Professor, CSE): David Novick conducts research in interactive systems, including computational models of conversation, technologically mediated communication, and human-computer interaction. A central theme of this research is the role of meta-acts in the control of interaction. Current projects include dialogue models for telephone-based information systems. Misha Pavel (Associate Professor, EE): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human-computer interfaces. Hong Pi (Senior Research Associate, CSE): Hong Pi's research interests include neural network models, time series analysis, and dynamical systems theory. He currently works on the applications of nonlinear modeling and analysis techniques to time series prediction problems and financial market analysis. Pieter Vermeulen (Senior Research Associate, CSE): Pieter Vermeulen is interested in the theory, design and implementation of pattern-recognition systems, neural networks and telephone based speech systems. He currently works on the realization of speaker independent, small vocabulary interfaces to the public telephone network. Current projects include voice dialing, a system to collect the year 2000 census information and the rapid prototyping of such systems. Eric A. Wan (Assistant Professor, EE): Eric Wan's research interests include learning algorithms and architectures for neural networks and adaptive signal processing.
He is particularly interested in neural applications to time series prediction, adaptive control, active noise cancellation, and telecommunications. Lizhong Wu (Senior Research Associate, CSE): Lizhong Wu's research interests include neural network theory and modeling, time series analysis and prediction, pattern classification and recognition, signal processing, vector quantization, source coding and data compression. He is now working on the application of neural networks and nonparametric statistical paradigms to finance. From lemmon at endeavor.ee.nd.edu Wed Jan 15 09:16:07 1997 From: lemmon at endeavor.ee.nd.edu (Michael Lemmon) Date: Wed, 15 Jan 1997 09:16:07 -0500 Subject: IEEE-TAC Special Issue Message-ID: <199701151416.JAA00690@endeavor.ee.nd.edu> Contributed by Michael D. Lemmon (lemmon at maddog.ee.nd.edu) FIRST CALL FOR PAPERS IEEE Transactions on Automatic Control announces a Special Issue on ARTIFICIAL NEURAL NETWORKS IN CONTROL, IDENTIFICATION, and DECISION MAKING Edited by
Anthony N. Michel, Dept of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556, USA, (219)-631-5534 (voice), (219)-631-4393 (fax), Anthony.N.Michel.1 at nd.edu
Michael Lemmon, Dept. of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556, USA, (219)-631-8309 (voice), (219)-631-4393 (fax), lemmon at maddog.ee.nd.edu
There is a growing body of experimental work suggesting that artificial neural networks can be very adept at solving pattern classification problems where there is significant real-world uncertainty. Neural networks also provide an analog method for quickly determining approximate solutions to complex optimization problems. Both of these capabilities can be of great use in solving various control problems, and in recent years there has been increased interest in the use of artificial neural networks in the control and supervision of complex dynamical systems. This announcement is a call for papers addressing the topic of neural networks in control, identification, and decision making. Accepted papers will be published in a special issue of the IEEE Transactions on Automatic Control. The special issue is seeking papers which use formal analysis to establish the role of neural networks in control, identification, and decision making. For this reason, papers consisting primarily of empirical simulation results will not be considered for publication. Before submitting, prospective authors should consult past issues of the IEEE Transactions on Automatic Control to identify the type of results and the level of mathematical rigor that are the norm in this journal. Submitted papers are due by July 1, 1997 and should be sent to Michael D. Lemmon or Anthony N. Michel. Notification of acceptance decisions will be sent by December 31, 1997. The special issue is targeted for publication in 1998 or early 1999. All papers will be refereed in accordance with IEEE guidelines. Please consult the inside back cover of any recent issue of the Transactions on Automatic Control for style and length of the manuscript and the number of required copies (seven copies with cover letter) to be sent to one of the editors of this special issue.
From marwan at ee.usyd.edu.au Thu Jan 16 05:03:49 1997 From: marwan at ee.usyd.edu.au (Marwan Jabri) Date: Thu, 16 Jan 1997 21:03:49 +1100 (EST) Subject: Postgraduate scholarship Message-ID: Postgraduate scholarship Models and implementations of spiking networks This scholarship is funded by the Australian Research Council for a period of three years to support a student studying towards a PhD. Students applying for this scholarship should have a first class Honors degree or equivalent in electrical/computer engineering. The scholarship amounts to A$18,000 per year. The area of investigation is the development of models, learning algorithms and associated VLSI implementations of spiking networks. Note that non-Australian residents have to pay fees to enrol at Australian universities. The fee amounts can be found on the University of Sydney web page (www.usyd.edu.au). Applicants should forward to my address below a CV including copies of transcripts of academic record and the names, email, phone, fax, and addresses of two academic referees. Deadline is Friday Feb 7, 1997. ------------ Marwan Jabri Professor in Adaptive Systems Dept of Electrical Engineering, The University of Sydney NSW 2006, Australia Tel: (+61-2) 9351-2240, Fax: (+61-2) 9351-7209, Mobile: (+61) 414-512240 Email: marwan at sedal.usyd.edu.au, http://www.sedal.usyd.edu.au/~marwan/ From mesegal at aehn2.einstein.edu Thu Jan 16 04:24:36 1997 From: mesegal at aehn2.einstein.edu (mary segal) Date: Thu, 16 Jan 1997 09:24:36 GMT Subject: Recruitment for fellows in neural networks/rehabilitation research Message-ID: <199701160924.JAA25147@aehn2.einstein.edu> FELLOWSHIPS AVAILABLE IN NEURAL NETWORKS RELATED TO COGNITIVE NEUROSCIENCE AND REHABILITATION RESEARCH The Moss Rehabilitation Research Institute (MRRI) is accepting applications for two-year research fellowships. MRRI provides an opportunity for mutual collaboration with senior investigators in the field; educational conferences, lab meetings, and weekly patient rounds; and access to a variety of patient populations. Requirements include a PhD in a relevant area of cognitive neuroscience or rehabilitation, training in research design, and applied experience in data collection and analysis. Projects are funded through Federal agencies, such as the National Institutes of Health, as well as private foundations. Individuals with the following interests are invited to apply for the 1997-1998 fellowships: Applications of neural networks to both theoretical and applied problems in rehabilitation research, including 1) cognitive processing and/or acquired language disorders, e.g. in stroke patients; 2) relationship between areas of muscle weakness and compensatory musculoskeletal overuse symptoms in polio and other neurologic diseases; and/or 3) prediction of functional outcomes in rehabilitation patients. Mentors include Drs. John Whyte, MRRI director; Myrna Schwartz, MRRI associate director; Laurel Buxbaum; Branch Coslett; Mary Klein; Susan Kohn; and Mary Segal. MRRI is affiliated with a number of area academic institutions and has close ties with departments of physical medicine and rehabilitation, psychology, cognitive neuroscience, neurology, and bioengineering at various institutions including the University of Pennsylvania, Temple University, Lehigh University, and Drexel University. MRRI is an equal opportunity employer.
For more information contact Mary Segal, Moss Rehabilitation Research Institute, MossRehab Hospital, 213 Korman Building, 1200 West Tabor Road, Philadelphia, PA 19141; telephone (215) 456-9901 ext. 9181; FAX (215) 456-9514; e-mail mesegal at aehn2.einstein.edu. From anderson at magnum.cog.brown.edu Fri Jan 17 09:30:58 1997 From: anderson at magnum.cog.brown.edu (anderson@magnum.cog.brown.edu) Date: Fri, 17 Jan 1997 09:30:58 EST Subject: Faculty Position in Cognition, Brown University Message-ID: <009AE7E1.9C226A40.11@magnum.cog.brown.edu> FACULTY POSITION IN COGNITION, BROWN UNIVERSITY. The Department of Cognitive and Linguistic Sciences at Brown University invites applications for a four-year faculty position in Human Cognition, to begin July 1, 1997 (initial three year appointment with one year renewal, non-tenure-track). The position would be suited to either a senior visitor who would receive half-time salary support and teach two courses per year, or a more junior applicant who would receive full salary support and teach three courses per year. Candidates should have core teaching and research interests in an area of human cognition such as perception, attention, memory, categorization, problem solving, reasoning, or decision making, as well as an interest in interacting with members of an interdisciplinary department. Familiarity with computational modeling is desirable. All applicants must have received the Ph.D. degree or equivalent by the beginning of the appointment. The initial deadline for applications is March 1, 1997, but applications will be accepted after that time until the position is filled. Please send CV, recent publications, and a cover letter describing teaching and research interests to the address below. Senior applicants should enclose the names of three referees; junior applicants should have three letters of reference sent to: Cognitive Search Committee, Department of Cognitive and Linguistic Sciences, Box 1978, Brown University, Providence, RI 02912. Brown is an Equal Opportunity/Affirmative Action employer. Women and minorities are especially encouraged to apply. From bruno at redwood.ucdavis.edu Fri Jan 17 21:05:41 1997 From: bruno at redwood.ucdavis.edu (Bruno A. Olshausen) Date: Fri, 17 Jan 1997 18:05:41 -0800 Subject: grad program, UC Davis Message-ID: <199701180205.SAA20962@redwood.ucdavis.edu> GRADUATE PROGRAM IN NEUROSCIENCE UNIVERSITY OF CALIFORNIA, DAVIS The Graduate Program in Neuroscience at the University of California, Davis offers interdisciplinary training in areas from molecular to cognitive neuroscience. Many research opportunities exist for students interested in computational modeling approaches to problems in neuroscience. The Center for Neuroscience and the Institute for Theoretical Dynamics provide students and faculty with numerous research facilities and an excellent environment for combining theoretical and experimental approaches. 
Relevant faculty include:
David Amaral - structure and function of hippocampus, amygdala
Ken Britten - visual cortex, neural basis of motion perception
Leo Chalupa - retina neurophysiology, development
Barbara Chapman - development and plasticity of sensory systems
Charles Gray - cortical mechanisms of pattern recognition, rhythmic activity
Andrew Ishida - retinal ganglion cells, synaptic integration
Joel Keizer - computational modeling, cell physiology, calcium dynamics
Leah Krubitzer - cortical organization, comparative anatomy
Ron Mangun - selective attention, cognitive neuroimaging
Bruno Olshausen - computational models of vision, efficient coding
Robert Rafal - neuropsychology of visual attention
Gregg Recanzone - cortical mechanisms of attention, sensory processing
Lynn Robertson - spatial vision, object recognition, hemispheric differences
Karen Sigvart - neural control of locomotion
Mitch Sutter - cortical mechanisms of auditory perception, plasticity
Martin Wilson - synaptic transmission in the retina
*** Application deadline for fall admissions is February 15, 1997. ***
Application materials may be obtained from:
Ms. Dawne Shell
Graduate Group Complex
188 Briggs Hall
University of California, Davis
Davis, California 95616-8599
tel: (916) 752-9091 or 9092
fax: (916) 752-8822
e-mail: drshell at ucdavis.edu
Specific questions regarding the program should be directed to:
Lynn Robertson, Program Chair: (916) 757-8853, (510) 372-2000 X6891, marva4!lynn at ucdavis.edu
David Amaral, Chair of Admissions: (916) 757-8813, dgamaral at ucdavis.edu
Web site: http://neuroscience.ucdavis.edu/ngg/
From nkasabov at commerce.otago.ac.nz Mon Jan 20 16:39:11 1997 From: nkasabov at commerce.otago.ac.nz (Nikola Kasabov) Date: Mon, 20 Jan 1997 09:39:11 -1200 Subject: ICONIP'97 call for papers Message-ID: <120BF143F97@jupiter.otago.ac.nz> CALL FOR PAPERS, PRESENTATIONS, SPECIAL SESSIONS ICONIP'97 jointly with ANZIIS'97 and ANNES'97 (in cooperation with IEEE NNC and INNS) The Fourth International Conference on Neural Information Processing-- The Annual Conference of the Asian Pacific Neural Network Assembly, jointly with The Fifth Australian and New Zealand International Conference on Intelligent Information Processing Systems, and The Third New Zealand International Conference on Artificial Neural Networks and Expert Systems, 24-28 November, 1997 Dunedin/Queenstown, New Zealand The joint conference will have three parallel streams:
Stream1: Neural Information Processing
Stream2: Computational Intelligence and Soft Computing
Stream3: Intelligent Information Systems and their Applications
TOPICS OF INTEREST
Stream1: Neural Information Processing * Neurobiological systems * Cognition * Cognitive models of the brain * Dynamical modelling, chaotic processes in the brain * Brain computers, biological computers * Consciousness, awareness, attention * Adaptive biological systems * Modelling emotions * Perception, vision * Learning languages * Evolution
Stream2: Computational Intelligence and Soft Computing * Artificial neural networks: models, architectures, algorithms * Fuzzy systems * Evolutionary programming and genetic algorithms * Artificial life * Distributed AI systems, agent-based systems * Soft computing--paradigms, methods, tools * Approximate reasoning * Probabilistic and statistical methods * Software tools, hardware implementation
Stream3: Intelligent Information Systems and their Applications * Connectionist-based information systems * Hybrid systems * Expert systems * Adaptive systems * Machine
learning, data mining and intelligent databases * Pattern recognition and image processing * Speech recognition and language processing * Intelligent information retrieval systems * Human-computer interfaces * Time-series prediction * Control * Diagnosis * Optimisation * Application of intelligent information technologies in: manufacturing, process control, quality testing, finance, economics, marketing, management, banking, agriculture, environment protection, medicine, geographic information systems, government, law, education, and sport * Intelligent information technologies on the global networks
HONORARY CHAIR Shun-Ichi Amari, Tokyo University
GENERAL CONFERENCE CHAIR Nik Kasabov, University of Otago nkasabov at otago.ac.nz
CONFERENCE CO-CHAIRS Yianni Attikiouzel, University of Western Australia Marwan Jabri, Sydney University
PROGRAM CO-CHAIRS Tom Gedeon, University of New South Wales George Coghill, University of Auckland
LOCAL ORGANIZING COMMITTEE CHAIR: Philip Sallis, University of Otago
CONFERENCE ORGANISER Ms Kitty Ko Department of Information Science, University of Otago, PO Box 56, Dunedin, New Zealand phone: +64 3 479 8153, fax: +64 3 479 8311, email: kittyko at commerce.otago.ac.nz
CALL FOR PAPERS Papers must be received by 30 May 1997. They will be reviewed by senior researchers in the field and the authors will be informed about the decision of the review process by 20 July 1997. The accepted papers must be submitted in a camera-ready format by 20 August. All accepted papers will be published by Springer-Verlag. As the conference is a multi-disciplinary meeting, the papers are required to be comprehensible to a wide rather than a very specialised audience. Papers will be presented at the conference either in an oral or in a poster session. Please submit three copies of the paper written in English on A4-format white paper with one inch margins on all four sides, in two column format, on not more than 4 pages, single-spaced, in Times or similar font of 10 points, and printed on one side of the page only. Centred at the top of the first page should be the complete title, author(s), mailing and e-mailing addresses, followed by an abstract and the text. In the covering letter the stream and the topic of the paper according to the list above should be indicated. SPECIAL ISSUES OF JOURNALS AND EDITED VOLUMES Selected papers will be published in special issues of scientific journals and in edited volumes which will include chapters covering the conference topics written by invited conference participants. TUTORIALS (24 November) Conference tutorials will be organized to introduce the basics of cognitive modelling, dynamical systems, neural networks, fuzzy systems, evolutionary programming, machine learning, soft computing, expert systems, hybrid systems, and adaptive systems. EXHIBITION Companies and university research laboratories are encouraged to exhibit software and hardware systems that they have developed or distribute. SPECIAL EVENTS FOR PRACTITIONERS The New Zealand Computer Society is organising special demonstrations, lectures and materials for practitioners working in the area of information technologies. VENUE (Dunedin/Queenstown) The Conference will be held at the University of Otago, Dunedin, New Zealand. The closing session will be held on Friday, 28 November on a cruise on one of the most beautiful lakes in the world, Lake Wakatipu. The cruise departs from the famous tourist centre Queenstown, about 300 km from Dunedin.
Transportation will be provided and there will be a separate discount cost for the cruise. TRAVELLING The Dunedin branch of House of Travel, a travel company, is happy to assist in any domestic and international travel arrangements for the Conference delegates. They can be contacted through email: travel at es.co.nz, fax: +64 3 477 3806, phone: +64 3 477 3464, or toll free number: 0800 735 737 (within NZ). POSTCONFERENCE EVENTS Following the closing conference cruise, delegates may like to experience the delights of Queenstown, Central Otago, and Fiordland. Travel plans can be coordinated by the Dunedin Visitor Centre (phone: +64 3 474 3300, fax: +64 3 474 3311). IMPORTANT DATES
Papers due: 30 May 1997
Notification of acceptance: 20 July 1997
Final camera-ready papers due: 20 August 1997
Registration of at least one author of a paper: 20 August 1997
Early registration: 20 August 1997
CONFERENCE CONTACTS, PAPER SUBMISSIONS, CONFERENCE INFORMATION, REGISTRATION FORMS Conference Secretariat Department of Information Science, University of Otago, PO Box 56, Dunedin, New Zealand; phone: +64 3 479 8142; fax: +64 3 479 8311; email: iconip97 at otago.ac.nz Home page: http://divcom.otago.ac.nz:800/com/infosci/kel/conferen.htm RELATED CONFERENCE The World Manufacturing Congress'97 (WMC'97) will be held from November 18-21, 1997 at Massey University, Albany Campus, Auckland, New Zealand. For further information please visit the Web Site: http://www.compusmart.ab.ca/icsc/wmc97.htm From radford at cs.toronto.edu Mon Jan 20 13:52:03 1997 From: radford at cs.toronto.edu (Radford Neal) Date: Mon, 20 Jan 1997 13:52:03 -0500 Subject: Software & Technical Report available Message-ID: <97Jan20.135204edt.1028@neuron.ai.toronto.edu> Now available free for research and educational use: SOFTWARE FOR FLEXIBLE BAYESIAN MODELING This software implements a variety of Bayesian models for regression and classification based on neural networks and Gaussian processes. The software is written in C for Unix. The neural network programs are an update of those previously distributed, which are described in my book, Bayesian Learning for Neural Networks (Springer-Verlag 1996, ISBN 0-387-94724-8). The Gaussian process models and their implementation are described in the following technical report: MONTE CARLO IMPLEMENTATION OF GAUSSIAN PROCESS MODELS FOR BAYESIAN REGRESSION AND CLASSIFICATION Radford M. Neal Dept. of Statistics and Dept. of Computer Science University of Toronto Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple non-parametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution, and logistic or probit models for classification applications, can be implemented by also sampling the latent values underlying the observations. Software is now available that implements these methods using covariance functions with hierarchical parameterizations. Models defined in this way can discover high-level properties of the data, such as which inputs are relevant to predicting the response.
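[Editorial aside: the basic matrix computations the abstract refers to fit in a few lines. Below is a minimal Python/numpy sketch of GP regression, assuming a squared-exponential covariance with fixed hyperparameters (ell, sf, sn are illustrative names, not from the report); Neal's software additionally samples the hyperparameters, and latent values, by Markov chain methods.]

import numpy as np

def gp_regress(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    # Minimal GP regression sketch (illustrative; not Neal's code).
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)
    K = k(X, X) + sn ** 2 * np.eye(len(X))  # covariance of noisy targets
    L = np.linalg.cholesky(K)               # O(n^3): feasible up to ~1000 cases
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mean = Ks.T @ alpha                     # predictive mean at test inputs Xs
    v = np.linalg.solve(L, Ks)
    var = np.diag(k(Xs, Xs)) - (v ** 2).sum(axis=0)  # latent-function variance
    return mean, var

[The cubic cost of the Cholesky factorization is what limits this direct approach to about a thousand cases, as noted in the abstract above.]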
Both the software and the technical report can be obtained via my home page, at URL http://www.cs.utoronto.ca/~radford/ You can directly obtain the compressed Postscript for the technical report at URL ftp://ftp.cs.utoronto.ca/pub/radford/mc-gp.ps.Z Please let me know if you encounter any difficulties. ---------------------------------------------------------------------------- Radford M. Neal radford at cs.utoronto.ca Dept. of Statistics and Dept. of Computer Science radford at utstat.utoronto.ca University of Toronto http://www.cs.utoronto.ca/~radford ---------------------------------------------------------------------------- From georg at ai.univie.ac.at Tue Jan 21 11:30:13 1997 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Tue, 21 Jan 1997 17:30:13 +0100 (MET) Subject: CFP: NN in biomedical systems Message-ID: <199701211630.RAA21869@jedlesee.ai.univie.ac.at> Call for Abstracts for a ======================================= Special track on biomedical systems ======================================= at the International Conference on Engineering Applications of Neural Networks (EANN '97) Stockholm, Sweden 16-18 June 1997 ----------------------------------------------------------------------------- The deadline for submission of abstracts to the special track on biomedical systems at EANN '97 has been extended to =================================== January 31, 1997 (email submission) =================================== Please send your submissions to georg at ai.univie.ac.at (Georg Dorffner) --------------- Instructions: Abstracts of one page (about 400 words) should be sent to georg at ai.univie.ac.at by 31 January 1997 by e-mail in plain ASCII format. Please mention two to four keywords, whether you prefer a short paper or a full paper, and whether you would prefer oral or poster presentation. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Notification of acceptance will be sent around 7 February. Submissions will be reviewed and the number of full papers will be very limited. For information on earlier EANN conferences see the www pages at http://www.abo.fi/~abulsari/EANN95.html and http://www.abo.fi/~abulsari/EANN96.html --------------- About the conference: The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, biomedical systems, and environmental engineering. Other special tracks are: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, Ersin-Tulunay at metu.edu.tr), Hybrid Systems (D. Tsaptsinos, D.Tsaptsinos at kingston.ac.uk), Mechanical Engineering (A. Scherer, Andreas_Scherer at hp.com), Process Engineering (R. Baratti, baratti at ndchem3.unica.it) Advisory board J. Hopfield (USA) A. Lansner (Sweden) G. Sjodin (Sweden) Organising committee A. Bulsari (Finland) H. Liljenstrom (Sweden) D. Tsaptsinos (UK) International program committee G. Baier (Germany) R. Baratti (Italy) S. Cho (Korea) T. Clarkson (UK) J. DeMott (USA) G. Dorffner (Austria) W. Duch (Poland) G. Forsgren (Sweden) A. Gorni (Brazil) J. Heikkonen (Italy) F. Norlund (Sweden) A.
Ruano (Portugal) A. Scherer (Germany) C. Schizas (Cyprus) J. Thibault (Canada) E. Tulunay (Turkey) Electronic mail is not absolutely reliable, so if you have not heard from the conference secretariat after sending your abstract, please contact us again. You should receive an abstract number in a couple of days after the submission. International Conference on Engineering Applications of Neural Networks (EANN '97) Stockholm, Sweden 16-18 June 1997 Registration information Registration form can be picked up from the www (or can be sent to you by e-mail) and can be returned after the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid and therefore will not be stored. For more information, please ask eann97 at kth.se. The conference fee will be SEK 4148 (SEK 3400 excluding VAT) until 28 February, and SEK 4978 (SEK 4080 excluding VAT) after that. The conference fee includes attendance to the conference and the proceedings. If your organisation (university or company or institute) has a VAT registration from a European Union country other than Finland, then your VAT number should be mentioned on the bank transfer as well as the registration form, and VAT need not be added to the conference fee. At least one author of each accepted paper should register by 15 March to ensure that the paper will be included in the proceedings. The correct conference fee amount should be received in the account number 207 799 342, Svenska Handelsbanken International, Stockholm branch. It can be paid by bank transfer (with all expenses paid by the sender) to "EANN Conference". To avoid extra bureaucracy and correction of the amount at the registration desk, make sure that you have taken care of the bank transfer fees. It is essential to mention the name of the participant with the bank transfer. If you need to pay it in another way (bank drafts, Eurocheques, postal order; no credit cards), please contact us at eann97 at kth.se. Invoicing will cost SEK 100. From atick at monaco.rockefeller.edu Tue Jan 21 14:50:31 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Tue, 21 Jan 1997 14:50:31 -0500 Subject: Job Openings in Computer Vision Research Message-ID: <9701211450.ZM26135@monaco.rockefeller.edu> FYI %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% JOB OPENINGS IN Pattern Recognition Research Join a growing team of scientists and software engineers developing real world applications of visual pattern recognition technology (e.g. face recognition systems). The openings are at various levels and will be at Visionics' research facility in New Jersey (about 20 minutes outside New York City). The candidate is expected to have experience in pattern recognition research, numerical analysis, C/C++ programming. Strong computer programming abilities are a must. A research track record in computer vision, artificial neural network, image processing, or scene understanding is a definite plus. If you are interested in an exciting job opportunity and would like a chance for rapid career development and significant financial rewards, please fax your resume to (908) 549 5323, Re: Job Posting, for consideration. Alternatively, you can email it to jobs at faceit.com. Additional information can be found at http://www.faceit.com. Visionics is an equal opportunity employer. Minority and women candidates are encouraged to apply. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -- Joseph J. 
Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From giles at research.nj.nec.com Wed Jan 22 10:49:30 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Wed, 22 Jan 97 10:49:30 EST Subject: TR on Alternative Discrete-time Operators in Neural Networks Message-ID: <9701221549.AA06169@alta> The following TR is now available from the University of Maryland, NEC Research Institute and the Laboratory of Artificial Brain Systems archives. ************************************************************************************ Alternative Discrete-Time Operators and Their Application to Nonlinear Models Andrew D. Back [1], Ah Chung Tsoi [2], Bill G. Horne [3], C. Lee Giles [4,5] [1] Laboratory for Artificial Brain Systems, Frontier Research Program RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako-shi, Saitama 351-01, Japan [2] Faculty of Informatics, University of Wollongong, Northfields Avenue, Wollongong, Australia [3] AADM Consulting, 9 Pace Farm Rd., Califon, NJ 07830 [4] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [5] Inst. for Advanced Computer Studies, U. of Maryland, College Park, MD. 20742 U. of Maryland Technical Report CS-TR-3738 and UMIACS-TR-97-03 ABSTRACT The shift operator, defined as q x(t) = x(t+1), is the basis for almost all discrete-time models. It has been shown, however, that linear models based on the shift operator suffer problems when used to model lightly-damped-low-frequency (LDLF) systems, with poles near (1,0) on the unit circle in the complex plane. This problem occurs under fast sampling conditions. As the sampling rate increases, coefficient sensitivity and round-off noise become a problem as the difference between successive sampled inputs becomes smaller and smaller. The resulting coefficients of the model approach the coefficients obtained in a binomial expansion, regardless of the underlying continuous-time system. This implies that for a given finite wordlength, severe inaccuracies may result. Wordlengths for the coefficients may also need to be made longer to accommodate models which have low frequency characteristics, corresponding to poles in the neighbourhood of (1,0). These problems also arise in neural network models which are composed of linear parts and nonlinear neural activation functions. Various alternative discrete-time operators can be introduced which offer numerical computational advantages over the conventional shift operator. The alternative discrete-time operators have been proposed independently of each other in the fields of digital filtering, adaptive control and neural networks. These include the delta, rho, gamma and bilinear operators. In this paper we first review these operators and examine some of their properties. An analysis of the TDNN and FIR MLP network structures is given which shows their susceptibility to parameter sensitivity problems. Subsequently, it is shown that models may be formulated using alternative discrete-time operators which have low sensitivity properties. Consideration is given to the problem of finding parameters for stable alternative discrete-time operators. A learning algorithm which adapts the alternative discrete-time operator parameters on-line is presented for MLP neural network models based on alternative discrete-time operators. It is shown that neural network models which use these alternative discrete-time operators perform better than those using the shift operator alone.
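[Editorial aside: to make the fast-sampling issue concrete, here is a small Python sketch, not taken from the TR, of the same first-order system realized with the shift operator and with the delta operator, delta = (q - 1)/Delta for sampling interval Delta. In shift form the feedback coefficient crowds toward 1 as Delta shrinks, while the delta-form coefficients stay O(1); the names a, b, a_d, b_d are illustrative. The rho, gamma and bilinear operators treated in the TR are not reproduced here.]

import numpy as np

def shift_form(x, a, b):
    # y(t+1) = a*y(t) + b*x(t); under fast sampling a -> 1 and b -> 0,
    # so small coefficient (wordlength) errors are strongly magnified
    y = np.zeros(len(x))
    for t in range(len(x) - 1):
        y[t + 1] = a * y[t] + b * x[t]
    return y

def delta_form(x, a_d, b_d, Delta):
    # same system via the delta operator: delta y = a_d*y + b_d*x, i.e.
    # y(t+1) = y(t) + Delta*(a_d*y(t) + b_d*x(t)); a_d, b_d stay O(1)
    y = np.zeros(len(x))
    for t in range(len(x) - 1):
        y[t + 1] = y[t] + Delta * (a_d * y[t] + b_d * x[t])
    return y

# The two parameterizations describe the same filter, related by
# a = 1 + Delta*a_d and b = Delta*b_d.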
Keywords: Shift operator, alternative discrete-time operator, gamma operator, rho operator, low sensitivity, time delay neural network, high speed sampling, finite wordlength, LDLF, MLP, TDNN. ___________________________________________________________________________________ http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html http://zoo.riken.go.jp/abs1/back/Welcome.html -- C. Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From michael at salk.edu Wed Jan 22 21:19:02 1997 From: michael at salk.edu (michael@salk.edu) Date: Wed, 22 Jan 1997 18:19:02 -0800 (PST) Subject: NIPS*96 preprints Message-ID: <199701230219.SAA03230@gabor.salk.edu> Connectionists - This is an announcement of several NIPS*96 preprints from the Computational Neurobiology Lab at the Salk Institute in San Diego. These will appear in "Advances in Neural Information Processing Systems 9" (available May 1997), edited by Mozer, M.C., Jordan, M.I., and Petsche, T., and published by MIT Press of Cambridge, MA. We enclose the abstracts and ftp addresses of these papers. Full citations are at the bottom of each abstract. Comments and feedback are welcome. - Marni Stewart Bartlett, Tony Bell, Michael Gray, Mike Lewicki, Terry Sejnowski, Magnus Stensmo, Akaysha Tang ************************************************************** VIEWPOINT INVARIANT FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS AND ATTRACTOR NETWORKS Bartlett, M. Stewart & Sejnowski, T.J. EDGES ARE THE `INDEPENDENT COMPONENTS' OF NATURAL SCENES Bell A.J. & Sejnowski T.J. DYNAMIC FEATURES FOR VISUAL SPEECHREADING: A SYSTEMATIC COMPARISON Gray, M.S., Movellan, J.R., & Sejnowski, T.J. SELECTIVE INTEGRATION: A MODEL FOR DISPARITY ESTIMATION Gray, M.S., Pouget, A., Zemel, R., Nowlan, S., & Sejnowski, T.J. BLIND SEPARATION OF DELAYED AND CONVOLVED SOURCES Lee T-W., Bell A.J. & Lambert R. BAYESIAN UNSUPERVISED LEARNING OF HIGHER ORDER STRUCTURE Lewicki, M.S. & Sejnowski, T.J. LEARNING DECISION THEORETIC UTILITIES THROUGH REINFORCEMENT LEARNING Stensmo, M. & Sejnowski, T.J. CHOLINERGIC MODULATION PRESERVES SPIKE TIMING UNDER PHYSIOLOGICALLY REALISTIC FLUCTUATING INPUT Tang, A.C., Bartels, A.M., & Sejnowski, T.J. ************************************************************** VIEWPOINT INVARIANT FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS AND ATTRACTOR NETWORKS Bartlett, M. Stewart & Sejnowski, T.J. We have explored two approaches to recognizing faces across changes in pose. First, we developed a representation of face images based on independent component analysis (ICA) and compared it to a principal component analysis (PCA) representation for face recognition. The ICA basis vectors for this data set were more spatially local than the PCA basis vectors and the ICA representation had greater invariance to changes in pose. Second, we present a model for the development of viewpoint invariant responses to faces from visual experience in a biological system. The temporal continuity of natural visual experience was incorporated into an attractor network model by Hebbian learning following a lowpass temporal filter on unit activities. When combined with the temporal filter, a basic Hebbian update rule became a generalization of Griniasty et al. (1993), which associates temporally proximal input patterns into basins of attraction. 
The system acquired representations of faces that were largely independent of pose. ftp://ftp.cnl.salk.edu/pub/marni/nips96_bartlett.ps http://www.cnl.salk.edu/~marni/publications.html Bartlett, M. Stewart & Sejnowski, T. J. (in press). Viewpoint invariant face recognition using independent component analysis and attractor networks. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** EDGES ARE THE `INDEPENDENT COMPONENTS' OF NATURAL SCENES Bell A.J. & Sejnowski T.J. Field (1994) has suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and Barlow (1989) has reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features. We show here that non-linear `infomax', when applied to an ensemble of natural scenes, produces sets of visual filters that are localised and oriented. Some of these filters are Gabor-like and resemble those produced by the sparseness-maximisation network of Olshausen & Field (1996). In addition, the outputs of these filters are as independent as possible, since the infomax network is able to perform Independent Components Analysis (ICA). We compare the resulting ICA filters and their associated basis functions, with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). The ICA filters have more sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that these neurons form an information-theoretic co-ordinate system for images. ftp://ftp.cnl.salk.edu/pub/tony/edge.ps.Z Bell A.J. & Sejnowski T.J. (In press). Edges are the `Independent Components' of Natural Scenes. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** DYNAMIC FEATURES FOR VISUAL SPEECHREADING: A SYSTEMATIC COMPARISON Gray, M. S., Movellan, J. R., & Sejnowski, T. J. Humans use visual as well as auditory speech signals to recognize spoken words. A variety of systems have been investigated for performing this task. The main purpose of this research was to systematically compare the performance of a range of dynamic visual features on a speechreading task. We have found that normalization of images to eliminate variation due to translation, scale, and planar rotation yielded substantial improvements in generalization performance regardless of the visual representation used. In addition, the dynamic information in the difference between successive frames yielded better performance than optical-flow based approaches, and compression by local low-pass filtering worked surprisingly better than global principal components analysis (PCA). These results are examined and possible explanations are explored. ftp://ftp.cnl.salk.edu/pub/michael/nips_lips.ps ftp://ftp.cnl.salk.edu/pub/michael/nips_lips-abs.text Gray, M. S., Movellan, J. R., & Sejnowski, T. J. (In press). Dynamic features for visual speechreading: A systematic comparison. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A.
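[Editorial aside: for readers curious about the algorithm behind the Bell & Sejnowski abstract above, the core of natural-gradient infomax ICA with a logistic nonlinearity is a one-line weight update. The Python sketch below is an illustration under simplifying assumptions, not the authors' code: the data are assumed already centered and sphered, and the learning rate, batch size and iteration count are arbitrary placeholders.]

import numpy as np

def infomax_ica(X, lr=0.01, n_iter=500, batch=64, seed=0):
    # X: (n_channels, n_samples), assumed zero-mean and sphered
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = np.eye(n)
    for _ in range(n_iter):
        idx = rng.choice(X.shape[1], size=batch, replace=False)
        u = W @ X[:, idx]
        y = 1.0 / (1.0 + np.exp(-u))      # logistic squashing function
        # natural-gradient infomax update: dW ~ (I + (1-2y)u^T) W
        W += lr * (np.eye(n) + (1.0 - 2.0 * y) @ u.T / batch) @ W
    return W   # rows of W are the learned filters

[Applied to whitened patches of natural scenes, filters learned this way come out localised and oriented, which is the central empirical claim of the paper.]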
************************************************************** SELECTIVE INTEGRATION: A MODEL FOR DISPARITY ESTIMATION Gray, M. S., Pouget, A., Zemel, R., Nowlan, S., & Sejnowski, T. J. Local disparity information is often sparse and noisy, which creates two conflicting demands when estimating disparity in an image region: the need to spatially average to get an accurate estimate, and the problem of not averaging over discontinuities. We have developed a network model of disparity estimation based on disparity-selective neurons, such as those found in the early stages of processing in visual cortex. The model can accurately estimate multiple disparities in a region, which may be caused by transparency or occlusion, in real images and random-dot stereograms. The use of a selection mechanism to selectively integrate reliable local disparity estimates results in superior performance compared to standard back-propagation and cross-correlation approaches. In addition, the representations learned with this selection mechanism are consistent with recent neurophysiological results of von der Heydt, Zhou, Friedman, and Poggio (1995) for cells in cortical visual area V2. Combining multi-scale biologically-plausible image processing with the power of the mixture-of-experts learning algorithm represents a promising approach that yields both high performance and new insights into visual system function. ftp://ftp.cnl.salk.edu/pub/michael/nips_stereo.ps ftp://ftp.cnl.salk.edu/pub/michael/nips_stereo-abs.text Gray, M. S., Pouget, A., Zemel, R., Nowlan, S., & Sejnowski, T. J. (In press). Selective Integration: A Model for Disparity Estimation. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** BLIND SEPARATION OF DELAYED AND CONVOLVED SOURCES Lee T-W., Bell A.J. & Lambert R. We address the difficult problem of separating multiple speakers with multiple microphones in a real room. We combine the work of Torkkola and Amari, Cichocki and Yang, to give Natural Gradient information maximisation rules for recurrent (IIR) networks, blindly adjusting delays, separating and deconvolving mixed signals. While they work well on simulated data, these rules fail in real rooms which usually involve non-minimum phase transfer functions, not-invertible using stable IIR filters. An approach that sidesteps this problem is to perform infomax on a feedforward architecture in the frequency domain (Lambert 1996). We demonstrate real-room separation of two natural signals using this approach. ftp://ftp.cnl.salk.edu/pub/tony/twfinal.ps.Z Lee T-W., Bell A.J. & Lambert R. (In press). Blind separation of delayed and convolved sources. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** BAYESIAN UNSUPERVISED LEARNING OF HIGHER ORDER STRUCTURE Lewicki, M. S. & Sejnowski, T. J. Multilayer architectures such as those used in Bayesian belief networks and Helmholtz machines provide a powerful framework for representing and learning higher order statistical relations among inputs. Because exact probability calculations with these models are often intractable, there is much interest in finding approximate algorithms. We present an algorithm that efficiently discovers higher order structure using EM and Gibbs sampling. 
The model can be interpreted as a stochastic recurrent network in which ambiguity in lower-level states is resolved through feedback from higher levels. We demonstrate the performance of the algorithm on benchmark problems. ftp://ftp.cnl.salk.edu/pub/lewicki/nips96.ps.Z ftp://ftp.cnl.salk.edu/pub/lewicki/nips96-abs.text Lewicki, M.S. and Sejnowski, T.J. (In press). Bayesian unsupervised learning of higher order structure. In Mozer, M.C., Jordan, M.I., and Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** LEARNING DECISION THEORETIC UTILITIES THROUGH REINFORCEMENT LEARNING Stensmo, M. & Sejnowski, T. J. Probability models can be used to predict outcomes and compensate for missing data, but even a perfect model cannot be used to make decisions unless the values of the outcomes, or preferences between them, are also provided. This arises in many real-world problems, such as medical diagnosis, where the cost of the test as well as the expected improvement in the outcome must be considered. Relatively little work has been done on learning the utilities of outcomes for optimal decision making. In this paper, we show how temporal-difference (TD(lambda)) reinforcement learning can be used to determine decision theoretic utilities within the context of a mixture model and apply this new approach to a problem in medical diagnosis. TD(lambda) learning reduces the number of tests that have to be done to achieve the same level of performance as the probability model alone, which results in significant cost savings and increased efficiency. http://www.cs.berkeley.edu/~magnus/papers/nips96.ps.Z Stensmo, M. and Sejnowski, T. J. (in press). Learning decision theoretic utilities through reinforcement learning. In: Mozer, M.C., Jordan, M.I. and Petsche, T., (Eds.), Advances in Neural Information Processing Systems, Vol. 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** CHOLINERGIC MODULATION PRESERVES SPIKE TIMING UNDER PHYSIOLOGICALLY REALISTIC FLUCTUATING INPUT Tang, A. C., Bartels, A. M., & Sejnowski, T. J. Neuromodulation can change not only the mean firing rate of a neuron, but also its pattern of firing. Therefore, a reliable neural coding scheme, whether a rate coding or a spike time based coding, must be robust in a dynamic neuromodulatory environment. The common observation that cholinergic modulation leads to a reduction in spike frequency adaptation implies a modification of spike timing, which would make a neural code based on precise spike timing difficult to maintain. In this paper, the effects of cholinergic modulation were studied to test the hypothesis that precise spike timing can serve as a reliable neural code. Using the whole cell patch-clamp technique in rat neocortical slice preparation and compartmental modeling techniques, we show that cholinergic modulation, surprisingly, preserved spike timing in response to fluctuating input that resembles in vivo conditions. This result suggests that in vivo spike timing may be much more resistant to changes in neuromodulator concentrations than previous physiological studies have implied. ftp://ftp.cnl.salk.edu/pub/tang/ach_timing.ps.gz ftp://ftp.cnl.salk.edu/pub/tang/ach_timing_abs.txt Akaysha C. Tang, Andreas M. Bartels, and Terrence J. Sejnowski. (In press). Cholinergic Modulation Preserves Spike Timing Under Physiologically Realistic Fluctuating Input.
In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** From back at zoo.riken.go.jp Thu Jan 23 10:59:24 1997 From: back at zoo.riken.go.jp (Andrew Back) Date: Fri, 24 Jan 1997 00:59:24 +0900 Subject: URL Correction: TR on Alternative Discrete-time Operators in Neural Networks Message-ID: <9701231610.AA05123@tora.riken.go.jp> Please note the following URL correction for the previously announced TR: Alternative Discrete-Time Operators and Their Application to Nonlinear Models Andrew D. Back, Ah Chung Tsoi, Bill G. Horne, C. Lee Giles U. of Maryland Technical Report CS-TR-3738 and UMIACS-TR-97-03 Instead of: http://zoo.riken.go.jp/abs1/back/Welcome.html please use: http://www.bip.riken.go.jp/absl/back Our apologies for any inconvenience caused. -- Andrew Back Brain Information Processing Group The Institute of Physical and Chemical Research (RIKEN), Japan. WWW: http://www.bip.riken.go.jp/absl/back From ingber at ingber.com Thu Jan 23 14:58:40 1997 From: ingber at ingber.com (Lester Ingber) Date: Thu, 23 Jan 1997 14:58:40 -0500 Subject: Papers: Canonical momenta indicators ... Message-ID: <199701231958.LAA03723@alumnae.caltech.edu> Below are URLs and abstracts for 4 papers utilizing canonical momenta indicators (CMI), in analyses of neocortical EEG, financial markets, combat simulation, and data mining/knowledge discovery. Below these are instructions for retrieval of files. As noted by a Physical Review E referee for the EEG paper, "... the paper ... has potential value for a wide variety of systems, especially for very complex systems." Its filename [and size] is smni97_cmi.ps.Z [170K] %A L. Ingber %T Statistical mechanics of neocortical interactions: Canonical momenta indicators of electroencephalography %J Physical Review E %P (to be published) %D 1997 %O URL http://www.ingber.com/smni97_cmi.ps.Z ABSTRACT: A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful to yield insights at the single neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The necessity of including nonlinear and stochastic structures in this development has been stressed. Sets of EEG and evoked potential data, collected to investigate genetic predispositions to alcoholism and to extract brain "signatures" of short-term memory, were fit. Adaptive Simulated Annealing (ASA), a global optimization algorithm, was used to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta indicators (CMI) are thereby derived for individuals' EEG data. The CMI give better signal recognition than the raw data, and can be used to advantage as correlates of behavioral states. These results give strong quantitative support for an accurate intuitive picture, portraying neocortical interactions as having common algebraic or physics mechanisms that scale across quite disparate spatial scales and functional or behavioral phenomena, i.e., describing interactions among neurons, columns of neurons, and regional masses of neurons. The markets file is the final version of a preprint posted in March '96.
markets96_momenta.ps.Z [45K]
%A L. Ingber
%T Canonical momenta indicators of financial markets and neocortical EEG
%B International Conference on Neural Information Processing (ICONIP'96)
%I Springer
%C New York
%P 777-784
%D 1996
%O Invited paper to the 1996 International Conference on Neural Information Processing (ICONIP'96), Hong Kong, 24-27 September 1996. URL http://www.ingber.com/markets96_momenta.ps.Z

ABSTRACT: A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, demonstrating that they can profit from the SMFM model and illustrating that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. This approach to complex systems emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management.

combat97_cmi.ps.Z [55K]
%A M. Bowman
%A L. Ingber
%T Canonical momenta of nonlinear combat
%B Proceedings of the 1997 Simulation Multi-Conference, 6-10 April 1997, Atlanta, GA
%I Society for Computer Simulation
%C San Diego, CA
%P (to be published)
%D 1997
%O URL http://www.ingber.com/combat97_cmi.ps.Z

ABSTRACT: The context of nonlinear combat calls for more sophisticated measures of effectiveness. We present a set of tools that can be used as such supplemental indicators, based on the stochastic nonlinear multivariate modeling used to benchmark Janus simulation to exercise data from the U.S. Army National Training Center (NTC). As a prototype study, a strong global optimization tool, adaptive simulated annealing (ASA), is used to explicitly fit Janus data, deriving coefficients of relative measures of effectiveness and developing a sound, intuitive graphical decision aid, canonical momenta indicators (CMI), faithful to the sophisticated algebraic model. We argue that these tools will become increasingly important to aid simulation studies of the importance of maneuver in combat in the 21st century.

path97_datamining.ps.Z [90K]
%A L. Ingber
%T Data mining and knowledge discovery via statistical mechanics in nonlinear stochastic systems
%P (submitted)
%D 1997
%O URL http://www.ingber.com/path97_datamining.ps.Z

ABSTRACT: A modern calculus of multivariate nonlinear multiplicative Gaussian-Markovian systems provides models of many complex systems faithful to their nature, e.g., by not prematurely applying quasi-linear approximations for the sole purpose of easing analysis. To handle these complex algebraic constructs, sophisticated numerical tools have been developed, e.g., methods of adaptive simulated annealing (ASA) global optimization and of path integration (PATHINT). In-depth application to three quite different complex systems has yielded some insights into the benefits to be obtained by application of these algorithms and tools, in statistical mechanical descriptions of neocortex (electroencephalography), financial markets (interest-rate and trading models), and combat analysis (baselining simulations to exercise data).
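For readers unfamiliar with the optimization technique that recurs throughout these abstracts, the sketch below shows plain simulated annealing in Python: propose a random perturbation, always accept improvements, and accept uphill moves with a probability that shrinks as the temperature falls. This is only the textbook algorithm, not Ingber's ASA (which, among other refinements, anneals a separate temperature per parameter and supports reannealing); the function names, cooling schedule, and test function below are illustrative choices, not anything from the papers.

# Minimal, generic simulated annealing for multivariate minimization.
# NOT Ingber's ASA; only the basic Metropolis accept/reject idea.
import math
import random

def anneal(cost, x0, n_steps=20000, t0=1.0, scale=0.1, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    for k in range(n_steps):
        t = t0 / (1.0 + k)                          # simple cooling schedule
        cand = [xi + rng.gauss(0.0, scale) for xi in x]
        fc = cost(cand)
        # Always accept improvements; accept uphill moves with probability
        # exp(-delta/T), which shrinks as the temperature falls.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

if __name__ == "__main__":
    rosenbrock = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
    print(anneal(rosenbrock, [-1.0, 1.0]))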
The latest Adaptive Simulated Annealing (ASA) optimization code may be retrieved at no charge from this archive in several formats:

http://www.ingber.com/ASA-shar [1350K]
http://www.ingber.com/ASA-shar.Z [500K]
http://www.ingber.com/ASA.tar.Z [450K]
http://www.ingber.com/ASA.tar.gz [320K]
http://www.ingber.com/ASA.zip [330K]

The archive can be accessed via WWW path
http://www.ingber.com/
http://www.alumni.caltech.edu/~ingber/
where the last address is a mirror homepage for the full archive.

Code and reprints can be retrieved via anonymous ftp from ftp.ingber.com. Interactively [brackets signify machine prompts]:

[your_machine%] ftp ftp.ingber.com
[Name (...):] anonymous
[Password:] your_e-mail_address
[ftp>] binary
[ftp>] ls
[ftp>] get file_of_interest
[ftp>] quit

If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at ftpmail.ramona.vix.com (was ftpmail at decwrl.dec.com), and send only the word "help" in the body of the message.

Limited help with queries on my codes and papers is available only by electronic mail correspondence. Sorry, I cannot mail out hardcopies of code or papers.

/* RESEARCH ingber at ingber.com *
 * INGBER ftp://ftp.ingber.com *
 * LESTER http://www.ingber.com/ *
 * Prof. Lester Ingber __ PO Box 857 __ McLean, VA 22101-0857 __ USA */

From regier at tidbit Thu Jan 23 18:09:07 1997
From: regier at tidbit (Terry Regier)
Date: Thu, 23 Jan 1997 23:09:07 +0000 (GMT)
Subject: FEB 15 DEADLINE for Computational Psycholinguistics conference
Message-ID: <199701232309.RAA03913@tidbit.>

From atick at monaco.rockefeller.edu Fri Jan 24 11:56:11 1997
From: atick at monaco.rockefeller.edu (Joseph Atick)
Date: Fri, 24 Jan 1997 11:56:11 -0500
Subject: Network: CNS, Table of Contents, Vol. 8, 1,97
Message-ID: <9701241156.ZM28094@monaco.rockefeller.edu>

Network: Computation in Neural Systems
Table of Contents, Volume 8, 1, 1997

As you may know, the journal has adopted incremental publishing in its online edition, which means a paper is published as soon as it is accepted and processed. Every 3 months we finalize an issue for archival reasons and issue a table of contents. The online journal and information can be found at http://www.iop.org/Journals/ne (limited access for non-subscribers; access to full-length articles for institutional subscribers).

Table of contents of the latest issue:

%%%%%%%%%%%%%

Editorial: Thank you to all our referees

TOPICAL REVIEW
R1 On the use of computation in modelling behaviour F van der Velde

PAPERS
1 Nitric oxide: what can it compute? B Krekelberg and J G Taylor
17 Analysis of ocular dominance pattern formation in a high-dimensional self-organizing-map model H-U Bauer, D Brockmann and T Geisel
35 Capacity and information efficiency of the associative net B Graham and D Willshaw
55 A neural net model of the adaptation of binocular vertical eye alignment J W McCandless and C M Schor
71 Quality and efficiency of retrieval for Willshaw-like autoassociative networks: III. Willshaw--Potts model A Kartashov, A Frolov, A Goltsev and R Folk
87 Stereo vision using a microcanonical mean field annealing neural network Jeng-Sheng Huang and Hsiao-Chung Liu
104 Abstracts of Topical reviews published during 1996:
Mutual information maximization: models of cortical self-organization S. Becker
The development of topography in the visual cortex: a review of models N. Swindale
Auditory cortical representation of complex acoustic spectra as inferred from the ripple analysis method S. Shamma
Human colour perception and its adaptation M. Webster

%%%%%%%%%%

Coming up in the May issue:
(1) Metric-space analysis of spike trains: theory, algorithm and application Jonathan Victor and Keith Purpura
(2) A neural model of the stroboscopic alternative motion A Bartschl and J L van Hemmen
and much much more...

%%%%%%%%%%%%%%

Network:CNS would like to welcome its new editorial board:
Larry Abbott, Brandeis
Peter Dayan, MIT
Peter Hancock, University of Stirling
David Heeger, Stanford University
Leo van Hemmen, University of Munich
Tony Movshon, NYU
Markus Meister, Harvard
Dan Ruderman, Salk Institute
Jonathan Victor, Cornell University
David Willshaw, University of Edinburgh

As always, we are happy to hear your suggestions and receive your submissions. We hope to continue to make Network:CNS an indispensable research tool for the computational and neuroscience community.

Best regards,
joseph atick
Editor-in-chief

--
Joseph J. Atick
Rockefeller University
1230 York Avenue
New York, NY 10021
Tel: 212 327 7421
Fax: 212 327 7422

From lane at katrix.com Fri Jan 24 14:48:39 1997
From: lane at katrix.com (Stephen Lane)
Date: Fri, 24 Jan 1997 14:48:39 -0500
Subject: Job Openings in Intelligent Agent R&D
Message-ID: <01BC0A05.B08EE400@engarde.katrix.com>

JOB OPENINGS IN INTELLIGENT AGENT RESEARCH AND DEVELOPMENT

OVERVIEW

Katrix Inc. has developed technology that enables intelligent agents to be embodied as fully articulated three-dimensional human and animal-like interactive characters in computer games, virtual reality simulations and distributed interactive network applications. Intelligent agents created with Katrix Technology think, learn and act, and as a result can adapt their behavior and movement in real time based upon interactions with human users and the 3D virtual environment. Katrix currently is developing a suite of products that support the creation and control of such intelligent agents for use as fully interactive Internet Avatars, Digital Actors and Virtual Creatures. These products include point-and-click behavioral animation authoring tools, libraries of off-the-shelf interactive characters and intelligent behaviors, as well as a totally new single-hand computer input device particularly well suited for virtual reality and gaming applications.

STAFF POSITIONS AVAILABLE

The positions available involve core technology development in the areas of intelligent control, robotics, behavioral animation, neural networks, knowledge-based systems, distributed interactive simulation and visual programming language design. Prospective candidates should have a strong math background and be self-motivated. Required programming skills include proficiency in C and C++ on PC and/or Unix platforms. A research track record in controls and dynamics, robotics or neural networks is a definite plus. Familiarity with design and implementation of graphical user interfaces, 3D computer animation, interactive simulation or 3D games also is desirable. Staff positions are available at various levels at the Katrix facility in Princeton, New Jersey (about 60 minutes from both New York City and Philadelphia).

CONTACT

If you are interested in an exciting career opportunity with a rapidly growing company in an industry poised for explosive growth, please send your resume immediately to: Stephen H.
Lane, President FAX: (609) 921-7547 Email: lane at katrix.com Katrix Inc. 31 Airpark Road Princeton, NJ 08540 (609) 921-7544 From rao at cs.rochester.edu Sat Jan 25 00:36:49 1997 From: rao at cs.rochester.edu (Rajesh Rao) Date: Sat, 25 Jan 1997 00:36:49 -0500 Subject: Technical Report: Visual recognition and robust Kalman filters Message-ID: <199701250536.AAA04030@porcupine.cs.rochester.edu> The following paper on appearance-based visual recognition and robust Kalman filtering is now available for retrieval via ftp. Comments and suggestions welcome (This message has been cross-posted - my apologies to those who received it more than once). -- Rajesh Rao Internet: rao at cs.rochester.edu Dept. of Computer Science VOX: (716) 275-2527 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ =========================================================================== Robust Kalman Filters for Prediction, Recognition, and Learning Rajesh P.N. Rao Department of Computer Science University of Rochester Rochester, NY 14627-0226 Technical Report 645 December, 1996 Using results from the field of robust statistics, we derive a class of Kalman filters that are robust to structured and unstructured noise in the input data stream. Each filter from this class maintains robust optimal estimates of the input process's hidden state by allowing the measurement covariance matrix to be a non-linear function of the prediction errors. This endows the filter with the ability to reject outliers in the input stream. Simultaneously, the filter also learns an internal model of input dynamics by adapting its measurement and state transition matrices using two additional Kalman filter-based adaptation rules. We present experimental results demonstrating the efficacy of such filters in mediating appearance-based segmentation and recognition of objects and image sequences in the presence of varying degrees of occlusion, clutter, and noise. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/robust.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/robust.ps.Z 15 pages; 296K compressed, 1015K uncompressed ------------------------------------------------------------------------- Anonymous ftp instructions: >ftp ftp.cs.rochester.edu Connected to anon.cs.rochester.edu. 220 anon.cs.rochester.edu FTP server (Version wu-2.4(3)) ready. Name: [type 'anonymous' here] 331 Guest login ok, send your complete e-mail address as password. Password: [type your e-mail address here] ftp> cd /pub/u/rao/papers/ ftp> get robust.ps ftp> bye From wray at Ultimode.com Sat Jan 25 13:50:52 1997 From: wray at Ultimode.com (Wray Buntine) Date: Sat, 25 Jan 1997 10:50:52 -0800 Subject: PhD/Masters Research Assistantship Message-ID: <199701251850.KAA12584@Ultimode.com> PhD/Masters Research Assistantships Field: probabilistic algorithms, data analysis/mining and optimization for CAD Place: Electrical Engineering and Computer Science University of California, Berkeley The CAD group in the EECS Dept. at UC Berkeley is offering research support for its Masters and Doctoral program. Research areas include but are not limited to the use of data mining/analysis/engineering techniques in CAD or optimization, and probabilistic methods for optimization or specialized compilation. The Electronic Design Technology (EDT) field is concerned with computer automated or computer-assisted design of complex electronic systems. 
With current hardware capabilities advancing rapidly, a key bottleneck is the development of advanced algorithms for optimization and simulation of partial, abstract or completed designs. Our task is to design, code and experiment with new algorithms, methodologies, and software technologies for alleviating this bottleneck. The task can include the use of data mining/analysis to understand the nature of the optimization task, or to develop adaptive optimization methods. The ideal candidate should have a background in computer science, electrical engineering or related disciplines, should be an accomplished or developing programmer, and should have an interest in the theory and mathematical techniques used in optimization, data analysis, or probabilistic methods.

Candidates who wish to apply are invited to respond with a copy of their CV to:

Professor R. Newton URL: http://www.eecs.berkeley.edu/~newton
Dr. Wray Buntine URL: http://www.eecs.berkeley.edu/~wray
Dr. Andrew Mayer URL: http://www.eecs.berkeley.edu/~mayer

Dept. of Electrical Engineering and Computer Sciences
520 Cory Hall
University of California at Berkeley
Berkeley, CA, 94720

The CAD Group URL: http://www-cad.eecs.berkeley.edu
EECS, UC Berkeley URL: http://www.eecs.berkeley.edu

From baluja at jprc.com Mon Jan 27 17:30:42 1997
From: baluja at jprc.com (Shumeet Baluja)
Date: Mon, 27 Jan 1997 17:30:42 -0500
Subject: Paper: Using Optimal Dependency-Trees for Combinatorial Optimization
Message-ID: <199701272230.RAA15646@india.jprc.com>

The following paper is available from: http://www.cs.cmu.edu/~baluja/techreps.html (CMU-CS-97-107)

Title: Using Optimal Dependency-Trees for Combinatorial Optimization: Learning the Structure of the Search Space

Abstract: Many combinatorial optimization algorithms have no mechanism to capture inter-parameter dependencies. However, modeling such dependencies may allow an algorithm to concentrate its sampling more effectively on regions of the search space which have appeared promising in the past. We present an algorithm which incrementally learns second-order probability distributions from good solutions seen so far, uses these statistics to generate optimal (in terms of maximum likelihood) dependency trees to model these distributions, and then stochastically generates new candidate solutions from these trees. We test this algorithm on a variety of optimization problems. Our results indicate superior performance over other tested algorithms that either (1) do not explicitly use these dependencies, or (2) use these dependencies to generate a more restricted class of dependency graphs.

By:
Shumeet Baluja
Justsystem Pittsburgh Research Center, 4616 Henry St., Pittsburgh, PA 15213
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Scott Davies
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

This work is an extension of the work presented in two papers at NIPS 1996:

Baluja, S., "Genetic Algorithms and Explicit Search Statistics," to appear in Advances in Neural Information Processing Systems 1996.

De Bonet, J., Isbell, C., and Viola, P. (1997), "MIMIC: Finding Optima by Estimating Probability Densities," to appear in Advances in Neural Information Processing Systems 1996.

As always, comments and suggestions are most welcome.
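As a rough illustration of the approach the abstract describes -- estimate pairwise statistics from a pool of good solutions, grow a maximum-mutual-information dependency tree, and sample new candidates from it -- here is a toy Python version. It is not the authors' code: the greedy Prim-style tree construction, the Laplace smoothing, and the onemax test objective are all illustrative choices of mine.

import math
import random

def mutual_info(pool, i, j, eps=1e-9):
    # Pairwise mutual information between bits i and j, estimated from the pool.
    n = float(len(pool))
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pab = sum(1 for s in pool if s[i] == a and s[j] == b) / n + eps
            pa = sum(1 for s in pool if s[i] == a) / n + eps
            pb = sum(1 for s in pool if s[j] == b) / n + eps
            mi += pab * math.log(pab / (pa * pb))
    return mi

def build_tree(pool, n_bits):
    # Grow a maximum-weight spanning tree over bit positions (Prim-style),
    # weighting candidate edges by estimated mutual information.
    parent = {0: None}
    while len(parent) < n_bits:
        i, j = max(((u, v) for u in parent for v in range(n_bits)
                    if v not in parent),
                   key=lambda e: mutual_info(pool, e[0], e[1]))
        parent[j] = i
    return parent

def sample_one(pool, parent, rng, eps=0.5):
    # Ancestral sampling: dicts preserve insertion order, and every node was
    # inserted after its parent, so parents are always sampled first.
    bits = {}
    for v, p in parent.items():
        if p is None:
            p1 = (sum(s[v] for s in pool) + eps) / (len(pool) + 2 * eps)
        else:
            cond = [s for s in pool if s[p] == bits[p]]
            p1 = (sum(s[v] for s in cond) + eps) / (len(cond) + 2 * eps)
        bits[v] = 1 if rng.random() < p1 else 0
    return [bits[v] for v in range(len(parent))]

if __name__ == "__main__":
    rng = random.Random(0)
    n_bits, fitness = 16, sum          # onemax: count of 1-bits, as a toy objective
    pool = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(60)]
    for gen in range(15):
        tree = build_tree(pool, n_bits)
        cands = [sample_one(pool, tree, rng) for _ in range(60)]
        pool = sorted(pool + cands, key=fitness, reverse=True)[:60]
        print("generation", gen, "best fitness", fitness(pool[0]))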
From ohira at csl.sony.co.jp Wed Jan 29 00:32:43 1997
From: ohira at csl.sony.co.jp (Toru Ohira)
Date: Wed, 29 Jan 97 14:32:43 +0900
Subject: TR on Systems with noise and delay
Message-ID: <9701290532.AA07421@ohira.csl.sony.co.jp>

The following TRs are available from Sony Computer Science Lab: http://www.csl.sony.co.jp/person/ohira/drw.html

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

(1) Oscillatory Correlation of Delayed Random Walks
Toru Ohira (Sony CSL)
SCSL-TR-96-014
(Tentatively scheduled to appear as a Rapid Communication in Phys. Rev. E, Feb. 1997; also registered at the Los Alamos National Lab Archive, cond-mat/9701066)

(2) Delay Estimation from Noisy Time Series
Toru Ohira (Sony CSL) and Ryusuke Sawatari (Computer Science Dept., Keio Univ.)
SCSL-TR-96-017
(Tentatively scheduled to appear as a Rapid Communication in Phys. Rev. E, March 1997; also registered at the Los Alamos National Lab Archive, cond-mat/9701193)

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

These papers aim to analytically capture the behavior of systems with noise and delay, which include neural networks. As they stand, these works are more formal theory, and we welcome suggestions and comments toward applications to neural networks. (For a preliminary application to neuro-muscular control associated with human posture control, please see SCSL-TR-94-026, "Delayed Random Walks" (Ohira and Milton), Phys. Rev. E, vol. 52, pp. 3277, 1995.)

Sincerely,
Toru Ohira
ohira at csl.sony.co.jp
Sony Computer Science Lab.

++++++++++++++++++Abstracts++++++++++++++++

Oscillatory Correlation of Delayed Random Walks (Ohira)

We investigate analytically and numerically the statistical properties of a random walk model with delayed transition probability dependence (a delayed random walk). The characteristic feature of such a model is the oscillatory behavior of its correlation function. We investigate a model whose transient and stationary oscillatory behavior is analytically tractable. The correspondence of the model with a Langevin equation with delay is also considered.

Delay Estimation From Noisy Time Series (Ohira and Sawatari)

We propose a method to estimate a delay from a time series, taking advantage of the analysis of random walks with delay. This method is applicable to a time series coming out of a system which is, or can be approximated as, a linear feedback system with delay and noise. We successfully test the method with a time series generated by a discrete Langevin equation with delay.

From gordon at AIC.NRL.Navy.Mil Wed Jan 29 14:41:16 1997
From: gordon at AIC.NRL.Navy.Mil (gordon@AIC.NRL.Navy.Mil)
Date: Wed, 29 Jan 97 14:41:16 EST
Subject: ICML-97 workshops CFPs
Message-ID: <9701291941.AA13925@sun14.aic.nrl.navy.mil>

================================================================

CALL FOR PAPERS

REINFORCEMENT LEARNING: TO MODEL OR NOT TO MODEL, THAT IS THE QUESTION

Workshop at the Fourteenth International Conference on Machine Learning (ICML-97)
Nashville, Tennessee
July 12, 1997

Recently there has been some disagreement in the reinforcement learning community about whether finding a good control policy is helped or hindered by learning a model of the system to be controlled.
Recent reinforcement learning successes (Tesauro's TD-gammon, Crites' elevator control, Zhang and Dietterich's space-shuttle scheduling) have all been in domains where a human-specified model of the target system was known in advance, and have all made substantial use of the model. On the other hand, there have been real robot systems which learned tasks either by model-free methods or via learned models. The debate has been exacerbated by the lack of fully satisfactory algorithms on either side for comparison.

Topics for discussion include (but are not limited to):

o Case studies in which a learned model either contributed to or detracted from the solution of a control problem. In particular, does one method have better data efficiency? Time efficiency? Space requirements? Final control performance? Scaling behavior?

o Computational techniques for finding a good policy, given a model from a particular class -- that is, what are good planning algorithms for each class of models?

o Approximation results of the form: if the real system is in class A, and we approximate it by a model from class B, we are guaranteed to get "good" results as long as we have "sufficient" data.

o Equivalences between techniques of the two sorts: for example, if we learn a policy of type A by direct method B, it is equivalent to learning a model of type C and computing its optimal controller.

o How to take advantage of uncertainty estimates in a learned model.

o Direct algorithms combine their knowledge of the dynamics and the goals into a single object, the policy. Thus, they may have more difficulty than indirect methods if the goals change (the "lifelong learning" question). Is this an essential difficulty?

o Does the need for an online or incremental algorithm interact with the choice of direct or indirect methods?

There will be presentations at the workshop by both invited speakers and authors of accepted papers; in addition, we may schedule a poster session after the workshop. Contributions that argue a position, give an overview or review, or report recent work are all encouraged.

3 hardcopies of extended abstracts or full papers no longer than 15 pages should be sent to arrive by March 15th, 1997 to Geoff Gordon (address below). Please also email a URL that points to your submission to ggordon at cs.cmu.edu by the same date. Accepted papers will be included in the hardcopy workshop proceedings (the ICML-97 style file will be available for final formatting). The URLs will be used to create an electronic proceedings. We would like the electronic proceedings to contain online copies of slides, posters, etc. in addition to the papers.
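To make the direct-versus-indirect distinction concrete, the fragment below contrasts how the same one-step experience (s, a, r, s') can either update a Q-table directly (model-free) or update an estimated transition/reward model that a planner would then solve (model-based). This is a generic textbook contrast, not code from any of the systems cited above; the toy two-state dynamics and all names are illustrative.

import random
from collections import defaultdict

def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    # Direct (model-free): a TD update to action values; no dynamics are stored.
    best_next = max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def model_update(counts, rsum, s, a, r, s2):
    # Indirect (model-based): accumulate transition counts and rewards; a
    # planner (e.g., value iteration on the empirical model) runs separately.
    counts[(s, a)][s2] += 1
    rsum[(s, a)] += r

if __name__ == "__main__":
    rng = random.Random(0)
    actions, Q = [0, 1], defaultdict(float)
    counts = defaultdict(lambda: defaultdict(int))
    rsum = defaultdict(float)
    for _ in range(1000):
        s, a = rng.choice([0, 1]), rng.choice(actions)
        s2 = (s + a) % 2                 # toy two-state dynamics
        r = 1.0 if s2 == 1 else 0.0
        q_update(Q, s, a, r, s2, actions)
        model_update(counts, rsum, s, a, r, s2)
    print({k: round(v, 2) for k, v in Q.items()})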
Important Dates:
March 15, 1997: Extended abstracts and papers due
April 10, 1997: Notification of acceptance
May 1, 1997: Camera-ready copy of papers due
July 12, 1997: Workshop

Organizers:

Chris Atkeson (cga at cc.gatech.edu)
College of Computing
Georgia Institute of Technology
801 Atlantic Drive
Atlanta, GA 30332-0280

Geoff Gordon (ggordon at cs.cmu.edu)
Computer Science Department
Carnegie Mellon University
5000 Forbes Ave
Pittsburgh, PA 15213-3891
(412) 268-3613, (412) 361-2893

Contact: Geoff Gordon (ggordon at cs.cmu.edu)

================================================================

CALL FOR PAPERS

AUTOMATA INDUCTION, GRAMMATICAL INFERENCE, AND LANGUAGE ACQUISITION

Workshop at the Fourteenth International Conference on Machine Learning (ICML-97)
Nashville, Tennessee
July 12, 1997

The Automata Induction, Grammatical Inference, and Language Acquisition Workshop will be held on Saturday, July 12, 1997 during the Fourteenth International Conference on Machine Learning (ICML-97), which will be co-located with the Tenth Annual Conference on Computational Learning Theory (COLT-97) at Nashville, Tennessee from July 8 through July 12, 1997. Additional information on ICML-97 and COLT-97 can be found at: http://cswww.vuse.vanderbilt.edu/~mlccolt/

Objectives

Machine learning of grammars, variously referred to as automata induction, grammatical inference, grammar induction, and automatic language acquisition, finds a variety of applications in syntactic pattern recognition, adaptive intelligent agents, diagnosis, computational biology, systems modelling, prediction, natural language acquisition, data mining and knowledge discovery. The workshop seeks to bring together researchers working on different aspects of machine learning of grammars in a number of different (and until now relatively isolated) areas, including neural networks, pattern recognition, computational linguistics, computational learning theory, automata theory, and language acquisition, for a fruitful exchange of relevant recent research results.

Workshop Format

The workshop will consist of 3--5 invited talks offering different perspectives on machine learning of grammars, interspersed with short (10--15 minute) presentations of accepted papers. The workshop schedule will allow ample time for informal discussion.

Topics of Interest

Topics of interest include, but are not limited to:

Different models of grammar induction: e.g., learning from examples, learning using examples and queries, incremental versus non-incremental learning, distribution-free models of learning, learning under various distributional assumptions (e.g., simple distributions).

Theoretical results in grammar induction: e.g., impossibility results, complexity results, characterizations of representational and search biases of grammar induction algorithms.

Algorithms for induction of different classes of languages and automata: e.g., regular, context-free, and context-sensitive languages, interesting subsets of the above under additional syntactic constraints, tree and graph grammars, picture grammars, multi-dimensional grammars, attributed grammars, etc.

Empirical comparison of different approaches to grammar induction.

Demonstrated or potential applications of grammar induction in natural language acquisition, computational biology, structural pattern recognition, adaptive intelligent agents, systems modelling, and other domains.

Submission Guidelines

Full paper submissions are highly recommended, although extended abstracts will also be considered.
The manuscript should be no more than 10 pages long when formatted for generic 8-1/2 x 11 inch pages using the formatting macros and templates available at: http://www.aaai.org/Publications/Templates/macros-link.html

Postscript versions of the manuscripts should be emailed so as to arrive by March 15, 1997 at: honavar at cs.iastate.edu, pdupont at cs.cmu.edu, giles at research.nj.nec.com.

Deadlines

Deadline for submission of manuscripts: March 15, 1997
Decisions regarding acceptance or rejection emailed to authors: April 1, 1997
Final versions of the papers due: April 15, 1997

Selection Criteria

Selection of submitted papers will be on the basis of review by at least two referees. Review criteria include: originality, technical soundness, clarity of presentation, relevance of the results, and potential appeal to the workshop audience.

Workshop Proceedings

Workshop proceedings will be published in electronic form on the world-wide web. Authors of a selected subset of accepted workshop papers might also be invited to submit revised and expanded versions of their papers for possible publication in a special issue of a journal or an edited collection of papers to be published after the conference.

Workshop Organizers:

Dr. Vasant Honavar
Department of Computer Science
226 Atanasoff Hall
Iowa State University
Ames, IA 50011
honavar at cs.iastate.edu

Dr. Pierre Dupont
Department of Computer Science
Carnegie Mellon University
5000 Forbes Ave
Pittsburgh, PA 15213
pdupont at cs.cmu.edu

Dr. Lee Giles
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
giles at research.nj.nec.com

================================================================

CALL FOR PAPERS

ML APPLICATION IN THE REAL WORLD: METHODOLOGICAL ASPECTS AND IMPLICATIONS

Workshop at the Fourteenth International Conference on Machine Learning (ICML-97)
Nashville, Tennessee
July 12, 1997

WWW-page: http://www.aifb.uni-karlsruhe.de/WBS/ICML97/ICML97.html

Description

Application of Machine Learning techniques to solve real-world problems has gained more and more interest over the last decade. In spite of this attention, the ML application process still lacks a generally accepted terminology, let alone commonly accepted approaches or solutions. Several initiatives, both conferences and workshops, have been held concerning this topic. The ICML-93 workshop of Langley and Kodratoff on ML applications, as well as the ICML-95 workshop on 'Applying Machine Learning in Practice' by Aha, Catlett, Hirsh and Riddle, form the successful precedents of this workshop. The focus of the ICML-95 workshop was the 'characterization of the expertise used by machine learning experts during the course of applying learning algorithms to practical applications'. In the last year a significant research effort has been devoted to applications of learning algorithms. A reflection of this is the recent interest in Data Mining and KDD, as reflected, for instance, in the international KDD conference (1995 (Montreal) and 1996 (Portland, OR)). Since the application of ML techniques is also very relevant to the KDD community, it is not surprising that this is also reflected in those conferences. The workshop will draw along the lines of all these events, but will emphasise the processes underlying the application of ML in practice. Methodological issues, as well as issues concerning the kinds and roles of knowledge needed for applying ML, will form a major focus of the workshop.
It aims at building upon some of the results of discussions at the ICML-95 workshop on "Application of ML techniques in practice" and at the same time tries to move toward a consensus regarding a methodology for the application of learning algorithms in practice.

The workshop "ML Application in the real world; methodological aspects and implications" focuses on the methodological principles underlying successful application of ML techniques. Apart from powerful ML algorithms, good application strategies have to be defined. This implies a thorough understanding of the initial problem definition and its relation to the chain of tasks that leads towards a successful solution. Therefore a two-dimensional approach to the process of ML application is needed. The first dimension deals with the whole cycle of analysing the setting, problem definition, knowledge extraction, database interaction, learning, evaluation and iteration in real-world domains, while the second dimension forms an "inner loop" to this cycle, in which the problem definition is used to refine the task at hand and map it onto available algorithms for learning, pre- and postprocessing, and evaluation of results. Concerning these issues there is no clear distinction between ML and KDD, and therefore this workshop will be equally interesting for researchers from both communities. This workshop does not focus on (methods for) developing new algorithms. Moreover, case studies will only contribute to the workshop discussion if general application principles can be derived from them.

Intended Participants and Audience

The workshop primarily aims at scientists and practitioners who apply ML and related techniques to solve problems in the real world. To attend the workshop, one should submit a paper, a one-page extended abstract or a statement of interest. In case of too much interest from participants, the program committee will select participants on the basis of workshop relevance. Ideally, the audience will contain a mix of university and industrial participants.

Workshop program

The program for this one-day workshop will have a maximum of 10 presentations. Some invited presentations will be part of the program. Presentations will take 30 minutes (15-20 minutes presentation and 10-15 minutes discussion). Speakers are asked to focus their presentation on the basis of a topic list that will be compiled during the review process. To foster discussion and debate, accepted papers will be given to a critic beforehand; by these means critics will be prepared to debate presentations. At the end of the workshop, there will be a plenary discussion session. Accepted papers will be distributed via the workshop WWW-page before the workshop, to stimulate the discussion. Accepted papers will also be published in workshop proceedings.

Papers are welcomed concerning (but not limited to) the following topics:

* Methodological approaches focusing on the process of ML application, or sub-processes such as problem definition and refinement, application design, data acquisition, pre- and postprocessing, task analysis, etc.

* Making explicit the kinds and roles of knowledge that are necessary for execution of ML applications.

* Matching of problem definitions on specific techniques and multi-technique configurations.

* Impact of methodologies for empirical research on the application of ML techniques.
* Identification of the relation of different ML strategies to given problem types, and identification of the characteristics that play a role in describing the initial problems.

* Embedding of the ML application process in more general methodologies for (knowledge) system development.

* Frameworks that support (ML) novices and experts in setting up applications and in reusing previous applications or application parts.

* Case studies describing successful ML applications that abstract from the implementational aspects and focus on identification of the choices that are made when designing the application, i.e. the (meta-)knowledge involved, etc.

* Comparison of the process of ML application with processes for application of related techniques (e.g. statistical data analysis).

Submission guidelines

* Submitted papers should not exceed 3500 words or 8 pages Times Roman 12pt.
* The title page should contain the paper title, author name(s), affiliations and full addresses including e-mail of the corresponding author, as well as the paper abstract and at most five keywords.
* Papers are reviewed by at least three members of the program committee on their relevance for the workshop discussions.
* For preparation of the camera-ready copies, an ICML style file will be available.

Tentative Submission Schedule

* Submission deadline: March 22, 1997
* Notification of acceptance: April 9, 1997
* Camera ready copy + PS-file: May 1, 1997
* Papers available on WWW: June 15, 1997
* Workshop date: July 12, 1997

Electronic paper submissions are preferred. Please send your submission to: MLApplic.ICML at ato.dlo.nl. If Postscript printing is not available, paper submissions (4 hardcopies, preferably double sided) can be sent to:

ICML Workshop "ML APPLICATION IN THE REAL WORLD"
p/o ATO-DLO, Floor Verdenius
Postbus 17
6700 AA Wageningen
Netherlands

Program Committee

Dr. Pieter Adriaans (Syllogic, Houten, The Netherlands)
Prof. C. Brodley (Purdue University, West Lafayette, IND, USA)
Prof. David Hand (Open University, Milton Keynes, United Kingdom)
Prof. Yves Kodratoff (LRI, Paris, France)
Dr. Vassilis Moustakis (Technical University of Crete, Chania, Greece)
Prof. Gholamreza Nakhaeizadeh (Daimler Benz AG Research, Ulm, Germany)
Dr. R. Kohavi (Silicon Graphics, Mountain View, CA, USA)
Dr. Enric Plaza i Cervera (IIIA-CSIC, Bellaterra, Catalonia, Spain)
Dr. Foster J. Provost (NYNEX Science & Technology, White Plains, NY, USA)
Dr. P. Riddle (University of Auckland, New Zealand)
Dr. Celine Rouveirol (LRI, Paris, France)
Prof. Derek Sleeman (University of Aberdeen, United Kingdom)
Drs. Maarten van Someren (SWI, Amsterdam, The Netherlands)
Prof. Rudi Studer (University of Karlsruhe, Germany)

Organising Committee

Robert Engels (University of Karlsruhe, Germany) engels at aifb.uni-karlsruhe.de
Juergen Herrmann (University of Dortmund, Germany) Herrmann at jupiter.informatik.uni-dortmund.de
Bob Evans (RR Donnelley, Gallatin TN, USA) BOB.EVANS at rrd.com
Floor Verdenius (ATO-DLO, Wageningen, The Netherlands) F.Verdenius at ato.dlo.nl

================================================================

From jagota at cse.ucsc.edu Wed Jan 29 21:55:35 1997
From: jagota at cse.ucsc.edu (Arun Jagota)
Date: Wed, 29 Jan 1997 18:55:35 -0800
Subject: NCS e-journal solicits collections
Message-ID: <199701300255.SAA12024@bristlecone.cse.ucsc.edu>

Neural Computing Surveys solicits collections

In addition to regular survey papers, the e-journal NCS solicits manuscripts of a second kind: /collections/.
Collections are to comprise a set of consistently formulated and formatted items on some subarea of neural computing or related fields. Each item is to be presented succinctly. Collections are expected to be exhaustive in their coverage of the topic they serve, and are intended to provide a quick way for readers to check what is known on the topic. Collections may range in length from very short (a few pages) to long, depending on the nature of the topic covered.

Examples:

Annotated bibliography on pruning algorithms for feedforward nets
Collection of VC dimension results on neural nets
Collection of neural activation functions used in ANNs
Collection of learning rules used in associative memories
Bibliography on self-organizing maps
Collection of Lyapunov functions used in recurrent nets

For more information about the e-journal, including submission guidelines, visit

http://www.icsi.berkeley.edu/~jagota/NCS
or
http://www.dcs.rhbnc.ac.uk/NCS

or contact jagota at cse.ucsc.edu

Arun Jagota
Dept of Computer Science, University of California, Santa Cruz, CA

From pe_keller at ccmail.pnl.gov Fri Jan 31 18:09:14 1997
From: pe_keller at ccmail.pnl.gov (Paul E Keller)
Date: Fri, 31 Jan 1997 15:09:14 -0800
Subject: Career Opportunities at Battelle
Message-ID: <00020165.@ccmail.pnl.gov>

Research Scientist/Engineer
Principal Scientist/Engineer
Battelle, Columbus, OH, USA

Battelle, a leading provider of technology solutions, has an immediate need for additional staff (1 to 2) to join its cognitive controls and systems initiative at its Columbus, Ohio, USA facility. The new position(s) will provide technical support for a multi-year corporate project applying adaptive/cognitive information technology to applications in emerging technology areas. The positions require an M.S./Ph.D. in Computer and Information Science, Electrical Engineering, or a related field, with a specialization or experience in artificial neural networks, fuzzy logic, evolutionary computing/genetic algorithms, and statistical methods. Oral, written, and interpersonal communications skills are essential to this highly interactive position. Applicant(s) selected will be subject to a security investigation and must meet eligibility requirements for access to classified information. Battelle offers competitive salaries, comprehensive benefits, and opportunities for professional development. Qualified candidates are invited to send their resumes to Dr. Steve Rogers or Dr. Paul Keller, Battelle, 505 King Avenue, Columbus, OH 43201-2693, or e-mail them to rogers at battelle.org or pe_keller at pnl.gov. An Equal Opportunity/Affirmative Action Employer M/F/D/V. To find out more information about Battelle, try http://www.battelle.org.

_____________________.____________________________.___________________
Paul E. Keller, Ph.D. | Battelle Memorial Institute | pe_keller at pnl.gov
Sr Research Scientist | 505 King Avenue | Tel: 614-424-7338
| Columbus, OH 43201-2693 | Fax: 614-424-7400
http://www.emsl.pnl.gov:2080/people/bionames/keller_pe.html

From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Fri Jan 31 23:24:13 1997
From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Fri, 31 Jan 97 23:24:13 EST
Subject: research position: neural nets for OCR
Message-ID: <13158.854771053@DST.BOLTZ.CS.CMU.EDU>

Research Position in Asian Language OCR

The Imaging Systems Lab of the Robotics Institute at Carnegie Mellon University is seeking candidates for a research position in optical character recognition of Asian languages, particularly Chinese and Korean. The nature of the position is flexible. The ideal candidate would be a recent PhD in Computer Science with experience in neural network pattern recognition techniques, looking for a one-year postdoctoral appointment. However, persons with at least a BS in Computer Science or a related field and expertise in neural networks, pattern recognition, or artificial intelligence are invited to apply for a position as a research programmer on the project. Strong linear algebra and C/C++ programming skills are required.

To apply, send a curriculum vitae to:

Dr. Robert Thibadeau
Imaging Systems Lab
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213-3891
From dwang at cis.ohio-state.edu Fri Jan 3 12:20:24 1997
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Fri, 3 Jan 1997 12:20:24 -0500 (EST)
Subject: Tech report on Object Selection
Message-ID: <199701031720.MAA08676@shirt.cis.ohio-state.edu>

The following technical report is available via FTP/WWW:

------------------------------------------------------------------
Object Selection Based on Oscillatory Correlation
------------------------------------------------------------------

DeLiang Wang
Technical Report: OSU-CISRC-12/96-TR67, 1996
OSU Department of Computer and Information Science

One of the classical topics in neural networks is winner-take-all (WTA), which has been widely used in unsupervised (competitive) learning, cortical processing, and attentional control. Because of global connectivity, WTA networks, however, do not encode spatial relations in the input, and thus cannot support sensory and perceptual processing where spatial relations are important. We propose a new architecture that maintains spatial relations between input features. This selection network builds on LEGION (Locally Excitatory Globally Inhibitory Oscillator Networks) dynamics and slow inhibition. In an input scene with many objects (patterns), the network selects the largest object. This system can be easily adjusted to select several largest objects, which then alternate in time. We further show that a two-stage selection network gains efficiency by combining selection with parallel removal of noisy regions. The network is applied to select the most salient object in real images. As a special case, the selection network without local excitation gives rise to a new form of oscillatory WTA.
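As a point of reference for the globally connected WTA networks mentioned at the start of the abstract, here is a toy MAXNET-style winner-take-all iteration in Python: every unit inhibits every other until a single survivor remains. It deliberately has none of LEGION's oscillatory or spatial structure -- exactly the limitation the report addresses -- and its constants are illustrative choices, not taken from the report.

def maxnet(inputs, eps=None, max_iters=10000):
    # Classic MAXNET: mutual inhibition with weight eps < 1/(n-1) drives all
    # but the largest activation to zero. Assumes at least two units.
    x = [max(0.0, v) for v in inputs]
    n = len(x)
    if eps is None:
        eps = 0.9 / max(n - 1, 1)
    it = 0
    while sum(1 for v in x if v > 0.0) > 1 and it < max_iters:
        total = sum(x)
        x = [max(0.0, v - eps * (total - v)) for v in x]
        it += 1
    return x.index(max(x))

if __name__ == "__main__":
    # The unit with the largest input wins, regardless of spatial arrangement.
    print(maxnet([0.2, 0.9, 0.4, 0.85]))   # -> 1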
(20 pages + one figure: 1.3 MB compressed) for anonymous ftp: FTP-HOST: ftp.cis.ohio-state.edu Directory: /pub/leon/Wang96 FTP-filenames: wang.tech96.ps.Z, fig4.ps.Z or for WWW: http://www.cis.ohio-state.edu/~dwang Comments are most welcome - Please send to DeLiang Wang (dwang at cis.ohio-state.edu) ---------------------------------------------------------------------------- FTP instructions: To retrieve and print the files, use the following commands: unix> ftp ftp.cis.ohio-state.edu Name: anonymous Password: (your email address) ftp> binary ftp> cd /pub/leon/Wang96 ftp> get wang.tech96.ps.Z ftp> get fig4.ps.Z ftp> quit unix> uncompress wang.tech96.ps.Z unix> uncompress fig4.ps.Z unix> lpr {each of the two postscript files} (wang.tech96.ps may not ghostview well - some figures do not show up in ghostview - but it should print ok) ---------------------------------------------------------------------------- From philh at cogs.susx.ac.uk Fri Jan 3 11:54:25 1997 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Fri, 3 Jan 1997 16:54:25 +0000 (GMT) Subject: Workshop: Autonomous Behaviour in Animals and Robots Message-ID: ONE DAY WORKSHOP UNIVERSITY OF SUSSEX, BRIGHTON, UK Autonomous Behaviour in Animals and Robots: Perspectives from Neuroscience, AI and Philosophy To mark the recent opening of the Sussex Centre for Computational Neuroscience and Robotics, we are holding a one day workshop on 24 January 1997. The aims of this workshop are: to foster interdisciplinary discussion and debate on key issues relating to biological and artificial behaviour generating mechanisms; to highlight the value and potential of collaborations at the new interface between biology, computer science and engineering; to consider the philosophical issues arising from the notion of "autonomy", especially in artificial systems; to consider emerging industrial and commercial applications for which this interface is likely to be an essential enabling factor; to discuss how a UK community interested in these issues may be established and whether new mechanisms and initiatives from funding bodies may be appropriate; to consider how this field in the UK can be promoted to ensure that we remain in the forefront worldwide. The workshop is divided into three sections (Setting the Stage; Vision, Behaviour and Robots; Complexity). Each session will be divided into two parts: a series of ten minute presentations in which speakers raise key issues and pose open questions, followed by a panel discussion. Contributions from members of the audience will also be encouraged. ------------------------------------------------------------------------------ One-Day Workshop 24 January 1997 University of Sussex, Brighton, UK Autonomous Behaviour in Animals and Robots: Perspectives from Neuroscience, Engineering and Philosophy P R O G R A M M E 10.00-10.30 Reception; Coffee 10.30-10.45 Welcome and Introduction Phil Husbands and Michael O'Shea SESSION A SETTING THE STAGE CHAIR: MICHAEL O'SHEA 10.45-10.55 Understanding the Nervous System Danny Osorio 10.55-11.05 Artificial Autonomous Agents Dave Cliff 11.05-11.15 Philosophical Perspectives Mike Wheeler 11.15-12.30 Discussion of Issues Arising Chair: Maggie Boden with Panel of Discussants (David McFarland; Brendan McGonnigle; Geoffrey Miller; Steven Rose; Aaron Sloman) 12.30-2.00 Lunch SESSION B VISION, BEHAVIOUR AND ROBOTS CHAIR: PHIL HUSBANDS 2.00-2.10 What is Natural Vision For? 
Mike Land
2.10-2.20 Animal Navigation Tom Collett
2.20-2.30 Robotic Navigation Barbara Webb
2.30-2.40 Models or Processes for Vision? Dave Hogg
2.40-2.50 Cognitive Robots? Andy Clark
2.50-4.00 Discussion of Issues Arising Chair: Tim Smithers with Panel of Discussants (Colin Blakemore; John Hallam; Claire Rind; Julie Rutkowska; M Srinivasan)
4.00-4.20 Coffee Break

SESSION C COMPLEXITY CHAIR: BRIAN GOODWIN
4.20-4.30 Complexity of the "Simple" Nervous System Paul Benjamin
4.30-4.40 Can Biologically Inspired Engineering Cross the "Complexity Gap"? Adrian Thompson
4.40-4.50 Evolution and Modularity in Complex Systems John Maynard Smith
4.50-6.00 Discussion of Issues Arising Chair: Chris Winter with Panel of Discussants (Malcolm Burrows; Ron Chrisley; Jeffrey Dean; Brian Goodwin; Misha Mahowald; Edmund Rolls)

SESSION D WHERE DO WE GO FROM HERE? CHAIR: PHIL HUSBANDS and MICHAEL O'SHEA
6.00-6.30 Input and comment from BBSRC/EPSRC

SESSION E CASH BAR and light refreshments

----------------------------------------------------------------------------------

R E G I S T R A T I O N   F O R M

One-Day Workshop, 24 January 1997, University of Sussex, Brighton, UK
Autonomous Behaviour in Animals and Robots: Perspectives from Neuroscience, AI and Philosophy

The registration fee of £15 for postgraduate students and £35 for others (to be paid by cheque) will include light refreshments and lunch. NB. NB. NB. Attendance is limited to 150 delegates, and registration will occur subject to availability. Forms and accompanying cheques submitted after all places are filled will be returned immediately.

Name
Address
Tel
Email
Postgraduate student/Other (please delete as applicable)
Registration fee enclosed (please make cheque payable to CCNR, University of Sussex)

Please return to Annie Bacon, CCNR Workshop, School of Biological Sciences, University of Sussex, Brighton, East Sussex BN1 9QG. Details of the workshop location etc. will be forwarded on receipt of the conference fee.

--------------------------------------------------------------------------------------

From publicity at MIT.EDU Mon Jan 6 17:42:48 1997
From: publicity at MIT.EDU (MITP Publicity)
Date: Mon, 6 Jan 97 17:42:48 EST
Subject: Book Announcement
Message-ID:

The following is a book which readers of this list might find of interest. For more information please see http://www-mitpress.mit.edu/mitp/recent-books/linguistics/klabp.html

_The Balancing Act: Combining Symbolic and Statistical Approaches to Language_
edited by Judith Klavans and Philip Resnik

Symbolic and statistical approaches to language have historically been at odds - the former viewed as difficult to test and therefore perhaps impossible to define, and the latter as descriptive but possibly inadequate. At the heart of the debate are fundamental questions concerning the nature of language, the role of data in building a model or theory, and the impact of the competence-performance distinction on the field of computational linguistics. Currently, there is an increasing realization in both camps that the two approaches have something to offer in achieving common goals. The eight contributions in this book explore the inevitable "balancing act" that must take place when symbolic and statistical approaches are brought together - including basic choices about what knowledge will be represented symbolically and how it will be obtained, what assumptions underlie the statistical model, what principles motivate the symbolic model, and what the researcher gains by combining approaches.
The topics covered include an examination of the relationship between traditional linguistics and statistical methods, qualitative and quantitative methods of speech translation, study and implementation of combined techniques for automatic extraction of terminology, comparative analysis of the contributions of linguistic cues to a statistical word grouping system, automatic construction of a symbolic parser via statistical techniques, combining linguistic with statistical methods in automatic speech understanding, exploring the nature of transformation-based learning, and a hybrid symbolic/statistical approach to recovering from parser failures. Language, Speech, and Communication series. A Bradford Book November 1996 140 pp. - 30 illus. ISBN 0-262-61122-8 $17.50 paper MIT Press * 55 Hayward Street * Cambridge, MA 02142 * (617) 625-8569 From movellan at ergo.ucsd.edu Mon Jan 6 21:39:12 1997 From: movellan at ergo.ucsd.edu (Javier R. Movellan) Date: Mon, 6 Jan 1997 18:39:12 -0800 Subject: UCSD Cogsci Tech Report Announcement Message-ID: <199701070239.SAA12418 at ergo.ucsd.edu> UCSD Cognitive Science Tech Report Author: Sohie Lee Communicated by: David Zipser Title: The Representation, Storage and Retrieval of Reaching Movement Information in Motor Cortex. Electronic copies: http://cogsci.ucsd.edu and click on "Tech Reports and Software" Physical copies: Available for $7.00 within the US, $10.00 outside the US. For physical copies send a check or money order payable to UC Regents and mail it to: TR Request, Javier R. Movellan, Department of Cognitive Science, University of California San Diego, La Jolla, CA 92093-0515. ABSTRACT This report describes the use of analytical techniques and recurrent neural networks to investigate the representation and storage of reaching movement information. A key feature of reaching movement representation revealed by single cell recording is the firing of individual neurons to preferred movement directions. The preferred directions of motor cortical neurons change with starting hand position during reaching. I confirm that the precise nature of the tuning parameters' spatial modulation is dependent upon afferent format. I also show that nonlinear coordinate systems produce the spatially dependent tuning parameters of the general form required by experimental observation. A model that investigates the dynamics of movement representation in motor cortex is described. A fully recurrent neural network was trained to continually output the direction and magnitude of movements required to reach randomly changing targets. Model neurons developed preferred directions and other properties similar to real motor cortical neurons. The key finding is that when the target for a reaching movement changes location, the ensemble representation of the movement changes nearly monotonically, while the individual neurons comprising the representation exhibit strong, nonmonotonic transients. These transients serve as internal recurrent signals that force the ensemble representation to change more rapidly than if it were limited by the time constants of individual neurons. These transients can be tested for experimentally. A second model investigates how recurrent networks might implement the storage, retrieval and matching functions observed when monkeys are trained to perform delayed match-to-sample reaching tasks with distractors. A fully recurrent network was trained to perform the task. The model learns a storage mechanism that relies on fixed point attractors. A minimal-sized network is comprised of units that correspond to the various task components, whereas larger networks exhibit more distributed solutions and have neuron properties that more closely resemble single cell behavior in the brain.
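The storage mechanism described in the abstract above relies on fixed point attractors. For readers meeting the idea for the first time, here is a minimal Python sketch of fixed-point storage in a classic Hopfield-style network - an illustration of the general concept only, not of the thesis's recurrent model; the patterns and network size are arbitrary choices of ours.

import numpy as np

# Two orthogonal +1/-1 patterns stored as fixed points via Hebbian outer products.
p0 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = (np.outer(p0, p0) + np.outer(p1, p1)) / 8.0
np.fill_diagonal(W, 0.0)

state = p0.copy()
state[0] = -1                                # corrupt one bit of the cue
for _ in range(10):                          # iterate the network dynamics
    nxt = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(nxt, state):           # a fixed point has been reached
        break
    state = nxt
print(state)                                 # recovers p0 from the noisy cue

Starting the dynamics from a noisy version of a stored pattern settles onto the stored pattern itself, which is what makes fixed points usable as a storage mechanism.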
From S.Holden at cs.ucl.ac.uk Wed Jan 8 09:56:29 1997 From: S.Holden at cs.ucl.ac.uk (Sean Holden) Date: Wed, 08 Jan 1997 14:56:29 +0000 Subject: New paper Message-ID: <1246.852735389 at cs.ucl.ac.uk> The following paper does not specifically address connectionist networks. However, it may be of interest to readers of this list. The following research note is now available: -------------------------------------------- Cross-Validation and the PAC Learning Model Sean B. Holden Research Note RN/96/64 Department of Computer Science University College London Gower Street London WC1E 6BT, U.K. Abstract A large body of research exists within the general field of computational learning theory which, informally speaking, addresses the following question: how many examples are required so that, with 'high probability', after training a supervised learner we can expect the error on the training set to be 'close' to the actual probability of error (the generalization error) of the learner? Theoretical frameworks inspired by probably approximately correct (PAC) learning formalise what is meant by 'high probability' and 'close' in the above statement. A statistician might recognize this problem as that of knowing under what conditions the 'resubstitution estimate' - as the error on the training set is often referred to - provides in a particular sense a good estimate of the generalization error. It is well known that, in fact, the resubstitution estimate usually provides a rather bad estimate of this quantity, and that several better estimates exist. In this paper we study two of the latter estimates - the holdout estimate and the cross-validation estimate - within a framework inspired by PAC learning theory. We derive upper and lower bounds on the sample complexity of the error estimation problem for these estimates. Our bounds apply for any consistent supervised learner. A copy can be obtained as follows: a) By anonymous ftp: address cs.ucl.ac.uk, file research/rn/rn-96-64.ps.Z b) From my Web page: http://www.cs.ucl.ac.uk/staff/S.Holden/ c) By postal mail: A limited number of paper copies is available. Request a copy from: Dr. Sean B. Holden, Department of Computer Science, University College London, Gower Street, London WC1E 6BT, U.K., or make a request by email: s.holden at cs.ucl.ac.uk
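The gap between the resubstitution estimate and better estimates, which the note above analyzes formally, is easy to see empirically. The following self-contained Python toy (ours, unrelated to the note's bounds) trains a memorizing 1-nearest-neighbour classifier on noisy data and compares the two estimates:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + rng.normal(scale=1.0, size=200) > 0).astype(int)  # noisy labels

def nn1_predict(Xtr, ytr, Xte):
    # 1-nearest-neighbour: memorizes the training set completely
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return ytr[d.argmin(axis=1)]

Xtr, ytr, Xho, yho = X[:100], y[:100], X[100:], y[100:]
resub = np.mean(nn1_predict(Xtr, ytr, Xtr) != ytr)    # error on the training set
holdout = np.mean(nn1_predict(Xtr, ytr, Xho) != yho)  # error on held-out examples
print(resub, holdout)  # resubstitution is exactly 0 here; the holdout estimate is not

Because the labels are noisy, no classifier can reach zero generalization error, yet the memorizing learner's resubstitution estimate is zero; the holdout estimate, computed on data the learner never saw, stays close to the true error.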
From erik at bbf.uia.ac.be Thu Jan 9 11:21:25 1997 From: erik at bbf.uia.ac.be (Erik De Schutter) Date: Thu, 9 Jan 1997 16:21:25 GMT Subject: Crete Course in Computational Neuroscience Message-ID: <199701091621.QAA16824 at kuifje> FIRST CALL CRETE COURSE IN COMPUTATIONAL NEUROSCIENCE SEPTEMBER 7 - OCTOBER 3, 1997 UNIVERSITY OF CRETE, GREECE DIRECTORS: Erik De Schutter (University of Antwerp, Belgium) Idan Segev (Hebrew University, Jerusalem, Israel) Jim Bower (California Institute of Technology, USA) Adonis Moschovakis (University of Crete, Greece) The Crete Course in Computational Neuroscience introduces students to the practical application of computational methods in neuroscience, in particular how to create biologically realistic models of neurons and networks. The course consists of two complementary parts. A distinguished international faculty gives morning lectures on topics in experimental and computational neuroscience. The rest of the day is spent learning how to use simulation software and how to implement a model of the system the student wishes to study. The first week of the course introduces students to the most important techniques in modeling single cells, networks and neural systems. Students learn how to use the GENESIS, NEURON, XPP and other software packages on their individual unix workstations. During the following three weeks the lectures will be more general, but each week will cover topics ranging from modeling single cells and subcellular processes through the simulation of simple circuits and large neuronal networks to system-level models of the brain. The course ends with a presentation of the students' modeling projects. The Crete Course in Computational Neuroscience is designed for advanced graduate students and postdoctoral fellows in a variety of disciplines, including neuroscience, physics, electrical engineering, computer science and psychology. Students are expected to have a basic background in neurobiology as well as some computer experience. A total of 28 students will be accepted, with an age limit of 35 years. We will accept students of any nationality, but the majority will be from the European Union and affiliated countries (Iceland, Israel, Liechtenstein and Norway). We specifically encourage applications from researchers who work in less-favoured regions of the EU, from women and from researchers from industry. Every student will be charged a tuition fee of 500 ECU (approx. US$630). In the case of students with a nationality from the EU, affiliated countries or Japan, the tuition fee covers lodging, local travel and all course-related expenses. All applicants with other nationalities will be charged an ADDITIONAL fee of 1000 ECU (approx. US$1260) which covers lodging, local travel and course-related expenses. For nationals from EU and affiliated countries economy travel from an EU country to Crete will be refunded after the course. A limited number of students from less-favoured regions world-wide will get their fees and travel refunded. More information and application forms can be obtained:
- WWW access: http://bbf-www.uia.ac.be/Crete_index.html Please apply electronically using a web browser if possible.
- email: crete_course at bbf.uia.ac.be
- by mail: Prof. E. De Schutter, Born-Bunge Foundation, University of Antwerp - UIA, Universiteitsplein 1, B2610 Antwerp, Belgium, FAX: +32-3-8202669
APPLICATION DEADLINE: April 5, 1997. Applicants will be notified of the results of the selection procedures by May 5. FACULTY: L. Abbott (Brandeis University, USA), D. Beeman (University of Colorado, Boulder, USA), A. Borst (Max Planck Institute Tuebingen, Germany), R. Calabrese (Emory University, USA), A. Destexhe (Universite Laval, Canada), M. Hines (Yale University, USA), J.J.B. Jack (Oxford University, England), C. Koch (California Institute of Technology, USA), R. Kotter (Heinrich Heine University Dusseldorf, Germany), G. LeMasson (University of Bordeaux, France), K. Martin (Institute of Neuroinformatics, Zurich), M. Nicolelis (Duke University, USA), S. Redman (Australia National University Canberra), J.M. Rinzel (NIH, USA), S.A. Shamma (University of Maryland, USA), H. Sompolinsky (Hebrew University Jerusalem, Israel), S. Tanaka (RIKEN, Japan), A.M. Thomson (Royal Free Hospital, England), T.L. Williams (St George Hospital, London, England),
Y. Yarom (Hebrew University Jerusalem, Israel), and others to be named. The Crete Course in Computational Neuroscience is supported by the European Commission (4th Framework Training and Mobility of Researchers program), by The Brain Science Foundation (Tokyo) and by UNESCO. Local administrative organization: the Institute of Applied and Computational Mathematics of FORTH (Crete, GR). From fritzke at neuroinformatik.ruhr-uni-bochum.de Thu Jan 9 12:49:47 1997 From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke) Date: Thu, 9 Jan 1997 18:49:47 +0100 (MET) Subject: paper available on LBG-U Message-ID: <199701091749.SAA00439 at urda.neuroinformatik.ruhr-uni-bochum.de> ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/manuscripts/IRINI/irini97-01/irini97-01.ps.gz The following TR/preprint is available via ftp (93 KB, 10 pages): The LBG-U method for vector quantization - an improvement over LBG inspired by neural networks Bernd Fritzke Systembiophysik, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany (to appear in: Neural Processing Letters, 1997, Vol. 5, No. 1) Keywords: codebook construction, data compression, growing neural networks, LBG, vector quantization Abstract: A new vector quantization method -- denoted LBG-U -- is presented which is closely related to a particular class of neural network models (growing self-organizing networks). LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG has converged, however, a novel measure of utility is assigned to each codebook vector. Thereafter, the vector with minimum utility is moved to a new location, LBG is run on the resulting modified codebook until convergence, another vector is moved, and so on. Since a strictly monotonic improvement of the LBG-generated codebooks is enforced, it can be proved that LBG-U terminates in a finite number of steps. Experiments with artificial data demonstrate significant improvements in terms of RMSE over LBG, combined with only modestly higher computational costs. Comments are welcome, Bernd Fritzke PS: Sorry for the long and obviously redundant URL. In some cases our TRs are provided as LaTeX source with all figures in separate files. Therefore, each TR has its own directory. It's a German system 8v). -- Bernd Fritzke * Institut für Neuroinformatik Tel. +49-234 7007845 Ruhr-Universität Bochum * Germany FAX. +49-234 7094210 WWW: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html
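To make the LBG-U loop of the preceding abstract concrete, here is a compact Python sketch. The utility measure and relocation rule below are a simplified reading of the abstract (a vector's utility is the extra distortion its points would suffer if it were deleted; the minimum-utility vector is moved next to the worst-quantized point, and the best codebook seen so far is kept), so the paper's exact definitions may differ.

import numpy as np

def lbg(X, cb, iters=30):
    # Plain LBG / k-means: assign each point to its nearest codebook vector,
    # then move each vector to the centroid of its cell; repeat.
    for _ in range(iters):
        a = ((X[:, None, :] - cb[None]) ** 2).sum(-1).argmin(1)
        for j in range(len(cb)):
            if (a == j).any():
                cb[j] = X[a == j].mean(0)
    return cb

def lbg_u(X, k=8, moves=5, seed=0):
    rng = np.random.default_rng(seed)
    cb = X[rng.choice(len(X), size=k, replace=False)].copy()
    best_cb, best_err = cb.copy(), np.inf
    for _ in range(moves + 1):
        cb = lbg(X, cb)
        d = ((X[:, None, :] - cb[None]) ** 2).sum(-1)
        a, err = d.argmin(1), d.min(1)
        if err.sum() < best_err:                  # remember the best codebook so far,
            best_cb, best_err = cb.copy(), err.sum()  # mirroring the enforced monotone
                                                      # improvement of the paper
        d2 = np.sort(d, axis=1)[:, 1]             # distance to second-nearest vector
        # Utility of vector j: distortion increase if j were deleted and its points
        # fell back to their second-nearest vector.
        util = np.array([(d2 - err)[a == j].sum() for j in range(k)])
        cb[util.argmin()] = X[err.argmax()] + 1e-6  # relocate the least useful vector
                                                    # next to the worst-quantized point
    return best_cb

Calling, say, lbg_u(np.random.default_rng(1).normal(size=(1000, 2)), k=16) shows the converge-move-reconverge structure; the termination guarantee in the paper rests on the monotone improvement that keeping the best codebook provides.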
From villmann at informatik.uni-leipzig.d400.de Thu Jan 9 13:27:06 1997 From: villmann at informatik.uni-leipzig.d400.de (villmann at informatik.uni-leipzig.d400.de) Date: Thu, 9 Jan 1997 19:27:06 +0100 Subject: BOOK ANNOUNCEMENT Message-ID: <970109192706*/S=villmann/OU=informatik/PRMD=UNI-LEIPZIG/ADMD=D400/C=DE/@MHS> BOOK ANNOUNCEMENT The following book is now available: "Topologieerhaltung in selbstorganisierenden neuronalen Merkmalskarten" ("Topology Preservation in Self-Organizing Neural Feature Maps") author: Thomas Villmann publisher: Verlag Harri Deutsch, Frankfurt/M. ISBN: 3-8171-1523-7 116 pages, 42 fig. The book, which is based on my PhD thesis, describes the current state of research on measuring topology preservation in Kohonen's self-organizing feature maps (SOFM). After an introduction to SOFMs and considerations of their dynamics (as an extension of the work done by H. Ritter et al.), current measures of topology preservation are analyzed and their limits are shown. Taking these considerations into account, and following an intuitive understanding of topology preservation, a rigorous mathematical approach is developed, based on the mathematical theory of (discrete) topology. Using this theory a new measure, the so-called topographic function, is derived, which measures topology preservation according to its exact mathematical definition. In the last chapter some applications of the topographic function are presented. One of these is the assessment of growing self-organizing feature maps with respect to their topology preservation; a growing algorithm is presented which takes the principal components of the neurons' receptive fields into account for the growing step. Please note that the language of the book is German. Sorry, hardcopies of the PhD thesis itself are not available. With best regards Thomas Villmann email: villmann at informatik.uni-leipzig.de From gbugmann at soc.plym.ac.uk Fri Jan 10 09:33:24 1997 From: gbugmann at soc.plym.ac.uk (Guido.Bugmann xtn 2566) Date: Fri, 10 Jan 1997 14:33:24 +0000 (GMT) Subject: Studentship in Autonomous Mobile Robotics Message-ID: Research Studentship in Autonomous Mobile Robotics Neural and Adaptive Systems Group School of Computing University of Plymouth The Neural and Adaptive Systems Group is offering a studentship aimed at investigating autonomous mobile robotics. The topics to be investigated will cover subjects such as biologically inspired models of spatial memory, use of vision in navigation tasks and for object recognition, goal directed action plans, exploration, and others. A research student is sought to: i) take part in the set-up of the lab and provide technical support; ii) perform experiments with mobile robots. The ideal candidate should be a good all-rounder, with good programming skills (C/C++), knowledge of interfacing PCs with the outside world, some knowledge of the design of electronic circuits, some skills in building / assembling electromechanical equipment, and a strong interest in biological and artificial neurocomputing and its application in "useful" robots. The position is initially for two years, enabling the student to work towards an MPhil, with a possible extension of one year to complete a PhD. The studentship is approximately 5400 pounds per year, tax free, which may be supplemented with part-time teaching. To apply for this position please send, before February the 15th, a completed postgrad application form that you can obtain from Carole Watson, [Phone (+44) 1752 23 25 41; Fax (+44) 1752 23 25 40; email: carole at soc.plym.ac.uk] or by writing to the address below. For further information contact: +--------------------------------------------------------------------+ Dr.
Guido Bugmann Neural and Adaptive Systems Group School of Computing University of Plymouth Plymouth PL4 8AA United Kingdom tel (+44) 1752 23 25 66 fax (+44) 1752 23 25 40 email: gbugmann at soc.plym.ac.uk or gbugmann at plymouth.ac.uk Home page: http://www.tech.plym.ac.uk/soc/staff/guidbugm/bugmann.html +--------------------------------------------------------------------+ From ken at phy.ucsf.edu Thu Jan 9 22:54:19 1997 From: ken at phy.ucsf.edu (Ken Miller) Date: Thu, 9 Jan 1997 19:54:19 -0800 Subject: Postdoctoral and Predoctoral Positions in Theoretical Neurobiology Message-ID: <9701100354.AA06234@coltrane.ucsf.edu> POSTDOCTORAL AND PREDOCTORAL POSITIONS SLOAN CENTER FOR THEORETICAL NEUROBIOLOGY UNIVERSITY OF CALIFORNIA, SAN FRANCISCO INFORMATION ON THE UCSF SLOAN CENTER AND FACULTY AND THE POSTDOCTORAL AND PREDOCTORAL POSITIONS IS AVAILABLE THROUGH OUR WWW SITE: http://www.sloan.ucsf.edu/sloan. E-mail inquiries should be sent to sloan-info at phy.ucsf.edu. Below is basic information on the program: The Sloan Center for Theoretical Neurobiology at UCSF solicits applications for pre- and post-doctoral fellowships, with the goal of bringing theoretical approaches to bear on neuroscience. Applicants should have a strong background and education in a theoretical discipline, such as physics, mathematics, or computer science, and commitment to a future research career in neuroscience. Prior biological or neuroscience training is not required. The Sloan Center will offer opportunities to combine theoretical and experimental approaches to understanding the operation of the intact brain. The research undertaken by the trainees may be theoretical, experimental, or a combination. The RESIDENT FACULTY of the Sloan Center and their research interests are: Allison Doupe: Development of song recognition and production in songbirds. Stephen Lisberger: Learning and memory in a simple motor reflex, the vestibulo-ocular reflex, and visual guidance of smooth pursuit eye movements by the cerebral cortex. Michael Merzenich: Experience-dependent plasticity underlying learning in the adult cerebral cortex and the neurological bases of learning disabilities in children. Kenneth Miller: Mechanisms of self-organization of the cerebral cortex; circuitry and computational mechanisms underlying cortical function; computational neuroscience. Roger Nicoll: Synaptic and cellular mechanisms of learning and memory in the hippocampus. Christoph Schreiner: Cortical mechanisms of perception of complex sounds such as speech in adults, and plasticity of speech recognition in children and adults. Michael Stryker: Mechanisms that guide development of the visual cortex. All of these resident faculty are members of UCSF's W.M. Keck Foundation Center for Integrative Neuroscience, a new center (opened January, 1994) for systems neuroscience that includes extensive shared research resources within a newly renovated space designed to promote interaction and collaboration. The unusually collaborative and interactive nature of the Keck Center will facilitate the training of theorists in a variety of approaches to systems neuroscience. In addition to the resident faculty, there are a series of VISITING FACULTY who are in residence at UCSF for times ranging from 1-8 weeks each year. 
These faculty, and their research interests, include: Laurence Abbott, Brandeis University: neural coding, relations between firing rate models and biophysical models, self-organization at the cellular level; William Bialek, NEC Research Institute: physical limits to sensory signal processing, reliability and information capacity in neural coding; Sebastian Seung, ATT Bell Labs: models of collective computation in neural systems; David Sparks, University of Pennsylvania: understanding the superior colliculus as a "model cortex" that guides eye movements; Steven Zucker, McGill University: neurally based models of vision, visual psychophysics, mathematical characterization of neuroanatomical complexity. PREDOCTORAL applicants with strong theoretical training seeking to BEGIN a Ph.D. program should apply directly to the UCSF Neuroscience Ph.D. program. Contact Patricia Arrandale, patricia at phy.ucsf.edu, to obtain application materials; be sure to include your surface-mail address. The APPLICATION DEADLINE for Sloan Center applicants is Feb. 10, 1997 for fall 1997 admission. Sloan Center applicants must also alert the Sloan Center of their application, by writing to Steve Lisberger at the address given below. POSTDOCTORAL applicants, or PREDOCTORAL applicants seeking to do research at the Sloan Center as part of a Ph.D. program in progress in a theoretical discipline elsewhere, should apply as follows: send a curriculum vitae, a statement of previous research and research goals, and up to three relevant publications, and have two letters of recommendation sent to us. THE APPLICATION DEADLINE IS February 10, 1997. UC San Francisco is an Equal Opportunity Employer. Send applications to: Steve Lisberger, Sloan Center for Theoretical Neurobiology at UCSF, Department of Physiology, University of California, 513 Parnassus Ave., San Francisco, CA 94143-0444 From terry at salk.edu Sat Jan 11 02:02:36 1997 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 10 Jan 1997 23:02:36 -0800 (PST) Subject: Telluride Workshop Message-ID: <199701110702.XAA26267 at helmholtz.salk.edu> "NEUROMORPHIC ENGINEERING WORKSHOP" JUNE 23 - JULY 13, 1997 TELLURIDE, COLORADO Deadline for application is April 1, 1997. Christof Koch (Caltech), Terry Sejnowski (Salk Institute/UCSD) and Rodney Douglas (Zurich, Switzerland) invite applications for a three-week summer workshop that will be held in Telluride, Colorado in 1997. The 1996 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby Foundation and by the "Center for Neuromorphic Systems Engineering" at Caltech, was an exciting event and a great success. A detailed report on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants.
Neuromorphic engineering has a wide range of applications from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems. FORMAT: The three-week summer workshop will include background lectures, practical tutorials on aVLSI design, hands-on projects, and special interest groups. Participants are encouraged to get involved in as many of these activities as interest and time allow. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons, and after dinner. The aVLSI practical tutorials will cover all aspects of aVLSI design, simulation, layout, and testing over the three weeks of the workshop. The first week covers basics of transistors, simple circuit design and simulation. This material is intended for participants who have no experience with aVLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to building the peripheral boards necessary for interfacing aVLSI retinas to video output monitors. Retina chips will be provided. The third week will feature a session on floating gates, including lectures on the physics of tunneling and injection, and experimentation with test chips. Projects that are carried out during the workshop will be centered in a number of groups, including active vision, audition, olfaction, motor control, central pattern generator, robotics, multichip communication, analog VLSI and learning. The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple aVLSI sensors for autonomous robots. The "robotics" group will use robot arms and working digital vision boards to investigate issues of sensory motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.
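For readers curious what the robotics group's "learning of inverse kinematics" can look like in its simplest form, here is a toy Python sketch of direct inverse-model learning on a simulated 2-link planar arm. Everything here (the geometry, the network size, and the restricted joint ranges that make the inverse unique) is our own illustrative assumption, not the workshop's setup.

import numpy as np

def forward(th):
    # Forward kinematics of a 2-link planar arm, both links of length 1:
    # joint angles -> hand position (x, y).
    x = np.cos(th[:, 0]) + np.cos(th[:, 0] + th[:, 1])
    y = np.sin(th[:, 0]) + np.sin(th[:, 0] + th[:, 1])
    return np.stack([x, y], axis=1)

rng = np.random.default_rng(0)
# Restrict joint ranges so each hand position has a unique inverse.
th = rng.uniform([0.0, 0.2], [np.pi / 2, np.pi - 0.2], size=(2000, 2))
xy = forward(th)

# One-hidden-layer MLP fit by batch gradient descent on squared error.
W1, b1 = rng.normal(scale=0.5, size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.5, size=(32, 2)), np.zeros(2)
lr = 0.05
for step in range(5000):
    h = np.tanh(xy @ W1 + b1)
    pred = h @ W2 + b2
    g = 2.0 * (pred - th) / len(th)          # gradient of mean squared error
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = xy.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

target = forward(np.array([[0.5, 1.0]]))     # a reachable hand position
pred_th = np.tanh(target @ W1 + b1) @ W2 + b2
print(target, forward(pred_th))              # the two should roughly agree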
The "multichip communication" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. PARTIAL LIST OF INVITED LECTURERS: Andreas Andreou, Johns Hopkins. Richard Andersen, Caltech. Dana Ballard, Rochester. Avis Cohen, Maryland. Tobi Delbruck, Arithmos. Steve DeWeerth, Georgia Tech Rodney Douglas, Zurich. Christof Koch, Caltech. John Kauer, Tufts. Shih-Chii Liu, Caltech and Rockwell. Stefan Schaal, Georgia Tech Terrence Sejnowski, UCSD and Salk. Shihab Shamma, Maryland. Mark Tilden, Los Alamos. Paul Viola, MIT. LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 5 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains (several mountains are in the 14,000 range). The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, MACs and PCs running LINUX (and windows). We have funds to reimburse some participants for up to $500 of domestic travel and for all housing expenses. Please specify on the application whether such financial help is needed. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three week workshop. HOW TO APPLY: The deadline for receipt of applications is April 1, 1997. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Application should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around May 1, 1997. From giles at research.nj.nec.com Mon Jan 13 10:36:48 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Mon, 13 Jan 97 10:36:48 EST Subject: TR available Message-ID: <9701131536.AA26649@alta> The following TR is available from the NEC Research Institute and University of Maryland UMIACS archives. 
_______________________________________________________________________ A Delay Damage Model Selection Algorithm for NARX Neural Networks Tsungnan Lin{1,2}, C. Lee Giles{1,3}, Bill G. Horne{1}, Sun-Yuan Kung{2} {1} NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 {2} Dept. of Electrical Engineering, Princeton U., Princeton, NJ 08540 {3} UMIACS, University of Maryland, College Park, MD 20742 U. of Maryland Technical Report CS-TR-3707 and UMIACS-TR-96-77 ABSTRACT Recurrent neural networks have become popular models for system identification and time series prediction. NARX (Nonlinear AutoRegressive models with eXogenous inputs) neural network models are a popular subclass of recurrent networks and have been used in many applications. Though embedded memory can be found in all recurrent network models, it is particularly prominent in NARX models. We show that using intelligent memory order selection through pruning and good initial heuristics significantly improves the generalization and predictive performance of these nonlinear systems on problems as diverse as grammatical inference and time series prediction. Keywords: Recurrent neural networks, tapped-delay lines, long-term dependencies, time series, automata, memory, temporal sequences, gradient descent training, latching, NARX networks, auto-regressive, pruning, embedding theory. _________________________________________________________________________ http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html -- C. Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From harmonme at aa.wpafb.af.mil Mon Jan 13 16:06:16 1997 From: harmonme at aa.wpafb.af.mil (Mance E. Harmon) Date: Mon, 13 Jan 97 16:06:16 -0500 Subject: On-Line, Interactive RL Tutorial Message-ID: <970113160614.648 at ethel.aa.wpafb.af.mil.0> Reinforcement Learning: An On-Line, Interactive Tutorial by Mance E. Harmon http://eureka1.aa.wpafb.af.mil/rltutorial (hardcopy length: 19 pages) Scope of Tutorial The purpose of this tutorial is to provide an introduction to reinforcement learning (RL) at a level easily understood by students and researchers in a wide range of disciplines. The intent is not to present a rigorous mathematical discussion that requires a great deal of effort on the part of the reader, but rather to present a conceptual framework that might serve as an introduction to a more rigorous study of RL. The fundamental principles and techniques used to solve RL problems are presented. The most popular RL algorithms are presented and interactively demonstrated using WebSim, a Java-based simulation development environment. Section 1 presents an overview of RL and provides a simple example to develop intuition of the underlying dynamic programming mechanism. In Section 2 the parts of a reinforcement learning problem are discussed. These include the environment, reinforcement function, and value function. Section 3 gives a description of the most widely used reinforcement learning algorithms. These include TD(lambda) and both the residual and direct forms of value iteration, Q-learning, and advantage learning. In Section 4 some of the ancillary issues in RL are briefly discussed, such as choosing an exploration strategy and an appropriate discount factor. The conclusion is given in Section 5. Finally, Section 6 is a glossary of commonly used terms, followed by references in Section 7 and a bibliography of RL applications in Section 8. It is assumed that the reader has some knowledge of learning algorithms that rely on gradient descent (such as the backpropagation of errors algorithm). Mance Harmon harmonme at aa.wpafb.af.mil
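As a minimal companion to the algorithms surveyed in Section 3 of the tutorial above, the following self-contained Python sketch runs tabular Q-learning on a toy five-state chain (our own example, not one of the tutorial's WebSim demonstrations):

import numpy as np

# Five-state chain; actions: 0 = left, 1 = right; reward 1 on reaching state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != 4:                            # state 4 is terminal
        # epsilon-greedy exploration
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(s - 1, 0) if a == 0 else min(s + 1, 4)
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning update: bootstrap from the best action in the successor state
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q[:4].argmax(axis=1))                  # greedy policy: move right everywhere

The single line updating Q[s, a] is the whole algorithm; the other methods covered in the tutorial chiefly vary the target being bootstrapped and how the resulting error is applied.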
From lazzaro at CS.Berkeley.EDU Mon Jan 13 19:13:02 1997 From: lazzaro at CS.Berkeley.EDU (John Lazzaro) Date: Mon, 13 Jan 1997 16:13:02 -0800 (PST) Subject: New silicon audition papers online ... Message-ID: <199701140013.QAA16004 at snap.CS.Berkeley.EDU> Two new papers on silicon audition available online ... --john lazzaro (presented at NIPS*96) A Micropower Analog VLSI HMM State Decoder for Wordspotting John Lazzaro and John Wawrzynek CS Division, UC Berkeley Richard Lippmann MIT Lincoln Laboratory ABSTRACT We describe the implementation of a hidden Markov model state decoding system, a component for a wordspotting speech recognition system. The key specification for this state decoder design is microwatt power dissipation; this requirement led to a continuous-time, analog circuit implementation. We describe the tradeoffs inherent in the choice of an analog design and explain the mapping of the discrete-time state decoding algorithm into the continuous domain. We characterize the operation of a 10-word (81 state) state decoder test chip. Available on the Web at: http://http.cs.berkeley.edu/~lazzaro/biblio/decoder.ps.gz (forthcoming UCB Technical Report) Anawake: Signal-Based Power Management For Digital Signal Processing Systems John Lazzaro and John Wawrzynek CS Division, UC Berkeley Richard Lippmann MIT Lincoln Laboratory ABSTRACT Single-chip, low-power, programmable digital signal processing systems are capable of hosting complete speech processing applications, while consuming a few milliwatts of average power. We present a power management architecture that decreases the average power consumption of these systems to 3-10 microwatts, in applications where speech signals are present with a sufficiently low duty cycle. In this architecture, a micropower analog signal processing system, Anawake, continuously analyzes the incoming signal and controls the power consumption of the DSP system in a signal-dependent way. We estimate system power consumption for Anawake designs optimized for different peak-speech-signal to average-background-noise ratios. Available on the Web at: http://http.cs.berkeley.edu/~lazzaro/biblio/anawake.ps.gz From movellan at ergo.ucsd.edu Tue Jan 14 15:15:13 1997 From: movellan at ergo.ucsd.edu (Javier R. Movellan) Date: Tue, 14 Jan 1997 12:15:13 -0800 Subject: UCSD Cogsci Tech Report 97.01 Message-ID: <199701142015.MAA00743 at ergo.ucsd.edu> UCSD.Cogsci.TR.97.01 AUTHORS: Javier R. Movellan and Paul Mineiro TITLE: Modularity and Catastrophic Fusion: A Bayesian Approach with Applications to Audiovisual Speech Recognition. ABSTRACT: While modular architectures have desirable properties, integrating the outputs of many modules into a unified representation is not a trivial issue. In this paper we examine catastrophic fusion, a problem that occurs when modules are fused in incorrect context conditions. This problem has become especially apparent in the current research on automatic recognition of multimodal signals and has practical as well as theoretical relevance. Catastrophic fusion arises because modules make implicit assumptions and thus operate correctly only within a certain context.
Practice shows that when modules are tested in contexts inconsistent with their assumptions, their influence on the fused product tends to increase, with catastrophic results. We propose a principled solution to this problem based upon Bayesian ideas of competitive models. We study the approach analytically on a classic Gaussian discrimination task and then apply it to a realistic problem on audiovisual speech recognition (AVSR) with excellent results. For concreteness our emphasis is on applications to AVSR, but the problems at hand are very general and touch fundamental issues about cognitive architectures. ELECTRONIC COPIES: http://cogsci.ucsd.edu and follow the link to "Tech Reports" PHYSICAL COPIES: Available for $7.00 within the US, $10.00 outside the US. For physical copies send a check or money order payable to UC Regents and mail it to the following address: TR Request, Javier R. Movellan, Department of Cognitive Science, University of California San Diego, La Jolla, CA 92093-0515 From gjr at cra.com Tue Jan 14 17:04:52 1997 From: gjr at cra.com (Gerard J Rinkus) Date: Tue, 14 Jan 1997 17:04:52 -0500 Subject: Thesis available: Neural model integrating episodic, semantic and temporal sequence memory Message-ID: FTP-host: cns-ftp.bu.edu FTP-filename: /pub/rinkus/thesis_*.Z The following Ph.D. thesis is available via either anonymous ftp or my web site (no hardcopies available). It is 249 pages long and the chapters can be retrieved individually. (Specific retrieval instructions below.) =========================================================== Title: A Combinatorial Neural Network Exhibiting Episodic and Semantic Memory Properties for Spatio-Temporal Patterns Gerard J. Rinkus Dept. of Cognitive and Neural Systems Boston University Boston, MA 02215 rinkus at cns.bu.edu http://cns-web.bu.edu/pub/rinkus/www/ ABSTRACT This thesis describes TEMECOR (Temporal Episodic MEmory using COmbinatorial Representations), an unsupervised, distributed, associative network model of storage, retrieval and recognition of binary spatio-temporal patterns, exhibiting episodic, semantic and complex sequence memory. The original version of the model, TEMECOR-I, meets several essential requirements of episodic memory - very high capacity, single-trial learning, permanence (stability) of traces, and the ability to store highly-overlapped spatio-temporal patterns, including complex state sequences (CSSs), which are sequences in which the same state can recur multiple times - e.g., [A B B A G C B A D]. Various parametric simulation studies are reported, revealing that the model's capacity increases faster than linearly in the size (i.e., number of nodes) of the network, for both uncorrelated and correlated (specifically, complex sequence) spatio-temporal pattern sets. However, TEMECOR-I fails to possess the crucial property that similar inputs map to similar internal representations - i.e., continuity. Therefore the model fails to exhibit similarity-based generalization and categorization, which are the basis of many of those phenomena classed as semantic memory. A second version of the model, TEMECOR-II, adds the property of continuity and therefore constitutes a single associative neural network which exhibits both episodic and semantic memory properties, and which does so for the spatio-temporal pattern domain. TEMECOR-II achieves the continuity property by computing, on each time slice, t, the degree of match, G(t), between its expected and actual inputs and then adding an amount of noise, inversely proportional to G(t), into the process of choosing a final internal representation at t. This generally leads to reactivation of old traces (i.e., greater pattern completion) in proportion to the familiarity of inputs, and establishment of new traces (i.e., greater pattern separation) in proportion to the novelty of inputs. Simulation results are given for TEMECOR-II, demonstrating the embedding of similarity relationships in the model's adaptive mappings between inputs and internal representations, and the model's ability to co-categorize similar spatio-temporal events. The model is monolithic in that all three types of memory are explained by a single local circuit architecture, instantiating a winner-take-all network, that is proposed as an analog of the cortical minicolumn. Thus, episodic (i.e., exemplar-specific) and semantic (i.e., general, category-level) information coexist in the same physical substrate. A principle/mechanism is described whereby the model's instantaneous level of memory access - along the spectrum between highly specific (based on the details of a single exemplar) and highly generic (based on the general properties of a class of exemplars) memory access - can be controlled by modulation of various threshold parameters. ===========================================================
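The familiarity-gated noise idea in the abstract above - little noise when the degree of match G(t) is high, much noise when it is low - can be stated in a few lines. The Python sketch below is a drastic simplification of ours (static patterns, a fixed code table, one match score standing in for G(t)), intended only to show the completion-versus-separation effect:

import numpy as np

rng = np.random.default_rng(0)
stored = rng.integers(0, 2, size=(5, 20))    # five previously stored input patterns
codes = np.eye(5)                            # their internal representations

def choose_code(x, noise_scale=2.0):
    overlap = (stored == x).mean(axis=1)     # match against each stored expectation
    G = overlap.max()                        # degree of match, in [0, 1]
    noisy = overlap + rng.normal(scale=noise_scale * (1.0 - G), size=5)
    return noisy.argmax(), G                 # winner-take-all over noisy scores

familiar = stored[2].copy()                  # an input identical to a stored one
novel = rng.integers(0, 2, size=20)          # an unrelated input
print(choose_code(familiar))   # G ~ 1: noise ~ 0, reliably completes to trace 2
print(choose_code(novel))      # G ~ 0.5: large noise, winner varies from call to call
# (A full model would recruit a genuinely new representation for novel inputs;
# here the noise merely decorrelates the choice from the stored traces.)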
FTP instructions (e.g. to retrieve chapter 1):

unix> ftp cns-ftp.bu.edu
Name: anonymous
Password: your full email address
ftp> cd pub/rinkus
ftp> get thesis_chap1.ps.Z
ftp> bye
unix> uncompress thesis_chap1.ps.Z

...then send to a postscript printer or previewer. Note: the file names and page lengths are:

thesis_chap1.ps.Z  ch. 1   49 (incl. prelim pages)
thesis_chap2.ps.Z  ch. 2   24
thesis_chap3.ps.Z  ch. 3   29
thesis_chap4.ps.Z  ch. 4  113
thesis_chap5.ps.Z  ch. 5    5
thesis_refs.ps.Z   refs.   11

From marks at u.washington.edu Tue Jan 14 18:45:23 1997 From: marks at u.washington.edu (Robert Marks) Date: Tue, 14 Jan 97 15:45:23 -0800 Subject: IEEE TNN now on-line Message-ID: <9701142345.AA20188 at carson.u.washington.edu> The IEEE Transactions on Neural Networks is now on line. It is currently free to IEEE members. The web address is: http://www.opera.ieee.org/jolly/ Have your IEEE number handy. IEEE membership information is available on the NNC home page. http://engine.ieee.org/nnc/ Robert J. Marks II, Editor-in-Chief IEEE Transactions on Neural Networks r.marks at ieee.org From moody at chianti.cse.ogi.edu Wed Jan 15 01:54:15 1997 From: moody at chianti.cse.ogi.edu (John Moody) Date: Tue, 14 Jan 97 22:54:15 -0800 Subject: Research Position in Statistical Learning Message-ID: <9701150654.AA10106 at chianti.cse.ogi.edu> Research Position in Nonparametric Statistics, Neural Networks and Machine Learning at Department of Computer Science & Engineering Oregon Graduate Institute of Science & Technology I am seeking a highly qualified researcher to take a leading role on a project involving the development and testing of new model selection and input variable subset selection algorithms for classification, regression, and time series prediction applications. Candidates should have a PhD in Statistics, EE, CS, or a related field, have experience in neural network modeling, nonparametric statistics or machine learning, have strong C programming skills, and preferably have experience with S-Plus and Matlab.
The compensation and level of appointment (Postdoctoral Research Associate or Senior Research Associate) will depend upon experience. The initial appointment will be for one year, but may be extended depending upon the availability of funding. Candidates who can start by April 1, 1997 or before will be given preference, although an extremely qualified candidate who is available by June 1 may also be considered. If you are interested in applying for this position, please mail, fax, or email your CV (ascii text or postscript only), a letter of application, and a list of at least three references (names, addresses, emails, phone numbers) to: Ms. Sheri Dhuyvetter Computer Science & Engineering Oregon Graduate Institute PO Box 91000 Portland, OR 97291-1000 Phone: (503) 690-1476 FAX: (503) 690-1548 Email: sherid at cse.ogi.edu Please do not send applications to me directly. I will consider all applications received by Sheri on or before January 31. OGI (Oregon Graduate Institute of Science and Technology) has over a dozen faculty, senior research staff, and postdocs doing research in Neural Networks, Machine Learning, Signal Processing, Time Series, Control, Speech, Language, Vision, and Computational Finance. Short descriptions of our research interests are appended below. Additional information is available on the Web at http://www.cse.ogi.edu/Neural/ and http://www.cse.ogi.edu/CompFin/ . OGI is a young, but rapidly growing, private research institute located in the Silicon Forest area west of downtown Portland, Oregon. OGI offers Masters and PhD programs in Computer Science and Engineering, Electrical Engineering, Applied Physics, Materials Science and Engineering, Environmental Science and Engineering, Chemistry, Biochemistry, Molecular Biology, Management, and Computational Finance. The Portland area has a high concentration of high tech companies that includes major firms like Intel, Hewlett Packard, Tektronix, Sequent Computer, Mentor Graphics, Wacker Siltronics, and numerous smaller companies like Planar Systems, FLIR Systems, Flight Dynamics, and Adaptive Solutions (an OGI spin-off that manufactures high performance parallel computers for neural network and signal processing applications). +++++++++++++++++++++++++++++++++++++++++++++++++++++++ Oregon Graduate Institute of Science & Technology Department of Computer Science & Engineering Department of Electrical Engineering Research Interests of Faculty, Research Staff, and Postdocs in Neural Networks, Machine Learning, Signal Processing, Control, Speech, Language, Vision, Time Series, and Computational Finance Etienne Barnard (Associate Professor, EE): Etienne Barnard is interested in the theory, design and implementation of pattern-recognition systems, classifiers, and neural networks. He is also interested in adaptive control systems -- specifically, the design of near-optimal controllers for real- world problems such as robotics. Ron Cole (Professor, CSE): Ron Cole is director of the Center for Spoken Language Understanding at OGI. Research in the Center currently focuses on speaker- independent recognition of continuous speech over the telephone and automatic language identification for English and ten other languages. The approach combines knowledge of hearing, speech perception, acoustic phonetics, prosody and linguistics with neural networks to produce systems that work in the real world. 
Mark Fanty (Research Assistant Professor, CSE): Mark Fanty's research interests include continuous speech recognition for the telephone; natural language and dialog for spoken language systems; neural networks for speech recognition; and voice control of computers. Dan Hammerstrom (Associate Professor, CSE): Based on research performed at the Institute, Dan Hammerstrom and several of his students have spun out a company, Adaptive Solutions Inc., which is creating massively parallel computer hardware for the acceleration of neural network and pattern recognition applications. There are close ties between OGI and Adaptive Solutions. Dan is still on the faculty of the Oregon Graduate Institute and continues to study next generation VLSI neurocomputer architectures. Hynek Hermansky (Associate Professor, EE): Hynek Hermansky is interested in speech processing by humans and machines with engineering applications in speech and speaker recognition, speech coding, enhancement, and synthesis. His main research interest is in practical engineering models of human information processing. Todd K. Leen (Associate Professor, CSE): Todd Leen's research spans theory of neural network models, architecture and algorithm design, and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on algorithm design is focused on fast algorithms for non-linear data modeling. John Moody (Associate Professor, CSE): John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, economics, and computational finance. David Novick (Associate Professor, CSE): David Novick conducts research in interactive systems, including computational models of conversation, technologically mediated communication, and human-computer interaction. A central theme of this research is the role of meta-acts in the control of interaction. Current projects include dialogue models for telephone-based information systems. Misha Pavel (Associate Professor, EE): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human-computer interfaces. Hong Pi (Senior Research Associate, CSE): Hong Pi's research interests include neural network models, time series analysis, and dynamical systems theory. He currently works on the applications of nonlinear modeling and analysis techniques to time series prediction problems and financial market analysis. Pieter Vermeulen (Senior Research Associate, CSE): Pieter Vermeulen is interested in the theory, design and implementation of pattern-recognition systems, neural networks and telephone based speech systems. He currently works on the realization of speaker independent, small vocabulary interfaces to the public telephone network. Current projects include voice dialing, a system to collect the year 2000 census information and the rapid prototyping of such systems. Eric A. Wan (Assistant Professor, EE): Eric Wan's research interests include learning algorithms and architectures for neural networks and adaptive signal processing.
He is particularly interested in neural applications to time series prediction, adaptive control, active noise cancellation, and telecommunications. Lizhong Wu (Senior Research Associate, CSE): Lizhong Wu's research interests include neural network theory and modeling, time series analysis and prediction, pattern classification and recognition, signal processing, vector quantization, source coding and data compression. He is now working on the application of neural networks and nonparametric statistical paradigms to finance. From lemmon at endeavor.ee.nd.edu Wed Jan 15 09:16:07 1997 From: lemmon at endeavor.ee.nd.edu (Michael Lemmon) Date: Wed, 15 Jan 1997 09:16:07 -0500 Subject: IEEE-TAC Special Issue Message-ID: <199701151416.JAA00690@endeavor.ee.nd.edu> Contributed by Michael D. Lemmon (lemmon at maddog.ee.nd.edu) FIRST CALL FOR PAPERS IEEE Transactions on Automatic Control announces a Special Issue on ARTIFICIAL NEURAL NETWORKS IN CONTROL, IDENTIFICATION, and DECISION MAKING Edited by Anthony N. Michel Michael Lemmon Dept of Electrical Engineering Dept. of Electrical Engineering University of Notre Dame University of Notre Dame Notre Dame, IN 46556, USA Notre Dame, IN, 46556, USA (219)-631-5534 (voice) (219)-631-8309 (voice) (219)-631-4393 (fax) (219)-631-4393 (fax) Anthony.N.Michel.1 at nd.edu lemmon at maddog.ee.nd.edu There is a growing body of experimental work suggesting that artificial neural networks can be very adept at solving pattern classification problems where there is significant real-world uncertainty. Neural networks also provide an analog method for quickly determining approximate solutions to complex optimization problems. Both of these capabilities can be of great use in solving various control problems and in recent years there has been increased interest in the use of artificial neural networks in the control and supervision of complex dynamical systems. This announcement is a call for papers addressing the topic of neural networks in control, identification, and decision making. Accepted papers will be published in a special issue of the IEEE Transactions of Automatic Control. The special issue is seeking papers which use formal analysis to establish the role of neural networks in control, identification, and decision making. For this reason, papers consisting primarily of empirical simulation results will not be considered for publication. Before submitting, prospective authors should consult past issues of the IEEE Transactions on Automatic Control to identify the type of results and the level of mathematical rigor that are the norm in this journal. Submitted papers are due by July 1, 1997 and should be sent to Michael D. Lemmon or Anthony N. Michel. Notification of acceptance decisions will be sent by December 31, 1997. The special issue is targeted for publication in 1998 or early 1999. All papers will be refereed in accordance with IEEE guidelines. Please consult the inside back cover of any recent issue of the Transactions on Automatic Control for style and length of the manuscript and the number of required copies (seven copies with cover letter) to be sent to one of the editors of this special issue. 
From marwan at ee.usyd.edu.au Thu Jan 16 05:03:49 1997 From: marwan at ee.usyd.edu.au (Marwan Jabri) Date: Thu, 16 Jan 1997 21:03:49 +1100 (EST) Subject: Postgraduate scholarship Message-ID: Postgraduate scholarship Models and implementations of spiking networks This is a scholarship funded by the Australian Research Council for a period of three years to support a student studying towards a PhD. Students applying for this scholarship should have a first class Honours degree or equivalent in electrical/computer engineering. The scholarship amounts to A$18,000 per year. The area of investigation is the development of models, learning algorithms and associated VLSI implementations of spiking networks. Note that non-Australian residents have to pay fees to enrol at Australian universities. The fee amounts can be found on the University of Sydney web page (www.usyd.edu.au). Applicants should forward to my address below a CV, including copies of academic transcripts, and the names, email, phone, fax, and addresses of two academic referees. Deadline is Friday Feb 7, 1997. ------------ Marwan Jabri Professor in Adaptive Systems Dept of Electrical Engineering, The University of Sydney NSW 2006, Australia Tel: (+61-2) 9351-2240, Fax: (+61-2) 9351-7209, Mobile: (+61) 414-512240 Email: marwan at sedal.usyd.edu.au, http://www.sedal.usyd.edu.au/~marwan/ From mesegal at aehn2.einstein.edu Thu Jan 16 04:24:36 1997 From: mesegal at aehn2.einstein.edu (mary segal) Date: Thu, 16 Jan 1997 09:24:36 GMT Subject: Recruitment for fellows in neural networks/rehabilitation research Message-ID: <199701160924.JAA25147 at aehn2.einstein.edu> FELLOWSHIPS AVAILABLE IN NEURAL NETWORKS RELATED TO COGNITIVE NEUROSCIENCE AND REHABILITATION RESEARCH The Moss Rehabilitation Research Institute (MRRI) is accepting applications for two-year research fellowships. MRRI provides an opportunity for mutual collaboration with senior investigators in the field; educational conferences, lab meetings, and weekly patient rounds; and access to a variety of patient populations. Requirements include a PhD in a relevant area of cognitive neuroscience or rehabilitation, training in research design, and applied experience in data collection and analysis. Projects are funded through Federal agencies, such as the National Institutes of Health, as well as private foundations. Individuals with the following interests are invited to apply for the 1997-1998 fellowships: Applications of neural networks to both theoretical and applied problems in rehabilitation research, including 1) cognitive processing and/or acquired language disorders, e.g. in stroke patients; 2) relationship between areas of muscle weakness and compensatory musculoskeletal overuse symptoms in polio and other neurologic diseases; and/or 3) prediction of functional outcomes in rehabilitation patients. Mentors include Drs. John Whyte, MRRI director; Myrna Schwartz, MRRI associate director; Laurel Buxbaum; Branch Coslett; Mary Klein; Susan Kohn; and Mary Segal. MRRI is affiliated with a number of area academic institutions and has close ties with departments of physical medicine and rehabilitation, psychology, cognitive neuroscience, neurology, and bioengineering at various institutions including the University of Pennsylvania, Temple University, Lehigh University, and Drexel University. MRRI is an equal opportunity employer.
For more information contact Mary Segal, Moss Rehabilitation Research Institute, MossRehab Hospital, 213 Korman Building, 1200 West Tabor Road, Philadelphia, PA 19141; telephone (215) 456-9901 ext. 9181; FAX (215) 456-9514; e-mail mesegal at aehn2.einstein.edu. From anderson at magnum.cog.brown.edu Fri Jan 17 09:30:58 1997 From: anderson at magnum.cog.brown.edu (anderson@magnum.cog.brown.edu) Date: Fri, 17 Jan 1997 09:30:58 EST Subject: Faculty Position in Cognition, Brown University Message-ID: <009AE7E1.9C226A40.11@magnum.cog.brown.edu> FACULTY POSITION IN COGNITION, BROWN UNIVERSITY. The Department of Cognitive and Linguistic Sciences at Brown University invites applications for a four-year faculty position in Human Cognition, to begin July 1, 1997 (initial three-year appointment with one-year renewal, non-tenure-track). The position would be suited to either a senior visitor who would receive half-time salary support and teach two courses per year, or a more junior applicant who would receive full salary support and teach three courses per year. Candidates should have core teaching and research interests in an area of human cognition such as perception, attention, memory, categorization, problem solving, reasoning, or decision making, as well as an interest in interacting with members of an interdisciplinary department. Familiarity with computational modeling is desirable. All applicants must have received the Ph.D. degree or equivalent by the beginning of the appointment. The initial deadline for applications is March 1, 1997, but applications will be accepted after that time until the position is filled. Please send CV, recent publications, and a cover letter describing teaching and research interests to the address below. Senior applicants should enclose the names of three referees; junior applicants should have three letters of reference sent to: Cognitive Search Committee, Department of Cognitive and Linguistic Sciences, Box 1978, Brown University, Providence, RI 02912. Brown is an Equal Opportunity/Affirmative Action employer. Women and minorities are especially encouraged to apply. From bruno at redwood.ucdavis.edu Fri Jan 17 21:05:41 1997 From: bruno at redwood.ucdavis.edu (Bruno A. Olshausen) Date: Fri, 17 Jan 1997 18:05:41 -0800 Subject: grad program, UC Davis Message-ID: <199701180205.SAA20962@redwood.ucdavis.edu> GRADUATE PROGRAM IN NEUROSCIENCE UNIVERSITY OF CALIFORNIA, DAVIS The Graduate Program in Neuroscience at the University of California, Davis offers interdisciplinary training in areas from molecular to cognitive neuroscience. Many research opportunities exist for students interested in computational modeling approaches to problems in neuroscience. The Center for Neuroscience and the Institute for Theoretical Dynamics provide students and faculty with numerous research facilities and an excellent environment for combining theoretical and experimental approaches.
Relevant faculty include:
David Amaral - structure and function of hippocampus, amygdala
Ken Britten - visual cortex, neural basis of motion perception
Leo Chalupa - retina neurophysiology, development
Barbara Chapman - development and plasticity of sensory systems
Charles Gray - cortical mechanisms of pattern recognition, rhythmic activity
Andrew Ishida - retinal ganglion cells, synaptic integration
Joel Keizer - computational modeling, cell physiology, calcium dynamics
Leah Krubitzer - cortical organization, comparative anatomy
Ron Mangun - selective attention, cognitive neuroimaging
Bruno Olshausen - computational models of vision, efficient coding
Robert Rafal - neuropsychology of visual attention
Gregg Recanzone - cortical mechanisms of attention, sensory processing
Lynn Robertson - spatial vision, object recognition, hemispheric differences
Karen Sigvart - neural control of locomotion
Mitch Sutter - cortical mechanisms of auditory perception, plasticity
Martin Wilson - synaptic transmission in the retina
*** Application deadline for fall admissions is February 15, 1997. *** Application materials may be obtained from: Ms. Dawne Shell, Graduate Group Complex, 188 Briggs Hall, University of California, Davis, Davis, California 95616-8599; tel: (916) 752-9091 or 9092; fax: (916) 752-8822; e-mail: drshell at ucdavis.edu. Specific questions regarding the program should be directed to: Lynn Robertson, Program Chair, or David Amaral, Chair of Admissions; tel: (916) 757-8853, (916) 757-8813, (510) 372-2000 X6891; e-mail: dgamaral at ucdavis.edu, marva4!lynn at ucdavis.edu. Web site: http://neuroscience.ucdavis.edu/ngg/ From nkasabov at commerce.otago.ac.nz Mon Jan 20 16:39:11 1997 From: nkasabov at commerce.otago.ac.nz (Nikola Kasabov) Date: Mon, 20 Jan 1997 09:39:11 -1200 Subject: ICONIP'97 call for papers Message-ID: <120BF143F97@jupiter.otago.ac.nz> CALL FOR PAPERS, PRESENTATIONS, SPECIAL SESSIONS ICONIP'97 jointly with ANZIIS'97 and ANNES'97 (in cooperation with IEEE NNC and INNS) The Fourth International Conference on Neural Information Processing-- The Annual Conference of the Asian Pacific Neural Network Assembly, jointly with The Fifth Australian and New Zealand International Conference on Intelligent Information Processing Systems, and The Third New Zealand International Conference on Artificial Neural Networks and Expert Systems, 24-28 November, 1997 Dunedin/Queenstown, New Zealand The joint conference will have three parallel streams: Stream1: Neural Information Processing Stream2: Computational Intelligence and Soft Computing Stream3: Intelligent Information Systems and their Applications TOPICS OF INTEREST Stream1: Neural Information Processing * Neurobiological systems * Cognition * Cognitive models of the brain * Dynamical modelling, chaotic processes in the brain * Brain computers, biological computers * Consciousness, awareness, attention * Adaptive biological systems * Modelling emotions * Perception, vision * Learning languages * Evolution Stream2: Computational Intelligence and Soft Computing * Artificial neural networks: models, architectures, algorithms * Fuzzy systems * Evolutionary programming and genetic algorithms * Artificial life * Distributed AI systems, agent-based systems * Soft computing--paradigms, methods, tools * Approximate reasoning * Probabilistic and statistical methods * Software tools, hardware implementation Stream3: Intelligent Information Systems and their Applications * Connectionist-based information systems * Hybrid systems * Expert systems * Adaptive systems * Machine
learning, data mining and intelligent databases * Pattern recognition and image processing * Speech recognition and language processing * Intelligent information retrieval systems * Human-computer interfaces * Time-series prediction * Control * Diagnosis * Optimisation * Application of intelligent information technologies in: manufacturing, process control, quality testing, finance, economics, marketing, management, banking, agriculture, environment protection, medicine, geographic information systems, government, law, education, and sport * Intelligent information technologies on the global networks HONORARY CHAIR Shun-Ichi Amari, Tokyo University GENERAL CONFERENCE CHAIR Nik Kasabov, University of Otago nkasabov at otago.ac.nz CONFERENCE CO-CHAIRS Yianni Attikiouzel, University of Western Australia Marwan Jabri, Sydney University PROGRAM CO-CHAIRS Tom Gedeon, University of New South Wales George Coghill, University of Auckland LOCAL ORGANIZING COMMITTEE CHAIR: Philip Sallis, University of Otago CONFERENCE ORGANISER Ms Kitty Ko Department of Information Science, University of Otago, PO Box 56, Dunedin, New Zealand phone: +64 3 479 8153, fax: +64 3 479 8311, email: kittyko at commerce.otago.ac.nz CALL FOR PAPERS Papers must be received by 30 May 1997. They will be reviewed by senior researchers in the field and the authors will be informed about the decision of the review process by 20 July 1997. The accepted papers must be submitted in a camera-ready format by 20 August. All accepted papers will be published by Springer-Verlag. As the conference is a multi-disciplinary meeting, the papers are required to be comprehensible to a wide rather than to a very specialised audience. Papers will be presented at the conference either in an oral or in a poster session. Please submit three copies of the paper written in English on A4-format white paper with one inch margins on all four sides, in two column format, on not more than 4 pages, single-spaced, in Times or similar font of 10 points, and printed on one side of the page only. Centred at the top of the first page should be the complete title, author(s), mailing and e-mail addresses, followed by an abstract and the text. In the covering letter the stream and the topic of the paper according to the list above should be indicated. SPECIAL ISSUES OF JOURNALS AND EDITED VOLUMES Selected papers will be published in special issues of scientific journals and in edited volumes which will include chapters covering the conference topics written by invited conference participants. TUTORIALS (24 November) Conference tutorials will be organized to introduce the basics of cognitive modelling, dynamical systems, neural networks, fuzzy systems, evolutionary programming, machine learning, soft computing, expert systems, hybrid systems, and adaptive systems. EXHIBITION Companies and university research laboratories are encouraged to exhibit software and hardware systems that they have developed or distribute. SPECIAL EVENTS FOR PRACTITIONERS The New Zealand Computer Society is organising special demonstrations, lectures and materials for practitioners working in the area of information technologies. VENUE (Dunedin/Queenstown) The Conference will be held at the University of Otago, Dunedin, New Zealand. The closing session will be held on Friday, 28 November on a cruise on one of the most beautiful lakes in the world, Lake Wakatipu. The cruise departs from the famous tourist centre Queenstown, about 300 km from Dunedin.
Transportation will be provided and there will be a separate discounted cost for the cruise. TRAVELLING The Dunedin branch of House of Travel, a travel agency, is happy to assist with any domestic and international travel arrangements for the Conference delegates. They can be contacted through email: travel at es.co.nz, fax: +64 3 477 3806, phone: +64 3 477 3464, or toll free number: 0800 735 737 (within NZ). POSTCONFERENCE EVENTS Following the closing conference cruise, delegates may like to experience the delights of Queenstown, Central Otago, and Fiordland. Travel plans can be coordinated by the Dunedin Visitor Centre (phone: +64 3 474 3300, fax: +64 3 474 3311). IMPORTANT DATES Papers due: 30 May 1997 Notification of acceptance: 20 July 1997 Final camera-ready papers due: 20 August 1997 Registration of at least one author of a paper: 20 August 1997 Early registration: 20 August 1997 CONFERENCE CONTACTS, PAPER SUBMISSIONS, CONFERENCE INFORMATION, REGISTRATION FORMS Conference Secretariat Department of Information Science, University of Otago, PO Box 56, Dunedin, New Zealand; phone: +64 3 479 8142; fax: +64 3 479 8311; email: iconip97 at otago.ac.nz Home page: http://divcom.otago.ac.nz:800/com/infosci/kel/conferen.htm RELATED CONFERENCE The World Manufacturing Congress'97 (WMC'97) will be held from November 18-21, 1997 at Massey University, Albany Campus, Auckland, New Zealand. For further information please visit the Web Site: http://www.compusmart.ab.ca/icsc/wmc97.htm From radford at cs.toronto.edu Mon Jan 20 13:52:03 1997 From: radford at cs.toronto.edu (Radford Neal) Date: Mon, 20 Jan 1997 13:52:03 -0500 Subject: Software & Technical Report available Message-ID: <97Jan20.135204edt.1028@neuron.ai.toronto.edu> Now available free for research and educational use: SOFTWARE FOR FLEXIBLE BAYESIAN MODELING This software implements a variety of Bayesian models for regression and classification based on neural networks and Gaussian processes. The software is written in C for Unix. The neural network programs are an update of those previously distributed, which are described in my book, Bayesian Learning for Neural Networks (Springer-Verlag 1996, ISBN 0-387-94724-8). The Gaussian process models and their implementation are described in the following technical report: MONTE CARLO IMPLEMENTATION OF GAUSSIAN PROCESS MODELS FOR BAYESIAN REGRESSION AND CLASSIFICATION Radford M. Neal Dept. of Statistics and Dept. of Computer Science University of Toronto Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple nonparametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution and logistic or probit models for classification applications can be implemented by also sampling the latent values underlying the observations. Software is now available that implements these methods using covariance functions with hierarchical parameterizations. Models defined in this way can discover high-level properties of the data, such as which inputs are relevant to predicting the response.
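As a rough illustration of the kind of matrix computation this abstract refers to, a minimal Gaussian process regression sketch in Python/NumPy might look as follows (the announced software itself is written in C; the squared-exponential covariance and all constants below are illustrative assumptions, not details of the actual implementation):

import numpy as np

def se_cov(x1, x2, ell=1.0, sigma_f=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell)**2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    # Posterior mean and pointwise variance of GP regression at x_test.
    K = se_cov(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = se_cov(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)   # O(n^3) solve: feasible for n up to ~1000
    mean = K_s @ alpha
    v = np.linalg.solve(K, K_s.T)
    var = np.diag(se_cov(x_test, x_test)) - np.sum(K_s * v.T, axis=1)
    return mean, var

x = np.linspace(0.0, 5.0, 50)
y = np.sin(x) + 0.1 * np.random.randn(50)
mu, var = gp_predict(x, y, np.linspace(0.0, 5.0, 200))

Sampling hyperparameters such as ell and sigma_f with Markov chain methods, as the report describes, would wrap this basic computation in an outer MCMC loop.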
Both the software and the technical report can be obtained via my home page, at URL http://www.cs.utoronto.ca/~radford/ You can directly obtain the compressed Postscript for the technical report at URL ftp://ftp.cs.utoronto.ca/pub/radford/mc-gp.ps.Z Please let me know if you encounter any difficulties. ---------------------------------------------------------------------------- Radford M. Neal radford at cs.utoronto.ca Dept. of Statistics and Dept. of Computer Science radford at utstat.utoronto.ca University of Toronto http://www.cs.utoronto.ca/~radford ---------------------------------------------------------------------------- From georg at ai.univie.ac.at Tue Jan 21 11:30:13 1997 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Tue, 21 Jan 1997 17:30:13 +0100 (MET) Subject: CFP: NN in biomedical systems Message-ID: <199701211630.RAA21869@jedlesee.ai.univie.ac.at> Call for Abstracts for a ======================================= Special track on biomedical systems ======================================= at the International Conference on Engineering Applications of Neural Networks (EANN '97) Stockholm, Sweden 16-18 June 1997 ----------------------------------------------------------------------------- The deadline for submission of abstracts to the special track on biomedical systems at EANN '97 has been extended to =================================== January 31, 1997 (email submission) =================================== Please send your submissions to georg at ai.univie.ac.at (Georg Dorffner) --------------- Instructions: Abstracts of one page (about 400 words) should be sent to georg at ai.univie.ac.at by 31 January 1997 by e-mail in plain ASCII format. Please mention two to four keywords, whether you prefer it to be a short paper or a full paper, and whether you would prefer oral or poster presentation. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Notification of acceptance will be sent around 7 February. Submissions will be reviewed and the number of full papers will be very limited. For information on earlier EANN conferences see the www pages at http://www.abo.fi/~abulsari/EANN95.html and http://www.abo.fi/~abulsari/EANN96.html --------------- About the conference: The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, biomedical systems, and environmental engineering. Other special tracks are: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, Ersin-Tulunay at metu.edu.tr), Hybrid Systems (D. Tsaptsinos, D.Tsaptsinos at kingston.ac.uk), Mechanical Engineering (A. Scherer, Andreas_Scherer at hp.com), Process Engineering (R. Baratti, baratti at ndchem3.unica.it) Advisory board J. Hopfield (USA) A. Lansner (Sweden) G. Sjodin (Sweden) Organising committee A. Bulsari (Finland) H. Liljenstrom (Sweden) D. Tsaptsinos (UK) International program committee G. Baier (Germany) R. Baratti (Italy) S. Cho (Korea) T. Clarkson (UK) J. DeMott (USA) G. Dorffner (Austria) W. Duch (Poland) G. Forsgren (Sweden) A. Gorni (Brazil) J. Heikkonen (Italy) F. Norlund (Sweden) A.
Ruano (Portugal) A. Scherer (Germany) C. Schizas (Cyprus) J. Thibault (Canada) E. Tulunay (Turkey) Electronic mail is not absolutely reliable, so if you have not heard from the conference secretariat after sending your abstract, please contact us again. You should receive an abstract number in a couple of days after the submission. International Conference on Engineering Applications of Neural Networks (EANN '97) Stockholm, Sweden 16-18 June 1997 Registration information Registration form can be picked up from the www (or can be sent to you by e-mail) and can be returned after the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid and therefore will not be stored. For more information, please ask eann97 at kth.se. The conference fee will be SEK 4148 (SEK 3400 excluding VAT) until 28 February, and SEK 4978 (SEK 4080 excluding VAT) after that. The conference fee includes attendance at the conference and the proceedings. If your organisation (university or company or institute) has a VAT registration from a European Union country other than Finland, then your VAT number should be mentioned on the bank transfer as well as the registration form, and VAT need not be added to the conference fee. At least one author of each accepted paper should register by 15 March to ensure that the paper will be included in the proceedings. The correct conference fee amount should be received in the account number 207 799 342, Svenska Handelsbanken International, Stockholm branch. It can be paid by bank transfer (with all expenses paid by the sender) to "EANN Conference". To avoid extra bureaucracy and correction of the amount at the registration desk, make sure that you have taken care of the bank transfer fees. It is essential to mention the name of the participant with the bank transfer. If you need to pay it in another way (bank drafts, Eurocheques, postal order; no credit cards), please contact us at eann97 at kth.se. Invoicing will cost SEK 100. From atick at monaco.rockefeller.edu Tue Jan 21 14:50:31 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Tue, 21 Jan 1997 14:50:31 -0500 Subject: Job Openings in Computer Vision Research Message-ID: <9701211450.ZM26135@monaco.rockefeller.edu> FYI %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% JOB OPENINGS IN Pattern Recognition Research Join a growing team of scientists and software engineers developing real-world applications of visual pattern recognition technology (e.g. face recognition systems). The openings are at various levels and will be at Visionics' research facility in New Jersey (about 20 minutes outside New York City). The candidate is expected to have experience in pattern recognition research, numerical analysis, C/C++ programming. Strong computer programming abilities are a must. A research track record in computer vision, artificial neural networks, image processing, or scene understanding is a definite plus. If you are interested in an exciting job opportunity and would like a chance for rapid career development and significant financial rewards, please fax your resume to (908) 549 5323, Re: Job Posting, for consideration. Alternatively, you can email it to jobs at faceit.com. Additional information can be found at http://www.faceit.com. Visionics is an equal opportunity employer. Minority and women candidates are encouraged to apply. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -- Joseph J.
Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From giles at research.nj.nec.com Wed Jan 22 10:49:30 1997 From: giles at research.nj.nec.com (Lee Giles) Date: Wed, 22 Jan 97 10:49:30 EST Subject: TR on Alternative Discrete-time Operators in Neural Networks Message-ID: <9701221549.AA06169@alta> The following TR is now available from the University of Maryland, NEC Research Institute and the Laboratory of Artificial Brain Systems archives. ************************************************************************************ Alternative Discrete-Time Operators and Their Application to Nonlinear Models Andrew D. Back [1], Ah Chung Tsoi [2], Bill G. Horne [3], C. Lee Giles [4,5] [1] Laboratory for Artificial Brain Systems, Frontier Research Program RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako--shi, Saitama 351-01, Japan [2] Faculty of Informatics, University of Wollongong, Northfields Avenue, Wollongong, Australia [3] AADM Consulting, 9 Pace Farm Rd., Califon, NJ 07830 [4] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [5] Inst. for Advanced Computer Studies, U. of Maryland, College Park, MD. 20742 U. of Maryland Technical Report CS-TR-3738 and UMIACS-TR-97-03 ABSTRACT The shift operator, defined as q x(t) = x(t+1), is the basis for almost all discrete-time models. It has been shown, however, that linear models based on the shift operator suffer problems when used to model lightly damped low-frequency (LDLF) systems, with poles near (1,0) on the unit circle in the complex plane. This problem occurs under fast sampling conditions. As the sampling rate increases, coefficient sensitivity and round-off noise become a problem as the difference between successive sampled inputs becomes smaller and smaller. The resulting coefficients of the model approach the coefficients obtained in a binomial expansion, regardless of the underlying continuous-time system. This implies that for a given finite wordlength, severe inaccuracies may result. Wordlengths for the coefficients may also need to be made longer to accommodate models which have low frequency characteristics, corresponding to poles in the neighbourhood of (1,0). These problems also arise in neural network models which comprise linear parts and nonlinear neural activation functions. Various alternative discrete-time operators can be introduced which offer numerical computational advantages over the conventional shift operator. The alternative discrete-time operators have been proposed independently of each other in the fields of digital filtering, adaptive control and neural networks. These include the delta, rho, gamma and bilinear operators. In this paper we first review these operators and examine some of their properties. An analysis of the TDNN and FIR MLP network structures is given which shows their susceptibility to parameter sensitivity problems. Subsequently, it is shown that models may be formulated using alternative discrete-time operators which have low sensitivity properties. Consideration is given to the problem of finding parameters for stable alternative discrete-time operators. A learning algorithm which adapts the alternative discrete-time operators' parameters on-line is presented for MLP neural network models based on alternative discrete-time operators. It is shown that neural network models which use these alternative discrete-time operators perform better than those using the shift operator alone.
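To make the coefficient-sensitivity argument concrete, here is a small numerical sketch in Python (not taken from the TR; the delta operator definition, delta = (q - 1)/Delta, is standard, but the first-order system, the crude wordlength model, and all constants are assumptions for illustration). Rounding the shift-operator coefficient of a pole near (1,0) to two significant digits puts the pole on the unit circle, while the equivalent delta-operator coefficient survives the same rounding:

import numpy as np

def simulate_shift(a, b, u):
    # Shift-operator form: y(t+1) = a*y(t) + b*u(t)
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t+1] = a * y[t] + b * u[t]
    return y

def simulate_delta(ad, bd, u, delta=1.0):
    # Delta-operator form: (y(t+1) - y(t))/delta = ad*y(t) + bd*u(t)
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t+1] = y[t] + delta * (ad * y[t] + bd * u[t])
    return y

def quant(x, digits=2):
    # Crude wordlength model: keep only a few significant digits.
    return float("%.*g" % (digits, x))

a, b = 0.999, 0.001        # pole near (1,0): lightly damped, low frequency
ad, bd = a - 1.0, b        # equivalent delta-operator coefficients (delta = 1)
u = np.ones(5000)

y_q = simulate_shift(quant(a), quant(b), u)    # a rounds to 1.0: pole lands on the unit circle
y_d = simulate_delta(quant(ad), quant(bd), u)  # ad = -0.001 survives the rounding
y_ref = simulate_shift(a, b, u)
print(y_q[-1], y_d[-1], y_ref[-1])  # quantized shift form drifts to ~5; delta form stays near the true ~1

The rho, gamma and bilinear operators reviewed in the TR generalize this idea with different trade-offs between sensitivity and stability.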
Keywords: Shift operator, alternative discrete-time operator, gamma operator, rho operator, low sensitivity, time delay neural network, high speed sampling, finite wordlength, LDLF, MLP, TDNN. ___________________________________________________________________________________ http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html http://zoo.riken.go.jp/abs1/back/Welcome.html -- C. Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From michael at salk.edu Wed Jan 22 21:19:02 1997 From: michael at salk.edu (michael@salk.edu) Date: Wed, 22 Jan 1997 18:19:02 -0800 (PST) Subject: NIPS*96 preprints Message-ID: <199701230219.SAA03230@gabor.salk.edu> Connectionists - This is an announcement of several NIPS*96 preprints from the Computational Neurobiology Lab at the Salk Institute in San Diego. These will appear in "Advances in Neural Information Processing Systems 9" (available May 1997), edited by Mozer, M.C., Jordan, M.I., and Petsche, T., and published by MIT Press of Cambridge, MA. We enclose the abstracts and ftp addresses of these papers. Full citations are at the bottom of each abstract. Comments and feedback are welcome. - Marni Stewart Bartlett, Tony Bell, Michael Gray, Mike Lewicki, Terry Sejnowski, Magnus Stensmo, Akaysha Tang ************************************************************** VIEWPOINT INVARIANT FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS AND ATTRACTOR NETWORKS Bartlett, M. Stewart & Sejnowski, T.J. EDGES ARE THE `INDEPENDENT COMPONENTS' OF NATURAL SCENES Bell A.J. & Sejnowski T.J. DYNAMIC FEATURES FOR VISUAL SPEECHREADING: A SYSTEMATIC COMPARISON Gray, M.S., Movellan, J.R., & Sejnowski, T.J. SELECTIVE INTEGRATION: A MODEL FOR DISPARITY ESTIMATION Gray, M.S., Pouget, A., Zemel, R., Nowlan, S., & Sejnowski, T.J. BLIND SEPARATION OF DELAYED AND CONVOLVED SOURCES Lee T-W., Bell A.J. & Lambert R. BAYESIAN UNSUPERVISED LEARNING OF HIGHER ORDER STRUCTURE Lewicki, M.S. & Sejnowski, T.J. LEARNING DECISION THEORETIC UTILITIES THROUGH REINFORCEMENT LEARNING Stensmo, M. & Sejnowski, T.J. CHOLINERGIC MODULATION PRESERVES SPIKE TIMING UNDER PHYSIOLOGICALLY REALISTIC FLUCTUATING INPUT Tang, A.C., Bartels, A.M., & Sejnowski, T.J. ************************************************************** VIEWPOINT INVARIANT FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS AND ATTRACTOR NETWORKS Bartlett, M. Stewart & Sejnowski, T.J. We have explored two approaches to recognizing faces across changes in pose. First, we developed a representation of face images based on independent component analysis (ICA) and compared it to a principal component analysis (PCA) representation for face recognition. The ICA basis vectors for this data set were more spatially local than the PCA basis vectors and the ICA representation had greater invariance to changes in pose. Second, we present a model for the development of viewpoint invariant responses to faces from visual experience in a biological system. The temporal continuity of natural visual experience was incorporated into an attractor network model by Hebbian learning following a lowpass temporal filter on unit activities. When combined with the temporal filter, a basic Hebbian update rule became a generalization of Griniasty et al. (1993), which associates temporally proximal input patterns into basins of attraction. 
The system acquired representations of faces that were largely independent of pose. ftp://ftp.cnl.salk.edu/pub/marni/nips96_bartlett.ps http://www.cnl.salk.edu/~marni/publications.html Bartlett, M. Stewart & Sejnowski, T. J. (in press). Viewpoint invariant face recognition using independent component analysis and attractor networks. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** EDGES ARE THE `INDEPENDENT COMPONENTS' OF NATURAL SCENES Bell A.J. & Sejnowski T.J. Field (1994) has suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and Barlow (1989) has reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features. We show here that non-linear `infomax', when applied to an ensemble of natural scenes, produces sets of visual filters that are localised and oriented. Some of these filters are Gabor-like and resemble those produced by the sparseness-maximisation network of Olshausen & Field (1996). In addition, the outputs of these filters are as independent as possible, since the infomax network is able to perform Independent Components Analysis (ICA). We compare the resulting ICA filters and their associated basis functions with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). The ICA filters have more sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that these neurons form an information-theoretic co-ordinate system for images. ftp://ftp.cnl.salk.edu/pub/tony/edge.ps.Z Bell A.J. & Sejnowski T.J. (In press). Edges are the `Independent Components' of Natural Scenes. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** DYNAMIC FEATURES FOR VISUAL SPEECHREADING: A SYSTEMATIC COMPARISON Gray, M. S., Movellan, J. R., & Sejnowski, T. J. Humans use visual as well as auditory speech signals to recognize spoken words. A variety of systems have been investigated for performing this task. The main purpose of this research was to systematically compare the performance of a range of dynamic visual features on a speechreading task. We have found that normalization of images to eliminate variation due to translation, scale, and planar rotation yielded substantial improvements in generalization performance regardless of the visual representation used. In addition, the dynamic information in the difference between successive frames yielded better performance than optical-flow based approaches, and compression by local low-pass filtering worked surprisingly better than global principal components analysis (PCA). These results are examined and possible explanations are explored. ftp://ftp.cnl.salk.edu/pub/michael/nips_lips.ps ftp://ftp.cnl.salk.edu/pub/michael/nips_lips-abs.text Gray, M. S., Movellan, J. R., & Sejnowski, T. J. (In press). Dynamic features for visual speechreading: A systematic comparison. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A.
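For readers who want to see the infomax rule from the EDGES abstract above in executable form, a minimal sketch follows (a generic natural-gradient infomax ICA update with a logistic nonlinearity; the toy Laplacian sources, batch size, fixed learning rate, and omission of whitening are simplifying assumptions, not the authors' experimental setup):

import numpy as np

def infomax_ica(X, lr=0.01, epochs=50, batch=100):
    # X: (n_signals, n_samples) mixed data. Returns an unmixing matrix W
    # so that W @ X approximates the independent sources.
    n, m = X.shape
    W = np.eye(n)
    for _ in range(epochs):
        for i in range(0, m - batch + 1, batch):
            u = W @ X[:, i:i+batch]
            y = 1.0 / (1.0 + np.exp(-u))   # logistic nonlinearity g(u)
            # Natural-gradient infomax update: dW ~ (I + (1 - 2 g(u)) u^T) W
            W += (lr / batch) * (batch * np.eye(n) + (1.0 - 2.0 * y) @ u.T) @ W
    return W

rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 20000))          # super-Gaussian toy sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # mixing matrix
W = infomax_ica(A @ S)
print(W @ A)  # should approach a scaled permutation of the identity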
************************************************************** SELECTIVE INTEGRATION: A MODEL FOR DISPARITY ESTIMATION Gray, M. S., Pouget, A., Zemel, R., Nowlan, S., & Sejnowski, T. J. Local disparity information is often sparse and noisy, which creates two conflicting demands when estimating disparity in an image region: the need to spatially average to get an accurate estimate, and the problem of not averaging over discontinuities. We have developed a network model of disparity estimation based on disparity-selective neurons, such as those found in the early stages of processing in visual cortex. The model can accurately estimate multiple disparities in a region, which may be caused by transparency or occlusion, in real images and random-dot stereograms. The use of a selection mechanism to selectively integrate reliable local disparity estimates results in superior performance compared to standard back-propagation and cross-correlation approaches. In addition, the representations learned with this selection mechanism are consistent with recent neurophysiological results of von der Heydt, Zhou, Friedman, and Poggio (1995) for cells in cortical visual area V2. Combining multi-scale biologically-plausible image processing with the power of the mixture-of-experts learning algorithm represents a promising approach that yields both high performance and new insights into visual system function. ftp://ftp.cnl.salk.edu/pub/michael/nips_stereo.ps ftp://ftp.cnl.salk.edu/pub/michael/nips_stereo-abs.text Gray, M. S., Pouget, A., Zemel, R., Nowlan, S., & Sejnowski, T. J. (In press). Selective Integration: A Model for Disparity Estimation. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** BLIND SEPARATION OF DELAYED AND CONVOLVED SOURCES Lee T-W., Bell A.J. & Lambert R. We address the difficult problem of separating multiple speakers with multiple microphones in a real room. We combine the work of Torkkola and Amari, Cichocki and Yang, to give Natural Gradient information maximisation rules for recurrent (IIR) networks, blindly adjusting delays, separating and deconvolving mixed signals. While they work well on simulated data, these rules fail in real rooms, which usually involve non-minimum-phase transfer functions that are not invertible using stable IIR filters. An approach that sidesteps this problem is to perform infomax on a feedforward architecture in the frequency domain (Lambert 1996). We demonstrate real-room separation of two natural signals using this approach. ftp://ftp.cnl.salk.edu/pub/tony/twfinal.ps.Z Lee T-W., Bell A.J. & Lambert R. (In press). Blind separation of delayed and convolved sources. In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** BAYESIAN UNSUPERVISED LEARNING OF HIGHER ORDER STRUCTURE Lewicki, M. S. & Sejnowski, T. J. Multilayer architectures such as those used in Bayesian belief networks and Helmholtz machines provide a powerful framework for representing and learning higher order statistical relations among inputs. Because exact probability calculations with these models are often intractable, there is much interest in finding approximate algorithms. We present an algorithm that efficiently discovers higher order structure using EM and Gibbs sampling.
The model can be interpreted as a stochastic recurrent network in which ambiguity in lower-level states is resolved through feedback from higher levels. We demonstrate the performance of the algorithm on benchmark problems. ftp://ftp.cnl.salk.edu/pub/lewicki/nips96.ps.Z ftp://ftp.cnl.salk.edu/pub/lewicki/nips96-abs.text Lewicki, M.S. and Sejnowski, T.J. (In press). Bayesian unsupervised learning of higher order structure. In Mozer, M.C., Jordan, M.I., and Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** LEARNING DECISION THEORETIC UTILITIES THROUGH REINFORCEMENT LEARNING Stensmo, M. & Sejnowski, T. J. Probability models can be used to predict outcomes and compensate for missing data, but even a perfect model cannot be used to make decisions unless the values of the outcomes, or preferences between them, are also provided. This arises in many real-world problems, such as medical diagnosis, where the cost of the test as well as the expected improvement in the outcome must be considered. Relatively little work has been done on learning the utilities of outcomes for optimal decision making. In this paper, we show how temporal-difference (TD(lambda)) reinforcement learning can be used to determine decision theoretic utilities within the context of a mixture model and apply this new approach to a problem in medical diagnosis. TD(lambda) learning reduces the number of tests that have to be done to achieve the same level of performance as with the probability model alone, which results in significant cost savings and increased efficiency. http://www.cs.berkeley.edu/~magnus/papers/nips96.ps.Z Stensmo, M. and Sejnowski, T. J. (in press). Learning decision theoretic utilities through reinforcement learning. In: Mozer, M.C., Jordan, M.I. and Petsche, T., (Eds.), Advances in Neural Information Processing Systems, Vol. 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** CHOLINERGIC MODULATION PRESERVES SPIKE TIMING UNDER PHYSIOLOGICALLY REALISTIC FLUCTUATING INPUT Tang, A. C., Bartels, A. M., & Sejnowski, T. J. Neuromodulation can change not only the mean firing rate of a neuron, but also its pattern of firing. Therefore, a reliable neural coding scheme, whether a rate coding or a spike time based coding, must be robust in a dynamic neuromodulatory environment. The common observation that cholinergic modulation leads to a reduction in spike frequency adaptation implies a modification of spike timing, which would make a neural code based on precise spike timing difficult to maintain. In this paper, the effects of cholinergic modulation were studied to test the hypothesis that precise spike timing can serve as a reliable neural code. Using the whole cell patch-clamp technique in rat neocortical slice preparation and compartmental modeling techniques, we show that cholinergic modulation, surprisingly, preserved spike timing in response to fluctuating inputs that resemble in vivo conditions. This result suggests that in vivo spike timing may be much more resistant to changes in neuromodulator concentrations than previous physiological studies have implied. ftp://ftp.cnl.salk.edu/pub/tang/ach_timing.ps.gz ftp://ftp.cnl.salk.edu/pub/tang/ach_timing_abs.txt Akaysha C. Tang, Andreas M. Bartels, and Terrence J. Sejnowski. (In press). Cholinergic Modulation Preserves Spike Timing Under Physiologically Realistic Fluctuating Input.
In Mozer, M.C., Jordan, M.I., Petsche, T. (Eds.), Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA, U.S.A. ************************************************************** From back at zoo.riken.go.jp Thu Jan 23 10:59:24 1997 From: back at zoo.riken.go.jp (Andrew Back) Date: Fri, 24 Jan 1997 00:59:24 +0900 Subject: URL Correction: TR on Alternative Discrete-time Operators in Neural Networks Message-ID: <9701231610.AA05123@tora.riken.go.jp> Please note the following URL correction for the previously announced TR: Alternative Discrete-Time Operators and Their Application to Nonlinear Models Andrew D. Back, Ah Chung Tsoi, Bill G. Horne, C. Lee Giles U. of Maryland Technical Report CS-TR-3738 and UMIACS-TR-97-03 Instead of: http://zoo.riken.go.jp/abs1/back/Welcome.html please use: http://www.bip.riken.go.jp/absl/back Our apologies for any inconvenience caused. -- Andrew Back Brain Information Processing Group The Institute of Physical and Chemical Research (RIKEN), Japan. WWW: http://www.bip.riken.go.jp/absl/back From ingber at ingber.com Thu Jan 23 14:58:40 1997 From: ingber at ingber.com (Lester Ingber) Date: Thu, 23 Jan 1997 14:58:40 -0500 Subject: Papers: Canonical momenta indicators ... Message-ID: <199701231958.LAA03723@alumnae.caltech.edu> Below are URLs and abstracts for 4 papers utilizing canonical momenta indicators (CMI) in analyses of neocortical EEG, financial markets, combat simulation, and data mining/knowledge discovery. Below these are instructions for retrieval of files. As noted by a Physical Review E referee for the EEG paper, ... the paper ... has potential value for a wide variety of systems, especially for very complex systems. Its filename [and size] is smni97_cmi.ps.Z [170K] %A L. Ingber %T Statistical mechanics of neocortical interactions: Canonical momenta indicators of electroencephalography %J Physical Review E %P (to be published) %D 1997 %O URL http://www.ingber.com/smni97_cmi.ps.Z ABSTRACT: A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful for yielding insights at the single-neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The necessity of including nonlinear and stochastic structures in this development has been stressed. Sets of EEG and evoked potential data, collected to investigate genetic predispositions to alcoholism and to extract brain "signatures" of short-term memory, were fit. Adaptive Simulated Annealing (ASA), a global optimization algorithm, was used to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta indicators (CMI) are thereby derived for an individual's EEG data. The CMI give better signal recognition than the raw data, and can be used to advantage as correlates of behavioral states. These results give strong quantitative support for an accurate intuitive picture, portraying neocortical interactions as having common algebraic or physics mechanisms that scale across quite disparate spatial scales and functional or behavioral phenomena, i.e., describing interactions among neurons, columns of neurons, and regional masses of neurons. The markets file is the final version of a preprint posted in March '96.
markets96_momenta.ps.Z [45K] %A L. Ingber %T Canonical momenta indicators of financial markets and neocortical EEG %B International Conference on Neural Information Processing (ICONIP'96) %I Springer %C New York %P 777-784 %D 1996 %O Invited paper to the 1996 International Conference on Neural Information Processing (ICONIP'96), Hong Kong, 24-27 September 1996. URL http://www.ingber.com/markets96_momenta.ps.Z ABSTRACT: A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data to demonstrate that they can profit from the SMFM model and to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. This approach to complex systems emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management. combat97_cmi.ps.Z [55K] %A M. Bowman %A L. Ingber %T Canonical momenta of nonlinear combat %B Proceedings of the 1997 Simulation Multi-Conference, 6-10 April 1997, Atlanta, GA %I Society for Computer Simulation %C San Diego, CA %P (to be published) %D 1997 %O URL http://www.ingber.com/combat97_cmi.ps.Z ABSTRACT: The context of nonlinear combat calls for more sophisticated measures of effectiveness. We present a set of tools that can be used as such supplemental indicators, based on stochastic nonlinear multivariate modeling used to benchmark Janus simulation to exercise data from the U.S. Army National Training Center (NTC). As a prototype study, a strong global optimization tool, adaptive simulated annealing (ASA), is used to explicitly fit Janus data, deriving coefficients of relative measures of effectiveness, and developing a sound intuitive graphical decision aid, canonical momenta indicators (CMI), faithful to the sophisticated algebraic model. We argue that these tools will become increasingly important to aid simulation studies of the importance of maneuver in combat in the 21st century. path97_datamining.ps.Z [90K] %A L. Ingber %T Data mining and knowledge discovery via statistical mechanics in nonlinear stochastic systems %P (submitted) %D 1997 %O URL http://www.ingber.com/path97_datamining.ps.Z ABSTRACT: A modern calculus of multivariate nonlinear multiplicative Gaussian-Markovian systems provides models of many complex systems faithful to their nature, e.g., by not prematurely applying quasi-linear approximations for the sole purpose of easing analysis. To handle these complex algebraic constructs, sophisticated numerical tools have been developed, e.g., methods of adaptive simulated annealing (ASA) global optimization and of path integration (PATHINT). In-depth application to three quite different complex systems has yielded some insights into the benefits to be obtained by application of these algorithms and tools, in statistical mechanical descriptions of neocortex (electroencephalography), financial markets (interest-rate and trading models), and combat analysis (baselining simulations to exercise data).
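For orientation only, the skeleton below shows the generic simulated-annealing idea underlying these fits. It is emphatically not the ASA code (ASA adds parameter-specific generating distributions, an exponentially fast annealing schedule, and reannealing), and the toy objective and constants are invented for illustration:

import math, random

def anneal(cost, x0, step, T0=1.0, decay=0.999, iters=20000, rng=None):
    # Bare-bones simulated annealing: random perturbation, Metropolis
    # acceptance, geometric cooling. ASA replaces each of these pieces.
    rng = rng or random.Random(0)
    x, fx, T = x0, cost(x0), T0
    best, fbest = x, fx
    for _ in range(iters):
        y = [xi + step * (2 * rng.random() - 1) for xi in x]
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= decay
    return best, fbest

# Toy multimodal objective standing in for a negative log-likelihood.
f = lambda v: sum(vi**2 - 10 * math.cos(2 * math.pi * vi) + 10 for vi in v)
print(anneal(f, [3.0, -2.5], step=0.5))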
The latest Adaptive Simulated Annealing (ASA) optimization code may be retrieved at no charge from this archive in several formats: http://www.ingber.com/ASA-shar [1350K] http://www.ingber.com/ASA-shar.Z [500K] http://www.ingber.com/ASA.tar.Z [450K] http://www.ingber.com/ASA.tar.gz [320K] http://www.ingber.com/ASA.zip [330K] The archive can be accessed via WWW path http://www.ingber.com/ http://www.alumni.caltech.edu/~ingber/ where the last address is a mirror homepage for the full archive. Code and reprints can be retrieved via anonymous ftp from ftp.ingber.com. Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.ingber.com [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at ftpmail.ramona.vix.com (was ftpmail at decwrl.dec.com), and send only the word "help" in the body of the message. Limited help assisting people with queries on my codes and papers is available only by electronic mail correspondence. Sorry, I cannot mail out hardcopies of code or papers. /* RESEARCH ingber at ingber.com * * INGBER ftp://ftp.ingber.com * * LESTER http://www.ingber.com/ * * Prof. Lester Ingber __ PO Box 857 __ McLean, VA 22101-0857 __ USA */ From regier at tidbit Thu Jan 23 18:09:07 1997 From: regier at tidbit (Terry Regier) Date: Thu, 23 Jan 1997 23:09:07 +0000 (GMT) Subject: FEB 15 DEADLINE for Computational Psycholinguistics conference Message-ID: <199701232309.RAA03913@tidbit.> From atick at monaco.rockefeller.edu Fri Jan 24 11:56:11 1997 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Fri, 24 Jan 1997 11:56:11 -0500 Subject: Network: CNS, Table of Contents, Vol. 8, 1,97 Message-ID: <9701241156.ZM28094@monaco.rockefeller.edu> Network: Computation in Neural Systems Table of Contents Volume 8, 1, 1997 As you may know, the journal has adopted incremental publishing in its online edition, which means a paper is immediately published as soon as it is accepted and processed. Every 3 months we finalize an issue for archival reasons and issue a table of contents. Online journal and information can be found at http://www.iop.org/Journals/ne (limited access to non-subscribers, access to full-length articles to institutional subscribers) Table of contents of latest issue: %%%%%%%%%%%%% Editorial: Thank you to all our referees TOPICAL REVIEW R1 On the use of computation in modelling behaviour F van der Velde PAPERS 1 Nitric oxide: what can it compute? B Krekelberg and J G Taylor 17 Analysis of ocular dominance pattern formation in a high-dimensional self-organizing-map model H-U Bauer, D Brockmann and T Geisel 35 Capacity and information efficiency of the associative net B Graham and D Willshaw 55 A neural net model of the adaptation of binocular vertical eye alignment J W McCandless and C M Schor 71 Quality and efficiency of retrieval for Willshaw-like autoassociative networks: III. Willshaw--Potts model A Kartashov, A Frolov, A Goltsev and R Folk 87 Stereo vision using a microcanonical mean field annealing neural network Jeng-Sheng Huang and Hsiao-Chung Liu 104 Abstracts of Topical reviews published during 1996 Mutual information maximization: models of cortical self-organization S.
Becker The development of topography in the visual cortex: a review of models N. Swindale Auditory cortical representation of complex acoustic spectra as inferred from the ripple analysis method S. Shamma Human colour perception and its adaptation M. Webster %%%%%%%%%% Coming up in the May issue (1) Metric-space analysis of spike trains: theory, algorithm and application Jonathon Victor & Keith Purpura (2) A neural model of the stroboscopic alternative motion A Bartschl and J L van Hemmen and much much more... %%%%%%%%%%%%%% Network:CNS would like to welcome its new editorial board: Larry Abbott, Brandeis Peter Dayan, MIT Peter Hancock, University of Stirling David Heeger, Stanford University Leo van Hemmen, University of Munich Tony Movshon, NYU Markus Meister, Harvard Dan Ruderman, Salk Institute Jonathon Victor, Cornell University David Willshaw, University of Edinburgh As always, we are happy to hear your suggestions and receive your submissions. We hope to continue to make Network:CNS an indispensable research tool for the computational and neuroscience community. Best regards joseph atick Editor-in-chief -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From lane at katrix.com Fri Jan 24 14:48:39 1997 From: lane at katrix.com (Stephen Lane) Date: Fri, 24 Jan 1997 14:48:39 -0500 Subject: Job Openings in Intelligent Agent R&D Message-ID: <01BC0A05.B08EE400@engarde.katrix.com> JOB OPENINGS IN INTELLIGENT AGENT RESEARCH AND DEVELOPMENT OVERVIEW Katrix Inc. has developed technology that enables intelligent agents to be embodied as fully articulated three-dimensional human and animal-like interactive characters in computer games, virtual reality simulations and distributed interactive network applications. Intelligent agents created with Katrix Technology think, learn and act, and as a result, can adapt their behavior and movement in real time based upon interactions with human users and the 3D virtual environment. Katrix currently is developing a suite of products that support the creation and control of such intelligent agents for use as fully interactive Internet Avatars, Digital Actors and Virtual Creatures. These products include point-and-click behavioral animation authoring tools, libraries of off-the-shelf interactive characters and intelligent behaviors, as well as a totally new single-hand computer input device particularly well suited for virtual reality and gaming applications. STAFF POSITIONS AVAILABLE The positions available involve core technology development in the areas of intelligent control, robotics, behavioral animation, neural networks, knowledge-based systems, distributed interactive simulation and visual programming language design. Prospective candidates should have a strong math background and be self-motivated. Required programming skills include proficiency in C and C++ on PC and/or Unix platforms. A research track record in controls and dynamics, robotics or neural networks is a definite plus. Familiarity with design and implementation of graphical user interfaces, 3D computer animation, interactive simulation or 3D games also is desirable. Staff positions are available at various levels at Katrix's facility located in Princeton, New Jersey (about 60 minutes from both New York City and Philadelphia). CONTACT If you are interested in an exciting career opportunity with a rapidly growing company in an industry poised for explosive growth, please send your resume immediately to: Stephen H.
Lane, President FAX: (609) 921-7547 Email: lane at katrix.com Katrix Inc. 31 Airpark Road Princeton, NJ 08540 (609) 921-7544 From rao at cs.rochester.edu Sat Jan 25 00:36:49 1997 From: rao at cs.rochester.edu (Rajesh Rao) Date: Sat, 25 Jan 1997 00:36:49 -0500 Subject: Technical Report: Visual recognition and robust Kalman filters Message-ID: <199701250536.AAA04030@porcupine.cs.rochester.edu> The following paper on appearance-based visual recognition and robust Kalman filtering is now available for retrieval via ftp. Comments and suggestions welcome (This message has been cross-posted - my apologies to those who received it more than once). -- Rajesh Rao Internet: rao at cs.rochester.edu Dept. of Computer Science VOX: (716) 275-2527 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ =========================================================================== Robust Kalman Filters for Prediction, Recognition, and Learning Rajesh P.N. Rao Department of Computer Science University of Rochester Rochester, NY 14627-0226 Technical Report 645 December, 1996 Using results from the field of robust statistics, we derive a class of Kalman filters that are robust to structured and unstructured noise in the input data stream. Each filter from this class maintains robust optimal estimates of the input process's hidden state by allowing the measurement covariance matrix to be a non-linear function of the prediction errors. This endows the filter with the ability to reject outliers in the input stream. Simultaneously, the filter also learns an internal model of input dynamics by adapting its measurement and state transition matrices using two additional Kalman filter-based adaptation rules. We present experimental results demonstrating the efficacy of such filters in mediating appearance-based segmentation and recognition of objects and image sequences in the presence of varying degrees of occlusion, clutter, and noise. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/robust.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/robust.ps.Z 15 pages; 296K compressed, 1015K uncompressed ------------------------------------------------------------------------- Anonymous ftp instructions: >ftp ftp.cs.rochester.edu Connected to anon.cs.rochester.edu. 220 anon.cs.rochester.edu FTP server (Version wu-2.4(3)) ready. Name: [type 'anonymous' here] 331 Guest login ok, send your complete e-mail address as password. Password: [type your e-mail address here] ftp> cd /pub/u/rao/papers/ ftp> get robust.ps ftp> bye From wray at Ultimode.com Sat Jan 25 13:50:52 1997 From: wray at Ultimode.com (Wray Buntine) Date: Sat, 25 Jan 1997 10:50:52 -0800 Subject: PhD/Masters Research Assistantship Message-ID: <199701251850.KAA12584@Ultimode.com> PhD/Masters Research Assistantships Field: probabilistic algorithms, data analysis/mining and optimization for CAD Place: Electrical Engineering and Computer Science University of California, Berkeley The CAD group in the EECS Dept. at UC Berkeley is offering research support for its Masters and Doctoral program. Research areas include but are not limited to the use of data mining/analysis/engineering techniques in CAD or optimization, and probabilistic methods for optimization or specialized compilation. The Electronic Design Technology (EDT) field is concerned with computer automated or computer-assisted design of complex electronic systems. 
With current hardware capabilities advancing rapidly, a key bottleneck is the development of advanced algorithms for optimization and simulation of partial, abstract or completed designs. Our task is to design, code and experiment with new algorithms, methodologies, and software technologies for alleviating this bottleneck. The task can include the use of data mining/analysis to understand the nature of the optimization task, or in order to develop adaptive optimization methods. The ideal candidate should have a background in computer science, electrical engineering or related disciplines, should be an accomplished or developing programmer, and should have an interest in the theory and mathematical techniques used in optimization, data analysis, or probabilistic methods. Candidates who wish to apply are invited to respond with a copy of their CV to: Professor R. Newton URL: http://www.eecs.berkeley.edu/~newton Dr. Wray Buntine URL: http://www.eecs.berkeley.edu/~wray Dr. Andrew Mayer URL: http://www.eecs.berkeley.edu/~mayer Dept. of Electrical Engineering and Computer Sciences 520 Cory Hall University of California at Berkeley Berkeley, CA, 94720 The CAD Group URL: http://www-cad.eecs.berkeley.edu EECS, UC Berkeley URL: http://www.eecs.berkeley.edu From baluja at jprc.com Mon Jan 27 17:30:42 1997 From: baluja at jprc.com (Shumeet Baluja) Date: Mon, 27 Jan 1997 17:30:42 -0500 Subject: Paper: Using Optimal Dependency-Trees for Combinatorial Optimization Message-ID: <199701272230.RAA15646@india.jprc.com> The following paper is available from: http://www.cs.cmu.edu/~baluja/techreps.html (CMU-CS-97-107) Title: Using Optimal Dependency-Trees for Combinatorial Optimization: Learning the Structure of the Search Space Abstract: Many combinatorial optimization algorithms have no mechanism to capture inter-parameter dependencies. However, modeling such dependencies may allow an algorithm to concentrate its sampling more effectively on regions of the search space which have appeared promising in the past. We present an algorithm which incrementally learns second-order probability distributions from good solutions seen so far, uses these statistics to generate optimal (in terms of maximum likelihood) dependency trees to model these distributions, and then stochastically generates new candidate solutions from these trees. We test this algorithm on a variety of optimization problems. Our results indicate superior performance over other tested algorithms that either (1) do not explicitly use these dependencies, or (2) use these dependencies to generate a more restricted class of dependency graphs. By: Shumeet Baluja, Justsystem Pittsburgh Research Center, 4616 Henry St., Pittsburgh, PA 15213, and School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213; and Scott Davies, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. This work is an extension of the work presented in two papers at NIPS 1996: Baluja, S., "Genetic Algorithms and Explicit Search Statistics" to appear in Advances in Neural Information Processing Systems 1996. De Bonet, J., Isbell, C., and Viola, P. (1997) "MIMIC: Finding Optima by Estimating Probability Densities," to appear in Advances in Neural Information Processing Systems 1996. As always, comments and suggestions are most welcome.
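A rough sketch of the two core steps the abstract describes, fitting a maximum-mutual-information dependency tree to good solutions and sampling new candidates from it, might look as follows in Python (the incremental statistics, smoothing, and the surrounding generate-evaluate-update loop of the actual algorithm are omitted, and all function names here are invented):

import numpy as np
from itertools import product

def learn_tree(samples):
    # samples: (n_samples, n_bits) 0/1 array of good solutions seen so far.
    # Returns {node: parent} for a maximum-mutual-information spanning tree.
    n = samples.shape[1]
    p1 = samples.mean(axis=0)
    mi = np.zeros((n, n))
    eps = 1e-9
    for i in range(n):
        for j in range(i + 1, n):
            s = 0.0
            for a, b in product((0, 1), repeat=2):
                pij = np.mean((samples[:, i] == a) & (samples[:, j] == b))
                pi = p1[i] if a else 1.0 - p1[i]
                pj = p1[j] if b else 1.0 - p1[j]
                s += pij * np.log((pij + eps) / (pi * pj + eps))
            mi[i, j] = mi[j, i] = s
    parent = {0: None}   # grow the tree greedily (Prim's algorithm)
    while len(parent) < n:
        i, j = max(((i, j) for i in parent for j in range(n) if j not in parent),
                   key=lambda e: mi[e])
        parent[j] = i
    return parent

def sample_from_tree(parent, samples, rng):
    # Draw one candidate bitstring; dict order guarantees parents come first.
    x = np.zeros(samples.shape[1], dtype=int)
    for j, i in parent.items():
        rows = samples if i is None else samples[samples[:, i] == x[i]]
        p = rows[:, j].mean() if len(rows) else samples[:, j].mean()
        x[j] = int(rng.random() < p)
    return x

rng = np.random.default_rng(0)
good = (rng.random((200, 8)) < 0.7).astype(int)   # stand-in for "good solutions"
tree = learn_tree(good)
print(sample_from_tree(tree, good, rng))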
From ohira at csl.sony.co.jp Wed Jan 29 00:32:43 1997 From: ohira at csl.sony.co.jp (Toru Ohira) Date: Wed, 29 Jan 97 14:32:43 +0900 Subject: TRs on systems with noise and delay Message-ID: <9701290532.AA07421@ohira.csl.sony.co.jp>

The following TRs are available from Sony Computer Science Lab. http://www.csl.sony.co.jp/person/ohira/drw.html

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
(1) Oscillatory Correlation of Delayed Random Walks
Toru Ohira (Sony CSL)
SCSL-TR-96-014 (Tentatively scheduled to appear as a Rapid Communication in Phys. Rev. E, February 1997; also registered at the Los Alamos National Lab archive, cond-mat/9701066)

(2) Delay Estimation from Noisy Time Series
Toru Ohira (Sony CSL) and Ryusuke Sawatari (Computer Science Dept., Keio Univ.)
SCSL-TR-96-017 (Tentatively scheduled to appear as a Rapid Communication in Phys. Rev. E, March 1997; also registered at the Los Alamos National Lab archive, cond-mat/9701193)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

These papers aim to capture analytically the behavior of systems with noise and delay, which include neural networks. As they stand, these works are mostly formal theory, and we welcome suggestions and comments on applications to neural networks. (For a preliminary application to the neuro-muscular control associated with human posture control, please see SCSL-TR-94-026, "Delayed Random Walks" (Ohira and Milton), Phys. Rev. E, vol. 52, p. 3277, 1995.) Sincerely, Toru Ohira ohira at csl.sony.co.jp Sony Computer Science Lab.

++++++++++++++++++Abstracts++++++++++++++++

Oscillatory Correlation of Delayed Random Walks (Ohira)
We investigate analytically and numerically the statistical properties of a random walk model with delayed transition probability dependence (a delayed random walk). The characteristic feature of such a model is the oscillatory behavior of its correlation function. We investigate a model whose transient and stationary oscillatory behavior is analytically tractable. The correspondence of the model with a Langevin equation with delay is also considered.

Delay Estimation from Noisy Time Series (Ohira and Sawatari)
We propose a method to estimate a delay from a time series, taking advantage of the analysis of random walks with delay. The method is applicable to time series coming from a system which is, or can be approximated as, a linear feedback system with delay and noise. We successfully test the method on a time series generated by a discrete Langevin equation with delay.
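As a rough illustration of the first abstract (with my own sigmoid transition rule and parameter values, not the authors' model), the following Python snippet simulates a random walk whose probability of stepping toward the origin depends on its position tau steps in the past, and then estimates the autocorrelation; for sufficiently long delays the correlation function dips negative and oscillates rather than decaying monotonically.

import math, random

def delayed_walk(steps=20000, tau=20, beta=0.5):
    x = [0] * (tau + 1)                  # history buffer seeds the delayed term
    for _ in range(steps):
        x_del = x[-1 - tau]              # position tau steps in the past
        p_down = 1.0 / (1.0 + math.exp(-beta * x_del))
        x.append(x[-1] - 1 if random.random() < p_down else x[-1] + 1)
    return x

def autocorr(x, lag):
    """Crude normalized autocorrelation estimate, fine for illustration."""
    mean = sum(x) / len(x)
    num = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(len(x) - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

walk = delayed_walk()
for lag in (0, 10, 20, 40, 80):
    print(lag, round(autocorr(walk, lag), 3))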
From gordon at AIC.NRL.Navy.Mil Wed Jan 29 14:41:16 1997 From: gordon at AIC.NRL.Navy.Mil (gordon@AIC.NRL.Navy.Mil) Date: Wed, 29 Jan 97 14:41:16 EST Subject: ICML-97 workshops CFPs Message-ID: <9701291941.AA13925@sun14.aic.nrl.navy.mil>

================================================================
CALL FOR PAPERS
REINFORCEMENT LEARNING: TO MODEL OR NOT TO MODEL, THAT IS THE QUESTION
Workshop at the Fourteenth International Conference on Machine Learning (ICML-97) Nashville, Tennessee July 12, 1997

Recently there has been some disagreement in the reinforcement learning community about whether finding a good control policy is helped or hindered by learning a model of the system to be controlled. Recent reinforcement learning successes (Tesauro's TD-gammon, Crites' elevator control, Zhang and Dietterich's space-shuttle scheduling) have all been in domains where a human-specified model of the target system was known in advance, and have all made substantial use of the model. On the other hand, there have been real robot systems which learned tasks either by model-free methods or via learned models. The debate has been exacerbated by the lack of fully satisfactory algorithms on either side for comparison. Topics for discussion include (but are not limited to):
o Case studies in which a learned model either contributed to or detracted from the solution of a control problem. In particular, does one method have better data efficiency? Time efficiency? Space requirements? Final control performance? Scaling behavior?
o Computational techniques for finding a good policy, given a model from a particular class -- that is, what are good planning algorithms for each class of models?
o Approximation results of the form: if the real system is in class A, and we approximate it by a model from class B, we are guaranteed to get "good" results as long as we have "sufficient" data.
o Equivalences between techniques of the two sorts: for example, if we learn a policy of type A by direct method B, it is equivalent to learning a model of type C and computing its optimal controller.
o How to take advantage of uncertainty estimates in a learned model.
o Direct algorithms combine their knowledge of the dynamics and the goals into a single object, the policy. Thus, they may have more difficulty than indirect methods if the goals change (the "lifelong learning" question). Is this an essential difficulty? (A short illustrative sketch of the direct/indirect contrast follows at the end of this call.)
o Does the need for an online or incremental algorithm interact with the choice of direct or indirect methods?
There will be presentations at the workshop by both invited speakers and authors of accepted papers; in addition, we may schedule a poster session after the workshop. Contributions that argue a position, give an overview or review, or report recent work are all encouraged. 3 hardcopies of extended abstracts or full papers no longer than 15 pages should be sent to arrive by March 15th, 1997 to Geoff Gordon (address below). Please also email a URL that points to your submission to ggordon at cs.cmu.edu by the same date. Accepted papers will be included in the hardcopy workshop proceedings (the ICML-97 style file will be available for final formatting). The URLs will be used to create an electronic proceedings. We would like the electronic proceedings to contain online copies of slides, posters, etc. in addition to the papers.
Important Dates:
March 15, 1997: Extended abstracts and papers due
April 10, 1997: Notification of acceptance
May 1, 1997: Camera-ready copy of papers due
July 12, 1997: Workshop

Organizers:
Chris Atkeson (cga at cc.gatech.edu), College of Computing, Georgia Institute of Technology, 801 Atlantic Drive, Atlanta, GA 30332-0280
Geoff Gordon (ggordon at cs.cmu.edu), Computer Science Department, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213-3891, (412) 268-3613, (412) 361-2893

Contact: Geoff Gordon (ggordon at cs.cmu.edu)
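To make the direct/indirect distinction debated above concrete, here is a generic side-by-side of the two update styles for a tiny tabular problem, in Python. This is textbook material (Q-learning versus certainty-equivalence model learning plus planning), not any workshop contributor's algorithm; all names and constants are illustrative.

from collections import defaultdict

alpha, gamma = 0.1, 0.9

# Direct (model-free): fold each sampled transition straight into a
# value function; no model of the dynamics is ever built.
Q = defaultdict(float)
def q_update(s, a, r, s2, actions):
    best = max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])

# Indirect (model-based): accumulate a transition/reward model, then
# plan against it (one sweep of value iteration shown).
counts = defaultdict(lambda: defaultdict(int))
rewards = defaultdict(list)
def model_update(s, a, r, s2):
    counts[(s, a)][s2] += 1
    rewards[(s, a)].append(r)

def plan_sweep(V, states, actions):
    for s in states:
        vals = []
        for a in actions:
            n = sum(counts[(s, a)].values())
            if n == 0:
                continue
            r_hat = sum(rewards[(s, a)]) / n
            vals.append(r_hat + gamma * sum(c / n * V.get(s2, 0.0)
                                            for s2, c in counts[(s, a)].items()))
        if vals:
            V[s] = max(vals)
    return V

# One observed transition, consumed both ways:
states, actions = [0, 1], [0, 1]
V = {s: 0.0 for s in states}
q_update(0, 1, 1.0, 1, actions)          # direct: nudges Q[(0, 1)] up to 0.1
model_update(0, 1, 1.0, 1)
V = plan_sweep(V, states, actions)       # indirect: sets V[0] to 1.0 at once
print(Q[(0, 1)], V[0])

The asymmetry visible in the printout is one version of the data-efficiency question raised above: the model-based update extracts more from a single transition, at the cost of storing and planning over a model.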
================================================================
CALL FOR PAPERS
AUTOMATA INDUCTION, GRAMMATICAL INFERENCE, AND LANGUAGE ACQUISITION
Workshop at the Fourteenth International Conference on Machine Learning (ICML-97) Nashville, Tennessee July 12, 1997

The Automata Induction, Grammatical Inference, and Language Acquisition Workshop will be held on Saturday, July 12, 1997 during the Fourteenth International Conference on Machine Learning (ICML-97), which will be co-located with the Tenth Annual Conference on Computational Learning Theory (COLT-97) at Nashville, Tennessee from July 8 through July 12, 1997. Additional information on ICML-97 and COLT-97 can be found at: http://cswww.vuse.vanderbilt.edu/~mlccolt/

Objectives
Machine learning of grammars, variously referred to as automata induction, grammatical inference, grammar induction, and automatic language acquisition, finds a variety of applications in syntactic pattern recognition, adaptive intelligent agents, diagnosis, computational biology, systems modelling, prediction, natural language acquisition, data mining and knowledge discovery. The workshop seeks to bring together researchers working on different aspects of machine learning of grammars in a number of different (and until now, relatively isolated) areas, including neural networks, pattern recognition, computational linguistics, computational learning theory, automata theory, and language acquisition, for a fruitful exchange of relevant recent research results.

Workshop Format
The workshop will consist of 3--5 invited talks offering different perspectives on machine learning of grammars, interspersed with short (10--15 minute) presentations of accepted papers. The workshop schedule will allow ample time for informal discussion.

Topics of Interest
Topics of interest include, but are not limited to:
Different models of grammar induction: e.g., learning from examples, learning using examples and queries, incremental versus non-incremental learning, distribution-free models of learning, learning under various distributional assumptions (e.g., simple distributions).
Theoretical results in grammar induction: e.g., impossibility results, complexity results, characterizations of representational and search biases of grammar induction algorithms.
Algorithms for induction of different classes of languages and automata: e.g., regular, context-free, and context-sensitive languages, interesting subsets of the above under additional syntactic constraints, tree and graph grammars, picture grammars, multi-dimensional grammars, attributed grammars, etc.
Empirical comparison of different approaches to grammar induction.
Demonstrated or potential applications of grammar induction in natural language acquisition, computational biology, structural pattern recognition, adaptive intelligent agents, systems modelling, and other domains.

Submission Guidelines
Full paper submissions are highly recommended, although extended abstracts will also be considered. The manuscript should be no more than 10 pages long when formatted for generic 8-1/2 x 11 inch pages using the formatting macros and templates available at: http://www.aaai.org/Publications/Templates/macros-link.html Postscript versions of the manuscripts should be emailed, to arrive by March 15, 1997, to: honavar at cs.iastate.edu, pdupont at cs.cmu.edu, giles at research.nj.nec.com.

Deadlines
Deadline for submission of manuscripts: March 15, 1997
Decisions regarding acceptance or rejection emailed to authors: April 1, 1997
Final versions of the papers due: April 15, 1997

Selection Criteria
Selection of submitted papers will be on the basis of review by at least two referees. Review criteria include: originality, technical soundness, clarity of presentation, relevance of the results, and potential appeal to the workshop audience.

Workshop Proceedings
Workshop proceedings will be published in electronic form on the world-wide web. Authors of a selected subset of accepted workshop papers might also be invited to submit revised and expanded versions of their papers for possible publication in a special issue of a journal or an edited collection of papers to be published after the conference.

Workshop Organizers:
Dr. Vasant Honavar, Department of Computer Science, 226 Atanasoff Hall, Iowa State University, Ames, IA 50011, honavar at cs.iastate.edu
Dr. Pierre Dupont, Department of Computer Science, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, pdupont at cs.cmu.edu
Dr. Lee Giles, NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, giles at research.nj.nec.com
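As a concrete entry point to the automata-induction topics above, here is the standard prefix-tree acceptor in Python: the trivial DFA that accepts exactly a positive sample, which state-merging algorithms such as RPNI then generalize by merging compatible states. This is a generic textbook construction, not any workshop submission, and the merging step itself is omitted.

def prefix_tree_acceptor(positive):
    """Build the PTA of a positive sample: (transitions, accepting states).
    State 0 is the start state; states are created along each word's prefix."""
    trans, accepting, next_state = {}, set(), 1
    for word in positive:
        state = 0
        for sym in word:
            if (state, sym) not in trans:
                trans[(state, sym)] = next_state
                next_state += 1
            state = trans[(state, sym)]
        accepting.add(state)
    return trans, accepting

def accepts(trans, accepting, word):
    state = 0
    for sym in word:
        if (state, sym) not in trans:
            return False
        state = trans[(state, sym)]
    return state in accepting

trans, acc = prefix_tree_acceptor(["ab", "abb", "ba"])
print(accepts(trans, acc, "ab"), accepts(trans, acc, "bb"))  # True False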
================================================================
CALL FOR PAPERS
ML APPLICATION IN THE REAL WORLD: METHODOLOGICAL ASPECTS AND IMPLICATIONS
Workshop at the Fourteenth International Conference on Machine Learning (ICML-97) Nashville, Tennessee July 12, 1997
WWW-page: http://www.aifb.uni-karlsruhe.de/WBS/ICML97/ICML97.html

Description
Application of Machine Learning techniques to solve real-world problems has gained more and more interest over the last decade. In spite of this attention, the ML application process still lacks a generally accepted terminology, let alone commonly accepted approaches or solutions. Several initiatives, both conferences and workshops, have been held on this topic. The ICML-93 workshop of Langley and Kodratoff on ML applications, as well as the ICML-95 workshop on 'Applying Machine Learning in Practice' by Aha, Catlett, Hirsh and Riddle, form the successful precedents of this workshop. The focus of the ICML-95 workshop was the 'characterization of the expertise used by machine learning experts during the course of applying learning algorithms to practical applications'. In the last year, significant research effort has been devoted to applications of learning algorithms. A reflection of this is the recent interest in Data Mining and KDD, as seen for instance in the international KDD conference (1995, Montreal, and 1996, Portland, OR). Since the application of ML techniques is also very relevant to the KDD community, it is not surprising that this is also reflected in those conferences. The workshop will draw along the lines of all these events, but will emphasise the processes underlying the application of ML in practice. Methodological issues, as well as issues concerning the kinds and roles of knowledge needed for applying ML, will form a major focus of the workshop. It aims at building upon some of the results of discussions at the ICML-95 workshop on "Application of ML techniques in practice" and at the same time tries to move forward to a consensus regarding a methodology for the application of learning algorithms in practice.

The workshop "ML Application in the real world; methodological aspects and implications" focuses on the methodological principles underlying successful application of ML techniques. Apart from powerful ML algorithms, good application strategies have to be defined. This implies a thorough understanding of the initial problem definition and its relation to the chain of tasks that leads towards a successful solution. Therefore a two-dimensional approach regarding the process of ML application is needed. The first dimension deals with the whole cycle of analysing the setting, problem definition, knowledge extraction, database interaction, learning, evaluation and iteration in real-world domains, while the second dimension forms an "inner loop" to this cycle, in which the problem definition is used to refine the task at hand and map it onto available algorithms for learning, pre- and postprocessing, and evaluation of results. Concerning these issues there is no clear distinction between ML and KDD, and therefore this workshop will be equally interesting for researchers from both communities. This workshop does not focus on (methods for) developing new algorithms, and case studies will only contribute to the workshop discussion if general application principles can be derived from them.

Intended Participants and Audience
The workshop primarily aims at scientists and practitioners who apply ML and related techniques to solve problems in the real world. To attend the workshop, one should submit a paper, a one-page extended abstract, or a statement of interest. In case of too much interest from participants, the program committee will select participants on the basis of workshop relevance. Ideally, the audience will contain a mix of university and industrial participants.

Workshop program
The program for this one-day workshop will have a maximum of 10 presentations. Some invited presentations will be part of the program. Presentations will take 30 minutes (15-20 minutes presentation and 10-15 minutes discussion). Speakers are asked to focus their presentation on the basis of a topic list that will be compiled during the review process. To foster discussion and debate, accepted papers will be given to a critic beforehand; by these means critics will be prepared to debate presentations. At the end of the workshop, there will be a plenary discussion session. Accepted papers will be distributed via the workshop WWW-page before the workshop, to stimulate the discussion. Accepted papers will also be published in workshop proceedings.

Papers are welcomed concerning (but not limited to) the following topics:
* Methodological approaches focusing on the process of ML application, or sub-processes such as problem definition and refinement, application design, data acquisition, pre- and postprocessing, task analysis, etc.
* Making explicit the kinds and roles of knowledge that are necessary for execution of ML applications.
* Matching of problem definitions to specific techniques and multi-technique configurations.
* Impact of methodologies for empirical research on the application of ML techniques.
* Identification of the relation of different ML strategies to given problem types, and identification of the characteristics that play a role in describing the initial problems.
* Embedding of the ML application process in more general methodologies for (knowledge) system development.
* Frameworks for supporting (ML-)novices and experts in setting up applications and reusing previous applications or application parts.
* Case studies, describing successful ML applications, that abstract from the implementational aspects and focus on identification of the choices that are made when designing the application, i.e. the (meta-)knowledge involved, etc.
* Comparison of the process of ML application with processes for application of related techniques (e.g. statistical data analysis).

Submission guidelines
* Submitted papers should not exceed 3500 words or 8 pages in Times Roman 12pt.
* The title page should contain the paper title, author name(s), affiliations and full addresses, including the e-mail address of the corresponding author, as well as the paper abstract and at most five keywords.
* Papers are reviewed by at least three members of the program committee on their relevance for the workshop discussions.
* For preparation of the camera-ready copies, an ICML style file will be available.

Tentative Submission Schedule
* Submission deadline: March 22, 1997
* Notification of acceptance: April 9, 1997
* Camera-ready copy + PS-file: May 1, 1997
* Papers available on WWW: June 15, 1997
* Workshop date: July 12, 1997

Electronic paper submissions are preferred. Please send your submission to: MLApplic.ICML at ato.dlo.nl. If Postscript printing is not available, paper submissions (4 hardcopies, preferably double-sided) can be sent to: ICML Workshop "ML APPLICATION IN THE REAL WORLD", p/o ATO-DLO, Floor Verdenius, Postbus 17, 6700 AA Wageningen, Netherlands

Program Committee
Dr. Pieter Adriaans (Syllogic, Houten, The Netherlands)
Prof. C. Brodley (Purdue University, West Lafayette, IND, USA)
Prof. David Hand (Open University, Milton Keynes, United Kingdom)
Prof. Yves Kodratoff (LRI, Paris, France)
Dr. Vassilis Moustakis (Technical University of Crete, Chania, Greece)
Prof. Gholamreza Nakhaeizadeh (Daimler Benz AG Research, Ulm, Germany)
Dr. R. Kohavi (Silicon Graphics, Mountain View, CA, USA)
Dr. Enric Plaza i Cervera (IIIA-CSIC, Bellaterra, Catalonia, Spain)
Dr. Foster J. Provost (NYNEX Science & Technology, White Plains, NY, USA)
Dr. P. Riddle (University of Auckland, New Zealand)
Dr. Celine Rouveirol (LRI, Paris, France)
Prof. Derek Sleeman (University of Aberdeen, United Kingdom)
Drs. Maarten van Someren (SWI, Amsterdam, The Netherlands)
Prof. Rudi Studer (University of Karlsruhe, Germany)

Organising Committee
Robert Engels (University of Karlsruhe, Germany) engels at aifb.uni-karlsruhe.de
Juergen Herrmann (University of Dortmund, Germany) Herrmann at jupiter.informatik.uni-dortmund.de
Bob Evans (RR Donnelley, Gallatin TN, USA) BOB.EVANS at rrd.com
Floor Verdenius (ATO-DLO, Wageningen, The Netherlands) F.Verdenius at ato.dlo.nl
================================================================

From jagota at cse.ucsc.edu Wed Jan 29 21:55:35 1997 From: jagota at cse.ucsc.edu (Arun Jagota) Date: Wed, 29 Jan 1997 18:55:35 -0800 Subject: NCS e-journal solicits collections Message-ID: <199701300255.SAA12024@bristlecone.cse.ucsc.edu>

Neural Computing Surveys solicits collections. In addition to regular survey papers, the e-journal NCS solicits manuscripts of a second kind: /collections/.
Collections are to comprise a set of consistently formulated and formatted items on some subarea of neural computing or related fields. Each item is to be presented succinctly. Collections are expected to be exhaustive in coverage on the topic they serve, and are intended to provide a quick way for readers to check what is known on the topic. Collections may range in length from very short (a few pages) to long, depending on the nature of the topic covered. Examples:
Annotated bibliography on pruning algorithms for feedforward nets
Collection of VC dimension results on neural nets
Collection of neural activation functions used in ANNs
Collection of learning rules used in associative memories
Bibliography on self-organizing maps
Collection of Lyapunov functions used in recurrent nets

For more information about the e-journal, including submission guidelines, visit http://www.icsi.berkeley.edu/~jagota/NCS or http://www.dcs.rhbnc.ac.uk/NCS or contact jagota at cse.ucsc.edu

Arun Jagota, Dept of Computer Science, University of California, Santa Cruz, CA

From pe_keller at ccmail.pnl.gov Fri Jan 31 18:09:14 1997 From: pe_keller at ccmail.pnl.gov (Paul E Keller) Date: Fri, 31 Jan 1997 15:09:14 -0800 Subject: Career Opportunities at Battelle Message-ID: <00020165.@ccmail.pnl.gov>

Research Scientist/Engineer
Principal Scientist/Engineer
Battelle, Columbus, OH, USA

Battelle, a leading provider of technology solutions, has an immediate need for additional staff (one to two positions) to join its cognitive controls and systems initiative at its Columbus, Ohio, USA facility. The new position(s) will provide technical support for a multi-year corporate project applying adaptive/cognitive information technology to applications in emerging technology areas. The positions require an M.S./Ph.D. in Computer and Information Science, Electrical Engineering, or a related field, with a specialization or experience in artificial neural networks, fuzzy logic, evolutionary computing/genetic algorithms, and statistical methods. Oral, written, and interpersonal communication skills are essential to this highly interactive position. Applicant(s) selected will be subject to a security investigation and must meet eligibility requirements for access to classified information. Battelle offers competitive salaries, comprehensive benefits, and opportunities for professional development. Qualified candidates are invited to send their resumes to Dr. Steve Rogers or Dr. Paul Keller, Battelle, 505 King Avenue, Columbus, OH 43201-2693, or e-mail them to rogers at battelle.org or pe_keller at pnl.gov. An Equal Opportunity/Affirmative Action Employer M/F/D/V. To find out more about Battelle, see http://www.battelle.org.
Paul E. Keller, Ph.D. | Battelle Memorial Institute | pe_keller at pnl.gov
Sr Research Scientist | 505 King Avenue, Columbus, OH 43201-2693
Tel: 614-424-7338 | Fax: 614-424-7400
http://www.emsl.pnl.gov:2080/people/bionames/keller_pe.html

From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Fri Jan 31 23:24:13 1997 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Fri, 31 Jan 97 23:24:13 EST Subject: research position: neural nets for OCR Message-ID: <13158.854771053@DST.BOLTZ.CS.CMU.EDU>

Research Position in Asian Language OCR

The Imaging Systems Lab of the Robotics Institute at Carnegie Mellon University is seeking candidates for a research position in optical character recognition of Asian languages, particularly Chinese and Korean. The nature of the position is flexible. The ideal candidate would be a recent PhD in Computer Science with experience in neural network pattern recognition techniques, looking for a one-year postdoctoral appointment. However, persons with at least a BS in Computer Science or a related field and expertise in neural networks, pattern recognition, or artificial intelligence are invited to apply for a position as a research programmer on the project. Strong linear algebra and C/C++ programming skills are required. To apply, send a curriculum vitae to: Dr. Robert Thibadeau, Imaging Systems Lab, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213-3891