From ronnyk at CS.Stanford.EDU Sat Dec 3 10:26:20 1994
From: ronnyk at CS.Stanford.EDU (Ronny Kohavi)
Date: Sat, 3 Dec 1994 23:26:20 +0800
Subject: MLC++: Machine Learning Utilities Available
Message-ID: <9412040726.AA14178@starry.Stanford.EDU>

MLC++ Utilities
_______________

MLC++ is a Machine Learning library of C++ classes being developed at Stanford. More information about the library can be obtained at URL http://robotics.stanford.edu:/users/ronnyk/mlc.html.

We are now releasing the object code for some utilities written using MLC++. These will run on Suns, either on SunOS or Solaris.

Included in the current release are the following induction algorithms:
1. Majority (baseline).
2. Basic ID3 for inducing decision trees. The output can be sent to a mail server to get a postscript picture of the resulting tree. Very useful for looking at the final tree and for teaching.
3. Basic nearest neighbor.
4. Decision Table.
5. Interface to C4.5 (for utilities below).
6. Feature subset selection : wraps around any of the above and selects a good subset of the features, usually improving performance and comprehensibility.

Utilities released are:
1. Cross validation : cross-validate a file and any of the above induction algorithms. Allows regular or stratified CV. You can also generate the cross-validation files to compare your own induction algorithm.
2. Learning curve : generate a learning curve for any of the above induction algorithms.
3. Project : project the data onto a subset of attributes.
4. Convert : convert nominal attributes to unary encoding or binary encoding.

Quick starter guide:
--------------------
The MLC++ utilities are accessible by anonymous ftp to starry.stanford.edu:pub/ronnyk/mlc/
There are currently two kits, one for Solaris (MLCutil-solaris.tar.Z) and one for SunOS (MLCutil-sunos.tar.Z).

   cd <dir>
   zcat <kit> | tar xvf -

where <dir> is the directory under which the mlc directory will be built (e.g., /usr/local), and <kit> is the kit appropriate for your machine. The documentation is in utils.ps. The environment variable MLCDIR must be set to the directory where the utilities are installed.

Databases in the MLC++ format, which is very similar to the C4.5 format, can be found in starry.stanford.edu:pub/ronnyk/mlc/db. Most datafiles are converted from the repository at UC Irvine.

Questions or help requests related to the utilities should be addressed to mlcpp-help at CS.Stanford.EDU

-- Ronny Kohavi (ronnyk at CS.Stanford.EDU, http://robotics.stanford.edu/~ronnyk)


From marcus at hydra.ifh.de Mon Dec 5 04:37:52 1994
From: marcus at hydra.ifh.de (Marcus Speh)
Date: Mon, 5 Dec 94 10:37:52 +0100
Subject: Paper - GAUGE THEORY OF THINGS ALIVE AND UNIVERSAL DYNAMICS
Message-ID: <9412050937.AA21444@hydra.ifh.de>

The following paper is now available via anonymous FTP (see below):

GAUGE THEORY OF THINGS ALIVE AND UNIVERSAL DYNAMICS
BY G. MACK, Theoretical Physics, University of Hamburg, Germany
SOURCE FORMAT: 13 pages, latex, uses fleqn.sty (can be removed without harm)
REPORT-NO: DESY 94-184

ABSTRACT
Positing complex adaptive systems made of agents with relations between them that can be composed, it follows that they can be described by gauge theories similar to elementary particle theory and general relativity. By definition, a universal dynamics is able to determine the time development of any such system without need for further specification. The possibilities are limited, but one of them - reproduction fork dynamics - describes DNA replication and is the basis of biological life on earth.
It is a universal copy machine and a renormalization group fixed point. A universal equation of motion in continuous time is also presented.

TO RETRIEVE, connect as USER "anonymous" with PASSWORD "<your userid>@<your site>"
" a) in Europe, to ftp.desy.de (131.169.30.42) and retrieve one of the files from directory "pub/outgoing" 203657 Dec 5 10:06 DESY_94-184.ps [PostScript, uncompressed] 60601 Dec 5 10:06 DESY_94-184.ps.gz [PostScript, compressed] 48393 Dec 1 14:48 DESY_94-184.tex [LaTeX source] OR in the US to b) ftp.scri.fsu.edu (144.174.128.34) and retrieve the LaTeX source file from directory "hep-lat/papers/9411": 48506 Nov 28 07:27 9411059 On the World-Wide Web, you can also access the URL http://www.desy.de/pub/outgoing/DESY_94-184.ps.gz [60KB] Inquiries and comments should be sent directly to the author, Prof. G. Mack  From wuertz at ips.id.ethz.ch Mon Dec 5 08:45:23 1994 From: wuertz at ips.id.ethz.ch (Diethelm Wuertz) Date: Mon, 05 Dec 1994 14:45:23 +0100 Subject: PASE 95: Parallel Applications in Statistics and Economics Message-ID: <9412051433.AA11623@sitter> 5th Anniversary First Announcement International Workshop on Parallel Applications in Statistics and Economics >> Non-linear Data Analysis << Trier - Mainz, Germany August 29 - September 2, 1995 ____________________________________________________________________________ PURPOSE OF THE WORKSHOP: The purpose of this workshop is to bring together researchers interested in innovative information processing systems and their applications in the areas of statistics, finance and economics. The focus will be on in-depth presentations of state-of-the-art methods and applications as well as on communicating current research topics. This workshop is intended for industrial and academic persons seeking new ways of comprehending the behavior of complex dynamic systems. The PASE'95 workshop is concerned with but not restricted to the following topics: o Applications in finance, economics, and natural science o Statistical tests for finding deterministic and chaotic behavior o Statistical tests for measuring stability and stationarity o Sampling and retrieving techniques for high frequency data o Modeling and analysis of non-linear multivariate time series o Statistical use of neural networks, genetic algorithms and fuzzy systems ____________________________________________________________________________ WORKSHOP SITE: The workshop will be held on a comfortable ship running from Trier to Mainz on the scenic rivers Mosel and Rhine in Germany. The best connection to Trier is to fly to Frankfurt and then to take the train. It will take about 2 1/2 hours. From Mainz there are direct trains to Frankfurt Airport (~ 30 minutes). Detailed traveling possibilities will be announced later. ____________________________________________________________________________ ACCOMMODATION AND REGISTRATION: All participants will be accommodated on the ship. The price of the accommodation in a double room for 4 nights is Sfr 870.- for the main deck (920.- for the upper deck) including breakfasts, lunches and dinners. The welcome and workshop dinner are also included. The service begins on Tuesday afternoon, August 29th, in Trier with registration and ends on Saturday morning, September 2nd in Mainz. Since the facilities of the ship are dedicated to the workshop the price is unique and no reductions can be provided for a shorter participation. Since the number of participants is limited to about 100 persons, early registration is recommended. All applications will be handled on the basis "first come first serve". Registration will only be accepted after a pre-payment of Sfr 300.-. 
The rest of the accommodation expenses of Sfr 570.- or 620.-, respectively, and the workshop fees must be transferred to our account

   PASE Workshop - Dr. Diethelm Wuertz
   Schweizerischer Bankverein, Zurich, Switzerland
   Number of Account: P0-206066.0

before May 31st, 1995. No currencies other than Swiss Francs will be accepted. If the remittance arrives later, we cannot guarantee your registration.
____________________________________________________________________________

WORKSHOP FEE AND PROCEEDINGS:

The workshop fee is Sfr 580.- for profit-making companies and Sfr 230.- for others. It includes the Proceedings, published as a Special Issue of the Journal "Neural Network World - International Journal on Neural and Mass-Parallel Computing and Information Systems".
____________________________________________________________________________

WORKSHOP SCHOLARSHIPS:

Some support and scholarships will be available for students and for a limited number of participants from "post-communist" European countries: please contact the organizers.
____________________________________________________________________________

ABSTRACTS AND DEADLINES:

Regular abstracts of one page (approximately 30 lines with 60 characters) must be submitted before December 1st, 1994. (For post-deadline papers please contact directly: .) All contributions will be refereed. The deadline for the full papers (about 10 pages) is April 1st, 1995. We cannot guarantee that papers submitted later will be printed in the proceedings. The proceedings will be available at the conference.

It is also possible to submit an extended abstract of 4 pages. From these contributions we will select, besides the regular invited speakers, 3 further speakers for an invited talk. The deadlines are the same as in the case of regular abstracts. If you plan a software or hardware demonstration, please contact the organizers.

Please send the abstracts and full papers to:

   Hynek Beran, ICS Prague
   Pod vodarenskou vezi 2
   182 07 PRAGUE 8, Czech Republic
   FAX: +42 2 858 57 89
   E-mail: pase at uivt.cas.cz
____________________________________________________________________________

ORGANIZATION:

The Workshop will be organized by the Interdisciplinary Project Center for Supercomputing (ETH Zurich), Olsen & Associates (Research Institute for Applied Economics, Zurich) and the Institute of Computer Science (Academy of Sciences, Prague).

W.M. van den Bergh, D.E. Baestaens, Erasmus University Rotterdam
Thilo von Czarnowski, Helaba Frankfurt
Michel M. Dacorogna, Olsen & Associates Zurich
Susanne Fromme, SMH Research Frankfurt
Heinz Muehlenbein, GMD Sankt Augustin
Gholamreza Nakkaeizadeh, Daimler Benz Forschung Ulm
Paul Ormerod, Henley Center for Forecasting London
Emil Pelikan, ICS Czech Academy of Sciences Prague
Heinz Rehkugler, University of Bamberg
Marco Tomassini, CSCS Manno
Dieter Wenger, Swiss Bank Corporation Basel
Diethelm Wuertz, IPS ETH Zurich
Hans Georg Zimmermann, Siemens AG, Munchen
____________________________________________________________________________

FURTHER INFORMATION:

Further information will be available from anonymous ftp "maggia.ethz.ch" (129.132.17.1) or the World Wide Web: "http://www.ips.id.ethz.ch/PASE/pase.html"
____________________________________________________________________________

REGISTRATION FORM - PASE '95:

....................................................................
Mr./Mrs.   First Name   Surname
....................................................................
Institution or Company
....................................................................
....................................................................
Mailing Address
....................................................................
E-mail
....................................................................
Phone   Fax

o Fees University Sfr 230.--            o Main Deck Sfr 870.-
o Fees Profit-making Company Sfr 580.-- o Upper Deck Sfr 920.-

CONTRIBUTION: I would like to present a talk and to submit a paper with the following title:
....................................................................
....................................................................
....................................................................

Please return this form as soon as possible to:

   Martin Hanf
   IPS CLU B2, ETH Zentrum
   CH-8092 Zurich, Switzerland
   E-Mail: pase at ips.id.ethz.ch
____________________________________________________________________________

PASE '95
--------------------------------------------------------------------------
PD Dr. Diethelm Wuertz
Interdisciplinary Project Center for Supercomputing
IPS CLU B3 - ETH Zentrum
CH-8092 ZURICH, Switzerland
Tel. +41-1-632.5567  Fax. +41-1-632.1104
--------------------------------------------------------------------------


From mbrown at aero.soton.ac.uk Mon Dec 5 05:08:01 1994
From: mbrown at aero.soton.ac.uk (Martin Brown)
Date: Mon, 5 Dec 94 10:08:01 GMT
Subject: neural-probability-fuzzy seminars
Message-ID: <7370.9412051008@aero.soton.ac.uk>

ADT '95
Date: April 3-5, 1995
Venue: BRUNEL - The University of West London
UNICOM Seminars Ltd., Brunel Science Park, Cleveland Road, Uxbridge, Middlesex, UB8 3PH, UK
Telephone: +44 1895 256484  FAX: +44 1895 813095

ADT 95 Sponsored by:
Department of Trade and Industry (DTI)
Institute of Mathematics and its Applications
IFSA
International Neural Networks Society
European Neural Networks Society
AI Watch
Journal of Neural Computing and Applications
Rapid Data Ltd

Call for Papers, Registration Information, and Abstract Form

An international team of experts describes and explores the most recent developments and research in Neural Networks, Bayesian Belief Networks, Modern Heuristic Search Methods and Fuzzy Logic. Presentations cover theoretical developments as well as industrial and commercial applications.

Main Topics

Neural Networks and their Applications: New Developments in Theory and Algorithms; Simulations and Hardware; Pattern Recognition; Medical Diagnosis.

Modern Heuristic Search Methods: Genetic Algorithms; Classifier Systems; Simulated Annealing; Tabu Search; Hybrid Techniques; Genetic and Evolutionary Programming.

Probabilistic Reasoning and Computational Learning: Bayesian Belief Networks; Computational Learning; Inductive Inference; Statistical Causality in Belief Networks; Applications in Computational Vision, Medicine, Legal Reasoning, Fault Diagnosis and Genetics; Model Selection.

Fuzzy Logic: Neural Fuzzy Programming; Algorithms and Applications; Fuzzy Control; Fuzzy Data Processing; Fuzzy Silicon Chips.

PROGRAMME COMMITTEE & SPEAKERS:

Neural Networks and their Applications: J G Taylor, King's College London (Chair); I Aleksander, Imperial College; M J Denham, University of Plymouth; B Schuermann, Siemens AG; A G Carr, ERA Technology; R Wiggins, DTI. Other Speakers include: S Hancock, Neural Technologies Ltd; J Stonham, Brunel University; J Austin,
Univ. of York; B Clements, Clement Neuronics.

Modern Heuristic Search Methods: V Rayward-Smith, University of East Anglia (Chair); I Osman, University of Kent; C R Reeves, University of Coventry; G D Smith, University of East Anglia. Other Speakers include: E Aarts, Philips Research, Netherlands; C Hughes, Logica; S Voss, Darmstadt Univ; D Schaaffer, Philips, USA; R Battiti, Trento Univ, Italy.

Probabilistic Reasoning & Computational Learning: A Gammerman, Royal Holloway University of London (Chair); J Pearl, UCLA; D Spiegelhalter, Medical Research Council; A Dempster, Harvard University; C S Wallace, Monash University (Australia); V Vapnik, AT&T Bell Labs; V Uspensky, University of Moscow. Other Speakers include: P Reverberi, Inst. di Analisi dei Sistemi, Rome; M Talamo, Univ degli Studi, Rome; S McClean, Ulster Univ; A Shen, Ind. Univ. of Moscow; J Kwaan, Marconi Simulation.

Fuzzy Logic: E H Mamdani, Queen Mary & Westfield College (Chair); J Baldwin, Bristol University; C Harris, University of Southampton; H Zimmerman, RWTH Aachen. Other Speakers include: L Zadeh, Univ of Calif., Berkeley; R Weber, MIT GmbH; R John, De Montfort Univ.; H Bersini, IRIDIA, Belgium.

HOW TO CONTRIBUTE:

PRESENTATIONS: You may wish to present a paper at any of these sessions. Please indicate your choice of session with your submission. Please ensure we receive the title of your proposed talk and a short abstract (30-40 words) by 14 January 1995. The Organising Committee will make their selections and we will advise you by 14 February 1995 whether your presentation can be included. Written papers for the proceedings are also welcome. Please include your telephone and fax numbers and postal address in all correspondence.

PUBLICATIONS: Preliminary proceedings of this symposium and also a refereed volume will be published. Deadlines for Abstracts: 14 January 1995 - acceptance of paper notified by 14 February 1995. Extended abstracts (1000-1500 words) by 15 March 1995. Submission of papers for refereed proceedings (3000-5000 words) by 31 August 1995.

REGISTRATION: The conference program and registration material will be available in late February 1995. To ensure receiving your copy, complete the attached form and return it to ADT 95. Registration Fee: GBP175 (Academic), GBP350 (Industrial), GBP500-800 (Exhibition), and GBP1000 Commercial Sponsors; preliminary proceedings included in the fee. Late registration after 28 February 1995 will be charged GBP25 extra.
-------------------------------------------------------------------
Please return the form below to: ADT 95, UNICOM Seminars Ltd., Brunel Science Park, Cleveland Road, Uxbridge, Middlesex, UB8 3PH. Telephone: +44 1895 256484. FAX: +44 1895 813095. Email: adt95 at unicom.demon.co.uk.
--------------------------------------------------------------------
Name:
Organization:
Position / Department:
Address:
Telephone:
Facsimile:
E-Mail:

Please send me information about the event:
[ ] I wish to present a paper, title and abstract attached
[ ] I am interested in exhibiting
[ ] Send me more information about ADT95

This event is part of Brunel University's Continuing Education Programme. If you fail to return this form, you may not receive further information.
From dayan at cs.TORONTO.EDU Mon Dec 5 17:40:37 1994
From: dayan at cs.TORONTO.EDU (Peter Dayan)
Date: Mon, 5 Dec 1994 17:40:37 -0500
Subject: The Helmholtz Machine and the Wake-Sleep Algorithm
Message-ID: <94Dec5.174037edt.317@neuron.ai.toronto.edu>

FTP-host: ftp.cs.toronto.edu
FTP-filename: pub/dayan/wake-sleep.ps.Z
FTP-filename: pub/dayan/helmholtz.ps.Z

Two papers about stochastic and deterministic Helmholtz machines are available in compressed postscript by anonymous ftp from ftp.cs.toronto.edu - abstracts are given below. We regret that hardcopies are not available.

----------------------------------------------------------------------
pub/dayan/wake-sleep.ps.Z

The Wake-Sleep Algorithm for Unsupervised Neural Networks
Geoffrey E Hinton, Peter Dayan, Brendan J Frey and Radford M Neal

Abstract: We describe an unsupervised learning algorithm for a multilayer network of stochastic neurons. Bottom-up "recognition" connections convert the input into representations in successive hidden layers and top-down "generative" connections reconstruct the representation in one layer from the representation in the layer above. In the "wake" phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the "sleep" phase, neurons are driven by generative connections and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above.

Submitted for publication
----------------------------------------------------------------------
pub/dayan/helmholtz.ps.Z

The Helmholtz Machine
Peter Dayan, Geoffrey E Hinton, Radford M Neal and Richard S Zemel

Abstract: Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterised stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximising an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways.

Neural Computation, in press.
----------------------------------------------------------------------


From lazzaro at CS.Berkeley.EDU Mon Dec 5 17:44:49 1994
From: lazzaro at CS.Berkeley.EDU (John Lazzaro)
Date: Mon, 5 Dec 1994 14:44:49 -0800
Subject: No subject
Message-ID: <199412052244.OAA16160@boom.CS.Berkeley.EDU>

=========================== TR announcement =================================

REMAP: Recursive Estimation and Maximization of A Posteriori Probabilities
---Applications to Transition-based Connectionist Speech Recognition---

by H. Bourlard, Y. Konig & N. Morgan
Intl. Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94704
email: bourlard,konig,morgan at icsi.berkeley.edu

ICSI Technical Report TR-94-064

Abstract

In this report, we describe the theoretical formulation of REMAP, an approach for the training and estimation of posterior probabilities using a recursive algorithm that is reminiscent of the EM (Expectation Maximization) algorithm for the estimation of data likelihoods. Although very general, the method is developed in the context of a statistical model for transition-based speech recognition using Artificial Neural Networks (ANN) to generate probabilities for hidden Markov models (HMMs). In the new approach, we use local conditional posterior probabilities of transitions to estimate global posterior probabilities of word sequences given acoustic speech data. Although we still use ANNs to estimate posterior probabilities, the network is trained with targets that are themselves estimates of local posterior probabilities. These targets are iteratively re-estimated by the REMAP equivalent of the forward and backward recursions of the Baum-Welch algorithm to guarantee regular increase (up to a local maximum) of the global posterior probability. Convergence of the whole scheme is proven. Unlike most previous hybrid HMM/ANN systems that we and others have developed, the new formulation determines the most probable word sequence, rather than the utterance corresponding to the most probable state sequence. Also, in addition to using all possible state sequences, the proposed training algorithm uses posterior probabilities at both local and global levels and is discriminant in nature.

The postscript file of the full technical report (66 pages) can be copied from our (anonymous) ftp site as follows:

   ftp ftp.icsi.berkeley.edu
   username= anonymous
   passw= your email address
   cd pub/techreports/1994
   binary
   get tr-94-064.ps.Z


From esann at dice.ucl.ac.be Tue Dec 6 04:33:59 1994
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Tue, 6 Dec 1994 11:33:59 +0200
Subject: Neural Processing Letters Vol.1 No.2
Message-ID: <9412061028.AA06065@ns1.dice.ucl.ac.be>

The following articles may be found in the second issue of the "Neural Processing Letters" journal (November 1994):

- Good teaching inputs do not correspond to desired responses in ecological neural networks
  S. Nolfi, D. Parisi
- Recurrent Sigma-Pi-linked back-propagation network
  T.W.S. Chow, G. Fei
- An evolutionary approach to associative memory in recurrent neural networks
  S. Fujita, H. Nishimura
- Equivalence between some dynamical systems for optimization
  K. Urahama
- VISOR: schema-based scene analysis with structured neural networks
  W.K. Leow, R. Miikkulainen
- Nonlinear neural controller with neural Smith predictor
  Y. Tan, A. Van Cauwenberghe
- A neural method for geographical continuous field estimation
  D. Pariente, S. Servigne, R. Laurini

Neural Processing Letters is a rapid-publication journal, aimed at publishing new ideas, original developments and work in progress in all aspects of the field of Artificial Neural Networks. The delay between submission of papers and publication is at most about 3 months. Please don't hesitate to ask for the instructions for authors, in order to submit your work to Neural Processing Letters.
We remind you that all information concerning this journal may be found on the following servers:

- FTP server: ftp.dice.ucl.ac.be
  directory: /pub/neural-nets/NPL
  login: anonymous
  password: your_e-mail_address
- WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html

For any information (subscriptions, instructions for authors, free sample copies...), you can also directly contact the publisher:

   D facto publications
   45 rue Masui
   B-1210 Brussels, Belgium
   Phone: + 32 2 245 43 63
   Fax: + 32 2 245 46 94

_____________________________
D facto publications - conference services
45 rue Masui
1210 Brussels, Belgium
tel: +32 2 245 43 63
fax: +32 2 245 46 94
_____________________________


From mario at physics.uottawa.ca Tue Dec 6 09:18:06 1994
From: mario at physics.uottawa.ca (Mario Marchand)
Date: Tue, 6 Dec 94 10:18:06 AST
Subject: Neural net paper available by anonymous ftp
Message-ID: <9412061418.AA24196@physics.uottawa.ca>

The following paper, which was presented at the NIPS'94 conference, is available by anonymous ftp at:

   ftp://dirac.physics.uottawa.ca/usr2/ftp/pub/tr/marchand

FileName: nips94.ps
Title: Learning Stochastic Perceptrons Under k-Blocking Distributions
Authors: Marchand M. and Hadjifaradji S.

Abstract: We present a statistical method that PAC learns the class of stochastic perceptrons with arbitrary monotonic activation function and weights $w_i \in \{-1, 0, +1\}$ when the probability distribution that generates the input examples is a member of a family that we call {\em k-blocking distributions\/}. Such distributions represent an important step beyond the case where each input variable is statistically independent since the 2k-blocking family contains all the Markov distributions of order k. By stochastic perceptron we mean a perceptron which, upon presentation of input vector $\x$, outputs~1 with probability $f(\sum_i w_i x_i - \theta)$. Because the same algorithm works for any monotonic (nondecreasing or nonincreasing) activation function $f$ on Boolean domain, it handles the well-studied cases of sigmo\"{\i}ds and the ``usual'' radial basis functions.

ALSO: you will find other papers co-authored by Mario Marchand in this directory. The text file Abstracts-mm.txt contains a list of abstracts of all the papers.

PLEASE: communicate to me any printing or transmission problems. Any comments concerning these papers are very welcome.

----------------------------------------------------------------
|  UUU        UUU   Mario Marchand                             |
|  UUU        UUU   -----------------------------              |
|  UUU  OOOOOOOOOOOOOOOO   Department of Physics               |
|  UUU  OOO  UUU   OOO     University of Ottawa                |
|  UUUUUUUUUUUUUUUU  OOO   150 Louis Pasteur street            |
|       OOO          OOO   PO BOX 450 STN A                    |
|       OOOOOOOOOOOOOOOO   Ottawa (Ont) Canada K1N 6N5         |
|                                                              |
| ***** Internet E-Mail: mario at physics.uottawa.ca ********** |
| ***** Tel: (613)564-9293 ------------- Fax: 564-6712 *****   |
----------------------------------------------------------------


From J.Vroomen at kub.nl Tue Dec 6 12:58:06 1994
From: J.Vroomen at kub.nl (Jean Vroomen)
Date: 6 Dec 94 12:58:06 MET
Subject: PhD research position in Cogn Psy/Comput Linguistics at Tilburg
Message-ID:

Predoctoral research associate position in cognitive psychology/computational linguistics (Ph.D position, 4 year contract)
Tilburg University

A predoctoral position in cognitive psychology/computational linguistics is available immediately at Tilburg University.
The focus of research will be on the development of a text-to-speech conversion system with special emphasis on modelling reading acquisition and developmental dyslexia. The development of the model and the simulations will be conducted together with behavioral experiments. In the project, a strong cross-linguistic approach is taken. Comparative models will be built for English, French, and Dutch. This approach furthermore includes the development of a computational measure of the complexity of these writing systems (`orthographic depth').

Candidates with training in cognitive psychology or cognitive neuropsychology, combined with computational (connectionist) modelling, are preferred. Applicants should send a curriculum vitae and a brief description of fields of interest to:

   Professor Beatrice de Gelder
   Department of Psychology
   Tilburg University
   Warandelaan, 2
   PO Box 90153
   5000 LE Tilburg, The Netherlands
   email: b.degelder at kub.nl


From FRYRL at f1groups.fsd.jhuapl.edu Wed Dec 7 15:30:00 1994
From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.)
Date: Wed, 07 Dec 94 12:30:00 PST
Subject: New Neuroprose Entry
Message-ID: <2EE61B8E@fsdsmtpgw.fsd.jhuapl.edu>

A preprint of a manuscript entitled "Observer-participant models of neural computation", which has been accepted by the IEEE Trans. Neural Networks, is being made available via FTP from the Neuroprose directory. Details for retrieving this article and associated figures follow the abstract below:

ABSTRACT
Observer-participant models of neural computation

A model is proposed in which the neuron serves as an information channel. Channel distortion occurs since the mapping from input Boolean codes to output codes is many-to-one, in that neuron outputs consist of just two distinguished states. Within the described model, the neuron performs a decision-making function. Decisions are made regarding the validity of a question passively posed by the neuron to its environment. This question becomes defined through learning; hence learning is viewed as the process of determining an appropriate question based on supplied input ensembles. An application of the Shannon information measures of entropy and mutual information, taken together in the context of the proposed model, leads to the Hopfield neuron model with conditionalized Hebbian learning rules implemented through a simple modification to Oja's learning equation. Neural decisions are shown to be based on a sigmoidal transfer characteristic or, in the limit as computational temperature tends to zero, a maximum likelihood decision rule. The described work is contrasted with the information-theoretic approach of Linsker.

The paper is available in two files from archive.cis.ohio-state.edu in the /pub/neuroprose subdirectory. The file names are

   fry.maxmut.ps.Z      (compressed postscript manuscript, 433927 bytes)
   fry.maxmut_figs.ps.Z (compressed postscript figures, 622321 bytes)

Robert L. Fry
Johns Hopkins University / Applied Physics Laboratory
Johns Hopkins Road
Laurel, MD 20723
robert_fry at jhuapl.edu


From rohwerrj at helios.aston.ac.uk Wed Dec 7 12:21:49 1994
From: rohwerrj at helios.aston.ac.uk (rohwerrj)
Date: Wed, 7 Dec 1994 17:21:49 +0000
Subject: 2 tech reports on n-tuple classifiers
Message-ID: <22434.9412071721@sun.aston.ac.uk>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/morciniec.ntup_bench.ps.Z
FTP-filename: /pub/neuroprose/rohwer.ram_bayes.ps.Z

The following two tech reports have been placed on the Neuroprose ftp site: archive.cis.ohio-state.edu (128.146.8.52) in pub/neuroprose/
Copies are also available on the Aston University ftp site: cs.aston.ac.uk (134.151.52.106) in pub/docs/
-----------------------------------------------------------------
morciniec.ntup_bench.ps.Z
The n-tuple Classifier: Too good to Ignore (11 Pages)
Michal Morciniec and Richard Rohwer

The n-tuple classifier is compared to 23 other methods on 11 large datasets from the European Community StatLog project.
-----------------------------------------------------------------
rohwer.ram_bayes.ps.Z
Two Bayesian treatments of the n-tuple recognition method (8 pages)
Richard Rohwer

Two probabilistic interpretations of the n-tuple recognition method are put forward in order to allow this technique to be analysed with the same Bayesian methods used in connection with other neural network models.
-----------------------------------------------------------------
Sorry, no hardcopies available.

Three cheers for Jordan Pollack!!!

Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University, Aston Triangle
Birmingham B4 7ET, ENGLAND
Tel: (44 or 0) (21) 359-3621 x4688
FAX: (44 or 0) (21) 333-6215
rohwerrj at uk.ac.aston.cs


From mike at PARK.BU.EDU Tue Dec 6 20:51:49 1994
From: mike at PARK.BU.EDU (Michael Cohen)
Date: Tue, 6 Dec 1994 20:51:49 -0500
Subject: WCNN'95 Call for Papers & Meeting Info
Message-ID: <199412070151.UAA25340@cns.bu.edu>

WORLD CONGRESS ON NEURAL NETWORKS
RENAISSANCE HOTEL, WASHINGTON, DC, USA
July 17-21, 1995

CALL FOR PAPERS!
DEADLINE FOR PAPER SUBMISSION: February 10, 1995.

Registration Fees:

Category       Pre-registration        Pre-registration        On-Site
               prior to                prior to
               February 10, 1995       June 16, 1995
INNS Member    $170.00                 $250.00                 $350.00
Non-member**   $270.00                 $380.00                 $480.00
Student***     $ 85.00                 $110.00                 $135.00

**Registration fee includes 1995 membership and a one (1) year subscription to the Journal Neural Networks.
***Student registration must be accompanied by a letter of verification from department chairperson. Any student registration received with no verification letter will be processed at the higher member or non-member fee, depending on current membership status. Copies of student identification cards are NOT acceptable. This also applies to on-site registration.

ORGANIZING COMMITTEE:
John G. Taylor, General Chair
Walter J. Freeman
Harold Szu
Rolf Eckmiller
Shun-ichi Amari
David Casasent

INNS OFFICERS:
President: Walter J. Freeman
President-Elect: John G. Taylor
Past President: Harold Szu
Secretary: Gail Carpenter
Treasurer: Judith Dayhoff
Executive Director: R. K. Talley

GOVERNING BOARD:
Shun-ichi Amari, James A. Anderson, Andrew Barto, David Casasent, Leon Cooper, Rolf Eckmiller, Kunihiko Fukushima, Stephen Grossberg, Mitsuo Kawato, Christof Koch, Bart Kosko, Christoph von der Malsburg, Alianna Maren, Paul Werbos, Bernard Widrow, Lotfi A. Zadeh

PROGRAM COMMITTEE:
Shun-ichi Amari, James A. Anderson, Etienne Barnard, Andrew R. Barron,
Andrew Barto, Theodore Berger, Artie Briggs, Gail Carpenter, David Casasent, Leon Cooper, Judith Dayhoff, Rolf Eckmiller, Jeff Elman, Terrence L. Fine, Francoise Fogelman-Soulie, Walter J. Freeman, Kunihiko Fukushima, Patric Gallinari, Apostolos Georgopoulos, Stephen Grossberg, John B. Hampshire II, Michael Hasselmo, Robert Hecht-Nielsen, Akira Iwata, Jari Kangas, Bert Kappen, Christof Koch, Teuvo Kohonen, Kenneth Kreutz-Delgado, Clifford Lau, Sam Levin, Daniel S. Levine, William B. Levy, Christof von der Malsburg, Alianna Maren, Lina Massone, Lance Optican, Paul Refenes, Jeffrey Sutton, Harold Szu, John G. Taylor, Brian Telfer, Andreas Weigand, Paul Werbos, Hal White, Bernard Widrow, Daniel Wolpert, Kenji Yamanishi, Mona E. Zaghloul

CALL FOR PAPERS: PAPERS MUST BE RECEIVED BY FEBRUARY 10, 1995.

Authors must submit registration payment with papers to be eligible for the early registration fee. A $35 per paper processing fee must be enclosed in order for the paper to be refereed. Please make checks payable to INNS and include with submitted paper. For review purposes, please submit six (6) copies (1 original, 5 copies) plus 3 1/2" disk (see instructions below), four page limit, in English. $20 per page for papers exceeding (4) pages (do not number pages). Checks for over-length charges should be made out to INNS and must be included with the submitted paper.

Papers must be on 8 1/2" x 11" white paper with 1" margins on all sides, one column format, single spaced, in Times or similar type style of 10 points or larger, one side of paper only. Faxes are not acceptable. Centered at the top of the first page should be the complete title, author name(s), affiliation(s), and mailing address(es), followed by blank space, abstract (up to 15 lines), and text.

The following information MUST be included in an accompanying cover letter in order for the paper to be reviewed: full title of paper, corresponding author and presenting author name, address, telephone and fax numbers; Technical Session (see session topics) 1st and 2nd choices; oral or poster presentation preferred*; audio-visual requirements (for oral presentations only). Papers submitted which do not meet these requirements or for which insufficient funds are submitted will be returned.

For the first time, the proceedings of the 1995 World Congress on Neural Networks will be distributed on CD-ROM. The CD-ROM Proceedings are included in your registration fee. Please note that the 4-volume book of proceedings will not be printed.

Format of 3 1/2" disk for CD-ROM: Once the paper is proofed, completed and printed for review, reformat the paper in Landscape format, page size 8" x 5" for CD. You may include a separate file with 1 paragraph of biographical information with your name, company, address and telephone number. Presenters should submit their papers in one of the following Macintosh or Microsoft Windows formats: Microsoft Word, WordPerfect, FrameMaker, Quark or Quark Professional, PageMaker, Persuasion, ASCII, PowerPoint, Adobe.PDF, Postscript (text, not EPS). Images can be submitted in TIF or PCX format. If submitting in Macintosh format, you should submit a disk containing a Font Suitcase of the fonts used in your presentation to ensure a proper match, or enclose a cover letter with the paper indicating a list of fonts used in your presentation. Information published on the CD will consist of text, in-line and offset mathematical equations, black and white images, line art and graphics.
By submitting a previously unpublished paper, the author agrees to the transfer of the copyright to INNS for the conference proceedings. All submitted papers become the property of INNS. Papers and disk are to be sent to: WCNN'95, 875 Kings Highway, Suite 200, Woodbury, NJ 08096-3172 USA.

*When to choose a poster presentation: more than 15 minutes are needed; author does not wish to run concurrently with an invited talk; work is a continuation of previous publication; author seeks a lively discussion opportunity or to solicit future collaboration.

PLENARY SPEAKERS:
Daniel L. Alkon, U.S. National Institutes of Health
Shun-ichi Amari, University of Tokyo
Gail Carpenter, Boston University
Walter J. Freeman, University of California, Berkeley
Teuvo Kohonen, Helsinki University of Technology
Harold Szu, Naval Surface Warfare Center
John G. Taylor, King's College London

SESSION TOPICS:
1. Biological Vision: Rolf Eckmiller
2. Machine Vision: Kunihiko Fukushima, Robert Hecht-Nielsen
3. Speech and Language: Jeff Elman
4. Biological Sensory-Motor Control: Andrew Barto, Lina Massone
5. Neurocontrol and Robotics: Paul Werbos
6. Supervised Learning: Andrew R. Barron, Terrence L. Fine
7. Unsupervised Learning: Teuvo Kohonen, Francoise Fogelman-Soulie
8. Pattern Recognition: David Casasent, Brian Telfer
9. Prediction and System Identification: John G. Taylor, Paul Werbos
10. Cognitive Neuroscience: James Anderson, Jeffrey Sutton
11. Links to Cognitive Science & Artificial Intelligence: Alianna Maren
12. Signal Processing: Bernard Widrow
13. Neurodynamics and Chaos: Harold Szu, Mona E. Zaghloul
14. Hardware Implementation: Clifford Lau
15. Associative Memory: Christoph von der Malsburg
16. Applications: Leon Cooper
17. Circuits and Systems Neuroscience: Stephen Grossberg, Lance Optican
18. Mathematical Foundations: Shun-ichi Amari, D.S. Levine
19. Evolutionary Computing, Genetic Algorithms: Judith Dayhoff

SHORT COURSES:
a. Pattern Recognition and Neural Nets: David Casasent, Carnegie Mellon University
b. Modelling Consciousness: John G. Taylor, King's College London
c. Neocognitron and the Selective Attention Model: Kunihiko Fukushima, Osaka University
d. What are the Differences & the Similarities Among Fuzzy, Neural, & Chaotic Systems: Takeshi Yamakawa, Kyushu Institute of Technology
e. Image Processing & Pattern Recognition by Self-Organizing Neural Networks: Stephen Grossberg, Boston University
f. Dynamic Neural Networks: Signal Processing & Coding: Judith Dayhoff, University of Maryland
g. Language and Speech Processing: Jeff Elman, University of California-San Diego
h. Introduction to Statistical Theory of Neural Networks: Shun-ichi Amari, University of Tokyo
i. Cognitive Network Computation: James Anderson, Brown University
j. Biology-Inspired Neural Networks: From Brain Research to Applications in Technology & Medicine: Rolf Eckmiller, University of Dusseldorf
k. Neural Control Systems: Bernard Widrow, Stanford University
l. Neural Networks to Advance Intelligent Systems: Alianna Maren, Accurate Automation Corporation
m. Reinforcement Learning: Andrew G. Barto, University of Massachusetts
n. Advanced Supervised-Learning Algorithms and Applications: Francoise Fogelman-Soulie, SLIGOS
o. Neural Network & Statistical Methods for Function Estimation: Vladimir Cherkassky, University of Minnesota
p. Adaptive Resonance Theory: Gail A. Carpenter, Boston University
q. What Have We Learned from Experiences of Real World Applications in NN/FS/GA?: Hideyuki Takagi, Matsushita Electrical Industrial Co., Ltd.
r. Fuzzy Function Approximation: Julie A. Dickerson, University of Southern California
s. Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs: Lotfi A. Zadeh, University of California-Berkeley
t. Overview of Neuroengineering and Supervised Learning: Paul Werbos, National Science Foundation

INDUSTRIAL ENTERPRISE DAY: Monday, July 17, 1995
Enterprise Session: Chair: Robert Hecht-Nielsen, HNC, Inc.
Industrial Session: Chair: Takeshi Yamakawa, Kyushu Institute of Technology

FUZZY NEURAL NETWORKS: Tuesday, July 18, 1995 - Wednesday, July 19, 1995
Co-Chairs: Bart Kosko, University of Southern California; Ronald R. Yager, Iona College

SPECIAL SESSIONS:
Neural Network Applications in the Electrical Utility Industry
Biomedical Applications & Imaging/Computer Aided Diagnosis in Medical Imaging
Statistics and Neural Networks
Dynamical Systems in Financial Engineering
Mind, Brain and Consciousness
Physics and Neural Networks
Biological Neural Networks

To obtain additional information (complete registration brochure, registration and hotel forms) contact WCNN'95, 875 Kings Highway, Suite 200, Woodbury, New Jersey 08096-3172 USA, Tele: (609)845-1720; Fax: (609)853-0411; e-mail: 74577.504 at compuserve.com


From jbower at smaug.bbb.caltech.edu Wed Dec 7 17:13:15 1994
From: jbower at smaug.bbb.caltech.edu (jbower@smaug.bbb.caltech.edu)
Date: Wed, 7 Dec 94 14:13:15 PST
Subject: Call for papers -- CNS*95
Message-ID:

CALL FOR PAPERS

Fourth Annual Computational Neuroscience Meeting
CNS*95
July 11 - 15, 1995
Monterey, California

DEADLINE FOR SUMMARIES AND ABSTRACTS: **>> January 25, 1995 <<**

This is the fourth annual meeting of an interdisciplinary conference intended to address the broad range of research approaches and issues involved in the field of computational neuroscience. The last three annual meetings, in San Francisco (CNS*92), Washington, DC (CNS*93), and Monterey, California (CNS*94), brought experimental and theoretical neurobiologists together with engineers, computer scientists, cognitive scientists, physicists, and mathematicians to consider the functioning of biological nervous systems. Peer reviewed papers were presented on a range of subjects related to understanding how nervous systems compute. As in previous years, the meeting will equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation.

The meeting in 1995 will again take place at the Monterey Doubletree Hotel and include plenary, contributed, and poster sessions. There will be no parallel sessions and the full text of presented papers will be published in a proceedings volume. The last day of the meeting will be devoted to a series of informal workshops focused on current issues in computational neuroscience. Student travel funds and child day care will be available.

SUBMISSION INSTRUCTIONS:

With this announcement we solicit the submission of presented papers. All papers will be refereed. Authors should send original research contributions in the form of a 1000-word (or less) summary and a separate single page 50-100 word abstract clearly stating their results. Summaries are for program committee use only. Abstracts will be published in the conference program. At the bottom of each abstract page and on the first summary page, indicate preference for oral or poster presentation and specify at least one appropriate category and theme from the following list:

Presentation categories:
A. Theory and Analysis
B. Modeling and Simulation
C. Experimental
D. Tools and Techniques

Themes:
A. Development
B. Cell Biology
C. Excitable Membranes and Synaptic Mechanisms
D. Neurotransmitters, Modulators, Receptors
E. Sensory Systems
   1. Somatosensory
   2. Visual
   3. Auditory
   4. Olfactory
   5. Other systems
F. Motor Systems and Sensory Motor Integration
G. Learning and Memory
H. Behavior
I. Cognitive
J. Disease

Include addresses of all authors on the front of the summary and the abstract, including the E-mail address for EACH author. Indicate on the front of the summary to which author correspondence should be addressed. Program committee decisions will be sent to the correspondence author only. Submissions will not be considered if they lack category information, separate abstract sheets, author addresses, or are late.

Submissions can be made by surface mail ONLY by sending 6 copies of the abstract and summary to:

   CNS*95 Submissions
   Division of Biology 216-76
   Caltech
   Pasadena, CA 91125

ADDITIONAL INFORMATION can be obtained by:

o Using our on-line WWW information and registration server, URL of: http://www.bbb.caltech.edu/cns95.html
o ftp-ing to our ftp site:
     yourhost% ftp 131.215.137.69
     Name (131.215.137.69:): ftp
     Password: yourname at yourhost.yoursite.yourdomain
     ftp> cd cns95
     ftp> ls
o Sending Email to: cns95-registration-info at smaug.bbb.caltech.edu

CNS*95 ORGANIZING COMMITTEE:
Co-meeting chair / logistics - John Miller, UC Berkeley
Co-meeting chair / program - Jim Bower, Caltech

Program committee:
Gwen Jacobs, University of California, Berkeley
Catherine Carr, University of Maryland, College Park
Dennis Glanzman, NIMH/NIH
Nancy Kopell, Boston University
Christiane Linster, ESPCI, Paris, France
Philip Ulinski, University of Chicago
Charles Wilson, University of Tennessee, Memphis

Regional Organizers:
Europe - Erik DeSchutter (Belgium)
Middle East - Idan Segev (Jerusalem)
Down Under - Mike Paulin (New Zealand)
South America - Renato Sabbatini (Brazil)
Asia - Zhaoping Li (Hong Kong)

***************************************
James M. Bower
Division of Biology
Mail code: 216-76
Caltech
Pasadena, CA 91125
(818) 395-6817
(818) 449-0679 FAX
NCSA Mosaic laboratory address: http://www.bbb.caltech.edu/bowerlab
NCSA Mosaic address for GENESIS: http://www.bbb.caltech.edu/GENESIS


From lars at ida.his.se Wed Dec 7 02:31:17 1994
From: lars at ida.his.se (Lars Niklasson)
Date: Wed, 7 Dec 94 08:31:17 +0100
Subject: Lectureship
Message-ID: <9412070731.AA00135@mhost.ida.his.se>

The following lectureship is available for a connectionist with a background in Linguistics.
===========================================================

Lectureship in linguistics, ref. no. 276-94-40
University of Skoevde, Sweden.
Deadline for application: Jan. 20th 1995.

The appointment includes both research and lecturing duties. Applicants should possess a general competence in linguistics, but also be prepared to work within the border areas of computer science and cognitive science, which are areas where the University is planning to develop both education and a research competence with special emphasis on linguistics. It is therefore considered a merit if the applicant has skills within traditional artificial intelligence or connectionism. The lectureship will belong to the Department of Modern Languages, but will involve lecturing duties at the Department of Computer Science.

The University of Skoevde is one of Sweden's youngest, but it is undergoing dynamic development.
The main focus of all the University's activities is towards the private sector, industry and international trade, especially the use and development of tools for information technology. Six areas of competence have been established: engineering, computer science, cognitive science, economics, language, as well as media and arts.

Currently, the education programmes of the Department of Modern Languages include English, French, Spanish and German. In addition, the Department of Computer Science offers a number of 3-year programmes in computer science and cognitive science, as well as a research-oriented master programme. Research is conducted in connectionism, distributed real-time databases and active databases.

For more information please contact:
The vice-chancellor's office
Lars-Erik Johansson
Email: Lars-Erik.Johansson at sta.his.se


From pollack at cs.brandeis.edu Wed Dec 7 17:12:09 1994
From: pollack at cs.brandeis.edu (Jordan Pollack)
Date: Wed, 7 Dec 1994 17:12:09 -0500
Subject: compneuro asst prof job
Message-ID: <199412072212.RAA02297@onyx.cs.brandeis.edu>

I recently moved to Brandeis University, near Boston, where there is a lovely new building housing a new center for complex systems, which focuses on the brain, the mind, and computation. The center houses biologists, biochemists, physicists, psychologists, linguists, and computer scientists. There is a faculty search which is relevant to the list:
-------
The Volen National Center for Complex Systems at Brandeis University seeks candidates for a tenure-track assistant professorship to participate in establishing a Center for Theoretical Neuroscience funded by a grant from the Alfred P. Sloan Foundation. We are especially interested in theorists with strong interests and demonstrated commitment to research relevant to neuroscience. Candidates are expected to have Ph.D.s (or the equivalent) in Mathematics, Physics, Computer Science or another quantitative discipline. The successful candidate will be appointed in the appropriate academic department/s. The position carries a reduced teaching load and summer salary for three years.

Prospective candidates should send a curriculum vitae and a statement of research interests, and arrange for three letters of recommendation to be sent directly to: Search Committee, Theoretical/Computational Neuroscientist, Volen Center, Brandeis University, Waltham, MA 02254. Applications from minorities and women are especially welcome. Consideration of completed applications will start after January 1, 1995, and will continue until the position is filled, although candidates are strongly encouraged to submit completed applications as soon as possible.
----
The likeliest candidates for the faculty position will have a body of work in neuroscience modeling, rather than in neural models of cognition or engineering. However, there are also several postdocs available on the same Sloan grant, for theorists or modelers who want to cross into experimental neuroscience. Contact Eve Marder at binah.cc.brandeis.edu, Larry Abbott at psy.ox.ac.uk or John Lisman at binah.cc.brandeis.edu for further info.

Jordan Pollack
Associate Professor
Computer Science Department
Center for Complex Systems
Brandeis University
Phone: (617) 736-2713 / fax 2741
Waltham, MA 02254
email: pollack at cs.brandeis.edu

PS Neuroprose is staying at OSU for a while, just with delays in moving files.
From pollack at cs.brandeis.edu Thu Dec 8 11:51:23 1994
From: pollack at cs.brandeis.edu (Jordan Pollack)
Date: Thu, 8 Dec 1994 11:51:23 -0500
Subject: FLAME: WCNN'95 - science in the mud pits
Message-ID: <199412081651.LAA01963@garnet.cs.brandeis.edu>

I kept quiet about the first IEEE NN conference in 1987, where every paper was accepted and sorted by how well the author's name was recognized by the local committee, and where a feast of surplus dollars was collected in hyperinflated registration fees. I just never submitted another paper. But these policies are sickening:

>Authors must submit
>registration payment with papers to be eligible for the early
>registration fee...Registration fee includes 1995 membership and a
>one (1) year subscription to the Journal Neural Networks.

>A $35 per paper processing fee must be enclosed
>in order for the paper to be refereed.

Reading fees are charged by agents for aspiring science fiction writers. Linking registration to submission is nothing more than an admission that all papers are to be accepted. And a fee of $480 is paying a lot more than membership, subscription, and a few days of coffee breaks!

Also, since the size of the intersection of the two sets |ORGANIZING COMMITTEE & PLENARY SPEAKERS| is 4 instead of 0, I guess a Pentium chip was used.

WHERE are the ethical governors of this scientific society?

Jordan Pollack
Associate Professor
Computer Science Department
Center for Complex Systems
Brandeis University
Phone: (617) 736-2713 / fax 2741
Waltham, MA 02254
email: pollack at cs.brandeis.edu


From carlos at cenoli1.ulb.ac.be Thu Dec 8 16:15:38 1994
From: carlos at cenoli1.ulb.ac.be (carlos@cenoli1.ulb.ac.be)
Date: Thu, 8 Dec 94 16:15:38 MET
Subject: paper available: dynamical computation with chaos
Message-ID: <9412081515.AA01490@cenoli5.ulb.ac.be>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/babloyantz.comput_chaos.ps.Z

The file babloyantz.comput_chaos.ps.Z is now available for copying from the Neuroprose repository. This is a 13-page paper with 5 figures. Size is 204239 bytes compressed and 585704 uncompressed. Sorry, hardcopies not available.

Computation with chaos: A paradigm for cortical activity

A. Babloyantz and C. Louren\c{c}o
Service de Chimie-Physique, Universit\'{e} Libre de Bruxelles
CP 231 - Campus Plaine, Boulevard du Triomphe
B-1050~Bruxelles, Belgium
e-mail: carlos at cenoli.ulb.ac.be

Abstract: A device comprising two interconnected networks of oscillators exhibiting spatiotemporal chaos is considered. An external cue stabilizes input-specific unstable periodic orbits of the first network, thus creating an ``attentive'' state. Only in this state is the device able to perform pattern discrimination and motion detection. We discuss the relevance of the procedure to the information processing of the brain.

This paper appeared in Proc. Natl. Acad. Sci. USA, Vol. 91, p. 9027, 13 September 1994.

Thanks to Jordan Pollack for maintaining the archive.

Carlos Louren\c{c}o
-------------------------------------------------------------------------
*** Indonesians persist in the racial cleansing of East Timor. ***
*** Torture and killing of innocents happen daily. ***


From mdavies at psy.ox.ac.uk Wed Dec 7 11:24:21 1994
From: mdavies at psy.ox.ac.uk (Martin Davies)
Date: Wed, 7 Dec 1994 16:24:21 +0000
Subject: Euro-SPP '95
Message-ID: <9412071624.AA21454@Mac8>

*************************************************************************
EUROPEAN SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY

Fourth Annual Meeting
St. Catherine's College, Oxford
Wednesday 30 August - Friday 1 September, 1995
*************************************************************************

FIRST ANNOUNCEMENT AND CALL FOR PAPERS

The Fourth Annual Meeting of the Euro-SPP will begin at 11.30 am on Wednesday 30 August and will end at 5.30 pm on Friday 1 September. Themes for Invited Symposia include: emotion, attention, artificial life, and brain imaging.

The conference will be held in St. Catherine's College, Oxford, where accommodation will be available. We expect to be able to offer an accommodation and meals package for the period from Wednesday morning until Friday afternoon for 108 pounds. In addition, bed and breakfast accommodation will be available for the Tuesday night before the conference, and for the Friday and Saturday nights after the conference, at a cost of 28 pounds per night. A limited number of superior rooms with private bath will be available at a higher rate.

*************************************************************************
For further information about local arrangements, email: espp95 at psy.ox.ac.uk.
*************************************************************************

The Society welcomes submitted papers and posters for this meeting. Submitted papers and posters are refereed and selected on the basis of quality and relevance to both psychologists and philosophers.

Submitted Papers: Papers should not exceed 30 minutes' presentation time (about 12 double-spaced pages). The full text should be submitted, along with a 300 word abstract.

Poster Presentations: Proposals for poster presentations should consist of a 500 word abstract. Unless authors indicate otherwise, submitted papers that we are not able to accept will also be considered for poster presentation.

The deadline for submission of both submitted papers and poster presentations is 20 January 1995. Please send three copies to:

   Professor Beatrice de Gelder
   Department of Psychology
   Tilburg University
   5000 LE Tilburg
   The Netherlands

or:

   Professor Christopher Peacocke
   Magdalen College
   Oxford OX1 4AU
   UK

*************************************************************************
For information about membership of the Euro-SPP, email: espp at kub.nl.
*************************************************************************


From marks at u.washington.edu Fri Dec 9 20:37:22 1994
From: marks at u.washington.edu (Robert Marks)
Date: Fri, 9 Dec 94 17:37:22 -0800
Subject: Russian Symposium
Message-ID: <9412100137.AA25592@carson.u.washington.edu>

The 2-nd International Symposium on Neuroinformatics and Neurocomputers
Rostov-on-Don, RUSSIA
September 20-23, 1995

Organized by Russian Neural Network Society (RNNS) and A.B. Kogan Research Institute for Neurocybernetics (KRINC) in co-operation with the Institute of Electrical and Electronics Engineers Neural Networks Council (IEEE NNC)

First Call for Papers

Research in Neuroinformatics and Neurocomputing continued in Russia after the research was deflated in the west in the 1970's. The research sophistication in neural networks, as a result, is quite advanced in Russia. The first international RNNS/IEEE Symposium, held in October 1992, proved to be a highly successful forum for a diverse international interchange of fresh and novel research results. The second International Symposium on Neuroinformatics and Neurocomputers builds on this remarkable success. The symposium focus is on the neuroscience, mathematics, physics, engineering and design of neuroinformatic and neurocomputing systems.
Rostov-on-Don, the location of the Symposium, is about 1000 km south of Moscow on the scenic Don river. The Don is commonly identified as the boundary between the continents of Europe and Asia. Rostov is the home of the A.B. Kogan Research Institute for Neurocybernetics at Rostov State University - one of the premier neural network research centers in Russia.

Papers for the Symposium should be sent in CAMERA-READY FORM, NOT EXCEEDING 8 PAGES in A4 format, to the Program Committee Co-Chair Alexander A. Frolov. Two copies of the paper should be submitted. The deadline for submission is 15 MARCH, 1995. Notification of acceptance will be sent on or before 15 May, 1995.

SYMPOSIUM COMMITTEE

GENERAL CHAIR
Witali L. Dunin-Barkowski, Dr. Sci., Symposium Chair, The 2-nd International Symposium on Neuroinformatics and Neurocomputers
President of the Russian Neural Network Society
A.B. Kogan Research Institute for Neurocybernetics, Rostov State University
194/1 Stachka avenue, 344104, Rostov-on-Don, Russia
Tel: +7-8632-28-0588, Fax: +7-8632-28-0367
E-mail: wldb at krinc.rostov-na-donu.su

PROGRAM COMMITTEE CO-CHAIRS
Professor Alexander A. Frolov, Program Co-Chair, The 2-nd International Symposium on Neuroinformatics and Neurocomputers
Higher Nervous Activity and Neurophysiology Institute, Russian Academy of Science
5a Butlerov str., 117220, Moscow, RUSSIA

Professor Robert J. Marks II, Program Co-Chair, The 2-nd International Symposium on Neuroinformatics and Neurocomputers
University of Washington, Department of Electrical Engineering
c/o 1131 199th Street S.W., Suite N, Lynnwood, WA 98036-7138, USA

Other information is available from the Symposium Committee.

From john at dcs.rhbnc.ac.uk Sat Dec 10 07:18:37 1994
From: john at dcs.rhbnc.ac.uk (john@dcs.rhbnc.ac.uk)
Date: Sat, 10 Dec 94 12:18:37 +0000
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <28465.9412101218@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): one new report available

----------------------------------------
NeuroCOLT Technical Report NC-TR-94-011:
----------------------------------------

Valid Generalisation from Approximate Interpolation
by Martin Anthony, Department of Mathematics, The London School of Economics
Peter Bartlett, Research School of Information Sciences and Engineering, Australian National University
Yuval Ishai, Department of Computer Science, Technion
John Shawe-Taylor, Department of Computer Science, Royal Holloway, University of London

Abstract: Let $\H$ and $\C$ be sets of functions from domain $X$ to the reals. We say that $\H$ validly generalises $\C$ from approximate interpolation if and only if for each $\eta>0$ and $\epsilon, \delta \in (0,1)$ there is a number $m_0(\eta,\epsilon, \delta)$ such that for any function $t \in \C$ and any probability distribution $P$ on $X$, if $m \ge m_0$ then with $P^m$-probability at least $1-\delta$, a sample $\vx =(x_1, x_2, \dots, x_m) \in X^m$ satisfies $$\forall h \in \H, \, |h(x_i) - t(x_i)|< \eta, \,(1 \le i \le m) \Longrightarrow P(\{x: |h(x) -t(x)| \ge \eta\}) < \epsilon.$$ We find conditions that are necessary and sufficient for $\H$ to validly generalise $\C$ from approximate interpolation, and we obtain bounds on the sample length $m_0(\eta, \epsilon, \delta)$ in terms of various parameters describing the expressive power of $\H$.
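[Editorial note: restated in failure-probability form - this is a rewriting of the definition above in the report's own notation, not text from the report - valid generalisation requires that, once the sample is long enough, samples admitting a bad approximate interpolant are rare:

$$ m \ge m_0(\eta,\epsilon,\delta) \;\Longrightarrow\; P^m\Big\{\vx \in X^m :\ \exists h \in \H \ \text{with}\ |h(x_i)-t(x_i)| < \eta \ (1 \le i \le m) \ \text{and}\ P(\{x : |h(x)-t(x)| \ge \eta\}) \ge \epsilon \Big\} \le \delta. $$

In words: approximate fit on enough random points certifies, with confidence $1-\delta$, approximate fit everywhere except on a set of measure less than $\epsilon$.]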
----------------------------------------
NeuroCOLT Technical Report NC-TR-94-022:
----------------------------------------

Sample Sizes for Sigmoidal Neural Networks
by John Shawe-Taylor, Department of Computer Science, Royal Holloway, University of London

Abstract: This paper applies the theory of Probably Approximately Correct (PAC) learning to feedforward neural networks with sigmoidal activation functions. Despite the best known upper bound on the VC dimension of such networks being $O((WN)^2)$, for $W$ parameters and $N$ computational nodes, it is shown that the asymptotic bound on the sample size required for learning with increasing accuracy $1 - \epsilon$ and decreasing probability of failure $\delta$ is $$O((1/\epsilon)(W\log(1/\epsilon) + (WN)^2 + \log(1/\delta))).$$ For practical values of $\epsilon$ and $\delta$ the formula obtained for the sample sizes is a factor $2\log(2e/\epsilon)$ smaller than a naive use of the VC dimension result would give. Similar results are obtained for learning where the hypothesis is only guaranteed to correctly classify a given proportion of the training sample. The results are formulated in general terms and show that for many learning classes defined by smooth functions thresholded at the output, the sample size for a class with VC-dimension $d$ and $\ell$ parameters is $O((1/\epsilon)(\ell\log(1/\epsilon) + o(\log(1/\epsilon))d + \log(1/\delta)))$.

-----------------------
The Report NC-TR-94-011 can be accessed and printed as follows:

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-94-011.ps.Z
ftp> bye
% zcat nc-tr-94-011.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.

Best wishes
John Shawe-Taylor

From cnna at tce.ing.uniroma1.it Mon Dec 12 09:54:22 1994
From: cnna at tce.ing.uniroma1.it (cnna@tce.ing.uniroma1.it)
Date: Mon, 12 Dec 1994 15:54:22 +0100
Subject: program CNNA-94
Message-ID: <9412121454.AA14007@tce.ing.uniroma1.it>

(Multiple posting - please excuse us if you receive more than one copy)

THIRD IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND THEIR APPLICATIONS (CNNA-94)
ROME, ITALY, DEC. 18-21, 1994

PRELIMINARY PROGRAM

SUNDAY, DECEMBER 18, 1994
17.30 Inaugural Session (Hall 1)
17.30 Welcome address G. Orlandi, Dean, Faculty of Engineering V. Cimagalli, Chairman, CNNA-94
17.45 Opening address C. Lau
18.05 Inaugural lecture: L.O. Chua The CNN Universal Chip: Dawn of a New Computer Paradigm
19.30 Welcome Cocktail (Room of the Frescoes)

MONDAY, DECEMBER 19, 1994
9.00 Session 1: Theory I (Hall of the Cloister - chairman: L.O. Chua)
9.00 Invited review paper: T. Roska Analogic Algorithms Running on the CNN Universal Machine
9.30 G. Yang, T. Yang, L.-B. Yang On Unconditional Stability of the General Delayed Cellular Neural Networks
9.45 S. Arik, V. Tavsanoglu A Weaker Condition for the Stability of Nonsymmetric CNNs
10.00 M. P. Joy, V. Tavsanoglu Circulant Matrices and the Stability Theory of CNNs
10.15 B.E. Shi, S. Wendsche, T. Roska, L.O. Chua Random Variations in CNN Templates: Theoretical Models and Empirical Studies
10.30 Coffee Break (Room of the Frescoes)
11.00 Session 2: Connections with Neurophysiology (Hall of the Cloister - chairman: T. Roska)
11.00 Invited lecture: F. Werblin, A. Jacobs Using CNN to Unravel Space-Time Processing in the Vertebrate Retina
11.45 Invited lecture: J. Hámori Synaptic Organization of the Thalamic Visual Center (LGN) of Mammals as a Basis of CNN Model of Subcortical Visual Processing
12.30 K. Lotz, Z. Vidnyánszky, T. Roska, J. Vandewalle, J. Hámori, A. Jacobs, F. Werblin Some Cortical Spiking Neuron Models Using CNN
12.45 T.W. Berger, B.J. Sheu, R. H.-J. Tsai Analog VLSI Implementation of a Nonlinear Systems Model of the Hippocampal Brain Region
13.00 A. Jacobs, T. Roska, F. Werblin Techniques for Constructing Physiologically Motivated Neuromorphic Models in CNN
13.15 Lunch (Room of the Frescoes)
14.45 Session 3: Hardware Implementations I (Hall of the Cloister - chairman: A. Rodríguez-Vázquez)
14.45 Invited review paper: A. Rodríguez-Vázquez, R. Domínguez-Castro, S. Espejo Design of CNN Universal Chips: Trends and Obstacles
15.15 J.M. Cruz, L.O. Chua, T. Roska A Fast, Complex and Efficient Test Implementation of the CNN Universal Machine
15.30 F. Sargeni, V. Bonaiuto High Performance Digitally Programmable CNN Chip with Discrete Templates
15.45 A. Paasio, A. Dawidziuk, K. Halonen, V. Porra Digitally Controllable Weights in Current Mode Cellular Neural Networks
16.00 D. Lím, G.S. Moschytz A Programmable, Modular CNN Cell
16.15 M.-D. Doan, R. Chakrabaty, M. Heidenreich, M. Glesner, S. Cheung Realisation of a Digital Cellular Neural Network for Image Processing
16.30 R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez, R. Carmona A CNN Universal Chip in CMOS Technology
16.45 Coffee Break (Room of the Frescoes)
17.15 Session 4: Theory II (Hall of the Cloister - chairman: R.-W. Liu)
17.15 R.-W. Liu, Y.-F. Huang, X.-T. Ling A Novel Approach to the Convergence of Neural Networks for Signal Processing
17.30 E. Pessa, M.P. Penna Local and Global Connectivity in Neuronic Cellular Automata
17.45 X.-Z. Huang, T. Yang, L.-B. Yang On Stability of the Time-Variant Delayed Cellular Neural Networks
18.00 J.J. Szczyrek, S. Jankowski A Class of Asymmetrical Templates in Cellular Neural Networks
18.15 P.P. Civalleri, M. Gilli A Topological Description of the State Space of a Cellular Neural Network
18.30 M. Tanaka, T. Watanabe Cooperative and Competitive Cellular Neural Networks

TUESDAY, DECEMBER 20, 1994
9.15 Session 5: Learning I (Hall of the Cloister - chairman: J.A. Nossek)
9.15 Invited review paper: J.A. Nossek Design and Learning with Cellular Neural Networks
9.45 I. Fajfar, F. Bratkovic Statistical Design Using Variable Parameter Variances and Application to Cellular Neural Networks
10.00 N.N. Aizenberg, I.N. Aizenberg CNN-like Networks Based on Multi-Valued and Universal Binary Neurons: Learning and Application to Image Processing
10.15 W. Utschick, J.A. Nossek Computational Learning Theory Applied to Discrete-Time Cellular Neural Networks
10.30 H. Magnussen, J.A. Nossek Global Learning Algorithms for Discrete-Time Cellular Neural Networks
10.45 Coffee Break (Room of the Frescoes)
11.15 Session 6: Learning II (Hall of the Cloister - chairman: J. Vandewalle)
11.15 H. Magnussen, G. Papoutsis, J.A. Nossek Continuation-Based Learning Algorithm for Discrete-Time Cellular Neural Networks
11.30 C. Güzeliş, S. Karamahmut Recurrent Perceptron Learning Algorithm for Completely Stable Cellular Neural Networks
11.45 A.J. Schuler, M. Brabec, D. Schubel, J.A. Nossek Hardware-Oriented Learning for Cellular Neural Networks
12.00 F. Dellaert, J. Vandewalle Automatic Design of Cellular Neural Networks by Means of Genetic Algorithms: Finding a Feature Detector
12.15 H. Mizutani A New Learning Method for Multilayered Cellular Neural Networks
12.30 Lunch (Room of the Frescoes)
14.00 Panel Discussion: Trends and Applications of the CNN Paradigm (Hall of the Cloister - chairman: L.O. Chua) Panelists: L.O. Chua, V. Cimagalli, J.A. Nossek, T. Roska, A. Rodríguez-Vázquez, J. Vandewalle
15.30 Coffee Break (Room of the Frescoes)
16.00 Session 7: Applications I (Hall of the Cloister - chairman: N.N. Aizenberg)
16.00 N.N. Aizenberg, I.N. Aizenberg, T.P. Belikova Extraction and Localization of Important Features on Grey-Scale Images: Implementation on the CNN
16.15 K. Slot Large-Neighborhood Templates Implementation in Discrete-Time CNN Universal Machine with a Nearest-Neighbor Connection Pattern
16.30 J. Pineda de Gyvez XCNN: A Software Package for Color Image Processing
16.45 B.E. Shi Order Statistic Filtering with Cellular Neural Networks
17.00 L.-B. Yang, T. Yang, B.-S. Chen Moving Point Target Detection Using Cellular Neural Networks
17.15 X.-P. Yang, T. Yang, L.-B. Yang Extracting Focused Object from Defocused Background Using Cellular Neural Networks
17.30 M. Balsi, N. Racina Automatic Recognition of Train Tail Signs Using CNNs
17.45 A. Kellner, H. Magnussen, J.A. Nossek Texture Classification, Texture Segmentation and Text Segmentation with Discrete-Time Cellular Neural Networks
16.00 Session 8: Applications II (Hall 17 - chairman: S. Jankowski)
16.00 P.L. Venetianer, P. Szolgay, K.R. Crounse, T. Roska, L.O. Chua Analog Combinatorics and Cellular Automata - Key Algorithms and Layout Design
16.15 Á. Zarándy, T. Roska, Gy. Liszka, J. Hegyesi, L. Kék, Cs. Rekeczky Design of Analogic CNN Algorithms for Mammogram Analysis
16.30 P. Szolgay, Gy. Erőss, A. Katona, Á. Kiss An Experimental System for Path Tracking of a Robot Using a 16*16 Connected Component Detector CNN Chip with Direct Optical Input
16.45 T. Kozek, T. Roska A Double Time-Scale CNN for Solving 2-D Navier-Stokes Equations
17.00 Á. Zarándy, F. Werblin, T. Roska, L.O. Chua Novel Types of Analogic CNN Algorithms for Recognizing Bank-Notes
17.15 B.J. Sheu, Sa H. Bang, W.-C. Fang Optimal Solutions of Selected Cellular Neural Network Applications by the Hardware Annealing Method
17.30 B. Siemiatkowska Cellular Neural Network for Mobile Robot Navigation
17.45 A. Murgu Distributed Neural Control for Markov Decision Processes in Hierarchic Communication Networks
20.00 Banquet (Restaurant "4 Colonne" - via della Posta Vecchia, 4)

WEDNESDAY, DECEMBER 21, 1994
9.00 Session 9: Spatio-Temporal Dynamics I (Hall of the Cloister - chairman: V.D. Shalfeev)
9.00 Invited review paper: V.D. Shalfeev, G.V. Osipov Spatio-Temporal Phenomena in Cellular Neural Networks
9.30 C.-M. Yang, T. Yang, K.-Y. Zhang Chaos in Discrete Time Cellular Neural Networks
9.45 R. Dogaru, A.T. Murgan, D. Ioan Robust Oscillations and Bifurcations in Cellular Neural Networks
10.00 H. Chen, M.-D. Dai, X.-Y. Wu Bifurcation and Chaos in Discrete-Time Cellular Neural Networks
10.15 M.J. Ogorzalek, A. Dabrowski, W. Dabrowski Hyperchaos, Clustering and Cooperative Phenomena in CNN Arrays Composed of Chaotic Circuits
10.30 Coffee Break (Room of the Frescoes)
11.00 Session 10: Spatio-Temporal Dynamics II (Hall of the Cloister - chairman: M. Hasler)
11.00 Invited review paper: P. Thiran, M. Hasler Information Processing Using Stable and Unstable Oscillations: A Tutorial
11.30 P. Szolgay, G. Vörös Transient Response Computation of a Mechanical Vibrating System Using Cellular Neural Networks
11.45 P.P. Civalleri, M. Gilli Propagation Phenomena in Cellular Neural Networks
12.00 S. Jankowski, A. Londei, C. Mazur, A. Lozowski Synchronization Phenomena in 2D Chaotic CNN
12.15 Poster Session (Hall of the Cloister)
S. Jankowski, R. Wanczuk CNN Models of Complex Pattern Formation in Excitable Media
Z. Galias, J.A. Nossek Control of a Real Chaotic Cellular Neural Network
A. Piovaccari, G. Setti A Versatile CMOS Building Block for Fully Analogically-Programmable VLSI Cellular Neural Networks
P. Thiran, G. Setti An Approach to Local Diffusion and Global Propagation in 1-dim. Cellular Neural Networks
J. Kowalski, K. Slot, T. Kacprzak A CMOS Current-Mode VLSI Implementation of Cellular Neural Network for an Image Objects Area Estimation
W.J. Jansen, R. van Drunen, L. Spaanenburg, J.A.G. Nijhuis The AD2 Microcontroller Extension for Artificial Neural Networks
C.-K. Pham, M. Tanaka A Novel Chaos Generator Employing CMOS Inverter for Cellular Neural Networks
13.15 Lunch (Room of the Frescoes)
14.45 Session 11: Hardware Implementations II (Hall of the Cloister - chairman: J.L. Huertas)
14.45 R. Beccherelli, G. de Cesare, F. Palma Towards an Hydrogenated Amorphous Silicon Phototransistor Cellular Neural Network
15.00 A. Sani, S. Graffi, G. Masetti, G. Setti Design of CMOS Cellular Neural Networks Operating at Several Supply Voltages
15.15 M. Russell Grimaila, J. Pineda de Gyvez A Macromodel Fault Generator for Cellular Neural Networks
15.30 P. Kinget, M. Steyaert Evaluation of CNN Template Robustness Towards VLSI Implementation
15.45 B.J. Sheu, Sa H. Bang, W.-C. Fang Analog VLSI Design of Cellular Neural Networks with Annealing Ability
16.00 L. Raffo, S.P. Sabatini, G.M. Bisio A Reconfigurable Architecture Mapping Multilayer CNN Paradigms
14.45 Session 12: Applications III (Hall 17 - chairman: J. Herault)
14.45 G. Adorni, V. D'Andrea, G. Destri A Massively Parallel Approach to Cellular Neural Networks Image Processing
15.00 M. Coli, P. Palazzari, R. Rughi Use of the CNN Dynamic to Associate Two Points with Different Quantization Grains in the State Space
15.15 M. Csapodi, L. Nemes, G. Tóth, T. Roska, A. Radványi Some Novel Analogic CNN Algorithms for Object Rotation, 3D Interpolation-Approximation, and a "Door-in-a-Floor" Problem
15.30 H. Harrer, P.L. Venetianer, J.A. Nossek, T. Roska, L.O. Chua Some Examples of Preprocessing Analog Images with Discrete-Time Cellular Neural Networks
15.45 A.G. Radványi Solution of Stereo Correspondence in Real Scene: an Analogic CNN Algorithm
16.00 J.P. Miller, K.R. Crounse, T. Szirányi, L. Nemes, L.O. Chua, T. Roska Deblurring of Images by Cellular Neural Networks with Applications to Microscopy
16.15 Coffee Break (Room of the Frescoes)
16.45 Session 13: Hardware Implementations III (Hall of the Cloister - chairman: M. Salerno)
16.45 T. Roska, P. Szolgay, Á. Zarándy, P.L. Venetianer, A. Radványi, T. Szirányi On a CNN Chip-Prototyping System
17.00 M. Balsi, V. Cimagalli, I. Ciancaglioni, F. Galluzzi Optoelectronic Cellular Neural Network Based on Amorphous Silicon Thin Film Technology
17.15 S. Espejo, R. Domínguez-Castro, A. Rodríguez-Vázquez, R. Carmona Weight-Control Strategy for Programmable CNN Chips
17.30 S. Espejo, A. Rodríguez-Vázquez, R. Domínguez-Castro, R. Carmona Convergence and Stability of the FSR CNN Model
17.45 R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez, I. García-Vargas, J.F. Ramos, R. Carmona SIRENA: A Simulation Environment for CNNs
16.45 Session 14: Applications IV (Hall 17 - chairman: M. Tanaka)
16.45 P. Arena, S. Baglio, L. Fortuna, G. Manganaro CNN Processing for NMR Spectra
17.00 P. Arena, L. Fortuna, G. Manganaro, S. Spina CNN Image Processing for the Automatic Classification of Oranges
17.15 S. Schwarz Detection of Defects on Photolithographic Masks by Cellular Neural Networks
17.30 M. Ikegami, M. Tanaka Moving Image Coding and Decoding by DTCNN with 3-D Templates
17.45 M. Kanaya, M. Tanaka Robot Multi-Driving Controls by Cellular Neural Networks

GENERAL INFORMATION

Venue: All sessions will be held at the Faculty of Engineering, "La Sapienza" University of Rome, via Eudossiana, 18 - 00184 Rome.

Coffee breaks and lunches are included in the registration fee; they will be served in the Room of the Frescoes, opposite the main lecture hall. In order to receive them, it is necessary to wear the workshop badge.

The banquet will be held at restaurant "4 Colonne", via della Posta Vecchia, 4, near Piazza Navona, tel. 68307152/68805261. It can be reached on foot in about 20 minutes from the Faculty of Engineering, by bus no. 81 or 87 from via dei Fori Imperiali to Corso Rinascimento, or by taxi. You may ask for directions at the registration desk. Please bring the coupon you receive at registration.

Communications: You may receive messages by fax, no. +39-6-4742647. Faxes should be addressed to your name, with a clear statement "participant to CNNA-94" on the cover sheet. A UNIX ASCII terminal will be available for all participants, with full Internet capabilities. You may receive messages at the usual e-mail address of the workshop: cnna at tce.ing.uniroma1.it. The subject of messages should include your name. Public telephones, suitable for local, long-distance, and international calls, are available in the Faculty. They accept magnetic cards that can be purchased at the registration desk, at tobacconists' and newsstands, and in other shops carrying the Telecom symbol.

Bank: Banca di Roma has an agency within the Faculty of Engineering. It is open from 8.25 am to 1.35 pm. Several other banks are available in the area of the University, also open from 3 to 4 pm. Some cash dispensers, e.g. the one at Banca di Roma on via Cavour, between Hotel Palatino and via dei Fori Imperiali, accept credit cards for cash retrieval 24 hours a day.

Authors are requested to meet with their session chairman 15 minutes prior to the session in the same room where it will be held. Time slots for regular papers are strictly limited to 15 minutes, including discussion. Poster authors should set up their posters at 10.30 on Wednesday, Dec. 21, and remove them during the lunch break. Demonstrations of working hardware and software for CNNs will be held throughout the workshop during breaks. Demonstration authors should bring their material for set-up at 8 am on the day agreed upon, and remove it at the end of sessions.

FURTHER INFORMATION

Please contact: CNNA-94 tel. +39-6-44585836 fax.
+39-6-4742647 e-mail cnna at tce.ing.uniroma1.it  From ai at aaai.org Mon Dec 12 15:36:08 1994 From: ai at aaai.org (AAAI) Date: Mon, 12 Dec 94 12:36:08 PST Subject: AAAI 1995 Fall Symposium Series Call for Participation Message-ID: <9412122036.AA19948@aaai.org> AAAI 1995 Fall Symposium Series Call for Participation November 10-12, 1995 Massachusetts Institute of Technology Cambridge, Massachusetts Sponsored by the American Association for Artificial Intelligence 445 Burgess Drive, Menlo Park, CA 94025 (415) 328-3123 (voice) (415) 321-4457 (fax) fss at aaai.org The American Association for Artificial Intelligence presents the 1995 Fall Symposium Series, to be held Friday through Sunday, November 10-12, 1995, at the Massachusetts Institute of Technology. The topics of the eight symposia in the 1995 Fall Symposium Series are: - Active Learning - Adaptation of Knowledge for Reuse - AI Applications in Knowledge Navigation and Retrieval - Computational Models for Integrating Language and Vision - Embodied Language and Action - Formalizing Context - Genetic Programming - Rational Agency: Concepts, Theories, Models, and Applications Symposia will be limited to between forty and sixty participants. Each participant will be expected to attend a single symposium. Working notes will be prepared and distributed to participants in each symposium. A general plenary session, in which the highlights of each symposium will be presented, will be held on Saturday, November 11, and an informal reception will be held on Friday, November 10. In addition to invited participants, a limited number of other interested parties will be able to register in each symposium on a first-come, first-served basis. Registration will be available by 1 August, 1995. To obtain registration information write to the AAAI at 445 Burgess Drive, Menlo Park, CA 94025 (fss at aaai.org). Submission Dates - Submissions for the symposia are due on April 14, 1995. - Notification of acceptance will be given by May 19, 1995. - Material to be included in the working notes of the symposium must be received by September 1, 1995. See the appropriate section below for specific submission requirements for each symposium. This document is available as http://www.ai.mit.edu/people/las/aaai/fss-95/fss-95-cfp.html ******************************************************************************* ACTIVE LEARNING An active learning system is one that can influence the training data it receives by actions or queries to its environment. Properly selected, these actions can drastically reduce the amount of data and computation required by a machine learner. Active learning has been studied independently by researchers in machine learning, neural networks, robotics, computational learning theory, experiment design, information retrieval, and reinforcement learning, among other areas. This symposium will bring researchers together to clarify the foundations of active learning and point out synergies to build on. Submission Information Potential participants should submit a position paper (at most two pages) discussing what the participant could contribute to a dialogue on active learning and/or what they hope to learn by participating. Suggested topics include: Theory: What are the important results in the theory of active learning and what are important open problems? How much guidance does theory give to application? Algorithms: What successful algorithms have been found for active learning? How general are they? For what tasks are they appropriate? 
Evaluation: How can accuracy, convergence, and other properties of active learning algorithms be evaluated when, for instance, data is not sampled randomly?
Taxonomy: What kinds of information are available to learners (e.g. membership vs. equivalence queries, labeled vs. unlabeled data) and what are the ways learning methods can use them? What are the commonalities among methods studied by different fields?

Papers should be sent to David D. Lewis, lewis at research.att.com, AT&T Bell Laboratories, 600 Mountain Ave., Room 2C-408, Murray Hill, NJ 07974-0636. Electronic mail submissions are strongly preferred.

Symposium Structure

The symposium will be broken into sessions, each dedicated to a major theme identified within the position papers. Sessions will begin with a background presentation by an invited speaker, followed by brief position statements from selected participants. A significant portion of each session will be reserved for group discussion, guided by a moderator and focused on the core issue for the session. The final session of the symposium will accommodate new issues that are raised during sessions.

Organizing Committee

David A. Cohn (cochair), MIT, cohn at psyche.mit.edu; David D. Lewis (cochair), AT&T Bell Labs, lewis at research.att.com; Kathryn Chaloner, U. Minnesota; Leslie Pack Kaelbling, Brown U.; Robert Schapire, AT&T Bell Labs; Sebastian Thrun, U. Bonn; Paul Utgoff, U. Mass Amherst.

******************************************************************************
ADAPTATION OF KNOWLEDGE FOR REUSE

Several areas in AI address issues of creating and storing knowledge constructs (such as cases, plans, designs, specifications, concepts, domain theories, schedules). There is broad interest in reusing these constructs in similar problem-solving situations so as to avoid expensive re-derivation. Adaptation techniques have been developed to support reuse in frameworks such as analogical problem solving, case-based reasoning, problem reformulation, or representation change, and in task domains such as creativity, design, planning, program transformation or software reuse, schedule revision, and theory revision. However, many open issues remain, and progress on such issues as case adaptation would substantially assist many researchers and practitioners.

Our goals are to characterize the approaches to adaptation employed in various AI subfields, define the core issues in adaptation of knowledge, and advance the state of the art in addressing these issues. We intend that presentations will investigate novel solutions to unsolved problems on adaptation, reflect diverse viewpoints, and focus on adaptation issues that are common to several subfields of AI. Discussions will be held on the strengths and limitations of adaptation techniques and their interrelationships. Invited talks will be given by experts who will discuss methods for the adaptation of various types of knowledge constructs. Two panels will be held. First, researchers studying knowledge adaptation from different perspectives will discuss how approaches used in their community differ from those used elsewhere, focusing on their potential benefits for other problems. Panelists in the second panel will lead discussions on identifying the core issues in knowledge adaptation raised in the presentations and the impact of the proposed methods on addressing these issues.
Submission Information

Anyone interested in presenting relevant material is invited to email PostScript submissions to aha at aic.nrl.navy.mil using the six-page AAAI-94 proceedings format. Anyone interested in attending is asked to submit a two-page research statement and a list of relevant publications. Please see http://www.aic.nrl.navy.mil/~aha/aaai95-fss/home.html for further information.

Organizing Committee

David W. Aha (cochair), NRL, aha at aic.nrl.navy.mil; Brian Falkenhainer, Xerox; Eric K. Jones, Victoria University; Subbarao Kambhampati, Arizona State University; David Leake, Indiana University; Ashwin Ram (cochair), Georgia Institute of Technology, ashwin at cc.gatech.edu.

*****************************************************************************
AI APPLICATIONS IN KNOWLEDGE NAVIGATION AND RETRIEVAL

The diversity and volume of accessible on-line data are increasing dramatically. As a result, existing tools for searching and browsing information are becoming less effective. The increasing use of non-text data such as images, audio and video has amplified this trend. Knowledge navigation systems are knowledge-based interfaces to information resources. They allow users to investigate the contents of complex and diverse sources of data in a natural manner. Examples include intelligent browsers that can help direct a user through a large multi-dimensional information space, agents that users can direct to perform information-finding tasks, and knowledge-based intermediaries that employ retrieval strategies to gather information relevant to a user's request. The purpose of this symposium is to examine the state of the art in knowledge navigation by examining existing applications and by discussing new techniques and research directions. We encourage two types of submissions: work-in-progress papers that point towards the future of this research area, and demonstrations of knowledge navigation systems. Some research issues of interest:

- Indexing: What indexing methods are appropriate and feasible for knowledge navigation systems? How can indices be extracted from data?
- Retrieval: What retrieval methods are appropriate for knowledge navigation? What retrieval strategies can be employed?
- Learning: How can knowledge navigation systems adapt to a changing knowledge environment and to user needs?
- User interfaces: What are the characteristics of a useful navigational interface? What roles can or should an "agent" metaphor play in such interfaces? How can a navigation system orient the user in the information space?
- Multi-source integration: How can multiple data and knowledge sources be integrated to address users' needs?
- Multimedia: What are the challenges presented by multimedia information sources?

Submission Information

The symposium will consist of invited talks, presentations, and hands-on demonstration/discussion sessions. Interested participants should submit a short paper (8 pages maximum) addressing a research issue in knowledge navigation or describing a knowledge navigation system that can be made available for hands-on demonstration at the symposium. System descriptions should clearly indicate the novel and interesting features of the system to be presented and its applicability to the central problems in knowledge navigation. Those wishing to demonstrate should also include a one-page description of their hardware and connectivity requirements.
Send, by email, either a URL pointing to a PostScript version of the paper or the PostScript copy itself to aiakn at cs.uchicago.edu. Or, send 5 hard copies to Robin Burke, AI Applications in Knowledge Navigation, University of Chicago, Department of Computer Science, 1100 E. 58th St., Chicago, IL 60637. For further information, a web page for this symposium is located at http://www-cs.uchicago.edu/~burke/aiakn.html

Organizing Committee

Robin Burke (chair), University of Chicago, burke at cs.uchicago.edu; Catharine Baudin, NASA Ames; Su-Shing Chen, National Science Foundation; Kristian Hammond, University of Chicago; Christopher Owens, Bolt, Beranek & Newman.

Program Committee

Ray Bariess, Institute for the Learning Sciences; Alon Levy, AT&T Bell Laboratories; Jim Mayfield, University of Maryland, Baltimore County; Dick Osgood, Andersen Consulting.

******************************************************************************
COMPUTATIONAL MODELS FOR INTEGRATING LANGUAGE AND VISION

This symposium will focus on research issues in developing computational models for integrating language and vision. The intrinsic difficulty of both natural language processing and computer vision has discouraged researchers from attempting integration, although in some cases it may simplify individual tasks, such as collateral-based vision or resolving ambiguous sentences through the use of visual information. Developing a bridge between language and vision is nontrivial, because the correspondence between words and images is not one-to-one. Much has been said about the necessity of linking language and perception for a system to exhibit intelligent behavior, but there has been relatively little work on developing computational models for this task. A natural-language understanding system should be able to understand and make references to the visual world. The use of scene-specific context (obtained from written or spoken text accompanying a scene) could greatly enhance the performance of computer vision systems. Some topics to be addressed are:

- use of collateral text in image and graphics understanding
- generating natural-language descriptions of visual data (e.g., event perception in image sequences)
- identifying and extracting visual information from language
- understanding spatial language, spatial reasoning
- knowledge representation for linguistic and visual information, hybrid (language and visual) knowledge bases
- use of visual data in disambiguating/understanding text
- content-based retrieval from integrated text/image databases
- language-based scene modeling (e.g., picture or graphics generation)
- cognitive theories connecting language and perception

Submission Information

The symposium will consist of invited talks, panel discussions, individual presentations and group discussions. Those interested in making a presentation should submit a technical paper (not to exceed 3,000 words). Other participants should submit either a position paper or a research abstract. Email submissions in PostScript format are encouraged. Send to burhans at cs.buffalo.edu. Alternatively, 4 hard copies may be sent to Rohini Srihari, CEDAR/SUNY at Buffalo, UB Commons, 520 Lee Entrance, Suite 202, Buffalo, NY 14228-2567. Further information on this symposium may be found at http://www.cedar.buffalo.edu/Piction/FSS95/CFP.html. Please address questions to Debra Burhans (burhans at cs.buffalo.edu) or Rajiv Chopra (rchopra at cs.buffalo.edu).
Organizing Committee

Janice Glasgow, Queen's University; Ken Forbus, Northwestern University; Annette Herskovits, Wellesley College; Gordon Novak, University of Texas at Austin; Candace Sidner, Lotus Development Corporation; Jeffrey Siskind, University of Toronto; Rohini K. Srihari (chair), CEDAR, SUNY at Buffalo, rohini at cedar.buffalo.edu; Thomas M. Strat, SRI International; David Waltz, NEC Research Institute.

*******************************************************************************
EMBODIED LANGUAGE AND ACTION

This symposium focuses on agents that can use language or similar communication, such as gesture, to facilitate extended interactions in a shared physical or simulated world. We examine how this embodiment in a shared world both stimulates communication and provides a resource for understanding it. Our focus is on the design of artificial agents, implemented in software, hardware, or as animated characters. Papers should clearly relate the technical content presented to one of the following tasks:

- Two or more communicating agents work together to construct, carry out maintenance on, or destroy a physical or simulated artifact (Collaborative Engagement)
- An agent assists a human by fetching or delivering physical or software objects. The human communicates with the agent about what is to be fetched or delivered to where. (Delivery Assistance)

We solicit papers on the following issues (not to the exclusion of others):

- Can task contexts act as resources for communication by simplifying the interpretation and production of communicative acts?
- How does physical embodiment and its concomitant resource limitation affect an agent's ability to interpret or generate language?
- Can architectures designed to support perception and action support language or other forms of communication?
- How can agents mediate between the propositional representations of language and the (often) non-propositional representations of perception and action?
- What tradeoffs exist between the use of communication to improve the agents' task performance and the additional overhead involved in understanding and generating messages?
- Do differences between communication used to support concurrent task execution and communication used to support planning reflect deeper differences in agent ability?
- What is the role of negotiation, whether of task responsibilities or of reference and meaning, in such situated task environments?

Submission Information

Interested participants should submit either (1) a paper (in 12 pt font, not to exceed 3000 words), or (2) a statement of interest briefly describing the author's relevant work in this area and listing recent relevant publications. Send contributions, plain ASCII or PostScript, to ian at ai.mit.edu. If electronic submission is impossible, mail 6 copies to Ian Horswill, MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139.

Organizing Committee

John Batali, UCSD; Jim Firby, University of Chicago; Ian Horswill (cochair), MIT, ian at ai.mit.edu; Marilyn Walker (cochair), Mitsubishi Cambridge Research Labs, walker at merl.com; Bonnie Webber, University of Pennsylvania.

******************************************************************************
FORMALIZING CONTEXT

The notion of context has played an important role in AI systems for many years. However, formal logical explication of contexts remains an area of research with significant open issues.
This symposium will provide a forum for discussing formalizations of contexts, approaches to resolving open issues, and application areas for context formalisms. The most ambitious goal of formalizing contexts is to make automated reasoning systems which are never permanently stuck with the concepts they use at a given time because they can always transcend the context they are in. Such a capability would allow the designer of a reasoning system to include only such phenomena as are required for the system's immediate purpose, retaining the assurance that if a broader system is required later, "lifting rules" can be devised to restate the facts from the narrow context in the broader context with qualifications added as necessary. A formal theory of context in which sentences are always considered as asserted within a context could provide a basis for such transcendence. Formal theories of context are also needed to provide a representation of the context associated with a particular circumstance, e.g. the context of a conversation in which terms have particular meanings that they wouldn't have in the language in general.

Linguists and philosophers have already studied similar notions of context. An example is the situation theory that has been proposed in philosophy and applied to linguistics. However, these theories usually lie embedded in the analysis of specific linguistic constructions, so locating the exact match with AI concerns is itself a research challenge. This symposium aims to bring together researchers who have studied or applied contexts in AI or related fields. Technical papers dealing with formalizations of context, the problem of generality, and use of context in common sense reasoning are especially welcome. However, survey papers which focus on contexts from other points of view, such as philosophy, linguistics, or natural language processing, or which apply contexts in other areas of AI, are also encouraged.

Submission Information

Persons wishing to make presentations should submit papers (up to 12 pages, 12 pt font). Persons wishing only to attend should submit a 1-2 page research summary including a list of relevant publications. A PostScript file or 8 paper copies should be sent to the program chair: Sasa Buvac, Department of Computer Science, Stanford University, Stanford, CA 94305-2140, buvac at sail.stanford.edu. Limited funding will be available to support student travel.

Organizing Committee

Sasa Buvac (chair), Stanford University, buvac at sail.stanford.edu; Richard Fikes, Stanford University; Ramanathan Guha, MCC; Pat Hayes, Beckman Institute; John McCarthy, Stanford University; Murray Shanahan, Imperial College; Robert Stalnaker, MIT; Johan van Benthem, University of Amsterdam.

******************************************************************************
GENETIC PROGRAMMING

Genetic programming (GP) extends the genetic algorithm to the domain of computer programs. In genetic programming, populations of programs are genetically bred to solve problems. Genetic programming can solve problems of system identification, classification, control, robotics, optimization, game-playing, and pattern recognition. Starting with a primordial ooze of hundreds or thousands of randomly created programs composed of functions and terminals appropriate to the problem, the population is progressively evolved over a series of generations by applying the operations of Darwinian fitness-proportionate reproduction and crossover (sexual recombination).
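[Editorial note: to make the loop described above concrete, here is a minimal Python sketch of genetic programming. It is an illustration under assumed settings, not anything specified by the symposium call: the function set {+, -, *}, the terminals {x, 1.0}, the toy regression target f(x) = x*x + x, and all population sizes are this editor's own choices.

import random

FUNCS = ['+', '-', '*']   # internal nodes of an expression tree
TERMS = ['x', 1.0]        # leaves: the input variable and a constant

def random_tree(depth=3):
    # A tree is either a terminal or [op, left_subtree, right_subtree].
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(FUNCS), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, a, b = tree
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def fitness(tree):
    # Higher is better: inverse of summed squared error on sample points.
    try:
        err = sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))
    except OverflowError:       # bloated trees can produce huge values
        return 0.0
    return 1.0 / (1.0 + err)

def nodes(tree, path=()):
    # Enumerate (path, subtree) pairs; used to pick crossover points.
    yield path, tree
    if isinstance(tree, list):
        for i in (1, 2):
            yield from nodes(tree[i], path + (i,))

def replace(tree, path, subtree):
    if not path:
        return subtree
    new = list(tree)
    new[path[0]] = replace(tree[path[0]], path[1:], subtree)
    return new

def crossover(a, b):
    # Swap a random subtree of parent a with a random subtree of parent b.
    pa, _ = random.choice(list(nodes(a)))
    _, sb = random.choice(list(nodes(b)))
    return replace(a, pa, sb)

pop = [random_tree() for _ in range(100)]          # the "primordial ooze"
for _ in range(20):
    weights = [fitness(t) + 1e-12 for t in pop]    # fitness-proportionate
    parents = random.choices(pop, weights=weights, k=2 * len(pop))
    pop = [crossover(parents[2 * i], parents[2 * i + 1]) for i in range(len(pop))]
best = max(pop, key=fitness)
print('best fitness:', fitness(best))
print('best tree   :', best)

Runs of this sketch usually recover an exact or near-exact expression for the target within a few dozen generations; real GP systems add elitism, depth limits, and mutation, which are omitted here for brevity.]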
Topics of interest for the symposium include:

- The theoretical basis of genetic programming
- Applications of genetic programming
- Rigorousness of validation techniques
- Hierarchical decomposition, e.g. automatically defined functions
- Competitive coevolution
- Automatic parameter tuning
- Representation issues
- Genetic operators
- Establishing standard benchmark problems
- Parallelization techniques
- Innovative variations

Submission Information

The format of the symposium will encourage interaction and discussion, but will also include formal presentations. Persons wishing to make a presentation should submit an extended abstract of up to 2500 words describing their work in progress or completed work. For accepted abstracts, full papers will be due at a date closer to the symposium. Persons not wishing to make a presentation are asked to submit a one-page description of their research interests, since there may be limited room for participation. Submit your abstract or one-page description as plain text electronically by Friday, April 14, 1995, with a hard-copy backup to Eric V. Siegel, AAAI GP Symposium Co-Chair, Columbia University, Department of Computer Science, 500 W 120th Street, New York, NY 10027, USA; telephone: 212-939-7112, fax: 212-666-0140, e-mail: evs at cs.columbia.edu.

Organizing Committee

Robert Collins, USAnimation, Inc.; Frederic Gruau, Stanford University; John R. Koza (co-chair), Stanford University, koza at cs.stanford.edu; Conor Ryan, University College Cork; Eric V. Siegel (co-chair), Columbia University, evs at cs.columbia.edu; Andy Singleton, Creation Mechanics, Inc.; Astro Teller, Carnegie-Mellon University.

******************************************************************************
RATIONAL AGENCY: CONCEPTS, THEORIES, MODELS, AND APPLICATIONS

This symposium explores conceptions of rational agency and their implications for theory, research, and practice. The view that intelligent systems are, or ought to be, rational agents underlies much of the theory and research in artificial intelligence and cognitive science. However, no consensus exists on a proper view of agency or rationality principles for practical agents. Traditionally, agents are presumed disposed toward purposive action. However, agent theories abound in which behavior is fundamentally reactive. Some theories emphasize agents' abilities to manage private belief systems. Others focus on agents' interactions with their environment, sometimes including other agents. Application builders have recently broadened the term "agent" to mean any embedded system performing tasks to support human users. Rationality accounts are equally diverse. Rationality involves having reasons warranting particular beliefs (epistemic rationality) or particular desires and actions (strategic or practical rationality). Many agent models propose epistemic rationality criteria such as logical consistency or consequential closure. Other agent models base practical rationality on classical or non-monotonic logics for reasoning about action. Such logicist views are now being challenged by decision-theoretic accounts emphasizing optimal action under uncertainty, including recent work on decision-theoretic principles of metareasoning for limited rationality. Our symposium will explore the diverse views of rational agency.
Through informal presentations and group discussion, participants will critically examine agency concepts and rationality principles, review computational agent models and applications, and consider promising directions for future work on this topic.

Submission Information

Prospective participants should submit a brief paper (5 pages or less) describing their research in relation to any of the following questions:

- Is rationality important; must an agent be rational to be successful?
- What are suitable principles of epistemic, strategic, and limited rationality?
- Are rationality principles applicable to retrospective processes such as learning?
- What are general requirements on rational agent architectures?
- How, if at all, must a model of rational agency be modified to account for social, multi-agent interaction?

Those wishing to make a specific presentation should describe its contents in their concept paper. Note: While we recognize that our topic lends itself to formal analysis, we also encourage discussion of experimental work with implemented agents. PostScript files of concept papers should be sent by email only to the program chair, fehling at lis.stanford.edu.

Organizing Committee

Michael Fehling (chair), Stanford University, fehling at lis.stanford.edu; Don Perlis, University of Maryland; Martha Pollack, University of Pittsburgh; John Pollock, University of Arizona.

From tesauro at watson.ibm.com Thu Dec 8 14:28:41 1994
From: tesauro at watson.ibm.com (tesauro@watson.ibm.com)
Date: Thu, 8 Dec 94 14:28:41 EST
Subject: NIPS*94 Top Ten List
Message-ID:

By popular demand, here once again is the Top Ten List. Thanks go to my comedy-writing collaborators, Mike Mozer and Don Mathis of the Univ. of Colorado. --Gerry Tesauro

------------------------------------------------------------------
Top 10 Little-Known Improvements to NIPS This Year

10. No more of that annoying math!
9. To keep speakers on their toes, a gong at every table.
8. Special prize for "Most Laughable Poster"
7. Push pins available to everyone, not just to poster presenters.
6. Special evening session in which senior researchers display their surgical scars.
5. Hotel staff all licensed to back-propagate.
4. More rock, less talk.
3. Student volunteers at the registration desk will finally stop asking "You want fries with that?"
2. Invited speakers disqualified if they test positive for steroids.
1. Morgan Kaufmann to publish special "Swimsuit Edition" of conference proceedings.

From dwang at cis.ohio-state.edu Mon Dec 12 10:50:23 1994
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Mon, 12 Dec 1994 10:50:23 -0500
Subject: About sequential learning (or interference)
Message-ID:

It was reported some time ago that multilayer perceptrons suffer the problem of so-called "catastrophic interference", meaning that later training will destroy previously acquired "knowledge" (see McCloskey & Cohen, Psychol. of Learning and Motivat. 24, 1989; Ratcliff, Psychol. Rev. 97, 1990). This seems to be a serious problem if we want to use neural networks both as a stable knowledge store and a long-term problem solver.

The problem seems to exist in associative memory models as well, even though the simple prescription of the Hopfield net can easily incorporate more patterns as long as they are within the capacity limit. But we all know that the original Hopfield net does not work well for correlated patterns, which are the most interesting ones for real applications.
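[Editorial note: for readers who have not seen the effect, here is a minimal numpy sketch of the interference DeLiang Wang describes. The network sizes and random binary pattern sets are arbitrary illustrative assumptions of this editor, not a reproduction of the cited studies: a small backpropagation network is trained on a pattern set A, then trained on a second set B alone, after which its error on A typically rises sharply.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 8
W1 = rng.normal(0, 0.5, (n_in, n_hid))   # input-to-hidden weights
W2 = rng.normal(0, 0.5, (n_hid, n_out))  # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, T, epochs=2000, lr=0.5):
    # Plain batch backpropagation with squared error, no rehearsal of old data.
    global W1, W2
    for _ in range(epochs):
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)
        dY = (Y - T) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dY
        W1 -= lr * X.T @ dH

def error(X, T):
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2)

A_x, A_t = rng.integers(0, 2, (4, n_in)), rng.integers(0, 2, (4, n_out))
B_x, B_t = rng.integers(0, 2, (4, n_in)), rng.integers(0, 2, (4, n_out))

train(A_x, A_t)
print("error on A after learning A:", error(A_x, A_t))
train(B_x, B_t)   # sequential phase: train on B only
print("error on A after learning B:", error(A_x, A_t))

Whether and how this degradation can be avoided is precisely the question taken up in the replies below.]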
Kanter and Sompolinsky proposed a prescription to tackle the problem (Phys. Rev. A, 1987). Their prescription, however, requires nonlocal learning. Diederich and Opper (1987, Phys. Rev. Lett.) later proposed a local, iterative learning rule (similar to perceptron learning) that they show will converge to the prescription (which is the old idea of orthogonalization). According to their paper, to learn a new pattern one needs to bring in all previously acquired patterns during iterative training in order to make all of the patterns converge to the desired prescription. Because of this learning scheme, I suspect that the algorithm of Diederich and Opper also suffers "catastrophic forgetting".

Is that a fair assessment of the problem? Has any major effort been taken to address this important problem?

DeLiang Wang

From jaap.murre at mrc-apu.cam.ac.uk Tue Dec 13 11:59:33 1994
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Tue, 13 Dec 94 16:59:33 GMT
Subject: hypertransfer and interference
Message-ID: <9412131659.AA03848@rigel.mrc-apu.cam.ac.uk>

DeLiang Wang asks whether any major effort has been undertaken to investigate and possibly remedy the problem of catastrophic interference. A fairly detailed analysis of this problem can be found in:

Murre, J.M.J. (1992b). Categorization and learning in modular neural networks. Hemel Hempstead: Harvester Wheatsheaf. Co-published by Lawrence Erlbaum in the USA and Canada (Hillsdale, NJ).

I have recently completed a paper that shows that backpropagation not only suffers from 'catastrophic interference' but also from 'hypertransfer', i.e., in some circumstances performance on a set A actually *improves* when learning a second set B. The learning transfer effects are catastrophic (or hyper) with respect to human learning data. The paper also shows that two-layer networks do not suffer from excessive transfer and are in fact in very close accordance with the human interference data as summarized in the classic paper by Osgood (1949):

Murre, J.M.J. (in press). Transfer of learning in backpropagation networks and in related neural network models. To appear in Levy, Bairaktaris, Bullinaria, & Cairns (Eds.), Connectionist Models of Memory and Language. London: UCL Press. (in ftp directory: hyper1.ps)

I have put the paper in our anonymous ftp directory:

ftp ftp.mrc-apu.cam.ac.uk
cd /pub/nn/murre
bin
get hyper1.ps

From lpratt at franklinite.Mines.Colorado.EDU Tue Dec 13 05:20:35 1994
From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. Pratt)
Date: Tue, 13 Dec 94 11:20:35 +0100
Subject: About sequential learning (or interference)
Message-ID: <9412131020.AA01764@franklinite.Mines.Colorado.EDU>

Deliang,

My own work on transfer between neural networks has addressed the sequential learning problem; see pratt-93, cited below, and lots of other papers available from my web page: http://vita.mines.colorado.edu:3857/1s/lpratt. My formulation differs from others in that, if the new task is essentially different from the old task, I intentionally lose old-task performance if necessary to improve on the new task. Other people who have looked at a similar formulation include sharkey-92, sharkey-93, naik-92b, agarwal-92, and martin-88. Those who have attempted to preserve old-task performance include mccloskey-89, and recent work by Thrun & Mitchell, especially thrun-93a, but also including thrun-92, thrun-93b.
Sebastian and I had a long talk at NIPS about a general formulation for transfer as a solution to the sequential learning problem -- he's got an excellent formulation along these lines. Approaches to modular training can also be viewed as one approach to handling the sequential learning problem, in the special case where the old tasks can be viewed as subtasks of the new task (i.e. pratt-91, waibel-89, and Jacobs' and others' recent work on mixtures of experts, including Pomerleau's approach to Alvinn, see last year's NIPS). My thesis (see my web page) has a more detailed review of all this stuff. Hope that helps. --Lori, recovering from NIPS @incollection{ pratt-93, MYKEY = " pratt-93 : .con .bap", EDITOR = "C.L. Giles and S. J. Hanson and J. D. Cowan", BOOKTITLE = "{Advances in Neural Information Processing Systems 5}", AUTHOR = "L. Y. Pratt", TITLE = "Discriminability-Based Transfer between Neural Networks", ADDRESS = "San Mateo, CA", PUBLISHER = "Morgan Kaufmann Publishers", YEAR = 1993, PAGES = {204--211}, CATALOGDATE = "April 12, 1993", NOTE = "Also available via anonymous ftp to franklinite.mines.colorado.edu: pub/pratt-papers/pratt-nips5.ps.Z", } @inproceedings{ sharkey-92, MYKEY = " sharkey-92 : ", TITLE = "Adaptive generalisation and the transfer of knowledge", AUTHOR = "Noel E. Sharkey and Amanda J. C. Sharkey", BOOKTITLE = "Proceedings of the Second Irish Neural Networks Conference, Belfast", YEAR = 1992, CATALOGDATE = "December 18, 1992", } @article{ sharkey-93, MYKEY = " sharkey-93 : ", TITLE = "Adaptive generalisation and the transfer of knowledge", AUTHOR = "N. E. Sharkey and A. J. C. Sharkey", NUMBER = "In press", VOLUME = "Special Issue on Connectionism", JOURNAL = "AI Review", YEAR = 1993, ANNOTE = "In press", CATALOGDATE = "May 19, 1993", } @inproceedings{ naik-92b, MYKEY = " naik-92b : .eml .unb .con ", AUTHOR = "D. K. Naik and R. J. Mammone and A. Agarwal", TITLE = "Meta-Neural Network approach to learning by learning", YEAR = 1992, BOOKTITLE = "Intelligence Engineering Systems through Artificial Neural Networks", ORGANIZATION = "The American Society of Mechanical Engineers", PUBLISHER = "ASME Press", VOLUME = 2, PAGES = {245--252}, CATALOGDATE = "December 23, 1992", } @inproceedings{ agarwal-92, MYKEY = " agarwal-92 : .eml .unb .con ", AUTHOR = "A. Agarwal and R. J. Mammone and D. K. Naik", TITLE = "An on-line Training Algorithm to Overcome Catastrophic Forgetting", BOOKTITLE = "Intelligence Engineering Systems through Artificial Neural Networks", YEAR = 1992, ORGANIZATION = "The American Society of Mechanical Engineers", PUBLISHER = "ASME Press", VOLUME = 2, PAGES = {239--244}, CATALOGDATE = "December 23, 1992", } @techreport{ martin-88, MYKEY = " martin-88 : ", TITLE = "The Effects of Old Learning on New in Hopfield and Backpropagation Nets", AUTHOR = "Gale Martin", KEY = "martin-88", INSTITUTION = "Microelectronics and Computer Technology Corporation (MCC)", NUMBER = "ACA-HI-019", YEAR = 1988, CATALOGDATE = "January 3, 1992", } @article{ mccloskey-89, MYKEY = " mccloskey-89 : .unb .con .csy .adap", TITLE = "Catastrophic interference in connectionist networks: the sequential learning problem", KEY = "mccloskey-89", AUTHOR = "Michael McCloskey and Neal J. Cohen", JOURNAL = "The psychology of learning and motivation", VOLUME = 24, YEAR = 1989, CATALOGDATE = "April 9, 1991", } @inproceedings{ thrun-93a, MYKEY = " thrun-93a : ", TITLE = "Lifelong Robot Learning", AUTHOR = "Sebastian B. Thrun and Tom M. 
Mitchell", BOOKTITLE = "Proceedings of the {NATO} {ASI}: The biology and technology of intelligent autonomous agents", EDITOR = "Luc Steels", YEAR = 1993, KEY = "thrun-93a", CATALOGDATE = "September 1, 1993", } @INPROCEEDINGS{thrun-93b, AUTHOR = {Thrun, Sebastian B. and Mitchell, Tom M.}, TITLE = {Integrating Inductive Neural Network Learning and Explanation-Based Learning}, BOOKTITLE = {Proceedings of IJCAI-93}, YEAR = {1993}, ORGANIZATION = {IJCAI, Inc.}, PUBLISHER = {}, ADDRESS = {Chamberry, France}, MONTH = {}, CATALOGDATE = "October 11, 1993", } @inproceedings{ pratt-91, MYKEY = " pratt-91 : .min .bap .app .spc .con ", AUTHOR = "Lorien Y. Pratt and Jack Mostow and Candace A. Kamm", TITLE = "Direct Transfer of Learned Information among Neural Networks", BOOKTITLE = "Proceedings of the Ninth National Conference on Artificial Intelligence (AAAI-91)", PAGES = {584--589}, ADDRESS = "Anaheim, CA", YEAR = 1991, } @article{ waibel-89, MYKEY = " waibel-89 : .bap .unr .unb .tem .spc .con ", TITLE = "Modular Construction of Time-Delay Neural Networks for Speech Recognition", AUTHOR = "Alexander Waibel", journal = "Neural Computation", volume = 1, pages = {39--46}, year = 1989 }  From wermter at nats2.informatik.uni-hamburg.de Tue Dec 13 06:04:05 1994 From: wermter at nats2.informatik.uni-hamburg.de (Stefan Wermter) Date: Tue, 13 Dec 94 12:04:05 +0100 Subject: book on hybrid connectionist language processing Message-ID: <9412131104.AA01259@nats2.informatik.uni-hamburg.de> BOOK ANNOUNCEMENT ----------------- The following book is now available from the beginning of December 1994. Title: Hybrid connectionist natural language processing Date: 1995 Author: Stefan Wermter Dept. of Computer Science University of Hamburg Vogt-Koelln-Str. 30 D-22527 Hamburg Germany wermter at informatik.uni-hamburg.de Series: Neural Computing Series 7 Publisher: Chapman & Hall Inc 2-6 Boundary Row London SE1 8HN England (Order information in the end of this message) Description ----------- The objective of this book is to describe a new approach in hybrid connectionist natural language processing which bridges the gap between strictly symbolic and connectionist systems. This objective is tackled in two ways: the book gives an overview of hybrid connectionist archi- tectures for natural language processing; and it demonstrates that a hybrid connectionist architecture can be used for learning real-world natural language problems. The book is primarily intended for scientists and students interested in the fields of artificial intelligence, neural networks, connectionism, natural language processing, hybrid symbolic connectionist architectures, parallel distributed processing, machine learning, automatic knowledge acquisition or computational linguistics. Furthermore, it might be of interest for scientists and students in information retrieval and cognitive science, since the book points out interdisciplinary relationships to these fields. We develop a systematic spectrum of hybrid connectionist architectures, from completely symbolic architectures to separated hybrid connectionist architectures, integrated hybrid connectionist architectures and completely connectionist architectures. Within this systematic spectrum we have designed a system SCAN with two separated hybrid connectionist architectures and two integrated hybrid connectionist architectures for a scanning understanding of phrases. A scanning understanding is a relation-based flat understanding in contrast to traditional symbolic in-depth understanding. 
Hybrid connectionist representations consist of either a combination of connectionist and symbolic representations or different connectionist representations. In particular, we focus on important tasks like structural disambiguation and semantic context classification. We show that a parallel modular, constraint-based, plausibility-based and learned use of multiple hybrid connectionist representations provides powerful architectures for learning a scanning understanding. In particular, the combination of direct encoding of domain-independent structural knowledge and the connectionist learning of domain-dependent semantic knowledge, as suggested by a scanning under- standing in SCAN, provides concepts which lead to flexible, adaptable, transportable architectures for different domains. Table of Contents ----------------- 1 Introduction 1.1 Learning a Scanning Understanding 1.2 The General Approach 1.3 Towards a Hybrid Connectionist Memory Organization 1.4 An Overview of the SCAN Architecture 1.5 Organization and Reader's Guide 2 Connectionist and Hybrid Models for Language Understanding 2.1 Foundations of Connectionist and Hybrid Connectionist Approaches 2.2 Connectionist Architectures 2.2.1 Representation of Language in Parallel Spatial Models Early Pattern Associator for Past Tense Learning Pattern Associator for Semantic Case Assignment Pattern Associator with Sliding Window Time Delay Neural Networks 2.2.2 Representation of Language in Recurrent Models Recurrent Jordan Network for Action Generation Simple Recurrent Network for Sequence Processing Recursive Autoassociative Memory Network 2.2.3 Towards Modular and Integrated Connectionist Models Cascaded Networks Sentence Gestalt Model Grounding Models 2.3 Hybrid Connectionist Architectures 2.3.1 Sentence Analysis in Hybrid Models Hybrid Interactive Model for Constraint Integration Hybrid Model for Sentence Analysis 2.3.2 Inferencing in Hybrid Models Symbolic Marker Passing and Localist Networks Symbolic Reasoning with Connectionist Models 2.3.3 Architectural Issues in Hybrid Connectionist Systems Symbolic Neuroengineering and Symbolic Recirculation Modular Model for Parsing 2.4 Summary and Discussion 3 A Hybrid Connectionist Scanning Understanding of Phrases 3.1 Foundations of a Hybrid Connectionist Architecture 3.1.1 Motivation for a Hybrid Connectionist Architecture 3.1.2 The Computational Theory Level for a Scanning Understanding 3.1.3 Constraint Integration 3.1.4 Plausibility view 3.1.5 Learning 3.1.6 Subtasks of Scanning Understanding at the Computational Theory Level 3.1.7 The Representation Level for a Scanning Understanding 3.2 Corpora and Lexicon for a Scanning Understanding 3.2.1 The Underlying Corpora 3.2.2 Complex Phrases 3.2.3 Context and Ambiguities of Phrases 3.2.4 Organization of the Lexicon 3.3 Plausibility Networks 3.3.1 Learning Semantic Relationships and Semantic Context 3.3.2 The Foundation of Plausibility Networks 3.3.3 Plausibility Networks for Noun-Connecting Semantic Relationships 3.3.4 Learning in Plausibility Networks 3.3.5 Recurrent Plausibility Networks for Contextual Relationships 3.3.6 Learning in Recurrent Plausibility Networks 3.4 Summary and Discussion 4 Structural Phrase Analysis in a Hybrid Separated Model 4.1 Introduction and Overview 4.2 Constraints for Coordination 4.3 Symbolic Representation of Syntactic Constraints 4.3.1 A Grammar for Complex Noun Phrases 4.3.2 The Active Chart Parser and the Syntactic Constraints 4.4 Connectionist Representation of Semantic Constraints 4.4.1 Head-noun Structure 
for Semantic Relationships 4.4.2 Training and Testing Plausibility Networks with NCN-relationships 4.4.3 Learned Internal Representation 4.5 Combining Chart Parser and Plausibility Networks 4.6 A Case Study 4.7 Summary and Discussion 5 Structural Phrase Analysis in a Hybrid Integrated Model 5.1 Introduction and Overview 5.2 Constraints for Prepositional Phrase Attachment 5.3 Representation of Constraints in Relaxation Networks 5.3.1 Integrated Relaxation Network 5.3.2 The Relaxation Algorithm 5.3.3 Testing Relaxation Networks 5.4 Representation of Semantic Constraints in Plausibility Networks 5.4.1 Training and Testing Plausibility Networks with NPN-Relationships 5.4.2 Learned Internal Representation 5.5 Combining Relaxation Networks and Plausibility Networks 5.5.1 The Interface between Relaxation Networks and Plausibility Networks 5.5.2 The Dynamics of Processing in a Relaxation Network 5.6 A Case Study 5.7 Summary and Discussion 6 Contextual Phrase Analysis in a Hybrid Separated Model 6.1 Introduction and Overview 6.2 Towards a Scanning Understanding of Semantic Phrase Context 6.2.1 Superficial Classification in Information Retrieval 6.2.2 Skimming Classification with Symbolic Matching 6.3 Constraints for Semantic Context Classification of Noun Phrases 6.4 Syntactic Condensation of Phrases to Compound Nouns 6.4.1 Motivation of Symbolic Condensation 6.4.2 Condensation Using a Symbolic Chart Parser 6.5 Plausibility Networks for Context Classification of Compound Nouns 6.5.1 Training and Testing the Recurrent Plausibility Network 6.5.2 Learned Internal Representation 6.6 Summary and Discussion 7 Contextual Phrase Analysis in a Hybrid Integrated Model 7.1 Introduction and Overview 7.2 Constraints for Semantic Context Classification of Phrases 7.3 Plausibility Networks for Context Classification of Phrases 7.3.1 Training and Testing with Complete Phrases 7.3.2 Training and Testing with Phrases without Insignificant Words 7.3.3 Learned Internal Representation 7.4 Semantic Context Classification and Text Filtering 7.5 Summary and Discussion 8 General Summary and Discussion 8.1 The General Framework of SCAN 8.2 Analysis and Evaluation 8.2.1 Evaluating the Problems 8.2.2 Evaluating the Methods 8.2.3 Evaluating the Representations 8.2.4 Evaluating the Experiment Design 8.2.5 Evaluating the Experiment Results 8.3 Extensions of a Scanning Understanding 8.3.1 Extending Modular Subtasks 8.3.2 Extending Interactions 8.4 Contributions and Conclusions 9 Appendix 9.1 Hierarchical Cluster Analysis 9.2 Implementation 9.3 Examples of Phrases for Structural Phrase Analysis 9.4 Examples of Phrases for Contextual Phrase Analysis References Index Orders information ----------------- ISBN: 0 412 59100 6 Pages: 190 Figures: 56 Price: 29.95 pounds sterling, 52.00 US dollars Credit cards: all major credit cards accepted by Chapman & Hall Please order from: -- Pam Hounsome Chapman & Hall Cheriton House North Way Andover Hants, SP10 5BE, UK England UK orders: Tel: 01264 342923 Fax: 01264 364418 Overseas orders: Tel: +44 1264 342830 Fax: +44 1264 342761 (Sister company in US) -- Peter Clifford Chapman & Hall Inc One Penn Plaza 41st Floor New York NY 10119 USA Tel: 212 564 1060 Fax: 212 564 1505 -- or e-mail on: order at Chaphall.com  From mario at physics.uottawa.ca Tue Dec 13 09:01:44 1994 From: mario at physics.uottawa.ca (Mario Marchand) Date: Tue, 13 Dec 94 10:01:44 AST Subject: Neural net and PAC learning paper: errata Message-ID: <9412131401.AA27027@physics.uottawa.ca> About a week ago, I have sent you this 
message: >The following paper, which was presented at the NIPS'94 conference, >is available by anonymous ftp at: > >ftp://dirac.physics.uottawa.ca/usr2/ftp/pub/tr/marchand > > >FileName: nips94.ps > >Title: Learning Stochastic Perceptrons Under k-Blocking Distributions > >Authors: Marchand M. and Hadjifaradji S. > > >ALSO: you will find other papers co-authored by Mario Marchand in > this directory. The text file: Abstracts-mm.txt contains a > list of abstracts of all the papers. In fact the right path for ftp is: ftp://dirac.physics.uottawa.ca/pub/tr/marchand and the file name is: nips94.ps.Z (compressed) sorry for this inconvenience and many thanks for those who have kindly reported the mistakes. - mario ---------------------------------------------------------------- | UUU UUU Mario Marchand | | UUU UUU ----------------------------- | | UUU OOOOOOOOOOOOOOOO Department of Physics | | UUU OOO UUU OOO University of Ottawa | | UUUUUUUUUUUUUUUU OOO 150 Louis Pasteur street | | OOO OOO PO BOX 450 STN A | | OOOOOOOOOOOOOOOO Ottawa (Ont) Canada K1N 6N5 | | | | ***** Internet E-Mail: mario at physics.uottawa.ca ********** | | ***** Tel: (613)564-9293 ------------- Fax: 564-6712 ***** | ----------------------------------------------------------------  From bisant at gl.umbc.edu Tue Dec 13 20:21:48 1994 From: bisant at gl.umbc.edu (Mr. David Bisant) Date: Tue, 13 Dec 1994 20:21:48 -0500 Subject: FLAME: WCNN'95 - science in the mud pits In-Reply-To: <199412081651.LAA01963@garnet.cs.brandeis.edu> Message-ID: On Thu, 8 Dec 1994, Jordan Pollack wrote: > > > >Authors must submit > >registration payment with papers to be eligible for the early > >registration fee...Registration fee includes 1995 membership and a > >one (1) year subscription to the Journal Neural Networks. > >A $35 per paper processing fee must be enclosed > >in order for the paper to be refereed. > > Reading fees are charged by agents for aspiring science fiction > writers. Linking registration to submission is nothing more than an > admission that all papers are to be accepted. And a fee of $480 is > paying a lot more than membership, subscription, and a few days of > coffee breaks! > The new fee structure for WCNN did not strike me as being unusually high. I also think it will help solve a problem I observed last year with a few authors not showing up for their talks, which wasted valuable time slots for oral presentation and upset schedules. With interest in the neural network field diminishing, new problems in conferencing appear. I think their fee structure is a reasonable approach to solving these problems. David Bisant (no affiliation with the conference organization)  From cga at cc.gatech.edu Tue Dec 13 21:35:22 1994 From: cga at cc.gatech.edu (Christopher G. Atkeson) Date: Tue, 13 Dec 1994 21:35:22 -0500 Subject: About sequential learning (or interference) Message-ID: <199412140235.VAA17037@lennon.cc.gatech.edu> Memory-Based Learning is one approach to avoiding interference, and is in part descended from nearest neighbor neural networks, or at least neural networks with local representations. Some of our references that will point you to other work as well: Atkeson, C.G. (1992) "Memory-Based Approaches To Approximating Continuous Functions." In: Casdagli, M., & Eubank, S. (eds.), "Nonlinear Modeling and Forecasting" Addison Wesley, Redwood City, CA. Atkeson, C.G. ``Using Local Models to Control Movement'', Proceedings, Neural Information Processing Systems, Dec 1989, Denver, Colorado. 
In: Neural Information Processing Systems 2. Morgan Kaufmann, 1990.

Schaal, S., & Atkeson, C.G. (1994). "Robot Juggling: An Implementation of Memory-Based Learning." IEEE Control Systems Magazine, vol. 15, no. 1, pp. 57-71.

Chris Atkeson
Stefan Schaal

 From jlm at crab.psy.cmu.edu Tue Dec 13 23:16:34 1994 From: jlm at crab.psy.cmu.edu (James L. McClelland) Date: Tue, 13 Dec 94 23:16:34 EST Subject: About sequential learning (or interference) In-Reply-To: (message from DeLiang Wang on Mon, 12 Dec 1994 10:50:23 -0500) Message-ID: <9412140416.AA17263@crab.psy.cmu.edu.psy.cmu.edu>

DeLiang Wang writes:

> It was reported some time ago that multilayer perceptrons suffer the problem
> of so-called "catastrophic interference", meaning that later training will
> destroy previously acquired "knowledge" (see McCloskey & Cohen, Psychol. of
> Learning and Motivat. 24, 1990; Ratcliff, Psychol. Rev. 97, 1990). This seems
> to be a serious problem, if we want to use neural networks both as a stable
> knowledge store and a long-term problem solver.

The brain appears to have solved this problem by storing new information in a special associative memory system in the hippocampus. According to this view, cortical (and some other non-hippocampal) systems learn slowly, using what I call 'interleaved learning'. Weights are adjusted a small amount after each experience, so that the overall direction of weight change is governed by the structure present in the ensemble of events and experiences. New material can be added to such a memory without catastrophic interference if it is added slowly, interleaved with ongoing exposure to other events and experiences. Interleaved learning alone, however, is too slow for the demands placed on memory by the world. To allow rapid learning of new material without catastrophic interference, new material is initially stored in the hippocampus, where sparse, conjunctive representations are used to minimize interference with other memories. Reinstatement of these patterns occurs in task-relevant situations via bi-directional connections between hippocampus and cortex together with pattern completion within the hippocampal formation. Interestingly, it appears that reinstatement occurs in off-line situations as well -- most notably, during sleep. Each reinstatement provides an opportunity for the neocortex to learn; but these reinstatements occur interleaved with other ongoing events and experiences, and weight changes are small on each reinstatement, so that the neocortex learns the new material via interleaved learning.

This theory is consistent with a lot of data at this point, including of course the basic fact that bilateral removal of the hippocampus leads to a profound deficit in the ability to form arbitrary new memories rapidly. Among the most important additional findings are 1) the phenomenon of temporally graded retrograde amnesia --- the finding that damage to the hippocampus can produce a selective deficit for recently acquired memories, leaving memories acquired several weeks or months earlier intact --- and 2) the finding that neurons in the hippocampus that are coactive during a particular behavior appear to be coactive during sleep following the behavior, as if the patterns of activity that were present during behavior are reactivated during sleep.

I announced a technical report discussing the above ideas last March. In that report, we focused on why it makes sense from a connectionist point of view that the system should be organized as described above.
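In code terms, the contrast between the two training regimes can be sketched roughly as follows (a minimal sketch only; the net.train_step interface and the function names are hypothetical stand-ins, not code from the report):

    import random

    def interleaved_updates(net, old_items, new_items, epochs=100, lr=0.01):
        # New material is mixed into the ongoing stream of old events and
        # experiences; each presentation changes the weights only slightly.
        for _ in range(epochs):
            stream = old_items + new_items
            random.shuffle(stream)
            for x, y in stream:
                net.train_step(x, y, lr)   # small steps, gradual learning

    def focused_updates(net, new_items, epochs=100, lr=0.5):
        # Block training on the new items alone -- the regime under which
        # multilayer perceptrons show catastrophic interference.
        for _ in range(epochs):
            for x, y in new_items:
                net.train_step(x, y, lr)

The point of the first routine is that the direction of weight change is governed by the whole ensemble of items, so new material added slowly does not overwrite the old.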
A reprint of the announcement (sans abstract) appears below. This TR is currently under revision for publication; the revised version will contain a fuller presentation of the physiological evidence than the present version, and I will announce it on connectionists when it becomes available.

======================================================================== Technical report announcement: ------------------------------------------------------------------------ Why there are Complementary Learning Systems in the Hippocampus and Neocortex: Insights from the Successes and Failures of Connectionist Models of Learning and Memory James L. McClelland, Bruce L. McNaughton & Randall C. O'Reilly Carnegie Mellon University & The University of Arizona Technical Report PDP.CNS.94.1 March, 1994 =======================================================================

Retrieval information:

unix> ftp 128.2.248.152 # hydra.psy.cmu.edu
Name: anonymous
Password:
ftp> cd pub/pdp.cns
ftp> binary
ftp> get pdp.cns.94.1.ps.Z
ftp> quit
unix> zcat pdp.cns.94.1.ps.Z | lpr # or however you print postscript

NOTE: The compressed file is 306994 bytes long. Uncompressed, the file is 840184 bytes long. The printed version is 63 total pages long. For those who do not have FTP access, physical copies can be requested from Barbara Dorney . Ask for the report by title or pdp.cns technical report number.

 From bogner at eleceng.adelaide.edu.au Tue Dec 13 23:22:37 1994 From: bogner at eleceng.adelaide.edu.au (Robert E. Bogner) Date: Wed, 14 Dec 1994 14:52:37 +1030 Subject: [dwang@cis.ohio-state.edu: About sequential learning (or interference)] Message-ID: <9412140422.AA16796@augean.eleceng.adelaide.edu.au>

From dwang at cis.ohio-state.edu Mon Dec 12 10:50:23 1994 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Mon, 12 Dec 1994 10:50:23 -0500 Subject: About sequential learning (or interference) Message-ID:

1. Visakan Kadirkamanathan wrote a dissertation on "Sequential Learning in Artificial Neural Networks" for a PhD at U. of Cambridge, October 1991. There were some conference papers by the same author.

2. C. J. Wallace, "Autonomous Rehearsal: Stability and plasticity in a multilayer perceptron", project report, Dept. of EEE, Univ. of Adelaide, October 1992. ". . . a psychologically-motivated idea for maintaining the original learning of a classifier neural network by getting it to reinforce associations formed from confidently classified random inputs. . ."

Robert E. Bogner -------------------------------------------------------------------------- e_mail bogner at eleceng.adelaide.edu.au WATCH THIS SPACE Reliable mail: Robert E. Bogner Professor of Electrical Engineering, The University of Adelaide, Adelaide 5005, SOUTH AUSTRALIA Phone +61 8 303 5589 (answering machine all hours) Fax +61 8 303 4360 OR +61 8 224 0464 Home phone +61 8 332 5549 Time GMT+9h30m in northern hemisphere summer GMT+10h30m in northern hemisphere winter ==========================================================================

 From sef+ at cs.cmu.edu Wed Dec 14 00:03:50 1994 From: sef+ at cs.cmu.edu (Scott E. Fahlman) Date: Wed, 14 Dec 94 00:03:50 EST Subject: About sequential learning (or interference) In-Reply-To: Your message of Tue, 13 Dec 94 21:35:22 -0500. <199412140235.VAA17037@lennon.cc.gatech.edu> Message-ID:

Yet another approach to incremental learning can be seen in the Cascade-Correlation algorithm. This creates hidden units (or feature detectors, if you prefer) one by one, and freezes each one after it is created.
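In outline, the freeze-and-grow scheme looks roughly like this (a minimal sketch under assumed helper names -- recruit_hidden_unit, train_output_weights and so on are hypothetical, not the actual Cascade-Correlation code):

    def grow_network(net, examples, max_units=50, tol=0.01):
        while net.num_hidden_units() < max_units:
            # Phase 1: train only the connections feeding the outputs.
            net.train_output_weights(examples)
            if net.error(examples) <= tol:
                break
            # Phase 2: create one new hidden unit whose incoming weights are
            # trained to track the remaining error, then freeze those weights
            # permanently; later training can never cannibalize this unit.
            unit = net.recruit_hidden_unit(examples)
            unit.freeze_incoming_weights()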
The weights between these features and the outputs remain plastic and continue to be trained. This means that if you train a net on training set A and then switch to a different training set B, you may build some new hidden units for task B, but the hidden units created for task A are not cannibalized. The A units may be used in task B, either directly or as inputs to new hidden units, but they remain unchanged and available. As task B is trained, the output weights change, so performance on task A will generally decline, but it can come back very quickly if task A is re-trained or if you now train with a combined A+B task. The point is that the time-consuming part of learning is in creating the hidden units, and these are retained once they are learned.

My Recurrent Cascade-Correlation paper has an example of this. This recurrent net can be trained to recognize a temporal sequence of 1's and 0's as Morse code. If you train all 26 letter-codes as a single training set, the network will learn the task, but learning is faster and the resulting net is smaller if you break the training set up into "lessons" of increasing difficulty: first train on the shortest codes, then the medium-length ones, then the longest ones, and finally on the whole set together. This is reminiscent of the modular networks explored by Waibel and his colleagues for large speech tasks, but in this case you just have to chop the training set up into modules -- the network architecture takes care of itself.

References (these are also available online on Neuroprose, among other places):

Scott E. Fahlman and Christian Lebiere (1990) "The Cascade-Correlation Learning Architecture", in {\it Advances in Neural Information Processing Systems 2}, D. S. Touretzky (ed.), Morgan Kaufmann Publishers, Los Altos CA, pp. 524-532.

Scott E. Fahlman (1991) "The Recurrent Cascade-Correlation Architecture" in {\it Advances in Neural Information Processing Systems 3}, R. P. Lippmann, J. E. Moody, and D. S. Touretzky (eds.), Morgan Kaufmann Publishers, Los Altos CA, pp. 190-196.

-- Scott

=========================================================================== Scott E. Fahlman Internet: sef+ at cs.cmu.edu Principal Research Scientist Phone: 412 268-2575 School of Computer Science Fax: 412 268-5576 (new!) Carnegie Mellon University Latitude: 40:26:46 N 5000 Forbes Avenue Longitude: 79:56:55 W Pittsburgh, PA 15213 Mood: :-) ===========================================================================

 From pollack at cs.brandeis.edu Wed Dec 14 14:45:33 1994 From: pollack at cs.brandeis.edu (Jordan Pollack) Date: Wed, 14 Dec 1994 14:45:33 -0500 Subject: Apology for Flame, and Survey In-Reply-To: (bisant@gl.umbc.edu) Message-ID: <199412141945.OAA10961@garnet.cs.brandeis.edu>

I flamed about some of the policies of WCNN, and David Bisant wrote:

> new problems
> in conferencing appear. I think their fee structure is a reasonable
> approach to solving these problems.

I obviously over-reacted, since I agree that there is also a more positive reading of these policies which I didn't consider at the time - that they are innovations that the organization is using to cope with several problems, such as: a) filtering out computer-generated articles, b) authors not showing up to deliver talks or posters, c) expense of supporting travel costs for both plenary speakers and conference organizers, and d) declining membership/subscription revenues.
It was the simultaneous appearance of all these novel structures in one place which raised a red flag, and led me to breathe fire. I am not usually so rude, and now I owe MANY colleagues apologies for my outburst. First, I'm very sorry for linking the two completely separate societies - IEEE and INNS, and for denigrating the papers of individuals who have published their work in these refereed meetings over the years. Also, I want to apologize to the dozens of major scientists involved in organizing and governing INNS, for sullying their good names, and finally to Mike Cohen and my other new neighbors at Boston Univ, who are merely using their good offices to promote the meeting. I hope the society will eventually forgive my unsociable outburst. However, I do believe the $35 reviewing fee is a serious and big issue, as I have never seen a precedent in any branch of science. And while there are real costs to reviewing which are bundled into the costs of meetings, this "unbundling" is a very slippery slope to walk. If such fees catch on in conferences, they might catch on universally, from journals to funding agencies to faculty search committees, and this would dramatically change the "social contract," I fear, with many more negative than positive results! But perhaps even in this sub-issue I am over-reacting. Since the list is too large for a discussion, I would like to perform an informal survey, which I will then condense and post the results: 1) What do you think about review fees for scientific conferences? 2) What about review fees for scientific journals? 3) What about review fees for grant applications to NSF/NIH and other agencies? 4) What about review fees for faculty and postdoc job applications? 5) What do you predict as the ramifications of changing the current scientific free review system to a fee-based review system? Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/fax 2741 Waltham, MA 02254 email: pollack at cs.brandeis.edu  From white at pwa.acusd.edu Wed Dec 14 14:44:50 1994 From: white at pwa.acusd.edu (Ray White) Date: Wed, 14 Dec 1994 11:44:50 -0800 (PST) Subject: FLAME: WCNN'95 - science in the mud pits In-Reply-To: <199412081651.LAA01963@garnet.cs.brandeis.edu> Message-ID: I'm not thrilled with the "reading fee" either, but $35 plus early member registration @ $170 is pretty low for a conference these days; certainly far less than the maximum registration fee of $480 that Jordan Pollack mentioned. As far as "sorting papers by fame" goes, for those of us who are not famous, it is a good thing that there are conferences where papers are merely "sorted by fame", when there are also elitist conferences where fame is more likely a requirement for acceptance. (Did anyone read NIPS into that sentence?) A conference is remembered for the quality of presented papers, not for the quality of rejected papers. Ray H. White | Depts. of Physics and Computer Science | University of San Diego 619/260-4244 or 619/260-4627 | 5998 Alcala Park, San Diego, CA 92110  From jose at scr.siemens.com Wed Dec 14 15:20:45 1994 From: jose at scr.siemens.com (Stephen Hanson) Date: Wed, 14 Dec 1994 15:20:45 -0500 (EST) Subject: About sequential learning (or interference) In-Reply-To: <199412140235.VAA17037@lennon.cc.gatech.edu> References: <199412140235.VAA17037@lennon.cc.gatech.edu> Message-ID: of course avoiding "interference" is another way of preventing generalization. As usual there is a tradeoff here. 
Steve

Stephen J. Hanson, Ph.D. Head, Learning Systems Department SIEMENS Research 755 College Rd. East Princeton, NJ 08540

 From lautrup at connect.nbi.dk Thu Dec 15 11:56:25 1994 From: lautrup at connect.nbi.dk (Benny Lautrup) Date: Thu, 15 Dec 94 11:56:25 MET Subject: New preprint: "Massive Weight Sharing: A Cure for ..." Message-ID:

Subject: Paper available: "Massive Weight Sharing: A Cure for ..." Date: August 29, 1994 FTP-host: connect.nbi.dk FTP-file: neuroprose/lautrup.massive.ps.Z

---------------------------------------------- The following paper is now available:

Massive Weight Sharing: A Cure for Extremely Ill-posed Learning [12 pages]

B. Lautrup, L.K. Hansen, I. Law, N. Moerch, C. Svarer and S.C. Strother CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark

Presented at the workshop on Supercomputing in Brain Research: From Tomography to Neural Networks, HLRZ, Juelich, Germany, November 21-23, 1994

Abstract: In most learning problems, adaptation to given examples is well-posed because the number of examples far exceeds the number of internal parameters in the learning machine. Extremely ill-posed learning problems are, however, common in image and spectral analysis. They are characterized by a vast number of highly correlated inputs, e.g. pixel or pin values, and a modest number of patterns, e.g. images or spectra. In this paper we show, for the case of a set of PET images differing only in the values of one stimulus parameter, that it is possible to train a neural network to learn the underlying rule without using an excessive number of network weights or large amounts of computer time. The method is based upon the observation that the standard learning rules conserve the subspace spanned by the input images.

Please do not reply directly to this message.

----------------------------------------------- FTP-instructions:
unix> ftp connect.nbi.dk (or 130.225.212.30)
ftp> Name: anonymous
ftp> Password: your e-mail address
ftp> cd neuroprose
ftp> binary
ftp> get lautrup.massive.ps.Z
ftp> quit
unix> uncompress lautrup.massive.ps.Z
-----------------------------------------------

Benny Lautrup, Computational Neural Network Center (CONNECT) Niels Bohr Institute Blegdamsvej 17 2100 Copenhagen Denmark Telephone: +45-3532-5200 Direct: +45-3532-5358 Fax: +45-3142-1016 e-mail: lautrup at connect.nbi.dk

 From pah at unixg.ubc.ca Thu Dec 15 14:03:03 1994 From: pah at unixg.ubc.ca (Phil A. Hetherington) Date: Thu, 15 Dec 1994 11:03:03 -0800 (PST) Subject: sequential learning In-Reply-To: <9412140416.AA17263@crab.psy.cmu.edu.psy.cmu.edu> Message-ID:

A couple of notes on interference. First, most of Jay's idea is still simply theory, a good theory, but there is little direct evidence that the hippocampus does either pattern separation or pattern completion. Most of the pattern completion idea comes from Marr's conjectures on the architecture of CA3 in hippocampus. Second (to Hanson), interference is not necessarily the flip side of generalization. This is intuitive, but you can reduce one without affecting the other. See Lewandowsky's stuff on this. Next, whether the interference observed in connectionist nets is a problem or not depends on your needs. Is the net to model human learning or to solve a particular engineering problem? If the former, then see McRae and Hetherington (1993)--it is not clear that realistically designed nets suffer from catastrophic interference.
Finally, it appears that the bigger problem is to reduce interference without eliminating the distinction between old and new items. This was Ratcliff's main concern, and is addressed by Sharkey. See also:

French, R.M. (1991). Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, 173-178.
French, R.M. (1992). Connection Science, 4, 3-4, 365-377.
Hetherington, P.A. (1990). Neural Network Review, 4(1), 27-29.
Hetherington, P.A. (1991). The Sequential Learning Problem in Connectionist Networks. Unpublished Masters Thesis, McGill University, Montreal, Quebec.
Hetherington, P.A., & Seidenberg, M.S. (1989). Proceedings of the Eleventh Annual Conference of the Cognitive Science Society, 26-33.
Hinton, G.E., & Plaut, D.C. (1987). Proceedings of the Ninth Annual Conference of the Cognitive Science Society, 177-186.
Kortge, C.A. (1990). Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, 764-771.
Kruschke, J.K. (1992). Psychological Review, 99, 22-44.
Kruschke, J.K. (1993). Connection Science, 5, 3-36.
Lewandowsky, S. (1991). In W.E. Hockley & S. Lewandowsky (Eds.), Relating theory and data: Essays on human memory in honor of Bennet B. Murdock.
McRae, K., & Hetherington, P.A. (1993). Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society, 723-728.
Murre, J.M.J. (1992). Learning and categorization in modular neural networks. Lawrence Erlbaum.
Sharkey, N.E., & Sharkey, A.J.C. Understanding Catastrophic Interference in Neural Nets. Technical Report, Department of Computer Science, University of Sheffield, U.K.
Sloman, S.A., & Rumelhart, D.E. (1992). In A. Healy, S.M. Kosslyn, & R.M. Shiffrin (Eds.), From learning theory to cognitive processes: Essays in honor of William K. Estes.

 From french at cogsci.indiana.edu Thu Dec 15 14:28:01 1994 From: french at cogsci.indiana.edu (Bob French) Date: Thu, 15 Dec 94 14:28:01 EST Subject: Catastrophic forgetting and sequential learning Message-ID:

Below I have indicated a number of references on current work on the problem of sequential learning in connectionist networks. All of these papers address the problem of catastrophic interference, which may result when a previously trained connectionist network attempts to learn new patterns. The following commented list is by no means complete. In particular, no mention is made of convolution-correlation models with their associated connectionist networks. Nonetheless, I hope it might prove to be a useful introduction to people interested in knowing a bit more about the subject.

Bob French french at cogsci.indiana.edu

---------------------------------------------------------------------------- Recent Work in Catastrophic Interference in Connectionist Networks

The two papers that really kicked off research in this area were:

McCloskey, M. & Cohen, N. (1989) "Catastrophic interference in connectionist networks: the sequential learning problem" The Psychology of Learning and Motivation, 24, 109-165.

Ratcliff, R. (1990) "Connectionist models of recognition memory: constraints imposed by learning and forgetting functions" Psychological Review, 97, 285-308.

Hetherington and Seidenberg very early suggested an Ebbinghaus-like "savings" measure of catastrophic interference and, based on this, they concluded that catastrophic interference wasn't really as much of a problem as had been thought.
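The savings measure itself is simple to state in code (a sketch only, assuming a hypothetical epochs_to_criterion helper that trains a net on a set of items until it reaches criterion and returns the number of epochs used):

    def savings(net, items_A, items_B, criterion=0.9):
        original = epochs_to_criterion(net, items_A, criterion)   # learn A
        epochs_to_criterion(net, items_B, criterion)              # then learn B
        relearn = epochs_to_criterion(net, items_A, criterion)    # relearn A
        # 1.0 means A was fully retained; 0.0 means relearning A cost
        # as much as learning it in the first place.
        return (original - relearn) / original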
While the problem has, in fact, been subsequently confirmed to be quite serious, the "savings" measure they proposed is still widely used to measure the extent of forgetting.

Hetherington, P., & Seidenberg, M. (1989) "Is there 'catastrophic interference' in connectionist networks?" Proceedings of the 11th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 26-33.

Kortge was one of the first to propose a solution to this problem, using what he called "novelty vectors".

Kortge, C. (1990) "Episodic Memory in Connectionist Networks" Proceedings of the 12th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 764-771.

Sloman and Rumelhart also developed a technique called "episodic gating" designed to reduce the severity of the problem.

Sloman, S. & Rumelhart, D. (1991) "Reducing interference in distributed memories through episodic gating" In Healy, Kosslyn, & Shiffrin (eds.) Essays in Honor of W. K. Estes.

In 1991 French suggested that catastrophic interference might be the inevitable price you pay for the advantages of fully distributed representations (in particular, generalization). He suggested a way of dynamically producing "semi-distributed" hidden-layer representations to reduce the effect of catastrophic interference.

French, R. (1991) "Using semi-distributed representations to overcome catastrophic forgetting in connectionist networks" in Proceedings of the 13th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 173-178.

A more detailed article presenting the same technique, called activation sharpening, appeared in Connection Science.

French, R. (1992) "Semi-distributed Representations and Catastrophic Forgetting in Connectionist Networks", Connection Science, Vol. 4: 365-377.

In a more recent paper (1994), French presented a technique called context biasing, which again dynamically "massages" hidden layer representations based on the "context" of other recently learned exemplars. The goal of this technique is to produce hidden-layer representations that are simultaneously well distributed and orthogonal.

French, R. (1994) "Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference" in Proceedings of the 16th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 335-340.

Finally, still in this vein, French proposed (1993) at the NIPS-93 workshop on catastrophic interference a dynamic system of two interacting networks working in tandem, one storing prototypes, the other doing short-term learning of new exemplars. For a "real brain" justification for this type of architecture see McClelland, McNaughton, and O'Reilly (1994), below. (A full paper on this tandem network architecture will be available at the beginning of next year.) This technique is discussed very briefly in:

French, R. (1994) "Catastrophic interference in connectionist networks: Can it be predicted, can it be prevented?" Neural Information Processing Systems - 6, Cowan, Tesauro, Alspector (eds.) San Francisco, CA: Morgan Kaufmann. 1176-1177.

Steve Lewandowsky has also been very active in this area. He developed a simple technique in 1991 that focused on producing orthogonalization at the input layer rather than the hidden layer. This "symmetric vectors" technique is discussed in:

Lewandowsky, S. & Shu-Chen Li (1993) "Catastrophic Interference in Neural Networks: Causes, solutions and data" in New Perspectives on interference and inhibition in cognition.
Dempster & Brainerd (eds.) New York, NY: Academic Press.

Lewandowsky, S. (1991) "Gradual unlearning and catastrophic interference: a comparison of distributed architectures." In Hockley & Lewandowsky (eds.) Relating theory and data: Essays on human memory in honor of Bennet B. Murdock. (pp. 445-476). Hillsdale, NJ: Lawrence Erlbaum.

and in an earlier University of Oklahoma psychology department technical report:

Lewandowsky, S. (1993) "On the relation between catastrophic interference and generalization in connectionist networks"

In 1993 McRae and Hetherington published a study using pre-training to eliminate catastrophic interference.

McRae, K. & Hetherington, P. (1993) "Catastrophic interference is eliminated in pretrained networks" in Proceedings of the 15th Annual Cognitive Science Society. Hillsdale, NJ: Erlbaum. 723-728.

John Kruschke discussed the problem of catastrophic forgetting at length in the context of his connectionist model, ALCOVE, and showed the extent to which and under what circumstances this model is not subject to catastrophic forgetting.

Kruschke, J. (1993) "Human category learning: implications for backpropagation models", Connection Science, Vol. 5, No. 1.

Jacob Murre has also examined how his model, CALM, performs on the sequential learning problem. See, in particular:

Murre, J. Learning and Categorization in Modular Neural Networks. Hillsdale, NJ: Lawrence Erlbaum. 1992. (see esp. ch. 7.4)

It is to be noted that both ALCOVE and CALM rely, at least in part, on reducing the distributedness of their internal representations in order to achieve improved performance on the problem of catastrophic interference.

A 1994 article (in press) by Anthony Robins presents a novel technique, called "pseudorehearsal", whereby "pseudoexemplars" that reflect prior learning are added to the new data set to be learned in order to reduce catastrophic forgetting.

Robins, A. "Catastrophic forgetting, rehearsal, and pseudorehearsal", University of Otago (New Zealand) computer science technical report. (copies: coscavr at otago.ac.nz)

Tetewsky, Shultz & Buckingham (1994) demonstrate the improvements that result from using Fahlman's cascade-correlation learning algorithm.

Tetewsky, S., Shultz, T. and Buckingham, D. "Assessing interference and savings in connectionist models of human recognition memory" Department of Psychology TR, McGill University, Montreal. (presented at 1994 Meeting of the Psychonomic Society).

Sharkey & Sharkey (1993) discussed the relation between the problem of interference and discrimination in connectionist networks. They conclude that sequentially trained networks using backprop will unavoidably suffer from one or the other problem. I am not aware whether there is a final version of this paper in print yet, but Noel Sharkey is currently at University of Sheffield, Dept. of Computer Science, Sheffield. n.sharkey at dcs.shef.ac.uk

Sharkey, N. & Sharkey, A., "An interference-discrimination tradeoff in connectionist models of human memory"

McClelland, McNaughton, & O'Reilly issued a CMU technical report earlier this year (1994) in which they discuss the phenomenon of catastrophic interference in the context of the "real world", i.e., the brain. They suggest that the complementary learning systems in the hippocampus and the neocortex might be the brain's way of overcoming the problem. They argue that this dual system provides a means not only of rapidly acquiring new information, but also of storing well-learned information as prototypes.
McClelland, J., McNaughton, B., & O'Reilly, R. "Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory" CMU Tech report: PNP.CNS.94.1, March 1994. -------------------------------------------------------------------------  From marney at ai.mit.edu Thu Dec 15 18:19:23 1994 From: marney at ai.mit.edu (Marney Smyth) Date: Thu, 15 Dec 94 18:19:23 EST Subject: NNSP95 : Call for Papers Message-ID: <9412152319.AA00894@motor-cortex> ********************************************************************** * 1995 IEEE WORKSHOP ON * * * * NEURAL NETWORKS FOR SIGNAL PROCESSING * * * * August 31 -- September 2, 1995, Cambridge, Massachusetts, USA * * Sponsored by the IEEE Signal Processing Society * * (In cooperation with the IEEE Neural Networks Council) * * * ********************************************************************** FIRST ANNOUNCEMENT AND CALL FOR PAPERS Thanks to the sponsorship of IEEE Signal Processing Society, the co-sponsorship of IEEE Neural Network Council, the fifth of a series of IEEE Workshops on Neural Networks for Signal Processing will be held at the Royal Sonesta Hotel, Cambridge, Massachusetts, on Thursday 8/31 -- Saturday 9/2, 1995. Papers are solicited for, but not limited to, the following topics: ++ APPLICATIONS: Image, speech, communications, sensors, medical, adaptive filtering, OCR, and other general signal processing and pattern recognition topics. ++ THEORIES: Generalization and regularization, system identification, parameter estimation, new network architectures, new learning algorithms, and wavelets in NNs. ++ IMPLEMENTATIONS: Software, digital, analog, hybrid technologies, parallel processing. Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers and e-mail address if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Marney Smyth (Tel.) (617)-253-0547, (Fax) (617)-253-2964 (e-mail) marney at ai.mit.edu. We plan to use the World Wide Web (WWW) for posting further announcements on NNSP95 such as: submitted papers status, final program, hotel information etc. You can use MOSAIC and access URL site: http://www.cdsp.neu.edu. If you do not have access to WWW use anonymous ftp to site ftp.cdsp.neu.edu and look under the directory /pub/NNSP95. Please send paper submissions to: Prof. Elias S. Manolakos IEEE NNSP'95 409 Dana Research Building Electrical and Computer Engineering Department Northeastern University, Boston, MA 02115, USA Phone: (617) 373-3021, Fax: (617) 373-4189 ******************* * IMPORTANT DATES * ******************* Extended summary received by: February 17 Notification of acceptance: April 21 Photo-ready accepted papers received by: May 22 Advanced registration received before: June 2 GENERAL CHAIRS Federico Girosi Center for Biological and Computational Learning, MIT girosi at ai.mit.edu John Makhoul Bolt, Beranek and Newman makhoul at bbn.com PROGRAM CHAIR Elias S. 
Manolakos Communications and Digital Signal Processing (CDSP) Northeastern University elias at cdsp.neu.edu

FINANCE CHAIR: Judy Franklin, GTE Laboratories, jfranklin at gte.com
LOCAL ARRANGEMENTS: Mary Pat Fitzgerald, MIT, marypat at ai.mit.edu
PUBLICITY CHAIR: Marney Smyth, MIT, marney at ai.mit.edu
PROCEEDINGS CHAIR: Elizabeth J. Wilson, Raytheon Co., email: bwilson at sud2.ed.ray.com

TECHNICAL PROGRAM COMMITTEE Joshua Alspector John Makhoul Alice Chiang Elias Manolakos A. Constantinides P. Mathiopoulos Lee Giles Mahesan Niranjan Federico Girosi Tomaso Poggio Lars Kai Hansen Jose Principe Yu-Hen Hu Wojtek Przytula Jenq-Neng Hwang John Sorensen Bing-Huang Juang Andreas Stafylopatis Shigeru Katagiri John Vlontzos George Kechriotis Raymond Watrous Stephanos Kollias Christian Wellekens Sun-Yuan Kung Ron Williams Gary M. Kuhn Barbara Yoon Richard Lippmann Xinhua Zhuang

 From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Fri Dec 16 03:22:31 1994 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Fri, 16 Dec 94 03:22:31 EST Subject: elitism at NIPS Message-ID: <1394.787566151@DST.BOLTZ.CS.CMU.EDU>

The folks involved in running NIPS are aware that in the past, some people have felt the conference was biased against outsiders. As the program chair for NIPS*94, I want to mention some steps we took to reduce that perception:

* This year we required authors to submit full papers, not just abstracts. That made it harder for "famous" people to get by on their reputation alone, and easier for "non-famous" people to get good papers in.

* This year we required reviewers to provide real feedback to all authors. In order to make this less painful, we completely redesigned the review form (it's publicly accessible via the NIPS home page) and, for the first time, accepted reviews by email. Everyone liked getting comments from the reviewers, and authors whose papers were not accepted understood why.

* We continue to recruit new people for positions in both the organizing and program committees. It's not the same dozen people year after year. We also have a large reviewer pool: 176 people served as reviewers this year.

* We tend to bring in "outsiders" as our invited speakers, rather than the usual good old boys. This year's invited speakers included Francis Crick of the Salk Institute, Bill Newsome (a neuroscientist from Stanford), and Malcolm Slaney (a signal processing expert formerly with Apple and now at Interval Research). None had been to NIPS before.

The fundamental limits on NIPS paper acceptances are that (1) we're committed to a single-track conference, and (2) we only have room for 138 papers in the proceedings. Therefore the reviewing process has to be selective. The acceptance rate this year was 33%; it was 25% in past years. The change is due to fewer submissions, not more acceptances. Requiring full papers was probably the cause of the drop in submissions. The quality of the accepted papers has remained high.

There are other ways to participate in NIPS besides writing a paper. We have a very successful workshop program, where lots of intense interaction takes place between people with similar interests. Many people give talks at the workshops and not at the conference. (I did that this year.) NIPS issues a call for workshop proposals at about the same time as the call for papers. Consider organizing a workshop at NIPS*95, or at least participating in one. You do not even have to register for the conference; workshop registration is entirely separate.
The URL for the NIPS home page appears below. Besides the review form, you'll also find formatting instructions for paper submissions. Authors of accepted NIPS*94 papers will find instructions for how to submit their final camera-ready copy, which is due January 23. http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html -- Dave Touretzky NIPS*94 Program Chair  From P.McKevitt at dcs.shef.ac.uk Fri Dec 16 05:55:02 1994 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Fri, 16 Dec 94 10:55:02 GMT Subject: AISB-95 workshop call for papers Message-ID: <> <> <> <> <> <> Advance Announcement FIRST CALL FOR PAPERS AND PARTICIPATION AISB-95 Workshop on REACHING FOR MIND: FOUNDATIONS OF COGNITIVE SCIENCE April 3rd/4th 1995 at the The Tenth Biennial Conference on AI and Cognitive Science (AISB-95) (Theme: Hybrid Problems, Hybrid Solutions) Halifax Hall University of Sheffield Sheffield, England (Monday 3rd -- Friday 7th April 1995) Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) Chair: Sean O Nuallain Dublin City University, Dublin, Ireland & National Research Council, Ottawa, Canada Co-Chair: Paul Mc Kevitt Department of Computer Science University of Sheffield, England WORKSHOP COMMITTEE: John Barnden (New Mexico State University, NM, USA) Istvan Berkeley (University of Alberta, Canada) Mike Brady (Oxford, England) Harry Bunt (ITK, Tilburg, The Netherlands) Peter Carruthers (University of Sheffield, England) Daniel Dennett (Tufts University, USA) Eric Dietrich (SUNY Binghamton, NY, USA) Jerry Feldman (ICSI, UC Berkeley, USA) John Frisby (University of Sheffield, England) Stevan Harnad (University of Southampton, England) James Martin (University of Colorado at Boulder, CO, USA) John Macnamara (McGill University, Canada) Mike McTear (Universities of Ulster and Koblenz, Germany) Ryuichi Oka (RWC P, Tsukuba, Japan) Jordan Pollack (Ohio State University, OH, USA) Zenon Pylyshyn (Rutgers University, USA) Ronan Reilly (University College, Dublin, Ireland) Roger Schank (ILS, Illinois) NNoel Sharkey (University of Sheffield, England) Walther v.Hahn (University of Hamburg, Germany) Yorick Wilks (University of Sheffield, England) WORKSHOP DESCRIPTION The assumption underlying this workshop is that Cognitive Science (CS) is in crisis. The crisis manifests itself, as exemplified by the recent Buffalo summer institute, in a complete lack of consensus among even the biggest names in the field on whether CS has or indeed should have a clearly identifiable focus of study; the issue of identifying this focus is a separate and more difficult one. Though academic programs in CS have in general settled into a pattern compatible with classical computationalist CS (Pylyshyn 1984, Von Eckardt 1993), including the relegation from focal consideration of consciousness, affect and social factors, two fronts have been opened on this classical position. The first front is well-publicised and highly visible. Both Searle (1992) and Edelman (1992) refuse to grant any special status to information-processing in explanation of mental process. In contrast, they argue, we should focus on Neuroscience on the one hand and Consciousness on the other. The other front is ultimately the more compelling one. 
It consists of those researchers from inside CS who are currently working on consciousness, affect and social factors and do not see any incompatibility between this research and their vision of CS, which is that of a Science of Mind (see Dennett 1993, O Nuallain (in press) and Mc Kevitt and Partridge 1991, Mc Kevitt and Guo 1994). References Dennett, D. (1993) Review of John Searle's "The Rediscovery of the Mind". The Journal of Philosophy 1993, pp 193-205 Edelman, G.(1992) Bright Air, Brilliant Fire. Basic Books Mc Kevitt, P. and D. Partridge (1991) Problem description and hypothesis testing in Artificial Intelligence In ``Artificial Intelligence and Cognitive Science '90'', Springer-Verlag British Computer Society Workshop Series, McTear, Michael and Norman Creaney (Eds.), 26-47, Berlin, Heidelberg: Springer-Verlag. Also, in Proceedings of the Third Irish Conference on Artificial Intelligence and Cognitive Science (AI/CS-90), University of Ulster at Jordanstown, Northern Ireland, EU, September and as Technical Report 224, Department of Computer Science, University of Exeter, GB- EX4 4PT, Exeter, England, EU, September, 1991. Mc Kevitt, P. and Guo, Cheng-ming (1995) From Chinese rooms to Irish rooms: new words on visions for language. Artificial Intelligence Review Vol. 8. Dordrecht, The Netherlands: Kluwer-Academic Publishers. (unabridged version) First published: International Workshop on Directions of Lexical Research, August, 1994, Beijing, China. O Nuallain, S (in press) The Search for Mind: a new foundation for CS. Norwood: Ablex Pylyshyn, Z.(1984) Computation and Cognition. MIT Press Searle, J (1992) The rediscovery of the mind. MIT Press. Von Eckardt, B. (1993) What is Cognitive Science? MIT Press WORKSHOP TOPICS: The tension which riddles current CS can therefore be stated thus: CS, which gained its initial capital by adopting the computational metaphor, is being constrained by this metaphor as it attempts to become an encompassing Science of Mind. Papers are invited for this workshop which: * Address the central tension * Propose an overall framework for CS (as attempted, inter alia, by O Nuallain (in press)) * Explicate the relations between the disciplines which comprise CS. * Relate educational experiences in the field * Describe research outside the framework of classical computationalist CS in the context of an alternative framework * Promote a single logico-mathematical formalism as a theory of Mind (as attempted by Harmony theory) * Disagree with the premise of the workshop Other relevant topics include: * Classical vs. neuroscience representations * Consciousness vs. Non-consciousness * Dictated vs. emergent behaviour * A life/Computational intelligence/Genetic algorithms/Connectionism * Holism and the move towards Zen integration The workshop will focus on three themes: * What is the domain of Cognitive Science ? * Classic computationalism and its limitations * Neuroscience and Consciousness WORKSHOP FORMAT: Our intention is to have as much discussion as possible during the workshop and to stress panel sessions and discussion rather than having formal paper presentations. The workshop will consist of half-hour presentations, with 15 minutes for discussion at the end of each presentation and other discussion sessions. A plenary session at the end will attempt to resolve the themes emerging from the different sessions. ATTENDANCE: We hope to have an attendance between 25-50 people at the workshop. 
Given the urgency of the topic, we expect it to be of interest not only to scientists in the AI/Cognitive Science (CS) area, but also to those in other of the sciences of mind who are curious about CS. We envisage researchers from Edinburgh, Leeds, York, Sheffield and Sussex attending from within England and many overseas visitors as the Conference Programme is looking very international. SUBMISSION REQUIREMENTS: Papers of not more than 8 pages should be submitted by electronic mail (preferably uuencoded compressed postscript) to Sean O Nuallain at the E-mail address(es) given below. If you cannot submit your paper by E-mail please submit three copies by snail mail. *******Submission Deadline: February 13th 1995 *******Notification Date: February 25th 1995 *******Camera ready Copy: March 10th 1995 PUBLICATION: Workshop notes/preprints will be published. If there is sufficient interest we will publish a book on the workshop possibly with the American Artificial Intelligence Association (AAAI) Press. WORKSHOP CHAIR: Sean O Nuallain ((Before Dec 23:)) Knowledge Systems Lab, Institute for Information Technology, National Research Council, Montreal Road, Ottawa Canada K1A OR6 Phone: 1-613-990-0113 E-mail: sean at ai.iit.nrc.ca FaX: 1-613-95271521 ((After Dec 23:)) Dublin City University, IRL- Dublin 9, Dublin Ireland, EU WWW: http://www.compapp.dcu.ie Ftp: ftp.vax1.dcu.ie E-mail: onuallains at dcu.ie FaX: 353-1-7045442 Phone: 353-1-7045237 AISB-95 WORKSHOPS AND TUTORIALS CHAIR: Dr. Robert Gaizauskas Department of Computer Science University of Sheffield 211 Portobello Street Regent Court Sheffield S1 4DP U.K. E-mail: robertg at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114 278-0972 Phone: +44 (0) 114 278-5572 AISB-95 CONFERENCE/LOCAL ORGANISATION CHAIR: Paul Mc Kevitt Department of Computer Science Regent Court 211 Portobello Street University of Sheffield GB- S1 4DP, Sheffield England, UK, EU. E-mail: p.mckevitt at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 282-5590 (Secretary) AISB-95 REGISTRATION: Alison White AISB Executive Office Cognitive and Computing Sciences (COGS) University of Sussex Falmer, Brighton England, UK, BN1 9QH Email: alisonw at cogs.susx.ac.uk WWW: http://www.cogs.susx.ac.uk/users/christ/aisb Ftp: ftp.cogs.susx.ac.uk/pub/aisb Tel: +44 (0) 1273 678448 Fax: +44 (0) 1273 671320 AISB-95 ENQUIRIES: Debbie Daly, Administrative Assistant, AISB-95, Department of Computer Science, Regent Court, 211 Portobello Street, University of Sheffield, GB- S1 4DP, Sheffield, UK, EU. Email: debbie at dcs.shef.ac.uk (personal communication) Fax: +44 (0) 114-278-0972 Phone: +44 (0) 114-278-5565 (personal) -5590 (messages) Email: aisb95 at dcs.shef.ac.uk (for auto responses) WWW: http://www.dcs.shef.ac.uk/aisb95 [Sheffield Computer Science] Ftp: ftp.dcs.shef.ac.uk (cd aisb95) WWW: http://www.shef.ac.uk/ [Sheffield Computing Services] Ftp: ftp.shef.ac.uk (cd aisb95) WWW: http://ijcai.org/) (Email welty at ijcai.org) [IJCAI-95, MONTREAL] WWW: http://www.cogs.susx.ac.uk/users/christ/aisb [AISB SOCIETY SUSSEX] Ftp: ftp.cogs.susx.ac.uk/pub/aisb VENUE: The venue for registration and all conference events is: Halifax Hall of Residence, Endcliffe Vale Road, GB- S10 5DF, Sheffield, UK, EU. 
FaX: +44 (0) 114-268-4227 Tel: +44 (0) 114-268-2758 (24 hour porter) Tel: +44 (0) 114-266-4196 (manager)

SHEFFIELD: Sheffield is one of the friendliest cities in Britain and is situated well having the best and closest surrounding countryside of any major city in the UK. The Peak District National Park is only minutes away. It is a good city for walkers, runners, and climbers. It has two theatres, the Crucible and Lyceum. The Lyceum, a beautiful Victorian theatre, has recently been renovated. Also, the city has three 10 screen cinemas. There is a library theatre which shows more artistic films. The city has a large number of museums, many of which demonstrate Sheffield's industrial past, and there are a number of Galleries in the City, including the Mapping Gallery and Ruskin. A number of important ancient houses are close to Sheffield such as Chatsworth House. The Peak District National Park is a beautiful site for visiting and rambling upon. There are large shopping areas in the City and by 1995 Sheffield will be served by a 'supertram' system: the line to the Meadowhall shopping and leisure complex is already open. The University of Sheffield's Halls of Residence are situated on the western side of the city in a leafy residential area described by John Betjeman as ``the prettiest suburb in England''. Halifax Hall is centred on a local Steel Baron's house, dating back to 1830 and set in extensive grounds. It was acquired by the University in 1830 and converted into a Hall of Residence for women with the addition of a new wing.

ARTIFICIAL INTELLIGENCE AT SHEFFIELD: Sheffield Computer Science Department has a strong programme in Cognitive Systems and is part of the University's Institute for Language, Speech and Hearing (ILASH). ILASH has its own machines and support staff, and academic staff attached to it from nine departments. Sheffield Psychology Department has the Artificial Intelligence Vision Research Unit (AIVRU) which was founded in 1984 to coordinate a large industry/university Alvey research consortium working on the development of computer vision systems for autonomous vehicles and robot workstations. Sheffield Philosophy Department has the Hang Seng Centre for Cognitive Studies, founded in 1992, which runs a workshop/conference series on a two-year cycle on topics of interdisciplinary interest. (1992-4: 'Theory of mind'; 1994-6: 'Language and thought'.)
In this paper we show, for the case of a set of PET images differing only in the values of one stimulus parameter, that it is possible to train a neural network to learn the underlying rule without using an excessive number of network weights or large amounts of computer time. The method is based upon the observation that the standard learning rules conserve the subspace spanned by the input images.

Please do not reply directly to this message.
-----------------------------------------------
FTP-instructions:
unix> ftp connect.nbi.dk (or 130.225.212.30)
ftp> Name: anonymous
ftp> Password: your e-mail address
ftp> cd neuroprose
ftp> binary
ftp> get lautrup.massive.ps.Z
ftp> quit
unix> uncompress lautrup.massive.ps.Z
-----------------------------------------------
Benny Lautrup, Computational Neural Network Center (CONNECT)
Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen
Denmark
Telephone: +45-3532-5200 Direct: +45-3532-5358
Fax: +45-3142-1016
e-mail: lautrup at connect.nbi.dk

From jaap.murre at mrc-apu.cam.ac.uk Fri Dec 16 08:46:00 1994
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Fri, 16 Dec 94 13:46:00 GMT
Subject: Papers and PC demo available
Message-ID: <9412161346.AA00917@rigel.mrc-apu.cam.ac.uk>

The following three files have recently (15-12-1994) been added to our ftp site (ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre):

File 1: nnga1.ps

Happel, B.L.M., & J.M.J. Murre (1994). Design and evolution of modular neural network architectures. Neural Networks, 7, 985-1004. (About 0.5 Mb; ps.Z version is recommended.)

Abstract: To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. We suggest a number of design principles in the form of explicit ways in which neural modules can cooperate in recognition tasks. These results may supplement recent accounts of the relation between structure and function in the brain. The networks used consist of several modules, standard subnetworks that serve as higher-order units with a distinct structure and function. The simulations rely on a particular network module called CALM (Murre, Phaf, and Wolters, 1989, 1992). This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole. Based on arguments derived from neuroscience, psychology, computational learning theory, and hardware implementation, a framework for the design of such modular networks is laid out. A number of small-scale simulation studies show how intermodule connectivity patterns implement 'neural assemblies' (Hebb, 1949) that induce a particular category structure in the network. Learning and categorization improve as the induced categories are more compatible with the structure of the task domain. In addition to structural compatibility, two other principles of design are proposed that underlie information processing in interactive activation networks: replication and recurrence.
Because a general theory for relating network architectures to specific neural functions does not exist, we extend the biological metaphor of neural networks by applying genetic algorithms (a biocomputing method for search and optimization based on natural selection and evolution) to search for optimal modular network architectures for learning a visual categorization task. The best-performing network architectures seemed to have reproduced some of the overall characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways. A potentially important result is that a genetically defined initial architecture can not only enhance learning and recognition performance, but can also induce a system to better generalize its learned behaviour to instances never encountered before. This may explain why, for many vital learning tasks in organisms, only a minimal exposure to relevant stimuli is necessary.

File 2: chaos1.ps

Happel, B.L.M., & J.M.J. Murre (submitted). Evolving complex dynamics in modular interactive neural networks. Submitted to Neural Networks. (This is a large file: 1.5 Mb! Retrieve the ps.Z version if possible.)

Abstract: Computational simulation studies, carried out within a general framework of modular neural network design, demonstrate that networks consisting of many interacting modules provide a variety of different neural processing principles. The dynamics underlying these behaviors range from simple linear separation of input vectors in individual modules to oscillations, evoking chaotic regimes in the activity evolution of a network. As opposed to static representations in conventional neural network models, information in oscillatory networks is represented as space-time patterns of activity. Chaos in a neural network can serve as: (i) a novelty filter, (ii) explorative deterministic noise, (iii) a fundamental form of neural activity that provides continuous, sequential access to memory patterns, and (iv) a mechanism that underlies the formation of complex categories. An experiment in the artificial evolution of modular neural architectures demonstrates that, by manipulating modular topology and parameters governing local learning and activation processes, "genetic algorithms" can effectively explore complex interactive dynamics to construct efficient, modular neural architectures for pattern categorization tasks. A particularly striking result is that coupled, oscillatory circuits were installed by the genetic algorithm, inducing the formation of fractal category boundaries. Dynamic representations in these networks can significantly reduce sequential interference due to overlapping static representations in learning neural networks.

File 3:

The above two papers, among others, describe a digit recognition network. A demonstration version of this can be retrieved for PCs (486 DX recommended): digidemo.zip (unzip with PKUNZIP; contains several files). Some documentation is included with the program. One of its features is retraining on digits that go wrong. Only the wrong digits are retrained, without catastrophic interference with the other digits.

With questions and remarks, contact either Bart Happel at Leiden University (happel at rulfsw.leiden.univ) or Jaap Murre at the MRC Applied Psychology Unit (jaap.murre at mrc-apu.cam.ac.uk).
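To make the evolutionary search described in the two abstracts above concrete, here is a minimal sketch of a genetic algorithm over network architectures. It is not the authors' CALM-based system; the task, the genome encoding (a binary input-to-hidden connectivity mask), and all names and parameters are invented for the illustration:

import numpy as np

rng = np.random.default_rng(0)

# Toy task: classify points by the XOR of the signs of two inputs.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)

N_HIDDEN = 8
GENOME_LEN = N_HIDDEN * 2      # one bit per input-to-hidden connection

def fitness(genome):
    """Briefly train a net whose input connectivity is masked by the
    genome; return its classification accuracy."""
    mask = genome.reshape(N_HIDDEN, 2).astype(float)
    W1 = rng.normal(0.0, 1.0, (N_HIDDEN, 2)) * mask
    w2 = rng.normal(0.0, 1.0, N_HIDDEN)
    for _ in range(200):                        # plain gradient descent
        h = np.tanh(X @ W1.T)
        p = 1.0 / (1.0 + np.exp(-(h @ w2)))
        err = p - y
        w2 -= 0.1 * h.T @ err / len(X)
        dh = np.outer(err, w2) * (1.0 - h ** 2)
        W1 -= 0.1 * (dh.T @ X / len(X)) * mask  # masked-out weights stay zero
    return float(np.mean((p > 0.5) == (y > 0.5)))

pop = rng.integers(0, 2, size=(20, GENOME_LEN))
for gen in range(15):
    scores = np.array([fitness(g) for g in pop])
    pop = pop[np.argsort(scores)[::-1]][:10]    # keep the 10 fittest genomes
    children = []
    for _ in range(10):
        a, b = pop[rng.integers(0, 10, size=2)]
        cut = int(rng.integers(1, GENOME_LEN))
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(GENOME_LEN) < 0.05         # point mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([pop, children])
    print(f"generation {gen}: best accuracy {scores.max():.2f}")

The same loop structure carries over when the genome instead encodes inter-module wiring and learning parameters, as in the papers above; only the fitness function changes.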
From LINDBLAD at vana.physto.se Fri Dec 16 08:44:45 1994
From: LINDBLAD at vana.physto.se (LINDBLAD@vana.physto.se)
Date: Fri, 16 Dec 1994 15:44:45 +0200
Subject: IBM ZISC Paper
Message-ID: <01HKPT2KNC4Y8Y6ZY4@vana.physto.se>

FTP-HOST: archive.cis.ohio-state.edu
FTP-FILE: pub/neuroprose/eide.zisc.ps.Z

The file eide.zisc.ps.Z has been placed into the Neuroprose repository:

"An implementation of the Zero Instruction Set Computer (ZISC036) on a PC/ISA-bus card" (14 pages)

A. Eide, Th. Lindblad, C.S. Lindsey, M. Minerskjoeld, G. Sekhniaidze and G. Szekely

Abstract: The new IBM Zero Instruction Set Computer (ZISC036) chip has been implemented on a PC/ISA-bus card. The chip has 36 RBF-like neurons. It is highly parallel and cascadable, with on-chip learning. A card with two ZISC036 chips was built and tested with a noisy character recognition "benchmark".

From LINDBLAD at vana.physto.se Fri Dec 16 08:46:00 1994
From: LINDBLAD at vana.physto.se (LINDBLAD@vana.physto.se)
Date: Fri, 16 Dec 1994 15:46:00 +0200
Subject: WWW Page on Neural Nets in High Energy Physics
Message-ID: <01HKPT45B9N68Y6ZY4@vana.physto.se>

We would like to announce the installation of the WWW homepage "Neural Networks in High Energy Physics". The address is: "http://www1.cern.ch/NeuralNets/nnwInHep.html"

Both hardware and software neural network techniques have been used in high energy physics. Hardware neural networks are used for real-time data selection, while software neural networks are used in off-line analysis to enhance signal-to-background discrimination. We include a long reference list of work done in these areas, as well as news of recent developments. Of most general interest is the extensive page on commercial hardware neural networks. Descriptions are given of VLSI chips (e.g. the new IBM ZISC chip with RBF neurons), accelerator boards, and neurocomputers.

Clark S. Lindsey (lindsey at msia02.msi.se)
Bruce Denby (denby at pisa.infn.it)
Thomas Lindblad (lindblad at vana.physto.se)

From ricart at picard.jmb.bah.com Fri Dec 16 10:32:22 1994
From: ricart at picard.jmb.bah.com (Rick Ricart)
Date: Fri, 16 Dec 94 10:32:22 EST
Subject: Apology for Flame, and Survey
Message-ID: <9412161532.AA03333@picard.jmb.bah.com>

I agree with Jordan Pollack that reviewing fees should not be instituted. I'm not sure what prompted the unprecedented decision by WCNN, but if one reason for the fee is to increase the overall quality of the submissions, I offer the following alternative. Have each society (IEEE, INNS, etc.) establish and publish minimum acceptance criteria for papers. These criteria might include, for example, precise algorithm and parameter descriptions so that experiments can be reproduced by others. I know this is a novel idea for some current scientific and engineering societies and publishers, but I think we're ready for it.

The real reason for the reviewing fee might be, of course, financial. All reviewers are very busy individuals with many professional duties. Some, as is evident, feel they should be reimbursed for "volunteering" their precious time for the given society's benefit. My answer to the problem is, "don't volunteer." There are many other professionals in the field who would gladly volunteer their time to review papers given the opportunity, especially if they have clear minimum acceptance guidelines available from the given society.

These are my personal feelings and in no way do they represent an official Booz-Allen & Hamilton, Inc. viewpoint.
Rick Ricart
Associate
Booz-Allen & Hamilton
Advanced Computational Technologies Practice
McLean, VA 22020
Phone: (703) 902-5494
email: ricart at picard.jmb.bah.com

From timxb at faline.bellcore.com Fri Dec 16 11:26:07 1994
From: timxb at faline.bellcore.com (Timothy X Brown)
Date: Fri, 16 Dec 1994 11:26:07 -0500
Subject: Apology for Flame, and Survey
Message-ID: <199412161626.LAA21111@faline.bellcore.com>

Review fees to cover costs may or may not be justified, but they set a dangerous precedent, especially when an implicit goal was to limit entry. Organizers of conferences/journals/grants/positions can only lose by restricting the submissions that they receive. Each of the "problems" Jordan Pollack mentions can be addressed in slightly different ways:

>a) filtering out computer-generated articles,

I thought "filtering out" is the quintessential role of the review process. By having some standards for acceptance, such fluff can be removed. One of the roles of a conference is to do some of this beforehand so I don't waste valuable time on low-information content.

>b) authors not showing up to deliver talks or posters,

Have no review fee, but the authors must submit the conference fee, which is then refundable if the paper is not accepted.

>c) expense of supporting travel costs for both plenary speakers and conference organizers, and

If the organizers cannot organize a cost-effective conference, get someone else to do it. I know that the American Social Sciences Association runs a huge 28-parallel-session conference, and conference fees last year (this was in downtown Boston) were $35 with a $15 student rate! Of course, banquets (choose from one of 10), etc., were extra and water was the only refreshment provided, but what is the point of the conference? BTW, I've organized conferences without my expenses being covered.

>d) declining membership/subscription revenues.

Organize smaller conferences (with <6 parallel sessions), smaller print runs, etc. If profits are the only motive, put a centerfold in the magazine and have naked dancers at the banquet.

Tim

Timothy X Brown
MRE 2E-378
Adaptive Systems Research
Bell Communications Research
445 South Street
Morristown, NJ 07960
Tel: (201) 829-4314
Fax: (201) 829-5888

From singh at psyche.mit.edu Fri Dec 16 16:13:43 1994
From: singh at psyche.mit.edu (Satinder Singh)
Date: Fri, 16 Dec 94 16:13:43 EST
Subject: Transfer/Interference in control problems...
Message-ID: <9412162113.AA17889@psyche.mit.edu>

This is another note on the topic of transfer and interference across problems. I have investigated this issue *a bit* in the reinforcement learning setting of an agent required to solve a SET of hierarchically structured control problems in the *same* environment.

Reinforcement learning (RL) algorithms solve control problems by using control experience to update utility functions. With experience, the utility function improves and therefore so does the resulting ``greedy'' decision strategy. I studied the case of compositionally structured control tasks, where complex tasks are sequences of simpler tasks. I showed how a modular RL architecture (based on Jacobs, Jordan, Nowlan and Hinton's mixture-of-experts architecture) is able to reuse utility functions learned for simple tasks to quickly construct utility functions for more complex tasks in the hierarchy. The composition of a complex task is not available to the agent, but has to be learned. The use of ``shaping'' to encourage transfer is also illustrated.
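As a toy illustration of this kind of reuse -- not the modular mixture-of-experts architecture described above, just the underlying point that utility functions learned for elemental tasks can be recombined to solve a composite task without retraining -- consider tabular Q-learning on a small corridor world (the task and all names and parameters here are invented for the sketch):

import numpy as np

rng = np.random.default_rng(1)
N = 10                    # states 0..9 on a corridor; actions: 0=left, 1=right

def train_q(goal, episodes=300):
    """Tabular Q-learning for the elemental task 'reach state `goal`'."""
    Q = np.zeros((N, 2))
    for _ in range(episodes):
        s = int(rng.integers(0, N))
        for _ in range(50):
            # epsilon-greedy action selection
            a = int(rng.integers(0, 2)) if rng.random() < 0.1 else int(Q[s].argmax())
            s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            target = r + 0.9 * Q[s2].max() * (s2 != goal)   # goal is absorbing
            Q[s, a] += 0.5 * (target - Q[s, a])
            s = s2
            if s == goal:
                break
    return Q

Q_A, Q_B = train_q(goal=2), train_q(goal=7)   # elemental utility functions

# Composite task "visit state 2, then state 7": reuse the elemental Q-tables
# by switching which one is followed greedily once the first subgoal is
# reached -- no new training is needed.
s, phase, steps = 5, 0, 0
while phase < 2 and steps < 100:
    Q = Q_A if phase == 0 else Q_B
    a = int(Q[s].argmax())
    s = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    steps += 1
    if (phase == 0 and s == 2) or (phase == 1 and s == 7):
        phase += 1
print(f"composite task solved in {steps} steps")

In the actual work the composition (when to switch from one subtask to the next) is not given but has to be learned, which is where the modular architecture comes in.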
Anonymous ftp instructions for the above work follow -- the filename is singh-MLJ-1.ps.Z (another file of possible interest is singh-ML92.ps.Z). Several other RL researchers have since worked on this topic. See Dayan and Hinton's paper on ``Feudal Reinforcement Learning'' in NIPS-5 for more recent work and up-to-date references.

================================================================
unix> ftp envy.cs.umass.edu
Name: anonymous
Password: [your ID]
ftp> cd pub/singh
ftp> binary
ftp> get <filename>.ps.Z
ftp> bye
unix> uncompress <filename>.ps.Z
unix> [your command to print PostScript file] <filename>.ps

From N.Sharkey at dcs.shef.ac.uk Fri Dec 16 12:06:27 1994
From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk)
Date: Fri, 16 Dec 94 17:06:27 GMT
Subject: About sequential learning (or interference)
In-Reply-To: Stephen Hanson's message of Wed, 14 Dec 1994 15:20:45 -0500 (EST)
Message-ID: <9412161706.AA01599@entropy.dcs.shef.ac.uk>

>of course avoiding "interference" is
>another way of preventing generalization.
>As usual there is a tradeoff here.

Yes, there is a trade-off, but not with interference, as Hetherington has pointed out. In fact, with backprop, the way to entirely eliminate interference is to get a good approximation to the total underlying function that is being sampled. For example, with an autoencoder memory, if there is good extraction of the identity function then there will be no interference from training on successive memory sets (and of course little need for further training). The trade-off is between old-new discrimination and generalisation. Definitionally, as one improves, the other collapses.

In the paper cited by Hetherington (which is now under journal submission), we present a formally guaranteed solution to the interference and discrimination problem (the HARM model), but it demands exponentially increasing computational resources. It is really used to show the problems of other localisation solutions (French, Murre, Kruschke, etc.). We also report some interesting empirical (simulation) results of this trade-off in a much shorter paper:

Sharkey, N.E., and Sharkey, A.J.C. (in press) Interference and Discrimination in Neural Net Memory. In Joe Levy, Dimitrios Bairaktaris, John Bullinaria and Paul Cairns (Eds), Connectionist Models of Memory and Language, UCL Press.

If anyone is interested, I will mail them a postscript copy of the tech report:

Sharkey, N.E., & Sharkey, A.J.C. Understanding Catastrophic Interference in Neural Nets. Technical Report, Department of Computer Science, University of Sheffield, U.K.

Abstract: A number of recent simulation studies have shown that when feedforward neural nets are trained, using backpropagation, to memorize sets of items in sequential blocks and without negative exemplars, severe retroactive interference or {\em catastrophic forgetting} results. Both formal analysis and simulation studies are employed here to show why and under what circumstances such retroactive interference arises. The conclusion is that, on the one hand, approximations to "ideal" network geometries can entirely alleviate interference, but at the cost of a breakdown in discrimination between input patterns that have been learned and those that have not: {\em catastrophic remembering}. On the other hand, localized geometries for subfunctions eliminate the discrimination problem but are easily disrupted by new training sets and thus cause {\em catastrophic interference}.
The paper concludes with a Hebbian Autoassociative Recognition Memory (HARM) model which provides a formally guaranteed solution to the problems of interference and discrimination. This is then used as a yardstick with which to evaluate other proposed solutions.

noel

Noel Sharkey
Professor of Computer Science
Department of Computer Science
Regent Court
University of Sheffield
S1 4DP, Sheffield, UK
N.Sharkey at dcs.shef.ac.uk
FAX: (0742) 780972

From dyyeung at cs.ust.hk Sat Dec 17 09:54:27 1994
From: dyyeung at cs.ust.hk (Dr. D.Y. Yeung)
Date: Sat, 17 Dec 94 09:54:27 HKT
Subject: elitism at NIPS
Message-ID: <9412170154.AA17552@cs.ust.hk>

> The folks involved in running NIPS are aware that in the past, some people
> have felt the conference was biased against outsiders. As the program
> chair for NIPS*94, I want to mention some steps we took to reduce that
> perception:
>
> * This year we required authors to submit full papers, not just abstracts.
>   That made it harder for "famous" people to get by on their reputation
>   alone, and easier for "non-famous" people to get good papers in.
>
> * This year we required reviewers to provide real feedback to all authors.
>   In order to make this less painful, we completely redesigned the review
>   form (it's publicly accessible via the NIPS home page) and, for the first
>   time, accepted reviews by email. Everyone liked getting comments from
>   the reviewers, and authors whose papers were not accepted understood why.
>
> * We continue to recruit new people for positions in both the organizing
>   and program committees. It's not the same dozen people year after year.
>   We also have a large reviewer pool: 176 people served as reviewers
>   this year.
>
> * We tend to bring in "outsiders" as our invited speakers, rather than
>   the usual good old boys. This year's invited speakers included Francis
>   Crick of the Salk Institute, Bill Newsome (a neuroscientist from
>   Stanford), and Malcolm Slaney (a signal processing expert formerly with
>   Apple and now at Interval Research). None had been to NIPS before.

Another step that the organizers of NIPS may consider is to have a "blind" review process, which would be an even more positive step toward reducing that perception. This practice has been used in some other good conferences too.

Regards,
Dit-Yan Yeung

From raymond at fit.qut.edu.au Sat Dec 17 20:24:16 1994
From: raymond at fit.qut.edu.au (Raymond Lister)
Date: Sat, 17 Dec 94 20:24:16 EST
Subject: elitism at NIPS
Message-ID: <199412171024.UAA20868@fitmail.fit.qut.edu.au>

> From Dave_Touretzky at cs.cmu.edu Sat Dec 17 14:53:02 1994
> To: Connectionists at cs.cmu.edu
> Subject: elitism at NIPS
> Date: Fri, 16 Dec 94 03:22:31 EST
>
> ...
>
> * This year we required authors to submit full papers, not just abstracts.
>   That made it harder for "famous" people to get by on their reputation
>   alone, and easier for "non-famous" people to get good papers in.

The move to full papers was a good change. It would be better still if the copies of papers sent to referees did not contain the names and addresses of authors.
Raymond Lister, School of Computing Science, Queensland University of Technology, AUSTRALIA
Internet: raymond at fitmail.fit.qut.edu.au

From gluck at pavlov.rutgers.edu Sat Dec 17 10:45:25 1994
From: gluck at pavlov.rutgers.edu (Mark Gluck)
Date: Sat, 17 Dec 94 10:45:25 EST
Subject: Rutgers Grad Program in BEHAVIORAL AND NEURAL SCIENCES
Message-ID: <9412171545.AA01777@james.rutgers.edu>

---------------------------------------------------------------------
Seeking Applications for Fall 1995 for Ph.D. Program in
BEHAVIORAL AND NEURAL SCIENCES
Rutgers University, Newark
Target date for applications is JANUARY 20, 1995
---------------------------------------------------------------------

If you are considering graduate study in Cognitive, Integrative, Molecular, or Computational Neuroscience, you may be interested in Rutgers' new interdisciplinary research-oriented graduate program in Behavioral and Neural Sciences (BNS). The BNS aims to provide students with a rigorous understanding of the basic tenets and underpinnings of modern neuroscience. The program emphasizes the multidisciplinary nature of this endeavor, and offers specific research training in Behavioral and Cognitive Neuroscience and Molecular, Cellular and Systems Neuroscience. These research areas represent different but complementary approaches to contemporary issues in behavioral and molecular neuroscience and can emphasize either human or animal studies.

The graduate program is offered by two distinct university units: the newly established Center for Molecular and Behavioral Neuroscience (CMBN) and the Institute of Animal Behavior (IAB). These two units work together but each has its own special emphasis. Research at the CMBN emphasizes integration across levels of analysis and traditional disciplinary boundaries. The CMBN is one of the leading places in this country for the study of the neural bases of behavior and cognition in humans and other animals. Behavioral research areas include the study of memory, language (both signed and spoken), motor control, and vision. Clinically relevant research areas are the study of the physiological and pharmacological aspects of schizophrenia, epilepsy and Parkinson's disease, and the molecular genetics of reading disorders, as well as neuroendocrinology. We have a computational program for students interested in pursuing neural-network models as a tool for understanding psychological and biological issues. There is also a strong focus on single cell (patch clamp, intracellular and extracellular) electrophysiology and multi-unit recording, systems analysis, neuroanatomy and in vivo microdialysis. The IAB offers a unified program in psychobiology and ethological patterns of behavior, with an emphasis on evolution, development and reproduction, as well as the neurogenesis and recovery of function from brain damage.

Other Info
----------
At present the CMBN supports up to 40 students with 12-month renewable assistantships for a period of four years. The current stipend for first-year students is $12,750; this includes tuition remission and excellent healthcare benefits. The IAB-supported students receive D.S. Lehrman Fellowships, which include a 12-month stipend of approximately $10,500 for four years and tuition remission. In addition, the Johnson & Johnson pharmaceutical company's Foundation has provided four Excellence Awards which increase students' stipends by $5,000. Several other fellowships are offered. More information is available in our graduate brochure.
The Rutgers-Newark campus (as distinct from the New Brunswick campus) is 30 minutes outside New York City, and close to other major university research centers at NYU, Columbia, and Princeton, as well as major industrial research labs in Northern NJ, including AT&T, Bellcore, Siemens, and NEC.

Faculty Associated With Rutgers Behavioral & Neural Sciences Ph.D. Program
--------------------------------------------------------------------------
FACULTY - RUTGERS
Elizabeth Abercrombie (Ph.D., Princeton), neurotransmitters and behavior [CMBN]
Colin Beer (Ph.D., Oxford), ethology [IAB]
April Benasich (Ph.D., New York), infant perception and cognition [CMBN]
Ed Bonder (Ph.D., Pennsylvania), cell biology [Biology]
Linda Brzustowicz (M.D., Ph.D., Columbia), human genetics [CMBN]
Gyorgy Buzsaki (Ph.D., Budapest), systems neuroscience [CMBN]
Mei-Fang Cheng (Ph.D., Bryn Mawr), neuroethology/neurobiology [IAB]
Ian Creese (Ph.D., Cambridge), neuropsychopharmacology [CMBN]
Doina Ganea (Ph.D., Illinois Medical School), molecular immunology [Biology]
Alan Gilchrist (Ph.D., Rutgers), visual perception [Psychology]
Mark Gluck (Ph.D., Stanford), learning, memory and neural computation [CMBN]
Ron Hart (Ph.D., Michigan), molecular neuroscience [Biology]
G. Miller Jonakait (Ph.D., Cornell Medical College), neuroimmunology [Biology]
Judy Kegl (Ph.D., M.I.T.), linguistics/neurolinguistics [CMBN]
Barry Komisaruk (Ph.D., Rutgers), behavioral neurophysiology/pharmacology [IAB]
Sarah Lenington (Ph.D., Chicago), genetic basis of mating preference [IAB]
Joan Morrell (Ph.D., Rochester), cellular neuroendocrinology [CMBN]
Teresa Perney (Ph.D., Chicago), ion channel gene expression and function [CMBN]
Howard Poizner (Ph.D., Northeastern), language and motor behavior [CMBN]
Jay Rosenblatt (Ph.D., New York), maternal behavior [IAB]
Anne Sereno (Ph.D., Harvard), attention and visual perception [CMBN]
Maggie Shiffrar (Ph.D., Stanford), vision and motion perception [CMBN]
Harold Siegel (Ph.D., Rutgers), neuroendocrine mechanisms [IAB]
Ralph Siegel (Ph.D., McGill), neuropsychology of visual perception [CMBN]
Donald Stein (Ph.D., Oregon), neural plasticity [IAB]
Jennifer Swann (Ph.D., Michigan), neuroendocrinology [Biology]
Paula Tallal (Ph.D., Cambridge), neural basis of language development [CMBN]
James Tepper (Ph.D., Colorado), basal ganglia neurophysiology and anatomy [CMBN]
Beverly Whipple (Ph.D., Rutgers), women's health [Nursing]
Laszlo Zaborszky (Ph.D., Hungarian Academy), neuroanatomy of forebrain [CMBN]

BNS FACULTY - UMDNJ
Barry Levin (M.D., Emory Medical), neurobiology
Benjamin Natelson (M.D., Pennsylvania), stress and distress
Allan Siegel (Ph.D., SUNY-Buffalo), aggressive behavior
Walter Tapp (Ph.D., Cornell), primate models of cognitive function

ASSOCIATES OF CMBN
Izrail Gelfand (Ph.D., Moscow State), biology of cells [Biology]
Richard Katz (Ph.D., Bryn Mawr), psychopharmacology [Ciba Geigy]
David Tank (Ph.D., Cornell), neural plasticity [Bell Labs]

For More Information or an Application
--------------------------------------
If you are interested in applying to our graduate program, or possibly applying to one of the labs as a post-doc, research assistant or programmer, please contact us via one of the following:

Dr. Gyorgy Buzsaki or Dr. Mark A. Gluck
CMBN, Rutgers University
197 University Ave.
Newark, New Jersey 07102
Phone (Secretary, Ann Kutyla): (201) 648-1080 (Ext. 3200)
Fax: (201) 648-1272
Email: buzsaki at axon.rutgers.edu or gluck at pavlov.rutgers.edu

We will be happy to send you info on our research and graduate program, as well as set up a possible visit to the Neuroscience Center here at Rutgers-Newark.

INTERNET INFORMATION:
---------------------
Additional information on this program can be obtained over the internet via World Wide Web at: http://www.cmbn.rutgers.edu/ Please be warned that it is still under construction.

From sef+ at cs.cmu.edu Sat Dec 17 15:20:28 1994
From: sef+ at cs.cmu.edu (Scott E. Fahlman)
Date: Sat, 17 Dec 94 15:20:28 EST
Subject: elitism at NIPS
In-Reply-To: Your message of Sat, 17 Dec 94 20:24:16 -0500. <199412171024.UAA20868@fitmail.fit.qut.edu.au>
Message-ID:

Personally, I think that "blind" reviewing is a bad idea because it is dishonest. In a field like this one, it is easy in 90% of the cases for an experienced reviewer to tell who is the author of a paper, or at least what group the paper is from. I think it's better to be explicit about the lack of anonymity, and to challenge the reviewers to consciously try to rise above any "in-group" bias, than to provide a show of anonymity that is really a fraud in most cases.

I also believe that in some cases the identity of the author does provide essential context to the reviewer. If I were to present evidence that my own Cascade-Correlation algorithm gets certain things wrong, a reviewer might not have to worry as much about rookie mistakes in running Cascor as he would in a paper from someone he has never heard of. On the other hand, if I claim that Cascor does certain things *better* than the competition, the claim might warrant extra scrutiny because I am an interested party in the debate. Even more scrutiny is called for if I am known to have a long-standing feud with the person whose work I am criticizing.

I know that this view is controversial. Some of you will certainly argue that such considerations have no place in science, and that every paper must stand or fall on its content alone. Perhaps that is true in a long paper, but it may be impossible to present all the relevant context in an eight-page paper. (Because of the NIPS style, this is more like a 5-page paper published elsewhere.)

I don't know if this is the right forum for a debate on the merits of blind reviewing. I just thought it would be useful to point out that there are some arguments on the other side.

-- Scott

===========================================================================
Scott E. Fahlman                        Internet: sef+ at cs.cmu.edu
Principal Research Scientist            Phone: 412 268-2575
School of Computer Science              Fax: 412 268-5576
Carnegie Mellon University              Latitude: 40:26:46 N
5000 Forbes Avenue                      Longitude: 79:56:55 W
Pittsburgh, PA 15213                    Mood: :-)
===========================================================================

From peter at ai.iit.nrc.ca Sun Dec 18 22:41:32 1994
From: peter at ai.iit.nrc.ca (Peter Turney)
Date: Mon, 19 Dec 1994 08:41:32 +0500
Subject: Workshop on Data Engineering for Inductive Learning
Message-ID: <9412191341.AA05163@ksl0j.iit.nrc.ca>

CALL FOR PARTICIPATION: Workshop on Data Engineering for Inductive Learning
---------------------------------------------------------------------------
IJCAI-95, Montreal (Canada), August 19/20/21, 1995

Objective
---------
In inductive learning, algorithms are applied to data.
It is well understood that attention to both elements is critical -- unless instances are represented so as to make a learner's generalization methods appropriate, no inductive learner can succeed. In applied work, it is not uncommon for practitioners to spend the bulk of their time exploring and transforming data in efforts to enable the use of existing induction techniques. Despite widespread acceptance of these facts, however, research reports normally give data work short shrift. In fact, a report devoted mainly to the data in an induction problem rather than to the algorithms that process it might well be difficult to publish in mainstream machine learning and neural network venues.

Our goal in this workshop is to counterbalance the predominant focus on algorithms by providing a forum in which data takes center stage. Specifically, we invite discussion of issues relevant to data engineering, which we define as the transformation of raw data into a form useful as input to algorithms for inductive learning. Data engineering is a concern in industrial and commercial applications of machine learning, neural networks, genetic algorithms, and traditional statistics. Among others, papers of the following kinds are welcome:

1. Detailed case studies of data engineering in real-world applications of inductive learning.
2. Descriptions of data engineering techniques or methods that have proven useful across a number of applications.
3. Studies of the data requirements of important inductive learning algorithms, the specifications to which data must be engineered for these algorithms to function.
4. Reports on software tools and environments for data engineering, including papers on "interactive induction" algorithms.
5. Empirical studies documenting the effect of data engineering on the success of induced models.
6. Surveys of data engineering practice in related fields: statistics, pattern recognition, etc. (but not problem-solving or theorem-proving).
7. Papers on constructive induction, feature selection and related techniques.
8. Papers on (re)formulating a problem to make it suitable for inductive learning techniques. For example, a paper on reformulating the problem of information filtering as learning to classify.

This workshop will enable an overview of current work in data engineering. Since the problem of data engineering has received relatively little published attention, it is difficult to anticipate the work that will be presented at this workshop. We expect that the workshop will make it possible to see common trends, shared problems, and clever solutions that we cannot guess at, given our current, limited view of data engineering. We have allowed ample time for discussion of each paper (10 minutes), to foster an atmosphere that will encourage data engineers to share their stories and to seek common elements. We aim to leave the workshop with a vision of the research directions that might bring science into data engineering.

Participation
-------------
During the workshop, we anticipate approximately 14 presentations. Each paper will be given 25 minutes: 15 minutes for presentation and 10 minutes for discussion. There will be at most 30 participants in the workshop. If you wish to participate in the workshop, you may either submit a paper or a description of work that you have done (are doing, or plan to do) that is relevant to the workshop. Papers should be at most 10 pages long.
The first page should include the title, the author's name(s) and affiliation(s), a complete mailing address, phone number, fax number, e-mail, an abstract of at most 300 words, and up to five keywords. For those who do not choose to submit a paper, a description of relevant work should be at most 1 page long and should include complete address information. Workshop participants are required to register for the main IJCAI-95 conference.

All submissions (papers or descriptions of relevant work) will be reviewed by at least two members of the organizing committee. Please send your submissions to the contact address below. Submissions should be PostScript files, sent by e-mail. Accepted submissions will be available before the workshop through ftp. Workshop participants will also be given copies of the papers on the day of the workshop. In selecting the papers, the committee will aim for breadth of coverage of the topics listed above. Ideally, each of the eight kinds of papers listed above would have at least one representative in the workshop. A paper with new ideas on data engineering will be preferred to a high-quality paper on a familiar idea.

The workshop organizers plan to publish revised versions of selected papers from the workshop. The papers would be published either as a book or as a special issue of a journal.

The exact date for the workshop has not yet been decided by IJCAI. The workshop is one day in duration and will be held on one of August 19, 20, or 21.

Schedule
--------
Deadline for submissions: March 31, 1995
Notification of acceptance: April 21, 1995
Submissions available by ftp: April 28, 1995
Actual Workshop: August 19/20/21, 1995

Organizing Committee
--------------------
Peter Turney, National Research Council (Canada)
Cullen Schaffer, CUNY/Hunter College (USA)
Rob Holte, University of Ottawa (Canada)

Contact Address
---------------
Dr. Peter Turney
Knowledge Systems Laboratory
Institute for Information Technology
National Research Council Canada
Ottawa, Ontario, Canada K1A 0R6
(613) 993-8564 (office)
(613) 952-7151 (fax)
peter at ai.iit.nrc.ca

From ucganlb at ucl.ac.uk Mon Dec 19 09:08:28 1994
From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London)
Date: Mon, 19 Dec 94 14:08:28 +0000
Subject: catastrophic interference of BP
Message-ID: <187786.9412191408@link-1.ts.bcc.ac.uk>

Studies of catastrophic interference in BP networks are interesting when considering such a network as a model of some human (or animal) memory system. Is there any reason for doing that?

Neil

From ted at SPENCER.CTAN.YALE.EDU Mon Dec 19 09:52:27 1994
From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU)
Date: Mon, 19 Dec 1994 09:52:27 -0500
Subject: "Blind" reviews are impossible
Message-ID: <199412191452.AA23787@PLANCK.CTAN.YALE.EDU>

Even a nonexpert reviewer can figure out who wrote a paper simply by looking for citations of prior work. The only way to guarantee a "blind" review is to forbid authors from citing anything they've done before, or insist on silly euphemisms when citing such publications.

--Ted

From jbower at smaug.bbb.caltech.edu Mon Dec 19 15:44:10 1994
From: jbower at smaug.bbb.caltech.edu (jbower@smaug.bbb.caltech.edu)
Date: Mon, 19 Dec 94 12:44:10 PST
Subject: WCNN / INNS / or whatever
Message-ID: <9412192044.AA00733@smaug.bbb.caltech.edu>

In brief support of Jordan, I believe that it is well known that the finances of both the WCNN and the INNS have been strange for years.
It is also not too surprising that one would confuse the two -- same list of participants, basically. It is also well known that the "everyone who can pay" approach taken by the organizers of most of the neural network meetings has resulted in very poor signal-to-noise ratios. The historically more "old boy" approach of NIPS has meant a better-quality meeting, but less openness. I wish Dave Touretzky luck in efforts to change; there is little evidence that INNS or IEEE even recognize that there is a problem.

Two years ago, a considerable amount of pressure was placed on the neurobiologists who have organized the computational neuroscience meetings (CNS*92-94) to merge with the next INNS meeting (in this case). The vote by participants of CNS*93 not to was essentially 100%. Other than the comment that very little that goes on in any of these meetings has much to do with neurobiology, the other reasons most often mentioned were the above.

Jim Bower

***************************************
James M. Bower
Division of Biology
Mail code: 216-76
Caltech
Pasadena, CA 91125
(818) 395-6817
(818) 449-0679 FAX
NCSA Mosaic laboratory address: http://www.bbb.caltech.edu/bowerlab
NCSA Mosaic address for GENESIS: http://www.bbb.caltech.edu/GENESIS

From maass at igi.tu-graz.ac.at Mon Dec 19 15:48:06 1994
From: maass at igi.tu-graz.ac.at (Wolfgang Maass)
Date: Mon, 19 Dec 94 21:48:06 +0100
Subject: elitism at NIPS
Message-ID: <199412192048.AA16441@figids01>

I have the impression that for theory papers it would NOT be very beneficial if NIPS changed to "blind reviewing".

From dtam at morticia.cnns.unt.edu Mon Dec 19 16:28:15 1994
From: dtam at morticia.cnns.unt.edu (David Tam)
Date: Mon, 19 Dec 94 15:28:15 CST
Subject: elitism at NIPS
Message-ID: <199412192128.AA02352@morticia.cnns.unt.edu>

I think a totally honest system has to be doubly open, i.e., the reviewers' names have to be attached to the referees' comments. That will hold them accountable for what they say. That will keep them HONEST. If they want to give negative criticism, let it be known who they are. As it is now, it is not an open system -- it is blind one way and open the other way, and that makes the system unfair, biased, and dictatorial. As Scott Fahlman said, it is practically impossible to make it blind, because we all know who does what line of work. So, if it is not blind, make it totally open, and have an appeal process, such that if the referee is WRONG, there is a recourse!! That's what a democratic process is all about -- to keep the system honest by having an OPEN process with APPEAL recourse. A closed system is a sure way to breed corruption and dictatorship. This response applies to the whole scientific review process in reviewing papers and grants in general, as well as in conferences.

David C. Tam
Center for Network Neuroscience
Dept. of Biological Sciences
University of North Texas

From shams at maxwell.hrl.hac.com Mon Dec 19 20:39:28 1994
From: shams at maxwell.hrl.hac.com (Soheil Shams)
Date: Mon, 19 Dec 1994 17:39:28 -0800
Subject: Position Announcement
Message-ID: <9412200146.AA07164@maelstrom>

SIGNAL / IMAGE PROCESSING RESEARCH OPPORTUNITIES

Hughes Research Laboratories has an immediate opening for a Research Staff Member to join a team of scientists in the Computational Intelligence and Advanced Signal Processing Algorithms Project.
Team members in this project have developed novel, state-of-the-art neural networks, time-frequency transforms, and image compression algorithms for use in both commercial and military applications. The successful candidate will investigate advanced signal and image processing techniques for multimedia compression, data fusion, and pattern recognition applications. Current work is focused on the application of wavelets, neural networks, and computer vision techniques for data compression and pattern recognition applications. Specific duties will include theoretical analysis, algorithm design, and software simulation.

Candidates are expected to have a Ph.D. in Electrical Engineering, Applied Mathematics, or Computer Science. Strong analytical skills and demonstrated ability to perform creative research, along with experience in signal and image processing, image or video compression, or neural networks, are required. Practical experience with C or C++ is essential. Good communication and teamwork skills are keys to success.

Overlooking the Pacific Ocean and the coastal community of Malibu, the Research Laboratories provides an ideal environment for you to make the most of your scientific abilities. Our organization offers a competitive salary and benefits package. Additional information may be obtained from Lynn Ross. For immediate consideration, send your resume to:

Lynn W. Ross
Department RM
Hughes Research Laboratories
3011 Malibu Canyon Road
Malibu, CA 90265
FAX: (310) 317-5651
Internet: lross at msmail4.hac.com

Proof of legal right to work in the United States required. An Equal Opportunity Employer.

From wermter at nats2.informatik.uni-hamburg.de Tue Dec 20 06:28:41 1994
From: wermter at nats2.informatik.uni-hamburg.de (Stefan Wermter)
Date: Tue, 20 Dec 94 12:28:41 +0100
Subject: IJCAI95-workshop: Learning for Natural Language Processing
Message-ID: <9412201128.AA01573@nats2.informatik.uni-hamburg.de>

---------------------------------------------------------------------------
CALL FOR PAPERS AND PARTICIPATION

IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing

International Joint Conference on Artificial Intelligence (IJCAI-95)
Palais de Congres, Montreal, Canada
currently scheduled for August 21, 1995

ORGANIZING COMMITTEE
--------------------
Stefan Wermter, University of Hamburg
Gabriele Scheler, Technical University Munich
Ellen Riloff, University of Utah

PROGRAM COMMITTEE
-----------------
Jaime Carbonell, Carnegie Mellon University, USA
Joachim Diederich, Queensland University of Technology, Australia
Georg Dorffner, University of Vienna, Austria
Jerry Feldman, ICSI, Berkeley, USA
Walther von Hahn, University of Hamburg, Germany
Aravind Joshi, University of Pennsylvania, USA
Ellen Riloff, University of Utah, USA
Gabriele Scheler, Technical University Munich, Germany
Stefan Wermter, University of Hamburg, Germany

WORKSHOP DESCRIPTION
--------------------
In the last few years, there has been a great deal of interest and activity in developing new approaches to learning for natural language processing. Various learning methods have been used, including:

- connectionist methods/neural networks
- machine learning algorithms
- hybrid symbolic and subsymbolic methods
- statistical techniques
- corpus-based approaches.

In general, learning methods are designed to support automated knowledge acquisition, fault tolerance, plausible induction, and rule inferences.
Using learning methods for natural language processing is especially important because language learning is an enabling technology for many other language processing problems, including noisy speech/language integration, machine translation, and information retrieval. Different methods support language learning to various degrees but, in general, learning is important for building more flexible, scalable, adaptable, and portable natural language systems. This workshop is of interest particularly at this time because systems built by learning methods have reached a level where they can be applied to real-world problems in natural language processing and where they can be compared with more traditional encoding methods.

The workshop will bring together researchers from the US/Canada, Europe, Japan, Australia and other countries working on new approaches to language learning. The workshop will provide a forum for discussing various learning approaches for supporting natural language processing. In particular the workshop will focus on questions like:

- How can we apply suitable existing learning methods for language processing?
- What new learning methods are needed for language processing and why?
- What language knowledge should be learned and why?
- What are similarities and differences between different approaches for language learning? (e.g., machine learning algorithms vs neural networks)
- What are strengths and limitations of learning rather than manual encoding?
- How can learning and encoding be combined in symbolic/connectionist systems?
- Which aspects of system architectures and knowledge engineering have to be considered? (e.g., modular, integrated, hybrid systems)
- What are successful applications of learning methods in various fields? (speech/language integration, machine translation, information retrieval)
- How can we evaluate learning methods using real-world language? (text, speech, dialogs, etc.)

WORKSHOP FORMAT
---------------
The workshop will provide a forum for the interactive exchange of ideas and knowledge. Approximately 30-40 participants are expected and there will be time for up to 15 presentations depending on the number and quality of paper contributions received. Normal presentation length will be 15+5 minutes, leaving time for direct questions after each talk. There may be a few invited talks of 25+5 minutes length. In addition to prepared talks, there will be time for moderated discussions after two related sessions. Furthermore, the moderated discussions will provide an opportunity for an open exchange of comments, questions, reactions, and opinions.

PUBLICATION
-----------
Workshop proceedings will be published by AAAI. If there is sufficient interest among the participants of the workshop, there may be a possibility to publish the results of the workshop as a book.

REGISTRATION
------------
This workshop will take place directly before the general IJCAI conference. It is an IJCAI policy that workshop participation is not possible without registration for the general conference.

SUBMISSIONS
-----------
All submissions will be refereed by the program committee and other experts in the field. Please submit 4 hardcopies AND a postscript file. The paper format is the IJCAI-95 format: 12pt article style latex, no more than 43 lines, 15 pages maximum, including title, address and email address, abstract, figures, references. Papers should fit on 8 1/2" x 11" pages. Notifications will be sent by email to the first author.
Postscript files can be uploaded with anonymous ftp:

ftp nats4.informatik.uni-hamburg.de (134.100.10.104)
login: anonymous
password: <your e-mail address>
cd incoming/ijcai95-workshop
binary
put <filename>
quit

Hardcopies AND postscript files must arrive no later than 24th February 1995 at the address below.

##############Submission Deadline: 24th February 1995
##############Notification Date: 24th March 1995
##############Camera ready Copy: 13th April 1995

Please send correspondence and submissions to:

################################################
Dr. Stefan Wermter
Department of Computer Science
University of Hamburg
Vogt-Koelln-Strasse 30
D-22527 Hamburg
Germany
phone: +49 40 54715-531
fax: +49 40 54715-515
e-mail: wermter at informatik.uni-hamburg.de
################################################

From N.Sharkey at dcs.shef.ac.uk Tue Dec 20 08:04:01 1994
From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk)
Date: Tue, 20 Dec 94 13:04:01 GMT
Subject: catastrophic interference of BP
In-Reply-To: Dr Neil Burgess - Anatomy UCL London's message of Mon, 19 Dec 94 14:08:28 +0000 <187786.9412191408@link-1.ts.bcc.ac.uk>
Message-ID: <9412201304.AA18623@entropy.dcs.shef.ac.uk>

>Studies of catastrophic interference in BP networks are
>interesting when considering such a network as a model of some human
>(or animal) memory system.
>Is there any reason for doing that?
>Neil

BP has been used extensively in human cognitive and memory modelling, and so it is useful to know if it "cashes in" in terms of recognition memory as well. The interference issue is also problematic for training online control processes (e.g. in robotic coordination) where new training data is not necessarily accompanied by all of the "old" data samples. If there is not a nice regular general function to be extracted, then forgetting could be very severe. Problems also arise for BP in any such online task that requires discrimination between what was in the training sets and what was not.

noel

From gbugmann at school-of-computing.plymouth.ac.uk Tue Dec 20 12:15:05 1994
From: gbugmann at school-of-computing.plymouth.ac.uk (Guido.Bugmann xtn 2566)
Date: Tue, 20 Dec 94 17:15:05 GMT
Subject: PhD Position available
Message-ID: <210.9412201715@subnode.soc.plym.ac.uk>

A PhD position is available immediately to work on the project "The Plymouth Hand" at the University of Plymouth, UK. This project comprises an application of neural networks to control. The aim of the project is to develop a robot/prosthetic hand with slippage detection as a source of feedback for the control of the grip pressure. A working prototype has been built which can lift a cylinder of solid steel and also a hollow rolled sheet of A4 paper without deforming it.

Further work comprises:
1. Derivation of a practical specification for two systems using the slippage-detection adaptive grasping force technique, i.e. a robot gripper and a prosthetic hand.
2. Kinematically model a range of proposed designs using virtual reality software.
3. Validate appropriate kinematic models.
4. Design, build and test an improved version of the existing device to meet the specifications of a robot gripper. In particular:
a) The software environment will be investigated (e.g. assembly code, C, etc.) and an appropriate one chosen and developed. New techniques such as rule-based (fuzzy logic) methods and neural networks will be investigated.
b) Design and development of the power electronics and drive system.
c) Specify and test the necessary microcontroller.
5) Same as 4) but as an adaptation of an existing NHS prosthetic hand.
----------------------------------------------------
The salary is in the standard range of 5-6 Kpounds/year for PhD students.

For further information, please contact:
Paul Robinson
Dept. of Electrical and Electronic Engineering
University of Plymouth
Plymouth PL4 8AA
United Kingdom
Phone (+44) 1752 23 25 95 / 72
Fax (+44) 1752 23 25 83
-----------------------------------------------------

From ucganlb at ucl.ac.uk Tue Dec 20 12:54:01 1994
From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London)
Date: Tue, 20 Dec 94 17:54:01 +0000
Subject: catastrophic interference of BP
Message-ID: <154923.9412201754@link-1.ts.bcc.ac.uk>

>> Studies of catastrophic interference in BP networks are
>> interesting when considering such a network as a model of some human
>> (or animal) memory system.
>> Is there any reason for doing that?
>> Neil

> So far Jay McClelland has replied: his recently advertised
> tech. report provides an example of useful consideration of cat. int. with respect to
> the possible existence of complementary learning systems in the brain, and attempts
> to distinguish between the specific properties of BP and the more general question.
> Jaap Murre also replied that he partially addresses the problem in [1].

Generally, I think that caution should be expressed in generalising between artificial learning mechanisms. It may be that learning within a system with fixed parameters, mediated by iterative minimisation of some global attribute (e.g. sum-squared error), will tend to show interference with `catastrophic' characteristics, although how it manifests itself will depend on the details of the algorithm. But I would be surprised if other algorithms, e.g. learning by piecemeal constructive algorithms (where extra units are added for a specific local task, such as [2]), behaved like that -- so, e.g., we might not expect learning like song-learning in birds (associated with neural growth) to necessarily show the same type of interference.

I also suspect that the characteristics of interference differ between iterative algorithms in which the `training set' must be presented many times (e.g. BP) and `one-shot' learning algorithms (e.g. the Hopfield model). In the former case there is obviously a problem of how to interleave new data into the training set; in the latter case there is no such problem, and slight variations of the `Hebbian' learning rule can produce imprinting, primacy, recency or combinations of the above [3]. Given the availability of alternatives, it is not clear that BP should always be the canonical choice for modelling learning and memory. It is certainly not the easiest to motivate biologically.

Merry Christmas,
Neil

[1] report: ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/hyper1.ps
[2] M. R. Frean, `The Upstart algorithm: A method for constructing and training feedforward neural networks', {\it Neural Computation,} {\bf 2}, 198-209 (1990).
[3] `List Learning in Neural Networks', Neil Burgess, J. L. Shapiro and M. A. Moore, {\it Network} {\bf 2} 399-422, 1991.

From nburgess at lbs.lon.ac.uk Tue Dec 20 11:49:43 1994
From: nburgess at lbs.lon.ac.uk (nburgess@lbs.lon.ac.uk)
Date: Tue, 20 Dec 94 16:49:43 GMT
Subject: NNCM 95 Announcement
Message-ID: <199412201649.LAA04285@jupiter.lbs.lon.ac.uk>

ANNOUNCEMENT AND CALL FOR PAPERS

NNCM-95
THIRD INTERNATIONAL CONFERENCE ON NEURAL NETWORKS IN THE CAPITAL MARKETS
Thursday-Friday, October 12-13, 1995, with tutorials on Wednesday, October 11, 1995.
The Langham Hilton, London, England.

Neural networks are now emerging as a major modelling methodology in financial engineering. Because of the overwhelming interest in the NNCM workshops held in London in 1993 and Pasadena in 1994, the third annual NNCM conference will be held October 12-13, 1995, in London. NNCM'95 takes a critical look at state-of-the-art neural network applications in finance. This is a research meeting where original, high-quality contributions to the field are presented and discussed. In addition, a day of introductory tutorials (Wednesday, October 11) will be included to familiarise audiences of different backgrounds with financial engineering and the mathematical aspects of the field.

Application areas include:
+ Bond and stock valuation and trading
+ Foreign exchange rate prediction and trading
+ Commodity price forecasting
+ Risk management
+ Tactical asset allocation
+ Portfolio management
+ Option pricing
+ Trading strategies

Technical areas include, but are not limited to:
+ Generalised least squares
+ Robust model estimation
+ Univariate time series analysis
+ Multivariate data analysis
+ Classification and ranking
+ Pattern recognition
+ Model selection
+ Hypothesis testing and confidence intervals

Instructions for Authors
Authors who wish to present a paper should mail a copy of their extended abstract (4 pages, single-sided, single-spaced) typed on A4 (8.5" by 11") paper to the secretariat no later than May 31, 1995. Submissions will be refereed by no less than four referees and authors will be notified of acceptance by 30 July 1995. Separate registration is required using the attached registration form. Authors are encouraged to submit abstracts as soon as possible.

Registration
To register, complete the registration form and mail to the secretariat. Please note that places are limited and will be allocated on a "first-come first-served" basis.

Secretariat: For further information, please contact the NNCM-95 secretariat:
Ms Busola Oguntula, London Business School
Sussex Place, Regent's Park, London NW1 4SA, UK
e-mail: boguntula at lbs.lon.ac.uk
phone (+44) (0171) 262 50 50
fax (+44) (0171) 724 78 75

Location: The main conference will be held at The Langham Hilton, which is situated near Regent's Park and is a short walk from Baker Street Underground Station. Further directions, including a map, will be sent to all registrants.

Programme Committee
Dr A. Refenes, London Business School (Chairman)
Dr Y. Abu-Mostafa, Caltech
Dr A. Atiya, Cairo University
Dr N. Biggs, London School of Economics
Dr D. Bunn, London Business School
Dr M. Jabri, University of Sydney
Dr B. LeBaron, University of Wisconsin
Dr A. Lo, MIT Sloan School
Dr J. Moody, Oregon Graduate Institute
Dr C. Pedreira, Catholic University PUC-Rio
Dr M. Steiner, Universitaet Munster
Dr A. Timermann, University of California, San Diego
Dr A. Weigend, University of Colorado
Dr H. White, University of California, San Diego
Dr H. White, University of California, San Diego

Hotel Accommodation: Convenient hotels include:

The Langham Hilton
1 Portland Place, London W1N 4JA
Tel: (+44) (0171) 636 10 00
Fax: (+44) (0171) 323 23 40

Sherlock Holmes Hotel
108 Baker Street, London NW1 1LB
Tel: (+44) (0171) 486 61 61
Fax: (+44) (0171) 486 08 84

The White House Hotel
Albany St., Regent's Park, London NW1
Tel: (+44) (0171) 387 12 00
Fax: (+44) (0171) 388 00 91

--------------------------Registration Form ----------------------------

NNCM-95 Registration Form
Third International Conference on Neural Networks in the Capital Markets
October 12-13 1995

Name:____________________________________________________
Affiliation:_____________________________________________
Mailing Address: ________________________________________
_________________________________________________________
Telephone:_______________________________________________

****Please circle the applicable fees and write the total below****

Main Conference (October 12-13): (British Pounds)
Registration fee 450
Discounted fee for academicians 250 (letter on university letterhead required)
Discounted fee for full-time students 100 (letter from registrar or faculty advisor required)

Tutorials (October 11): You must be registered for the main conference in order to register for the tutorials. (British Pounds)
Morning Session Only 100
Afternoon Session Only 100
Both Sessions 150
Full-time students 50 (letter from registrar or faculty advisor required)

TOTAL: _________

Payment may be made by: (please tick)
____ Check payable to London Business School
____ VISA ____ Access ____ American Express
Card Number:___________________________________
--

From pah at unixg.ubc.ca Tue Dec 20 14:39:18 1994
From: pah at unixg.ubc.ca (Phil A. Hetherington)
Date: Tue, 20 Dec 1994 11:39:18 -0800 (PST)
Subject: catastrophic interference of BP
In-Reply-To: <187786.9412191408@link-1.ts.bcc.ac.uk>
Message-ID:

Neil Burgess wrote:
> Studies of catastrophic interference in BP networks are
> interesting when considering such a network as a model of some human
> (or animal) memory system.
> Is there any reason for doing that?

Of course. Network models have properties such as distributed representations, generalization, interference, content addressability, etc., that are also true of animal and human memory. They provide an alternative framework for construing memory processes that is superior to box-and-arrow modeling, primarily because they can be 'lesioned' as in animal experiments or as found in human patients, and because they are 'executable'. Because they are executable (i.e., perform a function), the effects of these lesions and other parameter manipulations on the performance of the network can be observed.

There are now many alternative supervised learning algorithms available, but there are still many reasons to continue to study this one. It is most certainly true that both humans and animals learn via feedback provided by the effect of erroneous behavior--a process analogous to back prop. Unsupervised learning algorithms such as competitive learning do not give you this. Unsupervised learning algorithms are not easily executable--you can't get them to do many simple 'behaviors', like plot trajectories from starting locations to multiple goals, as in Neil's models. Of course, any supervised learning algorithm will confer the ability to train the net to perform, but there are a couple of, mostly pragmatic, reasons to stick with back prop for now.
Primarily, because of the availability of the McClelland & Rumelhart books and program disks, back prop is the most easily available and most commonly used algorithm. Back prop already provides the engine in countless published models. Gaining an understanding of the behavior of this algorithm will enable a better understanding of the flaws of those models. Thus, it's not so much that you *would want to* use the algorithm; it is that it has already been used. Let's understand why we are discarding it before we do so.

Phil Hetherington
pah at unixg.ubc.ca

From minton at ptolemy-ethernet.arc.nasa.gov Tue Dec 20 17:39:00 1994
From: minton at ptolemy-ethernet.arc.nasa.gov (Steve Minton)
Date: Tue, 20 Dec 94 14:39:00 PST
Subject: Learning Article
Message-ID: <9412202239.AA09677@ptolemy.arc.nasa.gov>

Readers of this group may be interested in the following article, which was just published in the Journal of Artificial Intelligence Research (a journal which is available both online and in print).

Buntine, W.L. (1994) "Operations for Learning with Graphical Models", Volume 2, pages 159-225
Postscript: volume2/buntine94a.ps (1.53M); compressed, volume2/buntine94a.ps.Z (568K)

Abstract: This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided, including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms.

The PostScript file is available via:
-- comp.ai.jair.papers
-- World Wide Web: The URL for our World Wide Web server is http://www.cs.washington.edu/research/jair/home.html
-- Anonymous FTP from either of the two sites below:
   CMU: p.gp.cs.cmu.edu, directory: /usr/jair/pub/volume2
   Genoa: ftp.mrg.dist.unige.it, directory: pub/jair/pub/volume2
-- Automated email. Send mail to jair at cs.cmu.edu or jair at ftp.mrg.dist.unige.it with the subject AUTORESPOND, and the body GET VOLUME2/BUNTINE94A.PS (either upper or lowercase is fine). Note: your mailer might find this file too large to handle. (The compressed version of this paper cannot be mailed.)
-- JAIR Gopher server: at p.gp.cs.cmu.edu, port 70.

For more information about JAIR, check out our WWW or FTP sites, or send electronic mail to jair at cs.cmu.edu with the subject AUTORESPOND and the message body HELP, or contact jair-ed at ptolemy.arc.nasa.gov.
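As a concrete (if toy) illustration of one of the algorithm schemas reviewed in the paper, here is a minimal Gibbs sampler for a three-node Bayesian network A -> C <- B with binary variables, written in Python. The network, its probability tables and all names below are invented for this sketch and are not taken from Buntine's paper.

    import random

    p_a = 0.3                                   # P(A=1)
    p_b = 0.6                                   # P(B=1)
    p_c = {(0, 0): 0.05, (0, 1): 0.60,          # P(C=1 | A, B), hypothetical table
           (1, 0): 0.70, (1, 1): 0.95}

    def draw(prior, lik1, lik0):
        # sample a binary variable from P(v) proportional to prior(v) * likelihood(v)
        w1, w0 = prior * lik1, (1.0 - prior) * lik0
        return 1 if random.random() < w1 / (w1 + w0) else 0

    def gibbs(c_obs=1, n_sweeps=20000, burn_in=1000):
        a = b = 0
        hits = 0
        for t in range(n_sweeps):
            # P(A | B=b, C=c_obs) propto P(A) * P(C=c_obs | A, B=b)
            a = draw(p_a, p_c[(1, b)] if c_obs else 1 - p_c[(1, b)],
                          p_c[(0, b)] if c_obs else 1 - p_c[(0, b)])
            # P(B | A=a, C=c_obs) propto P(B) * P(C=c_obs | A=a, B)
            b = draw(p_b, p_c[(a, 1)] if c_obs else 1 - p_c[(a, 1)],
                          p_c[(a, 0)] if c_obs else 1 - p_c[(a, 0)])
            if t >= burn_in:
                hits += a
        return hits / (n_sweeps - burn_in)

    print("Estimated P(A=1 | C=1):", gibbs())

Conditioning on C couples A and B ("explaining away"), so neither full conditional factorizes away; the sampler alternates draws from the two full conditionals, each proportional to prior times likelihood, and the long-run frequency of A=1 estimates the posterior.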
From gary at cs.ucsd.edu Tue Dec 20 18:29:18 1994
From: gary at cs.ucsd.edu (Gary Cottrell)
Date: Tue, 20 Dec 94 15:29:18 -0800
Subject: "Blind" reviews are impossible
Message-ID: <9412202329.AA18047@desi>

Right, silly euphemisms work, or simply saying, "prior work on this by cottrell, etc." The year I was on the NIPS PC, I was also on the PC for ACL and AAAI. ACL used blind reviewing. I was sure I was reviewing a paper by Ken Church, and I was wrong. Of the 15 or so papers I reviewed, there was only one where I had any idea who wrote it.

gary cottrell

From pf2 at st-andrews.ac.uk Tue Dec 20 17:07:07 1994
From: pf2 at st-andrews.ac.uk (Peter Foldiak)
Date: Tue, 20 Dec 94 22:07:07 GMT
Subject: research assistant job - computational neuroscience
Message-ID: <8312.9412202207@psych.st-andrews.ac.uk>

University of St Andrews, School of Psychology

RESEARCH ASSISTANT
Computational Neuroscience

A one-year research assistantship is available for research on a novel application of computational methods to the neurophysiology of the primate visual system with Dr Peter Foldiak, in collaboration with Dr David Perrett. The assistant will be involved in the development of software and experimental procedures. Training in computer programming (C, UNIX), and preferably in some of the following areas, is preferred: mathematics (optimisation), neural networks, genetic algorithms, computer graphics (on SGI), vision or neuroscience.

Salary will be at the appropriate point on the 1B scale for Research Staff (GBP 13941-17813). Application forms and further particulars are available from Personnel Services, University of St Andrews, KY16 9AJ, U.K., tel: +44 334 462567 (out of hours +44 334 462571), or by fax: +44 334 462570, to whom completed forms accompanied by a letter of application should be returned to arrive not later than Monday, 30 January 1995. Please quote reference number: SH/APS0838. The University operates an Equal Opportunities Policy.

From jaap.murre at mrc-apu.cam.ac.uk Wed Dec 21 12:50:03 1994
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Wed, 21 Dec 94 17:50:03 GMT
Subject: sequential interference
Message-ID: <9412211750.AA26645@rigel.mrc-apu.cam.ac.uk>

In response to recent comments by Neil Burgess, Bob French, Phil Hetherington, Noel Sharkey, Jay McClelland and others on 'catastrophic interference', I think it is important to establish that 'vanilla backpropagation' has by now been eliminated as a valid model of human learning and memory, its implausible learning transfer being the most obvious failure. An important reason for this failure can be found in the nature of the hidden-layer representations. Catastrophic interference and hypertransfer (i.e., excessive positive transfer; see Murre, in press a) are two sides of the same coin: hidden-layer representations in backpropagation bear little relationship to the orthogonalities of the input patterns. In the case of interference experiments in humans the following result is well established: if the input patterns (stimuli) in two consecutive learning sets A and B are different, then there will be neither interference nor positive transfer (e.g., Osgood, 1949). This is not the case in backpropagation: orthogonality of the stimuli has no effect on the reduction of interference. The reason for this is that the hidden-layer representations are always about equally overlapping *no matter what the input stimuli are*.
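The overlap claim is easy to probe numerically. The Python sketch below is an illustration of the kind of measurement involved, not Murre's code; the tiny architecture, data and parameter values are all invented. It trains a small backpropagation net on mutually orthogonal versus heavily overlapping stimuli and prints the mean cosine overlap between the resulting hidden-layer codes:

    import numpy as np

    rng = np.random.default_rng(0)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    def hidden_codes(X, Y, n_hidden=4, epochs=2000, lr=0.5):
        # plain batch backpropagation, sum-squared error, sigmoid units
        W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))
        W2 = rng.normal(0.0, 0.5, (n_hidden, Y.shape[1]))
        for _ in range(epochs):
            H = sig(X @ W1)
            O = sig(H @ W2)
            dO = (O - Y) * O * (1 - O)          # output deltas
            dH = (dO @ W2.T) * H * (1 - H)      # backpropagated deltas
            W2 -= lr * H.T @ dO
            W1 -= lr * X.T @ dH
        return sig(X @ W1)

    def mean_overlap(H):
        # average cosine similarity between hidden codes of distinct patterns
        Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
        S = Hn @ Hn.T
        n = len(H)
        return (S.sum() - n) / (n * (n - 1))

    Y = np.array([[1., 0.], [0., 1.], [1., 1.], [0., 0.]])
    orthogonal = np.eye(4)                      # mutually orthogonal stimuli
    similar = 0.8 + 0.2 * np.eye(4)             # strongly overlapping stimuli
    for name, X in (("orthogonal", orthogonal), ("similar", similar)):
        print(name, "mean hidden overlap:",
              round(float(mean_overlap(hidden_codes(X, Y))), 3))

On the account above, the two printed overlap values should come out similarly high; a delta-rule network with a single layer of weights, by contrast, works directly on the input vectors and so inherits their orthogonality.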
It is this indifference to the structure of the input stimuli that makes interference and transfer in backpropagation psychologically implausible. Surprisingly enough, two-layer networks (i.e., networks with a normal delta rule and one layer of weights) do not suffer from this problem and are in fact well able to model human interference (see Murre, in press a).

Having said this, it is also important to consider the various variant models of error-correcting learning that have been developed recently and that use backpropagation as a starting point. These models have very plausible characteristics with respect to learning and categorization in humans: Kruschke (1990), Gluck (1991), Gluck and Bower (1988, 1990), Nosofsky, Kruschke, and McKinley (1992), Shanks and Gluck (1994). In addition, there are now many variant models of backpropagation that do not suffer from catastrophic interference. These have already been mentioned in this discussion, so I will not repeat them here. From a biological point of view backpropagation is certainly implausible, but it has been shown useful in inferring biologically plausible parameters (e.g., Lockery et al., 1989; Zipser and Andersen, 1988).

I can think of several reasons why backpropagation continues to attract researchers:

1. It is easy to understand.
2. Many simulators are available (see Murre, in press b).
3. It has been shown to approximate all 'well-behaved' functions (e.g., Hornik, Stinchcombe, and White, 1989) and thus is often felt to qualify as a generic learning mechanism. In particular, it can learn non-linearly separable pattern sets.
4. It has only a few parameters, and these are not very critical for the final results.
5. It possesses most of the basic elements of a 'prototypical' neural network: distributed representations, graceful degradation, pattern completion, and adequate generalization of learned behavior.

I do not myself think that these reasons necessarily form a sufficient motivation for using backpropagation, and I indeed prefer to work on other types of learning methods and neural networks. Perhaps, by investigating the limitations of backpropagation, the necessary minimal improvements may become clear, so that we can replace it - in its leading role - by either a more plausible variant algorithm or by a completely different learning method.

Merry Christmas, -- Jaap Murre

References

Gluck, M.A. (1991). Stimulus generalization and representation in adaptive network models of category learning. Psychological Science, 2, 50-55.
Gluck, M.A., & G.H. Bower (1988). From conditioning to category learning: an adaptive network model. Journal of Experimental Psychology: General, 117, 227-247.
Gluck, M.A., & G.H. Bower (1990). Component and pattern information in adaptive networks. Journal of Experimental Psychology: General, 119, 105-109.
Kruschke, J.K. (1990). ALCOVE: an exemplar-based connectionist model of category learning. Psychological Review, 99, 22-44.
Lockery, S.R., G. Wittenberg, W.B. Kristan, Jr., & G.W. Cottrell (1989). Function of identified interneurons in the leech elucidated using neural networks trained by back-propagation. Nature, 340, 468-471.
Murre, J.M.J. (in press a). Transfer of learning in backpropagation and in related neural network models. In: J. Levy, D. Bairaktaris, J. Bullinaria, & P. Cairns (Eds.), Connectionist Models of Memory and Language. London: UCL Press. (In our ftp site: ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/hyper1.ps)
Murre, J.M.J. (in press b). Neurosimulators. In: M.A. Arbib (Ed.), Handbook of Brain Research and Neural Networks. Cambridge, MA: MIT Press. (In our ftp site: ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/neurosim1.ps)
Nosofsky, R.M., J.K. Kruschke, & S.C. McKinley (1992). Combining exemplar-based category representations and connectionist learning rules. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 211-233.
Osgood, C.E. (1949). The similarity paradox in human learning: a resolution. Psychological Review, 56, 132-143.
Shanks, D.R., & M.A. Gluck (1994). Tests of an adaptive network model for the identification and categorization of continuous-dimension stimuli. Connection Science, 6, 59-89.
Zipser, D., & R.A. Andersen (1988). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679-684.

From jras at uned.es Wed Dec 21 13:12:00 1994
From: jras at uned.es (Jose Ramon Alvarez Sanchez)
Date: 21 Dec 94 19:12 +0100
Subject: CFP: W.S. McCulloch, 25 years in Memoriam
Message-ID: <201*jras@uned.es>

Preliminary Call For Papers

CENTRO INTERNACIONAL DE INVESTIGACION EN CIENCIAS DE LA COMPUTACION
UNIVERSIDAD DE LAS PALMAS DE GRAN CANARIA

W.S. McCulloch: 25 years in Memoriam
INTERNATIONAL CONFERENCE ON BRAIN PROCESSES, THEORIES AND MODELS
Las Palmas, Gran Canaria, Spain. Nov 12-17, 1995.

Conference Chairman: Moreno-Diaz, R. (Spain)
Org. Comm. Chairman: Mira-Mira, J. (Spain)

Scientific Advisory Comm. (preliminary): Amari SI (Japan), Anderson J (USA), Arbib M (USA), Beer S (Canada), Belmonte C (Spain), Blum M (USA), Braitenberg V (Germany), Cull P (USA), DaFonseca JL (Portugal), Eckhorn R (Germany), Harth E (USA), Herault J (France), Jain LC (Australia), Kauffman S (USA), Kilmer W (USA), Leibovic KN (USA), Lettvin J (USA), Malsburg C. von der (Germany), Maturana H (Chile), McClelland J (USA), Mira-Mira J (Spain), Papert S (USA), Pichler F (Austria), Ricciardi L (Italy), Sato S (Japan), Vittoz E (Switzerland), Von Foerster H (USA).

Organizing Comm.: Alvarez JR (Spain), Cabestany J (Spain), Delgado A (Spain), Krasner J (USA), Moreno-Diaz jr R (Spain), Pierce A (USA), Prieto A (Spain), Sanchez JV (Spain), Suarez-Araujo C (Spain).

Organization Staff: Alonso-Garcia MT (Spain), Perez Ruiz M (Spain).

CALL FOR PAPERS

The conference will consist of a series of invited lectures by leading scientists, including M. Arbib, H. Maturana, J. Lettvin and S. Papert, related to the life and work of W.S. McCulloch; an open forum of paper sessions on the topics listed below; and a workshop on the "Embodiments of Mind at the End of the Century".

Relevant topics include: Anatomical, physiological, biochemical and biophysical levels. Natural and artificial neural networks. Mathematics, systems theory and global properties of the nervous system. Conceptual and formal tools in brain function modelling. Hybrid systems: symbolic-connectionist links. Plasticity, reliability, learning and memory. Implications of the work of W.S. McCulloch for philosophy and psychology. Reverse engineering and neurophysiology.

All accepted papers will be published by The MIT Press in the pre-conference proceedings. Those authors will be sent specific instructions and guidelines. Invited lectures and selected papers will be published by The MIT Press in a post-conference book.

Three copies of the intended papers should be sent, prepared according to the following instructions for authors, no later than May 30, 1995 to:

Prof. Jose Mira-Mira
Dpto.
Informatica y Automatica - UNED
Senda del Rey s/n
28040 MADRID - SPAIN
Voice: +34 (1) 398 7155
Fax: +34 (1) 398 6697
e-mail:

Please include your fax, e-mail and telephone for quick contact and notification of acceptance.

INSTRUCTIONS FOR AUTHORS

Authors should submit three copies of full intended papers, not exceeding eight pages of DIN A4 or 8.5 by 11 inch paper, including figures, tables and references, in English. The centered heading must include: Title in capitals. Name(s) of author(s). Address(es) of author(s). A 10-line abstract. Three blank lines should be left between each of the above items and four between the heading and the body of the paper; 1.6 cm left, right, top and bottom margins, single-spaced and not exceeding the 8-page limit.

One additional DIN A4 page must be enclosed with the following organizational data: Title and author(s) name(s). A list of five keywords. A reference to the topics the paper relates to. Postal address, phone, fax and e-mail if available.

All accepted papers will be published in the conference proceedings.

IMPORTANT DATES
Final call and additional info: March 1995
Final date of submission: May 30, 1995
Notification of acceptance: July 1, 1995
Conference: November 12-17, 1995

REGISTRATION FORM
------------------------------------------------------------------------
Last Name: First name:
Organization/University:
Address:
Phone: Fax:
E-mail:
Do you plan to submit a paper:
Tentative title:

Payment:
Before April 30:
- Bank draft for 50.000 pts payable on a Spanish bank to FUNDACION UNIVERSITARIA DE LAS PALMAS: MCCULLOCH 95.
- Money transfer of 50.000 pts to BEX account No. 0104 0336 75 0307007836
After April 30:
- Late registration fee of 60.000 pts at the conference desk.
------------------------------------------------------------------------
Please send this registration form to:
Mrs. Maria Teresa Alonso-Garcia
CIICC - Universidad de Las Palmas
Campus de Tafira - Edf. de Informatica
35017 Las Palmas, SPAIN

From tetewsky at lima.psych.mcgill.ca Wed Dec 21 15:30:55 1994
From: tetewsky at lima.psych.mcgill.ca (Sheldon Tetewsky)
Date: Wed, 21 Dec 1994 15:30:55 -0500
Subject: catastrophic interference
Message-ID: <199412212030.PAA03615@lima.psych.mcgill.ca>

Neil Burgess recently wrote that

>studies of catastrophic interference in BP networks are interesting when
>considering such a network as a model of some human (or animal) memory
>system.

However, he also questioned whether or not there was

>..any reason for doing that.

In the Tetewsky, Shultz, and Buckingham study ("Assessing interference and savings in connectionist models of a sequential recognition memory task"), referenced in Bob French's recent posting, we did some simulations and experiments indicating that neural networks can provide a good account of human performance in a simple recognition memory task when memory is assessed in terms of savings scores. The paper is in progress and will be announced soon. In the meantime, I'll just mention a few relevant details.

Our work was motivated by Scott Fahlman's idea that his cascade-correlation algorithm (CC) has certain inherent design features that should give it an advantage over BP when it comes to dealing with the problem of catastrophic interference. We tested this idea by using an encoder version of CC to model recognition memory. Results indicated that, in contrast to previous findings, retroactive interference is not that serious a problem for BP when memory is measured in terms of savings scores.
However, CC also produced a significant increase in savings relative to BP. Aside from this difference in the magnitude of savings, we also have evidence that CC was better than BP at accounting for the relative number of trials that subjects used in the different phases of learning.

-- Sheldon Tetewsky

From rsun at cs.ua.edu Wed Dec 21 17:05:13 1994
From: rsun at cs.ua.edu (Ron Sun)
Date: Wed, 21 Dec 1994 16:05:13 -0600
Subject: No subject
Message-ID: <9412212205.AA29941@athos.cs.ua.edu>

=========================================================================
Call For Papers and Participation

The IJCAI Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches

to be held at IJCAI'95
Montreal, Canada
August 19-20, 1995
-------------------------------------------------------------------------

There has been a considerable amount of research in integrating connectionist and symbolic processing. While such an approach has clear advantages, it also encounters serious difficulties and challenges. Therefore, various models and ideas have been proposed to address various problems and aspects of this integration. There is growing interest from many segments of the AI community, ranging from expert systems, to cognitive modeling, to logical reasoning.

Two major trends can be identified in the state of the art: the unified (or purely connectionist) and the hybrid approaches to integration. Whereas the purely connectionist ("connectionist-to-the-top") approach claims that complex symbol processing functionalities can be achieved via neural networks alone, the hybrid approach is premised on the complementarity of the two paradigms and aims at their synergistic combination in systems comprising both neural and symbolic components. In fact, these trends can be viewed as two ends of an entire spectrum. Until now, however, there has been relatively little work in comparing and combining these fairly isolated efforts. This workshop will provide a forum for discussions and exchanges of ideas in this area, to foster cooperative work. The workshop will tackle important issues in integrating connectionist and symbolic processing.

A Tentative Schedule
---------------------

Day 1:

A. Introduction:
* Invited talks. These talks will provide an overview of the field and set the tone for ensuing discussions.
* Theoretical foundations for integrating connectionist and symbolic processing

B. Definition of the two approaches:
* Do they exhaust the space of current research in connectionist-symbolic integration, or is there room for additional categories?
* How do we compare the unified and hybrid approaches?
* Do the unified and hybrid approaches constitute a clear-cut dichotomy or are they just endpoints of a continuum?
* What class of processes and problems is well-suited to unified or hybrid integration? The relevant motivations and objectives.
* What type of model is suitable for what type of application? Enumerate viable target domains.

C. State of the art:
* Recent or ongoing theoretical or experimental research work
* Implemented models belonging to either the unified or hybrid approach
* Practical applications of both types of systems

Research addressing key issues concerning:
* the unified approach: theoretical or practical issues involving systematicity, compositionality and variable binding, biologically inspired models, connectionist knowledge representation, other high-level connectionist models.
* the hybrid approach: modes and methods of coupling, task sharing between various components of a hybrid system, knowledge representation and sharing.
* both: commonsense reasoning, natural language processing, analogical reasoning, and more generally applications of unified and hybrid models.

Day 2:

D. Cognitive Aspects:
* Cognitive plausibility and relations to other AI paradigms
* In cognitive modeling, why should we integrate connectionist and symbolic processing?
* Is there a clear cognitive rationale for such integration? (We may need to examine in detail some typical areas, such as commonsense reasoning and natural language processing.)
* Is there psychological and/or biological evidence for existing models? If so, what is it?

E. Open research issues:
* Can we now propose a common terminology with precise definitions for both approaches to connectionist-symbolic integration and for the location on the continuum?
* How far can unified systems go? Can unified models be supplemented by hybrid models? Can hybrid models be supplanted by unified models?
* Limitations and barriers faced by both approaches
* What breakthroughs are needed for both approaches?
* Is it possible to synthesize various existing models?

Workshop format
---------------
- panel discussions
- mini-group discussions: participants will break into groups of 7-8 to discuss a given theme; group leaders will then form a panel to report on group discussions and attempt a synthesis with audience participation
- interactive talks: this is a novel type of oral presentation we will experiment with. Instead of a classical presentation, the speaker will present a problem or issue and give a brief statement of his personal stand (5 min) to launch discussions, which he will then moderate and conclude.
- classical slide talks followed by Q/A and discussions.

Workshop Co-chairs:
-------------------
Frederic Alexandre, Crin-Cnrs/Inria-Lorraine
Ron Sun, The University of Alabama

Organizing Committee:
---------------------
John Barnden, New Mexico State University
Steve Gallant, Belmont Research Inc.
Larry Medsker, American University
Christian Pellegrini, University of Geneva
Noel Sharkey, Sheffield University

Program Committee:
------------------
Lawrence Bookman (Sun Laboratory, USA)
Michael Dyer (UCLA, USA)
Wolfgang Ertel (FRW, Germany)
LiMin Fu (University of Florida, USA)
Jose Gonzalez-Cristobal (UPM, Spain)
Ruben Gonzalez-Rubio (University of Sherbrooke, Canada)
Jean-Paul Haton (Crin-Inria, France)
Melanie Hilario (University of Geneva, Switzerland)
Abderrahim Labbi (IMAG, France)
Ronald Yager (Iona College, USA)

Schedule:
---------
- The submission deadline for participants is February 1, 1995.
- Authors and potential participants will be notified of the acceptance decision by March 15, 1995.
- Camera-ready copies of working notes papers will be due on April 15, 1995.

Submission:
-----------
- If you wish to present a talk, specify the preferred type of presentation (classical or interactive talk) and submit 5 copies of an extended abstract (within the limit of 5-7 pages) to:

Prof. Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487
(205) 348-6363

- If you only wish to attend the workshop, send 5 copies of a short (no more than one page) description of your interest to the same address above.
- Please be sure to include your e-mail address in all submissions.
From marks at u.washington.edu Thu Dec 22 00:55:40 1994
From: marks at u.washington.edu (Robert Marks)
Date: Wed, 21 Dec 94 21:55:40 -0800
Subject: Neural Networks in Russia
Message-ID: <9412220555.AA24404@carson.u.washington.edu>

For Connectionists:

The 2nd International Symposium on Neuroinformatics and Neurocomputers
Rostov-on-Don, RUSSIA
September 20-23, 1995

Organized by the Russian Neural Network Society (RNNS) and the A.B. Kogan Research Institute for Neurocybernetics (KRINC), in co-operation with the Institute of Electrical and Electronics Engineers Neural Networks Council (IEEE NNC)

First Call for Papers

Research in neuroinformatics and neurocomputing continued in Russia after the research was deflated in the West in the 1970s. The research sophistication in neural networks, as a result, is quite advanced in Russia. The first international RNNS/IEEE Symposium, held in October 1992, proved to be a highly successful forum for a diverse international interchange of fresh and novel research results. The second International Symposium on Neuroinformatics and Neurocomputers builds on this remarkable success. The symposium focus is on the neuroscience, mathematics, physics, engineering and design of neuroinformatic and neurocomputing systems.

Rostov-on-Don, the location of the Symposium, is about 1000 km south of Moscow on the scenic Don river. The Don is commonly identified as the boundary between the continents of Europe and Asia. Rostov is the home of the A.B. Kogan Research Institute for Neurocybernetics at Rostov State University - one of the premier neural network research centers in Russia.

Papers for the Symposium should be sent in CAMERA-READY FORM, NOT EXCEEDING 8 PAGES in A4 format, to the Program Committee Co-Chair Alexander A. Frolov. Two copies of the paper should be submitted. The deadline for submission is 15 MARCH 1995. Notification of acceptance will be sent on or before 15 May 1995.

SYMPOSIUM COMMITTEE

GENERAL CHAIR
Witali L. Dunin-Barkowski, Dr. Sci., Symposium Chair
President of the Russian Neural Network Society
A.B. Kogan Research Institute for Neurocybernetics
Rostov State University
194/1 Stachka Avenue, 344104, Rostov-on-Don, Russia
Tel: +7-8632-28-0588, Fax: +7-8632-28-0367
E-mail: wldb at krinc.rostov-na-donu.su

PROGRAM COMMITTEE CO-CHAIRS
Professor Alexander A. Frolov, Program Co-Chair
Higher Nervous Activity and Neurophysiology Institute
Russian Academy of Science
5a Butlerov Str., 117220, Moscow, RUSSIA

Professor Robert J. Marks II, Program Co-Chair
University of Washington, Department of Electrical Engineering
c/o 1131 199th Street S.W., Suite N
Lynnwood, WA 98036-7138, USA

Other information is available from the Symposium Committee.

From david at cns.ed.ac.uk Thu Dec 22 03:58:07 1994
From: david at cns.ed.ac.uk (David Willshaw)
Date: Thu, 22 Dec 1994 08:58:07 GMT
Subject: elitism at NIPS
Message-ID: <199412220858.IAA09676@dumbo.cns.ed.ac.uk>

1. Even though the practice of removing the authors' names from submitted papers before sending them to referees is not perfect, in my view it certainly helps to make the refereeing process less biased.

2. I would also support the suggestion that the cohort of referees be broadened. One way of doing this is to make it more international.
In the list of 174 referees given in this year's programme of abstracts, I estimated that 84% (146) were from the USA and Canada. Only 6 referees were from Germany, 4 from France and 3 from the UK, for example.

David

From trevor at media-lab.media.mit.edu Thu Dec 22 11:01:55 1994
From: trevor at media-lab.media.mit.edu (Trevor Darrell)
Date: Thu, 22 Dec 94 11:01:55 EST
Subject: "Blind" reviews are impossible
In-Reply-To: ted@spencer.ctan.yale.edu's message of Mon, 19 Dec 1994 09:52:27 -0500 <199412191452.AA23787@PLANCK.CTAN.YALE.EDU>
Message-ID: <9412221601.AA06532@marblearch.media.mit.edu>

   Date: Mon, 19 Dec 1994 09:52:27 -0500
   From: ted at spencer.ctan.yale.edu

   Even a nonexpert reviewer can figure out who wrote a paper simply by
   looking for citations of prior work. The only way to guarantee a "blind"
   review is to forbid authors from citing anything they've done before, or
   insist on silly euphemisms when citing such publications.

In computer vision, papers have been reviewed blind at the major conferences for the past several years. Authors are encouraged to reference themselves in the third person. There seem to be few complaints about the system. While it is sometimes possible to discern that a paper comes from a particular school or group, it is usually impossible to know who was the first author. And one can never be sure that the paper is not from some other (unknown) person who has written the paper building directly on the tradition of another group, and thus uses their terminology but does not actually work there.

In any case, can there really be a disadvantage to blind reviewing? (It seems a weak point indeed to say the correctness of papers in NIPS is assured by their authorship!) Papers are not blind at the program committee level, so any egregious mistakes brought on by blind reviewing (?) can still be corrected there.

Just my $0.02 from the CV perspective... --trevor

From trevor at mallet.Stanford.EDU Thu Dec 22 18:10:03 1994
From: trevor at mallet.Stanford.EDU (Trevor Hastie)
Date: Thu, 22 Dec 1994 15:10:03 -0800
Subject: Report Announcement
Message-ID: <199412222310.PAA09264@mallet.Stanford.EDU>

The following report is available via anonymous ftp or Mosaic:

ftp://playfair.stanford.edu/pub/reports/hastie/dann.ps.Z

Discriminant Adaptive Nearest Neighbor Classification
Trevor Hastie and Robert Tibshirani

We propose an adaptive nearest neighbor rule that uses local discriminant information to estimate an effective metric for classification. We also propose a method for global dimension reduction that combines local dimension information. In a number of examples, the methods demonstrate the potential for substantial improvements over nearest neighbor classification.

From mozer at neuron.cs.colorado.edu Thu Dec 22 18:32:47 1994
From: mozer at neuron.cs.colorado.edu (Michael C. Mozer)
Date: Thu, 22 Dec 1994 16:32:47 -0700
Subject: NIPS, blind reviewing, and elitism
Message-ID: <199412222332.QAA08554@neuron.cs.colorado.edu>

As NIPS*95 program chair, I want to respond to the issue of blind reviewing. We have considered this idea over the past few months, and on balance the costs seem to outweigh the benefits. Most of the arguments for and against blind reviewing were stated clearly in earlier messages.

NIPS is an elite conference in that researchers tend to self-select their best work for submission, and even then only 25-30% of the submissions are accepted.
Further, researchers who do good work one year and have papers accepted are likely to do good work in the future, so it is not surprising that there is a core group of consistent contributors to the conference, even without any bias. (Serving on the program committee before, I witnessed an opposite bias -- a bias against accepting multiple papers by an individual who had several strong submissions, and against awarding a talk to the same individual in successive years.)

Prior to 1994, there was likely some validity to the perception that NIPS "insiders" had an edge. As a reviewer, it was difficult to evaluate work solely on the basis of extended abstracts; in borderline cases, it helped to know the authors' track records. After Dave Touretzky changed the submission format to full papers in 1994, reviewers and program committee members to whom I've spoken seemed satisfied that the reviewing process was objective. I suspect that the perception of unfairness may linger for a few years, but any such reality was squelched by the new submission format.

If you felt the reviewing process was not objective in 1994, I would like to hear your story. (It is possible to communicate anonymously; to find out more about this service, send mail to help at anon.penet.fi.) Several points of note before expressing a complaint: (1) Many good submissions were ultimately rejected, simply because the submission pool tends to be of high quality and the number of accepted papers is limited by a maximum page count of the proceedings volume. (2) Most of the people who have served as program and general chairs at NIPS have had papers rejected, including the current chairs! The reviewers are the primary decision makers. (3) Reviewers are not always as competent and diligent as one might like, although NIPS uses three reviewers per paper--plus an area chair to arbitrate--to mitigate the consequences of an inappropriate review.

I gladly welcome comments and suggestions aimed at broadening the constituency of NIPS without lowering the meeting's quality. I will summarize the feedback I receive to the net.

Cheers, Mike Mozer

From postma at cs.rulimburg.nl Fri Dec 23 05:26:23 1994
From: postma at cs.rulimburg.nl (Eric Postma)
Date: Fri, 23 Dec 94 11:26:23 +0100
Subject: Paper Announcement: Priming and Memory
Message-ID: <9412231026.AA28600@bommel.cs.rulimburg.nl>

------------------------------------------------------------------------
FTP-host: ftp.cs.rulimburg.nl
FTP-file: pub/papers/postma/memory.ps.Z
------------------------------------------------------------------------

The following paper is now available:

The Nature of Memory Representations [6 pages]

Eric O. Postma, Ernst H. Wolf, H. Jaap van den Herik and Patrick T. W. Hudson
Department of Computer Science, University of Maastricht
P.O. Box 616, 6200 MD Maastricht, The Netherlands

To appear in the Proceedings of the workshop on Supercomputing in Brain Research: From Tomography to Neural Networks, HLRZ, KFA Juelich, Germany, November 21-23, 1994. World Scientific Publishing Company.

Abstract: This study investigates processing and storage in the brain using a priming task. We model a priming task with a relaxation neural network - the Coulomb Energy Network. The network's performance on a fragment-completion task is assessed by storing a set of memories into the network and, subsequently, testing its completion performance. It has been claimed that the findings of stochastic independence on repeated fragment completion imply noninteracting memory traces.
We nevertheless model memory in a way in which all traces embodying a single representation interact, and find stochastic independence when the memories in the network are densely packed. Dependence between priming fragments can be obtained, but only when the memories are sparsely packed. We conclude that independence on repeated fragment completion does not necessarily imply that the underlying memory traces are noninteracting.

------------------------------------------------------------------------
Please do not reply directly to this message
------------------------------------------------------------------------

FTP instructions:

unix> ftp ftp.cs.rulimburg.nl
ftp> Name: anonymous
ftp> Password: your e-mail address
ftp> cd pub/papers/postma
ftp> binary
ftp> get memory.ps.Z
ftp> quit
unix> uncompress memory.ps.Z
------------------------------------------------------------------------

Merry Christmas,
Eric Postma

Computer Science Department
Faculty of General Sciences
University of Limburg
PO Box 616, 6200 MD Maastricht, The Netherlands
email: postma at cs.rulimburg.nl
web: http://www.cs.rulimburg.nl
tel: +31 43 883493
fax: +31 43 252392

From jon at maths.flinders.edu.au Fri Dec 23 06:58:43 1994
From: jon at maths.flinders.edu.au (Jonathan Baxter)
Date: Fri, 23 Dec 1994 22:28:43 +1030
Subject: TR Available: Learning Internal Representations
Message-ID: <199412231158.AA05637@calvin.maths.flinders.edu.au>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/Thesis/baxter.thesis.ps.Z
--------------------------------------------------------

The following paper is now available:

Learning Internal Representations [112 pages]
Jonathan Baxter

This is a preliminary draft of my PhD thesis. Note that it is in the Thesis subdirectory of the neuroprose archive. It is in the process of being broken into several pieces for submission to Information and Computation, Machine Learning and next year's COLT. I'm afraid I cannot offer hard copies.

ABSTRACT: Most machine learning theory and practice is concerned with learning a single task. In this thesis it is argued that in general there is insufficient information in a single task for a learner to generalise well, and that what is required for good generalisation is information about {\em many similar learning tasks}. The information about similar learning tasks forms a body of prior information that can be used to constrain the hypothesis space of the learner and cause it to generalise better. Typical learning scenarios in which there are many similar tasks are image recognition and speech recognition.

After proving that learning without prior information is impossible except in the simplest of situations, the concept of the {\em environment} of a learner is introduced as a probability measure over the set of learning problems the learner might be expected to learn. It is shown how a sample from such an environment can be used to learn a {\em representation}, or recoding of the input space, that is appropriate for the environment. Learning a representation can equivalently be thought of as learning the appropriate features of the environment. Using Haussler's statistical decision theory framework for machine learning, rigorous bounds are derived on the sample size required to ensure good generalisation from a representation learning process. These bounds show that under certain circumstances learning a representation appropriate for $n$ tasks reduces the number of examples required of each task by a factor of $n$.
It is argued that environments such as character recognition and speech recognition fall into the category of learning problems for which such a reduction is possible. Once a representation is learnt it can be used to learn {\em novel} tasks from the same environment, with the result that far fewer examples are required of the new tasks to ensure good generalisation. Rigorous bounds are given on the number of tasks and the number of samples from each task required to ensure that a representation will be a good one for learning novel tasks.

All the results on representation learning are generalised to cover any form of automated hypothesis space bias that utilises information from similar learning problems. It is shown how gradient-descent based procedures for training Artificial Neural Networks can be generalised to cover representation learning. Two experiments using the new procedure are performed. Both experiments fully support the theoretical results.

The concept of the environment of a learning process is applied to the problem of {\em vector quantization}, with the result that a {\em canonical} distortion measure for the quantization process emerges. This distortion measure is proved to be optimal if the task is to approximate the functions in the environment. Finally, the results on vector quantization are reapplied to representation learning to yield an improved error measure for learning in classifier environments. An experiment is presented demonstrating the improvement.

-------------
Retrieval instructions:

unix> ftp archive.cis.ohio-state.edu
ftp> Login: anonymous
ftp> Password: e-mail address
ftp> cd pub/neuroprose/Thesis
ftp> binary
ftp> get baxter.thesis.ps.Z
ftp> quit
unix> uncompress baxter.thesis.ps.Z
unix> lpr -s baxter.thesis.ps
---------------

Jonathan Baxter
School of Information Science and Technology,
The Flinders University of South Australia.
jon at maths.flinders.edu.au

From 100020.2727 at compuserve.com Fri Dec 23 11:24:19 1994
From: 100020.2727 at compuserve.com (Andrew Wuensche)
Date: 23 Dec 94 11:24:19 EST
Subject: Discrete Dynamics Lab (for DOS)
Message-ID: <941223162419_100020.2727_BHL46-1@CompuServe.COM>

Discrete Dynamics Lab
---------------------

Announcing the first release of Discrete Dynamics Lab (described below), a program for studying discrete dynamical networks, from Cellular Automata to random Boolean networks, including their attractor basins. Attractor basins are objects in space-time that link network states according to their transitions. Access to these objects provides insights into complexity, chaos and emergent phenomena in CA. In less ordered networks (as well as CA), attractor basins show how the network categorises its state space far from equilibrium, and represent the network's memory. The program is released as shareware. This is a beta version and comments are welcome.

Sections 1-10 in the ddlab.txt file give an overview of the program. Appendix A gives some operating instructions, including quick-start examples, but a detailed reference is not yet complete. An illustrated operating manual will be released in due course. For further background on attractor basins of CA and random Boolean networks, and their implications, refer to refs. [1,2,3,4] below. [1] is a book; hard-copy preprints of [1,2,3] are available on request.

Platform - PC-DOS 386 or higher, with VGA or SVGA graphics, maths co-processor, mouse; extended memory recommended. Ideally a fast 486 with 8MB RAM.
download instructions
---------------------

ftp the file "ddlab.zip" (416,884 bytes):

% ftp ftp.cogs.susx.ac.uk
name: anonymous
password: your complete e-mail address
ftp> cd pub/alife/ddlab
ftp> binary
ftp> get ddlab.zip
ftp> close

unzip "ddlab.zip" to give the following files:

ddlab.txt                 a text file describing ddlab
ddlab.exe                 the program
dos4dw.exe                the DOS extender, giving access to extended memory
gliders.r_s               file with "glider" rules
smalle.fon, sserife.fon   two font files

to run, from the directory containing these files, enter: ddlab

All questions and comments to Andy Wuensche

contact address:
Santa Fe Institute
48 Esmond Road, London W4 1JQ, UK
and The University of Sussex (COGS)
tel 081 995 8893, fax 081 742 2178
wuensch at santafe.edu
100020.2727 at compuserve.com
andywu at cogs.susx.ac.uk

Discrete Dynamics Lab
---------------------
Cellular Automata - Random Boolean Networks.
(copyright (c) Andrew Wuensche 1993)
First release of the beta version Dec 1994.

DDLab is an interactive graphics program for research into the dynamics of finite binary networks (for a DOS-PC platform). The program is relevant to the study of complexity, emergent phenomena and neural networks, and implements the investigations presented in [1,2,3,4]. Using a flexible user interface, a network can be set up for any architecture between regular CA (1d or 2d with periodic boundary conditions) on the one hand [1], and random Boolean networks (disordered CA) on the other [2]. The latter have arbitrary connections, and rules which may be different at each site. The neighbourhood (or pseudo-neighbourhood) size may be set from 1 to 9, and the network may have a mix of neighbourhood sizes.

The program iterates the network forward to display space-time patterns (mutations are possible "on the fly"), and also runs the network "backwards" to generate a pattern's predecessors and reconstruct its branching sub-tree of all ancestor patterns until all "garden of Eden" states, the leaves of the sub-tree, have been reached. For smaller networks, sub-trees, basins of attraction or the whole basin of attraction field can be displayed as a directed graph or set of graphs in real time, with many presentation options. Attractor basins may be "sculpted" towards a desired configuration. Various statistical and analytical measures and numerical data are made available, mostly displayed graphically.

The network's parameters, and the graphics display and presentation options, can be very flexibly set, reviewed and altered. Changes can be made "on the fly", including mutations to rules, connections or the current state. 2d networks (including the "game of life" or any mutation thereof) can be displayed as a space-time pattern in a 3d isometric projection. Network parameters, states, data, and the screen image can be saved and loaded in a variety of tailor-made file formats.

Statistical measures and data (mostly presented graphically) include: lambda and Z parameters, rule-table lookup frequency and entropy, pattern density, detailed data on sub-trees, basins and fields, garden-of-Eden density, in-degree frequency and a scatter plot of state-space. Learning/forgetting algorithms allow attaching/detaching sets of states as predecessors of a given state by automatically mutating rules or changing connections. This allows "sculpting" the basin of attraction field to approach a desired scheme of hierarchical categorisation.

References.
-----------
"The Global Dynamics of Cellular Automata: An Atlas of Basin of Attraction Fields of One-Dimensional Cellular Automata", Santa Fe Institute Studies in the Sciences of Complexity, Reference Vol.I, Addison-Wesley, 1992. Wuensche.A.,"The Ghost in the Machine: Basins of Attraction of Random Boolean Networks", in Artificial Life III, Santa Fe Institute Studies in the Sciences of Complexity, Addison-Wesley, 1994. Wuensche.A., "Complexity in One-D Cellular Automata: Gliders, Basins of Attraction and the Z parameter", Santa Fe Institute working paper 94-04-025, 1994. Wuensche.A., "The Emergence of Memory; Categorisation Far From Equilibrium", Cognitive Science Research Paper 346, University of Sussex, 1994. To appear in "Towards a Scientific Basis for Consciousness" eds SR Hameroff, AW Kaszniak, AC Scot, MIT Press.  From G.Schram at ET.TUDelft.NL Fri Dec 23 11:54:58 1994 From: G.Schram at ET.TUDelft.NL (Gerard Schram) Date: Fri, 23 Dec 1994 17:54:58 +0100 Subject: Report Announcement Message-ID: <01HKZPN1751S004ZXG@TUDERA.ET.TUDELFT.NL> The following report is available via anonymous ftp or Mosaic: ftp://playfair.stanford.edu/pub/reports/hastie/dann.ps.Z Discriminant Adaptive Nearest Neighbor Classification Trevor Hastie and Robert Tibshirani We propose an adaptive nearest neighbor rule that uses local discriminant information to estimate an effective metric for classification. We also propose a method for global dimension reduction, that combines local dimension information. In a number of examples, the methods demonstrate the potential for substantial improvements over nearest neighbor classification. ---------------------*********--------------------------- Gerard Schram (g.schram at et.tudelft.nl) Control laboratory, Department of Electrical Engineering, Delft University of Technology P.O.Box 5031, 2600 GA Delft, The Netherlands Phone: +31-15-785114 Telefax: +31-15-626738 ---------------------*********---------------------------  From eric at research.NJ.NEC.com Fri Dec 23 11:59:24 1994 From: eric at research.NJ.NEC.com (Eric B. Baum) Date: Fri, 23 Dec 94 11:59:24 EST Subject: NIPS, blind reviewing, and elitism Message-ID: <9412231659.AA00986@yin> IMO the NIPS '94 program was the strongest NIPS program I've seen, and in fact the strongest program I've seen in any conference of even moderate size, in the sense that I saw no weak or uninteresting papers. (However I only visited a fraction of the posters so I may have missed some.) Any skeptics should wait and consult the Proceedings, which IMO will provide a Prima Facie Case that the refereeing procedure is not broken, and should not be fixed (at least radically). 
-------------------------------------
Eric Baum
NEC Research Institute, 4 Independence Way, Princeton NJ 08540
PHONE: (609) 951-2712, FAX: (609) 951-2482, Inet: eric at research.nj.nec.com

From schmidhu at informatik.tu-muenchen.de Fri Dec 23 12:43:29 1994
From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber)
Date: Fri, 23 Dec 1994 18:43:29 +0100
Subject: No subject
Message-ID: <94Dec23.184341met.42261@papa.informatik.tu-muenchen.de>

SEMILINEAR PREDICTABILITY MINIMIZATION PRODUCES ORIENTATION SENSITIVE EDGE DETECTORS

Technical Report FKI-201-94 (7 pages, 0.95 Megabytes)

Juergen Schmidhuber
Bernhard Foltin
Fakultaet fuer Informatik
Technische Universitaet Muenchen
80290 Muenchen, Germany
December 24, 1994

Static real-world images are processed by a computationally simple and biologically plausible version of the recent predictability minimization algorithm for unsupervised redundancy reduction. Without a teacher and without any significant pre-processing, the system automatically learns to generate orientation-sensitive edge detectors in the first (semilinear) layer.

To obtain a copy, do:

unix> ftp flop.informatik.tu-muenchen.de (or ftp 131.159.8.35)
Name: anonymous
Password: your email address
ftp> binary
ftp> cd pub/fki
ftp> get fki-201-94.ps.gz (0.222 MBytes)
ftp> bye
unix> gunzip fki-201-94.ps.gz
unix> lpr fki-201-94.ps

Alternatively, check out http://papa.informatik.tu-muenchen.de/mitarbeiter/schmidhu.html

If your net browser does not know gzip/gunzip (which works better than compress/uncompress), I can mail you an uncompressed postscript version (as a last resort).

Merry Christmas!

Juergen Schmidhuber

From wahba at stat.wisc.edu Fri Dec 23 22:36:19 1994
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Fri, 23 Dec 94 21:36:19 -0600
Subject: SS-ANOVA for `soft' classification-anct
Message-ID: <9412240336.AA14678@hera.stat.wisc.edu>

The following report is available via anonymous ftp or Mosaic:

ftp://ftp.stat.wisc.edu/pub/wahba/exptl.ssanova.ps.gz

Smoothing Spline ANOVA for Exponential Families, With Application to the Wisconsin Epidemiological Study of Diabetic Retinopathy

Grace Wahba, Yuedong Wang, Chong Gu, Ronald Klein MD, and Barbara Klein MD

Given attributes (which may be discrete or continuous) and outcomes (Class 1 or Class 0) of a sample of instances, we develop Smoothing Spline ANOVA methods to estimate the *probability* of membership in Class 1, given the attribute vector. These methods are suitable when outcomes as a function of attributes are not clear-cut, as, for example, occurs when estimating the risk of some medical outcome, given various predictor variables and treatments. These methods are penalized log-likelihood methods, and plots of the cross sections of the estimates are generally fairly easy to interpret in context. We use the results to estimate the probability of four-year progression of diabetic retinopathy, given the three predictor variables glycosylated hemoglobin, duration of diabetes and body mass index at the study baseline, based on data from the Wisconsin Epidemiological Study of Diabetic Retinopathy. We discuss methods for multiple smoothing parameter selection (the bias-variance tradeoff!!), numerical methods for computing the estimate, and Bayesian `confidence intervals' for the estimate. We discuss methods of informal and formal model selection (stacked generalization!!) and some open questions. This work provides further details for work which has previously been announced in NIPS-93 and elsewhere.
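For readers new to penalized log-likelihood estimation of probabilities, the Python sketch below shows the core idea on a single continuous attribute: maximize the Bernoulli log-likelihood minus lambda times a roughness penalty, here by Newton iteration. It is only a caricature of the report's approach (one smooth term, a crude hinge basis, and a fixed smoothing parameter; the data, basis and all parameter values are invented here) - the SS-ANOVA machinery of the report handles multiple terms and chooses the smoothing parameters from the data.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x = np.sort(rng.uniform(0.0, 1.0, n))
    p_true = 1.0 / (1.0 + np.exp(-2.0 * np.sin(6.0 * x)))   # true class-1 probability
    y = (rng.uniform(size=n) < p_true).astype(float)        # observed 0/1 outcomes

    # crude spline-like basis: intercept, linear term, hinges at interior knots
    knots = np.linspace(0.05, 0.95, 20)
    B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0.0) for k in knots])

    lam = 1e-2                                   # smoothing parameter, fixed here
    P = np.zeros((B.shape[1], B.shape[1]))       # penalise only hinge coefficients
    P[2:, 2:] = np.eye(len(knots))

    beta = np.zeros(B.shape[1])
    for _ in range(25):                          # Newton steps on penalized likelihood
        p = 1.0 / (1.0 + np.exp(-B @ beta))
        grad = B.T @ (y - p) - 2.0 * lam * (P @ beta)
        hess = (B * (p * (1.0 - p))[:, None]).T @ B + 2.0 * lam * P
        beta += np.linalg.solve(hess, grad)

    b_new = np.concatenate(([1.0, 0.5], np.maximum(0.5 - knots, 0.0)))
    print("estimated P(Class 1 | x=0.5):", 1.0 / (1.0 + np.exp(-b_new @ beta)))

Larger lam gives a smoother (higher-bias, lower-variance) probability estimate; choosing it from the data is exactly the bias-variance tradeoff mentioned in the abstract.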
Other related papers in the same directory include gacv.ps.gz, ssanova.ps.gz, ml-bib.ps and theses/ywang.thesis.README.

Grace Wahba
wahba at stat.wisc.edu ..... `think snow'

From lazzaro at CS.Berkeley.EDU Wed Dec 28 18:11:33 1994
From: lazzaro at CS.Berkeley.EDU (John Lazzaro)
Date: Wed, 28 Dec 1994 15:11:33 -0800
Subject: No subject
Message-ID: <199412282311.PAA15382@snap.CS.Berkeley.EDU>

I sent these comments to Mike Mozer in response to his call for NIPS comments here last week, and he suggested I pass my comments on to the mailing list.

As an EE, I tend to look towards IEEE conferences as examples: there seem to be two major types.

[1] Conferences whose prime goal is to bring together as large a segment of a research community as possible, while still maintaining a standard of quality for presentations. IEEE Circuits and Systems and IEEE Acoustics, Speech, and Signal Processing are two prime examples. Among other attributes, it's expected that very many, if not the majority, of the attendees will be presenting, and that the conference's primary clientele is the research community (both academic and industrial), as opposed to the industrial developer.

[2] Conferences whose prime goal is to showcase the state of the art in a field, primarily for the benefit of applied development attendees. In these conferences, a large majority of the attendees will not be presenting or even doing research, but are either applied developers or non-technical observers. The IEEE Solid State Circuits conference is probably the best example of this type of conference: it has a session on the N best new microprocessor designs, the N best new DRAM designs, etc. For some types of chips, academic groups can be competitive with industry groups, and the session is mixed; in others, a $100 million investment is needed to design the chip, and so industrial groups dominate.

Many [1] conferences have explicit rules in submission to ensure that as many different research groups around the world are presenting as possible: some limit the number of papers a single author can submit, others use "membership" techniques to try to limit the number of papers from a lab. On the other hand, [2] conferences are concerned with fairness and with having the highest quality, but explicitly do not have "broadness" in their charter: if the N best new op-amps all come from a certain company, so be it.

NIPS has always seemed squarely in the middle of these two types of conferences, with the "elite" aspirations of the Solid State Circuits conference, but with a ratio of "researcher" to "developer" attendees that is closer to the first type of conference. Certain other conferences (most notably SIGGRAPH) started out where NIPS has been, and ended up as a [2] type of conference -- even if you discount the majority of the SIGGRAPH attendees who are there for the trade show and arty stuff, most attendees of the conference papers are there to listen and learn new research ideas for possible use in development, not to present papers themselves.

The tone of most of the postings on this thread seems to imply that [1] is the direction they'd like NIPS to move in. If so, the IEEE experience has been that more direct approaches than "blind reviewing" can be used to broaden the conference.
Personally, I'd prefer [2], because I believe the technologies associated with NIPS are going to have the same degree of engineering impact as SIGGRAPH (Computer Graphics) and ISSCC (Integrated Circuits) have had on the world, and part of realizing that impact is having a conference of type [2]. But for this to occur, NIPS needs to market itself to the product development community, so that the ratio of developer:researcher at NIPS increases significantly.  From marcus at hydra.ifh.de Mon Dec 5 04:37:52 1994 From: marcus at hydra.ifh.de (Marcus Speh) Date: Mon, 5 Dec 94 10:37:52 +0100 Subject: Paper - GAUGE THEORY OF THINGS ALIVE AND UNIVERSAL DYNAMICS Message-ID: <9412050937.AA21444@hydra.ifh.de> The following paper is now available via anonymous FTP (see below): GAUGE THEORY OF THINGS ALIVE AND UNIVERSAL DYNAMICS BY G.
MACK, Theoretical Physics, University of Hamburg, Germany SOURCE FORMAT: 13 pages, latex, uses fleqn.sty (can be removed without harm) REPORT-NO: DESY 94-184 ABSTRACT Positing complex adaptive systems made of agents with relations between them that can be composed, it follows that they can be described by gauge theories similar to elementary particle theory and general relativity. By definition, a universal dynamics is able to determine the time development of any such system without need for further specification. The possibilities are limited, but one of them - reproduction fork dynamics - describes DNA replication and is the basis of biological life on earth. It is a universal copy machine and a renormalization group fixed point. A universal equation of motion in continuous time is also presented. TO RETRIEVE, connect as USER "anonymous" with PASSWORD "@
" a) in Europe, to ftp.desy.de (131.169.30.42) and retrieve one of the files from directory "pub/outgoing" 203657 Dec 5 10:06 DESY_94-184.ps [PostScript, uncompressed] 60601 Dec 5 10:06 DESY_94-184.ps.gz [PostScript, compressed] 48393 Dec 1 14:48 DESY_94-184.tex [LaTeX source] OR in the US to b) ftp.scri.fsu.edu (144.174.128.34) and retrieve the LaTeX source file from directory "hep-lat/papers/9411": 48506 Nov 28 07:27 9411059 On the World-Wide Web, you can also access the URL http://www.desy.de/pub/outgoing/DESY_94-184.ps.gz [60KB] Inquiries and comments should be sent directly to the author, Prof. G. Mack  From wuertz at ips.id.ethz.ch Mon Dec 5 08:45:23 1994 From: wuertz at ips.id.ethz.ch (Diethelm Wuertz) Date: Mon, 05 Dec 1994 14:45:23 +0100 Subject: PASE 95: Parallel Applications in Statistics and Economics Message-ID: <9412051433.AA11623@sitter> 5th Anniversary First Announcement International Workshop on Parallel Applications in Statistics and Economics >> Non-linear Data Analysis << Trier - Mainz, Germany August 29 - September 2, 1995 ____________________________________________________________________________ PURPOSE OF THE WORKSHOP: The purpose of this workshop is to bring together researchers interested in innovative information processing systems and their applications in the areas of statistics, finance and economics. The focus will be on in-depth presentations of state-of-the-art methods and applications as well as on communicating current research topics. This workshop is intended for industrial and academic persons seeking new ways of comprehending the behavior of complex dynamic systems. The PASE'95 workshop is concerned with but not restricted to the following topics: o Applications in finance, economics, and natural science o Statistical tests for finding deterministic and chaotic behavior o Statistical tests for measuring stability and stationarity o Sampling and retrieving techniques for high frequency data o Modeling and analysis of non-linear multivariate time series o Statistical use of neural networks, genetic algorithms and fuzzy systems ____________________________________________________________________________ WORKSHOP SITE: The workshop will be held on a comfortable ship running from Trier to Mainz on the scenic rivers Mosel and Rhine in Germany. The best connection to Trier is to fly to Frankfurt and then to take the train. It will take about 2 1/2 hours. From Mainz there are direct trains to Frankfurt Airport (~ 30 minutes). Detailed traveling possibilities will be announced later. ____________________________________________________________________________ ACCOMMODATION AND REGISTRATION: All participants will be accommodated on the ship. The price of the accommodation in a double room for 4 nights is Sfr 870.- for the main deck (920.- for the upper deck) including breakfasts, lunches and dinners. The welcome and workshop dinner are also included. The service begins on Tuesday afternoon, August 29th, in Trier with registration and ends on Saturday morning, September 2nd in Mainz. Since the facilities of the ship are dedicated to the workshop the price is unique and no reductions can be provided for a shorter participation. Since the number of participants is limited to about 100 persons, early registration is recommended. All applications will be handled on the basis "first come first serve". Registration will only be accepted after a pre-payment of Sfr 300.-. 
The rest of the accommodation expenses of Sfr 570.- or 620.-, respectively, and the workshop fees must be transferred to our account PASE Workshop - Dr. Diethelm Wuertz Schweizerischer Bankverein, Zurich, Switzerland Number of Account: P0-206066.0 before May 31st 1995. No currencies other than Swiss Francs will be accepted. In the case of a later remittance we cannot guarantee your application. ____________________________________________________________________________ WORKSHOP FEE AND PROCEEDINGS: The workshop fee is Sfr 580.- for profit-making companies and Sfr 230.- for others. It includes the Proceedings, published as a Special Issue of the Journal "Neural Network World - International Journal on Neural and Mass-Parallel Computing and Information Systems". ____________________________________________________________________________ WORKSHOP SCHOLARSHIPS: For students and a limited number of participants from "post-communist" European countries some support and scholarships will be available: please contact the organizers. ____________________________________________________________________________ ABSTRACTS AND DEADLINES: Regular abstracts of one page (approximately 30 lines with 60 characters) must be submitted before December 1st, 1994. (For post-deadline papers please contact directly: .) All contributions will be refereed. The deadline for the full papers (about 10 pages) is April 1st, 1995. We cannot guarantee that papers submitted later will be printed in the proceedings. The proceedings will be available at the conference. It is also possible to submit an extended abstract of 4 pages. From these contributions we will select, besides the regular invited speakers, 3 further speakers for an invited talk. The deadlines are the same as in the case of regular abstracts. If you plan a soft- or hardware demonstration please contact the organizers. Please send the abstracts and full papers to: Hynek Beran, ICS Prague, Pod vodarenskou vezi 2, 182 07 PRAGUE 8, Czech Republic; FAX: +42 2 858 57 89; E-mail: pase at uivt.cas.cz ____________________________________________________________________________ ORGANIZATION: The Workshop will be organized by the Interdisciplinary Project Center for Supercomputing (ETH Zurich), Olsen & Associates (Research Institute for Applied Economics, Zurich) and the Institute of Computer Science (Academy of Sciences, Prague) W.M. van den Bergh, D.E. Baestaens, Erasmus University Rotterdam Thilo von Czarnowski, Helaba Frankfurt Michel M. Dacorogna, Olsen & Associates Zurich Susanne Fromme, SMH Research Frankfurt Heinz Muehlenbein, GMD Sankt Augustin Gholamreza Nakkaeizadeh, Daimler Benz Forschung Ulm Paul Ormerod, Henley Center for Forecasting London Emil Pelikan, ICS Czech Academy of Sciences Prague Heinz Rehkugler, University of Bamberg Marco Tomassini, CSCS Manno Dieter Wenger, Swiss Bank Corporation Basel Diethelm Wuertz, IPS ETH Zurich Hans Georg Zimmermann, Siemens AG, Munchen ____________________________________________________________________________ FURTHER INFORMATION: Further information will be available from anonymous ftp "maggia.ethz.ch" (129.132.17.1) or the World Wide Web "http://www.ips.id.ethz.ch/PASE/pase.html" ____________________________________________________________________________ REGISTRATION FORM - PASE '95: .................................................................... Mr./Mrs. First Name Surname ....................................................................
Institution or Company .................................................................... .................................................................... Mailing Address .................................................................... E-mail .................................................................... Phone Fax o Fees University Sfr 230.-- o Main Deck Sfr 870.- o Fees Profit making Company Sfr 580.-- o Upper Deck Sfr 920.- CONTRIBUTION: I would like to present a talk and to submit a paper with the following title: .................................................................... .................................................................... .................................................................... Please, return this form as soon as possible to: Martin Hanf IPS CLU B2, ETH Zentrum CH-8092 Zurich, Switzerland E-Mail: pase at ips.id.ethz.ch ____________________________________________________________________________ PASE '95 -------------------------------------------------------------------------- PD Dr. Diethelm Wuertz Interdisciplinary Project Center for Supercomputing IPS CLU B3 - ETH Zentrum CH-8092 ZURICH, Switzerland Tel. +41-1-632.5567 Fax. +41-1-632.1104 --------------------------------------------------------------------------  From mbrown at aero.soton.ac.uk Mon Dec 5 05:08:01 1994 From: mbrown at aero.soton.ac.uk (Martin Brown) Date: Mon, 5 Dec 94 10:08:01 GMT Subject: neural -probability-fuzzy seminars Message-ID: <7370.9412051008@aero.soton.ac.uk> ADT '95 Date: April 3-5 1995 Venue: BRUNEL - The University of West London UNICOM Seminars Ltd., Brunel Science Park, Cleveland Road, Uxbridge, Middlesex, UB8 3PH, UK Telephone: +44 1895 256484 FAX +44 1895 813095 ADT 95 Sponsored by: Department of Trade and Industry (DTI) Institute of Mathematics and its Applications IFSA International Neural Networks Society European Neural Networks Society AI Watch Journal of Neural Computing and Applications Rapid Data Ltd Call for Papers, Registration Information, and Abstract Form An international team of experts describes and explores the most recent developments and research in Neural Networks, Bayesian Belief Networks, Modern Heuristic Search Methods and Fuzzy Logic. Presentations cover theoretical developments as well as industrial and commercial applications. Main Topics Neural Networks and their Applications: New Developments in Theory and Algorithms; Simulations and Hardware; Pattern Recognition; Medical Diagnosis. Modern Heuristic Search Methods: Genetic Algorithms; Classifier Systems; Simulated Annealing; Tabu Search; Hybrid Techniques; Genetic and Evolutionary Programming. Probabilistic Reasoning and Computational Learning: Bayesian Belief Networks; Computational Learning; Inductive Inference; Statistical Causality in Belief Networks; Applications in Computational Vision, Medicine, Legal Reasoning, Fault Diagnosis and Genetics; Model Selection. Fuzzy Logic: Neural Fuzzy Programming; Algorithms and Applications; Fuzzy Control; Fuzzy Data Processing; Fuzzy Silicon Chips. PROGRAMME COMMITTEE & SPEAKERS: Neural Networks and their Applications: J G Taylor, King's College London (Chair); I Aleksander, Imperial College; M J Denham, University of Plymouth; B Schuermann, Siemens AG; A G Carr, ERA Technology; R Wiggins, DTI. Other Speakers include: S Hancock, Neural Technologies Ltd; J Stonham, Brunel University, J Austin, Univ. 
of York; B Clements, Clement Neuronics Modern Heuristic Search Methods: V Rayward-Smith, University of East Anglia (Chair); I Osman, University of Kent; C R Reeves, University of Coventry; G D Smith, University of East Anglia. Other Speakers include: E Aarts, Philips Research, Netherlands; C Hughes, Logica; S Voss, Darmstadt Univ; D Schaaffer, Philips, USA; R Battiti, Trento Univ, Italy Probabilistic Reasoning & Computational Learning: A Gammerman, Royal Holloway University of London (Chair); J Pearl, UCLA; D Spiegelhalter, Medical Research Council; A Dempster, Harvard University; C S Wallace, Monash University (Australia); V Vapnik, AT&T Bell Labs; V Uspensky, University of Moscow. Other Speakers include: P Reverberi, Inst. di Analisi dei Sistemi, Rome; M Talamo, Univ degli Studi, Rome; S McClean, Ulster Univ; A Shen, Ind. Univ. of Moscow; J Kwaan, Marconi Simulation Fuzzy Logic: E H Mamdani, Queen Mary & Westfield College (Chair); J Baldwin, Bristol University; C Harris, University of Southampton; H Zimmerman, RWTH Aachen. Other Speakers include: L Zadeh, Univ of Calif., Berkeley; R Weber, MIT GmbH; R John, De Montfort Univ.; H Bersini, IRIDIA, Belgium. HOW TO CONTRIBUTE: PRESENTATIONS: You may wish to present a paper at any of these sessions. Please indicate your choice of session with your submission. Please ensure we receive the title of your proposed talk and a short abstract (30-40 words) by 14 January 1995. The Organising Committee will make their selections and we will advise you by 14 February 1995 whether your presentation can be included. Written papers for the proceedings are also welcome. Please include your telephone and fax numbers and postal address in all correspondence. PUBLICATIONS: Preliminary proceedings of this symposium and also a refereed volume will be published. Deadlines for Abstracts: 14 January 1995 - acceptance of paper notified by 14 February 1995. Extended abstracts (1000-1500 words) by 15 March 1995. Submission of papers for refereed proceedings (3000-5000 words) by 31 August 1995. REGISTRATION: The conference program and registration material will be available in late February 1995. To ensure receiving your copy, complete the attached form and return it to ADT 95. Registration Fee: GBP175 (Academic), GBP350 (Industrial), GBP500-800 (Exhibition), and GBP1000 (Commercial Sponsors); preliminary proceedings are included in the fee. Late registration after 28 February 1995 will be charged GBP25 extra. ------------------------------------------------------------------- Please return the form below to: ADT 95, UNICOM Seminars Ltd., Brunel Science Park, Cleveland Road, Uxbridge, Middlesex, UB8 3PH. Telephone: +44 1895 256484. FAX: +44 1895 813095. Email: adt95 at unicom.demon.co.uk. -------------------------------------------------------------------- Name: Organization: Position / Department: Address: Telephone: Facsimile: E-Mail: Please send me information about the event. [ ] I wish to present a paper (title and abstract attached) [ ] I am interested in exhibiting [ ] Send me more information about ADT95 This event is part of Brunel University's Continuing Education Programme. If you fail to return this form, you may not receive further information.
From dayan at cs.TORONTO.EDU Mon Dec 5 17:40:37 1994 From: dayan at cs.TORONTO.EDU (Peter Dayan) Date: Mon, 5 Dec 1994 17:40:37 -0500 Subject: The Helmholtz Machine and the Wake-Sleep Algorithm Message-ID: <94Dec5.174037edt.317@neuron.ai.toronto.edu> FTP-host: ftp.cs.toronto.edu FTP-filename: pub/dayan/wake-sleep.ps.Z FTP-filename: pub/dayan/helmholtz.ps.Z Two papers about stochastic and deterministic Helmholtz machines are available in compressed postscript by anonymous ftp from ftp.cs.toronto.edu - abstracts are given below. We regret that hardcopies are not available. ---------------------------------------------------------------------- pub/dayan/wake-sleep.ps.Z The Wake-Sleep Algorithm for Unsupervised Neural Networks Geoffrey E Hinton, Peter Dayan, Brendan J Frey and Radford M Neal Abstract: We describe an unsupervised learning algorithm for a multilayer network of stochastic neurons. Bottom-up "recognition" connections convert the input into representations in successive hidden layers and top-down "generative" connections reconstruct the representation in one layer from the representation in the layer above. In the "wake" phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the "sleep" phase, neurons are driven by generative connections and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above. Submitted for publication ---------------------------------------------------------------------- pub/dayan/helmholtz.ps.Z The Helmholtz Machine Peter Dayan, Geoffrey E Hinton, Radford M Neal and Richard S Zemel Abstract: Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterised stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximising an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways. Neural Computation, in press. ----------------------------------------------------------------------  From lazzaro at CS.Berkeley.EDU Mon Dec 5 17:44:49 1994 From: lazzaro at CS.Berkeley.EDU (John Lazzaro) Date: Mon, 5 Dec 1994 14:44:49 -0800 Subject: No subject Message-ID: <199412052244.OAA16160@boom.CS.Berkeley.EDU> =========================== TR announcement ================================= REMAP: Recursive Estimation and Maximization of A Posteriori Probabilities ---Applications to Transition-based Connectionist Speech Recognition--- by H. Bourlard, Y. Konig & N. Morgan Intl.
Computer Science Institute 1947 Center Street, Suite 600 Berkeley, CA 94704 email: bourlard,konig,morgan at icsi.berkeley.edu ICSI Technical Report TR-94-064 Abstract In this report, we describe the theoretical formulation of REMAP, an approach for the training and estimation of posterior probabilities using a recursive algorithm that is reminiscent of the EM (Expectation Maximization) algorithm for the estimation of data likelihoods. Although very general, the method is developed in the context of a statistical model for transition-based speech recognition using Artificial Neural Networks (ANN) to generate probabilities for hidden Markov models (HMMs). In the new approach, we use local conditional posterior probabilities of transitions to estimate global posterior probabilities of word sequences given acoustic speech data. Although we still use ANNs to estimate posterior probabilities, the network is trained with targets that are themselves estimates of local posterior probabilities. These targets are iteratively re-estimated by the REMAP equivalent of the forward and backward recursions of the Baum-Welch algorithm to guarantee regular increase (up to a local maximum) of the global posterior probability. Convergence of the whole scheme is proven. Unlike most previous hybrid HMM/ANN systems that we and others have developed, the new formulation determines the most probable word sequence, rather than the utterance corresponding to the most probable state sequence. Also, in addition to using all possible state sequences, the proposed training algorithm uses posterior probabilities at both local and global levels and is discriminant in nature. The postscript file of the full technical report (66 pages) can be copied from our (anonymous) ftp site as follows: ftp ftp.icsi.berkeley.edu username= anonymous passw= your email address cd pub/techreports/1994 binary get tr-94-064.ps.Z  From esann at dice.ucl.ac.be Tue Dec 6 04:33:59 1994 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Tue, 6 Dec 1994 11:33:59 +0200 Subject: Neural Processing Letters Vol.1 No.2 Message-ID: <9412061028.AA06065@ns1.dice.ucl.ac.be> The following articles may be found in the second issue of the "Neural Processing Letters" journal (November 1994): - Good teaching inputs do not correspond to desired responses in ecological neural networks S. Nolfi, D. Parisi - Recurrent Sigma-Pi-linked back-propagation network T.W.S. Chow, G. Fei - An evolutionary approach to associative memory in recurrent neural networks S. Fujita, H. Nishimura - Equivalence between some dynamical systems for optimization K. Urahama - VISOR: schema-based scene analysis with structured neural networks W.K. Leow, R. Miikkulainen - Nonlinear neural controller with neural Smith predictor Y. Tan, A. Van Cauwenberghe - A neural method for geographical continuous field estimation D. Pariente, S. Servigne, R. Laurini Neural Processing Letters is a rapid publication journal, aimed at publishing new ideas, original developments and work in progress in all aspects of the field of Artificial Neural Networks. The delay between submission of papers and publication is at most about 3 months. Please don't hesitate to ask for the instructions for authors, in order to submit your work to Neural Processing Letters.
We remind you that all information concerning this journal may be found on the following servers: - FTP server: ftp.dice.ucl.ac.be directory: /pub/neural-nets/NPL login: anonymous password: your_e-mail_address - WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html For any information (subscriptions, instructions for authors, free sample copies...), you can also directly contact the publisher: D facto publications 45 rue Masui B-1210 Brussels Belgium Phone: + 32 2 245 43 63 Fax: + 32 2 245 46 94 _____________________________ D facto publications - conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 _____________________________  From mario at physics.uottawa.ca Tue Dec 6 09:18:06 1994 From: mario at physics.uottawa.ca (Mario Marchand) Date: Tue, 6 Dec 94 10:18:06 AST Subject: Neural net paper available by anonymous ftp Message-ID: <9412061418.AA24196@physics.uottawa.ca> The following paper, which was presented at the NIPS'94 conference, is available by anonymous ftp at: ftp://dirac.physics.uottawa.ca/usr2/ftp/pub/tr/marchand FileName: nips94.ps Title: Learning Stochastic Perceptrons Under k-Blocking Distributions Authors: Marchand M. and Hadjifaradji S. Abstract: We present a statistical method that PAC learns the class of stochastic perceptrons with arbitrary monotonic activation function and weights $w_i \in \{-1, 0, +1\}$ when the probability distribution that generates the input examples is member of a family that we call {\em k-blocking distributions\/}. Such distributions represent an important step beyond the case where each input variable is statistically independent since the 2k-blocking family contains all the Markov distributions of order k. By stochastic perceptron we mean a perceptron which, upon presentation of input vector $\x$, outputs~1 with probability $f(\sum_i w_i x_i - \theta)$. Because the same algorithm works for any monotonic (nondecreasing or nonincreasing) activation function $f$ on Boolean domain, it handles the well studied cases of sigmo\"{\i}ds and the ``usual'' radial basis functions. ALSO: you will find other papers co-authored by Mario Marchand in this directory. The text file: Abstracts-mm.txt contains a list of abstracts of all the papers. PLEASE: communicate to me any printing or transmission problems. Any comments concerning these papers are very welcome. ---------------------------------------------------------------- | UUU UUU Mario Marchand | | UUU UUU ----------------------------- | | UUU OOOOOOOOOOOOOOOO Department of Physics | | UUU OOO UUU OOO University of Ottawa | | UUUUUUUUUUUUUUUU OOO 150 Louis Pasteur street | | OOO OOO PO BOX 450 STN A | | OOOOOOOOOOOOOOOO Ottawa (Ont) Canada K1N 6N5 | | | | ***** Internet E-Mail: mario at physics.uottawa.ca ********** | | ***** Tel: (613)564-9293 ------------- Fax: 564-6712 ***** | ----------------------------------------------------------------  From J.Vroomen at kub.nl Tue Dec 6 12:58:06 1994 From: J.Vroomen at kub.nl (Jean Vroomen) Date: 6 Dec 94 12:58:06 MET Subject: PhD research position in Cogn Psy/Comput Linguistics at Tilburg Message-ID: Predoctoral research associate position in cognitive psychology/computational linguistics (Ph.D position, 4 year contract) Tilburg University A predoctoral position in cognitive psychology/computational linguistics is available immediately at Tilburg University. 
The focus of research will be on the development of a text-to-speech conversion system with special emphasis on modelling reading acquisition and developmental dyslexia. The development of the model and the simulations will be conducted together with behavioral experiments. In the project, a strong cross-linguistic approach is taken. Comparative models will be built for English, French, and Dutch. This approach furthermore includes the development of a computational measure of the complexity of these writing systems (`orthographic depth'). Candidates with training in cognitive psychology or cognitive neuropsychology, combined with computational (connectionist) modelling, are preferred. Applicants should send a curriculum vitae and a brief description of fields of interest to: Professor Beatrice de Gelder Department of Psychology Tilburg University Warandelaan, 2 PO Box 90153 5000 LE Tilburg, The Netherlands email: b.degelder at kub.nl  From FRYRL at f1groups.fsd.jhuapl.edu Wed Dec 7 15:30:00 1994 From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.) Date: Wed, 07 Dec 94 12:30:00 PST Subject: New Neuroprose Entry Message-ID: <2EE61B8E@fsdsmtpgw.fsd.jhuapl.edu> A preprint of a manuscript entitled "Observer-participant models of neural computation" which has been accepted by the IEEE Trans. Neural Networks is being made available via FTP from the Neuroprose directory. Details for retrieving this article and associated figures follow the abstract below: ABSTRACT Observer-participant models of neural computation A model is proposed in which the neuron serves as an information channel. Channel distortion occurs through the channel since the mapping from input boolean codes to output codes is many-to-one, in that neuron outputs consist of just two distinguished states. Within the described model, the neuron performs a decision-making function. Decisions are made regarding the validity of a question passively posed by the neuron to its environment. This question becomes defined through learning; hence learning is viewed as the process of determining an appropriate question based on supplied input ensembles. An application of the Shannon information measures of entropy and mutual information, taken together in the context of the proposed model, leads to the Hopfield neuron model with conditionalized Hebbian learning rules implemented through a simple modification to Oja's learning equation. Neural decisions are shown to be based on a sigmoidal transfer characteristic or, in the limit as computational temperature tends to zero, a maximum likelihood decision rule. The described work is contrasted with the information-theoretic approach of Linsker. The paper is available in two files from archive.cis.ohio-state.edu in the /pub/neuroprose subdirectory. The file names are fry.maxmut.ps.Z (compressed postscript manuscript 433927 bytes) fry.maxmut_figs.ps.Z (comp. postscript figures 622321 bytes) Robert L.
Fry Johns Hopkins University/ Applied Physics Laboratory Johns Hopkins Road Laurel, MD 20723 robert_fry at jhuapl.edu  From rohwerrj at helios.aston.ac.uk Wed Dec 7 12:21:49 1994 From: rohwerrj at helios.aston.ac.uk (rohwerrj) Date: Wed, 7 Dec 1994 17:21:49 +0000 Subject: 2 tech reports on n-tuple classifiers Message-ID: <22434.9412071721@sun.aston.ac.uk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/morciniec.ntup_bench.ps.Z FTP-filename: /pub/neuroprose/rohwer.ram_bayes.ps.Z The following two tech reports have been placed on the Neuroprose ftp site: archive.cis.ohio-state.edu (128.146.8.52) in pub/neuroprose/ Copies are also available on the Aston University ftp site: cs.aston.ac.uk (134.151.52.106) in pub/docs/ ----------------------------------------------------------------- morciniec.ntup_bench.ps.Z The n-tuple Classifier: Too Good to Ignore (11 Pages) Michal Morciniec and Richard Rohwer The n-tuple classifier is compared to 23 other methods on 11 large datasets from the European Community StatLog project. ----------------------------------------------------------------- rohwer.ram_bayes.ps.Z Two Bayesian treatments of the n-tuple recognition method (8 pages) Richard Rohwer Two probabilistic interpretations of the n-tuple recognition method are put forward in order to allow this technique to be analysed with the same Bayesian methods used in connection with other neural network models. ----------------------------------------------------------------- Sorry, no hardcopies available. Three cheers for Jordan Pollack!!! Richard Rohwer Dept. of Computer Science and Applied Mathematics Aston University Aston Triangle Birmingham B4 7ET ENGLAND Tel: (44 or 0) (21) 359-3621 x4688 FAX: (44 or 0) (21) 333-6215 rohwerrj at uk.ac.aston.cs  From mike at PARK.BU.EDU Tue Dec 6 20:51:49 1994 From: mike at PARK.BU.EDU (Michael Cohen) Date: Tue, 6 Dec 1994 20:51:49 -0500 Subject: WCNN'95 Call for Papers & Meeting Info Message-ID: <199412070151.UAA25340@cns.bu.edu> WORLD CONGRESS ON NEURAL NETWORKS RENAISSANCE HOTEL WASHINGTON, DC, USA July 17-21, 1995 CALL FOR PAPERS! DEADLINE FOR PAPER SUBMISSION: February 10, 1995. Registration Fees:
Category / Pre-registration prior to February 10, 1995 / Pre-registration prior to June 16, 1995 / On-Site
INNS Member / $170.00 / $250.00 / $350.00
Non-member** / $270.00 / $380.00 / $480.00
Student*** / $85.00 / $110.00 / $135.00
**Registration fee includes 1995 membership and a one (1) year subscription to the Journal Neural Networks. ***Student registration must be accompanied by a letter of verification from department chairperson. Any student registration received with no verification letter will be processed at the higher member or non-member fee, depending on current membership status. Copies of student identification cards are NOT acceptable. This also applies to on-site registration. ORGANIZING COMMITTEE: John G. Taylor (General Chair), Walter J. Freeman, Harold Szu, Rolf Eckmiller, Shun-ichi Amari, David Casasent. INNS OFFICERS: President: Walter J. Freeman; President-Elect: John G. Taylor; Past President: Harold Szu; Secretary: Gail Carpenter; Treasurer: Judith Dayhoff; Executive Director: R. K. Talley. GOVERNING BOARD: Shun-ichi Amari, James A. Anderson, Andrew Barto, David Casasent, Leon Cooper, Rolf Eckmiller, Kunihiko Fukushima, Stephen Grossberg, Mitsuo Kawato, Christof Koch, Bart Kosko, Christoph von der Malsburg, Alianna Maren, Paul Werbos, Bernard Widrow, Lotfi A. Zadeh. PROGRAM COMMITTEE: Shun-ichi Amari, James A. Anderson, Etienne Barnard, Andrew R.
Barron, Andrew Barto, Theodore Berger, Artie Briggs, Gail Carpenter, David Casasent, Leon Cooper, Judith Dayhoff, Rolf Eckmiller, Jeff Elman, Terrence L. Fine, Francoise Fogelman-Soulie, Walter J. Freeman, Kunihiko Fukushima, Patric Gallinari, Apostolos Georgopoulos, Stephen Grossberg, John B. Hampshire II, Michael Hasselmo, Robert Hecht-Nielsen, Akira Iwata, Jari Kangas, Bert Kappen, Christof Koch, Teuvo Kohonen, Kenneth Kreutz-Delgado, Clifford Lau, Sam Levin, Daniel S. Levine, William B. Levy, Christof von der Malsburg, Alianna Maren, Lina Massone, Lance Optican, Paul Refenes, Jeffrey Sutton, Harold Szu, John G. Taylor, Brian Telfer, Andreas Weigand, Paul Werbos, Hal White, Bernard Widrow, Daniel Wolpert, Kenji Yamanishi, Mona E. Zaghloul. CALL FOR PAPERS: PAPERS MUST BE RECEIVED BY FEBRUARY 10, 1995. Authors must submit registration payment with papers to be eligible for the early registration fee. A $35 per paper processing fee must be enclosed in order for the paper to be refereed. Please make checks payable to INNS and include with submitted paper. For review purposes, please submit six (6) copies (1 original, 5 copies) plus 3 1/2" disk (see instructions below), four page limit, in English. $20 per page for papers exceeding four (4) pages (do not number pages). Checks for over-length charges should be made out to INNS and must be included with the submitted paper. Papers must be on 8 1/2" x 11" white paper with 1" margins on all sides, one column format, single spaced, in Times or similar type style of 10 points or larger, one side of paper only. FAXes are not acceptable. Centered at the top of first page should be complete title, author name(s), affiliation(s), and mailing address(es), followed by blank space, abstract (up to 15 lines), and text. The following information MUST be included in an accompanying cover letter in order for the paper to be reviewed: Full title of paper, corresponding author and presenting author name, address, telephone and fax numbers. Technical Session (see session topics) 1st and 2nd choices, oral or poster presentation preferred*, audio-visual requirements (for oral presentations only). Papers submitted which do not meet these requirements or for which insufficient funds are submitted will be returned. For the first time, the proceedings of the 1995 World Congress on Neural Networks will be distributed on CD-ROM. The CD-ROM Proceedings are included in your registration fee. Please note that the 4 volume book of proceedings will not be printed. Format of 3 1/2" disk for CD-ROM: Once paper is proofed, completed and printed for review, reformat the paper in Landscape format, page size 8" x 5" for CD. You may include a separate file with 1 paragraph biographical information with your name, company, address and telephone number. Presenters should submit their papers in one of the following Macintosh or Microsoft Windows formats: Microsoft Word, WordPerfect, FrameMaker, Quark or Quark Professional, PageMaker, Persuasion, ASCII, PowerPoint, Adobe.PDF, Postscript (text, not EPS). Images can be submitted in TIF or PCX format. If submitting in Macintosh format, you should submit a disk containing a Font Suitcase of fonts used in your presentation to ensure proper match or enclose a cover letter with paper indicating a list of fonts used in your presentation. Information published on the CD will consist of text, in-line and offset mathematical equations, black and white images, line art and graphics.
By submitting a previously unpublished paper, the author agrees to the transfer of the copyright to INNS for the conference proceedings. All submitted papers become the property of INNS. Papers and disk to be sent to: WCNN'95, 875 Kings Highway, Suite 200, Woodbury, NJ 08096-3172 USA. *When to choose a poster presentation: more than 15 minutes are needed; author does not wish to run concurrently with an invited talk; work is a continuation of previous publication; author seeks a lively discussion opportunity or to solicit future collaboration. PLENARY SPEAKERS: Daniel L. Alkon, U.S. National Institutes of Health; Shun-ichi Amari, University of Tokyo; Gail Carpenter, Boston University; Walter J. Freeman, University of California, Berkeley; Teuvo Kohonen, Helsinki University of Technology; Harold Szu, Naval Surface Warfare Center; John G. Taylor, King's College London. SESSION TOPICS: 1. Biological Vision: Rolf Eckmiller 2. Machine Vision: Kunihiko Fukushima, Robert Hecht-Nielsen 3. Speech and Language: Jeff Elman 4. Biological Sensory-Motor Control: Andrew Barto, Lina Massone 5. Neurocontrol and Robotics: Paul Werbos 6. Supervised Learning: Andrew R. Barron, Terrence L. Fine 7. Unsupervised Learning: Teuvo Kohonen, Francoise Fogelman-Soulie 8. Pattern Recognition: David Casasent, Brian Telfer 9. Prediction and System Identification: John G. Taylor, Paul Werbos 10. Cognitive Neuroscience: James Anderson, Jeffrey Sutton 11. Links to Cognitive Science & Artificial Intelligence: Alianna Maren 12. Signal Processing: Bernard Widrow 13. Neurodynamics and Chaos: Harold Szu, Mona E. Zaghloul 14. Hardware Implementation: Clifford Lau 15. Associative Memory: Christoph von der Malsburg 16. Applications: Leon Cooper 17. Circuits and Systems Neuroscience: Stephen Grossberg, Lance Optican 18. Mathematical Foundations: Shun-ichi Amari, D.S. Levine 19. Evolutionary Computing, Genetic Algorithms: Judith Dayhoff SHORT COURSES: a. Pattern Recognition and Neural Nets: David Casasent, Carnegie Mellon University b. Modelling Consciousness: John G. Taylor, King's College London c. Neocognitron and the Selective Attention Model: Kunihiko Fukushima, Osaka University d. What are the Differences & the Similarities Among Fuzzy, Neural, & Chaotic Systems: Takeshi Yamakawa, Kyushu Institute of Technology e. Image Processing & Pattern Recognition by Self-Organizing Neural Networks: Stephen Grossberg, Boston University f. Dynamic Neural Networks: Signal Processing & Coding: Judith Dayhoff, University of Maryland g. Language and Speech Processing: Jeff Elman, University of California-San Diego h. Introduction to Statistical Theory of Neural Networks: Shun-ichi Amari, University of Tokyo i. Cognitive Network Computation: James Anderson, Brown University j. Biology-Inspired Neural Networks: From Brain Research to Applications in Technology & Medicine: Rolf Eckmiller, University of Dusseldorf k. Neural Control Systems: Bernard Widrow, Stanford University l. Neural Networks to Advance Intelligent Systems: Alianna Maren, Accurate Automation Corporation m. Reinforcement Learning: Andrew G. Barto, University of Massachusetts n. Advanced Supervised-Learning Algorithms and Applications: Francoise Fogelman-Soulie, SLIGOS o. Neural Network & Statistical Methods for Function Estimation: Vladimir Cherkassky, University of Minnesota p. Adaptive Resonance Theory: Gail A. Carpenter, Boston University q. What Have We Learned from Experiences of Real World Applications in NN/FS/GA?: Hideyuki Takagi, Matsushita Electrical Industrial Co., Ltd. r.
Fuzzy Function Approximation: Julie A. Dickerson, University of Southern California s. Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs: Lotfi A. Zadeh, University of California-Berkeley t. Overview of Neuroengineering and Supervised Learning: Paul Werbos, National Science Foundation INDUSTRIAL ENTERPRISE DAY: Monday, July 17, 1995 Enterprise Session: Chair: Robert Hecht-Nielsen, HNC, Inc. Industrial Session: Chair: Takeshi Yamakawa, Kyushu Institute of Technology FUZZY NEURAL NETWORKS: Tuesday, July 18, 1995 Wednesday, July 19, 1995 Co-Chairs: Bart Kosko, University of Southern California Ronald R. Yager, Iona College SPECIAL SESSIONS: Neural Network Applications in the Electrical Utility Industry Biomedical Applications & Imaging/Computer Aided Diagnosis in Medical Imaging Statistics and Neural Networks Dynamical Systems in Financial Engineering Mind, Brain and Consciousness Physics and Neural Networks Biological Neural Networks To obtain additional information (complete registration brochure, registration and hotel forms) contact WCNN'95, 875 Kings Highway, Suite 200, Woodbury, New Jersey 08096-3172 USA, Tel: (609) 845-1720; Fax: (609) 853-0411; e-mail: 74577.504 at compuserve.com  From jbower at smaug.bbb.caltech.edu Wed Dec 7 17:13:15 1994 From: jbower at smaug.bbb.caltech.edu (jbower@smaug.bbb.caltech.edu) Date: Wed, 7 Dec 94 14:13:15 PST Subject: Call for papers -- CNS*95 Message-ID: CALL FOR PAPERS Fourth Annual Computational Neuroscience Meeting CNS*95 July 11 - 15, 1995 Monterey, California ................ DEADLINE FOR SUMMARIES AND ABSTRACTS: **>> January 25, 1995 <<** ^^^^^^^^^^^^^^^^ This is the fourth annual meeting of an interdisciplinary conference intended to address the broad range of research approaches and issues involved in the field of computational neuroscience. The last three annual meetings, in San Francisco (CNS*92), Washington, DC (CNS*93), and Monterey, California (CNS*94), brought experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians together to consider the functioning of biological nervous systems. Peer-reviewed papers were presented on a range of subjects related to understanding how nervous systems compute. As in previous years, the meeting will equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. The meeting in 1995 will again take place at the Monterey Doubletree Hotel and include plenary, contributed, and poster sessions. There will be no parallel sessions and the full text of presented papers will be published in a proceedings volume. The last day of the meeting will be devoted to a series of informal workshops focused on current issues in computational neuroscience. Student travel funds and child day care will be available. SUBMISSION INSTRUCTIONS: With this announcement we solicit the submission of presented papers. All papers will be refereed. Authors should send original research contributions in the form of a 1000-word (or less) summary and a separate single page 50-100 word abstract clearly stating their results. Summaries are for program committee use only. Abstracts will be published in the conference program. At the bottom of each abstract page and on the first summary page, indicate preference for oral or poster presentation and specify at least one appropriate category and theme from the following list: Presentation categories: A. Theory and Analysis B.
Modeling and Simulation C. Experimental D. Tools and Techniques Themes: A. Development B. Cell Biology C. Excitable Membranes and Synaptic Mechanisms D. Neurotransmitters, Modulators, Receptors E. Sensory Systems 1. Somatosensory 2. Visual 3. Auditory 4. Olfactory 5. Other systems F. Motor Systems and Sensory Motor Integration G. Learning and Memory H. Behavior I. Cognitive J. Disease Include addresses of all authors on the front of the summary and the abstract, including the E-mail address for EACH author. Indicate on the front of the summary to which author correspondence should be addressed. Program committee decisions will be sent to the correspondence author only. Submissions will not be considered if they lack category information, separate abstract sheets, author addresses, or are late. Submissions can be made by surface mail ONLY by sending 6 copies of the abstract and summary to: CNS*95 Submissions Division of Biology 216-76 Caltech Pasadena, CA 91125 ADDITIONAL INFORMATION can be obtained by: o Using our on-line WWW information and registration server, URL of: http://www.bbb.caltech.edu/cns95.html o ftp-ing to our ftp site. yourhost% ftp 131.215.137.69 Name (131.215.137.69:): ftp Password: yourname at yourhost.yourside.yourdomain ftp> cd cns95 ftp> ls o Sending Email to: cns95-registration-info at smaug.bbb.caltech.edu CNS*95 ORGANIZING COMMITTEE: Co-meeting chair / logistics - John Miller, UC Berkeley Co-meeting chair / program - Jim Bower, Caltech Program committee: Gwen Jacobs, University of California, Berkeley Catherine Carr, University of Maryland, College Park Dennis Glanzman, NIMH/NIH Nancy Kopell, Boston University Christiane Linster, ESPCI, Paris, France Philip Ulinski, University of Chicago Charles Wilson, University of Tennessee, Memphis Regional Organizers: Europe - Erik DeSchutter (Belgium) Middle East - Idan Segev (Jerusalem) Down Under - Mike Paulin (New Zealand) South America - Renato Sabbatini (Brazil) Asia - Zhaoping Li (Hong Kong) *************************************** James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 449-0679 FAX NCSA Mosaic laboratory address: http://www.bbb.caltech.edu/bowerlab NCSA Mosaic address for GENESIS: http://www.bbb.caltech.edu/GENESIS  From lars at ida.his.se Wed Dec 7 02:31:17 1994 From: lars at ida.his.se (Lars Niklasson) Date: Wed, 7 Dec 94 08:31:17 +0100 Subject: Lectureship Message-ID: <9412070731.AA00135@mhost.ida.his.se> The following lectureship is available for a connectionist with a background in Linguistics. =========================================================== Lectureship in linguistics, ref. no. 276-94-40 University of Skoevde, Sweden. Deadline for application: Jan. 20th 1995. The appointment includes both research and lecturing duties. Applicants should possess a general competence in linguistics, but also be prepared to work within the border-areas of computer science and cognitive science, which are areas where the University is planning to develop both education and a research competence with special emphasis on linguistics. It is therefore considered a merit if the applicant has skills within traditional artificial intelligence or connectionism. The lectureship will belong to the Department of Modern Languages, but will involve lecturing duties at the Department of Computer Science. The University of Skoevde is one of Sweden's youngest, but it is undergoing a dynamic development.
The main focus of all the University activities is towards the private sector, industry and international trade, especially the use and development of tools for information technology. Six areas of competence have been established: engineering, computer science, cognitive science, economics, language, as well as media and arts. Currently, the education programmes of the Department of Modern Languages include: English, French, Spanish and German. In addition, the Department of Computer Science offers a number of 3-year programmes in computer science and cognitive science, as well as a research-oriented master programme. Research is conducted in connectionism, distributed real-time databases and active databases. For more information please contact: The vice-chancellor's office Lars-Erik Johansson Email: Lars-Erik.Johansson at sta.his.se  From pollack at cs.brandeis.edu Wed Dec 7 17:12:09 1994 From: pollack at cs.brandeis.edu (Jordan Pollack) Date: Wed, 7 Dec 1994 17:12:09 -0500 Subject: compneuro asst prof job Message-ID: <199412072212.RAA02297@onyx.cs.brandeis.edu> I recently moved to Brandeis University, near Boston, where there is a lovely new building housing a new center for complex systems, which focuses on the brain, the mind, and computation. The center houses biologists, biochemists, physicists, psychologists, linguists, and computer scientists. There is a faculty search which is relevant to the list: ------- The Volen National Center for Complex Systems at Brandeis University seeks candidates for a tenure-track assistant professorship to participate in establishing a Center for Theoretical Neuroscience funded by a grant from the Alfred P. Sloan Foundation. We are especially interested in theorists with strong interests and demonstrated commitment to research relevant to neuroscience. Candidates are expected to have Ph.D.s (or the equivalent) in Mathematics, Physics, Computer Science or another quantitative discipline. The successful candidate will be appointed in the appropriate academic department/s. The position carries a reduced teaching load and summer salary for three years. Prospective candidates should send a curriculum vitae, statement of research interests, and arrange for three letters of recommendation to be sent directly to: Search Committee, Theoretical/Computational Neuroscientist, Volen Center, Brandeis University, Waltham, MA 02254. Applications from minorities and women are especially welcome. Consideration of completed applications will start after January 1, 1995, and will continue until the position is filled, although candidates are strongly encouraged to submit completed applications as soon as possible. ---- The likeliest candidates for the faculty position will have a body of work in neuroscience modeling, rather than in neural models of cognition or engineering. However, there are also several postdocs available on the same Sloan grant, for theorists or modelers who want to cross into experimental neuroscience. Contact Eve Marder at binah.cc.brandeis.edu, Larry Abbott at psy.ox.ac.uk or John Lisman at binah.cc.brandeis.edu for further info. Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/fax 2741 Waltham, MA 02254 email: pollack at cs.brandeis.edu PS Neuroprose is staying at OSU for a while, just with delays in moving files.
From pollack at cs.brandeis.edu Thu Dec 8 11:51:23 1994 From: pollack at cs.brandeis.edu (Jordan Pollack) Date: Thu, 8 Dec 1994 11:51:23 -0500 Subject: FLAME: WCNN'95 - science in the mud pits Message-ID: <199412081651.LAA01963@garnet.cs.brandeis.edu> I kept quiet about the first IEEE NN conference in 1987, where every paper was accepted and sorted by how well the author's name was recognized by the local committee, and a feast of surplus dollars was collected in hyperinflated registration fees. I just never submitted another paper. But these policies are sickening: >Authors must submit >registration payment with papers to be eligible for the early >registration fee...Registration fee includes 1995 membership and a >one (1) year subscription to the Journal Neural Networks. >A $35 per paper processing fee must be enclosed >in order for the paper to be refereed. Reading fees are charged by agents for aspiring science fiction writers. Linking registration to submission is nothing more than an admission that all papers are to be accepted. And a fee of $480 is paying a lot more than membership, subscription, and a few days of coffee breaks! Also, since the size of the intersection of the two sets |ORGANIZING COMMITTEE & PLENARY SPEAKERS| is 4 instead of 0, I guess a Pentium chip was used. WHERE are the ethical governors of this scientific society? Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/fax 2741 Waltham, MA 02254 email: pollack at cs.brandeis.edu  From carlos at cenoli1.ulb.ac.be Thu Dec 8 16:15:38 1994 From: carlos at cenoli1.ulb.ac.be (carlos@cenoli1.ulb.ac.be) Date: Thu, 8 Dec 94 16:15:38 MET Subject: paper available: dynamical computation with chaos Message-ID: <9412081515.AA01490@cenoli5.ulb.ac.be> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/babloyantz.comput_chaos.ps.Z The file babloyantz.comput_chaos.ps.Z is now available for copying from the Neuroprose repository. This is a 13-page paper with 5 figures. Size is 204239 bytes compressed and 585704 uncompressed. Sorry, hardcopies not available. Computation with chaos: A paradigm for cortical activity A. Babloyantz and C. Louren\c{c}o Service de Chimie-Physique, Universit\'{e} Libre de Bruxelles CP 231 - Campus Plaine, Boulevard du Triomphe B-1050~Bruxelles, Belgium e-mail: carlos at cenoli.ulb.ac.be Abstract: A device comprising two interconnected networks of oscillators exhibiting spatiotemporal chaos is considered. An external cue stabilizes input-specific unstable periodic orbits of the first network, thus creating an ``attentive'' state. Only in this state is the device able to perform pattern discrimination and motion detection. We discuss the relevance of the procedure to the information processing of the brain. This paper appeared in Proc. Natl. Acad. Sci. USA, Vol. 91, p. 9027, 13 September 1994. Thanks to Jordan Pollack for maintaining the archive. Carlos Louren\c{c}o ------------------------------------------------------------------------- *** Indonesians persist in the racial cleansing of East Timor. *** *** Torture and killing of innocents happen daily. ***  From mdavies at psy.ox.ac.uk Wed Dec 7 11:24:21 1994 From: mdavies at psy.ox.ac.uk (Martin Davies) Date: Wed, 7 Dec 1994 16:24:21 +0000 Subject: Euro-SPP '95 Message-ID: <9412071624.AA21454@Mac8> ************************************************************************* EUROPEAN SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY Fourth Annual Meeting St.
Catherine's College, Oxford Wednesday 30 August - Friday 1 September, 1995 ************************************************************************* FIRST ANNOUNCEMENT AND CALL FOR PAPERS The Fourth Annual Meeting of the Euro-SPP will begin at 11.30 am on Wednesday 30 August and will end at 5.30 pm on Friday 1 September. Themes for Invited Symposia include: emotion, attention, artificial life, and brain imaging. The conference will be held in St. Catherine's College, Oxford, where accommodation will be available. We expect to be able to offer an accommodation and meals package for the period from Wednesday morning until Friday afternoon for 108 pounds. In addition, bed and breakfast accommodation will be available for the Tuesday night before the conference, and for the Friday and Saturday nights after the conference, at a cost of 28 pounds per night. A limited number of superior rooms with private bath will be available at a higher rate. ************************************************************************* For further information about local arrangements, email: espp95 at psy.ox.ac.uk. ************************************************************************* The Society welcomes submitted papers and posters for this meeting. Submitted papers and posters are refereed and selected on the basis of quality and relevance to both psychologists and philosophers. Submitted Papers: Papers should not exceed a length of 30 minutes (about 12 double-spaced pages). The full text should be submitted, along with a 300 word abstract. Poster Presentations: Proposals for poster presentations should consist of a 500 word abstract. Unless authors indicate otherwise, submitted papers that we are not able to accept will also be considered for poster presentation. The deadline for submission of both submitted papers and poster presentations is 20 January 1995. Please send three copies to: Professor Beatrice de Gelder Department of Psychology Tilburg University 5000 LE Tilburg The Netherlands or: Professor Christopher Peacocke Magdalen College Oxford OX1 4AU UK ************************************************************************* For information about membership of the Euro-SPP, email: espp at kub.nl. *************************************************************************  From marks at u.washington.edu Fri Dec 9 20:37:22 1994 From: marks at u.washington.edu (Robert Marks) Date: Fri, 9 Dec 94 17:37:22 -0800 Subject: Russian Symposium Message-ID: <9412100137.AA25592@carson.u.washington.edu> The 2-nd International Symposium on Neuroinformatics and Neurocomputers Rostov-on-Don, RUSSIA September 20-23, 1995 Organized by Russian Neural Network Society (RNNS) and A.B. Kogan Research Institute for Neurocybernetics (KRINC) in co-operation with the Institute of Electrical and Electronics Engineers Neural Networks Council (IEEE NNC) First Call for Papers Research in Neuroinformatics and Neurocomputing continued in Russia after the research was deflated in the West in the 1970s. The research sophistication in neural networks, as a result, is quite advanced in Russia. The first international RNNS/IEEE Symposium, held in October 1992, proved to be a highly successful forum for a diverse international interchange of fresh and novel research results. The second International Symposium on Neuroinformatics and Neurocomputers is built on this remarkable success. The symposium focus is on the neuroscience, mathematics, physics, engineering and design of neuroinformatic and neurocomputing systems.
Rostov-on-Don, the location of the Symposium, is about 1000 km south of Moscow on the scenic Don river. The Don is commonly identified as the boundary between the continents of Europe and Asia. Rostov is the home of the A.B. Kogan Research Institute for Neurocybernetics at Rostov State University - one of the premier neural network research centers in Russia.

Papers for the Symposium should be sent in CAMERA-READY FORM, NOT EXCEEDING 8 PAGES in A4 format, to the Program Committee Co-Chair Alexander A. Frolov. Two copies of the paper should be submitted. The deadline for submission is 15 MARCH, 1995. Notification of acceptance will be sent on or before 15 May, 1995.

SYMPOSIUM COMMITTEE

GENERAL CHAIR
Witali L. Dunin-Barkowski, Dr. Sci.
The 2-nd International Symposium on Neuroinformatics and Neurocomputers, Symposium Chair
President of the Russian Neural Network Society
A.B. Kogan Research Institute for Neurocybernetics
Rostov State University
194/1 Stachka avenue, 344104, Rostov-on-Don, Russia
Tel: +7-8632-28-0588, Fax: +7-8632-28-0367
E-mail: wldb at krinc.rostov-na-donu.su

PROGRAM COMMITTEE CO-CHAIRS
Professor Alexander A. Frolov
The 2-nd International Symposium on Neuroinformatics and Neurocomputers, Program Co-Chair
5a Butlerov str.
Higher Nervous Activity and Neurophysiology Institute
Russian Academy of Science
117220, Moscow, RUSSIA

Professor Robert J. Marks II
The 2-nd International Symposium on Neuroinformatics and Neurocomputers, Program Co-Chair
University of Washington
Department of Electrical Engineering
c/o 1131 199th Street S.W., Suite N
Lynnwood, WA 98036-7138, USA

Other information is available from the Symposium Committee.

From john at dcs.rhbnc.ac.uk Sat Dec 10 07:18:37 1994
From: john at dcs.rhbnc.ac.uk (john@dcs.rhbnc.ac.uk)
Date: Sat, 10 Dec 94 12:18:37 +0000
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <28465.9412101218@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): one new report available

----------------------------------------
NeuroCOLT Technical Report NC-TR-94-011:
----------------------------------------

Valid Generalisation from Approximate Interpolation
by Martin Anthony, Department of Mathematics, The London School of Economics
Peter Bartlett, Research School of Information Sciences and Engineering, Australian National University
Yuval Ishai, Department of Computer Science, Technion
John Shawe-Taylor, Department of Computer Science, Royal Holloway, University of London

Abstract: Let $\H$ and $\C$ be sets of functions from domain $X$ to the reals. We say that $\H$ validly generalises $\C$ from approximate interpolation if and only if for each $\eta > 0$ and $\epsilon, \delta \in (0,1)$ there is a number $m_0(\eta,\epsilon,\delta)$ such that for any function $t \in \C$ and any probability distribution $P$ on $X$, if $m \ge m_0$ then with $P^m$-probability at least $1-\delta$, a sample $\vx = (x_1, x_2, \dots, x_m) \in X^m$ satisfies
$$\forall h \in \H, \; |h(x_i) - t(x_i)| < \eta \; (1 \le i \le m) \Longrightarrow P(\{x : |h(x) - t(x)| \ge \eta\}) < \epsilon.$$
We find conditions that are necessary and sufficient for $\H$ to validly generalise $\C$ from approximate interpolation, and we obtain bounds on the sample length $m_0(\eta,\epsilon,\delta)$ in terms of various parameters describing the expressive power of $\H$.
----------------------------------------
NeuroCOLT Technical Report NC-TR-94-022:
----------------------------------------

Sample Sizes for Sigmoidal Neural Networks
by John Shawe-Taylor, Department of Computer Science, Royal Holloway, University of London

Abstract: This paper applies the theory of Probably Approximately Correct (PAC) learning to feedforward neural networks with sigmoidal activation functions. Despite the best known upper bound on the VC dimension of such networks being $O((WN)^2)$, for $W$ parameters and $N$ computational nodes, it is shown that the asymptotic bound on the sample size required for learning with increasing accuracy $1 - \epsilon$ and decreasing probability of failure $\delta$ is
$$O((1/\epsilon)(W\log(1/\epsilon) + (WN)^2 + \log(1/\delta))).$$
For practical values of $\epsilon$ and $\delta$ the formula obtained for the sample sizes is a factor $2\log(2e/\epsilon)$ smaller than a naive use of the VC dimension result would give. Similar results are obtained for learning where the hypothesis is only guaranteed to correctly classify a given proportion of the training sample. The results are formulated in general terms and show that for many learning classes defined by smooth functions thresholded at the output, the sample size for a class with VC-dimension $d$ and $\ell$ parameters is $O((1/\epsilon)(\ell\log(1/\epsilon) + o(\log(1/\epsilon))d + \log(1/\delta)))$.

-----------------------

The Report NC-TR-94-011 can be accessed and printed as follows:

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-94-011.ps.Z
ftp> bye
% zcat nc-tr-94-011.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.

Best wishes
John Shawe-Taylor

From cnna at tce.ing.uniroma1.it Mon Dec 12 09:54:22 1994
From: cnna at tce.ing.uniroma1.it (cnna@tce.ing.uniroma1.it)
Date: Mon, 12 Dec 1994 15:54:22 +0100
Subject: program CNNA-94
Message-ID: <9412121454.AA14007@tce.ing.uniroma1.it>

(Multiple posting - please excuse us if you receive more than one copy)

THIRD IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND THEIR APPLICATIONS (CNNA-94)
ROME, ITALY, DEC. 18-21, 1994

PRELIMINARY PROGRAM

SUNDAY, DECEMBER 18, 1994

17.30 Inaugural Session (Hall 1)
17.30 Welcome address: G. Orlandi, Dean, Faculty of Engineering; V. Cimagalli, Chairman, CNNA-94
17.45 Opening address: C. Lau
18.05 Inaugural lecture: L.O. Chua, The CNN Universal Chip: Dawn of a New Computer Paradigm
19.30 Welcome Cocktail (Room of the Frescoes)

MONDAY, DECEMBER 19, 1994

9.00 Session 1: Theory I (Hall of the Cloister - chairman: L.O. Chua)
9.00 Invited review paper: T. Roska, Analogic Algorithms Running on the CNN Universal Machine
9.30 G. Yang, T. Yang, L.-B. Yang, On Unconditional Stability of the General Delayed Cellular Neural Networks
9.45 S. Arik, V. Tavsanoglu, A Weaker Condition for the Stability of Nonsymmetric CNNs
10.00 M.P. Joy, V. Tavsanoglu, Circulant Matrices and the Stability Theory of CNNs
10.15 B.E. Shi, S. Wendsche, T. Roska, L.O. Chua, Random Variations in CNN Templates: Theoretical Models and Empirical Studies
10.30 Coffee Break (Room of the Frescoes)
11.00 Session 2: Connections with Neurophysiology (Hall of the Cloister - chairman: T. Roska)
11.00 Invited lecture: F.
Werblin, A. Jacobs, Using CNN to Unravel Space-Time Processing in the Vertebrate Retina
11.45 Invited lecture: J. Hámori, Synaptic Organization of the Thalamic Visual Center (LGN) of Mammals as a Basis of CNN Model of Subcortical Visual Processing
12.30 K. Lotz, Z. Vidnyánszky, T. Roska, J. Vandewalle, J. Hámori, A. Jacobs, F. Werblin, Some Cortical Spiking Neuron Models Using CNN
12.45 T.W. Berger, B.J. Sheu, R. H.-J. Tsai, Analog VLSI Implementation of a Nonlinear Systems Model of the Hippocampal Brain Region
13.00 A. Jacobs, T. Roska, F. Werblin, Techniques for Constructing Physiologically Motivated Neuromorphic Models in CNN
13.15 Lunch (Room of the Frescoes)
14.45 Session 3: Hardware Implementations I (Hall of the Cloister - chairman: A. Rodríguez-Vázquez)
14.45 Invited review paper: A. Rodríguez-Vázquez, R. Domínguez-Castro, S. Espejo, Design of CNN Universal Chips: Trends and Obstacles
15.15 J.M. Cruz, L.O. Chua, T. Roska, A Fast, Complex and Efficient Test Implementation of the CNN Universal Machine
15.30 F. Sargeni, V. Bonaiuto, High Performance Digitally Programmable CNN Chip with Discrete Templates
15.45 A. Paasio, A. Dawidziuk, K. Halonen, V. Porra, Digitally Controllable Weights in Current Mode Cellular Neural Networks
16.00 D. Lím, G.S. Moschytz, A Programmable, Modular CNN Cell
16.15 M.-D. Doan, R. Chakrabaty, M. Heidenreich, M. Glesner, S. Cheung, Realisation of a Digital Cellular Neural Network for Image Processing
16.30 R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez, R. Carmona, A CNN Universal Chip in CMOS Technology
16.45 Coffee Break (Room of the Frescoes)
17.15 Session 4: Theory II (Hall of the Cloister - chairman: R.-W. Liu)
17.15 R.-W. Liu, Y.-F. Huang, X.-T. Ling, A Novel Approach to the Convergence of Neural Networks for Signal Processing
17.30 E. Pessa, M.P. Penna, Local and Global Connectivity in Neuronic Cellular Automata
17.45 X.-Z. Huang, T. Yang, L.-B. Yang, On Stability of the Time-Variant Delayed Cellular Neural Networks
18.00 J.J. Szczyrek, S. Jankowski, A Class of Asymmetrical Templates in Cellular Neural Networks
18.15 P.P. Civalleri, M. Gilli, A Topological Description of the State Space of a Cellular Neural Network
18.30 M. Tanaka, T. Watanabe, Cooperative and Competitive Cellular Neural Networks

TUESDAY, DECEMBER 20, 1994

9.15 Session 5: Learning I (Hall of the Cloister - chairman: J.A. Nossek)
9.15 Invited review paper: J.A. Nossek, Design and Learning with Cellular Neural Networks
9.45 I. Fajfar, F. Bratkovic, Statistical Design Using Variable Parameter Variances and Application to Cellular Neural Networks
10.00 N.N. Aizenberg, I.N. Aizenberg, CNN-like Networks Based on Multi-Valued and Universal Binary Neurons: Learning and Application to Image Processing
10.15 W. Utschick, J.A. Nossek, Computational Learning Theory Applied to Discrete-Time Cellular Neural Networks
10.30 H. Magnussen, J.A. Nossek, Global Learning Algorithms for Discrete-Time Cellular Neural Networks
10.45 Coffee Break (Room of the Frescoes)
11.15 Session 6: Learning II (Hall of the Cloister - chairman: J. Vandewalle)
11.15 H. Magnussen, G. Papoutsis, J.A. Nossek, Continuation-Based Learning Algorithm for Discrete-Time Cellular Neural Networks
11.30 C. Güzelis, S. Karamahmut, Recurrent Perceptron Learning Algorithm for Completely Stable Cellular Neural Networks
11.45 A.J. Schuler, M. Brabec, D. Schubel, J.A. Nossek, Hardware-Oriented Learning for Cellular Neural Networks
12.00 F. Dellaert, J. Vandewalle, Automatic Design of Cellular Neural Networks by Means of Genetic Algorithms: Finding a Feature Detector
12.15 H.
Mizutani, A New Learning Method for Multilayered Cellular Neural Networks
12.30 Lunch (Room of the Frescoes)
14.00 Panel Discussion: Trends and Applications of the CNN Paradigm (Hall of the Cloister - chairman: L.O. Chua). Panelists: L.O. Chua, V. Cimagalli, J.A. Nossek, T. Roska, A. Rodríguez-Vázquez, J. Vandewalle
15.30 Coffee Break (Room of the Frescoes)
16.00 Session 7: Applications I (Hall of the Cloister - chairman: N.N. Aizenberg)
16.00 N.N. Aizenberg, I.N. Aizenberg, T.P. Belikova, Extraction and Localization of Important Features on Grey-Scale Images: Implementation on the CNN
16.15 K. Slot, Large-Neighborhood Templates Implementation in Discrete-Time CNN Universal Machine with a Nearest-Neighbor Connection Pattern
16.30 J. Pineda de Gyvez, XCNN: A Software Package for Color Image Processing
16.45 B.E. Shi, Order Statistic Filtering with Cellular Neural Networks
17.00 L.-B. Yang, T. Yang, B.-S. Chen, Moving Point Target Detection Using Cellular Neural Networks
17.15 X.-P. Yang, T. Yang, L.-B. Yang, Extracting Focused Object from Defocused Background Using Cellular Neural Networks
17.30 M. Balsi, N. Racina, Automatic Recognition of Train Tail Signs Using CNNs
17.45 A. Kellner, H. Magnussen, J.A. Nossek, Texture Classification, Texture Segmentation and Text Segmentation with Discrete-Time Cellular Neural Networks
16.00 Session 8: Applications II (Hall 17 - chairman: S. Jankowski)
16.00 P.L. Venetianer, P. Szolgay, K.R. Crounse, T. Roska, L.O. Chua, Analog Combinatorics and Cellular Automata - Key Algorithms and Layout Design
16.15 Á. Zarándy, T. Roska, Gy. Liszka, J. Hegyesi, L. Kék, Cs. Rekeczky, Design of Analogic CNN Algorithms for Mammogram Analysis
16.30 P. Szolgay, Gy. Eröss, A. Katona, Á. Kiss, An Experimental System for Path Tracking of a Robot Using a 16*16 Connected Component Detector CNN Chip with Direct Optical Input
16.45 T. Kozek, T. Roska, A Double Time-Scale CNN for Solving 2-D Navier-Stokes Equations
17.00 Á. Zarándy, F. Werblin, T. Roska, L.O. Chua, Novel Types of Analogic CNN Algorithms for Recognizing Bank-Notes
17.15 B.J. Sheu, Sa H. Bang, W.-C. Fang, Optimal Solutions of Selected Cellular Neural Network Applications by the Hardware Annealing Method
17.30 B. Siemiatkowska, Cellular Neural Network for Mobile Robot Navigation
17.45 A. Murgu, Distributed Neural Control for Markov Decision Processes in Hierarchic Communication Networks
20.00 Banquet (Restaurant "4 Colonne" - via della Posta Vecchia, 4)

WEDNESDAY, DECEMBER 21, 1994

9.00 Session 9: Spatio-Temporal Dynamics I (Hall of the Cloister - chairman: V.D. Shalfeev)
9.00 Invited review paper: V.D. Shalfeev, G.V. Osipov, Spatio-Temporal Phenomena in Cellular Neural Networks
9.30 C.-M. Yang, T. Yang, K.-Y. Zhang, Chaos in Discrete Time Cellular Neural Networks
9.45 R. Dogaru, A.T. Murgan, D. Ioan, Robust Oscillations and Bifurcations in Cellular Neural Networks
10.00 H. Chen, M.-D. Dai, X.-Y. Wu, Bifurcation and Chaos in Discrete-Time Cellular Neural Networks
10.15 M.J. Ogorzalek, A. Dabrowski, W. Dabrowski, Hyperchaos, Clustering and Cooperative Phenomena in CNN Arrays Composed of Chaotic Circuits
10.30 Coffee Break (Room of the Frescoes)
11.00 Session 10: Spatio-Temporal Dynamics II (Hall of the Cloister - chairman: M. Hasler)
11.00 Invited review paper: P. Thiran, M. Hasler, Information Processing Using Stable and Unstable Oscillations: A Tutorial
11.30 P. Szolgay, G. Vörös, Transient Response Computation of a Mechanical Vibrating System Using Cellular Neural Networks
11.45 P.P. Civalleri, M.
Gilli, Propagation Phenomena in Cellular Neural Networks
12.00 S. Jankowski, A. Londei, C. Mazur, A. Lozowski, Synchronization Phenomena in 2D Chaotic CNN
12.15 Poster Session (Hall of the Cloister):
S. Jankowski, R. Wanczuk, CNN Models of Complex Pattern Formation in Excitable Media
Z. Galias, J.A. Nossek, Control of a Real Chaotic Cellular Neural Network
A. Piovaccari, G. Setti, A Versatile CMOS Building Block for Fully Analogically-Programmable VLSI Cellular Neural Networks
P. Thiran, G. Setti, An Approach to Local Diffusion and Global Propagation in 1-dim. Cellular Neural Networks
J. Kowalski, K. Slot, T. Kacprzak, A CMOS Current-Mode VLSI Implementation of Cellular Neural Network for an Image Objects Area Estimation
W.J. Jansen, R. van Drunen, L. Spaanenburg, J.A.G. Nijhuis, The AD2 Microcontroller Extension for Artificial Neural Networks
C.-K. Pham, M. Tanaka, A Novel Chaos Generator Employing CMOS Inverter for Cellular Neural Networks
13.15 Lunch (Room of the Frescoes)
14.45 Session 11: Hardware Implementations II (Hall of the Cloister - chairman: J.L. Huertas)
14.45 R. Beccherelli, G. de Cesare, F. Palma, Towards an Hydrogenated Amorphous Silicon Phototransistor Cellular Neural Network
15.00 A. Sani, S. Graffi, G. Masetti, G. Setti, Design of CMOS Cellular Neural Networks Operating at Several Supply Voltages
15.15 M. Russell Grimaila, J. Pineda de Gyvez, A Macromodel Fault Generator for Cellular Neural Networks
15.30 P. Kinget, M. Steyaert, Evaluation of CNN Template Robustness Towards VLSI Implementation
15.45 B.J. Sheu, Sa H. Bang, W.-C. Fang, Analog VLSI Design of Cellular Neural Networks with Annealing Ability
16.00 L. Raffo, S.P. Sabatini, G.M. Bisio, A Reconfigurable Architecture Mapping Multilayer CNN Paradigms
14.45 Session 12: Applications III (Hall 17 - chairman: J. Herault)
14.45 G. Adorni, V. D'Andrea, G. Destri, A Massively Parallel Approach to Cellular Neural Networks Image Processing
15.00 M. Coli, P. Palazzari, R. Rughi, Use of the CNN Dynamic to Associate Two Points with Different Quantization Grains in the State Space
15.15 M. Csapodi, L. Nemes, G. Tóth, T. Roska, A. Radványi, Some Novel Analogic CNN Algorithms for Object Rotation, 3D Interpolation-Approximation, and a "Door-in-a-Floor" Problem
15.30 H. Harrer, P.L. Venetianer, J.A. Nossek, T. Roska, L.O. Chua, Some Examples of Preprocessing Analog Images with Discrete-Time Cellular Neural Networks
15.45 A.G. Radványi, Solution of Stereo Correspondence in Real Scene: an Analogic CNN Algorithm
16.00 J.P. Miller, K.R. Crounse, T. Szirányi, L. Nemes, L.O. Chua, T. Roska, Deblurring of Images by Cellular Neural Networks with Applications to Microscopy
16.15 Coffee Break (Room of the Frescoes)
16.45 Session 13: Hardware Implementations III (Hall of the Cloister - chairman: M. Salerno)
16.45 T. Roska, P. Szolgay, Á. Zarándy, P.L. Venetianer, A. Radványi, T. Szirányi, On a CNN Chip-Prototyping System
17.00 M. Balsi, V. Cimagalli, I. Ciancaglioni, F. Galluzzi, Optoelectronic Cellular Neural Network Based on Amorphous Silicon Thin Film Technology
17.15 S. Espejo, R. Domínguez-Castro, A. Rodríguez-Vázquez, R. Carmona, Weight-Control Strategy for Programmable CNN Chips
17.30 S. Espejo, A. Rodríguez-Vázquez, R. Domínguez-Castro, R. Carmona, Convergence and Stability of the FSR CNN Model
17.45 R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez, I. García-Vargas, J.F. Ramos, R. Carmona, SIRENA: A Simulation Environment for CNNs
16.45 Session 14: Applications IV (Hall 17 - chairman: M. Tanaka)
16.45 P. Arena, S. Baglio, L. Fortuna, G.
Manganaro, CNN Processing for NMR Spectra
17.00 P. Arena, L. Fortuna, G. Manganaro, S. Spina, CNN Image Processing for the Automatic Classification of Oranges
17.15 S. Schwarz, Detection of Defects on Photolithographic Masks by Cellular Neural Networks
17.30 M. Ikegami, M. Tanaka, Moving Image Coding and Decoding by DTCNN with 3-D Templates
17.45 M. Kanaya, M. Tanaka, Robot Multi-Driving Controls by Cellular Neural Networks

GENERAL INFORMATION

Venue: All sessions will be held at the Faculty of Engineering, "La Sapienza" University of Rome, via Eudossiana, 18 - 00184 Rome.

Coffee breaks and lunches are included in the registration fee; they will be served in the Room of the Frescoes, opposite the main lecture hall. In order to receive them, it is necessary to wear the workshop badge.

The banquet will be held at restaurant "4 Colonne", via della Posta Vecchia, 4, near Piazza Navona, tel. 68307152/68805261. It can be reached on foot in about 20 minutes from the Faculty of Engineering, by bus no. 81 or 87 from via dei Fori Imperiali to Corso Rinascimento, or by taxi. You may ask for directions at the registration desk. Please bring the coupon you receive at registration.

Communications: You may receive messages by fax, no. +39-6-4742647. Faxes should be addressed to your name, with the clear statement "participant to CNNA-94" on the cover sheet. A UNIX ASCII terminal will be available for all participants, with full Internet capabilities. You may receive messages at the usual e-mail address of the workshop: cnna at tce.ing.uniroma1.it. The subject of messages should include your name. Public telephones, suitable for local, long-distance, and international calls, are available in the Faculty. They accept magnetic cards that can be purchased at the registration desk, at tobacconists' and newsstands, and in other shops carrying the Telecom symbol.

Bank: Banca di Roma has an agency within the Faculty of Engineering. It is open in the morning from 8.25 to 1.35. Several other banks are available in the area of the University, also open from 3 to 4 pm. Some cash dispensers, e.g. the one at Banca di Roma on via Cavour, between Hotel Palatino and via dei Fori Imperiali, accept credit cards for cash retrieval 24 hours a day.

Authors are requested to meet with their session chairman 15 minutes prior to the session, in the same room where it will be held. Time slots for regular papers are strictly limited to 15 minutes, including discussion. Poster authors should set up their posters at 10.30 on Wed. Dec. 21, and remove them during the lunch break. Demonstrations of working hardware and software for CNNs will be held throughout the workshop during breaks. Demonstration authors should bring their material for set-up at 8 am on the day agreed upon, and remove it at the end of sessions.

FURTHER INFORMATION

Please contact:
CNNA-94
tel. +39-6-44585836
fax.
+39-6-4742647
e-mail: cnna at tce.ing.uniroma1.it

From ai at aaai.org Mon Dec 12 15:36:08 1994
From: ai at aaai.org (AAAI)
Date: Mon, 12 Dec 94 12:36:08 PST
Subject: AAAI 1995 Fall Symposium Series Call for Participation
Message-ID: <9412122036.AA19948@aaai.org>

AAAI 1995 Fall Symposium Series
Call for Participation
November 10-12, 1995
Massachusetts Institute of Technology
Cambridge, Massachusetts

Sponsored by the American Association for Artificial Intelligence
445 Burgess Drive, Menlo Park, CA 94025
(415) 328-3123 (voice), (415) 321-4457 (fax), fss at aaai.org

The American Association for Artificial Intelligence presents the 1995 Fall Symposium Series, to be held Friday through Sunday, November 10-12, 1995, at the Massachusetts Institute of Technology. The topics of the eight symposia in the 1995 Fall Symposium Series are:

- Active Learning
- Adaptation of Knowledge for Reuse
- AI Applications in Knowledge Navigation and Retrieval
- Computational Models for Integrating Language and Vision
- Embodied Language and Action
- Formalizing Context
- Genetic Programming
- Rational Agency: Concepts, Theories, Models, and Applications

Symposia will be limited to between forty and sixty participants. Each participant will be expected to attend a single symposium. Working notes will be prepared and distributed to participants in each symposium. A general plenary session, in which the highlights of each symposium will be presented, will be held on Saturday, November 11, and an informal reception will be held on Friday, November 10. In addition to invited participants, a limited number of other interested parties will be able to register in each symposium on a first-come, first-served basis. Registration will be available by 1 August 1995. To obtain registration information, write to the AAAI at 445 Burgess Drive, Menlo Park, CA 94025 (fss at aaai.org).

Submission Dates
- Submissions for the symposia are due on April 14, 1995.
- Notification of acceptance will be given by May 19, 1995.
- Material to be included in the working notes of the symposium must be received by September 1, 1995.

See the appropriate section below for specific submission requirements for each symposium. This document is available as http://www.ai.mit.edu/people/las/aaai/fss-95/fss-95-cfp.html

*******************************************************************************

ACTIVE LEARNING

An active learning system is one that can influence the training data it receives by actions or queries to its environment. Properly selected, these actions can drastically reduce the amount of data and computation required by a machine learner. Active learning has been studied independently by researchers in machine learning, neural networks, robotics, computational learning theory, experiment design, information retrieval, and reinforcement learning, among other areas. This symposium will bring researchers together to clarify the foundations of active learning and point out synergies to build on.

Submission Information

Potential participants should submit a position paper (at most two pages) discussing what the participant could contribute to a dialogue on active learning and/or what they hope to learn by participating. Suggested topics include:

Theory: What are the important results in the theory of active learning and what are the important open problems? How much guidance does theory give to application?

Algorithms: What successful algorithms have been found for active learning? How general are they? For what tasks are they appropriate?
Evaluation: How can accuracy, convergence, and other properties of active learning algorithms be evaluated when, for instance, data is not sampled randomly?

Taxonomy: What kinds of information are available to learners (e.g. membership vs. equivalence queries, labeled vs. unlabeled data) and what are the ways learning methods can use them? What are the commonalities among methods studied by different fields?

Papers should be sent to David D. Lewis, lewis at research.att.com, AT&T Bell Laboratories, 600 Mountain Ave., Room 2C-408, Murray Hill, NJ 07974-0636. Electronic mail submissions are strongly preferred.

Symposium Structure

The symposium will be broken into sessions, each dedicated to a major theme identified within the position papers. Sessions will begin with a background presentation by an invited speaker, followed by brief position statements from selected participants. A significant portion of each session will be reserved for group discussion, guided by a moderator and focused on the core issue for the session. The final session of the symposium will accommodate new issues that are raised during sessions.

Organizing Committee

David A. Cohn (cochair), MIT, cohn at psyche.mit.edu; David D. Lewis (cochair), AT&T Bell Labs, lewis at research.att.com; Kathryn Chaloner, U. Minnesota; Leslie Pack Kaelbling, Brown U.; Robert Schapire, AT&T Bell Labs; Sebastian Thrun, U. Bonn; Paul Utgoff, U. Mass Amherst.

******************************************************************************

ADAPTATION OF KNOWLEDGE FOR REUSE

Several areas in AI address issues of creating and storing knowledge constructs (such as cases, plans, designs, specifications, concepts, domain theories, schedules). There is broad interest in reusing these constructs in similar problem-solving situations so as to avoid expensive re-derivation. Adaptation techniques have been developed to support reuse in frameworks such as analogical problem solving, case-based reasoning, problem reformulation, or representation change, and in task domains such as creativity, design, planning, program transformation or software reuse, schedule revision, and theory revision. However, many open issues remain, and progress on issues such as case adaptation would substantially assist many researchers and practitioners.

Our goals are to characterize the approaches to adaptation employed in various AI subfields, define the core issues in adaptation of knowledge, and advance the state of the art in addressing these issues. We intend that presentations will investigate novel solutions to unsolved problems in adaptation, reflect diverse viewpoints, and focus on adaptation issues that are common to several subfields of AI. Discussions will be held on the strengths and limitations of adaptation techniques and their interrelationships. Invited talks will be given by experts who will discuss methods for the adaptation of various types of knowledge constructs.

Two panels will be held. First, researchers studying knowledge adaptation from different perspectives will discuss how approaches used in their community differ from those used elsewhere, focusing on their potential benefits for other problems. Panelists in the second panel will lead discussions on identifying the core issues in knowledge adaptation raised in the presentations and the impact of the proposed methods on addressing these issues.
Submission Information

Anyone interested in presenting relevant material is invited to email PostScript submissions to aha at aic.nrl.navy.mil using the six-page AAAI-94 proceedings format. Anyone interested in attending is asked to submit a two-page research statement and a list of relevant publications. Please see http://www.aic.nrl.navy.mil/~aha/aaai95-fss/home.html for further information.

Organizing Committee

David W. Aha (cochair), NRL, aha at aic.nrl.navy.mil; Brian Falkenhainer, Xerox; Eric K. Jones, Victoria University; Subbarao Kambhampati, Arizona State University; David Leake, Indiana University; Ashwin Ram (cochair), Georgia Institute of Technology, ashwin at cc.gatech.edu.

*****************************************************************************

AI APPLICATIONS IN KNOWLEDGE NAVIGATION AND RETRIEVAL

The diversity and volume of accessible on-line data are increasing dramatically. As a result, existing tools for searching and browsing information are becoming less effective. The increasing use of non-text data such as images, audio and video has amplified this trend. Knowledge navigation systems are knowledge-based interfaces to information resources. They allow users to investigate the contents of complex and diverse sources of data in a natural manner. Examples include intelligent browsers that can help direct a user through a large multi-dimensional information space, agents that users can direct to perform information-finding tasks, and knowledge-based intermediaries that employ retrieval strategies to gather information relevant to a user's request.

The purpose of this symposium is to examine the state of the art in knowledge navigation by examining existing applications and by discussing new techniques and research directions. We encourage two types of submissions: work-in-progress papers that point towards the future of this research area, and demonstrations of knowledge navigation systems. Some research issues of interest:

- Indexing: What indexing methods are appropriate and feasible for knowledge navigation systems? How can indices be extracted from data?
- Retrieval: What retrieval methods are appropriate for knowledge navigation? What retrieval strategies can be employed?
- Learning: How can knowledge navigation systems adapt to a changing knowledge environment and to user needs?
- User interfaces: What are the characteristics of a useful navigational interface? What roles can or should an "agent" metaphor play in such interfaces? How can a navigation system orient the user in the information space?
- Multi-source integration: How can multiple data and knowledge sources be integrated to address users' needs?
- Multimedia: What are the challenges presented by multimedia information sources?

Submission Information

The symposium will consist of invited talks, presentations, and hands-on demonstration/discussion sessions. Interested participants should submit a short paper (8 pages maximum) addressing a research issue in knowledge navigation or describing a knowledge navigation system that can be made available for hands-on demonstration at the symposium. System descriptions should clearly indicate the novel and interesting features of the system to be presented and its applicability to the central problems in knowledge navigation. Those wishing to demonstrate should also include a one-page description of their hardware and connectivity requirements.
Send, by email, either a URL pointing to a PostScript version of the paper or the PostScript copy itself to aiakn at cs.uchicago.edu. Or, send 5 hard copies to Robin Burke, AI Applications in Knowledge Navigation, University of Chicago, Department of Computer Science, 1100 E. 58th St., Chicago, IL 60637. For further information, a web page for this symposium is located at http://www-cs.uchicago.edu/~burke/aiakn.html

Organizing Committee

Robin Burke (chair), University of Chicago, burke at cs.uchicago.edu; Catharine Baudin, NASA Ames; Su-Shing Chen, National Science Foundation; Kristian Hammond, University of Chicago; Christopher Owens, Bolt, Beranek & Newman.

Program Committee

Ray Bariess, Institute for the Learning Sciences; Alon Levy, AT&T Bell Laboratories; Jim Mayfield, University of Maryland, Baltimore County; Dick Osgood, Andersen Consulting.

******************************************************************************

COMPUTATIONAL MODELS FOR INTEGRATING LANGUAGE AND VISION

This symposium will focus on research issues in developing computational models for integrating language and vision. The intrinsic difficulty of both natural language processing and computer vision has discouraged researchers from attempting integration, although in some cases integration may actually simplify the individual tasks, as in collateral-based vision or in resolving ambiguous sentences through the use of visual information. Developing a bridge between language and vision is nontrivial, because the correspondence between words and images is not one-to-one. Much has been said about the necessity of linking language and perception for a system to exhibit intelligent behavior, but there has been relatively little work on developing computational models for this task. A natural-language understanding system should be able to understand and make references to the visual world. The use of scene-specific context (obtained from written or spoken text accompanying a scene) could greatly enhance the performance of computer vision systems. Some topics to be addressed are:

- use of collateral text in image and graphics understanding
- generating natural-language descriptions of visual data (e.g., event perception in image sequences)
- identifying and extracting visual information from language
- understanding spatial language, spatial reasoning
- knowledge representation for linguistic and visual information, hybrid (language and visual) knowledge bases
- use of visual data in disambiguating/understanding text
- content-based retrieval from integrated text/image databases
- language-based scene modeling (e.g., picture or graphics generation)
- cognitive theories connecting language and perception

Submission Information

The symposium will consist of invited talks, panel discussions, individual presentations and group discussions. Those interested in making a presentation should submit a technical paper (not to exceed 3,000 words). Other participants should submit either a position paper or a research abstract. Email submissions in PostScript format are encouraged. Send to burhans at cs.buffalo.edu. Alternatively, 4 hard copies may be sent to Rohini Srihari, CEDAR/SUNY at Buffalo, UB Commons, 520 Lee Entrance, Suite 202, Buffalo, NY 14228-2567. Further information on this symposium may be found at http://www.cedar.buffalo.edu/Piction/FSS95/CFP.html. Please address questions to Debra Burhans (burhans at cs.buffalo.edu) or Rajiv Chopra (rchopra at cs.buffalo.edu).
Organizing Committee

Janice Glasgow, Queen's University; Ken Forbus, Northwestern University; Annette Herskovits, Wellesley College; Gordon Novak, University of Texas at Austin; Candace Sidner, Lotus Development Corporation; Jeffrey Siskind, University of Toronto; Rohini K. Srihari (chair), CEDAR, SUNY at Buffalo, rohini at cedar.buffalo.edu; Thomas M. Strat, SRI International; David Waltz, NEC Research Institute.

*******************************************************************************

EMBODIED LANGUAGE AND ACTION

This symposium focuses on agents that can use language or similar communication, such as gesture, to facilitate extended interactions in a shared physical or simulated world. We examine how this embodiment in a shared world both stimulates communication and provides a resource for understanding it. Our focus is on the design of artificial agents, implemented in software, hardware, or as animated characters. Papers should clearly relate the technical content presented to one of the following tasks:

- Two or more communicating agents work together to construct, carry out maintenance on, or destroy a physical or simulated artifact (Collaborative Engagement).
- An agent assists a human by fetching or delivering physical or software objects. The human communicates with the agent about what is to be fetched or delivered to where (Delivery Assistance).

We solicit papers on the following issues (not to the exclusion of others):

- Can task contexts act as resources for communication by simplifying the interpretation and production of communicative acts?
- How do physical embodiment and its concomitant resource limitations affect an agent's ability to interpret or generate language?
- Can architectures designed to support perception and action support language or other forms of communication?
- How can agents mediate between the propositional representations of language and the (often) non-propositional representations of perception and action?
- What tradeoffs exist between the use of communication to improve the agents' task performance and the additional overhead involved in understanding and generating messages?
- Do differences between communication used to support concurrent task execution and communication used to support planning reflect deeper differences in agent ability?
- What is the role of negotiation, whether of task responsibilities or of reference and meaning, in such situated task environments?

Submission Information

Interested participants should submit either (1) a paper (in 12 pt font, not to exceed 3000 words), or (2) a statement of interest briefly describing the author's relevant work in this area and listing recent relevant publications. Send contributions, plain ascii or PostScript, to ian at ai.mit.edu. If electronic submission is impossible, mail 6 copies to Ian Horswill, MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139.

Organizing Committee

John Batali, UCSD; Jim Firby, University of Chicago; Ian Horswill (cochair), MIT, ian at ai.mit.edu; Marilyn Walker (cochair), Mitsubishi Cambridge Research Labs, walker at merl.com; Bonnie Webber, University of Pennsylvania.

******************************************************************************

FORMALIZING CONTEXT

The notion of context has played an important role in AI systems for many years. However, formal logical explication of contexts remains an area of research with significant open issues.
This symposium will provide a forum for discussing formalizations of contexts, approaches to resolving open issues, and application areas for context formalisms.

The most ambitious goal of formalizing contexts is to make automated reasoning systems which are never permanently stuck with the concepts they use at a given time, because they can always transcend the context they are in. Such a capability would allow the designer of a reasoning system to include only such phenomena as are required for the system's immediate purpose, retaining the assurance that if a broader system is required later, "lifting rules" can be devised to restate the facts from the narrow context in the broader context, with qualifications added as necessary. A formal theory of context in which sentences are always considered as asserted within a context could provide a basis for such transcendence.

Formal theories of context are also needed to provide a representation of the context associated with a particular circumstance, e.g. the context of a conversation in which terms have particular meanings that they wouldn't have in the language in general. Linguists and philosophers have already studied similar notions of context. An example is the situation theory that has been proposed in philosophy and applied to linguistics. However, these theories usually lie embedded in the analysis of specific linguistic constructions, so locating the exact match with AI concerns is itself a research challenge.

This symposium aims to bring together researchers who have studied or applied contexts in AI or related fields. Technical papers dealing with formalizations of context, the problem of generality, and the use of context in common sense reasoning are especially welcome. However, survey papers which focus on contexts from other points of view, such as philosophy, linguistics, or natural language processing, or which apply contexts in other areas of AI, are also encouraged.

Submission Information

Persons wishing to make presentations should submit papers (up to 12 pages, 12 pt font). Persons wishing only to attend should submit a 1-2 page research summary including a list of relevant publications. A PostScript file or 8 paper copies should be sent to the program chair: Sasa Buvac, Department of Computer Science, Stanford University, Stanford, CA 94305-2140, buvac at sail.stanford.edu. Limited funding will be available to support student travel.

Organizing Committee

Sasa Buvac (chair), Stanford University, buvac at sail.stanford.edu; Richard Fikes, Stanford University; Ramanathan Guha, MCC; Pat Hayes, Beckman Institute; John McCarthy, Stanford University; Murray Shanahan, Imperial College; Robert Stalnaker, MIT; Johan van Benthem, University of Amsterdam.

******************************************************************************

GENETIC PROGRAMMING

Genetic programming (GP) extends the genetic algorithm to the domain of computer programs. In genetic programming, populations of programs are genetically bred to solve problems. Genetic programming can solve problems of system identification, classification, control, robotics, optimization, game-playing, and pattern recognition. Starting with a primordial ooze of hundreds or thousands of randomly created programs composed of functions and terminals appropriate to the problem, the population is progressively evolved over a series of generations by applying the operations of Darwinian fitness-proportionate reproduction and crossover (sexual recombination).
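[To make the mechanics concrete, here is a minimal Python sketch of the loop just described - an editorial illustration, not material from the symposium. The symbolic-regression target x*x + x, the function set, the population size, and every other setting are arbitrary assumptions. Programs are nested tuples of functions and terminals, reproduction is fitness-proportionate, and crossover swaps random subtrees.]

import random

random.seed(1)

FUNCS = {'add': lambda a, b: a + b,
         'sub': lambda a, b: a - b,
         'mul': lambda a, b: a * b}
TERMS = ['x', 1.0]

def random_tree(depth=3):
    # Grow a random program tree from the function and terminal sets.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    name = random.choice(list(FUNCS))
    return (name, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    name, left, right = tree
    return FUNCS[name](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Fitness-proportionate selection needs a positive score; use the
    # inverse squared error against the (made-up) target x*x + x.
    err = sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-3, 4))
    return 1.0 / (1.0 + err)

def paths(tree, prefix=()):
    # Yield the position of every subtree as an index path into the tuples.
    yield prefix
    if isinstance(tree, tuple):
        for i in (1, 2):
            yield from paths(tree[i], prefix + (i,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def put(tree, path, sub):
    if not path:
        return sub
    parts = list(tree)
    parts[path[0]] = put(parts[path[0]], path[1:], sub)
    return tuple(parts)

def size(tree):
    return 1 if not isinstance(tree, tuple) else 1 + size(tree[1]) + size(tree[2])

def offspring(pop, weights):
    # Pick two parents in proportion to fitness, then cross them over:
    # a random subtree of parent b replaces a random subtree of parent a.
    a, b = random.choices(pop, weights=weights, k=2)
    child = put(a, random.choice(list(paths(a))),
                get(b, random.choice(list(paths(b)))))
    # Crude bloat control: oversized offspring are replaced by fresh programs.
    return child if size(child) <= 60 else random_tree()

pop = [random_tree() for _ in range(200)]
for generation in range(20):
    weights = [fitness(t) for t in pop]
    pop = [offspring(pop, weights) for _ in range(len(pop))]

best = max(pop, key=fitness)
print('best fitness:', fitness(best))
print('best program:', best)

[A real GP system would add mutation, stronger bloat control, and extensions such as the automatically defined functions listed among the topics below; the sketch shows only the bare evolutionary loop.]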
Topics of interest for the symposium include:

- The theoretical basis of genetic programming
- Applications of genetic programming
- Rigorousness of validation techniques
- Hierarchical decomposition, e.g. automatically defined functions
- Competitive coevolution
- Automatic parameter tuning
- Representation issues
- Genetic operators
- Establishing standard benchmark problems
- Parallelization techniques
- Innovative variations

Submission Information

The format of the symposium will encourage interaction and discussion, but will also include formal presentations. Persons wishing to make a presentation should submit an extended abstract of up to 2500 words on their work in progress or completed work. For accepted abstracts, full papers will be due at a date closer to the symposium. Persons not wishing to make a presentation are asked to submit a one-page description of their research interests, since there may be limited room for participation. Submit your abstract or one-page description as plain text electronically by Friday April 14, 1995, with a hard-copy backup, to Eric V. Siegel, AAAI GP Symposium Co-Chair, Columbia University, Department of Computer Science, 500 W 120th Street, New York, NY 10027, USA; telephone: 212-939-7112, fax: 212-666-0140, e-mail: evs at cs.columbia.edu.

Organizing Committee

Robert Collins, USAnimation, Inc.; Frederic Gruau, Stanford University; John R. Koza (co-chair), Stanford University, koza at cs.stanford.edu; Conor Ryan, University College Cork; Eric V. Siegel (co-chair), Columbia University, evs at cs.columbia.edu; Andy Singleton, Creation Mechanics, Inc.; Astro Teller, Carnegie-Mellon University.

******************************************************************************

RATIONAL AGENCY: CONCEPTS, THEORIES, MODELS, AND APPLICATIONS

This symposium explores conceptions of rational agency and their implications for theory, research, and practice. The view that intelligent systems are, or ought to be, rational agents underlies much of the theory and research in artificial intelligence and cognitive science. However, no consensus exists on a proper view of agency or on rationality principles for practical agents. Traditionally, agents are presumed disposed toward purposive action. However, agent theories abound in which behavior is fundamentally reactive. Some theories emphasize agents' abilities to manage private belief systems. Others focus on agents' interactions with their environment, sometimes including other agents. Application builders have recently broadened the term "agent" to mean any embedded system performing tasks to support human users.

Rationality accounts are equally diverse. Rationality involves having reasons warranting particular beliefs (epistemic rationality) or particular desires and actions (strategic or practical rationality). Many agent models propose epistemic rationality criteria such as logical consistency or consequential closure. Other agent models base practical rationality on classical or non-monotonic logics for reasoning about action. Such logicist views are now being challenged by decision-theoretic accounts emphasizing optimal action under uncertainty, including recent work on decision-theoretic principles of metareasoning for limited rationality. Our symposium will explore the diverse views of rational agency.
Through informal presentations and group discussion, participants will critically examine agency concepts and rationality principles, review computational agent models and applications, and consider promising directions for future work on this topic.

Submission Information

Prospective participants should submit a brief paper (5 pages or less) describing their research in relation to any of the following questions:

- Is rationality important; must an agent be rational to be successful?
- What are suitable principles of epistemic, strategic, and limited rationality?
- Are rationality principles applicable to retrospective processes such as learning?
- What are general requirements on rational agent architectures?
- How, if at all, must a model of rational agency be modified to account for social, multi-agent interaction?

Those wishing to make a specific presentation should describe its contents in their concept paper. Note: While we recognize that our topic lends itself to formal analysis, we also encourage discussion of experimental work with implemented agents. PostScript files of concept papers should be sent by email only to the program chair, fehling at lis.stanford.edu.

Organizing Committee

Michael Fehling (chair), Stanford University, fehling at lis.stanford.edu; Don Perlis, University of Maryland; Martha Pollack, University of Pittsburgh; John Pollock, University of Arizona.

From tesauro at watson.ibm.com Thu Dec 8 14:28:41 1994
From: tesauro at watson.ibm.com (tesauro@watson.ibm.com)
Date: Thu, 8 Dec 94 14:28:41 EST
Subject: NIPS*94 Top Ten List
Message-ID:

By popular demand, here once again is the Top Ten List. Thanks go to my comedy-writing collaborators, Mike Mozer and Don Mathis of the Univ. of Colorado. --Gerry Tesauro

------------------------------------------------------------------
Top 10 Little-Known Improvements to NIPS This Year

10. No more of that annoying math!
9. To keep speakers on their toes, a gong at every table.
8. Special prize for "Most Laughable Poster"
7. Push pins available to everyone, not just to poster presenters.
6. Special evening session in which senior researchers display their surgical scars.
5. Hotel staff all licensed to back-propagate.
4. More rock, less talk.
3. Student volunteers at the registration desk will finally stop asking "You want fries with that?"
2. Invited speakers disqualified if they test positive for steroids.
1. Morgan Kaufmann to publish special "Swimsuit Edition" of conference proceedings.

From dwang at cis.ohio-state.edu Mon Dec 12 10:50:23 1994
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Mon, 12 Dec 1994 10:50:23 -0500
Subject: About sequential learning (or interference)
Message-ID:

It was reported some time ago that multilayer perceptrons suffer from the problem of so-called "catastrophic interference", meaning that later training will destroy previously acquired "knowledge" (see McCloskey & Cohen, Psychol. of Learning and Motivat. 24, 1989; Ratcliff, Psychol. Rev. 97, 1990). This seems to be a serious problem if we want to use neural networks both as a stable knowledge store and as a long-term problem solver.

The problem seems to exist in associative memory models as well, even though the simple prescription of the Hopfield net can easily incorporate more patterns as long as they are within the capacity limit. But we all know that the original Hopfield net does not work well for correlated patterns, which are the most interesting ones for real applications.
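[As an illustration of the effect described above, here is a minimal numpy sketch - an editorial example, not code from the thread; the network size, random binary data, learning rate, and epoch count are all arbitrary assumptions. It trains a one-hidden-layer backpropagation network on a set A of random associations, then on a disjoint set B with no rehearsal of A, and prints how accuracy on A collapses.]

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W1, W2, X, T, epochs=3000, lr=0.5):
    # Plain batch backpropagation for a one-hidden-layer sigmoid net;
    # the weight matrices are updated in place.
    for _ in range(epochs):
        H = sigmoid(X @ W1)            # hidden activations
        Y = sigmoid(H @ W2)            # outputs
        dY = (Y - T) * Y * (1.0 - Y)   # output-layer delta
        dH = (dY @ W2.T) * H * (1.0 - H)
        W2 -= lr * H.T @ dY
        W1 -= lr * X.T @ dH

def accuracy(W1, W2, X, T):
    Y = sigmoid(sigmoid(X @ W1) @ W2)
    return float(np.mean((Y > 0.5) == (T > 0.5)))

n_in, n_hid, n_out = 8, 16, 8
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))

# Two unrelated sets of random binary associations, learned one after the other.
XA = rng.integers(0, 2, (6, n_in)).astype(float)
TA = rng.integers(0, 2, (6, n_out)).astype(float)
XB = rng.integers(0, 2, (6, n_in)).astype(float)
TB = rng.integers(0, 2, (6, n_out)).astype(float)

train(W1, W2, XA, TA)
print("set A accuracy after learning A:", accuracy(W1, W2, XA, TA))

train(W1, W2, XB, TB)   # sequential training on B only, no rehearsal of A
print("set A accuracy after learning B:", accuracy(W1, W2, XA, TA))
print("set B accuracy after learning B:", accuracy(W1, W2, XB, TB))

[Rehearsal is the obvious counter-measure: interleaving the old patterns while training on B largely preserves performance on A, which is why the iterative scheme attributed to Diederich and Opper below brings all previously acquired patterns back into training.]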
Kanter and Sompolinsky proposed a prescription to tackle the problem (Phys. Rev. A, 1987). Their prescription, however, requires nonlocal learning. Diederich and Opper (1987, Phys. Rev. Lett.) later proposed a local, iterative learning rule (similar to perceptron learning) that they show will converge to the prescription (which is the old idea of orthogonalization). According to their paper, to learn a new pattern one needs to bring in all previously acquired patterns during iterative training in order to make all of the patterns converge to the desired prescription. Because of this learning scheme, I suspect that the algorithm of Diederich and Opper also suffers from "catastrophic forgetting".

Is that a fair assessment of the problem? Has any major effort been taken to address this important problem?

DeLiang Wang

From jaap.murre at mrc-apu.cam.ac.uk Tue Dec 13 11:59:33 1994
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Tue, 13 Dec 94 16:59:33 GMT
Subject: hypertransfer and interference
Message-ID: <9412131659.AA03848@rigel.mrc-apu.cam.ac.uk>

DeLiang Wang asks whether any major effort has been undertaken to investigate and possibly remedy the problem of catastrophic interference. A fairly detailed analysis of this problem can be found in:

Murre, J.M.J. (1992b). Categorization and learning in modular neural networks. Hemel Hempstead: Harvester Wheatsheaf. Co-published by Lawrence Erlbaum in the USA and Canada (Hillsdale, NJ).

I have recently completed a paper that shows that backpropagation not only suffers from 'catastrophic interference' but also from 'hypertransfer', i.e., in some circumstances performance on a set A actually *improves* when learning a second set B. The learning transfer effects are catastrophic (or hyper) with respect to human learning data. The paper also shows that two-layer networks do not suffer from excessive transfer and are in fact in very close accordance with the human interference data as summarized in the classic paper by Osgood (1949):

Murre, J.M.J. (in press). Transfer of learning in backpropagation networks and in related neural network models. To appear in Levy, Bairaktaris, Bullinaria, & Cairns (Eds.), Connectionist Models of Memory and Language. London: UCL Press. (in ftp directory: hyper1.ps)

I have put the paper in our anonymous ftp directory:

ftp ftp.mrc-apu.cam.ac.uk
cd /pub/nn/murre
bin
get hyper1.ps

From lpratt at franklinite.Mines.Colorado.EDU Tue Dec 13 05:20:35 1994
From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. Pratt)
Date: Tue, 13 Dec 94 11:20:35 +0100
Subject: About sequential learning (or interference)
Message-ID: <9412131020.AA01764@franklinite.Mines.Colorado.EDU>

Deliang,

My own work on transfer between neural networks has addressed the sequential learning problem; see pratt-93, cited below, and lots of other papers available from my web page: http://vita.mines.colorado.edu:3857/1s/lpratt. My formulation differs from others in that, if the new task is essentially different from the old task, I intentionally lose old-task performance if necessary to improve on the new task. Other people who have looked at a similar formulation include sharkey-92, sharkey-93, naik-92b, agarwal-92, and martin-88. Those who have attempted to preserve old-task performance include mccloskey-89, and recent work by Thrun & Mitchell, especially thrun-93a, but also including thrun-92, thrun-93b.
Sebastian and I had a long talk at NIPS about a general formulation for transfer as a solution to the sequential learning problem -- he's got an excellent formulation along these lines.

Approaches to modular training can also be viewed as one way of handling the sequential learning problem, in the special case where the old tasks can be viewed as subtasks of the new task (i.e. pratt-91, waibel-89, and Jacobs' and others' recent work on mixtures of experts, including Pomerleau's approach to Alvinn; see last year's NIPS).

My thesis (see my web page) has a more detailed review of all this stuff. Hope that helps.

--Lori, recovering from NIPS

@incollection{ pratt-93, MYKEY = " pratt-93 : .con .bap", EDITOR = "C.L. Giles and S. J. Hanson and J. D. Cowan", BOOKTITLE = "{Advances in Neural Information Processing Systems 5}", AUTHOR = "L. Y. Pratt", TITLE = "Discriminability-Based Transfer between Neural Networks", ADDRESS = "San Mateo, CA", PUBLISHER = "Morgan Kaufmann Publishers", YEAR = 1993, PAGES = {204--211}, CATALOGDATE = "April 12, 1993", NOTE = "Also available via anonymous ftp to franklinite.mines.colorado.edu: pub/pratt-papers/pratt-nips5.ps.Z", } @inproceedings{ sharkey-92, MYKEY = " sharkey-92 : ", TITLE = "Adaptive generalisation and the transfer of knowledge", AUTHOR = "Noel E. Sharkey and Amanda J. C. Sharkey", BOOKTITLE = "Proceedings of the Second Irish Neural Networks Conference, Belfast", YEAR = 1992, CATALOGDATE = "December 18, 1992", } @article{ sharkey-93, MYKEY = " sharkey-93 : ", TITLE = "Adaptive generalisation and the transfer of knowledge", AUTHOR = "N. E. Sharkey and A. J. C. Sharkey", NUMBER = "In press", VOLUME = "Special Issue on Connectionism", JOURNAL = "AI Review", YEAR = 1993, ANNOTE = "In press", CATALOGDATE = "May 19, 1993", } @inproceedings{ naik-92b, MYKEY = " naik-92b : .eml .unb .con ", AUTHOR = "D. K. Naik and R. J. Mammone and A. Agarwal", TITLE = "Meta-Neural Network approach to learning by learning", YEAR = 1992, BOOKTITLE = "Intelligence Engineering Systems through Artificial Neural Networks", ORGANIZATION = "The American Society of Mechanical Engineers", PUBLISHER = "ASME Press", VOLUME = 2, PAGES = {245--252}, CATALOGDATE = "December 23, 1992", } @inproceedings{ agarwal-92, MYKEY = " agarwal-92 : .eml .unb .con ", AUTHOR = "A. Agarwal and R. J. Mammone and D. K. Naik", TITLE = "An on-line Training Algorithm to Overcome Catastrophic Forgetting", BOOKTITLE = "Intelligence Engineering Systems through Artificial Neural Networks", YEAR = 1992, ORGANIZATION = "The American Society of Mechanical Engineers", PUBLISHER = "ASME Press", VOLUME = 2, PAGES = {239--244}, CATALOGDATE = "December 23, 1992", } @techreport{ martin-88, MYKEY = " martin-88 : ", TITLE = "The Effects of Old Learning on New in Hopfield and Backpropagation Nets", AUTHOR = "Gale Martin", KEY = "martin-88", INSTITUTION = "Microelectronics and Computer Technology Corporation (MCC)", NUMBER = "ACA-HI-019", YEAR = 1988, CATALOGDATE = "January 3, 1992", } @article{ mccloskey-89, MYKEY = " mccloskey-89 : .unb .con .csy .adap", TITLE = "Catastrophic interference in connectionist networks: the sequential learning problem", KEY = "mccloskey-89", AUTHOR = "Michael McCloskey and Neal J. Cohen", JOURNAL = "The psychology of learning and motivation", VOLUME = 24, YEAR = 1989, CATALOGDATE = "April 9, 1991", } @inproceedings{ thrun-93a, MYKEY = " thrun-93a : ", TITLE = "Lifelong Robot Learning", AUTHOR = "Sebastian B. Thrun and Tom M.
Mitchell", BOOKTITLE = "Proceedings of the {NATO} {ASI}: The biology and technology of intelligent autonomous agents", EDITOR = "Luc Steels", YEAR = 1993, KEY = "thrun-93a", CATALOGDATE = "September 1, 1993", } @INPROCEEDINGS{thrun-93b, AUTHOR = {Thrun, Sebastian B. and Mitchell, Tom M.}, TITLE = {Integrating Inductive Neural Network Learning and Explanation-Based Learning}, BOOKTITLE = {Proceedings of IJCAI-93}, YEAR = {1993}, ORGANIZATION = {IJCAI, Inc.}, PUBLISHER = {}, ADDRESS = {Chamberry, France}, MONTH = {}, CATALOGDATE = "October 11, 1993", } @inproceedings{ pratt-91, MYKEY = " pratt-91 : .min .bap .app .spc .con ", AUTHOR = "Lorien Y. Pratt and Jack Mostow and Candace A. Kamm", TITLE = "Direct Transfer of Learned Information among Neural Networks", BOOKTITLE = "Proceedings of the Ninth National Conference on Artificial Intelligence (AAAI-91)", PAGES = {584--589}, ADDRESS = "Anaheim, CA", YEAR = 1991, } @article{ waibel-89, MYKEY = " waibel-89 : .bap .unr .unb .tem .spc .con ", TITLE = "Modular Construction of Time-Delay Neural Networks for Speech Recognition", AUTHOR = "Alexander Waibel", journal = "Neural Computation", volume = 1, pages = {39--46}, year = 1989 }  From wermter at nats2.informatik.uni-hamburg.de Tue Dec 13 06:04:05 1994 From: wermter at nats2.informatik.uni-hamburg.de (Stefan Wermter) Date: Tue, 13 Dec 94 12:04:05 +0100 Subject: book on hybrid connectionist language processing Message-ID: <9412131104.AA01259@nats2.informatik.uni-hamburg.de> BOOK ANNOUNCEMENT ----------------- The following book is now available from the beginning of December 1994. Title: Hybrid connectionist natural language processing Date: 1995 Author: Stefan Wermter Dept. of Computer Science University of Hamburg Vogt-Koelln-Str. 30 D-22527 Hamburg Germany wermter at informatik.uni-hamburg.de Series: Neural Computing Series 7 Publisher: Chapman & Hall Inc 2-6 Boundary Row London SE1 8HN England (Order information in the end of this message) Description ----------- The objective of this book is to describe a new approach in hybrid connectionist natural language processing which bridges the gap between strictly symbolic and connectionist systems. This objective is tackled in two ways: the book gives an overview of hybrid connectionist archi- tectures for natural language processing; and it demonstrates that a hybrid connectionist architecture can be used for learning real-world natural language problems. The book is primarily intended for scientists and students interested in the fields of artificial intelligence, neural networks, connectionism, natural language processing, hybrid symbolic connectionist architectures, parallel distributed processing, machine learning, automatic knowledge acquisition or computational linguistics. Furthermore, it might be of interest for scientists and students in information retrieval and cognitive science, since the book points out interdisciplinary relationships to these fields. We develop a systematic spectrum of hybrid connectionist architectures, from completely symbolic architectures to separated hybrid connectionist architectures, integrated hybrid connectionist architectures and completely connectionist architectures. Within this systematic spectrum we have designed a system SCAN with two separated hybrid connectionist architectures and two integrated hybrid connectionist architectures for a scanning understanding of phrases. A scanning understanding is a relation-based flat understanding in contrast to traditional symbolic in-depth understanding. 
Hybrid connectionist representations consist of either a combination of connectionist and symbolic representations or different connectionist representations. In particular, we focus on important tasks like structural disambiguation and semantic context classification. We show that a parallel, modular, constraint-based, plausibility-based and learned use of multiple hybrid connectionist representations provides powerful architectures for learning a scanning understanding. In particular, the combination of direct encoding of domain-independent structural knowledge and the connectionist learning of domain-dependent semantic knowledge, as suggested by a scanning understanding in SCAN, provides concepts which lead to flexible, adaptable, transportable architectures for different domains.

Table of Contents
-----------------

1 Introduction
1.1 Learning a Scanning Understanding
1.2 The General Approach
1.3 Towards a Hybrid Connectionist Memory Organization
1.4 An Overview of the SCAN Architecture
1.5 Organization and Reader's Guide

2 Connectionist and Hybrid Models for Language Understanding
2.1 Foundations of Connectionist and Hybrid Connectionist Approaches
2.2 Connectionist Architectures
2.2.1 Representation of Language in Parallel Spatial Models
      Early Pattern Associator for Past Tense Learning
      Pattern Associator for Semantic Case Assignment
      Pattern Associator with Sliding Window
      Time Delay Neural Networks
2.2.2 Representation of Language in Recurrent Models
      Recurrent Jordan Network for Action Generation
      Simple Recurrent Network for Sequence Processing
      Recursive Autoassociative Memory Network
2.2.3 Towards Modular and Integrated Connectionist Models
      Cascaded Networks
      Sentence Gestalt Model
      Grounding Models
2.3 Hybrid Connectionist Architectures
2.3.1 Sentence Analysis in Hybrid Models
      Hybrid Interactive Model for Constraint Integration
      Hybrid Model for Sentence Analysis
2.3.2 Inferencing in Hybrid Models
      Symbolic Marker Passing and Localist Networks
      Symbolic Reasoning with Connectionist Models
2.3.3 Architectural Issues in Hybrid Connectionist Systems
      Symbolic Neuroengineering and Symbolic Recirculation
      Modular Model for Parsing
2.4 Summary and Discussion

3 A Hybrid Connectionist Scanning Understanding of Phrases
3.1 Foundations of a Hybrid Connectionist Architecture
3.1.1 Motivation for a Hybrid Connectionist Architecture
3.1.2 The Computational Theory Level for a Scanning Understanding
3.1.3 Constraint Integration
3.1.4 Plausibility View
3.1.5 Learning
3.1.6 Subtasks of Scanning Understanding at the Computational Theory Level
3.1.7 The Representation Level for a Scanning Understanding
3.2 Corpora and Lexicon for a Scanning Understanding
3.2.1 The Underlying Corpora
3.2.2 Complex Phrases
3.2.3 Context and Ambiguities of Phrases
3.2.4 Organization of the Lexicon
3.3 Plausibility Networks
3.3.1 Learning Semantic Relationships and Semantic Context
3.3.2 The Foundation of Plausibility Networks
3.3.3 Plausibility Networks for Noun-Connecting Semantic Relationships
3.3.4 Learning in Plausibility Networks
3.3.5 Recurrent Plausibility Networks for Contextual Relationships
3.3.6 Learning in Recurrent Plausibility Networks
3.4 Summary and Discussion

4 Structural Phrase Analysis in a Hybrid Separated Model
4.1 Introduction and Overview
4.2 Constraints for Coordination
4.3 Symbolic Representation of Syntactic Constraints
4.3.1 A Grammar for Complex Noun Phrases
4.3.2 The Active Chart Parser and the Syntactic Constraints
4.4 Connectionist Representation of Semantic Constraints
4.4.1 Head-noun Structure for Semantic Relationships
4.4.2 Training and Testing Plausibility Networks with NCN-relationships
4.4.3 Learned Internal Representation
4.5 Combining Chart Parser and Plausibility Networks
4.6 A Case Study
4.7 Summary and Discussion

5 Structural Phrase Analysis in a Hybrid Integrated Model
5.1 Introduction and Overview
5.2 Constraints for Prepositional Phrase Attachment
5.3 Representation of Constraints in Relaxation Networks
5.3.1 Integrated Relaxation Network
5.3.2 The Relaxation Algorithm
5.3.3 Testing Relaxation Networks
5.4 Representation of Semantic Constraints in Plausibility Networks
5.4.1 Training and Testing Plausibility Networks with NPN-Relationships
5.4.2 Learned Internal Representation
5.5 Combining Relaxation Networks and Plausibility Networks
5.5.1 The Interface between Relaxation Networks and Plausibility Networks
5.5.2 The Dynamics of Processing in a Relaxation Network
5.6 A Case Study
5.7 Summary and Discussion

6 Contextual Phrase Analysis in a Hybrid Separated Model
6.1 Introduction and Overview
6.2 Towards a Scanning Understanding of Semantic Phrase Context
6.2.1 Superficial Classification in Information Retrieval
6.2.2 Skimming Classification with Symbolic Matching
6.3 Constraints for Semantic Context Classification of Noun Phrases
6.4 Syntactic Condensation of Phrases to Compound Nouns
6.4.1 Motivation of Symbolic Condensation
6.4.2 Condensation Using a Symbolic Chart Parser
6.5 Plausibility Networks for Context Classification of Compound Nouns
6.5.1 Training and Testing the Recurrent Plausibility Network
6.5.2 Learned Internal Representation
6.6 Summary and Discussion

7 Contextual Phrase Analysis in a Hybrid Integrated Model
7.1 Introduction and Overview
7.2 Constraints for Semantic Context Classification of Phrases
7.3 Plausibility Networks for Context Classification of Phrases
7.3.1 Training and Testing with Complete Phrases
7.3.2 Training and Testing with Phrases without Insignificant Words
7.3.3 Learned Internal Representation
7.4 Semantic Context Classification and Text Filtering
7.5 Summary and Discussion

8 General Summary and Discussion
8.1 The General Framework of SCAN
8.2 Analysis and Evaluation
8.2.1 Evaluating the Problems
8.2.2 Evaluating the Methods
8.2.3 Evaluating the Representations
8.2.4 Evaluating the Experiment Design
8.2.5 Evaluating the Experiment Results
8.3 Extensions of a Scanning Understanding
8.3.1 Extending Modular Subtasks
8.3.2 Extending Interactions
8.4 Contributions and Conclusions

9 Appendix
9.1 Hierarchical Cluster Analysis
9.2 Implementation
9.3 Examples of Phrases for Structural Phrase Analysis
9.4 Examples of Phrases for Contextual Phrase Analysis

References
Index

Order information
-----------------

ISBN: 0 412 59100 6
Pages: 190
Figures: 56
Price: 29.95 pounds sterling, 52.00 US dollars
Credit cards: all major credit cards accepted by Chapman & Hall

Please order from:

-- Pam Hounsome
   Chapman & Hall
   Cheriton House
   North Way
   Andover
   Hants SP10 5BE, England
   UK orders: Tel: 01264 342923 Fax: 01264 364418
   Overseas orders: Tel: +44 1264 342830 Fax: +44 1264 342761

(Sister company in US)
-- Peter Clifford
   Chapman & Hall Inc
   One Penn Plaza, 41st Floor
   New York NY 10119, USA
   Tel: 212 564 1060 Fax: 212 564 1505

-- or e-mail: order at Chaphall.com

From mario at physics.uottawa.ca Tue Dec 13 09:01:44 1994
From: mario at physics.uottawa.ca (Mario Marchand)
Date: Tue, 13 Dec 94 10:01:44 AST
Subject: Neural net and PAC learning paper: errata
Message-ID: <9412131401.AA27027@physics.uottawa.ca>

About a week ago, I sent you this message:
>The following paper, which was presented at the NIPS'94 conference,
>is available by anonymous ftp at:
>
>ftp://dirac.physics.uottawa.ca/usr2/ftp/pub/tr/marchand
>
>FileName: nips94.ps
>
>Title: Learning Stochastic Perceptrons Under k-Blocking Distributions
>
>Authors: Marchand M. and Hadjifaradji S.
>
>ALSO: you will find other papers co-authored by Mario Marchand in
>      this directory. The text file: Abstracts-mm.txt contains a
>      list of abstracts of all the papers.

In fact, the right path for ftp is:

ftp://dirac.physics.uottawa.ca/pub/tr/marchand

and the file name is: nips94.ps.Z (compressed)

Sorry for this inconvenience, and many thanks to those who have kindly reported the mistakes.

- mario

----------------------------------------------------------------
| UUU          UUU       Mario Marchand                         |
| UUU          UUU       -----------------------------          |
| UUU OOOOOOOOOOOOOOOO   Department of Physics                  |
| UUU OOO     UUU   OOO  University of Ottawa                   |
| UUUUUUUUUUUUUUUU  OOO  150 Louis Pasteur street               |
|     OOO           OOO  PO BOX 450 STN A                       |
|     OOOOOOOOOOOOOOOO   Ottawa (Ont) Canada K1N 6N5            |
|                                                               |
| ***** Internet E-Mail: mario at physics.uottawa.ca **********  |
| ***** Tel: (613)564-9293 ------------- Fax: 564-6712 *****    |
----------------------------------------------------------------

From bisant at gl.umbc.edu Tue Dec 13 20:21:48 1994
From: bisant at gl.umbc.edu (Mr. David Bisant)
Date: Tue, 13 Dec 1994 20:21:48 -0500
Subject: FLAME: WCNN'95 - science in the mud pits
In-Reply-To: <199412081651.LAA01963@garnet.cs.brandeis.edu>
Message-ID:

On Thu, 8 Dec 1994, Jordan Pollack wrote:

> >Authors must submit
> >registration payment with papers to be eligible for the early
> >registration fee...Registration fee includes 1995 membership and a
> >one (1) year subscription to the Journal Neural Networks.
> >A $35 per paper processing fee must be enclosed
> >in order for the paper to be refereed.
>
> Reading fees are charged by agents for aspiring science fiction
> writers. Linking registration to submission is nothing more than an
> admission that all papers are to be accepted. And a fee of $480 is
> paying a lot more than membership, subscription, and a few days of
> coffee breaks!

The new fee structure for WCNN did not strike me as being unusually high. I also think it will help solve a problem I observed last year with a few authors not showing up for their talks, which wasted valuable time slots for oral presentation and upset schedules. With interest in the neural network field diminishing, new problems in conferencing appear. I think their fee structure is a reasonable approach to solving these problems.

David Bisant (no affiliation with the conference organization)

From cga at cc.gatech.edu Tue Dec 13 21:35:22 1994
From: cga at cc.gatech.edu (Christopher G. Atkeson)
Date: Tue, 13 Dec 1994 21:35:22 -0500
Subject: About sequential learning (or interference)
Message-ID: <199412140235.VAA17037@lennon.cc.gatech.edu>

Memory-Based Learning is one approach to avoiding interference, and is in part descended from nearest neighbor neural networks, or at least neural networks with local representations. Some of our references that will point you to other work as well:

Atkeson, C.G. (1992) "Memory-Based Approaches To Approximating Continuous Functions." In: Casdagli, M., & Eubank, S. (eds.), "Nonlinear Modeling and Forecasting" Addison Wesley, Redwood City, CA.
Atkeson, C.G. ``Using Local Models to Control Movement'', Proceedings, Neural Information Processing Systems, Dec 1989, Denver, Colorado. In: Neural Information Processing Systems 2. Morgan Kaufmann, 1990.

Schaal, S., & Atkeson, C.G. (1994). "Robot Juggling: An Implementation of Memory-Based Learning." IEEE Control Systems Magazine, vol. 15, no. 1, pp. 57-71.

Chris Atkeson
Stefan Schaal

From jlm at crab.psy.cmu.edu Tue Dec 13 23:16:34 1994
From: jlm at crab.psy.cmu.edu (James L. McClelland)
Date: Tue, 13 Dec 94 23:16:34 EST
Subject: About sequential learning (or interference)
In-Reply-To: (message from DeLiang Wang on Mon, 12 Dec 1994 10:50:23 -0500)
Message-ID: <9412140416.AA17263@crab.psy.cmu.edu.psy.cmu.edu>

DeLiang Wang writes:

> It was reported some time ago that multilayer perceptrons suffer the problem
> of so-called "catastrophic interference", meaning that later training will
> destroy previously acquired "knowledge" (see McCloskey & Cohen, Psychol. of
> Learning and Motivat. 24, 1990; Ratcliff, Psychol. Rev. 97, 1990). This seems
> to be a serious problem, if we want to use neural networks both as a stable
> knowledge store and a long-term problem solver.

The brain appears to have solved this problem by storing new information in a special associative memory system in the hippocampus. According to this view, cortical (and some other non-hippocampal) systems learn slowly, using what I call 'interleaved learning'. Weights are adjusted a small amount after each experience, so that the overall direction of weight change is governed by the structure present in the ensemble of events and experiences. New material can be added to such a memory without catastrophic interference if it is added slowly, interleaved with ongoing exposure to other events and experiences. This, however, is too slow for the demands placed on memory by the world. To allow rapid learning of new material without catastrophic interference, new material is initially stored in the hippocampus, where sparse, conjunctive representations are used to minimize interference with other memories. Reinstatement of these patterns occurs in task-relevant situations via bi-directional connections between hippocampus and cortex, together with pattern completion within the hippocampal formation. Interestingly, it appears that reinstatement occurs in off-line situations as well -- most notably, during sleep. Each reinstatement provides an opportunity for the neocortex to learn; but these reinstatements occur interleaved with other ongoing events and experiences, and weight changes are small on each reinstatement, so that the neocortex learns the new material via interleaved learning.

This theory is consistent with a lot of data at this point, including of course the basic fact that bilateral removal of the hippocampus leads to a profound deficit in the ability to form arbitrary new memories rapidly. Among the most important additional findings are 1) the phenomenon of temporally graded retrograde amnesia --- the finding that damage to the hippocampus can produce a selective deficit for recently acquired memories, leaving memories acquired several weeks or months earlier intact --- and 2) the finding that neurons in the hippocampus that are coactive during a particular behavior appear to be coactive during sleep following the behavior, as if the patterns of activity that were present during behavior are reactivated during sleep.

I announced a technical report discussing the above ideas last March. In that report, we focused on why it makes sense from a connectionist point of view that the system should be organized as described above.
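A minimal sketch of the interleaved-learning point, for readers who want to see it concretely. This is not the model from the technical report; it assumes nothing beyond a toy linear network trained by gradient descent, and all sizes, data and the learning rate are arbitrary illustrative choices (Python with numpy):

    import numpy as np

    rng = np.random.default_rng(0)

    # Two unrelated toy "tasks": random input -> target mappings
    # that must share one weight matrix.
    X_a, Y_a = rng.normal(size=(20, 10)), rng.normal(size=(20, 5))
    X_b, Y_b = rng.normal(size=(20, 10)), rng.normal(size=(20, 5))

    def mse(W, X, Y):
        return float(np.mean((X @ W - Y) ** 2))

    def train(W, batches, lr=0.01, epochs=1000):
        # Plain gradient descent on squared error, a small step per batch.
        for _ in range(epochs):
            for X, Y in batches:
                W -= lr * X.T @ (X @ W - Y) / len(X)
        return W

    # Sequential (focused) training: task A to convergence, then task B alone.
    W_seq = train(train(np.zeros((10, 5)), [(X_a, Y_a)]), [(X_b, Y_b)])

    # Interleaved training: small steps on A and B alternately.
    W_int = train(np.zeros((10, 5)), [(X_a, Y_a), (X_b, Y_b)])

    print("task A error after sequential training: ", mse(W_seq, X_a, Y_a))
    print("task A error after interleaved training:", mse(W_int, X_a, Y_a))

Training on B alone drags the shared weights to B's solution, and the error on A climbs back toward its untrained level; interleaving keeps both tasks at the compromise solution. The hippocampal story above adds the ingredient this sketch lacks: a fast, sparse store that lets the slow, interleaved learner get away with small steps.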
A reprint of the announcement (sans abstract) appears below. This TR is currently under revision for publication; the revised version will contain a fuller presentation of the physiological evidence than the present version, and I will announce it on connectionists when it becomes available.

========================================================================
Technical report announcement:
------------------------------------------------------------------------

Why there are Complementary Learning Systems in the Hippocampus and Neocortex: Insights from the Successes and Failures of Connectionist Models of Learning and Memory

James L. McClelland, Bruce L. McNaughton & Randall C. O'Reilly
Carnegie Mellon University & The University of Arizona

Technical Report PDP.CNS.94.1
March, 1994
=======================================================================

Retrieval information:

unix> ftp 128.2.248.152        # hydra.psy.cmu.edu
Name: anonymous
Password:
ftp> cd pub/pdp.cns
ftp> binary
ftp> get pdp.cns.94.1.ps.Z
ftp> quit
unix> zcat pdp.cns.94.1.ps.Z | lpr    # or however you print postscript

NOTE: The compressed file is 306994 bytes long. Uncompressed, the file is 840184 bytes long. The printed version is 63 total pages long.

For those who do not have FTP access, physical copies can be requested from Barbara Dorney . Ask for the report by title or pdp.cns technical report number.

From bogner at eleceng.adelaide.edu.au Tue Dec 13 23:22:37 1994
From: bogner at eleceng.adelaide.edu.au (Robert E. Bogner)
Date: Wed, 14 Dec 1994 14:52:37 +1030
Subject: [dwang@cis.ohio-state.edu: About sequential learning (or interference)]
Message-ID: <9412140422.AA16796@augean.eleceng.adelaide.edu.au>

From dwang at cis.ohio-state.edu Mon Dec 12 10:50:23 1994
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Mon, 12 Dec 1994 10:50:23 -0500
Subject: About sequential learning (or interference)
Message-ID:

1. Visakan Kadirkamanathan wrote a dissertation on "Sequential Learning in Artificial Neural Networks" for a PhD at U. of Cambridge, October 1991. There were some conference papers under the same author.

2. C. J. Wallace, "Autonomous Rehearsal: Stability and plasticity in a multilayer perceptron", project report, Dept. of EEE, Univ. of Adelaide, October 1992. ". . a psychologically-motivated idea for maintaining the original learning of a classifier neural network by getting it to reinforce associations formed from confidently classified random inputs. . ."

Robert E. Bogner
--------------------------------------------------------------------------
e_mail bogner at eleceng.adelaide.edu.au                 WATCH THIS SPACE
Reliable mail: Robert E. Bogner, Professor of Electrical Engineering,
The University of Adelaide, Adelaide 5005, SOUTH AUSTRALIA
Phone +61 8 303 5589 (answering machine all hours)
Fax +61 8 303 4360 OR +61 8 224 0464
Home phone +61 8 332 5549
Time GMT+9h30m in northern hemisphere summer, GMT+10h30m in northern hemisphere winter
==========================================================================

From sef+ at cs.cmu.edu Wed Dec 14 00:03:50 1994
From: sef+ at cs.cmu.edu (Scott E. Fahlman)
Date: Wed, 14 Dec 94 00:03:50 EST
Subject: About sequential learning (or interference)
In-Reply-To: Your message of Tue, 13 Dec 94 21:35:22 -0500. <199412140235.VAA17037@lennon.cc.gatech.edu>
Message-ID:

Yet another approach to incremental learning can be seen in the Cascade-Correlation algorithm. This creates hidden units (or feature detectors, if you prefer) one by one, and freezes each one after it is created. The weights between these features and the outputs remain plastic and continue to be trained. This means that if you train a net on training set A and then switch to a different training set B, you may build some new hidden units for task B, but the hidden units created for task A are not cannibalized. The A units may be used in task B, either directly or as inputs to new hidden units, but they remain unchanged and available. As task B is trained, the output weights change, so performance on task A will generally decline, but it can come back very quickly if task A is re-trained or if you now train on a combined A+B task. The point is that the time-consuming part of learning is in creating the hidden units, and these are retained once they are learned.

My Recurrent Cascade-Correlation paper has an example of this. This recurrent net can be trained to recognize a temporal sequence of 1's and 0's as Morse code. If you train all 26 letter-codes as a single training set, the network will learn the task, but learning is faster and the resulting net is smaller if you break the training set up into "lessons" of increasing difficulty: first train on the shortest codes, then the medium-length ones, then the longest ones, and finally on the whole set together. This is reminiscent of the modular networks explored by Waibel and his colleagues for large speech tasks, but in this case you just have to chop the training set up into modules -- the network architecture takes care of itself. (A toy sketch of the freezing idea appears below, after the references.)

References (these are also available online in Neuroprose, among other places):

Scott E. Fahlman and Christian Lebiere (1990) "The Cascade-Correlation Learning Architecture", in {\it Advances in Neural Information Processing Systems 2}, D. S. Touretzky (ed.), Morgan Kaufmann Publishers, Los Altos CA, pp. 524-532.

Scott E. Fahlman (1991) "The Recurrent Cascade-Correlation Architecture" in {\it Advances in Neural Information Processing Systems 3}, R. P. Lippmann, J. E. Moody, and D. S. Touretzky (eds.), Morgan Kaufmann Publishers, Los Altos CA, pp. 190-196.

-- Scott

===========================================================================
Scott E. Fahlman                        Internet: sef+ at cs.cmu.edu
Principal Research Scientist            Phone: 412 268-2575
School of Computer Science              Fax: 412 268-5576 (new!)
Carnegie Mellon University              Latitude: 40:26:46 N
5000 Forbes Avenue                      Longitude: 79:56:55 W
Pittsburgh, PA 15213                    Mood: :-)
===========================================================================
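A much-simplified sketch of the freezing idea described above, for concreteness. This is not the published Cascade-Correlation algorithm: there, candidate units are trained by gradient ascent on a covariance measure over a pool of candidates, whereas here candidate training is reduced to picking the best of a few random projections, and the plastic output layer is simply re-fit by least squares. All names, sizes and tasks are illustrative assumptions (Python with numpy):

    import numpy as np

    rng = np.random.default_rng(1)

    def add_frozen_unit(H, Y, W_out, n_candidates=25):
        # Pick the candidate whose activation correlates best with the
        # current residual error, then freeze it: its incoming weights are
        # appended to the feature set and never touched again.
        resid = (Y - H @ W_out)[:, 0]
        best_v, best_score = None, -1.0
        for _ in range(n_candidates):
            v = rng.normal(size=(H.shape[1], 1))   # candidate sees ALL existing
            act = np.tanh(H @ v)[:, 0]             # features (cascade structure)
            score = abs(np.corrcoef(act, resid)[0, 1])
            if score > best_score:
                best_v, best_score = v, score
        return np.hstack([H, np.tanh(H @ best_v)])

    def fit_outputs(H, Y):
        # Only the feature-to-output weights stay plastic; re-fit them freely.
        return np.linalg.lstsq(H, Y, rcond=None)[0]

    X = rng.normal(size=(100, 3))
    H = np.hstack([X, np.ones((100, 1))])          # inputs (+ bias) as features
    Y_a, Y_b = np.sin(X[:, :1]), np.cos(X[:, :1])  # task A, then task B

    W = fit_outputs(H, Y_a)
    for _ in range(3):                             # grow units for task A
        H = add_frozen_unit(H, Y_a, W)
        W = fit_outputs(H, Y_a)

    # Switching to task B re-fits the outputs and may add new units, but the
    # units built for task A remain unchanged and available for reuse.
    W = fit_outputs(H, Y_b)
    for _ in range(3):
        H = add_frozen_unit(H, Y_b, W)
        W = fit_outputs(H, Y_b)

    # Re-fitting the outputs on A (or A+B) brings task A back quickly,
    # since its features were never cannibalized:
    W = fit_outputs(H, Y_a)

As in the message above, the expensive part (building features) is preserved across tasks; only the cheap output fit moves.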
From pollack at cs.brandeis.edu Wed Dec 14 14:45:33 1994
From: pollack at cs.brandeis.edu (Jordan Pollack)
Date: Wed, 14 Dec 1994 14:45:33 -0500
Subject: Apology for Flame, and Survey
In-Reply-To: (bisant@gl.umbc.edu)
Message-ID: <199412141945.OAA10961@garnet.cs.brandeis.edu>

I flamed about some of the policies of WCNN, and David Bisant wrote:

> new problems
>in conferencing appear. I think their fee structure is a reasonable
>approach to solving these problems.

I obviously over-reacted, since I agree that there is also a more positive reading of these policies which I didn't consider at the time - that they are innovations that the organization is using to cope with several problems, such as:

a) filtering out computer-generated articles,
b) authors not showing up to deliver talks or posters,
c) expense of supporting travel costs for both plenary speakers and conference organizers, and
d) declining membership/subscription revenues.

It was the simultaneous appearance of all these novel structures in one place which raised a red flag, and led me to breathe fire. I am not usually so rude, and now I owe MANY colleagues apologies for my outburst. First, I'm very sorry for linking the two completely separate societies - IEEE and INNS - and for denigrating the papers of individuals who have published their work in these refereed meetings over the years. Also, I want to apologize to the dozens of major scientists involved in organizing and governing INNS for sullying their good names, and finally to Mike Cohen and my other new neighbors at Boston Univ, who are merely using their good offices to promote the meeting. I hope the society will eventually forgive my unsociable outburst.

However, I do believe the $35 reviewing fee is a serious and big issue, as I have never seen a precedent in any branch of science. And while there are real costs to reviewing which are bundled into the costs of meetings, this "unbundling" is a very slippery slope to walk. If such fees catch on in conferences, they might catch on universally, from journals to funding agencies to faculty search committees, and this would dramatically change the "social contract," I fear, with many more negative than positive results! But perhaps even in this sub-issue I am over-reacting.

Since the list is too large for a discussion, I would like to perform an informal survey, whose results I will then condense and post:

1) What do you think about review fees for scientific conferences?
2) What about review fees for scientific journals?
3) What about review fees for grant applications to NSF/NIH and other agencies?
4) What about review fees for faculty and postdoc job applications?
5) What do you predict as the ramifications of changing the current scientific free review system to a fee-based review system?

Jordan Pollack                          Associate Professor
Computer Science Department             Center for Complex Systems
Brandeis University                     Phone: (617) 736-2713/fax 2741
Waltham, MA 02254                       email: pollack at cs.brandeis.edu

From white at pwa.acusd.edu Wed Dec 14 14:44:50 1994
From: white at pwa.acusd.edu (Ray White)
Date: Wed, 14 Dec 1994 11:44:50 -0800 (PST)
Subject: FLAME: WCNN'95 - science in the mud pits
In-Reply-To: <199412081651.LAA01963@garnet.cs.brandeis.edu>
Message-ID:

I'm not thrilled with the "reading fee" either, but $35 plus early member registration @ $170 is pretty low for a conference these days; certainly far less than the maximum registration fee of $480 that Jordan Pollack mentioned.

As far as "sorting papers by fame" goes, for those of us who are not famous, it is a good thing that there are conferences where papers are merely "sorted by fame", when there are also elitist conferences where fame is more likely a requirement for acceptance. (Did anyone read NIPS into that sentence?) A conference is remembered for the quality of presented papers, not for the quality of rejected papers.

Ray H. White | Depts. of Physics and Computer Science | University of San Diego
619/260-4244 or 619/260-4627 | 5998 Alcala Park, San Diego, CA 92110

From jose at scr.siemens.com Wed Dec 14 15:20:45 1994
From: jose at scr.siemens.com (Stephen Hanson)
Date: Wed, 14 Dec 1994 15:20:45 -0500 (EST)
Subject: About sequential learning (or interference)
In-Reply-To: <199412140235.VAA17037@lennon.cc.gatech.edu>
References: <199412140235.VAA17037@lennon.cc.gatech.edu>
Message-ID:

Of course, avoiding "interference" is another way of preventing generalization. As usual, there is a tradeoff here.
Steve

Stephen J. Hanson, Ph.D.
Head, Learning Systems Department
SIEMENS Research
755 College Rd. East
Princeton, NJ 08540

From lautrup at connect.nbi.dk Thu Dec 15 11:56:25 1994
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Thu, 15 Dec 94 11:56:25 MET
Subject: New preprint: "Massive Weight Sharing: A Cure for ..."
Message-ID:

Subject: Paper available: "Massive Weight Sharing: A Cure for ..."
Date: August 29, 1994
FTP-host: connect.nbi.dk
FTP-file: neuroprose/lautrup.massive.ps.Z

----------------------------------------------

The following paper is now available:

Massive Weight Sharing: A Cure for Extremely Ill-posed Learning [12 pages]

B. Lautrup, L.K. Hansen, I. Law, N. Moerch, C. Svarer and S.C. Strother
CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark

Presented at the workshop on Supercomputing in Brain Research: From Tomography to Neural Networks, HLRZ, Juelich, Germany, November 21-23, 1994

Abstract: In most learning problems, adaptation to given examples is well-posed because the number of examples far exceeds the number of internal parameters in the learning machine. Extremely ill-posed learning problems are, however, common in image and spectral analysis. They are characterized by a vast number of highly correlated inputs, e.g. pixel or pin values, and a modest number of patterns, e.g. images or spectra. In this paper we show, for the case of a set of PET images differing only in the values of one stimulus parameter, that it is possible to train a neural network to learn the underlying rule without using an excessive number of network weights or large amounts of computer time. The method is based upon the observation that the standard learning rules conserve the subspace spanned by the input images.

Please do not reply directly to this message.

-----------------------------------------------
FTP-instructions:

unix> ftp connect.nbi.dk (or 130.225.212.30)
ftp> Name: anonymous
ftp> Password: your e-mail address
ftp> cd neuroprose
ftp> binary
ftp> get lautrup.massive.ps.Z
ftp> quit
unix> uncompress lautrup.massive.ps.Z
-----------------------------------------------

Benny Lautrup, Computational Neural Network Center (CONNECT)
Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen
Denmark
Telephone: +45-3532-5200 Direct: +45-3532-5358
Fax: +45-3142-1016
e-mail: lautrup at connect.nbi.dk

From pah at unixg.ubc.ca Thu Dec 15 14:03:03 1994
From: pah at unixg.ubc.ca (Phil A. Hetherington)
Date: Thu, 15 Dec 1994 11:03:03 -0800 (PST)
Subject: sequential learning
In-Reply-To: <9412140416.AA17263@crab.psy.cmu.edu.psy.cmu.edu>
Message-ID:

A couple of notes on interference. First, most of Jay's idea is still simply theory, a good theory, but there is still little direct evidence that the hippocampus does either pattern separation or pattern completion. Most of the pattern completion idea comes from Marr's conjectures on the architecture of CA3 in hippocampus. Second (to Hanson), interference is not necessarily the flip side of generalization. This is intuitive, but you can reduce one without affecting the other. See Lewandowsky's stuff on this. Next, whether the interference observed in connectionist nets is a problem or not depends on your needs. Is the net to model human learning or to solve a particular engineering problem? If the former, then see McRae and Hetherington (1993) -- it is not clear that realistically designed nets suffer from catastrophic interference.
Finally, it appears that the bigger problem is to reduce interference without eliminating the distinction between old and new items. This was Ratcliff's main concern, and is addressed by Sharkey. See also:

French, R.M. (1991). Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, 173-178.
French, R.M. (1992). Connection Science, 4, 3-4, 365-377.
Hetherington, P.A. (1990). Neural Network Review, 4(1), 27-29.
Hetherington, P.A. (1991). The Sequential Learning Problem in Connectionist Networks. Unpublished Masters Thesis, McGill University, Montreal, Quebec.
Hetherington, P.A., & Seidenberg, M.S. (1989). Proceedings of the Eleventh Annual Conference of the Cognitive Science Society, 26-33.
Hinton, G.E., & Plaut, D.C. (1987). Proceedings of the Ninth Annual Conference of the Cognitive Science Society, 177-186.
Kortge, C.A. (1990). Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, 764-771.
Kruschke, J.K. (1992). Psychological Review, 99, 22-44.
Kruschke, J.K. (1993). Connection Science, 5, 3-36.
Lewandowsky, S. (1991). In W.E. Hockley & S. Lewandowsky (Eds.), Relating theory and data: Essays on human memory in honor of Bennet B. Murdock.
McRae, K., & Hetherington, P.A. (1993). Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society, 723-728.
Murre, J.M.J. (1992). Learning and categorization in modular neural networks. Lawrence Erlbaum.
Sharkey, N.E., & Sharkey, A.J.C. Understanding Catastrophic Interference in Neural Nets. Technical Report, Department of Computer Science, University of Sheffield, U.K.
Sloman, S.A., & Rumelhart, D.E. (1992). In A. Healy, S.M. Kosslyn, & R.M. Shiffrin (Eds.), From learning theory to cognitive processes: Essays in honor of William K. Estes.

From french at cogsci.indiana.edu Thu Dec 15 14:28:01 1994
From: french at cogsci.indiana.edu (Bob French)
Date: Thu, 15 Dec 94 14:28:01 EST
Subject: Catastrophic forgetting and sequential learning
Message-ID:

Below I have indicated a number of references on current work on the problem of sequential learning in connectionist networks. All of these papers address the problem of catastrophic interference, which may result when a previously trained connectionist network attempts to learn new patterns. The following commented list is by no means complete. In particular, no mention is made of convolution-correlation models with their associated connectionist networks. Nonetheless, I hope it might prove to be a useful introduction for people interested in knowing a bit more about the subject.

Bob French
french at cogsci.indiana.edu

----------------------------------------------------------------------------
Recent Work on Catastrophic Interference in Connectionist Networks

The two papers that really kicked off research in this area were:

McCloskey, M. & Cohen, N. (1989) "Catastrophic interference in connectionist networks: the sequential learning problem" The Psychology of Learning and Motivation, 24, 109-165.

Ratcliff, R. (1990) "Connectionist models of recognition memory: constraints imposed by learning and forgetting functions" Psychological Review, 97, 285-308.

Hetherington and Seidenberg very early suggested an Ebbinghaus-like "savings" measure of catastrophic interference and, based on this, they concluded that catastrophic interference wasn't really as much of a problem as had been thought.
While the problem has, in fact, been subsequently confirmed to be quite serious, the "savings" measure they proposed is still widely used to measure the extent of forgetting.

Hetherington, P. & Seidenberg, M. (1989) "Is there 'catastrophic interference' in connectionist networks?" Proceedings of the 11th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 26-33.

Kortge was one of the first to propose a solution to this problem, using what he called "novelty vectors".

Kortge, C. (1990) "Episodic Memory in Connectionist Networks" Proceedings of the 12th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 764-771.

Sloman and Rumelhart also developed a technique called "episodic gating" designed to reduce the severity of the problem.

Sloman, S. & Rumelhart, D. (1991) "Reducing interference in distributed memories through episodic gating" In Healy, Kosslyn, & Shiffrin (eds.) Essays in Honor of W. K. Estes.

In 1991 French suggested that catastrophic interference might be the inevitable price you pay for the advantages of fully distributed representations (in particular, generalization). He suggested a way of dynamically producing "semi-distributed" hidden-layer representations to reduce the effect of catastrophic interference.

French, R. (1991) "Using semi-distributed representations to overcome catastrophic forgetting in connectionist networks" in Proceedings of the 13th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 173-178.

A more detailed article presenting the same technique, called activation sharpening, appeared in Connection Science.

French, R. (1992) "Semi-distributed Representations and Catastrophic Forgetting in Connectionist Networks", Connection Science, Vol. 4: 365-377.

In a more recent paper (1994), French presented a technique called context biasing, which again dynamically "massages" hidden layer representations based on the "context" of other recently learned exemplars. The goal of this technique is to produce hidden-layer representations that are simultaneously well distributed and orthogonal.

French, R. (1994) "Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference" in Proceedings of the 16th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum, 335-340.

Finally, still in this vein, French proposed (1993) at the NIPS-93 workshop on catastrophic interference a dynamic system of two interacting networks working in tandem, one storing prototypes, the other doing short-term learning of new exemplars. For a "real brain" justification for this type of architecture see McClelland, McNaughton, and O'Reilly (1994), below. (A full paper on this tandem network architecture will be available at the beginning of next year.) This technique is discussed very briefly in:

French, R. (1994) "Catastrophic interference in connectionist networks: Can it be predicted, can it be prevented?" Neural Information Processing Systems - 6, Cowan, Tesauro, Alspector (eds.) San Francisco, CA: Morgan Kaufmann. 1176-1177.

Steve Lewandowsky has also been very active in this area. He developed a simple technique in 1991 that focused on producing orthogonalization at the input layer rather than the hidden layer. This "symmetric vectors" technique is discussed in:
Lewandowsky, S. & Shu-Chen Li (1993) "Catastrophic Interference in Neural Networks: Causes, solutions and data" in New Perspectives on Interference and Inhibition in Cognition. Dempster & Brainerd (eds.) New York, NY: Academic Press.

Lewandowsky, S. (1991) "Gradual unlearning and catastrophic interference: a comparison of distributed architectures." In Hockley & Lewandowsky (eds.) Relating theory and data: Essays on human memory in honor of Bennet B. Murdock. (pp. 445-476). Hillsdale, NJ: Lawrence Erlbaum.

and in an earlier University of Oklahoma psychology department technical report:

Lewandowsky, S. (1993) "On the relation between catastrophic interference and generalization in connectionist networks"

In 1993 McRae and Hetherington published a study using pre-training to eliminate catastrophic interference.

McRae, K. & Hetherington, P. (1993) "Catastrophic interference is eliminated in pretrained networks" in Proceedings of the 15th Annual Cognitive Science Society. Hillsdale, NJ: Erlbaum. 723-728.

John Kruschke discussed the problem of catastrophic forgetting at length in the context of his connectionist model, ALCOVE, and showed the extent to which and under what circumstances this model is not subject to catastrophic forgetting.

Kruschke, J. (1993) "Human category learning: implications for backpropagation models", Connection Science, Vol. 5, No. 1.

Jacob Murre has also examined how his model, CALM, performs on the sequential learning problem. See, in particular:

Murre, J. Learning and Categorization in Modular Neural Networks. Hillsdale, NJ: Lawrence Erlbaum. 1992. (see esp. ch. 7.4)

It is to be noted that both ALCOVE and CALM rely, at least in part, on reducing the distributedness of their internal representations in order to achieve improved performance on the problem of catastrophic interference.

A 1994 article (in press) by Anthony Robins presents a novel technique, called "pseudorehearsal", whereby "pseudoexemplars" that reflect prior learning are added to the new data set to be learned, in order to reduce catastrophic forgetting.

Robins, A. "Catastrophic forgetting, rehearsal, and pseudorehearsal", University of Otago (New Zealand) computer science technical report. (copies: coscavr at otago.ac.nz)

Tetewsky, Shultz & Buckingham (1994) demonstrate the improvements that result from using Fahlman's cascade-correlation learning algorithm.

Tetewsky, S., Shultz, T. and Buckingham, D. "Assessing interference and savings in connectionist models of human recognition memory" Department of Psychology TR, McGill University, Montreal. (presented at 1994 Meeting of the Psychonomic Society).

Sharkey & Sharkey (1993) discussed the relation between the problems of interference and discrimination in connectionist networks. They conclude that sequentially trained networks using backprop will unavoidably suffer from one or the other problem. I am not aware whether there is a final version of this paper in print yet, but Noel Sharkey is currently at the University of Sheffield, Dept. of Computer Science (n.sharkey at dcs.shef.ac.uk).

Sharkey, N. & Sharkey, A., "An interference-discrimination tradeoff in connectionist models of human memory"

McClelland, McNaughton, & O'Reilly issued a CMU technical report earlier this year (1994) in which they discuss the phenomenon of catastrophic interference in the context of the "real world", i.e., the brain. They suggest that the complementary learning systems in the hippocampus and the neocortex might be the brain's way of overcoming the problem. They argue that this dual system provides a means not only of rapidly acquiring new information, but also of storing well-learned information as prototypes.
McClelland, J., McNaughton, B., & O'Reilly, R. "Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory" CMU Tech report: PDP.CNS.94.1, March 1994.

-------------------------------------------------------------------------

From marney at ai.mit.edu Thu Dec 15 18:19:23 1994
From: marney at ai.mit.edu (Marney Smyth)
Date: Thu, 15 Dec 94 18:19:23 EST
Subject: NNSP95 : Call for Papers
Message-ID: <9412152319.AA00894@motor-cortex>

**********************************************************************
                      1995 IEEE WORKSHOP ON
              NEURAL NETWORKS FOR SIGNAL PROCESSING

    August 31 -- September 2, 1995, Cambridge, Massachusetts, USA
         Sponsored by the IEEE Signal Processing Society
       (In cooperation with the IEEE Neural Networks Council)
**********************************************************************

FIRST ANNOUNCEMENT AND CALL FOR PAPERS

Thanks to the sponsorship of the IEEE Signal Processing Society and the co-sponsorship of the IEEE Neural Networks Council, the fifth of a series of IEEE Workshops on Neural Networks for Signal Processing will be held at the Royal Sonesta Hotel, Cambridge, Massachusetts, on Thursday 8/31 -- Saturday 9/2, 1995. Papers are solicited for, but not limited to, the following topics:

++ APPLICATIONS: Image, speech, communications, sensors, medical, adaptive filtering, OCR, and other general signal processing and pattern recognition topics.

++ THEORIES: Generalization and regularization, system identification, parameter estimation, new network architectures, new learning algorithms, and wavelets in NNs.

++ IMPLEMENTATIONS: Software, digital, analog, hybrid technologies, parallel processing.

Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers, and e-mail address if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Marney Smyth (Tel.) (617)-253-0547, (Fax) (617)-253-2964, (e-mail) marney at ai.mit.edu.

We plan to use the World Wide Web (WWW) for posting further announcements on NNSP95 such as: submitted papers status, final program, hotel information, etc. You can use MOSAIC and access URL site: http://www.cdsp.neu.edu. If you do not have access to WWW, use anonymous ftp to site ftp.cdsp.neu.edu and look under the directory /pub/NNSP95.

Please send paper submissions to:

Prof. Elias S. Manolakos
IEEE NNSP'95
409 Dana Research Building
Electrical and Computer Engineering Department
Northeastern University, Boston, MA 02115, USA
Phone: (617) 373-3021, Fax: (617) 373-4189

*******************
* IMPORTANT DATES *
*******************

Extended summary received by: February 17
Notification of acceptance: April 21
Photo-ready accepted papers received by: May 22
Advanced registration received before: June 2

GENERAL CHAIRS

Federico Girosi, Center for Biological and Computational Learning, MIT, girosi at ai.mit.edu
John Makhoul, Bolt, Beranek and Newman, makhoul at bbn.com

PROGRAM CHAIR
Elias S. Manolakos, Communications and Digital Signal Processing (CDSP), Northeastern University, elias at cdsp.neu.edu

FINANCE CHAIR

Judy Franklin, GTE Laboratories, jfranklin at gte.com

LOCAL ARRANGEMENTS

Mary Pat Fitzgerald, MIT, marypat at ai.mit.edu

PUBLICITY CHAIR

Marney Smyth, MIT, marney at ai.mit.edu

PROCEEDINGS CHAIR

Elizabeth J. Wilson, Raytheon Co., bwilson at sud2.ed.ray.com

TECHNICAL PROGRAM COMMITTEE

Joshua Alspector, Alice Chiang, A. Constantinides, Lee Giles, Federico Girosi, Lars Kai Hansen, Yu-Hen Hu, Jenq-Neng Hwang, Bing-Huang Juang, Shigeru Katagiri, George Kechriotis, Stephanos Kollias, Sun-Yuan Kung, Gary M. Kuhn, Richard Lippmann, John Makhoul, Elias Manolakos, P. Mathiopoulos, Mahesan Niranjan, Tomaso Poggio, Jose Principe, Wojtek Przytula, John Sorensen, Andreas Stafylopatis, John Vlontzos, Raymond Watrous, Christian Wellekens, Ron Williams, Barbara Yoon, Xinhua Zhuang

From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Fri Dec 16 03:22:31 1994
From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Fri, 16 Dec 94 03:22:31 EST
Subject: elitism at NIPS
Message-ID: <1394.787566151@DST.BOLTZ.CS.CMU.EDU>

The folks involved in running NIPS are aware that in the past, some people have felt the conference was biased against outsiders. As the program chair for NIPS*94, I want to mention some steps we took to reduce that perception:

* This year we required authors to submit full papers, not just abstracts. That made it harder for "famous" people to get by on their reputation alone, and easier for "non-famous" people to get good papers in.

* This year we required reviewers to provide real feedback to all authors. In order to make this less painful, we completely redesigned the review form (it's publicly accessible via the NIPS home page) and, for the first time, accepted reviews by email. Everyone liked getting comments from the reviewers, and authors whose papers were not accepted understood why.

* We continue to recruit new people for positions in both the organizing and program committees. It's not the same dozen people year after year. We also have a large reviewer pool: 176 people served as reviewers this year.

* We tend to bring in "outsiders" as our invited speakers, rather than the usual good old boys. This year's invited speakers included Francis Crick of the Salk Institute, Bill Newsome (a neuroscientist from Stanford), and Malcolm Slaney (a signal processing expert formerly with Apple and now at Interval Research). None had been to NIPS before.

The fundamental limits on NIPS paper acceptances are that (1) we're committed to a single-track conference, and (2) we only have room for 138 papers in the proceedings. Therefore the reviewing process has to be selective. The acceptance rate this year was 33%; it was 25% in past years. The change is due to fewer submissions, not more acceptances. Requiring full papers was probably the cause of the drop in submissions. The quality of the accepted papers has remained high.

There are other ways to participate in NIPS besides writing a paper. We have a very successful workshop program, where lots of intense interaction takes place between people with similar interests. Many people give talks at the workshops and not at the conference. (I did that this year.) NIPS issues a call for workshop proposals at about the same time as the call for papers. Consider organizing a workshop at NIPS*95, or at least participating in one. You do not even have to register for the conference; workshop registration is entirely separate.
The URL for the NIPS home page appears below. Besides the review form, you'll also find formatting instructions for paper submissions. Authors of accepted NIPS*94 papers will find instructions for how to submit their final camera-ready copy, which is due January 23.

http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html

-- Dave Touretzky, NIPS*94 Program Chair

From P.McKevitt at dcs.shef.ac.uk Fri Dec 16 05:55:02 1994
From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt)
Date: Fri, 16 Dec 94 10:55:02 GMT
Subject: AISB-95 workshop call for papers
Message-ID:

Advance Announcement
FIRST CALL FOR PAPERS AND PARTICIPATION

AISB-95 Workshop on
REACHING FOR MIND: FOUNDATIONS OF COGNITIVE SCIENCE

April 3rd/4th 1995

at The Tenth Biennial Conference on AI and Cognitive Science (AISB-95)
(Theme: Hybrid Problems, Hybrid Solutions)
Halifax Hall, University of Sheffield, Sheffield, England
(Monday 3rd -- Friday 7th April 1995)

Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB)

Chair: Sean O Nuallain, Dublin City University, Dublin, Ireland & National Research Council, Ottawa, Canada

Co-Chair: Paul Mc Kevitt, Department of Computer Science, University of Sheffield, England

WORKSHOP COMMITTEE:

John Barnden (New Mexico State University, NM, USA)
Istvan Berkeley (University of Alberta, Canada)
Mike Brady (Oxford, England)
Harry Bunt (ITK, Tilburg, The Netherlands)
Peter Carruthers (University of Sheffield, England)
Daniel Dennett (Tufts University, USA)
Eric Dietrich (SUNY Binghamton, NY, USA)
Jerry Feldman (ICSI, UC Berkeley, USA)
John Frisby (University of Sheffield, England)
Stevan Harnad (University of Southampton, England)
James Martin (University of Colorado at Boulder, CO, USA)
John Macnamara (McGill University, Canada)
Mike McTear (Universities of Ulster and Koblenz, Germany)
Ryuichi Oka (RWCP, Tsukuba, Japan)
Jordan Pollack (Ohio State University, OH, USA)
Zenon Pylyshyn (Rutgers University, USA)
Ronan Reilly (University College, Dublin, Ireland)
Roger Schank (ILS, Illinois)
Noel Sharkey (University of Sheffield, England)
Walther v. Hahn (University of Hamburg, Germany)
Yorick Wilks (University of Sheffield, England)

WORKSHOP DESCRIPTION

The assumption underlying this workshop is that Cognitive Science (CS) is in crisis. The crisis manifests itself, as exemplified by the recent Buffalo summer institute, in a complete lack of consensus among even the biggest names in the field on whether CS has, or indeed should have, a clearly identifiable focus of study; the issue of identifying this focus is a separate and more difficult one. Though academic programs in CS have in general settled into a pattern compatible with classical computationalist CS (Pylyshyn 1984, Von Eckardt 1993), including the relegation from focal consideration of consciousness, affect and social factors, two fronts have been opened on this classical position. The first front is well-publicised and highly visible. Both Searle (1992) and Edelman (1992) refuse to grant any special status to information-processing in explanation of mental process. In contrast, they argue, we should focus on Neuroscience on the one hand and Consciousness on the other. The other front is ultimately the more compelling one.
It consists of those researchers from inside CS who are currently working on consciousness, affect and social factors and do not see any incompatibility between this research and their vision of CS, which is that of a Science of Mind (see Dennett 1993, O Nuallain (in press) and Mc Kevitt and Partridge 1991, Mc Kevitt and Guo 1994).

References

Dennett, D. (1993) Review of John Searle's "The Rediscovery of the Mind". The Journal of Philosophy 1993, pp 193-205.

Edelman, G. (1992) Bright Air, Brilliant Fire. Basic Books.

Mc Kevitt, P. and D. Partridge (1991) Problem description and hypothesis testing in Artificial Intelligence. In ``Artificial Intelligence and Cognitive Science '90'', Springer-Verlag British Computer Society Workshop Series, McTear, Michael and Norman Creaney (Eds.), 26-47, Berlin, Heidelberg: Springer-Verlag. Also, in Proceedings of the Third Irish Conference on Artificial Intelligence and Cognitive Science (AI/CS-90), University of Ulster at Jordanstown, Northern Ireland, EU, September, and as Technical Report 224, Department of Computer Science, University of Exeter, GB- EX4 4PT, Exeter, England, EU, September, 1991.

Mc Kevitt, P. and Guo, Cheng-ming (1995) From Chinese rooms to Irish rooms: new words on visions for language. Artificial Intelligence Review Vol. 8. Dordrecht, The Netherlands: Kluwer-Academic Publishers. (unabridged version) First published: International Workshop on Directions of Lexical Research, August, 1994, Beijing, China.

O Nuallain, S. (in press) The Search for Mind: a new foundation for CS. Norwood: Ablex.

Pylyshyn, Z. (1984) Computation and Cognition. MIT Press.

Searle, J. (1992) The rediscovery of the mind. MIT Press.

Von Eckardt, B. (1993) What is Cognitive Science? MIT Press.

WORKSHOP TOPICS:

The tension which riddles current CS can therefore be stated thus: CS, which gained its initial capital by adopting the computational metaphor, is being constrained by this metaphor as it attempts to become an encompassing Science of Mind. Papers are invited for this workshop which:

* Address the central tension
* Propose an overall framework for CS (as attempted, inter alia, by O Nuallain (in press))
* Explicate the relations between the disciplines which comprise CS
* Relate educational experiences in the field
* Describe research outside the framework of classical computationalist CS in the context of an alternative framework
* Promote a single logico-mathematical formalism as a theory of Mind (as attempted by Harmony theory)
* Disagree with the premise of the workshop

Other relevant topics include:

* Classical vs. neuroscience representations
* Consciousness vs. Non-consciousness
* Dictated vs. emergent behaviour
* A-life/Computational intelligence/Genetic algorithms/Connectionism
* Holism and the move towards Zen integration

The workshop will focus on three themes:

* What is the domain of Cognitive Science?
* Classic computationalism and its limitations
* Neuroscience and Consciousness

WORKSHOP FORMAT:

Our intention is to have as much discussion as possible during the workshop and to stress panel sessions and discussion rather than having formal paper presentations. The workshop will consist of half-hour presentations, with 15 minutes for discussion at the end of each presentation, and other discussion sessions. A plenary session at the end will attempt to resolve the themes emerging from the different sessions.

ATTENDANCE:

We hope to have an attendance of between 25 and 50 people at the workshop.
Given the urgency of the topic, we expect it to be of interest not only to scientists in the AI/Cognitive Science (CS) area, but also to those in the other sciences of mind who are curious about CS. We envisage researchers from Edinburgh, Leeds, York, Sheffield and Sussex attending from within England, and many overseas visitors, as the Conference Programme is looking very international.

SUBMISSION REQUIREMENTS:

Papers of not more than 8 pages should be submitted by electronic mail (preferably uuencoded compressed postscript) to Sean O Nuallain at the E-mail address(es) given below. If you cannot submit your paper by E-mail, please submit three copies by snail mail.

*******Submission Deadline: February 13th 1995
*******Notification Date: February 25th 1995
*******Camera ready Copy: March 10th 1995

PUBLICATION:

Workshop notes/preprints will be published. If there is sufficient interest we will publish a book on the workshop, possibly with the American Artificial Intelligence Association (AAAI) Press.

WORKSHOP CHAIR:

Sean O Nuallain
((Before Dec 23:))
Knowledge Systems Lab, Institute for Information Technology,
National Research Council, Montreal Road, Ottawa, Canada K1A 0R6
Phone: 1-613-990-0113
E-mail: sean at ai.iit.nrc.ca
FaX: 1-613-95271521
((After Dec 23:))
Dublin City University, IRL- Dublin 9, Dublin, Ireland, EU
WWW: http://www.compapp.dcu.ie
Ftp: ftp.vax1.dcu.ie
E-mail: onuallains at dcu.ie
FaX: 353-1-7045442
Phone: 353-1-7045237

AISB-95 WORKSHOPS AND TUTORIALS CHAIR:

Dr. Robert Gaizauskas
Department of Computer Science, University of Sheffield,
211 Portobello Street, Regent Court, Sheffield S1 4DP, U.K.
E-mail: robertg at dcs.shef.ac.uk
WWW: http://www.dcs.shef.ac.uk/
WWW: http://www.shef.ac.uk/
Ftp: ftp.dcs.shef.ac.uk
FaX: +44 (0) 114 278-0972
Phone: +44 (0) 114 278-5572

AISB-95 CONFERENCE/LOCAL ORGANISATION CHAIR:

Paul Mc Kevitt
Department of Computer Science, Regent Court, 211 Portobello Street,
University of Sheffield, GB- S1 4DP, Sheffield, England, UK, EU.
E-mail: p.mckevitt at dcs.shef.ac.uk
WWW: http://www.dcs.shef.ac.uk/
WWW: http://www.shef.ac.uk/
Ftp: ftp.dcs.shef.ac.uk
FaX: +44 (0) 114-278-0972
Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 282-5590 (Secretary)

AISB-95 REGISTRATION:

Alison White
AISB Executive Office, Cognitive and Computing Sciences (COGS),
University of Sussex, Falmer, Brighton, England, UK, BN1 9QH
Email: alisonw at cogs.susx.ac.uk
WWW: http://www.cogs.susx.ac.uk/users/christ/aisb
Ftp: ftp.cogs.susx.ac.uk/pub/aisb
Tel: +44 (0) 1273 678448
Fax: +44 (0) 1273 671320

AISB-95 ENQUIRIES:

Debbie Daly, Administrative Assistant, AISB-95,
Department of Computer Science, Regent Court, 211 Portobello Street,
University of Sheffield, GB- S1 4DP, Sheffield, UK, EU.
Email: debbie at dcs.shef.ac.uk (personal communication)
Fax: +44 (0) 114-278-0972
Phone: +44 (0) 114-278-5565 (personal) -5590 (messages)
Email: aisb95 at dcs.shef.ac.uk (for auto responses)
WWW: http://www.dcs.shef.ac.uk/aisb95 [Sheffield Computer Science]
Ftp: ftp.dcs.shef.ac.uk (cd aisb95)
WWW: http://www.shef.ac.uk/ [Sheffield Computing Services]
Ftp: ftp.shef.ac.uk (cd aisb95)
WWW: http://ijcai.org/ (Email welty at ijcai.org) [IJCAI-95, MONTREAL]
WWW: http://www.cogs.susx.ac.uk/users/christ/aisb [AISB SOCIETY SUSSEX]
Ftp: ftp.cogs.susx.ac.uk/pub/aisb

VENUE:

The venue for registration and all conference events is: Halifax Hall of Residence, Endcliffe Vale Road, GB- S10 5DF, Sheffield, UK, EU.
FaX: +44 (0) 114-268-4227 Tel: +44 (0) 114-268-2758 (24 hour porter) Tel: +44 (0) 114-266-4196 (manager) SHEFFIELD: Sheffield is one of the friendliest cities in Britain and is situated well having the best and closest surrounding countryside of any major city in the UK. The Peak District National Park is only minutes away. It is a good city for walkers, runners, and climbers. It has two theatres, the Crucible and Lyceum. The Lyceum, a beautiful Victorian theatre, has recently been renovated. Also, the city has three 10 screen cinemas. There is a library theatre which shows more artistic films. The city has a large number of museums many of which demonstrate Sheffield's industrial past, and there are a number of Galleries in the City, including the Mapping Gallery and Ruskin. A number of important ancient houses are close to Sheffield such as Chatsworth House. The Peak District National Park is a beautiful site for visiting and rambling upon. There are large shopping areas in the City and by 1995 Sheffield will be served by a 'supertram' system: the line to the Meadowhall shopping and leisure complex is already open. The University of Sheffield's Halls of Residence are situated on the western side of the city in a leafy residential area described by John Betjeman as ``the prettiest suburb in England''. Halifax Hall is centred on a local Steel Baron's house, dating back to 1830 and set in extensive grounds. It was acquired by the University in 1830 and converted into a Hall of Residence for women with the addition of a new wing. ARTIFICIAL INTELLIGENCE AT SHEFFIELD: Sheffield Computer Science Department has a strong programme in Cognitive Systems and is part of the University's Institute for Language, Speech and Hearing (ILASH). ILASH has its own machines and support staff, and academic staff attached to it from nine departments. Sheffield Psychology Department has the Artificial Intelligence Vision Research Unit (AIVRU) which was founded in 1984 to coordinate a large industry/university Alvey research consortium working on the development of computer vision systems for autonomous vehicles and robot workstations. Sheffield Philosophy Department has the Hang Seng Centre for Cognitive Studies, founded in 1992, which runs a workshop/conference series on a two-year cycle on topics of interdisciplinary interest. (1992-4: 'Theory of mind'; 1994- 6: 'Language and thought'.)  From lautrup at connect.nbi.dk Fri Dec 16 12:01:20 1994 From: lautrup at connect.nbi.dk (Benny Lautrup) Date: Fri, 16 Dec 94 12:01:20 MET Subject: Paper available: "Massive Weight Sharing: A Cure for ..." Message-ID: FTP-host: connect.nbi.dk FTP-file: neuroprose/lautrup.massive.ps.Z ---------------------------------------------- The following paper is now available: Massive Weight Sharing: A Cure for Extremely Ill-posed Learning [12 pages] B. Lautrup, L.K. Hansen, I. Law, N. Moerch, C. Svarer and S.C. Strother CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark Presented at the workshop on Supercomputing in Brain Research: From Tomography to Neural Networks, HLRZ, Juelich, Germany, November 21-23, 1994 Abstract: In most learning problems, adaptation to given examples is well-posed because the number of examples far exceeds the number of internal parameters in the learning machine. Extremely ill-posed learning problems are, however, common in image and spectral analysis. They are characterized by a vast number of highly correlated inputs, e.g. pixel or pin values, and a modest number of patterns, e.g. 
images or spectra. In this paper we show, for the case of a set of PET images differing only in the values of one stimulus parameter, that it is possible to train a neural network to learn the underlying rule without using an excessive number of network weights or large amounts of computer time. The method is based upon the observation that the standard learning rules conserve the subspace spanned by the input images.

Please do not reply directly to this message.

-----------------------------------------------
FTP-instructions:

unix> ftp connect.nbi.dk (or 130.225.212.30)
ftp> Name: anonymous
ftp> Password: your e-mail address
ftp> cd neuroprose
ftp> binary
ftp> get lautrup.massive.ps.Z
ftp> quit
unix> uncompress lautrup.massive.ps.Z
-----------------------------------------------

Benny Lautrup, Computational Neural Network Center (CONNECT)
Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen
Denmark

Telephone: +45-3532-5200 Direct: +45-3532-5358
Fax: +45-3142-1016
e-mail: lautrup at connect.nbi.dk

From jaap.murre at mrc-apu.cam.ac.uk Fri Dec 16 08:46:00 1994
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Fri, 16 Dec 94 13:46:00 GMT
Subject: Papers and PC demo available
Message-ID: <9412161346.AA00917@rigel.mrc-apu.cam.ac.uk>

The following three files have recently (15-12-1994) been added to our ftp site (ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre):

File 1: nnga1.ps

Happel, B.L.M., & J.M.J. Murre (1994). Design and evolution of modular neural network architectures. Neural Networks, 7, 985-1004. (About 0.5 Mb; ps.Z version is recommended.)

Abstract: To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. We suggest a number of design principles in the form of explicit ways in which neural modules can cooperate in recognition tasks. These results may supplement recent accounts of the relation between structure and function in the brain. The networks used consist of several modules, standard subnetworks that serve as higher-order units with a distinct structure and function. The simulations rely on a particular network module called CALM (Murre, Phaf, and Wolters, 1989, 1992). This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole. Based on arguments derived from neuroscience, psychology, computational learning theory, and hardware implementation, a framework for the design of such modular networks is laid out. A number of small-scale simulation studies show how intermodule connectivity patterns implement 'neural assemblies' (Hebb, 1949) that induce a particular category structure in the network. Learning and categorization improve as the induced categories are more compatible with the structure of the task domain. In addition to structural compatibility, two other principles of design are proposed that underlie information processing in interactive activation networks: replication and recurrence.
Because a general theory for relating network architectures to specific neural functions does not exist, we extend the biological metaphor of neural networks by applying genetic algorithms (a biocomputing method for search and optimization based on natural selection and evolution) to search for optimal modular network architectures for learning a visual categorization task. The best performing network architectures seemed to have reproduced some of the overall characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways. A potentially important result is that a genetically defined initial architecture can not only enhance learning and recognition performance, but it can also induce a system to better generalize its learned behaviour to instances never encountered before. This may explain why for many vital learning tasks in organisms only a minimal exposure to relevant stimuli is necessary.

File 2: chaos1.ps

Happel, B.L.M., & J.M.J. Murre (submitted). Evolving complex dynamics in modular interactive neural networks. Submitted to Neural Networks. (This is a large file: 1.5 Mb! Retrieve the ps.Z version if possible.)

Abstract: Computational simulation studies, carried out within a general framework of modular neural network design, demonstrate that networks consisting of many interacting modules provide a variety of different neural processing principles. The dynamics underlying these behaviors range from simple linear separation of input vectors in individual modules, to oscillations, evoking chaotic regimes in the activity evolution of a network. As opposed to static representations in conventional neural network models, information in oscillatory networks is represented as space-time patterns of activity. Chaos in a neural network can serve as: (i) a novelty filter, (ii) explorative deterministic noise, (iii) a fundamental form of neural activity that provides continuous, sequential access to memory patterns, and (iv) a mechanism that underlies the formation of complex categories. An experiment in the artificial evolution of modular neural architectures demonstrates that by manipulating modular topology and parameters governing local learning and activation processes, "genetic algorithms" can effectively explore complex interactive dynamics to construct efficient, modular neural architectures for pattern categorization tasks. A particularly striking result is that coupled, oscillatory circuits were installed by the genetic algorithm, inducing the formation of fractal category boundaries. Dynamic representations in these networks can significantly reduce sequential interference due to overlapping static representations in learning neural networks.

File 3: The above two papers, among others, describe a digit recognition network. A demonstration version of this can be retrieved for PCs (486 DX recommended): digidemo.zip (unzip with PKUNZIP; contains several files). Some documentation is included with the program. One of its features is retraining on digits that are misclassified. Only the wrong digits are retrained, without catastrophic interference with the other digits.

With questions and remarks, contact either Bart Happel at Leiden University (happel at rulfsw.leiden.univ) or Jaap Murre at the MRC Applied Psychology Unit (jaap.murre at mrc-apu.cam.ac.uk).
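The genetic search described in the two abstracts above can be pictured with a minimal sketch. The Python fragment below is only a schematic illustration written for this digest: the bit-string genome over module pairs, the toy fitness function, and all names are invented here, and the fitness stand-in replaces what in the papers is the expensive step of building a CALM-based modular network and training it on the visual categorization task.

-----------------------------------------------
# Minimal sketch of a genetic search over modular connectivity patterns.
# The fitness function is a toy stand-in; in the experiments described
# above it would be: build the network, train it, return its accuracy.
import random

N_MODULES = 5                       # genome: one bit per ordered module pair
GENOME_LEN = N_MODULES * N_MODULES  # 1 = connection from module i to module j

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Toy criterion: favour sparse, feedforward-leaning connectivity.
    wiring = sum(genome)
    forward = sum(genome[i * N_MODULES + j]
                  for i in range(N_MODULES)
                  for j in range(N_MODULES) if j > i)
    return forward - 0.5 * wiring

def evolve(pop_size=20, generations=50, p_mut=0.02):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # best-scoring modular wiring found by the search
-----------------------------------------------

The point of the separation is that only the fitness function knows anything about networks; the evolutionary loop itself is generic, so richer genomes (module sizes, learning parameters, and so on) can be dropped in without changing the search.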
From LINDBLAD at vana.physto.se Fri Dec 16 08:44:45 1994
From: LINDBLAD at vana.physto.se (LINDBLAD@vana.physto.se)
Date: Fri, 16 Dec 1994 15:44:45 +0200
Subject: IBM ZISC Paper
Message-ID: <01HKPT2KNC4Y8Y6ZY4@vana.physto.se>

FTP-HOST: archive.cis.ohio-state.edu
FTP-FILE: pub/neuroprose/eide.zisc.ps.Z

The file eide.zisc.ps.Z has been placed in the Neuroprose repository:

"An implementation of the Zero Instruction Set Computer (ZISC036) on a PC/ISA-bus card" (14 pages)

A. Eide, Th. Lindblad, C.S. Lindsey, M. Minerskjoeld, G. Sekhniaidze and G. Szekely

Abstract: The new IBM Zero Instruction Set Computer (ZISC036) chip has been implemented on a PC/ISA-bus card. The chip has 36 RBF-like neurons. It is highly parallel and cascadable with on-chip learning. A card with two ZISC036 chips was built and tested with a noisy character recognition "benchmark".

From LINDBLAD at vana.physto.se Fri Dec 16 08:46:00 1994
From: LINDBLAD at vana.physto.se (LINDBLAD@vana.physto.se)
Date: Fri, 16 Dec 1994 15:46:00 +0200
Subject: WWW Page on Neural Nets in High Energy Physics
Message-ID: <01HKPT45B9N68Y6ZY4@vana.physto.se>

We would like to announce the installation of the WWW homepage "Neural Networks in High Energy Physics". The address is: "http://www1.cern.ch/NeuralNets/nnwInHep.html"

Both hardware and software neural network techniques have been used in high energy physics. Hardware neural networks are used for real-time data selection, while software neural networks are used in off-line analysis to enhance signal-to-background discrimination. We include a long reference list of work done in these areas, as well as news of recent developments. Of most general interest is the extensive page on commercial hardware neural networks. Descriptions are given of VLSI chips (e.g. the new IBM ZISC chip with RBF neurons), accelerator boards, and neurocomputers.

Clark S. Lindsey (lindsey at msia02.msi.se)
Bruce Denby (denby at pisa.infn.it)
Thomas Lindblad (lindblad at vana.physto.se)

From ricart at picard.jmb.bah.com Fri Dec 16 10:32:22 1994
From: ricart at picard.jmb.bah.com (Rick Ricart)
Date: Fri, 16 Dec 94 10:32:22 EST
Subject: Apology for Flame, and Survey
Message-ID: <9412161532.AA03333@picard.jmb.bah.com>

I agree with Jordan Pollack that reviewing fees should not be instituted. I'm not sure what prompted the unprecedented decision by WCNN, but if one reason for the fee is to increase the overall quality of the submissions, I offer the following alternative. Have each society (IEEE, INNS, etc.) establish and publish minimum acceptance criteria for papers. These criteria might include, for example, precise algorithm and parameter descriptions so that experiments can be reproduced by others. I know this is a novel idea for some current scientific and engineering societies and publishers, but I think we're ready for it.

The real reason for the reviewing fee might be, of course, financial. All reviewers are very busy individuals with many professional duties. Some, as is evident, feel they should be reimbursed for "volunteering" their precious time for the given society's benefit. My answer to the problem is, "don't volunteer." There are many other professionals in the field that would gladly volunteer their time to review papers given the opportunity, especially if they have clear minimum acceptance guidelines available from the given society.

These are my personal feelings and in no way do they represent an official Booz-Allen & Hamilton, Inc. viewpoint.
Rick Ricart
Associate
Booz-Allen & Hamilton
Advanced Computational Technologies Practice
McLean, VA 22020
Phone: (703) 902-5494
email: ricart at picard.jmb.bah.com

From timxb at faline.bellcore.com Fri Dec 16 11:26:07 1994
From: timxb at faline.bellcore.com (Timothy X Brown)
Date: Fri, 16 Dec 1994 11:26:07 -0500
Subject: Apology for Flame, and Survey
Message-ID: <199412161626.LAA21111@faline.bellcore.com>

Review fees to cover costs may or may not be justified, but they set a dangerous precedent, especially when an implicit goal was to limit entry. Organizers of conferences/journals/grants/positions can only lose by restricting the submissions that they receive. Each of the "problems" Jordan Pollack mentions can be addressed in slightly different ways:

>a) filtering out computer-generated articles,

I thought "filtering out" was the quintessential role of the review process. By having some standards for acceptance, such fluff can be removed. One of the roles of a conference is to do some of this beforehand so I don't waste valuable time on low-information content.

>b) authors not showing up to deliver talks or posters,

Have no review fee, but the authors must submit the conference fee, which is then refundable if the paper is not accepted.

>c) expense of supporting travel costs for both plenary speakers and conference organizers, and

If the organizers cannot organize a cost-effective conference, get someone else to do it. I know that the American Social Sciences Association runs a huge conference with 28 parallel sessions, and conference fees last year (this was in downtown Boston) were $35 with a $15 student rate! Of course, banquets (choose from one of 10), etc., were extra and water was the only refreshment provided, but what is the point of the conference? BTW, I've organized conferences without my expenses being covered.

>d) declining membership/subscription revenues.

Organize smaller conferences (with <6 parallel sessions), smaller print runs, etc. If profits are the only motive, put a centerfold in the magazine and have naked dancers at the banquet.

Tim

Timothy X Brown
MRE 2E-378
Adaptive Systems Research
Bell Communications Research
445 South Street
Morristown, NJ 07960
Tel: (201) 829-4314
Fax: (201) 829-5888

From singh at psyche.mit.edu Fri Dec 16 16:13:43 1994
From: singh at psyche.mit.edu (Satinder Singh)
Date: Fri, 16 Dec 94 16:13:43 EST
Subject: Transfer/Interference in control problems...
Message-ID: <9412162113.AA17889@psyche.mit.edu>

This is another note on the topic of transfer and interference across problems. I have investigated this issue *a bit* in the reinforcement learning setting of an agent required to solve a SET of hierarchically structured control problems in the *same* environment.

Reinforcement learning (RL) algorithms solve control problems by using control experience to update utility functions. With experience, the utility function improves and therefore so does the resulting ``greedy'' decision strategy. I studied the case of compositionally structured control tasks, where complex tasks are sequences of simpler tasks. I showed how a modular RL architecture (based on Jacobs, Jordan, Nowlan and Hinton's mixture of experts architecture) is able to reuse utility functions learned for simple tasks to quickly construct utility functions for more complex tasks in the hierarchy. The composition of a complex task is not available to the agent, but has to be learned. The use of ``shaping'' to encourage transfer is also illustrated.
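As a schematic illustration of the reuse idea (not of the actual mixture-of-experts architecture), consider tabular Q-learning on a toy chain world. Everything in the fragment below is invented for this note: the chain environment, the function names, and the fixed A-then-B switching rule; in particular it omits the gating module that, in the architecture described above, must itself learn the task decomposition.

-----------------------------------------------
# Reusing utility (Q) functions learned for elementary tasks when
# facing a composite task, in tabular form. Toy example only.
import random

N = 10  # states 0..N-1 on a chain; actions: 0 = left, 1 = right

def step(s, a):
    return max(0, s - 1) if a == 0 else min(N - 1, s + 1)

def q_learn(goal, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Learn a utility function for the task 'reach state goal'."""
    q = [[0.0, 0.0] for _ in range(N)]
    for _ in range(episodes):
        s = random.randrange(N)
        for _ in range(200):                       # step cap per episode
            if s == goal:
                break
            a = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda x: q[s][x])
            s2 = step(s, a)
            r = 1.0 if s2 == goal else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) * (s2 != goal) - q[s][a])
            s = s2
    return q

q_a = q_learn(goal=3)   # elementary task A: reach state 3
q_b = q_learn(goal=7)   # elementary task B: reach state 7

def composite_action(s, phase):
    """Composite task 'A then B': reuse the elementary Q-functions
    instead of learning the composite utility function from scratch."""
    q = q_a if phase == 'A' else q_b
    return max((0, 1), key=lambda a: q[s][a])
-----------------------------------------------

The only point of the sketch is that the composite task starts from the elementary utility functions rather than from scratch, which is where the transfer comes from.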
Anonymous ftp instructions for the above work follow -- the filename is singh-MLJ-1.ps.Z (another file of possible interest is singh-ML92.ps.Z). Several other RL researchers have since worked on this topic. See Dayan and Hinton's paper on ``Feudal Reinforcement Learning'' in NIPS-5 for more recent work and up-to-date references.

================================================================
unix> ftp envy.cs.umass.edu
Name: anonymous
Password: [your ID]
ftp> cd pub/singh
ftp> binary
ftp> get singh-MLJ-1.ps.Z
ftp> bye
unix> uncompress singh-MLJ-1.ps.Z
unix> [your command to print PostScript file] singh-MLJ-1.ps

From N.Sharkey at dcs.shef.ac.uk Fri Dec 16 12:06:27 1994
From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk)
Date: Fri, 16 Dec 94 17:06:27 GMT
Subject: About sequential learning (or interference)
In-Reply-To: Stephen Hanson's message of Wed, 14 Dec 1994 15:20:45 -0500 (EST)
Message-ID: <9412161706.AA01599@entropy.dcs.shef.ac.uk>

>of course avoiding "interference" is
>another way of preventing generalization.
>As usual there is a tradeoff here.

Yes, there is a trade-off, but not with interference, as Hetherington has pointed out. In fact, with backprop, the way to entirely eliminate interference is to get a good approximation to the total underlying function that is being sampled. For example, with an autoencoder memory, if there is good extraction of the identity function then there will be no interference from training on successive memory sets (and of course little need for further training). The trade-off is between old-new discrimination and generalisation. Definitionally, as one improves the other collapses.

In the paper cited by Hetherington (which is now under journal submission), we present a formally guaranteed solution to the interference and discrimination problems (the HARM model), but it demands exponentially increasing computational resources. It is really used to show the problems of other localisation solutions (French, Murre, Kruschke, etc.). We also report some interesting empirical (simulation) results of this trade-off in a much shorter paper:

Sharkey, N.E., & Sharkey, A.J.C. (in press). Interference and discrimination in neural net memory. In Joe Levy, Dimitrios Bairaktaris, John Bullinaria and Paul Cairns (Eds.), Connectionist Models of Memory and Language. UCL Press.

If anyone is interested I will mail a postscript copy of the tech report to them.

Sharkey, N.E., & Sharkey, A.J.C. Understanding Catastrophic Interference in Neural Nets. Technical Report, Department of Computer Science, University of Sheffield, U.K.

Abstract: A number of recent simulation studies have shown that when feedforward neural nets are trained, using backpropagation, to memorize sets of items in sequential blocks and without negative exemplars, severe retroactive interference or {\em catastrophic forgetting} results. Both formal analysis and simulation studies are employed here to show why and under what circumstances such retroactive interference arises. The conclusion is that, on the one hand, approximations to "ideal" network geometries can entirely alleviate interference, but at the cost of a breakdown in discrimination between input patterns that have been learned and those that have not: {\em catastrophic remembering}. On the other hand, localized geometries for subfunctions eliminate the discrimination problem but are easily disrupted by new training sets and thus cause {\em catastrophic interference}.
The paper concludes with a Hebbian Autoassociative Recognition Memory (HARM) model which provides a formally guaranteed solution to the problems of interference and discrimination. This is then used as a yardstick with which to evaluate other proposed solutions.

noel

Noel Sharkey
Professor of Computer Science
Department of Computer Science
Regent Court
University of Sheffield
S1 4DP, Sheffield, UK
N.Sharkey at dcs.shef.ac.uk
FAX: (0742) 780972

From dyyeung at cs.ust.hk Sat Dec 17 09:54:27 1994
From: dyyeung at cs.ust.hk (Dr. D.Y. Yeung)
Date: Sat, 17 Dec 94 09:54:27 HKT
Subject: elitism at NIPS
Message-ID: <9412170154.AA17552@cs.ust.hk>

> The folks involved in running NIPS are aware that in the past, some people
> have felt the conference was biased against outsiders. As the program
> chair for NIPS*94, I want to mention some steps we took to reduce that
> perception:
>
> * This year we required authors to submit full papers, not just abstracts.
>   That made it harder for "famous" people to get by on their reputation
>   alone, and easier for "non-famous" people to get good papers in.
>
> * This year we required reviewers to provide real feedback to all authors.
>   In order to make this less painful, we completely redesigned the review
>   form (it's publicly accessible via the NIPS home page) and, for the first
>   time, accepted reviews by email. Everyone liked getting comments from
>   the reviewers, and authors whose papers were not accepted understood why.
>
> * We continue to recruit new people for positions in both the organizing
>   and program committees. It's not the same dozen people year after year.
>   We also have a large reviewer pool: 176 people served as reviewers
>   this year.
>
> * We tend to bring in "outsiders" as our invited speakers, rather than
>   the usual good old boys. This year's invited speakers included Francis
>   Crick of the Salk Institute, Bill Newsome (a neuroscientist from
>   Stanford), and Malcolm Slaney (a signal processing expert formerly with
>   Apple and now at Interval Research). None had been to NIPS before.

Another step that the organizers of NIPS may consider is to have a "blind" review process, which would be an even more positive step toward reducing that perception. This practice has been used in some other good conferences too.

Regards,
Dit-Yan Yeung

From raymond at fit.qut.edu.au Sat Dec 17 20:24:16 1994
From: raymond at fit.qut.edu.au (Raymond Lister)
Date: Sat, 17 Dec 94 20:24:16 EST
Subject: elitism at NIPS
Message-ID: <199412171024.UAA20868@fitmail.fit.qut.edu.au>

> From Dave_Touretzky at cs.cmu.edu Sat Dec 17 14:53:02 1994
> To: Connectionists at cs.cmu.edu
> Subject: elitism at NIPS
> Date: Fri, 16 Dec 94 03:22:31 EST
>
> ...
>
> * This year we required authors to submit full papers, not just abstracts.
>   That made it harder for "famous" people to get by on their reputation
>   alone, and easier for "non-famous" people to get good papers in.

The move to full papers was a good change. It would be better still if the copies of papers sent to referees did not contain the names and addresses of authors.
Raymond Lister
School of Computing Science
Queensland University of Technology
AUSTRALIA
Internet: raymond at fitmail.fit.qut.edu.au

From gluck at pavlov.rutgers.edu Sat Dec 17 10:45:25 1994
From: gluck at pavlov.rutgers.edu (Mark Gluck)
Date: Sat, 17 Dec 94 10:45:25 EST
Subject: Rutgers Grad Program in BEHAVIORAL AND NEURAL SCIENCES
Message-ID: <9412171545.AA01777@james.rutgers.edu>

---------------------------------------------------------------------
Seeking Applications for Fall 1995 for Ph.D. Program in
BEHAVIORAL AND NEURAL SCIENCES
Rutgers University, Newark
Target date for applications is JANUARY 20, 1995
---------------------------------------------------------------------

If you are considering graduate study in Cognitive, Integrative, Molecular, or Computational Neuroscience, you may be interested in Rutgers' new interdisciplinary research-oriented graduate program in Behavioral and Neural Sciences (BNS). The BNS program aims to provide students with a rigorous understanding of the basic tenets and underpinnings of modern neuroscience. The program emphasizes the multidisciplinary nature of this endeavor, and offers specific research training in Behavioral and Cognitive Neuroscience and Molecular, Cellular and Systems Neuroscience. These research areas represent different but complementary approaches to contemporary issues in behavioral and molecular neuroscience and can emphasize either human or animal studies.

The graduate program is offered by two distinct university units: the newly established Center for Molecular and Behavioral Neuroscience (CMBN) and the Institute of Animal Behavior (IAB). These two units work together but each has its own special emphasis. Research at the CMBN emphasizes integration across levels of analysis and traditional disciplinary boundaries. The CMBN is one of the leading places in this country for the study of the neural bases of behavior and cognition in humans and other animals. Behavioral research areas include the study of memory, language (both signed and spoken), motor control, and vision. Clinically relevant research areas are the study of the physiological and pharmacological aspects of schizophrenia, epilepsy and Parkinson's disease, and the molecular genetics of reading disorders, as well as neuroendocrinology. We have a computational program for students interested in pursuing neural-network models as a tool for understanding psychological and biological issues. There is also a strong focus on single cell (patch clamp, intracellular and extracellular) electrophysiology and multi-unit recording, systems analysis, neuroanatomy and in vivo microdialysis. The IAB offers a unified program in psychobiology and ethological patterns of behavior, with an emphasis on evolution, development and reproduction, as well as neurogenesis and recovery of function from brain damage.

Other Info
----------

At present the CMBN supports up to 40 students with 12-month renewable assistantships for a period of four years. The current stipend for first-year students is $12,750; this includes tuition remission and excellent healthcare benefits. IAB-supported students receive D.S. Lehrman Fellowships, which include a 12-month stipend of approximately $10,500 for four years and tuition remission. In addition, the Johnson & Johnson pharmaceutical company's Foundation has provided four Excellence Awards which increase students' stipends by $5,000. Several other fellowships are offered. More information is available in our graduate brochure.
The Rutgers-Newark campus (as distinct from the New Brunswick campus) is 30 minutes outside New York City, and close to other major university research centers at NYU, Columbia, and Princeton, as well as major industrial research labs in Northern NJ, including AT&T, Bellcore, Siemens, and NEC.

Faculty Associated With Rutgers Behavioral & Neural Sciences Ph.D. Program
--------------------------------------------------------------------------

FACULTY - RUTGERS
Elizabeth Abercrombie (Ph.D., Princeton), neurotransmitters and behavior [CMBN]
Colin Beer (Ph.D., Oxford), ethology [IAB]
April Benasich (Ph.D., New York), infant perception and cognition [CMBN]
Ed Bonder (Ph.D., Pennsylvania), cell biology [Biology]
Linda Brzustowicz (M.D., Ph.D., Columbia), human genetics [CMBN]
Gyorgy Buzsaki (Ph.D., Budapest), systems neuroscience [CMBN]
Mei-Fang Cheng (Ph.D., Bryn Mawr), neuroethology/neurobiology [IAB]
Ian Creese (Ph.D., Cambridge), neuropsychopharmacology [CMBN]
Doina Ganea (Ph.D., Illinois Medical School), molecular immunology [Biology]
Alan Gilchrist (Ph.D., Rutgers), visual perception [Psychology]
Mark Gluck (Ph.D., Stanford), learning, memory and neural computation [CMBN]
Ron Hart (Ph.D., Michigan), molecular neuroscience [Biology]
G. Miller Jonakait (Ph.D., Cornell Medical College), neuroimmunology [Biology]
Judy Kegl (Ph.D., M.I.T.), linguistics/neurolinguistics [CMBN]
Barry Komisaruk (Ph.D., Rutgers), behavioral neurophysiology/pharmacology [IAB]
Sarah Lenington (Ph.D., Chicago), genetic basis of mating preference [IAB]
Joan Morrell (Ph.D., Rochester), cellular neuroendocrinology [CMBN]
Teresa Perney (Ph.D., Chicago), ion channel gene expression and function [CMBN]
Howard Poizner (Ph.D., Northeastern), language and motor behavior [CMBN]
Jay Rosenblatt (Ph.D., New York), maternal behavior [IAB]
Anne Sereno (Ph.D., Harvard), attention and visual perception [CMBN]
Maggie Shiffrar (Ph.D., Stanford), vision and motion perception [CMBN]
Harold Siegel (Ph.D., Rutgers), neuroendocrine mechanisms [IAB]
Ralph Siegel (Ph.D., McGill), neuropsychology of visual perception [CMBN]
Donald Stein (Ph.D., Oregon), neural plasticity [IAB]
Jennifer Swann (Ph.D., Michigan), neuroendocrinology [Biology]
Paula Tallal (Ph.D., Cambridge), neural basis of language development [CMBN]
James Tepper (Ph.D., Colorado), basal ganglia neurophysiology and anatomy [CMBN]
Beverly Whipple (Ph.D., Rutgers), women's health [Nursing]
Laszlo Zaborszky (Ph.D., Hungarian Academy), neuroanatomy of forebrain [CMBN]

BNS FACULTY - UMDNJ
Barry Levin (M.D., Emory Medical), neurobiology
Benjamin Natelson (M.D., Pennsylvania), stress and distress
Allan Siegel (Ph.D., SUNY-Buffalo), aggressive behavior
Walter Tapp (Ph.D., Cornell), primate models of cognitive function

ASSOCIATES OF CMBN
Izrail Gelfand (Ph.D., Moscow State), biology of cells [Biology]
Richard Katz (Ph.D., Bryn Mawr), psychopharmacology [Ciba Geigy]
David Tank (Ph.D., Cornell), neural plasticity [Bell Labs]

For More Information or an Application
--------------------------------------

If you are interested in applying to our graduate program, or possibly applying to one of the labs as a post-doc, research assistant or programmer, please contact us via one of the following:

Dr. Gyorgy Buzsaki or Dr. Mark A. Gluck
CMBN, Rutgers University
197 University Ave.
Newark, New Jersey 07102
Phone (Secretary, Ann Kutyla): (201) 648-1080 (Ext.
3200) Fax: (201) 648-1272
Email: buzsaki at axon.rutgers.edu or gluck at pavlov.rutgers.edu

We will be happy to send you info on our research and graduate program, as well as set up a possible visit to the Neuroscience Center here at Rutgers-Newark.

INTERNET INFORMATION:
---------------------
Additional information on this program can be obtained over the internet via World Wide Web at: http://www.cmbn.rutgers.edu/ Please be warned that it is still under construction.

From sef+ at cs.cmu.edu Sat Dec 17 15:20:28 1994
From: sef+ at cs.cmu.edu (Scott E. Fahlman)
Date: Sat, 17 Dec 94 15:20:28 EST
Subject: elitism at NIPS
In-Reply-To: Your message of Sat, 17 Dec 94 20:24:16 -0500. <199412171024.UAA20868@fitmail.fit.qut.edu.au>
Message-ID:

Personally, I think that "blind" reviewing is a bad idea because it is dishonest. In a field like this one, it is easy in 90% of the cases for an experienced reviewer to tell who is the author of a paper, or at least what group the paper is from. I think it's better to be explicit about the lack of anonymity, and to challenge the reviewers to consciously try to rise above any "in group" bias, than to provide a show of anonymity that is really a fraud in most cases.

I also believe that in some cases the identity of the author does provide essential context to the reviewer. If I were to present evidence that my own Cascade-Correlation algorithm gets certain things wrong, a reviewer might not have to worry as much about rookie mistakes in running Cascor as he would in a paper from someone he has never heard of. On the other hand, if I claim that Cascor does certain things *better* than the competition, the claim might warrant extra scrutiny because I am an interested party in the debate. Even more scrutiny is called for if I am known to have a long-standing feud with the person whose work I am criticizing.

I know that this view is controversial. Some of you will certainly argue that such considerations have no place in science, and that every paper must stand or fall on its content alone. Perhaps that is true in a long paper, but it may be impossible to present all the relevant context in an eight-page paper. (Because of the NIPS style, this is more like a 5-page paper published elsewhere.)

I don't know if this is the right forum for a debate on the merits of blind reviewing. I just thought it would be useful to point out that there are some arguments on the other side.

-- Scott

===========================================================================
Scott E. Fahlman                          Internet: sef+ at cs.cmu.edu
Principal Research Scientist              Phone: 412 268-2575
School of Computer Science                Fax: 412 268-5576
Carnegie Mellon University                Latitude: 40:26:46 N
5000 Forbes Avenue                        Longitude: 79:56:55 W
Pittsburgh, PA 15213                      Mood: :-)
===========================================================================

From peter at ai.iit.nrc.ca Sun Dec 18 22:41:32 1994
From: peter at ai.iit.nrc.ca (Peter Turney)
Date: Mon, 19 Dec 1994 08:41:32 +0500
Subject: Workshop on Data Engineering for Inductive Learning
Message-ID: <9412191341.AA05163@ksl0j.iit.nrc.ca>

CALL FOR PARTICIPATION: Workshop on Data Engineering for Inductive Learning
---------------------------------------------------------------------------
IJCAI-95, Montreal (Canada), August 19/20/21, 1995

Objective
---------

In inductive learning, algorithms are applied to data.
It is well understood that attention to both elements is critical -- unless instances are represented so as to make the learner's generalization methods appropriate, no inductive learner can succeed. In applied work, it is not uncommon for practitioners to spend the bulk of their time exploring and transforming data in efforts to enable the use of existing induction techniques. Despite widespread acceptance of these facts, however, research reports normally give data work short shrift. In fact, a report devoted mainly to the data in an induction problem rather than to the algorithms that process it might well be difficult to publish in mainstream machine learning and neural network venues.

Our goal in this workshop is to counterbalance the predominant focus on algorithms by providing a forum in which data takes center stage. Specifically, we invite discussion of issues relevant to data engineering, which we define as the transformation of raw data into a form useful as input to algorithms for inductive learning. Data engineering is a concern in industrial and commercial applications of machine learning, neural networks, genetic algorithms, and traditional statistics. Among others, papers of the following kinds are welcome:

1. Detailed case studies of data engineering in real-world applications of inductive learning.
2. Descriptions of data engineering techniques or methods that have proven useful across a number of applications.
3. Studies of the data requirements of important inductive learning algorithms, the specifications to which data must be engineered for these algorithms to function.
4. Reports on software tools and environments for data engineering, including papers on "interactive induction" algorithms.
5. Empirical studies documenting the effect of data engineering on the success of induced models.
6. Surveys of data engineering practice in related fields: statistics, pattern recognition, etc. (but not problem-solving or theorem-proving).
7. Papers on constructive induction, feature selection and related techniques.
8. Papers on (re)formulating a problem, to make it suitable for inductive learning techniques. For example, a paper on reformulating the problem of information filtering as learning to classify.

This workshop will enable an overview of current work in data engineering. Since the problem of data engineering has received relatively little published attention, it is difficult to anticipate the work that will be presented at this workshop. We expect that the workshop will make it possible to see common trends, shared problems, and clever solutions that we cannot guess at, given our current, limited view of data engineering. We have allowed ample time for discussion of each paper (10 minutes), to foster an atmosphere that will encourage data engineers to share their stories and to seek common elements. We aim to leave the workshop with a vision of the research directions that might bring science into data engineering.

Participation
-------------

During the workshop, we anticipate approximately 14 presentations. Each paper will be given 25 minutes: 15 minutes for presentation and 10 minutes for discussion. There will be at most 30 participants in the workshop. If you wish to participate in the workshop, you may either submit a paper or a description of work that you have done (are doing, plan to do) that is relevant to the workshop. Papers should be at most 10 pages long.
The first page should include the title, the author's name(s) and affiliation(s), a complete mailing address, phone number, fax number, e-mail, an abstract of at most 300 words, and up to five keywords. For those who do not choose to submit a paper, a description of relevant work should be at most 1 page long and should include complete address information. Workshop participants are required to register for the main IJCAI-95 conference.

All submissions (papers or descriptions of relevant work) will be reviewed by at least two members of the organizing committee. Please send your submissions to the contact address below. Submissions should be PostScript files, sent by e-mail. Accepted submissions will be available before the workshop through ftp. Workshop participants will also be given copies of the papers on the day of the workshop. In selecting the papers, the committee will aim for breadth of coverage of the topics listed above. Ideally, each of the eight kinds of papers listed above would have at least one representative in the workshop. A paper with new ideas on data engineering will be preferred to a high-quality paper on a familiar idea. The workshop organizers plan to publish revised versions of selected papers from the workshop. The papers would be published either as a book or as a special issue of a journal.

The exact date for the workshop has not yet been decided by IJCAI. The workshop is one day in duration and will be held on one of August 19, 20, or 21.

Schedule
--------
Deadline for submissions: March 31, 1995
Notification of acceptance: April 21, 1995
Submissions available by ftp: April 28, 1995
Actual Workshop: August 19/20/21, 1995

Organizing Committee
--------------------
Peter Turney, National Research Council (Canada)
Cullen Schaffer, CUNY/Hunter College (USA)
Rob Holte, University of Ottawa (Canada)

Contact Address
---------------
Dr. Peter Turney
Knowledge Systems Laboratory
Institute for Information Technology
National Research Council Canada
Ottawa, Ontario, Canada K1A 0R6
(613) 993-8564 (office)
(613) 952-7151 (fax)
peter at ai.iit.nrc.ca

From ucganlb at ucl.ac.uk Mon Dec 19 09:08:28 1994
From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London)
Date: Mon, 19 Dec 94 14:08:28 +0000
Subject: catastrophic interference of BP
Message-ID: <187786.9412191408@link-1.ts.bcc.ac.uk>

Studies of catastrophic interference in BP networks are interesting when considering such a network as a model of some human (or animal) memory system. Is there any reason for doing that?

Neil

From ted at SPENCER.CTAN.YALE.EDU Mon Dec 19 09:52:27 1994
From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU)
Date: Mon, 19 Dec 1994 09:52:27 -0500
Subject: "Blind" reviews are impossible
Message-ID: <199412191452.AA23787@PLANCK.CTAN.YALE.EDU>

Even a nonexpert reviewer can figure out who wrote a paper simply by looking for citations of prior work. The only way to guarantee a "blind" review is to forbid authors from citing anything they've done before, or insist on silly euphemisms when citing such publications.

--Ted

From jbower at smaug.bbb.caltech.edu Mon Dec 19 15:44:10 1994
From: jbower at smaug.bbb.caltech.edu (jbower@smaug.bbb.caltech.edu)
Date: Mon, 19 Dec 94 12:44:10 PST
Subject: WCNN / INNS / or whatever
Message-ID: <9412192044.AA00733@smaug.bbb.caltech.edu>

In brief support of Jordan, I believe that it is well known that the finances of both the WCNN and the INNS have been strange for years.
It is also not too surprising that one would confuse the two: same list of participants, basically. It is also well known that the "everyone who can pay" approach taken by the organizers of most of the neural network meetings has resulted in very poor signal-to-noise ratios. The historically more "old boy" approach of NIPS has meant a better-quality meeting, but less openness. I wish Dave Touretzky luck in his efforts to change this; there is little evidence that INNS or IEEE even recognize that there is a problem.

Two years ago, a considerable amount of pressure was placed on the neurobiologists who have organized the computational neuroscience meetings (CNS*92-94) to merge with the next INNS meeting (in this case). The vote by participants of CNS*93 not to was essentially 100%. Other than the comment that very little that goes on in any of these meetings has much to do with neurobiology, the other reasons most often mentioned were the above.

Jim Bower

***************************************
James M. Bower
Division of Biology
Mail code: 216-76
Caltech
Pasadena, CA 91125
(818) 395-6817
(818) 449-0679 FAX
NCSA Mosaic laboratory address: http://www.bbb.caltech.edu/bowerlab
NCSA Mosaic address for GENESIS: http://www.bbb.caltech.edu/GENESIS

From maass at igi.tu-graz.ac.at Mon Dec 19 15:48:06 1994
From: maass at igi.tu-graz.ac.at (Wolfgang Maass)
Date: Mon, 19 Dec 94 21:48:06 +0100
Subject: elitism at NIPS
Message-ID: <199412192048.AA16441@figids01>

I have the impression that for theory papers it would NOT be very beneficial if NIPS changed to "blind reviewing".

From dtam at morticia.cnns.unt.edu Mon Dec 19 16:28:15 1994
From: dtam at morticia.cnns.unt.edu (David Tam)
Date: Mon, 19 Dec 94 15:28:15 CST
Subject: elitism at NIPS
Message-ID: <199412192128.AA02352@morticia.cnns.unt.edu>

I think a totally honest system has to be doubly open, i.e., the reviewers' names have to be attached to the referees' comments. That will hold them accountable for what they say. That will keep them HONEST. If they want to give negative criticism, let it be known who they are. As it is now, it is not an open system -- it is blind one way and open the other way, and that makes the system unfair, biased, and dictatorial. As Scott Fahlman said, it is practically impossible to make it blind, because we all know who does what line of work. So, if it is not blind, make it totally open, and have an appeal process, such that if the referee is WRONG, there is a recourse!! That's what a democratic process is all about -- to keep the system honest by having an OPEN process with an APPEAL recourse. A closed system is a sure way to breed corruption and dictatorship. This response applies to the whole scientific review process in reviewing papers and grants in general, as well as in conferences.

David C. Tam
Center for Network Neuroscience
Dept. of Biological Sciences
University of North Texas

From shams at maxwell.hrl.hac.com Mon Dec 19 20:39:28 1994
From: shams at maxwell.hrl.hac.com (Soheil Shams)
Date: Mon, 19 Dec 1994 17:39:28 -0800
Subject: Position Announcement
Message-ID: <9412200146.AA07164@maelstrom>

SIGNAL / IMAGE PROCESSING RESEARCH OPPORTUNITIES

Hughes Research Laboratories has an immediate opening for a Research Staff Member to join a team of scientists in the Computational Intelligence and Advanced Signal Processing Algorithms Project.
Team members in this project have developed novel, state-of-the-art neural networks, time-frequency transforms, and image compression algorithms for use in both commercial and military applications. The successful candidate will investigate advanced signal and image processing techniques for multimedia compression, data fusion, and pattern recognition applications. Current work is focused on the application of wavelets, neural networks, and computer vision techniques for data compression and pattern recognition applications. Specific duties will include theoretical analysis, algorithm design, and software simulation.

Candidates are expected to have a Ph.D. in Electrical Engineering, Applied Mathematics, or Computer Science. Strong analytical skills and demonstrated ability to perform creative research, along with experience in signal and image processing, image or video compression, or neural networks, are required. Practical experience with C or C++ is essential. Good communication and teamwork skills are keys to success.

Overlooking the Pacific Ocean and the coastal community of Malibu, the Research Laboratories provides an ideal environment for you to make the most of your scientific abilities. Our organization offers a competitive salary and benefits package. Additional information may be obtained from Lynn Ross. For immediate consideration, send your resume to:

Lynn W. Ross
Department RM
Hughes Research Laboratories
3011 Malibu Canyon Road
Malibu, CA 90265
FAX: (310) 317-5651
Internet: lross at msmail4.hac.com

Proof of legal right to work in the United States required. An Equal Opportunity Employer.

From wermter at nats2.informatik.uni-hamburg.de Tue Dec 20 06:28:41 1994
From: wermter at nats2.informatik.uni-hamburg.de (Stefan Wermter)
Date: Tue, 20 Dec 94 12:28:41 +0100
Subject: IJCAI95-workshop: Learning for Natural Language Processing
Message-ID: <9412201128.AA01573@nats2.informatik.uni-hamburg.de>

---------------------------------------------------------------------------
CALL FOR PAPERS AND PARTICIPATION

IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing

International Joint Conference on Artificial Intelligence (IJCAI-95)
Palais de Congres, Montreal, Canada
currently scheduled for August 21, 1995

ORGANIZING COMMITTEE
--------------------
Stefan Wermter, University of Hamburg
Gabriele Scheler, Technical University Munich
Ellen Riloff, University of Utah

PROGRAM COMMITTEE
-----------------
Jaime Carbonell, Carnegie Mellon University, USA
Joachim Diederich, Queensland University of Technology, Australia
Georg Dorffner, University of Vienna, Austria
Jerry Feldman, ICSI, Berkeley, USA
Walther von Hahn, University of Hamburg, Germany
Aravind Joshi, University of Pennsylvania, USA
Ellen Riloff, University of Utah, USA
Gabriele Scheler, Technical University Munich, Germany
Stefan Wermter, University of Hamburg, Germany

WORKSHOP DESCRIPTION
--------------------

In the last few years, there has been a great deal of interest and activity in developing new approaches to learning for natural language processing. Various learning methods have been used, including:

- connectionist methods/neural networks
- machine learning algorithms
- hybrid symbolic and subsymbolic methods
- statistical techniques
- corpus-based approaches.

In general, learning methods are designed to support automated knowledge acquisition, fault tolerance, plausible induction, and rule inferences.
Using learning methods for natural language processing is especially important because language learning is an enabling technology for many other language processing problems, including noisy speech/language integration, machine translation, and information retrieval. Different methods support language learning to various degrees but, in general, learning is important for building more flexible, scalable, adaptable, and portable natural language systems. This workshop is of interest particularly at this time because systems built by learning methods have reached a level where they can be applied to real-world problems in natural language processing and where they can be compared with more traditional encoding methods.

The workshop will bring together researchers from the US/Canada, Europe, Japan, Australia and other countries working on new approaches to language learning. The workshop will provide a forum for discussing various learning approaches for supporting natural language processing. In particular the workshop will focus on questions like:

- How can we apply suitable existing learning methods for language processing?
- What new learning methods are needed for language processing and why?
- What language knowledge should be learned and why?
- What are similarities and differences between different approaches for language learning? (e.g., machine learning algorithms vs neural networks)
- What are strengths and limitations of learning rather than manual encoding?
- How can learning and encoding be combined in symbolic/connectionist systems?
- Which aspects of system architectures and knowledge engineering have to be considered? (e.g., modular, integrated, hybrid systems)
- What are successful applications of learning methods in various fields? (speech/language integration, machine translation, information retrieval)
- How can we evaluate learning methods using real-world language? (text, speech, dialogs, etc.)

WORKSHOP FORMAT
---------------

The workshop will provide a forum for the interactive exchange of ideas and knowledge. Approximately 30-40 participants are expected and there will be time for up to 15 presentations depending on the number and quality of paper contributions received. Normal presentation length will be 15+5 minutes, leaving time for direct questions after each talk. There may be a few invited talks of 25+5 minutes length. In addition to prepared talks, there will be time for moderated discussions after two related sessions. Furthermore, the moderated discussions will provide an opportunity for an open exchange of comments, questions, reactions, and opinions.

PUBLICATION
-----------

Workshop proceedings will be published by AAAI. If there is sufficient interest of the participants of the workshop there may be a possibility to publish the results of the workshop as a book.

REGISTRATION
------------

This workshop will take place directly before the general IJCAI conference. It is an IJCAI policy that workshop participation is not possible without registration for the general conference.

SUBMISSIONS
-----------

All submissions will be refereed by the program committee and other experts in the field. Please submit 4 hardcopies AND a postscript file. The paper format is the IJCAI-95 format: 12pt article style latex, no more than 43 lines, 15 pages maximum, including title, address and email address, abstract, figures, references. Papers should fit 8 1/2" x 11" size. Notifications will be sent by email to the first author.
Postscript files can be uploaded with anonymous ftp:

ftp nats4.informatik.uni-hamburg.de (134.100.10.104)
login: anonymous
password: [your e-mail address]
cd incoming/ijcai95-workshop
binary
put [your file]
quit

Hardcopies AND postscript files must arrive not later than 24th February 1995 at the address below.

##############Submission Deadline: 24th February 1995
##############Notification Date: 24th March 1995
##############Camera ready Copy: 13th April 1995

Please send correspondence and submissions to:

################################################
Dr. Stefan Wermter
Department of Computer Science
University of Hamburg
Vogt-Koelln-Strasse 30
D-22527 Hamburg
Germany
phone: +49 40 54715-531
fax: +49 40 54715-515
e-mail: wermter at informatik.uni-hamburg.de
################################################

From N.Sharkey at dcs.shef.ac.uk Tue Dec 20 08:04:01 1994
From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk)
Date: Tue, 20 Dec 94 13:04:01 GMT
Subject: catastrophic interference of BP
In-Reply-To: Dr Neil Burgess - Anatomy UCL London's message of Mon, 19 Dec 94 14:08:28 +0000 <187786.9412191408@link-1.ts.bcc.ac.uk>
Message-ID: <9412201304.AA18623@entropy.dcs.shef.ac.uk>

>Studies of catastrophic interference in BP networks are
>interesting when considering such a network as a model of some human
>(or animal) memory system.
>Is there any reason for doing that?
>Neil

BP has been used extensively in human cognitive and memory modelling, and so it is useful to know if it "cashes in" in terms of recognition memory as well. The interference issue is also problematic for training online control processes (e.g. in robotic coordination) where new training data is not necessarily accompanied by all of the "old" data samples. If there is not a nice regular general function to be extracted, then forgetting could be very severe. Problems also arise for BP in any such online task that requires discrimination between what was in the training sets and what was not.

noel

From gbugmann at school-of-computing.plymouth.ac.uk Tue Dec 20 12:15:05 1994
From: gbugmann at school-of-computing.plymouth.ac.uk (Guido.Bugmann xtn 2566)
Date: Tue, 20 Dec 94 17:15:05 GMT
Subject: PhD Position available
Message-ID: <210.9412201715@subnode.soc.plym.ac.uk>

A PhD position is available immediately to work on the project "The Plymouth Hand" at the University of Plymouth, UK. This project comprises an application of neural networks to control. The aim of the project is to develop a robot/prosthetic hand with slippage detection as a source of feedback for the control of the grip pressure. A working prototype has been built which can lift a cylinder of solid steel and also a hollow rolled sheet of A4 paper without deforming it.

Further work comprises:

1. Derivation of a practical specification for two systems using the slippage detection adaptive grasping force technique, i.e. a robot gripper and a prosthetic hand.
2. Kinematically model a range of proposed designs using virtual reality software.
3. Validate appropriate kinematic models.
4. Design, build and test an improved version of the existing device to meet the specifications of a robot gripper. In particular:
a) Software environment will be investigated, e.g. assembly code, C, etc., and an appropriate one chosen and developed. New techniques such as rule-based (fuzzy logic) and neural networks will be investigated.
b) Design and development of the power electronics and drive system.
c) Specify and test the necessary microcontroller.
5. Same as 4, but as an adaptation of an existing NHS prosthetic hand.

----------------------------------------------------
The salary is in the standard range of 5-6 Kpounds/year for PhD students.

For further information, please contact:
Paul Robinson
Dept. of Electrical and Electronic Engineering
University of Plymouth
Plymouth PL4 8AA
United Kingdom
Phone (+44) 1752 23 25 95 / 72
Fax (+44) 1752 23 25 83
-----------------------------------------------------

From ucganlb at ucl.ac.uk Tue Dec 20 12:54:01 1994
From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London)
Date: Tue, 20 Dec 94 17:54:01 +0000
Subject: catastrophic interference of BP
Message-ID: <154923.9412201754@link-1.ts.bcc.ac.uk>

>> Studies of catastrophic interference in BP networks are
>> interesting when considering such a network as a model of some human
>> (or animal) memory system.
>> Is there any reason for doing that?
>> Neil

So far Jay McClelland has replied: his recently advertised tech report provides an example of useful consideration of catastrophic interference with respect to the possible existence of complementary learning systems in the brain, and attempts to distinguish between the specific properties of BP and the more general question. Jaap Murre also replied that he partially addresses the problem in [1].

Generally, I think that caution should be expressed in generalising between artificial learning mechanisms. It may be that learning within a system with fixed parameters, mediated by iterative minimisation of some global attribute (e.g. sum-squared error), will tend to show interference with `catastrophic' characteristics, although how it manifests itself will depend on the details of the algorithm. But I would be surprised if other algorithms, e.g. learning by piecemeal constructive algorithms (where extra units are added for a specific local task, such as [2]), behaved like that - so e.g. we might not expect learning like song-learning in birds (associated with neural growth) to necessarily show the same type of interference.

I also suspect that the characteristics of interference differ between iterative algorithms in which the `training set' must be presented many times (e.g. BP) and `one-shot' learning algorithms (e.g. Hopfield model). In the former case there is obviously a problem of how to interleave new data into the training set; in the latter case there is no such problem, and slight variations of the `Hebbian' learning rule can produce imprinting, primacy, recency or combinations of the above [3].

Given the availability of alternatives, it is not clear that BP should always be the canonical choice for modelling learning and memory. It is certainly not the easiest to motivate biologically.

Merry Christmas,
Neil

[1] report: ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/hyper1.ps
[2] M. R. Frean, `The Upstart algorithm: A method for constructing and training feedforward neural networks', {\it Neural Computation}, {\bf 2}, 198-209 (1990).
[3] N. Burgess, J. L. Shapiro and M. A. Moore, `List Learning in Neural Networks', {\it Network}, {\bf 2}, 399-422 (1991).

From nburgess at lbs.lon.ac.uk Tue Dec 20 11:49:43 1994
From: nburgess at lbs.lon.ac.uk (nburgess@lbs.lon.ac.uk)
Date: Tue, 20 Dec 94 16:49:43 GMT
Subject: NNCM 95 Announcement
Message-ID: <199412201649.LAA04285@jupiter.lbs.lon.ac.uk>

ANNOUNCEMENT AND CALL FOR PAPERS

NNCM-95
THIRD INTERNATIONAL CONFERENCE ON NEURAL NETWORKS IN THE CAPITAL MARKETS
Thursday-Friday, October 12-13, 1995
with tutorials on Wednesday, October 11, 1995.
The Langham Hilton, London, England.

Neural networks are now emerging as a major modelling methodology in financial engineering. Because of the overwhelming interest in the NNCM workshops held in London in 1993 and Pasadena in 1994, the third annual NNCM conference will be held October 12-13, 1995, in London. NNCM-95 takes a critical look at state-of-the-art neural network applications in finance. This is a research meeting where original, high-quality contributions to the field are presented and discussed. In addition, a day of introductory tutorials (Wednesday, October 11) will be included to familiarise audiences of different backgrounds with financial engineering and the mathematical aspects of the field.

Application areas include:
+ Bond and stock valuation and trading
+ Foreign exchange rate prediction and trading
+ Commodity price forecasting
+ Risk management
+ Tactical asset allocation
+ Portfolio management
+ Option pricing
+ Trading strategies

Technical areas include, but are not limited to:
+ Generalised least squares
+ Robust model estimation
+ Univariate time series analysis
+ Multivariate data analysis
+ Classification and ranking
+ Pattern recognition
+ Model selection
+ Hypothesis testing and confidence intervals

Instructions for Authors

Authors who wish to present a paper should mail a copy of their extended abstract (4 pages, single-sided, single-spaced) typed on A4 (8.5" by 11") paper to the secretariat no later than May 31, 1995. Submissions will be refereed by no less than four referees and authors will be notified of acceptance by 30 July 1995. Separate registration is required using the attached registration form. Authors are encouraged to submit abstracts as soon as possible.

Registration

To register, complete the registration form and mail it to the secretariat. Please note that places are limited and will be allocated on a "first-come first-served" basis.

Secretariat: For further information, please contact the NNCM-95 secretariat:
Ms Busola Oguntula, London Business School
Sussex Place, Regent's Park, London NW1 4SA, UK
e-mail: boguntula at lbs.lon.ac.uk
phone (+44) (0171) 262 50 50
fax (+44) (0171) 724 78 75

Location: The main conference will be held at The Langham Hilton, which is situated near Regent's Park and is a short walk from Baker Street Underground Station. Further directions, including a map, will be sent to all registrants.

Programme Committee
Dr A. Refenes, London Business School (Chairman)
Dr Y. Abu-Mostafa, Caltech
Dr A. Atiya, Cairo University
Dr N. Biggs, London School of Economics
Dr D. Bunn, London Business School
Dr M. Jabri, University of Sydney
Dr B. LeBaron, University of Wisconsin
Dr A. Lo, MIT Sloan School
Dr J. Moody, Oregon Graduate Institute
Dr C. Pedreira, Catholic University PUC-Rio
Dr M. Steiner, Universitaet Munster
Dr A. Timermann, University of California, San Diego
Dr A. Weigend, University of Colorado
Dr H. White, University of California, San Diego
Hotel Accommodation: Convenient hotels include:

The Langham Hilton
1 Portland Place, London W1N 4JA
Tel: (+44) (0171) 636 10 00
Fax: (+44) (0171) 323 23 40

Sherlock Holmes Hotel
108 Baker Street, London NW1 1LB
Tel: (+44) (0171) 486 61 61
Fax: (+44) (0171) 486 08 84

The White House Hotel
Albany St., Regent's Park, London NW1
Tel: (+44) (0171) 387 12 00
Fax: (+44) (0171) 388 00 91

--------------------------Registration Form ----------------------------

NNCM-95 Registration Form
Third International Conference on
Neural Networks in the Capital Markets
October 12-13 1995

Name:____________________________________________________
Affiliation:_____________________________________________
Mailing Address: ________________________________________
_________________________________________________________
Telephone:_______________________________________________

****Please circle the applicable fees and write the total below****

Main Conference (October 12-13): (British Pounds)
Registration fee 450
Discounted fee for academicians 250
  (letter on university letterhead required)
Discounted fee for full-time students 100
  (letter from registrar or faculty advisor required)

Tutorials (October 11): You must be registered for the main conference in order to register for the tutorials. (British Pounds)
Morning Session Only 100
Afternoon Session Only 100
Both Sessions 150
Full-time students 50
  (letter from registrar or faculty advisor required)

TOTAL: _________

Payment may be made by: (please tick)
____ Check payable to London Business School
____ VISA ____ Access ____ American Express
Card Number:___________________________________

--

From pah at unixg.ubc.ca Tue Dec 20 14:39:18 1994
From: pah at unixg.ubc.ca (Phil A. Hetherington)
Date: Tue, 20 Dec 1994 11:39:18 -0800 (PST)
Subject: catastrophic interference of BP
In-Reply-To: <187786.9412191408@link-1.ts.bcc.ac.uk>
Message-ID:

Neil Burgess wrote:
> Studies of catastrophic interference in BP networks are
> interesting when considering such a network as a model of some human
> (or animal) memory system.
> Is there any reason for doing that?

Of course. Network models have properties such as distributed representations, generalization, interference, content addressability, etc., that are also true of animal and human memory. They provide an alternative framework for construing memory processes that is superior to box-and-arrow modeling, primarily because they can be 'lesioned' as in animal experiments or as found in human patients, and because they are 'executable'. Because they are executable (i.e., perform a function), the effects of these lesions and other parameter manipulations on the performance of the network can be observed.

There are now many alternative supervised learning algorithms available, but there are still many reasons to continue to study this one. It is most certainly true that both humans and animals learn via feedback provided by the effect of erroneous behavior--a process analogous to back prop. Unsupervised learning algorithms such as competitive learning do not give you this. Unsupervised learning algorithms are not easily executable--you can't get them to do many simple 'behaviors', like plot trajectories from starting locations to multiple goals, as in Neil's models. Of course, any supervised learning algorithm will confer the ability to train the net to perform, but there are a couple, mostly pragmatic, reasons to stick with back prop for now.
Primarily, because of the availability of the McClelland & Rumelhart books and program disks, back prop is the most easily available and most commonly used algorithm. Back prop already provides the engine in countless published models. Gaining an understanding of the behavior of this algorithm will enable a better understanding of the flaws of those published models. Thus, it's not so much that you *would want to* use the algorithm; it is that it has already been used. Let's understand why we are discarding it before we do so.

Phil Hetherington
pah at unixg.ubc.ca

From minton at ptolemy-ethernet.arc.nasa.gov Tue Dec 20 17:39:00 1994
From: minton at ptolemy-ethernet.arc.nasa.gov (Steve Minton)
Date: Tue, 20 Dec 94 14:39:00 PST
Subject: Learning Article
Message-ID: <9412202239.AA09677@ptolemy.arc.nasa.gov>

Readers of this group may be interested in the following article, which was just published in the Journal of Artificial Intelligence Research (a journal which is available both online and in print).

Buntine, W.L. (1994) "Operations for Learning with Graphical Models", Volume 2, pages 159-225
Postscript: volume2/buntine94a.ps (1.53M)
compressed: volume2/buntine94a.ps.Z (568K)

Abstract: This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided, including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms.

The PostScript file is available via:
-- comp.ai.jair.papers
-- World Wide Web: The URL for our World Wide Web server is
   http://www.cs.washington.edu/research/jair/home.html
-- Anonymous FTP from either of the two sites below:
   CMU: p.gp.cs.cmu.edu, directory /usr/jair/pub/volume2
   Genoa: ftp.mrg.dist.unige.it, directory pub/jair/pub/volume2
-- automated email. Send mail to jair at cs.cmu.edu or jair at ftp.mrg.dist.unige.it with the subject AUTORESPOND, and the body GET VOLUME2/BUNTINE94A.PS (either upper or lowercase is fine). Note: Your mailer might find this file too large to handle. (The compressed version of this paper cannot be mailed.)
-- JAIR Gopher server: at p.gp.cs.cmu.edu, port 70.

For more information about JAIR, check out our WWW or FTP sites, or send electronic mail to jair at cs.cmu.edu with the subject AUTORESPOND and the message body HELP, or contact jair-ed at ptolemy.arc.nasa.gov.
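[A concrete footnote to the Buntine abstract above, which reviews Gibbs sampling and expectation maximization (EM) as generic algorithm schemas: the following is a minimal EM sketch for a two-component, one-dimensional Gaussian mixture, in Python. It is a toy illustration of the EM schema only, not code from or described in the paper.]

    import numpy as np

    # Toy EM for a two-component 1-d Gaussian mixture.
    rng = np.random.default_rng(42)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

    pi = 0.5                        # mixing weight of component 1
    mu = np.array([-1.0, 1.0])      # initial means
    sd = np.array([1.0, 1.0])       # initial standard deviations

    for _ in range(50):
        # E-step: responsibility of component 1 for each data point
        # (the 1/sqrt(2*pi) constant cancels in the ratio, so it is omitted)
        p0 = (1 - pi) * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        p1 = pi * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r = p1 / (p0 + p1)
        # M-step: re-estimate parameters from the expected assignments
        pi = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=1 - r),
                      np.average((x - mu[1]) ** 2, weights=r)])

    print(pi, mu, sd)   # recovers roughly 0.4, means near (-2, 3), sds near 1

[The abstract's point is that updates of this E/M form can be synthesized systematically from a model's graphical specification rather than derived by hand for each model.]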
From gary at cs.ucsd.edu Tue Dec 20 18:29:18 1994
From: gary at cs.ucsd.edu (Gary Cottrell)
Date: Tue, 20 Dec 94 15:29:18 -0800
Subject: "Blind" reviews are impossible
Message-ID: <9412202329.AA18047@desi>

Right, silly euphemisms work, or simply saying, "prior work on this by cottrell, etc." The year I was on the NIPS PC, I was also on the PC for ACL and AAAI. ACL used blind reviewing. I was sure I was reviewing a paper by Ken Church, and I was wrong. Of the 15 or so papers I reviewed, there was only one where I had any idea who wrote it.

gary cottrell

From pf2 at st-andrews.ac.uk Tue Dec 20 17:07:07 1994
From: pf2 at st-andrews.ac.uk (Peter Foldiak)
Date: Tue, 20 Dec 94 22:07:07 GMT
Subject: research assistant job - computational neuroscience
Message-ID: <8312.9412202207@psych.st-andrews.ac.uk>

University of St Andrews, School of Psychology

RESEARCH ASSISTANT
Computational Neuroscience

A one-year research assistantship is available for research on a novel application of computational methods to the neurophysiology of the primate visual system with Dr Peter Foldiak, in collaboration with Dr David Perrett. The assistant will be involved in the development of software and experimental procedures. Training in computer programming (C, UNIX), and preferably also in some of the following areas, is preferred: mathematics (optimisation), neural networks, genetic algorithms, computer graphics (on SGI), vision or neuroscience.

Salary will be at the appropriate point on the 1B scale for Research Staff (GBP 13941-17813). Application forms and further particulars are available from Personnel Services, University of St Andrews, KY16 9AJ, U.K., tel: +44 334 462567 (out of hours +44 334 462571) or by fax: +44 334 462570, to whom completed forms accompanied by a letter of application should be returned to arrive not later than Monday, 30 January 1995. Please quote reference number: SH/APS0838. The University operates an Equal Opportunities Policy.

From jaap.murre at mrc-apu.cam.ac.uk Wed Dec 21 12:50:03 1994
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Wed, 21 Dec 94 17:50:03 GMT
Subject: sequential interference
Message-ID: <9412211750.AA26645@rigel.mrc-apu.cam.ac.uk>

In response to recent comments by Neil Burgess, Bob French, Phil Hetherington, Noel Sharkey, Jay McClelland and others on 'catastrophic interference', I think it is important to establish that 'vanilla backpropagation' has by now been eliminated as a valid model of human learning and memory, its implausible learning transfer being the most obvious failure. An important reason for this failure can be found in the nature of the hidden-layer representations. Catastrophic interference and hypertransfer (i.e., excessive positive transfer, see Murre, in press a) are two sides of the same coin: hidden-layer representations in backpropagation bear little relationship to the orthogonalities of the input patterns. In the case of interference experiments in humans the following result is well established: if the input patterns (stimuli) in two consecutive learning sets A and B are different, then there will be neither interference nor positive transfer (e.g., Osgood, 1949). This is not the case in backpropagation: orthogonality of the stimuli has no effect on the reduction of interference. The reason for this is that the hidden-layer representations are always about equally overlapping *no matter what the input stimuli are*.
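[That claim is straightforward to check numerically. Below is a minimal NumPy sketch -- an illustration, not code from Murre (in press a) -- in which a one-hidden-layer backprop net is trained on set A, then on set B, where the two sets use disjoint one-hot inputs and are therefore strictly orthogonal; it reports the error on A before and after learning B, and the overlap of the hidden activations. All parameter settings are illustrative.]

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(w0, w1, x):
        h = np.tanh(x @ w0)               # hidden layer
        y = 1 / (1 + np.exp(-(h @ w1)))   # sigmoid output
        return h, y

    def train(w0, w1, X, T, epochs=2000, lr=0.5):
        for _ in range(epochs):
            h, y = forward(w0, w1, X)
            dy = (y - T) * y * (1 - y)        # squared-error gradient at the output
            dh = (dy @ w1.T) * (1 - h ** 2)   # backpropagated to the hidden layer
            w1 -= lr * h.T @ dy
            w0 -= lr * X.T @ dh

    # two pattern sets with disjoint one-hot (hence orthogonal) inputs
    A, B = np.eye(8)[:4], np.eye(8)[4:]
    Ta = rng.integers(0, 2, (4, 2)).astype(float)
    Tb = rng.integers(0, 2, (4, 2)).astype(float)

    w0 = rng.normal(0, 0.5, (8, 5))
    w1 = rng.normal(0, 0.5, (5, 2))
    train(w0, w1, A, Ta)
    err_before = np.abs(forward(w0, w1, A)[1] - Ta).mean()
    train(w0, w1, B, Tb)                  # learn B sequentially, without interleaving A
    err_after = np.abs(forward(w0, w1, A)[1] - Ta).mean()

    hA, hB = forward(w0, w1, A)[0], forward(w0, w1, B)[0]
    print(err_before, err_after)          # error on A jumps although A and B are orthogonal
    print(np.abs(hA @ hB.T).mean())       # the two sets use heavily overlapping hidden activity

[Because both pattern sets drive the same few hidden units, learning B perturbs the shared hidden-to-output weights and performance on A collapses, despite the orthogonality of the inputs.]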
The psychological implausibility of interference and transfer in backpropagation lies precisely in this indifference to the structure of the input stimuli. Surprisingly enough, two-layer networks (i.e., networks with a normal delta rule and a single layer of weights) do not suffer from this problem and are in fact well able to model human interference (see Murre, in press a).

Having said this, it is also important to consider the various variant models of error-correcting learning that have been developed recently and that use backpropagation as a starting point. These models have very plausible characteristics with respect to learning and categorization in humans: Kruschke (1990), Gluck (1991), Gluck and Bower (1988, 1990), Nosofsky, Kruschke, and McKinley (1992), Shanks and Gluck (1994). In addition, there are now many variant models of backpropagation that do not suffer from catastrophic interference. These have already been mentioned in this discussion, so I will not repeat them here. From a biological point of view backpropagation is certainly implausible, but it has been shown to be useful in inferring biologically plausible parameters (e.g., Lockery et al., 1989; Zipser and Anderson, 1988).

I can think of several reasons why backpropagation continues to attract researchers:

1. It is easy to understand.
2. Many simulators are available (see Murre, in press b).
3. It has been shown to approximate all 'well-behaved' functions (e.g., Hornik, Stinchcombe, and White, 1989) and thus is often felt to qualify as a generic learning mechanism. In particular, it can learn non-linearly separable pattern sets.
4. It has only a few parameters and these are not very critical for the final results.
5. It possesses most of the basic elements of a 'prototypical' neural network: distributed representations, graceful degradation, pattern completion, and adequate generalization of learned behavior.

I do not myself think that these reasons necessarily constitute sufficient motivation for using backpropagation, and I indeed prefer to work on other types of learning methods and neural networks. Perhaps, by investigating the limitations of backpropagation, necessary minimal improvements may become clear, so that we can replace it - in its leading role - by either a more plausible variant algorithm or a completely different learning method.

Merry Christmas,

-- Jaap Murre

References

Gluck, M.A. (1991). Stimulus generalization and representation in adaptive network models of category learning. Psychological Science, 2, 50-55.
Gluck, M.A., & G.H. Bower (1988). From conditioning to category learning: an adaptive network model. Journal of Experimental Psychology: General, 117, 227-247.
Gluck, M.A., & G.H. Bower (1990). Component and pattern information in adaptive networks. Journal of Experimental Psychology: General, 119, 105-109.
Kruschke, J.K. (1990). ALCOVE: an exemplar-based connectionist model of category learning. Psychological Review, 99, 22-44.
Lockery, S.R., G. Wittenberg, W.B. Kristan, Jr., & Garrison W. Cottrell (1989). Function of identified interneurons in the leech elucidated using neural networks trained by back-propagation. Nature, 340, 468-471.
Murre, J.M.J. (in press a). Transfer of learning in backpropagation and in related neural network models. In: J. Levy, D. Bairaktaris, J. Bullinaria, & P. Cairns (Eds.), Connectionist Models of Memory and Language. London: UCL Press. (In our ftp site: ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/hyper1.ps)
Murre, J.M.J. (in press b). Neurosimulators. In: M.A. Arbib (Ed.), Handbook of Brain Research and Neural Networks. Cambridge, MA: MIT Press. (In our ftp site: ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/neurosim1.ps)
Nosofsky, R.M., J.K. Kruschke, & S.C. McKinley (1992). Combining exemplar-based category representations and connectionist learning rules. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 211-233.
Osgood, C.E. (1949). The similarity paradox in human learning: a resolution. Psychological Review, 56, 132-143.
Shanks, D.R., & M.A. Gluck (1994). Tests of an adaptive network model for the identification and categorization of continuous-dimension stimuli. Connection Science, 6, 59-89.
Zipser, D., & R.A. Anderson (1988). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679-684.

From jras at uned.es Wed Dec 21 13:12:00 1994
From: jras at uned.es (Jose Ramon Alvarez Sanchez)
Date: 21 Dec 94 19:12 +0100
Subject: CFP: W.S. McCulloch, 25 years in Memoriam
Message-ID: <201*jras@uned.es>

Preliminary Call For Papers

CENTRO INTERNACIONAL DE INVESTIGACION EN CIENCIAS DE LA COMPUTACION
UNIVERSIDAD DE LAS PALMAS DE GRAN CANARIA

W.S. McCulloch: 25 years in Memoriam
INTERNATIONAL CONFERENCE ON BRAIN PROCESSES, THEORIES AND MODELS
Las Palmas, Gran Canaria, Spain. Nov 12-17, 1995.

Conference Chairman: Moreno-Diaz, R. (Spain)
Org. Comm. Chairman: Mira-Mira, J. (Spain)

Scientific Advisory Comm. (preliminary): Amari SI (Japan), Anderson J (USA), Arbib M (USA), Beer S (Canada), Belmonte C (Spain), Blum M (USA), Braitenberg V (Germany), Cull P (USA), DaFonseca JL (Portugal), Eckhorn R (Germany), Harth E (USA), Herault J (France), Jain LC (Australia), Kauffman S (USA), Kilmer W (USA), Leibovic KN (USA), Lettvin J (USA), Malsburg C. von der (Germany), Maturana H (Chile), McClelland J (USA), Mira-Mira J (Spain), Papert S (USA), Pichler F (Austria), Ricciardi L (Italy), Sato S (Japan), Vittoz E (Switzerland), Von Foerster H (USA).

Organizing Comm.: Alvarez JR (Spain), Cabestany J (Spain), Delgado A (Spain), Krasner J (USA), Moreno-Diaz jr R (Spain), Pierce A (USA), Prieto A (Spain), Sanchez JV (Spain), Suarez-Araujo C (Spain).

Organization Staff: Alonso-Garcia MT (Spain), Perez Ruiz M (Spain).

CALL FOR PAPERS

The conference will consist of a series of invited lectures by leading scientists, including M. Arbib, H. Maturana, J. Lettvin and S. Papert, related to the life and work of W.S. McCulloch, an open forum of paper sessions on the topics listed below, and a workshop on the "Embodiments of Mind at the End of the Century".

Relevant topics include:
- Anatomical, physiological, biochemical and biophysical levels.
- Natural and artificial neural networks.
- Mathematics, systems theory and global properties of the nervous system.
- Conceptual and formal tools in brain function modelling.
- Hybrid systems: symbolic-connectionist links.
- Plasticity, reliability, learning and memory.
- Implications of the work of W.S. McCulloch for philosophy and psychology.
- Reverse engineering and neurophysiology.

All accepted papers will be published by The MIT Press in the pre-conference proceedings. Those authors will be sent specific instructions and guidelines. Invited lectures and selected papers will be published by The MIT Press in a post-conference book.

Three copies of the intended papers, prepared according to the following instructions for authors, should be sent no later than May 30, 1995 to:
Prof. Jose Mira-Mira
Dpto. Informatica y Automatica - UNED
Senda del Rey s/n
28040 MADRID - SPAIN
Voice: +34 (1) 398 7155
Fax: +34 (1) 398 6697
e-mail:

Please include your fax, e-mail and telephone for quick contact and notification of acceptance.

INSTRUCTIONS FOR AUTHORS

Authors should submit three copies of full intended papers, not exceeding eight pages of DIN A4 or 8.5 by 11 inch paper, including figures, tables and references, in English. The centered heading must include: the title in capitals; the name(s) of the author(s); the address(es) of the author(s); and a 10-line abstract. Three blank lines should be left between each of the above items and four between the heading and the body of the paper, with 1.6 cm left, right, top and bottom margins, single-spaced and not exceeding the 8-page limit.

One additional DIN A4 sheet must be enclosed with the following organizational data: title and author(s) name(s); a list of five keywords; a reference to the topics the paper relates to; postal address, phone, fax and e-mail if available. All accepted papers will be published in the conference proceedings.

IMPORTANT DATES
Final call and additional info: March 1995
Final date of submission: May 30, 1995
Notification of acceptance: July 1, 1995
Conference: November 12-17, 1995

REGISTRATION FORM
------------------------------------------------------------------------
Last Name: First name:
Organization/University:
Address:
Phone: Fax: E-mail:
Do you plan to submit a paper:
Tentative title:

Payment:
Before April 30:
- Bank draft for 50.000 pts payable on a Spanish bank to FUNDACION UNIVERSITARIA DE LAS PALMAS: MCCULLOCH 95.
- Money transfer of 50.000 pts to BEX account No. 0104 0336 75 0307007836
After April 30:
- Late registration fee of 60.000 pts at the conference desk.
------------------------------------------------------------------------
Please send this registration form to:
Mrs. Maria Teresa Alonso-Garcia
CIICC-Universidad de Las Palmas
Campus de Tafira-Edf. de Informatica
35017 Las Palmas, SPAIN

From tetewsky at lima.psych.mcgill.ca Wed Dec 21 15:30:55 1994
From: tetewsky at lima.psych.mcgill.ca (Sheldon Tetewsky)
Date: Wed, 21 Dec 1994 15:30:55 -0500
Subject: catastrophic interference
Message-ID: <199412212030.PAA03615@lima.psych.mcgill.ca>

Neil Burgess recently wrote that

> studies of catastrophic interference in BP networks are interesting when
> considering such a network as a model of some human (or animal) memory
> system.

However, he also questioned whether or not there was

> ..any reason for doing that.

In the Tetewsky, Shultz, and Buckingham study ("Assessing interference and savings in connectionist models of a sequential recognition memory task"), referenced in Bob French's recent posting, we did some simulations and experiments indicating that neural networks can provide a good account of human performance in a simple recognition memory task when memory is assessed in terms of savings scores. The paper is in progress and will be announced soon. In the meantime, I'll just mention a few relevant details.

Our work was motivated by Scott Fahlman's idea that his cascade-correlation algorithm (CC) has certain inherent design features that should give it an advantage over BP when it comes to dealing with the problem of catastrophic interference. We tested this idea by using an encoder version of CC to model recognition memory. Results indicated that, in contrast to previous findings, retroactive interference is not that serious a problem for BP when memory is measured in terms of savings scores (see the sketch below for the generic form of that measure).
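[For readers unfamiliar with the measure: a savings score compares the effort of original learning with the effort of relearning after interpolated material. A minimal sketch of the generic, Ebbinghaus-style computation follows; the precise measure used by Tetewsky et al. may differ.]

    def savings(trials_original, trials_relearn):
        """Percent savings: how much of the original learning effort is
        spared when relearning previously learned material."""
        return 100.0 * (trials_original - trials_relearn) / trials_original

    # e.g. a network that first needed 120 epochs to learn set A and, after
    # training on set B, needs only 30 epochs to relearn A shows 75% savings
    print(savings(120, 30))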
However, CC also produced a significant increase in savings relative to BP. Aside from this difference in the magnitude of savings, we also have evidence that CC was better than BP at accounting for the relative number of trials that subjects used in the different phases of learning.

-- Sheldon Tetewsky

From rsun at cs.ua.edu Wed Dec 21 17:05:13 1994
From: rsun at cs.ua.edu (Ron Sun)
Date: Wed, 21 Dec 1994 16:05:13 -0600
Subject: No subject
Message-ID: <9412212205.AA29941@athos.cs.ua.edu>

=========================================================================
Call For Papers and Participation

The IJCAI Workshop on Connectionist-Symbolic Integration:
From Unified to Hybrid Approaches

to be held at IJCAI'95
Montreal, Canada
August 19-20, 1995
-------------------------------------------------------------------------

There has been a considerable amount of research in integrating connectionist and symbolic processing. While such an approach has clear advantages, it also encounters serious difficulties and challenges. Therefore, various models and ideas have been proposed to address various problems and aspects in this integration. There is a growing interest from many segments of the AI community, ranging from expert systems, to cognitive modeling, to logical reasoning.

Two major trends can be identified in the state of the art: the unified (or purely connectionist) and the hybrid approaches to integration. Whereas the purely connectionist ("connectionist-to-the-top") approach claims that complex symbol processing functionalities can be achieved via neural networks alone, the hybrid approach is premised on the complementarity of the two paradigms and aims at their synergistic combination in systems comprising both neural and symbolic components. In fact, these trends can be viewed as two ends of an entire spectrum. Until now, however, there has been relatively little work in comparing and combining these fairly isolated efforts. This workshop will provide a forum for discussions and exchanges of ideas in this area, to foster cooperative work. The workshop will tackle important issues in integrating connectionist and symbolic processing.

A Tentative Schedule
---------------------

Day 1:

A. Introduction:
* Invited talks. These talks will provide an overview of the field and set the tone for ensuing discussions.
* Theoretical foundations for integrating connectionist and symbolic processing

B. Definition of the two approaches:
* Do they exhaust the space of current research in connectionist-symbolic integration, or is there room for additional categories?
* How do we compare the unified and hybrid approaches?
* Do the unified and hybrid approaches constitute a clearcut dichotomy or are they just endpoints of a continuum?
* What class of processes and problems is well-suited to unified or hybrid integration? The relevant motivations and objectives.
* What type of model is suitable for what type of application? Enumerate viable target domains.

C. State of the art:
* Recent or ongoing theoretical or experimental research work
* Implemented models belonging to either the unified or hybrid approach
* Practical applications of both types of systems

Research addressing key issues concerning:
* the unified approach: theoretical or practical issues involving systematicity, compositionality and variable binding, biologically inspired models, connectionist knowledge representation, other high-level connectionist models.
* the hybrid approach: modes and methods of coupling, task sharing between various components of a hybrid system, knowledge representation and sharing.
* both: commonsense reasoning, natural language processing, analogical reasoning, and more generally applications of unified and hybrid models.

Day 2:

D. Cognitive Aspects:
* Cognitive plausibility and relations to other AI paradigms
* In cognitive modeling, why should we integrate connectionist and symbolic processing?
* Is there a clear cognitive rationale for such integration? (We may need to examine in detail some typical areas, such as commonsense reasoning and natural language processing.)
* Is there psychological and/or biological evidence for existing models? If so, what is it?

E. Open research issues:
* Can we now propose a common terminology with precise definitions for both approaches to connectionist-symbolic integration and for the location on the continuum?
* How far can unified systems go? Can unified models be supplemented by hybrid models? Can hybrid models be supplanted by unified models?
* Limitations and barriers faced by both approaches
* What breakthroughs are needed for both approaches?
* Is it possible to synthesize various existing models?

Workshop format
---------------
- panel discussions
- mini-group discussions: participants will break into groups of 7-8 to discuss a given theme; group leaders will then form a panel to report on group discussions and attempt a synthesis with audience participation
- interactive talks: this is a novel type of oral presentation we will experiment with. Instead of a classical presentation, the speaker will present a problem or issue and give a brief statement of his personal stand (5 min) to launch discussions, which he will then moderate and conclude.
- classical slide talks followed by Q/A and discussions.

Workshop Co-chairs:
-------------------
Frederic Alexandre, Crin-Cnrs/Inria-Lorraine
Ron Sun, The University of Alabama

Organizing Committee:
---------------------
John Barnden, New Mexico State University
Steve Gallant, Belmont Research Inc.
Larry Medsker, American University
Christian Pellegrini, University of Geneva
Noel Sharkey, Sheffield University

Program Committee:
------------------
Lawrence Bookman (Sun Laboratory, USA)
Michael Dyer (UCLA, USA)
Wolfgang Ertel (FRW, Germany)
LiMin Fu (University of Florida, USA)
Jose Gonzalez-Cristobal (UPM, Spain)
Ruben Gonzalez-Rubio (University of Sherbrooke, Canada)
Jean-Paul Haton (Crin-Inria, France)
Melanie Hilario (University of Geneva, Switzerland)
Abderrahim Labbi (IMAG, France)
Ronald Yager (Iona College, USA)

Schedule:
---------
- The submission deadline for participants is February 1, 1995.
- Authors and potential participants will be notified of the acceptance decision by March 15, 1995.
- Camera-ready copies of working notes papers will be due on April 15, 1995.

Submission:
-----------
- If you wish to present a talk, specify the preferred type of presentation (classical or interactive talk) and submit 5 copies of an extended abstract (within the limit of 5-7 pages) to:

Prof. Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487
(205) 348-6363

- If you only wish to attend the workshop, send 5 copies of a short (no more than one page) description of your interest to the same address above.
- Please be sure to include your e-mail address in all submissions.
From marks at u.washington.edu Thu Dec 22 00:55:40 1994
From: marks at u.washington.edu (Robert Marks)
Date: Wed, 21 Dec 94 21:55:40 -0800
Subject: Neural Networks in Russia
Message-ID: <9412220555.AA24404@carson.u.washington.edu>

For Connectionists:

The 2nd International Symposium on Neuroinformatics and Neurocomputers
Rostov-on-Don, RUSSIA
September 20-23, 1995

Organized by the Russian Neural Network Society (RNNS) and the A.B. Kogan Research Institute for Neurocybernetics (KRINC), in co-operation with the Institute of Electrical and Electronics Engineers Neural Networks Council (IEEE NNC).

First Call for Papers

Research in neuroinformatics and neurocomputing continued in Russia after the research was deflated in the West in the 1970's. As a result, the research sophistication in neural networks is quite advanced in Russia. The first international RNNS/IEEE Symposium, held in October 1992, proved to be a highly successful forum for a diverse international interchange of fresh and novel research results. The second International Symposium on Neuroinformatics and Neurocomputers builds on this remarkable success. The symposium focus is on the neuroscience, mathematics, physics, engineering and design of neuroinformatic and neurocomputing systems.

Rostov-on-Don, the location of the Symposium, is about 1000 km south of Moscow on the scenic Don river. The Don is commonly identified as the boundary between the continents of Europe and Asia. Rostov is the home of the A.B. Kogan Research Institute for Neurocybernetics at Rostov State University - one of the premier neural network research centers in Russia.

Papers for the Symposium should be sent in CAMERA-READY FORM, NOT EXCEEDING 8 PAGES in A4 format, to the Program Committee Co-Chair Alexander A. Frolov. Two copies of the paper should be submitted. The deadline for submission is 15 MARCH, 1995. Notification of acceptance will be sent on or before 15 May, 1995.

SYMPOSIUM COMMITTEE

GENERAL CHAIR
Witali L. Dunin-Barkowski, Dr. Sci., Symposium Chair
President of the Russian Neural Network Society
A.B. Kogan Research Institute for Neurocybernetics
Rostov State University
194/1 Stachka avenue, 344104, Rostov-on-Don, Russia
Tel: +7-8632-28-0588, Fax: +7-8632-28-0367
E-mail: wldb at krinc.rostov-na-donu.su

PROGRAM COMMITTEE CO-CHAIRS
Professor Alexander A. Frolov, Program Co-Chair
Higher Nervous Activity and Neurophysiology Institute
Russian Academy of Science
5a Butlerov str., 117220, Moscow, RUSSIA

Professor Robert J. Marks II, Program Co-Chair
University of Washington, Department of Electrical Engineering
c/o 1131 199th Street S.W., Suite N
Lynnwood, WA 98036-7138, USA

Other information is available from the Symposium Committee.

From david at cns.ed.ac.uk Thu Dec 22 03:58:07 1994
From: david at cns.ed.ac.uk (David Willshaw)
Date: Thu, 22 Dec 1994 08:58:07 GMT
Subject: elitism at NIPS
Message-ID: <199412220858.IAA09676@dumbo.cns.ed.ac.uk>

1. Even though the practice of removing the authors' names from submitted papers before sending them to referees is not perfect, in my view it certainly helps to make the refereeing process less biased.

2. I would also support the suggestion that the cohort of referees be broadened. One way of doing this is to make it more international.
In the list of 174 referees given in this year's programme of abstracts, I estimated that 84% (146) were from the USA and Canada. Only 6 referees were from Germany, 4 from France and 3 from the UK, for example.

David

From trevor at media-lab.media.mit.edu Thu Dec 22 11:01:55 1994
From: trevor at media-lab.media.mit.edu (Trevor Darrell)
Date: Thu, 22 Dec 94 11:01:55 EST
Subject: "Blind" reviews are impossible
In-Reply-To: ted@spencer.ctan.yale.edu's message of Mon, 19 Dec 1994 09:52:27 -0500 <199412191452.AA23787@PLANCK.CTAN.YALE.EDU>
Message-ID: <9412221601.AA06532@marblearch.media.mit.edu>
Delivery-Date: Thu, 22 Dec 94 10:24:24 -0500

   Date: Mon, 19 Dec 1994 09:52:27 -0500
   From: ted at spencer.ctan.yale.edu

   Even a nonexpert reviewer can figure out who wrote a paper simply by
   looking for citations of prior work. The only way to guarantee a
   "blind" review is to forbid authors from citing anything they've done
   before, or insist on silly euphemisms when citing such publications.

In computer vision, papers have been reviewed blind at the major conferences for the past several years. Authors are encouraged to reference themselves in the third person. There seem to be few complaints about the system. While it is sometimes possible to discern that a paper comes from a particular school or group, it is usually impossible to know who the first author was. And one can never be sure that the paper is not from some other (unknown) person who has written the paper building directly on the tradition of another group, and thus uses their terminology but does not actually work there.

In any case, can there really be a disadvantage to blind reviewing? (It seems a weak point indeed to say that the correctness of papers in NIPS is assured by their authorship!) Papers are not blind at the program committee level, so any egregious mistakes brought on by blind reviewing (?) can still be corrected there.

Just my $0.02 from the CV perspective... --trevor

From trevor at mallet.Stanford.EDU Thu Dec 22 18:10:03 1994
From: trevor at mallet.Stanford.EDU (Trevor Hastie)
Date: Thu, 22 Dec 1994 15:10:03 -0800
Subject: Report Announcement
Message-ID: <199412222310.PAA09264@mallet.Stanford.EDU>

The following report is available via anonymous ftp or Mosaic:

ftp://playfair.stanford.edu/pub/reports/hastie/dann.ps.Z

Discriminant Adaptive Nearest Neighbor Classification
Trevor Hastie and Robert Tibshirani

We propose an adaptive nearest neighbor rule that uses local discriminant information to estimate an effective metric for classification. We also propose a method for global dimension reduction that combines local dimension information. In a number of examples, the methods demonstrate the potential for substantial improvements over nearest neighbor classification.

From mozer at neuron.cs.colorado.edu Thu Dec 22 18:32:47 1994
From: mozer at neuron.cs.colorado.edu (Michael C. Mozer)
Date: Thu, 22 Dec 1994 16:32:47 -0700
Subject: NIPS, blind reviewing, and elitism
Message-ID: <199412222332.QAA08554@neuron.cs.colorado.edu>

As NIPS*95 program chair, I want to respond to the issue of blind reviewing. We have considered this idea over the past few months, and on balance the costs seem to outweigh the benefits. Most of the arguments for and against blind reviewing were stated clearly in earlier messages.

NIPS is an elite conference in that researchers tend to self-select their best work for submission, and even then only 25-30% of the submissions are accepted.
Further, researchers who do good work one year and have papers accepted are likely to do good work in the future, so it is not surprising that there is a core group of consistent contributors to the conference, even without any bias. (Serving on the program committee before, I witnessed an opposite bias -- a bias against accepting multiple papers by an individual who had several strong submissions, and against awarding a talk to the same individual in successive years.)

Prior to 1994, there was likely some validity to the perception that NIPS "insiders" had an edge. As a reviewer, it was difficult to evaluate work solely on the basis of extended abstracts; in borderline cases, it helped to know the authors' track records. After Dave Touretzky changed the submission format to full papers in 1994, reviewers and program committee members to whom I've spoken seemed satisfied that the reviewing process was objective. I suspect that the perception of unfairness may linger for a few years, but any such reality was squelched by the new submission format.

If you felt the reviewing process was not objective in 1994, I would like to hear your story. (It is possible to communicate anonymously; to find out more about this service, send mail to help at anon.penet.fi.) Several points of note before expressing a complaint: (1) Many good submissions were ultimately rejected, simply because the submission pool tends to be of high quality and the number of accepted papers is limited by a maximum page count of the proceedings volume. (2) Most of the people who have served as program and general chairs at NIPS have had papers rejected, including the current chairs! The reviewers are the primary decision makers. (3) Reviewers are not always as competent and diligent as one might like, although NIPS uses three reviewers per paper--plus an area chair to arbitrate--to mitigate the consequences of an inappropriate review.

I gladly welcome comments and suggestions aimed at broadening the constituency of NIPS without lowering the meeting's quality. I will summarize to the net the feedback I receive.

Cheers,
Mike Mozer

From postma at cs.rulimburg.nl Fri Dec 23 05:26:23 1994
From: postma at cs.rulimburg.nl (Eric Postma)
Date: Fri, 23 Dec 94 11:26:23 +0100
Subject: Paper Announcement: Priming and Memory
Message-ID: <9412231026.AA28600@bommel.cs.rulimburg.nl>

------------------------------------------------------------------------
FTP-host: ftp.cs.rulimburg.nl
FTP-file: pub/papers/postma/memory.ps.Z
------------------------------------------------------------------------

The following paper is now available:

The Nature of Memory Representations [6 pages]

Eric O. Postma, Ernst H. Wolf, H. Jaap van den Herik and Patrick T. W. Hudson
Department of Computer Science, University of Maastricht
P.O. Box 616, 6200 MD Maastricht, The Netherlands

To appear in the Proceedings of the workshop on Supercomputing in Brain Research: From Tomography to Neural Networks, HLRZ, KFA Juelich, Germany, November 21-23, 1994. World Scientific Publishing Company.

Abstract: This study investigates processing and storage in the brain using a priming task. We model a priming task with a relaxation neural network - the Coulomb Energy Network. The network's performance on a fragment-completion task is assessed by storing a set of memories into the network and, subsequently, testing its completion performance. It has been claimed that the findings of stochastic independence on repeated fragment completion imply noninteracting memory traces.
We nevertheless model memory in a way in which all traces embodying a single representation interact, and find stochastic independence when the memories in the network are densely packed. Dependence between priming fragments can be obtained, but only when the memories are sparsely packed. We conclude that independence on repeated fragment completion does not necessarily imply that the underlying memory traces are noninteracting.

------------------------------------------------------------------------
Please do not reply directly to this message
------------------------------------------------------------------------

FTP instructions:

unix> ftp ftp.cs.rulimburg.nl
ftp> Name: anonymous
ftp> Password: your e-mail address
ftp> cd pub/papers/postma
ftp> binary
ftp> get memory.ps.Z
ftp> quit
unix> uncompress memory.ps.Z

------------------------------------------------------------------------

Merry Christmas,

Eric Postma
Computer Science Department
Faculty of General Sciences
University of Limburg
PO Box 616, 6200 MD Maastricht
The Netherlands
email: postma at cs.rulimburg.nl
web : http://www.cs.rulimburg.nl
tel : +31 43 883493
fax : +31 43 252392

From jon at maths.flinders.edu.au Fri Dec 23 06:58:43 1994
From: jon at maths.flinders.edu.au (Jonathan Baxter)
Date: Fri, 23 Dec 1994 22:28:43 +1030
Subject: TR Available: Learning Internal Representations
Message-ID: <199412231158.AA05637@calvin.maths.flinders.edu.au>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/Thesis/baxter.thesis.ps.Z
--------------------------------------------------------

The following paper is now available:

Learning Internal Representations [112 pages]
Jonathan Baxter

This is a preliminary draft of my PhD thesis. Note that it is in the Thesis subdirectory of the neuroprose archive. It is in the process of being broken into several pieces for submission to Information and Computation, Machine Learning and next year's COLT. I'm afraid I cannot offer hard copies.

ABSTRACT: Most machine learning theory and practice is concerned with learning a single task. In this thesis it is argued that in general there is insufficient information in a single task for a learner to generalise well, and that what is required for good generalisation is information about {\em many similar learning tasks}. The information about similar learning tasks forms a body of prior information that can be used to constrain the hypothesis space of the learner and cause it to generalise better. Typical learning scenarios in which there are many similar tasks are image recognition and speech recognition.

After proving that learning without prior information is impossible except in the simplest of situations, the concept of the {\em environment} of a learner is introduced as a probability measure over the set of learning problems the learner might be expected to learn. It is shown how a sample from such an environment can be used to learn a {\em representation}, or recoding of the input space, that is appropriate for the environment. Learning a representation can equivalently be thought of as learning the appropriate features of the environment. Using Haussler's statistical decision theory framework for machine learning, rigorous bounds are derived on the sample size required to ensure good generalisation from a representation learning process. These bounds show that under certain circumstances learning a representation appropriate for $n$ tasks reduces the number of examples required of each task by a factor of $n$.
It is argued that environments such as character recognition and speech recognition fall into the category of learning problems for which such a reduction is possible. Once a representation is learnt it can be used to learn {\em novel} tasks from the same environment, with the result that far fewer examples are required of the new tasks to ensure good generalisation. Rigorous bounds are given on the number of tasks and the number of samples from each task required to ensure that a representation will be a good one for learning novel tasks.

All the results on representation learning are generalised to cover any form of automated hypothesis space bias that utilises information from similar learning problems. It is shown how gradient-descent based procedures for training Artificial Neural Networks can be generalised to cover representation learning. Two experiments using the new procedure are performed. Both experiments fully support the theoretical results.

The concept of the environment of a learning process is applied to the problem of {\em vector quantization}, with the result that a {\em canonical} distortion measure for the quantization process emerges. This distortion measure is proved to be optimal if the task is to approximate the functions in the environment. Finally, the results on vector quantization are reapplied to representation learning to yield an improved error measure for learning in classifier environments. An experiment is presented demonstrating the improvement.

-------------
Retrieval Instructions:

unix> ftp archive.cis.ohio-state.edu
ftp> Login: anonymous
ftp> Password: e-mail address
ftp> cd pub/neuroprose/Thesis
ftp> binary
ftp> get baxter.thesis.ps.Z
ftp> quit
unix> uncompress baxter.thesis.ps.Z
unix> lpr -s baxter.thesis.ps
---------------

Jonathan Baxter
School of Information Science and Technology,
The Flinders University of South Australia.
jon at maths.flinders.edu.au

From 100020.2727 at compuserve.com Fri Dec 23 11:24:19 1994
From: 100020.2727 at compuserve.com (Andrew Wuensche)
Date: 23 Dec 94 11:24:19 EST
Subject: Discrete Dynamics Lab (for DOS)
Message-ID: <941223162419_100020.2727_BHL46-1@CompuServe.COM>

Discrete Dynamics Lab
---------------------

Announcing the first release of Discrete Dynamics Lab (described below), a program for studying discrete dynamical networks, from Cellular Automata to random Boolean networks, including their attractor basins. Attractor basins are objects in space-time that link network states according to their transitions. Access to these objects provides insights into complexity, chaos and emergent phenomena in CA. In less ordered networks (as well as CA), attractor basins show how the network categorises its state space far from equilibrium, and represent the network's memory.

The program is released as shareware. This is a beta version and comments are welcome. Sections 1-10 in the ddlab.txt file give an overview of the program. Appendix A gives some operating instructions, including quick-start examples, but a detailed reference is not yet complete. An illustrated operating manual will be released in due course. For further background on attractor basins of CA and random Boolean networks, and their implications, refer to refs. [1,2,3,4] below. [1] is a book; hard-copy preprints of [1,2,3] are available on request.

Platform - PC-DOS 386 or higher, with VGA or SVGA graphics, maths co-processor, mouse; extended memory recommended. Ideally a fast 486 with 8MB RAM.
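[To make the idea of attractor basins concrete: for a small enough network one can simply enumerate all 2^N states, follow each trajectory to its attractor cycle, and count basin sizes and garden-of-Eden states. The Python sketch below does this for a toy random Boolean network. It is a brute-force illustration of the objects DDLab displays, not DDLab's own algorithm, which computes predecessors directly rather than enumerating state space; all parameters are illustrative.]

    import itertools, random
    from collections import Counter

    random.seed(1)
    N, K = 8, 3   # 8 nodes, each wired to K nodes, each with a random rule table

    inputs = [random.sample(range(N), K) for _ in range(N)]
    tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

    def step(state):
        # each node looks up its next value, indexed by the bits of its K inputs
        return tuple(tables[i][int(''.join(str(state[j]) for j in inputs[i]), 2)]
                     for i in range(N))

    # the full transition map over all 2^N states
    succ = {s: step(s) for s in itertools.product((0, 1), repeat=N)}

    def attractor(s):
        # follow the trajectory until a state repeats; the repeating tail is the cycle
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = succ[s]
        return tuple(sorted(t for t, i in seen.items() if i >= seen[s]))

    basins = Counter(attractor(s) for s in succ)
    for cycle, size in basins.most_common():
        print(f"attractor of period {len(cycle)}, basin of {size} states")

    goe = len(set(succ) - set(succ.values()))
    print(f"{goe} garden-of-Eden states (states with no predecessor)")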
Download instructions
---------------------

ftp the file "ddlab.zip" (416,884 bytes):

% ftp ftp.cogs.susx.ac.uk
name: anonymous
password: your complete e-mail address
ftp> cd pub/alife/ddlab
ftp> binary
ftp> get ddlab.zip
ftp> close

unzip "ddlab.zip" to give the following files:

ddlab.txt    a text file describing ddlab
ddlab.exe    the program
dos4dw.exe   the DOS extender, giving access to extended memory
gliders.r_s  file with "glider" rules
smalle.fon   \  two font files
sserife.fon  /

to run, from the directory containing these files, enter: ddlab

All questions and comments to Andy Wuensche.

contact address:
Santa Fe Institute
48 Esmond Road, London W4 1JQ, UK
and The University of Sussex (COGS)
tel 081 995 8893, fax 081 742 2178
wuensch at santafe.edu
100020.2727 at compuserve.com
andywu at cogs.susx.ac.uk

Discrete Dynamics Lab
---------------------
Cellular Automata - Random Boolean Networks.
(copyright (c) Andrew Wuensche 1993)
First release of the beta version Dec 1994.

DDLab is an interactive graphics program for research into the dynamics of finite binary networks (for a DOS-PC platform). The program is relevant to the study of complexity, emergent phenomena and neural networks, and implements the investigations presented in [1,2,3,4]. Using a flexible user interface, a network can be set up for any architecture between regular CA (1d or 2d with periodic boundary conditions) on the one hand [1], and random Boolean networks (disordered CA) on the other [2]. The latter have arbitrary connections, and rules which may be different at each site. The neighbourhood (or pseudo-neighbourhood) size may be set from 1 to 9, and the network may have a mix of neighbourhood sizes.

The program iterates the network forward to display space-time patterns (mutations are possible "on the fly"), and also runs the network "backwards" to generate a pattern's predecessors and reconstruct its branching sub-tree of all ancestor patterns until all "garden of Eden" states, the leaves of the sub-tree, have been reached. For smaller networks, sub-trees, basins of attraction or the whole basin of attraction field can be displayed as a directed graph or set of graphs in real time, with many presentation options. Attractor basins may be "sculpted" towards a desired configuration.

Various statistical and analytical measures and numerical data are made available, mostly displayed graphically. The network's parameters, and the graphics display and presentation options, can be set, reviewed and altered very flexibly. Changes can be made "on the fly", including mutations to rules, connections or the current state. 2d networks (including the "game of life" or any mutation thereof) can be displayed as a space-time pattern in a 3d isometric projection. Network parameters, states, data, and the screen image can be saved and loaded in a variety of tailor-made file formats.

Statistical measures and data (mostly presented graphically) include: lambda and Z parameters, rule-table lookup frequency and entropy, pattern density, detailed data on sub-trees, basins and fields, garden-of-Eden density, in-degree frequency and a scatter plot of state-space.

Learning/forgetting algorithms allow attaching/detaching sets of states as predecessors of a given state by automatically mutating rules or changing connections. This allows "sculpting" the basin of attraction field to approach a desired scheme of hierarchical categorisation.

References
----------
"The Global Dynamics of Cellular Automata: An Atlas of Basin of Attraction Fields of One-Dimensional Cellular Automata", Santa Fe Institute Studies in the Sciences of Complexity, Reference Vol.I, Addison-Wesley, 1992. Wuensche.A.,"The Ghost in the Machine: Basins of Attraction of Random Boolean Networks", in Artificial Life III, Santa Fe Institute Studies in the Sciences of Complexity, Addison-Wesley, 1994. Wuensche.A., "Complexity in One-D Cellular Automata: Gliders, Basins of Attraction and the Z parameter", Santa Fe Institute working paper 94-04-025, 1994. Wuensche.A., "The Emergence of Memory; Categorisation Far From Equilibrium", Cognitive Science Research Paper 346, University of Sussex, 1994. To appear in "Towards a Scientific Basis for Consciousness" eds SR Hameroff, AW Kaszniak, AC Scot, MIT Press.  From G.Schram at ET.TUDelft.NL Fri Dec 23 11:54:58 1994 From: G.Schram at ET.TUDelft.NL (Gerard Schram) Date: Fri, 23 Dec 1994 17:54:58 +0100 Subject: Report Announcement Message-ID: <01HKZPN1751S004ZXG@TUDERA.ET.TUDELFT.NL> The following report is available via anonymous ftp or Mosaic: ftp://playfair.stanford.edu/pub/reports/hastie/dann.ps.Z Discriminant Adaptive Nearest Neighbor Classification Trevor Hastie and Robert Tibshirani We propose an adaptive nearest neighbor rule that uses local discriminant information to estimate an effective metric for classification. We also propose a method for global dimension reduction, that combines local dimension information. In a number of examples, the methods demonstrate the potential for substantial improvements over nearest neighbor classification. ---------------------*********--------------------------- Gerard Schram (g.schram at et.tudelft.nl) Control laboratory, Department of Electrical Engineering, Delft University of Technology P.O.Box 5031, 2600 GA Delft, The Netherlands Phone: +31-15-785114 Telefax: +31-15-626738 ---------------------*********---------------------------  From eric at research.NJ.NEC.com Fri Dec 23 11:59:24 1994 From: eric at research.NJ.NEC.com (Eric B. Baum) Date: Fri, 23 Dec 94 11:59:24 EST Subject: NIPS, blind reviewing, and elitism Message-ID: <9412231659.AA00986@yin> IMO the NIPS '94 program was the strongest NIPS program I've seen, and in fact the strongest program I've seen in any conference of even moderate size, in the sense that I saw no weak or uninteresting papers. (However I only visited a fraction of the posters so I may have missed some.) Any skeptics should wait and consult the Proceedings, which IMO will provide a Prima Facie Case that the refereeing procedure is not broken, and should not be fixed (at least radically). 
-------------------------------------
Eric Baum
NEC Research Institute, 4 Independence Way, Princeton NJ 08540
PHONE: (609) 951-2712, FAX: (609) 951-2482, Inet: eric at research.nj.nec.com

From schmidhu at informatik.tu-muenchen.de Fri Dec 23 12:43:29 1994
From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber)
Date: Fri, 23 Dec 1994 18:43:29 +0100
Subject: No subject
Message-ID: <94Dec23.184341met.42261@papa.informatik.tu-muenchen.de>

SEMILINEAR PREDICTABILITY MINIMIZATION PRODUCES
ORIENTATION SENSITIVE EDGE DETECTORS

Technical Report FKI-201-94 (7 pages, 0.95 Megabytes)

Juergen Schmidhuber
Bernhard Foltin
Fakultaet fuer Informatik
Technische Universitaet Muenchen
80290 Muenchen, Germany

December 24, 1994

Static real world images are processed by a computationally simple and biologically plausible version of the recent predictability minimization algorithm for unsupervised redundancy reduction. Without a teacher and without any significant pre-processing, the system automatically learns to generate orientation sensitive edge detectors in the first (semilinear) layer.

To obtain a copy, do:

unix> ftp flop.informatik.tu-muenchen.de (or ftp 131.159.8.35)
Name: anonymous
Password: your email address
ftp> binary
ftp> cd pub/fki
ftp> get fki-201-94.ps.gz (0.222 MBytes)
ftp> bye
unix> gunzip fki-201-94.ps.gz
unix> lpr fki-201-94.ps

Alternatively, check out
http://papa.informatik.tu-muenchen.de/mitarbeiter/schmidhu.html

If your net browser does not know gzip/gunzip (which works better than compress/uncompress), I can mail you an uncompressed postscript version (as a last resort).

Merry Christmas!

Juergen Schmidhuber

From wahba at stat.wisc.edu Fri Dec 23 22:36:19 1994
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Fri, 23 Dec 94 21:36:19 -0600
Subject: SS-ANOVA for `soft' classification-anct
Message-ID: <9412240336.AA14678@hera.stat.wisc.edu>

The following report is available via anonymous ftp or Mosaic:

ftp://ftp.stat.wisc.edu/pub/wahba/exptl.ssanova.ps.gz

Smoothing Spline ANOVA for Exponential Families, With Application to the Wisconsin Epidemiological Study of Diabetic Retinopathy

Grace Wahba, Yuedong Wang, Chong Gu, Ronald Klein MD, and Barbara Klein MD

Given attributes (which may be discrete or continuous) and outcomes (Class 1 or Class 0) of a sample of instances, we develop Smoothing Spline ANOVA methods to estimate the *probability* of membership in Class 1, given the attribute vector. These methods are suitable when outcomes as a function of attributes are not clear-cut, as, for example, occurs when estimating risk of some medical outcome, given various predictor variables and treatments. These methods are penalized log-likelihood methods and plots of the cross sections of the estimates are generally fairly easy to interpret in context. We use the results to estimate the probability of four-year progression of diabetic retinopathy, given the three predictor variables glycosylated hemoglobin, duration of diabetes and body mass index at the study baseline, based on data from the Wisconsin Epidemiological Study of Diabetic Retinopathy. We discuss methods for multiple smoothing parameter selection (the bias-variance tradeoff!!), numerical methods for computing the estimate and Bayesian `confidence intervals' for the estimate. We discuss methods of informal and formal model selection (stacked generalization!!) and some open questions. This work provides further details for work which has previously been announced in NIPS-93 and elsewhere.
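[As a toy illustration of the penalized log-likelihood idea behind such `soft' classification estimates -- not the SS-ANOVA estimator itself, which uses spline penalties and data-driven smoothing parameter selection -- the Python sketch below fits a ridge-penalized logistic regression by gradient ascent. All names and settings are illustrative.]

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))
    y = (rng.random(200) < p_true).astype(float)   # soft outcomes: the same x can yield 0 or 1

    lam, w = 0.1, np.zeros(3)                      # lam trades data fit against smoothness
    for _ in range(500):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (y - p) / len(y) - lam * w    # log-likelihood gradient minus penalty term
        w += 0.5 * grad

    print(w)   # the fitted 1/(1+exp(-x.w)) is a smooth estimated probability of Class 1

[In the SS-ANOVA setting the single parameter lam becomes several smoothing parameters, chosen from the data, which is exactly the bias-variance tradeoff flagged in the abstract.]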
Other related papers in the same directory include gacv.ps.gz, ssanova.ps.gz, ml-bib.ps and theses/ywang.thesis.README.

Grace Wahba
wahba at stat.wisc.edu
.....`think snow'

From lazzaro at CS.Berkeley.EDU Wed Dec 28 18:11:33 1994
From: lazzaro at CS.Berkeley.EDU (John Lazzaro)
Date: Wed, 28 Dec 1994 15:11:33 -0800
Subject: No subject
Message-ID: <199412282311.PAA15382@snap.CS.Berkeley.EDU>

I sent these comments to Mike Mozer in response to his call for NIPS comments here last week, and he suggested I pass my comments on to the mailing list.

As an EE, I tend to look towards IEEE conferences as examples: there seem to be two major types.

[1] Conferences whose prime goal is to bring together as large a segment of a research community as possible, while still maintaining a standard of quality for presentations. IEEE Circuits and Systems and IEEE Acoustics, Speech, and Signal Processing are two prime examples. Among other attributes, it's expected that very many if not the majority of the attendees will be presenting, and that the conference's primary clientele is the research community (both academic and industrial), as opposed to the industrial developer.

[2] Conferences whose prime goal is to showcase the state of the art in a field, primarily for the benefit of applied development attendees. In these conferences, a large majority of the attendees will not be presenting or even doing research, but are either applied developers or non-technical observers. The IEEE Solid State Circuits conference is probably the best example of this type of conference: it has a session on the N best new microprocessor designs, the N best new DRAM designs, etc. For some types of chips, academic groups can be competitive with industry groups, and the session is mixed; for others, a $100 million investment is needed to design the chip, and so industrial groups dominate.

Many [1] conferences have explicit submission rules to ensure that as many different research groups around the world are presenting as possible: some limit the number of papers a single author can submit, others use "membership" techniques to try to limit the number of papers from a lab. On the other hand, [2] conferences are concerned with fairness and with having the highest quality, but explicitly do not have "broadness" in their charter: if the N best new op-amps all come from a certain company, so be it.

NIPS has always seemed squarely in the middle of these two types of conferences, with the "elite" aspirations of the Solid State Circuits conference, but with a ratio of "researcher" to "developer" attendees that is closer to the first type of conference. Certain other conferences (most notably SIGGRAPH) started out where NIPS has been, and ended up as [2]-type conferences -- even if you discount the majority of the SIGGRAPH attendees who are there for the trade show and arty stuff, most attendees of the conference papers are there to listen and learn new research ideas for possible use in development, not to present papers themselves.

The tone of most of the postings in this thread seems to imply that [1] is the direction they'd like NIPS to move in. If so, the IEEE experience has been that more direct approaches than "blind reviewing" can be used to broaden the conference.
Personally, I'd prefer [2], because I believe the technologies associated with NIPS are going to have the same degree of engineering impact as SIGGRAPH (computer graphics) and ISSCC (integrated circuits) have had on the world, and part of realizing that impact is having a conference of type [2]. But for this to occur, NIPS needs to market itself to the product development community, so that the developer:researcher ratio at NIPS increases significantly.