From todd at phy.ucsf.edu Mon Aug 1 12:30:59 1994
From: todd at phy.ucsf.edu (todd@phy.ucsf.edu)
Date: Mon, 1 Aug 1994 09:30:59 -0700
Subject: New paper: Ffwd Hebbian Learning w/ Nonlinear Outputs
Message-ID: <9408011630.AA04378@dizzy.ucsf.EDU>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/troyer.ffwd_hebb.ps.Z

The following paper has been submitted to Neural Networks and is available from the Ohio State neuroprose archive.

TITLE: Feedforward Hebbian Learning with Nonlinear Output Units: A Lyapunov Approach

AUTHOR: Todd Troyer
W.M. Keck Center for Integrative Neuroscience and Department of Physiology
513 Parnassus Ave. Box 0444
University of California, San Francisco
San Francisco, CA 94143

ABSTRACT: A Lyapunov function is constructed for the unsupervised learning equations of a large class of two-layer networks. Units in the output layer are recurrently connected by fixed symmetric weights; only the feedforward connections between layers undergo learning. In contrast to much of the previous work on self-organization of this type, the output units have nonlinear transfer functions. The Lyapunov function is similar in form to that derived by Cohen-Grossberg and Hopfield. Two theorems are proved regarding the location of stable equilibria in the limit of high-gain transfer functions. The analysis is applied to the soft competitive learning networks of Amari and Takeuchi.

Retrieve this paper by anonymous ftp from archive.cis.ohio-state.edu (128.146.8.52) in the /pub/neuroprose directory. The name of the paper in this archive is troyer.ffwd_hebb.ps.Z [15 pages]. No hard copies available.

From giles at research.nj.nec.com Mon Aug 1 13:12:11 1994
From: giles at research.nj.nec.com (Lee Giles)
Date: Mon, 1 Aug 94 13:12:11 EDT
Subject: Reprint: SYNAPTIC NOISE IN DYNAMICALLY-DRIVEN RECURRENT NEURAL NETWORKS
Message-ID: <9408011712.AA13061@fuzzy>

Reprint: SYNAPTIC NOISE IN DYNAMICALLY-DRIVEN RECURRENT NEURAL NETWORKS: CONVERGENCE AND GENERALIZATION

The following reprint is available via the University of Maryland Department of Computer Science Technical Report archive:

"Synaptic Noise in Dynamically-driven Recurrent Neural Networks: Convergence and Generalization"
UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-94-89 AND CS-TR-3322

Kam Jim(a), C.L. Giles(a,b), B.G. Horne(a)
{kamjim,giles,horne}@research.nj.nec.com
(a) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
(b) Institute for Advanced Computer Studies, U. of Maryland, College Park, MD 20742

There has been much interest in applying noise to feedforward neural networks in order to observe its effect on network performance. We extend these results by introducing and analyzing various methods of injecting synaptic noise into dynamically-driven recurrent networks during training. By analyzing and comparing the effects of these noise models on the error function, we find that applying a controlled amount of noise during training can improve convergence time and generalization performance. In addition, we analyze the effects of various noise parameters (additive vs. multiplicative, cumulative vs. non-cumulative, per time step vs. per sequence) and predict that best overall performance can be achieved by injecting additive noise at each time step.
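For a concrete picture of the scheme the abstract singles out, here is a minimal sketch of additive, non-cumulative, per-time-step synaptic noise during a forward pass. This is our illustration, not the authors' code; the network shape, the noise scale sigma, and the tanh state units are all assumed for the example.

import numpy as np

def rnn_forward_noisy(W_rec, W_in, inputs, sigma=0.01, rng=None):
    # One forward pass of a simple recurrent net in which a fresh,
    # zero-mean Gaussian perturbation is added to the recurrent weights
    # at every time step (additive, non-cumulative, per time step).
    # Training proceeds through these noisy passes.
    rng = rng or np.random.default_rng(0)
    h = np.zeros(W_rec.shape[0])
    for x in inputs:                                   # inputs: sequence of vectors
        W_noisy = W_rec + rng.normal(0.0, sigma, W_rec.shape)  # new draw each step
        h = np.tanh(W_noisy @ h + W_in @ x)
    return h

A multiplicative variant would scale each weight by (1 + noise) instead, and a per-sequence variant would draw the perturbation once per string rather than at every step.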
Noise contributes a second-order gradient term to the error function which can be viewed as an anticipatory agent to aid convergence. This term appears to find promising regions of weight space in the beginning stages of training, when the training error is large, and should improve convergence on error surfaces with local minima. Synaptic noise also modifies the error function by favoring internal representations where state nodes are operating in the saturated regions of the sigmoid discriminant function, thus improving generalization to longer sequences. We substantiate these predictions by performing extensive simulations on learning the dual parity grammar from grammatical strings encoded as temporal sequences with a second-order fully recurrent neural network.

FTP INSTRUCTIONS

unix> ftp cs.umd.edu (128.8.128.8)
Name: anonymous
Password: (your_userid at your_site)
ftp> cd pub/papers/TRs
ftp> binary
ftp> get 3322.ps.Z
ftp> quit
unix> uncompress 3322.ps.Z

--
C. Lee Giles / NEC Research Institute / 4 Independence Way
Princeton, NJ 08540 / 609-951-2642 / Fax 2482

From inbs at bingsuns.cc.binghamton.edu Mon Aug 1 13:48:58 1994
From: inbs at bingsuns.cc.binghamton.edu (INBS-conference)
Date: Mon, 1 Aug 94 13:48:58 EDT
Subject: call for papers
Message-ID: <9408011748.AA23972@bingsuns.cc.binghamton.edu>

C A L L for P A P E R S

First International IEEE Symposium on
INTELLIGENCE in NEURAL AND BIOLOGICAL SYSTEMS
May 29-31, 1995, Washington DC Area, USA

Sponsored by: IEEE Computer Society
In cooperation with: AAAS; AAAI; SMC Society

AIMS and SCOPE of the Symposium

Intelligence in artificial neural networks and the computational evolution of biological systems are two very important and active research areas, which offer and promise many practical applications to scientists and other professionals in industry and government as well. In response to this demand, the INBS Symposium offers a theoretical and practical medium for work on evolutionary and intelligence processes in both artificial and biological systems, and on the interaction between these fields.

Some Topics

EVOLUTIONARY COMPUTING (DNA sequence processing, Genome Processes, DNA Topologies, Synthesis of DNA, Formal Linguistics of DNA, Structure/Function Correlation, Computational Genetics)

BIOTECHNOLOGY AND APPLICATIONS (Mapping the Genome, Human Genome, Molecular Computing, Limitations of Biological Models)

GENETIC ALGORITHMS (Clustering, Optimization, Searching, Programming, etc.)

LANGUAGE UNDERSTANDING (Natural Languages, NL Translation, Text Abstraction, Computational Linguistics)

LEARNING AND PERCEPTION (Supervised, Unsupervised, Hybrid, Understanding, Planning, Interpretation)

NEUROSCIENCE (Adaptive Control Models, Fuzzy & Probabilistic Models, Hybrid Models, Dynamic Neural and Neurocomputing Models, Self-Organized Models)

SOME DISTINGUISHED KEYNOTE SPEAKERS

A. Apostolico, Europe; S. Arikawa, Japan; S. Hameroff, USA

PROGRAM COMMITTEE

N. Bourbakis, BU-SUNY, USA, Chair; R. Brause, UG, Germany; K. DeJong, GMU, USA; J. Shavlik, UWM, USA; C. Koutsougeras, TU, USA; H. Kitano, Japan; M. Perlin, CMU, USA; H. Pattee, BU, USA; D. Schaffer, Philips Lab, USA; D. Searls, UPA, USA; J. Collado-Vides, UNAM; T. Yokomori, UEC, Japan; S. Rasmussen, Los Alamos NL; G. Paun, Romania; A. Restivo, U. Palermo, Italy; M. Crochemore, U. Paris, France; D. Perrin, U. Paris, France; R. Reynolds, WSU, USA; M. Conrad, WSU, USA; M. Kameyama, TU, Japan; J. Nikolis, UP, Greece; T. Head, BU, USA, Vice Chair; C. Benham, MS, USA; R. VanBuskirk, BU-SUNY, USA; E. Dietrich, BU-SUNY, USA; S. Koslow, NIH, USA; M. Huerta, NIH, USA; B. Punch, MSU, USA

Local Arrangement/Publicity Committee

Cynthia Shapiro; Sukarno Mertoguno; James Gattiker; Ali Moghaddamzadeh

Publication/Registration Committee

D.I. Kavraki

INFORMATION FOR AUTHORS

Authors are requested to submit four copies (in English) of their typed complete manuscript (25 pages max), or an extended summary (5-10 pages max), by Nov. 31, 1994, to N. G. Bourbakis, Binghamton University, T.J. Watson School, AAAI Lab, NY 13902, Tel: 607-777-2165, 607-771-4033; Fax: 607-777-4822, E-mail: Bourbaki at BingSuns.CC.Binghamton.edu, or INBS at Bingsuns.CC.Binghamton.edu. Each manuscript submitted to INBS must indicate the most relevant areas and include the complete address of at least one of the authors. Notification of acceptance: Jan. 31, 1995. Final copies of the papers accepted by INBS are due March 21, 1995.

From tesauro at watson.ibm.com Mon Aug 1 13:50:48 1994
From: tesauro at watson.ibm.com (tesauro@watson.ibm.com)
Date: Mon, 1 Aug 94 13:50:48 EDT
Subject: NIPS*94 Registration
Message-ID:

Registration for NIPS*94 is now open. A registration brochure is available on-line via the NIPS*94 Mosaic homepage (http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html), or by anonymous FTP:

FTP site: mines.colorado.edu (138.67.1.3)
FTP file: /pub/nips94/nips94-registration-brochure.ps

The brochure contains the registration form and describes the program highlights, including the list of invited speakers and tutorial speakers. People without access to FTP or Mosaic may request a copy of the brochure by e-mail to "nips94 at mines.colorado.edu" or by physical mail to:

NIPS*94 Registration
Dept. of Mathematical and Computer Sciences
Colorado School of Mines
Golden, CO 80401 USA

-- Gerry Tesauro, NIPS*94 General Chair

From plaut at cmu.edu Tue Aug 2 10:14:54 1994
From: plaut at cmu.edu (David Plaut)
Date: Tue, 02 Aug 1994 10:14:54 -0400
Subject: TR: Understanding Normal and Impaired Word Reading
Message-ID: <7478.775836894@crab.psy.cmu.edu>

FTP-host: hydra.psy.cmu.edu [128.2.248.152]
FTP-file: pub/pdp.cns/pdp.cns.94.5.ps.Z [78 pages; 353Kb compressed; 924Kb uncompressed]

For those who do not have FTP access, physical copies can be requested from Barbara Dorney.

Understanding Normal and Impaired Word Reading: Computational Principles in Quasi-Regular Domains

David C. Plaut, Carnegie Mellon University
James L. McClelland, Carnegie Mellon University
Mark S. Seidenberg, University of Southern California
Karalyn E. Patterson, MRC Applied Psychology Unit

Technical Report PDP.CNS.94.5, July 1994

We develop a connectionist approach to processing in quasi-regular domains, as exemplified by English word reading. A consideration of the shortcomings of a previous implementation (Seidenberg & McClelland, 1989, Psych. Rev.) in reading nonwords leads to the development of orthographic and phonological representations that better capture the relevant structure among the written and spoken forms of words. In a number of simulation experiments, networks using the new representations learn to read both regular and exception words, including low-frequency exception words, and yet are still able to read pronounceable nonwords as well as skilled readers.
A mathematical analysis of the effects of word frequency and spelling-sound consistency in a related but simpler system serves to clarify the close relationship of these factors in influencing naming latencies. These insights are verified in subsequent simulations, including an attractor network that reproduces the naming latency data directly in its time to settle on a response. Further analyses of the network's ability to reproduce data on impaired reading in surface dyslexia support a view of the reading system that incorporates a graded division of labor between semantic and phonological processes. Such a view is consistent with the more general Seidenberg and McClelland framework and has some similarities with---but also important differences from---the standard dual-route account.

David Plaut, Department of Psychology, Carnegie Mellon University
345H Baker Hall, Pittsburgh, PA 15213-3890
plaut at cmu.edu / 412/268-5145 / 412/268-5060 (FAX)
"Doubt is not a pleasant condition, but certainty is an absurd one." --Voltaire

From john at dcs.rhbnc.ac.uk Tue Aug 2 11:12:04 1994
From: john at dcs.rhbnc.ac.uk (john@dcs.rhbnc.ac.uk)
Date: Tue, 02 Aug 94 16:12:04 +0100
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <8129.9408021512@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT):

NeuroCOLT Technical Report NC-TR-94-5:

A Weak Version of the Blum, Shub & Smale Model
by Pascal Koiran

Abstract: We propose a weak version of the Blum-Shub-Smale model of computation over the real numbers. In this weak model only a "moderate" usage of multiplications and divisions is allowed. The class of boolean languages recognizable in polynomial time is shown to be the complexity class P/poly. The main tool is a result on the existence of small rational points in semi-algebraic sets which is of independent interest. As an application, we generalize recent results of Siegelmann & Sontag on recurrent neural networks, and of Maass on feedforward nets. A preliminary version of this paper was presented at the 1993 IEEE Symposium on Foundations of Computer Science. Additional results include:

- an efficient simulation of order-free real Turing machines by probabilistic Turing machines in the full Blum-Shub-Smale model;
- an efficient simulation of arithmetic circuits over the integers by boolean circuits;
- the strict inclusion of the real polynomial hierarchy in weak exponential time.

The Report NC-TR-94-5 can be accessed and printed as follows:

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-94-5.ps.Z
ftp> bye
% zcat nc-tr-94-5.ps.Z | lpr -l

Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility.
Best wishes,
John Shawe-Taylor

From sutton at gte.com Tue Aug 2 15:54:04 1994
From: sutton at gte.com (Rich Sutton)
Date: Tue, 2 Aug 1994 14:54:04 -0500
Subject: On Step-Size and Bias in TD Learning (Paper Available)
Message-ID: <199408021849.AA05405@ns.gte.com>

The following paper appeared in the proceedings of the 1994 Yale Workshop on Adaptive and Learning Systems. FTP instructions follow.

ON STEP-SIZE AND BIAS IN TEMPORAL-DIFFERENCE LEARNING
by Richard S. Sutton and Satinder P. Singh (MIT)

We present results for three new algorithms for setting the step-size parameters, alpha and lambda, of temporal-difference learning methods such as TD(lambda). The overall task is that of learning to predict the outcome of an unknown Markov chain based on repeated observations of its state trajectories. The new algorithms select step-size parameters online in such a way as to eliminate the bias normally inherent in temporal-difference methods. We compare our algorithms with conventional Monte Carlo methods. Monte Carlo methods have a natural way of setting the step size: for each state s they use a step size of 1/n(s), where n(s) is the number of times state s has been visited. We seek and come close to achieving comparable step-size algorithms for TD(lambda). One new algorithm uses a lambda=1/n(s) schedule to achieve the same effect as processing a state backwards with TD(0), but remains completely incremental. Another algorithm uses a lambda at each time equal to the estimated transition probability of the current transition. We present empirical results showing improvement in convergence rate over Monte Carlo methods and conventional TD(lambda). A limitation of our results at present is that they apply only to tasks whose state trajectories do not contain cycles.

unix> ftp envy.cs.umass.edu
Name: anonymous
Password: [your ID]
ftp> cd pub/singh
ftp> binary
ftp> get Yale94.ps.Z
ftp> bye
unix> uncompress Yale94.ps.Z
unix> [your command to print PostScript file] Yale94.ps

The paper is six pages long. The file is about 130K.

FTP-host: envy.cs.umass.edu
FTP-filename: /pub/singh/Yale94.ps.Z

From marwan at sedal.su.oz.au Tue Aug 2 23:12:08 1994
From: marwan at sedal.su.oz.au (Marwan Jabri)
Date: Wed, 3 Aug 1994 13:12:08 +1000
Subject: paper available: Practical Performance and Credit Assignment...
Message-ID: <199408030312.NAA28697@sedal.sedal.su.OZ.AU>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/jabri.ppap.ps.Z

The file jabri.ppap.ps.Z is now available for copying from the Neuroprose repository and has been submitted for publication:

Practical Performance and Credit Assignment Efficiency of Analog Multi-layer Perceptron Perturbation Based Training Algorithms

Marwan A. Jabri
Systems Engineering and Design Automation Laboratory
Sydney University Electrical Engineering
NSW 2006 Australia
marwan at sedal.su.oz.au

SEDAL Technical Report 1-7-94

Abstract

Many algorithms have been reported recently for the training of analog multi-layer perceptrons. Most of these algorithms have been evaluated from either a computational or a simulation viewpoint. This paper applies several of these algorithms to the training of an analog multi-layer perceptron chip. The advantages and shortcomings of these algorithms in terms of training and generalisation performance, and their capabilities in a limited-precision environment, are discussed.
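For readers unfamiliar with perturbation-based training, the following toy sketch shows the basic serial weight-perturbation update that such chip-in-the-loop methods build on. It is our illustration, not code from the report; the loss function, perturbation size, and step size are assumed.

import numpy as np

def weight_perturbation_step(weights, loss_fn, delta=1e-3, lr=0.1):
    # Serial weight perturbation: nudge one weight at a time, measure the
    # change in loss on the (possibly analog, limited-precision) hardware,
    # and update against the finite-difference gradient estimate.
    base_loss = loss_fn(weights)
    for i in range(len(weights)):
        trial = weights.copy()
        trial[i] += delta                                  # apply a small perturbation
        grad_est = (loss_fn(trial) - base_loss) / delta    # credit assignment for weight i
        weights[i] -= lr * grad_est                        # descend the estimated gradient
    return weights

Perturbing several weights in parallel cuts down the number of hardware measurements, but makes it harder to attribute the observed change in loss to individual weights.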
Extensive experiments demonstrate that a trade-off exists between the parallelisation of perturbations and the efficiency of credit assignment. Two semi-parallelisation heuristics are presented and are shown to provide advantages in terms of efficient exploration of the solution space and fewer credit assignment confusions.

Retrieve this paper by anonymous ftp from archive.cis.ohio-state.edu (128.146.8.52) in the /pub/neuroprose directory. The name of the paper in this archive is jabri.ppap.ps.Z [28 pages]. Sorry, but no hard copies are available.

From niranjan at eng.cam.ac.uk Wed Aug 3 15:07:20 1994
From: niranjan at eng.cam.ac.uk (niranjan@eng.cam.ac.uk)
Date: Wed, 3 Aug 94 15:07:20 BST
Subject: Post Doc job in Cambridge
Message-ID: <18000.9408031407@tulip.eng.cam.ac.uk>

---------------------- JOB JOB JOB JOB -------------------------

University of Cambridge, Department of Obstetrics & Gynaecology
Post-Doctoral Research Assistant
Neural Networks in the Quality Assurance in Maternity Care (QAMC)

Under the terms of a grant recently awarded to the QAMC project by the Commission of the European Communities (CEC), we expect that we will soon be able to offer a Post-Doctoral Research Position in Cambridge for the above investigation.

From g.gaskell at psychology.bbk.ac.uk Thu Aug 4 14:17:00 1994
From: g.gaskell at psychology.bbk.ac.uk (Gareth)
Date: Thu, 4 Aug 94 14:17 BST
Subject: Thesis: Speech Perception & Connectionism
Message-ID:

(Sorry - ignore the previous message, I sent the wrong file!)

FTP-host: archive.cis.ohio-state.edu (128.146.8.52)
FTP-filename: /pub/neuroprose/Thesis/gaskell.thesis.ps.Z

A new PhD thesis (150 pages) is now available in the neuroprose archive. The thesis examines the role of phonological variation in human speech perception using both experimental and connectionist techniques.

Abstract: The research reported in this thesis examines issues of word recognition in human speech perception. The main aim of the research is to assess the effect of regular variation in speech on lexical access. In particular, the effect of a type of neutralising phonological variation, assimilation of place of articulation, is examined. This variation occurs regressively across word boundaries in connected speech, altering the surface phonetic form of the underlying words. Two methods of investigation are used to explore this issue. Firstly, experiments using cross-modal priming and phoneme monitoring techniques are used to examine the effect of variation on the matching process between speech input and lexical form. Secondly, simulated experiments are performed using two computational models of speech recognition: TRACE (McClelland & Elman, 1986) and a simple recurrent network. The priming experiments show that the mismatching effects of a phonological change on the word-recognition process depend on their viability, as defined by phonological constraints. This implies that speech perception involves a process of context-dependent inference that recovers the abstract underlying representation of speech. Simulations of these and other experiments are then reported using a simple recurrent network model of speech perception. The model accommodates the results of the priming studies and predicts that similar phonological context effects will occur in non-words. Two phoneme monitoring studies support this prediction, but also show an interaction between lexical status and viability, implying that phonological inference relies on both lexical and phonological constraints.
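For readers who have not met simple recurrent networks, here is a minimal sketch of one step of an Elman-style SRN. It is purely illustrative; the layer sizes, weight names, and output nonlinearity are assumed, not taken from the thesis.

import numpy as np

def srn_step(x, context, W_xh, W_ch, W_hy):
    # One step of a simple recurrent network: the hidden layer sees the
    # current input plus a copy of its own previous state (the context
    # layer), so the mapping from surface speech to lexical form can
    # depend on the preceding phonological context.
    hidden = np.tanh(W_xh @ x + W_ch @ context)
    output = 1.0 / (1.0 + np.exp(-(W_hy @ hidden)))  # sigmoid output features
    return output, hidden                            # hidden becomes the next context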
A revision of the network model is proposed which learns the mapping from the surface form of speech to semantic and phonological representations.

To retrieve the file:

ftp archive.cis.ohio-state.edu
login: anonymous
password:
ftp> cd /pub/neuroprose/Thesis
ftp> binary
ftp> get gaskell.thesis.ps.Z
ftp> bye
uncompress gaskell.thesis.ps.Z
lpr gaskell.thesis.ps [or whatever you normally do to print]

Gareth Gaskell
Centre for Speech and Language, Birkbeck College, London
g.gaskell at psyc.bbk.ac.uk

From giles at research.nj.nec.com Thu Aug 4 18:00:26 1994
From: giles at research.nj.nec.com (Lee Giles)
Date: Thu, 4 Aug 94 18:00:26 EDT
Subject: Reprint: LEARNING A CLASS OF LARGE FINITE STATE MACHINES WITH A RECURRENT NEURAL NETWORK
Message-ID: <9408042200.AA18471@fuzzy>

The following reprint is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives:

Learning a Class of Large Finite State Machines with a Recurrent Neural Network
UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-94-94 AND CS-TR-3328

C. L. Giles[1,2], B. G. Horne[1], T. Lin[1,3]
[1] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
[2] UMIACS, University of Maryland, College Park, MD 20742
[3] EE Department, Princeton University, Princeton, NJ 08540
{giles,horne,lin}@research.nj.nec.com

One of the issues in any learning model is how it scales with problem size. Neural networks have not been immune to scaling issues. We show that a dynamically-driven discrete-time recurrent network (DRNN) can learn rather large grammatical inference problems when the strings of a finite memory machine (FMM) are encoded as temporal sequences. FMMs are a subclass of finite state machines which have a finite memory, or a finite order of inputs and outputs. The DRNN that learns the FMM is a neural network that maps directly from the sequential machine implementation of the FMM. It has feedback only from the output and not from any hidden units; an example is the recurrent network of Narendra and Parthasarathy. (FMMs that have zero order in the feedback of outputs are called definite memory machines and are analogous to Time-delay or Finite Impulse Response neural networks.) Due to their topology, these DRNNs are at least as powerful as any sequential machine implementation of an FMM and should be capable of representing any FMM. We choose to learn "particular" FMMs. Specifically, these FMMs have a large number of states (simulations are for 256 and 512 state FMMs) but have minimal order, relatively small depth, and little logic when the FMM is implemented as a sequential machine. Simulations of the number of training examples versus generalization performance and FMM extraction size show that the number of training samples necessary for perfect generalization is less than that necessary to completely characterize the FMM to be learned. This is in a sense a best-case learning problem, since any arbitrarily chosen FMM with a minimal number of states would have much more order and string depth, and would most likely require more logic in its sequential machine implementation.
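The output-feedback topology described above is easy to picture in code. Here is a minimal sketch, our illustration only, of one step of a net whose only recurrence is a tapped delay line of its own past outputs; the window lengths, dimensions, and nonlinearities are assumed.

import numpy as np

def output_feedback_step(x_window, y_window, V, W):
    # One step of a DRNN with feedback only from past outputs (no hidden-
    # state feedback), in the style of the Narendra-Parthasarathy network
    # mentioned in the abstract.  x_window / y_window hold the last few
    # inputs and outputs in tapped delay lines.
    z = np.concatenate([x_window.ravel(), y_window.ravel()])
    hidden = np.tanh(V @ z)        # purely feedforward hidden layer
    y = np.tanh(W @ hidden)        # current output, pushed onto y_window next step
    return y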
FTP INSTRUCTIONS

unix> ftp cs.umd.edu (128.8.128.8)
Name: anonymous
Password: (your_userid at your_site)
ftp> cd pub/papers/TRs
ftp> binary
ftp> get 3328.ps.Z
ftp> quit
unix> uncompress 3328.ps.Z

OR

unix> ftp external.nj.nec.com (138.15.10.100)
Name: anonymous
Password: (your_userid at your_site)
ftp> cd pub/giles/papers
ftp> binary
ftp> get large.fsm.ps.Z
ftp> quit
unix> uncompress large.fsm.ps.Z

--
C. Lee Giles / NEC Research Institute / 4 Independence Way
Princeton, NJ 08540 / 609-951-2642 / Fax 2482

From john at dcs.rhbnc.ac.uk Fri Aug 5 08:00:06 1994
From: john at dcs.rhbnc.ac.uk (john@dcs.rhbnc.ac.uk)
Date: Fri, 05 Aug 94 13:00:06 +0100
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <10057.9408051200@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT):

Many apologies that the previous technical report NC-TR-94-5 was not available when first announced. It is now installed. I have also installed an ascii file entitled `abstracts' which holds an up-to-date list of the Technical Reports with their abstracts. A new technical report is also now available:

NeuroCOLT Technical Report NC-TR-94-7:

On the power of real Turing machines over binary inputs
by Felipe Cucker, Universitat Pompeu Fabra, Balmes 132, Barcelona 08008, SPAIN
and Dima Grigoriev, Depts. of Comp. Science and Maths, Penn State University, University Park, PA 16802, USA

Abstract: In recent years the study of the complexity of computational problems involving real numbers has been an increasingly active research area. To deal with such problems, Blum, Shub and Smale (1989) proposed a computational model: the real Turing machine. The aim of this paper is to prove that BP(PAR) = PSPACE/poly, where PAR is the class of sets computed in parallel polynomial time by (ordinary) real Turing machines. As a consequence we obtain the existence of binary sets that do not belong to the Boolean part of PAR (an extension of the result of Koiran (1994), since PH is contained in PAR) and a separation of complexity classes in the real setting.

The Report NC-TR-94-7 can be accessed and printed as follows (follow the same procedure for getting the abstracts file):

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-94-7.ps.Z
ftp> bye
% zcat nc-tr-94-7.ps.Z | lpr -l

Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility.

Best wishes,
John

From shultz at hebb.psych.mcgill.ca Fri Aug 5 14:19:36 1994
From: shultz at hebb.psych.mcgill.ca (Tom Shultz)
Date: Fri, 5 Aug 94 14:19:36 EDT
Subject: Paper available: Modeling cognitive development on balance scale phenomena
Message-ID: <9408051819.AA06064@hebb.psych.mcgill.ca>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/shultz.balance-ml.ps.Z

The following paper has been placed in the Neuroprose archive at Ohio State University:

Modeling cognitive development on balance scale phenomena (33 pages)
Thomas R. Shultz, Denis Mareschal, & William C. Schmidt
Department of Psychology & McGill Cognitive Science Centre
McGill University
Montreal, Quebec, Canada H3A 1B1
shultz at psych.mcgill.ca

Abstract

We used cascade-correlation to model human cognitive development on a well-studied psychological task, the balance scale. In balance scale experiments, the child is asked to predict the outcome of placing certain numbers of equal weights at various distances to the left or right of a fulcrum. Both stage progressions and information salience effects have been found with children on this task. Cascade-correlation is a generative connectionist algorithm that constructs its own network topology as it learns. Cascade-correlation networks provided better fits to these human data than did previous models, whether rule-based or connectionist. The network model was used to generate a variety of novel predictions for psychological research.

Keywords: cognitive development, balance scale, connectionist learning, cascade-correlation

The paper is published in Machine Learning, 1994, 16, 59-88. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca. Please do not reply directly to this message.

FTP INSTRUCTIONS:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get shultz.balance-ml.ps.Z
ftp> quit
unix> uncompress shultz.balance-ml.ps.Z

Thanks to Jordan Pollack for maintaining this archive.

Tom Shultz

From cohn at psyche.mit.edu Fri Aug 5 17:00:03 1994
From: cohn at psyche.mit.edu (David Cohn)
Date: Fri, 5 Aug 94 17:00:03 EDT
Subject: Optimal Experiment Design paper available
Message-ID: <9408052100.AA24701@psyche.mit.edu>

Neural Network Exploration Using Optimal Experiment Design
AI Lab Memo #1491/CBCL Paper #99

David A. Cohn
Dept. of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139

I consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov and MacKay, I apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. I demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. I conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated.

The above paper is a greatly expanded version of one that appeared at last year's NIPS, and is available by anonymous ftp to publications.ai.mit.edu in the file ai-publications/1994/AIM-1491.ps.Z

It is also available from my home page at: http://www.ai.mit.edu/people/cohn/cohn.html

I welcome all comments, questions, and (gentle) criticisms.

-David Cohn (cohn at psyche.mit.edu)
Dept. of Brain & Cognitive Science, MIT, E10-243, Cambridge, MA 02139
phone: (617) 253-8409
http://www.ai.mit.edu/people/cohn/cohn.html

From harnad at Princeton.EDU Fri Aug 5 20:25:42 1994
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Fri, 5 Aug 94 20:25:42 EDT
Subject: Motor Control (Feldman): BBS Call for Commentators
Message-ID: <9408060025.AA29660@clarity.Princeton.EDU>

Below is the abstract of a forthcoming target article by:
A.G. Feldman & M.F. Levin on:

POSITIONAL FRAMES OF REFERENCE IN MOTOR CONTROL

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet, or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771].

To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract.

POSITIONAL FRAMES OF REFERENCE IN MOTOR CONTROL: ORIGIN AND USE

Anatol G. Feldman (1,2,4) & Mindy F. Levin (2,3,4)
(1) Institute of Biomedical Engineering, University of Montreal
(2) Research Centre, Rehabilitation Institute of Montreal, H3S 2J4
(3) School of Rehabilitation, University of Montreal
(4) Centre for Research in Neurological Sciences, University of Montreal
EMAIL: Feldman at ere.umontreal.ca

KEYWORDS: motor control, frames of reference, motoneurons, control variables, proprioception, kinaesthesis, equilibrium points, multi-muscle systems, pointing, synergy, redundancy problem.

ABSTRACT: A hypothesis about sensorimotor integration (the lambda model) is described and applied to movement control and kinesthesia. The nervous system organizes positional frames of reference for the sensorimotor apparatus and produces active movements by shifting frames in terms of spatial coordinates. Kinematic and electromyographic patterns are not programmed but emerge from the dynamic interaction of the system's components, including external forces, within the designated frame of reference. Motoneuronal threshold properties and proprioceptive inputs to motoneurons may be important components in the physiological mechanism which produces positional frames of reference. The hypothesis that intentional movements are produced by shifting the frame of reference is extended to multi-muscle and multi-degree-of-freedom systems by providing a solution for the redundancy problem that allows the control of a joint alone or in combination with other joints to produce any desired limb configuration and movement trajectory. For each motor behavior, the nervous system uses a strategy which minimizes the number of changeable control variables and keeps the parameters of these changes invariant. This is illustrated by examples of simulated kinematic and electromyographic signals from single- and multi-joint arm movements produced by patterns of control variables. Empirical support is provided and additional tests are suggested. The model is contrasted with others based on the ideas of programming of motoneuronal activity, muscle forces, stiffness, or movement kinematics.
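For readers new to the lambda model, the core idea of threshold control can be caricatured in a few lines of code. This is only our toy illustration, not the authors' model: the gain, the muscle geometry, and the functional forms are all assumed, and the real hypothesis involves dynamics, reflex loops, and multi-muscle coordination. The point is just that activation depends on how far muscle length exceeds a centrally set threshold lambda, so shifting lambda shifts the limb's equilibrium rather than commanding forces or kinematics directly.

def activation(length, lam, gain=5.0):
    # Threshold control: a muscle is silent until its length exceeds the
    # centrally specified threshold lambda, then activation grows with
    # the stretch beyond threshold.  (Assumed toy functional form.)
    return max(0.0, gain * (length - lam))

def joint_torque(angle, lam_flexor, lam_extensor):
    # Two opposing muscles whose lengths vary oppositely with joint angle
    # (assumed linear geometry).  The joint settles where the torques
    # balance, so movement is produced by shifting lam_flexor/lam_extensor,
    # i.e. by shifting the positional frame of reference.
    flexor_len = 1.0 - 0.5 * angle
    extensor_len = 1.0 + 0.5 * angle
    return activation(flexor_len, lam_flexor) - activation(extensor_len, lam_extensor)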
To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.feldman). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.

The file is also retrievable using archie, gopher, and World-Wide Web URLs (Universal Resource Locators):

ftp://princeton.edu/pub/harnad/BBS/
gopher://gopher.princeton.edu/1ftp%3aprinceton.edu%40/pub/harnad/BBS/
http://192.190.21.10/wic/psych.02.html

To retrieve a file by ftp from an Internet site, type either:
ftp princeton.edu
or
ftp 128.112.128.1

When you are asked for your login, type: anonymous
Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@")
cd /pub/harnad/BBS
To show the available files, type: ls
Next, retrieve the file you want with (for example): get bbs.feldman
When you have the file(s) you want, type: quit

Where the above procedure is not available there are two fileservers, ftpmail at decwrl.dec.com and bitftp at pucc.bitnet, that will do the transfer for you. To one or the other of them, send the following one-line message:

help

for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). JANET users without ftp can instead utilise the file transfer facilities at sites uk.ac.ft-relay or uk.ac.nsf.sun. Full details are available on request.

From bishopc at helios.aston.ac.uk Mon Aug 8 10:26:53 1994
From: bishopc at helios.aston.ac.uk (bishopc)
Date: Mon, 8 Aug 1994 14:26:53 +0000
Subject: Lectureships in Neural Computing
Message-ID: <29457.9408081326@sun.aston.ac.uk>

Aston University
Neural Computing Research Group
Department of Computer Science and Applied Mathematics

LECTURESHIPS IN NEURAL COMPUTING

Applications are invited for two lectureships commencing in the next academic year. Candidates are expected to have excellent academic qualifications and a proven record of research. The appointments will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment.

Successful candidates will be expected to make a substantial contribution to the research activities of the Department in the area of neural computing. They will also be expected to contribute to the undergraduate and postgraduate teaching programmes in computer science.

The Neural Computing Research Group currently comprises three professors, two lecturers, three postdoctoral research fellows and ten postgraduate research students. Current research activity focusses on principled approaches to neural computing, and spans a broad spectrum from theoretical foundations to industrial and commercial applications. These new appointments will further strengthen the research activity of this group.

Salaries will be within the lecturer A and B range 14,756 to 25,735, and exceptionally up to 28,756 (UK pounds).
If you wish to be considered for one of these positions, please send a CV and publications list, together with the names of 3 referees, to:

Professor Chris Bishop
Neural Computing Research Group
Aston University
Birmingham B4 7ET, U.K.
Tel: 021 359 3611 ext. 4270
Fax: 021 333 6215
e-mail: c.m.bishop at aston.ac.uk

From bishopc at helios.aston.ac.uk Mon Aug 8 12:36:57 1994
From: bishopc at helios.aston.ac.uk (bishopc)
Date: Mon, 8 Aug 1994 16:36:57 +0000
Subject: Neural Computing Applications Forum
Message-ID: <992.9408081536@sun.aston.ac.uk>

NCAF Two-Day Conference:
PRACTICAL APPLICATIONS AND TECHNIQUES OF NEURAL NETWORKS
14 and 15 September 1994
Aston University, Birmingham, UK

14 September 1994

Tutorial: Introduction to Neural Networks (How to get started in Neural Computing) - Richard Palmer, ERA Technology
INVITED TALK: From Regularisation to Cheque Verification - Francoise Fogelman, SLIGOS, France
Workshop: Error Bars and Confidence Limits (Practical Techniques for Assigning Error Bars to Network Predictions)
Odyssey: Stocks, Shakespeare and Sunshine (or: Useful Tricks with Radial Basis Functions) - David Lowe, Aston University
Social Event: Canal Boat Trip with Wine and Buffet Dinner

15 September 1994

Obstacles and Challenges in the Industrial Application of Neural Networks - Inderjit Sandhu, Barclays Bank PLC
Applying Neural Networks to the Analysis of Small Medical Data Sets - Paul Beatty, University Hospital of South Manchester
The Neural Nose - Gareth Jones, Neotronics Ltd
Hand-printed Character Recognition Using Deformable Models - Chris Williams, Aston University
Determination of Ocean Surface Wind Velocities from Satellite Radar Data - Iain Strachan, AEA Technology
Use of Neural Networks to Derive Helicopter Component Load Spectra - Alf Vella, Cranfield University
Working Together: NCAF and the DTI Neural Computing Programme - Bob Wiggins, DTI
The EPSRC Neural Computing Programme - Catherine Barnes and Peter Bates, EPSRC
A Study of Various Neural Network Techniques for Automotive Systems - M Arain, Lucas Advanced Engineering
Predicting Driver Alertness from Steering Behaviour - Kevin Swingler, Stirling University (with Ford UK)

NEURAL COMPUTING APPLICATIONS FORUM

The Neural Computing Applications Forum (NCAF) was formed in 1990 and has since come to provide the principal mechanism for exchange of ideas and information between academics and industrialists in the UK on all aspects of neural networks and their practical applications. NCAF organises four 2-day conferences each year, which are attended by around 100 participants. It has its own international journal `Neural Computing and Applications', which is published quarterly by Springer-Verlag, and it produces a quarterly newsletter `Networks'.

Annual membership rates (pounds sterling):
Company: 250
Individual: 140
Associate: 90
Student: 55

Membership includes free registration at all four annual conferences, a subscription to the journal `Neural Computing and Applications', and a subscription to `Networks'. Associate membership is intended for those who are unable to attend meetings (eg those overseas) but who wish to receive the journal and the newsletter, and does not include registration at the conferences.
For further information:
Tel: +44 (0)784 477271
Fax: +44 (0)784 472879
email: c.m.bishop at aston.ac.uk

Chris Bishop (Chairman, NCAF)

Professor Chris M Bishop
Neural Computing Research Group
Dept. of Computer Science, Aston University, Birmingham B4 7ET, UK
Tel. +44 (0)21 359 3611 x4270 / Fax. +44 (0)21 333 6215
c.m.bishop at aston.ac.uk

From harnad at Princeton.EDU Mon Aug 8 22:32:16 1994
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Mon, 8 Aug 94 22:32:16 EDT
Subject: Subsymbolic Language Processing: PSYC Multiple Book Review
Message-ID: <9408090232.AA12841@clarity.Princeton.EDU>

CALL FOR BOOK REVIEWERS

Below is the Precis of SUBSYMBOLIC NATURAL LANGUAGE PROCESSING by Risto Miikkulainen. This book has been selected for multiple review in PSYCOLOQUY. If you wish to submit a formal book review (see Instructions following Precis) please write to psyc at pucc.bitnet indicating what expertise you would bring to bear on reviewing the book if you were selected to review it. (If you have never reviewed for PSYCOLOQUY or Behavioral & Brain Sciences before, it would be helpful if you could also append a copy of your CV to your message.) If you are selected as one of the reviewers, you will be sent a copy of the book directly by the publisher (please let us know if you have a copy already). Reviews may also be submitted without invitation, but all reviews will be refereed. The author will reply to all accepted reviews.

psycoloquy.94.5.46.language-network.1.miikkulainen Monday 8 Aug 1994
ISSN 1055-0143 (34 paragraphs, 1 fig, 1 note, 16 references, 609 lines)
PSYCOLOQUY is sponsored by the American Psychological Association (APA)
Copyright 1994 Risto Miikkulainen

Precis of: SUBSYMBOLIC NATURAL LANGUAGE PROCESSING: AN INTEGRATED MODEL OF SCRIPTS, LEXICON, AND MEMORY
Cambridge, MA: MIT Press, 1993
15 chapters, 403 pages

Risto Miikkulainen
Department of Computer Sciences
The University of Texas at Austin
Austin, TX 78712
risto at cs.utexas.edu

ABSTRACT: Distributed neural networks have been very successful in modeling isolated cognitive phenomena, but complex high-level behavior has been amenable only to symbolic artificial intelligence techniques. Aiming to bridge this gap, this book describes DISCERN, a complete natural language processing system implemented entirely at the subsymbolic level. In DISCERN, distributed neural network models of parsing, generating, reasoning, lexical processing and episodic memory are integrated into a single system that learns to read, paraphrase, and answer questions about stereotypical narratives. Using DISCERN as an example, a general approach to building high-level cognitive models from distributed neural networks is introduced, and the special properties of such networks are shown to provide insight into human performance. In this approach, connectionist networks are not only plausible models of isolated cognitive phenomena, but also sufficient constituents for generating complex, high-level behavior.

KEYWORDS: computational modeling, connectionism, distributed neural networks, episodic memory, lexicon, natural language processing, scripts.

I. MOTIVATION
1. Recently there has been a great deal of excitement in cognitive science about the subsymbolic (i.e., parallel distributed processing, or distributed connectionist, or distributed neural network) approach to natural language processing. Subsymbolic systems seem to capture a number of intriguing properties of human-like information processing such as learning from examples, context sensitivity, generalization, robustness of behavior, and intuitive reasoning. These properties have been very difficult to model with traditional, symbolic techniques.

2. Within this new paradigm, the central issues are quite different from (even incompatible with) the traditional issues in symbolic cognitive science, and the research has proceeded without much in common with the past. However, the ultimate goal is still the same: to understand how human cognition is put together. Even if cognitive science is being built on a new foundation, as can be argued, many of the results obtained through symbolic research are still valid, and could be used as a guide for developing subsymbolic models of cognitive processes.

3. This is where DISCERN, the computer-simulated neural network model described in this book (Miikkulainen 1993), fits in. DISCERN is a purely subsymbolic model, but at the high level it consists of modules and information structures similar to those of symbolic systems, such as scripts, lexicon, and episodic memory. At the highest level of cognitive modeling, the symbolic and subsymbolic paradigms have to address the same basic issues. Outlining a parallel distributed approach to those issues is the purpose of DISCERN.

4. In more specific terms, DISCERN aims: (1) to demonstrate that distributed artificial neural networks can be used to build a large-scale natural language processing system that performs approximately at the level of symbolic models; (2) to show that several cognitive phenomena can be explained at the subsymbolic level using the special properties of these networks; and (3) to identify central issues in subsymbolic cognitive modeling and to develop well-motivated techniques to deal with them. To the extent that DISCERN is successful in these areas, it constitutes a first step towards subsymbolic natural language processing.

II. THE SCRIPT PROCESSING TASK

5. Scripts (Schank and Abelson, 1977) are schemas of often-encountered, stereotypic event sequences, such as visiting a restaurant, traveling by airplane, and shopping at a supermarket. Each script divides further into tracks, or established minor variations. A script can be represented as a causal chain of events with a number of open roles. Script-based understanding means reading a script-based story, identifying the proper script and track, and filling its roles with the constituents of the story. Events and role fillers that were not mentioned in the story but are part of the script can then be inferred. Understanding is demonstrated by generating an expanded paraphrase of the original story, and by answering questions about the story.

6. To see what is involved in the task, let us consider an example of DISCERN input/output behavior. The following input stories are examples of the fancy-restaurant, plane-travel, and electronics-shopping tracks:

  John went to MaMaison. John asked the waiter for lobster. John left the waiter a big tip.

  John went to LAX. John checked in for a flight to JFK. The plane landed at JFK.

  John went to Radio-Shack. John asked the staff questions about CD-players. John chose the best CD-player.
7. DISCERN reads the orthographic word symbols sequentially, one at a time. An internal representation of each story is formed, where all inferences are made explicit. These representations are stored in the episodic memory. The system then answers questions about the stories:

  What did John buy at Radio-Shack? John bought a CD-player at Radio-Shack.

  Where did John fly to? John flew to JFK.

  What did John eat at MaMaison? John ate a good lobster.

With the question as a cue, the appropriate story representation is retrieved from the episodic memory and the answer is generated word by word. DISCERN also generates full paraphrases of the input stories. For example, it generates an expanded version of the restaurant story:

  John went to MaMaison. The waiter seated John. John asked the waiter for lobster. John ate a good lobster. John paid the waiter. John left a big tip. John left MaMaison.

8. The answers and the paraphrase show that DISCERN has made a number of inferences beyond the original story. For example, it inferred that John ate the lobster and that the lobster tasted good. The inferences are not based on specific rules but are statistical and learned from experience. DISCERN has read a number of similar stories in the past and the unmentioned events and role bindings have occurred in most cases. They are assumed immediately and automatically upon reading the story and have become part of the memory of the story. In a similar fashion, human readers often confuse what was mentioned in the story with what was only inferred (Bower et al., 1979; Graesser et al., 1979).

9. A number of issues can be identified from the above examples. Specifically, DISCERN has to (1) make statistical, script-based inferences and account for learning them from experience; (2) store items in the episodic memory in a single presentation and retrieve them with a partial cue; (3) develop a meaningful organization for the episodic memory, based on the stories it reads; (4) represent meanings of words, sentences, and stories internally; and (5) organize a lexicon of symbol and concept representations based on examples of how words are used in the language and form a many-to-many mapping between them. Script processing constitutes a good framework for studying these issues, and a good domain for developing an approach towards the goals outlined above.

III. APPROACH

10. Parallel distributed processing models typically have very little internal structure. They produce the statistically most likely answer given the input conditions in a process that is opaque to the external observer. This is well suited to the modeling of isolated low-level tasks, such as learning past tense forms of verbs (Rumelhart and McClelland, 1986) or word pronunciation (Sejnowski and Rosenberg, 1987). Given the success of such models, a possible approach to higher-level cognitive modeling would be to construct the system from several submodules that work together to produce the higher-level behavior.

11. In DISCERN, the immediate goal is to build a complete, integrated system that performs well in the script processing task. In this sense, DISCERN is very similar to traditional models in artificial intelligence. However, DISCERN also aims to show how certain parts of human cognition could actually be built. The components of DISCERN were designed as independent cognitive models that can account for interesting language processing and memory phenomena, many of which are not even required in the DISCERN task.
Combining these models into a single, working system is one way of validating them. In DISCERN, the components are not just models of isolated cognitive phenomena; they are sufficient constituents for generating complex high-level behavior.

IV. THE DISCERN MODEL

12. DISCERN can be divided into parsing, generating, question answering, and memory subsystems, each with two modules (figure 1). Each module is trained in its task separately and in parallel. During performance, the modules form a network of networks, each feeding its output to the input of another module.

[Figure 1: The DISCERN model. A block diagram of the eight modules and the flow of representations between them, as described in paragraph 13: input text enters the Sentence Parser, which feeds the Story Parser and the Cue Former; the Story Parser feeds the Episodic Memory, which supplies the Story Generator and the Answer Producer; the Story Generator and Answer Producer feed the Sentence Generator, which produces the output text; the Lexicon sits between the Sentence Parser and the Sentence Generator, translating between lexical and semantic word representations.]

13. The sentence parser reads the input words one at a time and forms a representation of each sentence. The story parser combines the sequence of sentences into an internal representation of the story, which is then stored in the episodic memory. The story generator receives the internal representation and generates the sentences of the paraphrase one at a time. The sentence generator outputs the sequence of words for each sentence. The cue former receives a question representation, built by the sentence parser, and forms a cue pattern for the episodic memory, which returns the appropriate story representation. The answer producer receives the question and the story and generates an answer representation, which is output word by word by the sentence generator. The architecture and behavior of each of these modules in isolation is outlined below.

V. LEXICON

14. The input and output of DISCERN consist of distributed representations for orthographic word symbols (also called lexical words). Internally, DISCERN processes semantic concept representations (semantic words). Both the lexical and semantic words are represented distributively as vectors of gray-scale values between 0.0 and 1.0. The lexical representations are based on the visual patterns of characters that make up the written word; they remain fixed throughout the training and performance of DISCERN. The semantic representations stand for distinct meanings and are developed automatically by the system while it is learning the processing task.

15. The lexicon stores the lexical and semantic representations and translates between them. It is implemented as two feature maps (Kohonen, 1989), one lexical and the other semantic. Words whose lexical forms are similar, such as "LINE" and "LIKE", are represented by nearby units in the lexical map. In the semantic map, words with similar semantic content, such as "John" and "Mary", or "Leone's" and "MaMaison", are mapped near each other. There is a dense set of associative interconnections between the two maps. A localized activity pattern representing a word in one map will cause a localized activity pattern to form in the other map, representing the same word. The output representation is then obtained from the weight vector of the most highly active unit.
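As a concrete, purely illustrative rendering of that map-to-map translation, the following sketch assumes each feature map is stored as a matrix of unit weight vectors and that the associative connections are given as a matrix from source units to target units (none of these names come from the book):

import numpy as np

def translate(word_vec, src_units, assoc, tgt_units):
    # Toy version of the lexicon's translation step: find the winning
    # unit in the source feature map (closest weight vector), propagate
    # its activity over the associative connections to the target map,
    # and read the output representation off the winning target unit.
    src_win = np.argmin(((src_units - word_vec) ** 2).sum(axis=1))
    tgt_activity = assoc[src_win]         # activity induced in the target map
    tgt_win = np.argmax(tgt_activity)     # most highly active target unit
    return tgt_units[tgt_win]             # its weight vector is the output

Run one way this maps a lexical vector to a semantic one; run the other way, with the maps and connection matrix swapped, it maps meanings back to written forms.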
The lexicon thus transforms a lexical input vector into a semantic output vector, and vice versa. Both maps and the associative connections between them are organized simultaneously, based on examples of co-occurring symbols and meanings.

16. The lexicon architecture facilitates interesting behavior. Localized damage to the semantic map results in category-specific lexical deficits similar to human aphasia (Caramazza, 1988; McCarthy and Warrington, 1990). For example, the system selectively loses access to restaurant names, or animate words, when that part of the map is damaged. Dyslexic performance errors can also be modeled. If the performance is degraded, for example by adding noise to the connections, the parsing and generation errors that occur are quite similar to those observed in human deep dyslexia (Coltheart et al., 1988). For example, the system may confuse "Leone's" with "MaMaison", or "LINE" with "LIKE", because they are nearby in the map and share similar associative connections.

VI. FGREP PROCESSING MODULES

17. Processing in DISCERN is carried out by hierarchically organized pattern-transformation networks. Each module performs a specific subtask, such as parsing a sentence or generating an answer to a question. All these networks have the same basic architecture: they are three-layer, simple-recurrent backpropagation networks (Elman, 1990), with the extension called FGREP that allows them to develop distributed representations for their input/output words.

18. The network learns the processing task by adapting the connection weights according to the standard on-line backpropagation procedure (Rumelhart et al., 1986, pp. 327-329). The error signal is propagated to the input layer, and the current input representations are modified as if they were an extra layer of weights. The modified representation vectors are put back in the lexicon, replacing the old representations. The next time the same words occur in the input or output, their new representations are used to form the input/output patterns for the network. In FGREP, therefore, the required mappings change as the representations evolve, and backpropagation is shooting at a moving target.

19. The representations that result from this process have a number of useful properties for cognitive modeling. (1) Since they adapt to the error signal, they end up coding information most crucial to the task. Representations for words that are used in similar ways in the examples become similar. Thus, these profiles of continuous activity values can be claimed to code the meanings of the words as well. (2) As a result, the system never has to process very novel input patterns, because generalization has already been done in the representations. (3) The representation of a word is determined by all the contexts in which that word has been encountered; consequently, it is also a representation of all those contexts. Expectations emerge automatically and cumulatively from the input word representations. (4) Single representation components do not usually stand for identifiable semantic features. Instead, the representation is holographic: word categories can often be recovered from the values of single components. (5) Holography makes the system very robust against noise and damage. Performance degrades approximately linearly as representation components become defective or inaccurate.

VII. EPISODIC MEMORY
VII. EPISODIC MEMORY

20. The episodic memory in DISCERN consists of a hierarchical pyramid of feature maps organized according to the taxonomy of script-based stories. The highest level of the hierarchy is a single feature map that lays out the different script classes. Beneath each unit of this map there is another feature map that lays out the tracks within the particular script. The different role bindings within each track are separated at the bottom level. The map hierarchy receives a story representation vector as its input and classifies it as an instance of a particular script, track, and role binding. The hierarchy thereby provides a unique memory representation for each script-based story as the maximally responding units in the feature maps at the three levels.

21. Whereas the top and middle levels of the hierarchy serve only as classifiers, selecting the appropriate track and role-binding map for each input, at the bottom level a permanent trace of the story must also be created. The role-binding maps are trace feature maps, with modifiable lateral connections. When the story representation vector is presented to a role-binding map, a localized activity pattern forms as a response. Each lateral connection to a unit with higher activity is made excitatory, while a connection to a unit with lower activity is made inhibitory. The units within the response now "point" towards the unit with the highest activity, permanently encoding that the story was mapped at that location.

22. A story is retrieved from the episodic memory by giving it a partial story representation as a cue. Unless the cue is highly deficient, the map hierarchy is able to recognize it as an instance of the correct script and track and to form a partial cue for the role-binding map. The trace feature map mechanism then completes the role binding. The initial response of the map is again a localized activity pattern; because the map is topological, it is likely to be located somewhere near the stored trace. If the cue is close enough, the lateral connections pull the activity to the center of the stored trace. The complete story representation is retrieved from the weight vectors of the maximally responding units at the script, track, and role-binding levels.
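Storage and retrieval in a trace feature map can be sketched briefly. The one-dimensional map, the Gaussian response, and the update constants below are illustrative assumptions (DISCERN's maps are two-dimensional, and this sketch stores only a single trace):

import numpy as np

rng = np.random.default_rng(1)
UNITS, DIM = 10, 6
weights = rng.uniform(0, 1, (UNITS, DIM))   # ordinary feature-map weights
lateral = np.zeros((UNITS, UNITS))          # modifiable lateral connections

def response(vec):
    # Localized activity: higher for units whose weights match the input.
    return np.exp(-np.sum((weights - vec) ** 2, axis=1))

def store(story_vec):
    act = response(story_vec)
    # Connections into more active units become excitatory, the rest
    # inhibitory, so activity within the response "points" to the winner.
    lateral[:] = np.where(act[:, None] > act[None, :], 0.5, -0.5)
    np.fill_diagonal(lateral, 0.0)
    weights[int(np.argmax(act))] = story_vec   # the permanent trace

def retrieve(cue_vec, steps=10):
    act = response(cue_vec)
    for _ in range(steps):                     # settle under the laterals
        act = np.clip(act + 0.2 * (lateral @ act), 0.0, 1.0)
    return weights[int(np.argmax(act))]        # completed representation

With several traces stored, later traces overwrite some of the lateral "pointers" of earlier ones; this is the capture effect described in paragraph 24 below.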
23. Hierarchical feature maps have a number of properties that make them useful for memory organization: (1) The organization is formed in an unsupervised manner, extracted from the input experience of the system. (2) The resulting order reflects the properties of the data, with the hierarchy corresponding to the levels of variation and the maps laying out the similarities at each level. (3) By dividing the data first into major categories and gradually making finer distinctions lower in the hierarchy, the most salient components of the input data are singled out, and more resources are allocated for representing them accurately. (4) Because the representation is based on salient differences in the data, the classification is very robust, and usually correct even if the input is noisy or incomplete. (5) Because the memory is based on classifying the similarities and storing the differences, retrieval becomes a reconstructive process (Kolodner, 1984; Williams and Hollan, 1981) similar to human memory.

24. The trace feature map exhibits interesting memory effects that result from interactions between traces. Later traces capture units from earlier ones, making later traces more likely to be retrieved. The extent of the traces determines memory capacity. The smaller the traces, the more of them will fit in the map, but the more accurate the cues required to retrieve them. If the memory capacity is exceeded, older traces are selectively replaced by newer ones. Traces that are unique, that is, located in a sparse area of the map, are not affected, no matter how old they are. Similar effects are common in human long-term memory (Baddeley, 1976; Postman, 1971).

VIII. DISCERN HIGH-LEVEL BEHAVIOR

25. DISCERN is more than just a collection of individual cognitive models. Interesting behavior results from the interaction of the components in a complete story-processing system.

26. DISCERN was trained and tested with an artificially generated corpus of script-based stories consisting of three scripts, each with three tracks and three open roles. The complete DISCERN system performs very well: at the output, about 98 percent of the words are correct. This is rather remarkable for a chain of networks that is nine modules long and consists of several different types of modules.

27. A modular neural network system can only operate if it is stable, that is, if small deviations from the normal flow of information are automatically corrected. It turns out that DISCERN has several built-in safeguards against minor inaccuracies and noise. The semantic representations are distributed and redundant, and inaccuracies in the output of one module are cleaned up by the module that uses the output. The memory modules clean up by categorical processing: a noisy input is recognized as a representative of an established class and replaced by the correct representation of that class. As a result, small deviations do not throw the system off course; rather, the system filters out the errors and returns to the normal course of processing, which is an essential requirement for building robust cognitive models.

28. DISCERN also demonstrates strong script-based inferencing. Even when the input story is incomplete, consisting of only a few main events, DISCERN can usually form an accurate internal representation of it. DISCERN was trained to form complete story representations from the first sentence on, and because the stories are stereotypical, missing sentences have little effect on the parsing process. Once the story representation has been formed, DISCERN performs as if the script had been fully instantiated. Questions about missing events and role bindings are answered as if they were part of the original story. If events occur in an unusual order, they are recalled in the stereotypical order in the paraphrase. If there is not enough information to fill a role, the most likely filler is selected and maintained throughout the paraphrase generation. Such behavior results automatically from the modular architecture of DISCERN and is consistent with experimental observations on how people remember stories of familiar event sequences (Bower et al., 1979; Graesser et al., 1979).

29. In general, given the information in the question, DISCERN recalls the story in memory that best matches it. An interesting issue is what happens when DISCERN is asked a question that is inaccurate or ambiguous, that is, one that does not uniquely specify a story. For example, DISCERN might have read a story about John eating lobster at MaMaison, and then one about Mary doing the same at Leone's, and the question could be "Who ate lobster?" Because later traces are more prominent in the memory, DISCERN is more likely to retrieve the Mary-at-Leone's story in this case.
The earlier story is still in the memory, but to recall it, more details need to be specified in the question, such as "Who ate lobster at MaMaison?" Similarly, DISCERN can robustly retrieve a story even if the question is slightly inaccurate. When asked "How did John like the steak at MaMaison?", DISCERN generates the answer "John thought lobster was good at MaMaison", ignoring the inaccuracy in the question, because the cue is still close enough to the stored trace. DISCERN does recognize, though, when a question is too different from anything in the memory and should not be answered. For "Who ate at McDonald's?", the cue vector is not close to any trace, the memory does not settle, and nothing is retrieved. Note that these mechanisms were not explicitly built into DISCERN; they emerge automatically from the physical layout of the architecture and representations.

IX. DISCUSSION

30. There is an important distinction between scripts (or, more generally, schemas) in symbolic systems and scripts in subsymbolic models such as DISCERN. In the symbolic approach, a script is stored in memory as a separate, exact knowledge structure, coded by the knowledge engineer. The script has to be instantiated by searching the schema memory sequentially for a structure that matches the input. After instantiation, the script is active in the memory, and later inputs are interpreted primarily in terms of this script. Deviations are easy to recognize and can be taken care of with special mechanisms.

31. In the subsymbolic approach, schemas are based on statistical properties of the training examples, extracted automatically during training. The resulting knowledge structures do not have explicit representations. For example, a script exists in a neural network only as statistical correlations coded in the weights. Every input is automatically matched to every correlation in parallel. There is no all-or-none instantiation of a particular knowledge structure. The strongest, most probable correlations dominate, depending on how well they match the input, but all of them are simultaneously active at all times. Regularities that make up scripts can be particularly well captured by such correlations, making script-based inference a good domain for the subsymbolic approach. Generalization and graceful degradation give rise to inferencing that is intuitive, immediate, and occurs without conscious control, as script-based inference in humans does. On the other hand, it is very difficult to recognize deviations from the script and to initiate exception processing when the automatic mechanisms fail. Such sequential reasoning would require the intervention of a high-level "conscious" monitor, which has yet to be built in the connectionist framework.

X. CONCLUSION

32. The main conclusion from DISCERN is that building subsymbolic models is a feasible approach to understanding the mechanisms underlying natural language processing. DISCERN shows how several cognitive phenomena may result from subsymbolic mechanisms. Learning word meanings, script processing, and episodic memory organization are based on self-organization and gradient descent in error in this model. Script-based inferences, expectations, and defaults result automatically from generalization and graceful degradation. Several types of performance errors in role binding, episodic memory, and lexical access emerge from the physical organization of the system.
Perhaps most significantly, DISCERN shows how individual connectionist models can be combined into a large, integrated system, demonstrating that these models are sufficient constituents for generating sequential, symbolic, high-level behavior.

33. Although processing simple script instantiations is a start, there is a long way to go before subsymbolic models rival the best symbolic cognitive models. For example, in story understanding, symbolic systems have been developed that analyze realistic stories in depth, based on higher-level knowledge structures such as goals, plans, themes, affects, beliefs, argument structures, plots, and morals. In designing subsymbolic models that would do the same, we face two major challenges: (1) how to implement connectionist control of high-level processing strategies (making it possible to model processes more sophisticated than a series of reflex responses), and (2) how to represent and learn abstractions (making it possible to process information at a higher level than correlations in the raw input data). Progress in these areas would constitute a major step towards extending the capabilities of subsymbolic natural language processing models beyond those of DISCERN.

XI. NOTE

34. Software for the DISCERN system is available through anonymous ftp from cs.utexas.edu:pub/neural-nets/discern. An X11 graphics demo, showing DISCERN processing the example stories discussed in the book, can be run remotely over the World Wide Web at http://www.cs.utexas.edu/~risto/discern.html, or by telnet with "telnet cascais.utexas.edu 30000".

XII. TABLE OF CONTENTS

PART I Overview
1 Introduction
2 Background
3 Overview of DISCERN

PART II Processing Mechanisms
4 Backpropagation Networks
5 Developing Representations in FGREP Modules
6 Building from FGREP Modules

PART III Memory Mechanisms
7 Self-Organizing Feature Maps
8 Episodic Memory Organization: Hierarchical Feature Maps
9 Episodic Memory Storage and Retrieval: Trace Feature Maps
10 Lexicon

PART IV Evaluation
11 Behavior of the Complete Model
12 Discussion
13 Comparison to Related Work
14 Extensions and Future Work
15 Conclusions

APPENDICES
A Story Data
B Implementation Details
C Instructions for Obtaining the DISCERN Software

XIII. REFERENCES

Baddeley, A.D. (1976) The Psychology of Memory. New York: Basic Books.
Bower, G.H., Black, J.B. and Turner, T.J. (1979) Scripts in memory for text. Cognitive Psychology, 11:177-220.
Caramazza, A. (1988) Some aspects of language processing revealed through the analysis of acquired aphasia: The lexical system. Annual Review of Neuroscience, 11:395-421.
Coltheart, M., Patterson, K. and Marshall, J.C., editors (1988) Deep Dyslexia. Second edition. London; Boston: Routledge and Kegan Paul.
Elman, J.L. (1990) Finding structure in time. Cognitive Science, 14:179-211.
Graesser, A.C., Gordon, S.E. and Sawyer, J.D. (1979) Recognition memory for typical and atypical actions in scripted activities: Tests for the script pointer+tag hypothesis. Journal of Verbal Learning and Verbal Behavior, 18:319-332.
Kohonen, T. (1989) Self-Organization and Associative Memory. Third edition. Berlin; Heidelberg; New York: Springer.
Kolodner, J.L. (1984) Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model. Hillsdale, NJ: Erlbaum.
McCarthy, R.A. and Warrington, E.K. (1990) Cognitive Neuropsychology: A Clinical Introduction. New York: Academic Press.
Miikkulainen, R. (1993) Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory. Cambridge, MA: MIT Press.
Postman, L. (1971) Transfer, interference and forgetting. In Kling, J.W. and Riggs, L.A., editors, Woodworth and Schlosberg's Experimental Psychology, Third edition, 1019-1132. New York: Holt, Rinehart and Winston.
Rumelhart, D.E. and McClelland, J.L. (1986) On learning the past tenses of English verbs. In Rumelhart, D.E. and McClelland, J.L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2, 216-271. Cambridge, MA: MIT Press.
Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) Learning internal representations by error propagation. In Rumelhart, D.E. and McClelland, J.L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1, 318-362. Cambridge, MA: MIT Press.
Schank, R.C. and Abelson, R.P. (1977) Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale, NJ: Erlbaum.
Sejnowski, T.J. and Rosenberg, C.R. (1987) Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168.
Williams, M.D. and Hollan, J.D. (1981) The process of retrieval from very long-term memory. Cognitive Science, 5:87-119.

--------------------------------------------------------------------

PSYCOLOQUY Book Review Instructions

The PSYCOLOQUY book review procedure is very similar to the commentary procedure, except that it is the book itself, not a target article, that is under review. (The Precis summarizing the book is intended to permit PSYCOLOQUY readers who have not read the book to assess the exchange, but the reviews should address the book, not primarily the Precis.) Note that as multiple reviews will be co-appearing, you need only comment on the aspects of the book relevant to your own specialty and interests, not necessarily the book in its entirety. Any substantive comments and criticism -- including points calling for a detailed and substantive response from the author -- are appropriate. Hence, investigators who have already reviewed or intend to review this book elsewhere are still encouraged to submit a PSYCOLOQUY review specifically written with this specialized multilateral review-and-response feature in mind.

1. Before preparing your review, please read carefully the Instructions for Authors and Commentators and examine recent numbers of PSYCOLOQUY.

2. Reviews should not exceed 500 lines. Where judged necessary by the Editor, reviews will be formally refereed.

3. Please provide a title for your review. As many commentators will address the same general topic, your title should be a distinctive one that reflects the gist of your specific contribution and is suitable for the kind of keyword indexing used in modern bibliographic retrieval systems. Each review should also have a brief (~50-60 word) Abstract.

4. All paragraphs should be numbered consecutively. Line length should not exceed 72 characters. The review should begin with the title, your name and full institutional address (including zip code) and email address. References must be prepared in accordance with the examples given in the Instructions. Please read the sections of the Instructions for Authors concerning style.

INSTRUCTIONS FOR PSYCOLOQUY AUTHORS AND COMMENTATORS

PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 40,000.
PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral science, cognitive science, neuroscience, social science, etc.). All contributions are refereed. Target article length should normally not exceed 500 lines [c. 4500 words]. Commentaries and responses should not exceed 200 lines [c. 1800 words].

All target articles, commentaries and responses must have (1) a short abstract (up to 100 words for target articles, shorter for commentaries and responses), (2) an indexable title, and (3) the authors' full name(s) and institutional address(es). In addition, for target articles only: (4) 6-8 indexable keywords, (5) a separate statement of the authors' rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field? what kind of commentary do you expect to elicit?), and (6) a list of potential commentators (with their email addresses). All paragraphs should be numbered in articles, commentaries and responses (see the format of already published articles in the PSYCOLOQUY archive; line length should be < 80 characters, no hyphenation).

It is strongly recommended that all figures be designed so as to be screen-readable ascii. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article.

PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected.

Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY.

Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu

Anonymous ftp archive is DIRECTORY pub/harnad/Psycoloquy HOST princeton.edu

From fellous at selforg.usc.edu Tue Aug 9 19:05:48 1994
From: fellous at selforg.usc.edu (Jean-Marc Fellous)
Date: Tue, 9 Aug 1994 16:05:48 -0700
Subject: Position in Bochum
Message-ID: <199408092305.QAA08804@selforg.usc.edu>

Please post the following job announcement:

-----------------------

The following announcement concerns a C3 professorship at the University of Bochum (Germany) in the field of neurocomputation.
It was originally written in German, since knowledge of German is preferred but not required (however, non-German-speaking applicants are expected to eventually learn German); an English translation follows.

Ruhr-Universität Bochum

The Institut für Neuroinformatik invites applications for a C3 professorship in Neuroinformatik. The institute is a central scientific facility of the university, with departments of Systems Biophysics and Theoretical Biology. Its research focuses on principles of self-organization and information processing in neural architectures. The position includes participation in the collegial leadership of the institute. Teaching duties include lectures on neural networks and on technically exploitable organizational principles of biological neural systems.

The appointee is expected to have a proven scientific record in at least one of the following areas:
- analysis and application of biological neural organizational principles
- artificial neural networks
- design of systems with neural architectures
- problems of self-organization

In addition to experience in theory (systems theory, nonlinear dynamics), an interest in application-oriented problems is expected. Cooperation with the Zentrum für Neuroinformatik in Bochum, which works primarily on application-oriented problems, is possible.

Ruhr-Universität Bochum is committed to promoting women in research and teaching. Severely disabled applicants will be given preference when equally qualified. Please send your written application with the usual documents to the Rektor of Ruhr-Universität Bochum, 44780 Bochum, Germany.

----- End Included Message -----

From druck at afit.af.mil Wed Aug 10 15:40:12 1994
From: druck at afit.af.mil (Dennis W. Ruck)
Date: Wed, 10 Aug 94 15:40:12 -0400
Subject: CFP: SPIE Applications and Science of Artificial Neural Networks VI
Message-ID: <9408101940.AA06900@gandalf.afit.af.mil>

CALL FOR PAPERS

SPIE Conference on Applications and Science of Artificial Neural Networks VI
Orlando, Florida, 17-21 April 1995

You are invited to submit a paper to the SPIE conference Applications and Science of Artificial Neural Networks VI. This conference will be held in conjunction with nearly 40 other conferences on topics in object recognition, aerospace sensing, and photonics. See below for a complete list.

--------------------------------------------------------------
Announcement and Call for Papers for
Applications and Science of Artificial Neural Networks VI
--------------------------------------------------------------

Conference Chairs: Steven K. Rogers, Dennis W. Ruck, Air Force Institute of Technology

Program Committee: Stanley C. Ahalt, The Ohio State Univ.; James C. Bezdek, Univ. of West Florida; Joe R. Brown, Microelectronics and Computer Technology Corp.; Lee A. Feldkamp, Ford Motor Co.; Michael Georgiopoulos, Univ. of Central Florida; Joydeep Ghosh, Univ. of Texas/Austin; Charles W. Glover, Oak Ridge National Lab.; John B. Hampshire, II, Jet Propulsion Lab.; Richard P. Lippmann, MIT Lincoln Lab.; Harley R. Myler, Univ. of Central Florida; Mary Lou Padgett, Auburn Univ.; Kevin L. Priddy, Accurate Automation Corp.; Gintaras V. Puskorius, Ford Motor Co.; Donald F. Specht, Lockheed Palo Alto Research Lab.; Gregory L. Tarr, Air Force Phillips Lab.; Gary Whittington, Univ. of Aberdeen (UK); Rodney G. Winter, Dept. of Defense
The focus of this conference is on real-world applications of artificial neural networks and on recent theoretical developments applicable to current applications. The goal of this conference is to provide a forum for interaction between researchers and industrial/government agencies with information processing requirements. Papers that investigate the advantages and disadvantages of artificial neural networks in specific real-world applications will be presented. Papers that clearly state existing problems in information processing that could potentially be solved by artificial neural networks will also be considered.

Sessions will concentrate on:
* innovative applications of artificial neural networks to solve real-world problems
* comparative performance in applications of target recognition, object recognition, speech processing, speaker identification, cochannel processing, signal processing in realistic environments, robotics, process control, and image processing
* demonstrations of properties and limitations of existing or new artificial neural networks as shown by or related to an application
* environments for artificial neural network development and implementation, with specific applications used to demonstrate features of the systems
* hardware implementation technologies that are general purpose or application specific
* knowledge acquisition and representation
* biologically inspired visual representation techniques
* decision support systems
* artificial life
* cognitive science
* hybrid systems (fuzzy, neural, genetic)
* neurobiology
* optimization
* sensation and perception
* system identification
* financial applications
* time series analysis and prediction
* pattern recognition
* medical applications
* intelligent control
* robotics

------------------------------------------------------------------
List of Conferences
------------------------------------------------------------------

The following conferences will all be held in Orlando, Florida, 17-21 April 1995, at the Marriott Orlando World Center:

1. Public Safety/Law Enforcement Technology
2. Photonics for Space Environments III
3. Space Environmental, Legal, and Safety Issues
4. Imaging Spectrometry
5. Commercialization of High-Resolution Satellite Imagery for Dual-Use Applications
6. Infrared Detectors and Instrumentation for Astronomy
7. Spaceborne Interferometry II
8. Space Telescopes and Instruments III
9. Fiber Optics in Astronomical Applications
10. Telescope Control Systems
11. Distributed Interactive Simulation (Critical Reviews)
12. Helmet- and Head-Mounted Displays and Symbology Design Requirements II
13. Cockpit Displays II
14. Space Guidance, Control, and Tracking II
15. Synthetic Vision for Vehicle Guidance and Control
16. Acquisition, Tracking, and Pointing IX
17. Applied Laser Radar Technology II
18. Air Traffic Control Technologies
19. Technologies for Advanced Land Combat (Critical Reviews); Part I: Rapid Force Projection Initiative; Part II: Advanced Vehicle Technologies; Part III: Information Sciences for Digitizing the Battlefield
20. Detection Technologies for Mines and Minelike Targets
21. Targets and Backgrounds: Characterization and Representation
22. Atmospheric Propagation and Remote Sensing IV
23. Tactical Control Technologies
24. Test and Evaluation of Defense-Related Infrared Detectors and Arrays
25. Infrared Imaging Systems: Design, Analysis, Modeling, and Testing VI
26. Signal Processing, Sensor Fusion, and Target Recognition IV
27. Algorithms for Synthetic Aperture Radar Imagery II
28. Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision II
29. Automatic Object Recognition V
30. Smart Infrared Focal Plane Arrays and Technology
31. Transition of Optical Processors into Systems 1995
32. Optical Pattern Recognition VI
33. Visual Information Processing IV
34. Applications and Science of Artificial Neural Networks VI
35. Applications of Fuzzy Logic Technology II
36. Thermosense XVII: An International Conference on Thermal Sensing and Imaging Diagnostic Applications
37. Flat Panel Displays for Defense Applications (Critical Reviews)
38. Digital Signal Processing Technology (Critical Reviews)

------------------------------------------------------------------
General Information
------------------------------------------------------------------

SPIE's International Symposium on Aerospace/Defense Sensing and Dual-Use Photonics
17-21 April 1995
Marriott's Orlando World Center Resort and Convention Center, Orlando, Florida USA

SPIE's 1995 Aerospace/Defense Sensing and Dual-Use Photonics Symposium will be held at:

Marriott's Orlando World Center Hotel
8701 World Center Drive
Orlando, Florida 32821-6398
Phone: 407/239-4200 or 800/621-0638 (outside Florida)
Fax: 407/239-5958

Accommodations
--------------
SPIE will reserve a block of rooms for attendees at the Marriott Orlando World Center Hotel. Room rates at the Marriott will be $129 single and $142 double, plus tax. Alternate hotels in the immediate area will also be available. Information concerning hotels and prices will be announced in the advance program.

Advance Technical Program
-------------------------
The comprehensive Advance Technical Program for this symposium will list conferences, paper titles and authors in order of presentation, the educational short course schedule (including course descriptions and instructor biographies), and an outline of all planned special events. Call SPIE at 206/676-3290 (Pacific Time) to request that a copy be sent to you when it becomes available in January 1995.

Conference Registration
-----------------------
The following registration fees for SPIE's International Symposium on Aerospace/Defense Sensing and Dual-Use Photonics are preliminary and are included to assist you in planning.

Conference Fees without Proceedings       Member   Nonmember
Attendee Full Conference.................. $360 ..... $420
One Day................................... $160 ..... $190
Author Full Conference*................... $325 ..... $385
Author One Day*........................... $160 ..... $190
Student................................... $85 ...... $95
* Author fee includes a proceedings

Short Course Fees                         Member   Nonmember
Half-day course (3.5 hr).................. $145 ..... $170
Full-day course (6.5 hr).................. $265 ..... $310
Two-day course (12 hr).................... $485 ..... $570
Florida sales tax will be added to short course fees.

How to Contact SPIE
-------------------
If you have further questions, or need assistance, please send a message to info-optolink-service at mom.spie.org. You will receive a response from an SPIE staff member.

Join SPIE Today
---------------
Keep in touch with the dynamic world of optics and optoelectronics by becoming a member of SPIE.

Full SPIE Membership
Joining SPIE as a full member provides you with many benefits, including:
* Voting privileges
* Eligibility to hold SPIE office
* Subscription to OE Reports * Subscription to Optical Engineering, SPIE's monthly journal * Annual SPIE Member Guide * Full member discounts (~20%) on SPIE publications * Full member discounts (~15%) on SPIE conferences and short courses * Discounts on publications from other publishers as available * Member rates for SPIE-cosponsored technical events $85 in North America/$95 outside North America (Student Memberships and Associate Student Memberships available at reduced rates.) Working Group Membership ------------------------ Working Groups are interactive networks that foster professional contacts and information flow among technically related individuals, groups, companies, and institutions in specific areas of technology. Individual Membership ($15) Group Memberships and Corporate Memberships are available. Contact SPIE for complete list of working groups and benefits. ------------------------------------------------------------------- Abstract Due Date: 19 September 1994 On-Site Proceedings Manuscript Due Date: 23 January 1995 Manuscript due date for on-site proceedings must be strictly observed. ------------------------------------------------------------------ For a complete text of the Announcement and Call for Papers for SPIE's International Symposium on Aerospace/Defense Sensing and Dual-Use Photonics, contact SPIE at either the European Office, or International Headquarters addresses below. Contact addresses: SPIE in Europe: SPIE European Office c/o HIB-INFONET P.O. Box 4463 N-5028 Bergen, Norway Phone: 47 55 54 37 84 Fax: 47 55 96 21 75 E-mail: spie at hibinc.no SPIE International Headquarters P.O. Box 10 Bellingham, WA 98227-0010 USA Phone: 206/676-3290 Fax: 206/647-1445 E-mail: spie at spie.org Telnet/FTP: spie.org World Wide Web URL: http://www.spie.org ------------------------------------------------------------------- SPIE--The International Society for Optical Engineering SPIE is a nonprofit society dedicated to advancing engineering and scientific applications of optical, electro-optical, and optoelectronic instrumentation, systems, and technology. Its members are scientists, engineers, and users interested in the reduction to practice of these technologies. SPIE provides the means for communicating new developments and applications to the scientific, engineering, and user communities through its publications, symposia, and short courses. SPIE is dedicated to bringing you quality electronic media and online services. ------------------------------------------------------------------- Dennis W. Ruck Air Force Institute of Technology d.ruck at ieee.org Wright-Patterson AFB, Ohio AFIT/ENG, Bldg 642, 2950 P ST, Wright-Patterson AFB OH 45433-7765 Ph. (513) 255-6565 ext. 4285 Fax (513) 476-4055 From UBJTP69 at CCS.BBK.AC.UK Thu Aug 11 14:41:00 1994 From: UBJTP69 at CCS.BBK.AC.UK (Gareth) Date: Thu, 11 Aug 94 14:41 BST Subject: Thesis printing problem Message-ID: It has been reported that the file gaskell.thesis.ps.Z in the neuroprose archive will not print out from some unix systems. If this is the case, edit the file and delete the initial character (^D). The file should then print out properly. Sorry for any inconvenience. 
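For anyone who would rather not edit the file by hand, the same fix can be scripted; this is only a convenience sketch, which assumes the uncompressed file is in the current directory:

# Strip a leading Ctrl-D (0x04) from the uncompressed PostScript file.
with open("gaskell.thesis.ps", "rb") as f:
    data = f.read()
if data.startswith(b"\x04"):
    with open("gaskell.thesis.ps", "wb") as f:
        f.write(data[1:])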
Gareth Gaskell

From pluto at cs.ucsd.edu Thu Aug 11 16:49:04 1994
From: pluto at cs.ucsd.edu (Mark Plutowski)
Date: Thu, 11 Aug 94 13:49:04 -0700
Subject: Paper in Neuroprose: estimating generalization with cross-validation
Message-ID: <9408112049.AA14173@beowulf>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/pluto.imse.ps.Z

|This is related to the generalization debate on the Machine Learning List.|

By popular request, the following paper has been placed in Neuroprose.

Title: "Cross-validation estimates integrated mean squared error."
Authors: Plutowski, M. (1,3), S. Sakata (2), H. White (2,3).
(1) Computer Science and Engineering, (2) Economics, (3) Institute for Neural Computation (all at UCSD).

A discussion on the Machine Learning List prompted the question "Have theoretical conditions been established under which cross-validation is justified?" The answer is "Yes." The statistical literature abounds with application-specific and model-specific demonstrations that cross-validation is statistically accurate and precise for use as a real-world estimate of an ideal measure of generalization known as Integrated Mean Squared Error (IMSE). IMSE is the mean squared error averaged over all training sets of a particular size. IMSE is closely related to Prediction Risk (aka statistical risk); therefore such results are applicable to statistical risk as well (as averaged over training sets of a particular size). See Plutowski's thesis in Neuroprose/Thesis for the explicit relationship between IMSE and statistical risk.

This paper extends such results to apply to nonlinear regression in general. Strong convergence (w.p.1) and unbiasedness are proved. The key assumption is that the training and test data be independent and identically distributed (i.i.d.); that is, the data must be drawn from the same space (stationarity), independently of previously sampled data. Note that if training data are explicitly excluded from the test sample, then the i.i.d. assumption does not hold, since in this case the test sample is drawn from a space that depends upon (is conditioned by) the particular choice of training sample. Therefore, the measure of generalization referred to in the raging debate on the Machine Learning List would not meet the conditions employed by the results in this paper.

Filename: pluto.imse.ps.Z.
Title: "Cross-validation estimates integrated mean squared error."
File size: 97K compressed, 242K uncompressed. 17 single-spaced pages (8 pages of text; the remainder is a mathematical appendix).
Email contact: pluto at cs.ucsd.edu.
SUBJECT: theorems proving cross-validation is a statistically accurate and precise estimator of an ideal measure of generalization. An abridged version of this appeared in NIPS 6.

= Mark Plutowski

PS: Sorry, no hard copies available.
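For readers who want to see the estimator in its simplest form, here is a toy sketch of cross-validation as an estimate of prediction MSE on i.i.d. regression data. It uses k-fold CV on a polynomial fit purely for brevity; it illustrates the idea only and is not the estimator analyzed in the paper:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=200)   # i.i.d. training data

def cv_mse(x, y, k=10, deg=5):
    """Average held-out squared error over k folds."""
    idx = rng.permutation(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], deg)   # fit on k-1 folds
        pred = np.polyval(coef, x[fold])             # test on the held-out fold
        errs.append(np.mean((pred - y[fold]) ** 2))
    return float(np.mean(errs))

print("CV estimate of prediction MSE:", cv_mse(x, y))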
From giles at research.nj.nec.com Thu Aug 11 19:01:15 1994
From: giles at research.nj.nec.com (Lee Giles)
Date: Thu, 11 Aug 94 19:01:15 EDT
Subject: Corrected ps file of previously announced TR.
Message-ID: <9408112301.AA10157@fuzzy>

It seems that a figure in the postscript file of this TR generated printing problems for certain printers. We changed the figure to eliminate this problem. We apologize to anyone who was inconvenienced. In the revised TR, we also included some references that were initially, and unintentionally, excluded.

Lee Giles, Bill Horne, Tsungnan Lin
_________________________________________________________________________________________

Learning a Class of Large Finite State Machines with a Recurrent Neural Network

UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-94-94 AND CS-TR-3328

C. L. Giles[1,2], B. G. Horne[1], T. Lin[1,3]
[1] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
[2] UMIACS, University of Maryland, College Park, MD 20742
[3] EE Department, Princeton University, Princeton, NJ 08540
{giles,horne,lin}@research.nj.nec.com

One of the issues in any learning model is how it scales with problem size. Neural networks have not been immune to scaling issues. We show that a dynamically-driven discrete-time recurrent network (DRNN) can learn rather large grammatical inference problems when the strings of a finite memory machine (FMM) are encoded as temporal sequences. FMMs are a subclass of finite state machines which have a finite memory, or a finite order of inputs and outputs. The DRNN that learns the FMM is a neural network that maps directly from the sequential machine implementation of the FMM. It has feedback only from the output and not from any hidden units; an example is the recurrent network of Narendra and Parthasarathy. (FMMs that have zero order in the feedback of outputs are called definite memory machines and are analogous to time-delay or finite impulse response neural networks.) Due to their topology, these DRNNs are at least as powerful as any sequential machine implementation of a FMM and should be capable of representing any FMM. We choose to learn particular FMMs. Specifically, these FMMs have a large number of states (simulations are for 256- and 512-state FMMs) but have minimal order, relatively small depth, and little logic when the FMM is implemented as a sequential machine. Simulations of the number of training examples versus generalization performance and FMM extraction size show that the number of training samples necessary for perfect generalization is less than that sufficient to completely characterize the FMM to be learned. This is in a sense a best-case learning problem, since any arbitrarily chosen FMM with a minimal number of states would have much more order and string depth, and would most likely require more logic in its sequential machine implementation.
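For orientation, the output-feedback topology described in the abstract can be sketched as tapped delay lines of past inputs and outputs feeding a feedforward net. The sizes, the random weights, and the binary input string below are illustrative assumptions, not the networks used in the TR:

import numpy as np

ORDER_IN, ORDER_OUT, HID = 3, 2, 8    # finite input/output memory orders
rng = np.random.default_rng(0)
W1 = rng.uniform(-0.5, 0.5, (HID, ORDER_IN + ORDER_OUT))
W2 = rng.uniform(-0.5, 0.5, HID)

def step(u_hist, y_hist):
    """Next output from past inputs and past outputs only
    (no feedback from hidden units)."""
    x = np.concatenate([u_hist, y_hist])
    h = np.tanh(W1 @ x)
    return float(np.tanh(W2 @ h))

u_hist = np.zeros(ORDER_IN)
y_hist = np.zeros(ORDER_OUT)
for u in [1, 0, 1, 1, 0]:             # a binary string as a temporal sequence
    u_hist = np.roll(u_hist, 1); u_hist[0] = u
    y = step(u_hist, y_hist)
    y_hist = np.roll(y_hist, 1); y_hist[0] = y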
--------------------------------------------------------------------------------------
FTP INSTRUCTIONS
unix> ftp cs.umd.edu (128.8.128.8)
Name: anonymous
Password: (your_userid at your_site)
ftp> cd pub/papers/TRs
ftp> binary
ftp> get 3328.ps.Z
ftp> quit
unix> uncompress 3328.ps.Z

OR

unix> ftp external.nj.nec.com (138.15.10.100)
Name: anonymous
Password: (your_userid at your_site)
ftp> cd pub/giles/papers
ftp> binary
ftp> get large.fsm.ps.Z
ftp> quit
unix> uncompress large.fsm.ps.Z
--------------------------------------------------------------------------------------
--
C. Lee Giles / NEC Research Institute / 4 Independence Way
Princeton, NJ 08540 / 609-951-2642 / Fax 2482
==

From grino at ic.upc.es Fri Aug 12 16:11:50 1994
From: grino at ic.upc.es (Robert Grino)
Date: Fri, 12 Aug 1994 16:11:50 UTC+0100
Subject: Paper in Neuroprose: Nonlinear System Identification Using Additive Dynamic Neural Networks
Message-ID: <232*/S=grino/OU=ic/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/grino.sysid.ps.Z

The file grino.sysid.ps.Z is now available for copying from the Neuroprose repository:

NONLINEAR SYSTEM IDENTIFICATION USING ADDITIVE DYNAMIC NEURAL NETWORKS

R. Grino (grino at ic.upc.es)
Instituto de Cibernetica - ESAII
Univ. Politecnica Catalunya
Diagonal, 647, 2nd floor
08028-Barcelona, SPAIN

(Reprint of a SICICA'94 (IFAC) paper)

ABSTRACT: In this work, additive dynamic neural models are used for the identification of nonlinear plants in on-line operation. To accomplish this task, a gradient parameter adaptation method based on sensitivity analysis is formulated, taking into account that the parameters of the model are arranged in matrix form. The methodology is applied to several nonlinear systems in simulation, and to a real dataset, to verify its performance.

=============================================================================
Robert Grino, E-mail: grino at ic.upc.es
Instituto de Cibernetica, Diagonal 647, 2nd floor, 08028 Barcelona, SPAIN
FAX number: (343) 4016605
=============================================================================

From dhw at santafe.edu Fri Aug 12 16:32:37 1994
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Fri, 12 Aug 94 14:32:37 MDT
Subject: No subject
Message-ID: <9408122032.AA06773@chimayo.santafe.edu>

Mark Plutowski recently said on Connectionists:

>>> A discussion on the Machine Learning List prompted the question "Have theoretical conditions been established under which cross-validation is justified?" The answer is "Yes." >>>

Mark is being polite by not using names; I am the one he is (implicitly) taking to task, for the following comment on the ML list:

>>> ... an assumption *must always* be present if we are to have any belief in learnability in the problem at hand... However, to give just one example, nobody has yet delineated just what those assumptions are for the technique of cross-validation. >>>

Mark is completely correct in his (implicit) criticism. As he says, there have in fact been decades of work analyzing cross-validation from a sampling theory perspective. Mark's thesis is a major contribution to this literature. Any implication coming from my message that such literature doesn't exist or is somehow invalid is completely mistaken and was not intended. (Indeed, I've had several very illuminating discussions with Mark about his thesis!)

The only defense for the imprecision of my comment is that I made it in the context of the ongoing discussion on the ML list that Mark referred to. That discussion concerned off-training-set error rather than i.i.d. error, so my comments implicitly assumed off-training-set error. And as Mark notes, his results (and the others in the literature) don't extend to that kind of error. (Another important distinction between the framework Mark uses and the implicit framework in the ML list discussion is that the latter has been concerned with zero-one loss, whereas Mark's work concentrates on quadratic loss.
The no-free-lunch results being discussed in the ML list change form drastically if one uses quadratic rather than zero-one loss. That should be no surprise, given, for example, the results in Michael Perrone's work.)

While on the subject of cross-validation and i.i.d. error, though, it's interesting to note that there is still much to be understood. For example, in the average-data scenario of sampling theory statistics that Mark uses, asymptotic properties are better understood than finite-data properties. And in the Bayesian this-data framework, very little is known for any data-size regime (though some work by Dawid on this subject comes to mind).

David Wolpert

From mccauley at ecn.purdue.edu Thu Aug 11 21:51:21 1994
From: mccauley at ecn.purdue.edu (James Darrell McCauley)
Date: Thu, 11 Aug 1994 20:51:21 -0500
Subject: connectionists suggestion
In-Reply-To: <9408112049.AA14173@beowulf>
Message-ID: <199408120151.UAA03022@alx3.ecn.purdue.edu>

Mark Plutowski (pluto at cs.ucsd.edu) writes on 11 Aug 94:
>FTP-host: archive.cis.ohio-state.edu
>FTP-filename: /pub/neuroprose/pluto.imse.ps.Z

May I suggest that posters be encouraged to use the URL format and *not* post instructions on the usage of ftp? I.e., the following would be sufficient:

ftp://archive.cis.ohio-state.edu/pub/neuroprose/pluto.imse.ps.Z

(This way readers may just cut/paste the string into xmosaic or whatever browser they use.) Not picking on Dr Plutowski, just a suggestion that came to mind...

--Darrell McCauley, aka jdm5548 at diamond.tamu.edu
James Darrell McCauley, Purdue Univ, West Lafayette, IN 47907-1146, USA
mccauley at ecn.purdue.edu, mccauley%ecn at purccvm.bitnet, pur-ee!mccauley

From harnad at Princeton.EDU Fri Aug 12 23:53:54 1994
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Fri, 12 Aug 94 23:53:54 EDT
Subject: French Cognitive Science Conference
Message-ID: <9408130353.AA25579@clarity.Princeton.EDU>

From: payette at uranus.atoci.uqam.ca
Subject: French International Cognitive Science Conference

ANNOUNCEMENT

The Seventh Colloquium of the Jacques Cartier Center, Lyon, France

THE COGNITIVE SCIENCES: FROM COMPUTATIONAL MODELS TO THE PHILOSOPHY OF MIND

Under the aegis of: the Pole Rhone-Alpes of the Cognitive Sciences; Programme Interdisciplinaire de Recherche Cognisciences, CNRS; Universite du Quebec a Montreal; Universite de Montreal; Universite Joseph Fourier; Universite Claude Bernard

Scientific committee: Denis Fisette (Universite du Quebec a Montreal, Quebec); Marc Jeannerod (Universite Claude Bernard, Lyon); Daniel Laurier (Universite de Montreal, Quebec); Daniel Payette (Universite du Quebec a Montreal, Quebec); Vincent Rialle (Universite Joseph Fourier, Grenoble); Guy Tiberghien (Universite Pierre Mendes-France, Grenoble)

Coordination in North America: Daniel Payette and Denis Fisette, Universite du Quebec a Montreal, Dpt de Philosophie, Dpt Psychology; C.P. 8888, Succ A, Montreal (Quebec) H3C-3P8, Canada; E-mail: payette at uranus.atoci.uqam.ca; tel (+514) 987 8418; Fax: (+514) 987 6721

Coordination in Europe: Vincent Rialle, Universite J. Fourier, Labo. TIMC-IMAG, Faculte de Medecine, 38706 La Tronche Cedex; E-mail: Vincent.Rialle at imag.fr; Tel. (+33) 76 63 71 87; Fax. (+33) 76 51 86 67

DATES: Wednesday, November 30th to Friday, December 2nd, 1994

CONFERENCE SITE: Amphitheatre CHARLES BERAUDIER, Conseil Regional RHONE-ALPES, 78 route de Paris, 69751 CHARBONNIERES-les-BAINS, France

*Talks will only be given by invited speakers. (Simultaneous French-English and English-French translation will be provided.)
THEME OF COLLOQUIUM

The modeling of mental processes in the various human cognitive activities has generated increasing interest in the scientific world today. Cognitive models, cognitive simulations, self-organization, adaptation, emergence, genetic selection, Darwinian mentalism and enaction are active research topics in neurological and psychological theory. The cognitive sciences offer a continuum of research extending from the engineering sciences to the philosophy of mind, including the neurosciences, cognitive psychology, linguistics, semantics, semiotics and artificial intelligence. Three subconferences will be organized around the following major complementary themes: (i) Modeling (cognitive and brain functions), (ii) Philosophy of Mind and Epistemology, and (iii) Applications (AI, technical and computational engineering).

(i) Modeling is a point of intersection for all these specialties because it includes the modeling of functions and dysfunctions of the central nervous system, the neurocomputing sciences, the modeling of psychocognitive and mental processes, the emergence of intentional structure on the basis of biological structure, enaction, genetic algorithms, neural networks, artificial "life," etc.

(ii) The philosophical and epistemological subconference poses questions like the following: Can we elaborate mathematical models of the mind and use them to describe and explain human behavior? Are we aiming toward a mathematical model of the mind? Can we capture the formal principles of the development and emergence of cognition? Can we technologically recreate thought? Is the computational symbolic paradigm, which has imposed itself over the last decades, still a powerful conceptual tool, or is it proving too reductionistic, and if so, how? What is the epistemological status of, for example, the alternative that the parallel distributed processing model proposes to the computational models of classical cognitivism? What is the relation of the modeling activity of the cognitive sciences and neurosciences to human experience?

(iii) The applications subconference will consider practical domains in which scientific results have been applied: the treatment of language, the automated cognitive analysis of textual documents (an intersection of linguistics, semantics, semiotics and artificial intelligence), aids to decision making, applications in sensory information processing, etc.

PREPROGRAM

WEDNESDAY, 30 November 1994
8h15 - 8h30 Welcoming address by the Conseil Regional
8h30 - 9h00 Guy Tiberghien (Universite Pierre Mendes-France, Grenoble): Introduction

SESSION 1: Neurocognitive and Psychocognitive Modeling
9h00 - 9h30 Jean-Francois Le Ny (Universite Paris-Sud, cognitive psychology): Why should cognitive models be computational?
9h30 - 9h45 Discussion
9h45 - 10h15 Marc Jeannerod (Universite Claude Bernard, Lyon, neuroscience): The representational brain
10h15 - 10h30 Discussion
10h30 - 10h45 BREAK
10h45 - 11h15 Zenon Pylyshyn (Rutgers University, USA, cognitive psychology): What's in the Mind? A Computational Approach to an Ancient Question
11h15 - 11h30 Discussion
11h30 - 12h00 Stevan Harnad (Princeton University, cognitive psychology): Models, motives and mentality
12h00 - 12h15 Discussion

MEAL

14h00 - 14h30 Michel Imbert (Universite Paul Sabatier, Toulouse, neuroscience): From the study of the brain to the understanding of the mind
14h30 - 14h45 Discussion
14h45 - 15h15 Guy Tiberghien (Universite Pierre Mendes-France, Grenoble, cognitive psychology): Connectionism: the supreme stage of behaviorism?
15h15 - 15h30 Discussion
15h30 - 15h45 BREAK
15h45 - 16h15 Jacques Demongeot (Universite Joseph Fourier, Grenoble, neuroscience): Recall memory in neural networks
16h15 - 16h30 Discussion
16h30 - 17h00 Bennet Murdock (University of Toronto, cognitive psychology): The Role of Formal Models in Memory Research
17h00 - 17h15 Discussion
17h15 - 17h45 Robert Proulx (Universite du Quebec a Montreal, neuropsychology): The biological plausibility of certain neural-network-based adaptive categorization systems
17h45 - 18h00 Discussion

THURSDAY, December 1

SESSION 2: Epistemology, Philosophy of Mind and Cognition
9h00 - 9h30 Elisabeth Pacherie (Universite de Provence, CNRS & CREA, Paris): Cognitive domains and modularity
9h30 - 9h45 Discussion
9h45 - 10h15 Pierre Livet (Universite de Provence & CREA, Paris, philosophy): Categorization and connectionism
10h15 - 10h30 Discussion
10h30 - 10h45 BREAK
10h45 - 11h15 Normand Lacharite (Universite du Quebec a Montreal, epistemology): Conflicts between models in the theory of representation
11h15 - 11h30 Discussion
11h30 - 12h00 Peter Gardenfors (Lund University, Sweden, philosophy): Language and the Evolution of Mind
12h00 - 12h15 Discussion

MEAL

14h00 - 14h30 Andy Clark (Washington University, philosophy): Wild Cognition: Putting Representation in its Place
14h30 - 14h45 Discussion
14h45 - 15h15 Kevin Mulligan (Universite de Geneve, Switzerland, philosophy): Perceptual constancy and spatial content
15h15 - 15h30 Discussion
15h30 - 15h45 BREAK
15h45 - 16h15 Ronald De Sousa (University of Toronto, epistemology): Rationality: a normative or a descriptive concept?
16h15 - 16h30 Discussion
16h30 - 17h00 Daniel Laurier (Universite de Montreal, philosophy): Rationality and naturalism
17h00 - 17h15 Discussion
17h15 - 17h45 Joelle Proust (CNRS & CREA, Paris, philosophy): A naturalist model of intentionality
17h45 - 18h00 Discussion

FRIDAY, December 2

SESSION 3: AI Modeling, Language Processing, and Cognitive Semantics
9h00 - 9h30 Paul Jorion (Maison des Sciences de l'Homme, Paris, cognitive psychology): Modeling the memory network: a minimalist use of AI
9h30 - 9h45 Discussion
9h45 - 10h15 Bernard Amy (Universite Joseph Fourier, Grenoble, connectionism): The place of neural networks in AI
10h15 - 10h30 Discussion
10h30 - 10h45 BREAK
10h45 - 11h15 Paul Bourgine (CEMAGREF, Paris-Antony, AI and modeling): Co-evolution and the emergence of the self
11h15 - 11h30 Discussion
11h30 - 12h00 Paul Pietroski (McGill University, Canada, philosophy): What can linguistics teach us about belief?
12h00 - 12h15 Discussion

MEAL

14h00 - 14h30 Francois Rastier (Institut National de la Langue Francaise, CNRS, computational linguistics): The hermeneutic paradigm and semiotic mediation
14h30 - 14h45 Discussion
14h45 - 15h15 Jean-Guy Meunier (Universite du Quebec a Montreal, semiotics): The impact of cognitive perspectives on information processing
15h15 - 15h30 Discussion
15h30 - 15h45 BREAK
15h45 - 16h15 Guy Denhiere (Universite Paris VIII, cognitive psychology) and Isabelle Tapiero (Universite Lyon II, cognitive psychology): Meaning as an emergent structure: from lexical access to text comprehension
16h15 - 16h30 Discussion
16h30 - 17h00 Paul Freedman (Centre de Recherche en Informatique de Montreal, AI): Artificial vision: the intelligent processing of documents
17h00 - 17h15 Discussion
17h15 - 17h45 Denis Vernant (Universite Pierre Mendes-France, Grenoble, philosophy): Machine intelligence and its dialogical capacity
17h45 - 18h00 Discussion

18h00: END OF COLLOQUIUM

------------------------------------------------------------------------
ADMISSION FEES (includes access to the conference room, meals, and the colloquium documents)
Individuals ................................................. 1500 FF
Students (include proof of eligibility with registration) ..... 500 FF
----------------------------------------------------------------------------

REGISTRATION BULLETIN
(The Cognitive Sciences: From Computational Models to the Philosophy of Mind)

Name: ___________________________________________________________________
Status: _____________________________________
Institution/Company: _________________________
Complete Address: _______________________________________________________
Fax: ________________________ Phone: ______________________
E-mail: __________________________________

Enclosed: check or money order for (_____________________ FF)
(Make the check or money order payable to CENTRE JACQUES CARTIER.)
- Send information on possibilities of housing in Lyon (______)
- Send me the colloquium brochure (______)
- November 30 meal __  - December 1 meal __  - December 2 meal __

RETURN TO: CENTRE JACQUES CARTIER, 86 rue Pasteur, 69365 Lyon Cedex 07, France.
Phone: (33) 78 69 72 21

From dhw at santafe.edu Sat Aug 13 01:19:30 1994
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Fri, 12 Aug 94 23:19:30 MDT
Subject: Cross-validation theory
Message-ID: <9408130519.AA08172@chimayo.santafe.edu>

Mark Plutowski recently said on Connectionists:

>>> A discussion on the Machine Learning List prompted the question "Have theoretical conditions been established under which cross-validation is justified?" The answer is "Yes." >>>

As Mark goes on to point out, there are decades of work on cross-validation from a likelihood-driven sampling theory perspective. Indeed, Mark's thesis is a major addition to that literature. Mark then correctly notes that this literature doesn't *directly* apply to the discussion on the ML list, since that discussion involves off-training-set rather than i.i.d. error.

It should be noted that there is another important distinction between the framework Mark uses and the implicit framework in the ML list discussion: the latter has been concerned with zero-one loss, whereas Mark's work concentrates on quadratic loss. The no-free-lunch results being discussed in the ML list change form drastically if one uses quadratic rather than zero-one loss. That should not be too surprising, given, for example, the results in Michael Perrone's work involving quadratic loss.

It's also worth noting that even in the regime of i.i.d. error and quadratic loss, there is still much to be understood. For example, in the average-data scenario of sampling theory statistics that Mark uses, asymptotic properties are better understood than finite-data properties. And in the Bayesian this-data framework, very little is known for any data-size regime (though some work by Dawid on this subject comes to mind).

David Wolpert

From mdg at magi.ncsl.nist.gov Tue Aug 16 11:58:28 1994
From: mdg at magi.ncsl.nist.gov (Mike Garris x2928)
Date: Tue, 16 Aug 94 11:58:28 EDT
Subject: Announcement: Public Domain OCR
Message-ID: <9408161558.AA02660@magi.ncsl.nist.gov>

ANNOUNCEMENT - PUBLIC DOMAIN OCR
NIST FORM-BASED HANDPRINT RECOGNITION SYSTEM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Michael D. Garris (mdg at magi.ncsl.nist.gov)
James L. Blue, Gerald T. Candela, Darrin L. Dimmick, Jon Geist, Patrick J. Grother, Stanley A. Janet, and Charles L. Wilson
National Institute of Standards and Technology, Building 225, Room A216, Gaithersburg, Maryland 20899
Phone: (301) 975-2928 FAX: (301) 840-1357

The National Institute of Standards and Technology (NIST) has developed a standard reference form-based handprint recognition system for evaluating optical character recognition (OCR). NIST is making this recognition system freely available to the general public on an ISO-9660 format CD-ROM. The recognition system processes the Handwriting Sample Forms distributed with NIST Special Database 1 and NIST Special Database 3. The system reads handprinted fields containing digits, lower case letters, and upper case letters, and reads a text paragraph containing the Preamble to the U.S. Constitution.

This is a source code distribution written primarily in C and is organized into 11 libraries. There are approximately 19,000 lines of code supporting more than 550 subroutines. Source code is provided for form registration, form removal, field isolation, field segmentation, character normalization, feature extraction, character classification, and dictionary-based post-processing.
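The character classifier in this distribution is an optimized Probabilistic Neural Network (listed among the utilities below). The following is a minimal sketch of the PNN idea (a Parzen-window density estimate per class) on toy data; the Gaussian kernels and blob data are illustrative assumptions, not the NIST implementation:

import numpy as np

def pnn_classify(train_x, train_y, query, sigma=0.5):
    """Return the class whose kernel-density estimate at `query` is largest."""
    scores = {}
    for c in np.unique(train_y):
        pts = train_x[train_y == c]
        d2 = np.sum((pts - query) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy usage: two Gaussian blobs standing in for KL feature vectors.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(x, y, np.array([2.8, 3.1])))   # -> 1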
These utilities include the application of CCITT Group 4 decompression, IHead file manipulation, spatial histograms, Least-Squares fitting, spatial zooming, connected components, Karhunen-Loeve (KL) feature extraction, optimized Probabilistic Neural Network classification, multiple-key sorting, Levenshtein distance dynamic string alignment, and dictionary-based post-processing. Two supporting programs are provided that compute eigenvectors and KL feature vectors for training classifiers. Unlike the recognition system (which is written entirely in C), these two programs contain FORTRAN subroutines. To support these programs, a training set of 168,365 segmented and labeled character images is provided. About 1000 writers contributed to this training set. The NIST standard reference recognition system is designed to run on UNIX workstations and has been successfully compiled and tested on a Digital Equipment Corporation (DEC) Alpha, Hewlett Packard (HP) Model 712/80, IBM RS6000, Silicon Graphics Incorporated (SGI) Indigo 2, SGI Onyx, SGI Challenge, Sun Microsystems (Sun) IPC, Sun SPARCstation 2, Sun 4/470, and a Sun SPARCstation 10.** Scripts for installation and compilation on these architectures are provided with this distribution. A CD-ROM distribution of this standard reference system can be obtained free of charge by sending a letter of request to Michael D. Garris at the address above. The letter, preferably on company letterhead, should identify the requesting organization or individuals. This system or any portion of this system may be used without restrictions. However, redistribution of this standard reference recognition system is strongly discouraged as any subsequent corrections or updates will be sent to registered recipients only. This software was produced by NIST, an agency of the U.S. government, and by statute is not subject to copyright in the United States. Recipients of this software assume all responsibilities associated with its operation, modification, and maintenance. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ** Specific hardware and software products identified were used in order to adequately support the development of this technology. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the equipment identified is necessarily the best available for the purpose.

From terry at salk.edu Tue Aug 16 20:12:13 1994 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 16 Aug 94 17:12:13 PDT Subject: Neural Computation 6:5 Message-ID: <9408170012.AA07668@salk.edu> NEURAL COMPUTATION September 1994 Volume 6 Number 5 Articles: A Bayesian Analysis of Self-Organizing Maps Stephen P. Luttrell Network Amplification of Local Fluctuations Causes High Spike Rate Variability, Fractal Firing Patterns and Oscillatory Local Field Potentials Marius Usher, Martin Stemmler, Christof Koch and Zeev Olami Note: Statistical Analysis of an Autoassociative Memory Network A. M. N.
Fu Letters: Loading Deep Networks is Hard Jiri Sima Measuring the VC-dimension of a Learning Machine Vladimir Vapnik, Esther Levin and Yann Le Cun Neural Nets with Superlinear VC-Dimension Wolfgang Maass A Novel Design Method for Multilayer Feedforward Neural Networks Jihong Lee An Internal Mechanism for Detecting Parasite Attractors in a Hopfield Network Jean-Dominique Gascuel, Bahram Moobed and Michel Weinfeld On Langevin Updating in Multilayer Perceptrons Thorsteinn Rognvaldsson Probabilistic Winner-Take-All Learning Algorithm for Radial-Basis-Function Neural Classifiers Hossam Osman and Moustafa M. Fahmy Realization of the "Weak Rod" by a Double Layer Parallel Network T. Matsumoto and K. Kondo Learning in Neural Networks with Material Synapses Daniel J. Amit and Stefano Fusi Model Based on Extracellular Potassium for Spontaneous Synchronous Activity in Developing Retinas Pierre-Yves Burgi and Norberto M. Grzywacz Bayesian Modeling and Classification of Neural Signals Michael S. Lewicki ----- SUBSCRIPTIONS - 1994 - VOLUME 6 - BIMONTHLY (6 issues) ______ $40 Student and Retired ______ $65 Individual ______ $166 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu -----

From ucganlb at ucl.ac.uk Wed Aug 17 05:01:47 1994 From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London) Date: Wed, 17 Aug 94 10:01:47 +0100 Subject: pre-print in neuroprose: hippocampus - spatial models Message-ID: <93386.9408170901@link-1.ts.bcc.ac.uk> ftp://archive.cis.ohio-state.edu/pub/neuroprose/burgess.hbtnn.ps.Z The above file has been put on neuroprose for anonymous ftp, www or whatever; contact n.burgess at ucl.ac.uk with any retrieval problems. All the best, Neil HIPPOCAMPUS - SPATIAL MODELS Neil Burgess, Michael Recce & John O'Keefe Dept. of Anatomy, University College London, London WC1E 6BT, U.K. e-mail: n.burgess at ucl.ac.uk This is a brief review of models of the hippocampus, focussing on spatial aspects of cell-firing and hippocampal function. To appear in `The Handbook of Brain Theory and Neural Networks' (M. A. Arbib Ed.), Bradford Books/MIT Press, 1995, and restricted in length and number of citations accordingly. 8 pages, 0.46 Mbytes uncompressed, hard copies available in extreme circumstances only.

From uzimmer at informatik.uni-kl.de Wed Aug 17 12:14:01 1994 From: uzimmer at informatik.uni-kl.de (Uwe R.
Zimmer, AG vP) Date: Wed, 17 Aug 94 17:14:01 +0100 Subject: Papers available (world modelling, mobile robots) Message-ID: <940817.171401.723@ag_vp_file_server.informatik.uni-kl.de> Several recent papers about: -------------------------------------------------------------- --- Learning, Robotics, Visual Search, Navigation, --- --- Topological Maps & Robust Mobile Robots --- --- Neural Networks --- -------------------------------------------------------------- are now available via FTP: ------------------------------------------------------------------------ --- Comparing World-Modelling Strategies for Autonomous Mobile Robots ------------------------------------------------------------------------ --- File name is : Zimmer.Comparison.ps.Z --- IWK `94, Ilmenau, Germany, September 27 - 30, 1994 Comparing World-Modelling Strategies for Autonomous Mobile Robots Uwe R. Zimmer & Ewald von Puttkamer The focus of this paper is on strategies for adapting a number of internal representations to the current environment of a mobile robot.

From svc at demos.lanl.gov Tue Aug 16 15:22:10 1994 From: svc at demos.lanl.gov (Stephen Coggeshall) Date: Tue, 16 Aug 94 13:22:10 MDT Subject: graduate student positions Message-ID: <9408161922.AA13573@demos.lanl.gov> At Los Alamos National Laboratory in New Mexico we have a small research effort using adaptive computational models for a variety of projects. At this time there may be the possibility of a few graduate student positions available, specifically for pattern recognition/database mining with application to financial problems. We are interested in highly motivated, self-directed students with strong backgrounds in programming, math, and neural net applications. Interested parties can contact Steve (svc at lanl.gov). Please describe briefly your past work and current interests, as well as your availability.

From R.Gaizauskas at dcs.shef.ac.uk Fri Aug 19 17:12:12 1994 From: R.Gaizauskas at dcs.shef.ac.uk (Robert John Gaizauskas) Date: Fri, 19 Aug 94 17:12:12 BST Subject: AISB-95 WORKSHOP/TUTORIAL CALL Message-ID: <9408191612.AA03843@dcs.shef.ac.uk> ------------------------------------------------- AISB-95: CALL FOR WORKSHOP AND TUTORIAL PROPOSALS ------------------------------------------------- Call for Workshop Proposals: AISB-95 University of Sheffield, Sheffield, England April 3 -- 4, 1995 Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) The AISB Committee invites proposals for workshops to be held in conjunction with the Tenth Biennial Conference on AI and Cognitive Science (AISB-95). While the main conference will run for three days from Wednesday, April 5 to Friday, April 7, the workshops will be held on the two days preceding the main event: Monday, April 3 and Tuesday, April 4. The main conference has the theme "Hybrid Problems, Hybrid Solutions" (see the main conference call) and while proposals for workshops related to that theme would be particularly welcome, proposals are invited for workshops relating to any aspect of Artificial Intelligence or the Simulation of Behaviour. Proposals, from an individual or a pair of organisers, for workshops between 0.5 and 2 days long will be considered. Workshops will probably address topics which are at the forefront of research, but not yet sufficiently developed to warrant a full-scale conference. Submission: ---------- A workshop proposal should contain the following information: 1. Workshop Title 2. A detailed outline of the workshop.
This should include the necessary background and the potential target audience for the workshop and a justified estimate of the number of possible attendees. Please also state the length and preferred date(s) of the workshop. Specify any equipment requirements, indicating whether the organisers would be expected to meet them. 3. A brief resume of the organiser(s). This should include: background in the research area, references to published work in the topic area and relevant experience, such as previous organisation or chairing of workshops. 4. Administrative information. This should include: name, mailing address, phone number, fax, and email address if available. In the case of multiple organisers, information for each organiser should be provided, but one organiser should be identified as the principal contact. 5. A draft Call for Participation. This should serve the dual purposes of informing and attracting potential participants. The organisers of accepted workshops are responsible for issuing a call for participation, reviewing requests to participate and scheduling the workshop activities within the constraints set by the Workshop Organiser. They are also responsible for submitting a collated set of papers for their workshop to the Workshop Organiser. Dates: ------ Intentions to organise a workshop should be made known to the Workshop Organiser as soon as possible. Proposals must be received by October 18th 1994. Decisions about topics and speakers will be made in early November. Collated sets of papers to be received by March 15th 1995. Proposals should be sent to: Dr. Robert Gaizauskas Department of Computer Science University of Sheffield 211 Portobello Street Regent Court Sheffield S1 4DP U.K. email: robertg at dcs.shef.ac.uk phone: +44 (0)742 825572 fax: +44 (0)742 780972 Electronic submission (plain ascii text) is highly preferred, but hard copy submission is also accepted, in which case 5 copies should be submitted. Proposals should not exceed 2 sides of A4 (i.e. 120 lines of text approx.). --------------------------------------------------------------------- Call for Tutorial Proposals: AISB-95 University of Sheffield, Sheffield, England April 3 -- 4, 1995 Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) The AISB Committee invites proposals for Tutorials to be held in conjunction with the Tenth Biennial Conference on AI and Cognitive Science (AISB-95). While the main conference will run for three days from Wednesday, April 5 to Friday, April 7, the tutorials will be held on the two days preceding the main event: Monday, April 3 and Tuesday, April 4. Proposals for full and half day tutorials, from an individual or pair of presenters, will be considered. They may be offered both on standard topics and on new and more advanced aspects of Artificial Intelligence or Simulation of Behaviour. Anyone interested in presenting a tutorial should submit a proposal to the Workshop Organiser Dr Robert Gaizauskas (addresses below). Submission: ---------- A tutorial proposal should contain the following information: 1. Tutorial Title 2. A brief description of the tutorial, suitable for inclusion in a brochure. 3. A detailed outline of the tutorial. This should include the necessary background and the potential target audience for the tutorial and a justified estimate of the number of possible attendees. Please also state the length and preferred date(s) of the tutorial. 
Specify any equipment requirements, indicating whether the organisers would be expected to meet them. 4. A brief resume of the presenter(s). This should include: background in the tutorial area, references to published work in the topic area and relevant experience. Published work should, ideally, include a published tutorial-level article on the subject. Relevant experience is teaching experience, including previous conference tutorials or short courses presented. 5. Administrative information. This should include: name, mailing address, phone number, fax, and email address if available. In the case of multiple presenters, information for each presenter should be provided, but one presenter should be identified as the principal contact. The presenter(s) of accepted tutorials must submit a set of tutorial notes (which may include relevant tutorial-level publications) to the Workshop Organisers by March 15th 1995. Dates: ------ Intentions to organise a tutorial should be made known to the Workshop Organiser as soon as possible. Proposals must be received by October 18th 1994. Decisions about tutorial topics and speakers will be made in early November. Tutorial notes must be received by March 15th 1995. Proposals should be sent to: Dr. Robert Gaizauskas Department of Computer Science University of Sheffield 211 Portobello Street Regent Court Sheffield S1 4DP U.K. email: robertg at dcs.shef.ac.uk phone: +44 (0)742 825572 fax: +44 (0)742 780972 Electronic submission (plain ascii text) is highly preferred, but hard copy submission is also accepted, in which case 5 copies should be submitted. Proposals should not exceed 2 sides of A4 (i.e. 120 lines of text approx.).

From bishopc at helios.aston.ac.uk Fri Aug 19 09:33:57 1994 From: bishopc at helios.aston.ac.uk (bishopc) Date: Fri, 19 Aug 1994 13:33:57 +0000 Subject: Research Associate - Software Support Message-ID: <9192.9408191233@sun.aston.ac.uk> ------------------------------------------------------------------- Aston University Neural Computing Research Group RESEARCH ASSOCIATE - SYSTEMS AND SOFTWARE SUPPORT ------------------------------------------------- Applications are invited for a position as a Research Associate within the Neural Computing Research Group both to provide support for the Group's local network of Sun workstations and to undertake software development and research in support of projects within the Group. The Neural Computing Research Group currently comprises three professors, two lecturers, three postdoctoral research fellows and ten postgraduate research students. In addition, two further Lectureships have recently been advertised. Current research activity focusses on principled approaches to neural computing, and spans a broad spectrum from theoretical foundations to industrial and commercial applications. The ideal candidate will have significant experience of the UNIX operating system and system maintenance, experience of software engineering in C++, and an understanding of neural networks. The responsibilities of the successful candidate will be as follows: (1) To provide system support for the Group's LAN of Sun UNIX workstations and associated peripherals, ensuring that an efficient working environment is maintained. (2) To support the development, testing and documentation of the NetLib C++ library of neural network software. (3) To assist with numerical experiments in support of research projects, including industrial contracts.
This aspect of the work may provide opportunities for joint publication in academic journals. (4) To provide such other software support as may be required, such as the maintenance of LaTeX, provision of WWW pages for the Group, etc. Salaries will be 13,941 UK pounds or above, depending on the experience and qualifications of the successful applicant. If you wish to apply for this position, please send a CV, together with the names and addresses of 3 referees, to: Professor Chris Bishop Neural Computing Research Group Aston University Birmingham B4 7ET, U.K. Tel: 021 359 3611 ext. 4270 Fax: 021 333 6215 e-mail: c.m.bishop at aston.ac.uk closing date: 9 September 1994

From harnad at Princeton.EDU Fri Aug 19 22:51:41 1994 From: harnad at Princeton.EDU (Stevan Harnad) Date: Fri, 19 Aug 94 22:51:41 EDT Subject: Brain Rhythms & Cognition: PSYCOLOQUY Call for Commentary Message-ID: <9408200251.AA06967@clarity.Princeton.EDU> Pulvermueller et al: BRAIN RHYTHMS, CELL ASSEMBLIES AND COGNITION The target article whose abstract appears below has just been published in PSYCOLOQUY, a refereed electronic journal of Peer Commentary sponsored by the American Psychological Association. Formal commentaries are now invited. The full text can be easily and instantly retrieved by a variety of simple means, described below. Instructions for Commentators appear after the retrieval Instructions. TARGET ARTICLE AUTHOR'S RATIONALE FOR SOLICITING COMMENTARY: Fast periodic brain responses have been investigated in various mammals, humans included. Although most neuroscientists agree on the importance of these processes, it is not at all clear what role they play in cortical and subcortical processing. Are they simply a byproduct of perceptual processes, or do they play a role in what can be called higher or cognitive processing in the brain? We tried to answer this question by performing experiments in which spectral responses to meaningful words and physically similar but meaningless pseudowords were recorded from the human cortex. The result, differential 30-Hz responses to these stimuli, is interpreted in the framework of a Hebbian cell assembly theory. We hope that both the results and the brain-theoretic approach will stimulate a fruitful multidisciplinary discussion. ----------------------------------------------------------------------- psycoloquy.94.5.48.brain-rhythms.1.pulvermueller Friday 19 August 1994 ISSN 1055-0143 (30 paragraphs, 10 figs, 9 notes, 61 refs, 1203 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1994 Friedemann Pulvermueller et al. BRAIN RHYTHMS, CELL ASSEMBLIES AND COGNITION: EVIDENCE FROM THE PROCESSING OF WORDS AND PSEUDOWORDS Friedemann Pulvermueller (1) Hubert Preissl (1) Carsten Eulitz (2) Christo Pantev (2) Werner Lutzenberger (1) Thomas Elbert (2) Niels Birbaumer (1, 3) (1) Institut fuer Medizinische Psychologie und Verhaltensneurobiologie, Universitaet Tuebingen, Gartenstrasse 29, 72074 Tuebingen, Germany PUMUE at mailserv.zdv.uni-tuebingen.de (2) Institut fuer Experimentelle Audiologie, Universitaet Muenster, Kardinal von Galen-Ring 10, 48149 Muenster, Germany (3) Universita degli Studi, Padova, Italy ABSTRACT: In modern brain theory, cortical cell assemblies are assumed to form the basis of higher brain functions such as form and word processing. When gestures or words are produced and perceived repeatedly by the infant, cell assemblies develop which represent these building blocks of cognitive processing.
This leads to an obvious prediction: cell assembly activation ("ignition") should take place upon presentation of items relevant for cognition (e.g., words, such as "moon"), whereas no ignition should occur with meaningless items (e.g., pseudowords, such as "noom"). Cell assembly activity may be reflected by high-frequency brain responses, such as synchronous oscillations or rhythmic spatiotemporal activity patterns in which large numbers of neurons participate. In recent MEG and EEG experiments, differential gamma-band responses of the human brain were observed upon presentation of words and pseudowords. These findings are consistent with the view that fast coherent and rhythmic activation of large neuronal assemblies takes place with words but not pseudowords. KEYWORDS: brain theory, cell assembly, cognition, event related potentials (ERP), electroencephalography (EEG), gamma band, Hebb, language, lexical processing, magnetoencephalography (MEG), psychophysiology, periodicity, power spectral analysis, synchrony The article is retrievable from the following sites: ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1994.volume.5 http://info.cern.ch/hypertext/DataSources/bySubject/Psychology/Psycoloquy.html http://www.princeton.edu/~harnad/ gopher://gopher.cic.net/11/e-serials/alphabetic/p/psycoloquy gopher://gopher.lib.virginia.edu/11/alpha/psyc gopher://wachau.ai.univie.ac.at/11/archives/Psycoloquy The filename is: psycoloquy.94.5.48.brain-rhythms.1.pulvermueller To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (your password is your own userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") and then change directories with: cd pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get psycoloquy.94.5.48.brain-rhythms.1.pulvermueller or mget *pulv* When you have the file(s) you want, type: quit ---------------------------------------------------------------- The file is also retrievable through archie, gopher, and World-Wide Web using the URLs (Universal Resource Locators): http://info.cern.ch/hypertext/DataSources/bySubject/Psychology/Psycoloquy.html ftp://princeton.edu/pub/harnad/Psycoloquy or gopher://gopher.princeton.edu/1ftp%3aprinceton.edu%40/pub/harnad/BBS/ ---------------------------------------------------------------- Certain non-Unix/Internet sites have a facility you can use that is equivalent to the above. Sometimes the procedure for connecting to princeton.edu will be a two step process such as: ftp followed at the prompt by: open princeton.edu or open 128.112.128.1 In case of doubt or difficulty, consult your system manager. ------------------------------------------------------------------ Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers -- ftpmail at decwrl.dec.com and bitftp at pucc.bitnet -- that will do the transfer for you. Send either one the one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you).
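For readers retrieving many such files, the manual ftp dialogue above translates directly into a short script. Below is a minimal sketch in Python using the standard ftplib module (Python and ftplib are assumptions of this illustration, not part of the original retrieval instructions); the host, directory, and filename are taken from the URLs listed above:

    # Minimal anonymous-ftp retrieval, mirroring the manual steps above.
    from ftplib import FTP

    fname = "psycoloquy.94.5.48.brain-rhythms.1.pulvermueller"
    ftp = FTP("princeton.edu")                    # 'ftp princeton.edu'
    ftp.login("anonymous", "yourlogin@yourhost")  # anonymous login, e-mail as password
    ftp.cwd("pub/harnad/Psycoloquy")              # directory from the ftp:// URL above
    with open(fname, "wb") as f:
        ftp.retrbinary("RETR " + fname, f.write)  # binary transfer, as with 'binary' + 'get'
    ftp.quit()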
-------------------------------------------------------------------------- INSTRUCTIONS FOR PSYCOLOQUY AUTHORS AND COMMENTATORS PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 40,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral science, cognitive science, neuroscience, social science, etc.). All contributions are refereed. Target article length should normally not exceed 500 lines [c. 4500 words]. Commentaries and responses should not exceed 200 lines [c. 1800 words]. All target articles, commentaries and responses must have (1) a short abstract (up to 100 words for target articles, shorter for commentaries and responses), (2) an indexable title, (3) the authors' full name(s) and institutional address(es). In addition, for target articles only: (4) 6-8 indexable keywords, (5) a separate statement of the authors' rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field? what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses). All paragraphs should be numbered in articles, commentaries and responses (see format of already published articles in the PSYCOLOQUY archive; line length should be < 80 characters, no hyphenation). It is strongly recommended that all figures be designed so as to be screen-readable ascii. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article. PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected. Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. 
However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY. Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu Anonymous ftp archive is DIRECTORY pub/harnad/Psycoloquy HOST princeton.edu

From miku at sedal.su.oz.au Tue Aug 23 03:02:45 1994 From: miku at sedal.su.oz.au (Michael Usher) Date: Tue, 23 Aug 1994 17:02:45 +1000 (EST) Subject: Sixth Australian Conference on Neural Networks Message-ID: <199408230702.RAA26903@sedal.sedal.su.OZ.AU> For those people putting the finishing touches to their submissions, the Author's Style Guidelines are now available via the World Wide Web. LaTeX style information can be found at URL: http://www.sedal.su.oz.au/acnn95 The conference programme and registration details will also be placed there, when they become available. Further queries about ACNN'95 should be directed to . Problems with the WWW server should be directed to me. Michael Usher -- Michael Usher Systems Administrator miku at sedal.su.OZ.AU Systems Engineering & Design Automation Lab (SEDAL) Tel: +61 2 692 4135 Department of Electrical Engineering, Building J03 Fax: +61 2 660 1228 University of Sydney, NSW 2006, AUSTRALIA

From shultz at hebb.psych.mcgill.ca Tue Aug 23 09:10:34 1994 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Tue, 23 Aug 94 09:10:34 EDT Subject: No subject Message-ID: <9408231310.AA19055@hebb.psych.mcgill.ca> Subject: Paper available: A connectionist model of the learning of personal pronouns in English. Date: 23 August '94 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/shultz.pronouns.ps.Z ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: A connectionist model of the learning of personal pronouns in English. (13 pages) Thomas R. Shultz, David Buckingham, & Yuriko Oshima-Takane Department of Psychology & McGill Cognitive Science Centre McGill University Montreal, Quebec, Canada H3A 1B1 shultz at psych.mcgill.ca Abstract Both experimental and observational psycholinguistic research has shown that children's acquisition of first and second person pronouns is affected by the opportunity to hear these pronouns used in speech not addressed to them. These effects were simulated with the cascade-correlation connectionist algorithm. The networks learned, in effect, to produce the pronouns "me" and "you" depending on identification of the speaker, addressee, and referent. Analysis of network performance and structure indicated that generalization to correct pronoun production was aided by listening to non-addressed speech and that persistent pronoun errors were created by listening to directly addressed speech. It was noted that explicit symbolic rule models would likely have difficulty simulating the pattern frequency effects common to the present simulations and to the natural language environment of the child. The paper has been published in S. J. Hanson, T. Petsche, M. Kearns, & R. L. Rivest (Eds.) (1994). Computational learning theory and natural learning systems, Vol. 2: Intersection between theory and experiment (pp. 347-362). Cambridge, MA: MIT Press. Instructions for ftp retrieval of this paper are given below.
If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get shultz.pronouns.ps.Z ftp> quit unix> uncompress shultz.pronouns.ps.Z Thanks to Jordan Pollack for maintaining this archive. Tom Shultz

From tap at cs.toronto.edu Tue Aug 23 16:18:48 1994 From: tap at cs.toronto.edu (Tony Plate) Date: Tue, 23 Aug 1994 16:18:48 -0400 Subject: Thesis available for ftp Message-ID: <94Aug23.161849edt.240@neuron.ai.toronto.edu> Ftp-host: ftp.cs.utoronto.ca Ftp-filename: /pub/tap/plate.thesis.2up.ps.Z Ftp-filename: /pub/tap/plate.thesis.ps.Z The following thesis is available for ftp. There are two versions: plate.thesis.ps.Z prints on 216 sheets of paper, and plate.thesis.2up.ps.Z prints on 108 sheets. The compressed files are around 750Kb each. Distributed Representations and Nested Compositional Structure by Tony A. Plate* Department of Computer Science, University of Toronto, 1994 A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy in the Graduate Department of Computer Science at the University of Toronto. Abstract Distributed representations are attractive for a number of reasons. They offer the possibility of representing concepts in a continuous space, they degrade gracefully with noise, and they can be processed in a parallel network of simple processing elements. However, the problem of representing nested structure in distributed representations has been for some time a prominent concern of both proponents and critics of connectionism (Fodor & Pylyshyn 1988; Smolensky 1990; Hinton 1990). The lack of connectionist representations for complex structure has held back progress in tackling higher-level cognitive tasks such as language understanding and reasoning. In this thesis I review connectionist representations and propose a method for the distributed representation of nested structure, which I call "Holographic Reduced Representations" (HRRs). HRRs provide an implementation of Hinton's (1990) "reduced descriptions". HRRs use circular convolution to associate atomic items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, and predicates can be represented in a fixed-width vector. These representations are items in their own right, and can be used in constructing compositional structures. The noisy reconstructions extracted from convolution memories can be cleaned up by using a separate associative memory that has good reconstructive properties. Circular convolution, which is the basic associative operator for HRRs, can be built into a recurrent neural network. The network can store and produce sequences. I show that neural network learning techniques can be used with circular convolution in order to learn representations for items and sequences. One of the attractions of connectionist representations of compositional structures is the possibility of computing without decomposing structures. I show that it is possible to use dot-product comparisons of HRRs for nested structures to estimate the analogical similarity of the structures. This demonstrates how the surface form of connectionist representations can reflect underlying structural similarity and alignment.
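The circular-convolution binding at the heart of HRRs is easy to experiment with. Here is a minimal sketch in Python with NumPy (both are assumptions of this illustration; the thesis itself does not prescribe a language): binding is circular convolution, computed via the FFT, and an approximate inverse is obtained by convolving with the involution of the cue, after which a clean-up memory would normally pick the best-matching stored item by dot product.

    import numpy as np

    n = 512                                   # dimensionality of the HRR vectors
    rng = np.random.default_rng(0)

    def vec():                                # random vector, elements ~ N(0, 1/n)
        return rng.normal(0.0, 1.0 / np.sqrt(n), n)

    def bind(a, b):                           # circular convolution via the FFT
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def inv(a):                               # involution: inv(a)[i] = a[-i mod n]
        return np.concatenate(([a[0]], a[:0:-1]))

    role, filler, other = vec(), vec(), vec()
    trace = bind(role, filler)                # fixed-width binding of a role/filler pair
    decoded = bind(trace, inv(role))          # noisy reconstruction of the filler
    # Clean-up: compare the reconstruction against an item memory by dot product.
    print(np.dot(decoded, filler), np.dot(decoded, other))  # the first value is much larger

Because the bound trace has the same width as its constituents, such traces can themselves be bound again, which is what makes the representation compositional.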
* New Address: Tony Plate, Department of Chemical Engineering University of British Columbia 2216 Main Mall, Vancouver, BC, Canada V6T 1Z4 tap at chml.ubc.ca

From d.rosen at ieee.org Wed Aug 24 10:47:12 1994 From: d.rosen at ieee.org (d.rosen@ieee.org) Date: Wed, 24 Aug 94 07:47:12 -0700 Subject: Software Engineer/Developer (N.Y.) [Neural Networks] Wanted Message-ID: <9408241445.AA14964@solstice> Full-time position for Software Engineer/Developer with recent Degree. Possible alternative: Part-time Experienced Software Engineer/Developer. Starting date: Immediate/September. A federally-funded research team at New York Medical College is applying neural networks and advanced probabilistic/statistical methods to improve the accuracy with which the stage (of advancement) of cancer cases can be evaluated -- an important factor in determining treatment. We seek a skilled developer to take primary responsibility for the design and implementation of our neural network software, which will be geared towards flexible experimental use in our fast-paced research program, as well as a simple GUI prototype of clinical production software. The successful candidate will work with us (medical, neural network, and statistical researchers) to plan the best path from our current C code to a more carefully-designed, extensible, OO approach, perhaps with partial rapid implementation in an interpreted language, and to evaluate the possible role of other available tools and libraries. It is expected that eventually, much of the resulting software will be freely distributed for use in many fields. Candidates should have demonstrably outstanding skills in designing object-oriented software, and in C++ (or both C and some OO language) development under the Unix[/X11] environment. Prefer knowledge of as many of the following as possible: neural nets, statistics, numerical / scientific computation, portable GUI, prototyping language (Python, Smalltalk, Perl, S-plus, ...), MS Windows, and Unix system administration. New York Medical College (NYMC) is located in the community of Valhalla, NY, just half an hour north of New York City. The position would be on-site, though doing some portion of the work remotely could perhaps be arranged. NYMC is the third-largest private medical university in the United States. It is an Equal Opportunity and Affirmative Action Institution. Currently we do not believe we could justify hiring an individual who is not already authorized to work in the U.S. If you are interested and qualified, please e-mail your resume to me as soon as possible (plain text preferred). -- David Rosen, PhD

From iiscorp at netcom.com Wed Aug 24 18:48:07 1994 From: iiscorp at netcom.com (IIS Corp) Date: Wed, 24 Aug 94 15:48:07 PDT Subject: Soft Computing Days in San Francisco Message-ID: Soft Computing Days in San Francisco Zadeh, Widrow, Koza, Ruspini, Stork, Whitley, Bezdek, Bonissone, and Berenji On Soft Computing: Fuzzy Logic, Neural Networks, and Genetic Algorithms Three short courses San Francisco, CA October 24-28, 1994 Traditional (hard) computing methods do not provide sufficient capabilities to develop and implement intelligent systems. Soft computing methods have proved to be important practical tools for building such systems. The following three courses, offered by Intelligent Inference Systems Corp., will focus on all major soft computing technologies: fuzzy logic, neural networks, genetic algorithms, and genetic programming. These courses may be taken either individually or in combination.
Course 1: Artificial Neural Networks (Oct. 24) Bernard Widrow and David Stork Course 2: Genetic Algorithms and Genetic Programming (Oct. 25) John Koza and Darrell Whitley Course 3: Fuzzy Logic Inference (Oct. 26-28) Lotfi Zadeh, Jim Bezdek, Enrique Ruspini, Piero Bonissone, and Hamid Berenji For further details on course topics and registration information, send an email to iiscorp at netcom.com or contact Intelligent Inference Systems Corp., Phone (415) 988-9934, Fax: (415) 988-9935. A detailed brochure will be sent to you as soon as possible.

From ken at phy.ucsf.edu Thu Aug 25 08:16:49 1994 From: ken at phy.ucsf.edu (ken@phy.ucsf.edu) Date: Thu, 25 Aug 1994 05:16:49 -0700 Subject: postdoctoral position available in computational neuroscience Message-ID: <9408251216.AA20136@coltrane.ucsf.EDU> Christof Schreiner works on the physiology of auditory cortex. His lab has defined several of the auditory parameters that seem to be mapped in auditory cortex. He has funding for a postdoctoral position for theoretical work aimed at understanding the mapping to auditory cortex and its possible consequences for representation of complex sounds. His description follows: The objective is to develop network models of the mammalian auditory cortex on the basis of a broad data base of physiological observations from our laboratory. Based on the spectral-temporal filter properties of neurons and their spatial distribution in the cortex, consequences for the cortical representation of complex signals (such as animal vocalizations and speech) shall be evaluated. Previous experience in signal processing and with self-organizing or other classes of neural networks is required. Christof and I are both members of the Keck Center for Integrative Neuroscience at UCSF, a group of 10 faculty working on systems neuroscience including Michael Stryker, Michael Merzenich, Steve Lisberger, Allison Doupe, Alan Basbaum, Roger Nicoll, Howard Fields, and Henry Ralston. The postdoc will work in the Keck Center, and both Christof and I will be available to work closely with them. DO NOT SEND YOUR APPLICATIONS TO ME!! Send applications to: Christof Schreiner Dept. of Otolaryngology Box 0732 UCSF 513 Parnassus SF, CA 94143-0732 email: chris at phy.ucsf.edu Please send a cv, and names, addresses and phone numbers of three individuals who can provide references for you. Copies of your publications would also be helpful. Ken Miller

From bishopc at helios.aston.ac.uk Thu Aug 25 07:18:00 1994 From: bishopc at helios.aston.ac.uk (bishopc) Date: Thu, 25 Aug 1994 11:18:00 +0000 Subject: NIPS Workshop - Inverse Problems Message-ID: <4310.9408251018@sun.aston.ac.uk> ------------------------------------------------------------------- NIPS*94 Workshop: DOING IT BACKWARDS: ------------------- NEURAL NETWORKS AND THE SOLUTION OF INVERSE PROBLEMS ---------------------------------------------------- Organizer: Chris M Bishop CALL FOR PRESENTATIONS ---------------------- Introduction: ------------ Many of the tasks for which neural networks are commonly used correspond to the solution of an `inverse' problem. Such tasks are characterized by the existence of a well-defined, deterministic `forward' problem which might, for instance, correspond to causality in a physical system. By contrast the inverse problem may be ill-posed, and may exhibit multiple solutions.
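A small numerical illustration of this ill-posedness (a hypothetical example, not material from the workshop): take the forward map y = x^2 on [-1, 1]. The inverse has two branches, +sqrt(y) and -sqrt(y), and any model trained by least squares to predict x from y converges towards the conditional mean of the two branches, which lies near zero and is not itself a valid solution. A minimal sketch in Python with NumPy:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1.0, 1.0, 10000)   # hidden causes
    y = x ** 2                          # well-defined, deterministic forward problem

    # The least-squares 'inverse' is the conditional mean E[x | y], estimated by binning:
    bins = np.linspace(0.0, 1.0, 21)
    idx = np.digitize(y, bins)
    cond_mean = np.array([x[idx == i].mean() for i in range(1, len(bins))])
    print(np.round(cond_mean, 3))       # all near 0: the branches +sqrt(y) and -sqrt(y) cancel

Models with multi-valued outputs (for example, mixtures over the branches) are one way around this averaging effect.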
Aims: ---- A wide range of different approaches have been developed to tackle inverse problems, and one of the main goals of the workshop is to contrast the way in which they address the underlying technical issues, and to identify key areas for future research. Format: ------ This will be a one-day workshop, and will involve short presentations, with ample time allowed for discussions. Contributions ------------- If you wish to make a contribution to this workshop, please e-mail a brief outline of your proposed presentation to c.m.bishop at aston.ac.uk by 5 September. Chris Bishop -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)21 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)21 333 6215 Dept. of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK --------------------------------------------------------------------

From oby at TechFak.Uni-Bielefeld.DE Thu Aug 25 13:13:10 1994 From: oby at TechFak.Uni-Bielefeld.DE (oby@TechFak.Uni-Bielefeld.DE) Date: Thu, 25 Aug 94 19:13:10 +0200 Subject: No subject Message-ID: <9408251713.AA02980@gaukler.TechFak.Uni-Bielefeld.DE> POSTDOCTORAL POSITION Postdoctoral position available beginning spring / summer 1995 in a project combining anatomy, physiology and theoretical modelling to understand the functional architecture of primate visual cortex. Funded by the Human Frontiers program, the position is initially for two years and the postdoctoral fellow will be expected to interact with members of the group in Germany, England, and USA. The candidate should have some experience with neural modeling and possess appropriate math and computer competency; the candidate should also have relevant experimental skills (eg anatomy or physiological unit recording) in the mammalian visual system. Applicants should send their CV, list of publications, a letter describing their interest in the position, and the name, address and phone number of two referees to either Prof. Jennifer Lund, Institute of Ophthalmology, 11-43 Bath St., London EC1V 9EL, UK, Phone (0)71-608-6864, E-mail smgxjsl at ucl.ac.uk; or Dr. Klaus Obermayer, Universitaet Bielefeld, Technische Fakultaet, Universitaetsstrasse 25, 33615 Bielefeld, Germany, Phone (0)521-106-6058, E-mail oby at techfak.uni-bielefeld.de.

From ccg at melissa.lanl.gov Thu Aug 25 18:23:12 1994 From: ccg at melissa.lanl.gov (Camilo Gomez) Date: Thu, 25 Aug 94 16:23:12 MDT Subject: positions in financial research Message-ID: <9408252223.AA00343@melissa.lanl.gov.demosdiscs> (PLEASE POST) ----------------------------------------------------------------------------------------- POSITIONS IN FINANCIAL RESEARCH AT LOS ALAMOS NATIONAL LABORATORY CENTER FOR NON-LINEAR STUDIES (Postdoctoral and Graduate Student) Positions involving research and development of financial models are available at Los Alamos National Laboratory. Depending on funding we will have a number of positions including: a) postdoctoral b) graduate student The successful candidates will be expected to work on projects involving research and development in the area of financial derivatives. These will involve both fixed-income and equity derivatives. Projects will focus on valuation problems for a number of these financial derivatives.
Exceptionally well qualified candidates with an interest in computational investigations of the above mentioned topics and expertise in one or more of the following or related areas are encouraged to apply: a) finance b) financial derivatives c) statistical analysis d) time series analysis e) neural net/pattern recognition f) emergent behavior systems g) parallel computing h) programming skills (C and C++ languages) Candidates may contact: M.F. Gomez CNLS, MS-B258 Los Alamos National Laboratory Los Alamos, NM 87544 frankie at cnls.lanl.gov for application material and questions. Please indicate in your initial inquiry that it is for a position in financial research and whether you are interested in a student or post-doctoral position. Los Alamos National Laboratory is an equal opportunity employer. -----------------------------------------------------------------------------------------

From shannon at cis.ohio-state.edu Thu Aug 25 15:02:52 1994 From: shannon at cis.ohio-state.edu (shannon roy campbell) Date: Thu, 25 Aug 1994 15:02:52 -0400 (EDT) Subject: paper available on synchrony and desynchrony in an oscillator network Message-ID: <199408251902.PAA27060@axon.cis.ohio-state.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename : /pub/neuroprose/campbell.wc_oscillators.ps.Z The file campbell.wc_oscillators.ps.Z (34 pages) is now available for copying from the Neuroprose repository. Contact shannon at cis.ohio-state.edu for retrieval problems. Synchronization and Desynchronization in a Network of Locally Coupled Wilson-Cowan Oscillators by Shannon Campbell and DeLiang Wang~ Dept. of Physics ~Dept. of Computer and Information Science Ohio State University Columbus, Ohio 43210-1277 Abstract - A network of Wilson-Cowan oscillators is constructed, and its emergent properties of synchronization and desynchronization are investigated by both computer simulation and formal analysis. The network is a two-dimensional matrix where each oscillator is coupled only to its neighbors. We show analytically that a chain of locally coupled oscillators (the piecewise linear approximation to the Wilson-Cowan oscillator) synchronizes, and present a technique to rapidly entrain finite numbers of oscillators. The coupling strengths change on a fast time scale based on a Hebbian rule. A global separator is introduced which receives input from and sends feedback to each oscillator in the matrix. The global separator is used to desynchronize different oscillator groups. Unlike many other models, the properties of this network emerge from local connections that preserve spatial relationships among components and are critical for encoding Gestalt principles of feature grouping. The ability to synchronize and desynchronize oscillator groups within this network offers a promising approach for pattern segmentation and figure/ground segregation based on oscillatory correlation. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get campbell.wc_oscillators.ps.Z ftp> quit unix> uncompress campbell.wc_oscillators.ps.Z

From shannon at cis.ohio-state.edu Fri Aug 26 10:25:48 1994 From: shannon at cis.ohio-state.edu (shannon roy campbell) Date: Fri, 26 Aug 1994 10:25:48 -0400 (EDT) Subject: correct reference for paper on synchrony and desynchrony...
Message-ID: <199408261425.KAA27616@axon.cis.ohio-state.edu> Sorry that I did not specify the proper reference to a previously announced paper in neuroprose ("Synchrony and desynchrony in a network of Wilson-Cowan oscillators"). The paper is a technical report from The Department of Computer and Information Science, The Ohio State University, numbered "OSU-CISRC-8/94-TR43". Thanks for your attention. -shannon campbell

From kreider at bechtel.Colorado.EDU Sat Aug 27 16:21:38 1994 From: kreider at bechtel.Colorado.EDU (Jan Kreider) Date: Sat, 27 Aug 1994 14:21:38 -0600 Subject: System identification competition announcement Message-ID: <199408272021.OAA10216@bechtel.Colorado.EDU> Announcement of ***System Identification Competition*** Benchmark tests for estimation methods of thermal characteristics of buildings and building components. *Objective The objective of the benchmark is to set up a comparison between alternative techniques and to clarify particular problems of system identification applied to the thermal performance of buildings. *Organisation J. Bloem, U. Norlen, EC - Joint Research Centre, Ispra, Italy H. Madsen, H. Melgaard, IMM TU of Denmark, Lyngby, Denmark J. Kreider, JCEM, University of Colorado, U.S. *Active period of the competition: July 1, 1994 - December 31, 1994 *Introduction A wide range of system identification techniques is now being applied to the analysis problems involved with estimation of thermal properties of buildings and building components. Similar problems arise in most observational disciplines, including physics, biology, and economics. New commercially available software tools and special purpose computer programs promise to provide results that were unobtainable just a decade ago. Unfortunately, the realisation and evaluation of this promise have been hampered by the difficulty of making rigorous comparisons between competing techniques, particularly ones that come from different disciplines. This competition has been organised to help clarify the conflicting claims among many researchers who use and analyse building energy data and to foster contact among these persons and their institutions. The intent is not necessarily only to declare winners but rather to set up a format in which rigorous evaluations of techniques can be made. Because there are natural measures of performance, a rank-ordering will be given. In all cases, however, the goal is to collect and analyse quantitative results in order to understand similarities and differences among the approaches. At the close of the competition the performance of the techniques submitted will be compared. Those with the best results will be asked to write a scientific paper and will be invited for a presentation of the paper. There will be no monetary prizes. A symposium at the JRC Ispra, Northern Italy, has been scheduled for the Autumn of 1995 to explore the results of the competition in formal papers. The competition, the overall results and papers on selected methods will be published by the organisers in a book. Research on energy savings in buildings can be divided into three major areas: 1) building components, 2) test cells and unoccupied buildings in real climate and 3) occupied buildings. Three competitions are planned along this line, of which the present competition, concerned with building components, will be the first. The present competition is concerned with wall components, with no solar radiation involved. Five different cases are provided for estimation and prediction.
Four cases have been designed with wall components in order to test parameter estimation methods. Prediction tests are also included. Some of the dependent variable values will be withheld from the data set in these cases. Contestants are free to submit results from any number of cases. If the outcome of this first competition is positive, a second competition is planned which concerns test cells and unoccupied buildings under real climate conditions (1995). A third competition concerns occupied buildings (1996). If there is sufficient interest, a network server may be set up to operate as an on-line archive of interesting data sets, programs, and comparisons among algorithms in the future. ***ACCESSING THE DATA The competition does not require advance registration; there are two ways to enter: 1. by normal mail. Simply request the data by sending a letter. The data are available on diskettes (3.5-in size) in ASCII, IBM-PC format (there is no charge for the data diskette). To receive the data, send the letter together with a self-addressed rigid envelope to: Joint Research Centre Institute of System Engineering and Informatics J.J. BLOEM, Building 45 I - 21020 Ispra (VA), ITALY 2. by E-mail. In that case just send a request for participation by E-mail to J. Kreider at the University of Colorado, Boulder, CO 80309-0428, U.S. at the following E-mail address: jkreider at vaxf.colorado.edu Information on how to obtain the necessary instructions and the required data series using FTP will be forwarded to you by E-mail. Instructions on submitting a return disk with the analysis of the cases will be included in a README file. The disk will also include an entry form that each participant will need to complete and submit along with the results. ***FOR MORE INFORMATION Further questions about the competition should be directed to one of the following organisers: Joint Research Centre Institute for Systems Engineering and Informatics J.J. BLOEM Building 45 I - 21020 ISPRA (VA), Italy tel: +39 332 789842/789145 fax: +39 332 789992 E-mail: hans.bloem at cen.jrc.it Joint Center for Energy Management J. KREIDER Campus Box 428 University of Colorado Boulder, CO 80309-0428, U.S. tel: +303 492 3915 fax: +303 492 7317 E-mail: jkreider at vaxf.colorado.edu End of Announcement SysId Competition.

From lautrup at connect.nbi.dk Mon Aug 29 15:54:23 1994 From: lautrup at connect.nbi.dk (Benny Lautrup) Date: Mon, 29 Aug 94 15:54:23 METDST Subject: no subject (file transmission) Message-ID: Subject: Paper available: Extremely Ill-posed Learning Date: August 29, 1994 FTP-host: connect.nbi.dk FTP-file: neuroprose/hansen.ill-posed.ps.Z ---------------------------------------------- The following paper is now available: Extremely Ill-posed Learning [14 pages] L.K. Hansen, B. Lautrup, I. Law, N. Moerch, and J. Thomsen CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark Abstract: Extremely ill-posed learning problems are common in image and spectral analysis. They are characterized by a vast number of highly correlated inputs, eg pixel or pin values, and a modest number of patterns, eg images or spectra. We show that it is possible to train neural networks to learn such patterns without using an excessive number of weights, and we devise a test to decide if new patterns should be included in the training set or whether they fall within the subspace already explored. The method is applied to the analysis of PET-images. Please do not reply directly to this message.
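The subspace test mentioned in the abstract can be illustrated in a few lines. With far more inputs than patterns, the training patterns span a low-dimensional subspace of input space; a new pattern whose projection residual with respect to that subspace is large carries genuinely new information, while one with a small residual falls within the region already explored. A minimal sketch in Python with NumPy (an illustration of the general idea only, not the authors' actual test):

    import numpy as np

    rng = np.random.default_rng(2)
    d, m = 4096, 20                     # many correlated inputs, few patterns
    X = rng.normal(size=(m, d))         # rows are the m training patterns

    Q, _ = np.linalg.qr(X.T)            # columns of Q span the training subspace

    def residual_fraction(p):
        # fraction of the pattern's norm lying outside the training subspace
        r = p - Q @ (Q.T @ p)
        return np.linalg.norm(r) / np.linalg.norm(p)

    inside = X.T @ rng.normal(size=m)   # a combination of known patterns
    outside = rng.normal(size=d)        # a genuinely new pattern
    print(residual_fraction(inside))    # ~0: falls within the subspace already explored
    print(residual_fraction(outside))   # ~1: worth adding to the training set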
----------------------------------------------- FTP-instructions: unix> ftp connect.nbi.dk (or 130.225.212.30) ftp> Name: anonymous ftp> Password: your e-mail address ftp> cd neuroprose ftp> binary ftp> get hansen.ill-posed.ps.Z ftp> quit unix> uncompress hansen.ill-posed.ps.Z ----------------------------------------------- Benny Lautrup, Computational Neural Network Center (CONNECT) Niels Bohr Institute Blegdamsvej 17 2100 Copenhagen Denmark Telephone: +45-3532-5200 Direct: +45-3532-5358 Fax: +45-3142-1016 e-mail: lautrup at connect.nbi.dk

From marshall at cs.unc.edu Mon Aug 29 12:03:50 1994 From: marshall at cs.unc.edu (Jonathan A. Marshall) Date: Mon, 29 Aug 1994 12:03:50 -0400 Subject: POSTDOC JOB (Neural net models of learning) Message-ID: <199408291603.MAA27461@marshall.cs.unc.edu> ********************************************************************** POSTDOCTORAL POSITION Available in September 1994. Focus of research is neural network models of learning, with a special emphasis on spatial tasks. Qualified Ph.D. applicants should send a cover letter indicating research experience, vitae, and names of three references to Dr. Nestor Schmajuk Department of Psychology Duke University Durham, NC 27706 e-mail: nestor at acpub.duke.edu **********************************************************************

From ccg at melissa.lanl.gov Tue Aug 30 11:01:01 1994 From: ccg at melissa.lanl.gov (Camilo Gomez) Date: Tue, 30 Aug 94 09:01:01 MDT Subject: positions in financial research (corrected e-mail) Message-ID: <9408301501.AA02500@melissa.lanl.gov.demosdiscs> (PLEASE POST) ----------------------------------------------------------------------------------------- POSITIONS IN FINANCIAL RESEARCH AT LOS ALAMOS NATIONAL LABORATORY CENTER FOR NON-LINEAR STUDIES (Postdoctoral and Graduate Student) Positions involving research and development of financial models are available at Los Alamos National Laboratory. Depending on funding we will have a number of positions including: a) postdoctoral b) graduate student The successful candidates will be expected to work on projects involving research and development in the area of financial derivatives. These will involve both fixed-income and equity derivatives. Projects will focus on valuation problems for a number of these financial derivatives. Exceptionally well qualified candidates with an interest in computational investigations of the above mentioned topics and expertise in one or more of the following or related areas are encouraged to apply: a) finance b) financial derivatives c) statistical analysis d) time series analysis e) neural net/pattern recognition f) emergent behavior systems g) parallel computing h) programming skills (C and C++ languages) Candidates may contact: M.F. Gomez CNLS, MS-B258 Los Alamos National Laboratory Los Alamos, NM 87544 frankie at goshawk.lanl.gov for application material and questions. Please indicate in your initial inquiry that it is for a position in financial research and whether you are interested in a student or post-doctoral position. Los Alamos National Laboratory is an equal opportunity employer.
-----------------------------------------------------------------------------------------

From prechelt at ira.uka.de Tue Aug 30 15:32:35 1994 From: prechelt at ira.uka.de (Lutz Prechelt) Date: Tue, 30 Aug 1994 21:32:35 +0200 Subject: Report on NN learning algorithm evaluation practices Message-ID: <"irafs2.ira.198:30.07.94.19.32.14"@ira.uka.de> A technical report titled A Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice is now available for anonymous ftp (binary mode!) from archive.cis.ohio-state.edu /pub/neuroprose/prechelt.eval.ps.Z The file is 70 Kb, the paper has 9 pages. This is the abstract: 113 journal articles about neural network learning algorithms published in 1993 and 1994 are examined for the amount of experimental evaluation they contain. Every third of them does not employ even a single realistic or real learning problem. Only 6% of all articles present results for more than one problem using real world data. Furthermore, one third of all articles does not present any quantitative comparison with a previously known algorithm. These results indicate that the quality of research in the area of neural network learning algorithms needs improvement. The publication standards should be raised and easily accessible collections of example problems should be built. Here is a bibtex entry for the report: @techreport{Prechelt94d, author = {Lutz Prechelt}, title = {A Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice}, institution = {Fakult\"at f\"ur Informatik, Universit\"at Karlsruhe}, year = {1994}, number = {19/94}, address = {D-76128 Karlsruhe, Germany}, month = aug, note = {anonymous ftp: /pub/papers/techreports/1994/1994-19.ps.Z on ftp.ira.uka.de}, } Comments are welcome. Lutz Lutz Prechelt (email: prechelt at ira.uka.de) | Whenever you Institut fuer Programmstrukturen und Datenorganisation | complicate things, Universitaet Karlsruhe; 76128 Karlsruhe; Germany | they get (Voice: ++49/721/608-4068, FAX: ++49/721/694092) | less simple.

From tgd at chert.CS.ORST.EDU Tue Aug 30 19:07:33 1994 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Tue, 30 Aug 94 16:07:33 PDT Subject: paper: Error Correcting Output Codes Message-ID: <9408302307.AA13478@edison.CS.ORST.EDU> The following paper is available at URL: "ftp://ftp.cs.orst.edu/pub/tgd/papers/tr-ecoc.ps.gz" Solving Multiclass Learning Problems via Error-Correcting Output Codes Thomas G. Dietterich tgd at cs.orst.edu Department of Computer Science, 303 Dearborn Hall Oregon State University Corvallis, OR 97331 USA Ghulum Bakiri Department of Computer Science University of Bahrain Isa Town, Bahrain Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k>2 values (i.e., k "classes"). The definition is acquired by studying large collections of training examples of the form <x, f(x)>. Existing approaches to multiclass learning problems include (a) direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, (b) application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and (c) application of binary concept learning algorithms with distributed output representations such as those employed by Sejnowski and Rosenberg in the NETtalk system. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation.
We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that--like the other methods--the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems. Thomas G. Dietterich Voice: 503-737-5559 Department of Computer Science FAX: 503-737-3014 Dearborn Hall, 303 URL: http://www.cs.orst.edu/~tgd/index.html Oregon State University Corvallis, OR 97331-3102 From georgiou at wiley.csusb.edu Wed Aug 31 14:00:01 1994 From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu) Date: Thu, 1 Sep 1994 02:00:01 +0800 Subject: JCIS: Last Call for Papers Message-ID: <9409010900.AA04051@orion.csci.csusb.edu> JOINT CONFERENCE ON INFORMATION SCIENCES Last Call for Papers (For submission of Neural Networks papers please see end of message for address. Deadline: September 10, 1994) ORGANIZERS Honorary Chairs Lotfi A. Zadeh & Azriel Rosenfeld Managing Chair of the Joint Conferences Paul P. Wang Dept. of Electrical Engineering Duke University Durham, NC 27708-0291 Tel: (919) 660 5271, 660-5259 Fax: (919) 660-5293, 684-4860 e-mail: ppw at ee.duke.edu Advisory Board Nick Baker Earl Dowell Erol Gelenbe Stephen Grossberg Kaoru Hirota Abe Kandel George Klir Teuvo Kohonen Tosiyasu L. Kunii Jin-Cherng Lin E. Mamdani Xuan-Zhong Ni C.V. Ramamoorthy John E.R. Staddon Masaki Togai Victor Van Beuren Max Woodbury Stephen S. Yau Lotfi Zadeh H. Zimmerman Keynote Speakers Lotfi A. Zadeh & Stephen Grossberg Plenary Speakers Suguru Arimoto Dennis Bahler James Bowen Abe Kandel George Klir Philippe Smets John R. Rice I.B. Turksen Benjamin Wah Stephen S. Yau General Information The JOINT CONFERENCE ON INFORMATION SCIENCES consists of two international conferences and one workshop. All interested attendees including researchers, organizers, speakers, exhibitors, students and other participants should register either in Plan A: 3rd International Conference on Fuzzy Theory and Technology or Plan B: First International Conference on Computer Theory & Informatics and Workshop on Mobile Computing Systems. Any participant can attend all the keynote speeches, plenary sessions, all parallel sessions and exhibits. The only difference is that all authors registered in Plan A will participate in the Lotfi A. Zadeh Best Paper Competition. Plan B will have no best paper competition, at least for this year. In addition, each plan will publish its own proceedings. Tutorials Session A: Fuzzy Theory & Technology (Sunday, November 13, 1994, 8:30 am - 12:30 pm) 1. George Klir .................................Fuzzy Set and Logic 2. I. B. Turksen ................................Fuzzy Expert Systems 3. Jack Aldridge .......................................Fuzzy Control 4. Marcus Thint .......................Fuzzy Logic and NN Integration Session B: Computers (Sunday, November 13, 1994, 1:30 pm - 6:30 pm) 1. Richard Palmer ....................................Neural Network 2. Frank Y. Shih ................................Pattern Recognition 3.
Patrick Wang ......Intelligent Pattern Recognition & Applications 4., 5. Ken W. White...............Success with Machine Vision I & II Time Schedule & Venue Tutorials ....................November 13, 1994 o 8:30 am - 6:30 pm Conferences .............November 14 - 16, 1994 o 8:30 am - 5:30 pm Venue........Pinehurst Resort & Country Club, Pinehurst, NC, U.S.A. PARTICIPATION PLAN A: 3rd Annual Conference on Fuzzy Theory and Technology PROGRAM COMMITTEE Jack Aldridge Suguru Arimoto W Bandler P Bonnisone Bruno Bosacchi B Bouchon-Meunier J Buckley Dev Garg Rhett George George Georgiou I R Goodman Siegfried Gottwald Silvia Guiasu M M. Gupta Ralph Horvath D L Hung Timothy Jacobs Y.K Jani Joaquim Jorge Paul Kainen S.C. Kak Abe Kandel P Klement L J Kohout Vladik Kreinovich N Kuroki Reza Langari Harry Hua Li Don Meyers C K Mitchell John Mordeson Akira Nakamura Kyung Whan Oh Maria Orlowska Robert M. Pap Arthur Ramer Elie Sanchez B Scwhott Shouyoe Shao Sujeet Shenoi Frank Shih H Allison Smith L M Sztandera Alade Tokuta R Tong I Turksen Guo Jun Wang Tom Whalen Edward K Wong T Yamakawa The conference will consist of both plenary sessions and contributory sessions, focusing on topics of critical interest and the direction of future research. Example topics include, but are not limited to, the following: TOPICS: 3rd ANNUAL CONFERENCE ON FUZZY THEORY AND TECHNOLOGY o Fuzzy Mathematics o Basic Principles and Foundations of Fuzzy Logic o Qualitative and Approximate-Reasoning Modeling o Hardware Implementations of Fuzzy Logic Algorithms o Design, Analysis, and Synthesis of Fuzzy Logic Controllers o Learning and Acquisition of Approximate Models o Fuzzy Expert Systems o Neural Network Architectures o Artificially Intelligent Neural Networks o Artificial Life o Associative Memory o Computational Intelligence o Cognitive Science o Fuzzy Neural Systems o Relations between Fuzzy Logic and Neural Networks o Theory of Evolutionary Computation o Efficiency/robustness comparisons with other direct search algorithms o Parallel computer applications o Integration of Fuzzy Logic and Evolutionary Computing o Comparisons between different variants of evolutionary algorithms o Evolutionary Computation for neural networks o Fuzzy logic in Evolutionary algorithms o Neurocognition o Neurodynamics o Optimization o Pattern Recognition o Learning and Memory o Machine Learning Applications o Implementations (electronic, optical, biochips) o Intelligent Control APPLICATIONS OF THE TOPICS: o Hybrid Systems o Image Processing o Image Understanding o Pattern Recognition o Robotics and Automation o Intelligent Vehicle and Highway Systems o Virtual Reality o Tactile Sensors o Machine Vision o Motion Analysis o Neurobiology o Sensation and Perception o Sensorimotor Systems o Speech, Hearing and Language o Signal Processing o Time Series Analysis o Prediction o System Identification o System Control o Intelligent Information Systems o Case-Based Reasoning o Decision Analysis o Databases and Information Retrieval o Dynamic Systems Modeling & Diagnosis o Electric & Nuclear Power Systems PARTICIPATION PLAN B: 1st Annual Computer Theory and Informatics Conference & Workshop on Mobile Computing Systems PROGRAM COMMITTEE: FIRST ANNUAL COMPUTER & INFORMATICS Rafi Ahmed R Alonso Suguru Arimoto B. Badrinath C.R. Baker Martin Boogaard O Bukhres P Chrysantis Eliseo Clementini E.M. Ehlers Ahmed Elmagarmid K Ferentinos Godfrey Mohamed Gouda Albert G.
Greenberg S Helal T Imielinski Subhash C Kak Abe Kandel Teuvo Kohonen Timo Koski Devendra Kumar Tosiyasu L Kunii Shahram Latifi Lin-shan Lee Mark Levene Jason Lin Mi Lu Yanni Manolopoulos Jorge Muruzabal Sham B Navathe Sigeru Omatu C V Ramamoorthy Hari N. Reddy John R Rice Abdellah Salhi Frank S. Shih Harpreet Singh Stanley Y W. Su Abdullah Uz Tansel Kishor Trivedi Millist Vincent Benjamin Wah Z.A Wahab Jun Wang Patrick Wang Edward K. Wong Lotfi A Zadeh TOPICS: 1st ANNUAL COMPUTER THEORY & INFORMATICS CONFERENCE The conference will consist of both plenary sessions and contributory sessions, focusing on topics of critical interest and the direction of future research. Example topics include, but are not limited to, the following: o Computational Theory: Coding theory, automata, information theory, modern algebraic theory, measure theory, probability and statistics, and numerical methods o Design of Algorithms: algorithmic complexity, theory of algorithms, design, analysis and evaluation of efficient algorithms for engineering applications (such as computer-aided design, computational fluid dynamics, computer graphics, and virtual reality), combinatorics, scheduling theory, discrete optimization, data compression, and approximation theory o Software Design: Formal languages, theory and design of optimizing compilers (especially those for parallel and supercomputers), object-oriented programming, database theory and data organization, software design methodology, program verification, and software reliability. o Computer systems and architectures: parallel and distributed computing systems, high speed computer networks, theory and data organization, software design methodology, program verification, and software reliability. o Evaluation methods and tools: Performance evaluation methods, visualization tools, and simulation theory and methodology. TOPICS: WORKSHOP ON MOBILE COMPUTING SYSTEMS The workshop will focus on system support for mobile information and data access, to recognize the role of mobile computer systems in today's business and scientific communities. The workshop will be organized to gather leading researchers and practitioners in the field. We shall focus on issues related but not limited to: o Architectures for Mobile Computing Systems o Mobile Computing Technology o Wireless Communications o User Interfaces in Palmtop Computing o Databases for Nomadic Computing o Transaction Models and Management o System Complexity, Integrity and Security o Legal/Social/Health Issues o Operating System Support o Battery Management o Nomadic Applications o Handheld Multimedia o Personal Communication Networks PROGRAM COMMITTEE: WORKSHOP ON MOBILE COMPUTING SYSTEMS R Alonso (Technology) B Badrinath (Query Processing) O Bukhres (Database Systems) P Chrysantis (Transactions) S Helal (Applications) T Imielinski (New Directions) Note: All Attendees must choose either Participation Plan A or Participation Plan B PUBLICATIONS The conference publishes two summary Proceedings: one entitled "Third International Conference on Fuzzy Theory and Technology" and the other entitled "First International Conference on Computer Theory and Informatics, and First Workshop on Mobile Computing Systems." Both proceedings will be made available on November 13, 1994. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text (1 page minimum), with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page.
Three copies of the summary are required by September 10, 1994. It is very important to mark "plan A" or "plan B" on your manuscript. The conference will make the choice for you if you forget to do so. The final version of the full-length paper must be submitted by November 14, 1994. Four (4) copies of the full-length paper shall be prepared according to the "Information for Authors" appearing at the back cover of Information Sciences, an International Journal (Elsevier Publishing Co.). A full paper shall not exceed 20 pages including figures and tables. All full papers will be reviewed by experts in their respective fields. Revised papers will be due on April 15, 1995. Accepted papers will appear in the hard-covered proceedings (book) with uniform typesetting to be published by a publisher (there will be two books published this year, one for each plan) or in the Information Sciences Journal (the INS journal now has three publications: Informatics and Computer Science, Intelligent Systems, Applications). All fully registered conference attendees will receive a copy of the proceedings (summary) on November 14, 1994; a free one-year subscription (paid by this conference) to the Information Sciences Journal - Applications; and, lastly, the right to purchase either or both hard-covered, deluxe, professional books at 1/2 price. The titles of the books are "Advances in Fuzzy Theory & Technology, Volume III" and "Advances in Computer Science and Informatics, Volume 1." Lotfi A. Zadeh "Best Paper Award" FT&T 1994 All technical papers submitted to FT & T, 1994 are automatically qualified as candidates for this award. The prize for this award is $2,500 plus hotel accommodations (traveling expenses excluded) at FT & T, 1995. The date for announcement of the best paper is May 30, 1995. Oral presentation in person at FT & T, 1994, is required and an acceptance speech at FT & T, 1995 is also required. The evaluation committee for FT & T, 1994 consists of the following members: Jack Aldridge B. Bouchon-Meunier George Klir I.R. Goodman John Mordeson Sujeet Shenoi H. Chris Tseng Frank Y. Shih Akira Nakamura Edward K. Wong I. B. Turksen. The selection of the top ten best papers will be decided by conference attendees and session chairs jointly. EXHIBITIONS JCIS '94 follows on last year's highly successful exhibits by some major publishers; most publishers, led by Elsevier, will return. Intelligent Machines, Inc. will demonstrate their highly successful new software "O'inca" - a FL-NN, Fuzzy-Neuro Design Framework. Dr. Ken W. White of Ithaca, N.Y. will demonstrate his visual-sense systems. Virtus, a virtual reality company based in Cary, North Carolina, has committed to participate. Negotiations are also underway with the UNCVR research laboratory for its participation. This conference intends to develop "virtual reality" as one of the themes to benefit all attendees. Interested potential contributors should contact Dr. Paul P. Wang or Dr. Rhett T. George. Interested vendors should contact: Rhett George, E.E. Dept., Duke University Telephone: (919) 660-5242 Fax: (919) 660-5293 rtg at ee.duke.edu TRAVEL ARRANGEMENTS The Travel Center of Durham, Inc. has been designated the official travel provider. Special domestic fares have been arranged and The Travel Center is prepared to book all flight travel. Domestic United States and Canada: 1-800-334-1085 International FAX: (919) 687-0903 HOTEL RESERVATIONS Pinehurst Resort & Country Club Pinehurst, North Carolina, U.S.A. This is the conference site and lodging.
Group Reservation Request designed specifically for our conference. Very special discount rates have been agreed upon. Daily Rates for Hotel: Single Occupancy - $122.00 Double Occupancy - $91.00 per person Daily Rates for Manor Inn: Single Occupancy - $108.00 Double Occupancy - $79.00/person (Rates are per person, per night and include accommodations, breakfast and dinner daily.) Pinehurst Resort encompasses an elegant historic hotel (registered with "Historic Hotels of America") with the best in accommodations, gourmet dining and modern meeting facilities. Our AAA Four Diamond and Mobil Four Star resort offers a wide range of activities including seven championship golf courses, tennis, water sports, croquet and sport shooting. Please contact: JACKIE HAYTER Associate Director of Sales Pinehurst Resort and Country Club Carolina Vista P.O. Box 4000 Pinehurst, NC 28374-4000 (919) 295-1339 - (919) 295-8484 1-800-659-GOLF SPONSORS Machine Intelligence and Fuzzy Logic Laboratory o Department of Electrical Engineering, Duke University o Elsevier Science Inc. New York, N.Y. PARTICIPANTS IFSA, International Fuzzy Systems Association o Institute of Information Science, Academia Sinica. JCIS '94 REGISTRATION FEES & INFO. Up to 9/15/94 After 9/15/94 Full Registration $275.00 $395.00 Student Registration $85.00 $160.00 Tutorial w/ Conf. Reg. $150.00 $200.00 Tutorial w/o Conf. Reg. $300.00 $500.00 Exhibit Booth Fee $400.00 $500.00 One Day Fee (no pre-reg. discount) $165.00 Full $80.00 Student The above fees are applicable to both Plan A & Plan B FULL CONFERENCE REGISTRATION: Includes admission to all sessions, exhibit area, coffee, tea and soda; a copy of the conference proceedings (summary) at the conference; and a one-year subscription to Information Sciences - Applications, An International Journal, published by Elsevier Publishing Co. In addition, the right to purchase the hard-cover deluxe books at 1/2 price. Banquets (November 14 & 15, 1994) are included through Hotel Registration. Tutorials are not included. STUDENT CONFERENCE REGISTRATION: For full-time students only. A letter from your department is required. You must present a current student ID with picture. A copy of the Conference Proceedings (Summary) is included. Admission to all sessions, exhibit area, coffee, tea and soda. The right to purchase the hard-cover deluxe books at 1/2 price. Free subscription to the INS Journal - Applications is not included. TUTORIALS REGISTRATION: Any person can register for the Tutorials. A copy of lecture notes for the course registered is included. Coffee, tea and soda are included. The summary and free subscription to the INS journal are, however, not included. The right to purchase hard-cover deluxe books is included. VARIOUS CONFERENCE CONTACTS: LOCAL INFORMATION Rhett T. George Dept. of Electrical Engineering Box 90291, Duke University, Durham, NC 27708-0291 e-mail: rtg at ee.duke.edu Tel. (919) 660-5228 TUTORIAL & CONFERENCE INFORMATION Paul P. Wang Kitahiro Kaneda e-mail: ppw at ee.duke.edu e-mail: hiro at ee.duke.edu Tel. (919) 660-5271, 660-5259 Tel. (919) 660-5233 Jerry C.Y. Tyan e-mail: ctyan at ee.duke.edu Tel. (919) 660-5233 ---------------------------------------------------------------- Neural Networks papers: Send summaries to George M. Georgiou Computer Science Department TEL: (909) 880-5332 California State University FAX: (909) 880-7004 5500 University Pkwy San Bernardino, CA 92407, USA georgiou at silicon.csci.csusb.edu Deadline: September 10, 1994 TeX/LaTeX or PostScript by email is fine.
--------------------------------------------------------------------
From tesauro at watson.ibm.com Mon Aug 1 13:50:48 1994 From: tesauro at watson.ibm.com (tesauro@watson.ibm.com) Date: Mon, 1 Aug 94 13:50:48 EDT Subject: NIPS*94 Registration Message-ID: Registration for NIPS*94 is now open. A registration brochure is available on-line via the NIPS*94 Mosaic homepage (http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html), or by anonymous FTP: FTP site: mines.colorado.edu (138.67.1.3) FTP file: /pub/nips94/nips94-registration-brochure.ps The brochure contains the registration form and describes the program highlights, including the list of invited speakers and tutorial speakers. People without access to FTP or Mosaic may request a copy of the brochure by e-mail to "nips94 at mines.colorado.edu" or by physical mail to: NIPS*94 Registration Dept. of Mathematical and Computer Sciences Colorado School of Mines Golden, CO 80401 USA -- Gerry Tesauro NIPS*94 General Chair From plaut at cmu.edu Tue Aug 2 10:14:54 1994 From: plaut at cmu.edu (David Plaut) Date: Tue, 02 Aug 1994 10:14:54 -0400 Subject: TR: Understanding Normal and Impaired Word Reading Message-ID: <7478.775836894@crab.psy.cmu.edu> FTP-host: hydra.psy.cmu.edu [128.2.248.152] FTP-file: pub/pdp.cns/pdp.cns.94.5.ps.Z [78 pages; 353Kb compressed; 924Kb uncompressed] For those who do not have FTP access, physical copies can be requested from Barbara Dorney . ============================================================================ Understanding Normal and Impaired Word Reading: Computational Principles in Quasi-Regular Domains David C. Plaut James L. McClelland Carnegie Mellon University Carnegie Mellon University Mark S. Seidenberg Karalyn E. Patterson University of Southern California MRC Applied Psychology Unit Technical Report PDP.CNS.94.5 July 1994 We develop a connectionist approach to processing in quasi-regular domains, as exemplified by English word reading. A consideration of the shortcomings of a previous implementation (Seidenberg & McClelland, 1989, Psych. Rev.) in reading nonwords leads to the development of orthographic and phonological representations that capture better the relevant structure among the written and spoken forms of words. In a number of simulation experiments, networks using the new representations learn to read both regular and exception words, including low-frequency exception words, and yet are still able to read pronounceable nonwords as well as skilled readers.
A mathematical analysis of the effects of word frequency and spelling-sound consistency in a related but simpler system serves to clarify the close relationship of these factors in influencing naming latencies. These insights are verified in subsequent simulations, including an attractor network that reproduces the naming latency data directly in its time to settle on a response. Further analyses of the network's ability to reproduce data on impaired reading in surface dyslexia support a view of the reading system that incorporates a graded division-of-labor between semantic and phonological processes. Such a view is consistent with the more general Seidenberg and McClelland framework and has some similarities with---but also important differences from---the standard dual-route account. =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= David Plaut plaut at cmu.edu "Doubt is not a pleasant Department of Psychology 412/268-5145 condition, but certainty Carnegie Mellon University 412/268-5060 (FAX) is an absurd one." Pittsburgh, PA 15213-3890 345H Baker Hall --Voltaire =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From john at dcs.rhbnc.ac.uk Tue Aug 2 11:12:04 1994 From: john at dcs.rhbnc.ac.uk (john@dcs.rhbnc.ac.uk) Date: Tue, 02 Aug 94 16:12:04 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <8129.9408021512@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): --------------------------------------- NeuroCOLT Technical Report NC-TR-94-5: --------------------------------------- A Weak Version of the Blum, Shub \& Smale Model by Pascal Koiran Abstract: We propose a weak version of the Blum-Shub-Smale model of computation over the real numbers. In this weak model only a ``moderate" usage of multiplications and divisions is allowed. The class of boolean languages recognizable in polynomial time is shown to be the complexity class P/poly. The main tool is a result on the existence of small rational points in semi-algebraic sets which is of independent interest. As an application, we generalize recent results of Siegelmann \& Sontag on recurrent neural networks, and of Maass on feedforward nets. A preliminary version of this paper was presented at the 1993 IEEE Symposium on Foundations of Computer Science. Additional results include: \begin{itemize} \item an efficient simulation of order-free real Turing machines by probabilistic Turing machines in the full Blum-Shub-Smale model; \item an efficient simulation of arithmetic circuits over the integers by boolean circuits; \item the strict inclusion of the real polynomial hierarchy in weak exponential time. \end{itemize} ------------------------ The Report NC-TR-94-5 can be accessed and printed as follows % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-94-5.ps.Z ftp> bye % zcat nc-tr-94-5.ps.Z | lpr -l Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. 
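The anonymous-ftp recipe above follows the same pattern as every retrieval recipe in this digest: connect, log in as "anonymous" with your e-mail address as the password, switch to binary mode, fetch the compressed PostScript, then uncompress it. For readers who prefer a scripted version, here is a minimal sketch using Python's standard ftplib; the host, directory and filename are simply the ones quoted in the instructions above, and an external uncompress tool is assumed to be on the path:

import ftplib
import subprocess

HOST = "cscx.cs.rhbnc.ac.uk"           # host named in the instructions above
PATH = "pub/neurocolt/tech_reports"    # report directory from the instructions
NAME = "nc-tr-94-5.ps.Z"               # compressed PostScript file to fetch

ftp = ftplib.FTP(HOST)
ftp.login("anonymous", "your_userid@your.site")   # e-mail address as password
ftp.cwd(PATH)
with open(NAME, "wb") as out:                     # binary-mode transfer
    ftp.retrbinary("RETR " + NAME, out.write)
ftp.quit()

# .Z is Unix compress (LZW) format, which Python's gzip module cannot read,
# so shell out to the same uncompress step the instructions give.
subprocess.run(["uncompress", NAME], check=True)

Substituting host, directory and filename makes the same sketch work for any of the archives announced in this digest.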
Best wishes John Shawe-Taylor From sutton at gte.com Tue Aug 2 15:54:04 1994 From: sutton at gte.com (Rich Sutton) Date: Tue, 2 Aug 1994 14:54:04 -0500 Subject: On Step-Size and Bias in TD Learning (Paper Available) Message-ID: <199408021849.AA05405@ns.gte.com> The following paper appeared in the proceedings of the 1994 Yale Workshop on Adaptive and Learning Systems. ftp instructions follow. ON STEP-SIZE AND BIAS IN TEMPORAL-DIFFERENCE LEARNING by Richard S. Sutton and Satinder P. Singh (MIT) We present results for three new algorithms for setting the step-size parameters, alpha and lambda, of temporal-difference learning methods such as TD(lambda). The overall task is that of learning to predict the outcome of an unknown Markov chain based on repeated observations of its state trajectories. The new algorithms select step-size parameters online in such a way as to eliminate the bias normally inherent in temporal-difference methods. We compare our algorithms with conventional Monte Carlo methods. Monte Carlo methods have a natural way of setting the step size: for each state s they use a step size of 1/n(s), where n(s) is the number of times state s has been visited. We seek and come close to achieving comparable step-size algorithms for TD(lambda). One new algorithm uses a lambda=1/n(s) schedule to achieve the same effect as processing a state backwards with TD(0), but remains completely incremental. Another algorithm uses a lambda at each time equal to the estimated transition probability of the current transition. We present empirical results showing improvement in convergence rate over Monte Carlo methods and conventional TD(lambda). A limitation of our results at present is that they apply only to tasks whose state trajectories do not contain cycles. ================================================================ unix> ftp envy.cs.umass.edu Name: anonymous Password: [your ID] ftp> cd pub/singh ftp> binary ftp> get Yale94.ps.Z ftp> bye unix> uncompress Yale94.ps.Z unix> [your command to print PostScript file] Yale94.ps =============================================================== The paper is six pages long. The file is about 130K. FTP-host: envy.cs.umass.edu FTP-filename: /pub/singh/Yale94.ps.Z From marwan at sedal.su.oz.au Tue Aug 2 23:12:08 1994 From: marwan at sedal.su.oz.au (Marwan Jabri) Date: Wed, 3 Aug 1994 13:12:08 +1000 Subject: paper available: Practical Performance and Credit Assignment... Message-ID: <199408030312.NAA28697@sedal.sedal.su.OZ.AU> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/jabri.ppap.ps.Z The file jabri.ppap.ps.Z is now available for copying from the Neuroprose repository and has been submitted for publication: Practical Performance and Credit Assignment Efficiency of Analog Multi-layer Perceptron Perturbation Based Training Algorithms Marwan A. Jabri Systems Engineering and Design Automation Laboratory Sydney University Electrical Engineering NSW 2006 Australia marwan at sedal.su.oz.au SEDAL Technical Report 1-7-94 Abstract Many algorithms have recently been reported for the training of analog multi-layer perceptrons. Most of these algorithms were evaluated from either a computational or a simulation viewpoint. This paper applies several of these algorithms to the training of an analog multi-layer perceptron chip. The advantages and shortcomings of these algorithms in terms of training and generalisation performance and their capabilities in a limited precision environment are discussed.
Extensive experiments demonstrate that a trade-off exists between the parallelisation of perturbations and the efficiency of credit assignment. Two semi-parallelisation heuristics are presented and are shown to provide advantages in terms of efficient exploration of the solution space and fewer credit assignment confusions. Retrieve this paper by anonymous ftp from: archive.cis.ohio-state.edu (128.146.8.52) in the /pub/neuroprose directory The name of the paper in this archive is: jabri.ppap.ps.Z [28 pages] Sorry, but no hard copies available. From niranjan at eng.cam.ac.uk Wed Aug 3 15:07:20 1994 From: niranjan at eng.cam.ac.uk (niranjan@eng.cam.ac.uk) Date: Wed, 3 Aug 94 15:07:20 BST Subject: Post Doc job in Cambridge Message-ID: <18000.9408031407@tulip.eng.cam.ac.uk> ---------------------- JOB JOB JOB JOB ------------------------- University of Cambridge, Department of Obstetrics & Gynaecology Post-Doctoral Research Assistant Neural Networks in the Quality Assurance in Maternity Care (QAMC) Under the terms of a grant recently awarded to the QAMC project by the Commission of the European Communities (CEC), we expect that we will soon be able to offer a Post-Doctoral Research Position in Cambridge for the above investigation. From g.gaskell at psychology.bbk.ac.uk Thu Aug 4 14:17:00 1994 From: g.gaskell at psychology.bbk.ac.uk (Gareth) Date: Thu, 4 Aug 94 14:17 BST Subject: Thesis: Speech Perception & Connectionism Message-ID: (Sorry - ignore the previous message, I sent the wrong file!) FTP-host: archive.cis.ohio-state.edu (128.146.8.52) FTP-filename: /pub/neuroprose/Thesis/gaskell.thesis.ps.Z A new PhD thesis (150 pages) is now available in the neuroprose archive. The thesis examines the role of phonological variation in human speech perception using both experimental and connectionist techniques. Abstract: The research reported in this thesis examines issues of word recognition in human speech perception. The main aim of the research is to assess the effect of regular variation in speech on lexical access. In particular, the effect of a type of neutralising phonological variation, assimilation of place of articulation, is examined. This variation occurs regressively across word boundaries in connected speech, altering the surface phonetic form of the underlying words. Two methods of investigation are used to explore this issue. Firstly, experiments using cross-modal priming and phoneme monitoring techniques are used to examine the effect of variation on the matching process between speech input and lexical form. Secondly, simulated experiments are performed using two computational models of speech recognition: TRACE (McClelland & Elman, 1986) and a simple recurrent network. The priming experiments show that the mismatching effects of a phonological change on the word-recognition process depend on their viability, as defined by phonological constraints. This implies that speech perception involves a process of context-dependent inference that recovers the abstract underlying representation of speech. Simulations of these and other experiments are then reported using a simple recurrent network model of speech perception. The model accommodates the results of the priming studies and predicts that similar phonological context effects will occur in non-words. Two phoneme monitoring studies support this prediction, but also show interaction between lexical status and viability, implying that phonological inference relies on both lexical and phonological constraints.
A revision of the network model is proposed which learns the mapping from the surface form of speech to semantic and phonological representations. To retrieve the file: ftp archive.cis.ohio-state.edu login: anonymous password: ftp> cd /pub/neuroprose/Thesis ftp> binary ftp> get gaskell.thesis.ps.Z ftp> bye uncompress gaskell.thesis.ps.Z lpr gaskell.thesis.ps [or whatever you normally do to print] Gareth Gaskell Centre for Speech and Language, Birkbeck College, London, g.gaskell at psyc.bbk.ac.uk From giles at research.nj.nec.com Thu Aug 4 18:00:26 1994 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 4 Aug 94 18:00:26 EDT Subject: Reprint: LEARNING A CLASS OF LARGE FINITE STATE MACHINES WITH A RECURRENT NEURAL NETWORK Message-ID: <9408042200.AA18471@fuzzy> The following reprint is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: _____________________________________________________________________________ Learning a Class of Large Finite State Machines with a Recurrent Neural Network UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-94-94 AND CS-TR-3328 C. L. Giles[1,2], B. G. Horne[1], T. Lin[1,3] [1] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [2] UMIACS, University of Maryland, College Park, MD 20742 [3] EE Department, Princeton University, Princeton, NJ 08540 {giles,horne,lin}@research.nj.nec.com One of the issues in any learning model is how it scales with problem size. Neural networks have not been immune to scaling issues. We show that a dynamically-driven discrete-time recurrent network (DRNN) can learn rather large grammatical inference problems when the strings of a finite memory machine (FMM) are encoded as temporal sequences. FMMs are a subclass of finite state machines which have a finite memory or a finite order of inputs and outputs. The DRNN that learns the FMM is a neural network that maps directly from the sequential machine implementation of the FMM. It has feedback only from the output and not from any hidden units; an example is the recurrent network of Narendra and Parthasarathy. (FMMs that have zero order in the feedback of outputs are called definite memory machines and are analogous to Time-delay or Finite Impulse Response neural networks.) Due to their topology these DRNNs are at least as powerful as any sequential machine implementation of a FMM and should be capable of representing any FMM. We choose to learn ``particular'' FMMs: specifically, these FMMs have a large number of states (simulations are for 256 and 512 state FMMs) but have minimal order, relatively small depth and little logic when the FMM is implemented as a sequential machine. Simulations for the number of training examples versus generalization performance and FMM extraction size show that the number of training samples necessary for perfect generalization is less than that necessary to completely characterize the FMM to be learned. This is in a sense a best-case learning problem since any arbitrarily chosen FMM with a minimal number of states would have much more order and string depth and most likely require more logic in its sequential machine implementation.
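As an aside for readers unfamiliar with finite memory machines: the defining property used above is that the next output depends only on the last n inputs and the last m outputs, which is exactly the input/output structure the DRNN's output feedback mirrors. The toy sketch below is not the authors' network, just an illustration of such a machine over binary strings (the window sizes and the example logic are arbitrary illustrative choices):

from collections import deque

def run_fmm(inputs, logic, n=3, m=2):
    # Drive a finite memory machine of input order n and output order m:
    # each output is a function only of the last n inputs and m outputs.
    in_hist = deque([0] * n, maxlen=n)     # sliding window of recent inputs
    out_hist = deque([0] * m, maxlen=m)    # sliding window of fed-back outputs
    outputs = []
    for x in inputs:
        in_hist.append(x)
        y = logic(tuple(in_hist), tuple(out_hist))
        out_hist.append(y)
        outputs.append(y)
    return outputs

# Example machine: XOR of the newest input, the oldest windowed input,
# and the previous output -- a function of the finite windows alone.
parity_like = lambda ins, outs: ins[-1] ^ ins[0] ^ outs[-1]
print(run_fmm([1, 0, 1, 1, 0, 1], parity_like))

Replacing the hand-written logic function with a feedforward network over the same two windows gives the Narendra-Parthasarathy style output-feedback architecture the abstract describes.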
-------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp cs.umd.edu (128.8.128.8) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/papers/TRs ftp> binary ftp> get 3328.ps.Z ftp> quit unix> uncompress 3328.ps.Z OR unix> ftp external.nj.nec.com (138.15.10.100) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get large.fsm.ps.Z ftp> quit unix> uncompress large.fsm.ps.Z -------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 == From john at dcs.rhbnc.ac.uk Fri Aug 5 08:00:06 1994 From: john at dcs.rhbnc.ac.uk (john@dcs.rhbnc.ac.uk) Date: Fri, 05 Aug 94 13:00:06 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <10057.9408051200@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): Many apologies that the previous technical report NC-TR-94-5 was not available when first announced. It is now installed. I have also installed an ascii file entitled `abstracts' which holds an up-to-date list of the Technical Reports with their abstracts. A new technical report is also now available: --------------------------------------- NeuroCOLT Technical Report NC-TR-94-7: --------------------------------------- On the power of real Turing machines over binary inputs by Felipe Cucker, Universitat Pompeu Fabra, Balmes 132, Barcelona 08008, SPAIN and Dima Grigoriev, Depts. of Comp. Science and Maths, Penn State University, University Park, PA 16802, USA Abstract: In recent years the study of the complexity of computational problems involving real numbers has been a growing research area. Blum, Shub and Smale (1989) proposed a computational model ---the real Turing machine--- for dealing with such problems. The aim of this paper is to prove that $\BP(\PAR)=\;$PSPACE/{\it poly} where $\PAR$ is the class of sets computed in parallel polynomial time by (ordinary) real Turing machines. As a consequence we obtain the existence of binary sets that do not belong to the Boolean part of $\PAR$ (an extension of the result of Koiran (1994) since $\PH\subseteq\PAR$) and a separation of complexity classes in the real setting. ------------------------ The Report NC-TR-94-7 can be accessed and printed as follows (follow the same procedure for getting the abstracts file) % ftp cscx.cs.rhbnc.ac.uk (134.219.200.45) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-94-7.ps.Z ftp> bye % zcat nc-tr-94-7.ps.Z | lpr -l Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. Best wishes John From shultz at hebb.psych.mcgill.ca Fri Aug 5 14:19:36 1994 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Fri, 5 Aug 94 14:19:36 EDT Subject: No subject Message-ID: <9408051819.AA06064@hebb.psych.mcgill.ca> Subject: Paper available: Modeling cognitive development on balance scale phenomena Date: 5 Aug. '94 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/shultz.balance-ml.ps.Z ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: Modeling cognitive development on balance scale phenomena (33 pages) Thomas R. Shultz, Denis Mareschal, & William C.
Schmidt Department of Psychology & McGill Cognitive Science Centre McGill University Montreal, Quebec, Canada H3A 1B1 shultz at psych.mcgill.ca Abstract We used cascade-correlation to model human cognitive development on a well-studied psychological task, the balance scale. In balance scale experiments, the child is asked to predict the outcome of placing certain numbers of equal weights at various distances to the left or right of a fulcrum. Both stage progressions and information salience effects have been found with children on this task. Cascade-correlation is a generative connectionist algorithm that constructs its own network topology as it learns. Cascade-correlation networks provided better fits to these human data than did previous models, whether rule-based or connectionist. The network model was used to generate a variety of novel predictions for psychological research. Keywords: cognitive development, balance scale, connectionist learning, cascade-correlation The paper is published in Machine Learning, 1994, 16, 59-88. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get shultz.balance-ml.ps.Z ftp> quit unix> uncompress shultz.balance-ml.ps.Z Thanks to Jordan Pollack for maintaining this archive. Tom Shultz From cohn at psyche.mit.edu Fri Aug 5 17:00:03 1994 From: cohn at psyche.mit.edu (David Cohn) Date: Fri, 5 Aug 94 17:00:03 EDT Subject: Optimal Experiment Design paper available Message-ID: <9408052100.AA24701@psyche.mit.edu> ---------------------------------------------------------------------- Neural Network Exploration Using Optimal Experiment Design AI Lab Memo #1491/CBCL Paper #99 David A. Cohn Dept. of Brain and Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139 I consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov and MacKay, I apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. I demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. I conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated. ---------------------------------------------------------------------- The above paper is a greatly expanded version of one that appeared at last year's NIPS, and is available by anonymous ftp to: publications.ai.mit.edu in the file: ai-publications/1994/AIM-1491.ps.Z It is also available from my home page at: http://www.ai.mit.edu/people/cohn/cohn.html I welcome all comments, questions, and (gentle) criticisms. -David Cohn e-mail: cohn at psyche.mit.edu Dept. of Brain & Cognitive Science phone: (617) 253-8409 MIT, E10-243 Cambridge, MA 02139 http://www.ai.mit.edu/people/cohn/cohn.html From harnad at Princeton.EDU Fri Aug 5 20:25:42 1994 From: harnad at Princeton.EDU (Stevan Harnad) Date: Fri, 5 Aug 94 20:25:42 EDT Subject: Motor Control (Feldman): BBS Call for Commentators Message-ID: <9408060025.AA29660@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by: A.G.
Feldman & M.F. Levin on: POSITIONAL FRAMES OF REFERENCE IN MOTOR CONTROL This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ POSITIONAL FRAMES OF REFERENCE IN MOTOR CONTROL: ORIGIN AND USE Anatol G. Feldman (1,2,4) & Mindy F. Levin (2,3,4) Institute of Biomedical Engineering, University of Montreal (1) Research Centre, Rehabilitation Institute of Montreal, H3S 2J4 (2) School of Rehabilitation, University of Montreal (3) Centre for Research in Neurological Sciences, University of Montreal (4) EMAIL: Feldman at ere.umontreal.ca KEYWORDS: motor control, frames of reference, motoneurons, control variables, proprioception, kinaesthesis, equilibrium points, multi-muscle systems, pointing, synergy, redundancy problem. ABSTRACT: A hypothesis about sensorimotor integration (the lambda model) is described and applied to movement control and kinesthesia. The nervous system organizes positional frames of reference for the sensorimotor apparatus and produces active movements by shifting frames in terms of spatial coordinates. Kinematic and electromyographic patterns are not programmed but emerge from the dynamic interaction of the system's components, including external forces, within the designated frame of reference. Motoneuronal threshold properties and proprioceptive inputs to motoneurons may be important components in the physiological mechanism which produces positional frames of reference. The hypothesis that intentional movements are produced by shifting the frame of reference is extended to multi-muscle and multi-degree-of-freedom systems by providing a solution for the redundancy problem that allows the control of a joint alone or in combination with other joints to produce any desired limb configuration and movement trajectory. For each motor behavior, the nervous system uses a strategy which minimizes the number of changeable control variables and keeps the parameters of these changes invariant. This is illustrated by examples of simulated kinematic and electromyographic signals from single- and multi-joint arm movements produced by patterns of control variables. Empirical support is provided and additional tests are suggested. The model is contrasted with others based on the ideas of programming of motoneuronal activity, muscle forces, stiffness or movement kinematics.
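For readers who want a concrete feel for the equilibrium-point idea sketched in this abstract, the following deliberately crude caricature (not the authors' lambda model; the gains, damping, inertia and ramp duration are arbitrary illustrative choices) shows how a movement can emerge from dynamics alone when only a positional reference r is shifted:

# Single joint with spring-like muscle torque about a shifting reference r.
# No trajectory is programmed: ramping r moves the equilibrium point, and
# the kinematics fall out of the (coarsely integrated) dynamics.
k, b, inertia, dt = 10.0, 6.0, 1.0, 0.001   # illustrative constants only
theta, vel = 0.0, 0.0                       # joint angle (rad) and velocity

for step in range(3000):
    t = step * dt
    r = 0.8 * min(t / 0.5, 1.0)             # shift reference from 0 to 0.8 rad
    torque = -k * (theta - r) - b * vel     # restoring torque about r
    vel += (torque / inertia) * dt
    theta += vel * dt

print("settled at %.3f rad; reference was shifted to 0.8" % theta)

The point of the toy is only that the time course of theta is never specified anywhere in the loop; it emerges from the interaction of the shifted reference with the dynamics, which is the flavor of the claim made in the abstract.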
-------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.feldman). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. The file is also retrievable using archie, gopher, and World-Wide Web URLs (Universal Resource Locators): ftp://princeton.edu/pub/harnad/BBS/ gopher://gopher.princeton.edu/1ftp%3aprinceton.edu%40/pub/harnad/BBS/ http://192.190.21.10/wic/psych.02.html ------------------------------------------------------------- To retrieve a file by ftp from an Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.feldman When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). JANET users without ftp can instead utilise the file transfer facilities at sites uk.ac.ft-relay or uk.ac.nsf.sun. Full details are available on request. ------------------------------------------------------------- From bishopc at helios.aston.ac.uk Mon Aug 8 10:26:53 1994 From: bishopc at helios.aston.ac.uk (bishopc) Date: Mon, 8 Aug 1994 14:26:53 +0000 Subject: Lectureships in Neural Computing Message-ID: <29457.9408081326@sun.aston.ac.uk> ------------------------------------------------------------------- Aston University Neural Computing Research Group Department of Computer Science and Applied Mathematics LECTURESHIPS IN NEURAL COMPUTING -------------------------------- Applications are invited for two lectureships commencing in the next academic year. Candidates are expected to have excellent academic qualifications and a proven record of research. The appointments will be for an initial period of three years, with the possibility of subsequent renewal or transfer to a continuing appointment. Successful candidates will be expected to make a substantial contribution to the research activities of the Department in the area of neural computing. They will also be expected to contribute to the undergraduate and postgraduate teaching programmes in computer science. The Neural Computing Research Group currently comprises three professors, two lecturers, three postdoctoral research fellows and ten postgraduate research students. Current research activity focusses on principled approaches to neural computing, and spans a broad spectrum from theoretical foundations to industrial and commercial applications. These new appointments will further strengthen the research activity of this group. Salaries will be within the lecturer A and B range 14,756 to 25,735, and exceptionally up to 28,756 (UK pounds). 
If you wish to be considered for one of these positions, please send a CV and publications list, together with the names of 3 referees, to: Professor Chris Bishop Neural Computing Research Group Aston University Birmingham B4 7ET, U.K. Tel: 021 359 3611 ext. 4270 Fax: 021 333 6215 e-mail: c.m.bishop at aston.ac.uk From bishopc at helios.aston.ac.uk Mon Aug 8 12:36:57 1994 From: bishopc at helios.aston.ac.uk (bishopc) Date: Mon, 8 Aug 1994 16:36:57 +0000 Subject: Neural Computing Applications Forum Message-ID: <992.9408081536@sun.aston.ac.uk> --------------------------------------------------------------- NCAF Two-Day Conference: PRACTICAL APPLICATIONS AND TECHNIQUES OF NEURAL NETWORKS 14 and 15 September 1994 Aston University, Birmingham, UK 14 September 1994 ----------------- Tutorial: Introduction to Neural Networks (How to get started in Neural Computing) Richard Palmer, ERA Technology INVITED TALK: From Regularisation to Cheque Verification Francoise Fogelman, SLIGOS, France Workshop: Error Bars and Confidence Limits (Practical Techniques for Assigning Error Bars to Network Predictions) Odyssey: Stocks, Shakespeare and Sunshine: (or: Useful Tricks with Radial Basis Functions) David Lowe, Aston University Social Event: Canal Boat Trip with Wine and Buffet Dinner 15 September 1994 ----------------- Obstacles and Challenges in the Industrial Application of Neural Networks Inderjit Sandhu, Barclays Bank PLC Applying Neural Networks to the Analysis of Small Medical Data Sets Paul Beatty, University Hospital of South Manchester The Neural Nose Gareth Jones, Neotronics Ltd Hand-printed Character Recognition Using Deformable Models Chris Williams, Aston University Determination of Ocean Surface Wind Velocities from Satellite Radar Data Iain Strachan, AEA Technology Use of Neural Networks to Derive Helicopter Component Load Spectra Alf Vella, Cranfield University Working Together: NCAF and the DTI Neural Computing Programme Bob Wiggins, DTI The EPSRC Neural Computing Programme Catherine Barnes and Peter Bates, EPSRC A Study of Various Neural Network Techniques for Automotive Systems M Arain, Lucas Advanced Engineering Predicting Driver Alertness from Steering Behaviour Kevin Swingler, Stirling University (with Ford UK) ------------------------------------------------------------------- NEURAL COMPUTING APPLICATIONS FORUM The Neural Computing Applications Forum (NCAF) was formed in 1990 and has since come to provide the principal mechanism for exchange of ideas and information between academics and industrialists in the UK on all aspects of neural networks and their practical applications. NCAF organises four 2-day conferences each year, which are attended by around 100 participants. It has its own international journal `Neural Computing and Applications' which is published quarterly by Springer-Verlag, and it produces a quarterly newsletter `Networks'. Annual membership rates (Pounds Sterling): Company: 250 Individual: 140 Associate: 90 Student: 55 Membership includes free registration at all four annual conferences, a subscription to the journal `Neural Computing and Applications', and a subscription to `Networks'. Associate membership is intended for those who are unable to attend meetings (eg those overseas) but who wish to receive the journal and the newsletter, and does not include registration at the conferences.
For further information: Tel: +44 (0)784 477271 Fax: +44 (0)784 472879 email: c.m.bishop at aston.ac.uk Chris Bishop (Chairman, NCAF) -------------------------------------------------------------------- Professor Chris M Bishop Tel. +44 (0)21 359 3611 x4270 Neural Computing Research Group Fax. +44 (0)21 333 6215 Dept. of Computer Science c.m.bishop at aston.ac.uk Aston University Birmingham B4 7ET, UK -------------------------------------------------------------------- From harnad at Princeton.EDU Mon Aug 8 22:32:16 1994 From: harnad at Princeton.EDU (Stevan Harnad) Date: Mon, 8 Aug 94 22:32:16 EDT Subject: Subsymbolic Language Processing: PSYC Multiple Book Review Message-ID: <9408090232.AA12841@clarity.Princeton.EDU> CALL FOR BOOK REVIEWERS Below is the Precis of SUBSYMBOLIC NATURAL LANGUAGE PROCESSING by Risto Miikkulainen. This book has been selected for multiple review in PSYCOLOQUY. If you wish to submit a formal book review (see Instructions following Precis) please write to psyc at pucc.bitnet indicating what expertise you would bring to bear on reviewing the book if you were selected to review it. (If you have never reviewed for PSYCOLOQUY or Behavioral & Brain Sciences before, it would be helpful if you could also append a copy of your CV to your message.) If you are selected as one of the reviewers, you will be sent a copy of the book directly by the publisher (please let us know if you have a copy already). Reviews may also be submitted without invitation, but all reviews will be refereed. The author will reply to all accepted reviews. ---------------------------------------------------------------------- psycoloquy.94.5.46.language-network.1.miikkulainen Monday 8 Aug 1994 ISSN 1055-0143 (34 paragraphs, 1 fig, 1 note, 16 references, 609 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1994 Risto Miikkulainen Precis of: SUBSYMBOLIC NATURAL LANGUAGE PROCESSING: AN INTEGRATED MODEL OF SCRIPTS, LEXICON, AND MEMORY Cambridge, MA: MIT Press, 1993 15 chapters, 403 pages Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 risto at cs.utexas.edu ABSTRACT: Distributed neural networks have been very successful in modeling isolated cognitive phenomena, but complex high-level behavior has been amenable only to symbolic artificial intelligence techniques. Aiming to bridge this gap, this book describes DISCERN, a complete natural language processing system implemented entirely at the subsymbolic level. In DISCERN, distributed neural network models of parsing, generating, reasoning, lexical processing and episodic memory are integrated into a single system that learns to read, paraphrase, and answer questions about stereotypical narratives. Using DISCERN as an example, a general approach to building high-level cognitive models from distributed neural networks is introduced, and the special properties of such networks are shown to provide insight into human performance. In this approach, connectionist networks are not only plausible models of isolated cognitive phenomena, but also sufficient constituents for generating complex, high-level behavior. KEYWORDS: computational modeling, connectionism, distributed neural networks, episodic memory, lexicon, natural language processing, scripts. I. MOTIVATION 1.
Recently there has been a great deal of excitement in cognitive science about the subsymbolic (i.e., parallel distributed processing, or distributed connectionist, or distributed neural network) approach to natural language processing. Subsymbolic systems seem to capture a number of intriguing properties of human-like information processing such as learning from examples, context sensitivity, generalization, robustness of behavior, and intuitive reasoning. These properties have been very difficult to model with traditional, symbolic techniques. 2. Within this new paradigm, the central issues are quite different from (even incompatible with) the traditional issues in symbolic cognitive science, and the research has proceeded without much in common with the past. However, the ultimate goal is still the same: to understand how human cognition is put together. Even if cognitive science is being built on a new foundation, as can be argued, many of the results obtained through symbolic research are still valid, and could be used as a guide for developing subsymbolic models of cognitive processes. 3. This is where DISCERN, the computer-simulated neural network model described in this book (Miikkulainen 1993), fits in. DISCERN is a purely subsymbolic model, but at the high level it consists of modules and information structures similar to those of symbolic systems, such as scripts, lexicon, and episodic memory. At the highest level of cognitive modeling, the symbolic and subsymbolic paradigms have to address the same basic issues. Outlining a parallel distributed approach to those issues is the purpose of DISCERN. 4. In more specific terms, DISCERN aims: (1) to demonstrate that distributed artificial neural networks can be used to build a large-scale natural language processing system that performs approximately at the level of symbolic models; (2) to show that several cognitive phenomena can be explained at the subsymbolic level using the special properties of these networks; and (3) to identify central issues in subsymbolic cognitive modeling and to develop well-motivated techniques to deal with them. To the extent that DISCERN is successful in these areas, it constitutes a first step towards subsymbolic natural language processing. II. THE SCRIPT PROCESSING TASK 5. Scripts (Schank and Abelson, 1977) are schemas of often-encountered, stereotypic event sequences, such as visiting a restaurant, traveling by airplane, and shopping at a supermarket. Each script divides further into tracks, or established minor variations. A script can be represented as a causal chain of events with a number of open roles. Script-based understanding means reading a script-based story, identifying the proper script and track, and filling its roles with the constituents of the story. Events and role fillers that were not mentioned in the story but are part of the script can then be inferred. Understanding is demonstrated by generating an expanded paraphrase of the original story, and by answering questions about the story. 6. To see what is involved in the task, let us consider an example of DISCERN input/output behavior. The following input stories are examples of the fancy-restaurant, plane-travel, and electronics-shopping tracks: John went to MaMaison. John asked the waiter for lobster. John left the waiter a big tip. John went to LAX. John checked in for a flight to JFK. The plane landed at JFK. John went to Radio-Shack. John asked the staff questions about CD-players. John chose the best CD-player. 7. 
DISCERN reads the orthographic word symbols sequentially, one at a time. An internal representation of each story is formed, where all inferences are made explicit. These representations are stored in the episodic memory. The system then answers questions about the stories: What did John buy at Radio-Shack? John bought a CD-player at Radio-Shack. Where did John fly to? John flew to JFK. What did John eat at MaMaison? John ate a good lobster. With the question as a cue, the appropriate story representation is retrieved from the episodic memory and the answer is generated word by word. DISCERN also generates full paraphrases of the input stories. For example, it generates an expanded version of the restaurant story: John went to MaMaison. The waiter seated John. John asked the waiter for lobster. John ate a good lobster. John paid the waiter. John left a big tip. John left MaMaison. 8. The answers and the paraphrase show that DISCERN has made a number of inferences beyond the original story. For example, it inferred that John ate the lobster and the lobster tasted good. The inferences are not based on specific rules but are statistical and learned from experience. DISCERN has read a number of similar stories in the past and the unmentioned events and role bindings have occurred in most cases. They are assumed immediately and automatically upon reading the story and have become part of the memory of the story. In a similar fashion, human readers often confuse what was mentioned in the story with what was only inferred (Bower et al., 1979; Graesser et al., 1979). 9. A number of issues can be identified from the above examples. Specifically, DISCERN has to (1) make statistical, script-based inferences and account for learning them from experience; (2) store items in the episodic memory in a single presentation and retrieve them with a partial cue; (3) develop a meaningful organization for the episodic memory, based on the stories it reads; (4) represent meanings of words, sentences, and stories internally; and (5) organize a lexicon of symbol and concept representations based on examples of how words are used in the language and form a many-to-many mapping between them. Script processing constitutes a good framework for studying these issues, and a good domain for developing an approach towards the goals outlined above. III. APPROACH 10. Parallel distributed processing models typically have very little internal structure. They produce the statistically most likely answer given the input conditions in a process that is opaque to the external observer. This is well suited to the modeling of isolated low-level tasks, such as learning past tense forms of verbs (Rumelhart and McClelland, 1986) or word pronunciation (Sejnowski and Rosenberg, 1987). Given the success of such models, a possible approach to higher-level cognitive modeling would be to construct the system from several submodules that work together to produce the higher-level behavior. 11. In DISCERN, the immediate goal is to build a complete, integrated system that performs well in the script processing task. In this sense, DISCERN is very similar to traditional models in artificial intelligence. However, DISCERN also aims to show how certain parts of human cognition could actually be built. The components of DISCERN were designed as independent cognitive models that can account for interesting language processing and memory phenomena, many of which are not even required in the DISCERN task. 
Combining these models into a single, working system is one way of validating them. In DISCERN, the components are not just models of isolated cognitive phenomena; they are sufficient constituents for generating complex high-level behavior. IV. THE DISCERN MODEL 12. DISCERN can be divided into parsing, generating, question answering, and memory subsystems, each with two modules (figure 1). Each module is trained in its task separately and in parallel. During performance, the modules form a network of networks, each feeding its output to the input of another module.

 Input text                                         Output text
     |                                                   ^
     V                                                   |
=================     =================     =================
 Sentence Parser <-------   Lexicon   <------- Sentence Gener.
=================     =================     =================
    |   |                                         ^     ^
    |   |                                         |     |
    |   +-------+------------------------+        |     |
    |           |                        |        |     |
    |           V                        V        |     |
    |    =================      =================        |
    |       Cue Former           Answer Producer --+     |
    |    =================      =================        |
    |           |                        ^               |
    |           |                        |               |
    V           V                        |               |
=================     =================     =================
  Story Parser  ------->  Episodic Memory ------->  Story Generator
=================     =================     =================

Figure 1: The DISCERN Model.

13. The sentence parser reads the input words one at a time and forms a representation of each sentence. The story parser combines the sequence of sentences into an internal representation of the story, which is then stored in the episodic memory. The story generator receives the internal representation and generates the sentences of the paraphrase one at a time. The sentence generator outputs the sequence of words for each sentence. The cue former receives a question representation, built by the sentence parser, and forms a cue pattern for the episodic memory, which returns the appropriate story representation. The answer producer receives the question and the story and generates an answer representation, which is output word by word by the sentence generator. The architecture and behavior of each of these modules in isolation is outlined below. V. LEXICON 14. The input and output of DISCERN consist of distributed representations for orthographic word symbols (also called lexical words). Internally, DISCERN processes semantic concept representations (semantic words). Both the lexical and semantic words are represented distributively as vectors of gray-scale values between 0.0 and 1.0. The lexical representations are based on the visual patterns of characters that make up the written word; they remain fixed throughout the training and performance of DISCERN. The semantic representations stand for distinct meanings and are developed automatically by the system while it is learning the processing task. 15. The lexicon stores the lexical and semantic representations and translates between them. It is implemented as two feature maps (Kohonen, 1989), one lexical and the other semantic. Words whose lexical forms are similar, such as "LINE" and "LIKE", are represented by nearby units in the lexical map. In the semantic map, words with similar semantic content, such as "John" and "Mary", or "Leone's" and "MaMaison" are mapped near each other. There is a dense set of associative interconnections between the two maps. A localized activity pattern representing a word in one map will cause a localized activity pattern to form in the other map, representing the same word. The output representation is then obtained from the weight vector of the most highly active unit. The lexicon thus transforms a lexical input vector into a semantic output vector and vice versa. Both maps and the associative connections between them are organized simultaneously, based on examples of co-occurring symbols and meanings.
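As a rough sketch of this translation step (illustrative Python only, not the DISCERN implementation; the map sizes, random weights, and the translate helper are invented for the example):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "feature maps": each unit holds a weight vector in its input space.
    lexical_map  = rng.random((10, 10, 12))   # 10x10 units, 12-dim lexical vectors
    semantic_map = rng.random((10, 10, 8))    # 10x10 units, 8-dim semantic vectors

    # Dense associative connections from lexical units to semantic units.
    assoc = rng.random((100, 100))

    def translate(lex_vec):
        """Map a lexical input vector to a semantic output vector."""
        # 1. Localized response on the lexical map: find the best-matching unit.
        d = np.linalg.norm(lexical_map.reshape(100, 12) - lex_vec, axis=1)
        winner = np.argmin(d)
        # 2. Propagate through the associative connections to the semantic map.
        activity = assoc[winner]
        # 3. Output the weight vector of the most highly active semantic unit.
        return semantic_map.reshape(100, 8)[np.argmax(activity)]

    print(translate(rng.random(12)))          # an 8-dim semantic representation

In the full model both maps are of course trained rather than random, and the same machinery runs in the reverse direction for output.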
16. The lexicon architecture facilitates interesting behavior. Localized damage to the semantic map results in category-specific lexical deficits similar to human aphasia (Caramazza, 1988; McCarthy and Warrington, 1990). For example, the system selectively loses access to restaurant names, or animate words, when that part of the map is damaged. Dyslexic performance errors can also be modeled. If the performance is degraded, for example, by adding noise to the connections, parsing and generation errors that occur are quite similar to those observed in human deep dyslexia (Coltheart et al., 1988). For example, the system may confuse "Leone's" with "MaMaison", or "LINE" with "LIKE", because they are nearby in the map and share similar associative connections. VI. FGREP PROCESSING MODULES 17. Processing in DISCERN is carried out by hierarchically organized pattern-transformation networks. Each module performs a specific subtask, such as parsing a sentence or generating an answer to a question. All these networks have the same basic architecture: they are three-layer, simple-recurrent backpropagation networks (Elman, 1990), with the extension called FGREP that allows them to develop distributed representations for their input/output words. 18. The network learns the processing task by adapting the connection weights according to the standard on-line backpropagation procedure (Rumelhart et al., 1986, pp. 327-329). The error signal is propagated to the input layer, and the current input representations are modified as if they were an extra layer of weights. The modified representation vectors are put back in the lexicon, replacing the old representations. Next time the same words occur in the input or output, their new representations are used to form the input/output patterns for the network. In FGREP, therefore, the required mappings change as the representations evolve, and backpropagation is shooting at a moving target. 19. The representations that result from this process have a number of useful properties for cognitive modeling. (1) Since they adapt to the error signal, they end up coding information most crucial to the task. Representations for words that are used in similar ways in the examples become similar. Thus, these profiles of continuous activity values can be claimed to code the meanings of the words as well. (2) As a result, the system never has to process very novel input patterns, because generalization has already been done in the representations. (3) The representation of a word is determined by all the contexts in which that word has been encountered; consequently, it is also a representation of all those contexts. Expectations emerge automatically and cumulatively from the input word representations. (4) Single representation components do not usually stand for identifiable semantic features. Instead, the representation is holographic: word categories can often be recovered from the values of single components. (5) Holography makes the system very robust against noise and damage. Performance degrades approximately linearly as representation components become defective or inaccurate.
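The core FGREP step can be sketched in a few lines (a toy single example in Python; the layer sizes, data, learning rate, and the clipping of representations to the gray-scale range are invented or assumed for the example, and the recurrent context of the real networks is omitted):

    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_hid, lr = 8, 12, 0.1                  # toy sizes and learning rate
    lexicon = {"john": rng.random(dim)}          # current word representation

    W1 = rng.normal(0.0, 0.1, (n_hid, dim))     # input -> hidden weights
    W2 = rng.normal(0.0, 0.1, (dim, n_hid))     # hidden -> output weights

    x = lexicon["john"]                          # input pattern for the word
    target = rng.random(dim)                     # stand-in training target

    # Forward pass through the three-layer network.
    h = np.tanh(W1 @ x)
    y = np.tanh(W2 @ h)

    # Standard backpropagation of the error through the weights...
    e_y = (y - target) * (1.0 - y ** 2)
    e_h = (W2.T @ e_y) * (1.0 - h ** 2)
    e_x = W1.T @ e_h                             # ...and one step further,
    W2 -= lr * np.outer(e_y, h)                  # to the input representation.
    W1 -= lr * np.outer(e_h, x)

    # The modified representation replaces the old one in the lexicon
    # (clipping to [0, 1] assumed here to keep the values gray-scale).
    lexicon["john"] = np.clip(x - lr * e_x, 0.0, 1.0)
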
VII. EPISODIC MEMORY 20. The episodic memory in DISCERN consists of a hierarchical pyramid of feature maps organized according to the taxonomy of script-based stories. The highest level of the hierarchy is a single feature map that lays out the different script classes. Beneath each unit of this map there is another feature map that lays out the tracks within the particular script. The different role bindings within each track are separated at the bottom level. The map hierarchy receives a story representation vector as its input and classifies it as an instance of a particular script, track, and role binding. The hierarchy thereby provides a unique memory representation for each script-based story as the maximally responding units in the feature maps at the three levels. 21. Whereas the top and the middle level in the hierarchy only serve as classifiers, selecting the appropriate track and role-binding map for each input, at the bottom level a permanent trace of the story must also be created. The role-binding maps are trace feature maps, with modifiable lateral connections. When the story representation vector is presented to a role-binding map, a localized activity pattern forms as a response. Each lateral connection to a unit with higher activity is made excitatory, while a connection to a unit with lower activity is made inhibitory. The units within the response now "point" towards the unit with highest activity, permanently encoding that the story was mapped at that location. 22. A story is retrieved from the episodic memory by giving it a partial story representation as a cue. Unless the cue is highly deficient, the map hierarchy is able to recognize it as an instance of the correct script and track and form a partial cue for the role-binding map. The trace feature map mechanism then completes the role binding. The initial response of the map is again a localized activity pattern; because the map is topological, it is likely to be located somewhere near the stored trace. If the cue is close enough, the lateral connections pull the activity to the center of the stored trace. The complete story representation is retrieved from the weight vectors of the maximally responding units at the script, track, and role-binding levels. 23. Hierarchical feature maps have a number of properties that make them useful for memory organization: (1) The organization is formed in an unsupervised manner, extracting it from the input experience of the system. (2) The resulting order reflects the properties of the data, the hierarchy corresponding to the levels of variation, and the maps laying out the similarities at each level. (3) By dividing the data first into major categories and gradually making finer distinctions lower in the hierarchy, the most salient components of the input data are singled out and more resources are allocated for representing them accurately. (4) Because the representation is based on salient differences in the data, the classification is very robust, and usually correct even if the input is noisy or incomplete. (5) Because the memory is based on classifying the similarities and storing the differences, retrieval becomes a reconstructive process (Kolodner, 1984; Williams and Hollan, 1981) similar to human memory. 24. The trace feature map exhibits interesting memory effects that result from interactions between traces. Later traces capture units from earlier ones, making later traces more likely to be retrieved. The extent of the traces determines memory capacity.
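The storage and settling dynamics just described can be caricatured as follows (illustrative Python; the map size, activity profile, and update constants are invented, and only a single trace is stored):

    import numpy as np

    N = 8                                      # toy 8x8 role-binding map
    lateral = np.zeros((N * N, N * N))         # modifiable lateral connections

    def response(center, width=1.5):
        """Localized bump of activity around a map location."""
        yy, xx = np.mgrid[0:N, 0:N]
        d2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
        return np.exp(-d2 / (2.0 * width ** 2)).ravel()

    def store(act, w=0.2):
        """Connections to more active units become excitatory,
        connections to less active units inhibitory."""
        active = np.flatnonzero(act > 0.1)
        for i in active:
            for j in active:
                if i != j:
                    lateral[i, j] = w if act[j] > act[i] else -w

    def retrieve(cue, steps=5):
        """The lateral connections pull a nearby cue onto the trace."""
        act = cue
        for _ in range(steps):
            act = np.maximum(act + lateral.T @ act, 0.0)
        return np.unravel_index(np.argmax(act), (N, N))

    store(response((3, 4)))                    # lay down a trace at (3, 4)
    print(retrieve(response((4, 5))))          # a nearby cue settles on the trace
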
The smaller the traces, the more of them will fit in the map, but more accurate cues are required to retrieve them. If the memory capacity is exceeded, older traces will be selectively replaced by newer ones. Traces that are unique, that is, located in a sparse area of the map, are not affected, no matter how old they are. Similar effects are common in human long-term memory (Baddeley, 1976; Postman, 1971). VIII. DISCERN HIGH-LEVEL BEHAVIOR 25. DISCERN is more than just a collection of individual cognitive models. Interesting behavior results from the interaction of the components in a complete story-processing system. 26. DISCERN was trained and tested with an artificially generated corpus of script-based stories consisting of three scripts with three tracks and three open roles each. The complete DISCERN system performs very well: at the output, about 98 percent of the words are correct. This is rather remarkable for a chain of networks that is 9 modules long and consists of several different types of modules. 27. A modular neural network system can only operate if it is stable, that is, if small deviations from the normal flow of information are automatically corrected. It turns out that DISCERN has several built-in safeguards against minor inaccuracies and noise. The semantic representations are distributed and redundant, and inaccuracies in the output of one module are cleaned up by the module that uses the output. The memory modules clean up by categorical processing: a noisy input is recognized as a representative of an established class and replaced by the correct representation of that class. As a result, small deviations do not throw the system off course, but rather the system filters out the errors and returns to the normal course of processing, which is an essential requirement for building robust cognitive models. 28. DISCERN also demonstrates strong script-based inferencing. Even when the input story is incomplete, consisting of only a few main events, DISCERN can usually form an accurate internal representation of it. DISCERN was trained to form complete story representations from the first sentence on, and because the stories are stereotypical, missing sentences have little effect on the parsing process. Once the story representation has been formed, DISCERN performs as if the script had been fully instantiated. Questions about missing events and role-bindings are answered as if they were part of the original story. If events occurred in an unusual order, they are recalled in the stereotypical order in the paraphrase. If there is not enough information to fill a role, the most likely filler is selected and maintained throughout the paraphrase generation. Such behavior automatically results from the modular architecture of DISCERN and is consistent with experimental observations on how people remember stories of familiar event sequences (Bower et al., 1979; Graesser et al., 1979). 29. In general, given the information in the question, DISCERN recalls the story that best matches it in the memory. An interesting issue is: what happens when DISCERN is asked a question that is inaccurate or ambiguous, that is, one that does not uniquely specify a story? For example, DISCERN might have read a story about John eating lobster at MaMaison, and then about Mary doing the same at Leone's, and the question could be "Who ate lobster?" Because later traces are more prominent in the memory, DISCERN is more likely to retrieve the Mary-at-Leone's story in this case. 
The earlier story is still in the memory, but to recall it, more details need to be specified in the question, such as `Who ate lobster at MaMaison?" Similarly, DISCERN can robustly retrieve a story even if the question is slightly inaccurate. When asked "How did John like the steak at MaMaison?", DISCERN generates the answer "John thought lobster was good at MaMaison", ignoring the inaccuracy in the question, because the cue is still close enough to the stored trace. DISCERN does recognize, though, when a question is too different from anything in the memory, and should not be answered. For "Who ate at McDonald's?", the cue vector is not close to any trace, the memory does not settle, and nothing is retrieved. Note that these mechanisms were not explicitly built into DISCERN, but they emerge automatically from the physical layout of the architecture and representations. IX. DISCUSSION 30. There is an important distinction between scripts (or more generally, schemas) in symbolic systems, and scripts in subsymbolic models such as DISCERN. In the symbolic approach, a script is stored in memory as a separate, exact knowledge structure, coded by the knowledge engineer. The script has to be instantiated by searching the schema memory sequentially for a structure that matches the input. After instantiation, the script is active in the memory and later inputs are interpreted primarily in terms of this script. Deviations are easy to recognize and can be taken care of with special mechanisms. 31. In the subsymbolic approach, schemas are based on statistical properties of the training examples, extracted automatically during training. The resulting knowledge structures do not have explicit representations. For example, a script exists in a neural network only as statistical correlations coded in the weights. Every input is automatically matched to every correlation in parallel. There is no all-or-none instantiation of a particular knowledge structure. The strongest, most probable correlations will dominate, depending on how well they match the input, but all of them are simultaneously active at all times. Regularities that make up scripts can be particularly well captured by such correlations, making script-based inference a good domain for the subsymbolic approach. Generalization and graceful degradation give rise to inferencing that is intuitive, immediate, and occurs without conscious control, as is script-based inference in humans. On the other hand, it is very difficult to recognize deviations from the script and to initiate exception-processing when the automatic mechanisms fail. Such sequential reasoning would require intervention of a high-level "conscious" monitor, which has yet to be built in the connectionist framework. X. CONCLUSION 32. The main conclusion from DISCERN is that building subsymbolic models is a feasible approach to understanding mechanisms underlying natural language processing. DISCERN shows how several cognitive phenomena may result from subsymbolic mechanisms. Learning word meanings, script processing, and episodic memory organization are based on self-organization and gradient-descent in error in this model. Script-based inferences, expectations, and defaults automatically result from generalization and graceful degradation. Several types of performance errors in role binding, episodic memory, and lexical access emerge from the physical organization of the system. 
Perhaps most significantly, DISCERN shows how individual connectionist models can be combined into a large, integrated system that demonstrates that these models are sufficient constituents for generating sequential, symbolic, high-level behavior. 33. Although processing simple script instantiations is a start, there is a long way to go before subsymbolic models will rival the best symbolic cognitive models. For example, in story understanding, symbolic systems have been developed that analyze realistic stories in depth, based on higher-level knowledge structures such as goals, plans, themes, affects, beliefs, argument structures, plots, and morals. In designing subsymbolic models that would do that, we are faced with two major challenges: (1) how to implement connectionist control of high-level processing strategies (making it possible to model processes more sophisticated than a series of reflex responses), and (2) how to represent and learn abstractions (making it possible to process information at a higher level than correlations in the raw input data). Progress in these areas would constitute a major step towards extending the capabilities of subsymbolic natural language processing models beyond those of DISCERN. XI. NOTE 34. Software for the DISCERN system is available through anonymous ftp from cs.utexas.edu:pub/neural-nets/discern. An X11 graphics demo, showing DISCERN in processing the example stories discussed in the book, can be run remotely under the World Wide Web at http://www.cs.utexas.edu/~risto/discern.html, or by telnet with "telnet cascais.utexas.edu 30000". XII. TABLE OF CONTENTS PART I Overview 1 Introduction 2 Background 3 Overview of DISCERN PART II Processing Mechanisms 4 Backpropagation Networks 5 Developing Representations in FGREP Modules 6 Building from FGREP Modules PART III Memory Mechanisms 7 Self-Organizing Feature Maps 8 Episodic Memory Organization: Hierarchical Feature Maps 9 Episodic Memory Storage and Retrieval: Trace Feature Maps 10 Lexicon PART IV Evaluation 11 Behavior of the Complete Model 12 Discussion 13 Comparison to Related Work 14 Extensions and Future Work 15 Conclusions APPENDICES A Story Data B Implementation Details C Instructions for Obtaining the DISCERN Software XIII. REFERENCES

Baddeley, A.D. (1976) The Psychology of Memory. New York: Basic Books.
Bower, G.H., Black, J.B. and Turner, T.J. (1979) Scripts in memory for text. Cognitive Psychology, 11:177-220.
Caramazza, A. (1988) Some aspects of language processing revealed through the analysis of acquired aphasia: The lexical system. Annual Review of Neuroscience, 11:395-421.
Coltheart, M., Patterson, K. and Marshall, J.C., editors (1988) Deep Dyslexia. London; Boston: Routledge and Kegan Paul. Second edition.
Elman, J.L. (1990) Finding structure in time. Cognitive Science, 14:179-211.
Graesser, A.C., Gordon, S.E. and Sawyer, J.D. (1979) Recognition memory for typical and atypical actions in scripted activities: Tests for the script pointer+tag hypothesis. Journal of Verbal Learning and Verbal Behavior, 18:319-332.
Kohonen, T. (1989) Self-Organization and Associative Memory. Berlin; Heidelberg; New York: Springer. Third edition.
Kolodner, J.L. (1984) Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model. Hillsdale, NJ: Erlbaum.
McCarthy, R.A. and Warrington, E.K. (1990) Cognitive Neuropsychology: A Clinical Introduction. New York: Academic Press.
Miikkulainen, R. (1993) Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory. Cambridge, MA: MIT Press.
Postman, L. (1971) Transfer, interference and forgetting. In Kling, J.W., and Riggs, L.A., editors, Woodworth and Schlosberg's Experimental Psychology, 1019-1132. New York: Holt, Rinehart and Winston. Third edition.
Rumelhart, D.E. and McClelland, J.L. (1986) On learning past tenses of English verbs. In Rumelhart, D.E., and McClelland, J.L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2, 216-271. Cambridge, MA: MIT Press.
Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) Learning internal representations by error propagation. In Rumelhart, D.E. and McClelland, J.L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1, 318-362. Cambridge, MA: MIT Press.
Schank, R.C. and Abelson, R.P. (1977) Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale, NJ: Erlbaum.
Sejnowski, T.J. and Rosenberg, C.R. (1987) Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168.
Williams, M.D. and Hollan, J.D. (1981) The process of retrieval from very long-term memory. Cognitive Science, 5:87-119.

-------------------------------------------------------------------- PSYCOLOQUY Book Review Instructions The PSYCOLOQUY book review procedure is very similar to the commentary procedure except that it is the book itself, not a target article, that is under review. (The Precis summarizing the book is intended to permit PSYCOLOQUY readers who have not read the book to assess the exchange, but the reviews should address the book, not primarily the Precis.) Note that as multiple reviews will be co-appearing, you need only comment on the aspects of the book relevant to your own specialty and interests, not necessarily the book in its entirety. Any substantive comments and criticism -- including points calling for a detailed and substantive response from the author -- are appropriate. Hence, investigators who have already reviewed or intend to review this book elsewhere are still encouraged to submit a PSYCOLOQUY review specifically written with this specialized multilateral review-and-response feature in mind. 1. Before preparing your review, please read carefully the Instructions for Authors and Commentators and examine recent numbers of PSYCOLOQUY. 2. Reviews should not exceed 500 lines. Where judged necessary by the Editor, reviews will be formally refereed. 3. Please provide a title for your review. As many commentators will address the same general topic, your title should be a distinctive one that reflects the gist of your specific contribution and is suitable for the kind of keyword indexing used in modern bibliographic retrieval systems. Each review should also have a brief (~50-60 word) Abstract. 4. All paragraphs should be numbered consecutively. Line length should not exceed 72 characters. The review should begin with the title, your name and full institutional address (including zip code) and email address. References must be prepared in accordance with the examples given in the Instructions. Please read the sections of the Instructions for Authors concerning style. INSTRUCTIONS FOR PSYCOLOQUY AUTHORS AND COMMENTATORS PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 40,000.
PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral science, cognitive science, neuroscience, social science, etc.). All contributions are refereed. Target article length should normally not exceed 500 lines [c. 4500 words]. Commentaries and responses should not exceed 200 lines [c. 1800 words]. All target articles, commentaries and responses must have (1) a short abstract (up to 100 words for target articles, shorter for commentaries and responses), (2) an indexable title, (3) the authors' full name(s) and institutional address(es). In addition, for target articles only: (4) 6-8 indexable keywords, (5) a separate statement of the authors' rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field? what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses). All paragraphs should be numbered in articles, commentaries and responses (see format of already published articles in the PSYCOLOQUY archive; line length should be < 80 characters, no hyphenation). It is strongly recommended that all figures be designed so as to be screen-readable ascii. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article. PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected. Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY. Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu. Anonymous ftp archive is DIRECTORY pub/harnad/Psycoloquy HOST princeton.edu From fellous at selforg.usc.edu Tue Aug 9 19:05:48 1994 From: fellous at selforg.usc.edu (Jean-Marc Fellous) Date: Tue, 9 Aug 1994 16:05:48 -0700 Subject: Position in Bochum Message-ID: <199408092305.QAA08804@selforg.usc.edu> Please post the following job announcement: ----------------------- The following announcement is for a C3 professorship at the University of Bochum (Germany) in the field of neuro-computation.
The original posting is in German; an English translation is given below. Knowledge of German is preferred but not required (however, non-German speaking applicants are expected to eventually learn German ...). Ruhr-Universitaet Bochum. A C3 professorship in "Neuroinformatik" is to be filled at the Institut fuer Neuroinformatik. The institute is a central scientific facility of the university, with departments of Systems Biophysics and Theoretical Biology. Its main areas of work are principles of self-organization and information processing in neural architectures. The position includes taking part in the collegial leadership of the institute. Teaching duties include lectures on neural networks and on organizational principles of biological neural systems that can be exploited technically. The successful candidate is expected to have a proven scientific record in at least one of the following areas: analysis and application of biological neural organizational principles; artificial neural networks; design of systems with neural architecture; problems of self-organization. In addition to experience in theory (systems theory, nonlinear dynamics), an interest in application-oriented problems is expected. Cooperation with the Zentrum fuer Neuroinformatik in Bochum, which works primarily on application-oriented problems, is possible. The Ruhr-Universitaet Bochum seeks to promote women in research and teaching. Severely disabled applicants will be given preference where qualifications are equal. Please send your written application with the usual documents to the Rektor der Ruhr-Universitaet Bochum, 44780 Bochum, Germany. ----- End Included Message ----- From druck at afit.af.mil Wed Aug 10 15:40:12 1994 From: druck at afit.af.mil (Dennis W. Ruck) Date: Wed, 10 Aug 94 15:40:12 -0400 Subject: CFP: SPIE Applications and Science of Artificial Neural Networks VI Message-ID: <9408101940.AA06900@gandalf.afit.af.mil> CALL FOR PAPERS SPIE Conference on Applications and Science of Artificial Neural Networks VI Orlando, Florida 17-21 April 1995 You are invited to submit a paper to the SPIE conference Applications and Science of Artificial Neural Networks VI. This conference will be held in conjunction with nearly 40 other conferences on topics in object recognition, aerospace sensing, and photonics. See below for a complete list. -------------------------------------------------------------- Announcement and Call for Papers for Applications and Science of Artificial Neural Networks VI -------------------------------------------------------------- Conference Chairs: Steven K. Rogers, Dennis W. Ruck, Air Force Institute of Technology Program Committee: Stanley C. Ahalt, The Ohio State Univ.; James C. Bezdek, Univ. of West Florida; Joe R. Brown, Microelectronics and Computer Technology Corp.; Lee A. Feldkamp, Ford Motor Co.; Michael Georgiopoulos, Univ. of Central Florida; Joydeep Ghosh, Univ. of Texas/Austin; Charles W. Glover, Oak Ridge National Lab.; John B. Hampshire, II, Jet Propulsion Lab.; Richard P. Lippmann, MIT Lincoln Lab.; Harley R. Myler, Univ. of Central Florida; Mary Lou Padgett, Auburn Univ.; Kevin L. Priddy, Accurate Automation Corp.; Gintaras V. Puskorius, Ford Motor Co.; Donald F. Specht, Lockheed Palo Alto Research Lab.; Gregory L. Tarr, Air Force Phillips Lab.; Gary Whittington, Univ. of Aberdeen (UK); Rodney G. Winter, Dept.
of Defense The focus of this conference is on real-world applications of artificial neural networks and on recent theoretical developments applicable to current applications. The goal of this conference is to provide a forum for interaction between researchers and industrial/government agencies with information processing requirements. Papers that investigate advantages/disadvantages of artificial neural networks in specific real-world applications will be presented. Papers that clearly state existing problems in information processing that could potentially be solved by artificial neural networks will also be considered. Sessions will concentrate on: * innovative applications of artificial neural networks to solve real-world problems * comparative performance in applications of target recognition, object recognition, speech processing, speaker identification, cochannel processing, signal processing in realistic environments, robotics, process control, and image processing * demonstrations of properties and limitations of existing or new artificial neural networks as shown by or related to an application * environments for artificial neural networks development and implementation with specific applications used to demonstrate features of the systems * hardware implementation technologies that are general purpose or application specific * knowledge acquisition and representation * biologically inspired visual representation techniques * decision support systems * artificial life * cognitive science * hybrid systems (fuzzy, neural, genetic) * neurobiology * optimization * sensation and perception * system identification * financial applications * time series analysis and prediction * pattern recognition * medical applications * intelligent control * robotics. ------------------------------------------------------------------ List of Conferences ------------------------------------------------------------------ The following conferences will all be held in Orlando, Florida 17-21 April 1995 at the Marriott Orlando World Center: 1. Public Safety/Law Enforcement Technology 2. Photonics for Space Environments III 3. Space Environmental, Legal, and Safety Issues 4. Imaging Spectrometry 5. Commercialization of High-Resolution Satellite Imagery for Dual-Use Applications 6. Infrared Detectors and Instrumentation for Astronomy 7. Spaceborne Interferometry II 8. Space Telescopes and Instruments III 9. Fiber Optics in Astronomical Applictions 10. Telescope Control Systems 11. Distributed Interactive Simulation (Critical Reviews) 12. Helmet- and Head-Mounted Displays and Symbology Design Requirements II 13. Cockpit Displays II 14. Space Guidance, Control, and Tracking II 15. Synthetic Vision for Vehicle Guidance and Control 16. Acquisition, Tracking, and Pointing IX 17. Applied Laser Radar Technology II 18. Air Traffic Control Technologies 19. Technologies for Advanced Land Combat (Critical Reviews) Part I: Rapid Force Projection Initiative Part II: Advanced Vehicle Technologies Part III: Information Sciences for Digitizing the Battlefield 20. Detection Technologies for Mines and Minelike Targets 21. Targets and Backgrounds: Characterization and Representation 22. Atmospheric Propagation and Remote Sensing IV 23. Tactical Control Technologies 24. Test and Evaluation of Defense-Related Infrared Detectors and Arrays 25. Infrared Imaging Systems: Design, Analysis, Modeling, and Testing VI 26. Signal Processing, Sensor Fusion, and Target Recognition IV 27. 
Algorithms for Synthetic Aperture Radar Imagery II 28. Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision II 29. Automatic Object Recognition V 30. Smart Infrared Focal Plane Arrays and Technology 31. Transition of Optical Processors into Systems 1995 32. Optical Pattern Recognition VI 33. Visual Information Processing IV 34. Applications and Science of Artificial Neural Networks VI 35. Applications of Fuzzy Logic Technology II 36. Thermosense XVII: An International Conference on Thermal Sensing and Imaging Diagnostic Applications 37. Flat Panel Displays for Defense Applications (Critical Reviews) 38. Digital Signal Processing Technology (Critical Reviews) ------------------------------------------------------------------ General Information ------------------------------------------------------------------ SPIE's International Symposium on Aerospace/Defense Sensing and Dual-Use Photonics 17-21 April 1995 Marriott's Orlando World Center Resort and Convention Center Orlando, Florida USA SPIE's 1995 Aerospace/Defense Sensing and Dual-Use Photonics Symposium will be held at: Marriott's Orlando World Center Hotel 8701 World Center Drive Orlando, Florida 32821-6398 Phone: 407/239-4200 or 800/621-0638 (outside Florida) Fax: 407/239-5958 Accommodations -------------- SPIE will reserve a block of rooms for attendees at the Marriott Orlando World Center Hotel. Room rates at the Marriott will be $129 single and $142 double plus tax. Alternate hotels in the immediate area will also be available. Information concerning hotels and prices will be announced in the advance program. Advance Technical Program ------------------------- The comprehensive Advance Technical Program for this symposium will list conferences, paper titles and authors in order of presentation, educational short courses schedule including course descriptions and instructor biographies, and an outline of all planned special events. Call SPIE at 206/676-3290 (Pacific Time) to request that a copy be sent to you when it becomes available in January 1995. Conference Registration ----------------------- The following registration fees for SPIE's International Symposium on Aerospace/Defense Sensing and Dual-Use Photonics are preliminary and included to assist you in planning.

Conference Fees without Proceedings          Member    Nonmember
Attendee Full Conference....................  $360 ....... $420
One day.....................................   160 ........ 190
Author Full Conference*.....................   325 ........ 385
Author One Day*.............................   160 ........ 190
Student.....................................    85 ......... 95
* Author fee includes a proceedings

Short Course Fees                            Member    Nonmember
Half-day course (3.5 hr)....................  $145 ....... $170
Full-day course (6.5 hr)....................   265 ........ 310
Two-day course (12 hr)......................   485 ........ 570
Florida sales tax will be added to short course fees.

How to Contact SPIE ------------------- If you have further questions, or need assistance, please send a message to info-optolink-service at mom.spie.org. You will receive a response from an SPIE staff member. Join SPIE Today --------------- Keep in touch with the dynamic world of optics and optoelectronics by becoming a member of SPIE. Full SPIE Membership Joining SPIE as a full member provides you with many benefits, including: * Voting privileges * Eligibility to hold SPIE office.
* Subscription to OE Reports * Subscription to Optical Engineering, SPIE's monthly journal * Annual SPIE Member Guide * Full member discounts (~20%) on SPIE publications * Full member discounts (~15%) on SPIE conferences and short courses * Discounts on publications from other publishers as available * Member rates for SPIE-cosponsored technical events $85 in North America/$95 outside North America (Student Memberships and Associate Student Memberships available at reduced rates.) Working Group Membership ------------------------ Working Groups are interactive networks that foster professional contacts and information flow among technically related individuals, groups, companies, and institutions in specific areas of technology. Individual Membership ($15) Group Memberships and Corporate Memberships are available. Contact SPIE for complete list of working groups and benefits. ------------------------------------------------------------------- Abstract Due Date: 19 September 1994 On-Site Proceedings Manuscript Due Date: 23 January 1995 Manuscript due date for on-site proceedings must be strictly observed. ------------------------------------------------------------------ For a complete text of the Announcement and Call for Papers for SPIE's International Symposium on Aerospace/Defense Sensing and Dual-Use Photonics, contact SPIE at either the European Office, or International Headquarters addresses below. Contact addresses: SPIE in Europe: SPIE European Office c/o HIB-INFONET P.O. Box 4463 N-5028 Bergen, Norway Phone: 47 55 54 37 84 Fax: 47 55 96 21 75 E-mail: spie at hibinc.no SPIE International Headquarters P.O. Box 10 Bellingham, WA 98227-0010 USA Phone: 206/676-3290 Fax: 206/647-1445 E-mail: spie at spie.org Telnet/FTP: spie.org World Wide Web URL: http://www.spie.org ------------------------------------------------------------------- SPIE--The International Society for Optical Engineering SPIE is a nonprofit society dedicated to advancing engineering and scientific applications of optical, electro-optical, and optoelectronic instrumentation, systems, and technology. Its members are scientists, engineers, and users interested in the reduction to practice of these technologies. SPIE provides the means for communicating new developments and applications to the scientific, engineering, and user communities through its publications, symposia, and short courses. SPIE is dedicated to bringing you quality electronic media and online services. ------------------------------------------------------------------- Dennis W. Ruck Air Force Institute of Technology d.ruck at ieee.org Wright-Patterson AFB, Ohio AFIT/ENG, Bldg 642, 2950 P ST, Wright-Patterson AFB OH 45433-7765 Ph. (513) 255-6565 ext. 4285 Fax (513) 476-4055 From UBJTP69 at CCS.BBK.AC.UK Thu Aug 11 14:41:00 1994 From: UBJTP69 at CCS.BBK.AC.UK (Gareth) Date: Thu, 11 Aug 94 14:41 BST Subject: Thesis printing problem Message-ID: It has been reported that the file gaskell.thesis.ps.Z in the neuroprose archive will not print out from some unix systems. If this is the case, edit the file and delete the initial character (^D). The file should then print out properly. Sorry for any inconvenience. 
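For example, after uncompressing, something like the following would do it (a hypothetical Python snippet; it assumes the offending character is a single leading Ctrl-D, byte 0x04):

    # Strip a single leading Ctrl-D (0x04) from the PostScript file.
    data = open("gaskell.thesis.ps", "rb").read()
    if data[:1] == b"\x04":
        open("gaskell.thesis.ps", "wb").write(data[1:])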
Gareth Gaskell From pluto at cs.ucsd.edu Thu Aug 11 16:49:04 1994 From: pluto at cs.ucsd.edu (Mark Plutowski) Date: Thu, 11 Aug 94 13:49:04 -0700 Subject: Paper in Neuroprose: estimating generalization with cross-validation Message-ID: <9408112049.AA14173@beowulf> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/pluto.imse.ps.Z |This is related to the generalization debate on Machine Learning List.| By popular request, the following paper has been placed in Neuroprose. Title: "Cross-validation estimates integrated mean squared error." Authors: Plutowski , M. (1,3), S. Sakata (2), H. White (2,3). (1) Computer Science and Engineering (2) Economics (3) Institute for Neural Computation (All at UCSD). A discussion on the Machine Learning List prompted the question "Have theoretical conditions been established under which cross-validation is justified?" The answer is "Yes." The statistical literature abounds with application-specific and model-specific demonstrations that cross-validation is statistically accurate and precise for use as a real-world estimate of an ideal measure of generalization known as Integrated Mean Squared Error (IMSE). IMSE is the average mean squared error, averaged over all training sets of a particular size. IMSE is closely related to Prediction Risk, (aka statistical risk) therefore such results are applicable to statistical risk as well (as averaged over training sets of a particular size). See Plutowski's thesis in Neuroprose/Thesis for explicit relationship between IMSE and statistical risk. This paper extends such results to apply to nonlinear regression in general. Strong convergence (w.p.1) and unbiasedness are proved. The key assumption is that the training and test data be independent and identically distributed (i.i.d.) - therefore, data must be drawn from the same space (stationarity), independent of previously sampled datum. Note that if training data are explicitly excluded from the test sample, then the i.i.d. assumption does not hold, since in this case the test sample is drawn from a space that depends upon (is conditioned by) particular choice of training sample. Therefore, the measure of generalization referred to in the raging debate on the Machine Learning List would not meet the conditions employed by the results in this paper. Filename: pluto.imse.ps.Z. Title: "Cross-validation estimates integrated mean squared error." File size: 97K compressed, 242K uncompressed. 17 single-spaced pages (8 pages of text, the remainder is a mathematical appendix). Email contact: pluto at cs.ucsd.edu. SUBJECT: theorems proving cross-validation is a statistically accurate and precise estimator of an ideal measure of generalization. Abridged version of this appeared in NIPS 6. = Mark Plutowski PS: Sorry, no hard copies available. From giles at research.nj.nec.com Thu Aug 11 19:01:15 1994 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 11 Aug 94 19:01:15 EDT Subject: Corrected ps file of previously announced TR. Message-ID: <9408112301.AA10157@fuzzy> It seems that a figure in the postscript file of this TR generated printing problems for certain printers. We changed the figure to eliminate this problem. We apologize to anyone who was inconvenienced. In the revised TR, we also included some references that initially were unintentionally excluded. 
Lee Giles, Bill Horne, Tsungnan Lin _________________________________________________________________________________________ Learning a Class of Large Finite State Machines with a Recurrent Neural Network UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-94-94 AND CS-TR-3328 C. L. Giles[1,2], B. G. Horne[1], T. Lin[1,3] [1] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 [2] UMIACS, University of Maryland, College Park, MD 20742 [3] EE Department, Princeton University, Princeton, NJ 08540 {giles,horne,lin}@research.nj.nec.com One of the issues in any learning model is how it scales with problem size. Neural networks have not been immune to scaling issues. We show that a dynamically-driven discrete-time recurrent network (DRNN) can learn rather large grammatical inference problems when the strings of a finite memory machine (FMM) are encoded as temporal sequences. FMMs are a subclass of finite state machines which have a finite memory or a finite order of inputs and outputs. The DRNN that learns the FMM is a neural network that maps directly from the sequential machine implementation of the FMM. It has feedback only from the output and not from any hidden units; an example is the recurrent network of Narendra and Parthasarathy. (FMMs that have zero order in the feedback of outputs are called definite memory machines and are analogous to Time-delay or Finite Impulse Response neural networks.) Due to their topology these DRNNs are at least as powerful as any sequential machine implementation of an FMM and should be capable of representing any FMM. We choose to learn ``particular FMMs.'' Specifically, these FMMs have a large number of states (simulations are for 256 and 512 state FMMs) but have minimal order, relatively small depth and little logic when the FMM is implemented as a sequential machine. Simulations for the number of training examples versus generalization performance and FMM extraction size show that the number of training samples necessary for perfect generalization is less than that sufficient to completely characterize the FMM to be learned. This is in a sense a best case learning problem since any arbitrarily chosen FMM with a minimal number of states would have much more order and string depth and most likely require more logic in its sequential machine implementation. -------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp cs.umd.edu (128.8.128.8) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/papers/TRs ftp> binary ftp> get 3328.ps.Z ftp> quit unix> uncompress 3328.ps.Z OR unix> ftp external.nj.nec.com (138.15.10.100) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get large.fsm.ps.Z ftp> quit unix> uncompress large.fsm.ps.Z --------------------------------------------------------------------------------------
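For readers who want the flavor of the architecture, here is a toy sketch of a recurrent network whose only feedback is from the output (illustrative Python; the sizes, random weights, and input encoding are invented, and no training is shown):

    import numpy as np

    rng = np.random.default_rng(0)
    n_hid, order = 8, 3                  # hidden units; memory order of the outputs

    W_x = rng.normal(size=(n_hid, order + 1))   # taps on current and past inputs
    W_y = rng.normal(size=(n_hid, order))       # taps on past outputs (the feedback)
    w_o = rng.normal(size=n_hid)

    def step(x_taps, y_taps):
        """One time step: hidden layer sees delayed inputs and delayed outputs."""
        h = np.tanh(W_x @ x_taps + W_y @ y_taps)
        return np.tanh(w_o @ h)

    # Run on a binary string encoded as a temporal sequence.
    x_hist, y_hist = [0.0] * (order + 1), [0.0] * order
    for x in [1, 0, 1, 1, 0]:
        x_hist = [float(x)] + x_hist[:-1]
        y = step(np.array(x_hist), np.array(y_hist))
        y_hist = [float(y)] + y_hist[:-1]
    print(y)                              # the network's output after the string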
C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482
==

From grino at ic.upc.es Fri Aug 12 16:11:50 1994
From: grino at ic.upc.es (Robert Grino)
Date: Fri, 12 Aug 1994 16:11:50 UTC+0100
Subject: Paper in Neuroprose: Nonlinear System Identification Using Additive Dynamic Neural Networks
Message-ID: <232*/S=grino/OU=ic/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/grino.sysid.ps.Z

The file grino.sysid.ps.Z is now available for copying from the Neuroprose repository:

NONLINEAR SYSTEM IDENTIFICATION USING ADDITIVE DYNAMIC NEURAL NETWORKS

R. Grino (grino at ic.upc.es)
Instituto de Cibernetica - ESAII
Univ. Politecnica Catalunya
Diagonal, 647, 2nd floor
08028-Barcelona, SPAIN

(Reprint of a SICICA'94 (IFAC) paper)

ABSTRACT: In this work additive dynamic neural models are used for the identification of nonlinear plants in on-line operation. In order to accomplish this task, a gradient parameter adaptation method based on sensitivity analysis is formulated, taking into account that the parameters of the model are arranged in matrix form. This methodology is applied to several nonlinear systems in simulation and with a real dataset to verify its performance.

=============================================================================
Robert Grino                          E-mail: grino at ic.upc.es
Instituto de Cibernetica
Diagonal, 647, 2nd floor              FAX number: (343) 4016605
08028 - Barcelona SPAIN
=============================================================================

From dhw at santafe.edu Fri Aug 12 16:32:37 1994
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Fri, 12 Aug 94 14:32:37 MDT
Subject: No subject
Message-ID: <9408122032.AA06773@chimayo.santafe.edu>

Mark Plutowski recently said on connectionist:

>>> A discussion on the Machine Learning List prompted the question "Have theoretical conditions been established under which cross-validation is justified?" The answer is "Yes." >>>

Mark is being polite by not using names; I am the one he is (implicitly) taking to task, for the following comment on the ML list:

>>> ... an assumption *must always* be present if we are to have any belief in learnability in the problem at hand... However, to give just one example, nobody has yet delineated just what those assumptions are for the technique of cross-validation. >>>

Mark is completely correct in his (implicit) criticism. As he says, there have in fact been decades of work analyzing cross-validation from a sampling theory perspective. Mark's thesis is a major contribution to this literature. Any implications coming from my message that such literature doesn't exist or is somehow invalid are completely mistaken and were not intended. (Indeed, I've had several very illuminating discussions with Mark about his thesis!) The only defense for the imprecision of my comment is that I made it in the context of the ongoing discussion on the ML list that Mark referred to. That discussion concerned off-training set error rather than iid error, so my comments implicitly assumed off-training set error. And as Mark notes, his results (and the others in the literature) don't extend to that kind of error. (Another important distinction between the framework Mark uses and the implicit framework in the ML list discussion is that the latter has been concerned w/ zero-one loss, whereas Mark's work concentrates on quadratic loss.
The no-free-lunch results being discussed in the ML list change form drastically if one uses quadratic rather than zero-one loss. That should be no surprise, given, for example, the results in Michael Perrone's work.) While on the subject of cross-validation and iid error though, it's interesting to note that there is still much to be understood. For example, in the average-data scenario of sampling theory statistics that Mark uses, asymptotic properties are better understood than finite data properties. And in the Bayesian this-data framework, very little is known for any data-size regime (though some work by Dawid on this subject comes to mind).

David Wolpert

From mccauley at ecn.purdue.edu Thu Aug 11 21:51:21 1994
From: mccauley at ecn.purdue.edu (James Darrell McCauley)
Date: Thu, 11 Aug 1994 20:51:21 -0500
Subject: connectionists suggestion
In-Reply-To: <9408112049.AA14173@beowulf>
Message-ID: <199408120151.UAA03022@alx3.ecn.purdue.edu>

Mark Plutowski (pluto at cs.ucsd.edu) writes on 11 Aug 94:
>FTP-host: archive.cis.ohio-state.edu
>FTP-filename: /pub/neuroprose/pluto.imse.ps.Z

May I suggest that posters be encouraged to use the URL format and *not* post instructions on the usage of ftp? I.e., the following would be sufficient:

ftp://archive.cis.ohio-state.edu/pub/neuroprose/pluto.imse.ps.Z

(this way readers may just cut/paste the string into xmosaic or whatever browser they use). Not picking on Dr Plutowski, just a suggestion that came to mind...

--Darrell McCauley, aka jdm5548 at diamond.tamu.edu
James Darrell McCauley, Purdue Univ, West Lafayette, IN 47907-1146, USA
mccauley at ecn.purdue.edu, mccauley%ecn at purccvm.bitnet, pur-ee!mccauley

From harnad at Princeton.EDU Fri Aug 12 23:53:54 1994
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Fri, 12 Aug 94 23:53:54 EDT
Subject: French Cognitive Science Conference
Message-ID: <9408130353.AA25579@clarity.Princeton.EDU>

From: payette at uranus.atoci.uqam.ca
Subject: French International Cognitive Science Conference

ANNOUNCEMENT

The Seventh Colloquium of the Jacques Cartier Center, Lyon, France

THE COGNITIVE SCIENCES: FROM COMPUTATIONAL MODELS TO THE PHILOSOPHY OF MIND

under the aegis of: the Pole Rhones-Alpes of the Cognitive Sciences; Programme Interdisciplinaire de Recherche Cognisciences, CNRS; Universite du Quebec a Montreal; Universite de Montreal; Universite Joseph Fourier; Universite Claude Bernard

Scientific committee:
Denis Fisette (Universite du Quebec a Montreal, Quebec)
Marc Jeannerod (Universite Claude Bernard, Lyon)
Daniel Laurier (Universite de Montreal, Quebec)
Daniel Payette (Universite du Quebec a Montreal, Quebec)
Vincent Rialle (Universite Joseph Fourier, Grenoble)
Guy Tiberghien (Universite Pierre Mendes-France, Grenoble)

Coordination in North America: Daniel Payette and Denis Fisette, Universite du Quebec a Montreal, Dpt de Philosophie, Dpt Psychology; C.P. 8888, Succ A, Montreal (Quebec) H3C-3P8, Canada; E.mail: payette at uranus.atoci.uqam.ca; tel (+514) 987 8418; Fax: (+514) 9876721

Coordination in Europe: Vincent Rialle, Universite J. Fourier, Labo. TIMC-IMAG, Faculte de Medecine, 38706 La Tronche Cedex; E.mail: Vincent.Rialle at imag.fr; Tel. (+33) 76 63 71 87; Fax. (+33) 76 51 8667

DATES: Wednesday, November 30th to Friday, December 2nd 1994

CONFERENCE SITE: Amphitheatre CHARLES BERAUDIER, Conseil Regional RHONE-ALPES, 78 route de Paris, 69751 CHARBONNIERES-les-BAINS, France

*Talks will only be given by invited speakers. (Simultaneous French-English and English-French translation will be provided.)
THEME OF COLLOQUIUM

The modeling of mental processes in the various human cognitive activities has generated increasing interest in the scientific world today. Cognitive models, cognitive simulations, self-organization, adaptation, emergence, genetic selection, Darwinian mentalism and enaction are active research topics in neurological and psychological theory. The cognitive sciences offer a continuum of research extending from the engineering sciences to the philosophy of mind, including the neurosciences, cognitive psychology, linguistics, semantics, semiotics and artificial intelligence. Three subconferences will be organized around the following major complementary themes: (i) Modeling (cognitive and brain functions), (ii) Philosophy of Mind and Epistemology, and (iii) Applications (AI, technical and computational engineering).

(i) Modeling is a point of intersection for all these specialties because it includes the modeling of functions and dysfunctions of the central nervous system, the neurocomputer sciences, the modeling of psychocognitive and mental processes, the emergence of intentional structure on the basis of biological structure, enaction, genetic algorithms, neural networks, artificial "life," etc.

(ii) The philosophical and epistemological subcomponent poses questions like the following: Can we elaborate mathematical models of the mind and use them to describe and explain human behavior? Are we aiming toward a mathematical model of the mind? Can we capture the formal principles of the development and emergence of cognition? Can we technologically recreate thought? Is the computational symbolic paradigm, which has imposed itself for the last decades, still a powerful conceptual tool, or is it proving too reductionistic and, if so, how? What is the epistemological status of, for example, the alternative proposed by the parallel distributed model to the computational models of classical cognitivism? What is the relation of the modeling activity of the cognitive sciences and neurosciences to human experience?

(iii) The applications subconference will consider practical domains in which scientific results have been applied in the treatment of language, the automated cognitive analyses of textual documents (an intersection of linguistics, semantics, semiotics and artificial intelligence), aids to decision making, applications in sensory information processing, etc.

PREPROGRAM

WEDNESDAY, 30 November 1994
8h15 - 8h30   Welcome address from the Conseil Regional
8h30 - 9h     Guy Tiberghien (Universite Pierre Mendes-France, Grenoble) Introduction

SESSION 1: Modelisation neuro et psycho-cognitives
9h - 9h30     Jean Francois Le Ny (Universite Paris-Sud, psychologie cognitive) Pourquoi les modeles cognitifs devraient-ils etre calculatoires ?
9h30 - 9h45   Discussion
9h45 - 10h15  Marc Jeannerod (Universite Claude Bernard, Lyon, neurosciences) Le cerveau representationnel
10h15 - 10h30 Discussion
10h30 - 10h45 PAUSE
10h45 - 11h15 Zenon Pylyshyn (Rutgers University, USA, psychologie cognitive) What's in the Mind? A Computational Approach to an Ancient Question.
11h15 - 11h30 Discussion
11h30 - 12h00 Stevan Harnad (Princeton University, psychologie cognitive) Modeles, mobiles et mentalite
12h00 - 12h15 Discussion

MEAL

14h00 - 14h30 Michel Imbert (Universite Paul Sabatier, Toulouse, neurosciences) De l'etude du cerveau a la comprehension de l'esprit
14h30 - 14h45 Discussion
14h45 - 15h15 Guy Tiberghien (Universite Pierre Mendes-France, Grenoble, psychologie cognitive) Connexionnisme: stade supreme du behaviorisme ?
15h15 - 15h30 Discussion
15h30 - 15h45 PAUSE
15h45 - 16h15 Jacques Demongeot (Universite Joseph Fourier, Grenoble, neurosciences) Memoire d'evocation dans les reseaux de neurones
16h15 - 16h30 Discussion
16h30 - 17h00 Bennet Murdock (Universite de Toronto, psychologie cognitive) The Role of Formal Models in Memory Research
17h00 - 17h15 Discussion
17h15 - 17h45 Robert Proulx (Universite du Quebec a Montreal, neuro-psychologie) Plausibilite biologique de certains systemes de categorisation adaptative a base de reseaux de neurones
17h45 - 18h00 Discussion

THURSDAY, December 1

SESSION 2: Epistemology, Philosophy of Mind and Cognition
9h - 9h30     Elisabeth Pacherie (Universite de Provence, CNRS & CREA, Paris) Domaines cognitifs et modularite
9h30 - 9h45   Discussion
9h45 - 10h15  Pierre Livet (Universite de Provence & CREA, Paris, philosophie) Categorisation et connexionnisme
10h15 - 10h30 Discussion
10h30 - 10h45 PAUSE
10h45 - 11h15 Normand Lacharite (Universite du Quebec a Montreal, epistemologie) Conflits de modeles en theorie de la representation
11h15 - 11h30 Discussion
11h30 - 12h00 Peter Gardenfors (Lund University, Suede, philosophie) Language and the Evolution of Mind
12h00 - 12h15 Discussion

MEAL

14h00 - 14h30 Andy Clark (Washington University, philosophie) Wild Cognition: Putting Representation in its Place
14h30 - 14h45 Discussion
14h45 - 15h15 Kevin Mulligan (Universite de Geneve, Suisse, philosophie) Constance perceptuelle et contenu spatial
15h15 - 15h30 Discussion
15h30 - 15h45 PAUSE
15h45 - 16h15 Ronald De Sousa (Universite de Toronto, epistemologie) La rationalite: un concept normatif ou descriptif ?
16h15 - 16h30 Discussion
16h30 - 17h00 Daniel Laurier (Universite de Montreal, philosophie) Rationalite et naturalisme
17h00 - 17h15 Discussion
17h15 - 17h45 Joelle Proust (CNRS & CREA, Paris, philosophie) Un modele naturaliste de l'intentionnalite
17h45 - 18h00 Discussion

FRIDAY, December 2

SESSION 3: Modelisation IA, Traitement du langage, et semantique cognitive
9h - 9h30     Paul Jorion (Maison des Sciences de l'Homme, Paris, psychologie cognitive) Modelisation du reseau mnesique : une utilisation minimaliste de l'IA
9h30 - 9h45   Discussion
9h45 - 10h15  Bernard Amy (Universite Joseph Fourier, Grenoble, connexionnisme) La place des reseaux neuronaux dans l'IA
10h15 - 10h30 Discussion
10h30 - 10h45 PAUSE
10h45 - 11h15 Paul Bourgine (CEMAGREF, Paris-Antony, IA-modelisation) Co-evolution et emergence du soi
11h15 - 11h30 Discussion
11h30 - 12h00 Paul Pietroski (Universite McGill, Canada, philosophie) What can linguistics teach us about belief
12h00 - 12h15 Discussion

MEAL

14h00 - 14h30 Francois Rastier (Institut National de la Langue Francaise, CNRS, linguistique computationnelle) Le paradigme hermeneutique et la mediation semiotique
14h30 - 14h45 Discussion
14h45 - 15h15 Jean-Guy Meunier (Universite du Quebec a Montreal, semiotique) L'impact des perspectives cognitives dans le traitement de l'information
15h15 - 15h30 Discussion
15h30 - 15h45 PAUSE
15h45 - 16h15 Guy Denhiere (Universite Paris VIII, psychologie cognitive) and Isabelle Tapiero (Universite Lyon II, psychologie cognitive) La signification comme structure emergente : de l'acces au lexique a la comprehension de textes
16h15 - 16h30 Discussion
16h30 - 17h00 Paul Freedman (Centre de Recherche en Informatique de Montreal, IA) La vision artificielle: le traitement intelligent de documents
17h00 - 17h15 Discussion
17h15 - 17h45 Denis Vernant (Universite Pierre Mendes-France, Grenoble, philosophie) L'intelligence de la machine et sa capacite dialogique
17h45 - 18h00 Discussion
18h00: END OF COLLOQUIUM

------------------------------------------------------------------------

ADMISSION FEES (includes access to the conference room, meals and the colloquium documents)
Individuals--------------------------------------------------1500 FF
Students (enclose proof of eligibility with registration)---- 500 FF

----------------------------------------------------------------------------

REGISTRATION FORM (The Cognitive Sciences: From Computational Models to the Philosophy of Mind)

Name:___________________________________________________________________
Status:_____________________________________
Institution/Company:_________________________
Complete Address:_________________________________________________________
Fax:________________________ Phone:______________________
E-mail:__________________________________

Enclosed: check or money order for (_____________________ FF)
(Make check or money order payable to CENTRE JACQUES CARTIER)

- Send information on possibilities of housing in Lyon (______)
- Send me the colloquium brochure (_____)
- November 30 meal __
- December 1 meal __
- December 2 meal __

RETURN TO: CENTRE JACQUES CARTIER, 86 rue Pasteur, 69365 Lyon Cedex 07, France.
Phone: (33) 78 69 72 21

From dhw at santafe.edu Sat Aug 13 01:19:30 1994
From: dhw at santafe.edu (dhw@santafe.edu)
Date: Fri, 12 Aug 94 23:19:30 MDT
Subject: Cross-validation theory
Message-ID: <9408130519.AA08172@chimayo.santafe.edu>

Mark Plutowski recently said on connectionist:

>>> A discussion on the Machine Learning List prompted the question "Have theoretical conditions been established under which cross-validation is justified?" The answer is "Yes." >>>

As Mark goes on to point out, there are decades of work on cross-validation from a likelihood-driven sampling theory perspective. Indeed, Mark's thesis is a major addition to that literature. Mark then correctly notes that this literature doesn't *directly* apply to the discussion on the ML list, since that discussion involves off-training set rather than iid error. It should be noted that there is another important distinction between the framework Mark uses and the implicit framework in the ML list discussion; the latter has been concerned w/ zero-one loss, whereas Mark's work concentrates on quadratic loss. The no-free-lunch results being discussed in the ML list change form drastically if one uses quadratic rather than zero-one loss. That should not be too surprising, given, for example, the results in Michael Perrone's work involving quadratic loss. It's also worth noting that even in the regime of iid error and quadratic loss, there is still much to be understood. For example, in the average-data scenario of sampling theory statistics that Mark uses, asymptotic properties are better understood than finite data properties. And in the Bayesian this-data framework, very little is known for any data-size regime (though some work by Dawid on this subject comes to mind).

David Wolpert

From mdg at magi.ncsl.nist.gov Tue Aug 16 11:58:28 1994
From: mdg at magi.ncsl.nist.gov (Mike Garris x2928)
Date: Tue, 16 Aug 94 11:58:28 EDT
Subject: Announcement: Public Domain OCR
Message-ID: <9408161558.AA02660@magi.ncsl.nist.gov>

ANNOUNCEMENT - PUBLIC DOMAIN OCR

NIST FORM-BASED HANDPRINT RECOGNITION SYSTEM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Michael D. Garris (mdg at magi.ncsl.nist.gov)
James L. Blue, Gerald T. Candela, Darrin L. Dimmick, Jon Geist, Patrick J. Grother, Stanley A. Janet, and Charles L. Wilson
National Institute of Standards and Technology, Building 225, Room A216
Gaithersburg, Maryland 20899
Phone: (301)975-2928 FAX: (301)840-1357

The National Institute of Standards and Technology (NIST) has developed a standard reference form-based handprint recognition system for evaluating optical character recognition (OCR). NIST is making this recognition system freely available to the general public on an ISO-9660 format CD-ROM. The recognition system processes the Handwriting Sample Forms distributed with NIST Special Database 1 and NIST Special Database 3. The system reads handprinted fields containing digits, lower case letters, and upper case letters, and reads a text paragraph containing the Preamble to the U.S. Constitution. This is a source code distribution written primarily in C and is organized into 11 libraries. There are approximately 19,000 lines of code supporting more than 550 subroutines. Source code is provided for form registration, form removal, field isolation, field segmentation, character normalization, feature extraction, character classification, and dictionary-based post-processing. A host of data structures and low-level utilities are also provided.
These utilities include the application of CCITT Group 4 decompression, IHead file manipulation, spatial histograms, Least-Squares fitting, spatial zooming, connected components, Karhunen-Loeve (KL) feature extraction, optimized Probabilistic Neural Network classification, multiple-key sorting, Levenshtein distance dynamic string alignment, and dictionary-based post-processing. Two supporting programs are provided that compute eigenvectors and KL feature vectors for training classifiers. Unlike the recognition system (which is written entirely in C), these two programs contain FORTRAN subroutines. To support these programs, a training set of 168,365 segmented and labeled character images is provided. About 1000 writers contributed to this training set.

The NIST standard reference recognition system is designed to run on UNIX workstations and has been successfully compiled and tested on a Digital Equipment Corporation (DEC) Alpha, Hewlett Packard (HP) Model 712/80, IBM RS6000, Silicon Graphics Incorporated (SGI) Indigo 2, SGI Onyx, SGI Challenge, Sun Microsystems (Sun) IPC, Sun SPARCstation 2, Sun 4/470, and a Sun SPARCstation 10.** Scripts for installation and compilation on these architectures are provided with this distribution.

A CD-ROM distribution of this standard reference system can be obtained free of charge by sending a letter of request to Michael D. Garris at the address above. The letter, preferably on company letterhead, should identify the requesting organization or individuals. This system or any portion of this system may be used without restrictions. However, redistribution of this standard reference recognition system is strongly discouraged as any subsequent corrections or updates will be sent to registered recipients only. This software was produced by NIST, an agency of the U.S. government, and by statute is not subject to copyright in the United States. Recipients of this software assume all responsibilities associated with its operation, modification, and maintenance.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
** Specific hardware and software products identified were used in order to adequately support the development of this technology. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the equipment identified is necessarily the best available for the purpose.

From terry at salk.edu Tue Aug 16 20:12:13 1994
From: terry at salk.edu (Terry Sejnowski)
Date: Tue, 16 Aug 94 17:12:13 PDT
Subject: Neural Computation 6:5
Message-ID: <9408170012.AA07668@salk.edu>

NEURAL COMPUTATION September 1994 Volume 6 Number 5

Articles:

A Bayesian Analysis of Self-Organizing Maps
Stephen P. Luttrell

Network Amplification of Local Fluctuations Causes High Spike Rate Variability, Fractal Firing Patterns and Oscillatory Local Field Potentials
Marius Usher, Martin Stemmler, Christof Koch and Zeev Olami

Note:

Statistical Analysis of an Autoassociative Memory Network
A. M. N.
Fu

Letters:

Loading Deep Networks is Hard
Jiri Sima

Measuring the VC-dimension of a Learning Machine
Vladimir Vapnik, Esther Levin and Yann Le Cun

Neural Nets with Superlinear VC-Dimension
Wolfgang Maass

A Novel Design Method for Multilayer Feedforward Neural Networks
Jihong Lee

An Internal Mechanism for Detecting Parasite Attractors in a Hopfield Network
Jean-Dominique Gascuel, Bahram Moobed and Michel Weinfeld

On Langevin Updating in Multilayer Perceptrons
Thorsteinn Rognvaldsson

Probabilistic Winner-Take-All Learning Algorithm for Radial-Basis-Function Neural Classifiers
Hossam Osman and Moustafa M. Fahmy

Realization of the "Weak Rod" by a Double Layer Parallel Network
T. Matsumoto and K. Kondo

Learning in Neural Networks with Material Synapses
Daniel J. Amit and Stefano Fusi

Model Based on Extracellular Potassium for Spontaneous Synchronous Activity in Developing Retinas
Pierre-Yves Burgi and Norberto M. Grzywacz

Bayesian Modeling and Classification of Neural Signals
Michael S. Lewicki

-----

SUBSCRIPTIONS - 1994 - VOLUME 6 - BIMONTHLY (6 issues)
______ $40  Student and Retired
______ $65  Individual
______ $166 Institution
Add $22 for postage and handling outside USA (+7% GST for Canada).
(Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu

-----

From ucganlb at ucl.ac.uk Wed Aug 17 05:01:47 1994
From: ucganlb at ucl.ac.uk (Dr Neil Burgess - Anatomy UCL London)
Date: Wed, 17 Aug 94 10:01:47 +0100
Subject: pre-print in neuroprose: hippocampus - spatial models
Message-ID: <93386.9408170901@link-1.ts.bcc.ac.uk>

ftp://archive.cis.ohio-state.edu/pub/neuroprose/burgess.hbtnn.ps.Z

The above file has been put on neuroprose for anonymous ftp, www or whatever; contact n.burgess at ucl.ac.uk with any retrieval problems. All the best, Neil

HIPPOCAMPUS - SPATIAL MODELS

Neil Burgess, Michael Recce & John O'Keefe
Dept. of Anatomy, University College London, London WC1E 6BT, U.K.
e-mail: n.burgess at ucl.ac.uk

This is a brief review of models of the hippocampus, focussing on spatial aspects of cell firing and hippocampal function. To appear in `The Handbook of Brain Theory and Neural Networks' (M. A. Arbib, Ed.), Bradford Books / MIT Press, 1995, and restricted in length and number of citations accordingly. 8 pages, 0.46 Mbytes uncompressed; hard copies available in extreme circumstances only.

From uzimmer at informatik.uni-kl.de Wed Aug 17 12:14:01 1994
From: uzimmer at informatik.uni-kl.de (Uwe R.
Zimmer, AG vP)
Date: Wed, 17 Aug 94 17:14:01 +0100
Subject: Papers available (world modelling, mobile robots)
Message-ID: <940817.171401.723@ag_vp_file_server.informatik.uni-kl.de>

A couple of recent papers about:

--------------------------------------------------------------
--- Learning, Robotics, Visual Search, Navigation,        ---
--- Topologic Maps & Robust Mobile Robots                 ---
--- Neural Networks                                       ---
--------------------------------------------------------------

are now available via FTP:

------------------------------------------------------------------------
--- Comparing World-Modelling Strategies for Autonomous Mobile Robots
------------------------------------------------------------------------
--- File name is: Zimmer.Comparison.ps.Z
--- IWK `94, Ilmenau, Germany, September 27 - 30, 1994

Comparing World-Modelling Strategies for Autonomous Mobile Robots
Uwe R. Zimmer & Ewald von Puttkamer

The focus of this paper is on strategies for adapting a couple of internal representations to the actual environment of a mobile robot.

From svc at demos.lanl.gov Tue Aug 16 15:22:10 1994
From: svc at demos.lanl.gov (Stephen Coggeshall)
Date: Tue, 16 Aug 94 13:22:10 MDT
Subject: graduate student positions
Message-ID: <9408161922.AA13573@demos.lanl.gov>

At Los Alamos National Laboratory in New Mexico we have a small research effort using adaptive computational models for a variety of projects. At this time there may be the possibility of a few graduate student positions available specifically for pattern recognition/database mining with application to financial problems. We are interested in highly motivated, self-directed students with strong backgrounds in programming, math, and neural net applications. Interested parties can contact Steve (svc at lanl.gov). Please describe briefly your past work and current interests, as well as your availability.

From R.Gaizauskas at dcs.shef.ac.uk Fri Aug 19 17:12:12 1994
From: R.Gaizauskas at dcs.shef.ac.uk (Robert John Gaizauskas)
Date: Fri, 19 Aug 94 17:12:12 BST
Subject: AISB-95 WORKSHOP/TUTORIAL CALL
Message-ID: <9408191612.AA03843@dcs.shef.ac.uk>

-------------------------------------------------
AISB-95: CALL FOR WORKSHOP AND TUTORIAL PROPOSALS
-------------------------------------------------

Call for Workshop Proposals: AISB-95
University of Sheffield, Sheffield, England
April 3 -- 4, 1995
Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB)

The AISB Committee invites proposals for workshops to be held in conjunction with the Tenth Biennial Conference on AI and Cognitive Science (AISB-95). While the main conference will run for three days from Wednesday, April 5 to Friday, April 7, the workshops will be held on the two days preceding the main event: Monday, April 3 and Tuesday, April 4. The main conference has the theme "Hybrid Problems, Hybrid Solutions" (see the main conference call) and while proposals for workshops related to that theme would be particularly welcome, proposals are invited for workshops relating to any aspect of Artificial Intelligence or the Simulation of Behaviour. Proposals, from an individual or a pair of organisers, for workshops between 0.5 and 2 days long will be considered. Workshops will probably address topics which are at the forefront of research, but not yet sufficiently developed to warrant a full-scale conference.

Submission:
----------
A workshop proposal should contain the following information:

1. Workshop Title

2. A detailed outline of the workshop.
This should include the necessary background and the potential target audience for the workshop and a justified estimate of the number of possible attendees. Please also state the length and preferred date(s) of the workshop. Specify any equipment requirements, indicating whether the organisers would be expected to meet them.

3. A brief resume of the organiser(s). This should include: background in the research area, references to published work in the topic area and relevant experience, such as previous organisation or chairing of workshops.

4. Administrative information. This should include: name, mailing address, phone number, fax, and email address if available. In the case of multiple organisers, information for each organiser should be provided, but one organiser should be identified as the principal contact.

5. A draft Call for Participation. This should serve the dual purposes of informing and attracting potential participants.

The organisers of accepted workshops are responsible for issuing a call for participation, reviewing requests to participate and scheduling the workshop activities within the constraints set by the Workshop Organiser. They are also responsible for submitting a collated set of papers for their workshop to the Workshop Organiser.

Dates:
------
Intentions to organise a workshop should be made known to the Workshop Organiser as soon as possible. Proposals must be received by October 18th 1994. Decisions about topics and speakers will be made in early November. Collated sets of papers must be received by March 15th 1995.

Proposals should be sent to:

Dr. Robert Gaizauskas
Department of Computer Science
University of Sheffield
211 Portobello Street
Regent Court
Sheffield S1 4DP
U.K.
email: robertg at dcs.shef.ac.uk
phone: +44 (0)742 825572
fax: +44 (0)742 780972

Electronic submission (plain ascii text) is highly preferred, but hard copy submission is also accepted, in which case 5 copies should be submitted. Proposals should not exceed 2 sides of A4 (i.e. 120 lines of text approx.).

---------------------------------------------------------------------

Call for Tutorial Proposals: AISB-95
University of Sheffield, Sheffield, England
April 3 -- 4, 1995
Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB)

The AISB Committee invites proposals for Tutorials to be held in conjunction with the Tenth Biennial Conference on AI and Cognitive Science (AISB-95). While the main conference will run for three days from Wednesday, April 5 to Friday, April 7, the tutorials will be held on the two days preceding the main event: Monday, April 3 and Tuesday, April 4. Proposals for full and half day tutorials, from an individual or pair of presenters, will be considered. They may be offered both on standard topics and on new and more advanced aspects of Artificial Intelligence or Simulation of Behaviour. Anyone interested in presenting a tutorial should submit a proposal to the Workshop Organiser Dr Robert Gaizauskas (addresses below).

Submission:
----------
A tutorial proposal should contain the following information:

1. Tutorial Title

2. A brief description of the tutorial, suitable for inclusion in a brochure.

3. A detailed outline of the tutorial. This should include the necessary background and the potential target audience for the tutorial and a justified estimate of the number of possible attendees. Please also state the length and preferred date(s) of the tutorial.
Specify any equipment requirements, indicating whether the organisers would be expected to meet them.

4. A brief resume of the presenter(s). This should include: background in the tutorial area, references to published work in the topic area and relevant experience. Published work should, ideally, include a published tutorial-level article on the subject. Relevant experience is teaching experience, including previous conference tutorials or short courses presented.

5. Administrative information. This should include: name, mailing address, phone number, fax, and email address if available. In the case of multiple presenters, information for each presenter should be provided, but one presenter should be identified as the principal contact.

The presenter(s) of accepted tutorials must submit a set of tutorial notes (which may include relevant tutorial-level publications) to the Workshop Organisers by March 15th 1995.

Dates:
------
Intentions to organise a tutorial should be made known to the Workshop Organiser as soon as possible. Proposals must be received by October 18th 1994. Decisions about tutorial topics and speakers will be made in early November. Tutorial notes must be received by March 15th 1995.

Proposals should be sent to:

Dr. Robert Gaizauskas
Department of Computer Science
University of Sheffield
211 Portobello Street
Regent Court
Sheffield S1 4DP
U.K.
email: robertg at dcs.shef.ac.uk
phone: +44 (0)742 825572
fax: +44 (0)742 780972

Electronic submission (plain ascii text) is highly preferred, but hard copy submission is also accepted, in which case 5 copies should be submitted. Proposals should not exceed 2 sides of A4 (i.e. 120 lines of text approx.).

From bishopc at helios.aston.ac.uk Fri Aug 19 09:33:57 1994
From: bishopc at helios.aston.ac.uk (bishopc)
Date: Fri, 19 Aug 1994 13:33:57 +0000
Subject: Research Associate - Software Support
Message-ID: <9192.9408191233@sun.aston.ac.uk>

-------------------------------------------------------------------
Aston University
Neural Computing Research Group

RESEARCH ASSOCIATE - SYSTEMS AND SOFTWARE SUPPORT
-------------------------------------------------

Applications are invited for a position as a Research Associate within the Neural Computing Research Group both to provide support for the Group's local network of Sun workstations and to undertake software development and research in support of projects within the Group. The Neural Computing Research Group currently comprises three professors, two lecturers, three postdoctoral research fellows and ten postgraduate research students. In addition, two further Lectureships have recently been advertised. Current research activity focusses on principled approaches to neural computing, and spans a broad spectrum from theoretical foundations to industrial and commercial applications.

The ideal candidate will have significant experience of the UNIX operating system and system maintenance, experience of software engineering in C++, and an understanding of neural networks. The responsibilities of the successful candidate will be as follows:

(1) To provide system support for the Group's LAN of Sun UNIX workstations and associated peripherals, ensuring that an efficient working environment is maintained.

(2) To support the development, testing and documentation of the NetLib C++ library of neural network software.

(3) To assist with numerical experiments in support of research projects, including industrial contracts.
This aspect of the work may provide opportunities for joint publication in academic journals.

(4) To provide such other software support as may be required, such as the maintenance of LaTeX, provision of WWW pages for the Group, etc.

Salaries will be 13,941 UK pounds or above, depending on the experience and qualifications of the successful applicant.

If you wish to apply for this position, please send a CV, together with the names and addresses of 3 referees, to:

Professor Chris Bishop
Neural Computing Research Group
Aston University
Birmingham B4 7ET, U.K.
Tel: 021 359 3611 ext. 4270
Fax: 021 333 6215
e-mail: c.m.bishop at aston.ac.uk

closing date: 9 September 1994

From harnad at Princeton.EDU Fri Aug 19 22:51:41 1994
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Fri, 19 Aug 94 22:51:41 EDT
Subject: Brain Rhythms & Cognition: PSYCOLOQUY Call for Commentary
Message-ID: <9408200251.AA06967@clarity.Princeton.EDU>

Pulvermueller et al: BRAIN RHYTHMS, CELL ASSEMBLIES AND COGNITION

The target article whose abstract appears below has just been published in PSYCOLOQUY, a refereed electronic journal of Peer Commentary sponsored by the American Psychological Association. Formal commentaries are now invited. The full text can be easily and instantly retrieved by a variety of simple means, described below. Instructions for Commentators appear after the retrieval instructions.

TARGET ARTICLE AUTHORS' RATIONALE FOR SOLICITING COMMENTARY: Fast periodic brain responses have been investigated in various mammals, humans included. Although most neuroscientists agree on the importance of these processes, it is not at all clear what role they play in cortical and subcortical processing. Are they simply a byproduct of perceptual processes, or do they play a role in what can be called higher or cognitive processing in the brain? We tried to answer this question by performing experiments in which spectral responses to meaningful words and physically similar but meaningless pseudowords were recorded from the human cortex. The result, differential 30-Hz responses to these stimuli, is interpreted in the framework of a Hebbian cell assembly theory. We hope that both the results and the brain-theoretic approach will stimulate a fruitful multidisciplinary discussion.

-----------------------------------------------------------------------
psycoloquy.94.5.48.brain-rhythms.1.pulvermueller   Friday 19 August 1994
ISSN 1055-0143 (30 paragraphs, 10 figs, 9 notes, 61 refs, 1203 lines)
PSYCOLOQUY is sponsored by the American Psychological Association (APA)
Copyright 1994 Friedemann Pulvermueller et al.

BRAIN RHYTHMS, CELL ASSEMBLIES AND COGNITION: EVIDENCE FROM THE PROCESSING OF WORDS AND PSEUDOWORDS

Friedemann Pulvermueller (1), Hubert Preissl (1), Carsten Eulitz (2), Christo Pantev (2), Werner Lutzenberger (1), Thomas Elbert (2), Niels Birbaumer (1, 3)

(1) Institut fuer Medizinische Psychologie und Verhaltensneurobiologie, Universitaet Tuebingen, Gartenstrasse 29, 72074 Tuebingen, Germany; PUMUE at mailserv.zdv.uni-tuebingen.de
(2) Institut fuer Experimentelle Audiologie, Universitaet Muenster, Kardinal von Galen-Ring 10, 48149 Muenster, Germany
(3) Universita degli Studi, Padova, Italy

ABSTRACT: In modern brain theory, cortical cell assemblies are assumed to form the basis of higher brain functions such as form and word processing. When gestures or words are produced and perceived repeatedly by the infant, cell assemblies develop which represent these building blocks of cognitive processing.
This leads to an obvious prediction: cell assembly activation ("ignition") should take place upon presentation of items relevant for cognition (e.g., words, such as "moon"), whereas no ignition should occur with meaningless items (e.g., pseudowords, such as "noom"). Cell assembly activity may be reflected by high-frequency brain responses, such as synchronous oscillations or rhythmic spatiotemporal activity patterns in which large numbers of neurons participate. In recent MEG and EEG experiments, differential gamma-band responses of the human brain were observed upon presentation of words and pseudowords. These findings are consistent with the view that fast coherent and rhythmic activation of large neuronal assemblies takes place with words but not pseudowords.

KEYWORDS: brain theory, cell assembly, cognition, event related potentials (ERP), electroencephalograph (EEG), gamma band, Hebb, language, lexical processing, magnetoencephalography (MEG), psychophysiology, periodicity, power spectral analysis, synchrony

The article is retrievable from the following sites:

ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1994.volume.5
http://info.cern.ch/hypertext/DataSources/bySubject/Psychology/Psycoloquy.html
http://www.princeton.edu/~harnad/
gopher://gopher.cic.net/11/e-serials/alphabetic/p/psycoloquy
gopher://gopher.lib.virginia.edu/11/alpha/psyc
gopher://wachau.ai.univie.ac.at/11/archives/Psycoloquy

The filename is: psycoloquy.94.5.48.brain-rhythms.1.pulvermueller

To retrieve a file by ftp from a Unix/Internet site, type either:
ftp princeton.edu
or
ftp 128.112.128.1

When you are asked for your login, type: anonymous
Enter password as per instructions (your password is your own userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") and then change directories with:
cd pub/harnad/Psycoloquy

To show the available files, type: ls
Next, retrieve the file you want with (for example):
get psycoloquy.94.5.48.brain-rhythms.1.pulvermueller
or
mget *pulv*
When you have the file(s) you want, type: quit

----------------------------------------------------------------

The file is also retrievable through archie, gopher, and World-Wide Web using the URLs (Universal Resource Locators):

http://info.cern.ch/hypertext/DataSources/bySubject/Psychology/Psycoloquy.html
ftp://princeton.edu/pub/harnad/Psycoloquy
or
gopher://gopher.princeton.edu/1ftp%3aprinceton.edu%40/pub/harnad/BBS/

----------------------------------------------------------------

Certain non-Unix/Internet sites have a facility you can use that is equivalent to the above. Sometimes the procedure for connecting to princeton.edu will be a two step process such as:
ftp
followed at the prompt by:
open princeton.edu
or
open 128.112.128.1
In case of doubt or difficulty, consult your system manager.

------------------------------------------------------------------

Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers -- ftpmail at decwrl.dec.com and bitftp at pucc.bitnet -- that will do the transfer for you. Send either one the one line message:
help
for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you).
--------------------------------------------------------------------------
INSTRUCTIONS FOR PSYCOLOQUY AUTHORS AND COMMENTATORS

PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 40,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral science, cognitive science, neuroscience, social science, etc.). All contributions are refereed.

Target article length should normally not exceed 500 lines [c. 4500 words]. Commentaries and responses should not exceed 200 lines [c. 1800 words]. All target articles, commentaries and responses must have (1) a short abstract (up to 100 words for target articles, shorter for commentaries and responses), (2) an indexable title, and (3) the authors' full name(s) and institutional address(es). In addition, for target articles only: (4) 6-8 indexable keywords, (5) a separate statement of the authors' rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field? what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses). All paragraphs should be numbered in articles, commentaries and responses (see format of already published articles in the PSYCOLOQUY archive; line length should be < 80 characters, no hyphenation).

It is strongly recommended that all figures be designed so as to be screen-readable ascii. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article.

PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected.

Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication.
However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY.

Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu
Anonymous ftp archive is DIRECTORY pub/harnad/Psycoloquy HOST princeton.edu

From miku at sedal.su.oz.au Tue Aug 23 03:02:45 1994
From: miku at sedal.su.oz.au (Michael Usher)
Date: Tue, 23 Aug 1994 17:02:45 +1000 (EST)
Subject: Sixth Australian Conference on Neural Networks
Message-ID: <199408230702.RAA26903@sedal.sedal.su.OZ.AU>

For those people putting the finishing touches to their submissions, the Author's Style Guidelines are now available via the World Wide Web. LaTeX style information can be found at URL:

http://www.sedal.su.oz.au/acnn95

The conference programme and registration details will also be placed there, when they become available. Further queries about ACNN'95 should be directed to . Problems with the WWW server should be directed to me.

Michael Usher
--
Michael Usher                Systems Administrator         miku at sedal.su.OZ.AU
Systems Engineering & Design Automation Lab (SEDAL)        Tel: +61 2 692 4135
Department of Electrical Engineering, Building J03         Fax: +61 2 660 1228
University of Sydney, NSW 2006, AUSTRALIA

From shultz at hebb.psych.mcgill.ca Tue Aug 23 09:10:34 1994
From: shultz at hebb.psych.mcgill.ca (Tom Shultz)
Date: Tue, 23 Aug 94 09:10:34 EDT
Subject: No subject
Message-ID: <9408231310.AA19055@hebb.psych.mcgill.ca>

Subject: Paper available: A connectionist model of the learning of personal pronouns in English.
Date: 23 August '94
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/shultz.pronouns.ps.Z

-------------------------------------------------------------

The following paper has been placed in the Neuroprose archive at Ohio State University:

A connectionist model of the learning of personal pronouns in English. (13 pages)

Thomas R. Shultz, David Buckingham, & Yuriko Oshima-Takane
Department of Psychology & McGill Cognitive Science Centre
McGill University
Montreal, Quebec, Canada H3A 1B1
shultz at psych.mcgill.ca

Abstract

Both experimental and observational psycholinguistic research has shown that children's acquisition of first and second person pronouns is affected by the opportunity to hear these pronouns used in speech not addressed to them. These effects were simulated with the cascade-correlation connectionist algorithm. The networks learned, in effect, to produce the pronouns "me" and "you" depending on identification of the speaker, addressee, and referent. Analysis of network performance and structure indicated that generalization to correct pronoun production was aided by listening to non-addressed speech and that persistent pronoun errors were created by listening to directly addressed speech. It was noted that explicit symbolic rule models would likely have difficulty simulating the pattern frequency effects common to the present simulations and to the natural language environment of the child.

The paper has been published in S. J. Hanson, T. Petsche, M. Kearns, & R. L. Rivest (Eds.) (1994). Computational learning theory and natural learning systems, Vol. 2: Intersection between theory and experiment (pp. 347-362). Cambridge, MA: MIT Press.

Instructions for ftp retrieval of this paper are given below.
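[For readers who want the flavor of cascade-correlation before fetching the paper, here is a minimal sketch of the recruitment idea. It is a simplification of my own, not the authors' code: the toy XOR task, the random candidate pool, and the least-squares output layer are stand-ins, whereas Fahlman and Lebiere's algorithm trains candidates and output weights by gradient methods. New hidden units are recruited one at a time by maximizing the covariance between a candidate's activation and the network's residual error, then frozen in place.]

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pronoun-production mapping: XOR on two inputs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

def fit_outputs(A, T):
    # Least-squares output weights from activations A (bias included) to targets T.
    return np.linalg.lstsq(A, T, rcond=None)[0]

def score(v, E):
    # Cascade-correlation's selection criterion: covariance magnitude between
    # a candidate unit's activations v and the residual errors E.
    return np.abs((v - v.mean()) @ (E - E.mean(axis=0))).sum()

A = np.hstack([X, np.ones((len(X), 1))])     # inputs plus a bias column
for _ in range(3):                           # recruit up to three hidden units
    E = T - A @ fit_outputs(A, T)            # residual error of the current net
    if np.abs(E).max() < 1e-3:
        break
    cands = rng.normal(0., 2., size=(500, A.shape[1]))
    acts = np.tanh(A @ cands.T)              # candidates see inputs AND old units
    best = max(range(500), key=lambda i: score(acts[:, i], E))
    A = np.hstack([A, acts[:, [best]]])      # install the winner, weights frozen

print("max output error:", np.abs(T - A @ fit_outputs(A, T)).max())

[The point of the covariance criterion is that each new unit is driven to respond exactly where the current network still errs, which is what lets the architecture grow to fit the pattern frequency effects discussed in the abstract.]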
If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca
Please do not reply directly to this message.

FTP INSTRUCTIONS:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get shultz.pronouns.ps.Z
ftp> quit
unix> uncompress shultz.pronouns.ps.Z

Thanks to Jordan Pollack for maintaining this archive.

Tom Shultz

From tap at cs.toronto.edu Tue Aug 23 16:18:48 1994
From: tap at cs.toronto.edu (Tony Plate)
Date: Tue, 23 Aug 1994 16:18:48 -0400
Subject: Thesis available for ftp
Message-ID: <94Aug23.161849edt.240@neuron.ai.toronto.edu>

Ftp-host: ftp.cs.utoronto.ca
Ftp-filename: /pub/tap/plate.thesis.2up.ps.Z
Ftp-filename: /pub/tap/plate.thesis.ps.Z

The following thesis is available for ftp. There are two versions: plate.thesis.ps.Z prints on 216 sheets of paper, and plate.thesis.2up.ps.Z prints on 108 sheets. The compressed files are around 750Kb each.

Distributed Representations and Nested Compositional Structure

by Tony A. Plate*
Department of Computer Science, University of Toronto, 1994

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy in the Graduate Department of Computer Science at the University of Toronto.

Abstract

Distributed representations are attractive for a number of reasons. They offer the possibility of representing concepts in a continuous space, they degrade gracefully with noise, and they can be processed in a parallel network of simple processing elements. However, the problem of representing nested structure in distributed representations has been for some time a prominent concern of both proponents and critics of connectionism (Fodor & Pylyshyn 1988; Smolensky 1990; Hinton 1990). The lack of connectionist representations for complex structure has held back progress in tackling higher-level cognitive tasks such as language understanding and reasoning. In this thesis I review connectionist representations and propose a method for the distributed representation of nested structure, which I call "Holographic Reduced Representations" (HRRs). HRRs provide an implementation of Hinton's (1990) "reduced descriptions". HRRs use circular convolution to associate atomic items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, and predicates can be represented in a fixed-width vector. These representations are items in their own right, and can be used in constructing compositional structures. The noisy reconstructions extracted from convolution memories can be cleaned up by using a separate associative memory that has good reconstructive properties. Circular convolution, which is the basic associative operator for HRRs, can be built into a recurrent neural network. The network can store and produce sequences. I show that neural network learning techniques can be used with circular convolution in order to learn representations for items and sequences. One of the attractions of connectionist representations of compositional structures is the possibility of computing without decomposing structures. I show that it is possible to use dot-product comparisons of HRRs for nested structures to estimate the analogical similarity of the structures. This demonstrates how the surface form of connectionist representations can reflect underlying structural similarity and alignment.
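[A quick illustration of the binding operator described in the abstract -- a sketch of mine, not code from the thesis; the vector dimension and the test items are arbitrary choices. Circular convolution can be computed in O(n log n) time via FFTs, and an approximate inverse (involution) decodes a binding up to noise that a clean-up memory would then remove.]

import numpy as np

rng = np.random.default_rng(1)
n = 1024                                   # vector dimension (arbitrary)

def cconv(a, b):
    # Circular convolution: the HRR binding operator, via FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    # Approximate inverse under circular convolution: a*[j] = a[-j mod n].
    return np.concatenate(([a[0]], a[-1:0:-1]))

# Random items with i.i.d. N(0, 1/n) elements, so each has norm about 1.
role, filler, other = rng.normal(0., 1. / np.sqrt(n), (3, n))

trace = cconv(role, filler)                # bind the role to the filler
decoded = cconv(involution(role), trace)   # noisy reconstruction of the filler

# The reconstruction is far more similar to the true filler than to an
# unrelated item; a clean-up memory would map it back to the exact vector.
print(decoded @ filler, decoded @ other)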
* New Address:
Tony Plate, Department of Chemical Engineering
University of British Columbia
2216 Main Mall, Vancouver, BC, Canada V6T 1Z4
tap at chml.ubc.ca

From d.rosen at ieee.org Wed Aug 24 10:47:12 1994
From: d.rosen at ieee.org (d.rosen@ieee.org)
Date: Wed, 24 Aug 94 07:47:12 -0700
Subject: Software Engineer/Developer (N.Y.) [Neural Networks] Wanted
Message-ID: <9408241445.AA14964@solstice>

Full-time position for a Software Engineer/Developer with a recent degree.
Possible alternative: part-time experienced Software Engineer/Developer.
Starting date: Immediate/September.

A federally-funded research team at New York Medical College is applying neural networks and advanced probabilistic/statistical methods to improve the accuracy with which the stage (of advancement) of cancer cases can be evaluated -- an important factor in determining treatment. We seek a skilled developer to take primary responsibility for the design and implementation of our neural network software, which will be geared towards flexible experimental use in our fast-paced research program, as well as a simple GUI prototype of clinical production software. The successful candidate will work with us (medical, neural network, and statistical researchers) to plan the best path from our current C code to a more carefully-designed, extensible, OO approach, perhaps with partial rapid implementation in an interpreted language, and evaluate the possible role of other available tools and libraries. It is expected that eventually, much of the resulting software will be freely distributed for use in many fields.

Candidates should have demonstrably outstanding skills in designing object-oriented software, and in C++ (or both C and some OO language) development under the Unix[/X11] environment. Knowledge of as many of the following as possible is preferred: neural nets, statistics, numerical/scientific computation, portable GUI, prototyping language (Python, Smalltalk, Perl, S-plus, ...), MS Windows, and Unix system administration.

New York Medical College (NYMC) is located in the community of Valhalla, NY, just half an hour north of New York City. The position would be on-site, though doing some portion of the work remotely could perhaps be arranged. NYMC is the third-largest private medical university in the United States. It is an Equal Opportunity and Affirmative Action Institution. Currently we do not believe we could justify hiring an individual who is not already authorized to work in the U.S.

If you are interested and qualified, please e-mail your resume to me as soon as possible (plain text preferred).

-- David Rosen, PhD

From iiscorp at netcom.com Wed Aug 24 18:48:07 1994
From: iiscorp at netcom.com (IIS Corp)
Date: Wed, 24 Aug 94 15:48:07 PDT
Subject: Soft Computing Days in San Francisco
Message-ID:

Soft Computing Days in San Francisco

Zadeh, Widrow, Koza, Ruspini, Stork, Whitley, Bezdek, Bonissone, and Berenji on Soft Computing: Fuzzy Logic, Neural Networks, and Genetic Algorithms

Three short courses
San Francisco, CA
October 24-28, 1994

Traditional (hard) computing methods do not provide sufficient capabilities to develop and implement intelligent systems. Soft computing methods have proved to be important practical tools for building such systems. The following three courses, offered by Intelligent Inference Systems Corp., will focus on all major soft computing technologies: fuzzy logic, neural networks, genetic algorithms, and genetic programming. These courses may be taken either individually or in combination.
Course 1: Artificial Neural Networks (Oct. 24) - Bernard Widrow and David Stork
Course 2: Genetic Algorithms and Genetic Programming (Oct. 25) - John Koza and Darrell Whitley
Course 3: Fuzzy Logic Inference (Oct. 26-28) - Lotfi Zadeh, Jim Bezdek, Enrique Ruspini, Piero Bonissone, and Hamid Berenji

For further details on course topics and registration information, send an email to iiscorp at netcom.com or contact Intelligent Inference Systems Corp., Phone (415) 988-9934, Fax: (415) 988-9935. A detailed brochure will be sent to you as soon as possible.

From ken at phy.ucsf.edu Thu Aug 25 08:16:49 1994
From: ken at phy.ucsf.edu (ken@phy.ucsf.edu)
Date: Thu, 25 Aug 1994 05:16:49 -0700
Subject: postdoctoral position available in computational neuroscience
Message-ID: <9408251216.AA20136@coltrane.ucsf.EDU>

Christof Schreiner works on the physiology of auditory cortex. His lab has defined several of the auditory parameters that seem to be mapped in auditory cortex. He has funding for a postdoctoral position for theoretical work aimed at understanding the mapping to auditory cortex and its possible consequences for representation of complex sounds. His description follows:

The objective is to develop network models of the mammalian auditory cortex on the basis of a broad data base of physiological observations from our laboratory. Based on the spectral-temporal filter properties of neurons and their spatial distribution in the cortex, consequences for the cortical representation of complex signals (such as animal vocalizations and speech) shall be evaluated. Previous experience in signal processing and self-organizing or other neural network classes is required.

Christof and I are both members of the Keck Center for Integrative Neuroscience at UCSF, a group of 10 faculty working on systems neuroscience including Michael Stryker, Michael Merzenich, Steve Lisberger, Allison Doupe, Alan Basbaum, Roger Nicoll, Howard Fields, and Henry Ralston. The postdoc will work in the Keck Center, and both Christof and I will be available to work closely with them.

DO NOT SEND YOUR APPLICATIONS TO ME!! Send applications to:

Christof Schreiner
Dept. of Otolaryngology
Box 0732
UCSF
513 Parnassus
SF, CA 94143-0732
email: chris at phy.ucsf.edu

Please send a cv, and names, addresses and phone numbers of three individuals who can provide references for you. Copies of your publications would also be helpful.

Ken Miller

From bishopc at helios.aston.ac.uk Thu Aug 25 07:18:00 1994
From: bishopc at helios.aston.ac.uk (bishopc)
Date: Thu, 25 Aug 1994 11:18:00 +0000
Subject: NIPS Workshop - Inverse Problems
Message-ID: <4310.9408251018@sun.aston.ac.uk>

-------------------------------------------------------------------
NIPS*94 Workshop: DOING IT BACKWARDS:
NEURAL NETWORKS AND THE SOLUTION OF INVERSE PROBLEMS
----------------------------------------------------

Organizer: Chris M Bishop

CALL FOR PRESENTATIONS
----------------------

Introduction:
------------
Many of the tasks for which neural networks are commonly used correspond to the solution of an `inverse' problem. Such tasks are characterized by the existence of a well-defined, deterministic `forward' problem which might, for instance, correspond to causality in a physical system. By contrast the inverse problem may be ill-posed, and may exhibit multiple solutions.
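[To make the ill-posedness concrete, a small numerical illustration of my own, not from the workshop call: take the deterministic forward map y = x^2. A model fit to (y, x) pairs by least squares approximates the conditional mean E[x|y], which averages the two preimages +sqrt(y) and -sqrt(y) and is therefore not itself a valid solution.]

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2., 2., 100000)   # the quantity we would like to recover
y = x**2                           # well-defined, deterministic forward problem

# Emulate what a least-squares inverse model converges to: E[x | y].
near_one = np.abs(y - 1.0) < 0.05  # samples whose forward output is about 1
x_hat = x[near_one].mean()         # conditional mean over both branches

print("inverse estimate:", x_hat)   # close to 0
print("forward check:", x_hat**2)   # far from the observed y = 1

[Methods that represent the full conditional distribution, rather than only its mean, avoid this branch-averaging collapse; contrasting such approaches is presumably part of what a workshop like this would take up.]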
Aims:
----
A wide range of different approaches has been developed to tackle inverse problems, and one of the main goals of the workshop is to contrast the ways in which they address the underlying technical issues, and to identify key areas for future research.

Format:
------
This will be a one-day workshop, and will involve short presentations, with ample time allowed for discussions.

Contributions
-------------
If you wish to make a contribution to this workshop, please e-mail a brief outline of your proposed presentation to c.m.bishop at aston.ac.uk by 5 September.

Chris Bishop

--------------------------------------------------------------------
Professor Chris M Bishop              Tel. +44 (0)21 359 3611 x4270
Neural Computing Research Group       Fax. +44 (0)21 333 6215
Dept. of Computer Science             c.m.bishop at aston.ac.uk
Aston University
Birmingham B4 7ET, UK
--------------------------------------------------------------------

From oby at TechFak.Uni-Bielefeld.DE Thu Aug 25 13:13:10 1994
From: oby at TechFak.Uni-Bielefeld.DE (oby@TechFak.Uni-Bielefeld.DE)
Date: Thu, 25 Aug 94 19:13:10 +0200
Subject: No subject
Message-ID: <9408251713.AA02980@gaukler.TechFak.Uni-Bielefeld.DE>

POSTDOCTORAL POSITION

Postdoctoral position available beginning spring/summer 1995 in a project combining anatomy, physiology and theoretical modelling to understand the functional architecture of primate visual cortex. Funded by the Human Frontiers program, the position is initially for two years, and the postdoctoral fellow will be expected to interact with members of the group in Germany, England, and the USA. The candidate should have some experience with neural modeling and possess appropriate math and computer competency; the candidate should also have relevant experimental skills (e.g., anatomy or physiological unit recording) in the mammalian visual system.

Applicants should send their CV, list of publications, a letter describing their interest in the position, and the name, address and phone number of two referees to either

Prof. Jennifer Lund, Institute of Ophthalmology, 11-43 Bath St., London EC1V 9EL, UK; Phone (0)71-608-6864; E-mail smgxjsl at ucl.ac.uk; or

Dr. Klaus Obermayer, Universitaet Bielefeld, Technische Fakultaet, Universitaetsstrasse 25, 33615 Bielefeld, Germany; Phone (0)521-106-6058; Email oby at techfak.uni-bielefeld.de.

From ccg at melissa.lanl.gov Thu Aug 25 18:23:12 1994
From: ccg at melissa.lanl.gov (Camilo Gomez)
Date: Thu, 25 Aug 94 16:23:12 MDT
Subject: positions in financial research
Message-ID: <9408252223.AA00343@melissa.lanl.gov.demosdiscs>

(PLEASE POST)
-----------------------------------------------------------------------------------------
POSITIONS IN FINANCIAL RESEARCH AT LOS ALAMOS NATIONAL LABORATORY
CENTER FOR NON-LINEAR STUDIES (Postdoctoral and Graduate Student)

Positions involving research and development of financial models are available at Los Alamos National Laboratory. Depending on funding, we will have a number of positions, including: a) postdoctoral, b) graduate student.

The successful candidates will be expected to work on projects involving research and development in the area of financial derivatives. These will involve both fixed-income and equity derivatives. Projects will focus on valuation problems for a number of these financial derivatives.
Exceptionally well-qualified candidates with an interest in computational investigations of the above-mentioned topics and expertise in one or more of the following or related areas are encouraged to apply: a) finance, b) financial derivatives, c) statistical analysis, d) time series analysis, e) neural net/pattern recognition, f) emergent behavior systems, g) parallel computing, h) programming skills (C and C++ languages).

Candidates may contact:

M.F. Gomez
CNLS, MS-B258
Los Alamos National Laboratory
Los Alamos, NM 87544
frankie at cnls.lanl.gov

for application material and questions. Please indicate in your initial inquiry that it is for a position in financial research, and whether you are interested in a student or post-doctoral position. Los Alamos National Laboratory is an equal opportunity employer.
-----------------------------------------------------------------------------------------

From shannon at cis.ohio-state.edu Thu Aug 25 15:02:52 1994
From: shannon at cis.ohio-state.edu (shannon roy campbell)
Date: Thu, 25 Aug 1994 15:02:52 -0400 (EDT)
Subject: paper available on synchrony and desynchrony in an oscillator network
Message-ID: <199408251902.PAA27060@axon.cis.ohio-state.edu>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/campbell.wc_oscillators.ps.Z

The file campbell.wc_oscillators.ps.Z (34 pages) is now available for copying from the Neuroprose repository. Contact shannon at cis.ohio-state.edu for retrieval problems.

Synchronization and Desynchronization in a Network of Locally Coupled Wilson-Cowan Oscillators

by Shannon Campbell and DeLiang Wang~
Dept. of Physics
~Dept. of Computer and Information Science
Ohio State University, Columbus, Ohio 43210-1277

Abstract - A network of Wilson-Cowan oscillators is constructed, and its emergent properties of synchronization and desynchronization are investigated by both computer simulation and formal analysis. The network is a two-dimensional matrix where each oscillator is coupled only to its neighbors. We show analytically that a chain of locally coupled oscillators (the piecewise linear approximation to the Wilson-Cowan oscillator) synchronizes, and present a technique to rapidly entrain finite numbers of oscillators. The coupling strengths change on a fast time scale based on a Hebbian rule. A global separator is introduced which receives input from and sends feedback to each oscillator in the matrix. The global separator is used to desynchronize different oscillator groups. Unlike many other models, the properties of this network emerge from local connections that preserve spatial relationships among components and are critical for encoding Gestalt principles of feature grouping. The ability to synchronize and desynchronize oscillator groups within this network offers a promising approach for pattern segmentation and figure/ground segregation based on oscillatory correlation.

FTP INSTRUCTIONS:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get campbell.wc_oscillators.ps.Z
ftp> quit
unix> uncompress campbell.wc_oscillators.ps.Z
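As an editorial illustration of the paper's central effect, the sketch below integrates a chain of Wilson-Cowan oscillators with nearest-neighbour excitatory coupling, starting from scattered phases. The parameter set is the classic single-oscillator limit-cycle regime and the coupling scheme is an assumption; neither is taken from the paper. If the chain entrains, the spread of the excitatory activities at a fixed time should become small:

import numpy as np

def S(v, a, th):
    # Logistic response shifted so that S(0) = 0, as in Wilson & Cowan.
    return 1.0/(1.0 + np.exp(-a*(v - th))) - 1.0/(1.0 + np.exp(a*th))

# Classic limit-cycle parameters for a single Wilson-Cowan oscillator
# (an assumption here; the paper's parameters may differ).
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
ae, th_e, ai, th_i = 1.3, 4.0, 2.0, 3.7
P, Q = 1.25, 0.0
k = 1.5                        # nearest-neighbour excitatory coupling strength
n, dt, steps = 20, 0.01, 20000

rng = np.random.default_rng(1)
E = rng.uniform(0.0, 0.4, n)   # chain starts with scattered phases
I = rng.uniform(0.0, 0.4, n)

cnt = np.ones(n)
cnt[1:-1] = 2.0                # number of chain neighbours per site
for _ in range(steps):         # forward-Euler integration
    nbr = np.zeros(n)
    nbr[1:] += E[:-1]
    nbr[:-1] += E[1:]
    dE = -E + (1 - E)*S(c1*E - c2*I + P + k*nbr/cnt, ae, th_e)
    dI = -I + (1 - I)*S(c3*E - c4*I + Q, ai, th_i)
    E += dt*dE
    I += dt*dI

# A small spread of E across the chain at a fixed time indicates that the
# oscillators have entrained.
print("spread of excitatory activity across the chain:", E.std())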
From shannon at cis.ohio-state.edu Fri Aug 26 10:25:48 1994
From: shannon at cis.ohio-state.edu (shannon roy campbell)
Date: Fri, 26 Aug 1994 10:25:48 -0400 (EDT)
Subject: correct reference for paper on synchrony and desynchrony...
Message-ID: <199408261425.KAA27616@axon.cis.ohio-state.edu>

Sorry that I did not specify the proper reference to a previously announced paper in neuroprose ("Synchrony and desynchrony in a network of Wilson-Cowan oscillators"). The paper is a technical report from the Department of Computer and Information Science, The Ohio State University, numbered "OSU-CISRC-8/94-TR43". Thanks for your attention.

-shannon campbell

From kreider at bechtel.Colorado.EDU Sat Aug 27 16:21:38 1994
From: kreider at bechtel.Colorado.EDU (Jan Kreider)
Date: Sat, 27 Aug 1994 14:21:38 -0600
Subject: System identification competition announcement
Message-ID: <199408272021.OAA10216@bechtel.Colorado.EDU>

Announcement of ***System Identification Competition***

Benchmark tests for estimation methods of thermal characteristics of buildings and building components.

*Objective
The objective of the benchmark is to set up a comparison between alternative techniques and to clarify particular problems of system identification applied to the thermal performance of buildings.

*Organisation
J. Bloem, U. Norlen, EC - Joint Research Centre, Ispra, Italy
H. Madsen, H. Melgaard, IMM TU of Denmark, Lyngby, Denmark
J. Kreider, JCEM, University of Colorado, U.S.

*Active period of the competition: July 1, 1994 - December 31, 1994

*Introduction
A wide range of system identification techniques is now being applied to the analysis problems involved in the estimation of thermal properties of buildings and building components. Similar problems arise in most observational disciplines, including physics, biology, and economics. New commercially available software tools and special-purpose computer programs promise to provide results that were unobtainable just a decade ago. Unfortunately, the realisation and evaluation of this promise has been hampered by the difficulty of making rigorous comparisons between competing techniques, particularly ones that come from different disciplines.

This competition has been organised to help clarify the conflicting claims among many researchers who use and analyse building energy data, and to foster contact among these persons and their institutions. The intent is not necessarily only to declare winners, but rather to set up a format in which rigorous evaluations of techniques can be made. Because there are natural measures of performance, a rank-ordering will be given. In all cases, however, the goal is to collect and analyse quantitative results in order to understand similarities and differences among the approaches.

At the close of the competition the performance of the techniques submitted will be compared. Those with the best results will be asked to write a scientific paper and will be invited to present it. There will be no monetary prizes. A symposium at the JRC Ispra, Northern Italy, has been scheduled for autumn 1995 to explore the results of the competition in formal papers. The competition, the overall results and papers on selected methods will be published by the organisers in a book.

Research on energy savings in buildings can be divided into three major areas: 1) building components, 2) test cells and unoccupied buildings in a real climate, and 3) occupied buildings. Three competitions are planned along these lines, of which the present one, concerned with building components, is the first. The present competition is concerned with wall components, with no solar radiation involved. Five different cases are provided for estimation and prediction. Four cases have been designed with wall components in order to test parameter estimation methods. Prediction tests are also included: some of the dependent variable values will be withheld from the data set in these cases. Contestants are free to submit results from any number of cases.

If the outcome of this first competition is positive, a second competition is planned, concerning test cells and unoccupied buildings under real climate conditions (1995). A third competition will concern occupied buildings (1996). If there is sufficient interest, a network server may be set up in the future to operate as an on-line archive of interesting data sets, programs, and comparisons among algorithms.
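To make the estimation task concrete, consider the simplest version: recovering the total thermal resistance of a wall from sampled temperatures and heat flux. The sketch below is a hypothetical construction, not one of the competition's cases; it simulates a first-order RC wall with the indoor temperature held constant, fits a one-step ARX model by least squares, and reads the resistance off the model's steady-state gain:

import numpy as np

rng = np.random.default_rng(0)

# "True" wall: two resistances around one lumped capacitance (per m^2).
R1, R2, C = 1.2, 1.8, 1.0e5          # m^2K/W, m^2K/W, J/(m^2 K)
dt, n = 600.0, 5000                  # 10-minute samples, about 35 days

t = np.arange(n) * dt
Ti = np.full(n, 20.0)                          # indoor setpoint held constant
To = 5.0 + 5.0*np.sin(2*np.pi*t/86400.0)       # outdoor daily cycle

Tw = np.empty(n)
Tw[0] = 10.0
for j in range(n - 1):                         # forward-Euler simulation
    Tw[j+1] = Tw[j] + dt/C*((Ti[j]-Tw[j])/R1 + (To[j]-Tw[j])/R2)
q = (Ti - Tw)/R1 + rng.normal(scale=0.01, size=n)   # measured heat flux

# ARX(1) fit: q_k = theta0*q_{k-1} + theta1*(Ti - To)_{k-1}
dT = Ti - To
Phi = np.column_stack([q[:-1], dT[:-1]])
theta, *_ = np.linalg.lstsq(Phi, q[1:], rcond=None)

# In steady state q = dT/R, so the total resistance is (1 - theta0)/theta1
# inverted; the estimate should be close to R1 + R2.
print("estimated R:", (1 - theta[0])/theta[1], " true R:", R1 + R2)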
***ACCESSING THE DATA
The competition does not require advance registration; there are two ways to enter:

1. By normal mail. Simply request the data by sending a letter. The data are available on diskettes (3.5-in size) in ASCII, IBM-PC format (there is no charge for the data diskette). To receive the data, send the letter together with a self-addressed rigid envelope to:

Joint Research Centre
Institute of System Engineering and Informatics
J.J. BLOEM, Building 45
I - 21020 Ispra (VA), ITALY

2. By e-mail. In that case just send a request for participation by e-mail to J. Kreider at the University of Colorado, Boulder, CO 80309-0428, U.S., at the following e-mail address: jkreider at vaxf.colorado.edu. Information on how to obtain the necessary instructions and the required data series via FTP will be forwarded to you by e-mail.

Instructions on submitting a return disk with the analysis of the cases will be included in a README file. The disk will also include an entry form that each participant will need to complete and submit along with the results.

***FOR MORE INFORMATION
Further questions about the competition should be directed to one of the following organisers:

Joint Research Centre
Institute for Systems Engineering and Informatics
J.J. BLOEM, Building 45
I - 21020 ISPRA (VA), Italy
tel: +39 332 789842/789145
fax: +39 332 789992
E-mail: hans.bloem at cen.jrc.it

Joint Center for Energy Management
J. KREIDER, Campus Box 428
University of Colorado
Boulder, CO 80309-0428, U.S.
tel: +303 492 3915
fax: +303 492 7317
E-mail: jkreider at vaxf.colorado.edu

End of Announcement SysId Competition.

From lautrup at connect.nbi.dk Mon Aug 29 15:54:23 1994
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Mon, 29 Aug 94 15:54:23 METDST
Subject: no subject (file transmission)
Message-ID:

Subject: Paper available: Extremely Ill-posed Learning
Date: August 29, 1994
FTP-host: connect.nbi.dk
FTP-file: neuroprose/hansen.ill-posed.ps.Z
----------------------------------------------

The following paper is now available:

Extremely Ill-posed Learning [14 pages]

L.K. Hansen, B. Lautrup, I. Law, N. Moerch, and J. Thomsen
CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark

Abstract: Extremely ill-posed learning problems are common in image and spectral analysis. They are characterized by a vast number of highly correlated inputs, e.g., pixel or pin values, and a modest number of patterns, e.g., images or spectra. We show that it is possible to train neural networks to learn such patterns without using an excessive number of weights, and we devise a test to decide if new patterns should be included in the training set or whether they fall within the subspace already explored. The method is applied to the analysis of PET images.

Please do not reply directly to this message.
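The subspace test mentioned in the abstract can be pictured in a few lines of linear algebra. The following sketch is one plausible reading, not necessarily the authors' method: collect the training patterns as columns, take an orthonormal basis from the SVD, and flag a new pattern as novel when a large fraction of its energy lies outside the span:

import numpy as np

rng = np.random.default_rng(0)
d, n = 4096, 20                      # many correlated inputs, few patterns
X = rng.normal(size=(d, n))          # columns are training patterns

# Orthonormal basis for the subspace spanned by the training patterns.
U, _, _ = np.linalg.svd(X, full_matrices=False)

def residual_fraction(x):
    # Fraction of the energy of x lying outside span(X).
    r = x - U @ (U.T @ x)
    return np.linalg.norm(r) / np.linalg.norm(x)

inside = X @ rng.normal(size=n)      # lies in the explored subspace
outside = rng.normal(size=d)         # almost surely mostly outside it
print(residual_fraction(inside))     # ~0: already explored
print(residual_fraction(outside))    # ~1: worth adding to the training set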
-----------------------------------------------
FTP-instructions:
unix> ftp connect.nbi.dk (or 130.225.212.30)
ftp> Name: anonymous
ftp> Password: your e-mail address
ftp> cd neuroprose
ftp> binary
ftp> get hansen.ill-posed.ps.Z
ftp> quit
unix> uncompress hansen.ill-posed.ps.Z
-----------------------------------------------

Benny Lautrup
Computational Neural Network Center (CONNECT)
Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen, Denmark
Telephone: +45-3532-5200 Direct: +45-3532-5358
Fax: +45-3142-1016
e-mail: lautrup at connect.nbi.dk

From marshall at cs.unc.edu Mon Aug 29 12:03:50 1994
From: marshall at cs.unc.edu (Jonathan A. Marshall)
Date: Mon, 29 Aug 1994 12:03:50 -0400
Subject: POSTDOC JOB (Neural net models of learning)
Message-ID: <199408291603.MAA27461@marshall.cs.unc.edu>

**********************************************************************
POSTDOCTORAL POSITION

Available in September 1994. The focus of research is neural network models of learning, with a special emphasis on spatial tasks. Qualified Ph.D. applicants should send a cover letter indicating research experience, a vita, and the names of three references to

Dr. Nestor Schmajuk
Department of Psychology
Duke University
Durham, NC 27706
e-mail: nestor at acpub.duke.edu
**********************************************************************

From ccg at melissa.lanl.gov Tue Aug 30 11:01:01 1994
From: ccg at melissa.lanl.gov (Camilo Gomez)
Date: Tue, 30 Aug 94 09:01:01 MDT
Subject: positions in financial research (corrected e-mail)
Message-ID: <9408301501.AA02500@melissa.lanl.gov.demosdiscs>

(PLEASE POST)
-----------------------------------------------------------------------------------------
POSITIONS IN FINANCIAL RESEARCH AT LOS ALAMOS NATIONAL LABORATORY
CENTER FOR NON-LINEAR STUDIES (Postdoctoral and Graduate Student)

Positions involving research and development of financial models are available at Los Alamos National Laboratory. Depending on funding, we will have a number of positions, including: a) postdoctoral, b) graduate student.

The successful candidates will be expected to work on projects involving research and development in the area of financial derivatives. These will involve both fixed-income and equity derivatives. Projects will focus on valuation problems for a number of these financial derivatives.

Exceptionally well-qualified candidates with an interest in computational investigations of the above-mentioned topics and expertise in one or more of the following or related areas are encouraged to apply: a) finance, b) financial derivatives, c) statistical analysis, d) time series analysis, e) neural net/pattern recognition, f) emergent behavior systems, g) parallel computing, h) programming skills (C and C++ languages).

Candidates may contact:

M.F. Gomez
CNLS, MS-B258
Los Alamos National Laboratory
Los Alamos, NM 87544
frankie at goshawk.lanl.gov

for application material and questions. Please indicate in your initial inquiry that it is for a position in financial research, and whether you are interested in a student or post-doctoral position. Los Alamos National Laboratory is an equal opportunity employer.
-----------------------------------------------------------------------------------------

From prechelt at ira.uka.de Tue Aug 30 15:32:35 1994
From: prechelt at ira.uka.de (Lutz Prechelt)
Date: Tue, 30 Aug 1994 21:32:35 +0200
Subject: Report on NN learning algorithm evaluation practices
Message-ID: <"irafs2.ira.198:30.07.94.19.32.14"@ira.uka.de>

A technical report titled

A Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice

is now available for anonymous ftp (binary mode!) from

archive.cis.ohio-state.edu
/pub/neuroprose/prechelt.eval.ps.Z

The file is 70 Kb, the paper has 9 pages. This is the abstract:

113 journal articles about neural network learning algorithms published in 1993 and 1994 are examined for the amount of experimental evaluation they contain. One in three of them does not employ even a single realistic or real learning problem. Only 6% of all articles present results for more than one problem using real-world data. Furthermore, one third of all articles does not present any quantitative comparison with a previously known algorithm. These results indicate that the quality of research in the area of neural network learning algorithms needs improvement. The publication standards should be raised, and easily accessible collections of example problems should be built.

Here is a bibtex entry for the report:

@techreport{Prechelt94d,
  author = {Lutz Prechelt},
  title = {A Study of Experimental Evaluations of Neural Network
           Learning Algorithms: Current Research Practice},
  institution = {Fakult\"at f\"ur Informatik, Universit\"at Karlsruhe},
  year = {1994},
  number = {19/94},
  address = {D-76128 Karlsruhe, Germany},
  month = aug,
  note = {anonymous ftp: /pub/papers/techreports/1994/1994-19.ps.Z
          on ftp.ira.uka.de},
}

Comments are welcome.

Lutz

Lutz Prechelt (email: prechelt at ira.uka.de)             | Whenever you
Institut fuer Programmstrukturen und Datenorganisation  | complicate things,
Universitaet Karlsruhe; 76128 Karlsruhe; Germany        | they get
(Voice: ++49/721/608-4068, FAX: ++49/721/694092)        | less simple.

From tgd at chert.CS.ORST.EDU Tue Aug 30 19:07:33 1994
From: tgd at chert.CS.ORST.EDU (Tom Dietterich)
Date: Tue, 30 Aug 94 16:07:33 PDT
Subject: paper: Error Correcting Output Codes
Message-ID: <9408302307.AA13478@edison.CS.ORST.EDU>

The following paper is available at URL: "ftp://ftp.cs.orst.edu/pub/tgd/papers/tr-ecoc.ps.gz"

Solving Multiclass Learning Problems via Error-Correcting Output Codes

Thomas G. Dietterich
tgd at cs.orst.edu
Department of Computer Science, 303 Dearborn Hall
Oregon State University
Corvallis, OR 97331 USA

Ghulum Bakiri
Department of Computer Science
University of Bahrain
Isa Town, Bahrain

Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying large collections of training examples of the form (x, f(x)). Existing approaches to multiclass learning problems include (a) direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, (b) application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and (c) application of binary concept learning algorithms with distributed output representations such as those employed by Sejnowski and Rosenberg in the NETtalk system. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that--like the other methods--the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
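To make the idea concrete, the sketch below (a hypothetical toy, not code from the paper) assigns each of four classes a 5-bit codeword with pairwise Hamming distance at least 3, trains one independent binary learner per bit, and decodes a prediction as the class with the nearest codeword. One bit grouping is deliberately not linearly separable; the decoding step absorbs those errors, which is the point of the error-correcting representation:

import numpy as np

rng = np.random.default_rng(0)

# Toy 4-class problem: Gaussian blobs at the corners of a square.
centers = np.array([[0., 0.], [3., 0.], [0., 3.], [3., 3.]])
X = np.vstack([c + rng.normal(scale=0.5, size=(50, 2)) for c in centers])
y = np.repeat(np.arange(4), 50)

# Hand-picked codewords, one row per class; every pair differs in at least
# 3 bits, so one wrong bit learner can still be decoded correctly.
code = np.array([[0, 0, 0, 0, 0],
                 [1, 1, 1, 0, 0],
                 [0, 0, 1, 1, 1],
                 [1, 1, 0, 1, 1]])

def fit_logreg(X, t, lr=0.5, iters=2000):
    # Plain batch-gradient logistic regression with a bias column.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0/(1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - t) / len(t)
    return w

def predict_bit(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)      # p > 0.5 iff logit > 0

# One independent binary learner per code bit.
ws = [fit_logreg(X, code[y, j].astype(float)) for j in range(code.shape[1])]
bits = np.column_stack([predict_bit(w, X) for w in ws])

# Decode each row of predicted bits to the nearest codeword (Hamming).
pred = np.argmin([[int(np.sum(b != c)) for c in code] for b in bits], axis=1)
print("training accuracy:", (pred == y).mean())   # should be high: bit 2
# groups classes {1, 2} against {0, 3}, an XOR-like split a linear learner
# cannot get right, yet decoding corrects those single-bit errors.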
Thomas G. Dietterich                 Voice: 503-737-5559
Department of Computer Science       FAX: 503-737-3014
Dearborn Hall, 303                   URL: http://www.cs.orst.edu/~tgd/index.html
Oregon State University
Corvallis, OR 97331-3102

From georgiou at wiley.csusb.edu Wed Aug 31 14:00:01 1994
From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu)
Date: Thu, 1 Sep 1994 02:00:01 +0800
Subject: JCIS: Last Call for Papers
Message-ID: <9409010900.AA04051@orion.csci.csusb.edu>

JOINT CONFERENCE ON INFORMATION SCIENCES
Last Call for Papers

(For submission of Neural Networks papers please see end of message for address. Deadline: September 10, 1994)

ORGANIZERS

Honorary Chairs: Lotfi A. Zadeh & Azriel Rosenfeld

Managing Chair of the Joint Conferences:
Paul P. Wang
Dept. of Electrical Engineering
Duke University
Durham, NC 27708-0291
Tel: (919) 660-5271, 660-5259
Fax: (919) 660-5293, 684-4860
e-mail: ppw at ee.duke.edu

Advisory Board: Nick Baker, Earl Dowell, Erol Gelenbe, Stephen Grossberg, Kaoru Hirota, Abe Kandel, George Klir, Teuvo Kohonen, Tosiyasu L. Kunii, Jin-Cherng Lin, E. Mamdani, Xuan-Zhong Ni, C.V. Ramamoorthy, John E.R. Staddon, Masaki Togai, Victor Van Beuren, Max Woodbury, Stephen S. Yau, Lotfi Zadeh, H. Zimmerman

Keynote Speakers: Lotfi A. Zadeh & Stephen Grossberg

Plenary Speakers: Suguru Arimoto, Dennis Bahler, James Bowen, Abe Kandel, George Klir, Philippe Smets, John R. Rice, I.B. Turksen, Benjamin Wah, Stephen S. Yau

General Information

The JOINT CONFERENCE ON INFORMATION SCIENCES consists of two international conferences and one workshop. All interested attendees, including researchers, organizers, speakers, exhibitors, students and other participants, should register either in Plan A: 3rd International Conference on Fuzzy Theory and Technology, or Plan B: First International Conference on Computer Theory & Informatics and Workshop on Mobile Computing Systems. Any participant can attend all the keynote speeches, plenary sessions, all parallel sessions and exhibits. The only difference is that all authors registered in Plan A will participate in the Lotfi A. Zadeh Best Paper Competition. Plan B will have no best paper competition, at least for this year. In addition, each plan will publish its own proceedings.

Tutorials

Session A: Fuzzy Theory & Technology (Sunday, November 13, 1994, 8:30 am - 12:30 pm)
1. George Klir .................................Fuzzy Set and Logic
2. I.B. Turksen ................................Fuzzy Expert Systems
3. Jack Aldridge .......................................Fuzzy Control
4. Marcus Thint .......................Fuzzy Logic and NN Integration

Session B: Computers (Sunday, November 13, 1994, 1:30 pm - 6:30 pm)
1. Richard Palmer ....................................Neural Network
2. Frank Y. Shih ................................Pattern Recognition
3. Patrick Wang ......Intelligent Pattern Recognition & Applications
4.-5. Ken W. White ...............Success with Machine Vision I & II

Time Schedule & Venue
Tutorials ....................November 13, 1994 o 8:30 am - 6:30 pm
Conferences .............November 14 - 16, 1994 o 8:30 am - 5:30 pm
Venue........Pinehurst Resort & Country Club, Pinehurst, NC, U.S.A.

PARTICIPATION PLAN A: 3rd Annual Conference on Fuzzy Theory and Technology

PROGRAM COMMITTEE
Jack Aldridge, Suguru Arimoto, W. Bandler, P. Bonnisone, Bruno Bosacchi, B. Bouchon-Meunier, J. Buckley, Dev Garg, Rhett George, George Georgiou, I.R. Goodman, Siegfried Gottwald, Silvia Guiasu, M.M. Gupta, Ralph Horvath, D.L. Hung, Timothy Jacobs, Y.K. Jani, Joaquim Jorge, Paul Kainen, S.C. Kak, Abe Kandel, P. Klement, L.J. Kohout, Vladik Kreinovich, N. Kuroki, Reza Langari, Harry Hua Li, Don Meyers, C.K. Mitchell, John Mordeson, Akira Nakamura, Kyung Whan Oh, Maria Orlowska, Robert M. Pap, Arthur Ramer, Elie Sanchez, B. Scwhott, Shouyoe Shao, Sujeet Shenoi, Frank Shih, H. Allison Smith, L.M. Sztandera, Alade Tokuta, R. Tong, I. Turksen, Guo Jun Wang, Tom Whalen, Edward K. Wong, T. Yamakawa

The conference will consist of both plenary sessions and contributory sessions, focusing on topics of critical interest and the direction of future research. Example topics include, but are not limited to, the following:

TOPICS: 3rd ANNUAL CONFERENCE ON FUZZY THEORY AND TECHNOLOGY
o Fuzzy Mathematics
o Basic Principles and Foundations of Fuzzy Logic
o Qualitative and Approximate-Reasoning Modeling
o Hardware Implementations of Fuzzy Logic Algorithms
o Design, Analysis, and Synthesis of Fuzzy Logic Controllers
o Learning and Acquisition of Approximate Models
o Fuzzy Expert Systems
o Neural Network Architectures
o Artificially Intelligent Neural Networks
o Artificial Life
o Associative Memory
o Computational Intelligence
o Cognitive Science
o Fuzzy Neural Systems
o Relations between Fuzzy Logic and Neural Networks
o Theory of Evolutionary Computation
o Efficiency/robustness comparisons with other direct search algorithms
o Parallel computer applications
o Integration of Fuzzy Logic and Evolutionary Computing
o Comparisons between different variants of evolutionary algorithms
o Evolutionary Computation for neural networks
o Fuzzy logic in Evolutionary algorithms
o Neurocognition
o Neurodynamics
o Optimization
o Pattern Recognition
o Learning and Memory
o Machine Learning Applications
o Implementations (electronic, optical, biochips)
o Intelligent Control

APPLICATIONS OF THE TOPICS:
o Hybrid Systems
o Image Processing
o Image Understanding
o Pattern Recognition
o Robotics and Automation
o Intelligent Vehicle and Highway Systems
o Virtual Reality
o Tactile Sensors
o Machine Vision
o Motion Analysis
o Neurobiology
o Sensation and Perception
o Sensorimotor Systems
o Speech, Hearing and Language
o Signal Processing
o Time Series Analysis
o Prediction
o System Identification
o System Control
o Intelligent Information Systems
o Case-Based Reasoning
o Decision Analysis
o Databases and Information Retrieval
o Dynamic Systems Modeling & Diagnosis
o Electric & Nuclear Power Systems

PARTICIPATION PLAN B: 1st Annual Computer Theory and Informatics Conference & Workshop on Mobile Computing Systems

PROGRAM COMMITTEE: FIRST ANNUAL COMPUTER & INFORMATICS
Rafi Ahmed, R. Alonso, Suguru Arimoto, B. Badrinath, C.R. Baker, Martin Boogaard, O. Bukhres, P. Chrysantis, Eliseo Clementini, E.M. Ehlers, Ahmed Elmagarmid, K. Ferentinos, Godfrey, Mohamed Gouda,
Albert G. Greensberg, S. Helal, T. Imielinski, Subhash C. Kak, Abe Kandel, Teuvo Kohonen, Timo Koski, Devendra Kumar, Tosiyasu L. Kunii, Shahram Latifi, Lin-shan Lee, Mark Levene, Jason Lin, Mi Lu, Yanni Manolopoulos, Jorge Muruzabal, Sham B. Navathe, Sigeru Omatu, C.V. Ramamoorthy, Hari N. Reddy, John R. Rice, Abdellah Salhi, Frank S. Shih, Harpreet Singh, Stanley Y.W. Su, Abdullah Uz Tansel, Kishor Trivedi, Millist Vincent, Benjamin Wah, Z.A. Wahab, Jun Wang, Patrick Wang, Edward K. Wong, Lotfi A. Zadeh

TOPICS: 1st ANNUAL COMPUTER THEORY & INFORMATICS CONFERENCE

The conference will consist of both plenary sessions and contributory sessions, focusing on topics of critical interest and the direction of future research. Example topics include, but are not limited to, the following:

o Computational Theory: coding theory, automata, information theory, modern algebraic theory, measure theory, probability and statistics, and numerical methods
o Design of Algorithms: algorithmic complexity, theory of algorithms, design, analysis and evaluation of efficient algorithms for engineering applications (such as computer-aided design, computational fluid dynamics, computer graphics, and virtual reality), combinatorics, scheduling theory, discrete optimization, data compression, and approximation theory
o Software Design: formal languages, theory and design of optimizing compilers (especially those for parallel and supercomputers), object-oriented programming, database theory and data organization, software design methodology, program verification, and software reliability
o Computer Systems and Architectures: parallel and distributed computing systems, high-speed computer networks, theory and data organization, software design methodology, program verification, and software reliability
o Evaluation Methods and Tools: performance evaluation methods, visualization tools, and simulation theory and methodology

TOPICS: WORKSHOP ON MOBILE COMPUTING SYSTEMS

The workshop will focus on system support for mobile information and data access, to recognize the role of mobile computer systems in today's business and scientific communities. The workshop will be organized to gather leading researchers and practitioners in the field. We shall focus on issues related, but not limited, to:

o Architectures for Mobile Computing Systems
o Mobile Computing Technology
o Wireless Communications
o User Interfaces in Palmtop Computing
o Databases for Nomadic Computing
o Transaction Models and Management
o System Complexity, Integrity and Security
o Legal/Social/Health Issues
o Operating System Support
o Battery Management
o Nomadic Applications
o Handheld Multimedia
o Personal Communication Networks

PROGRAM COMMITTEE: WORKSHOP ON MOBILE COMPUTING SYSTEMS
R. Alonso (Technology), B. Badrinath (Query Processing), O. Bukhres (Database Systems), P. Chrysantis (Transactions), S. Helal (Applications), T. Imielinski (New Directions)

Note: All attendees must choose either Participation Plan A or Participation Plan B.

PUBLICATIONS

The conference publishes two summary proceedings: one entitled "Third International Conference on Fuzzy Theory and Technology" and the other entitled "First International Conference on Computer Theory and Informatics, and First Workshop on Mobile Computing Systems." Both proceedings will be made available on November 13, 1994. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text (1 page minimum), with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page.
Three copies of the summary are required by September 10, 1994. It is very important to mark "Plan A" or "Plan B" on your manuscript. The conference will make the choice for you if you forget to do so.

The final version of the full-length paper must be submitted by November 14, 1994. Four (4) copies of the full-length paper shall be prepared according to the "Information for Authors" appearing on the back cover of Information Sciences, an International Journal (Elsevier Publishing Co.). A full paper shall not exceed 20 pages, including figures and tables. All full papers will be reviewed by experts in their respective fields. Revised papers will be due on April 15, 1995. Accepted papers will appear either in the hard-covered proceedings (book) with uniform typesetting, to be published by a publisher (there will be two books published this year, one for each plan), or in the Information Sciences Journal (the INS journal now has three publications: Informatics and Computer Science; Intelligent Systems; Applications).

All fully registered conference attendees will receive a copy of the proceedings (summary) on November 14, 1994, and a free one-year subscription (paid by this conference) to Information Sciences Journal - Applications, and, lastly, the right to purchase either or both hard-covered, deluxe, professional books at 1/2 price. The titles of the books are "Advances in Fuzzy Theory & Technology, Volume III" and "Advances in Computer Science and Informatics, Volume 1."

Lotfi A. Zadeh "Best Paper Award" FT&T 1994

All technical papers submitted to FT&T 1994 are automatically qualified as candidates for this award. The prize for this award is $2,500 plus hotel accommodations (traveling expenses excluded) at FT&T 1995. The date for the announcement of the best paper is May 30, 1995. Oral presentation in person at FT&T 1994 is required, and an acceptance speech at FT&T 1995 is also required. The evaluation committee for FT&T 1994 consists of the following members: Jack Aldridge, B. Bouchon-Meunier, George Klir, I.R. Goodman, John Mordeson, Sujeet Shenoi, H. Chris Tseng, Frank Y. Shih, Akira Nakamura, Edward K. Wong, I.B. Turksen. The selection of the top ten best papers will be decided jointly by conference attendees and session chairs.

EXHIBITIONS

JCIS '94 follows on last year's highly successful exhibits by some major publishers; most publishers, led by Elsevier, will return. Intelligent Machines, Inc. will demonstrate their highly successful new software "O'inca", a FL-NN Fuzzy-Neuro Design Framework. Dr. Ken W. White of Ithaca, N.Y. will demonstrate his visual-sense systems. Virtus, a virtual reality company based in Cary, North Carolina, has committed to participate. Negotiations are also underway with the UNCVR research laboratory for its participation. This conference intends to develop "virtual reality" as one of the themes to benefit all attendees. Interested potential contributors should contact Dr. Paul P. Wang or Dr. Rhett T. George.

Interested vendors should contact:
Rhett George, E.E. Dept., Duke University
Telephone: (919) 660-5242
Fax: (919) 660-5293
rtg at ee.duke.edu

TRAVEL ARRANGEMENTS

The Travel Center of Durham, Inc. has been designated the official travel provider. Special domestic fares have been arranged, and The Travel Center is prepared to book all flight travel.
Domestic United States and Canada: 1-800-334-1085
International FAX: (919) 687-0903

HOTEL RESERVATIONS

Pinehurst Resort & Country Club, Pinehurst, North Carolina, U.S.A.
This is the conference site and lodging.
A Group Reservation Request has been designed specifically for our conference. Very special discount rates have been agreed upon.

Daily Rates for Hotel:
  Single Occupancy - $122.00
  Double Occupancy - $91.00 per person
Daily Rates for Manor Inn:
  Single Occupancy - $108.00
  Double Occupancy - $79.00 per person
(Rates are per person, per night, and include accommodations, breakfast and dinner daily.)

Pinehurst Resort encompasses an elegant historic hotel (registered with "Historic Hotels of America") with the best in accommodations, gourmet dining and modern meeting facilities. Our AAA Four Diamond and Mobil Four Star resort offers a wide range of activities, including seven championship golf courses, tennis, water sports, croquet and sport shooting.

Please contact:
JACKIE HAYTER, Associate Director of Sales
Pinehurst Resort and Country Club
Carolina Vista, P.O. Box 4000
Pinehurst, NC 28374-4000
(919) 295-1339 - (919) 295-8484
1-800-659-GOLF

SPONSORS
Machine Intelligence and Fuzzy Logic Laboratory o Department of Electrical Engineering, Duke University o Elsevier Science Inc., New York, N.Y.

PARTICIPANTS
IFSA, International Fuzzy Systems Association o Institute of Information Science, Academia Sinica

JCIS '94 REGISTRATION FEES & INFO.

                                     Up to 9/15/94   After 9/15/94
Full Registration                       $275.00         $395.00
Student Registration                     $85.00         $160.00
Tutorial w/ Conf. Reg.                  $150.00         $200.00
Tutorial w/o Conf. Reg.                 $300.00         $500.00
Exhibit Booth Fee                       $400.00         $500.00
One Day Fee (no pre-reg. discount)      $165.00 Full / $80.00 Student

The above fees apply to both Plan A and Plan B.

FULL CONFERENCE REGISTRATION: Includes admission to all sessions, the exhibit area, and coffee, tea and soda; a copy of the conference proceedings (summary) at the conference; and a one-year subscription to Information Sciences - Applications, An International Journal, published by Elsevier Publishing Co. In addition, the right to purchase the hard-cover deluxe books at 1/2 price. Banquets (November 14 & 15, 1994) are included through Hotel Registration. Tutorials are not included.

STUDENT CONFERENCE REGISTRATION: For full-time students only. A letter from your department is required, and you must present a current student ID with picture. A copy of the Conference Proceedings (Summary) is included, as are admission to all sessions, the exhibit area, coffee, tea and soda, and the right to purchase the hard-cover deluxe books at 1/2 price. The free subscription to the INS Journal - Applications is not included.

TUTORIALS REGISTRATION: Any person can register for the Tutorials. A copy of the lecture notes for the course registered is included, as are coffee, tea and soda. The summary and the free subscription to the INS journal are, however, not included. The right to purchase the hard-cover deluxe books is included.

VARIOUS CONFERENCE CONTACTS:

LOCAL INFORMATION
Rhett T. George
Dept. of Electrical Engineering
Box 90291, Duke University, Durham, NC 27708-0291
e-mail: rtg at ee.duke.edu
Tel. (919) 660-5228

TUTORIAL & CONFERENCE INFORMATION
Paul P. Wang                       Kitahiro Kaneda
e-mail: ppw at ee.duke.edu         e-mail: hiro at ee.duke.edu
Tel. (919) 660-5271, 660-5259      Tel. (919) 660-5233

Jerry C.Y. Tyan
e-mail: ctyan at ee.duke.edu
Tel. (919) 660-5233

----------------------------------------------------------------
Neural Networks papers: Send summaries to

George M. Georgiou
Computer Science Department
California State University
5500 University Pkwy
San Bernardino, CA 92407, USA
TEL: (909) 880-5332
FAX: (909) 880-7004
georgiou at silicon.csci.csusb.edu

Deadline: September 10, 1994
TeX/LaTeX or PostScript by email is fine.
--------------------------------------------------------------------