From mav at cs.uq.oz.au Wed Apr 1 18:49:46 1992 From: mav at cs.uq.oz.au (mav@cs.uq.oz.au) Date: Thu, 02 Apr 92 09:49:46 +1000 Subject: Why does the error rise in a SRN? Message-ID: <9204012349.AA03878@uqcspe.cs.uq.oz.au> I have been working with the Simple Recurrent Network (Elman style) and variants thereof for some time. Something which seems to happen with surprising frequency is that the error will decrease for a period and then will start to increase again. I have seen the same phenomenon using both root mean square and dprime error. It often occurs over quite long time periods (several thousand epochs). The task that I have studied most carefully is episodic recognition. A list (usually very short) of words is given. A recognize symbol is then followed by an item which either was in the list or wasn't. The task is to make this decision. Following is a set of example inputs and outputs:

Input        Output
---------    ------
table        blank
beach        blank
king         blank
recognize    blank
beach        yes

Questions: (1) Has anyone else noticed this? (2) Is it task dependent? (3) Why does it happen? Simon Dennis.  From ken at cns.caltech.edu Thu Apr 2 01:18:58 1992 From: ken at cns.caltech.edu (Ken Miller) Date: Wed, 1 Apr 92 22:18:58 PST Subject: Special Issue on Neural Modeling Message-ID: <9204020618.AA04292@cns.caltech.edu> VOL. 4 NO. 1 of SEMINARS IN THE NEUROSCIENCES (Mar 1992) THE USE OF MODELS IN THE NEUROSCIENCES Guest Editor: Kenneth D. Miller CONTENTS: Introduction: The Use Of Models In The Neurosciences ----- Kenneth D. Miller A Quantitative Model of Phototransduction and Light Adaptation in Amphibian Rod Photoreceptors ----- V. Torre, M. Straforini and M. Campani Realistic Single-Neuron Modeling ----- William W. Lytton and John C. Wathey Modeling Hippocampal Circuitry Using Data From Whole Cell Patch Clamp and Dual Intracellular Recordings In Vitro ----- Roger D. Traub and Richard Miles Models of Central Pattern Generators as Oscillators: The Lamprey Locomotor CPG ----- Karen A. Sigvardt and Thelma L. Williams Realistic Neural Network Models Using Backpropagation: Panacea or Oxymoron? ----- S.R. Lockery Models of Activity-Dependent Neural Development ----- Kenneth D. Miller Neural Modeling of a Visual Perceptual Task --- A Case Study ----- Gerald Westheimer An Introduction to Silicon Neural Analogs ----- M.A. Mahowald, R.J. Douglas, J.E. LeMoncheck, and C.A. Mead Seminars in the Neurosciences is a review journal. Each issue is devoted to a single topic or theme of current interest to neuroscientists, and is edited by a guest editor appointed for that issue. The aim of each issue is to provide a co-ordinated, topical review of a selected subject that will be of interest to all levels of reader from the senior undergraduate to the researcher. Forthcoming topics (with guest editor): Dopamine (Trevor W. Robbins); Immune Responses in the Nervous System (Cedric S. Raine); Building the Architecture of the Nervous System (Andrew Lumsden). The Editorial Advisory Board for the journal is: Marianne Bonner-Fraser, J.P. Changeux, Klaus Peter Hoffman, John S. Kelly, Peter Kennedy, Motoy Kuno, Dale Purves, and Janis C. Weeks. Six issues are published annually. SUBSCRIPTION RATES: Personal: $65, or 39 pounds (UK); Institutional: $130, or 78 pounds (UK). SINGLE ISSUES: $34, or 18 pounds (UK). Canada: add GST at current rate of 7%. SEND ORDERS TO: Academic Press Limited, Foots Cray, Sidcup, Kent DA14 5HP, UK (telephone: 081-300-3222).
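Before the replies, a minimal sketch of the Elman-style SRN step that Simon Dennis's question concerns. The layer sizes, weight names, sigmoid units and use of NumPy are illustrative assumptions rather than anyone's actual code; the point to note is the copy-back step, in which the context units simply hold the previous hidden values and are treated as ordinary inputs when the error gradient is computed.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def srn_step(x, context, W_in, W_context, W_out):
        """One Elman-style forward step (hypothetical weight names)."""
        hidden = sigmoid(W_in @ x + W_context @ context)  # context = copy of previous hidden values
        output = sigmoid(W_out @ hidden)
        return output, hidden    # the caller feeds `hidden` back in as the next context

In Elman-style training the gradient is computed as if `context` did not depend on the weights; whether that shortcut is behind the rising error is exactly what several of the replies below take up.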
From watrous at cortex.siemens.com Thu Apr 2 09:18:07 1992 From: watrous at cortex.siemens.com (Ray Watrous) Date: Thu, 2 Apr 92 09:18:07 EST Subject: Why does the error rise in a SRN? Message-ID: <9204021418.AA00863@cortex.siemens.com.siemens.com> Simon - An increase in error can occur with fixed step size algorithms since although the step is in the negative gradient direction, the size can be such that the new point is actually uphill. The algorithm simply steps across a ravine, as it were, and ends higher up on the other side. This is a well-known property of such algorithms, but seems to be encountered in practice more frequently with recurrent networks. This is due to the fact that small changes in some regions of weight space can have large effects on the error because of the nonlinear feedback in the recurrent network. There are many effective ways of controlling this behavior; you may want to consult the line search algorithms in a standard text on nonlinear optimization (such as D. Luenberger Linear and Nonlinear Programming, 1984). Raymond Watrous Siemens Corporate Research 755 College Road East Princeton, NJ 08540 (609) 734-6596  From Minh.Tue.Vo at cs.cmu.edu Thu Apr 2 12:05:37 1992 From: Minh.Tue.Vo at cs.cmu.edu (Minh.Tue.Vo@cs.cmu.edu) Date: Thu, 2 Apr 92 12:05:37 -0500 (EST) Subject: Why does the error rise in a SRN? In-Reply-To: <9204012349.AA03878@uqcspe.cs.uq.oz.au> References: <9204012349.AA03878@uqcspe.cs.uq.oz.au> Message-ID: > Excerpts from connect: 2-Apr-92 Why does the error rise in .. > mav at cs.uq.oz.au (886) > I have been working with the Simple Recurrent Network (Elman style) > and variants there of for some time. Something which seems to happen > with surprising frequency is that the error will decrease for a period > and then will start to increase again. I have seen the same phenomena > using both root mean square and dprime error. It often occurs over I have noticed the same phenomenon in my work on on-line gesture recognition, so it doesn't seem to be task-dependent. I could reduce the effect somewhat by tweaking the learning rate and the momentum, but I couldn't eliminate it completely. TDNN doesn't seem to have that problem. Minh Tue Vo -- CMU  From tap at ai.toronto.edu Thu Apr 2 15:34:52 1992 From: tap at ai.toronto.edu (Tony Plate) Date: Thu, 2 Apr 1992 15:34:52 -0500 Subject: Why does the error rise in a SRN? In-Reply-To: Your message of "Wed, 01 Apr 92 18:49:46 EST." <9204012349.AA03878@uqcspe.cs.uq.oz.au> Message-ID: <92Apr2.153501edt.446@neuron.ai.toronto.edu> If you are really using the SRN "Elman-style", then you are not calculating the true gradient. Thus it is not at all surprising that the error might increase as you follow a false gradient. To calculate the true gradient you need to do the full backpropagation in time. Also, the behaviour you describe of the error first going down and then going up is quite likely, as in the beginning the gradients are large and the false and true gradients are more likely to be pointing in approximately the same direction. Tony Plate  From peter at ai.iit.nrc.ca Thu Apr 2 08:39:02 1992 From: peter at ai.iit.nrc.ca (Peter Turney) Date: Thu, 2 Apr 92 08:39:02 EST Subject: Why does the error rise in a SRN? Message-ID: <9204021339.AA10596@ai.iit.nrc.ca> > I have been working with the Simple Recurrent Network (Elman style) > and variants there of for some time. Something which seems to happen > with surprising frequency is that the error will decrease for a period > and then will start to increase again. 
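A minimal numerical illustration of Ray Watrous's point above about fixed step sizes; the quadratic and the step size are invented purely for illustration. Along a steep, ravine-like direction of the error surface, a fixed step in the negative gradient direction can land higher up the opposite wall, so the error rises even though every step points downhill.

    def error(w):
        return 50.0 * w ** 2            # one steep "ravine" direction

    def gradient(w):
        return 100.0 * w

    w, step = 1.0, 0.025                # fixed step size
    for epoch in range(5):
        before = error(w)
        w = w - step * gradient(w)      # step in the negative gradient direction
        print(epoch, before, error(w))  # error grows: 50.0, 112.5, 253.1, ... since |1 - 100*step| > 1

In a recurrent network the curvature along some weight directions can become extreme because of the feedback, which is why the effect is met more often there.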
> (1) Has anyone else noticed this? I have had the same experience. Here is an example:

lrate   lgrain    mu    momentum   epoch   tss
------------------------------------------------
0.05    pattern   0.5   0.1        26      40.8941
0.05    pattern   0.5   0.1        53      29.8656
0.05    pattern   0.5   0.1        86      26.2229
0.05    pattern   0.5   0.1        391     11.6567
0.05    pattern   0.5   0.1        458     12.1636
0.05    pattern   0.5   0.1        513     14.0021

The data consist of 16 separate sequences of 700 patterns per sequence. Thus one epoch consists of 11,200 patterns. > (2) Is it task dependent? My task looks quite different from yours. I am trying to train a SRN (Elman) to generate sensor readings for an accelerating jet engine. The sensors include thrust, exhaust gas temperature, shaft rpm, and six others. The sensor readings are the target outputs. The inputs are the ambient conditions (humidity, atmospheric pressure, outside temperature, ...) and the idle shaft rpm. All training data comes from a single jet engine under a variety of ambient conditions. The same throttle motion is used for each of the 16 sequences of patterns. > (3) Why does it happen? I don't know. I have tried lgrain (learning grain) = epoch, but then the net does not seem to converge at all -- tss (total sum squares) stays around 300. I have tried momentum = 0.9, but, again, tss seems to stay around 300. I suspect -- without any real justification -- that this phenomenon is related to catastrophic interference. I am in the process of applying Fahlman's Recurrent Cascade-Correlation algorithm to the same problem. I hope that RCC may work better than SRN, in this case.  From gary at cs.UCSD.EDU Thu Apr 2 19:08:53 1992 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Thu, 2 Apr 92 16:08:53 PST Subject: Why does the error rise in a SRN? Message-ID: <9204030008.AA29877@odin.ucsd.edu> Well, Ray is right, but it is also true that Elman nets do an approximation to the true gradient. There are cases where it will actually point in the wrong direction for the pattern. g.  From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Thu Apr 2 23:16:30 1992 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Thu, 02 Apr 92 23:16:30 EST Subject: special issue on language learning Message-ID: <2715.702274590@DST.BOLTZ.CS.CMU.EDU> Ken Miller's special issue announcement reminded me that I should have announced my own special issue on this forum. Well, it's not too late. Following Ken's format: VOL. 7 NOS. 2 and 3 of MACHINE LEARNING (1991) This special double issue is also available as a book from Kluwer Academic Publishers. CONNECTIONIST APPROACHES TO LANGUAGE LEARNING Guest Editor: David S. Touretzky CONTENTS: Introduction ----- David S. Touretzky Learning Automata from Ordered Examples ----- Sara Porat and Jerome A. Feldman SLUG: A Connectionist Architecture for Inferring the Structure of Finite-State Environments ----- Michael C. Mozer and Jonathan Bachrach Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks ----- David S. Servan-Schreiber, Axel Cleeremans, and James L. McClelland Distributed Representations, Simple Recurrent Networks, and Grammatical Structure ----- Jeffrey L. Elman The Induction of Dynamical Recognizers ----- Jordan B. Pollack Copies can be ordered from:

                                  Outside North America:
Kluwer Academic Publishers        Kluwer Academic Publishers
Order Department                  Order Department
P.O. Box 358                      P.O. Box 322
Accord Station                    3300 AH Dordrecht
Hingham, MA 02018-0358            The Netherlands
tel. 617-871-6600 fax.
617-871-6528  From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Fri Apr 3 01:43:33 1992 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Fri, 03 Apr 92 01:43:33 EST Subject: Why does the error rise in a SRN? In-Reply-To: Your message of Thu, 02 Apr 92 09:49:46 +1000. <9204012349.AA03878@uqcspe.cs.uq.oz.au> Message-ID: I have been working with the Simple Recurrent Network (Elman style) and variants there of for some time. Something which seems to happen with surprising frequency is that the error will decrease for a period and then will start to increase again. As Ray Watrous suggests, your problem might be due to online updating with a fixed step size, but I have seen the same kind of problem with batch updating, in which none of the weights are updated until the error gradient dE/dw has been computed over the whole set of training sequences. And this was with Quickprop, which adjusts the step-size dynamically. In fact, I know of several independent attempts to apply Quickprop learning to Elman nets, most with very disappointing results. I think that you're running into an approximation that often causes trouble in Elman-style recurrent nets. In these nets, in addition to the usual inputs units, we have a set of "state" variables that hold the values of the hidden units from the previous time-step. These state variables are treated just like the inputs during network training. That is, we pretend that the state variables are independent of the weights being trained, and we compute dE(t)/dw for all the network's weights based on that assumption. However, the state variables are not really independent of the network's weights, since they are just the hidden-unit values from time t-1. The true value of dE(t)/dw will include terms involving dS(t)/dw for the various weights w that affect the state variables S. Or, if you prefer, they will include dH(t-1)/dw, for the hidden units H. These terms are dropped in the usual Elman or SRN formulation, but that can be dangerous, since they are not negligible in general. In fact, it is these terms that implement the "back propagation in time", which can alter the network's weights so that a state bit is set at one point and used many cycles later. So in an Elman net, even if you are using batch updating, you are not following the true error gradient dE/dw, but only a rough approximation to it. Often this will get you to the right place, or at least to a very interesting place, but it causes a lot of trouble for algorithms like Quickprop that try to follow the (alleged) gradient more aggressively. Even if you descend the pseudo-gradient slowly and carefully, you will often see that the true error begins to increase after a while. It would be possible, but very expensive, to add the missing terms into the Elman net. You end up with something that looks much like the Williams-Zipser RTRL model, which basically requires you to keep a matrix showing the derivative of every state value with respect to every weight. In a net that allows only self-recurrent connections, you only need to save one extra value for each input-side weight, so in these networks it is practical to keep the extra terms. Such models, including Mike Mozer's "Focused" recurrent nets and my own Recurrent Cascade-Correlation model, don't suffer from the approximation described above. -- Scott =========================================================================== Scott E. 
Fahlman School of Computer Science Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Internet: sef+ at cs.cmu.edu  From jain at rtc.atk.com Fri Apr 3 15:22:15 1992 From: jain at rtc.atk.com (jain@rtc.atk.com) Date: Fri, 3 Apr 1992 14:22:15 -0600 Subject: Thesis Available Message-ID: <92Apr3.142217cst.46132@nic.rtc.atk.com> My recently completed PhD thesis is now available as a technical report (number CMU-CS-91-208). To obtain a copy, please send email to "reports+ at cs.cmu.edu" or physical mail to: Technical Reports Request School of Computer Science Carnegie Mellon University 5000 Forbes Ave. Pittsburgh, PA 15213-3890 To defray printing and mailing costs, there will be a small fee. I apologize to those of you who had requested copies directly from me and have not received them. The number of requests was high enough that I ran out of time to process them (and copies to send!). REMEMBER: DON'T REPLY TO THE WHOLE LIST! REPLY TO: reports+ at cs.cmu.edu TR: CMU-CS-91-208 TITLE: PARSEC: A Connectionist Learning Architecture for Parsing Spoken Language ABSTRACT: A great deal of research has been done developing parsers for natural language, but adequate solutions for some of the particular problems involved in spoken language are still in their infancy. Among the unsolved problems are: difficulty in constructing task-specific grammars, lack of tolerance to noisy input, and inability to effectively utilize complementary non-symbolic information. This thesis describes PARSEC---a system for generating connectionist parsing networks from example parses. PARSEC networks exhibit three strengths: 1) They automatically learn to parse, and they generalize well compared to hand-coded grammars. 2) They tolerate several types of noise without any explicit noise-modeling. 3) They can learn to use multi-modal input, e.g. a combination of intonation, syntax, and semantics. The PARSEC network architecture relies on a variation of supervised back-propagation learning. The architecture differs from some other connectionist approaches in that it is highly structured, both at the macroscopic level of modules, and at the microscopic level of connections. Structure is exploited to enhance system performance. Conference registration dialogs formed the primary development testbed for PARSEC. A separate simultaneous effort in speech recognition and translation for conference registration provided a useful data source for performance evaluations. Presented in this thesis are the PARSEC architecture, its training algorithms, and detailed performance analyses along several dimensions that concretely demonstrate PARSEC's advantages.  From gary at cs.UCSD.EDU Fri Apr 3 21:12:16 1992 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Fri, 3 Apr 92 18:12:16 PST Subject: Why does the error rise in a SRN? Message-ID: <9204040212.AB22283@odin.ucsd.edu> Yes, it seems that Elman nets can't learn in batch mode. I ported TLEARN to the INTEL hypercube using data parallelism, and it ran 90 times faster, and learned 90 times slower. I suspect that all that may be necessary is to go back one more step in time, but I haven't tried it. g.  From jose at tractatus.siemens.com Fri Apr 3 07:57:11 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Fri, 3 Apr 1992 07:57:11 -0500 (EST) Subject: Why does the error rise in a SRN?
In-Reply-To: <9204030008.AA29877@odin.ucsd.edu> References: <9204030008.AA29877@odin.ucsd.edu> Message-ID: Well, Ray is right, but it is also true that Elman nets do an approximation to the true gradient. There are cases where it will actually point in the wrong direction for the pattern. g. Not to the complete gradient...right? Steve  From p-mehra at uiuc.edu Fri Apr 3 14:45:54 1992 From: p-mehra at uiuc.edu (Pankaj Mehra) Date: Fri, 03 Apr 92 13:45:54 CST Subject: Why does the error rise in a SRN? Message-ID: <9204031945.AA29224@rhea> In response to: Simon Dennis Something which seems to happen with surprising frequency is that the error will decrease for a period and then will start to increase again. Questions: (3) Why does it happen? --------- Ray Watrous An increase in error can occur with fixed step size algorithms ... a well-known property of such algorithms, but seems to be encountered in practice more frequently with recurrent networks. ... small changes in some regions of weight space can have large effects on the error because of the nonlinear feedback in the recurrent network. --------- Minh.Tue.Vo at cs.cmu.edu the effect somewhat by tweaking the learning rate and the momentum, but I couldn't eliminate it completely. TDNN doesn't seem to have that problem. --------- Pineda (1988) explains this sensitivity to learning rate/step-size very well. On pages 223 and 231 of that paper, he shows that "adiabatic" weight modification (= slowness of learning rate w.r.t. the fluctuations at the input) is important for learning to converge. TDNNs work because they do not exhibit the same kind of feedback dynamics as the recurrent networks of Jordan and Elman. Pineda, Fernando J., Dynamics and Architectures for Neural Computation, Jrnl. of Complexity, Vol. 4, pp. 216-245, 1988. -Pankaj  From peter at psy.ox.ac.uk Fri Apr 3 15:03:23 1992 From: peter at psy.ox.ac.uk (Peter Foldiak) Date: Fri, 3 Apr 92 15:03:23 BST Subject: Technical Report available Message-ID: <9204031403.AA00345@brain.cns.ox.ac.uk> The following report is now available: Models of sensory coding Peter Foldiak Cambridge University Engineering Department Tech. Report CUED/F-INFENG/TR 91 (technical report version of Ph.D. dissertation) For a copy, send physical mail address to: peter at psy.oxford.ac.uk or to Peter Foldiak MRC Research Centre in Brain and Behaviour, Dept. Experimental Psychol., University of Oxford, South Parks Road, Oxford OX1 3UD, U.K. (I may also have to ask you for a check to cover postage.) ------ Abstract 1 - An 'anti-Hebbian' synaptic modification rule is demonstrated to be able to adaptively form an uncorrelated representation of the correlated input signal. This mechanism can match the distribution of input patterns to the actual signalling space of the representation units, achieving information-theoretically optimal signal on noisy units. An uncorrelated, equal variance signal also makes fast, optimally efficient least-mean-square (LMS) error correcting learning possible. 2 - A combination of Hebbian and anti-Hebbian connections is demonstrated to implement a form of the statistical method of Principal Component Analysis, which reduces the dimensionality of a noisy Gaussian signal while maximising the information content of the representation, even when the units themselves are noisy. 
3 - A similar arrangement of biologically more plausible, non- linear units is shown to be able to adaptively code inputs into a sparse representation, substantially reducing the higher- order statistical redundancy of the representation without considerable loss of information. Such a representation is advantageous if it is to be used in further associative learning stages. 4 - A Hebbian rule modified by a trace mechanism is studied, that allows processing units to respond in a way which is invariant with respect to commonly occurring transformations of the input signal. ----  From doya at crayfish.UCSD.EDU Fri Apr 3 23:43:12 1992 From: doya at crayfish.UCSD.EDU (Kenji Doya) Date: Fri, 3 Apr 92 20:43:12 PST Subject: Papers on recurrent nets in Neuroprose Message-ID: <9204040443.AA29317@crayfish.UCSD.EDU> The following papers have been placed in Neuroprose archive. The first paper deals with Simon's problem: Why the error increases in the gradient descent learning of recurrent neural networks? ------------------------------------------------------------------------- File name: doya.bifurcation.ps.Z Bifurcations in the Learning of Recurrent Neural Networks Kenji Doya Dept. of Biology, U. C. San Diego Unlike feed-forward networks, the output of a recurrent network can change drastically with an infinitesimal change in the network parameter when it passes through a bifurcation point. The possible hazards caused by the bifurcations of the network dynamics and the learning equations are investigated. The roles of teacher forcing, preprogramming of network structures, and the approximate learning algorithms are discussed. To appear in: Proceedings of 1992 IEEE International Symposium on Circuits and Systems, May 10-13, 1992, San Diego. ------------------------------------------------------------------------- File name: doya.synchronization.ps.Z Adaptive Synchronization of Neural and Physical Oscillators Kenji Doya Shuji Yoshizawa U. C. San Diego University of Tokyo Animal locomotion patterns are controlled by recurrent neural networks called central pattern generators (CPGs). Although a CPG can oscillate autonomously, its rhythm and phase must be well coordinated with the state of the physical system using sensory inputs. In this paper we propose a learning algorithm for synchronizing neural and physical oscillators with specific phase relationships. Sensory input connections are modified by the correlation between cellular activities and input signals. Simulations show that the learning rule can be used for setting sensory feedback connections to a CPG as well as coupling connections between CPGs. To appear in: J.E. Moody, S.J. Hanson, and R.P. Lippmann, (Eds.) Advances in Neural Information Processing Systems 4, San Mateo, CA: Morgan Kaufmann (1992). ------------------------------------------------------------------------- To retrieve the papers by anonymous ftp: unix> ftp archive.cis.ohio-state.edu # (128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get doya.bifurcation.ps.Z ftp> get doya.synchronization.ps.Z ftp> quit unix> uncompress doya.bifurcation.ps.Z unix> uncompress doya.synchronization.ps.Z unix> lpr doya.bifurcation.ps unix> lpr doya.synchronization.ps They are also available for anonymous ftp from "crayfish.ucsd.edu" (128.54.16.104), directory "pub/doya". 
------------------------------------------------------------------------- Kenji Doya Department of Biology, University of California, San Diego La Jolla, CA 92093-0322, USA Phone: (619)534-3954/5548 Fax: (619)534-0301  From ajr at eng.cam.ac.uk Sun Apr 5 13:15:24 1992 From: ajr at eng.cam.ac.uk (Tony Robinson) Date: Sun, 5 Apr 92 13:15:24 BST Subject: Why does the error rise in a SRN? In-Reply-To: <1992Apr4.144502.28704@eng.cam.ac.uk> Message-ID: <2743.9204051215@dsl.eng.cam.ac.uk> In response to the lack of a true gradient signal in "simple recurrent" (Elman-style) back-propagation networks, Scott Fahlman writes: >It would be possible, but very expensive, to add the missing terms into the >Elman net. You end up with something that looks much like the >Williams-Zipser RTRL model... It does not have to be significantly harder to compute a good approximation to the error signal than Elman's approximation to it. The method of expanding the network in time achieves this by changing the per-pattern update to a per-buffer update. The buffer length should be longer than the expected context effects, and shorter than the training set size if the advantages of frequent updating are to be maintained [in practice this is not a difficult constraint]. The method is:

  Replicate the network N times, where N is the buffer length, and stitch it together where the activations are to be passed forward, to make one large network.
  Place N patterns at the N input positions, and do a forward pass.
  Place N targets at the N output positions and (using your favourite error measure) perform a standard backward pass through the large network.
  Add up all the partial gradients for every shared weight and use the result in your favourite hack of gradient descent.

Of course there are some end effects with a finite length buffer, but these can be made small by making the buffer large enough, and placing the buffer boundaries at different positions in the training data on subsequent passes. However, adding in all those extra nasty non-linearities into the gradient signal gives a much harder training problem. I think that it is worth it in terms of the increase in computational power of the network. Tony [Robinson]  From watrous at cortex.siemens.com Mon Apr 6 15:42:45 1992 From: watrous at cortex.siemens.com (Ray Watrous) Date: Mon, 6 Apr 92 15:42:45 EDT Subject: Increase in Error in SRN Message-ID: <9204061942.AA02176@cortex.siemens.com.siemens.com> The following summary and clarification of the further comments on increased error for recurrent networks might be helpful: There are three possible sources of an increase in error: 1. The approximation introduced in online learning by considering only one example at a time. This method of gradient approximation clearly should be paired with an infinitesimal step size algorithm, since long range extrapolations (as in variable step size algorithms) would lead to too large a change in the model based on insufficient data; this would destabilize learning. 2. The approximation in the gradient computation for a recurrent network by truncating the backward recursion. Here, the computation of the full gradient by backpropagation-in-time is no more expensive than the truncated version; it requires only that the activation history for a token be recorded, and the gradient information accumulated in a right-to-left pass, in a manner analogous to the top-to-bottom pass in normal backprop. Thus, the approximation is unnecessary.
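A minimal sketch of the buffered backward pass Tony Robinson describes above, and of the right-to-left accumulation that point 2 refers to, for a single-hidden-layer recurrent net with tanh hidden units, linear outputs and squared error; those choices, and all variable names, are illustrative assumptions rather than anyone's actual code. The network is unrolled over a buffer of N steps, the error is propagated right to left through the replicas (including the terms an Elman-style net drops), and the partial gradients for each shared weight are summed before one update.

    import numpy as np

    def buffered_gradient(xs, targets, W_in, W_rec, W_out, h0):
        """Full backpropagation-in-time over a buffer of N = len(xs) patterns."""
        N = len(xs)
        hs, ys = [h0], []
        for x in xs:                                   # forward pass through the N replicas
            h = np.tanh(W_in @ x + W_rec @ hs[-1])
            hs.append(h)
            ys.append(W_out @ h)

        gW_in = np.zeros_like(W_in)
        gW_rec = np.zeros_like(W_rec)
        gW_out = np.zeros_like(W_out)
        dh_next = np.zeros_like(h0)                    # error arriving from step t+1

        for t in reversed(range(N)):                   # right-to-left backward pass
            dy = ys[t] - targets[t]                    # dE/dy for squared error
            gW_out += np.outer(dy, hs[t + 1])
            dh = W_out.T @ dy + dh_next                # the second term is what Elman nets drop
            dz = dh * (1.0 - hs[t + 1] ** 2)           # back through the tanh nonlinearity
            gW_in += np.outer(dz, xs[t])
            gW_rec += np.outer(dz, hs[t])
            dh_next = W_rec.T @ dz                     # pass the error back one time step

        return gW_in, gW_rec, gW_out                   # summed gradients: one update per buffer

The per-buffer update and the handling of buffer boundaries are as Robinson describes; the particular unit and error functions here are chosen only to keep the sketch short.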
A forward form of the complete gradient is more complex computationally; (see Barak Pearlmutter, Two New Learning Procedures for Recurrent Networks, Neural Network Review, v3, pp 99-101, 1990). 3. The use of a fixed step-size algorithm, which is known to be unstable. This is where a line-search, or golden section search, or other method can be used to control the descent of the error. So, roughly the situation is that fixed step-sized methods can be used with gradient approximation methods with the possibility that the error can increase. Variable step size methods can be used with complete gradient methods, in which case the error is guaranteed to be monotonically decreasing. Gary Kuhn has reported good results using a forward-in-time complete gradient algorithm based on estimates of the gradient over a balanced subset of the training data that increases during training (Some Variations on the Training of Recurrent Networks, with Norman Herzberg, in Neural Networks: Theory and Applications, Mammone and Zeeri, eds. Academic Press, 1991). Whether the network instantiates time-delay links is relevant only if it is restricted to a feedforward architecture; in that case, only considerations 1 and 3 apply. Recurrent time-delay models have been successfully trained using the complete gradient and a line-search (embedded in a quasi-Newton optimization method), with the result that there has been no increase in the objective function. (R. Watrous, Phoneme Recognition Using Connectionist Networks, J. Acoust. Soc. Am. 87(4) pp 1753-1772, 1990). Raymond Watrous Siemens Corporate Research 755 College Road East Princeton, NJ 08540 (609) 734-6596  From plaut+ at CMU.EDU Tue Apr 7 20:23:02 1992 From: plaut+ at CMU.EDU (David Plaut) Date: Tue, 07 Apr 92 20:23:02 -0400 Subject: TR available Message-ID: <28938.702692582@K.GP.CS.CMU.EDU> ******************* PLEASE DO NOT FORWARD TO OTHER BBOARDS ******************* Perseverative and Semantic Influences on Visual Object Naming Errors in Optic Aphasia: A Connectionist Account David C. Plaut Tim Shallice Department of Psychology Department of Psychology Carnegie Mellon University University College, London plaut+ at cmu.edu ucjtsts at ucl.ac.uk Technical Report PDP.CNS.92.1 A recurrent back-propagation network is trained to generate semantic representations of objects from high-level visual representations. In addition to the standard weights, the network has correlational weights useful for implementing short-term associative memory. Under damage, the network exhibits the complex semantic and perseverative effects of patients with a visual naming disorder known as ``optic aphasia,'' in which previously presented objects influence the response to the current object. Like optic aphasics, the network produces predominantly semantic rather than visual errors because, in contrast to reading, there is some structure in the mapping from visual to semantic representations for objects. This is the third TR in the "Parallel Distributed Processing and Cognitive Neuroscience" Technical Report series. To retrieve it via FTP (note that this is NOT the neuroprose archive): unix> ftp 128.2.248.152 # hydra.psy.cmu.edu Name: anonymous Password: ftp> cd pub/pdp.cns ftp> binary ftp> get pdp.cns.92.1.ps.Z ftp> quit unix> zcat pdp.cns.92.1.ps.Z | lpr The file ABSTRACTS in the same directory contains the titles and abstracts of all of the TRs in the series. For those who do not have FTP access, physical copies can be requested from Barbara Dorney . 
-Dave ------------------------------------------------------------------------------ David Plaut plaut+ at cmu.edu Department of Psychology 412/268-5145 Carnegie Mellon University Pittsburgh, PA 15213-3890  From TEPPER at CVAX.IPFW.INDIANA.EDU Tue Apr 7 21:25:58 1992 From: TEPPER at CVAX.IPFW.INDIANA.EDU (TEPPER@CVAX.IPFW.INDIANA.EDU) Date: Tue, 7 Apr 1992 21:25:58 -0400 (EDT) Subject: NEW Fifth NN & PDP Conference Program Message-ID: <920407212559.202007c8@CVAX.IPFW.INDIANA.EDU> Here is an updated version of the program for The Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University. Please notice Phil Best's lecture on Friday. It did not appear on the previous announcement. Fifth NN & PDP CONFERENCE PROGRAM - April 9, 10 and 11,1992 ----------------------------------------------------------- The Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University at Fort Wayne will be held April 9, 10, and 11, 1992. Conference registration is $20 (on site). Students and members or employees of supporting organizations attend free. Inquiries should be addressed to: US mail: ------- Pr. Samir Sayegh Physics Department Indiana University-Purdue University Fort Wayne, IN 46805-1499 email: sayegh at ipfwcvax.bitnet ----- FAX: (219)481-6880 --- Voice: (219) 481-6306 OR 481-6157 ----- All talks will be held in Kettler Hall, Room G46: Thursday, April 9, 6pm-9pm; Friday Morning & Afternoon (Tutorial Sessions), 8:30am-12pm & 1pm-4:30pm and Friday Evening 6pm-9pm; Saturday, 9am-12noon. Parking will be available near the Athletic Building or at any Blue A-B parking lots. Do not park in an Orange A lot or you may get a parking violation ticket. Special hotel rates (IPFW corporate rates) are available at Canterbury Green, which is a 5 minute drive from the campus. The number is (219) 485-9619. The Marriott Hotel also has corporate rates for IPFW and is about a 10 minute drive. Their number is (219) 484-0411. Another hotel with corporate rates for IPFW is Don Hall's Guesthouse (about 10 minutes away). Their number is (219) 489-2524. The following talks will be presented: Applications I - Thursday 6pm-7:30pm -------------------------------------- Nasser Ansari & Janusz A. Starzyk, Ohio University. DISTANCE FIELD APPROACH TO HANDWRITTEN CHARACTER RECOGNITION Thomas L. Hemminger & Yoh-Han Pao, Case Western Reserve University. A REAL- TIME NEURAL-NET COMPUTING APPROACH TO THE DETECTION AND CLASSIFICATION OF UNDERWATER ACOUSTIC TRANSIENTS Seibert L. Murphy & Samir I. Sayegh, Indiana-Purdue University. ANALYSIS OF THE CLASSIFICATION PERFORMANCE OF A BACK PROPAGATION NEURAL NETWORK DESIGNED FOR ACOUSTIC SCREENING S. Keyvan, L. C. Rabelo, & A. Malkani, Ohio University. NUCLEAR DIAGNOSTIC MONITORING SYSTEM USING ADAPTIVE RESONANCE THEORY J.L. Fleming & D.G. Hill, Armstrong Lab, Brooks AFB. STUDENT MODELING USING ARTIFICIAL NEURAL NETWORKS Biological and Cooperative Phenomena Optimization I - Thursday 7:50pm-9pm --------------------------------------------------------------------------- Ljubomir T. Citkusev & Ljubomir J., Buturovic, Boston University. NON- DERIVATIVE NETWORK FOR EARLY VISION Yalin Hu & Robert J. Jannarone, University of South Carolina. A NEUROCOMPUTING KERNEL ALGORITHM FOR REAL-TIME, CONTINUOUS COGNITIVE PROCESSING M.B. Khatri & P.G. Madhavan, Indiana-Purdue University, Indianapolis. ANN SIMULATION OF THE PLACE CELL PHENOMENON USING CUE SIZE RATIO Mark M. Millonas, University of Texas at Austin. 
CONNECTIONISM AND SWARM INTELLIGENCE --------------------------------------------------------------------------- --------------------------------------------------------------------------- Tutorials I - Friday 8:30am-11:45am ------------------------------------- Phil Best, Miami University. PROCESSING OF SPATIAL INFORMATION IN THE BRAIN" Bill Frederick, Indiana-Purdue University. INTRODUCTION TO FUZZY LOGIC Helmut Heller, University of Illinois. INTRODUCTION TO TRANSPUTER SYSTEMS Arun Jagota, SUNY-Buffalo. THE HOPFIELD NETWORK, ASSOCIATIVE MEMORIES, AND OPTIMIZATION Tutorials II - Friday 1:15pm-4:30pm ------------------------------------- Krzysztof J. Cios, University Of Toledo. SELF-GENERATING NEURAL NETWORK ALGORITHM : CID3 APPLICATION TO CARDIOLOGY Robert J. Jannarone, University of South Carolina. REAL-TIME NEUROCOMPUTING, AN INTRODUCTION Network Analysis I - Friday 6pm-7:30pm ---------------------------------------- M.R. Banan & K.D. Hjelmstad, University of Illinois at Urbana-Champaign. A SUPERVISED TRAINING ENVIRONMENT BASED ON LOCAL ADAPTATION, FUZZINESS, AND SIMULATION Pranab K. Das II, University of Texas at Austin. CHAOS IN A SYSTEM OF FEW NEURONS Arun Maskara & Andrew Noetzel, New Jersey Institute of Technology. FORCED LEARNING IN SIMPLE RECURRENT NEURAL NETWORKS Samir I. Sayegh, Indiana-Purdue University. SEQUENTIAL VS CUMULATIVE UPDATE: AN EXPANSION D.A. Brown, P.L.N. Murthy, & L. Berke, The College of Wooster. SELF- ADAPTATION IN BACKPROPAGATION NETWORKS THROUGH VARIABLE DECOMPOSITION AND OUTPUT SET DECOMPOSITION Applications II - Friday 7:50pm-9pm ------------------------------------- Susith Fernando & Karan Watson, Texas A & M University. ANNs TO INCORPORATE ENVIRONMENTAL FACTORS IN HI FAULTS DETECTION D.K. Singh, G.V. Kudav, & T.T. Maxwell, Youngstown State University. FUNCTIONAL MAPPING OF SURFACE PRESSURES ON 2-D AUTOMOTIVE SHAPES BY NEURAL NETWORKS K. Hooks, A. Malkani, & L. C. Rabelo, Ohio University. APPLICATION OF ARTIFICIAL NEURAL NETWORKS IN QUALITY CONTROL CHARTS B.E. Stephens & P.G. Madhavan, Purdue University at Indianapolis. SIMPLE NONLINEAR CURVE FITTING USING THE ARTIFICIAL NEURAL NETWOR ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- Network Analysis II - Saturday 9am-10:30am ------------------------------------------- Sandip Sen, University of Michigan. NOISE SENSITIVITY IN A SIMPLE CLASSIFIER SYSTEM Xin Wang, University of Southern California. DYNAMICS OF DISCRETE-TIME RECURRENT NEURAL NETWORKS: PATTERN FORMATION AND EVOLUTION Zhenni Wang and Christine Di Massimo, University of Newcastle. A PROCEDURE FOR DETERMINING THE CANONICAL STRUCTURE OF MULTILAYER NEURAL NETWORKS Srikanth Radhakrishnan, Tulane University. PATTERN CLASSIFICATION USING THE HYBRID COULOMB ENERGY NETWORK Biological and Cooperative Phenomena Optimization II - Saturday 10:50am-12noon ------------------------------------------------------------------------------- J. Wu, M. Penna, P.G. Madhavan, & L. Zheng, Purdue University at Indianapolis. COGNITIVE MAP BUILDING AND NAVIGATION C. Zhu, J. Wu, & Michael A. Penna, Purdue University at Indianapolis. USING THE NADEL TO SOLVE THE CORRESPONDENCE PROBLEM Arun Jagota, SUNY-Buffalo. COMPUTATIONAL COMPLEXITY OF ANALYZING A HOPFIELD-CLIQUE NETWORK Assaad Makki, & Pepe Siy, Wayne State University. 
OPTIMAL SOLUTIONS BY MODIFIED HOPFIELD NEURAL NETWORKS  From pollack at cis.ohio-state.edu Wed Apr 8 16:43:38 1992 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Wed, 8 Apr 92 16:43:38 -0400 Subject: Refs requested: Linear Threshold Decisionmaking Message-ID: <9204082043.AA01988@dendrite.cis.ohio-state.edu> Below is a one-page abstract describing results for when simple linear threshold algorithms (m-out-of-n functions) are effective for noisy inputs and any number of outputs. I have not been able to uncover similar results for these algorithms or for linear threshold algorithms in general. What I have found are psychological studies using linear regression to find optimized decision models: Dawes, R. M. and Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81(2):95-106. Elstein, A. S., Shulman, L. S., and Sprafka, S. A. (1978). Medical Problem Solving. Harvard University Press, Cambridge, Massachusetts. Meehl, P. E. (1954). Clinical versus Statistical Predictions: A Theoretical Analysis and Review of the Evidence. University of Minnesota Press, Minneapolis. and some explanations of why they work: Wainer, H. (1976). Estimating coefficients in linear models: It don't make no nevermind. Psychological Bulletin, 83(2):213-217. Of course, the statistical literature is filled with work on linear discriminant functions and linear regression. A superficial search did not yield any results in the same spirit as mine. I should also mention that I know of Nick Littlestone's algorithm for learning m-out-of-n functions when the sample is linearly separable. I don't know of any more recent results for noisy inputs. Littlestone, N. (1987). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning 2(4):285-318. If you know of related work, please send mail to byland at cis.ohio-state.edu. Thanks, Tom ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Effectiveness of Simple Linear Threshold Algorithms Tom Bylander Laboratory for Artificial Intelligence Research Department of Computer and Information Science The Ohio State University Columbus, Ohio, 43210 byland at cis.ohio-state.edu Consider the following simple linear threshold algorithm (SLT) (m-out-of-n function) for determining whether to believe a given "output" proposition (hereafter called an "output") from a given set of "input" propositions (hereafter called "inputs"): Collect all the inputs relevant to the output. Count the number of inputs in favor of the output. Subtract the number of inputs against the output. Believe the output if the result exceeds some threshold. SLT is an effective algorithm if two conditions are met. One, sufficient inputs are available. Two, the inputs are independent evidence for the output. For example, if x_1 and x_2 are two inputs and y is the output, then it should be the case that: P(x_1&x_{2}|y) = P(x_1|y)*P(x_2|y) I shall call this property "conditional independence." Consider the case where there are n outputs, each output is either 0 or 1, and each output is associated with a set of conditionally independent inputs, each 0 (unfavorable) or 1 (favorable). Let t be a number between 0 and .5 such that for any output y, the difference between the average value of its inputs when y=1 and the average when y=0 is greater than 2t. 
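The SLT voting rule described at the start of the abstract fits in a few lines of code; this sketch illustrates only the rule itself, not the analysis that follows. Inputs are 0 (unfavorable) or 1 (favorable) as in the abstract, and the default threshold of zero is an invented choice.

    def slt_believe(inputs, threshold=0):
        """Simple linear threshold (m-out-of-n) rule: count votes for and against."""
        favorable = sum(inputs)                   # inputs are 0/1 votes about the output
        unfavorable = len(inputs) - favorable
        return (favorable - unfavorable) > threshold

    # e.g. five of seven conditionally independent indicators in favor:
    print(slt_believe([1, 1, 0, 1, 1, 0, 1]))     # True, since 5 - 2 = 3 > 0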
Based on inequalities from Hoeffding (1963), (ln n)/t^2 inputs per output are sufficient to ensure that the probability that SLT makes a wrong conclusion (i.e., that some assignment to some output is in error) is less than 1/n. This result does not depend on the prior probabilities of the outputs. How does this compare to optimal? Suppose that each input for an output corresponds to the output except for noise r. Based on the central limit theorem, r(1-r)(ln n)/t^2 inputs per output are insufficient to achieve 1/n error for the optimal algorithm, where t=.5-r. This result assumes that the prior probability of each assignment to the outputs is equally likely. I speculate that if H is the entropy of the outputs, then ln H should be substituted for ln n in the above bound. Thus, SLT is nearly optimal, asymptotically speaking, for noisy inputs and any number of outputs. Conditional independence is the most problematic assumption in the analysis. Perfect conditional independence is unlikely in reality; however, Hoeffding's inequalities for m-dependent random variables strongly suggest that SLT should work well as long as the same evidence is not counted too many times. Finding inputs that are sufficiently conditionally-independent is the central learning problem for SLT. Hoeffding, W. (1963). Probability inequalities for sums of bounded variables. J. American Statistical Association, 58(1):13-30.  From aisb93-prog at computer-science.birmingham.ac.uk Sat Apr 11 14:25:38 1992 From: aisb93-prog at computer-science.birmingham.ac.uk (Donald Peterson) Date: Sat, 11 Apr 92 14:25:38 BST Subject: Conference : AISB'93 Message-ID: <324.9204111325@christopher-robin.cs.bham.ac.uk> ================================================================ AISB'93 CONFERENCE : ANNOUNCEMENT AND CALL FOR PAPERS Theme: "Prospects for AI as the General Science of Intelligence" 29 March -- 2 April 1993 University of Birmingham ================================================================ 1. Introduction 2. Invited talks 3. Topic areas for submitted papers 4. Timetable for submitted papers 5. Paper lengths and submission details 6. Call for referees 7. Workshops and Tutorials 8. LAGB Conference 9. Email, paper mail, phone and fax. 1. INTRODUCTION The Society for the Study of Artificial Intelligence and the Simulation of Behaviour (one of the oldest AI societies) will hold its ninth bi-annual conference on the dates above at the University of Birmingham. The site is Manor House, a charming and convivial residential hall close to the University. Tutorials and Workshops are planned for Monday 29th March and the morning of Tuesday 30th March, and the main conference will start with lunch on Tuesday 30th March and end on Friday 2nd April. The Programme Chair is Aaron Sloman, and the Local Arrangements Organiser is Donald Peterson, both assisted by Petra Hickey. The conference will be "single track" as usual, with invited speakers and submitted papers, plus a "poster session" to allow larger numbers to report on their work, and the proceedings will be published. The conference will cover the usual topic areas for conferences on AI and Cognitive Science. However, with the turn of the century approaching, and with computer power no longer a major bottleneck in most AI research (apart from connectionism) it seemed appropriate to ask our invited speakers to look forwards rather than backwards, and so the theme of the conference will be "Prospects for AI as the general science of intelligence". 
Submitted papers exploring this are also welcome, in addition to the normal technical papers. 2. INVITED TALKS So far the following have agreed to give invited talks: Prof David Hogg (Leeds) "Prospects for computer vision" Prof Allan Ramsay (Dublin) "Prospects for natural language processing by machine" Prof Glyn Humphreys (Birmingham) "Prospects for connectionism - science and engineering". Prof Ian Sommerville (Lancaster) "Prospects for AI in systems design" Titles are provisional. 3. TOPIC AREAS for SUBMITTED PAPERS Papers are invited in any of the normal areas represented at AI and Cognitive Science conferences, including: AI in Design, AI in software engineering Teaching AI and Cognitive Science, Analogical and other forms of Reasoning Applications of AI, Automated discovery, Control of actions, Creativity, Distributed intelligence, Expert Systems, Intelligent interfaces Intelligent tutoring systems, Knowledge representation, Learning, Methodology, Modelling affective processes, Music, Natural language, Naive physics, Philosophical foundations, Planning, Problem Solving, Robotics, Tools for AI, Vision, Papers on neural nets or genetic algorithms are welcomed, but should be capable of being judged as contributing to one of the other topic areas. Papers may either be full papers or descriptions of work to be presented in a poster session. 4. TIMETABLE for SUBMITTED PAPERS Submission deadline: 1st September 1992 Date for notification of acceptances: mid October 1992 Date for submission of camera ready final copy: mid December 1992 The conference proceedings will be published. Long papers and invited papers will definitely be included. Selected poster summaries may be included if there is space. 5. PAPER LENGTH and SUBMISSION DETAILS Full papers: 10 pages maximum, A4 or 8.5"x11", no smaller than 12 point print size Times Roman or similar preferred, in letter quality print. Poster submissions 5 pages summary Excessively long papers will be rejected without being reviewed. All submissions should include 1. Full names and addresses of all authors 2. Electronic mail address if available 3. Topic area 4. Label: "Long paper" or "Poster summary" 5. Abstract no longer than 10 lines. 6. Statement certifying that the paper is not being submitted elsewhere for publication. 7. An undertaking that if the paper is accepted at least one of the authors will attend the conference. THREE copies are required. 6. CALL for REFEREES Anyone willing to act as a reviewer during September should write to the Programme Chair, with a summary CV or indication of status and experience, and preferred topic areas. 7. WORKSHOPS and TUTORIALS The first day and a half of the Conference are allocated to workshops and tutorials. These will be organised by Dr Hyacinth S. Nwana, and anyone interested in giving a workshop or tutorial should contact her at: Department of Computer Science, University of Keele, Staffs. ST5 5BG. U.K. phone: +44 782 583413, or +44 782 621111(x 3413) email JANET: nwanahs at uk.ac.keele.cs BITNET: nwanahs%cs.kl.ac.uk at ukacrl UUCP : ...!ukc!kl-cs!nwanahs other : nwanahs at cs.keele.ac.uk 8. LAGB CONFERENCE. Shortly before AISB'93, the Linguistics Association of Great Britain (LAGB) will hold its Spring Meeting at the University of Birmingham from 22-24th March, 1993. For more information, please contact Dr. William Edmondson: postal address as below; phone +44-(0)21-414-4763; email EDMONDSONWH at vax1.bham.ac.uk 9. EMAIL, PAPER MAIL, PHONE and FAX. 
Email: * aisb93-prog at cs.bham.ac.uk (for communications relating to submission of papers to the programme) * aisb93-delegates at cs.bham.ac.uk (for information on accommodation, meals, programme etc. as it becomes available --- enquirers will be placed on a mailing list) Address: AISB'93 (prog) or AISB'93 (delegates), School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, U.K. Phone: +44-(0)21-414-3711 Fax: +44-(0)21-414-4281 Donald Peterson and Aaron Sloman, April 1992. ----------------------------------------------------------------------------  From stolcke at ICSI.Berkeley.EDU Mon Apr 13 15:32:48 1992 From: stolcke at ICSI.Berkeley.EDU (Andreas Stolcke) Date: Mon, 13 Apr 92 13:32:48 MDT Subject: Paper available Message-ID: <9204132032.AA23691@icsib30.ICSI.Berkeley.EDU> The following short paper is available from the ICSI techreport archive. Instructions for retrieval can be found at the end of this message. Please ask me for hardcopies only if you don't have access to ftp. The paper will be presented at the Workshop on Integrating Neural and Symbolic Processes at AAAI-92 later this year. We are making it generally available mainly for the benefit of several people who had expressed interest in our results in the past. --Andreas ------------------------------------------------------------------------------ tr-92-025.ps.Z Tree Matching with Recursive Distributed Representations Andreas Stolcke and Dekai Wu TR-92-025 April 1992 We present an approach to the structure unification problem using distributed representations of hierarchical objects. Binary trees are encoded using the recursive auto-association method (RAAM), and a unification network is trained to perform the tree matching operation on the RAAM representations. It turns out that this restricted form of unification can be learned without hidden layers and producing good generalization if we allow the error signal from the unification task to modify both the unification network and the RAAM representations themselves. ------------------------------------------------------------------------------ Instructions for retrieving ICSI technical reports via ftp. Replace tr-XX-YYY with the appropriate TR number. unix% ftp ftp.icsi.berkeley.edu Connected to icsic.ICSI.Berkeley.EDU. 220 icsic FTP server (Version 6.16 Mon Jan 20 13:24:00 PST 1992) ready. Name (ftp.icsi.berkeley.edu:): anonymous 331 Guest login ok, send e-mail address as password. Password: your_name at your_machine 230 Guest login ok, access restrictions apply. ftp> cd /pub/techreports 250 CWD command successful. ftp> binary 200 Type set to I. ftp> get tr-XX-YYY.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for tr-XX-YYY.ps.Z (ZZZZ bytes). 226 Transfer complete. local: tr-XX-YYY.ps.Z remote: tr-XX-YYY.ps.Z 272251 bytes received in ZZ seconds (ZZZ Kbytes/s) ftp> quit 221 Goodbye. unix% uncompress tr-XX-YYY.ps.Z unix% lpr -Pyour_printer tr-XX-YYY.ps  From LINDSEY at FNAL.FNAL.GOV Wed Apr 15 21:35:36 1992 From: LINDSEY at FNAL.FNAL.GOV (LINDSEY@FNAL.FNAL.GOV) Date: Wed, 15 Apr 1992 20:35:36 -0500 (CDT) Subject: A VLSI Neural Net Application in High Energy Physics Message-ID: <920415203536.20e0365d@FNAL.FNAL.GOV> For those interested in hardware neural network applications, copies of the following paper are available via mail or fax. Send requests to Clark Lindsey at BITNET%"LINDSEY at FNAL". REAL TIME TRACK FINDING IN A DRIFT CHAMBER WITH A VLSI NEURAL NETWORK* Clark S. 
Lindsey (a), Bruce Denby (a), Herman Haggerty (a), and Ken Johns (b) (a) Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, Illinois 60510. (b) University of Arizona, Dept of Physics, Tucson, Arizona 85721. ABSTRACT In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. Accepted by Nuclear Instruments and Methods, Section A * FERMILAB-Pub-92/55  From dlovell at s1.elec.uq.oz.au Thu Apr 16 13:28:07 1992 From: dlovell at s1.elec.uq.oz.au (David Lovell) Date: Thu, 16 Apr 92 12:28:07 EST Subject: Neocognitron Performance paper in Neuroprose Message-ID: <9204160228.AA01588@c10.elec.uq.oz.au> **DO NOT FORWARD TO OTHER GROUPS** The following paper (10 pages in length) has been placed in the Neuroprose archive and submitted to Neural Networks. Any comments or questions (both of which are invited) should be addressed to the first author: dlovell at s1.elec.uq.oz.au Thanks must go to Jordan Pollack for maintaining this excellent service. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% THE PERFORMANCE OF THE NEOCOGNITRON WITH VARIOUS S-CELL AND C-CELL TRANSFER FUNCTIONS David Lovell & Ah Chung Tsoi Intelligent Machines Laboratory, Department of Electrical Engineering University of Queensland, Queensland 4072, Australia When a neural network solution to a problem (e.g. handwritten character recognition) is proposed, it is important to know if the structure of the network and the function of the component neurons are well suited to the task. Recent research has examined the {\em structure} of Fukushima's neocognitron and the effect that it has on the classification of distorted input patterns. We present results which assess the classification performance of the neocognitron when the {\em function} of the component neurons is altered. The tests we describe demonstrate that using S-cells with a sigmoidal transfer function and modified activation function significantly enhances the classification performance of the neocognitron. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% filename: lovell.neocog.ps.Z FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: anything ftp> cd pub/neuroprose ftp> binary ftp> get lovell.neocog.ps.Z ftp> bye unix% zcat lovell.neocog.ps.Z | lpr (or whatever *you* do to print a compressed PostScript file) ----------------------------------------------------------------------------- David Lovell - dlovell at s1.elec.uq.oz.au | | Dept. Electrical Engineering | "Oh bother! The pudding is ruined University of Queensland | completely now!" said Marjory, as BRISBANE 4072 | Henry the daschund leapt up and Australia | into the lemon surprise. 
| tel: (07) 365 3564 |  From mccauley at ecn.purdue.edu Mon Apr 20 13:27:57 1992 From: mccauley at ecn.purdue.edu (Darrell McCauley) Date: Mon, 20 Apr 92 12:27:57 -0500 Subject: paper on beef/ultrasound/adaptive logic networks Message-ID: <9204201727.AA14554@cocklebur.ecn.purdue.edu> The following paper (11 pages in length) has been placed in the Neuroprose archive. A shorter version was submitted to Transactions of the ASAE. Any comments or questions should be sent to mccauley at ecn.purdue.edu. This annoucement may be forwarded to other lists/newsgroups. Though I cannot mail hardcopies, I may be willing to e-mail compressed, uuencoded PostScript versions. Of course, thanks to Jordan Pollack for offering this service. I find it very valuable. ------------------------------------------------------------------------- FAT ESTIMATION IN BEEF ULTRASOUND IMAGES USING TEXTURE AND ADAPTIVE LOGIC NETWORKS James Darrell McCauley, USDA Fellow Brian R. Thane, Graduate Student Dept of Agricultural Engineering Dept of Agricultural Engineering Purdue University Texas A&M University (mccauley at ecn.purdue.edu) (thane at diamond.tamu.edu) A. Dale Whittaker, Assistant Professor Dept of Agricultural Engineering Texas A&M University (dale at diamond.tamu.edu) Overviews of Adaptive Logic Networks and co--occurrence image texture are presented, along with a brief synopsis of instrument grading of beef. These tools are used for both prediction and classification of intramuscular fat in beef from ultrasonic images of both live beef animals and slaughtered carcasses. Results showed that Adaptive Logic Networks perform better than any fat prediction method for beef ultrasound images to date and are a viable alternative to statistical techniques. \keywords{Meat, Grading, Automation, Ultrasound Images, Neural Networks.} ------------------------------------------------------------------------- filename: mccauley.beef.ps.Z FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get mccauley.beef.ps.Z ftp> quit unix% zcat mccauley.beef.ps.Z | lpr (or the equivalent) -- James Darrell McCauley Department of Ag Engr, Purdue Univ mccauley at ecn.purdue.edu West Lafayette, Indiana 47907-1146 ** "Do what is important first, then what is urgent." (unknown) **  From ahg at eng.cam.ac.uk Tue Apr 21 11:46:17 1992 From: ahg at eng.cam.ac.uk (ahg@eng.cam.ac.uk) Date: Tue, 21 Apr 92 11:46:17 BST Subject: Paper in neuroprose Message-ID: <28405.9204211046@tulip.eng.cam.ac.uk> ************** PLEASE DO NOT FORWARD TO OTHER NEWSGOUPS **************** The following technical report has been placed in the neuroprose archives at Ohio State University: ALTERNATIVE ENERGY FUNCTIONS FOR OPTIMIZING NEURAL NETWORKS Andrew Gee and Richard Prager Technical Report CUED/F-INFENG/TR 95 Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract When feedback neural networks are used to solve combinatorial optimization problems, their dynamics perform some sort of descent on a continuous energy function related to the objective of the discrete problem. For any particular discrete problem, there are generally a number of suitable continuous energy functions, and the performance of the network can be expected to depend heavily on the choice of such a function. 
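As a point of reference for the abstract above: a minimal sketch of the usual quadratic energy and relaxation dynamics for a feedback optimization network. The sigmoid units, Euler integration and parameter values are generic textbook choices given only for illustration; the report is precisely about alternatives to this kind of energy function.

    import numpy as np

    def energy(v, W, b):
        """Standard quadratic energy E(v) = -1/2 v'Wv - b'v over unit outputs v in [0,1]."""
        return -0.5 * v @ (W @ v) - b @ v

    def relax(W, b, steps=2000, dt=0.05, seed=0):
        """Euler-integrated Hopfield-style dynamics du/dt = -u + Wv + b, v = sigmoid(u).
        For symmetric W this descends an energy closely related to E above."""
        u = np.random.default_rng(seed).normal(0.0, 0.1, size=b.shape)
        for _ in range(steps):
            v = 1.0 / (1.0 + np.exp(-u))
            u += dt * (-u + W @ v + b)
        return 1.0 / (1.0 + np.exp(-u))   # the relaxed outputs encode a candidate solution

How the discrete problem is mapped into W and b, and hence which continuous energy is descended, is exactly the choice the report investigates.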
In this paper, alternative energy functions are employed to modify the dynamics of the network in a predictable manner, and progress is made towards identifying which are well suited to the underlying discrete problems. This is based on a revealing study of a large database of solved problems, in which the optimal solutions are decomposed along the eigenvectors of the network's connection matrix. It is demonstrated that there is a strong correlation between the mean and variance of this decomposition and the ability of the network to find good solutions. A consequence of this is that there may be some problems which neural networks are not well adapted to solve, irrespective of the manner in which the problems are mapped onto the network for solution. ************************ How to obtain a copy ************************ a) Via FTP: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get gee.energy_fns.ps.Z ftp> quit unix> uncompress gee.energy_fns.ps.Z unix> lpr gee.energy_fns.ps (or however you print PostScript) Please note that a couple of the figures in the paper were produced on an Apple Mac, and the resulting PostScript is not quite standard. People using an Apple LaserWriter should have no problems though. b) Via postal mail: Request a hardcopy from Andrew Gee, Speech Laboratory, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, England. or email me: ahg at eng.cam.ac.uk  From dhw at santafe.edu Tue Apr 21 17:54:03 1992 From: dhw at santafe.edu (David Wolpert) Date: Tue, 21 Apr 92 15:54:03 MDT Subject: New paper Message-ID: <9204212154.AA08364@sfi.santafe.edu> ********* DO NOT FORWARD TO OTHER MAILING LISTS ****** The following paper has just been placed in neuroprose. It is a major revision of an earlier preprint of the same title, and appears in the current issue of Complex Systems. ON THE CONNECTION BETWEEN IN-SAMPLE TESTING AND GENERALIZATION ERROR. David H. Wolpert, The Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, NM, 87501. Abstract: This paper proves that it is impossible to justify a correlation between reproduction of a training set and generalization error off of the training set using only a priori reasoning. As a result, the use in the real world of any generalizer which fits a hypothesis function to a training set (e.g., the use of back-propagation) is implicitly predicated on an assumption about the physical universe. This paper shows how this assumption can be expressed in terms of a non-Euclidean inner product between two vectors, one representing the physical universe and one representing the generalizer. In deriving this result, a novel formalism for addressing machine learning is developed. This new formalism can be viewed as an extension of the conventional "Bayesian" formalism which (amongst other things) allows one to address the case where one's assumed "priors" are not exactly correct. The most important feature of this new formalism is that it uses an extremely low-level event space, consisting of triples of {target function, hypothesis function, training set}. Partly as a result of this feature, most other formalisms that have been constructed to address machine learning (e.g., PAC, the Bayesian formalism, the "statistical mechanics" formalism) are special cases of the formalism presented in this paper. Consequently such formalisms are capable of addressing only a subset of the issues addressed in this paper. 
In fact, the formalism of this paper can be used to address all generalization issues of which I am aware: over-training, the need to restrict the number of free parameters in the hypothesis function, the problems associated with a "non-representative" training set, whether and when cross-validation works, whether and when stacked generalization works, whether and when a particular regularizer will work, etc. A summary of some of the more important results of this paper concerning these and related topics can be found in the conclusion. ********************************************** To retrieve this paper, which comes in two parts, do the following: unix> ftp archive.cis.ohio-state.edu login> anonymous password> neuron ftp> binary ftp> cd pub/neuroprose ftp> get wolpert.reichenbach-1.ps.Z ftp> get wolpert.reichenbach-2.ps.Z ftp> quit unix> uncompress wolpert.reichenbach-1.ps.Z unix> uncompress wolpert.reichenbach-2.ps.Z unix> lpr wolpert.reichenbach-1.ps # or however you print out postscript unix> lpr wolpert.reichenbach-2.ps # or however you print out postscript  From radford at ai.toronto.edu Wed Apr 22 14:46:52 1992 From: radford at ai.toronto.edu (Radford Neal) Date: Wed, 22 Apr 1992 14:46:52 -0400 Subject: TR on Bayesian backprop by Hybrid Monte Carlo Message-ID: <92Apr22.144703edt.311@neuron.ai.toronto.edu> *** DO NOT FORWARD TO OTHER LISTS *** The following paper has been placed in the neuroprose archive: BAYESIAN TRAINING OF BACKPROPAGATION NETWORKS BY THE HYBRID MONTE CARLO METHOD Radford M. Neal Department of Computer Science University of Toronto radford at cs.toronto.edu It is shown that Bayesian training of backpropagation neural networks can feasibly be performed by the ``Hybrid Monte Carlo'' method. This approach allows the true predictive distribution for a test case given a set of training cases to be approximated arbitrarily closely, in contrast to previous approaches which approximate the posterior weight distribution by a Gaussian. In this work, the Hybrid Monte Carlo method is implemented in conjunction with simulated annealing, in order to speed relaxation to a good region of parameter space. The method has been applied to a test problem, demonstrating that it can produce good predictions, as well as an indication of the uncertainty of these predictions. Appropriate weight scaling factors are found automatically. By applying known techniques for calculation of ``free energy'' differences, it should also be possible to compare the merits of different network architectures. The work described here should also be applicable to a wide variety of statistical models other than neural networks.
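As an illustration only (not the implementation described in the report), the following is a minimal Python/NumPy sketch of the basic Hybrid Monte Carlo loop over the weights of a toy one-hidden-layer network. The data, network size, prior and noise scales, step size and trajectory length below are invented for the example; the gradient is taken by finite differences rather than by backpropagation; and the simulated-annealing and free-energy machinery mentioned in the abstract is omitted.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed purely for illustration).
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y = np.sin(3.0 * X).ravel() + 0.1 * rng.standard_normal(20)

H = 5                          # hidden units (assumed)
D = H + H + H + 1              # input->hidden, hidden bias, hidden->output, output bias
sigma_w, sigma_n = 1.0, 0.1    # prior and noise standard deviations (assumed)

def unpack(w):
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1)
    b2 = w[3 * H]
    return W1, b1, W2, b2

def predict(w, inputs):
    W1, b1, W2, b2 = unpack(w)
    return (np.tanh(inputs @ W1 + b1) @ W2).ravel() + b2

def energy(w):
    """Negative log posterior of the weights, up to an additive constant."""
    resid = y - predict(w, X)
    return 0.5 * np.sum(resid ** 2) / sigma_n ** 2 + 0.5 * np.sum(w ** 2) / sigma_w ** 2

def grad_energy(w, eps=1e-5):
    # Finite differences keep the sketch short; a real implementation would
    # backpropagate the exact gradient of the energy.
    g = np.zeros_like(w)
    for i in range(len(w)):
        delta = np.zeros_like(w)
        delta[i] = eps
        g[i] = (energy(w + delta) - energy(w - delta)) / (2 * eps)
    return g

def hmc_step(w, step_size=0.01, n_leapfrog=20):
    p = rng.standard_normal(len(w))                   # fresh Gaussian momenta
    w_new, p_new = w.copy(), p.copy()
    p_new -= 0.5 * step_size * grad_energy(w_new)     # leapfrog trajectory
    for _ in range(n_leapfrog - 1):
        w_new += step_size * p_new
        p_new -= step_size * grad_energy(w_new)
    w_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_energy(w_new)
    # Metropolis accept/reject on the total "Hamiltonian" (energy + kinetic term).
    h_old = energy(w) + 0.5 * p @ p
    h_new = energy(w_new) + 0.5 * p_new @ p_new
    return w_new if np.log(rng.random()) < h_old - h_new else w

w = 0.1 * rng.standard_normal(D)
samples = []
for it in range(500):
    w = hmc_step(w)
    if it >= 250:                                     # discard burn-in
        samples.append(w.copy())

# The predictive distribution at a test input is approximated by the spread
# of the sampled networks' outputs.
test_outputs = np.array([predict(s, np.array([[0.3]]))[0] for s in samples])
print("predictive mean %.3f, std %.3f" % (test_outputs.mean(), test_outputs.std()))

The accept/reject test on the total "Hamiltonian" is what makes the retained weight vectors (asymptotically) samples from the posterior, so the spread of their outputs at a test input approximates the predictive distribution referred to in the abstract.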
This paper may be retrieved and printed on a PostScript printer as follows: unix> ftp archive.cis.ohio-state.edu (log on as user 'anonymous') ftp> cd pub/neuroprose ftp> binary ftp> get neal.hmc.ps.Z ftp> quit unix> uncompress neal.hmc.ps.Z unix> lpr neal.hmc.ps For those unable to do this, hardcopies may be requested from: The CRG Technical Report Secretary Department of Computer Science University of Toronto 10 King's College Road Toronto M5S 1A4 CANADA INTERNET: maureen at cs.toronto.edu UUCP: uunet!utai!maureen BITNET: maureen at utorgpu  From rsun at orion.ssdc.honeywell.com Fri Apr 24 15:48:15 1992 From: rsun at orion.ssdc.honeywell.com (Ron Sun) Date: Fri, 24 Apr 92 14:48:15 CDT Subject: No subject Message-ID: <9204241948.AA12586@orion.ssdc.honeywell.com> TR availble: A Connectionist Model for Commonsense Reasoning Incorporating Rules and Similarities Ron Sun Honeywell SSDC 3660 Technology Dr. Minneapolis, MN 55413 rsun at orion.ssdc.honeywell.com For the purpose of modeling commonsense reasoning, we investigate connectionist models of rule-based reasoning, and show that while such models can usually carry out reasoning in exactly the same way as symbolic systems, they have more to offer in terms of commonsense reasoning. A connectionist architecture, {\sc CONSYDERR}, is proposed for capturing certain commonsense reasoning competence, which partially remedies the brittleness problem in traditional rule-based systems. The architecture employs a two-level, dual representational scheme, which utilizes both localist and distributed representations and explores the synergy resulting from the interaction between the two. {\sc CONSYDERR} is therefore capable of accounting for many difficult patterns in commonsense reasoning with this simple combination of the two levels. This work also shows that connectionist models of reasoning are not just ``implementations" of their symbolic counterparts, but better computational models of commonsense reasoning. It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack) No hardcopy available. FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sun.ka.ps.Z ftp> quit unix> uncompress sun.ka.ps.Z unix> lpr sun.ka.ps (or however you print postscript)  From efiesler at idiap.ch Thu Apr 23 11:46:14 1992 From: efiesler at idiap.ch (Emile Fiesler) Date: Thu, 23 Apr 92 17:46:14 +0200 Subject: Paper available entitled: "Neural Network Formalization". Message-ID: <9204231546.AA01268@idiap.ch> The following paper is available via ftp from the neuroprose archive (instructions for retrieval follow the abstract). Neural Network Formalization Emile Fiesler IDIAP Case postale 609, CH-1920 Martigny, Switzerland Electronic mail: EFiesler at IDIAP.CH and H. John Caulfield Alabama A&M University P.O.Box 1268, Normal, AL 35762, U.S.A. A short version of this paper has been accepted for publication in "Artificial Neural Networks II", (Editors I. Alexander and J. Taylor, North-Holland/ Elsevier Science Publishers, Amsterdam, 1992), under the title: "Layer Based Neural Network Formalization". ABSTRACT In order to assist the field of neural networks in its maturing, a formaliza- tion and a solid foundation are essential. Additionally, to permit the intro- duction of formal proofs, it is important to have an all encompassing formal mathematical definition of a neural network. 
This publication offers a neural network formalization consisting of a topological taxonomy, a uniform nomenclature, and an accompanying consistent mnemonic notation. Supported by this formalization, both a flexible hierar- chical and a universal mathematical definition are presented. ------------------------------ To obtain a copy of the paper, follow these FTP instructions: unix> ftp archive.cis.ohio-state.edu (or: ftp 128.146.8.52) login: anonymous password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get fiesler.formalization.ps.Z ftp> bye unix> zcat fiesler.formalization.ps.Z | lpr (or however you uncompress and print postscript) (Unfortunately, I will not be able to provide hard copies of the paper.)  From CSHERTZE at ucs.indiana.edu Fri Apr 24 16:49:39 1992 From: CSHERTZE at ucs.indiana.edu (CANDACE SHERTZER, 5-4658, PSYCHOLOGY 341) Date: Fri, 24 Apr 92 15:49:39 EST Subject: 1992 Cognitive Science Society Conference Info Message-ID: ================================================================ REGISTRATION INFORMATION AND PRELIMINARY PROGRAM for The Fourteenth Annual Conference of the Cognitive Science Society July 29 -- August 1, 1992 Cognitive Science Program Indiana University Bloomington, IN The Fourteenth Annual Conference of the Cognitive Science Society will be held July 29 - August 1, 1992, at Indiana University in Bloomington. The conference will feature seven invited plenary speakers, nine symposia, 110 submitted talks, and approximately 90 posters. There are also special evening events scheduled. ================================================================ PROGRAM (subject to change) Wednesday, July 29: 2:00 - 5:30 p.m. Registration 5:30 - 7:00 Reception 7:00 - 8:15 Plenary Talk Thursday, July 30: 9:00 - 10:15 a.m. Plenary Talk 10:15 - 10:50 Break 10:50 - 12:30 p.m. Talks and Symposia 12:30 - 2:00 Lunch 2:00 - 3:40 Talks and Symposia 3:40 - 4:15 Break 4:15 - 5:30 Plenary Talk 5:30 - 8:00 Banquet (optional) Friday, July 31: 9:00 - 5:30 Same as Thursday 5:30 - 8:00 Poster Session I Saturday, August 1 9:00 - 2:00 Same as Thursday and Friday 2:00 - 3:40 Poster Session II 3:40 - 4:15 Break 4:15 - 5:30 Plenary Talk 5:30 - 8:00 Concert (optional) Sunday Morning, August 2: Buses depart for the Indianapolis Airport ============================================================================= PLENARY SPEAKERS and talk titles: Elizabeth Bates Crosslinguistic studies of language breakdown in aphasia. Department of Cognitive Science, University of California, San Diego Daniel Dennett Problems with some models of consciousness. Center for Cognitive Studies, Tufts University Martha Farah Neuropsychology. Department of Psychology, Carnegie-Mellon University Douglas Hofstadter The centrality of analogy-making in human cognition. Center for Research on Concepts and Cognition, Indiana University John Holland Must learning precede cognition? Department of Psychology, University of Michigan Richard Shiffrin Memory representation, storage, and retrieval. Department of Psychology, Indiana University Michael Turvey Ecological foundations of cognition. Department of Psychology, University of Connecticut ============================================================================== SYMPOSIA: Topics and organizers Representation: Who needs it? 
Timothy van Gelder, Indiana University Beth Preston, University of Georgia Computational models of evolution as tools for cognitive science Rik Belew, University of California, San Diego Dynamic processes in music cognition Caroline Palmer, The Ohio State University Allen Winold, Indiana University Dynamics in the control and coordination of action Geoffrey Bingham, Indiana University Bruce Kay, Brown University Goal-Driven Learning David Leake, Indiana University Ashwin Ram, Georgia Institute of Technology Similarity and representation in early cognitive development Mary Jo Rattermann, Hampshire College Reasoning and visual representations K. Jon Barwise, Indiana University Speech perception and spoken language processing David Pisoni, Indiana University Robert Peterson, Indiana University Analogy, high-level perception, and categorization Douglas Hofstadter, Indiana University Melanie Mitchell, University of Michigan ================================================================ SPECIAL EVENING EVENTS: * Welcoming Reception - Wednesday, July 29. * Gala Banquet - Thursday, July 30. * Indiana University Opera performance of "Carousel" - Saturday, August 1. (The full program will be sent to all registered participants in early July, and will also be available at the Registration Desk.) ================================================================ About the Conference Site: Bloomington, a city of 60,000 people, is located in a region of state and national forests amid rolling hills and numerous lakes in south central Indiana. Within a twenty minute drive there are a number of state parks, several large lakes offering recreational activities such as sailing, boating, swimming, and waterskiing, and a winter ski resort. Brown County State Park attracts thousands of visitors year-round and is renowned for its spectacular fall colors; nearby is the picturesque artist community of Nashville, Indiana. The Conference will be held at the student union building of Indiana University. The building is the largest single-structure student union in the nation and offers many amenities and services, including automated teller machines, check-cashing services, post office, newsstand, photocopy shop, barber and hair styling shops, lost-and-found, hearing-aid compatible public phones and fax, several eateries, bowling alleys, bookstore, etc. Also located in the IMU is the University Computing Services, which can provide guest login, so that conference participants can access their home computers through the Internet. The IMU is fully wheelchair accessible. ================================================================ LODGING: ******** Forms for reserving rooms at any of these locations are included at the end of this note. On-Campus: ********** Indiana Memorial Union (IMU) Hotel: The conference will be held in the IMU, so these rooms offer maximal convenience. All rooms have air-conditioning, bathrooms, telephone, cable TV and services. Type of room Number of people* 1 2 3 4 Standard Double (1 double bed) $52.00 $60.00 xxx xxx Deluxe $57.00 $65.00 xxx xxx Standard Double (2 double beds) $60.00 $68.00 $76.00 $84.00 Deluxe $66.00 $74.00 $82.00 $90.00 King (1 king-sized bed) $58.00 $66.00 xxx xxx Residence Halls: Rooms have air conditioning and telephones. Bathrooms are shared. It is an easy half-mile walk to the Union, but shuttle vans will also be available. Single rooms only - $22.50* Off-Campus: *********** Rooms have also been reserved at two off-campus hotels. 
Both hotels offer amenities, and are approximately one mile from the IMU. Free shuttle service will be provided. Hampton Inn (2100 N. Walnut)* Single room..............................................$43.00 Double room..............................................$43.00 Econo Lodge (4501 E. 3rd Street)* Single room..............................................$38.00 Double room..............................................$45.00 King-sized room..........................................$45.00 *all rates subject to 10% hotel tax ==================================================================== FOOD: ***** Meals other than the special conference banquet and reception are available in many different venues. The IMU has several eating establishments: Cafeteria, Deli, and Tudor Room. Costs range from $3 for a sandwich and drink at the Deli to more than $20 for a full-service dinner in the Tudor Room. (The Tudor Room is not open for breakfast.) There are many restaurants within walking distance of the IMU. These include a Thai restaurant, Tibetan, Chinese, Ethiopian, Middle Eastern and others. A wide variety of vegetarian selections are also available on local menus. Conference participants who stay in the residence halls may purchase individual meal tickets. Meals are "all you can eat." The price for breakfast is $2.95, lunch $4.90 and dinner $7.45. =================================================================== TRAVEL: ******* USAir is the official airline for Cognitive Science 1992, and offers the following discounts on travel to the Conference. USAir Conference Rates: Within the USA: 40% off the full round trip day coach fare with no advance ticket purchase. 5% off published fares excluding first class, government contract fares, senior fares, system fares, and tour fares following all restrictions. >From Canada: 35% off the full round trip day coach fare with no advance ticket purchase. Reservations: To obtain this meeting discount, call USAir's Meeting and Convention Reservation Office at 1-800-334-8644, 8:00 AM - 9:00 PM, Eastern Standard Time. Refer to Gold File Number: 36570038. Participants will need to fly into Indianapolis Airport, which is a one hour drive (50 miles) from Bloomington. Several options are available for travel between Indianapolis and Bloomington: *Buses* will be chartered from Indiana University, for Wednesday arrivals and Sunday departures. They are scheduled to leave every hour, on the hour, from Indianapolis Airport on Wednesday starting at 12:00 noon through 7:00 p.m., with two additional buses leaving at 9:00 p.m. and 11:00 p.m. On Sunday, they will leave Bloomington on the hour from 6:00 a.m. through 12:00 noon, plus an additional bus leaving at 3:00 p.m. These charters will cost $20.00 per person, round trip. These buses are not equipped with wheelchair elevators, but wheelchairs can be stowed on board and disabled persons assisted into a regular seat. *Private limousine services* cost approximately $70 round trip. Participants choosing this option can make their own arrangements directly with the limousine companies. Two services available are Classic Touch Limousine Service, Inc. (812-339-7269) and Indy Connection Limousines, Inc. (1-800-888-4639). *Rental cars* are also available at Indianapolis Airport. Bloomington is easily reached from Indianapolis via a divided highway. 
Parking: For those participants who will drive to Bloomington, free parking is available at the IMU for guests of the Union; others may purchase a temporary decal for $3.00/day which allows access to campus parking garages. These decals will be available at the residence hall or at the registration desk. =================================================================== REGISTRATION: ************* Registration fees are outlined below. To register for the conference, please complete the enclosed form and return with appropriate payment to: Conference Registrar Conference #199-92 Indiana University Conference Bureau Indiana Memorial Union Rm. 677 Bloomington, IN 47405 On-site registration will be held from 2:00 p.m. - 7:00 p.m., Wednesday, July 29, in the East Lounge of the IMU. This will be staffed each day of the Conference from 8:00 a.m. - 5:00 p.m., providing information and assistance to participants and their guests. Fees: before Status June 27, 1992 Late Member $155.00 $200.00 Non-Member $180.00 $230.00 Student $80.00 $100.00 The registration fee may be paid by check or money order in US currency made payable to: Indiana University #199-92. Visa and MasterCard will also be accepted. Those paying by credit card may register by fax (812-855-8077) or by phone (812-855-9824 or 812-855-4661). Please refer to 199-92 as our Conference number. Cancellations: Cancellations received in writing prior to July 10, 1992, will be entitled to a full refund, less a $25 administrative fee. No refunds will be granted after that date. ==================================================================== THE FOURTEENTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY Indiana University, Bloomington July 29 -- August 1, 1992 REGISTRATION FORM please print Name: Address: City: State: Zip: Country: Daytime phone: ( ) E-Mail Address: PLEASE COMPLETE ONE REGISTRATION FORM PER PERSON ATTENDING. DUPLICATE IF NECESSARY. after Registration type Fees June 27, 1992 Total Paid Member $155.00 $200.00 __________ If you are not a member, but wish to apply, complete the application form and include a photocopy of it and of your membership fee check with this registration. Non-Member $180.00 $230.00 __________ Student $80.00 $100.00 __________ If you are registering as a student, please include a photocopy of a university form or letter from a faculty member indicating current enrollment. Bus trip on Wed. July 29, and Sun. Aug. 1: no. riding ____ @ $20.00=________ Gala Banquet on Thursday, July 30: no. attending ____ @ $25.00 = ________ Opera perf. of "Carousel" on Sat., Aug. 1: no. attending ____ @ $16.00=_____ Method of payment: ________ MC ________ Visa ________ Check Credit Card Number: _________________________ Exp. date: ________ Signature of Cardholder ________________________ Make checks payable to: Indiana University #199-92 Registration fee includes admission to all sessions and coffee breaks of the conference, one copy of the Conference Proceedings, and an information packet. ==================================================================== THE FOURTEENTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY Indiana University, Bloomington July 29 -- August 1, 1992 ON-CAMPUS HOUSING FORM Indiana Memorial Union *all rooms will be assigned on a first-come-first-served basis. 
Type of room Number of people 1 2 3 4 Standard Double (1 double bed) $52.00 $60.00 xxx xxx Deluxe $57.00 $65.00 xxx xxx Standard Double (2 double beds) $60.00 $68.00 $76.00 $84.00 Deluxe $66.00 $74.00 $82.00 $90.00 King (1 king-sized bed) $58.00 $66.00 xxx xxx *all rates subject to 10% hotel tax check-in date: check-out date: roommate _____ I prefer to have a roommate assigned ______ male ______ female ______ smoker ______ non-smoker *Confirmation will come directly from the IMU. Cancellation must be made by 6:00 p.m. on the day of arrival to avoid penalties. Provide credit card information below to hold room past 6:00 p.m. *********************** Halls of Residence Only single rooms are available - $22.50 per night, plus 10% tax check-in date: check-out date: _______ male _______ female Method of Payment ______ same as reg. fees ______ MC ______ Visa ______ Check Credit Card Number: __________________________ Exp. date: __________ Signature of Cardholder: ______________________________ *make checks payable to: Indiana University #199-92 Send REGISTRATION and ON-CAMPUS HOUSING forms to: Conference Registrar Indiana University Conference Bureau Indiana Memorial Union Rm. 677 Bloomington, IN 47405 ==================================================================== THE FOURTEENTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY Indiana University, Bloomington July 29 - August 1, 1992 OFF-CAMPUS HOUSING FORM *all rooms will be assigned on a first-come-first-served basis. Hampton Inn, 2100 N. Walnut, Bloomington, IN 47401, (812)334-2100 Single room..............................................$43.00/night Double room..............................................$43.00/night check-in date: __________ check-out date: __________ ************************ Econo Lodge, 4501 E. 3rd, Bloomington, IN 47403, (812)332-2141 Single room..............................................$38.00/night Double room..............................................$45.00/night King-sized room..........................................$45.00/night check-in date: __________ check-out date: __________ *all room rates are subject to 10% tax. earliest check-in time for both hotels is: 2:00 p.m. latest check-out time for both is: 12:00 noon Name: Address: City: State: Zip: Daytime phone ( ) Your reservations will be held until 6:00 p.m. unless accompanied by an accepted credit card number, expiration date, and signature. ________ Hold until 6 p.m. only ________ Hold until arrival (credit card information below) Method of Payment __________ MC __________ Visa Credit card number: Exp. date: Signature of Cardholder IMPORTANT: DO NOT mail this form with the registration, and/or campus housing form. Mail it directly to the appropriate hotel. ==================================================================== COGNITIVE SCIENCE SOCIETY 1992 Membership Application Includes a one year subscription to the journal Cognitive Science Name: (last/first/middle) Mailing address: FEES Member - $50.00 Student - $25.00 Foreign Postage (excluding Canada) - $14.00 Spouse of Member (no journal) - $25.00 Name of Spouse: TOTAL: $ ______________ METHOD OF PAYMENT [ ]Check in U.S. $ on U.S. bank [ ]VISA/MasterCard/Access (fill in below) Card Number: Exp. date: / Name on Card (please print): Signature of Card Holder: Telephone Number (including area codes): Electronic Mail: FAX Number: Full member applicants please provide either (a) the signatures of two current members of the Society who are sponsoring your application, (b) evidence of a Ph.D. 
degree or equivalent in a cognitive science related field, or (c) a curriculum vita indicating publications in a cognitive science related field. Sponsor 1 ___________________ Sponsor 2 ________________________ Students must provide a xerox of university form or letter from faculty member indicating current enrollment. IMPORTANT: DO NOT mail this form with any of the forms for registration. Mail this directly to: Alan Lesgold, Secretary/Treasurer Cognitive Science Society LRDC University of Pittsburgh Pittsburgh, PA 15260 USA ======================================================================= For more information or a hard copy of this brochure contact Candace Shertzer Cognitive Science Program Indiana University (812)855-4658 cshertze at silver.ucs.indiana.edu ======================================================================= The Fourteenth Annual Conference of the Cognitive Science Society Conference Chair John K. Kruschke Department of Psychology and Cognitive Science Program Indiana University, Bloomington, IN 47405 Steering Committee Indiana University, Cognitive Science Program David Chalmers, Center for Research on Concepts and Cognition J. Michael Dunn, Philosophy Michael Gasser, Computer Science & Linguistics Douglas Hofstadter, Center for Research on Concepts and Cognition David Leake, Computer Science David Pisoni, Psychology Robert Port, Computer Science & Linguistics Richard Shiffrin, Psychology Timothy van Gelder, Philosophy Local Arrangements: Candace Shertzer, Cognitive Science Program Officers of the Cognitive Science Society James L. McClelland, President 1988 - 1996 Geoffrey Hinton 1986 - 1992 David Rumelhart 1986 - 1993 Dedre Gentner 1987 - 1993 James Greeno 1987 - 1993 Walter Kintsch 1988 - 1994 Steve Kosslyn 1989 - 1995 George Lakoff 1989 - 1995 Philip Johnson-Laird 1990 - 1996 Wendy Lehnert 1990 - 1996 Janet Kolodner 1991 - 1997 Kurt VanLehn 1991 - 1997 Ex Officio Board Members Martin Ringle, Executive Editor, Cognitive Science 1986 - Alan Lesgold, Secretary/Treasurer (2nd Term) 1988 - 1994 ====================================================================  From braun%bmsr8.usc.edu at usc.edu Fri Apr 24 19:33:35 1992 From: braun%bmsr8.usc.edu at usc.edu (Stephanie Braun) Date: Fri, 24 Apr 92 16:33:35 PDT Subject: Short Course Announcement Message-ID: <9204242333.AA15713@bmsr8.usc.edu> The Biomedical Simulations Resource of the University of Southern California announces a two-day Short Course on COMPUTER SIMULATION IN NEUROBIOLOGY May 30-31, 1992 This course will illustrate the use of modeling and simulation in the exploration of research design and hypothesis testing in experimental neurobiology. Lectures, discussions, and computer laboratory sessions will focus on three exemplary case histories that are experimental bases of important current theoretical concepts in the neurobiology of movement and perception. Three short background papers will be distributed to participants in advance: (1) Bizzi, Mussa-Ivaldi & Giszter (1991) Computations underlying the execution of movement: a biological perspective. Science 253, 287-291. (2) Georgopoulos, Schwartz & Kettner (1986) Neuronal population coding of movement direction. Science 233, 1416-1419. (3) Hecht, Schlaer & Pirenne (1941) Energy at the threshold of vision. Science 93, 585-587. The course is intended primarily for college and university instructors, postdoctoral scholars and graduate students. 
Prior computer experience is not essential, but it will be assumed that participants have some understanding of basic issues in contemporary neurobiology. There will be no registration fee, but a nominal fee for course materials (discettes, notes, etc.) will be charged. Enrollment is limited; early registration is advised. Course Instructor: George P. Moore, PhD Biomedical Engineering, USC Associate Instructor: Reza Shadmehr, PhD Brain & Cognitive Sciences, MIT For further information: Call: (213)740-0342 FAX : (213)740-0343 E-mail: bmsr at bmsrs.usc.edu  From berg at cs.albany.edu Mon Apr 27 18:01:31 1992 From: berg at cs.albany.edu (George Berg) Date: Mon, 27 Apr 92 18:01:31 EDT Subject: Computational Biology Conference Message-ID: <9204272201.AA00710@odin.albany.edu> =============================================================================== * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * UPDATE: SECOND ALBANY CONFERENCE ON COMPUTATIONAL BIOLOGY "PATTERNS OF BIOLOGICAL ORGANIZATION" * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * GENERAL DESCRIPTION The Second Albany Conference on Computational Biology will be held October 8-11, 1992 in Rensselaerville near Albany, New York. The aim of this conference (like that of the 1990 Albany Conference) is to explore the computational tools and approaches being developed in diverse fields within biology, with emphasis this year on topics related to organization and self-assembly. The conference will be designed to provide an environment for a frank and informal exchange among scientists and mathematicians that is not normally possible within the constraints of topical, single-discipline meetings. The theme of the Conference, "Patterns of Biological Organization", will be developed in five sessions on topics ranging from the level of sequence to the level of embryo development. Leading specialists in the various disciplines are being invited, with the degree of involvement in novel computational approaches as one of the most important criteria for selection. We are seeking an interdisciplinary audience, mathematicians, and computer scientists as well as biologists. All participants will be invited to submit abstracts for posters, although submission is not mandatory. Also, if funding permits, we will sponsor "young investigator" travel awards as we did in 1990 for the first Albany Conference on Computational Biology. CONFERENCE FORMAT The conference will consist of three morning and two evening sessions over a period of three nights and days (Thursday afternoon through Sunday morning). Each session will be comprised of four 30-minute talks interspersed by question-and-answer periods of 15-20 minutes. Afternoons are free for discussion and workshops (some planned, others impromptu). Tentative workshop topics include visualization tools and structure data bases. In addition, a workshop is planned for Thursday afternoon that will introduce non-biologists to the main issues of macromolecular and cellular structure to be addressed at the meeting. The following is an outline of the conference sessions, including a partial listing of confirmed speakers: Keynote Address: Prof. Hermann Haken --------------- Institute for Theoretical Physics and Synergetics University of Stuttgart Session 1 Sequence analysis and secondary structure ---------------------------------------------------- Discussion leader: Charles Lawrence Wadsworth Center, and State Univ. 
of New York, Albany 518-473-3382 CEL at BIOMETRICS.PH.ALBANY.EDU Speakers: David L. Waltz Thinking Machines, Inc., and Brandeis University Waltham, MA Jean Michel Claverie National Center for Biotechnology Information, NIH, Bethesda, MD Michael Zucker Stanford University Stanford, CA Stephen Altschul National Center for Biotechnology Information, NIH, Bethesda, MD Session 2 Tertiary structure prediction ---------------------------------------- Discussion leader: George Berg State Univ. of New York, Albany 518-442-4267 BERG at CS.ALBANY.EDU Speakers: Rick Fine Biosym Technologies, Inc. San Diego, CA Stephen Bryant National Center for Biotechnology Information, NIH Bethesda, MD James Bowie University of California Los Angeles, CA Francois Michel Centre de Genetique Moleculaire, CNRS Gif-sur-Yvette, France Session 3 Macromolecular function ---------------------------------- Discussion leader: Jacquelyn Fetrow State Univ. of New York, Albany 518-442-4389 JACQUE at ISADORA.ALBANY.EDU Speakers: Judith Hempel Biosym Technologies, Inc. San Diego, CA Fred Cohen University of California San Francisco, CA Chris Lee Stanford University Stanford, CA Session 4 Recognition and assembly ----------------------------------- Discussion leader: Joachim Frank Wadsworth Center and State Univ. of New York, Albany 518-474-7002 JOACHIM at TETHYS.PH.ALBANY.EDU Speakers: David DeRosier Brandeis University Waltham, MA Phoebe Stewart University of Pennsylvania Philadelphia, PA John Sedat University of California San Francisco, CA Session 5 Development ---------------------- Discussion leader: John Reinitz Yale Univ., New Haven, CT 203-785-7049 REINITZ-JOHN at CS.YALE.EDU Speakers: Michael Levine University of California San Diego, CA John Reinitz Yale University New Haven, CT George Oster University of California Berkeley, CA Brian Goodwin Open University Milton Keynes, UK Questions about individual sessions may be sent to the respective Discussion Leaders (phone numbers and email addresses provided above). For general conference information, you may contact any of the discussion leaders or any other member of the Organizing Committee (chair: Carmen Mannella) or Program Committee (chair: Joachim Frank). Phone numbers and email addresses of the other members of these committees are listed below: Jeff Bell, Rennselaer Polytechnic Institute, Troy, NY 518-276-4075 BELL at VAX1.CHEM.RPI.EDU Stephen Bryant, National Center for Biotechnology Information, NIH, Bethesda, MD 301-496-2475 (ext. 65) BRYANT at NCBI.NLM.NIH.GOV Carmen Mannella, Wadsworth Center and State Univ. of New York, Albany 518-474-2462 CARMEN at TETHYS.PH.ALBANY.EDU Patrick Van Roey, Wadsworth Center, Albany, NY 518-473-1336 VANROEY at TETHYS.PH.ALBANY.EDU CONFERENCE SITE The conference, one of the Albany Conference series held annually since 1984, will take place at the Rensselaerville Conference Center, located 30 miles southwest of Albany, NY in the Helderberg Mountains. The Institute offers on-campus facilities including a large auditorium with all necessary audio-visual equipment, and smaller conference halls for informal workshops and poster sessions. The Weathervane Restaurant, located on-campus and formerly the carriage house of the Huyck estate, provides meals and refreshments, while overnight lodging is available in the modern and classic estate houses. Rooms are assigned in advance to registrants, and transportation to and from Rensselaerville is provided from the airport, as well as train and bus stations. 
The rural, secluded setting of the conference, the limited number of participants and the scheduling of sessions in the morning and the evening -- leaving the afternoons free -- are intended to facilitate informal discussions among conference participants. REGISTRATION INFORMATION CONFERENCE FEE: $475 includes registration, accommodations (double occupancy), meals and transportation between the conference center and Albany airport. A limited number of single occupancy accommodations are available for an extra $100. Payment of the full fee will be required by AUGUST 31, 1992. Please note that neither the Albany Conferences nor the Rensselaerville Conference Center accepts credit cards. APPLICATION DEADLINE: July 31, 1992. For further registration information and a copy of the application form for the 1992 Albany Conference on Computational Biology, please call the conference coordinator, Carole Keith, 518-442-4327, FAX 518-442-4767, Bitnet: CAROLE at ALBNYVM1, or write to The 1992 Albany Conference, P.O. Box 8836, Albany, NY 12208-0836. - - - - - - - - - - - - - - - - - - - - - - - - Individuals may also use the following "E-Mail application form" to register for this meeting: Name: Organization: Business Address: City: State: Zip: Business Phone: Fax: Because attendance is limited, please describe briefly your research interests or activities which explain your interest in participating in this conference. If you plan to submit a poster, please include its title and (if ready) a short abstract. (You will be asked to provide a one-page, camera-ready version of the poster abstract, using 1.5 inch borders, for the meeting workbook.) Send this E-mail application to CAROLE at ALBNYVM1 before the registration deadline (July 31). TRAVEL AWARDS Graduate students and postdocs who would like to be considered for a Young Investigators travel award should submit with their registration form a brief letter explaining his/her research interests. Graduate students should also include a letter of recommendation from a faculty advisor. Applications from members of groups that are underrepresented in this field (women and racial minorities) are encouraged. ===============================================================================  From rsun at orion.ssdc.honeywell.com Tue Apr 28 10:04:42 1992 From: rsun at orion.ssdc.honeywell.com (Ron Sun) Date: Tue, 28 Apr 92 09:04:42 CDT Subject: TR available Message-ID: <9204281404.AA17820@orion.ssdc.honeywell.com> TR available: Fuzzy Evidential Logic: A Model of Causality for Commonsense Reasoning Ron Sun Honeywell SSDC Minneapolis, MN 55418 This paper proposes a fuzzy evidential model for commonsense causal reasoning. After an analysis of the advantages and limitations of existing accounts of causality, a generalized rule-based model FEL ({\it Fuzzy Evidential Logic}) is proposed that takes into account the inexactness and the cumulative evidentiality of commonsense reasoning. It corresponds naturally to a neural (connectionist) network. Detailed analyses are performed regarding how the model handles commonsense causal reasoning. To appear in Proc. of 14th Cognitive Science Conference, 1992 ---------------------------------------------------------------- It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack) No hardcopy available.
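As a rough, generic illustration only (not the FEL formulation defined in the TR), the short Python sketch below shows the kind of weighted, cumulative combination of graded evidence that mapping a rule onto a simple connectionist unit involves. The rule, its weights, and the clipping of the result to [0, 1] are assumptions made up for the example.

def rule_activation(condition_confidences, weights):
    """Combine graded evidence from a rule's conditions into a conclusion confidence."""
    total = sum(c * w for c, w in zip(condition_confidences, weights))
    return max(0.0, min(1.0, total))   # clip the accumulated evidence to [0, 1]

# Two graded pieces of evidence, each with its own (made-up) rule weight:
print(rule_activation([0.9, 0.4], [0.6, 0.5]))   # prints 0.74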
FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sun.cogsci92.ps.Z ftp> quit unix> uncompress sun.cogsci92.ps.Z unix> lpr sun.cogsci92.ps (or however you print postscript) p.s. If you can't find the paper there, it might still be in Inbox (pub/neuroprose/Inbox). It is ftpable from there.  From gordon at prodigal.psych.rochester.edu Thu Apr 30 20:07:57 1992 From: gordon at prodigal.psych.rochester.edu (Dustin Gordon) Date: Thu, 30 Apr 92 20:07:57 EDT Subject: Postdoctoral Positions Message-ID: <9205010007.AA05044@prodigal.psych.rochester.edu> POSTDOCTORAL POSITIONS IN THE LANGUAGE SCIENCES AT ROCHESTER The Center for the Sciences of Language [CSL] at the University of Rochester has two NIH-funded postdoctoral trainee positions that can start anytime after July 1, 1992, and can run from one to two years. CSL is an interdisciplinary unit which connects programs in American Sign Language, Psycholinguistics, Linguistics, Natural language processing, Neuroscience, Philosophy, and Vision. Fellows will be expected to participate in a variety of exisiting research and teaching projects between these disciplines. Applicants should have a relevant background and an interest in interdisciplinary research training in the language sciences. We encourage applications from minorities and women. Applications should be sent to Tom Bever, CSL Director, Meliora Hall, University of Rochester, Rochester, NY, 14627; Bever at prodigal.psych.rochester.edu; 716-275-8724. Please include a vita, a statement of interests and the names and email addresses and/or phone numbers of three recommenders.  From rosanna at cns.edinburgh.ac.uk Wed Apr 29 10:17:41 1992 From: rosanna at cns.edinburgh.ac.uk (Rosanna Maccagnano) Date: Wed, 29 Apr 92 10:17:41 BST Subject: NETWORK Message-ID: <4169.9204290917@subnode.cns.ed.ac.uk> CONTENTS OF NETWORK - COMPUTATION IN NEURAL SYSTEMS Volume 3 Number 2 May 1992 LETTER TO THE EDITOR 101 A modified neuron model that scales and resolves network paralysis M SASEETHARAN & M P MOODY PAPERS 105 Modelling Hebbian cell assemblies comprised of cortical neurons A LANSNER & E FRANSEN 121 Effective neurons and attractor neural networks in cortical environment D J AMIT & M V TSODYKS 139 Associative memory in a network of ``spiking'' neurons W GERSTNER & J L VAN HEMMEN 165 Study of a learning algorithm for neural networks with discrete synaptic couplings C J PEREZ VICENTE, J CARRABINA & E VALDERRAMA 177 Information capacity in recurrent McCulloch-Pitts networks with sparsely coded memory states G PALM & F T SOMMER 187 Computing with a difference neuron D SELIGSON, M GRINIASTY, D HANSEL & N SHORESH 205 Self-organisation with partial data T SAMAD & S A HARP REVIEW ARTICLE 213 Could information theory provide an ecological theory of sensory processing? J J ATICK 253 ABSTRACTS SECTION NETWORK welcomes research Papers and Letters where the findings have demonstrable relevance across traditional disciplinary boundaries. Research Papers can be of any length, if that length can be justified by content. Rarely, however, is it expected that a length in excess of 10,000 words will be justified. 2,500 words is the expected limit for research Letters. Articles can be published from authors' TeX source codes. Macros can be supplied to produce papers in the form suitable for refereeing and for IOP house style. 
For more details contact the Editorial Services Manager at IOP Publishing, Techno House, Redcliffe Way, Bristol BS1 6NX, UK. Telephone: 0272 297481 Fax: 0272 294318 Telex: 449149 INSTP G Email Janet: IOPPL at UK.AC.RL.GB Subscription Information Frequency: quarterly Subscription rates: Institution 149.00 pounds (US$274.00) Individual (UK) 17.50 pounds (Overseas) 20.50 pounds (US$41.00) A microfiche edition is also available at 89.00 pounds (US$164.00)  From mdg at magi.ncsl.nist.gov Fri Apr 10 08:23:11 1992 From: mdg at magi.ncsl.nist.gov (Mike Garris x2928) Date: Fri, 10 Apr 92 08:23:11 EDT Subject: New NIST OCR Database Message-ID: <9204101223.AA14894@magi.ncsl.nist.gov> NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY Announces a New Database +-----------------------------+ | "NIST Special Database 3" | +-----------------------------+ Binary Images of Handwritten Segmented Characters (HWSC) The NIST database of handwritten segmented characters contains 313,389 isolated character images segmented from the 2,100 full-page images distributed with "NIST Special Database 1". The database includes the 2,100 pages of binary, black and white, images of hand-printed numerals and text. This significant new database contains 223,125 digits, 44,951 upper-case, and 45,313 lower-case character images. Each character image has been centered in a separate 128 by 128 pixel region and has been assigned a classification which has been manually corrected so that the error rate of the segmentation and assigned classification is less than 0.1%. The uncompressed database totals approximately 2.75 gigabytes of image data and includes image format documentation and example software. "NIST Special Database 3" has the following features: + 313,389 isolated character images including classifications + 223,125 digits, 44,951 upper-case, and 45,313 lower-case images + 2,100 full-page images + 12 pixel per millimeter resolution + image format documentation and example software Suitable for automated hand-print recognition research, the database can be used for: + algorithm development + system training and testing The database is a valuable tool for training recognition systems on a large statistical sample of hand-printed characters. The system requirements are a 5.25" CD-ROM drive with software to read ISO-9660 format. If you have any further technical questions please contact: Michael D. Garris mdg at magi.ncsl.nist.gov (301)975-2928 (new number!) If you wish to order the database, please contact: Standard Reference Data National Institute of Standards and Technology 221/A323 Gaithersburg, MD 20899 (301)975-2208 (301)926-0416 (FAX)  From dld at magi.ncsl.nist.gov Mon Apr 13 08:09:43 1992 From: dld at magi.ncsl.nist.gov (Darrin Dimmick X4147) Date: Mon, 13 Apr 92 08:09:43 EDT Subject: NIST SPECIAL DATABASE 2 Message-ID: <9204131209.AA21234@magi.ncsl.nist.gov> NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY Announces a New Database +-----------------------------+ | "NIST Special Database 2" | +-----------------------------+ Structured Forms Reference Set (SFRS) The NIST database of structured forms contains 5,590 full page images of simulated tax forms completed using machine print. THERE IS NO REAL TAX DATA IN THIS DATABASE. The structured forms used in this database are 12 different forms from the 1988, IRS 1040 Package X. These include Forms 1040, 2106, 2441, 4562, and 6251 together with Schedules A, B, C, D, E, F and SE. 
Eight of these forms contain two pages or form faces making a total of 20 form faces represented in the database. Each image is stored in bi-level black and white raster format. The images in this database appear to be real forms prepared by individuals but the images have been automatically derived and synthesized using a computer and contain no "real" tax data. The entry field values on the forms have been automatically generated by a computer in order to make the data available without the danger of distributing privileged tax information. In addition to the images the database includes 5,590 answer files, one for each image. Each answer file contains an ASCII representation of the data found in the entry fields on the corresponding image. Image format documentation and example software are also provided. The uncompressed database totals approximately 5.9 gigabytes of data. "NIST Special Database 2" has the following features: + 5,590 full-page images + 5,590 answer files + 12 pixel per millimeter resolution + image format documentation and example software Suitable for automated document processing system research and development, the database can be used for: + algorithm development + system training and testing The system requirements are a 5.25" CD-ROM drive with software to read ISO-9660 format. If you have any further technical questions please contact: Darrin L. Dimmick dld at magi.ncsl.nist.gov (301)975-4147 If you wish to order the database, please contact: Standard Reference Data National Institute of Standards and Technology 221/A323 Gaithersburg, MD 20899 (301)975-2208 (301)926-0416 (FAX)
From peter at ai.iit.nrc.ca Thu Apr 2 08:39:02 1992 From: peter at ai.iit.nrc.ca (Peter Turney) Date: Thu, 2 Apr 92 08:39:02 EST Subject: Why does the error rise in a SRN? Message-ID: <9204021339.AA10596@ai.iit.nrc.ca> > I have been working with the Simple Recurrent Network (Elman style) > and variants there of for some time. Something which seems to happen > with surprising frequency is that the error will decrease for a period > and then will start to increase again. > (1) Has anyone else noticed this? I have had the same experience. Here is an example: lrate lgrain mu momentum epoch tss ------------------------------------------------------------------ 0.05 pattern 0.5 0.1 26 40.8941 0.05 pattern 0.5 0.1 53 29.8656 0.05 pattern 0.5 0.1 86 26.2229 0.05 pattern 0.5 0.1 391 11.6567 0.05 pattern 0.5 0.1 458 12.1636 0.05 pattern 0.5 0.1 513 14.0021 The data consist of 16 separate sequences of 700 patterns per sequence. Thus one epoch consists of 11,200 patterns. > (2) Is it task dependent? My task looks quite different from yours. I am trying to train a SRN (Elman) to generate sensor readings for an accelerating jet engine. The sensors include thrust, exhaust gas temperature, shaft rpm, and six others. The sensor readings are the target outputs. The inputs are the ambient conditions (humidity, atmospheric pressure, outside temperature, ...) and the idle shaft rpm. All training data comes from a single jet engine under a variety of ambient conditions. The same throttle motion is used for each of the 16 sequences of patterns. > (3) Why does it happen? I don't know. I have tried lgrain (learning grain) = epoch, but then the net does not seem to converge at all -- tss (total sum squares) stays around 300. I have tried momentum = 0.9, but, again, tss seems to stay around 300. I suspect -- without any real justification -- that this phenomenon is related to catastrophic interference. I am in the process of applying Fahlman's Recurrent Cascade-Correlation algorithm to the same problem. I hope that RCC may work better than SRN, in this case.  From gary at cs.UCSD.EDU Thu Apr 2 19:08:53 1992 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Thu, 2 Apr 92 16:08:53 PST Subject: Why does the error rise in a SRN?
Message-ID: <9204030008.AA29877@odin.ucsd.edu> Well, Ray is right, but it is also true that Elman nets do an approximation to the true gradient. There are cases where it will actually point in the wrong direction for the pattern. g.  From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Thu Apr 2 23:16:30 1992 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Thu, 02 Apr 92 23:16:30 EST Subject: special issue on language learning Message-ID: <2715.702274590@DST.BOLTZ.CS.CMU.EDU> Ken Miller's special issue announcement reminded me that I should have announced my own special issue on this forum. Well, it's not too late. Following Ken's format: VOL. 7 NOS. 2 and 3 of MACHINE LEARNING (1991) This special double issue is also available as a book from Kluwer Academic Publishers. CONNECTIONIST APPROACHES TO LANGUAGE LEARNING Guest Editor: David S. Touretzky CONTENTS: Introduction ----- David S. Touretzky Learning Automata from Ordered Examples ----- Sara Porat and Jerome A. Feldman SLUG: A Connectionist Architecture for Inferring the Structure of Finite-State Environments ----- Michael C. Mozer and Jonathan Bachrach Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks ----- David S. Servan-Schreiber, Axel Cleeremans, and James L. McClelland Distributed Representations, Simple Recurrent Networks, and Grammatical Structure ----- Jeffrey L. Elman The Induction of Dynamical Recognizers ----- Jordan B. Pollack Copies can be ordered from: Outside North America: Kluwer Academic Publishers Kluwer Academic Publishers Order Department Order Department P.O. Box 358 P.O. Box 322 Accord Station 3300 AH Dordrecht Hingham, MA 02018-0358 The Netherlands tel. 617-871-6600 fax. 617-871-6528  From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Fri Apr 3 01:43:33 1992 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Fri, 03 Apr 92 01:43:33 EST Subject: Why does the error rise in a SRN? In-Reply-To: Your message of Thu, 02 Apr 92 09:49:46 +1000. <9204012349.AA03878@uqcspe.cs.uq.oz.au> Message-ID: I have been working with the Simple Recurrent Network (Elman style) and variants there of for some time. Something which seems to happen with surprising frequency is that the error will decrease for a period and then will start to increase again. As Ray Watrous suggests, your problem might be due to online updating with a fixed step size, but I have seen the same kind of problem with batch updating, in which none of the weights are updated until the error gradient dE/dw has been computed over the whole set of training sequences. And this was with Quickprop, which adjusts the step-size dynamically. In fact, I know of several independent attempts to apply Quickprop learning to Elman nets, most with very disappointing results. I think that you're running into an approximation that often causes trouble in Elman-style recurrent nets. In these nets, in addition to the usual inputs units, we have a set of "state" variables that hold the values of the hidden units from the previous time-step. These state variables are treated just like the inputs during network training. That is, we pretend that the state variables are independent of the weights being trained, and we compute dE(t)/dw for all the network's weights based on that assumption. However, the state variables are not really independent of the network's weights, since they are just the hidden-unit values from time t-1. 
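Concretely, one truncated update looks something like the following sketch (Python/NumPy, with made-up layer sizes and names; it illustrates the approximation being described, not any poster's actual code). The context vector is a frozen copy of the previous hidden activations, and the backward pass stops there:

# Minimal sketch (illustrative only): one truncated-gradient step in an
# Elman-style SRN.  The context units are a frozen copy of the previous
# hidden activations, so no error is propagated back through time.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 3, 2                          # illustrative sizes
W_ih = rng.normal(scale=0.1, size=(n_hid, n_in))      # input   -> hidden
W_ch = rng.normal(scale=0.1, size=(n_hid, n_hid))     # context -> hidden
W_ho = rng.normal(scale=0.1, size=(n_out, n_hid))     # hidden  -> output

def srn_step(x, context, target, lr=0.1):
    """One forward/backward pass; 'context' is treated as a constant input."""
    h = sigmoid(W_ih @ x + W_ch @ context)
    y = sigmoid(W_ho @ h)
    # deltas for sum-squared error and logistic units
    delta_o = (y - target) * y * (1 - y)
    delta_h = (W_ho.T @ delta_o) * h * (1 - h)
    # weight updates: nothing flows back into the previous time step,
    # i.e. dH(t-1)/dw is silently assumed to be zero
    W_ho[...] -= lr * np.outer(delta_o, h)
    W_ih[...] -= lr * np.outer(delta_h, x)
    W_ch[...] -= lr * np.outer(delta_h, context)
    return h                                          # becomes the next context

context = np.zeros(n_hid)
for x, target in [(rng.random(n_in), np.array([1.0, 0.0]))] * 5:
    context = srn_step(x, context, target)
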
The true value of dE(t)/dw will include terms involving dS(t)/dw for the various weights w that affect the state variables S. Or, if you prefer, they will include dH(t-1)/dw, for the hidden units H. These terms are dropped in the usual Elman or SRN formulation, but that can be dangerous, since they are not negligible in general. In fact, it is these terms that implement the "back propagation in time", which can alter the network's weights so that a state bit is set at one point and used many cycles later. So in an Elman net, even if you are using batch updating, you are not following the true error gradient dE/dw, but only a rough approximation to it. Often this will get you to the right place, or at least to a very interesting place, but it causes a lot of trouble for algorithms like Quickprop that try to follow the (alleged) gradient more aggressively. Even if you descend the pseudo-gradient slowly and carefully, you will often see that the true error begins to increase after a while. It would be possible, but very expensive, to add the missing terms into the Elman net. You end up with something that looks much like the Williams-Zipser RTRL model, which basically requires you to keep a matrix showing the derivative of every state value with respect to every weight. In a net that allows only self-recurrent connections, you only need to save one extra value for each input-side weight, so in these networks it is practical to keep the extra terms. Such models, including Mike Mozer's "Focused" recurrent nets and my own Recurrent Cascade-Correlation model, don't suffer from the approximation described above. -- Scott =========================================================================== Scott E. Fahlman School of Computer Science Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Internet: sef+ at cs.cmu.edu  From jain at rtc.atk.com Fri Apr 3 15:22:15 1992 From: jain at rtc.atk.com (jain@rtc.atk.com) Date: Fri, 3 Apr 1992 14:22:15 -0600 Subject: Thesis Available Message-ID: <92Apr3.142217cst.46132@nic.rtc.atk.com> My recently completed PhD thesis is now available as a technical report (number CMU-CS-91-208). To obtain a copy, please send email to "reports+ at cs.cmu.edu" or physical mail to: Technical Reports Request School of Computer Science Carnegie Mellon University 5000 Forbes Ave. Pittsburgh, PA 15213-3890 To defray printing and mailing costs, there will be a small fee. I apologize to those of you who had requested copies directly from me and have not received them. The number of requests was high enough that I ran out of time to process them (and copies to send!). REMEBER: DON'T REPLY TO THE WHOLE LIST! REPLY TO: reports+ at cs.cmu.edu TR: CMU-CS-91-208 TITLE: PARSEC: A Connectionist Learning Architecture for Parsing Spoken Language ABSTRACT: A great deal of research has been done developing parsers for natural language, but adequate solutions for some of the particular problems involved in spoken language are still in their infancy. Among the unsolved problems are: difficulty in constructing task-specific grammars, lack of tolerance to noisy input, and inability to effectively utilize complimentary non-symbolic information. This thesis describes PARSEC---a system for generating connectionist parsing networks from example parses. PARSEC networks exhibit three strengths: 1) They automatically learn to parse, and they generalize well compared to hand-coded grammars. 2) They tolerate several types of noise without any explicit noise-modeling. 
3) They can learn to use multi-modal input, e.g. a combination of intonation, syntax, and semantics. The PARSEC network architecture relies on a variation of supervised back-propagation learning. The architecture differs from some other connectionist approaches in that it is highly structured, both at the macroscopic level of modules, and at the microscopic level of connections. Structure is exploited to enhance system performance. Conference registration dialogs formed the primary development testbed for PARSEC. A separate simultaneous effort in speech recognition and translation for conference registration provided a useful data source for performance evaluations. Presented in this thesis are the PARSEC architecture, its training algorithms, and detailed performance analyses along several dimensions that concretely demonstrate PARSEC's advantages.  From gary at cs.UCSD.EDU Fri Apr 3 21:12:16 1992 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Fri, 3 Apr 92 18:12:16 PST Subject: Why does the error rise in a SRN? Message-ID: <9204040212.AB22283@odin.ucsd.edu> Yes, it seems that Elman nets can't learn in batch mode. I ported TLEARN to the INTEL hypercube using data parallelism, and it ran 90 times faster, and learned 90 times slower. I suspect that all that may be necessary is to go back one more step in time, but I haven't tried it. g.  From jose at tractatus.siemens.com Fri Apr 3 07:57:11 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Fri, 3 Apr 1992 07:57:11 -0500 (EST) Subject: Why does the error rise in a SRN? In-Reply-To: <9204030008.AA29877@odin.ucsd.edu> References: <9204030008.AA29877@odin.ucsd.edu> Message-ID: Well, Ray is right, but it is also true that Elman nets do an approximation to the true gradient. There are cases where it will actually point in the wrong direction for the pattern. g. Not to the complete gradient...right? Steve  From p-mehra at uiuc.edu Fri Apr 3 14:45:54 1992 From: p-mehra at uiuc.edu (Pankaj Mehra) Date: Fri, 03 Apr 92 13:45:54 CST Subject: Why does the error rise in a SRN? Message-ID: <9204031945.AA29224@rhea> In response to: Simon Dennis Something which seems to happen with surprising frequency is that the error will decrease for a period and then will start to increase again. Questions: (3) Why does it happen? --------- Ray Watrous An increase in error can occur with fixed step size algorithms ... a well-known property of such algorithms, but seems to be encountered in practice more frequently with recurrent networks. ... small changes in some regions of weight space can have large effects on the error because of the nonlinear feedback in the recurrent network. --------- Minh.Tue.Vo at cs.cmu.edu the effect somewhat by tweaking the learning rate and the momentum, but I couldn't eliminate it completely. TDNN doesn't seem to have that problem. --------- Pineda (1988) explains this sensitivity to learning rate/step-size very well. On pages 223 and 231 of that paper, he shows that "adiabatic" weight modification (= slowness of learning rate w.r.t. the fluctuations at the input) is important for learning to converge. TDNNs work because they do not exhibit the same kind of feedback dynamics as the recurrent networks of Jordan and Elman. Pineda, Fernando J., Dynamics and Architectures for Neural Computation, Jrnl. of Complexity, Vol. 4, pp. 216-245, 1988. 
-Pankaj  From peter at psy.ox.ac.uk Fri Apr 3 15:03:23 1992 From: peter at psy.ox.ac.uk (Peter Foldiak) Date: Fri, 3 Apr 92 15:03:23 BST Subject: Technical Report available Message-ID: <9204031403.AA00345@brain.cns.ox.ac.uk> The following report is now available: Models of sensory coding Peter Foldiak Cambridge University Engineering Department Tech. Report CUED/F-INFENG/TR 91 (technical report version of Ph.D. dissertation) For a copy, send physical mail address to: peter at psy.oxford.ac.uk or to Peter Foldiak MRC Research Centre in Brain and Behaviour, Dept. Experimental Psychol., University of Oxford, South Parks Road, Oxford OX1 3UD, U.K. (I may also have to ask you for a check to cover postage.) ------ Abstract 1 - An 'anti-Hebbian' synaptic modification rule is demonstrated to be able to adaptively form an uncorrelated representation of the correlated input signal. This mechanism can match the distribution of input patterns to the actual signalling space of the representation units, achieving information-theoretically optimal signal on noisy units. An uncorrelated, equal variance signal also makes fast, optimally efficient least-mean-square (LMS) error correcting learning possible. 2 - A combination of Hebbian and anti-Hebbian connections is demonstrated to implement a form of the statistical method of Principal Component Analysis, which reduces the dimensionality of a noisy Gaussian signal while maximising the information content of the representation, even when the units themselves are noisy. 3 - A similar arrangement of biologically more plausible, non- linear units is shown to be able to adaptively code inputs into a sparse representation, substantially reducing the higher- order statistical redundancy of the representation without considerable loss of information. Such a representation is advantageous if it is to be used in further associative learning stages. 4 - A Hebbian rule modified by a trace mechanism is studied, that allows processing units to respond in a way which is invariant with respect to commonly occurring transformations of the input signal. ----  From doya at crayfish.UCSD.EDU Fri Apr 3 23:43:12 1992 From: doya at crayfish.UCSD.EDU (Kenji Doya) Date: Fri, 3 Apr 92 20:43:12 PST Subject: Papers on recurrent nets in Neuroprose Message-ID: <9204040443.AA29317@crayfish.UCSD.EDU> The following papers have been placed in Neuroprose archive. The first paper deals with Simon's problem: Why the error increases in the gradient descent learning of recurrent neural networks? ------------------------------------------------------------------------- File name: doya.bifurcation.ps.Z Bifurcations in the Learning of Recurrent Neural Networks Kenji Doya Dept. of Biology, U. C. San Diego Unlike feed-forward networks, the output of a recurrent network can change drastically with an infinitesimal change in the network parameter when it passes through a bifurcation point. The possible hazards caused by the bifurcations of the network dynamics and the learning equations are investigated. The roles of teacher forcing, preprogramming of network structures, and the approximate learning algorithms are discussed. To appear in: Proceedings of 1992 IEEE International Symposium on Circuits and Systems, May 10-13, 1992, San Diego. ------------------------------------------------------------------------- File name: doya.synchronization.ps.Z Adaptive Synchronization of Neural and Physical Oscillators Kenji Doya Shuji Yoshizawa U. C. 
San Diego University of Tokyo Animal locomotion patterns are controlled by recurrent neural networks called central pattern generators (CPGs). Although a CPG can oscillate autonomously, its rhythm and phase must be well coordinated with the state of the physical system using sensory inputs. In this paper we propose a learning algorithm for synchronizing neural and physical oscillators with specific phase relationships. Sensory input connections are modified by the correlation between cellular activities and input signals. Simulations show that the learning rule can be used for setting sensory feedback connections to a CPG as well as coupling connections between CPGs. To appear in: J.E. Moody, S.J. Hanson, and R.P. Lippmann, (Eds.) Advances in Neural Information Processing Systems 4, San Mateo, CA: Morgan Kaufmann (1992). ------------------------------------------------------------------------- To retrieve the papers by anonymous ftp: unix> ftp archive.cis.ohio-state.edu # (128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get doya.bifurcation.ps.Z ftp> get doya.synchronization.ps.Z ftp> quit unix> uncompress doya.bifurcation.ps.Z unix> uncompress doya.synchronization.ps.Z unix> lpr doya.bifurcation.ps unix> lpr doya.synchronization.ps They are also available for anonymous ftp from "crayfish.ucsd.edu" (128.54.16.104), directory "pub/doya". ------------------------------------------------------------------------- Kenji Doya Department of Biology, University of California, San Diego La Jolla, CA 92093-0322, USA Phone: (619)534-3954/5548 Fax: (619)534-0301  From ajr at eng.cam.ac.uk Sun Apr 5 13:15:24 1992 From: ajr at eng.cam.ac.uk (Tony Robinson) Date: Sun, 5 Apr 92 13:15:24 BST Subject: Why does the error rise in a SRN? In-Reply-To: <1992Apr4.144502.28704@eng.cam.ac.uk> Message-ID: <2743.9204051215@dsl.eng.cam.ac.uk> In response to the lack of a true gradient signal in "simple recurrent" (Elman-style) back-propagation networks Scott Fahlman writes: >It would be possible, but very expensive, to add the missing terms into the >Elman net. You end up with something that looks much like the >Williams-Zipser RTRL model... It does not have to be significantly harder to compute a good approximation to the error signal than Elman's approximation to it. The method of expanding the network in time achieves this by changing the per-pattern update to a per-buffer update. The buffer length should be longer than the expected context effects, and shorter than the training set size if the advantages of frequent updating are to be maintained [in practice this is not a difficult constraint]. The method is: Replicate the network N times where N is the buffer length, and stitch it together where the activations are to be passed forward to make one large network Place N patterns at the N input positions, and do a forward pass. Place N targets at the N output positions and (using your favourite error measure) perform a standard backward pass through the large network. Add up all the partial gradients for every shared weight and use the result in your favourite hack of gradient descent. Of course there are some end effects with a finite length buffer, but these can be made small by making the buffer large enough, and placing the buffer boundaries at different positions in the training data on subsequent passes. However, adding in all those extra nasty non-linearities into the gradient signal gives a much harder training problem. 
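In outline, the per-buffer gradient computation is something like the following sketch (Python/NumPy, assuming a single logistic hidden layer and sum-squared error; the names and sizes are illustrative, not taken from any particular implementation):

# Rough sketch (illustrative only) of the buffer method: unroll the net over
# N time steps, run one forward pass, backpropagate through the whole buffer,
# and sum the partial gradients of every shared weight.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bptt_buffer_grads(W_ih, W_hh, W_ho, xs, targets):
    """xs, targets: lists of length N (the buffer).  Returns summed gradients."""
    N = len(xs)
    hs = [np.zeros(W_hh.shape[0])]                 # hs[0] is the initial state
    ys = []
    for x in xs:                                   # forward pass through the buffer
        hs.append(sigmoid(W_ih @ x + W_hh @ hs[-1]))
        ys.append(sigmoid(W_ho @ hs[-1]))
    gW_ih = np.zeros_like(W_ih)
    gW_hh = np.zeros_like(W_hh)
    gW_ho = np.zeros_like(W_ho)
    carry = np.zeros(W_hh.shape[0])                # no error arrives from beyond the buffer end
    for t in range(N - 1, -1, -1):                 # backward pass, right to left
        y, h, h_prev = ys[t], hs[t + 1], hs[t]
        delta_o = (y - targets[t]) * y * (1 - y)
        delta_h = (W_ho.T @ delta_o + carry) * h * (1 - h)
        gW_ho += np.outer(delta_o, h)
        gW_ih += np.outer(delta_h, xs[t])
        gW_hh += np.outer(delta_h, h_prev)
        carry = W_hh.T @ delta_h                   # pass error on to step t-1
    return gW_ih, gW_hh, gW_ho

The summed gradients are then applied once per buffer rather than once per pattern, using whatever descent rule is preferred; end effects at the buffer boundary are ignored in this sketch.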
I think that it is worth it in terms of the increase in computational power of the network. Tony [Robinson]  From watrous at cortex.siemens.com Mon Apr 6 15:42:45 1992 From: watrous at cortex.siemens.com (Ray Watrous) Date: Mon, 6 Apr 92 15:42:45 EDT Subject: Increase in Error in SRN Message-ID: <9204061942.AA02176@cortex.siemens.com.siemens.com> The following summary and clarification of the further comments on increased error for recurrent networks might be helpful: There are three possible sources of an increase in error: 1. The approximation introduced in online learning by considering only one example at a time. This method of gradient approximation clearly should be paired with an infinitesimal step size algorithm, since long range extrapolations (as in variable step size algorithms) would lead to too large a change in the model based on insufficient data; this would destabilize learning. 2. The approximation in the gradient computation for a recurrent network by truncating the backward recursion. Here, the computation of the full gradient by backpropagation-in-time is no more expensive that the truncated version; it requires only that the activation history for a token be recorded, and the gradient information accumulated in a right-to-left pass in the analogous way to the top-bottom pass in normal backprop. Thus, the approximation is unnecessary. A forward form of the complete gradient is more complex computationally; (see Barak Pearlmutter, Two New Learning Procedures for Recurrent Networks, Neural Network Review, v3, pp 99-101, 1990). 3. The use of a fixed step-size algorithm, which is known to be unstable. This is where a line-search, or golden section search, or other method can be used to control the descent of the error. So, roughly the situation is that fixed step-sized methods can be used with gradient approximation methods with the possibility that the error can increase. Variable step size methods can be used with complete gradient methods, in which case the error is guaranteed to be monotonically decreasing. Gary Kuhn has reported good results using a forward-in-time complete gradient algorithm based on estimates of the gradient over a balanced subset of the training data that increases during training (Some Variations on the Training of Recurrent Networks, with Norman Herzberg, in Neural Networks: Theory and Applications, Mammone and Zeeri, eds. Academic Press, 1991). Whether the network instantiates time-delay links is relevant only if it is restricted to a feedforward architecture; in that case, only considerations 1 and 3 apply. Recurrent time-delay models have been successfully trained using the complete gradient and a line-search (embedded in a quasi-Newton optimization method), with the result that there has been no increase in the objective function. (R. Watrous, Phoneme Recognition Using Connectionist Networks, J. Acoust. Soc. Am. 87(4) pp 1753-1772, 1990). Raymond Watrous Siemens Corporate Research 755 College Road East Princeton, NJ 08540 (609) 734-6596  From plaut+ at CMU.EDU Tue Apr 7 20:23:02 1992 From: plaut+ at CMU.EDU (David Plaut) Date: Tue, 07 Apr 92 20:23:02 -0400 Subject: TR available Message-ID: <28938.702692582@K.GP.CS.CMU.EDU> ******************* PLEASE DO NOT FORWARD TO OTHER BBOARDS ******************* Perseverative and Semantic Influences on Visual Object Naming Errors in Optic Aphasia: A Connectionist Account David C. 
Plaut Tim Shallice Department of Psychology Department of Psychology Carnegie Mellon University University College, London plaut+ at cmu.edu ucjtsts at ucl.ac.uk Technical Report PDP.CNS.92.1 A recurrent back-propagation network is trained to generate semantic representations of objects from high-level visual representations. In addition to the standard weights, the network has correlational weights useful for implementing short-term associative memory. Under damage, the network exhibits the complex semantic and perseverative effects of patients with a visual naming disorder known as ``optic aphasia,'' in which previously presented objects influence the response to the current object. Like optic aphasics, the network produces predominantly semantic rather than visual errors because, in contrast to reading, there is some structure in the mapping from visual to semantic representations for objects. This is the third TR in the "Parallel Distributed Processing and Cognitive Neuroscience" Technical Report series. To retrieve it via FTP (note that this is NOT the neuroprose archive): unix> ftp 128.2.248.152 # hydra.psy.cmu.edu Name: anonymous Password: ftp> cd pub/pdp.cns ftp> binary ftp> get pdp.cns.92.1.ps.Z ftp> quit unix> zcat pdp.cns.92.1.ps.Z | lpr The file ABSTRACTS in the same directory contains the titles and abstracts of all of the TRs in the series. For those who do not have FTP access, physical copies can be requested from Barbara Dorney . -Dave ------------------------------------------------------------------------------ David Plaut plaut+ at cmu.edu Department of Psychology 412/268-5145 Carnegie Mellon University Pittsburgh, PA 15213-3890  From TEPPER at CVAX.IPFW.INDIANA.EDU Tue Apr 7 21:25:58 1992 From: TEPPER at CVAX.IPFW.INDIANA.EDU (TEPPER@CVAX.IPFW.INDIANA.EDU) Date: Tue, 7 Apr 1992 21:25:58 -0400 (EDT) Subject: NEW Fifth NN & PDP Conference Program Message-ID: <920407212559.202007c8@CVAX.IPFW.INDIANA.EDU> Here is an updated version of the program for The Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University. Please notice Phil Best's lecture on Friday. It did not appear on the previous announcement. Fifth NN & PDP CONFERENCE PROGRAM - April 9, 10 and 11,1992 ----------------------------------------------------------- The Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University at Fort Wayne will be held April 9, 10, and 11, 1992. Conference registration is $20 (on site). Students and members or employees of supporting organizations attend free. Inquiries should be addressed to: US mail: ------- Pr. Samir Sayegh Physics Department Indiana University-Purdue University Fort Wayne, IN 46805-1499 email: sayegh at ipfwcvax.bitnet ----- FAX: (219)481-6880 --- Voice: (219) 481-6306 OR 481-6157 ----- All talks will be held in Kettler Hall, Room G46: Thursday, April 9, 6pm-9pm; Friday Morning & Afternoon (Tutorial Sessions), 8:30am-12pm & 1pm-4:30pm and Friday Evening 6pm-9pm; Saturday, 9am-12noon. Parking will be available near the Athletic Building or at any Blue A-B parking lots. Do not park in an Orange A lot or you may get a parking violation ticket. Special hotel rates (IPFW corporate rates) are available at Canterbury Green, which is a 5 minute drive from the campus. The number is (219) 485-9619. The Marriott Hotel also has corporate rates for IPFW and is about a 10 minute drive. Their number is (219) 484-0411. 
Another hotel with corporate rates for IPFW is Don Hall's Guesthouse (about 10 minutes away). Their number is (219) 489-2524. The following talks will be presented: Applications I - Thursday 6pm-7:30pm -------------------------------------- Nasser Ansari & Janusz A. Starzyk, Ohio University. DISTANCE FIELD APPROACH TO HANDWRITTEN CHARACTER RECOGNITION Thomas L. Hemminger & Yoh-Han Pao, Case Western Reserve University. A REAL- TIME NEURAL-NET COMPUTING APPROACH TO THE DETECTION AND CLASSIFICATION OF UNDERWATER ACOUSTIC TRANSIENTS Seibert L. Murphy & Samir I. Sayegh, Indiana-Purdue University. ANALYSIS OF THE CLASSIFICATION PERFORMANCE OF A BACK PROPAGATION NEURAL NETWORK DESIGNED FOR ACOUSTIC SCREENING S. Keyvan, L. C. Rabelo, & A. Malkani, Ohio University. NUCLEAR DIAGNOSTIC MONITORING SYSTEM USING ADAPTIVE RESONANCE THEORY J.L. Fleming & D.G. Hill, Armstrong Lab, Brooks AFB. STUDENT MODELING USING ARTIFICIAL NEURAL NETWORKS Biological and Cooperative Phenomena Optimization I - Thursday 7:50pm-9pm --------------------------------------------------------------------------- Ljubomir T. Citkusev & Ljubomir J., Buturovic, Boston University. NON- DERIVATIVE NETWORK FOR EARLY VISION Yalin Hu & Robert J. Jannarone, University of South Carolina. A NEUROCOMPUTING KERNEL ALGORITHM FOR REAL-TIME, CONTINUOUS COGNITIVE PROCESSING M.B. Khatri & P.G. Madhavan, Indiana-Purdue University, Indianapolis. ANN SIMULATION OF THE PLACE CELL PHENOMENON USING CUE SIZE RATIO Mark M. Millonas, University of Texas at Austin. CONNECTIONISM AND SWARM INTELLIGENCE --------------------------------------------------------------------------- --------------------------------------------------------------------------- Tutorials I - Friday 8:30am-11:45am ------------------------------------- Phil Best, Miami University. PROCESSING OF SPATIAL INFORMATION IN THE BRAIN" Bill Frederick, Indiana-Purdue University. INTRODUCTION TO FUZZY LOGIC Helmut Heller, University of Illinois. INTRODUCTION TO TRANSPUTER SYSTEMS Arun Jagota, SUNY-Buffalo. THE HOPFIELD NETWORK, ASSOCIATIVE MEMORIES, AND OPTIMIZATION Tutorials II - Friday 1:15pm-4:30pm ------------------------------------- Krzysztof J. Cios, University Of Toledo. SELF-GENERATING NEURAL NETWORK ALGORITHM : CID3 APPLICATION TO CARDIOLOGY Robert J. Jannarone, University of South Carolina. REAL-TIME NEUROCOMPUTING, AN INTRODUCTION Network Analysis I - Friday 6pm-7:30pm ---------------------------------------- M.R. Banan & K.D. Hjelmstad, University of Illinois at Urbana-Champaign. A SUPERVISED TRAINING ENVIRONMENT BASED ON LOCAL ADAPTATION, FUZZINESS, AND SIMULATION Pranab K. Das II, University of Texas at Austin. CHAOS IN A SYSTEM OF FEW NEURONS Arun Maskara & Andrew Noetzel, New Jersey Institute of Technology. FORCED LEARNING IN SIMPLE RECURRENT NEURAL NETWORKS Samir I. Sayegh, Indiana-Purdue University. SEQUENTIAL VS CUMULATIVE UPDATE: AN EXPANSION D.A. Brown, P.L.N. Murthy, & L. Berke, The College of Wooster. SELF- ADAPTATION IN BACKPROPAGATION NETWORKS THROUGH VARIABLE DECOMPOSITION AND OUTPUT SET DECOMPOSITION Applications II - Friday 7:50pm-9pm ------------------------------------- Susith Fernando & Karan Watson, Texas A & M University. ANNs TO INCORPORATE ENVIRONMENTAL FACTORS IN HI FAULTS DETECTION D.K. Singh, G.V. Kudav, & T.T. Maxwell, Youngstown State University. FUNCTIONAL MAPPING OF SURFACE PRESSURES ON 2-D AUTOMOTIVE SHAPES BY NEURAL NETWORKS K. Hooks, A. Malkani, & L. C. Rabelo, Ohio University. APPLICATION OF ARTIFICIAL NEURAL NETWORKS IN QUALITY CONTROL CHARTS B.E. 
Stephens & P.G. Madhavan, Purdue University at Indianapolis. SIMPLE NONLINEAR CURVE FITTING USING THE ARTIFICIAL NEURAL NETWOR ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- Network Analysis II - Saturday 9am-10:30am ------------------------------------------- Sandip Sen, University of Michigan. NOISE SENSITIVITY IN A SIMPLE CLASSIFIER SYSTEM Xin Wang, University of Southern California. DYNAMICS OF DISCRETE-TIME RECURRENT NEURAL NETWORKS: PATTERN FORMATION AND EVOLUTION Zhenni Wang and Christine Di Massimo, University of Newcastle. A PROCEDURE FOR DETERMINING THE CANONICAL STRUCTURE OF MULTILAYER NEURAL NETWORKS Srikanth Radhakrishnan, Tulane University. PATTERN CLASSIFICATION USING THE HYBRID COULOMB ENERGY NETWORK Biological and Cooperative Phenomena Optimization II - Saturday 10:50am-12noon ------------------------------------------------------------------------------- J. Wu, M. Penna, P.G. Madhavan, & L. Zheng, Purdue University at Indianapolis. COGNITIVE MAP BUILDING AND NAVIGATION C. Zhu, J. Wu, & Michael A. Penna, Purdue University at Indianapolis. USING THE NADEL TO SOLVE THE CORRESPONDENCE PROBLEM Arun Jagota, SUNY-Buffalo. COMPUTATIONAL COMPLEXITY OF ANALYZING A HOPFIELD-CLIQUE NETWORK Assaad Makki, & Pepe Siy, Wayne State University. OPTIMAL SOLUTIONS BY MODIFIED HOPFIELD NEURAL NETWORKS  From pollack at cis.ohio-state.edu Wed Apr 8 16:43:38 1992 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Wed, 8 Apr 92 16:43:38 -0400 Subject: Refs requested: Linear Threshold Decisionmaking Message-ID: <9204082043.AA01988@dendrite.cis.ohio-state.edu> Below is a one-page abstract describing results for when simple linear threshold algorithms (m-out-of-n functions) are effective for noisy inputs and any number of outputs. I have not been able to uncover similar results for these algorithms or for linear threshold algorithms in general. What I have found are psychological studies using linear regression to find optimized decision models: Dawes, R. M. and Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81(2):95-106. Elstein, A. S., Shulman, L. S., and Sprafka, S. A. (1978). Medical Problem Solving. Harvard University Press, Cambridge, Massachusetts. Meehl, P. E. (1954). Clinical versus Statistical Predictions: A Theoretical Analysis and Review of the Evidence. University of Minnesota Press, Minneapolis. and some explanations of why they work: Wainer, H. (1976). Estimating coefficients in linear models: It don't make no nevermind. Psychological Bulletin, 83(2):213-217. Of course, the statistical literature is filled with work on linear discriminant functions and linear regression. A superficial search did not yield any results in the same spirit as mine. I should also mention that I know of Nick Littlestone's algorithm for learning m-out-of-n functions when the sample is linearly separable. I don't know of any more recent results for noisy inputs. Littlestone, N. (1987). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning 2(4):285-318. If you know of related work, please send mail to byland at cis.ohio-state.edu. 
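For readers who have not met the term, the m-out-of-n rule analysed in the abstract below can be sketched in a few lines of Python (illustrative only, not the author's code):

# Minimal sketch (illustrative) of the simple linear threshold rule discussed
# here: count evidence for, subtract evidence against, compare to a threshold.
def slt_believe(inputs_for, inputs_against, threshold):
    """inputs_for / inputs_against: iterables of 0/1 votes relevant to one output."""
    score = sum(inputs_for) - sum(inputs_against)
    return score > threshold

# e.g. an m-out-of-n function: believe the output if at least m of the n
# favourable inputs are on and nothing votes against it.
votes = [1, 0, 1, 1, 0]                       # n = 5 noisy indicator inputs
print(slt_believe(votes, [], threshold=2))    # m = 3  ->  True
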
Thanks, Tom ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Effectiveness of Simple Linear Threshold Algorithms Tom Bylander Laboratory for Artificial Intelligence Research Department of Computer and Information Science The Ohio State University Columbus, Ohio, 43210 byland at cis.ohio-state.edu Consider the following simple linear threshold algorithm (SLT) (m-out-of-n function) for determining whether to believe a given "output" proposition (hereafter called an "output") from a given set of "input" propositions (hereafter called "inputs"): Collect all the inputs relevant to the output. Count the number of inputs in favor of the output. Subtract the number of inputs against the output. Believe the output if the result exceeds some threshold. SLT is an effective algorithm if two conditions are met. One, sufficient inputs are available. Two, the inputs are independent evidence for the output. For example, if x_1 and x_2 are two inputs and y is the output, then it should be the case that: P(x_1&x_{2}|y) = P(x_1|y)*P(x_2|y) I shall call this property "conditional independence." Consider the case where there are n outputs, each output is either 0 or 1, and each output is associated with a set of conditionally independent inputs, each 0 (unfavorable) or 1 (favorable). Let t be a number between 0 and .5 such that for any output y, the difference between the average value of its inputs when y=1 and the average when y=0 is greater than 2t. Based on inequalities from Hoeffding (1963), (ln n)/t^2 inputs per output are sufficient to ensure that the probability that SLT makes a wrong conclusion (i.e., that some assignment to some output is in error) is less than 1/n. This result does not depend on the prior probabilities of the outputs. How does this compare to optimal? Suppose that each input for an output corresponds to the output except for noise r. Based on the central limit theorem, r(1-r)(ln n)/t^2 inputs per output are insufficient to achieve 1/n error for the optimal algorithm, where t=.5-r. This result assumes that the prior probability of each assignment to the outputs is equally likely. I speculate that if H is the entropy of the outputs, then ln H should be substituted for ln n in the above bound. Thus, SLT is nearly optimal, asymptotically speaking, for noisy inputs and any number of outputs. Conditional independence is the most problematic assumption in the analysis. Perfect conditional independence is unlikely in reality; however, Hoeffding's inequalities for m-dependent random variables strongly suggest that SLT should work well as long as the same evidence is not counted too many times. Finding inputs that are sufficiently conditionally-independent is the central learning problem for SLT. Hoeffding, W. (1963). Probability inequalities for sums of bounded variables. J. American Statistical Association, 58(1):13-30.  From aisb93-prog at computer-science.birmingham.ac.uk Sat Apr 11 14:25:38 1992 From: aisb93-prog at computer-science.birmingham.ac.uk (Donald Peterson) Date: Sat, 11 Apr 92 14:25:38 BST Subject: Conference : AISB'93 Message-ID: <324.9204111325@christopher-robin.cs.bham.ac.uk> ================================================================ AISB'93 CONFERENCE : ANNOUNCEMENT AND CALL FOR PAPERS Theme: "Prospects for AI as the General Science of Intelligence" 29 March -- 2 April 1993 University of Birmingham ================================================================ 1. Introduction 2. Invited talks 3. Topic areas for submitted papers 4. 
Timetable for submitted papers 5. Paper lengths and submission details 6. Call for referees 7. Workshops and Tutorials 8. LAGB Conference 9. Email, paper mail, phone and fax. 1. INTRODUCTION The Society for the Study of Artificial Intelligence and the Simulation of Behaviour (one of the oldest AI societies) will hold its ninth bi-annual conference on the dates above at the University of Birmingham. The site is Manor House, a charming and convivial residential hall close to the University. Tutorials and Workshops are planned for Monday 29th March and the morning of Tuesday 30th March, and the main conference will start with lunch on Tuesday 30th March and end on Friday 2nd April. The Programme Chair is Aaron Sloman, and the Local Arrangements Organiser is Donald Peterson, both assisted by Petra Hickey. The conference will be "single track" as usual, with invited speakers and submitted papers, plus a "poster session" to allow larger numbers to report on their work, and the proceedings will be published. The conference will cover the usual topic areas for conferences on AI and Cognitive Science. However, with the turn of the century approaching, and with computer power no longer a major bottleneck in most AI research (apart from connectionism) it seemed appropriate to ask our invited speakers to look forwards rather than backwards, and so the theme of the conference will be "Prospects for AI as the general science of intelligence". Submitted papers exploring this are also welcome, in addition to the normal technical papers. 2. INVITED TALKS So far the following have agreed to give invited talks: Prof David Hogg (Leeds) "Prospects for computer vision" Prof Allan Ramsay (Dublin) "Prospects for natural language processing by machine" Prof Glyn Humphreys (Birmingham) "Prospects for connectionism - science and engineering". Prof Ian Sommerville (Lancaster) "Prospects for AI in systems design" Titles are provisional. 3. TOPIC AREAS for SUBMITTED PAPERS Papers are invited in any of the normal areas represented at AI and Cognitive Science conferences, including: AI in Design, AI in software engineering Teaching AI and Cognitive Science, Analogical and other forms of Reasoning Applications of AI, Automated discovery, Control of actions, Creativity, Distributed intelligence, Expert Systems, Intelligent interfaces Intelligent tutoring systems, Knowledge representation, Learning, Methodology, Modelling affective processes, Music, Natural language, Naive physics, Philosophical foundations, Planning, Problem Solving, Robotics, Tools for AI, Vision, Papers on neural nets or genetic algorithms are welcomed, but should be capable of being judged as contributing to one of the other topic areas. Papers may either be full papers or descriptions of work to be presented in a poster session. 4. TIMETABLE for SUBMITTED PAPERS Submission deadline: 1st September 1992 Date for notification of acceptances: mid October 1992 Date for submission of camera ready final copy: mid December 1992 The conference proceedings will be published. Long papers and invited papers will definitely be included. Selected poster summaries may be included if there is space. 5. PAPER LENGTH and SUBMISSION DETAILS Full papers: 10 pages maximum, A4 or 8.5"x11", no smaller than 12 point print size Times Roman or similar preferred, in letter quality print. Poster submissions 5 pages summary Excessively long papers will be rejected without being reviewed. All submissions should include 1. Full names and addresses of all authors 2. 
Electronic mail address if available 3. Topic area 4. Label: "Long paper" or "Poster summary" 5. Abstract no longer than 10 lines. 6. Statement certifying that the paper is not being submitted elsewhere for publication. 7. An undertaking that if the paper is accepted at least one of the authors will attend the conference. THREE copies are required. 6. CALL for REFEREES Anyone willing to act as a reviewer during September should write to the Programme Chair, with a summary CV or indication of status and experience, and preferred topic areas. 7. WORKSHOPS and TUTORIALS The first day and a half of the Conference are allocated to workshops and tutorials. These will be organised by Dr Hyacinth S. Nwana, and anyone interested in giving a workshop or tutorial should contact her at: Department of Computer Science, University of Keele, Staffs. ST5 5BG. U.K. phone: +44 782 583413, or +44 782 621111(x 3413) email JANET: nwanahs at uk.ac.keele.cs BITNET: nwanahs%cs.kl.ac.uk at ukacrl UUCP : ...!ukc!kl-cs!nwanahs other : nwanahs at cs.keele.ac.uk 8. LAGB CONFERENCE. Shortly before AISB'93, the Linguistics Association of Great Britain (LAGB) will hold its Spring Meeting at the University of Birmingham from 22-24th March, 1993. For more information, please contact Dr. William Edmondson: postal address as below; phone +44-(0)21-414-4763; email EDMONDSONWH at vax1.bham.ac.uk 9. EMAIL, PAPER MAIL, PHONE and FAX. Email: * aisb93-prog at cs.bham.ac.uk (for communications relating to submission of papers to the programme) * aisb93-delegates at cs.bham.ac.uk (for information on accommodation, meals, programme etc. as it becomes available --- enquirers will be placed on a mailing list) Address: AISB'93 (prog) or AISB'93 (delegates), School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, U.K. Phone: +44-(0)21-414-3711 Fax: +44-(0)21-414-4281 Donald Peterson and Aaron Sloman, April 1992. ----------------------------------------------------------------------------  From stolcke at ICSI.Berkeley.EDU Mon Apr 13 15:32:48 1992 From: stolcke at ICSI.Berkeley.EDU (Andreas Stolcke) Date: Mon, 13 Apr 92 13:32:48 MDT Subject: Paper available Message-ID: <9204132032.AA23691@icsib30.ICSI.Berkeley.EDU> The following short paper is available from the ICSI techreport archive. Instructions for retrieval can be found at the end of this message. Please ask me for hardcopies only if you don't have access to ftp. The paper will be presented at the Workshop on Integrating Neural and Symbolic Processes at AAAI-92 later this year. We are making it generally available mainly for the benefit of several people who had expressed interest in our results in the past. --Andreas ------------------------------------------------------------------------------ tr-92-025.ps.Z Tree Matching with Recursive Distributed Representations Andreas Stolcke and Dekai Wu TR-92-025 April 1992 We present an approach to the structure unification problem using distributed representations of hierarchical objects. Binary trees are encoded using the recursive auto-association method (RAAM), and a unification network is trained to perform the tree matching operation on the RAAM representations. It turns out that this restricted form of unification can be learned without hidden layers and producing good generalization if we allow the error signal from the unification task to modify both the unification network and the RAAM representations themselves. 
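For orientation, the encoder half of a RAAM can be sketched as follows (Python/NumPy with untrained weights and illustrative sizes; the paper itself trains this auto-associator jointly with the unification network, which is omitted here):

# Minimal sketch (illustrative) of RAAM-style tree encoding: a compressor maps
# the pair (left, right) to a fixed-width code; applying it bottom-up gives a
# distributed representation for a whole binary tree.
import numpy as np

DIM = 10                                              # width of every representation
rng = np.random.default_rng(1)
W_enc = rng.normal(scale=0.1, size=(DIM, 2 * DIM))    # untrained, for shape only
b_enc = np.zeros(DIM)

def compress(left, right):
    return np.tanh(W_enc @ np.concatenate([left, right]) + b_enc)

def encode_tree(tree, leaf_codes):
    """tree: nested 2-tuples of leaf symbols, e.g. (('a', 'b'), 'c')."""
    if isinstance(tree, str):
        return leaf_codes[tree]
    left, right = tree
    return compress(encode_tree(left, leaf_codes),
                    encode_tree(right, leaf_codes))

leaf_codes = {s: rng.normal(size=DIM) for s in "abc"}
code = encode_tree((("a", "b"), "c"), leaf_codes)     # one DIM-vector for the whole tree
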
------------------------------------------------------------------------------ Instructions for retrieving ICSI technical reports via ftp. Replace tr-XX-YYY with the appropriate TR number. unix% ftp ftp.icsi.berkeley.edu Connected to icsic.ICSI.Berkeley.EDU. 220 icsic FTP server (Version 6.16 Mon Jan 20 13:24:00 PST 1992) ready. Name (ftp.icsi.berkeley.edu:): anonymous 331 Guest login ok, send e-mail address as password. Password: your_name at your_machine 230 Guest login ok, access restrictions apply. ftp> cd /pub/techreports 250 CWD command successful. ftp> binary 200 Type set to I. ftp> get tr-XX-YYY.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for tr-XX-YYY.ps.Z (ZZZZ bytes). 226 Transfer complete. local: tr-XX-YYY.ps.Z remote: tr-XX-YYY.ps.Z 272251 bytes received in ZZ seconds (ZZZ Kbytes/s) ftp> quit 221 Goodbye. unix% uncompress tr-XX-YYY.ps.Z unix% lpr -Pyour_printer tr-XX-YYY.ps  From LINDSEY at FNAL.FNAL.GOV Wed Apr 15 21:35:36 1992 From: LINDSEY at FNAL.FNAL.GOV (LINDSEY@FNAL.FNAL.GOV) Date: Wed, 15 Apr 1992 20:35:36 -0500 (CDT) Subject: A VLSI Neural Net Application in High Energy Physics Message-ID: <920415203536.20e0365d@FNAL.FNAL.GOV> For those interested in hardware neural network applications, copies of the following paper are available via mail or fax. Send requests to Clark Lindsey at BITNET%"LINDSEY at FNAL". REAL TIME TRACK FINDING IN A DRIFT CHAMBER WITH A VLSI NEURAL NETWORK* Clark S. Lindsey (a), Bruce Denby (a), Herman Haggerty (a), and Ken Johns (b) (a) Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, Illinois 60510. (b) University of Arizona, Dept of Physics, Tucson, Arizona 85721. ABSTRACT In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. Accepted by Nuclear Instruments and Methods, Section A * FERMILAB-Pub-92/55  From dlovell at s1.elec.uq.oz.au Thu Apr 16 13:28:07 1992 From: dlovell at s1.elec.uq.oz.au (David Lovell) Date: Thu, 16 Apr 92 12:28:07 EST Subject: Neocognitron Performance paper in Neuroprose Message-ID: <9204160228.AA01588@c10.elec.uq.oz.au> **DO NOT FORWARD TO OTHER GROUPS** The following paper (10 pages in length) has been placed in the Neuroprose archive and submitted to Neural Networks. Any comments or questions (both of which are invited) should be addressed to the first author: dlovell at s1.elec.uq.oz.au Thanks must go to Jordan Pollack for maintaining this excellent service. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% THE PERFORMANCE OF THE NEOCOGNITRON WITH VARIOUS S-CELL AND C-CELL TRANSFER FUNCTIONS David Lovell & Ah Chung Tsoi Intelligent Machines Laboratory, Department of Electrical Engineering University of Queensland, Queensland 4072, Australia When a neural network solution to a problem (e.g. 
handwritten character recognition) is proposed, it is important to know if the structure of the network and the function of the component neurons are well suited to the task. Recent research has examined the {\em structure} of Fukushima's neocognitron and the effect that it has on the classification of distorted input patterns. We present results which assess the classification performance of the neocognitron when the {\em function} of the component neurons is altered. The tests we describe demonstrate that using S-cells with a sigmoidal transfer function and modified activation function significantly enhances the classification performance of the neocognitron. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% filename: lovell.neocog.ps.Z FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: anything ftp> cd pub/neuroprose ftp> binary ftp> get lovell.neocog.ps.Z ftp> bye unix% zcat lovell.neocog.ps.Z | lpr (or whatever *you* do to print a compressed PostScript file) ----------------------------------------------------------------------------- David Lovell - dlovell at s1.elec.uq.oz.au | | Dept. Electrical Engineering | "Oh bother! The pudding is ruined University of Queensland | completely now!" said Marjory, as BRISBANE 4072 | Henry the daschund leapt up and Australia | into the lemon surprise. | tel: (07) 365 3564 |  From mccauley at ecn.purdue.edu Mon Apr 20 13:27:57 1992 From: mccauley at ecn.purdue.edu (Darrell McCauley) Date: Mon, 20 Apr 92 12:27:57 -0500 Subject: paper on beef/ultrasound/adaptive logic networks Message-ID: <9204201727.AA14554@cocklebur.ecn.purdue.edu> The following paper (11 pages in length) has been placed in the Neuroprose archive. A shorter version was submitted to Transactions of the ASAE. Any comments or questions should be sent to mccauley at ecn.purdue.edu. This annoucement may be forwarded to other lists/newsgroups. Though I cannot mail hardcopies, I may be willing to e-mail compressed, uuencoded PostScript versions. Of course, thanks to Jordan Pollack for offering this service. I find it very valuable. ------------------------------------------------------------------------- FAT ESTIMATION IN BEEF ULTRASOUND IMAGES USING TEXTURE AND ADAPTIVE LOGIC NETWORKS James Darrell McCauley, USDA Fellow Brian R. Thane, Graduate Student Dept of Agricultural Engineering Dept of Agricultural Engineering Purdue University Texas A&M University (mccauley at ecn.purdue.edu) (thane at diamond.tamu.edu) A. Dale Whittaker, Assistant Professor Dept of Agricultural Engineering Texas A&M University (dale at diamond.tamu.edu) Overviews of Adaptive Logic Networks and co--occurrence image texture are presented, along with a brief synopsis of instrument grading of beef. These tools are used for both prediction and classification of intramuscular fat in beef from ultrasonic images of both live beef animals and slaughtered carcasses. Results showed that Adaptive Logic Networks perform better than any fat prediction method for beef ultrasound images to date and are a viable alternative to statistical techniques. 
\keywords{Meat, Grading, Automation, Ultrasound Images, Neural Networks.} ------------------------------------------------------------------------- filename: mccauley.beef.ps.Z FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get mccauley.beef.ps.Z ftp> quit unix% zcat mccauley.beef.ps.Z | lpr (or the equivalent) -- James Darrell McCauley Department of Ag Engr, Purdue Univ mccauley at ecn.purdue.edu West Lafayette, Indiana 47907-1146 ** "Do what is important first, then what is urgent." (unknown) **  From ahg at eng.cam.ac.uk Tue Apr 21 11:46:17 1992 From: ahg at eng.cam.ac.uk (ahg@eng.cam.ac.uk) Date: Tue, 21 Apr 92 11:46:17 BST Subject: Paper in neuroprose Message-ID: <28405.9204211046@tulip.eng.cam.ac.uk> ************** PLEASE DO NOT FORWARD TO OTHER NEWSGOUPS **************** The following technical report has been placed in the neuroprose archives at Ohio State University: ALTERNATIVE ENERGY FUNCTIONS FOR OPTIMIZING NEURAL NETWORKS Andrew Gee and Richard Prager Technical Report CUED/F-INFENG/TR 95 Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract When feedback neural networks are used to solve combinatorial optimization problems, their dynamics perform some sort of descent on a continuous energy function related to the objective of the discrete problem. For any particular discrete problem, there are generally a number of suitable continuous energy functions, and the performance of the network can be expected to depend heavily on the choice of such a function. In this paper, alternative energy functions are employed to modify the dynamics of the network in a predictable manner, and progress is made towards identifying which are well suited to the underlying discrete problems. This is based on a revealing study of a large database of solved problems, in which the optimal solutions are decomposed along the eigenvectors of the network's connection matrix. It is demonstrated that there is a strong correlation between the mean and variance of this decomposition and the ability of the network to find good solutions. A consequence of this is that there may be some problems which neural networks are not well adapted to solve, irrespective of the manner in which the problems are mapped onto the network for solution. ************************ How to obtain a copy ************************ a) Via FTP: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get gee.energy_fns.ps.Z ftp> quit unix> uncompress gee.energy_fns.ps.Z unix> lpr gee.energy_fns.ps (or however you print PostScript) Please note that a couple of the figures in the paper were produced on an Apple Mac, and the resulting PostScript is not quite standard. People using an Apple LaserWriter should have no problems though. b) Via postal mail: Request a hardcopy from Andrew Gee, Speech Laboratory, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, England. or email me: ahg at eng.cam.ac.uk  From dhw at santafe.edu Tue Apr 21 17:54:03 1992 From: dhw at santafe.edu (David Wolpert) Date: Tue, 21 Apr 92 15:54:03 MDT Subject: New paper Message-ID: <9204212154.AA08364@sfi.santafe.edu> ********* DO NOT FORWARD TO OTHER MAILING LISTS ****** The following paper has just been placed in neuroprose. 
It is a major revision of an earlier preprint of the same title, and appears in the current issue of Complex Systems. ON THE CONNECTION BETWEEN IN-SAMPLE TESTING AND GENERALIZATION ERROR. David H. Wolpert, The Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, NM, 87501. Abstract: This paper proves that it is impossible to justify a correlation between reproduction of a training set and generalization error off of the training set using only a priori reasoning. As a result, the use in the real world of any generalizer which fits a hypothesis function to a training set (e.g., the use of back-propagation) is implicitly predicated on an assumption about the physical universe. This paper shows how this assumption can be expressed in terms of a non-Euclidean inner product between two vectors, one representing the physical universe and one representing the generalizer. In deriving this result, a novel formalism for addressing machine learning is developed. This new formalism can be viewed as an extension of the conventional "Bayesian" formalism which (amongst other things) allows one to address the case where one's assumed "priors" are not exactly correct. The most important feature of this new formalism is that it uses an extremely low-level event space, consisting of triples of {target function, hypothesis function, training set}. Partly as a result of this feature, most other formalisms that have been constructed to address machine learning (e.g., PAC, the Bayesian formalism, the "statistical mechanics" formalism) are special cases of the formalism presented in this paper. Consequently such formalisms are capable of addressing only a subset of the issues addressed in this paper. In fact, the formalism of this paper can be used to address all generalization issues of which I am aware: over-training, the need to restrict the number of free parameters in the hypothesis function, the problems associated with a "non-representative" training set, whether and when cross-validation works, whether and when stacked generalization works, whether and when a particular regularizer will work, etc. A summary of some of the more important results of this paper concerning these and related topics can be found in the conclusion. ********************************************** To retrieve this paper, which comes in two parts, do the following: unix> ftp archive.cis.ohio-state.edu login> anonymous password> neuron ftp> binary ftp> cd pub/neuroprose ftp> get wolpert.reichenbach-1.ps.Z ftp> get wolpert.reichenbach-2.ps.Z ftp> quit unix> uncompress wolpert.reichenbach-1.ps.Z unix> uncompress wolpert.reichenbach-2.ps.Z unix> lpr wolpert.reichenbach-1.ps.Z # or however you print out postscript unix> lpr wolpert.reichenbach-2.ps.Z # or however you print out postscript  From radford at ai.toronto.edu Wed Apr 22 14:46:52 1992 From: radford at ai.toronto.edu (Radford Neal) Date: Wed, 22 Apr 1992 14:46:52 -0400 Subject: TR on Bayesian backprop by Hybrid Monte Carlo Message-ID: <92Apr22.144703edt.311@neuron.ai.toronto.edu> *** DO NOT FORWARD TO OTHER LISTS *** The following paper has been placed in the neuroprose archive: BAYESIAN TRAINING OF BACKPROPAGATION NETWORKS BY THE HYBRID MONTE CARLO METHOD Radford M. Neal Department of Computer Science University of Toronto radford at cs.toronto.edu It is shown that Bayesian training of backpropagation neural networks can feasibly be performed by the ``Hybrid Monte Carlo'' method. 
This approach allows the true predictive distribution for a test case given a set of training cases to be approximated arbitrarily closely, in contrast to previous approaches which approximate the posterior weight distribution by a Gaussian. In this work, the Hybrid Monte Carlo method is implemented in conjunction with simulated annealing, in order to speed relaxation to a good region of parameter space. The method has been applied to a test problem, demonstrating that it can produce good predictions, as well as an indication of the uncertainty of these predictions. Appropriate weight scaling factors are found automatically. By applying known techniques for calculation of ``free energy'' differences, it should also be possible to compare the merits of different network architectures. The work described here should also be applicable to a wide variety of statistical models other than neural networks. This paper may be retrieved and printed on a PostScript printer as follows: unix> ftp archive.cis.ohio-state.edu (log on as user 'anonymous') ftp> cd pub/neuroprose ftp> binary ftp> get neal.hmc.ps.Z ftp> quit unix> uncompress neal.hmc.ps.Z unix> lpr neal.hmc.ps For those unable to do this, hardcopies may be requested from: The CRG Technical Report Secretary Department of Computer Science University of Toronto 10 King's College Road Toronto M5S 1A4 CANADA INTERNET: maureen at cs.toronto.edu UUCP: uunet!utai!maureen BITNET: maureen at utorgpu  From rsun at orion.ssdc.honeywell.com Fri Apr 24 15:48:15 1992 From: rsun at orion.ssdc.honeywell.com (Ron Sun) Date: Fri, 24 Apr 92 14:48:15 CDT Subject: No subject Message-ID: <9204241948.AA12586@orion.ssdc.honeywell.com> TR availble: A Connectionist Model for Commonsense Reasoning Incorporating Rules and Similarities Ron Sun Honeywell SSDC 3660 Technology Dr. Minneapolis, MN 55413 rsun at orion.ssdc.honeywell.com For the purpose of modeling commonsense reasoning, we investigate connectionist models of rule-based reasoning, and show that while such models can usually carry out reasoning in exactly the same way as symbolic systems, they have more to offer in terms of commonsense reasoning. A connectionist architecture, {\sc CONSYDERR}, is proposed for capturing certain commonsense reasoning competence, which partially remedies the brittleness problem in traditional rule-based systems. The architecture employs a two-level, dual representational scheme, which utilizes both localist and distributed representations and explores the synergy resulting from the interaction between the two. {\sc CONSYDERR} is therefore capable of accounting for many difficult patterns in commonsense reasoning with this simple combination of the two levels. This work also shows that connectionist models of reasoning are not just ``implementations" of their symbolic counterparts, but better computational models of commonsense reasoning. It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack) No hardcopy available. FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sun.ka.ps.Z ftp> quit unix> uncompress sun.ka.ps.Z unix> lpr sun.ka.ps (or however you print postscript)  From efiesler at idiap.ch Thu Apr 23 11:46:14 1992 From: efiesler at idiap.ch (Emile Fiesler) Date: Thu, 23 Apr 92 17:46:14 +0200 Subject: Paper available entitled: "Neural Network Formalization". 
Message-ID: <9204231546.AA01268@idiap.ch> The following paper is available via ftp from the neuroprose archive (instructions for retrieval follow the abstract). Neural Network Formalization Emile Fiesler IDIAP Case postale 609, CH-1920 Martigny, Switzerland Electronic mail: EFiesler at IDIAP.CH and H. John Caulfield Alabama A&M University P.O.Box 1268, Normal, AL 35762, U.S.A. A short version of this paper has been accepted for publication in "Artificial Neural Networks II", (Editors I. Aleksander and J. Taylor, North-Holland/Elsevier Science Publishers, Amsterdam, 1992), under the title: "Layer Based Neural Network Formalization". ABSTRACT In order to assist the field of neural networks in its maturing, a formalization and a solid foundation are essential. Additionally, to permit the introduction of formal proofs, it is important to have an all-encompassing formal mathematical definition of a neural network. This publication offers a neural network formalization consisting of a topological taxonomy, a uniform nomenclature, and an accompanying consistent mnemonic notation. Supported by this formalization, both a flexible hierarchical and a universal mathematical definition are presented. ------------------------------ To obtain a copy of the paper, follow these FTP instructions: unix> ftp archive.cis.ohio-state.edu (or: ftp 128.146.8.52) login: anonymous password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get fiesler.formalization.ps.Z ftp> bye unix> zcat fiesler.formalization.ps.Z | lpr (or however you uncompress and print postscript) (Unfortunately, I will not be able to provide hard copies of the paper.)  From CSHERTZE at ucs.indiana.edu Fri Apr 24 16:49:39 1992 From: CSHERTZE at ucs.indiana.edu (CANDACE SHERTZER, 5-4658, PSYCHOLOGY 341) Date: Fri, 24 Apr 92 15:49:39 EST Subject: 1992 Cognitive Science Society Conference Info Message-ID: ================================================================ REGISTRATION INFORMATION AND PRELIMINARY PROGRAM for The Fourteenth Annual Conference of the Cognitive Science Society July 29 -- August 1, 1992 Cognitive Science Program Indiana University Bloomington, IN The Fourteenth Annual Conference of the Cognitive Science Society will be held July 29 - August 1, 1992, at Indiana University in Bloomington. The conference will feature seven invited plenary speakers, nine symposia, 110 submitted talks, and approximately 90 posters. There are also special evening events scheduled. ================================================================ PROGRAM (subject to change) Wednesday, July 29: 2:00 - 5:30 p.m. Registration 5:30 - 7:00 Reception 7:00 - 8:15 Plenary Talk Thursday, July 30: 9:00 - 10:15 a.m. Plenary Talk 10:15 - 10:50 Break 10:50 - 12:30 p.m. Talks and Symposia 12:30 - 2:00 Lunch 2:00 - 3:40 Talks and Symposia 3:40 - 4:15 Break 4:15 - 5:30 Plenary Talk 5:30 - 8:00 Banquet (optional) Friday, July 31: 9:00 - 5:30 Same as Thursday 5:30 - 8:00 Poster Session I Saturday, August 1 9:00 - 2:00 Same as Thursday and Friday 2:00 - 3:40 Poster Session II 3:40 - 4:15 Break 4:15 - 5:30 Plenary Talk 5:30 - 8:00 Concert (optional) Sunday Morning, August 2: Buses depart for the Indianapolis Airport ============================================================================= PLENARY SPEAKERS and talk titles: Elizabeth Bates Crosslinguistic studies of language breakdown in aphasia. Department of Cognitive Science, University of California, San Diego Daniel Dennett Problems with some models of consciousness.
Center for Cognitive Studies, Tufts University Martha Farah Neuropsychology. Department of Psychology, Carnegie-Mellon University Douglas Hofstadter The centrality of analogy-making in human cognition. Center for Research on Concepts and Cognition, Indiana University John Holland Must learning precede cognition? Department of Psychology, University of Michigan Richard Shiffrin Memory representation, storage, and retrieval. Department of Psychology, Indiana University Michael Turvey Ecological foundations of cognition. Department of Psychology, University of Connecticut ============================================================================== SYMPOSIA: Topics and organizers Representation: Who needs it? Timothy van Gelder, Indiana University Beth Preston, University of Georgia Computational models of evolution as tools for cognitive science Rik Belew, University of California, San Diego Dynamic processes in music cognition Caroline Palmer, The Ohio State University Allen Winold, Indiana University Dynamics in the control and coordination of action Geoffrey Bingham, Indiana University Bruce Kay, Brown University Goal-Driven Learning David Leake, Indiana University Ashwin Ram, Georgia Institute of Technology Similarity and representation in early cognitive development Mary Jo Rattermann, Hampshire College Reasoning and visual representations K. Jon Barwise, Indiana University Speech perception and spoken language processing David Pisoni, Indiana University Robert Peterson, Indiana University Analogy, high-level perception, and categorization Douglas Hofstadter, Indiana University Melanie Mitchell, University of Michigan ================================================================ SPECIAL EVENING EVENTS: * Welcoming Reception - Wednesday, July 29. * Gala Banquet - Thursday, July 30. * Indiana University Opera performance of "Carousel" - Saturday, August 1. (The full program will be sent to all registered participants in early July, and will also be available at the Registration Desk.) ================================================================ About the Conference Site: Bloomington, a city of 60,000 people, is located in a region of state and national forests amid rolling hills and numerous lakes in south central Indiana. Within a twenty minute drive there are a number of state parks, several large lakes offering recreational activities such as sailing, boating, swimming, and waterskiing, and a winter ski resort. Brown County State Park attracts thousands of visitors year-round and is renowned for its spectacular fall colors; nearby is the picturesque artist community of Nashville, Indiana. The Conference will be held at the student union building of Indiana University. The building is the largest single-structure student union in the nation and offers many amenities and services, including automated teller machines, check-cashing services, post office, newsstand, photocopy shop, barber and hair styling shops, lost-and-found, hearing-aid compatible public phones and fax, several eateries, bowling alleys, bookstore, etc. Also located in the IMU is the University Computing Services, which can provide guest login, so that conference participants can access their home computers through the Internet. The IMU is fully wheelchair accessible. ================================================================ LODGING: ******** Forms for reserving rooms at any of these locations are included at the end of this note. 
On-Campus: ********** Indiana Memorial Union (IMU) Hotel: The conference will be held in the IMU, so these rooms offer maximal convenience. All rooms have air-conditioning, bathrooms, telephone, cable TV and services.
Type of room                      Number of people*
                                    1        2        3        4
Standard Double (1 double bed)    $52.00   $60.00    xxx      xxx
  Deluxe                          $57.00   $65.00    xxx      xxx
Standard Double (2 double beds)   $60.00   $68.00   $76.00   $84.00
  Deluxe                          $66.00   $74.00   $82.00   $90.00
King (1 king-sized bed)           $58.00   $66.00    xxx      xxx
Residence Halls: Rooms have air conditioning and telephones. Bathrooms are shared. It is an easy half-mile walk to the Union, but shuttle vans will also be available. Single rooms only - $22.50* Off-Campus: *********** Rooms have also been reserved at two off-campus hotels. Both hotels offer amenities, and are approximately one mile from the IMU. Free shuttle service will be provided. Hampton Inn (2100 N. Walnut)* Single room..............................................$43.00 Double room..............................................$43.00 Econo Lodge (4501 E. 3rd Street)* Single room..............................................$38.00 Double room..............................................$45.00 King-sized room..........................................$45.00 *all rates subject to 10% hotel tax ==================================================================== FOOD: ***** Meals other than the special conference banquet and reception are available in many different venues. The IMU has several eating establishments: Cafeteria, Deli, and Tudor Room. Costs range from $3 for a sandwich and drink at the Deli to more than $20 for a full-service dinner in the Tudor Room. (The Tudor Room is not open for breakfast.) There are many restaurants within walking distance of the IMU. These include a Thai restaurant, Tibetan, Chinese, Ethiopian, Middle Eastern and others. A wide variety of vegetarian selections are also available on local menus. Conference participants who stay in the residence halls may purchase individual meal tickets. Meals are "all you can eat." The price for breakfast is $2.95, lunch $4.90 and dinner $7.45. =================================================================== TRAVEL: ******* USAir is the official airline for Cognitive Science 1992, and offers the following discounts on travel to the Conference. USAir Conference Rates: Within the USA: 40% off the full round trip day coach fare with no advance ticket purchase. 5% off published fares excluding first class, government contract fares, senior fares, system fares, and tour fares following all restrictions. From Canada: 35% off the full round trip day coach fare with no advance ticket purchase. Reservations: To obtain this meeting discount, call USAir's Meeting and Convention Reservation Office at 1-800-334-8644, 8:00 AM - 9:00 PM, Eastern Standard Time. Refer to Gold File Number: 36570038. Participants will need to fly into Indianapolis Airport, which is a one hour drive (50 miles) from Bloomington. Several options are available for travel between Indianapolis and Bloomington: *Buses* will be chartered from Indiana University, for Wednesday arrivals and Sunday departures. They are scheduled to leave every hour, on the hour, from Indianapolis Airport on Wednesday starting at 12:00 noon through 7:00 p.m., with two additional buses leaving at 9:00 p.m. and 11:00 p.m. On Sunday, they will leave Bloomington on the hour from 6:00 a.m. through 12:00 noon, plus an additional bus leaving at 3:00 p.m. These charters will cost $20.00 per person, round trip.
These buses are not equipped with wheelchair elevators, but wheelchairs can be stowed on board and disabled persons assisted into a regular seat. *Private limousine services* cost approximately $70 round trip. Participants choosing this option can make their own arrangements directly with the limousine companies. Two services available are Classic Touch Limousine Service, Inc. (812-339-7269) and Indy Connection Limousines, Inc. (1-800-888-4639). *Rental cars* are also available at Indianapolis Airport. Bloomington is easily reached from Indianapolis via a divided highway. Parking: For those participants who will drive to Bloomington, free parking is available at the IMU for guests of the Union; others may purchase a temporary decal for $3.00/day which allows access to campus parking garages. These decals will be available at the residence hall or at the registration desk. =================================================================== REGISTRATION: ************* Registration fees are outlined below. To register for the conference, please complete the enclosed form and return with appropriate payment to: Conference Registrar Conference #199-92 Indiana University Conference Bureau Indiana Memorial Union Rm. 677 Bloomington, IN 47405 On-site registration will be held from 2:00 p.m. - 7:00 p.m., Wednesday, July 29, in the East Lounge of the IMU. This will be staffed each day of the Conference from 8:00 a.m. - 5:00 p.m., providing information and assistance to participants and their guests. Fees: before Status June 27, 1992 Late Member $155.00 $200.00 Non-Member $180.00 $230.00 Student $80.00 $100.00 The registration fee may be paid by check or money order in US currency made payable to: Indiana University #199-92. Visa and MasterCard will also be accepted. Those paying by credit card may register by fax (812-855-8077) or by phone (812-855-9824 or 812-855-4661). Please refer to 199-92 as our Conference number. Cancellations: Cancellations received in writing prior to July 10, 1992, will be entitled to a full refund, less a $25 administrative fee. No refunds will be granted after that date. ==================================================================== THE FOURTEENTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY Indiana University, Bloomington July 29 -- August 1, 1992 REGISTRATION FORM please print Name: Address: City: State: Zip: Country: Daytime phone: ( ) E-Mail Address: PLEASE COMPLETE ONE REGISTRATION FORM PER PERSON ATTENDING. DUPLICATE IF NECESSARY. after Registration type Fees June 27, 1992 Total Paid Member $155.00 $200.00 __________ If you are not a member, but wish to apply, complete the application form and include a photocopy of it and of your membership fee check with this registration. Non-Member $180.00 $230.00 __________ Student $80.00 $100.00 __________ If you are registering as a student, please include a photocopy of a university form or letter from a faculty member indicating current enrollment. Bus trip on Wed. July 29, and Sun. Aug. 1: no. riding ____ @ $20.00=________ Gala Banquet on Thursday, July 30: no. attending ____ @ $25.00 = ________ Opera perf. of "Carousel" on Sat., Aug. 1: no. attending ____ @ $16.00=_____ Method of payment: ________ MC ________ Visa ________ Check Credit Card Number: _________________________ Exp. 
date: ________ Signature of Cardholder ________________________ Make checks payable to: Indiana University #199-92 Registration fee includes admission to all sessions and coffee breaks of the conference, one copy of the Conference Proceedings, and an information packet. ==================================================================== THE FOURTEENTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY Indiana University, Bloomington July 29 -- August 1, 1992 ON-CAMPUS HOUSING FORM Indiana Memorial Union *all rooms will be assigned on a first-come-first-served basis. Type of room Number of people 1 2 3 4 Standard Double (1 double bed) $52.00 $60.00 xxx xxx Deluxe $57.00 $65.00 xxx xxx Standard Double (2 double beds) $60.00 $68.00 $76.00 $84.00 Deluxe $66.00 $74.00 $82.00 $90.00 King (1 king-sized bed) $58.00 $66.00 xxx xxx *all rates subject to 10% hotel tax check-in date: check-out date: roommate _____ I prefer to have a roommate assigned ______ male ______ female ______ smoker ______ non-smoker *Confirmation will come directly from the IMU. Cancellation must be made by 6:00 p.m. on the day of arrival to avoid penalties. Provide credit card information below to hold room past 6:00 p.m. *********************** Halls of Residence Only single rooms are available - $22.50 per night, plus 10% tax check-in date: check-out date: _______ male _______ female Method of Payment ______ same as reg. fees ______ MC ______ Visa ______ Check Credit Card Number: __________________________ Exp. date: __________ Signature of Cardholder: ______________________________ *make checks payable to: Indiana University #199-92 Send REGISTRATION and ON-CAMPUS HOUSING forms to: Conference Registrar Indiana University Conference Bureau Indiana Memorial Union Rm. 677 Bloomington, IN 47405 ==================================================================== THE FOURTEENTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY Indiana University, Bloomington July 29 - August 1, 1992 OFF-CAMPUS HOUSING FORM *all rooms will be assigned on a first-come-first-served basis. Hampton Inn, 2100 N. Walnut, Bloomington, IN 47401, (812)334-2100 Single room..............................................$43.00/night Double room..............................................$43.00/night check-in date: __________ check-out date: __________ ************************ Econo Lodge, 4501 E. 3rd, Bloomington, IN 47403, (812)332-2141 Single room..............................................$38.00/night Double room..............................................$45.00/night King-sized room..........................................$45.00/night check-in date: __________ check-out date: __________ *all room rates are subject to 10% tax. earliest check-in time for both hotels is: 2:00 p.m. latest check-out time for both is: 12:00 noon Name: Address: City: State: Zip: Daytime phone ( ) Your reservations will be held until 6:00 p.m. unless accompanied by an accepted credit card number, expiration date, and signature. ________ Hold until 6 p.m. only ________ Hold until arrival (credit card information below) Method of Payment __________ MC __________ Visa Credit card number: Exp. date: Signature of Cardholder IMPORTANT: DO NOT mail this form with the registration, and/or campus housing form. Mail it directly to the appropriate hotel. 
==================================================================== COGNITIVE SCIENCE SOCIETY 1992 Membership Application Includes a one year subscription to the journal Cognitive Science Name: (last/first/middle) Mailing address: FEES Member - $50.00 Student - $25.00 Foreign Postage (excluding Canada) - $14.00 Spouse of Member (no journal) - $25.00 Name of Spouse: TOTAL: $ ______________ METHOD OF PAYMENT [ ]Check in U.S. $ on U.S. bank [ ]VISA/MasterCard/Access (fill in below) Card Number: Exp. date: / Name on Card (please print): Signature of Card Holder: Telephone Number (including area codes): Electronic Mail: FAX Number: Full member applicants please provide either (a) the signatures of two current members of the Society who are sponsoring your application, (b) evidence of a Ph.D. degree or equivalent in a cognitive science related field, or (c) a curriculum vita indicating publications in a cognitive science related field. Sponsor 1 ___________________ Sponsor 2 ________________________ Students must provide a xerox of university form or letter from faculty member indicating current enrollment. IMPORTANT: DO NOT mail this form with any of the forms for registration. Mail this directly to: Alan Lesgold, Secretary/Treasurer Cognitive Science Society LRDC University of Pittsburgh Pittsburgh, PA 15260 USA ======================================================================= For more information or a hard copy of this brochure contact Candace Shertzer Cognitive Science Program Indiana University (812)855-4658 cshertze at silver.ucs.indiana.edu ======================================================================= The Fourteenth Annual Conference of the Cognitive Science Society Conference Chair John K. Kruschke Department of Psychology and Cognitive Science Program Indiana University, Bloomington, IN 47405 Steering Committee Indiana University, Cognitive Science Program David Chalmers, Center for Research on Concepts and Cognition J. Michael Dunn, Philosophy Michael Gasser, Computer Science & Linguistics Douglas Hofstadter, Center for Research on Concepts and Cognition David Leake, Computer Science David Pisoni, Psychology Robert Port, Computer Science & Linguistics Richard Shiffrin, Psychology Timothy van Gelder, Philosophy Local Arrangements: Candace Shertzer, Cognitive Science Program Officers of the Cognitive Science Society James L. McClelland, President 1988 - 1996 Geoffrey Hinton 1986 - 1992 David Rumelhart 1986 - 1993 Dedre Gentner 1987 - 1993 James Greeno 1987 - 1993 Walter Kintsch 1988 - 1994 Steve Kosslyn 1989 - 1995 George Lakoff 1989 - 1995 Philip Johnson-Laird 1990 - 1996 Wendy Lehnert 1990 - 1996 Janet Kolodner 1991 - 1997 Kurt VanLehn 1991 - 1997 Ex Officio Board Members Martin Ringle, Executive Editor, Cognitive Science 1986 - Alan Lesgold, Secretary/Treasurer (2nd Term) 1988 - 1994 ====================================================================  From braun%bmsr8.usc.edu at usc.edu Fri Apr 24 19:33:35 1992 From: braun%bmsr8.usc.edu at usc.edu (Stephanie Braun) Date: Fri, 24 Apr 92 16:33:35 PDT Subject: Short Course Announcement Message-ID: <9204242333.AA15713@bmsr8.usc.edu> The Biomedical Simulations Resource of the University of Southern California announces a two-day Short Course on COMPUTER SIMULATION IN NEUROBIOLOGY May 30-31, 1992 This course will illustrate the use of modeling and simulation in the exploration of research design and hypothesis testing in experimental neurobiology. 
Lectures, discussions, and computer laboratory sessions will focus on three exemplary case histories that are experimental bases of important current theoretical concepts in the neurobiology of movement and perception. Three short background papers will be distributed to participants in advance: (1) Bizzi, Mussa-Ivaldi & Giszter (1991) Computations underlying the execution of movement: a biological perspective. Science 253, 287-291. (2) Georgopoulos, Schwartz & Kettner (1986) Neuronal population coding of movement direction. Science 233, 1416-1419. (3) Hecht, Shlaer & Pirenne (1941) Energy at the threshold of vision. Science 93, 585-587. The course is intended primarily for college and university instructors, postdoctoral scholars and graduate students. Prior computer experience is not essential, but it will be assumed that participants have some understanding of basic issues in contemporary neurobiology. There will be no registration fee, but a nominal fee for course materials (diskettes, notes, etc.) will be charged. Enrollment is limited; early registration is advised. Course Instructor: George P. Moore, PhD Biomedical Engineering, USC Associate Instructor: Reza Shadmehr, PhD Brain & Cognitive Sciences, MIT For further information: Call: (213)740-0342 FAX: (213)740-0343 E-mail: bmsr at bmsrs.usc.edu  From berg at cs.albany.edu Mon Apr 27 18:01:31 1992 From: berg at cs.albany.edu (George Berg) Date: Mon, 27 Apr 92 18:01:31 EDT Subject: Computational Biology Conference Message-ID: <9204272201.AA00710@odin.albany.edu> =============================================================================== * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * UPDATE: SECOND ALBANY CONFERENCE ON COMPUTATIONAL BIOLOGY "PATTERNS OF BIOLOGICAL ORGANIZATION" * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * GENERAL DESCRIPTION The Second Albany Conference on Computational Biology will be held October 8-11, 1992 in Rensselaerville near Albany, New York. The aim of this conference (like that of the 1990 Albany Conference) is to explore the computational tools and approaches being developed in diverse fields within biology, with emphasis this year on topics related to organization and self-assembly. The conference will be designed to provide an environment for a frank and informal exchange among scientists and mathematicians that is not normally possible within the constraints of topical, single-discipline meetings. The theme of the Conference, "Patterns of Biological Organization", will be developed in five sessions on topics ranging from the level of sequence to the level of embryo development. Leading specialists in the various disciplines are being invited, with the degree of involvement in novel computational approaches as one of the most important criteria for selection. We are seeking an interdisciplinary audience: mathematicians and computer scientists as well as biologists. All participants will be invited to submit abstracts for posters, although submission is not mandatory. Also, if funding permits, we will sponsor "young investigator" travel awards as we did in 1990 for the first Albany Conference on Computational Biology. CONFERENCE FORMAT The conference will consist of three morning and two evening sessions over a period of three nights and days (Thursday afternoon through Sunday morning). Each session will consist of four 30-minute talks interspersed with question-and-answer periods of 15-20 minutes.
Afternoons are free for discussion and workshops (some planned, others impromptu). Tentative workshop topics include visualization tools and structure data bases. In addition, a workshop is planned for Thursday afternoon that will introduce non-biologists to the main issues of macromolecular and cellular structure to be addressed at the meeting. The following is an outline of the conference sessions, including a partial listing of confirmed speakers: Keynote Address: Prof. Hermann Haken --------------- Institute for Theoretical Physics and Synergetics University of Stuttgart Session 1 Sequence analysis and secondary structure ---------------------------------------------------- Discussion leader: Charles Lawrence Wadsworth Center, and State Univ. of New York, Albany 518-473-3382 CEL at BIOMETRICS.PH.ALBANY.EDU Speakers: David L. Waltz Thinking Machines, Inc., and Brandeis University Waltham, MA Jean Michel Claverie National Center for Biotechnology Information, NIH, Bethesda, MD Michael Zucker Stanford University Stanford, CA Stephen Altschul National Center for Biotechnology Information, NIH, Bethesda, MD Session 2 Tertiary structure prediction ---------------------------------------- Discussion leader: George Berg State Univ. of New York, Albany 518-442-4267 BERG at CS.ALBANY.EDU Speakers: Rick Fine Biosym Technologies, Inc. San Diego, CA Stephen Bryant National Center for Biotechnology Information, NIH Bethesda, MD James Bowie University of California Los Angeles, CA Francois Michel Centre de Genetique Moleculaire, CNRS Gif-sur-Yvette, France Session 3 Macromolecular function ---------------------------------- Discussion leader: Jacquelyn Fetrow State Univ. of New York, Albany 518-442-4389 JACQUE at ISADORA.ALBANY.EDU Speakers: Judith Hempel Biosym Technologies, Inc. San Diego, CA Fred Cohen University of California San Francisco, CA Chris Lee Stanford University Stanford, CA Session 4 Recognition and assembly ----------------------------------- Discussion leader: Joachim Frank Wadsworth Center and State Univ. of New York, Albany 518-474-7002 JOACHIM at TETHYS.PH.ALBANY.EDU Speakers: David DeRosier Brandeis University Waltham, MA Phoebe Stewart University of Pennsylvania Philadelphia, PA John Sedat University of California San Francisco, CA Session 5 Development ---------------------- Discussion leader: John Reinitz Yale Univ., New Haven, CT 203-785-7049 REINITZ-JOHN at CS.YALE.EDU Speakers: Michael Levine University of California San Diego, CA John Reinitz Yale University New Haven, CT George Oster University of California Berkeley, CA Brian Goodwin Open University Milton Keynes, UK Questions about individual sessions may be sent to the respective Discussion Leaders (phone numbers and email addresses provided above). For general conference information, you may contact any of the discussion leaders or any other member of the Organizing Committee (chair: Carmen Mannella) or Program Committee (chair: Joachim Frank). Phone numbers and email addresses of the other members of these committees are listed below: Jeff Bell, Rennselaer Polytechnic Institute, Troy, NY 518-276-4075 BELL at VAX1.CHEM.RPI.EDU Stephen Bryant, National Center for Biotechnology Information, NIH, Bethesda, MD 301-496-2475 (ext. 65) BRYANT at NCBI.NLM.NIH.GOV Carmen Mannella, Wadsworth Center and State Univ. 
of New York, Albany 518-474-2462 CARMEN at TETHYS.PH.ALBANY.EDU Patrick Van Roey, Wadsworth Center, Albany, NY 518-473-1336 VANROEY at TETHYS.PH.ALBANY.EDU CONFERENCE SITE The conference, one of the Albany Conference series held annually since 1984, will take place at the Rensselaerville Conference Center, located 30 miles southwest of Albany, NY in the Helderberg Mountains. The Institute offers on-campus facilities including a large auditorium with all necessary audio-visual equipment, and smaller conference halls for informal workshops and poster sessions. The Weathervane Restaurant, located on-campus and formerly the carriage house of the Huyck estate, provides meals and refreshments, while overnight lodging is available in the modern and classic estate houses. Rooms are assigned in advance to registrants, and transportation to and from Rensselaerville is provided from the airport, as well as train and bus stations. The rural, secluded setting of the conference, the limited number of participants and the scheduling of sessions in the morning and the evening -- leaving the afternoons free -- are intended to facilitate informal discussions among conference participants. REGISTRATION INFORMATION CONFERENCE FEE: $475 includes registration, accomodations (double occupancy), meals and transportation between the conference center and Albany airport. A limited number of single occupancy accomodations are available for an extra $100. Payment of the full fee will be required by AUGUST 31, 1992. Please note that neither the Albany Conferences nor the Rensselaerville Conference Center accepts credit cards. APPLICATION DEADLINE: July 31, 1992. For further registration information and a copy of the application form for the 1992 Albany Conference on Computational Biology, please call the conference coordinator, Carole Keith, 518-442-4327, FAX 518-442-4767, Bitnet: CAROLE at ALBNYVM1, or write to The 1992 Albany Conference, P.O. Box 8836, Albany, NY 12208-0836. - - - - - - - - - - - - - - - - - - - - - - - - Individuals may also use the following "E-Mail application form" to register for this meeting: Name: Organization: Business Address: City: State: Zip: Business Phone: Fax: Because attendance is limited, please describe briefly your research interests or activities which explain your interest in participating in this conference. If you plan to submit a poster, please include its title and (if ready) a short abstract. (You will be asked to provide a one-page, camera-ready version of the poster abstract, using 1.5 inch borders, for the meeting workbook.) Send this E-mail application to CAROLE at ALBNYVM1 before the registration deadline (July 31). TRAVEL AWARDS Graduate students and postdocs who would like to be considered for a Young Investigators travel award should submit with their registration form a brief letter explaining his/her research interests. Graduate students should also include a letter of recommendation from a faculty advisor. Applications from members of groups that are underrepresented in this field (women and racial minorities) are encouraged. 
===============================================================================  From rsun at orion.ssdc.honeywell.com Tue Apr 28 10:04:42 1992 From: rsun at orion.ssdc.honeywell.com (Ron Sun) Date: Tue, 28 Apr 92 09:04:42 CDT Subject: TR available Message-ID: <9204281404.AA17820@orion.ssdc.honeywell.com> TR available: Fuzzy Evidential Logic: A Model of Causality for Commonsense Reasoning Ron Sun Honeywell SSDC Minneapolis, MN 55418 This paper proposes a fuzzy evidential model for commonsense causal reasoning. After an analysis of the advantages and limitations of existing accounts of causality, a generalized rule-based model FEL ({\it Fuzzy Evidential Logic}) is proposed that takes into account the inexactness and the cumulative evidentiality of commonsense reasoning. It corresponds naturally to a neural (connectionist) network. Detailed analyses are performed regarding how the model handles commonsense causal reasoning. To appear in Proc. of 14th Cognitive Science Conference, 1992 ---------------------------------------------------------------- It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack) No hardcopy available. FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sun.cogsci92.ps.Z ftp> quit unix> uncompress sun.cogsci92.ps.Z unix> lpr sun.cogsci92.ps (or however you print postscript) p.s. If you can't find the paper there, it might still be in Inbox (pub/neuroprose/Inbox). It is ftpable from there.  From gordon at prodigal.psych.rochester.edu Thu Apr 30 20:07:57 1992 From: gordon at prodigal.psych.rochester.edu (Dustin Gordon) Date: Thu, 30 Apr 92 20:07:57 EDT Subject: Postdoctoral Positions Message-ID: <9205010007.AA05044@prodigal.psych.rochester.edu> POSTDOCTORAL POSITIONS IN THE LANGUAGE SCIENCES AT ROCHESTER The Center for the Sciences of Language [CSL] at the University of Rochester has two NIH-funded postdoctoral trainee positions that can start anytime after July 1, 1992, and can run from one to two years. CSL is an interdisciplinary unit which connects programs in American Sign Language, Psycholinguistics, Linguistics, Natural language processing, Neuroscience, Philosophy, and Vision. Fellows will be expected to participate in a variety of existing research and teaching projects between these disciplines. Applicants should have a relevant background and an interest in interdisciplinary research training in the language sciences. We encourage applications from minorities and women. Applications should be sent to Tom Bever, CSL Director, Meliora Hall, University of Rochester, Rochester, NY, 14627; Bever at prodigal.psych.rochester.edu; 716-275-8724. Please include a vita, a statement of interests and the names and email addresses and/or phone numbers of three recommenders.
From rosanna at cns.edinburgh.ac.uk Wed Apr 29 10:17:41 1992 From: rosanna at cns.edinburgh.ac.uk (Rosanna Maccagnano) Date: Wed, 29 Apr 92 10:17:41 BST Subject: NETWORK Message-ID: <4169.9204290917@subnode.cns.ed.ac.uk> CONTENTS OF NETWORK - COMPUTATION IN NEURAL SYSTEMS Volume 3 Number 2 May 1992 LETTER TO THE EDITOR 101 A modified neuron model that scales and resolves network paralysis M SASEETHARAN & M P MOODY PAPERS 105 Modelling Hebbian cell assemblies comprised of cortical neurons A LANSNER & E FRANSEN 121 Effective neurons and attractor neural networks in cortical environment D J AMIT & M V TSODYKS 139 Associative memory in a network of ``spiking'' neurons W GERSTNER & J L VAN HEMMEN 165 Study of a learning algorithm for neural networks with discrete synaptic couplings C J PEREZ VICENTE, J CARRABINA & E VALDERRAMA 177 Information capacity in recurrent McCulloch-Pitts networks with sparsely coded memory states G PALM & F T SOMMER 187 Computing with a difference neuron D SELIGSON, M GRINIASTY, D HANSEL & N SHORESH 205 Self-organisation with partial data T SAMAD & S A HARP REVIEW ARTICLE 213 Could information theory provide an ecological theory of sensory processing? J J ATICK 253 ABSTRACTS SECTION NETWORK welcomes research Papers and Letters where the findings have demonstrable relevance across traditional disciplinary boundaries. Research Papers can be of any length, if that length can be justified by content. Rarely, however, is it expected that a length in excess of 10,000 words will be justified. 2,500 words is the expected limit for research Letters. Articles can be published from authors' TeX source codes. Macros can be supplied to produce papers in the form suitable for refereeing and for IOP house style. For more details contact the Editorial Services Manager at IOP Publishing, Techno House, Redcliffe Way, Bristol BS1 6NX, UK. Telephone: 0272 297481 Fax: 0272 294318 Telex: 449149 INSTP G Email Janet: IOPPL at UK.AC.RL.GB Subscription Information Frequency: quarterly Subscription rates: Institution 149.00 pounds (US$274.00) Individual (UK) 17.50 pounds (Overseas) 20.50 pounds (US$41.00) A microfiche edition is also available at 89.00 pounds (US$164.00)  From mdg at magi.ncsl.nist.gov Fri Apr 10 08:23:11 1992 From: mdg at magi.ncsl.nist.gov (Mike Garris x2928) Date: Fri, 10 Apr 92 08:23:11 EDT Subject: New NIST OCR Database Message-ID: <9204101223.AA14894@magi.ncsl.nist.gov> NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY Announces a New Database +-----------------------------+ | "NIST Special Database 3" | +-----------------------------+ Binary Images of Handwritten Segmented Characters (HWSC) The NIST database of handwritten segmented characters contains 313,389 isolated character images segmented from the 2,100 full-page images distributed with "NIST Special Database 1". The database includes the 2,100 pages of binary, black and white, images of hand-printed numerals and text. This significant new database contains 223,125 digits, 44,951 upper-case, and 45,313 lower-case character images. Each character image has been centered in a separate 128 by 128 pixel region and has been assigned a classification which has been manually corrected so that the error rate of the segmentation and assigned classification is less than 0.1%. The uncompressed database totals approximately 2.75 gigabytes of image data and includes image format documentation and example software. 
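As a minimal illustration of how such isolated character images might be used for recognition experiments once they have been decoded (the on-disc image format and example software are included with the database), the following Python sketch builds a stratified train/test split. The in-memory (image, label) representation assumed here is illustrative only and is not part of the distribution.

import random
from collections import defaultdict

def stratified_split(dataset, train_fraction=0.8, seed=0):
    """Split (image, label) pairs into train and test sets, class by class.

    `dataset` is assumed to be a list of (image, label) tuples, where
    `image` is a 128 x 128 array of 0/1 pixels already decoded from the
    CD-ROM and `label` is the assigned character class (e.g. '7', 'A', 'b').
    """
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append(image)

    rng = random.Random(seed)
    train, test = [], []
    for label, images in by_class.items():
        rng.shuffle(images)
        cut = int(len(images) * train_fraction)
        train.extend((img, label) for img in images[:cut])
        test.extend((img, label) for img in images[cut:])
    return train, test

Splitting within each class keeps the relative proportions of digits, upper-case, and lower-case characters comparable across the two partitions.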
"NIST Special Database 3" has the following features: + 313,389 isolated character images including classifications + 223,125 digits, 44,951 upper-case, and 45,313 lower-case images + 2,100 full-page images + 12 pixel per millimeter resolution + image format documentation and example software Suitable for automated hand-print recognition research, the database can be used for: + algorithm development + system training and testing The database is a valuable tool for training recognition systems on a large statistical sample of hand-printed characters. The system requirements are a 5.25" CD-ROM drive with software to read ISO-9660 format. If you have any further technical questions please contact: Michael D. Garris mdg at magi.ncsl.nist.gov (301)975-2928 (new number!) If you wish to order the database, please contact: Standard Reference Data National Institute of Standards and Technology 221/A323 Gaithersburg, MD 20899 (301)975-2208 (301)926-0416 (FAX)  From dld at magi.ncsl.nist.gov Mon Apr 13 08:09:43 1992 From: dld at magi.ncsl.nist.gov (Darrin Dimmick X4147) Date: Mon, 13 Apr 92 08:09:43 EDT Subject: NIST SPECIAL DATABASE 2 Message-ID: <9204131209.AA21234@magi.ncsl.nist.gov> NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY Announces a New Database +-----------------------------+ | "NIST Special Database 2" | +-----------------------------+ Structured Forms Reference Set (SFRS) The NIST database of structured forms contains 5,590 full page images of simulated tax forms completed using machine print. THERE IS NO REAL TAX DATA IN THIS DATABASE. The structured forms used in this database are 12 different forms from the 1988, IRS 1040 Package X. These include Forms 1040, 2106, 2441, 4562, and 6251 together with Schedules A, B, C, D, E, F and SE. Eight of these forms contain two pages or form faces making a total of 20 form faces represented in the database. Each image is stored in bi-level black and white raster format. The images in this database appear to be real forms prepared by individuals but the images have been automatically derived and synthesized using a computer and contain no "real" tax data. The entry field values on the forms have been automatically generated by a computer in order to make the data available without the danger of distributing privileged tax information. In addition to the images the database includes 5,590 answer files, one for each image. Each answer file contains an ASCII representation of the data found in the entry fields on the corresponding image. Image format documentation and example software are also provided. The uncompressed database totals approximately 5.9 gigabytes of data. "NIST Special Database 2" has the following features: + 5,590 full-page images + 5,590 answer files + 12 pixel per millimeter resolution + image format documentation and example software Suitable for automated document processing system research and development, the database can be used for: + algorithm development + system training and testing The system requirements are a 5.25" CD-ROM drive with software to read ISO-9660 format. If you have any further technical questions please contact: Darrin L. Dimmick dld at magi.ncsl.nist.gov (301)975-4147 If you wish to order the database, please contact: Standard Reference Data National Institute of Standards and Technology 221/A323 Gaithersburg, MD 20899 (301)975-2208 (301)926-0416 (FAX)