From xie at ee.su.OZ.AU Fri Mar 1 09:59:47 1991
From: xie at ee.su.OZ.AU (Yun Xie, Sydney Univ. Elec. Eng., Tel: (+61-2)
Date: Fri, 1 Mar 91 09:59:47 EST
Subject: technical report available
Message-ID: <9102282259.AA00509@ee.su.oz.au>

The following is the abstract of a report on our recent research work. The report is available by FTP and has been submitted.

Analysis of the Effects of Quantization in Multi-Layer Neural Networks Using Statistical Model

Yun Xie, Dept. of Electronic Engineering, Tsinghua University, Beijing 100084, P.R. China
Marwan A. Jabri, School of Electrical Engineering, The University of Sydney, N.S.W. 2006, Australia

ABSTRACT

A statistical quantization model is used to analyse the effects of quantization when digital techniques are used to implement a real-valued feedforward multi-layer neural network. In this process, we introduce a parameter that we call the ``effective non-linearity coefficient'', which is important in the study of the quantization effects. We develop, as a function of the quantization parameters, general statistical formulations of the performance degradation of the neural network caused by quantization. Our formulation predicts (as one may intuitively expect) that the network's performance degradation gets worse when the number of bits is decreased; that a change in the number of hidden units in a layer has no effect on the degradation; that, for a constant ``effective non-linearity coefficient'' and number of bits, an increase in the number of layers leads to worse performance degradation of the network; and that the number of bits in successive layers can be reduced if the neurons of the lower layer are non-linear.

unix> ftp cheops.cis.ohio-state.edu
Connected to cheops.cis.ohio-state.edu
220 cheops.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password: neuron
230 Guest login ok, access restrictions apply.
ftp> binary
ftp> cd pub
ftp> cd neuroprose
ftp> get yun.quant.ps.Z
ftp> bye
unix> uncompress yun.quant.ps.Z
unix> lpr yun.quant.ps

From p-mehra at uiuc.edu Fri Mar 1 04:05:45 1991
From: p-mehra at uiuc.edu (Pankaj Mehra)
Date: Fri, 01 Mar 91 03:05:45 CST
Subject: Chaos update: 2 refs. on Control of chaotic systems
Message-ID: <9103010905.AA02166@hobbes>

** PLS. DO NOT CROSS-POST **

In my original posting, I mentioned a talk by Hopfield. One week after that, Alfred Hubler of the Center for Complex Systems Research gave a talk on "Controlled Chaos" in the same lecture series. The following references describe some of his recent work.

F. Ohle et al., "Adaptive Control of Chaotic Systems", Tech Report CCS-90-13, Center for Complex Systems Research, Beckman Institute, University of Illinois, Urbana, IL, 1990. (Part of Ohle's Ph.D. thesis from the Max-Planck Institute)

E. A. Jackson and A. Hubler, "Periodic Entrainment of Chaotic Logistic Map Dynamics", Physica D, 44, pp. 407-420, Amsterdam: North-Holland, 1990.

- Pankaj

From jcp at vaxserv.sarnoff.com Fri Mar 1 19:34:15 1991
From: jcp at vaxserv.sarnoff.com (John Pearson W343 x2385)
Date: Fri, 1 Mar 91 19:34:15 EST
Subject: SOLICITING POSTER CONCEPTS FOR NIPS91
Message-ID: <9103020034.AA05495@sarnoff.sarnoff.com>

We are looking for good ideas for the NIPS-91 Poster. We must make a decision by next Wednesday. Three key motifs that have been incorporated in past posters are: (1) the synergy between studies of natural and synthetic neural information processing systems; (2) the mountainous surroundings at the conference and skiing at the workshop; (3) natural/synthetic contrasts in general.
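As a purely empirical companion to the Xie & Jabri quantization report announced at the top of this digest, the following Python sketch quantizes the weights and activations of a small random feedforward network to b bits and measures the resulting output degradation. The toy topology, the [-1, 1] value range, and the uniform quantizer are illustrative assumptions chosen here for demonstration; they are not taken from the report's statistical model.

# Sketch: measure how b-bit uniform quantization of weights and activations
# degrades the output of a small feedforward network (assumed toy setup).
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits, lo=-1.0, hi=1.0):
    # uniform quantization of x to 2**bits levels on [lo, hi]
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return lo + np.round((np.clip(x, lo, hi) - lo) / step) * step

def forward(x, weights, bits=None):
    # feedforward pass; if bits is given, quantize weights and activations
    a = x
    for W in weights:
        if bits is not None:
            W = quantize(W, bits)
            a = quantize(a, bits)
        a = np.tanh(a @ W)          # tanh keeps activations inside [-1, 1]
    return a

layers = [8, 16, 16, 4]             # assumed toy topology
weights = [rng.uniform(-1, 1, (m, n)) for m, n in zip(layers[:-1], layers[1:])]
x = rng.uniform(-1, 1, (100, layers[0]))

exact = forward(x, weights)
for bits in (4, 6, 8, 10, 12, 16):
    err = np.mean((forward(x, weights, bits) - exact) ** 2)
    print(f"{bits:2d} bits -> mean squared output degradation {err:.2e}")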
One half-baked idea is to take-off from one of Escher's drawings, such as the "hands drawing hands", or the subtle transmogrification of one organism into another through a series of intermediates that interlock to tile a surface. Here's your chance to express yourself! Send your suggestions to the NIPS-91 Publicity Chairman. Thanks! John Pearson jcp at as1.sarnoff.com NIPS-91 Publicity Chairman Steve Hanson NIPS-91 Program Chairman From marwan at ee.su.OZ.AU Fri Mar 1 21:32:09 1991 From: marwan at ee.su.OZ.AU (Marwan Jabri) Date: Sat, 2 Mar 1991 13:32:09 +1100 Subject: tech report available Message-ID: <9103020232.AA11925@brutus.ee.su.oz.au> ***************** Technical Report Available ***************** Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multi-Layer Networks Marwan Jabri & Barry Flower School of Electrical Engineering University of Sydney Abstract Previous work on analog VLSI implementation of multi-layer perceptrons with on-chip learning has mainly targeted the implementation of algorithms like back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. In this paper we show that using gradient descent with direct approximation of the gradient instead of back-propagation is cheapest for parallel analog implementations. We also show that this technique (we call ``weight perturbation'') is suitable for multi-layer recurrent networks as well. A discrete level analog implementation showing the training of an XOR network as an example is also presented. *** Also submitted To ftp this report: ------------------- ftp cheops.cis.ohio-state.edu (or ftp 128.146.8.62) >name: anonymous >passwork: neuron >binary >cd pub/neuroprose >get jabri.wpert.ps.Z >quit uncompress jabri.wpert.ps.Z lpr -P jabri.wpert.ps This file contains a large picture, and as a result you may have to set the time-out to a large value (we do that with lpr -i1000000 ...) If for any reasons you are unable to print the file, you can ask for a hardcopy by writing to (and asking for SEDAL Tech Report 1991-1-5): Marwan Jabri Sydney University Electrical Engineering NSW 2006 Australia From marwan at ee.su.OZ.AU Sat Mar 2 19:44:58 1991 From: marwan at ee.su.OZ.AU (Marwan A. Jabri, Sydney Univ. Elec. Eng., Tel: (+61-2) Date: Sun, 3 Mar 1991 10:44:58 +1000 Subject: Tech Report Available Message-ID: <9103030044.AA16855@brutus.ee.su.oz.au> ***************** Technical Report Available ***************** Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multi-Layer Networks Marwan Jabri & Barry Flower Systems Engineering and Design Automation Laboratory School of Electrical Engineering University of Sydney marwan at ee.su.oz.au (SEDAL Tech Report 1991-1-5) Abstract Previous work on analog VLSI implementation of multi-layer perceptrons with on-chip learning has mainly targeted the implementation of algorithms like back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. In this paper we show that using gradient descent with direct approximation of the gradient instead of back-propagation is cheapest for parallel analog implementations. We also show that this technique (we call ``weight perturbation'') is suitable for multi-layer recurrent networks as well. 
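As a rough, software-only illustration of the weight-perturbation idea described in the abstract above (not the analog implementation the report is about), the following Python sketch trains a tiny network on XOR by perturbing one parameter at a time and using the measured change in error as a forward-difference estimate of the gradient. The network size, learning rate, perturbation size and number of epochs are guesses for illustration, not values from Jabri & Flower, and a different random seed may need more epochs to reach a low error.

# Sketch: weight perturbation instead of back-propagation on an XOR network.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

# weights and biases of a 2-3-1 network, kept in one list so the
# perturbation loop below can treat every parameter array uniformly
params = [rng.normal(0, 0.5, (2, 3)), np.zeros(3),      # hidden weights, biases
          rng.normal(0, 0.5, (3, 1)), np.zeros(1)]      # output weights, bias

def sse():
    h = np.tanh(X @ params[0] + params[1])
    y = 1.0 / (1.0 + np.exp(-(h @ params[2] + params[3])))
    return np.sum((y - T) ** 2)

eta, delta = 0.5, 1e-3
for epoch in range(10000):
    base = sse()
    grads = []
    for P in params:
        g = np.zeros_like(P)
        for idx in np.ndindex(*P.shape):
            old = P[idx]
            P[idx] = old + delta                 # perturb a single parameter
            g[idx] = (sse() - base) / delta      # forward-difference gradient
            P[idx] = old                         # restore it
        grads.append(g)
    for P, g in zip(params, grads):
        P -= eta * g                             # ordinary gradient descent step
print("sum-squared error on XOR after training:", round(sse(), 4))

Each update costs one extra forward pass per parameter rather than a backward pass, which is the trade-off the report argues is attractive for parallel analog hardware, where forward evaluations are cheap and back-propagation circuitry is expensive.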
A discrete level analog implementation showing the training of an XOR network as an example is also presented. *** Also submitted To ftp this report: ------------------- ftp cheops.cis.ohio-state.edu (or ftp 128.146.8.62) >name: anonymous >passwork: neuron >binary >cd pub/neuroprose >get jabri.wpert.ps.Z >quit uncompress jabri.wpert.ps.Z lpr -P jabri.wpert.ps This file contains a large picture, and as a result you may have to set the time-out to a large value (we do that with lpr -i1000000 ...) If for any reasons you are unable to print the file, you can ask for a hardcopy by writing to (and asking for SEDAL Tech Report 1991-1-5): Marwan Jabri Sydney University Electrical Engineering NSW 2006 Australia From boubez at bass.rutgers.edu Mon Mar 4 11:52:42 1991 From: boubez at bass.rutgers.edu (boubez@bass.rutgers.edu) Date: Mon, 4 Mar 91 11:52:42 EST Subject: Technical report available In-Reply-To: stefano nolfi's message of Wed, 06 Feb 91 13:24:05 EDT <9102061541.AA22724@caip.rutgers.edu> Message-ID: <9103041652.AA06744@piano> Hello. I would appreciate receiving a copy of your report titled: AUTO-TEACHING: NETWORKS THAT DEVELOP THEIR OWN TEACHING INPUT Any kind of copy is welcome, whether it's electronic (email or ftp if available publicly), or hardcopy. Thank you in advance. toufic R 2 4 |_|_| Toufic Boubez | | | boubez at caip.rutgers.edu 1 3 5 CAIP Center, Rutgers University From boubez at bass.rutgers.edu Mon Mar 4 15:26:09 1991 From: boubez at bass.rutgers.edu (boubez@bass.rutgers.edu) Date: Mon, 4 Mar 91 15:26:09 EST Subject: Apology Message-ID: <9103042026.AA06924@piano> I apologise for my latest message to the list, I forgot to remove the "CC: connectionists" line from the body of the message. toufic From zemel at cs.toronto.edu Tue Mar 5 15:16:56 1991 From: zemel at cs.toronto.edu (zemel@cs.toronto.edu) Date: Tue, 5 Mar 1991 15:16:56 -0500 Subject: preprint available Message-ID: <91Mar5.151718est.7134@neat.cs.toronto.edu> The following paper has been placed in the neuroprose archives at Ohio State University: Discovering Viewpoint-Invariant Relationships That Characterize Objects Richard S. Zemel & Geoffrey E. Hinton Department of Computer Science University of Toronto Toronto, Ont. CANADA M5S-1A4 Abstract Using an unsupervised learning procedure, a network is trained on an ensemble of images of the same two-dimensional object at different positions, orientations and sizes. Each half of the network ``sees'' one fragment of the object, and tries to produce as output a set of 4 parameters that have high mutual information with the 4 parameters output by the other half of the network. Given the ensemble of training patterns, the 4 parameters on which the two halves of the network can agree are the position, orientation, and size of the whole object, or some recoding of them. After training, the network can reject instances of other shapes by using the fact that the predictions made by its two halves disagree. If two competing networks are trained on an unlabelled mixture of images of two objects, they cluster the training cases on the basis of the objects' shapes, independently of the position, orientation, and size. This paper will appear in the NIPS-90 proceedings. 
To retrieve it by anonymous ftp, do the following: unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name (cheops.cis.ohio-state.edu:): anonymous Password (cheops.cis.ohio-state.edu:anonymous): ftp> cd pub/neuroprose ftp> binary ftp> get zemel.unsup-recog.ps.Z ftp> quit unix> unix> zcat zemel.unsup-recog.ps.Z | lpr -P From tp at irst.it Wed Mar 6 07:10:54 1991 From: tp at irst.it (Tomaso Poggio) Date: Wed, 6 Mar 91 13:10:54 +0100 Subject: preprint available In-Reply-To: zemel@cs.toronto.edu's message of Tue, 5 Mar 1991 15:16:56 -0500 <91Mar5.151718est.7134@neat.cs.toronto.edu> Message-ID: <9103061210.AA17840@caneva.irst.it> From russ at dash.mitre.org Wed Mar 6 08:20:35 1991 From: russ at dash.mitre.org (Russell Leighton) Date: Wed, 6 Mar 91 08:20:35 EST Subject: simulator announcement Message-ID: <9103061320.AA25799@dash.mitre.org> The following describes a neural network simulation environment made available free from the MITRE Corporation. The software contains a neural network simulation code generator which generates high performance C code implementations for backpropagation networks. Also included is a graphical interface for visualization. PUBLIC DOMAIN NEURAL NETWORK SIMULATOR AND GRAPHICS ENVIRONMENT AVAILABLE Aspirin for MIGRAINES Version 4.0 The Mitre Corporation is making available free to the public a neural network simulation environment called Aspirin for MIGRAINES. The software consists of a code generator that builds neural network simulations by reading a network description (written in a language called "Aspirin") and generates a C simulation. A graphical interface (called "MIGRAINES") is provided for platforms that support the Sun window system NeWS1.1. For platforms that do not support NeWS1.1 no graphics are currently available. The system has been ported to a number of platforms: Sun3/4 Silicon Graphics Iris IBM RS/6000 DecStation Cray YMP Included with the software are "config" files for these platforms. Porting to other platforms may be done by choosing the "closest" platform currently supported and adapting the config files. Aspirin 4.15 ------------ The software that we are releasing now is principally for creating, and evaluating, feed-forward networks such as those used with the backpropagation learning algorithm. The software is aimed both at the expert programmer/neural network researcher who may wish to tailor significant portions of the system to his/her precise needs, as well as at casual users who will wish to use the system with an absolute minimum of effort. Aspirin was originally conceived as ``a way of dealing with MIGRAINES.'' Our goal was to create an underlying system that would exist behind the graphics and provide the network modeling facilities. The system had to be flexible enough to allow research, that is, make it easy for a user to make frequent, possibly substantial, changes to network designs and learning algorithms. At the same time it had to be efficient enough to allow large ``real-world'' neural network systems to be developed. Aspirin uses a front-end parser and code generators to realize this goal. A high level declarative language has been developed to describe a network. This language was designed to make commonly used network constructs simple to describe, but to allow any network to be described. The Aspirin file defines the type of network, the size and topology of the network, and descriptions of the network's input and output. 
This file may also include information such as initial values of weights, names of user defined functions, and hints for the MIGRAINES graphics system. The Aspirin language is based around the concept of a "black box". A black box is a module that (optionally) receives input and (necessarily) produces output. Black boxes are autonomous units that are used to construct neural network systems. Black boxes may be connected arbitrarily to create large possibly heterogeneous network systems. As a simple example, pre or post-processing stages of a neural network can be considered black boxes that do not learn. The output of the Aspirin parser is sent to the appropriate code generator that implements the desired neural network paradigm. The goal of Aspirin is to provide a common extendible front-end language and parser for different network paradigms. The publicly available software will include a backpropagation code generator that supports several variations of the backpropagation learning algorithm. For backpropagation networks and their variations, Aspirin supports a wide variety of capabilities: 1. feed-forward layered networks with arbitrary connections 2. ``skip level'' connections 3. one and two-dimensional tessellations 4. a few node transfer functions (as well as user defined) 5. connections to layers/inputs at arbitrary delays, also "Waibel style" time-delay neural networks The file describing a network is processed by the Aspirin parser and files containing C functions to implement that network are generated. This code can then be linked with an application which uses these routines to control the network. Optionally, a complete simulation may be automatically generated which is integrated with the graphics and can read data in a variety of file formats. Currently supported file formats are: Ascii Type1, Type2, Type3 (simple floating point file formats) ProMatlab Examples -------- A set of examples comes with the distribution: xor: from RumelHart and McClelland, et al, "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 330-334. encode: from RumelHart and McClelland, et al, "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 335-339. detect: Detecting a sine wave in noise. characters: Learing to recognize 4 characters independent of rotation. sonar: from Gorman, R. P., and Sejnowski, T. J. (1988). "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets" in Neural Networks, Vol. 1, pp. 75-89. spiral: from Kevin J. Lang and Michael J, Witbrock, "Learning to Tell Two Spirals Apart", in Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, 1988. ntalk: from Sejnowski, T.J., and Rosenberg, C.R. (1987). "Parallel networks that learn to pronounce English text" in Complex Systems, 1, 145-168. perf: A large network used only for performance testing. Performance of Aspirin simulations ---------------------------------- The backpropagation code generator produces simulations that run very efficiently. Aspirin simulations do best on vector machines when the networks are large, as exemplified by the Cray's performance. All simulations were done using the Unix "time" function and include all simulation overhead. The connections per second rating was calculated by multiplying the number of iterations by the total number of connections in the network and dividing by the "user" time provided by the Unix time function. Two tests were performed. 
In the first, the network was simply run "forward" 100,000 times and timed. In the second, the network was timed in learning mode and run until convergence. Under both tests the "user" time included the time to read in the data and initialize the network. Sonar: This network is a two layer fully connected network with 60 inputs: 2-34-60. Millions of Connections per Second Forward: SparcStation1: 1 IBM RS/6000 320: 2.8 Cray YMP: 15.7 Backward: SparcStation1: 0.3 IBM RS/6000 320: 0.8 Cray YMP: 7 Gorman, R. P., and Sejnowski, T. J. (1988). "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets" in Neural Networks, Vol. 1, pp. 75-89. Nettalk: This network is a two layer fully connected network with [29 x 7] inputs: 26-[15 x 8]-[29 x 7] Millions of Connections per Second Forward: SparcStation1: 1 IBM RS/6000 320: 3.5 Cray YMP: 64 Backward: SparcStation1: 0.4 IBM RS/6000 320: 1.3 Cray YMP: 24.8 Sejnowski, T.J., and Rosenberg, C.R. (1987). "Parallel networks that learn to pronounce English text" in Complex Systems, 1, 145-168. Perf: This network was only run on the Cray. It is very large with very long vectors. The performance on this network is in some sense a peak performance for a machine. This network is a two layer fully connected network with 2048 inputs: 128-512-2048 Millions of Connections per Second Forward: Cray YMP: 96.3 Backward: Cray YMP: 18.9 Note: The cray benchmarks are courtesy of the Center for High Performance Computing at the University of Texas. Aspirin 5.0 ----------- The next release of the software *may* include: 1. 2nd order (quadratic) connections 2. Auto-regressive nodes (this a form of limited recurrence) 3. Code generators for other (not backprop) neural network learning algorithms. 4. More supported file formats 5. More config files for other platforms. MIGRAINES 4.0 ------------- MIGRAINES is a graphics system for visualizing neural network systems. The graphics that are currently being released are exclusively for feed-forward networks. They provide the ability to display networks, arc weights, node values, network inputs, network outputs, and target outputs in a wide variety of formats. There are many different representations that may be used to display arcs weights and node values, including pseudo-color (or grayscale) arrays (with user modifiable colors and value-to-color mappings), various plots, bar charts and other pictorial representations. MIGRAINES is not necessary for the execution of the Aspirin system. Networks may be designed, executed, tested, and saved entirely apart from any graphic interface. The more exotic the network being considered, the smaller the amount of graphics that will be useful. However, the graphics offer such a degree of creative and analytic power for neural network research that even the most jaded researcher will find them useful. Although the graphics were developed for the NeWS1.1 window system, it can be run under Sun's OpenWindows which supports NeWS applications. Note: OpenWindows is not 100% backward compatible with NeWS1.1 so some features of the graphics may not work well. MIGRAINES 5.0 ------------- The next release will replace the NeWS1.1 graphics with an X based system as well extending the graphical capabilities. An interface to the scientific visualization system apE2.0 *may* be available. How to get the software ----------------------- The software is available from two FTP sites, CMU's simulator collection and UCLA's cognitive science machines. 
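For readers who want to reproduce the connections-per-second figures quoted above, the rating is simply iterations times total connections divided by the reported "user" time. The short sketch below works through the arithmetic for the sonar network, reading "2-34-60" as 60 inputs, 34 hidden units and 2 outputs with two fully connected weight layers (bias connections not counted); the timing value is a made-up placeholder to show the calculation, not a measured number.

# Worked example of the connections-per-second rating described above.
iterations = 100_000
connections = 60 * 34 + 34 * 2          # 2108 connections in the sonar network
user_time_seconds = 200.0               # hypothetical "user" time from Unix time
cps = iterations * connections / user_time_seconds
print(f"{cps / 1e6:.2f} million connections per second")   # ~1.05 MCPS here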
The compressed tar file is a little more than 2 megabytes. The software is currently only available via anonymous FTP. > To get the software from CMU's simulator collection: 1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/code". Any subdirectories of this one should also be accessible. Parent directories should not be. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "connectionists-request at cs.cmu.edu". 5. Set binary mode by typing the command "binary" ** THIS IS IMPORTANT ** 6. Get the file "am4.tar.Z" > To get the software from UCLA's cognitive science machines: 1. Create an FTP connection to "polaris.cognet.ucla.edu" (128.97.50.3) (typically with the command "ftp 128.97.50.3") 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "alexis", by typing the command "cd alexis" 4. Set binary mode by typing the command "binary" ** THIS IS IMPORTANT ** 5. Get the file by typing the command "get am4.tar.Z" How to unpack the software -------------------------- After ftp'ing the file make the directory you wish to install the software. Go to that directory and type: zcat am4.tar.Z | tar xvf - How to print the manual ----------------------- The user manual is located in ./doc in a few compressed PostScript files. To print each file on a PostScript printer type: zcat | lpr Thanks ------ Thanks to the folks at CMU and UCLA for the ftp site. Thanks to the folks at the Center for High Performance Computing at the University of Texas for the use of their computers. Copyright and license agreement ------------------------------- Since the Aspirin/MIGRAINES system is licensed free of charge, the MITRE Corporation provides absolutely no warranty. Should the Aspirin/MIGRAINES system prove defective, you must assume the cost of all necessary servicing, repair or correction. In no way will the MITRE Corporation be liable to you for damages, including any lost profits, lost monies, or other special, incidental or consequential damages arising out of the use or in ability to use the Aspirin/MIGRAINES system. This software is the copyright of The MITRE Corporation. It may be freely used and modified for research and development purposes. We require a brief acknowledgement in any research paper or other publication where this software has made a significant contribution. If you wish to use it for commercial gain you must contact The MITRE Corporation for conditions of use. The MITRE Corporation provides absolutely NO WARRANTY for this software. February, 1991 Russell Leighton Alexis Wieland The MITRE Corporation 7525 Colshire Dr. McLean, Va. 22102-3481 From smieja at gmdzi.uucp Fri Mar 8 05:57:11 1991 From: smieja at gmdzi.uucp (Frank Smieja) Date: Fri, 8 Mar 91 09:57:11 -0100 Subject: Tech Report available: NN module systems Message-ID: <9103080857.AA11648@gmdzi.gmd.de> The following technical report is available. It will appear in the proceedings of the AISB90 COnference in Leeds, England. The report exists as smieja.minos.ps.Z in the Ohio cheops account (in /neuroprose, anonymous login via FTP). Normal procedure for retrieval applies. 
MULTIPLE NETWORK SYSTEMS (MINOS) MODULES: TASK DIVISION AND MODULE DISCRIMINATION It is widely considered an ultimate connectionist objective to incorporate neural networks into intelligent {\it systems.} These systems are intended to possess a varied repertoire of functions enabling adaptable interaction with a non-static environment. The first step in this direction is to develop various neural network algorithms and models, the second step is to combine such networks into a modular structure that might be incorporated into a workable system. In this paper we consider one aspect of the second point, namely: processing reliability and hiding of wetware details. Presented is an architecture for a type of neural expert module, named an {\it Authority.} An Authority consists of a number of {\it Minos\/} modules. Each of the Minos modules in an Authority has the same processing capabilities, but varies with respect to its particular {\it specialization\/} to aspects of the problem domain. The Authority employs the collection of Minoses like a panel of experts. The expert with the highest {\it confidence\/} is believed, and it is the answer and confidence quotient that are transmitted to other levels in a system hierarchy. From Connectionists-Request at CS.CMU.EDU Fri Mar 8 13:11:38 1991 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Fri, 08 Mar 91 13:11:38 EST Subject: Fwd: Request for info on simulators related to fuzzy logic Message-ID: <19306.668455898@B.GP.CS.CMU.EDU> ------- Forwarded Message From DUDZIAKM at isnet.inmos.COM Fri Mar 8 09:34:04 1991 From: DUDZIAKM at isnet.inmos.COM (DUDZIAKM@isnet.inmos.COM) Date: Fri, 8 Mar 91 07:34:04 MST Subject: Request for info on simulators related to fuzzy logic Message-ID: I am in the process of compiling an annotated list of simulators and emulators of fuzzy logic and other related non-deterministic logics. This list will be made available to the network community. I welcome any information about products and especially distributable, public-domain prototypes. I am familiar with a few of the commercial products but do not know much about what is available through the academic research community. There seem to be quite a number of neural/ connectionist simulators, such as have been recently described on this and other mailing lists. Your assistance is appreciated. Martin Dudziak SGS-THOMSON Microelectronics dudziakm at isnet.inmos.com (alternate: dudziakm at agrclu.st.it in Europe) fax: 301-290-7047 phone: 301-995-6952 ------- End of Forwarded Message From lanusse at ensta.ensta.fr Mon Mar 11 10:07:27 1991 From: lanusse at ensta.ensta.fr (+ prof) Date: Mon, 11 Mar 91 16:07:27 +0100 Subject: No subject Message-ID: <9103111507.AA09539@ensta.ensta.fr> Subject: mailing list To whom it may concern: I would like to be placed on the connectionists mailing list. Thank you Alain F. Lanusse Ecole Nationale Superieure de Techniques Avancees Paris (FRANCE) lanusse at ensta.ensta.fr From POSEIDON at UCSD Mon Mar 11 22:13:28 1991 From: POSEIDON at UCSD (POSEIDON@UCSD) Date: Mon, 11 Mar 91 19:13:28 PST Subject: help Message-ID: <9103120313.AA06782@sdcc13.UCSD.EDU> I am trying to delete myself from the connectionists mailing list, but am unable to do so. After following the proper procedure, it tells me I am deleted, but I still get the mail. Can anyone help? 
Thanks,
Patrick Dwyer
pdwyer at sdcc13.ucsd.edu

From VAINA at buenga.bu.edu Tue Mar 12 07:22:00 1991
From: VAINA at buenga.bu.edu (VAINA@buenga.bu.edu)
Date: Tue, 12 Mar 91 07:22 EST
Subject: THE COMPUTING BRAIN LECTURE - TERRY SEJNOWSKI
Message-ID:

MARCH 13, 5PM: DYNAMICS OF EYE MOVEMENTS, TERRY SEJNOWSKI, at Boston University, 110 Cummington St. (old Engineering Building), Room 150. Tea at 4pm. All Boston area people invited! (For further information call 353-2455 or 353-9144, 617 area code.)

From marwan at ee.su.OZ.AU Tue Mar 12 07:51:51 1991
From: marwan at ee.su.OZ.AU (Marwan Jabri)
Date: Tue, 12 Mar 1991 22:51:51 +1000
Subject: No subject
Message-ID: <9103121251.AA29308@brutus.ee.su.oz.au>

***************** Technical Report Available *****************

Predicting the Number of Vias and Dimensions of Full-custom Circuits Using Neural Networks Techniques

Marwan Jabri & Xiaoquan Li
School of Electrical Engineering
University of Sydney
marwan at ee.su.oz.au

(SEDAL Tech Report 1991-1-6)

Abstract

Block layout dimension prediction is an important activity in many VLSI design tasks (structural synthesis, floorplanning and physical synthesis). Block layout {\em dimension} prediction is harder than block {\em area} prediction and has previously been considered intractable [Kurdahi89]. In this paper we present a solution to this problem using a neural network machine learning paradigm. Our method uses one neural network to predict the number of vias, and then another neural network that uses this prediction and other circuit features to predict the width and the height of the layout of the circuit. Our approach has produced much better results than those published: {\em dimension} (aspect ratio) prediction average error of less than 18\%, with corresponding {\em area} prediction average error of less than 15\%. Furthermore, our technique predicts the number of vias in a circuit with less than 4\% error on average.

*** Also submitted

To ftp this report:
-------------------
ftp cheops.cis.ohio-state.edu (or ftp 128.146.8.62)
>name: anonymous
>password: neuron
>binary
>cd pub/neuroprose
>get jabri.dime.ps.Z
>quit
uncompress jabri.dime.ps.Z
lpr jabri.dime.ps

If for any reason you are unable to print the file, you can ask for a hardcopy by writing to (and asking for SEDAL Tech Report 1991-1-6):

Marwan Jabri
Sydney University Electrical Engineering
NSW 2006 Australia

From dekel at utdallas.edu Tue Mar 12 20:00:53 1991
From: dekel at utdallas.edu (Eliezer Dekel)
Date: Tue, 12 Mar 91 19:00:53 -0600
Subject: Off-line Signature recognition
Message-ID:

I am looking for references to work on off-line signature recognition. I'm aware of some work that was done before 1985. I would greatly appreciate information about more recent work. I'll summarize and post to the list.

Eliezer Dekel
The University of Texas at Dallas
dekel at utdallas.edu

From marshall at cs.unc.edu Wed Mar 13 11:34:19 1991
From: marshall at cs.unc.edu (Jonathan Marshall)
Date: Wed, 13 Mar 91 11:34:19 -0500
Subject: Triangle NN talk: Mo-Yuen Chow
Message-ID: <9103131634.AA24223@marshall.cs.unc.edu>

====== TRIANGLE AREA NEURAL NETWORK INTEREST GROUP presents: ======

Dr. MO-YUEN CHOW
Department of Electrical and Computer Engineering
North Carolina State University

Tuesday, March 19, 1991
6:00 p.m. Entrance is locked at 6:30.
Microelectronics Center Building, MCNC 3021 Cornwallis Road Research Triangle Park, NC Followed immediately by an ORGANIZATIONAL MEETING for the Triangle Area Neural Network Interest Group ---------------------------------------------------------------------- APPLICATION OF NEURAL NETWORKS TO INCIPIENT FAULT DETECTION IN INDUCTION MOTORS The main focus of the presentation is to introduce a new concept for incipient fault detection in rotating machines using artificial neural networks. Medium size induction motors are used as prototypes for rotating machines due to their wide applications. The concepts developed for induction motors can be easily generalized to other rotating machines. The common incipient faults of induction motors, namely, turn-to-turn insulation faults and bearing wear, and their effects on the motor performance are considered. A corresponding artificial neural network structure is then designed to detect those incipient faults. The designed network is trained by data of different fault conditions, obtained from a detailed induction motor simulation program. After training, the neural net is tested with a set of random data within the fault range under consideration. With a priori data knowledge, the network structure can be greatly simplified by using a high-order artificial neural network. The performance of using the batch-update and pattern-update backpropagation training algorithms for the network are compared. The satisfactory performance of using artificial neural networks in this project shows the promising future of artificial neural networks applications for other types of fault detection. ---------------------------------------------------------------------- Co-Sponsored by: Department of Electrical and Computer Eng., NCSU Department of Computer Science, UNC-CH Humanities Computing Facility, Duke Univ. Microelectronics Center of North Carolina (MCNC) For more information: Jonathan Marshall (UNC-CH, 962-1887, marshall at cs.unc.edu) or John Sutton (NCSU, 737-5065, sutton at eceugs.ece.ncsu.edu). Directions: Raleigh: I-40 west to Durham Freeway (147) north, 147 to Cornwallis exit. Durham: 147 south to Cornwallis exit. Chapel Hill: I-40 east to Durham Freeway (147) north, 147 to Cornwallis exit. FROM CORNWALLIS EXIT, bear right, go thru first set of traffic lights, passing Burroughs Wellcome, then next driveway on right is MCNC. When you enter MCNC, the Microelectronics Center building is on the left. ---------------------------------------------------------------------- We invite you to participate in organizing and running the new Triangle-area neural network (NN) interest group. It is our hope that the group will foster communication and collaboration among the local NN researchers, students, businesses, and the public. From marshall at cs.unc.edu Wed Mar 13 11:34:50 1991 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Wed, 13 Mar 91 11:34:50 -0500 Subject: Triangle NN talk: Dan Levine Message-ID: <9103131634.AA24248@marshall.cs.unc.edu> ====== TRIANGLE AREA NEURAL NETWORK INTEREST GROUP presents: ====== Prof. DANIEL S. LEVINE Department of Mathematics University of Texas at Arlington Tuesday, April 2, 1991 5:30 p.m. Refreshments will be served at 5:15. 
Sitterson Hall (Computer Science), room 011 UNC Chapel Hill ---------------------------------------------------------------------- NETWORK MODELING OF NEUROPSYCHOLOGICAL DATA A general class of neural network architectures will be discussed, based on such principles as associative learning, competition, and opponent processing. Examples of this sort of architecture will be introduced that model data on neuropsychological deficits arising from frontal lobe damage. These deficits include inability to switch criteria on a card sorting task; excessive attraction to novel stimuli; loss of verbal fluency; and difficulty in learning a flexible motor sequence. Frontal lobe damage is modeled in these networks by weakening of a specified connection. Dan Levine is author of the forthcoming textbook Introduction to Neural and Cognitive Modeling, published by L. Erlbaum Associates, 1991. He is a co-founder of the Dallas-Ft.Worth area neural network interest group M.I.N.D. ---------------------------------------------------------------------- Co-Sponsored by: Department of Electrical and Computer Eng., NCSU Department of Computer Science, UNC-CH Humanities Computing Facility, Duke Univ. For more information: Jonathan Marshall (UNC-CH, 962-1887, marshall at cs.unc.edu) or John Sutton (NCSU, 737-5065, sutton at eceugs.ece.ncsu.edu). Directions: Sitterson Hall is located across the street from the Carolina Inn, on South Columbia Street (Route 86), which is the main north-south street through downtown Chapel Hill. Free parking is available in the UNC lots, two of which are adjacent to Sitterson Hall. Municipal parking lots are located 2-3 blocks north, in downtown Chapel Hill. ---------------------------------------------------------------------- We invite you to participate in organizing and running the new Triangle-area neural network (NN) interest group. It is our hope that the group will foster communication and collaboration among the local NN researchers, students, businesses, and the public. From mathew at elroy.Jpl.Nasa.Gov Wed Mar 13 12:38:45 1991 From: mathew at elroy.Jpl.Nasa.Gov (Mathew Yeates) Date: Wed, 13 Mar 91 09:38:45 PST Subject: tech report Message-ID: <9103131738.AA01556@jane.Jpl.Nasa.Gov> The following technical report (JPL Publication) is available for anonymous ftp from the neuroprose directory at cheops.cis.ohio-state.edu. This is a short version of a previous paper "An Architecture With Neural Network Characteristics for Least Squares Problems" and has appeared in various forms at several conferences. There are two ideas that may be of interest: 1) By making the input layer of a single layer Perceptron fully connected, the learning scheme approximates Newtons algorithm instead of steepest descent. 2) By allowing local interactions between synapses the network can handle time varying behavior. Specifically, the network can implement the Kalman Filter for estimating the state of a linear system. get both yeates.pseudo-kalman.ps.Z and yeates.pseudo-kalman-fig.ps.Z A Neural Network for Computing the Pseudo-Inverse of a Matrix and Applications to Kalman Filtering Mathew C. Yeates California Institute of Technology Jet Propulsion Laboratory ABSTRACT A single layer linear neural network for associative memory is described. The matrix which best maps a set of input keys to desired output targets is computed recursively by the network using a parallel implementation of Greville's algorithm. 
This model differs from the Perceptron in that the input layer is fully interconnected leading to a parallel approximation to Newtons algorithm. This is in contrast to the steepest descent algorithm implemented by the Perceptron. By further extending the model to allow synapse updates to interact locally, a biologically plausible addition, the network implements Kalman filtering for a single output system. From marwan at ee.su.OZ.AU Thu Mar 14 01:29:45 1991 From: marwan at ee.su.OZ.AU (Marwan Jabri) Date: Thu, 14 Mar 1991 16:29:45 +1000 Subject: Job opportunities Message-ID: <9103140629.AA09964@brutus.ee.su.oz.au> Professional Assistants (2) Systems Engineering and Design Automation Laboratory School of Electrical Engineering The University of Sydney Australia Microelectronic Implementation of Neural Networks based Devices for the Analysis and Classification of Medical Signals Applications are invited from persons to work on an advanced neural network application project in the medical area. The project is being funded jointly by the Australian Government and a high-technology manufacturer of medical products. Appointees will be joining an existing team of 3 staff. The project is the research and development of architectures of neural networks to be implemented in VLSI. The project spans over a 2-year period following a feasibility study which was completed in early 1991. The first Professional Assistant is expected to have experience in VLSI engineering, to design VLSI circuit, to perform simulation, to develop simulation models of physical level implementations, to design testing jigs (hardware and software) and perform test on fabricated chips. The second Professional Assistant is also expected to have experience in VLSI engineering with the responsibility of working on the development of EEPROM storage technology, of upgrading design tools to support EEPROM, of developing an interface with the fabrication foundry, of designing building block storage elements and interfacing them with other artificial neural network building blocks. The appointee for this position is expected to spend substantial time at the University of New South Wales where some of the EEPROM work will be performed. Both Professional Assistants will also work on the overall chip prototyping that we expect to be performed in 1992. The appointment will be for a period of 2 years. Applicants should have an Electrical/Electronic Engineering degree or equivalent and a minimum of three years experience in a related fields. Preference will be given to applicants with experience in Artificial Neural Networks, and Analog or Digital Intergrated Circuits. The appointees may apply for enrollment towards a postgraduate degree (part-time). Salary range according to qualifications: $24,661 - $33,015 pa. Method of application: --------------------- Applications including curriculum vitae, list of publications and the names, addresses and fax numbers of three referees should be sent to: Dr M.A. Jabri, Sydney University Electrical Engineering Building J03 NSW 2006 Australia Tel: (+61-2) 692-2240 Fax: (+61-2) 692-3847 Email: marwan at ee.su.oz.au From whom further information may be obtained. The University reserves the right not to proceed with any appointment for financial or other reasons. Equal Opportunity is University Policy. 
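Returning to the Yeates pseudo-inverse report announced a few messages back: the core computation is the least-squares associative mapping W = T X^+ from key vectors to target vectors, which the network in the report builds up recursively with Greville's algorithm. The sketch below uses numpy's batch pseudo-inverse as a stand-in for that recursion, so it only illustrates what the converged weight matrix computes, not the parallel network update itself; the sizes are arbitrary.

# Sketch: batch pseudo-inverse associative memory, W = T X^+.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, n_pairs = 20, 5, 10
X = rng.normal(size=(n_in, n_pairs))        # key vectors as columns
T = rng.normal(size=(n_out, n_pairs))       # target vectors as columns

W = T @ np.linalg.pinv(X)                   # least-squares optimal linear map
recall_error = np.max(np.abs(W @ X - T))
print("worst-case recall error:", recall_error)   # ~0 for independent keys

When the keys are linearly independent, as here with fewer keys than input dimensions, recall is exact; Greville's algorithm arrives at the same W one key-target pair at a time, which is what makes a recursive network implementation possible.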
From elman at crl.ucsd.edu Thu Mar 14 18:05:04 1991 From: elman at crl.ucsd.edu (Jeff Elman) Date: Thu, 14 Mar 91 15:05:04 PST Subject: CRL TR-9101: The importance of starting small Message-ID: <9103142305.AA18566@crl.ucsd.edu> The following technical report is available. Hardcopies may be obtained by sending your name and postal address to crl at crl.ucsd.edu. A compressed postscript version can be retrieved through ftp (anonymous/ident) from crl.ucsd.edu (128.54.165.43) in the file pub/neuralnets/tr9101.Z. CRL Technical Report 9101 "Incremental learning, or The importance of starting small" Jeffrey L. Elman Center for Research in Language Departments of Cognitive Science and Linguistics University of California, San Diego elman at crl.ucsd.edu ABSTRACT Most work in learnability theory assumes that both the environment (the data to be learned) and the learning mechanism are static. In the case of children, however, this is an unrealistic assumption. First-language learning occurs, for example, at precisely that point in time when children undergo significant developmental changes. In this paper I describe the results of simulations in which network models are unable to learn a complex grammar when both the network and the input remain unchanging. How- ever, when either the input is presented incrementally, or- -more realistically--the network begins with limited memory that gradually increases, the network is able to learn the grammar. Seen in this light, the early limitations in a learner may play both a positive and critical role, and make it possible to master a body of knowledge which could not be learned in the mature system. From tom at asi.com Fri Mar 15 13:53:50 1991 From: tom at asi.com (Tom Baker) Date: Fri, 15 Mar 91 10:53:50 PST Subject: More limited precision responses Message-ID: <9103151853.AA23661@asi.com> Recently Jacob Murre sent a copy of the comments that he received from a request for references on limited precision implementations of neural network algorithms. Here are the results that I received from an earlier request. I have also collected a bibliography of papers on the subject. Because of the length of the bibliograpy, I will not post it here. I am sending a copy to all that responded to my initial inquiry. If you want a copy of the references send me a message, and I will send it to you. If you already sent a request for the references and do not receive them in a few days, then please send me another message. Thanks for all of the references and messages. Thomas Baker INTERNET: tom at asi.com Adaptive Solutions, Inc. UUCP: (uunet,ogicse)!adaptive!tom 1400 N.W. Compton Drive, Suite 340 Beaverton, Oregon 97006 ----------------------------------------------------------------------- Hello Thomas. Networks with Binary Weights (i.e. only +1 or -1) can be regarded as the extreme case of limited precision networks (they also utilize binary threshold units). The interest in such networks is two fold : theoretical and practical. Nevertheless, not many learning algorithms exist for these networks. Since this is one of my major interests, I have a list of references for binary weights algorithms, which I eclose. As far as I know, the only algorithm which train feed-forward networks with binary weights are based on the CHIR algorithm (Grossman 1989, Saad and Marom 1990, Nabutovskyet al 1990). The CHIR algorithm is an alternative to BP that was developed by our group, and is capable of training feed forward networks of binary (i.e hard threshold) units. 
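The training regime in Elman's report announced above ("starting small") amounts to a staged curriculum: either the data or the network's effective memory is restricted at first and then relaxed. The skeleton below only shows that staging logic; net, train and make_corpus are hypothetical placeholders, not Elman's actual simulator or grammar.

# Schematic of an incremental ("starting small") training regime.
def incremental_training(net, train, make_corpus, stages):
    for i, limits in enumerate(stages):
        corpus = make_corpus(**limits)       # e.g. restrict embedding depth
        net = train(net, corpus)             # ordinary training within a stage
        print("finished stage", i, "with limits", limits)
    return net

# illustrative call with trivial stand-ins for the trainer and the grammar
incremental_training(
    net={},
    train=lambda net, corpus: net,
    make_corpus=lambda max_depth: ["sentences with embedding depth <= %d" % max_depth],
    stages=({"max_depth": 0}, {"max_depth": 1}, {"max_depth": 3}))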
The rest of the papers in the enclosed list deal with algorithms for the single binary weights percptron (a learning rule which can also be used for fully connected networks of such units). If you are interested, I can send you copies of our papers (Grossman, Nabutovsky et al). I would also be very intersted in what you find. Best Regards Tal Grossman Electronics Dept. Weizmann Inst. of Science Rehovot 76100, ISRAEL. ----------------------------------------------------------------------- In article <894 at adaptive.UUCP> you write: > >We use 16 bit weights, and 8 bit inputs and outputs. We have found >that this representation does as well as floating point for most of >the data sets that we have tried. I have also seen several other >papers where 16 bit weights were used successfully. > >I am also trying to collect a bibliography on limited precision. I >would like to see the references that you get. I do not have all of >the references that I have in a form that can be sent out. I will >post them soon. I would like to keep in touch with the people that >are doing research in this area. > >Thomas Baker INTERNET: tom at asi.com >Adaptive Solutions, Inc. UUCP: (uunet,ogicse)!adaptive!tom >1400 N.W. Compton Drive, Suite 340 >Beaverton, Oregon 97006 My research topic is VLSI implementation of neural networks, and hence I have done some study on precision requirement. My observations agree with that of yours more or less (I have also read your paper titled "Characterization of Neural Networks"). I found that the precision for weights is 11-12 bits for the decimal place. For outputs, 4-5 bits were sufficient, but for backpropagated error, the precision requirement was 7-8 bits (the values of these two don't exceed 1). As an aside, you say in your paper (cited above), that you can train the network by accumulating weights as well as backpropagated error. While I was successful with weights, I was never able to train the network by accumulating the error. Could you give some more explanation on this matter? Accumulating error will be very helpful from the point of view of VLSI implementation, since one need not wait for the error to be backpropagated at every epoch, and hence the throughput can be increased. Thanks in advance, Arun (e-mail address: arun at vlsi.waterloo.edu) [Ed. Oops! He's right, accumulating the error without propagating it doesn't work. TB ] ----------------------------------------------------------------------- I have done some precision effects simulations. I was concerned with an analog/digital hybrid architecture which drove me to examine three areas of precision constraints: 1) calculation precision--the same as weight storage precision and essentially the precision necessary in the backprop calculations, 2) feedforward weight precision--the precision necessary in calculating an activation level, 3) output precision--the precision necessary in both the feedforward calculations and in calculating weight changes. My results were not much better than you mentioned--13 bits were required for weight storage/delta-w calculations. I have a feeling that you wanted to see something much more optimistic. I should say that I was examining problems more related to signal processing and was very concerned with obtaining a low RMS error. Another study which looked more at classification problems and was concerned with correct classification and not necessarily RMS error got more optimistic results--I believe 8-10 bit calculations (but don't quote me). 
Those results originally appeared in a masters thesis by Sheldon Gilbert at MIT. His thesis I believe is available as a Lincoln Lab tech report #810. The date of that TR is 11/18/88. The results of my study appears in the fall '90 issue of "Neural Computation". As I work on a chip design I continually confront this effect and would be happy to discuss it further if you care to. Best Regards, Paul Hollis Dept. of ECE North Carolina State Univ. pwh at ecebowie.ncsu.edu (919) 737-7452 ----------------------------------------------------------------------- Hi! Yun-shu Peter Chiou told me you have done something on finite word length BP. Right now, I'm also working on a subject related to that area for my master's thesis. Could you give me a copy of your master's thesis? Of course, I'm gonna pay for it. Your reply will be greately appreciated. Jennifer Chen-ping Feng ----------------------------------------------------------------------- My group has interest in the quantisation effects from both theoretical and practical point of view. The theoretical aspects are very challenging. I would appreciate if you are aware of papers in this areas. Regards, Mrawn Jabri ----------------------------------------------------------------------- Dear connectionist researchers, We are in the process of designing a new neurocomputer. An important design consideration is precision: Should we use 1-bit, 4-bit, 8-bit, etc. representations for weights, activations, and other parameters? We are scaling-up our present neurocomputer, the BSP400 (Brain Style Processor with 400 processors), which uses 8-bit internal representations for activations and weights, but activations are exchanged as single bits (using partial time-coding induced by floating thresholds). This scheme does not scale well. Though we have tracked down scattered remarks in the literature on precision, we have not been able to find many systematic studies on this subject. Does anyone know of systematic simulations or analytical results of the effect of implementation precision on the performance of a neural network? In particular we are interested in the question of how (and to what extent) limited precision (i.e., 8-bits) implementations deviate from, say, 8-byte, double precision implementations. The only systematic studies we have been able to find so far deal with fault tolerance, which is only of indirect relevance to our problem: Brause, R. (1988). Pattern recognition and fault tolerance in non-linear neural networks. Biological Cybernetics, 58, 129-139. Jou, J., & J.A. Abraham (1986). Fault-tolerant matrix arithmetic and signal processing on highly concurrent computing structures. Proceedings of the IEEE, 74, 732-741. Moore, W.R. (1988). Conventional fault-tolerance and neural computers. In: R. Eckmiller, & C. Von der Malsburg (Eds.). Neural Computers. NATO ASI Series, F41, (Berling: Springer-Verlag), 29-37. Nijhuis, J., & L. Spaanenburg (1989). Fault tolerance of neural associative memories. IEE Proceedings, 136, 389-394. Thanks! Jacob M.J. Murre Unit of Experimental and Theoretical Psychology Leiden University P.O. Box 9555 2300 RB Leiden The Netherlands E-Mail: MURRE at HLERUL55.Bitnet ----------------------------------------------------------------------- We are in the process of finishing up a paper which gives a theoretical (systematic) derivation of the finite precision neural network computation. The idea is a nonlinear extension of "general compound operators" widely used for error analysis of linear computation. 
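To put rough numbers on the word lengths discussed in this thread (16-bit weights and 8-bit activations in Tom Baker's simulator, 11-12 weight bits in Arun's experiments), here is a minimal sketch of how a floating-point simulator can mimic fixed-point storage: each stored value is rounded to a fixed number of fractional bits and saturated to the word width. The layer sizes, bit allocations and input range below are illustrative assumptions only, not any group's actual configuration.

# Sketch: simulate fixed-point weight storage inside a floating-point program.
import numpy as np

def to_fixed(x, frac_bits, word_bits):
    # round to 2**-frac_bits resolution and saturate to a signed word_bits range
    scale = 2.0 ** frac_bits
    top = 2.0 ** (word_bits - 1) - 1
    return np.clip(np.round(x * scale), -top - 1, top) / scale

rng = np.random.default_rng(3)
W = rng.normal(0.0, 0.1, (8, 4))          # "true" floating-point weights
x = rng.uniform(-1.0, 1.0, (1, 8))        # one input pattern

W8  = to_fixed(W, frac_bits=6,  word_bits=8)     # 8-bit weight word
W16 = to_fixed(W, frac_bits=12, word_bits=16)    # 16-bit weight word
ref = np.tanh(x @ W)
print(" 8-bit weights, max output error:", np.max(np.abs(np.tanh(x @ W8)  - ref)))
print("16-bit weights, max output error:", np.max(np.abs(np.tanh(x @ W16) - ref)))

One common way to simulate limited-precision learning, rather than just a limited-precision forward pass, is to apply the same rounding after every weight update as well.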
We derive several mathematical formula for both retrieving and learning of neural networks. The finite precision error in the retrieving phase can be written as a function of several parameters, e.g., number of bits of weights, number of bits for multiplication and accumlation, size of nonlinear table-look-up, truncation/rounding or jamming approaches, and etc. Then we are able to extend this retrieving phase error analysis to iterative learning to predict the necessary number of bits. This can be shown using a ratio between the finite precision error and the (floating point) back-propagated error. Simulations have been conducted and matched the theoretical prediction quite well. Hopefully, we can have a final version of this paper available to you soon. Jordan L. Holt and Jenq-Neng Hwang ,"Finite Precision Error Analysis of Neural Network Hardware Implementation," University of Washington, FT-10, Seattle, WA 98195 Best Regards, Jenq-Neng ----------------------------------------------------------------------- Dear Baker, I saw the following article on the internet news: ----- Yun-Shu Peter Chiou (yunshu at eng.umd.edu) writes: > Does anyone out there have any references or have done any works > on the effects of finite word length arithmetic on Back-Propagation. I have done a lot of work with BP using limited precision calculations. My masters thesis was on the subject, and last summer Jordan Holt worked with us to run a lot of benchmark data on our limited precision simulator. We are submitting a paper on Jordan's results to IJCNN '91 in Seattle. We use 16 bit weights, and 8 bit inputs and outputs. We have found that this representation does as well as floating point for most of the data sets that we have tried. I have also seen several other papers where 16 bit weights were used successfully. I am also trying to collect a bibliography on limited precision. I would like to see the references that you get. I do not have all of the references that I have in a form that can be sent out. I will post them soon. I would like to keep in touch with the people that are doing research in this area. ----- Could I have a copy of your article sent to me? As for the moment I am writing a survey of the usage of SIMD computers for the simulation of artificial neural networks. As many SIMD computers are bitserial I think the precision is an important aspect of the algorithms. I have found some articles that discusses low precision neural networks and I included refereces to them last in my letter. If you have a compilation of other references that you recoment could you please send the list to me? advTHANKSance Tomas Nordstrom --- Tomas Nordstrom Tel: +46 920 91061 Dept. of Computer Engineering Fax: +46 920 98894 Lulea university of Technology Telex: 80447 LUHS S-95187 Lulea, SWEDEN Email: tono at sm.luth.se (internet) --- ----------------------------------------------------------------------- I a recent post on the connectionists mailing list you were quoted as follows... " ... We have found that for backprop learning, between twelve and sixteen bits are needed. ...One method that optical and analog engineers use is to calcualte the error by running feed forward calculations with limited precision, and learnign weights with heigher precision..." I am currently doing recesearch for optical implemetions for associative memories. The method that I am reseraching iteratively calculates an memory matrix that is fairly robust. However when I quantize the process during learning, the entire method fails. 
I was wondering if you knew of some one who has had similar problems in quantization during training. Thank you, Karen haines ----------------------------------------------------------------------- Hi Mr Baker I am working on a project wherein we are attempting to study the implications of using limited precision while implementing backpropagation. I read a message from Jacob Murre that said that you were maitaining a distribution list of persons interested in this field. Would you kindly add me to that list. My email address is ljd at mrlsun.sarnoff.com Thanks Leslie Dias From rba at vintage.bellcore.com Sun Mar 17 07:41:07 1991 From: rba at vintage.bellcore.com (Bob Allen) Date: Sun, 17 Mar 91 07:41:07 -0500 Subject: a 'connectionist pedagogy' - structured training of grammars Message-ID: <9103171241.AA03081@vintage.bellcore.com> The result that it is beneficial to teach a simple recurrent network a difficult grammar by staring with a short form is already well known (but unfortunately not cited in Jeff Elman's recently announced TR). My research was described to Jay McClelland's research group in November 1988, it was presented at the ACM Computer Science Conference (Adaptive Training of Connectionist State Machines, Louisville, Feb. 1989, page 428), it has been reported to this interest group several times, and it is summarized in: Allen, R.B., Connectionist Langauge Users, Connection Science, vol. 2, 279-311, 1990. I would be happy to distribute a reprint of this paper to anyone who requests one. Robert B. Allen 2A-367 Bellcore Morristown, NJ 07962-1910 From soller at cs.utah.edu Tue Mar 19 00:55:06 1991 From: soller at cs.utah.edu (Jerome Soller) Date: Mon, 18 Mar 91 22:55:06 -0700 Subject: Utah's First Annual Cognitive Science Lecture Message-ID: <9103190555.AA07733@cs.utah.edu> The speaker at the First Annual Utah Cognitive Science Lecture is Dr. Andreas Andreou of the Johns Hopkins University Electrical Engineering Department. His topic is "A Physical Model of the Retina in Analog VLSI That Explains Optical Illusions". This provides a contrast to Dr. Carver Mead of Caltech, who spoke earlier this year in Utah at the Computer Science Department's Annual Organick Lecture. The time and date of the First Annual Cognitive Science Lecture will be Tuesday, April 2nd, 4:00 P.M. The room will be 101 EMCB(next to the Merrill Engineering Building), University of Utah, Salt Lake City, Utah. A small reception(refreshments) will be available. This event is cosponsored by the Sigma Xi Resarch Fraternity. Dr. Dick Normann, Dr. Ken Horch, Dr. Dick Burgess, and Dr. Phil Hammond were extremely helpful in organizing this event. For more information on this event and other Cognitive Science related events in the state of Utah, contact me (801)-581-4710 or by e-mail(preferred) (soller at cs.utah.edu) . We have an 130 person electronic mailing list within the state of Utah announcing these kind of events. We are also finishing up this year's edition of the Utah Cognitive Science Information Guide, which contains 80 faculty, 60 graduate students, 60 industry representatives, 32 courses, and 25 research groups from the U. of Utah, BYU, Utah State and local industry. A rough draft can be copied by anonymous ftp from /usr/spool/ftp/pub/guide.doc from the cs.utah.edu machine. A final draft in plain text and a Macintosh version(better format) will be on the ftp site in about 2 weeks. Sincerely, Jerome B. Soller Ph. D. 
Student Department of Computer Science University of Utah From well!moritz at apple.com Mon Mar 18 21:41:41 1991 From: well!moritz at apple.com (Elan Moritz) Date: Mon, 18 Mar 91 18:41:41 pst Subject: abstracts - J. of Ideas, Vol 2 #1 Message-ID: <9103190241.AA22971@well.sf.ca.us> +=++=++=++=++=++=++=++=++=++=++=++=++=++=++=+ please post & circulate Announcement ......... Abstracts of papers appearing in Volume 2 # 1 of the Journal of Ideas THOUGHT CONTAGION AS ABSTRACT EVOLUTION Aaron Lynch Abstract: Memory abstractions, or mnemons, form the basis of a memetic evolution theory where generalized self-replicating ideas give rise to thought contagion. A framework is presented for describing mnemon propagation, combination, and competition. It is observed that the transition from individual level considerations to population level considerations can act to cancel individual variations and may result in population behaviors. Equations for population memetics are presented for the case of two-idea interactions. It is argued that creativity via innovation of ideas is a population phenomena. Keywords: mnemon, meme, evolution, replication, idea, psychology, equation. ................... CULTURE AS A SEMANTIC FRACTAL: Sociobiology and Thick Description Charles J. Lumsden Department of Medicine, University of Toronto Toronto, Ontario, Canada M5S 1A8 Abstract: This report considers the problem of modeling culture as a thick symbolic system: a system of reference and association possessing multiple levels of meaning and interpretation. I suggest that thickness, in the sense intended by symbolic anthropologists like Geertz, can be treated mathematically by bringing together two lines of formal development, that of semantic networks, and that of fractal mathematics. The resulting semantic fractals offer many advantages for modeling human culture. The properties of semantic fractals as a class are described, and their role within sociobiology and symbolic anthropology considered. Provisional empirical evidence for the hypothesis of a semantic fractal organization for culture is discussed, together with the prospects for further testing of the fractal hypothesis. Keywords: culture, culturgen, meme, fractal, semantic network. ................... MODELING THE DISTRIBUTION OF A "MEME" IN A SIMPLE AGE DISTRIBUTION POPULATION: I. A KINETICS APPROACH AND SOME ALTERNATIVE MODELS Matthew Witten Center for High Performance Computing University of Texas System, Austin, TX 78758-4497 Abstract. Although there is a growing historical body of literature relating to the mathematical modeling of social and historical processes, little effort has been placed upon modeling the spread of an idea element "meme" in such a population. In this paper we review some of the literature and we then consider a simple kinetics approach, drawn from demography, to model the distribution of a hypothetical "meme" in a population consisting of three major age groups. KEYWORDS: Meme, idea, age-structure, compartment, sociobiology, kinetics model. ................... THE PRINCIPIA CYBERNETICA PROJECT Francis Heylighen, Cliff Joslyn, and Valentin Turchin The Principia Cybernetica Project[dagger] Abstract: This note describes an effort underway by a group of researchers to build a complete and consistent system of philosophy. The system will address, issues of general philosophical concern, including epistemology, metaphysics, and ethics, or the supreme human values. 
The aim of the project is to move towards conceptual unification of the relatively fragmented fields of Systems and Cybernetics through consensually-based philosophical development. Keywords: cybernetics, culture, evolution, system transition, networks, hypermedia, ethics, epistemology.

...................

Brain and Mind: The Ultimate Grand Challenge Elan Moritz The Institute for Memetic Research P. O. Box 16327, Panama City, Florida 32406

Abstract: Questions about the nature of brain and mind are raised. It is argued that the fundamental understanding of the functions and operation of the brain and its relationship to mind must be regarded as the Ultimate Grand Challenge problem of science. National research initiatives such as the Decade of the Brain are discussed. Keywords: brain, mind, awareness, consciousness, computers, artificial intelligence, meme, evolution, mental health, virtual reality, cyberspace, supercomputers.

+=++=++=++=++=++=++=++=++=++=++=++=++=++=++=+

The Journal of Ideas is an archival forum for discussion of 1) evolution and spread of ideas, 2) the creative process, and 3) biological and electronic implementations of idea/knowledge generation and processing. The Journal of Ideas, ISSN 1049-6335, is published quarterly by the Institute for Memetic Research, Inc., P. O. Box 16327, Panama City, Florida 32406-1327. >----------- FOR MORE INFORMATION -------> E-mail requests to Elan Moritz, Editor, at moritz at well.sf.ca.us.

+=++=++=++=++=++=++=++=++=++=++=++=++=++=++=+

From Connectionists-Request at CS.CMU.EDU Tue Mar 19 09:39:25 1991 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Tue, 19 Mar 91 09:39:25 EST Subject: Fwd: Message-ID: <12415.669393565@B.GP.CS.CMU.EDU>

------- Forwarded Message

From thrun at gmdzi.uucp Mon Mar 18 23:39:54 1991 From: thrun at gmdzi.uucp (Sebastian Thrun) Date: Tue, 19 Mar 91 03:39:54 -0100 Subject: No subject Message-ID: <9103190239.AA01072@gmdzi.gmd.de>

Well, there is a new TR available on the neuroprose archive which is more or less an extended version of the NIPS paper I announced some weeks ago:

ON PLANNING AND EXPLORATION IN NON-DISCRETE WORLDS Sebastian Thrun (German National Research Center for Computer Science, St. Augustin, FRG) and Knut Moeller (Bonn University, Bonn, FRG)

The application of reinforcement learning to control problems has received considerable attention in the last few years [Anderson86, Barto89, Sutton84]. In general there are two principles for solving reinforcement learning problems: direct and indirect techniques, both having their advantages and disadvantages. We present a system that combines both methods. By interaction with an unknown environment, a world model is progressively constructed using the backpropagation algorithm. For optimizing actions with respect to future reinforcement, planning is applied in two steps: an experience network proposes a plan, which is subsequently optimized by gradient descent with a chain of model networks. While operating in a goal-oriented manner due to the planning process, the experience network is trained. Its accumulating experience is fed back into the planning process in the form of initial plans, so that planning can be gradually reduced. In order to ensure complete system identification, a competence network is trained to predict the accuracy of the model. This network enables purposeful exploration of the world. The appropriateness of this approach to reinforcement learning is demonstrated by three different control experiments, namely a target tracking, a robotics and a pole balancing task.

Keywords: backpropagation, connectionist networks, control, exploration, planning, pole balancing, reinforcement learning, robotics, neural networks, and, and, and...
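[To make the planning step in the abstract above concrete, the following is a minimal sketch of optimizing a fixed-length plan by gradient descent through repeated applications of a world model. Here the model is a hand-coded linear map standing in for the backprop-trained model network, and the horizon, goal and step sizes are invented for the example; it illustrates the general idea only, not the Thrun/Moeller system itself.]

import numpy as np

# Stand-in world model: x_{t+1} = A * x_t + B * a_t.
# In the TR this role is played by a learned model network.
A, B = 0.9, 0.5
goal, horizon, lam = 1.0, 10, 0.01    # target state, plan length, action penalty

def rollout(x0, actions):
    # Chain the model forward through the whole plan.
    x = x0
    for a in actions:
        x = A * x + B * a
    return x

actions = np.zeros(horizon)           # the plan to be optimized
x0 = 0.0
for step in range(200):
    xT = rollout(x0, actions)
    # For this linear model, d xT / d a_t = A**(horizon-1-t) * B,
    # i.e. the gradient is passed back through the chain of model copies.
    grad_xT = np.array([A ** (horizon - 1 - t) * B for t in range(horizon)])
    grad = 2.0 * (xT - goal) * grad_xT + 2.0 * lam * actions
    actions -= 0.1 * grad

print("final state reached by the optimized plan:", rollout(x0, actions))

[In the full system the corresponding gradient is obtained by backpropagation through a chain of model networks, and the experience network is trained to propose the initial plan so that this inner optimization can be shortened as experience accumulates.]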
- -------------------------------------------------------------------------

The TR can be retrieved by ftp:

unix> ftp cheops.cis.ohio-state.edu Name: anonymous Guest Login ok, send ident as password Password: neuron ftp> binary ftp> cd pub ftp> cd neuroprose ftp> get thrun.plan-explor.ps.Z ftp> bye unix> uncompress thrun.plan-explor.ps unix> lpr thrun.plan-explor.ps

- -------------------------------------------------------------------------

If you have trouble in ftping the files, do not hesitate to contact me. --- Sebastian Thrun (st at gmdzi.uucp, st at gmdzi.gmd.de)

------- End of Forwarded Message

From bradley at ivy.Princeton.EDU Tue Mar 19 16:46:29 1991 From: bradley at ivy.Princeton.EDU (Bradley Dickinson) Date: Tue, 19 Mar 91 16:46:29 EST Subject: Award Nominations Due 3/31/91 Message-ID: <9103192146.AA02632@ivy.Princeton.EDU>

Nominations Sought for IEEE Neural Networks Council Award

The IEEE Neural Networks Council is soliciting nominations for an award, to be presented for the first time at the July 1991 International Joint Conference on Neural Networks.

IEEE Transactions on Neural Networks Outstanding Paper Award

This is an award of $500 for the outstanding paper published in the IEEE Transactions on Neural Networks in the previous two-year period. For 1991, all papers published in 1990 (Volume 1) in the IEEE Transactions on Neural Networks are eligible. For a paper with multiple authors, the award will be shared by the coauthors. Nominations must include a written statement describing the outstanding characteristics of the paper. The deadline for receipt of nominations is March 31, 1991. Nominations should be sent to Prof. Bradley W. Dickinson, NNC Awards Chair, Dept. of Electrical Engineering, Princeton University, Princeton, NJ 08544-5263. Questions about the award may be directed to Prof. Bradley W. Dickinson:
A network of this type can learn a stochastic process, where ``learning'' means maximizing the probability Likelihood function of the model. A new learning (i.e. Likelihood maximization) algorithm is introduced, the Local Backward-Forward Algorithm. The new algorithm is based on the Baum Backward-Forward Algorithm (for Hidden Markov Models) and improves speed of learning substantially. Essentially, the local Backward-Forward Algorithm is a version of Baum's algorithm which estimates local transition probabilities rather than the global transition probability matrix. Using the local BF algorithm, we train SRN's that solve the 8-3-8 encoder problem and the phoneme modelling problem. This is the paper kehagias.srn1.ps.Z, kehagias.srn1fig.ps.Z . The paper srn1 has undergone significant revision. It had too many typos, bad notation and also needed reorganization . All of these have been done. Thanks to N. Chater, S. Nowlan and A.T. Tsoi and M. Perrone for many useful suggestions along these lines. -------------------------------------------------------------------- Stochastic Recurrent Network training Prediction and Classification of Time Series Ath. Kehagias Brown University Div. of Applied Mathematics We use Stochastic Recurrent Networks of the type introduced in [Keh91a] as models of finite-alphabet time series. We develop the Maximum Likelihood Prediction Algorithm and the Maximum A Posteriori Classification Algorithm (which can both be implemented in recurrent PDP form). The prediction problem is: given the output up to the present time: Y^1,...,Y^t and the input up to the immediate future: U^1,...,U^t+1, predict with Maximum Likelihood the output Y^t+1 that the SRN will produce in the immediate future. The classification problem is: given the output up to the present time: Y^1,...,Y^t and the input up to the present time: U^1,...,U^t, as well as a number of candidate SRN's: M_1, M_2, .., M_K, find the network that has Maximum Posterior Probability of producing Y^1,...,Y^t. We apply our algorithms to prediction and classification of speech waveforms. This is the paper kehagias.srn2.ps.Z, kehagias.srn2fig.ps.Z . ----------------------------------------------------------------------- To get these files, do the following: gvax> ftp cheops.cis.ohio-state.edu 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron ftp> Guest login ok, access restrictions apply. ftp> cd pub/neuroprose ftp> binary 200 Type set to I. ftp> get kehagias.srn1.ps.Z ftp> get kehagias.srn1fig.ps.Z ftp> get kehagias.srn2.ps.Z ftp> get kehagias.srn2fig.ps.Z ftp> quit gvax> uncompress kehagias.srn1.ps.Z gvax> uncompress kehagias.srn1fig.ps.Z gvax> uncompress kehagias.srn2.ps.Z gvax> uncompress kehagias.srn2fig.ps.Z gvax> lqp kehagias.srn1.ps gvax> lqp kehagias.srn1fig.ps gvax> lqp kehagias.srn2.ps gvax> lqp kehagias.srn2fig.ps POSTSCRIPT: All of the people that sent a request (about a month ago) for srn1 in its original form are in my mailing list and most got copies of new versions of srn1,srn2 in email. Some of these files did not make it through internet, because of size restrictions etc. so you may want to fpt them now. Incidentally, if you want to be removed from the mailing list (for when the next paper in the series comes by) send me mail. 
Thanasis Kehagias From SAYEGH at CVAX.IPFW.INDIANA.EDU Wed Mar 20 21:44:52 1991 From: SAYEGH at CVAX.IPFW.INDIANA.EDU (SAYEGH@CVAX.IPFW.INDIANA.EDU) Date: Wed, 20 Mar 1991 21:44:52 EST Subject: 4th NN-PDP Conference, Indiana-Purdue Message-ID: <910320214452.20201ede@CVAX.IPFW.INDIANA.EDU> FOURTH CONFERENCE ON NEURAL NETWORKS ------------------------------------ AND PARALLEL DISTRIBUTED PROCESSING ----------------------------------- INDIANA UNIVERSITY-PURDUE UNIVERSITY ------------------------------------ 11,12,13 APRIL 1991 ------------------- The Fourth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University will be held on the Fort Wayne Campus, April 11,12, 13, 1991. Conference registration is $20 (on site) and students attend free. Some limited financial support might also be available to allow students to attend. Inquiries should be addressed to: email: sayegh at ipfwcvax.bitnet ----- US mail: ------- Prof. Samir Sayegh Physics Department Indiana University-Purdue University Fort Wayne, IN 46805 FAX: (219) 481-6880 Voice: (219) 481-6306 OR 481-6157 Talks will be held: ------------------ Thursday April 11, 6pm - 9pm -- Classroom Medical 159 Friday Morning (Tutorial Session) -- Kettler G46 Friday Afternoon and Evening -- Classroom Medical 159 Saturday Morning -- Kettler G46 Free Parking is made available on the TOP floor of the parking garage. Special Hotel Rates (Purdue Corporate rates) are available at Hall's Guest House, which is a 10 mn drive from Campus. The number is (219) 489-2521. The following talks will be presented: ------------------------------------- network analysis: ---------------- P.G. Madhavan, B. Xu, B. Stephens, Purdue University, Indianapolis On the Convergence Speed & the Generalization Ability of Tri-state Neural Networks Mohammad R. Sayeh, Southern Illinois University at Carbondale Dynamical-System Approach to Unsupervised Classifier Samir I. Sayegh, Purdue University, Ft Wayne Symbolic Manipulation and Neural Networks Zhenni Wang, Ming T. Tham & A.J. Morris, University of Newcastle upon Tyne Multilayer Neural Networks: Approximated Canonical Decomposition of Nonlinearity M.G. Royer & O.K. Ersoy, Purdue University, W. Lafayette Classification Performance of Pshnn with BackPropagation Stages Sean Carroll, Tri-State University Single-Hidden-Layer Neural Nets Can Approximate B-Splines M. D. Tom & M.F. Tenorio, Purdue University, W. Lafayette A Neuron Architecture with Short Term Memory applications: ------------ G. Allen Pugh, Purdue University, Fort Wayne Further Design Considerations for Back Propagation I. H. Shin and K. J. Cios, The University of Toledo A Neural Network Paradigm and Architecture for Image Pattern Recognition R. E. Tjia, K. J. Cios and B. N. Shabestari, The University of Toledo Neural Network in Identification of Car Wheels from Gray Level Images S. Sayegh, C. Pomalaza-Raez, B. Beer and E. Tepper, Purdue University, Ft Wayne Pitch and Timbre Recognition Using Neural Network Jacek Witaszek & Colleen Brown, DePaul University Automatic Construction of Connectionist Expert Systems Robert Zerwekh, Northern Illinois University Modeling Learner Performance: Classifying Competence Levels Using Adaptive Resonance Theory biological aspects: ------------------ R. Manalis, Indiana Purdue University at Fort Wayne Short Term Memory Implicated in Twitch Facilitation Edgar Erwin, K. Obermayer, University of Illinois Formation and Variability of Somatotopic Maps with Topological Mismatch T. Alvager, B. 
Humpert, P. Lu, C. Roberts, Indiana State University DNA Sequence Analysis with a Neural Network Christel Kemke, DFKI, Germany Towards a Synthesis of Neural Network Behavior optimization and genetic algorithms: ----------------------------------- Robert L. Sedlmeyer, Indiana-Purdue, Fort Wayne A Genetic Algorithm to Estimate the Edge-Intergrity of Halin Graphs J.L. Noyes, Wittenberg University Neural Network Optimization Methods Omer Tunali & Ugur Halici, University of Missouri/Rolla A Boltzman Machine for Hypercube Embedding Problem William G. Frederick & Curt M. White, Indiana-Purdue University Genetic Algorithms and a Variation on the Steiner Point Problem Arun Jagota, State University of NY at Buffalo A Forgetting Rule and Other Extensions to the Hopfield-Style Network Storage Rule and Their Applications tutorial lectures: ----------------- Marc Clare, Lincoln National, Ft Wayne An Introduction to the Methodology of Building Neural Networks Ingrid Russell, University of Hartford Integrating Neural Networks into an AI Course Arun Jagota, State University of NY at Buffalo The Hopfield Model and Associative Memory Ingrid Russell, University of Hartford Self Organization and Adaptive Resonance Theory Models From tds at ai.mit.edu Thu Mar 21 02:32:37 1991 From: tds at ai.mit.edu (Terence D. Sanger) Date: Thu, 21 Mar 91 02:32:37 EST Subject: LMS-tree source code available Message-ID: <9103210732.AA01258@cauda-equina> Source code for a sample implementation of the LMS-tree algorithm is now available by anonymous ftp. The code includes a bare-bones implementation of the algorithm incorporated into a demo program which predicts future values of the mackey-glass differential delay equation. The demo will run under X11R3 or higher, and has been tested on sun-3 and sun-4 machines. Since this is a deliberately simple implementation not including tree pruning or other optimizations, many improvements are possible. I encourage any and all suggestions, comments, or questions. Terry Sanger (tds at ai.mit.edu) To obtain and execute the code: > mkdir lms-trees > cd lms-trees > ftp ftp.ai.mit.edu login: anonymous password: ftp> cd pub ftp> binary ftp> get sanger.mackey.tar.Z ftp> quit > uncompress sanger.mackey.tar.Z > tar xvf sanger.mackey.tar > make mackey > mackey Some documentation is included in the header of the file mackey.c . A description of the algorithm can be found in the paper I recently announced on this network: Basis-Function Trees as a Generalization of Local Variable Selection Methods for Function Approximation Abstract Local variable selection has proven to be a powerful technique for approximating functions in high-dimensional spaces. It is used in several statistical methods, including CART, ID3, C4, MARS, and others . In this paper I present a tree-structured network which is a generalization of these techniques. The network provides a framework for understanding the behavior of such algorithms and for modifying them to suit particular applications. To obtain the paper: >ftp cheops.cis.ohio-state.edu login: anonymous password: ftp> cd pub/neuroprose ftp> binary ftp> get sanger.trees.ps.Z ftp> quit >uncompress sanger.trees.ps.Z >lpr -h -s sanger.trees.ps Good Luck! 
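[For readers unfamiliar with the benchmark used in the LMS-tree demo announced above, the Mackey-Glass differential delay equation is dx/dt = beta * x(t - tau) / (1 + x(t - tau)^n) - gamma * x(t). Below is a minimal sketch of generating such a series with a crude Euler step; the parameter values are the ones commonly used in prediction benchmarks and are not necessarily those used in the demo.]

import numpy as np

# Mackey-Glass delay-differential equation, integrated with a simple Euler step.
beta, gamma, n, tau = 0.2, 0.1, 10, 17   # commonly used benchmark settings
dt, steps = 1.0, 2000

delay = int(tau / dt)
x = np.full(steps + delay, 1.2)          # constant initial history
for t in range(delay, steps + delay - 1):
    x_tau = x[t - delay]
    x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])

series = x[delay:]
print(series[:10])

[A predictor such as the demo network is then trained to map a window of past values of this series onto a value some steps into the future.]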
From PetersonDM at computer-science.birmingham.ac.uk Thu Mar 21 10:18:25 1991 From: PetersonDM at computer-science.birmingham.ac.uk (PetersonDM@computer-science.birmingham.ac.uk) Date: Thu, 21 Mar 91 15:18:25 GMT Subject: Cognitive Science at Birmingham Message-ID: <9103211518.aa03564@james.Cs.Bham.AC.UK> ============================================================================ University of Birmingham Graduate Studies in COGNITIVE SCIENCE ============================================================================ The Cognitive Science Research Centre at the University of Birmingham comprises staff from the Departments/Schools of Psychology, Computer Science, Philosophy and Linguistics, and supports teaching and research in the inter-disciplinary investigation of mind and cognition. The Centre offers both MSc and PhD programmes. MSc in Cognitive Science The MSc programme is a 12 month conversion course, including a 4 month supervised project. The course places a particular stress on the relation between biological and computational architectures. Compulsory courses: AI Programming, Overview of Cognitive Science, Knowledge Representation Inference and Expert Systems, General Linguistics, Human Information Processing, Structures for Data and Knowledge, Philosophical Questions in Cognitive Science, Human-Computer Interaction, Biological and Computational Architectures, The Computer and the Mind, Current Issues in Cognitive Science. Option courses: Artificial and Natural Perceptual Systems, Speech and Natural Language, Parallel Distributed Processing. It is expected that students will have a good degree in psychology, computing, philosophy or linguistics. Funding is available through SERC and HTNT. PhD in Cognitive Science For 1991 there are 3 SERC studentships available for PhD level research into a range of topics including: o computational modelling of emotion o computational modelling of cognition o interface design o computational and psychophysical approaches to vision Computing Facilities Students have access to ample computing facilities, including networks of Apollo, Sun and Sparc workstations in the Schools of Computer Science and Psychology. Contact For further details, contact: Dr. Mike Harris CSRC, School of Psychology, University of Birmingham, PO Box 363, Edgbaston, Birmingham B15 2TT, UK. Phone: (021) 414 4913 Email: HARRIMWG at ibm3090.bham.ac.uk From HORN%TAUNIVM.BITNET at BITNET.CC.CMU.EDU Thu Mar 21 18:41:00 1991 From: HORN%TAUNIVM.BITNET at BITNET.CC.CMU.EDU (HORN%TAUNIVM.BITNET@BITNET.CC.CMU.EDU) Date: 21 Mar 91 18:41 IST Subject: BITNET mail follows Message-ID: <0540CD966200006E@BITNET.CC.CMU.EDU> From David Thu Mar 21 18:36:21 1991 From: David (+fax) Date: 21 March 91, 18:36:21 IST Subject: No subject Message-ID: The following preprint is available. Requests can be sent to HORN at TAUNIVM.BITNET : Segmentation, Binding and Illusory Conjunctions D. HORN, D. SAGI and M. USHER Abstract We investigate binding within the framework of a model of excitatory and inhibitory cell assemblies which form an oscillating neural network. Our model is composed of two such networks which are connected through their inhibitory neurons. The excitatory cell assemblies represent memory patterns. The latter have different meanings in the two networks, representing two different attributes of an object, such as shape and color. The networks segment an input which contains mixtures of such pairs into staggered oscillations of the relevant activities. 
Moreover, the phases of the oscillating activities representing the two attributes in each pair lock with each other to demonstrate binding. The system works very well for two inputs, but displays faulty correlations when the number of objects is larger than two. In other words, the network conjoins attributes of different objects, thus showing the phenomenon of "illusory conjunctions", like in human vision. From christie at ee.washington.edu Fri Mar 22 15:05:48 1991 From: christie at ee.washington.edu (Richard D. Christie) Date: Fri, 22 Mar 91 12:05:48 PST Subject: Conference Announcement - NN's & Power Systems (ANNPS 91) Message-ID: <9103222005.AA03710@wahoo.ee.washington.edu> *** Conference Announcement *** First International Forum on Applications of Neural Networks to Power Systems (ANNPS 91) *** July 23-26, 1991 *** Seattle, Washington (This is the hot and sunny time of the year in Seattle - Don't miss it!) Focus is on state of the art applications of Neural Network technology to complex power system problems. There will be papers, tutorials and panel discussions. A banquet cruise, one free lunch and a tour of the Boeing 747 plant in Everett are included in the conference fee. Conference fee: $125, $150 after July 1 Students: (Meetings and proceedings, no fun stuff) $25 For registration information, forms and questions, contact Jan Kvamme (kwa-mi) at email: jmk6112 at u.washington.edu FAX: (206) 543-2352 Phone: (206) 543-5539 snailmail: Engineering Continuing Education, GG-13 University of Washington 4725 30th Ave NE Seattle, WA 98105, USA Sponsored by: National Science Foundation University of Washington IEEE Power Engineering Society, Seattle Section Puget Sound Power & Light Company From hussien at circe.arc.nasa.gov Sat Mar 23 16:16:09 1991 From: hussien at circe.arc.nasa.gov (The Sixty Thousand Dollar Man) Date: Sat, 23 Mar 91 13:16:09 PST Subject: BITNET mail follows In-Reply-To: HORN%TAUNIVM.BITNET@BITNET.CC.CMU.EDU's message of 21 Mar 91 18:41 IST <0540CD966200006E@BITNET.CC.CMU.EDU> Message-ID: <9103232116.AA06759@circe.arc.nasa.gov.> I would like to get a copy of your article "Segmentation, Binding and Illusory Conjunctions", my mail address is Dr. Bassam Hussien mail stop 210-9 NASA Ames Research Center Moffett field, Ca 94035. e-mai hussien at circe.arc.nasa.gov Thanks. bassam From lazzaro at hobiecat.cs.caltech.edu Sun Mar 24 01:09:10 1991 From: lazzaro at hobiecat.cs.caltech.edu (John Lazzaro) Date: Sat, 23 Mar 91 22:09:10 pst Subject: No subject Message-ID: ******* PLEASE DO NOT DISTRIBUTE TO OTHER MAILING LISTS ******* Caltech VLSI CAD Tool Distribution ---------------------------------- We are offering to the "connectionists" mailing list hardware implementation community a pre-release version of the Caltech electronic CAD distribution. This distribution contains tools for schematic capture, netlist creation, and analog and digital simulation (log), IC mask layout, extraction, and DRC (wol), simple chip compilation (wolcomp), MOSIS fabrication request generation (mosis), netlist comparison (netcmp), data plotting (view) and postscript graphics editing (until). These tools were used exclusively for the design and test of all the integrated circuits described in Carver Mead's book "Analog VLSI and Neural Systems". Until was used as the primary tool for figure creation for the book. 
The distribution also contains an example of an analog VLSI chip that was designed and fabricated with these tools, and an example of an Actel field-programmable gate array design that was simulated and converted to Actel format with these tools. These tools are distributed under a license very similar to the GNU license; the minor changes protect Caltech from liability. To use these tools, you need: 1) A unix workstation that runs X11r3, X11r4, or Openwindows 2) A color screen 3) Gcc or other ANSI-standard compiler Right now only Sun Sparcstations are officially supported, although resourceful users have the tools running on Sun 3, HP Series 300, and Decstations. If don't have a Sparcstation or an HP 300, only take the package if you feel confident in your C/Unix abilities to do the porting required; someday soon we will integrate the changes back into the sources officially, although many "ifdef mips" are already in the code. If you are interested in some or all of these tools, 1) ftp to hobiecat.cs.caltech.edu on the Internet, 2) log in as anonymous and use your username as the password 3) cd ~ftp/pub/chipmunk 4) copy the file README, that contains more information. We are unable to help users who do not have Internet ftp access. Please do not post this announcement to a mailing list or Usenet group; we are hoping to catch a few more bugs through this pre-release before a general distribution begins. ******* PLEASE DO NOT DISTRIBUTE TO OTHER MAILING LISTS ******* From barryf at ee.su.OZ.AU Tue Mar 26 15:53:26 1991 From: barryf at ee.su.OZ.AU (Barry G. Flower, Sydney Univ. Elec. Eng., Tel: (+61-2) Date: Tue, 26 Mar 91 15:53:26 EST Subject: ACNN'92 Call For Papers Message-ID: <9103260553.AA10365@ee.su.oz.au> PRELIMINARY CALL FOR PAPERS Third Australian Conference On Neural Networks (ACNN'92) February 1992 The Australian National University, Canberra, Australia The third Australian conference on neural networks will be held in Canberra at the Australian National University, during the first week of February 1992. This conference is interdisciplinary, with emphasis on cross discipline communication between Neuroscientists, Engineers, Computer Scientists, Mathematicians and Psychologists concerned with understanding the integrative nature of the nervous system and its implementation in hardware/software. The categories for submissions include: 1 - Neuroscience: Integrative function of neural networks in vision, audition, motor, somatosensory and autonomic functions; Synaptic function; Cellular information processing; 2 - Theory: Learning; generalisation; complexity; scaling; stability; dynamics; 3 - Implementation: Hardware implementation of neural nets; Analogue and digital VLSI implementation; Optical implementations; 4 - Architectures and Learning Algorithms: New architectures and learning algorithms; hierarchy; modularity; learning pattern sequences; Information integration; 5 - Cognitive Science and AI: Computational models of cognition and perception; Reasoning; Concept formation; Language acquisition; Neural net implementation of expert systems; 6 - Applications: Application of neural nets to signal processing and analysis; Pattern recognition: Speech, machine vision; Motor control; Robotic; ACNN'92 will feature invited keynote speakers in the areas of neuroscience, learning, modelling and implementations. The program will include pre-conference tutorials, presentations and poster sessions. Proceedings will be printed and distributed to the attendees. 
There will be no parallel sessions. Submission Procedures: Original research contributions are solicited and will be internationally refereed. Authors must submit by August 30, 1991: 1- five copies of an up to four pages manuscript, 2- five copies of a single-page 100 words maximum abstract and 3- a covering letter indicating the submission title and the full names and addresses of all authors and to which author correspondence should be addressed. Authors need to indicate on the top of each copy of the manuscript and abstract pages their preference for an oral or poster presentation and specify one of the above six broad categories. Note that names or addresses of the authors should be omitted from the manuscript and the abstract and should be included only on the covering letter. Authors will be notified by November 1, 1991 whether their submissions are accepted or not, and are expected to prepare a revised manuscript (up to four pages) by December 13, 1991. Submissions should be mailed to: Mrs Agatha Shotam Secretariat ACNN'92 Sydney University Electrical Engineering NSW 2006 Australia Registration material may be obtained by writing to Mrs Agatha Shotam at the address above or by: Tel: (+61-2) 692 4214; Fax: (+61-2) 692 3847; Email: acnn92 at ee.su.oz.au. Deadline for Submissions is August 30, 1991 Please Post From LAUTRUP at nbivax.nbi.dk Wed Mar 27 07:47:00 1991 From: LAUTRUP at nbivax.nbi.dk (Benny Lautrup) Date: Wed, 27 Mar 1991 13:47 +0100 (NBI, Copenhagen) Subject: POSITIONS AVAILABLE IN NEURAL NETWORKS Message-ID: POSITIONS AVAILABLE IN NEURAL NETWORKS Recently, the Danish Research Councils funded the setting up of a Computational Neural Network Centre (CONNECT). There will be some positions as graduate students, postdocs, and more senior visiting scientists available in connection with the centre. Four of the junior (i.e. student and postdoc) positions will be funded directly from the centre grant and have been allotted to the main activity areas as described below. We are required to fill these very quickly to get the centre up and running according to the plans of the program under which it was funded, so the deadline for applying for them is very soon, APRIL 25. If there happen to be exceptionally qualified people in the relevant areas available right now, they should inform us immediately. We are also sending this letter because there may be other positions available in the future. These will generally be externally funded. Normally the procedure would be for us first to identify the good candidate and then to apply to research councils, foundations and/or international programs (e.g. NATO, EC, Nordic Council) for support. This requires some time, so if an applicant is interested in coming here from the fall of 1992, the procedure should be underway in the fall of 1991. The four areas for the present positions are: Biological sequence analysis Development of new theoretical tools and computational methods for analyzing the macromolecular structure and function of biological sequences. The focus will be on applying these tools and methods to specific problems in biology, including pre-mRNA splicing and similarity measures for DNA sequences to be used in constructing phylogenetic trees. The applicant is expected to have a thorough knowledge of experimental molecular biology, coupled with experience in mathematical methods for describing complex biological phenomena. 
This position will be at the Department of Structural Properties of Materials and the Institute for Physical Chemistry at the Technical University of Denmark. Analog VLSI for neural networks Development of VLSI circuits in analog CMOS for the implementation of neural networks and their learning algorithms. The focus will be on the interaction between network topology and the constraints imposed by VLSI technology. The applicant is expected to have a thorough knowledge of CMOS technology and analog electronics. Experience with the construction of large systems in VLSI, particularly combined analog-digital systems, is especially desirable. This position will be in the Electronics Institute at the Technical University of Denmark. Neural signal processing Theoretical analysis and implementation of new methods for optimizing architectures for neural networks, with applications in adaptive signal processing, as well as ``early vision''. The applicant is expected to have experience in mathematical modelling of complex systems using statistical or statistical mechanical methods. This position will be jointly in the Electronics Institute at the Technical University of Denmark and the Department of Optics and Fluid Dynamics, Risoe National Laboratory. Optical neural networks Theoretical and experimental investigation of optical neural networks. The applicant is expected to have a good knowledge of applied mathematics, statistics, and modern optics, particularly Fourier optics. This position will be in the Department of Optics and Fluid Dynamics, Risoe National Laboratory. In all cases, the applicant is expected to have some background in neural networks and experience in programming in high-level languages. An applicant should send his or her curriculum vitae and publication list to Benny Lautrup Niels Bohr Institute Blegdamsvej 17 DK-2100 Copenhagen Denmark Telephone: (45)3142-1616 Telefax: (45)3142-1016 E-mail: lautrup at nbivax.nbi.dk before April 25. He/she should also have two letters of reference sent separately by people familiar with his/her work by the same date. From terry at sdbio2.UCSD.EDU Wed Mar 27 13:31:32 1991 From: terry at sdbio2.UCSD.EDU (Terry Sejnowski) Date: Wed, 27 Mar 91 10:31:32 PST Subject: Retina Simulator Message-ID: <9103271831.AA00991@sdbio2.UCSD.EDU> I can make available for the cost of duplicating and postage ($20) a computer simulation of the retina, written in Fortran 77. The model has been accepted for publication in BIOLOGICAL CYBERNETICS and a preprint of the paper can be provided. For more information send an E-mail message to siminoff at ifado.uucp. Robert Siminoff From LAUTRUP at nbivax.nbi.dk Thu Mar 28 05:22:00 1991 From: LAUTRUP at nbivax.nbi.dk (Benny Lautrup) Date: Thu, 28 Mar 1991 11:22 +0100 (NBI, Copenhagen) Subject: POSITIONS IN NEURAL NETWORKS Message-ID: <77A7C88800C00D4B@nbivax.nbi.dk> POSITIONS AVAILABLE IN NEURAL NETWORKS Recently, the Danish Research Councils funded the setting up of a Computational Neural Network Centre (CONNECT). There will be some positions as graduate students, postdocs, and more senior visiting scientists available in connection with the centre. Four of the junior (i.e. student and postdoc) positions will be funded directly from the centre grant and have been allotted to the main activity areas as described below. We are required to fill these very quickly to get the centre up and running according to the plans of the program under which it was funded, so the deadline for applying for them is very soon, APRIL 25. 
If there happen to be exceptionally qualified people in the relevant areas available right now, they should inform us immediately. We are also sending this letter because there may be other positions available in the future. These will generally be externally funded. Normally the procedure would be for us first to identify the good candidate and then to apply to research councils, foundations and/or international programs (e.g. NATO, EC, Nordic Council) for support. This requires some time, so if an applicant is interested in coming here from the fall of 1992, the procedure should be underway in the fall of 1991. The four areas for the present positions are: Biological sequence analysis Development of new theoretical tools and computational methods for analyzing the macromolecular structure and function of biological sequences. The focus will be on applying these tools and methods to specific problems in biology, including pre-mRNA splicing and similarity measures for DNA sequences to be used in constructing phylogenetic trees. The applicant is expected to have a thorough knowledge of experimental molecular biology, coupled with experience in mathematical methods for describing complex biological phenomena. This position will be at the Department of Structural Properties of Materials and the Institute for Physical Chemistry at the Technical University of Denmark. Analog VLSI for neural networks Development of VLSI circuits in analog CMOS for the implementation of neural networks and their learning algorithms. The focus will be on the interaction between network topology and the constraints imposed by VLSI technology. The applicant is expected to have a thorough knowledge of CMOS technology and analog electronics. Experience with the construction of large systems in VLSI, particularly combined analog-digital systems, is especially desirable. This position will be in the Electronics Institute at the Technical University of Denmark. Neural signal processing Theoretical analysis and implementation of new methods for optimizing architectures for neural networks, with applications in adaptive signal processing, as well as ``early vision''. The applicant is expected to have experience in mathematical modelling of complex systems using statistical or statistical mechanical methods. This position will be jointly in the Electronics Institute at the Technical University of Denmark and the Department of Optics and Fluid Dynamics, Risoe National Laboratory. Optical neural networks Theoretical and experimental investigation of optical neural networks. The applicant is expected to have a good knowledge of applied mathematics, statistics, and modern optics, particularly Fourier optics. This position will be in the Department of Optics and Fluid Dynamics, Risoe National Laboratory. In all cases, the applicant is expected to have some background in neural networks and experience in programming in high-level languages. An applicant should send his or her curriculum vitae and publication list to Benny Lautrup Niels Bohr Institute Blegdamsvej 17 DK-2100 Copenhagen Denmark Telephone: (45)3142-1616 Telefax: (45)3142-1016 E-mail: lautrup at nbivax.nbi.dk before April 25. He/she should also have two letters of reference sent separately by people familiar with his/her work by the same date. From SCHNEIDER at vms.cis.pitt.edu Thu Mar 28 15:15:00 1991 From: SCHNEIDER at vms.cis.pitt.edu (SCHNEIDER@vms.cis.pitt.edu) Date: Thu, 28 Mar 91 16:15 EDT Subject: Impact of connectionism on Education- what has happened. 
Message-ID: <9B5F57412DBF4025AC@vms.cis.pitt.edu> Wanted examples applications of connectionism to education. I am writing an overview of connectionism for education. If you know any references or stories of the impact on education please send a note to: SCHNEIDER at PITTVMS on Bitnet or Walter Schneider, 3939 O'Hara St Pittsburgh PA 15260, USA. The topics of interest and potential impact of connectionism are: conceptualization of learning knowledge representation as it effects teaching sequencing and nature of practice computerized tutoring and in intelligent tutoring projects From goldberg at iris.usc.edu Thu Mar 28 14:12:01 1991 From: goldberg at iris.usc.edu (Kenneth Goldberg) Date: Thu, 28 Mar 91 11:12:01 PST Subject: Call for Papers Message-ID: Announcing: Workshop on Neural Networks in Robotics University of Southern California October 23-25, 1991 Sponsor: The Center for Neural Engineering at USC The goal of the workshop will be to stimulate discussion on the current status and potential advances in this field. The workshop will be concerned with (but not limited to) issues such as: Connectionist approaches to robot control Combined machine/connectionist learning Path planning and obstacle avoidance Inverse kinematics and dynamics Transfer of skills from humans to robots Intelligent robots in manufacturing Multiple interacting robot systems Neural network architectures for robot control Sensor fusion and interaction Task learning by robots Biological models for robot control The Organizing Committee includes Michael Arbib (USC), Jacob Barhen (JPL), Andrew Barto (Univ of Massachussetts), George Bekey (USC) and Ken Goldberg (USC). Submissions: 3 copies of extended abstracts of proposed presentations (2 to 4 pages in length) by May 15, 1990 to Prof. George Bekey, Chairman, Technical Program Committee, c/o Computer Science Department, University of Southern California, Los Angeles, California 90089-0782 From egnp46 at castle.edinburgh.ac.uk Fri Mar 29 14:50:51 1991 From: egnp46 at castle.edinburgh.ac.uk (D J Wallace) Date: Fri, 29 Mar 91 14:50:51 WET Subject: Research position available Message-ID: <9103291450.aa10801@castle.ed.ac.uk> POSTDOCTORAL POSITION IN NEURAL NETWORK MODELS AND APPLICATIONS PHYSICS DEPARTMENT, UNIVERSITY OF EDINBURGH Applications are invited for a postdoctoral reasearch position in the Physics Department, University of Edinburgh funded by a Science and Engineering Research Council grant to David Wallace and Alastair Bruce. The position is for two years, from October 1991. The group's interests span theoretical and computational studies of training algorithms, generalisation, dynamical behaviour and optimisation. Theoretical techniques utilise statistical mechanics and dynamical systems. Computational facilities include a range of systems in Edinburgh Parallel Computing Centre, including a 400-node 1.8Gbyte transputer system, a 64-node 1Gbyte Meiko i860 machine and AMT DAPs, as well as workstations and graphics facilities. There are strong links with researchers in other departments, including David Willshaw and Keith Stenning (Cognitive Science), Richard Rohwer (Speech Technology), Alan Murray (Electrical Engineering) and Michael Morgan and Richard Morris (Pharmacology), and we are in two European Community Twinnings. Industrial collaborations have included applications with British Gas, British Petroleum, British Telecom, National Westminster Bank and Shell. Applications supported by a cv and two letters of reference should be sent to D.J. 
Wallace Physics Department, University of Edinburgh, Kings Buildings, Edinburgh EH9 3JZ, UK Email: ADBruce at uk.ac.ed and DJWallace at uk.ac.ed Tel: 031 650 5250 or 5247 to arrive if possible by 30th April. Further particulars can be obtained from the same address.
From boubez at bass.rutgers.edu Mon Mar 4 11:52:42 1991 From: boubez at bass.rutgers.edu (boubez@bass.rutgers.edu) Date: Mon, 4 Mar 91 11:52:42 EST Subject: Technical report available In-Reply-To: stefano nolfi's message of Wed, 06 Feb 91 13:24:05 EDT <9102061541.AA22724@caip.rutgers.edu> Message-ID: <9103041652.AA06744@piano>

Hello. I would appreciate receiving a copy of your report titled: AUTO-TEACHING: NETWORKS THAT DEVELOP THEIR OWN TEACHING INPUT Any kind of copy is welcome, whether it's electronic (email or ftp if available publicly), or hardcopy. Thank you in advance. toufic R 2 4 |_|_| Toufic Boubez | | | boubez at caip.rutgers.edu 1 3 5 CAIP Center, Rutgers University

From boubez at bass.rutgers.edu Mon Mar 4 15:26:09 1991 From: boubez at bass.rutgers.edu (boubez@bass.rutgers.edu) Date: Mon, 4 Mar 91 15:26:09 EST Subject: Apology Message-ID: <9103042026.AA06924@piano>

I apologise for my latest message to the list, I forgot to remove the "CC: connectionists" line from the body of the message. toufic

From zemel at cs.toronto.edu Tue Mar 5 15:16:56 1991 From: zemel at cs.toronto.edu (zemel@cs.toronto.edu) Date: Tue, 5 Mar 1991 15:16:56 -0500 Subject: preprint available Message-ID: <91Mar5.151718est.7134@neat.cs.toronto.edu>

The following paper has been placed in the neuroprose archives at Ohio State University:

Discovering Viewpoint-Invariant Relationships That Characterize Objects Richard S. Zemel & Geoffrey E. Hinton Department of Computer Science University of Toronto Toronto, Ont. CANADA M5S-1A4

Abstract Using an unsupervised learning procedure, a network is trained on an ensemble of images of the same two-dimensional object at different positions, orientations and sizes. Each half of the network ``sees'' one fragment of the object, and tries to produce as output a set of 4 parameters that have high mutual information with the 4 parameters output by the other half of the network. Given the ensemble of training patterns, the 4 parameters on which the two halves of the network can agree are the position, orientation, and size of the whole object, or some recoding of them. After training, the network can reject instances of other shapes by using the fact that the predictions made by its two halves disagree. If two competing networks are trained on an unlabelled mixture of images of two objects, they cluster the training cases on the basis of the objects' shapes, independently of the position, orientation, and size. This paper will appear in the NIPS-90 proceedings.
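[As a side note on the objective described in the abstract above: the quantity being maximized is the mutual information between the two halves' 4-parameter outputs. Below is a minimal sketch of estimating that quantity on a batch of cases under a joint-Gaussian assumption; the toy data are invented, and this shows only the measurement, not the training procedure used in the paper.]

import numpy as np

def gaussian_mutual_information(a, b):
    # I(a; b) = 0.5 * (log det Cov(a) + log det Cov(b) - log det Cov([a, b])),
    # valid under the assumption that a and b are jointly Gaussian.
    d = a.shape[1]
    cov = np.cov(np.hstack([a, b]), rowvar=False)
    logdet = lambda m: np.linalg.slogdet(m)[1]
    return 0.5 * (logdet(cov[:d, :d]) + logdet(cov[d:, d:]) - logdet(cov))

# Toy check: two "network halves" whose 4 outputs share a common underlying code.
rng = np.random.default_rng(1)
common = rng.normal(size=(5000, 4))                # e.g. a position/orientation/size code
half_a = common + 0.3 * rng.normal(size=common.shape)
half_b = common + 0.3 * rng.normal(size=common.shape)
unrelated = rng.normal(size=common.shape)
print("agreeing halves:  ", gaussian_mutual_information(half_a, half_b))
print("unrelated outputs:", gaussian_mutual_information(half_a, unrelated))

[The second figure should be near zero while the first is large; a procedure that adjusts the two halves to increase such a measure drives them to agree on whatever parameters both can recover, which for the ensembles described above are the object's position, orientation and size.]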
To retrieve it by anonymous ftp, do the following: unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name (cheops.cis.ohio-state.edu:): anonymous Password (cheops.cis.ohio-state.edu:anonymous): ftp> cd pub/neuroprose ftp> binary ftp> get zemel.unsup-recog.ps.Z ftp> quit unix> unix> zcat zemel.unsup-recog.ps.Z | lpr -P From tp at irst.it Wed Mar 6 07:10:54 1991 From: tp at irst.it (Tomaso Poggio) Date: Wed, 6 Mar 91 13:10:54 +0100 Subject: preprint available In-Reply-To: zemel@cs.toronto.edu's message of Tue, 5 Mar 1991 15:16:56 -0500 <91Mar5.151718est.7134@neat.cs.toronto.edu> Message-ID: <9103061210.AA17840@caneva.irst.it> From russ at dash.mitre.org Wed Mar 6 08:20:35 1991 From: russ at dash.mitre.org (Russell Leighton) Date: Wed, 6 Mar 91 08:20:35 EST Subject: simulator announcement Message-ID: <9103061320.AA25799@dash.mitre.org> The following describes a neural network simulation environment made available free from the MITRE Corporation. The software contains a neural network simulation code generator which generates high performance C code implementations for backpropagation networks. Also included is a graphical interface for visualization. PUBLIC DOMAIN NEURAL NETWORK SIMULATOR AND GRAPHICS ENVIRONMENT AVAILABLE Aspirin for MIGRAINES Version 4.0 The Mitre Corporation is making available free to the public a neural network simulation environment called Aspirin for MIGRAINES. The software consists of a code generator that builds neural network simulations by reading a network description (written in a language called "Aspirin") and generates a C simulation. A graphical interface (called "MIGRAINES") is provided for platforms that support the Sun window system NeWS1.1. For platforms that do not support NeWS1.1 no graphics are currently available. The system has been ported to a number of platforms: Sun3/4 Silicon Graphics Iris IBM RS/6000 DecStation Cray YMP Included with the software are "config" files for these platforms. Porting to other platforms may be done by choosing the "closest" platform currently supported and adapting the config files. Aspirin 4.15 ------------ The software that we are releasing now is principally for creating, and evaluating, feed-forward networks such as those used with the backpropagation learning algorithm. The software is aimed both at the expert programmer/neural network researcher who may wish to tailor significant portions of the system to his/her precise needs, as well as at casual users who will wish to use the system with an absolute minimum of effort. Aspirin was originally conceived as ``a way of dealing with MIGRAINES.'' Our goal was to create an underlying system that would exist behind the graphics and provide the network modeling facilities. The system had to be flexible enough to allow research, that is, make it easy for a user to make frequent, possibly substantial, changes to network designs and learning algorithms. At the same time it had to be efficient enough to allow large ``real-world'' neural network systems to be developed. Aspirin uses a front-end parser and code generators to realize this goal. A high level declarative language has been developed to describe a network. This language was designed to make commonly used network constructs simple to describe, but to allow any network to be described. The Aspirin file defines the type of network, the size and topology of the network, and descriptions of the network's input and output. 
This file may also include information such as initial values of weights, names of user defined functions, and hints for the MIGRAINES graphics system. The Aspirin language is based around the concept of a "black box". A black box is a module that (optionally) receives input and (necessarily) produces output. Black boxes are autonomous units that are used to construct neural network systems. Black boxes may be connected arbitrarily to create large possibly heterogeneous network systems. As a simple example, pre or post-processing stages of a neural network can be considered black boxes that do not learn. The output of the Aspirin parser is sent to the appropriate code generator that implements the desired neural network paradigm. The goal of Aspirin is to provide a common extendible front-end language and parser for different network paradigms. The publicly available software will include a backpropagation code generator that supports several variations of the backpropagation learning algorithm. For backpropagation networks and their variations, Aspirin supports a wide variety of capabilities: 1. feed-forward layered networks with arbitrary connections 2. ``skip level'' connections 3. one and two-dimensional tessellations 4. a few node transfer functions (as well as user defined) 5. connections to layers/inputs at arbitrary delays, also "Waibel style" time-delay neural networks The file describing a network is processed by the Aspirin parser and files containing C functions to implement that network are generated. This code can then be linked with an application which uses these routines to control the network. Optionally, a complete simulation may be automatically generated which is integrated with the graphics and can read data in a variety of file formats. Currently supported file formats are: Ascii Type1, Type2, Type3 (simple floating point file formats) ProMatlab Examples -------- A set of examples comes with the distribution: xor: from RumelHart and McClelland, et al, "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 330-334. encode: from RumelHart and McClelland, et al, "Parallel Distributed Processing, Vol 1: Foundations", MIT Press, 1986, pp. 335-339. detect: Detecting a sine wave in noise. characters: Learing to recognize 4 characters independent of rotation. sonar: from Gorman, R. P., and Sejnowski, T. J. (1988). "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets" in Neural Networks, Vol. 1, pp. 75-89. spiral: from Kevin J. Lang and Michael J, Witbrock, "Learning to Tell Two Spirals Apart", in Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, 1988. ntalk: from Sejnowski, T.J., and Rosenberg, C.R. (1987). "Parallel networks that learn to pronounce English text" in Complex Systems, 1, 145-168. perf: A large network used only for performance testing. Performance of Aspirin simulations ---------------------------------- The backpropagation code generator produces simulations that run very efficiently. Aspirin simulations do best on vector machines when the networks are large, as exemplified by the Cray's performance. All simulations were done using the Unix "time" function and include all simulation overhead. The connections per second rating was calculated by multiplying the number of iterations by the total number of connections in the network and dividing by the "user" time provided by the Unix time function. Two tests were performed. 
In the first, the network was simply run "forward" 100,000 times and timed. In the second, the network was timed in learning mode and run until convergence. Under both tests the "user" time included the time to read in the data and initialize the network. Sonar: This network is a two layer fully connected network with 60 inputs: 2-34-60. Millions of Connections per Second Forward: SparcStation1: 1 IBM RS/6000 320: 2.8 Cray YMP: 15.7 Backward: SparcStation1: 0.3 IBM RS/6000 320: 0.8 Cray YMP: 7 Gorman, R. P., and Sejnowski, T. J. (1988). "Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets" in Neural Networks, Vol. 1, pp. 75-89. Nettalk: This network is a two layer fully connected network with [29 x 7] inputs: 26-[15 x 8]-[29 x 7] Millions of Connections per Second Forward: SparcStation1: 1 IBM RS/6000 320: 3.5 Cray YMP: 64 Backward: SparcStation1: 0.4 IBM RS/6000 320: 1.3 Cray YMP: 24.8 Sejnowski, T.J., and Rosenberg, C.R. (1987). "Parallel networks that learn to pronounce English text" in Complex Systems, 1, 145-168. Perf: This network was only run on the Cray. It is very large with very long vectors. The performance on this network is in some sense a peak performance for a machine. This network is a two layer fully connected network with 2048 inputs: 128-512-2048 Millions of Connections per Second Forward: Cray YMP: 96.3 Backward: Cray YMP: 18.9 Note: The cray benchmarks are courtesy of the Center for High Performance Computing at the University of Texas. Aspirin 5.0 ----------- The next release of the software *may* include: 1. 2nd order (quadratic) connections 2. Auto-regressive nodes (this a form of limited recurrence) 3. Code generators for other (not backprop) neural network learning algorithms. 4. More supported file formats 5. More config files for other platforms. MIGRAINES 4.0 ------------- MIGRAINES is a graphics system for visualizing neural network systems. The graphics that are currently being released are exclusively for feed-forward networks. They provide the ability to display networks, arc weights, node values, network inputs, network outputs, and target outputs in a wide variety of formats. There are many different representations that may be used to display arcs weights and node values, including pseudo-color (or grayscale) arrays (with user modifiable colors and value-to-color mappings), various plots, bar charts and other pictorial representations. MIGRAINES is not necessary for the execution of the Aspirin system. Networks may be designed, executed, tested, and saved entirely apart from any graphic interface. The more exotic the network being considered, the smaller the amount of graphics that will be useful. However, the graphics offer such a degree of creative and analytic power for neural network research that even the most jaded researcher will find them useful. Although the graphics were developed for the NeWS1.1 window system, it can be run under Sun's OpenWindows which supports NeWS applications. Note: OpenWindows is not 100% backward compatible with NeWS1.1 so some features of the graphics may not work well. MIGRAINES 5.0 ------------- The next release will replace the NeWS1.1 graphics with an X based system as well extending the graphical capabilities. An interface to the scientific visualization system apE2.0 *may* be available. How to get the software ----------------------- The software is available from two FTP sites, CMU's simulator collection and UCLA's cognitive science machines. 
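As an aside, the connections-per-second figures above follow directly from the rule stated earlier (iterations times total connections, divided by the Unix "user" time). A tiny sketch, with a made-up timing value purely for illustration:

    def mcps(iterations, connections, user_time_seconds):
        # millions of connection evaluations (or updates) per second
        return iterations * connections / user_time_seconds / 1e6

    # e.g. the 2-34-60 sonar net (60 inputs, 34 hidden, 2 outputs) has
    # 60*34 + 34*2 = 2108 connections, ignoring biases; 100,000 forward
    # passes in a hypothetical 75 seconds would give:
    print(mcps(100000, 2108, 75.0), "MCPS")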
The compressed tar file is a little more than 2 megabytes. The software is currently only available via anonymous FTP. > To get the software from CMU's simulator collection: 1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/code". Any subdirectories of this one should also be accessible. Parent directories should not be. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "connectionists-request at cs.cmu.edu". 5. Set binary mode by typing the command "binary" ** THIS IS IMPORTANT ** 6. Get the file "am4.tar.Z" > To get the software from UCLA's cognitive science machines: 1. Create an FTP connection to "polaris.cognet.ucla.edu" (128.97.50.3) (typically with the command "ftp 128.97.50.3") 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "alexis", by typing the command "cd alexis" 4. Set binary mode by typing the command "binary" ** THIS IS IMPORTANT ** 5. Get the file by typing the command "get am4.tar.Z" How to unpack the software -------------------------- After ftp'ing the file make the directory you wish to install the software. Go to that directory and type: zcat am4.tar.Z | tar xvf - How to print the manual ----------------------- The user manual is located in ./doc in a few compressed PostScript files. To print each file on a PostScript printer type: zcat | lpr Thanks ------ Thanks to the folks at CMU and UCLA for the ftp site. Thanks to the folks at the Center for High Performance Computing at the University of Texas for the use of their computers. Copyright and license agreement ------------------------------- Since the Aspirin/MIGRAINES system is licensed free of charge, the MITRE Corporation provides absolutely no warranty. Should the Aspirin/MIGRAINES system prove defective, you must assume the cost of all necessary servicing, repair or correction. In no way will the MITRE Corporation be liable to you for damages, including any lost profits, lost monies, or other special, incidental or consequential damages arising out of the use or in ability to use the Aspirin/MIGRAINES system. This software is the copyright of The MITRE Corporation. It may be freely used and modified for research and development purposes. We require a brief acknowledgement in any research paper or other publication where this software has made a significant contribution. If you wish to use it for commercial gain you must contact The MITRE Corporation for conditions of use. The MITRE Corporation provides absolutely NO WARRANTY for this software. February, 1991 Russell Leighton Alexis Wieland The MITRE Corporation 7525 Colshire Dr. McLean, Va. 22102-3481 From smieja at gmdzi.uucp Fri Mar 8 05:57:11 1991 From: smieja at gmdzi.uucp (Frank Smieja) Date: Fri, 8 Mar 91 09:57:11 -0100 Subject: Tech Report available: NN module systems Message-ID: <9103080857.AA11648@gmdzi.gmd.de> The following technical report is available. It will appear in the proceedings of the AISB90 COnference in Leeds, England. The report exists as smieja.minos.ps.Z in the Ohio cheops account (in /neuroprose, anonymous login via FTP). Normal procedure for retrieval applies. 
MULTIPLE NETWORK SYSTEMS (MINOS) MODULES: TASK DIVISION AND MODULE DISCRIMINATION It is widely considered an ultimate connectionist objective to incorporate neural networks into intelligent {\it systems.} These systems are intended to possess a varied repertoire of functions enabling adaptable interaction with a non-static environment. The first step in this direction is to develop various neural network algorithms and models, the second step is to combine such networks into a modular structure that might be incorporated into a workable system. In this paper we consider one aspect of the second point, namely: processing reliability and hiding of wetware details. Presented is an architecture for a type of neural expert module, named an {\it Authority.} An Authority consists of a number of {\it Minos\/} modules. Each of the Minos modules in an Authority has the same processing capabilities, but varies with respect to its particular {\it specialization\/} to aspects of the problem domain. The Authority employs the collection of Minoses like a panel of experts. The expert with the highest {\it confidence\/} is believed, and it is the answer and confidence quotient that are transmitted to other levels in a system hierarchy. From Connectionists-Request at CS.CMU.EDU Fri Mar 8 13:11:38 1991 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Fri, 08 Mar 91 13:11:38 EST Subject: Fwd: Request for info on simulators related to fuzzy logic Message-ID: <19306.668455898@B.GP.CS.CMU.EDU> ------- Forwarded Message From DUDZIAKM at isnet.inmos.COM Fri Mar 8 09:34:04 1991 From: DUDZIAKM at isnet.inmos.COM (DUDZIAKM@isnet.inmos.COM) Date: Fri, 8 Mar 91 07:34:04 MST Subject: Request for info on simulators related to fuzzy logic Message-ID: I am in the process of compiling an annotated list of simulators and emulators of fuzzy logic and other related non-deterministic logics. This list will be made available to the network community. I welcome any information about products and especially distributable, public-domain prototypes. I am familiar with a few of the commercial products but do not know much about what is available through the academic research community. There seem to be quite a number of neural/ connectionist simulators, such as have been recently described on this and other mailing lists. Your assistance is appreciated. Martin Dudziak SGS-THOMSON Microelectronics dudziakm at isnet.inmos.com (alternate: dudziakm at agrclu.st.it in Europe) fax: 301-290-7047 phone: 301-995-6952 ------- End of Forwarded Message From lanusse at ensta.ensta.fr Mon Mar 11 10:07:27 1991 From: lanusse at ensta.ensta.fr (+ prof) Date: Mon, 11 Mar 91 16:07:27 +0100 Subject: No subject Message-ID: <9103111507.AA09539@ensta.ensta.fr> Subject: mailing list To whom it may concern: I would like to be placed on the connectionists mailing list. Thank you Alain F. Lanusse Ecole Nationale Superieure de Techniques Avancees Paris (FRANCE) lanusse at ensta.ensta.fr From POSEIDON at UCSD Mon Mar 11 22:13:28 1991 From: POSEIDON at UCSD (POSEIDON@UCSD) Date: Mon, 11 Mar 91 19:13:28 PST Subject: help Message-ID: <9103120313.AA06782@sdcc13.UCSD.EDU> I am trying to delete myself from the connectionists mailing list, but am unable to do so. After following the proper procedure, it tells me I am deleted, but I still get the mail. Can anyone help? 
Thanks, Patrick Dwyer pdwyer at sdcc13.ucsd.edu From VAINA at buenga.bu.edu Tue Mar 12 07:22:00 1991 From: VAINA at buenga.bu.edu (VAINA@buenga.bu.edu) Date: Tue, 12 Mar 91 07:22 EST Subject: THE COMPUTING BRAIN LECTURE- TERRY SEJNOWSKI Message-ID: MARCH 13, 5PM DYNAMIC OF EYE MOVEMENTS, TERRY SEJNOWSKI at Boston University, 110 Cummington str (olf Engineering Building) Room 150. Tea at 4pm. All Boston area people invited! (for further information call 353-2455 or 353-9144) (617-area code). From marwan at ee.su.OZ.AU Tue Mar 12 07:51:51 1991 From: marwan at ee.su.OZ.AU (Marwan Jabri) Date: Tue, 12 Mar 1991 22:51:51 +1000 Subject: No subject Message-ID: <9103121251.AA29308@brutus.ee.su.oz.au> ***************** Technical Report Available ***************** Predicting the Number of Vias and Dimensions of Full-custom Circuits Using Neural Networks Techniques Marwan Jabri & Xiaoquan Li School of Electrical Engineering University of Sydney marwan at ee.su.oz.au (SEDAL Tech Report 1991-1-6) Abstract Block layout dimension prediction is an important activity in many VLSI design tasks (structural synthesis, floorplanning and physical synthesis). Block layout {\em dimension} prediction is harder than block {\em area} prediction and has been previously considered to be intractable [Kurdahi89]. In this paper we present a solution to this problem using a neural network machine learning paradigm. Our method uses a neural network to predict first the number of vias and then another neural network that uses this prediction and other circuit features to predict the width and the height of the layout of the circuit. Our approach has produced much better results than those published, {\em dimension} (aspect ratio) prediction average error of less than 18\% with corresponding {\em area} prediction average error of less than 15\%. Furthermore, our technique predicts the number of vias in a circuit with less than 4\% error on average. *** Also submitted To ftp this report: ------------------- ftp cheops.cis.ohio-state.edu (or ftp 128.146.8.62) >name: anonymous >passwork: neuron >binary >cd pub/neuroprose >get jabri.dime.ps.Z >quit uncompress jabri.dime.ps.Z lpr -P jabri.dime.ps If for any reasons you are unable to print the file, you can ask for a hardcopy by writing to (and asking for SEDAL Tech Report 1991-1-6): Marwan Jabri Sydney University Electrical Engineering NSW 2006 Australia From dekel at utdallas.edu Tue Mar 12 20:00:53 1991 From: dekel at utdallas.edu (Eliezer Dekel) Date: Tue, 12 Mar 91 19:00:53 -0600 Subject: Off-line Signature recognition Message-ID: I am looking for references to work on off-line signature recognition. I'm aware of some work that was done before 1985. I would greatly appreciate information about more recent work. I'll summerize and post to the list. Eliezer Dekel The University of Texas at Dallas dekel at utdallas.edu From marshall at cs.unc.edu Wed Mar 13 11:34:19 1991 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Wed, 13 Mar 91 11:34:19 -0500 Subject: Triangle NN talk: Mo-Yuen Chow Message-ID: <9103131634.AA24223@marshall.cs.unc.edu> ====== TRIANGLE AREA NEURAL NETWORK INTEREST GROUP presents: ====== Dr. MO-YUEN CHOW Department of Electrical and Computer Engineering North Carolina State University Tuesday, March 19, 1991 6:00 p.m. Entrance is locked at 6:30. 
Microelectronics Center Building, MCNC 3021 Cornwallis Road Research Triangle Park, NC Followed immediately by an ORGANIZATIONAL MEETING for the Triangle Area Neural Network Interest Group ---------------------------------------------------------------------- APPLICATION OF NEURAL NETWORKS TO INCIPIENT FAULT DETECTION IN INDUCTION MOTORS The main focus of the presentation is to introduce a new concept for incipient fault detection in rotating machines using artificial neural networks. Medium size induction motors are used as prototypes for rotating machines due to their wide applications. The concepts developed for induction motors can be easily generalized to other rotating machines. The common incipient faults of induction motors, namely, turn-to-turn insulation faults and bearing wear, and their effects on the motor performance are considered. A corresponding artificial neural network structure is then designed to detect those incipient faults. The designed network is trained by data of different fault conditions, obtained from a detailed induction motor simulation program. After training, the neural net is tested with a set of random data within the fault range under consideration. With a priori data knowledge, the network structure can be greatly simplified by using a high-order artificial neural network. The performance of using the batch-update and pattern-update backpropagation training algorithms for the network are compared. The satisfactory performance of using artificial neural networks in this project shows the promising future of artificial neural networks applications for other types of fault detection. ---------------------------------------------------------------------- Co-Sponsored by: Department of Electrical and Computer Eng., NCSU Department of Computer Science, UNC-CH Humanities Computing Facility, Duke Univ. Microelectronics Center of North Carolina (MCNC) For more information: Jonathan Marshall (UNC-CH, 962-1887, marshall at cs.unc.edu) or John Sutton (NCSU, 737-5065, sutton at eceugs.ece.ncsu.edu). Directions: Raleigh: I-40 west to Durham Freeway (147) north, 147 to Cornwallis exit. Durham: 147 south to Cornwallis exit. Chapel Hill: I-40 east to Durham Freeway (147) north, 147 to Cornwallis exit. FROM CORNWALLIS EXIT, bear right, go thru first set of traffic lights, passing Burroughs Wellcome, then next driveway on right is MCNC. When you enter MCNC, the Microelectronics Center building is on the left. ---------------------------------------------------------------------- We invite you to participate in organizing and running the new Triangle-area neural network (NN) interest group. It is our hope that the group will foster communication and collaboration among the local NN researchers, students, businesses, and the public. From marshall at cs.unc.edu Wed Mar 13 11:34:50 1991 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Wed, 13 Mar 91 11:34:50 -0500 Subject: Triangle NN talk: Dan Levine Message-ID: <9103131634.AA24248@marshall.cs.unc.edu> ====== TRIANGLE AREA NEURAL NETWORK INTEREST GROUP presents: ====== Prof. DANIEL S. LEVINE Department of Mathematics University of Texas at Arlington Tuesday, April 2, 1991 5:30 p.m. Refreshments will be served at 5:15. 
Sitterson Hall (Computer Science), room 011 UNC Chapel Hill ---------------------------------------------------------------------- NETWORK MODELING OF NEUROPSYCHOLOGICAL DATA A general class of neural network architectures will be discussed, based on such principles as associative learning, competition, and opponent processing. Examples of this sort of architecture will be introduced that model data on neuropsychological deficits arising from frontal lobe damage. These deficits include inability to switch criteria on a card sorting task; excessive attraction to novel stimuli; loss of verbal fluency; and difficulty in learning a flexible motor sequence. Frontal lobe damage is modeled in these networks by weakening of a specified connection. Dan Levine is author of the forthcoming textbook Introduction to Neural and Cognitive Modeling, published by L. Erlbaum Associates, 1991. He is a co-founder of the Dallas-Ft.Worth area neural network interest group M.I.N.D. ---------------------------------------------------------------------- Co-Sponsored by: Department of Electrical and Computer Eng., NCSU Department of Computer Science, UNC-CH Humanities Computing Facility, Duke Univ. For more information: Jonathan Marshall (UNC-CH, 962-1887, marshall at cs.unc.edu) or John Sutton (NCSU, 737-5065, sutton at eceugs.ece.ncsu.edu). Directions: Sitterson Hall is located across the street from the Carolina Inn, on South Columbia Street (Route 86), which is the main north-south street through downtown Chapel Hill. Free parking is available in the UNC lots, two of which are adjacent to Sitterson Hall. Municipal parking lots are located 2-3 blocks north, in downtown Chapel Hill. ---------------------------------------------------------------------- We invite you to participate in organizing and running the new Triangle-area neural network (NN) interest group. It is our hope that the group will foster communication and collaboration among the local NN researchers, students, businesses, and the public. From mathew at elroy.Jpl.Nasa.Gov Wed Mar 13 12:38:45 1991 From: mathew at elroy.Jpl.Nasa.Gov (Mathew Yeates) Date: Wed, 13 Mar 91 09:38:45 PST Subject: tech report Message-ID: <9103131738.AA01556@jane.Jpl.Nasa.Gov> The following technical report (JPL Publication) is available for anonymous ftp from the neuroprose directory at cheops.cis.ohio-state.edu. This is a short version of a previous paper "An Architecture With Neural Network Characteristics for Least Squares Problems" and has appeared in various forms at several conferences. There are two ideas that may be of interest: 1) By making the input layer of a single layer Perceptron fully connected, the learning scheme approximates Newtons algorithm instead of steepest descent. 2) By allowing local interactions between synapses the network can handle time varying behavior. Specifically, the network can implement the Kalman Filter for estimating the state of a linear system. get both yeates.pseudo-kalman.ps.Z and yeates.pseudo-kalman-fig.ps.Z A Neural Network for Computing the Pseudo-Inverse of a Matrix and Applications to Kalman Filtering Mathew C. Yeates California Institute of Technology Jet Propulsion Laboratory ABSTRACT A single layer linear neural network for associative memory is described. The matrix which best maps a set of input keys to desired output targets is computed recursively by the network using a parallel implementation of Greville's algorithm. 
This model differs from the Perceptron in that the input layer is fully interconnected, leading to a parallel approximation to Newton's algorithm. This is in contrast to the steepest descent algorithm implemented by the Perceptron. By further extending the model to allow synapse updates to interact locally, a biologically plausible addition, the network implements Kalman filtering for a single output system. From marwan at ee.su.OZ.AU Thu Mar 14 01:29:45 1991 From: marwan at ee.su.OZ.AU (Marwan Jabri) Date: Thu, 14 Mar 1991 16:29:45 +1000 Subject: Job opportunities Message-ID: <9103140629.AA09964@brutus.ee.su.oz.au> Professional Assistants (2) Systems Engineering and Design Automation Laboratory School of Electrical Engineering The University of Sydney Australia Microelectronic Implementation of Neural Network Based Devices for the Analysis and Classification of Medical Signals Applications are invited from persons to work on an advanced neural network application project in the medical area. The project is being funded jointly by the Australian Government and a high-technology manufacturer of medical products. Appointees will be joining an existing team of 3 staff. The project is the research and development of architectures of neural networks to be implemented in VLSI. The project spans a 2-year period following a feasibility study which was completed in early 1991. The first Professional Assistant is expected to have experience in VLSI engineering, to design VLSI circuits, to perform simulations, to develop simulation models of physical-level implementations, to design testing jigs (hardware and software) and to perform tests on fabricated chips. The second Professional Assistant is also expected to have experience in VLSI engineering, with the responsibility of working on the development of EEPROM storage technology, of upgrading design tools to support EEPROM, of developing an interface with the fabrication foundry, and of designing building-block storage elements and interfacing them with other artificial neural network building blocks. The appointee for this position is expected to spend substantial time at the University of New South Wales, where some of the EEPROM work will be performed. Both Professional Assistants will also work on the overall chip prototyping that we expect to be performed in 1992. The appointment will be for a period of 2 years. Applicants should have an Electrical/Electronic Engineering degree or equivalent and a minimum of three years' experience in related fields. Preference will be given to applicants with experience in Artificial Neural Networks, and Analog or Digital Integrated Circuits. The appointees may apply for enrollment towards a postgraduate degree (part-time). Salary range according to qualifications: $24,661 - $33,015 pa. Method of application: --------------------- Applications including curriculum vitae, list of publications and the names, addresses and fax numbers of three referees should be sent to: Dr M.A. Jabri, Sydney University Electrical Engineering Building J03 NSW 2006 Australia Tel: (+61-2) 692-2240 Fax: (+61-2) 692-3847 Email: marwan at ee.su.oz.au, from whom further information may be obtained. The University reserves the right not to proceed with any appointment for financial or other reasons. Equal Opportunity is University Policy.
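In connection with the SEDAL work above and the weight-perturbation reports announced earlier in this digest: the core idea -- estimate each weight's gradient by perturbing the weight and measuring the change in network error, so that no back-propagation circuitry is needed -- is easy to see in a few lines of ordinary code. The sketch below is only a software illustration of that principle on a tiny XOR network; the network size, perturbation step and learning rate are arbitrary choices for the example, and the actual analog VLSI scheme described in the reports may differ in detail.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([0., 1., 1., 0.])              # XOR targets
    w = rng.normal(scale=0.5, size=9)           # 2-2-1 net with biases, flattened

    def forward(w, x):
        W1, b1 = w[0:4].reshape(2, 2), w[4:6]
        W2, b2 = w[6:8], w[8]
        h = np.tanh(W1 @ x + b1)
        return np.tanh(W2 @ h + b2)

    def total_error(w):
        return sum((forward(w, x) - t) ** 2 for x, t in zip(X, T))

    pert, lr = 1e-4, 0.2
    for epoch in range(5000):
        base = total_error(w)
        grad = np.zeros_like(w)
        for i in range(w.size):                        # perturb one weight at a time
            w[i] += pert
            grad[i] = (total_error(w) - base) / pert   # finite-difference estimate
            w[i] -= pert
        w -= lr * grad                                 # ordinary gradient descent step

    # With enough epochs (and a cooperative initialization) the outputs
    # approach the XOR targets; exact convergence is not guaranteed.
    for x, t in zip(X, T):
        print(x, "->", round(float(forward(w, x)), 2), "(target", t, ")")

The appeal for analog hardware is that only forward error measurements and simple weight updates are required, which is exactly the point the abstract makes about avoiding the extra circuitry that back-propagation needs.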
From elman at crl.ucsd.edu Thu Mar 14 18:05:04 1991 From: elman at crl.ucsd.edu (Jeff Elman) Date: Thu, 14 Mar 91 15:05:04 PST Subject: CRL TR-9101: The importance of starting small Message-ID: <9103142305.AA18566@crl.ucsd.edu> The following technical report is available. Hardcopies may be obtained by sending your name and postal address to crl at crl.ucsd.edu. A compressed postscript version can be retrieved through ftp (anonymous/ident) from crl.ucsd.edu (128.54.165.43) in the file pub/neuralnets/tr9101.Z. CRL Technical Report 9101 "Incremental learning, or The importance of starting small" Jeffrey L. Elman Center for Research in Language Departments of Cognitive Science and Linguistics University of California, San Diego elman at crl.ucsd.edu ABSTRACT Most work in learnability theory assumes that both the environment (the data to be learned) and the learning mechanism are static. In the case of children, however, this is an unrealistic assumption. First-language learning occurs, for example, at precisely that point in time when children undergo significant developmental changes. In this paper I describe the results of simulations in which network models are unable to learn a complex grammar when both the network and the input remain unchanging. However, when either the input is presented incrementally, or--more realistically--the network begins with limited memory that gradually increases, the network is able to learn the grammar. Seen in this light, the early limitations in a learner may play both a positive and critical role, and make it possible to master a body of knowledge which could not be learned in the mature system. From tom at asi.com Fri Mar 15 13:53:50 1991 From: tom at asi.com (Tom Baker) Date: Fri, 15 Mar 91 10:53:50 PST Subject: More limited precision responses Message-ID: <9103151853.AA23661@asi.com> Recently Jacob Murre sent a copy of the comments that he received from a request for references on limited precision implementations of neural network algorithms. Here are the results that I received from an earlier request. I have also collected a bibliography of papers on the subject. Because of the length of the bibliography, I will not post it here. I am sending a copy to all who responded to my initial inquiry. If you want a copy of the references, send me a message and I will send it to you. If you already sent a request for the references and do not receive them in a few days, then please send me another message. Thanks for all of the references and messages. Thomas Baker INTERNET: tom at asi.com Adaptive Solutions, Inc. UUCP: (uunet,ogicse)!adaptive!tom 1400 N.W. Compton Drive, Suite 340 Beaverton, Oregon 97006 ----------------------------------------------------------------------- Hello Thomas. Networks with Binary Weights (i.e. only +1 or -1) can be regarded as the extreme case of limited precision networks (they also utilize binary threshold units). The interest in such networks is twofold: theoretical and practical. Nevertheless, not many learning algorithms exist for these networks. Since this is one of my major interests, I have a list of references for binary weights algorithms, which I enclose. As far as I know, the only algorithms which train feed-forward networks with binary weights are based on the CHIR algorithm (Grossman 1989, Saad and Marom 1990, Nabutovsky et al 1990). The CHIR algorithm is an alternative to BP that was developed by our group, and is capable of training feed forward networks of binary (i.e. hard threshold) units.
The rest of the papers in the enclosed list deal with algorithms for the single binary weights percptron (a learning rule which can also be used for fully connected networks of such units). If you are interested, I can send you copies of our papers (Grossman, Nabutovsky et al). I would also be very intersted in what you find. Best Regards Tal Grossman Electronics Dept. Weizmann Inst. of Science Rehovot 76100, ISRAEL. ----------------------------------------------------------------------- In article <894 at adaptive.UUCP> you write: > >We use 16 bit weights, and 8 bit inputs and outputs. We have found >that this representation does as well as floating point for most of >the data sets that we have tried. I have also seen several other >papers where 16 bit weights were used successfully. > >I am also trying to collect a bibliography on limited precision. I >would like to see the references that you get. I do not have all of >the references that I have in a form that can be sent out. I will >post them soon. I would like to keep in touch with the people that >are doing research in this area. > >Thomas Baker INTERNET: tom at asi.com >Adaptive Solutions, Inc. UUCP: (uunet,ogicse)!adaptive!tom >1400 N.W. Compton Drive, Suite 340 >Beaverton, Oregon 97006 My research topic is VLSI implementation of neural networks, and hence I have done some study on precision requirement. My observations agree with that of yours more or less (I have also read your paper titled "Characterization of Neural Networks"). I found that the precision for weights is 11-12 bits for the decimal place. For outputs, 4-5 bits were sufficient, but for backpropagated error, the precision requirement was 7-8 bits (the values of these two don't exceed 1). As an aside, you say in your paper (cited above), that you can train the network by accumulating weights as well as backpropagated error. While I was successful with weights, I was never able to train the network by accumulating the error. Could you give some more explanation on this matter? Accumulating error will be very helpful from the point of view of VLSI implementation, since one need not wait for the error to be backpropagated at every epoch, and hence the throughput can be increased. Thanks in advance, Arun (e-mail address: arun at vlsi.waterloo.edu) [Ed. Oops! He's right, accumulating the error without propagating it doesn't work. TB ] ----------------------------------------------------------------------- I have done some precision effects simulations. I was concerned with an analog/digital hybrid architecture which drove me to examine three areas of precision constraints: 1) calculation precision--the same as weight storage precision and essentially the precision necessary in the backprop calculations, 2) feedforward weight precision--the precision necessary in calculating an activation level, 3) output precision--the precision necessary in both the feedforward calculations and in calculating weight changes. My results were not much better than you mentioned--13 bits were required for weight storage/delta-w calculations. I have a feeling that you wanted to see something much more optimistic. I should say that I was examining problems more related to signal processing and was very concerned with obtaining a low RMS error. Another study which looked more at classification problems and was concerned with correct classification and not necessarily RMS error got more optimistic results--I believe 8-10 bit calculations (but don't quote me). 
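Since several of the messages in this thread quote precision figures (16-bit weights and 8-bit activations, 11-13 bits, and so on), it may help to state what such experiments usually mean operationally: values are kept on a uniform b-bit fixed-point grid before being used. The fragment below is a generic illustration of that kind of quantizer, not the particular scheme used by any of the correspondents above:

    import numpy as np

    def quantize(v, bits, v_max=1.0):
        # Uniform fixed-point quantization of values in [-v_max, v_max)
        # using 2**bits levels (step = 2*v_max / 2**bits).
        step = 2.0 * v_max / (2 ** bits)
        return np.clip(np.round(v / step) * step, -v_max, v_max - step)

    rng = np.random.default_rng(0)
    w = rng.uniform(-1, 1, size=10000)           # stand-in for trained weights
    for bits in (16, 12, 8, 4):
        err = np.abs(quantize(w, bits) - w)
        print(f"{bits:2d} bits: mean quantization error = {err.mean():.2e}")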
Those results originally appeared in a masters thesis by Sheldon Gilbert at MIT. His thesis I believe is available as a Lincoln Lab tech report #810. The date of that TR is 11/18/88. The results of my study appears in the fall '90 issue of "Neural Computation". As I work on a chip design I continually confront this effect and would be happy to discuss it further if you care to. Best Regards, Paul Hollis Dept. of ECE North Carolina State Univ. pwh at ecebowie.ncsu.edu (919) 737-7452 ----------------------------------------------------------------------- Hi! Yun-shu Peter Chiou told me you have done something on finite word length BP. Right now, I'm also working on a subject related to that area for my master's thesis. Could you give me a copy of your master's thesis? Of course, I'm gonna pay for it. Your reply will be greately appreciated. Jennifer Chen-ping Feng ----------------------------------------------------------------------- My group has interest in the quantisation effects from both theoretical and practical point of view. The theoretical aspects are very challenging. I would appreciate if you are aware of papers in this areas. Regards, Mrawn Jabri ----------------------------------------------------------------------- Dear connectionist researchers, We are in the process of designing a new neurocomputer. An important design consideration is precision: Should we use 1-bit, 4-bit, 8-bit, etc. representations for weights, activations, and other parameters? We are scaling-up our present neurocomputer, the BSP400 (Brain Style Processor with 400 processors), which uses 8-bit internal representations for activations and weights, but activations are exchanged as single bits (using partial time-coding induced by floating thresholds). This scheme does not scale well. Though we have tracked down scattered remarks in the literature on precision, we have not been able to find many systematic studies on this subject. Does anyone know of systematic simulations or analytical results of the effect of implementation precision on the performance of a neural network? In particular we are interested in the question of how (and to what extent) limited precision (i.e., 8-bits) implementations deviate from, say, 8-byte, double precision implementations. The only systematic studies we have been able to find so far deal with fault tolerance, which is only of indirect relevance to our problem: Brause, R. (1988). Pattern recognition and fault tolerance in non-linear neural networks. Biological Cybernetics, 58, 129-139. Jou, J., & J.A. Abraham (1986). Fault-tolerant matrix arithmetic and signal processing on highly concurrent computing structures. Proceedings of the IEEE, 74, 732-741. Moore, W.R. (1988). Conventional fault-tolerance and neural computers. In: R. Eckmiller, & C. Von der Malsburg (Eds.). Neural Computers. NATO ASI Series, F41, (Berling: Springer-Verlag), 29-37. Nijhuis, J., & L. Spaanenburg (1989). Fault tolerance of neural associative memories. IEE Proceedings, 136, 389-394. Thanks! Jacob M.J. Murre Unit of Experimental and Theoretical Psychology Leiden University P.O. Box 9555 2300 RB Leiden The Netherlands E-Mail: MURRE at HLERUL55.Bitnet ----------------------------------------------------------------------- We are in the process of finishing up a paper which gives a theoretical (systematic) derivation of the finite precision neural network computation. The idea is a nonlinear extension of "general compound operators" widely used for error analysis of linear computation. 
We derive several mathematical formulas for both retrieving and learning in neural networks. The finite precision error in the retrieving phase can be written as a function of several parameters, e.g., number of bits of weights, number of bits for multiplication and accumulation, size of nonlinear table-look-up, truncation/rounding or jamming approaches, etc. Then we are able to extend this retrieving-phase error analysis to iterative learning to predict the necessary number of bits. This can be shown using a ratio between the finite precision error and the (floating point) back-propagated error. Simulations have been conducted and matched the theoretical prediction quite well. Hopefully, we can have a final version of this paper available to you soon. Jordan L. Holt and Jenq-Neng Hwang, "Finite Precision Error Analysis of Neural Network Hardware Implementation," University of Washington, FT-10, Seattle, WA 98195 Best Regards, Jenq-Neng ----------------------------------------------------------------------- Dear Baker, I saw the following article on the internet news: ----- Yun-Shu Peter Chiou (yunshu at eng.umd.edu) writes: > Does anyone out there have any references or have done any works > on the effects of finite word length arithmetic on Back-Propagation. I have done a lot of work with BP using limited precision calculations. My masters thesis was on the subject, and last summer Jordan Holt worked with us to run a lot of benchmark data on our limited precision simulator. We are submitting a paper on Jordan's results to IJCNN '91 in Seattle. We use 16 bit weights, and 8 bit inputs and outputs. We have found that this representation does as well as floating point for most of the data sets that we have tried. I have also seen several other papers where 16 bit weights were used successfully. I am also trying to collect a bibliography on limited precision. I would like to see the references that you get. I do not have all of the references that I have in a form that can be sent out. I will post them soon. I would like to keep in touch with the people that are doing research in this area. ----- Could I have a copy of your article sent to me? At the moment I am writing a survey of the usage of SIMD computers for the simulation of artificial neural networks. As many SIMD computers are bit-serial, I think precision is an important aspect of the algorithms. I have found some articles that discuss low precision neural networks and I have included references to them at the end of my letter. If you have a compilation of other references that you recommend, could you please send the list to me? advTHANKSance Tomas Nordstrom --- Tomas Nordstrom Tel: +46 920 91061 Dept. of Computer Engineering Fax: +46 920 98894 Lulea University of Technology Telex: 80447 LUHS S-95187 Lulea, SWEDEN Email: tono at sm.luth.se (internet) --- ----------------------------------------------------------------------- In a recent post on the connectionists mailing list you were quoted as follows... " ... We have found that for backprop learning, between twelve and sixteen bits are needed. ...One method that optical and analog engineers use is to calculate the error by running feed forward calculations with limited precision, and learning weights with higher precision..." I am currently doing research on optical implementations of associative memories. The method that I am researching iteratively calculates a memory matrix that is fairly robust. However, when I quantize the process during learning, the entire method fails.
I was wondering if you knew of someone who has had similar problems with quantization during training. Thank you, Karen Haines ----------------------------------------------------------------------- Hi Mr. Baker, I am working on a project wherein we are attempting to study the implications of using limited precision while implementing backpropagation. I read a message from Jacob Murre that said that you were maintaining a distribution list of persons interested in this field. Would you kindly add me to that list? My email address is ljd at mrlsun.sarnoff.com Thanks Leslie Dias From rba at vintage.bellcore.com Sun Mar 17 07:41:07 1991 From: rba at vintage.bellcore.com (Bob Allen) Date: Sun, 17 Mar 91 07:41:07 -0500 Subject: a 'connectionist pedagogy' - structured training of grammars Message-ID: <9103171241.AA03081@vintage.bellcore.com> The result that it is beneficial to teach a simple recurrent network a difficult grammar by starting with a short form is already well known (but unfortunately not cited in Jeff Elman's recently announced TR). My research was described to Jay McClelland's research group in November 1988; it was presented at the ACM Computer Science Conference (Adaptive Training of Connectionist State Machines, Louisville, Feb. 1989, page 428); it has been reported to this interest group several times; and it is summarized in: Allen, R.B., Connectionist Language Users, Connection Science, vol. 2, 279-311, 1990. I would be happy to distribute a reprint of this paper to anyone who requests one. Robert B. Allen 2A-367 Bellcore Morristown, NJ 07962-1910 From soller at cs.utah.edu Tue Mar 19 00:55:06 1991 From: soller at cs.utah.edu (Jerome Soller) Date: Mon, 18 Mar 91 22:55:06 -0700 Subject: Utah's First Annual Cognitive Science Lecture Message-ID: <9103190555.AA07733@cs.utah.edu> The speaker at the First Annual Utah Cognitive Science Lecture is Dr. Andreas Andreou of the Johns Hopkins University Electrical Engineering Department. His topic is "A Physical Model of the Retina in Analog VLSI That Explains Optical Illusions". This provides a contrast to Dr. Carver Mead of Caltech, who spoke earlier this year in Utah at the Computer Science Department's Annual Organick Lecture. The time and date of the First Annual Cognitive Science Lecture will be Tuesday, April 2nd, 4:00 P.M. The room will be 101 EMCB (next to the Merrill Engineering Building), University of Utah, Salt Lake City, Utah. A small reception (refreshments) will be available. This event is cosponsored by the Sigma Xi Research Fraternity. Dr. Dick Normann, Dr. Ken Horch, Dr. Dick Burgess, and Dr. Phil Hammond were extremely helpful in organizing this event. For more information on this event and other Cognitive Science related events in the state of Utah, contact me at (801) 581-4710 or by e-mail (preferred) at soller at cs.utah.edu. We have a 130-person electronic mailing list within the state of Utah announcing these kinds of events. We are also finishing up this year's edition of the Utah Cognitive Science Information Guide, which contains 80 faculty, 60 graduate students, 60 industry representatives, 32 courses, and 25 research groups from the U. of Utah, BYU, Utah State and local industry. A rough draft can be copied by anonymous ftp from /usr/spool/ftp/pub/guide.doc on the cs.utah.edu machine. A final draft in plain text and a Macintosh version (better format) will be on the ftp site in about 2 weeks. Sincerely, Jerome B. Soller, Ph.D.
Student Department of Computer Science University of Utah From well!moritz at apple.com Mon Mar 18 21:41:41 1991 From: well!moritz at apple.com (Elan Moritz) Date: Mon, 18 Mar 91 18:41:41 pst Subject: abstracts - J. of Ideas, Vol 2 #1 Message-ID: <9103190241.AA22971@well.sf.ca.us> +=++=++=++=++=++=++=++=++=++=++=++=++=++=++=+ please post & circulate Announcement ......... Abstracts of papers appearing in Volume 2 # 1 of the Journal of Ideas THOUGHT CONTAGION AS ABSTRACT EVOLUTION Aaron Lynch Abstract: Memory abstractions, or mnemons, form the basis of a memetic evolution theory where generalized self-replicating ideas give rise to thought contagion. A framework is presented for describing mnemon propagation, combination, and competition. It is observed that the transition from individual level considerations to population level considerations can act to cancel individual variations and may result in population behaviors. Equations for population memetics are presented for the case of two-idea interactions. It is argued that creativity via innovation of ideas is a population phenomena. Keywords: mnemon, meme, evolution, replication, idea, psychology, equation. ................... CULTURE AS A SEMANTIC FRACTAL: Sociobiology and Thick Description Charles J. Lumsden Department of Medicine, University of Toronto Toronto, Ontario, Canada M5S 1A8 Abstract: This report considers the problem of modeling culture as a thick symbolic system: a system of reference and association possessing multiple levels of meaning and interpretation. I suggest that thickness, in the sense intended by symbolic anthropologists like Geertz, can be treated mathematically by bringing together two lines of formal development, that of semantic networks, and that of fractal mathematics. The resulting semantic fractals offer many advantages for modeling human culture. The properties of semantic fractals as a class are described, and their role within sociobiology and symbolic anthropology considered. Provisional empirical evidence for the hypothesis of a semantic fractal organization for culture is discussed, together with the prospects for further testing of the fractal hypothesis. Keywords: culture, culturgen, meme, fractal, semantic network. ................... MODELING THE DISTRIBUTION OF A "MEME" IN A SIMPLE AGE DISTRIBUTION POPULATION: I. A KINETICS APPROACH AND SOME ALTERNATIVE MODELS Matthew Witten Center for High Performance Computing University of Texas System, Austin, TX 78758-4497 Abstract. Although there is a growing historical body of literature relating to the mathematical modeling of social and historical processes, little effort has been placed upon modeling the spread of an idea element "meme" in such a population. In this paper we review some of the literature and we then consider a simple kinetics approach, drawn from demography, to model the distribution of a hypothetical "meme" in a population consisting of three major age groups. KEYWORDS: Meme, idea, age-structure, compartment, sociobiology, kinetics model. ................... THE PRINCIPIA CYBERNETICA PROJECT Francis Heylighen, Cliff Joslyn, and Valentin Turchin The Principia Cybernetica Project[dagger] Abstract: This note describes an effort underway by a group of researchers to build a complete and consistent system of philosophy. The system will address, issues of general philosophical concern, including epistemology, metaphysics, and ethics, or the supreme human values. 
The aim of the project is to move towards conceptual unification of the relatively fragmented fields of Systems and Cybernetics through consensually-based philosophical development. Keywords: cybernetics, culture, evolution, system transition, networks, hypermedia, ethics, epistemology. ................... Brain and Mind: The Ultimate Grand Challenge Elan Moritz The Institute for Memetic Research P. O. Box 16327, Panama City, Florida 32406 Abstract: Questions about the nature of brain and mind are raised. It is argued that the fundamental understanding of the functions and operation of the brain and its relationship to mind must be regarded as the Ultimate Grand Challenge problem of science. National research initiatives such as the Decade of the Brain are discussed. Keywords: brain, mind, awareness, consciousness, computers, artificial intelligence, meme, evolution, mental health, virtual reality, cyberspace, supercomputers. +=++=++=++=++=++=++=++=++=++=++=++=++=++=++=+ The Journal of Ideas is an archival forum for discussion of 1) evolution and spread of ideas, 2) the creative process, and 3) biological and electronic implementations of idea/knowledge generation and processing. The Journal of Ideas, ISSN 1049-6335, is published quarterly by the Institute for Memetic Research, Inc., P. O. Box 16327, Panama City, Florida 32406-1327. >----------- FOR MORE INFORMATION -------> E-mail requests to Elan Moritz, Editor, at moritz at well.sf.ca.us. +=++=++=++=++=++=++=++=++=++=++=++=++=++=++=+ From Connectionists-Request at CS.CMU.EDU Tue Mar 19 09:39:25 1991 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Tue, 19 Mar 91 09:39:25 EST Subject: Fwd: Message-ID: <12415.669393565@B.GP.CS.CMU.EDU> ------- Forwarded Message From thrun at gmdzi.uucp Mon Mar 18 23:39:54 1991 From: thrun at gmdzi.uucp (Sebastian Thrun) Date: Tue, 19 Mar 91 03:39:54 -0100 Subject: No subject Message-ID: <9103190239.AA01072@gmdzi.gmd.de> Well, there is a new TR available on the neuroprose archive which is more or less an extended version of the NIPS paper I announced some weeks ago: ON PLANNING AND EXPLORATION IN NON-DISCRETE WORLDS Sebastian Thrun (German National Research Center for Computer Science, St. Augustin, FRG) and Knut Moeller (Bonn University, Bonn, FRG) The application of reinforcement learning to control problems has received considerable attention in the last few years [Anderson86, Barto89, Sutton84]. In general, there are two principles for solving reinforcement learning problems: direct and indirect techniques, both having their advantages and disadvantages. We present a system that combines both methods. By interaction with an unknown environment, a world model is progressively constructed using the backpropagation algorithm. For optimizing actions with respect to future reinforcement, planning is applied in two steps: an experience network proposes a plan, which is subsequently optimized by gradient descent with a chain of model networks. While operating in a goal-oriented manner due to the planning process, the experience network is trained. Its accumulating experience is fed back into the planning process in the form of initial plans, such that planning can be gradually reduced. In order to ensure complete system identification, a competence network is trained to predict the accuracy of the model. This network enables purposeful exploration of the world.
The appropriateness of this approach to reinforcement learning is demonstrated by three different control experiments, namely a target tracking, a robotics and a pole balancing task. Keywords: backpropagation, connectionist networks, control, exploration, planning, pole balancing, reinforcement learning, robotics, neural networks, and, and, and... - ------------------------------------------------------------------------- The TR can be retrieved by ftp: unix> ftp cheops.cis.ohio-state.edu Name: anonymous Guest Login ok, send ident as password Password: neuron ftp> binary ftp> cd pub ftp> cd neuroprose ftp> get thrun.plan-explor.ps.Z ftp> bye unix> uncompress thrun.plan-explor.ps unix> lpr thrun.plan-explor.ps - ------------------------------------------------------------------------- If you have trouble in ftping the files, do not hesitate to contact me. --- Sebastian Thrun (st at gmdzi.uucp, st at gmdzi.gmd.de) ------- End of Forwarded Message From bradley at ivy.Princeton.EDU Tue Mar 19 16:46:29 1991 From: bradley at ivy.Princeton.EDU (Bradley Dickinson) Date: Tue, 19 Mar 91 16:46:29 EST Subject: Award Nominations Due 3/31/91 Message-ID: <9103192146.AA02632@ivy.Princeton.EDU> Nominations Sought for IEEE Neural Networks Council Award The IEEE Neural Networks Council is soliciting nominations for an award, to be presented for the first time at the July 1991 International Joint Conference on Neural Networks. IEEE Transactions on Neural Networks Outstanding Paper Award This is an award of $500 for the outstanding paper published in the IEEE Transactions on Neural Networks in the previous two-year period. For 1991, all papers published in 1990 (Volume 1) in the IEEE Transactions on Neural Networks are eligible. For a paper with multiple authors, the award will be shared by the coauthors. Nominations must include a written statement describing the outstanding characteristics of the paper. The deadline for receipt of nominations is March 31, 1991. Nominations should be sent to Prof. Bradley W. Dickinson, NNC Awards Chair, Dept. of Electrical Engineering, Princeton University, Princeton, NJ 08544-5263. Questions about the award may be directed to Prof. Bradley W. Dickinson: From ST401843 at brownvm.brown.edu Tue Mar 19 18:29:54 1991 From: ST401843 at brownvm.brown.edu (thanasis kehagias) Date: Tue, 19 Mar 91 18:29:54 EST Subject: Paper announcements Message-ID: I have just placed two papers of mine in the ohio-state archive. The first one is in the file kehagias.srn1.ps.Z and the relevant figures in the companion file kehagias.srn1fig.ps.Z. The second one is in the file kehagias.srn2.ps.Z and the relevant figures in the companion file kehagias.srn2fig.ps.Z. Detailed instructions for getting and printing these files will be included in the end of this message. Some of you have received versions of these files in email previously. In that case read a postscript at the end of this message. ----------------------------------------------------------------------- Stochastic Recurrent Network training by the Local Backward-Forward Algorithm Ath. Kehagias Brown University Div. of Applied Mathematics We introduce Stochastic Recurrent Networks, which are collections of interconnected finite state units. At every discrete time step, each unit goes into a new state, following a probability law that is conditional on the state of neighboring units at the previous time step. 
A network of this type can learn a stochastic process, where ``learning'' means maximizing the Likelihood function of the model. A new learning (i.e. Likelihood maximization) algorithm is introduced, the Local Backward-Forward Algorithm. The new algorithm is based on the Baum Backward-Forward Algorithm (for Hidden Markov Models) and improves speed of learning substantially. Essentially, the Local Backward-Forward Algorithm is a version of Baum's algorithm which estimates local transition probabilities rather than the global transition probability matrix. Using the local BF algorithm, we train SRN's that solve the 8-3-8 encoder problem and the phoneme modelling problem. This is the paper kehagias.srn1.ps.Z, kehagias.srn1fig.ps.Z. The paper srn1 has undergone significant revision. It had too many typos, bad notation and also needed reorganization. All of these have been addressed. Thanks to N. Chater, S. Nowlan, A.T. Tsoi and M. Perrone for many useful suggestions along these lines. -------------------------------------------------------------------- Stochastic Recurrent Networks: Prediction and Classification of Time Series Ath. Kehagias Brown University Div. of Applied Mathematics We use Stochastic Recurrent Networks of the type introduced in [Keh91a] as models of finite-alphabet time series. We develop the Maximum Likelihood Prediction Algorithm and the Maximum A Posteriori Classification Algorithm (which can both be implemented in recurrent PDP form). The prediction problem is: given the output up to the present time, Y^1,...,Y^t, and the input up to the immediate future, U^1,...,U^t+1, predict with Maximum Likelihood the output Y^t+1 that the SRN will produce in the immediate future. The classification problem is: given the output up to the present time, Y^1,...,Y^t, and the input up to the present time, U^1,...,U^t, as well as a number of candidate SRN's, M_1, M_2, ..., M_K, find the network that has Maximum Posterior Probability of producing Y^1,...,Y^t. We apply our algorithms to prediction and classification of speech waveforms. This is the paper kehagias.srn2.ps.Z, kehagias.srn2fig.ps.Z. ----------------------------------------------------------------------- To get these files, do the following: gvax> ftp cheops.cis.ohio-state.edu 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: neuron ftp> Guest login ok, access restrictions apply. ftp> cd pub/neuroprose ftp> binary 200 Type set to I. ftp> get kehagias.srn1.ps.Z ftp> get kehagias.srn1fig.ps.Z ftp> get kehagias.srn2.ps.Z ftp> get kehagias.srn2fig.ps.Z ftp> quit gvax> uncompress kehagias.srn1.ps.Z gvax> uncompress kehagias.srn1fig.ps.Z gvax> uncompress kehagias.srn2.ps.Z gvax> uncompress kehagias.srn2fig.ps.Z gvax> lqp kehagias.srn1.ps gvax> lqp kehagias.srn1fig.ps gvax> lqp kehagias.srn2.ps gvax> lqp kehagias.srn2fig.ps POSTSCRIPT: All of the people who sent a request (about a month ago) for srn1 in its original form are on my mailing list, and most got copies of the new versions of srn1 and srn2 by email. Some of these files did not make it through the internet because of size restrictions etc., so you may want to ftp them now. Incidentally, if you want to be removed from the mailing list (for when the next paper in the series comes by), send me mail.
From SAYEGH at CVAX.IPFW.INDIANA.EDU Wed Mar 20 21:44:52 1991
From: SAYEGH at CVAX.IPFW.INDIANA.EDU (SAYEGH@CVAX.IPFW.INDIANA.EDU)
Date: Wed, 20 Mar 1991 21:44:52 EST
Subject: 4th NN-PDP Conference, Indiana-Purdue
Message-ID: <910320214452.20201ede@CVAX.IPFW.INDIANA.EDU>

FOURTH CONFERENCE ON NEURAL NETWORKS
------------------------------------
AND PARALLEL DISTRIBUTED PROCESSING
-----------------------------------
INDIANA UNIVERSITY-PURDUE UNIVERSITY
------------------------------------
11, 12, 13 APRIL 1991
-------------------

The Fourth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University will be held on the Fort Wayne Campus, April 11, 12, and 13, 1991. Conference registration is $20 (on site) and students attend free. Some limited financial support may also be available to allow students to attend. Inquiries should be addressed to:

email: sayegh at ipfwcvax.bitnet

US mail:
Prof. Samir Sayegh
Physics Department
Indiana University-Purdue University
Fort Wayne, IN 46805

FAX: (219) 481-6880
Voice: (219) 481-6306 or 481-6157

Talks will be held:
------------------
Thursday April 11, 6pm - 9pm -- Classroom Medical 159
Friday Morning (Tutorial Session) -- Kettler G46
Friday Afternoon and Evening -- Classroom Medical 159
Saturday Morning -- Kettler G46

Free parking is available on the TOP floor of the parking garage. Special hotel rates (Purdue corporate rates) are available at Hall's Guest House, which is a 10 min drive from campus. The number is (219) 489-2521.

The following talks will be presented:
-------------------------------------

network analysis:
----------------

P.G. Madhavan, B. Xu, B. Stephens, Purdue University, Indianapolis
On the Convergence Speed & the Generalization Ability of Tri-state Neural Networks

Mohammad R. Sayeh, Southern Illinois University at Carbondale
Dynamical-System Approach to Unsupervised Classifier

Samir I. Sayegh, Purdue University, Ft Wayne
Symbolic Manipulation and Neural Networks

Zhenni Wang, Ming T. Tham & A.J. Morris, University of Newcastle upon Tyne
Multilayer Neural Networks: Approximated Canonical Decomposition of Nonlinearity

M.G. Royer & O.K. Ersoy, Purdue University, W. Lafayette
Classification Performance of PSHNN with BackPropagation Stages

Sean Carroll, Tri-State University
Single-Hidden-Layer Neural Nets Can Approximate B-Splines

M. D. Tom & M.F. Tenorio, Purdue University, W. Lafayette
A Neuron Architecture with Short Term Memory

applications:
------------

G. Allen Pugh, Purdue University, Fort Wayne
Further Design Considerations for Back Propagation

I. H. Shin and K. J. Cios, The University of Toledo
A Neural Network Paradigm and Architecture for Image Pattern Recognition

R. E. Tjia, K. J. Cios and B. N. Shabestari, The University of Toledo
Neural Network in Identification of Car Wheels from Gray Level Images

S. Sayegh, C. Pomalaza-Raez, B. Beer and E. Tepper, Purdue University, Ft Wayne
Pitch and Timbre Recognition Using Neural Network

Jacek Witaszek & Colleen Brown, DePaul University
Automatic Construction of Connectionist Expert Systems

Robert Zerwekh, Northern Illinois University
Modeling Learner Performance: Classifying Competence Levels Using Adaptive Resonance Theory

biological aspects:
------------------

R. Manalis, Indiana Purdue University at Fort Wayne
Short Term Memory Implicated in Twitch Facilitation

Edgar Erwin, K. Obermayer, University of Illinois
Formation and Variability of Somatotopic Maps with Topological Mismatch
T. Alvager, B. Humpert, P. Lu, C. Roberts, Indiana State University
DNA Sequence Analysis with a Neural Network

Christel Kemke, DFKI, Germany
Towards a Synthesis of Neural Network Behavior

optimization and genetic algorithms:
-----------------------------------

Robert L. Sedlmeyer, Indiana-Purdue, Fort Wayne
A Genetic Algorithm to Estimate the Edge-Integrity of Halin Graphs

J.L. Noyes, Wittenberg University
Neural Network Optimization Methods

Omer Tunali & Ugur Halici, University of Missouri/Rolla
A Boltzmann Machine for Hypercube Embedding Problem

William G. Frederick & Curt M. White, Indiana-Purdue University
Genetic Algorithms and a Variation on the Steiner Point Problem

Arun Jagota, State University of NY at Buffalo
A Forgetting Rule and Other Extensions to the Hopfield-Style Network Storage Rule and Their Applications

tutorial lectures:
-----------------

Marc Clare, Lincoln National, Ft Wayne
An Introduction to the Methodology of Building Neural Networks

Ingrid Russell, University of Hartford
Integrating Neural Networks into an AI Course

Arun Jagota, State University of NY at Buffalo
The Hopfield Model and Associative Memory

Ingrid Russell, University of Hartford
Self Organization and Adaptive Resonance Theory Models

From tds at ai.mit.edu Thu Mar 21 02:32:37 1991
From: tds at ai.mit.edu (Terence D. Sanger)
Date: Thu, 21 Mar 91 02:32:37 EST
Subject: LMS-tree source code available
Message-ID: <9103210732.AA01258@cauda-equina>

Source code for a sample implementation of the LMS-tree algorithm is now available by anonymous ftp. The code includes a bare-bones implementation of the algorithm incorporated into a demo program which predicts future values of the Mackey-Glass differential delay equation. The demo will run under X11R3 or higher, and has been tested on Sun-3 and Sun-4 machines. Since this is a deliberately simple implementation that does not include tree pruning or other optimizations, many improvements are possible. I encourage any and all suggestions, comments, or questions.

Terry Sanger (tds at ai.mit.edu)

To obtain and execute the code:

> mkdir lms-trees
> cd lms-trees
> ftp ftp.ai.mit.edu
login: anonymous
password:
ftp> cd pub
ftp> binary
ftp> get sanger.mackey.tar.Z
ftp> quit
> uncompress sanger.mackey.tar.Z
> tar xvf sanger.mackey.tar
> make mackey
> mackey

Some documentation is included in the header of the file mackey.c. A description of the algorithm can be found in the paper I recently announced on this network:

Basis-Function Trees as a Generalization of Local Variable Selection Methods for Function Approximation

Abstract

Local variable selection has proven to be a powerful technique for approximating functions in high-dimensional spaces. It is used in several statistical methods, including CART, ID3, C4, MARS, and others. In this paper I present a tree-structured network which is a generalization of these techniques. The network provides a framework for understanding the behavior of such algorithms and for modifying them to suit particular applications.

To obtain the paper:

> ftp cheops.cis.ohio-state.edu
login: anonymous
password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get sanger.trees.ps.Z
ftp> quit
> uncompress sanger.trees.ps.Z
> lpr -h -s sanger.trees.ps

Good Luck!
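Since the demo above predicts the Mackey-Glass delay-differential equation, here is a minimal sketch of how such a series can be generated, assuming the commonly used parameters (a = 0.2, b = 0.1, delay tau = 17) and a crude Euler step of one time unit. The constants, the function name and the integration scheme are illustrative assumptions and are not taken from Sanger's mackey.c.

    def mackey_glass(n_steps, tau=17, a=0.2, b=0.1, x0=1.2):
        # dx/dt = a * x(t - tau) / (1 + x(t - tau)**10) - b * x(t),
        # integrated with unit Euler steps; the first tau+1 values are held at x0.
        history = [x0] * (tau + 1)
        for _ in range(n_steps):
            x_tau = history[-(tau + 1)]   # delayed value x(t - tau)
            x = history[-1]               # current value x(t)
            history.append(x + a * x_tau / (1.0 + x_tau ** 10) - b * x)
        return history[tau + 1:]          # drop the constant start-up segment

    series = mackey_glass(500)
    print(series[:5])

A predictor such as the LMS-tree demo would then be trained to map a short window of past samples to a future value; the exact setup used by the demo is documented in the header of mackey.c.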
From PetersonDM at computer-science.birmingham.ac.uk Thu Mar 21 10:18:25 1991
From: PetersonDM at computer-science.birmingham.ac.uk (PetersonDM@computer-science.birmingham.ac.uk)
Date: Thu, 21 Mar 91 15:18:25 GMT
Subject: Cognitive Science at Birmingham
Message-ID: <9103211518.aa03564@james.Cs.Bham.AC.UK>

============================================================================
University of Birmingham
Graduate Studies in COGNITIVE SCIENCE
============================================================================

The Cognitive Science Research Centre at the University of Birmingham comprises staff from the Departments/Schools of Psychology, Computer Science, Philosophy and Linguistics, and supports teaching and research in the inter-disciplinary investigation of mind and cognition. The Centre offers both MSc and PhD programmes.

MSc in Cognitive Science

The MSc programme is a 12-month conversion course, including a 4-month supervised project. The course places particular stress on the relation between biological and computational architectures.

Compulsory courses: AI Programming, Overview of Cognitive Science, Knowledge Representation Inference and Expert Systems, General Linguistics, Human Information Processing, Structures for Data and Knowledge, Philosophical Questions in Cognitive Science, Human-Computer Interaction, Biological and Computational Architectures, The Computer and the Mind, Current Issues in Cognitive Science.

Option courses: Artificial and Natural Perceptual Systems, Speech and Natural Language, Parallel Distributed Processing.

It is expected that students will have a good degree in psychology, computing, philosophy or linguistics. Funding is available through SERC and HTNT.

PhD in Cognitive Science

For 1991 there are 3 SERC studentships available for PhD-level research into a range of topics including:

o computational modelling of emotion
o computational modelling of cognition
o interface design
o computational and psychophysical approaches to vision

Computing Facilities

Students have access to ample computing facilities, including networks of Apollo, Sun and Sparc workstations in the Schools of Computer Science and Psychology.

Contact

For further details, contact:
Dr. Mike Harris
CSRC, School of Psychology, University of Birmingham,
PO Box 363, Edgbaston, Birmingham B15 2TT, UK.
Phone: (021) 414 4913
Email: HARRIMWG at ibm3090.bham.ac.uk

From HORN%TAUNIVM.BITNET at BITNET.CC.CMU.EDU Thu Mar 21 18:41:00 1991
From: HORN%TAUNIVM.BITNET at BITNET.CC.CMU.EDU (HORN%TAUNIVM.BITNET@BITNET.CC.CMU.EDU)
Date: 21 Mar 91 18:41 IST
Subject: BITNET mail follows
Message-ID: <0540CD966200006E@BITNET.CC.CMU.EDU>

From David Thu Mar 21 18:36:21 1991
From: David (+fax)
Date: 21 March 91, 18:36:21 IST
Subject: No subject
Message-ID:

The following preprint is available. Requests can be sent to HORN at TAUNIVM.BITNET:

Segmentation, Binding and Illusory Conjunctions

D. HORN, D. SAGI and M. USHER

Abstract

We investigate binding within the framework of a model of excitatory and inhibitory cell assemblies which form an oscillating neural network. Our model is composed of two such networks which are connected through their inhibitory neurons. The excitatory cell assemblies represent memory patterns. The latter have different meanings in the two networks, representing two different attributes of an object, such as shape and color. The networks segment an input which contains mixtures of such pairs into staggered oscillations of the relevant activities.
Moreover, the phases of the oscillating activities representing the two attributes in each pair lock with each other to demonstrate binding. The system works very well for two inputs, but displays faulty correlations when the number of objects is larger than two. In other words, the network conjoins attributes of different objects, thus showing the phenomenon of "illusory conjunctions", as in human vision.

From christie at ee.washington.edu Fri Mar 22 15:05:48 1991
From: christie at ee.washington.edu (Richard D. Christie)
Date: Fri, 22 Mar 91 12:05:48 PST
Subject: Conference Announcement - NN's & Power Systems (ANNPS 91)
Message-ID: <9103222005.AA03710@wahoo.ee.washington.edu>

*** Conference Announcement ***

First International Forum on Applications of Neural Networks to Power Systems (ANNPS 91)

*** July 23-26, 1991 *** Seattle, Washington
(This is the hot and sunny time of the year in Seattle - Don't miss it!)

The focus is on state-of-the-art applications of neural network technology to complex power system problems. There will be papers, tutorials and panel discussions. A banquet cruise, one free lunch and a tour of the Boeing 747 plant in Everett are included in the conference fee.

Conference fee: $125, $150 after July 1
Students (meetings and proceedings, no fun stuff): $25

For registration information, forms and questions, contact Jan Kvamme (kwa-mi) at

email: jmk6112 at u.washington.edu
FAX: (206) 543-2352
Phone: (206) 543-5539
snailmail: Engineering Continuing Education, GG-13
           University of Washington
           4725 30th Ave NE
           Seattle, WA 98105, USA

Sponsored by:
National Science Foundation
University of Washington
IEEE Power Engineering Society, Seattle Section
Puget Sound Power & Light Company

From hussien at circe.arc.nasa.gov Sat Mar 23 16:16:09 1991
From: hussien at circe.arc.nasa.gov (The Sixty Thousand Dollar Man)
Date: Sat, 23 Mar 91 13:16:09 PST
Subject: BITNET mail follows
In-Reply-To: HORN%TAUNIVM.BITNET@BITNET.CC.CMU.EDU's message of 21 Mar 91 18:41 IST <0540CD966200006E@BITNET.CC.CMU.EDU>
Message-ID: <9103232116.AA06759@circe.arc.nasa.gov.>

I would like to get a copy of your article "Segmentation, Binding and Illusory Conjunctions". My mail address is:

Dr. Bassam Hussien
Mail Stop 210-9
NASA Ames Research Center
Moffett Field, CA 94035

e-mail: hussien at circe.arc.nasa.gov

Thanks.
bassam

From lazzaro at hobiecat.cs.caltech.edu Sun Mar 24 01:09:10 1991
From: lazzaro at hobiecat.cs.caltech.edu (John Lazzaro)
Date: Sat, 23 Mar 91 22:09:10 pst
Subject: No subject
Message-ID:

******* PLEASE DO NOT DISTRIBUTE TO OTHER MAILING LISTS *******

Caltech VLSI CAD Tool Distribution
----------------------------------

We are offering the hardware implementation community on the "connectionists" mailing list a pre-release version of the Caltech electronic CAD distribution. This distribution contains tools for schematic capture, netlist creation, and analog and digital simulation (log), IC mask layout, extraction, and DRC (wol), simple chip compilation (wolcomp), MOSIS fabrication request generation (mosis), netlist comparison (netcmp), data plotting (view) and postscript graphics editing (until). These tools were used exclusively for the design and test of all the integrated circuits described in Carver Mead's book "Analog VLSI and Neural Systems". Until was used as the primary tool for figure creation for the book.
The distribution also contains an example of an analog VLSI chip that was designed and fabricated with these tools, and an example of an Actel field-programmable gate array design that was simulated and converted to Actel format with these tools. These tools are distributed under a license very similar to the GNU license; the minor changes protect Caltech from liability.

To use these tools, you need:

1) A Unix workstation that runs X11r3, X11r4, or Openwindows
2) A color screen
3) Gcc or another ANSI-standard compiler

Right now only Sun Sparcstations are officially supported, although resourceful users have the tools running on Sun 3, HP Series 300, and Decstations. If you don't have a Sparcstation or an HP 300, only take the package if you feel confident in your C/Unix abilities to do the porting required; someday soon we will integrate the changes back into the sources officially, although many "ifdef mips" are already in the code.

If you are interested in some or all of these tools:

1) ftp to hobiecat.cs.caltech.edu on the Internet,
2) log in as anonymous and use your username as the password,
3) cd ~ftp/pub/chipmunk,
4) copy the file README, which contains more information.

We are unable to help users who do not have Internet ftp access. Please do not post this announcement to a mailing list or Usenet group; we are hoping to catch a few more bugs through this pre-release before a general distribution begins.

******* PLEASE DO NOT DISTRIBUTE TO OTHER MAILING LISTS *******

From barryf at ee.su.OZ.AU Tue Mar 26 15:53:26 1991
From: barryf at ee.su.OZ.AU (Barry G. Flower, Sydney Univ. Elec. Eng., Tel: (+61-2)
Date: Tue, 26 Mar 91 15:53:26 EST
Subject: ACNN'92 Call For Papers
Message-ID: <9103260553.AA10365@ee.su.oz.au>

PRELIMINARY CALL FOR PAPERS

Third Australian Conference On Neural Networks (ACNN'92)
February 1992
The Australian National University, Canberra, Australia

The third Australian conference on neural networks will be held in Canberra at the Australian National University during the first week of February 1992. This conference is interdisciplinary, with emphasis on cross-discipline communication between Neuroscientists, Engineers, Computer Scientists, Mathematicians and Psychologists concerned with understanding the integrative nature of the nervous system and its implementation in hardware/software.

The categories for submissions include:

1 - Neuroscience: Integrative function of neural networks in vision, audition, motor, somatosensory and autonomic functions; Synaptic function; Cellular information processing;
2 - Theory: Learning; generalisation; complexity; scaling; stability; dynamics;
3 - Implementation: Hardware implementation of neural nets; Analogue and digital VLSI implementation; Optical implementations;
4 - Architectures and Learning Algorithms: New architectures and learning algorithms; hierarchy; modularity; learning pattern sequences; Information integration;
5 - Cognitive Science and AI: Computational models of cognition and perception; Reasoning; Concept formation; Language acquisition; Neural net implementation of expert systems;
6 - Applications: Application of neural nets to signal processing and analysis; Pattern recognition: speech, machine vision; Motor control; Robotics;

ACNN'92 will feature invited keynote speakers in the areas of neuroscience, learning, modelling and implementations. The program will include pre-conference tutorials, presentations and poster sessions. Proceedings will be printed and distributed to the attendees.
There will be no parallel sessions.

Submission Procedures:

Original research contributions are solicited and will be internationally refereed. Authors must submit by August 30, 1991:

1 - five copies of a manuscript of up to four pages,
2 - five copies of a single-page abstract of at most 100 words, and
3 - a covering letter indicating the submission title and the full names and addresses of all authors, and to which author correspondence should be addressed.

Authors need to indicate at the top of each copy of the manuscript and abstract pages their preference for an oral or poster presentation, and specify one of the above six broad categories. Note that names or addresses of the authors should be omitted from the manuscript and the abstract and should be included only on the covering letter. Authors will be notified by November 1, 1991 whether or not their submissions are accepted, and are expected to prepare a revised manuscript (up to four pages) by December 13, 1991.

Submissions should be mailed to:

Mrs Agatha Shotam
Secretariat ACNN'92
Sydney University Electrical Engineering
NSW 2006 Australia

Registration material may be obtained by writing to Mrs Agatha Shotam at the address above, or by Tel: (+61-2) 692 4214; Fax: (+61-2) 692 3847; Email: acnn92 at ee.su.oz.au.

Deadline for Submissions is August 30, 1991

Please Post

From LAUTRUP at nbivax.nbi.dk Wed Mar 27 07:47:00 1991
From: LAUTRUP at nbivax.nbi.dk (Benny Lautrup)
Date: Wed, 27 Mar 1991 13:47 +0100 (NBI, Copenhagen)
Subject: POSITIONS AVAILABLE IN NEURAL NETWORKS
Message-ID:

POSITIONS AVAILABLE IN NEURAL NETWORKS

Recently, the Danish Research Councils funded the setting up of a Computational Neural Network Centre (CONNECT). There will be some positions as graduate students, postdocs, and more senior visiting scientists available in connection with the centre. Four of the junior (i.e. student and postdoc) positions will be funded directly from the centre grant and have been allotted to the main activity areas as described below. We are required to fill these very quickly to get the centre up and running according to the plans of the program under which it was funded, so the deadline for applying for them is very soon, APRIL 25. If there happen to be exceptionally qualified people in the relevant areas available right now, they should inform us immediately.

We are also sending this letter because there may be other positions available in the future. These will generally be externally funded. Normally the procedure would be for us first to identify a good candidate and then to apply to research councils, foundations and/or international programs (e.g. NATO, EC, Nordic Council) for support. This requires some time, so if an applicant is interested in coming here from the fall of 1992, the procedure should be underway in the fall of 1991.

The four areas for the present positions are:

Biological sequence analysis

Development of new theoretical tools and computational methods for analyzing the macromolecular structure and function of biological sequences. The focus will be on applying these tools and methods to specific problems in biology, including pre-mRNA splicing and similarity measures for DNA sequences to be used in constructing phylogenetic trees. The applicant is expected to have a thorough knowledge of experimental molecular biology, coupled with experience in mathematical methods for describing complex biological phenomena.
This position will be at the Department of Structural Properties of Materials and the Institute for Physical Chemistry at the Technical University of Denmark.

Analog VLSI for neural networks

Development of VLSI circuits in analog CMOS for the implementation of neural networks and their learning algorithms. The focus will be on the interaction between network topology and the constraints imposed by VLSI technology. The applicant is expected to have a thorough knowledge of CMOS technology and analog electronics. Experience with the construction of large systems in VLSI, particularly combined analog-digital systems, is especially desirable. This position will be in the Electronics Institute at the Technical University of Denmark.

Neural signal processing

Theoretical analysis and implementation of new methods for optimizing architectures for neural networks, with applications in adaptive signal processing, as well as ``early vision''. The applicant is expected to have experience in mathematical modelling of complex systems using statistical or statistical mechanical methods. This position will be jointly in the Electronics Institute at the Technical University of Denmark and the Department of Optics and Fluid Dynamics, Risoe National Laboratory.

Optical neural networks

Theoretical and experimental investigation of optical neural networks. The applicant is expected to have a good knowledge of applied mathematics, statistics, and modern optics, particularly Fourier optics. This position will be in the Department of Optics and Fluid Dynamics, Risoe National Laboratory.

In all cases, the applicant is expected to have some background in neural networks and experience in programming in high-level languages. An applicant should send his or her curriculum vitae and publication list to

Benny Lautrup
Niels Bohr Institute
Blegdamsvej 17
DK-2100 Copenhagen
Denmark
Telephone: (45)3142-1616
Telefax: (45)3142-1016
E-mail: lautrup at nbivax.nbi.dk

before April 25. He/she should also have two letters of reference sent separately by people familiar with his/her work by the same date.

From terry at sdbio2.UCSD.EDU Wed Mar 27 13:31:32 1991
From: terry at sdbio2.UCSD.EDU (Terry Sejnowski)
Date: Wed, 27 Mar 91 10:31:32 PST
Subject: Retina Simulator
Message-ID: <9103271831.AA00991@sdbio2.UCSD.EDU>

I can make available for the cost of duplicating and postage ($20) a computer simulation of the retina, written in Fortran 77. The model has been accepted for publication in BIOLOGICAL CYBERNETICS and a preprint of the paper can be provided. For more information send an E-mail message to siminoff at ifado.uucp.

Robert Siminoff
From SCHNEIDER at vms.cis.pitt.edu Thu Mar 28 15:15:00 1991
From: SCHNEIDER at vms.cis.pitt.edu (SCHNEIDER@vms.cis.pitt.edu)
Date: Thu, 28 Mar 91 16:15 EDT
Subject: Impact of connectionism on Education - what has happened.
Message-ID: <9B5F57412DBF4025AC@vms.cis.pitt.edu>

Wanted: examples of applications of connectionism to education. I am writing an overview of connectionism for education. If you know any references or stories of the impact on education, please send a note to SCHNEIDER at PITTVMS on Bitnet, or to Walter Schneider, 3939 O'Hara St, Pittsburgh PA 15260, USA.

The topics of interest and potential impact of connectionism are:

conceptualization of learning
knowledge representation as it affects teaching
sequencing and nature of practice
computerized tutoring and intelligent tutoring projects

From goldberg at iris.usc.edu Thu Mar 28 14:12:01 1991
From: goldberg at iris.usc.edu (Kenneth Goldberg)
Date: Thu, 28 Mar 91 11:12:01 PST
Subject: Call for Papers
Message-ID:

Announcing: Workshop on Neural Networks in Robotics
University of Southern California
October 23-25, 1991

Sponsor: The Center for Neural Engineering at USC

The goal of the workshop will be to stimulate discussion on the current status and potential advances in this field. The workshop will be concerned with (but not limited to) issues such as:

Connectionist approaches to robot control
Combined machine/connectionist learning
Path planning and obstacle avoidance
Inverse kinematics and dynamics
Transfer of skills from humans to robots
Intelligent robots in manufacturing
Multiple interacting robot systems
Neural network architectures for robot control
Sensor fusion and interaction
Task learning by robots
Biological models for robot control

The Organizing Committee includes Michael Arbib (USC), Jacob Barhen (JPL), Andrew Barto (Univ of Massachusetts), George Bekey (USC) and Ken Goldberg (USC).

Submissions: 3 copies of extended abstracts of proposed presentations (2 to 4 pages in length) by May 15, 1991 to Prof. George Bekey, Chairman, Technical Program Committee, c/o Computer Science Department, University of Southern California, Los Angeles, California 90089-0782

From egnp46 at castle.edinburgh.ac.uk Fri Mar 29 14:50:51 1991
From: egnp46 at castle.edinburgh.ac.uk (D J Wallace)
Date: Fri, 29 Mar 91 14:50:51 WET
Subject: Research position available
Message-ID: <9103291450.aa10801@castle.ed.ac.uk>

POSTDOCTORAL POSITION IN NEURAL NETWORK MODELS AND APPLICATIONS
PHYSICS DEPARTMENT, UNIVERSITY OF EDINBURGH

Applications are invited for a postdoctoral research position in the Physics Department, University of Edinburgh, funded by a Science and Engineering Research Council grant to David Wallace and Alastair Bruce. The position is for two years, from October 1991. The group's interests span theoretical and computational studies of training algorithms, generalisation, dynamical behaviour and optimisation. Theoretical techniques utilise statistical mechanics and dynamical systems. Computational facilities include a range of systems in the Edinburgh Parallel Computing Centre, including a 400-node 1.8 Gbyte transputer system, a 64-node 1 Gbyte Meiko i860 machine and AMT DAPs, as well as workstations and graphics facilities. There are strong links with researchers in other departments, including David Willshaw and Keith Stenning (Cognitive Science), Richard Rohwer (Speech Technology), Alan Murray (Electrical Engineering) and Michael Morgan and Richard Morris (Pharmacology), and we are in two European Community Twinnings. Industrial collaborations have included applications with British Gas, British Petroleum, British Telecom, National Westminster Bank and Shell.

Applications supported by a cv and two letters of reference should be sent to
D.J. Wallace
Physics Department, University of Edinburgh
Kings Buildings, Edinburgh EH9 3JZ, UK
Email: ADBruce at uk.ac.ed and DJWallace at uk.ac.ed
Tel: 031 650 5250 or 5247

to arrive if possible by 30th April. Further particulars can be obtained from the same address.