From J65%TAUNIVM.BITNET at VMA.CC.CMU.EDU Fri Jul 1 08:48:01 1988
From: J65%TAUNIVM.BITNET at VMA.CC.CMU.EDU (amir toister)
Date: Fri, 01 Jul 88 08:48:01 IST
Subject: sub
Message-ID:

pl. add me to the mailing list. thanx, amir.

From munnari!smokey.oz.au!guy at uunet.UU.NET Fri Jul 1 01:16:23 1988
From: munnari!smokey.oz.au!guy at uunet.UU.NET (munnari!smokey.oz.au!guy@uunet.UU.NET)
Date: 01 Jul 88 14:46:23 +0930 (Fri)
Subject: Technical Report Available (was Fractal Representations)
In-Reply-To: Your message of Tue, 28 Jun 88 08:57:41 EDT. <8806281257.AA07119@bucasb.bu.edu>
Message-ID: <8807020150.AA07807@uunet.UU.NET>

Hello, Could I have a copy of your TR please? My address is: Guy Smith, Dept. of Computer Science, University of Adelaide, Adelaide, 5000, AUSTRALIA. Thanks, Guy Smith.

From munnari!cheops.eecs.unsw.oz.au!ashley at uunet.UU.NET Sun Jul 3 00:44:52 1988
From: munnari!cheops.eecs.unsw.oz.au!ashley at uunet.UU.NET (Ashley M. Aitken)
Date: Sat, 2 Jul 88 23:44:52 EST
Subject: Theories of Higher Brain Functions via Cerebral Neocortex ?
Message-ID: <8807030042.AA28121@uunet.UU.NET>

G'Day! I am presently undertaking to review theories and models of the cerebral neocortex which aim to explain higher brain functions. To date I am familiar with the following:

  Neuronal Group Selection -- Edelman, G.
  Value Unit Encoding -- Ballard, D.
  Neocortex Theory -- Marr, D.

I would be most grateful if anyone knowing of any references to other theories or models could please e-mail me the relevant information. Any leads will be most appreciated. Thanks in advance, sincerely, Ashley Aitken

E-MAIL: ashley at cheops.unsw.oz (ACSnet); ashley%cheops.unsw.oz at uunet.uu.net (ARPAnet); ashley at cheops.unsw.oz.au (ARPAnet); {uunet,ukc,ubc-vision,mcvax}!munnari!cheops.unsw.oz!ashley (UUCP); ashley%cheops.unsw.oz at australia (CSnet); ashley%cheops.unsw.oz at uk.ac.ukc (JAnet)

POSTAL: Academic Address: Computer Science Department, EECS, University of New South Wales, Box 1, PO KENSINGTON, N.S.W., 2033, AUSTRALIA. Ph. Aust (02) 697-4055. Residential Address: c/o Basser College, The Kensington Colleges, Box 24, PO KENSINGTON, 3033, AUSTRALIA. Ph. Aust (02) 663-8117.

From pratt at paul.rutgers.edu Tue Jul 5 11:40:46 1988
From: pratt at paul.rutgers.edu (Lorien Y. Pratt)
Date: Tue, 5 Jul 88 11:40:46 EDT
Subject: Request for Boltzmann Machine information
Message-ID: <8807051540.AA03734@devo.rutgers.edu>

I am attempting to perform a thorough survey of papers which analyze the Boltzmann Machine algorithm for learning in neural networks. I would appreciate any of the following:

  o Boltzmann machine bibliography files
  o Preprints of Boltzmann machine papers
  o Pointers to Boltzmann machine papers
  o Any papers referring to a complexity analysis of the Boltzmann machine
  o Any papers presenting the Boltzmann machine explicitly in algorithmic form
  o Any comparisons between the complexity of the Boltzmann and other neural network learning algorithms (e.g. back propagation)
  o Pointers to Boltzmann machine implementations, as well as empirical results on learning ability
  o Information on the parallel implementation of the Boltzmann machine

One of my higher priorities is to establish once and for all whether or not the Boltzmann machine algorithm is so inferior to Back Propagation that it merits no further study, as was implied by Geoff Hinton at the recent Machine Learning conference. Therefore, if you can point out the kinds of learning problems, if any, on which the Boltzmann machine is superior, then I'd appreciate hearing about them.
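
For readers following the thread, the weight update at the core of the standard Boltzmann machine procedure (the rule given in the Hinton and Sejnowski chapter cited in the replies below) can be sketched as follows, where $s_i$ is the binary state of unit $i$ and the angle brackets denote equilibrium averages collected with the visible units clamped to training patterns and running free, respectively:

$$
\Delta w_{ij} \;=\; \eta \left( \langle s_i s_j \rangle_{\text{clamped}} \;-\; \langle s_i s_j \rangle_{\text{free}} \right)
$$

The averages are estimated by annealing the network to thermal equilibrium in each phase, which is the source of the slowness discussed in the replies.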

From pratt at paul.rutgers.edu Tue Jul 5 11:42:20 1988
From: pratt at paul.rutgers.edu (Lorien Y. Pratt)
Date: Tue, 5 Jul 88 11:42:20 EDT
Subject: Request for Boltzmann Machine information (forgot my address)
Message-ID: <8807051542.AA03748@devo.rutgers.edu>

Sorry, I forgot my address in the previous posting. It is:

  Lorien Y. Pratt
  Computer Science Department, Rutgers University, Busch Campus
  Piscataway, NJ 08854
  pratt at paul.rutgers.edu
  (201) 932-4714

From hgigley at note.nsf.gov Tue Jul 5 17:18:03 1988
From: hgigley at note.nsf.gov (Helen M. Gigley)
Date: Tue, 05 Jul 88 17:18:03 -0400
Subject: new address
Message-ID: <8807051718.aa15204@note.note.nsf.gov>

Please update my address on the connectionist file to hgigley at note.nsf.gov effective immediately. Mailing address is: NSF, 1800 G Street, N.W., Room 304, Washington, D.C. 20550. Thanks, Helen M. Gigley

From jose at tractatus.bellcore.com Wed Jul 6 13:03:25 1988
From: jose at tractatus.bellcore.com (Stephen J Hanson)
Date: Wed, 6 Jul 88 13:03:25 EDT
Subject: Request for Boltzmann Machine information
Message-ID: <8807061703.AA05040@tractatus.bellcore.com>

I didn't think Geoff said it was so inferior... in fact there is lots of reason to believe it is quite similar to backpropagation, noting of course that it can compute higher order correlations between input and output. The annealings make it slow... but with a VLSI implementation, as was done recently at Bellcore, speed could be much improved...

jose

From hinton at ai.toronto.edu Wed Jul 6 14:21:30 1988
From: hinton at ai.toronto.edu (Geoffrey Hinton)
Date: Wed, 6 Jul 88 14:21:30 EDT
Subject: Request for Boltzmann Machine information
In-Reply-To: Your message of Tue, 05 Jul 88 11:40:46 -0400.
Message-ID: <88Jul6.114149edt.70@neat.ai.toronto.edu>

Lorien Pratt's message says that I implied that the Boltzmann machine deserves no further study. This is not what I believe. It is certainly much slower than back-propagation at many tasks and appears to scale more poorly than BP when the depth of the network is increased. So it would be silly to use it instead of back-propagation for a typical current application.

However, there may still be methods of improving the Boltzmann machine learning algorithm a lot. For example, Anderson and Peterson have reported that a mean-field version of the procedure learns much faster. The mean-field version is basically a learning procedure for Hopfield-Tank nets (which are the mean-field version of Boltzmann machines). It allows learning in Hopfield-Tank nets that have hidden units. Also, the BM learning procedure is easier than BP to put directly into hardware. Alspector at Bellcore has recently fabricated and tested a chip that goes about 100,000 times faster than a minicomputer simulation. Finally, the BM procedure can learn to model the higher-order statistics of the desired state vectors of the output units. BP cannot do this.

In summary, the existing, standard Boltzmann machine learning procedure is much slower than BP at tasks for which BP is applicable. However, the mean-field version may be closer in efficiency to BP, and other developments are possible.

REFERENCES

The best general description of Boltzmann Machines is:

G. E. Hinton and T. J. Sejnowski, "Learning and Relearning in Boltzmann Machines", in D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume I: Foundations, MIT Press, Cambridge, MA, 1986.
Some recent developments are described in section 7 of:

G. E. Hinton, "Connectionist Learning Procedures", Technical Report CMU-CS-87-115 (version 2). Available from the Computer Science Dept., CMU, Pittsburgh, PA 15213.

The hardware implementation is described in:

J. Alspector and R. B. Allen, "A neuromorphic VLSI learning system", in P. Losleben, editor, Advanced Research in VLSI: Proceedings of the 1987 Stanford Conference, MIT Press, Cambridge, Mass., 1987.

The mean field learning procedure is described in:

C. Peterson and J. R. Anderson, "A Mean Field Theory Learning Algorithm for Neural Networks", MCC Technical Report EI-259-87, Microelectronics and Computer Technology Corporation, 1987.

Geoff

From INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU Wed Jul 6 19:50:00 1988
From: INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU)
Date: Wed, 6 Jul 88 18:50 EST
Subject: Fractal Representations of Neural Networks Responses
Message-ID:

For those of you who are interested in what I have received dealing with fractal representations of neural networks, I have so far only heard from John Merrill (merrill at bucasb.bu.edu), who sent me an abstract of his paper (I believe he also CC'ed it to connectionists), and Jordan Pollack (pollack at nmsu.csnet), who is working on a paper outline. It seems that there isn't a great deal of literature that people know about dealing with fractal representations of neural networks. I am very interested in this subject for its use in artificially evolving neural networks using genetic algorithms, and it seems that fractal genetic representation is a way of compacting a lot of complex information into a small number of descriptors (a la the Mandelbrot Set).

-Thomas Edwards
ins_atge at jhuvms
812 Brantford Avenue, Silver Spring, MD 20904

From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Wed Jul 6 21:37:58 1988
From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (Thanasis Kehagias)
Date: Wed, 06 Jul 88 21:37:58 EDT
Subject: Simulated Annealing
Message-ID:

When I was at the connectionist summer school at CMU a couple of weeks ago, Dave Touretzky mentioned that somebody (I believe it was Mass Sivilotti) had just finished building a simulated annealing chip that was running big problems (was it ca. 50 units?) at ms times. Does anyone have more, and more accurate, info?

Thanasis Kehagias

From unido!tumult.informatik.tu-muenchen.de!schmidhu at uunet.UU.NET Thu Jul 7 16:58:37 1988
From: unido!tumult.informatik.tu-muenchen.de!schmidhu at uunet.UU.NET (unido!tumult.informatik.tu-muenchen.de!schmidhu@uunet.UU.NET)
Date: Thu, 7 Jul 88 16:58:37 N
Subject: No subject
Message-ID: <8807071500.AQ08612@unido.uucp>
Received: by tumult.informatik.tu-muenchen.de (1.2/4.81) id AA10801; Thu, 7 Jul 88 16:58:37 -0200
Date: Thu, 7 Jul 88 16:58:37 -0200
From: Juergen Schmidhuber
Message-Id: <8807071458.AA10801 at tumult.informatik.tu-muenchen.de>
To: INS_ATGE at JHUVMS.BITNET, connectionists at cs.cmu.edu
Subject: Re: Fractal Representations of Neural Networks Responses
Cc: schmidhu at tumult.informatik.tu-muenchen.de

I would like to ask a related question: has there been any research on recognizing fractal objects with neural nets? The first thing that most neural networks do to some input is to apply a distributing transformation. At first glance this should greatly simplify the task of correlating the whole object to its parts (one might think of some measure of self-similarity) and then doing the classification.

Juergen Schmidhuber
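
A brief note on the mean-field version that Hinton's reply above refers to: in the Peterson-Anderson formulation cited in his references, the stochastic binary units of the Boltzmann machine are replaced by deterministic continuous units whose values are iterated to a fixed point at each temperature, and the learning rule uses the resulting correlations in place of the sampled ones. In sketch form:

$$
V_i \;=\; \tanh\!\Big(\tfrac{1}{T}\sum_j w_{ij} V_j\Big),
\qquad
\Delta w_{ij} \;=\; \eta\,\big( V_i V_j\big|_{\text{clamped}} \;-\; V_i V_j\big|_{\text{free}}\big)
$$

This removes the need for the lengthy stochastic sampling of the original procedure, which is why it is reported to learn much faster.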

From bukys at cs.rochester.edu Thu Jul 7 14:57:42 1988
From: bukys at cs.rochester.edu (bukys@cs.rochester.edu)
Date: Thu, 7 Jul 88 14:57:42 EDT
Subject: Rochester Connectionist Simulator -- ANNOUNCEMENTS
Message-ID: <8807071857.AA03986@hamal.cs.rochester.edu>

(1) The Rochester Connectionist Simulator (version 4.1) is now available via anonymous FTP from CS.Rochester.EDU. It is in the public/rcs directory. Read the README file there for more information. The distribution is too large to mail; the compressed tar file is 837K bytes, and the uncompressed tar file is 3 megabytes. If you have TeX and a PostScript printer, you should be able to produce your own copy of the 181-page manual. If you want a paper copy of the manual anyway, send a check for $10 per manual (payable to the University of Rochester) to Rose Peet at the address given below.

If you are unable to obtain anonymous FTP access to the simulator distribution, you can still order a copy the old way. Contact:

  Rose Peet
  Computer Science Department
  University of Rochester
  Rochester, NY 14627 (USA)

or , and she will send you the appropriate forms. We are currently charging $150 for a distribution tape and a manual. We do not have the facilities for generating invoices, so payment is required with any order.

(2) We are setting up electronic mailing lists for use by the users of the simulator. If you have the simulator, please sign up! You can reach other users of the simulator via the users' mailing list. If you are not on this mailing list, and wish to be added, send a note to . Please send bug reports to .

(3) Official bug patches to date (to be found in the FTP directory):

  rcs_v4.1.patch.01  fixes a SERIOUS problem in the logging feature.
  rcs_v4.1.patch.02  fixes some minor syntactic problems in the doc.

The patches can be applied by hand if necessary, but you will make your life easier if you obtain the widely-available "patch" program, and redirect each patch file into "patch -p".

From sylvie at hobiecat.caltech.edu Thu Jul 7 18:08:43 1988
From: sylvie at hobiecat.caltech.edu (sylvie ryckebusch)
Date: Thu, 7 Jul 88 15:08:43 pdt
Subject: Simulated Annealing
Message-ID: <8807072208.AA11079@hobiecat.caltech.edu>

A VLSI implementation of a Boltzmann machine is being done by Josh Alspector at Bellcore. An extensive description of this research is presented in the Proceedings of the 1987 Stanford Conference on Advanced Research in VLSI, Paul Losleben, ed.

mass sivilotti

From eric at mcc.com Thu Jul 7 22:49:02 1988
From: eric at mcc.com (Eric Hartman)
Date: Thu, 7 Jul 88 21:49:02 cdt
Subject: backprop vs Boltzmann
Message-ID: <8807080249.AA10150@little.aca.mcc.com>

Regarding the discussion about back propagation vs. the Boltzmann machine: we agree with Hinton's answer to Lorien Pratt on this question. Since we have recently completed an extensive comparison between the mean field theory [1] and the back propagation learning algorithm in a variety of learning and generalization situations [2], we would like to supplement his answer with the following remarks:

#1 For the mirror symmetry and clumps problems, mean field theory (MFT) and back propagation (BP) exhibit the same quality with respect to the number of learning cycles needed for learning and the generalization percentage. The two algorithms even tend to find remarkably similar solutions. These results hold both for fixed training sets and continuous (ongoing) learning.
(These conclusions assume that the two algorithms use the same activation function, either [0,1] or [-1,1], and that the midpoint is used as the success criterion in generalization.)

#2 The two algorithms also appear equivalent with respect to different scaling properties: number of training patterns and number of hidden layers. [In other words, our experience is somewhat different from Hinton's re the latter point (which, strictly speaking, was for the original Boltzmann machine (BM)).]

#3 We have made further checks of the MFT approximation beyond what was done in ref. [1] and find that it is indeed very good.

#4 The real advantage of MFT vs. BP is twofold: it allows for a more general architecture and usage [see #5 below], and it is more natural for a VLSI implementation. Also, on the simulation level, the parallelism inherent in MFT can be fully exploited on a SIMD architecture [this is not easy with the original BM].

#5 BP is of feed-forward nature. Hence there is a very strong distinction between input and output units. Being bidirectional, no such fixed distinction need exist for MFT. Its application areas therefore exceed pure feature recognition (input-output mappings). It can be used as a content-addressable memory. In ref. [2] we are exploring this possibility with substantial success. We use hidden units. So far we have achieved storage capacities of approximately 2N (where N is the number of visible units) for the case of retrieval with clamping (the given bits are known with certainty), and of at least roughly N for the case of error correction (no bits are known with certainty and hence none can be clamped). We consider these capacities to be lower bounds, as we are quite confident that we will find still better ways of using the learning algorithm and the hidden units to further increase these capacities.

Also, regarding the discussion a month or so ago about fully distributed analog content-addressable memories, we have made some investigation in this direction and find that MFT is quite capable of learning analog as well as discrete patterns.

1. C. Peterson and J.R. Anderson, "A Mean Field Theory Learning Algorithm for Neural Networks", MCC Technical Report MCC-EI-259-87. Published in Complex Systems 1, 995 (1987).
2. C. Peterson and E. Hartman, "Explorations of the Mean Field Theory Learning Algorithm", MCC Technical Report MCC-ACA-ST-064-88. [This paper is still subject to some polishing and will be announced on connectionists within a few weeks' time.]

-----------------------------
Carsten Peterson and Eric Hartman

From terry Thu Jul 7 21:11:31 1988
From: terry (Terry Sejnowski )
Date: Thu, 7 Jul 88 21:11:31 edt
Subject: Request for Boltzmann Machine information
Message-ID: <8807080111.AA05719@crabcake.cs.jhu.edu>

At the recent N'Euro 88 meeting in Paris, R. L. Chrisly, working with T. Kohonen, reported results from Boltzmann learning, back-propagation, and Kohonen's vector quantization applied to a classification problem in a noisy environment. Boltzmann produced classifications close to the optimal Bayes strategy; back-prop did well for small problems, but the performance deteriorated for large problems and was overtaken by Kohonen's network. Boltzmann, however, took much more time to learn and was sensitive to the input representation (a unary code for continuous-value inputs was better than an analog code). For more information write to Kohonen's lab in Finland. Kohonen mentioned to me recently that a two-level Kohonen net has significantly improved performance.
The reason that Boltzmann machines do well on statistical inference problems is that they are in principle capable of optimal Bayesian inference. See: Hinton & Sejnowski, "Optimal Perceptual Inference", Proc. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, June 1983.

Terry

-----

From josh at flash.bellcore.com Fri Jul 8 15:23:07 1988
From: josh at flash.bellcore.com (Joshua Alspector)
Date: Fri, 8 Jul 88 15:23:07 EDT
Subject: Stochastic learning chip
Message-ID: <8807081923.AA21117@flash.bellcore.com>

To clarify some previous postings, we have fabricated a chip based on a modified Boltzmann algorithm that learns. It can learn an XOR function in a few milliseconds. Patterns can be presented to it at about 100,000 per second. It is a test chip containing a small network in one corner that consists of 6 neurons and 15 two-way synapses for potentially full connectivity. We can initialize the network to any weight configuration and permanently disable some connections. We have also demonstrated the capability to do unsupervised competitive learning as well as supervised learning.

It turns out that our noise amplifiers do not give gaussian uncorrelated noise as we had hoped, but the noise that exists seems to be sufficient to help it learn. This bears out results in previous simulations that show the noise distributions don't matter too much as long as they provide a stochastic element. Therefore, it is not completely accurate to call it a Boltzmann machine or even a simulated annealing machine. We can do only toy problems because of the small number of neurons, but we have plans to make much larger networks in the future that can consist of multiple chips.

Previous papers describing the implementation and extensions of the stochastic learning technique are:

J. Alspector and R.B. Allen, "A neuromorphic vlsi learning system", in Advanced Research in VLSI: Proceedings of the 1987 Stanford Conference, edited by P. Losleben (MIT Press, Cambridge, MA, 1987), pp. 313-349.

J. Alspector, R.B. Allen, V. Hu, & S. Satyanarayana, "Stochastic Learning Networks and their Electronic Implementation", in Proceedings of the 1987 NIPS Conference, edited by D.Z. Anderson.

josh

From unido!gmdzi!zsv!al at uunet.UU.NET Tue Jul 12 09:46:43 1988
From: unido!gmdzi!zsv!al at uunet.UU.NET (Alexander Linden)
Date: Tue, 12 Jul 88 15:46:43 +0200
Subject: connectionist summer school
Message-ID: <8807121346.AA02259@zsv.gmd.de>

Is there anybody who can report on the connectionist summer school at CMU? Are there proceedings of some of the invited papers, which every applicant had to supply?

Alexander Linden

From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Tue Jul 12 21:52:20 1988
From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (Thanasis Kehagias)
Date: Tue, 12 Jul 88 21:52:20 EDT
Subject: Lapedes paper
Message-ID:

Does anybody have references to Lapedes et al.'s work on neural nets and chaotic time series? Thanks in advance,

Thanasis Kehagias

From Roni.Rosenfeld at B.GP.CS.CMU.EDU Tue Jul 12 22:03:16 1988
From: Roni.Rosenfeld at B.GP.CS.CMU.EDU (Roni.Rosenfeld@B.GP.CS.CMU.EDU)
Date: Tue, 12 Jul 88 22:03:16 EDT
Subject: The CONNECTIONISTS archive
Message-ID: <8264.584762596@RONI.BOLTZ.CS.CMU.EDU>

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 02/27/88 are now available for public perusal. A separate file exists for each month. The files' names are "arch.yymm", where yymm stands for the obvious thing. Thus the earliest available data are in the file "arch.8802".
To browse through these files (as well as through other files, see below) you must FTP them to your local machine.

How to FTP files from the CONNECTIONISTS Archive
------------------------------------------------
1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user "ftpguest" with password "cmunix".
3. To retrieve a file named, e.g., backprop.lisp, you must reference it as "connectionists/backprop.lisp". Note that the filename you type must not begin with a "/" or contain the special string "/..".
4. See the file "connectionists/READ.ME" for a list of other files of interest that are available for FTPing.

Contents of connectionists/READ.ME as of 07/12/88:
-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
subscribers.dist            The connectionists mailing list
redistribution-maintainers  People responsible for redistribution.
arch.yymm                   Archives of the mailing list for the month mm/yy. (Archives available starting 02/28/88)
backprop.lisp               A backpropagation simulator
conference.*                Calls for papers for upcoming neural net conferences.
biblio.mss                  A slightly expanded version of the PDP book's bibliography.
bradtke.bib                 A bibliography on connectionist natural language and KR.
-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Problems? - contact us at "connectionists-request at cs.cmu.edu".

Happy Browsing -- Roni Rosenfeld
connectionists-request at cs.cmu.edu

From Roni.Rosenfeld at B.GP.CS.CMU.EDU Tue Jul 12 22:26:42 1988
From: Roni.Rosenfeld at B.GP.CS.CMU.EDU (Roni.Rosenfeld@B.GP.CS.CMU.EDU)
Date: Tue, 12 Jul 88 22:26:42 EDT
Subject: The CONNECTIONISTS archive - *** UPDATE ***
Message-ID: <8303.584764002@RONI.BOLTZ.CS.CMU.EDU>

Some people have experienced problems trying to get files from the archive. Please hold off until further announcements.

Roni Rosenfeld
connectionists-request at cs.cmu.edu

From dukempd.uucp!palmer at cs.duke.edu Wed Jul 13 10:07:33 1988
From: dukempd.uucp!palmer at cs.duke.edu (Richard Palmer)
Date: Wed, 13 Jul 88 10:07:33 EDT
Subject: Mailing list
Message-ID: <8807131407.AA04795@dukempd.UUCP>

Please put me on your mailing list. Thanks.

Richard Palmer, Department of Physics, Duke University, Durham N.C. 27706
palmer%dukempd at CS.DUKE.EDU (preferred)
DCONMT at TUCC.BITNET (1 day slower; letter O)

From MJ_CARTE%UNHH.BITNET at VMA.CC.CMU.EDU Thu Jul 14 13:32:00 1988
From: MJ_CARTE%UNHH.BITNET at VMA.CC.CMU.EDU (MJ_CARTE%UNHH.BITNET@VMA.CC.CMU.EDU)
Date: Thu, 14 Jul 88 12:32 EST
Subject: Lapedes paper reference request
Message-ID:

With reference to Thanasis Kehagias' request of 13 July 88 for leads to Alan Lapedes' work:

"Nonlinear signal processing using neural networks: Prediction and system modeling," A. Lapedes and R. Farber, Los Alamos National Laboratory Tech. Rpt. LA-UR-87-2662, July 1987.

Related work which is said to substantially speed up learning of complex nonlinear dynamics (and which does *not* use connectionist/neural network ideas!) is described in:

"Exploiting chaos to predict the future and reduce noise (version 1.1)," J. Doyne Farmer and J.J. Sidorowich, Los Alamos National Laboratory, February 1988 (no T.R. number available on my copy).

I don't have e-mail addresses for any of the authors noted, but the LANL address is: Theoretical Division, Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545. I believe there has also been subsequent work by Lapedes and Farber -- perhaps others can supply references.

Mike Carter
Electrical and Computer Engineering Dept.
University of New Hampshire

From russ%yummy at gateway.mitre.org Thu Jul 14 16:45:09 1988
From: russ%yummy at gateway.mitre.org (russ%yummy@gateway.mitre.org)
Date: Thu, 14 Jul 88 16:45:09 EDT
Subject: Abstract
Message-ID: <8807142045.AA01204@baklava.mitre.org>

For copies of the following paper send to: Wieland at mitre.arpa, or

  Alexis Wieland
  M.S. Z425
  MITRE Corporation
  7525 Colshire Drive
  McLean, Virginia 22102

An Analysis of Noise Tolerance for a Neural Network Recognition System

Alexis Wieland, Russell Leighton, Garry Jacyna
MITRE Corporation, Signal Processing Center
7525 Colshire Drive, McLean, Virginia 22102

This paper analyzes the performance of a neural network designed to carry out a simple recognition task when its input signal has been corrupted with gaussian or correlated noise. The back-propagation algorithm was used to train a neural network to categorize input images as being an A, B, C, D, or nothing independent of rotation, contrast, and brightness, and in the presence of large amounts of additive noise. For bandlimited white gaussian noise the results are compared to the performance of an optimal matched filter. The neural network is shown to perform classification at or near the optimal limit.

From hendler at dormouse.cs.umd.edu Fri Jul 15 10:25:55 1988
From: hendler at dormouse.cs.umd.edu (Jim Hendler)
Date: Fri, 15 Jul 88 10:25:55 EDT
Subject: ftp-able backprop
Message-ID: <8807151425.AA03101@dormouse.cs.umd.edu>

For some reason a lot of you expressed interest in the tech. rpt. and code for the simple back-prop tool we put together. In answer to the demand, I've made the file ftp-able from the anonymous login at the machine mimsy.umd.edu.

-Jim H.

p.s. This is unsupported and all that stuff. The code is kernel Common Lisp and should run on just about anything with a decent Common Lisp.

From Roni.Rosenfeld at B.GP.CS.CMU.EDU Fri Jul 15 15:42:55 1988
From: Roni.Rosenfeld at B.GP.CS.CMU.EDU (Roni.Rosenfeld@B.GP.CS.CMU.EDU)
Date: Fri, 15 Jul 88 15:42:55 EDT
Subject: The CONNECTIONISTS archive - *** correction ***
Message-ID: <11076.584998975@RONI.BOLTZ.CS.CMU.EDU>

FTPing from the CONNECTIONISTS archive did not work for most people. The problem: our extra-secure FTP expects either to be able to use the name of the remote file for the local file, or to be given a local name to use. So if the remote name begins with "connectionists/" then there must be a "connectionists/" in the local space, or else you must specify a local name. We apologize for the inconvenience. Here's what you should do:

EITHER:
  make a directory called "connectionists" in your working directory
  ftp, log in as ftpguest (password "cmunix")
  ls connectionists              /* This succeeds */
  get connectionists/READ.ME     /* This puts READ.ME into your local directory "connectionists" */

OR:
  ftp, log in as ftpguest/cmunix
  ls connectionists              /* This succeeds */
  get connectionists/READ.ME foo /* This puts READ.ME into your local file foo */

Please let us know if you have any problems.

Roni Rosenfeld
connectionists-request at cs.cmu.edu

From meb at oddjob.uchicago.edu Tue Jul 19 14:08:27 1988
From: meb at oddjob.uchicago.edu (Matthew Brand)
Date: Tue, 19 Jul 88 13:08:27 CDT
Subject: needed: complexity analyses of NN & evolutionary learning systems
Message-ID: <8807191808.AA06771@oddjob.uchicago.edu>

I am looking for proofs of complexity limits for tasks learnable by (multilayer) PDP algorithms. Specifically, the proof that the generalized delta rule (aka backprop) constrains one to linearly independent association tasks.
Similar work on Boltzmann machines or any simulated annealing-based learning algorithm would be even more welcome. And, as long as I'm writing my wish list, if you know of any work which has indicated strong upper bounds on the task complexity of 3+ layer nets configured via energy-minimization algorithms, I'm mighty keen to see it.

Other requests: references, tech reports, or reprints on
- Metropolis algorithm-based learning rules other than the Boltzmann machine.
- Augmentations of the generalized delta rule, specifically relaxations of the feed-forward constraint, for example approximations of error in recurrent subnets. I understand that a researcher in Spain is doing very interesting stuff along these lines.
- Complexity analyses of genetic learning algorithms such as Holland's classifier systems: assuming parallel operation, how does performance decline with scale-up of # of rules, # of condition bits, and size of the message list?

Please mail responses to me; I'll summarize and post. I'd like to do this quickly; in 4 weeks I'll be taking a vacation far away from computers, and I'm some distance from a library suitable for looking up computer science or AI references. For these reasons, papers sent by e-mail or US mail would be much appreciated. My address:

  Matthew Brand
  5631 S. Kenwood Ave. #2B
  Chicago, IL 60637.1739

Many thanks in advance.

* * * * * * * matthew brand * * * meb at oddjob.uchicago.edu * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

From alexis%yummy at gateway.mitre.org Wed Jul 20 08:43:55 1988
From: alexis%yummy at gateway.mitre.org (alexis%yummy@gateway.mitre.org)
Date: Wed, 20 Jul 88 08:43:55 EDT
Subject: needed: complexity analyses of NN & evolutionary learning systems
Message-ID: <8807201243.AA00470@marzipan.mitre.org>

I'm not entirely sure I understand what you mean by:

> ... generalized delta rule (aka backprop) constrains one to linearly
> independent association tasks.

but I don't think it's correct. If you mean linearly separable problems (a la Minsky & Papert), or that the input vectors have to be orthogonal, that is *definitely* not true (see R. Lippmann, Introduction to Computing with Neural Nets, ASSP Mag, April 87; or D. Burr, Experiments on Neural Net Recognition of Spoken and Written Text, ASSP, V36#7, July 88; or A. Wieland & R. Leighton, Geometric Analysis of Neural Network Capabilities, ICNN 88).

By way of empirical demonstration, we've been using a multi-layer net with 2 inputs (representing an x and y coordinate) and 1 output (representing class) to separate two clusters that spiral around each other ~3 times to test some of our learning algorithms. If anything *IS NOT* linearly separable, a spiral is not.

From kruschke at cogsci.berkeley.edu Thu Jul 21 13:13:27 1988
From: kruschke at cogsci.berkeley.edu (John Kruschke)
Date: Thu, 21 Jul 88 10:13:27 PDT
Subject: No subject
Message-ID: <8807211713.AA07091@cogsci.berkeley.edu>

Please put me on the mailing list! I am interested in hidden layer representation and self-organized architectures. Thanks.

--John Kruschke
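
For readers who want to try the spiral benchmark described in the MITRE message above, here is a minimal sketch of a two-spirals test set generator (in present-day Python). The point count, radius, and number of turns are illustrative assumptions, not the exact values used at MITRE:

```python
import math

def two_spirals(n_per_class=97, turns=3.0, radius=6.5):
    """Generate a two-spirals classification set as (x, y, label) triples."""
    points = []
    for i in range(n_per_class):
        r = radius * (n_per_class - i) / n_per_class        # radius shrinks toward the center
        angle = (i / n_per_class) * turns * 2.0 * math.pi   # roughly `turns` revolutions
        x, y = r * math.cos(angle), r * math.sin(angle)
        points.append((x, y, 1))    # one spiral
        points.append((-x, -y, 0))  # the other spiral, rotated 180 degrees
    return points
```

Because the two classes wind around each other, no single hyperplane separates them, which is what makes the task a useful stress test for multi-layer learning algorithms.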

From Dave.Touretzky at B.GP.CS.CMU.EDU Fri Jul 22 02:35:41 1988
From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU)
Date: Fri, 22 Jul 88 02:35:41 EDT
Subject: CNS fac (from Jim Bower at Caltech)
Message-ID: <404.585556541@DST.BOLTZ.CS.CMU.EDU>

Return-Path: <@C.CS.CMU.EDU:jbower at bek-mc.caltech.edu>
Date: Mon, 18 Jul 88 15:13:54 pdt
From: jbower at bek-mc.caltech.edu (Jim Bower)
Subject: CNS fac

Positions Available in Computation and Neural Systems at Caltech

The Computation and Neural Systems (CNS) program at the California Institute of Technology is conducting a search for tenure-track faculty. CNS is an academic program offering a curriculum leading to a Ph.D. degree. It was established in 1986 as an interdisciplinary program, primarily between the Biology and Engineering and Applied Science divisions, in order to bring together biologists who are investigating how computation is done in the nervous system and engineers and scientists who are applying ideas from neurobiology to the development of new computational devices and algorithms. Appointments will be in either the biology or engineering divisions, or joint, depending on the interests of the successful applicant. The primary intent of the search is for appointments of a theorist and/or an experimentalist at the Assistant Professor level; however, exceptionally qualified senior candidates will also be considered. The areas for which the search is conducted include: Learning Theory and Algorithms, Biophysics of Computation and Learning, Neural Approaches to Pattern Recognition, Motor Control, Problem Representations and Network Architectures, Small Nervous Systems, Network Dynamics, Speech, and Audition. The successful applicant will be expected to develop a strong research activity and to teach courses in his or her specialty.

Applicants should send a resume, including a list of publications, a brief summary of research accomplishments and goals, and the names of at least three references to:

  Professor D. Psaltis
  MS 116-81
  Caltech
  Pasadena, CA 91125

Caltech is an equal opportunity employer and it specifically encourages minorities and women to apply.

------- End of Forwarded Message

From Dave.Touretzky at B.GP.CS.CMU.EDU Fri Jul 22 05:50:44 1988
From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU)
Date: Fri, 22 Jul 88 05:50:44 EDT
Subject: connectionist summer school proceedings
Message-ID: <752.585568244@DST.BOLTZ.CS.CMU.EDU>

In response to Alexander Linden's query of July 12: the edited proceedings of the 1988 Connectionist Models Summer School will be published by Morgan Kaufmann Publishers, around December. The price will be $24.95, softcover. You should start seeing advertisements for it soon. Note: Morgan Kaufmann is the publisher of the AAAI and IJCAI proceedings. Starting in 1988 they will also publish the proceedings of the Denver NIPS conference.

-- Dave

From yann at ai.toronto.edu Fri Jul 22 14:22:00 1988
From: yann at ai.toronto.edu (Yann le Cun)
Date: Fri, 22 Jul 88 14:22:00 EDT
Subject: needed: complexity analyses of NN & evolutionary learning systems
Message-ID: <88Jul22.114037edt.516@neat.ai.toronto.edu>

> I am looking for proofs of complexity limits for tasks learnable by
> (multilayer) PDP algorithms. Specifically, the proof that the
> generalized delta rule (aka backprop) constrains one to linearly
> independent association tasks.

Your question is a little ambiguous.
If your question is about the computational power of multi-layer networks (independently of the learning algorithm), then it is very easy to show that a network of sigmoid units with 2 intermediate layers can approximate ANY continuous vector function (from R^n to R^m) as closely as you want, provided that you put enough units in the hidden layers (the proof is in my thesis, but really, it is trivial). The proof is constructive, but, as always, the resulting network has no (or little) practical interest since the number of hidden units can be prohibitively large. Surprisingly, for a function of a single variable, you just need one hidden layer. Any classification function on a finite set of patterns is computable with two hidden layers.

Now, if your question is about the limitations of back-prop itself (as a learning algorithm), there is not much we know about that.

I suspect that your question had to do with SINGLE LAYER networks. Usual single layer networks are restricted to *linearly separable functions*. There is a nice theorem by Cover (IEEE Trans. Electronic Computers, vol. EC-14(3), 1965) which gives the probability that a dichotomy on a set of patterns is linearly separable. Even if the desired dichotomy IS linearly separable, the delta rule (or Widrow-Hoff rule), which only works for single layer nets, will not necessarily find a solution. The Perceptron rule will.

Yann le Cun
Dept. of Computer Science, University of Toronto
yann at ai.toronto.edu

From johns at flash.bellcore.com Sat Jul 23 10:02:49 1988
From: johns at flash.bellcore.com (John Schotland)
Date: Sat, 23 Jul 88 10:02:49 EDT
Subject: needed: complexity analyses of NN & evolutionary learning systems
Message-ID: <8807231402.AA04961@flash.bellcore.com>

Yann, would it be possible to get a copy of your thesis? Presumably it is written in French -- but that's fine with me. Thanks very much.

John Schotland
Bellcore Room 2C323
445 South St.
Morristown, NJ 07960

From pollack at orange.cis.ohio-state.edu Fri Jul 22 10:57:44 1988
From: pollack at orange.cis.ohio-state.edu (Jordan B. Pollack)
Date: Fri, 22 Jul 88 10:57:44 EDT
Subject: new address!
Message-ID: <8807221457.AA02539@orange.cis.ohio-state.edu>

Jordan Pollack
Department of Computer and Information Science
The Ohio State University
2036 Neil Avenue Mall
Columbus, OH 43210-1277
(614) 292-4890
pollack at cis.ohio-state.edu

Please update your mailing list. Jordan

From pollack at orange.cis.ohio-state.edu Fri Jul 22 10:57:30 1988
From: pollack at orange.cis.ohio-state.edu (Jordan B. Pollack)
Date: Fri, 22 Jul 88 10:57:30 EDT
Subject: new address!
Message-ID: <8807221457.AA02535@orange.cis.ohio-state.edu>

Jordan Pollack
Department of Computer and Information Science
The Ohio State University
2036 Neil Avenue Mall
Columbus, OH 43210-1277
(614) 292-4890
pollack at cis.ohio-state.edu

Please update your mailing list. Jordan

From Scott.Fahlman at B.GP.CS.CMU.EDU Mon Jul 25 22:12:43 1988
From: Scott.Fahlman at B.GP.CS.CMU.EDU (Scott.Fahlman@B.GP.CS.CMU.EDU)
Date: Mon, 25 Jul 88 22:12:43 EDT
Subject: Tech Report available
Message-ID:

The following CMU Computer Science Dept. Tech Report is now available. If you want a copy, please send your request by computer mail to "catherine.copetas at cs.cmu.edu", who handles our tech report distribution. Indicate that you want "CMU-CS-88-162" and be sure to include a physical mail address. Try not to send your request to the whole connectionists mailing list -- people who do that look really stupid.
Copies of this report have already been sent to students and faculty of the recent Connectionist Models Summer School at CMU, except for CMU people, who can easily pick up a copy.

---------------------------------------------------------------------------
Technical Report CMU-CS-88-162

"An Empirical Study of Learning Speed in Back-Propagation Networks"

Scott E. Fahlman
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract: Most connectionist or "neural network" learning systems use some form of the back-propagation algorithm. However, back-propagation learning is too slow for many applications, and it scales up poorly as tasks become larger and more complex. The factors governing learning speed are poorly understood. I have begun a systematic, empirical study of learning speed in backprop-like algorithms, measured against a variety of benchmark problems. The goal is twofold: to develop faster learning algorithms and to contribute to the development of a methodology that will be of value in future studies of this kind. This paper is a progress report describing the results obtained during the first six months of this study. To date I have looked only at a limited set of benchmark problems, but the results on these problems are encouraging: I have developed a new learning algorithm called "quickprop" that, on the problems tested so far, is faster than standard backprop by an order of magnitude or more. This new algorithm also appears to scale up very well as the problem size increases.

From honavar at cs.wisc.edu Tue Jul 26 12:11:39 1988
From: honavar at cs.wisc.edu (A Buggy AI Program)
Date: Tue, 26 Jul 88 11:11:39 CDT
Subject: References needed
Message-ID: <8807261611.AA20013@ai.cs.wisc.edu>

I would appreciate pointers to papers or tech. reports describing:

1. Prof. Hinton's (or any other) work on the effect of the number of hidden units on the generalization properties of networks.
2. Prof. Rumelhart's work on incorporating a cost/complexity term in the criterion function minimized by the back propagation algorithm, with the objective of minimizing the network complexity.

Both were discussed by the respective researchers in the talks they gave at the CMU summer school on connectionist models. Thanks in advance.

Vasant Honavar
honavar at ai.cs.wisc.edu

From mls at csmil.umich.edu Thu Jul 28 13:04:50 1988
From: mls at csmil.umich.edu (Martin Sonntag)
Date: Thu, 28 Jul 88 13:04:50 EDT
Subject: Rochester Connectionist Simulator
Message-ID: <8807281704.AA04795@csmil.umich.edu>

Help! I have been trying to reach Costanzo at cs.rochester.edu about the Rochester Connectionist Simulator. He used to be the contact person there. Does anyone out there have version 4.1 and/or know how to get it? I have version 4.0 and a license for the University of Michigan. Please reply to mls at csmil.umich.edu with any e-mail addresses of contacts for this package. Thanks.

Martin Sonntag
Cognitive Science & Machine Intelligence Lab
University of Michigan

From goddard at aurel.caltech.edu Fri Jul 29 14:20:26 1988
From: goddard at aurel.caltech.edu (goddard@aurel.caltech.edu)
Date: Fri, 29 Jul 88 11:20:26 -0700
Subject: Rochester Connectionist Simulator
In-Reply-To: Your message of Thu, 28 Jul 88 13:04:50 -0400. <8807281704.AA04795@csmil.umich.edu>
Message-ID: <8807291820.AA01354@aurel.caltech.edu>

The contact person is Rose Peet (rose at cs.rochester.edu). RCS is available via anonymous ftp (free) as well as on tape ($150).
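
On Honavar's second question above: the cost/complexity idea is usually realized by adding a penalty on the weights to the error being minimized, so that back propagation trades fit against network complexity. One common realization (a sketch of the general idea; the exact penalty Rumelhart described at the summer school may differ) is to minimize

$$
E \;=\; \sum_{p}\sum_{i}\big(t_{i}^{p}-o_{i}^{p}\big)^{2} \;+\; \lambda \sum_{i,j} w_{ij}^{2}
$$

where $\lambda$ controls how strongly large weights are penalized; the gradient of the extra term simply adds a decay of $-2\lambda w_{ij}$ to each weight update, pushing unneeded weights toward zero.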

From INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU Sat Jul 30 17:10:00 1988
From: INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU)
Date: Sat, 30 Jul 88 16:10 EST
Subject: Error correcting competition
Message-ID:

Does anyone know of research into competitive learning mechanisms similar to the one presented in PDP I which include teacher error correction? It strikes me that if the competitive network makes a decision, and the winning decision elements make a wrong decision, then connections to those winning elements should be changed so that those elements would have a lessened chance of winning again given the same set of circumstances. In this way, we have accountability of neural groups similar to back-propagation, yet this method may be more biologically appealing.

-Thomas G. Edwards
ins_atge at jhuvms
Apt. 2a, 331 E. University Pkwy, Baltimore, MD 21218
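
The scheme Edwards describes above is essentially the error-correcting flavor of competitive learning found in Kohonen's learning vector quantization: reinforce the winner when it is right, push it away from the input when it is wrong. A minimal sketch follows (in present-day Python; the distance measure, learning rate, and update form are illustrative assumptions, not a reconstruction of any specific published model):

```python
def train_step(weights, classes, x, target, eta=0.05):
    """One error-correcting competitive-learning update.

    weights -- list of weight vectors, one per competitive unit
    classes -- the class label each unit currently stands for
    x       -- input vector; target -- its correct class label
    """
    # Winner-take-all: the unit whose weight vector is closest to the input wins.
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    winner = dists.index(min(dists))

    # Reward a correct winner by moving it toward the input; punish a wrong
    # winner by moving it away, lessening its chance of winning on this
    # input again (the "accountability" described above).
    sign = 1.0 if classes[winner] == target else -1.0
    weights[winner] = [wi + sign * eta * (xi - wi)
                       for wi, xi in zip(weights[winner], x)]
    return winner
```

Only the winning unit is ever adjusted, so credit and blame stay local to the unit that actually made the decision.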
I would appreciate any of the following: o Boltzmann machine bibliography files o Preprints of Boltzmann machine papers o Pointers to Boltzmann machine papers o Any papers referring to a complexity analysis of the Boltzmann machine o Any papers presenting the Boltzmann machine explicitly in algorithmic form. o Any comparisons between the complexity of the Boltzmann and other neural network learning algorithms (e.g. back propagation) o Pointers to Boltzmann machine implementations, as well as empirical results on learning ability. o Information on the parallel implementation of the Boltzmann machine One of my higher priorities is to establish once and for all whether or not the Boltzmann machine algorithm is so inferior to Back Propagation that it merits no further study, as was implied by Geoff Hinton at the recent Machine Learning conference. Therefore, if you can point out the kinds of learning problems, if any, on which the Boltzmann machine is superior, then I'd appreciate hearing about them. From pratt at paul.rutgers.edu Tue Jul 5 11:42:20 1988 From: pratt at paul.rutgers.edu (Lorien Y. Pratt) Date: Tue, 5 Jul 88 11:42:20 EDT Subject: Request for Boltzmann Machine information (forgot my address) Message-ID: <8807051542.AA03748@devo.rutgers.edu> Sorry, I forgot my address in the previous posting. It is: Lorien Y. Pratt Computer Science Department pratt at paul.rutgers.edu Rutgers University Busch Campus (201) 932-4714 Piscataway, NJ 08854 From hgigley at note.nsf.gov Tue Jul 5 17:18:03 1988 From: hgigley at note.nsf.gov (Helen M. Gigley) Date: Tue, 05 Jul 88 17:18:03 -0400 Subject: new address Message-ID: <8807051718.aa15204@note.note.nsf.gov> Please update my address on the connectionist file to hgigley at note.nsf.gov effectively immediately. Mailing address is NSF 1800 G Street, N.W., Room 304 Washington, D.C. 20550 Thanks, Helen M. Gigley From jose at tractatus.bellcore.com Wed Jul 6 13:03:25 1988 From: jose at tractatus.bellcore.com (Stephen J Hanson) Date: Wed, 6 Jul 88 13:03:25 EDT Subject: Request for Boltzmann Machine information Message-ID: <8807061703.AA05040@tractatus.bellcore.com> I didn't think Geoff said it was so inferior...in fact there is lots of reason beleive it is quite similar to backpropagation noting of course that it can compute higher order correlations between input and output.. Although the annealings make it slow...but with a vlsi implementation as was done recently at bellcore speed could be much improved... jose From hinton at ai.toronto.edu Wed Jul 6 14:21:30 1988 From: hinton at ai.toronto.edu (Geoffrey Hinton) Date: Wed, 6 Jul 88 14:21:30 EDT Subject: Request for Boltzmann Machine information In-Reply-To: Your message of Tue, 05 Jul 88 11:40:46 -0400. Message-ID: <88Jul6.114149edt.70@neat.ai.toronto.edu> Lorien Pratt's message says that I implied that the Boltzmann machine deserves no further study. This is not what I believe. It is certianly much slower than back-propagation at many tasks and appears to scale more poorly than BP when the depth of the network is increased. So it would be silly to use it instead of back-propagation for a typical current application. However, and there may still be methods of improving the boltzmann machine learning algorithm a lot. For example, Anderson and Peterson have reported that a mean-field version of the procedure learns much faster. The mean-field version is basically a learning procedure for Hopfield-Tank nets (which are the mean-field version of Boltzmann machines). 
It allows learning in Hopfield -Tank nets that have hidden units. Also, the BM learning procedure is easier than BP to put directly into hardware. Alspector at bellcore has recently fabricated and tested a chip that goes about 100,000 times faster than a minicomputer simulation. Finally, the BM procedure can learn to model the higher-order statistics of the desired state vectors of the output units. BP cannot do this. In summary, the existing, standard Boltzmann machine learning procedure is much slower than BP at tasks for which BP is applicable. However, the mean-field version may be closer in efficiency to BP, and other developments are possible. REFERENCES The best general description of Boltzmann Machines is: G. E. Hinton and T. J. Sejnowski Learning and Relearning in Boltzmann machines In D.~E. Rumelhart, J.~L. McClelland, and the~PDP~Research~Group, Parallel Distributed Processing: {Explorations} in the Microstructure of Cognition. {Volume I Foundations}}, MIT Press, Cambridge, MA, 1986. Some recent developments are described in section 7 of: G. E. Hinton "Connectionist Learning Procedures" Technical Report CMU-CS-87-115 (version 2) Available from computer science dept, CMU, Pittsburgh PA 15213. The hardware implementation is described in: J. Alspector and R.~B. Allen. A neuromorphic VLSI learning system. In P. Loseleben, editor, Advanced Research in VLSI: Proceedings of the 1987 Stanford Conference. MIT Press, Cambridge, Mass., 1987. The mean field lerning procedure is described in: C. Peterson and J.~R. Anderson. A Mean Field Theory Learning Algorithm for Neural Networks. MCC Technical Report E1-259-87, Microelectronics and Computer Technology Corporation, 1987. Geoff From INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU Wed Jul 6 19:50:00 1988 From: INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU) Date: Wed, 6 Jul 88 18:50 EST Subject: Fractal Representations of Neural Networks Responses Message-ID: For those of you who are interested in what I have receieved dealing with fractal representations of neural networks, I have so far only heard from John Merrill (merrill at bucasb.bu.edu) who sent me an abstract of his paper (I believe he also CC'ed it to connectionists), and Jordan Pollack (pollack at nmsu.csnet) who is working on a paper outline. It seems that there isn't a great deal of literature that peole know about dealing with fractal representations of neural networks. I am very interested in this subject for its use in artificially evolving neural networks using genetic algortihms, and it seems that fractal genetic representation is a way of compacting alot of complex information into a small amount of descriptors (ala Mandelbrot Set). -Thomas Edwards ins_atge at jhuvms 812 Brantford Avenue Silver Spring, MD 20904 From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Wed Jul 6 21:37:58 1988 From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (Thanasis Kehagias) Date: Wed, 06 Jul 88 21:37:58 EDT Subject: Simulated Annealing Message-ID: when i was at the connectionist summer school at CMU a couple weeks ago, Dave Touretzky mentioned that somebody (I believe it was Mass Sivilotti) had just finished building a simulated annealing chip that was running big problems (was it ca. 50 units?) at ms times. does anyone have more andd more accurate info? 
Thanasis Kehagias From unido!tumult.informatik.tu-muenchen.de!schmidhu at uunet.UU.NET Thu Jul 7 16:58:37 1988 From: unido!tumult.informatik.tu-muenchen.de!schmidhu at uunet.UU.NET (unido!tumult.informatik.tu-muenchen.de!schmidhu@uunet.UU.NET) Date: Thu, 7 Jul 88 16:58:37 N Subject: No subject Message-ID: <8807071500.AQ08612@unido.uucp> Received: by tumult.informatik.tu-muenchen.de (1.2/4.81) id AA10801; Thu, 7 Jul 88 16:58:37 -0200 Date: Thu, 7 Jul 88 16:58:37 -0200 From: Juergen Schmidhuber Message-Id: <8807071458.AA10801 at tumult.informatik.tu-muenchen.de> To: INS_ATGE at JHUVMS.BITNET, connectionists at cs.cmu.edu Subject: Re: Fractal Representations of Neural Networks Responses Cc: schmidhu at tumult.informatik.tu-muenchen.de I would like to ask a related question: Has there been any research on recognizing fractal objects with neural nets? The first thing that most neural networks do to some input is to apply a distributing transformation. At first glance this should greatly simplify the task of correlating the whole object to its parts (one might think of some measure of self-similarity) and then doing the classification. Juergen Schmidhuber From bukys at cs.rochester.edu Thu Jul 7 14:57:42 1988 From: bukys at cs.rochester.edu (bukys@cs.rochester.edu) Date: Thu, 7 Jul 88 14:57:42 EDT Subject: Rochester Connectionist Simulator -- ANNOUNCEMENTS Message-ID: <8807071857.AA03986@hamal.cs.rochester.edu> (1) The Rochester Connectionist Simulator (version 4.1) is now available via anonymous FTP from CS.Rochester.EDU. It is in the public/rcs directory. Read the README file there for more information. The distribution is too large to mail; the compressed tar file is 837K bytes, and the uncompressed tar file is 3 Megabytes. If you have TeX and a PostScript printer, you should be able to produce your own copy of the 181-page manual. If you want a paper copy of the manual anyway, send a check for $10 per manual (payable to the University of Rochester) to Rose Peet at the above address. If you are unable to obtain anonymous FTP access to the simulator distribution, you can still order a copy the old way. Contact Rose Peet Computer Science Department University of Rochester Rochester, NY 14627 (USA) or , and she will send you the approriate forms. We are currently charging $150 for a distribution tape and a manual. We do not have the facilities for generating invoices, so payment is required with any order. (2) We are setting up electronic mailing lists for use by the users of the simulator. If you have the simulator, please sign up! You can reach other users of the simulator via the users' mailing list: If you are not on this mailing list, and wish to be added, send a note to Please send bug reports to (3) Official bug patches to date (to be found in the FTP directory): rcs_v4.1.patch.01 fixes a SERIOUS problem in the logging feature. rcs_v4.1.patch.02 fixes some minor syntactic problems in the doc. The patches can be applied by hand if necessary, but you will make your life easier if you obtain the widely-available "patch" program, and redirect each patch file into "patch -p". From sylvie at hobiecat.caltech.edu Thu Jul 7 18:08:43 1988 From: sylvie at hobiecat.caltech.edu (sylvie ryckebusch) Date: Thu, 7 Jul 88 15:08:43 pdt Subject: Simulated Annealing Message-ID: <8807072208.AA11079@hobiecat.caltech.edu> A VLSI implementation of a Boltzmann machine is being done by Josh Alspector at Bellcore. 
An extensive description of this research is presented in the Proceedings of the 1987 Stanford Conference on Advanced Research in VLSI, Paul Losleben, ed. mass sivilotti From eric at mcc.com Thu Jul 7 22:49:02 1988 From: eric at mcc.com (Eric Hartman) Date: Thu, 7 Jul 88 21:49:02 cdt Subject: backprop vs Boltzmann Message-ID: <8807080249.AA10150@little.aca.mcc.com> Regarding the discussion about back propagation vs. the Boltzmann machine: We agree with Hinton's answer to Lorien Pratt on this question. Since we have recently completed an extensive comparison between the mean field theory [1] and the back propagation learning algorithm in a variety of learning and generalization situations [2] we would like to supplement his answer with the following remarks: #1 For the mirror symmetry and clumps problems mean field theory (MFT) and back propagation (BP) exhibit the same quality with respect to # of learning cycles needed for learning and generalization percentage. The two algorithms even tend to find remarkably similar solutions. These results hold both for fixed training sets and continuous (ongoing) learning. (These conclusions assume that the two algorithms use the same activation function, either [0,1] or [-1,1], and that the midpoint is used as the success criterion in generalization.) #2 The two algorithms also appear equivalent with respect to different scaling properties; number of training patterns and number of hidden layers. [In other words our experience is somewhat different than Hinton's re. the latter point (which strictly speaking was for the original Boltzmann machine (BM)] #3 We have made further checks of the MFT approximation than was done in ref.[1] and find that it is indeed very good. #4 The real advantage of MFT vs BP is twofold: It allows for a more general architecture and usage [see #5 below] and it is more natural for a VLSI implementation. Also, on the simulation level, the parallelism inherent in MFT can be fully exploited on a SIMD architecture [this is not easy with the original BM]. #5 BP is of feed-forward nature. Hence there is a very strong distinction between input and output units. Being bidirectional, no such fixed distinction need exist for MFT. Its application areas therefore exceed pure feature recognition (input-output mappings). It can be used as a content-addressable memory. In ref.[2] we are exploring this possibility with substantial success. We use hidden units. So far we have achieved storage capacities of approximately 2N (where N is the number of visible units) for the case of retrieval with clamping (the given bits are known with certainty), and of at least roughly N for the case of error correction (no bits are known with certainty and hence none can be clamped). We consider these capacities to be lower bounds, as we are quite confident that we will find still better ways of using the learning algorithm and the hidden units to further increase these capacities. Also, regarding the discussion a month or so ago about fully distributed analog content-addressable memories, we have made some investigation in this direction and find that MFT is quite capable of learning analog as well as discrete patterns. 1. C. Peterson and J.R. Anderson. "A Mean Field Theory Learning Algorithm for Neural Networks", MCC Technical Report MCC-EI-259-87, Published in Complex Systems 1, 995 (1987). 2. C. Peterson and E. 
Hartman "Explorations of the Mean Field Theory Learning Algorithm", MCC Technical Report MCC-ACA-ST-064-88 [This paper is still subject to some polishing and will be announced on connectionists within a few weeks time]. ----------------------------- Carsten Peterson and Eric Hartman From terry Thu Jul 7 21:11:31 1988 From: terry (Terry Sejnowski ) Date: Thu, 7 Jul 88 21:11:31 edt Subject: Request for Boltzmann Machine information Message-ID: <8807080111.AA05719@crabcake.cs.jhu.edu> At the recent N'Euro 88 meeting in Paris, R. L. Chrisly working with T. Kohonen reported results from Boltzmann learning, back-propagation, and Kohonen's vector quantization applied to a classification problem in a noisy environment. Boltzmann produced classifications close to the optimal Bayes strategy; Back-prop did well for small problems, but the perfomrance deteriorated for large problems and was overtaken by Kohonen's network. Boltzmann, however, took much more time to learn and was sensitive to the input representation (a unary code for continuous value inputs was better than an analog code). For more information write to Kohonen's lab in Finland. Kohonen mentioned to me recently that a two level Kohonen net has significantly improved performance. The reason that Boltmann machines do well on statistical inference problems is that they are in principle capable of optimal Baysian inference. See Proc. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, June 1983. Hinton & Sejnowski: Optimal Perceptual Inference. Terry ----- From josh at flash.bellcore.com Fri Jul 8 15:23:07 1988 From: josh at flash.bellcore.com (Joshua Alspector) Date: Fri, 8 Jul 88 15:23:07 EDT Subject: Stochastic learning chip Message-ID: <8807081923.AA21117@flash.bellcore.com> To clarify some previous postings, we have fabricated a chip based on a modified Boltzmann algorithm that learns. It can learn an XOR function in a few milliseconds. Patterns can be presented to it at about 100,000 per second. It is a test chip containing a small network in one corner that consists of 6 neurons and 15 two-way synapses for potentially full connectivity. We can initialize the network to any weight configuration and permanently disable some connections. We also have demonstrated the capability to do unsupervised competitive learning as well as supervised learning. It turns out that our noise amplifiers do not give gaussian uncorrelated noise as we had hoped, but the noise that exists seems to be sufficient to help it learn. This bears out results in previous simulations that show the noise distributions don't matter too much as long as they provide a stochastic element. Therefore, it is not completely accurate to call it a Boltzmann machine or even a simulated annealing machine. We can do only toy problems because of the small number of neurons but we have plans to make much larger networks in the future than can consist of multiple chips. Previous papers describing the implementation and extensions of the stochastic learning technique are: J. Alspector and R.B. Allen, "A neuromorphic vlsi learning system", in Advanced Research in VLSI: Proceedings of the 1987 Stanford Conference. edited by P. Losleben (MIT Press, Cambridge, MA, 1987), pp. 313-349. J. Alspector, R.B. Allen, V. Hu, & S. Satyanarayana, "Stochastic Learning Networks and their Electronic Implementation", in Proceedings of the 1987 NIPS Conference edited by D.Z. 
Anderson josh From unido!gmdzi!zsv!al at uunet.UU.NET Tue Jul 12 09:46:43 1988 From: unido!gmdzi!zsv!al at uunet.UU.NET (Alexander Linden) Date: Tue, 12 Jul 88 15:46:43 +0200 Subject: connectionist summer school Message-ID: <8807121346.AA02259@zsv.gmd.de> Is there anybody who can report on the connectionist summer school at CMU? Are there proceedings of some of the invited papers, which every applicant had to supply? Alexander Linden From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Tue Jul 12 21:52:20 1988 From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (Thanasis Kehagias) Date: Tue, 12 Jul 88 21:52:20 EDT Subject: Lapedes paper Message-ID: does anybody have references to Lapedes et al.'s work on neural nets and chaotic time series? thanks in advance, Thanasis Kehagias From Roni.Rosenfeld at B.GP.CS.CMU.EDU Tue Jul 12 22:03:16 1988 From: Roni.Rosenfeld at B.GP.CS.CMU.EDU (Roni.Rosenfeld@B.GP.CS.CMU.EDU) Date: Tue, 12 Jul 88 22:03:16 EDT Subject: The CONNECTIONISTS archive Message-ID: <8264.584762596@RONI.BOLTZ.CS.CMU.EDU> All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 02/27/88 are now available for public perusal. A separate file exists for each month. The files' names are: "arch.yymm", where yymm stands for the obvious thing. Thus the earliest available data are in the file: "arch.8802". To browse through these files (as well as through other files, see below) you must FTP them to your local machine. How to FTP files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU 2. Login as user "ftpguest" with password "cmunix". 3. To retrieve a file named, e.g., backprop.lisp, you must reference it as "connectionists/backprop.lisp". Note that the filename you type must not begin with a "/" or contain the special string "/..". 4. See the file "connectionists/READ.ME" for a list of other files of interest that are available for FTPing. Contents of connectionists/READ.ME as of 07/12/88: -=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= subscribers.dist The connectionists mailing list redistribution-maintainers People responsible for redistribution. arch.yymm Archives of the mailing list for the month mm/yy. (Archives available starting 02/28/88) backprop.lisp A backpropagation simulator conference.* Calls for papers for upcoming neural net conferences. biblio.mss A slightly expanded version of the PDP book's bibliography. bradtke.bib A bibliography on connectionist natural language and KR. -=-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Problems? - contact us at "connectionists-request at cs.cmu.edu". Happy Browsing -- Roni Rosenfeld connectionists-request at cs.cmu.edu From Roni.Rosenfeld at B.GP.CS.CMU.EDU Tue Jul 12 22:26:42 1988 From: Roni.Rosenfeld at B.GP.CS.CMU.EDU (Roni.Rosenfeld@B.GP.CS.CMU.EDU) Date: Tue, 12 Jul 88 22:26:42 EDT Subject: The CONNECTIONISTS archive - *** UPDATE *** Message-ID: <8303.584764002@RONI.BOLTZ.CS.CMU.EDU> Some people have experienced problems trying to get files from the archive. Please hold off until further announcements. Roni Rosenfeld connectionists-request at cs.cmu.edu From dukempd.uucp!palmer at cs.duke.edu Wed Jul 13 10:07:33 1988 From: dukempd.uucp!palmer at cs.duke.edu (Richard Palmer) Date: Wed, 13 Jul 88 10:07:33 EDT Subject: Mailing list Message-ID: <8807131407.AA04795@dukempd.UUCP> Please put me on your mailing list. Thanks. Richard Palmer, Department of Physics, Duke University, Durham N.C.
27706 palmer%dukempd at CS.DUKE.EDU (preferred) DCONMT at TUCC.BITNET (1 day slower; letter O) From MJ_CARTE%UNHH.BITNET at VMA.CC.CMU.EDU Thu Jul 14 13:32:00 1988 From: MJ_CARTE%UNHH.BITNET at VMA.CC.CMU.EDU (MJ_CARTE%UNHH.BITNET@VMA.CC.CMU.EDU) Date: Thu, 14 Jul 88 12:32 EST Subject: Lapedes paper reference request Message-ID: With reference to Thanasis Kehagias' request of 13 July 88 for leads to Alan Lapedes' work: "Nonlinear signal processing using neural networks: Prediction and system modeling," A. Lapedes and R. Farber, Los Alamos National Laboratory Tech.Rpt. LA-UR-87-2662, July 1987. Related work which is said to substantially speed up learning of complex nonlinear dynamics (and which does *not* use connectionist/neural network ideas!) is described in: "Exploiting chaos to predict the future and reduce noise (version 1.1)," J.Doyne Farmer and J.J. Sidorowich, Los Alamos National Laboratory, February 1988 (no T.R. number available on my copy). I don't have e-mail addresses for any of the authors noted, but the LANL address is: Theoretical Division Center for Nonlinear Studies Los Alamos National Laboratory Los Alamos, NM 87545. I believe there has also been subsequent work by Lapedes and Farber-- perhaps others can supply references. Mike Carter Electrical and Computer Engineering Dept. University of New Hampshire From russ%yummy at gateway.mitre.org Thu Jul 14 16:45:09 1988 From: russ%yummy at gateway.mitre.org (russ%yummy@gateway.mitre.org) Date: Thu, 14 Jul 88 16:45:09 EDT Subject: Abstract Message-ID: <8807142045.AA01204@baklava.mitre.org> For copies of the following paper send to: Wieland at mitre.arpa or Alexis Wieland M.S. Z425 MITRE Corporation 7525 Colshire Drive McLean, Virginia 22102 An Analysis of Noise Tolerance for a Neural Network Recognition System Alexis Wieland Russell Leighton Garry Jacyna MITRE Corporation Signal Processing Center 7525 Colshire Drive McLean, Virginia 22102 This paper analyzes the performance of a neural network designed to carry out a simple recognition task when its input signal has been corrupted with gaussian or correlated noise. The back-propagation algorithm was used to train a neural network to categorize input images as being an A, B, C, D, or nothing independent of rotation, contrast, and brightness, and in the presence of large amounts of additive noise. For bandlimited white gaussian noise the results are compared to the performance of an optimal matched filter. The neural network is shown to perform classification at or near the optimal limit. From hendler at dormouse.cs.umd.edu Fri Jul 15 10:25:55 1988 From: hendler at dormouse.cs.umd.edu (Jim Hendler) Date: Fri, 15 Jul 88 10:25:55 EDT Subject: ftp-able backprop Message-ID: <8807151425.AA03101@dormouse.cs.umd.edu> For some reason a lot of you expressed interest in the tech. rpt and code for the simple back-prop tool we put together. In answer to the demand I've made the file ftp-able from the anonymous login at the machine mimsy.umd.edu. -Jim H. p.s. this is unsupported and all that stuff. The code is kernel common-lisp and should run on just about anything with a decent commonlisp. From Roni.Rosenfeld at B.GP.CS.CMU.EDU Fri Jul 15 15:42:55 1988 From: Roni.Rosenfeld at B.GP.CS.CMU.EDU (Roni.Rosenfeld@B.GP.CS.CMU.EDU) Date: Fri, 15 Jul 88 15:42:55 EDT Subject: The CONNECTIONISTS archive - *** correction *** Message-ID: <11076.584998975@RONI.BOLTZ.CS.CMU.EDU> FTPing from the CONNECTIONISTS archive did not work for most people. 
The problem: Our extra-secure FTP expects to either be able to use the name of the remote file for the local file, or to be given a local name to use. So if the remote name begins with "connectionists/" then there must be a "connectionists/" in the local space, or else you must specify a local name. We apologize for the inconvenience. Here's what you should do: EITHER: make a directory called "connectionists" in your working directory ftp, log in as ftpguest (password "cmunix") ls connectionists /* This succeeds */ get connectionists/READ.ME /* This puts READ.ME into your local directory "connectionists" */ OR: ftp, log in as ftpguest/cmunix ls connectionists /* This succeeds */ get connectionists/READ.ME foo /* This puts READ.ME into your local file foo */ Please let us know if you have any problems. Roni Rosenfeld connectionists-request at cs.cmu.edu From meb at oddjob.uchicago.edu Tue Jul 19 14:08:27 1988 From: meb at oddjob.uchicago.edu (Matthew Brand) Date: Tue, 19 Jul 88 13:08:27 CDT Subject: needed: complexity analyses of NN & evolutionary learning systems Message-ID: <8807191808.AA06771@oddjob.uchicago.edu> I am looking for proofs of complexity limits for tasks learnable by (multilayer) PDP algorithms. Specifically, the proof that the generalized delta rule (aka backprop) constrains one to linearly independent association tasks. Similar work on Boltzmann machines or any simulated annealing-based learning algorithm would be even more welcome. And, as long as I'm writing my wish list, if you know of any work which has indicated strong upper bounds on the task complexity of 3+ layer nets configured via energy-minimization algorithms, I'm mighty keen to see it. Other requests: references, tech reports, or reprints on - Metropolis algorithm-based learning rules other than the Boltzmann machine. - Augmentations of the generalized delta rule, specifically relaxations of the feed-forward constraint, for example approximations of error in recurrent subnets. I understand that a researcher in Spain is doing very interesting stuff along these lines. - Complexity analyses of genetic learning algorithms such as Holland's classifier systems: assuming parallel operation, how does performance decline with scale-up of # of rules, # of condition bits, and size of the message list? Please mail responses to me; I'll summarize and post. I'd like to do this quickly; in 4 weeks I'll be taking a vacation far away from computers, and I'm some distance from a library suitable for looking up computer science or AI references. For these reasons, papers sent by e-mail or US mail would be much appreciated. My address: Matthew Brand 5631 S. Kenwood Ave. #2B Chicago, IL., 60637.1739 Many thanks in advance. * * * * * * * matthew brand * * * meb at oddjob.uchicago.edu * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * From alexis%yummy at gateway.mitre.org Wed Jul 20 08:43:55 1988 From: alexis%yummy at gateway.mitre.org (alexis%yummy@gateway.mitre.org) Date: Wed, 20 Jul 88 08:43:55 EDT Subject: needed: complexity analyses of NN & evolutionary learning systems Message-ID: <8807201243.AA00470@marzipan.mitre.org> I'm not entirely sure I understand what you mean by: > ... generalized delta rule (aka backprop) constrains one to linearly > independent association tasks. but I don't think it's correct. If you mean linearly separable problems (a la Minsky & Papert) or that the input vectors have to be orthogonal, that is *definitely* not true (see R.
Lippmann, Introduction to Computing with Neural Nets, ASSP Mag, April 87; or D. Burr, Experiments on Neural Net Recognition of Spoken and Written Text, ASSP, V36#7, July 88; or A. Wieland & R. Leighton, Geometric Analysis of Neural Network Capabilities, ICNN 88) By way of empirical demonstration, we've been using a multi-layer net with 2 inputs (representing an x and y coordinate) and 1 output (representing class) to separate two clusters that spiral around each other ~3 times to test some of our learning algorithms. If anything *IS NOT* linearly separable, a spiral is not. From kruschke at cogsci.berkeley.edu Thu Jul 21 13:13:27 1988 From: kruschke at cogsci.berkeley.edu (John Kruschke) Date: Thu, 21 Jul 88 10:13:27 PDT Subject: No subject Message-ID: <8807211713.AA07091@cogsci.berkeley.edu> Please put me on the mailing list! I am interested in hidden layer representation and self-organized architectures. Thanks. --John Kruschke From Dave.Touretzky at B.GP.CS.CMU.EDU Fri Jul 22 02:35:41 1988 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Fri, 22 Jul 88 02:35:41 EDT Subject: CNS fac (from Jim Bower at Caltech) Message-ID: <404.585556541@DST.BOLTZ.CS.CMU.EDU> Return-Path: <@C.CS.CMU.EDU:jbower at bek-mc.caltech.edu> Date: Mon, 18 Jul 88 15:13:54 pdt From: jbower at bek-mc.caltech.edu (Jim Bower) Subject: CNS fac Positions Available in Computation and Neural Systems at Caltech The Computation and Neural Systems (CNS) program at California Institute of Technology is conducting a search for tenure track faculty. CNS is an academic program offering a curriculum leading to a Ph.D. degree. It was established in 1986 as an interdisciplinary program primarily between the Biology and Engineering and Applied Science divisions in order to bring together biologists that are investigating how computation is done in the nervous system and engineers and scientists that are applying ideas from neurobiology to the development of new computational devices and algorithms. Appointments will be in either the biology or engineering divisions, or joint, depending on the interests of the successful applicant. The primary intent of the search is for appointments of either a theorist and/or an experimentalist at the Assistant Professor level, however, exceptionally qualified senior candidates will also be considered. The areas for which the search is conducted include: Learning Theory and Algorithms, Biophysics of Computation and Learning, Neural Approaches to Pattern Recognition, Motor Control, Problem Representations and Network Architectures, Small Nervous Systems, Network Dynamics, Speech, and Audition. The successful applicant will be expected to develop a strong research activity and to teach courses in his or her specialty. Applicants should send a resume, including list of publications, a brief summary of research accomplishments and goals, and names of at least three references to: Professor D. Psaltis MS 116-81 Caltech Pasadena, CA 91125 Caltech is an equal opportunity employer and it specifically encourages minorities and women to apply. 
------- End of Forwarded Message From Dave.Touretzky at B.GP.CS.CMU.EDU Fri Jul 22 05:50:44 1988 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Fri, 22 Jul 88 05:50:44 EDT Subject: connectionist summer school proceedings Message-ID: <752.585568244@DST.BOLTZ.CS.CMU.EDU> In response to Alexander Linden's query of July 12: The edited proceedings of the 1988 Connectionist Models Summer School will be published by Morgan Kaufmann Publishers, around December. Price will be $24.95, softcover. You should start seeing advertisements for it soon. Note: Morgan Kaufmann is the publisher of the AAAI and IJCAI proceedings. Starting in 1988 they will also publish the proceedings of the Denver NIPS conference. -- Dave From yann at ai.toronto.edu Fri Jul 22 14:22:00 1988 From: yann at ai.toronto.edu (Yann le Cun) Date: Fri, 22 Jul 88 14:22:00 EDT Subject: needed: complexity analyses of NN & evolutionary learning systems Message-ID: <88Jul22.114037edt.516@neat.ai.toronto.edu> > I am looking for proofs of complexity limits for tasks learnable by > (multilayer) PDP algorithms. Specifically, the proof that the > generalized delta rule (aka backprop) constrains one to linearly > independent association tasks. Your question is a little ambiguous. If your question is about the computational power of multi-layer networks (independently of the learning algorithm), then it is very easy to show that a network of sigmoid units with 2 intermediate layers can approximate ANY continuous vector function (from R^n to R^m) as closely as you want, provided that you put enough units in the hidden layers (the proof is in my thesis, but really, it is trivial). The proof is constructive, but, as always, the resulting network has no (or little) practical interest since the number of hidden units can be prohibitively large. Surprisingly, for a function of a single variable, you just need one hidden layer. Any classification function on a finite set of patterns is computable with two hidden layers. Now, if your question is about the limitations of back-prop itself (as a learning algorithm), there is not much we know about that. I suspect that your question had to do with SINGLE LAYER networks. Usual single layer networks are restricted to *linearly separable functions*. There is a nice theorem by Cover (IEEE Trans. Electronic Computers, vol. EC-14(3), 1965) which gives the probability that a dichotomy on a set of patterns is linearly separable. Even if the desired dichotomy IS linearly separable, the delta rule (or Widrow-Hoff rule), which only works for single layer nets, will not necessarily find a solution. The Perceptron rule will. Yann le Cun. Dept of Computer Science, University of Toronto. yann at ai.toronto.edu From johns at flash.bellcore.com Sat Jul 23 10:02:49 1988 From: johns at flash.bellcore.com (John Schotland) Date: Sat, 23 Jul 88 10:02:49 EDT Subject: needed: complexity analyses of NN & evolutionary learning systems Message-ID: <8807231402.AA04961@flash.bellcore.com> Yann Would it be possible to get a copy of your thesis? Presumably it is written in French--but that's fine with me. Thanks very much. John Schotland Bellcore Room 2C323 445 South St. Morristown, NJ 07960 From pollack at orange.cis.ohio-state.edu Fri Jul 22 10:57:44 1988 From: pollack at orange.cis.ohio-state.edu (Jordan B. Pollack) Date: Fri, 22 Jul 88 10:57:44 EDT Subject: new address!
Message-ID: <8807221457.AA02539@orange.cis.ohio-state.edu> Jordan Pollack Department of Computer and Information Science The Ohio State University 2036 Neil Avenue Mall Columbus, OH 43210-1277 (614) 292-4890 pollack at cis.ohio-state.edu Please update your mailing list. Jordan From pollack at orange.cis.ohio-state.edu Fri Jul 22 10:57:30 1988 From: pollack at orange.cis.ohio-state.edu (Jordan B. Pollack) Date: Fri, 22 Jul 88 10:57:30 EDT Subject: new address! Message-ID: <8807221457.AA02535@orange.cis.ohio-state.edu> Jordan Pollack Department of Computer and Information Science The Ohio State University 2036 Neil Avenue Mall Columbus, OH 43210-1277 (614) 292-4890 pollack at cis.ohio-state.edu Please update your mailing list. Jordan From Scott.Fahlman at B.GP.CS.CMU.EDU Mon Jul 25 22:12:43 1988 From: Scott.Fahlman at B.GP.CS.CMU.EDU (Scott.Fahlman@B.GP.CS.CMU.EDU) Date: Mon, 25 Jul 88 22:12:43 EDT Subject: Tech Report available Message-ID: The following CMU Computer Science Dept. Tech Report is now available. If you want a copy, please send your request by computer mail to "catherine.copetas at cs.cmu.edu", who handles our tech report distribution. Indicate that you want "CMU-CS-88-162" and be sure to include a physical mail address. Try not to send your request to the whole connectionists mailing list -- people who do that look really stupid. Copies of this report have already been sent to students and faculty of the recent Connectionist Models Summer School at CMU, except for CMU people who can easily pick up a copy. --------------------------------------------------------------------------- Technical Report CMU-CS-88-162 "An Empirical Study of Learning Speed in Back-Propagation Networks" Scott E. Fahlman Computer Science Department Carnegie-Mellon University Pittsburgh, PA 15213 Abstract: Most connectionist or "neural network" learning systems use some form of the back-propagation algorithm. However, back-propagation learning is too slow for many applications, and it scales up poorly as tasks become larger and more complex. The factors governing learning speed are poorly understood. I have begun a systematic, empirical study of learning speed in backprop-like algorithms, measured against a variety of benchmark problems. The goal is twofold: to develop faster learning algorithms and to contribute to the development of a methodology that will be of value in future studies of this kind. This paper is a progress report describing the results obtained during the first six months of this study. To date I have looked only at a limited set of benchmark problems, but the results on these problems are encouraging: I have developed a new learning algorithm called "quickprop" that, on the problems tested so far, is faster than standard backprop by an order of magnitude or more. This new algorithm also appears to scale up very well as the problem size increases. From honavar at cs.wisc.edu Tue Jul 26 12:11:39 1988 From: honavar at cs.wisc.edu (A Buggy AI Program) Date: Tue, 26 Jul 88 11:11:39 CDT Subject: References needed Message-ID: <8807261611.AA20013@ai.cs.wisc.edu> I would appreciate pointers to papers or tech. reports describing 1. Prof. Hinton's (or any other) work on the effect of number of hidden units on generalization properties of networks. 2. Prof. Rumelhart's work on incorporating cost/complexity term in the criterion function minimized by the back propagation algorithm with the objective of minimizing the network complexity. 
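For concreteness, here is a minimal illustrative sketch (in C) of what a cost/complexity term added to the back-propagation criterion can look like. It shows a plain quadratic weight-decay penalty, lambda * sum_i w_i^2, whose gradient simply shrinks every weight, and a saturating penalty, lambda * sum_i w_i^2 / (1 + w_i^2), which costs large weights roughly a constant so that superfluous weights are driven toward zero while well-supported ones are left largely alone. These are standard forms of complexity penalty and are not claimed to be the exact criterion Rumelhart presented; all names and numbers below are made up for illustration.

    #include <stdio.h>

    #define NWEIGHTS 4

    /* gradient of the quadratic penalty  lambda * w^2  for one weight */
    double decay_grad(double w, double lambda)
    {
        return 2.0 * lambda * w;
    }

    /* gradient of the saturating penalty  lambda * w^2 / (1 + w^2) */
    double elimination_grad(double w, double lambda)
    {
        double d = 1.0 + w * w;
        return 2.0 * lambda * w / (d * d);
    }

    int main(void)
    {
        /* made-up values standing in for the weights and for the ordinary
           back-propagation (data) gradient at this step */
        double w[NWEIGHTS]         = { 0.1, 0.8, -2.5, 6.0 };
        double data_grad[NWEIGHTS] = { 0.05, -0.02, 0.01, 0.0 };
        double lambda = 0.001, eta = 0.5;
        int i;

        for (i = 0; i < NWEIGHTS; i++) {
            /* total gradient = data term + complexity term */
            double g = data_grad[i] + elimination_grad(w[i], lambda);
            w[i] -= eta * g;
            printf("w[%d] = %f\n", i, w[i]);
        }
        return 0;
    }

In either case the only change to standard back propagation is the extra additive term in each weight's gradient; the shape of the saturating form is what distinguishes a "complexity" penalty from plain decay.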
Both were discussed by the respective researchers in the talks they gave at the CMU summer school on connectionist models. Thanks in advance. Vasant Honavar honavar at ai.cs.wisc.edu From mls at csmil.umich.edu Thu Jul 28 13:04:50 1988 From: mls at csmil.umich.edu (Martin Sonntag) Date: Thu, 28 Jul 88 13:04:50 EDT Subject: Rochester Connectionist Simulator Message-ID: <8807281704.AA04795@csmil.umich.edu> Help! I have been trying to reach Costanzo at cs.rochester.edu about the Rochester Connectionist Simulator. He used to be the contact person there. Does anyone out there have version 4.1 and/or know how to get it? I have version 4.0 and a license for the University of Michigan. Please reply to mls at csmil.umich.edu with any e-mail addresses of contacts for this package. Thanks. Martin Sonntag Cognitive Science & Machine Intelligence Lab University of Michigan From goddard at aurel.caltech.edu Fri Jul 29 14:20:26 1988 From: goddard at aurel.caltech.edu (goddard@aurel.caltech.edu) Date: Fri, 29 Jul 88 11:20:26 -0700 Subject: Rochester Connectionist Simulator In-Reply-To: Your message of Thu, 28 Jul 88 13:04:50 -0400. <8807281704.AA04795@csmil.umich.edu> Message-ID: <8807291820.AA01354@aurel.caltech.edu> The contact person is Rose Peet (rose at cs.rochester.edu). RCS is available via anonymous ftp (free) as well as on tape ($150). From INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU Sat Jul 30 17:10:00 1988 From: INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU) Date: Sat, 30 Jul 88 16:10 EST Subject: Error correcting competition Message-ID: Does anyone know of research into competitive learning mechanisms similar to the one presented in PDP I that include teacher error correction? It strikes me that if the competitive network makes a decision, and the winning decision elements make a wrong decision, then connections to those winning elements should be changed so that each element would have a lessened chance of winning again given the same set of circumstances. In this way, we have accountability of neural groups similar to back-propagation, yet this method may be more biologically appealing. -Thomas G. Edwards ins_atge at jhuvms Apt. 2a 331 E. University Pkwy Baltimore, MD 21218
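The update described above can be written down directly. Below is a minimal illustrative sketch (in C), not code from any published model: weight vectors and inputs are kept normalized, the winner is the unit with the largest dot product with the input, and a teacher bit decides whether the winner's weights move toward the input (the usual competitive-learning step of the kind described in PDP I) or away from it (the error-correcting step, which lowers that unit's chance of winning the same pattern again). The punish-when-wrong rule is close in spirit to Kohonen's LVQ; all identifiers and constants here are hypothetical.

    #include <stdio.h>
    #include <math.h>

    #define NUNITS  3
    #define NINPUTS 4

    double w[NUNITS][NINPUTS];            /* competitive-layer weight vectors */

    void normalize(double *v, int n)      /* keep a vector on the unit sphere */
    {
        double len = 0.0;
        int i;
        for (i = 0; i < n; i++) len += v[i] * v[i];
        len = sqrt(len);
        if (len > 0.0) for (i = 0; i < n; i++) v[i] /= len;
    }

    int winner(const double *x)           /* unit with the largest dot product */
    {
        int best = 0, j, i;
        double bestact = -1.0e30;
        for (j = 0; j < NUNITS; j++) {
            double act = 0.0;
            for (i = 0; i < NINPUTS; i++) act += w[j][i] * x[i];
            if (act > bestact) { bestact = act; best = j; }
        }
        return best;
    }

    /* teacher_ok = 1: winner was right, move its weights toward the input.
       teacher_ok = 0: winner was wrong, move its weights away from the input. */
    void update(const double *x, int win, int teacher_ok, double eta)
    {
        double sign = teacher_ok ? 1.0 : -1.0;
        int i;
        for (i = 0; i < NINPUTS; i++)
            w[win][i] += sign * eta * (x[i] - w[win][i]);
        normalize(w[win], NINPUTS);
    }

    int main(void)
    {
        double x[NINPUTS] = { 1.0, 0.0, 0.0, 1.0 };
        double act;
        int i, j;

        for (j = 0; j < NUNITS; j++) {            /* arbitrary initial weights */
            for (i = 0; i < NINPUTS; i++) w[j][i] = 1.0 + 0.1 * (j + i);
            normalize(w[j], NINPUTS);
        }
        normalize(x, NINPUTS);

        j = winner(x);
        update(x, j, 0, 0.2);                 /* teacher says this winner was wrong */

        for (act = 0.0, i = 0; i < NINPUTS; i++) act += w[j][i] * x[i];
        printf("punished unit %d; its activation on this input is now %f\n", j, act);
        return 0;
    }

Only the winning unit is ever credited or blamed, which is what gives the scheme the "accountability of neural groups" flavor without requiring error signals to be propagated through the whole network.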