From awyk at psy.uwa.oz.au Mon Dec 2 11:53:55 1991 From: awyk at psy.uwa.oz.au (Brian Aw) Date: Mon, 2 Dec 91 16:53:55 GMT Subject: Standard Images Message-ID: <9112021653.AA25678@freud.psy.uwa.oz.au> Would anyone have in hand some standard natural images used for image compression and reconstruction? A neural-net based coding and reconstruction scheme which I am currently working on works well with some natural images taken from a laboratory camera, but I need to test it with some 'bench-mark' natural images. If you can help please reply to me directly and we will talk on how to get the images over. Thanks. Brian Aw 2/12/91. From gluck at pavlov.Rutgers.EDU Mon Dec 2 08:18:43 1991 From: gluck at pavlov.Rutgers.EDU (Mark Gluck) Date: Mon, 2 Dec 91 08:18:43 EST Subject: NN in Cog. & Neural Sciences: Jackson Hole, WY, 1/16/92 Message-ID: <9112021318.AA09826@pavlov.rutgers.edu> Symposium: NEURAL NETWORK MODELS IN THE BEHAVIORAL AND NEURAL SCIENCES at 17th Annual Interdisciplinary Conf. Jackson Hole, Wyoming Symposium Chair: Mark A. Gluck (Neuroscience, Rutgers; gluck at pavlov.rutgers.edu) Thursday, January 16th, 4-8pm The talks will focus on theories of neural computation and their application to diverse topics in the cognitive and neural sciences. These presentations will be general tutorial overviews of each subspeciality. They are intended to enable members of the interdisciplinary audience to keep abreast of the latest developments in the field. Talks will be 23 minutes in length -- no more than half of which should be devoted to the speaker's own research. Time will be strictly limited. An additional 7 minutes will be alloted afterwards for discussion and questions. Speakers & Topics Larry Maloney, (Cognitive & Neural Science, NYU): NN & Vision Richard Granger (Neurobio. of Learning & Mem, Irvine): NN & Cortical Processing Stephen Hanson (Siemens Research): NN & Event Perception Mark Gluck (Neuroscience, Rutgers): NN & Human Learning Lee Giles (NEC Corp.): NN & Temporal Sequences David Servan-Schreiber (W. Psychiatric Inst. & CMU): NN & Psychiatric Disorders __________________________________________________________ This one day symposium is part of the 17th Annual Interdisciplinary Conference. The full conference runs from January 12 - January 16, 1992 at Teton Village, Jackson Hole, Wyoming. The meeting is an attempt to facilitate the interchange of ideas among researchers in a variety of disciplines relating to cognitive, neuro, and computer science. For further information on the conference, contact: Interdisciplinary Conference Attn: Dr. George Sperling Psychology & Neural Sciences, NYU 6 Washington Place, Rm. 980 NYC, NY 10003 Email: gs at cns.nyu.edu From wahba at stat.wisc.edu Mon Dec 2 14:54:57 1991 From: wahba at stat.wisc.edu (Grace Wahba) Date: Mon, 2 Dec 91 13:54:57 -0600 Subject: cross validation Message-ID: <9112021954.AA03233@hera.stat.wisc.edu> Re: Cross validation and Generalized Cross Validation: These tools are widely used in the statistics literature to resolve the bias - variance tradeoff in various contexts, including some which might be considered ml, and many image reconstruction algorithms, especially those which might be considered deconvolution. There is also some work on edge detection. 
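[Illustrative aside - not from Wahba's book or any posted code: a minimal Python/NumPy sketch of the generalized cross validation (GCV) criterion referred to above, applied to choosing a ridge-regression penalty. The function and variable names (gcv_ridge, lambdas) are assumptions made for this example; the criterion itself is the standard GCV score for a linear smoother with influence matrix A(lambda).]

import numpy as np

def gcv_ridge(X, y, lambdas):
    # Generalized cross validation for a ridge penalty.
    # A(lam) = X (X'X + n*lam*I)^{-1} X' is the influence ("hat") matrix,
    # and GCV(lam) = (RSS/n) / (tr(I - A)/n)^2.
    n, p = X.shape
    scores = []
    for lam in lambdas:
        A = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T)
        resid = y - A @ y
        scores.append((resid @ resid / n) / (1.0 - np.trace(A) / n) ** 2)
    return lambdas[int(np.argmin(scores))], scores

The lambda minimizing the GCV score is the data-driven resolution of the bias-variance tradeoff: a large lambda oversmooths (bias), a small lambda undersmooths (variance).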
You can read all about it in the context of multivariate function estimation and regularization (in reproducing kernel Hilbert spaces) in "Spline Models for Observational Data" - (see *** below) including 30 pages of references, mostly to the smoothing and regularization literature. Thin plate splines (source of one of the popular radial basis functions) also get a chapter. As a statistician new to the ml literature, I am having fun reading the ml mail and observe that there is a growing overlap between the ml and the statistics literature, and all would benefit from seeing what the other side is doing... ..***.... Spline Models for Observational Data, by Grace Wahba v. 59 in the CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, PA, March 1990. Softcover, 169 pages, bibliography, author index. ISBN 0-89871-244-0 List Price $24.75, SIAM or CBMS* Member Price $19.80 (Domestic 4th class postage free, UPS or Air extra) May be ordered from SIAM by mail, electronic mail, or phone: SIAM P. O. Box 7260 Philadelphia, PA 19101-7260 USA SIAM at wharton.upenn.edu 1-800-447-7426 (8:30-4:45 Eastern Standard Time, toll-free from the US, I don't know if this works overseas) May be ordered on American Express, Visa or Mastercard, or may be billed (extra charge). CBMS member organizations include AMATC, AMS, ASA, ASL, ASSM, IMS, MAA, NAM, NCSM, ORSA, SOA and TIMS. From ICPRK%ASUACAD.BITNET at BITNET.CC.CMU.EDU Mon Dec 2 16:40:59 1991 From: ICPRK%ASUACAD.BITNET at BITNET.CC.CMU.EDU (Peter Killeen) Date: Mon, 02 Dec 91 14:40:59 MST Subject: job ad for NIPS message board Message-ID: <01GDN7JGAM6O9EDC4T@BITNET.CC.CMU.EDU> Experimental Psychologist Arizona State University is recruiting an Associate Professor of Experimental Psychology. The successful candidate must have a Ph.D. in psychology with specialty in adaptive systems/neural networks. The individual will join a growing interdisciplinary group with interests in basic biomedical research from an adaptive systems perspective. The position will start in August 1992. Send a vita, reprints, and arrange for three individuals to send letters of reference to Dr. Peter R. Killeen, Experimental Psychology Search Committee, Department of Psychology, Arizona State University, Tempe, AZ 85287-1104. Review of applications will begin Feb 14, 1992, and continue every two weeks thereafter until the position is filled. ASU is an equal-opportunity and affirmative action employer. From xie at ee.su.OZ.AU Mon Dec 2 19:07:21 1991 From: xie at ee.su.OZ.AU (Xie Yun) Date: Tue, 3 Dec 1991 11:07:21 +1100 Subject: TR available Message-ID: <9112030007.AA15244@brutus.ee.su.OZ.AU> The following technical report has been placed in the neuroprose archive: \title{Training Algorithms for Limited Precision Feedforward Neural Networks } \author{Yun Xie \thanks{Permanent address: Department of Electronic Engineering, Tsinghua University, Beijing 100084, P.R.China} \hskip 30pt Marwan A. Jabri\\ \\ Department of Electrical Engineering\\ The University of Sydney\\ N.S.W. 2006, Australia} \date{} \maketitle \begin{abstract} A statistical quantization model is used to analyze the effects of quantization on the performance and the training dynamics of a feedforward multi-layer neural network implemented in digital hardware. The analysis shows that special techniques have to be employed to train such networks in which each variable is represented by a limited number of bits in fixed-point format.
Based on the analysis, we propose a training algorithm that we call the Combined Search Algorithm (CS). It consists of two search techniques and can be easily implemented in hardware. Computer simulations were conducted using IntraCardiac ElectroGrams (ICEGs) and sonar reflection patterns, and the results show that: using CS, the training performance of feedforward multi-layer neural networks implemented in digital hardware with 8 to 10 bit resolution can be as good as that of networks implemented with unlimited precision; CS is insensitive to training parameter variation; and importantly, the simulations confirm that the number of quantization bits can be reduced in the upper layers without affecting the performance of the network. \end{abstract} You can get the report by FTP: unix>ftp 128.146.8.52 name:anonymous Password:neuron ftp>binary ftp>cd pub/neuroprose ftp>get yun.cs.ps.Z ftp>bye unix>uncompress yun.cs.ps.Z unix>lpr yun.cs.ps Yun From awyk at psy.uwa.oz.au Tue Dec 3 10:37:47 1991 From: awyk at psy.uwa.oz.au (Brian Aw) Date: Tue, 3 Dec 91 15:37:47 GMT Subject: Standard Images Message-ID: <9112031537.AA28573@freud.psy.uwa.oz.au> As soon as I receive the standard images (for testing image compression and reconstruction techniques), I will be most happy to share them with those who have requested them and with anyone else who might like to have them. Brian Aw 3/12/91. awyk at freud.psy.uwa.oz.au From SCHNEIDER at vms.cis.pitt.edu Tue Dec 3 11:23:00 1991 From: SCHNEIDER at vms.cis.pitt.edu (SCHNEIDER@vms.cis.pitt.edu) Date: Tue, 3 Dec 91 12:23 EDT Subject: Graduate and Post-doc positions in Neural Processes in Cognition Message-ID: Program announcement for Interdisciplinary Graduate and Postdoctoral Training in Neural Processes in Cognition at the University of Pittsburgh and Carnegie Mellon University The Pittsburgh Neural Processes in Cognition program, now in its second year, is providing interdisciplinary training in brain sciences. The National Science Foundation has established an innovative program for students investigating the neurobiology of cognition. The program's focus is the interpretation of cognitive functions in terms of neuroanatomical and neurophysiological data and computer simulations. Such functions include perceiving, attending, learning, planning, and remembering in humans and in animals. A carefully designed program of study prepares each student to perform original research investigating cortical function at multiple levels of analysis. State-of-the-art facilities include: computerized microscopy, human and animal electrophysiological instrumentation, behavioral assessment laboratories, MRI and PET brain scanners, the Pittsburgh Supercomputing Center, and a regional medical center providing access to human clinical populations. This is a joint program between the University of Pittsburgh, its School of Medicine, and Carnegie Mellon University. Each student receives full financial support, travel allowances and a computer workstation. Applications are encouraged from students with interest in biology, psychology, engineering, physics, mathematics, or computer science. Last year's class included mathematicians, psychologists, and neuroscience researchers. Pittsburgh is one of America's most exciting and affordable cities, offering outstanding symphony, theater, professional sports, and outdoor recreation in the surrounding Allegheny mountains. More than ten thousand graduate students attend its universities.
Core Faculty and interests and affiliation Carnegie Mellon University -Psychology- James McClelland, Jonathan Cohen, Martha Farah, Mark Johnson University of Pittsburgh Behavioral Neuroscience - Michael Ariel Biology - Teresa Chay Information Science - Paul Munro Mathematics - Bard Ermentrout Neurobiology Anatomy and Cell Sciences - Al Humphrey Neurological Surgery - Don Krieger, Robert Sclabassi Neurology - Steven Small Psychiatry - David Lewis, Lisa Morrow, Stuart Steinhauer Psychology - Walter Schneider, Velma Dobson Physiology - Dan Simons Radiology - Mark Mintun Applications: To apply to the program contact the program office or one of the affiliated departments. Students are admitted jointly to a home department and the Neural Processes in Cognition Program. Postdoctoral applicants must have United States resident status. Applications are requested by February 1. For information contact: Professor Walter Schneider Program Director Neural Processes in Cognition University of Pittsburgh 3939 O'Hara St Pittsburgh, PA 15260 Or: call 412-624-7064 or Email to NEUROCOG at VMS.CIS.PITT.BITNET. From uh311ae at sunmanager.lrz-muenchen.de Tue Dec 3 14:19:50 1991 From: uh311ae at sunmanager.lrz-muenchen.de (Henrik Klagges) Date: 03 Dec 91 20:19:50+0100 Subject: TechReport ps file is broken Message-ID: <9112031919.AA04842@sunmanager.lrz-muenchen.de> The recently announced TechReport about limited precision algorithms prints miniaturized mirror-image pages. I guess the file is corrupt. Just to save you from a nasty surprise. Cheers, Henrik IBM Research From 7923509%TWNCTU01.BITNET at BITNET.CC.CMU.EDU Wed Dec 4 13:09:00 1991 From: 7923509%TWNCTU01.BITNET at BITNET.CC.CMU.EDU (7923509%TWNCTU01.BITNET@BITNET.CC.CMU.EDU) Date: Wed, 4 Dec 91 13:09 U Subject: Associative Memory Message-ID: <01GDP1KIDN8G9EDC4T@BITNET.CC.CMU.EDU> Hi everyone: I have some questions, please help me. The BAM model seems to be unable to learn all of the training pairs which are given in advance. When there are many training pairs, the BAM behaves badly. Is there any way to improve the capacity and fault tolerance? If we want to associate training pairs (Ai,Bi), i=1,2,..,P, and the training patterns (both the Ai's and the Bi's) are not linearly independent, is it possible? Please reply to my bitnet address, thanks in advance! 7923509 at twnctu01 From sakaue at it4.crl.mei.co.jp Wed Dec 4 19:22:18 1991 From: sakaue at it4.crl.mei.co.jp (S.Sakaue) Date: Wed, 4 Dec 91 19:22:18 JST Subject: TechReport ps file is broken In-Reply-To: Henrik Klagges's message of 03 Dec 91 20:19:50+0100 <9112031919.AA04842@sunmanager.lrz-muenchen.de> Message-ID: <9112041022.AA07635@sky1.it4.crl.mei.co.jp> >>>>> On 03 Dec 91 20:19:50+0100, Henrik Klagges said: Henrik> The recently announced Techreport about limited precision algorithms prints Henrik> miniaturized mirror-image pages. I guess the file is corrupt. That file is a combination of two PostScript files. You can recover the paper by adding a showpage near the end of the first file. ---------- Shigeo Sakaue Central Research Labs. Matsushita Electric Industrial Co., Ltd. 3-15.
Yagumo-Nakamachi, Moriguchi, Osaka, 570, Japan email: sakaue at crl.mei.co.jp TEL: <+81>6-909-1121 FACSIMILE: <+81>6-906-0177 From pgeutner at ira.uka.de Wed Dec 4 08:16:32 1991 From: pgeutner at ira.uka.de (Petra Geutner) Date: Wed, 04 Dec 91 14:16:32 +0100 Subject: mail, die in petra landen sollte Message-ID: From xie at ee.su.OZ.AU Wed Dec 4 17:28:54 1991 From: xie at ee.su.OZ.AU (Xie Yun) Date: Thu, 5 Dec 1991 09:28:54 +1100 Subject: TR Message-ID: <9112042228.AA09709@brutus.ee.su.OZ.AU> I am very sorry for the mistake in my previous PS file in neurprose under the name yun.cs.ps.Z. I have sent another one to Inbox and asked Jordan to move it to neuroprose to replace the old one. Yun From M160%eurokom.ie at BITNET.CC.CMU.EDU Fri Dec 6 12:59:00 1991 From: M160%eurokom.ie at BITNET.CC.CMU.EDU (Ronan Reilly ERC) Date: Fri, 6 Dec 1991 12:59 CET Subject: Cognitive Science of NLP Workshop Message-ID: <9112061259.161899@eurokom.ie> Call for Participation in a Workshop on THE COGNITIVE SCIENCE OF NATURAL LANGUAGE PROCESSING 14-15 March, 1992 Dublin City University Guest Speakers: George McConkie University of Illinois at Urbana-Champaign Kim Plunkett University of Oxford Noel Sharkey University of Exeter Attendance at the CSNLP workshop will be by invitation on the basis of a submitted paper. Those wishing to be considered should send a paper (hardcopy, no e-mail submissions please) of not more than eight A4 pages to Ronan Reilly (e-mail: ronan_reilly at eurokom.ie), Educational Research Centre, St Patrick's College, Dublin 9, Ireland, not later than 3 February, 1992. Notification of acceptance along with registration and accommodation details will be sent out by 17 February, 1992. Submitting authors should also send their fax number and/or e-mail address to help speed up the selection process. The particular focus of the workshop will be on the computational modelling of human natural language processing (NLP), and preference will be given to papers that present empirically supported computational models of any aspect of human NLP. An additional goal in selecting papers will be to provide coverage of a range of NLP areas. This workshop is supported by the following organisations: Educational Research Centre, St Patrick's College, Dublin; Linguistics Insititute of Ireland; Dublin City University; and the Commission of the European Communities through the DANDI ESPRIT Basic Research Action (No. 3351). From vergina!strintzi at csi.forth.gr Fri Dec 6 20:57:25 1991 From: vergina!strintzi at csi.forth.gr (Michael Strintzis) Date: Fri, 6 Dec 91 17:57:25 PST Subject: No subject Message-ID: <9112061757.aa05807@vergina.uucp> Dear Connectionists Could somebody please help me update my bibliography on connectionist techniques for Principal Component Analysis? There was an excellent letter on this topic at the Connectionist Conference at eurokom which I unfortunately accidentally missed. Thanks Michael Strintzis Michael.Strintzis at eurokom.ie Electrical Engineering University of Thessaloniki 54006 Thessaloniki Greece From JWLEE%KRYSUCC1.BITNET at vma.cc.cmu.edu Sat Dec 7 11:40:00 1991 From: JWLEE%KRYSUCC1.BITNET at vma.cc.cmu.edu (Jaewoong Lee) Date: SAT, 7 DEC 1991 11:40 EXP Subject: population coding Message-ID: Dear Connectionists, I am a graduate student starting a work on connectionist model for population coding. What I am trying to do is to build up *biologically plausible* model that can explain population coding mechanism in superior colliculus. What I have read are [C. Lee et al. 
"Population coding of saccadic eye movement by neurons in the superior colliculus", Nature Vol. 332] and some others. Any other references or any comments? Your help will be greatly appreciated and I will summarize the replies for other researchers. Thank you. Jaewoong Lee, AI Lab. CS. Dept. Yonsei University, Seoul, KOREA e-mail: JWLEE at KRYSUCC1.BITNET From sutcliffer%ul.ie at BITNET.CC.CMU.EDU Fri Dec 6 13:56:57 1991 From: sutcliffer%ul.ie at BITNET.CC.CMU.EDU (sutcliffer%ul.ie@BITNET.CC.CMU.EDU) Date: Fri, 6 Dec 91 18:56:57 GMT Subject: AICS'92 call for papers ============================================ = AICS'92 Announcement and Call For Papers = ============================================ Message-ID: <9112061856.AA08766@itdsrv1.ul.ie> 5th Irish Conference on Artificial Intelligence and Cognitive Science Keynote Speaker : Erik Sandewall, Linkoping University, Sweden 10-11th September 1992 University of Limerick, Ireland. Aims The conference brings together Irish and overseas researchers and practitioners from all areas of artificial intelligence and cognitive science. The aim is to provide a forum where researchers can present their current work and where industrial and commercial users can relate this research to their own practical experience and needs. Submissions from abroad are particularly welcome and assistance with travel costs may be available for a small number of participants. Topics of Interest Papers are invited which describe substantial, original and unpublished research on all aspects of artificial intelligence and cognitive science, including, but not limited to: Application and Theory of Expert Systems Human-Computer Interaction Learning Natural Language Knowledge Representation Principles and Applications of Connectionism User Modelling Decision Support and Strategic Planning Robotics Speech Image Processing Format for Submission Authors should submit three copies of a complete paper, not to exceed 5000 words. The first page should comprise the title, author(s), address, phone, fax and e-mail, together with a 200 word abstract. The text of the paper should then start on the second page Schedule Papers Due: 24 April, 1992 Notification of Acceptance: 12 June 1992 Publication of Proceedings The abstracts of accepted papers will be distributed at the conference. The proceedings of previous AICS conferences have been published in the British Computer Society Workshop Series with Springer Verlag, and it is expected that those of AICS'92 will b Conference Information The University campus is situated in rolling parkland beside the river Shannon. The conference will take place in the newly built Robert Schuman building which has fully equipped lecture theatres so that workstation or PC software can easily be demonstrated, The University is easily reached by road, rail or by air - Shannon international airport is only 20kms away. 
Further information and registration details can be obtained by email from aics92 at ul.ie or by contacting : Kevin Ryan - Conference Chairperson AICS'92 Department of Computer Science and Information Systems University of Limerick Plassey Technological Park Limerick, Ireland Phone (353)-61-333644 Fax (353)-61-330316 Programme Committee Roddy Cowie, Queen's University Belfast Mark Keane, Trinity College Dublin Gabriel McDermott, University College Dublin Michael McTear, University of Ulster Abdur Rahman, University of Limerick Kevin Ryan, University of Limerick Alan Smeaton, Dublin City University Humphrey Sorensen, University College Cork Richard Sutcliffe, University of Limerick From dario at cns.nyu.edu Mon Dec 9 21:01:56 1991 From: dario at cns.nyu.edu (Dario Ringach) Date: Mon, 9 Dec 91 21:01:56 EST Subject: Position errors in smooth pursuit Message-ID: <9112100201.AA07380@wotan.cns.nyu.edu> I would appreciate any references documenting the influence of position errors in smooth pursuit eye movements. I am aware of the work of Wyatt and Pola, and Carl and Gellman. Thank you in advance. -- Dario From WARREN%BROWNCOG.BITNET at BITNET.CC.CMU.EDU Mon Dec 9 16:41:00 1991 From: WARREN%BROWNCOG.BITNET at BITNET.CC.CMU.EDU (WARREN%BROWNCOG.BITNET@BITNET.CC.CMU.EDU) Date: Mon, 9 Dec 1991 16:41 EST Subject: Job Announcement Message-ID: <01GDWZMLKOSS0002W5@BROWNCOG.BITNET> JOB OPENING IN HIGHER-LEVEL COGNITION DEPT. OF COGNITIVE AND LINGUISTIC SCIENCES BROWN UNIVERSITY The Department of Cognitive and Linguistic Sciences at Brown University invites applications for a tenure-track, Assistant Professor position in higher-level cognition, beginning July 1, 1992. Preference will be given to the areas of concepts, memory, attention, problem solving, knowledge representation, and the relation between cognition and language. Applicants should have a strong experimental research program and broad teaching ability in the field of cognition and cognitive science. Women and minorities are especially encouraged to apply. Send C.V., three letters of reference, copies of publications, and statement of research interests by January 1, 1992, to: Dr. William H. Warren, Chair Cognitive Search Committee Dept. of Cognitive and Linguistic Sciences, Box 1978 Brown University Providence, RI 02912 Brown University is an Equal Opportunity/Affirmative Action Employer. From UDAH256 at oak.cc.kcl.ac.uk Tue Dec 10 05:56:00 1991 From: UDAH256 at oak.cc.kcl.ac.uk (Mark Plumbley) Date: Tue, 10 Dec 91 10:56 GMT Subject: Principal Components and Information Theory Message-ID: Michael Strintzis writes: >Could somebody please help me update my bibliography >on connectionist techniques for Principal Component >Analysis? There was an article on connectionists recently from Gary Cottrell with some PCA references in it. I also have some work on Information Theory with implications on PCA (related to work of Linsker): M. D. Plumbley and F. Fallside, "An Information-Theoretic Approach to Unsupervised Connectionist Models", Proceedings of the 1988 Connectionist Models Summer School, pp239-245, Morgan Kaufmann, 1988 M. D. Plumbley "On Information Theory and Unsupervised Neural Networks", Tech Report CUED/F-INFENG/TR.78, Cambridge University Engineering Department, Cambridge CB2 1PZ, UK. August 1991. (Tech report version of my Thesis). The tech report shows that some of the PCA algorithms directly decrease information loss across the network as they progress. 
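[Illustrative aside - not code from the tech report above: a minimal Python/NumPy sketch of one classical "connectionist PCA" learning rule, Oja's single-unit Hebbian rule, whose weight vector converges (for a suitable learning rate) toward the eigenvector of the input covariance matrix with the largest eigenvalue. The function name oja_first_component is an assumption made for this example.]

import numpy as np

def oja_first_component(X, lr=0.01, epochs=50, seed=0):
    # Hebbian update with Oja's weight-decay normalization:
    #   w <- w + lr * y * (x - y * w),   where y = w . x
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)              # work with zero-mean inputs
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                   # unit output
            w += lr * y * (x - y * w)   # decay term keeps ||w|| near 1
    return w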
Mark Plumbley Tel: [+44|0] 71 873 2241/2234 Centre for Neural Networks Fax: [+44|0] 71 873 2017 Department of Mathematics/King's College London/Strand/London WC2R 2LS/UK From P.Refenes at cs.ucl.ac.uk Tue Dec 10 12:52:59 1991 From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk) Date: Tue, 10 Dec 91 17:52:59 +0000 Subject: PREPRINT Message-ID: CURRENCY EXCHANGE RATE PREDICTION & NEURAL NETWORK DESIGN STRATEGIES A. N. REFENES, M. AZEMA-BARAC, L. CHEN, & S. A. KAROUSSOS Department of Computer Science, University College London, Gower Street, WC1, 6BT, London, UK. ABSTRACT This paper describes a non-trivial application in forecasting currency exchange rates, and its implementation using a multi-layer perceptron network. We show that with careful network design, the backpropagation learning procedure is an effective way of training neural networks for time series prediction. The choice of squashing function is an important design issue in achieving fast convergence and good generalisation performance. We evaluate the use of symmetric and asymmetric squashing functions in the learning procedure, and show that symmetric functions yield faster convergence and better generalisation performance. We derive analytic results to show the conditions under which symmetric squashing functions yield faster convergence, and to quantify the upper bounds on the convergence improvement. The network is evaluated both for long term forecasting without feed-back (i.e. only the forecast prices are used for the remaining trading days) and for short term forecasting with hourly feed-back. The network learns the training set nearly perfectly and shows accurate prediction, making at least 22% profit on the last 60 trading days of 1989. =========================================================== From SAYEGH at CVAX.IPFW.INDIANA.EDU Tue Dec 10 20:25:45 1991 From: SAYEGH at CVAX.IPFW.INDIANA.EDU (SAYEGH@CVAX.IPFW.INDIANA.EDU) Date: Tue, 10 Dec 1991 20:25:45 EST Subject: conference announcement Message-ID: <911210202545.2020e131@CVAX.IPFW.INDIANA.EDU> CALL FOR PAPERS FIFTH CONFERENCE ON NEURAL NETWORKS AND PARALLEL DISTRIBUTED PROCESSING INDIANA UNIVERSITY-PURDUE UNIVERSITY 9, 10, 11 APRIL 1992 The Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University will be held on the Fort Wayne Campus, April 9, 10 and 11, 1992. Authors are invited to submit a one page abstract of current research in their area of Neural Networks Theory or Application before February 3, 1992. Notification of acceptance or rejection will be sent by February 28. Conference registration is $20 and students attend free. Some limited financial support might also be available to allow students to attend. Abstracts and inquiries should be addressed to: email: sayegh at ipfwcvax.bitnet US mail: Prof. Samir Sayegh Physics Department Indiana University-Purdue University Fort Wayne, IN 46805 From thomasp at informatik.tu-muenchen.de Wed Dec 11 06:55:26 1991 From: thomasp at informatik.tu-muenchen.de (Thomas) Date: Wed, 11 Dec 91 12:55:26 +0100 Subject: Pen-Based Computing and HWCR Message-ID: <91Dec11.125539met.34095@gshalle1.informatik.tu-muenchen.de> I'm testing several Notepad or "Pen-Based" computers and I wonder if anybody knows of NN-approaches to handwritten character recognition (HWCR) actually (or about to be) USED in Notepads from Grid, NCR, Momenta, MicroSlate, Tusk, Telepad, Samsung.. or within operating systems like PenPoint, Windows for Pens...etc.
I'm aware of the fact, that NESTOR is developping HWCR-Software for Notepads but I know little else regarding their efforts. I'm also partly aware of the vast literature regarding HWCharacter/DigitR but I'm solely interested in PRACTICAL and commercially viable HWCR-approaches. Depending on feedback, I'll summarize responses to the net. Patrick Thomas N3 - Nachrichten Neuronale Netze From desa at cs.rochester.edu Wed Dec 11 17:25:03 1991 From: desa at cs.rochester.edu (desa@cs.rochester.edu) Date: Wed, 11 Dec 91 17:25:03 EST Subject: TR available in Neuroprose Archives Message-ID: <9112112225.AA02044@cyan.cs.rochester.edu> The following technical report has been placed in the neuroprose archive: Top-down teaching enables non-trivial clustering via competitive learning Virginia de Sa Dana Ballard desa at cs.rochester.edu dana at cs.rochester.edu Dept. of Computer Science University of Rochester Rochester, NY 14627-0226 Abstract: Unsupervised competitive learning classifies patterns based on similarity of their input representations. As it is not given external guidance, it has no means of incorporating task-specific information useful for classifying based on semantic similarity. This report describes a method of augmenting the basic competitive learning algorithm with a top-down teaching signal. This teaching signal removes the restriction inherent in unsupervised learning and allows high level structuring of the representation while maintaining the speed and biological plausibility of a local Hebbian style learning algorithm. Examples, using this algorithm in small problems, are presented and the function of the teaching input is illustrated geometrically. This work supports the hypothesis that cortical back-projections are important for the organization of sensory traces during learning. ----------------------------------------------------------------------- To retrieve by anonymous ftp: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:): anonymous Password (cheops.cis.ohio-state.edu:anonymous): ftp> cd pub/neuroprose ftp> binary ftp> get desa.top_down.ps.Z ftp> quit unix> uncompress desa.top_down.ps unix> lpr -P(your_local_postscript_printer) desa.top_down.ps Hard copy requests can be sent to tr at cs.rochester.edu or Technical Reports Dept. of Computer Science University of Rochester Rochester, NY 14627-0226 (There is a nominal $2 charge for hard copy TR's) From terry at jeeves.UCSD.EDU Wed Dec 11 22:08:41 1991 From: terry at jeeves.UCSD.EDU (Terry Sejnowski) Date: Wed, 11 Dec 91 19:08:41 PST Subject: Neural Computation 3:4 Message-ID: <9112120308.AA19887@jeeves.UCSD.EDU> Neural Computation Winter 1991, Volume 3, Issue 4 View Neural Network Classifiers Estimate Bayesian a Posteriori Probabilities Michael D. Richard and Richard P. Lippmann Note Lowering Variance of Decisions by Using Artificial Network Portfolios G. Mani Letters Oscillating Networks: Control of Burst Duration by Electrically Coupled Neurons L.F. Abbott, E. Marder, and S.L. Hooper A Computer Simulation of Oscillatory Behavior in Primary Visual Cortex Matthew A. Wilson and James M. Bower Segmentation, Binding, and Illusory Conjunctions D. Horn, D. Sagi, and M. Usher Contrastive Learning and Neural Oscillations Fernando Pineda and Pierre Baldi Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multi-Layer Networks Marwan Jabri and Barry Flower Predicting the Future: Advantages of Semi-Local Units Eric Hartman and James D. 
Keeler Improving the Generalisation Properties of Radial Basis Function Neural Networks Chris Bishop Temporal Evolution of Generalization during Learning in Linear Networks Pierre Baldi and Yves Chauvin Learning the Unlearnable Dan Nabutovsky and Eytan Domany Kolmogorov's Theorem is Relevant Vera Kurkov An Exponential Response Neural Net Shlomo Geva and Joaquin Sitte ----- SUBSCRIPTIONS - VOLUME 4 - BIMONTHLY (6 issues) ______ $40 Student ______ $65 Individual ______ $150 Institution Add $12 for postage and handling outside USA (+7% for Canada). (Back issues from Volumes 1-3 are available for $28 each.) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. (617) 253-2889. ----- From steensj at daimi.aau.dk Fri Dec 13 14:31:24 1991 From: steensj at daimi.aau.dk (steensj@daimi.aau.dk) Date: Fri, 13 Dec 91 14:31:24 MET Subject: Report available in Neuroprose Archives Message-ID: <9112131331.AA03566@bmw.daimi.aau.dk> ****** Please do not forward to other lists. Thank you ******* The following report has been placed in the neuroprose archives at Ohio State. Ftp instructions follow the abstract. ------------------------------------------------------------- A Conceptual Approach to Generalization in Dynamic Neural Networks Steen Sjogaard Computer Science Department Aarhus University DK-8000 Aarhus C. Denmark steensj at daimi.aau.dk ABSTRACT Inspired by the famous paper "Generalization as Search" by Tom Mitchell from 1982, a conceptual approach to generalization in artificial neural networks is proposed. The two most important ideas are (1) to consider the problem of forming a general description of a class of objects as a search problem, and (2) to divide the search space into a static and a dynamic part. These ideas are beneficial as they emphasize the evolution or process that a learner must undergo in order to discover a valid generalization. We find that this approach and the adapted conceptual framework provide a more varied and intuitively appealing view on generalization. Furthermore, a new cascade- correlation learning algorithm which is very similar to Fahlman and Lebiere's Cascade-Correlation Learning Architecture from 1990, is proposed. The capabilities of these two learning algorithms are discussed, and a direct comparison in terms of the conceptual framework is performed. Finally, the two algorithms are analyzed empirically, and it is demonstrated how the obtained results can be explained and discussed in terms of the conceptual framework. The empirical analyses are based on two experiments: The first experiment concerns the scaling behavior of the two network types, while the other experiment concerns a closer analysis of the representation that the two network types utilize for found generalizations. Both experiments show that the networks generated by the new algorithm perform better than the networks generated by the Cascade-Correlation Learning Architecture on the relatively simple geometric classification problem considered. 
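[Illustrative aside - this is neither Sjogaard's new algorithm nor Fahlman and Lebiere's released code, just a rough single-output Python/NumPy sketch of the cascade-correlation idea the report compares against: fit the output weights, train a candidate hidden unit to correlate with the remaining error, freeze it as a new feature, and repeat. All function names (fit_outputs, train_candidate, cascade_fit) are assumptions made for this example.]

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_outputs(H, y, reg=1e-3):
    # Regularized least-squares output weights on current features H (bias added).
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    w = np.linalg.solve(Hb.T @ Hb + reg * np.eye(Hb.shape[1]), Hb.T @ y)
    return Hb @ w

def train_candidate(H, err, steps=2000, lr=0.5, seed=0):
    # Gradient ascent on |covariance| between the candidate unit's output and
    # the residual error -- the candidate score used in cascade-correlation.
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    w = np.random.default_rng(seed).normal(scale=0.1, size=Hb.shape[1])
    e = err - err.mean()
    for _ in range(steps):
        v = sigmoid(Hb @ w)
        cov = np.dot(v - v.mean(), e)
        w += lr * np.sign(cov) * (Hb.T @ (e * v * (1.0 - v))) / len(e)
    return sigmoid(Hb @ w)            # frozen activations of the new hidden unit

def cascade_fit(X, y, n_hidden=5):
    H = X.astype(float)
    for _ in range(n_hidden):
        err = y - fit_outputs(H, y)                           # current residual
        H = np.hstack([H, train_candidate(H, err)[:, None]])  # install new unit
    return fit_outputs(H, y)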
------------------------------------------------------------- To retrieve by anonymous ftp: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sjogaard.concept.ps.Z ftp> quit unix> uncompress sjogaard.concept.ps.Z unix> lpr -P(your_local_postscript_printer) sjogaard.concept.ps /Steen From amari at sat.t.u-tokyo.ac.jp Sat Dec 14 15:13:17 1991 From: amari at sat.t.u-tokyo.ac.jp (Shun-ichi Amari) Date: Sat, 14 Dec 91 15:13:17 JST Subject: neural PCA; TRs on neural learning Message-ID: <9112140613.AA13370@sat.t.u-tokyo.ac.jp> In response to Strintzis' request on papers on neural PCA, I would like to note one old paper: S.Amari, "Neural theory of association and concept-formation", Biol. Cybern. vol. 26 (1977), pp.175-185. In this paper, a general neural learning scheme was treated. As an example, it was clearly analyzed (p.179) that a type of learning leads to the eigenvector corresponding to the maximal eigenvalue of the covariance matrix of input signals. I do not know if this is the first paper treating neural PCA. Maybe not. New Technical Reports are now available. Four Types of Learning Curves, by S.Amari, N.Fujita and S.Shinomoto, METR 91-04, May 1991, U. Tokyo Statistical Theory of Learning Curves under Entropic Loss Criterion, by S.Amari and N.Murata, METR 91-12, Nov. 1991, U. Tokyo. Both of them study the relation between the number of training examples and generalization errors. If there is a rush of paper requests, I am afraid that I cannot handle them all. From jfj at m53.limsi.fr Sun Dec 15 06:22:24 1991 From: jfj at m53.limsi.fr (Jean-Francois Jadouin) Date: Sun, 15 Dec 91 12:22:24 +0100 Subject: NNs and NLP Message-ID: <9112151122.AA13237@m53.limsi.fr> Hi. I am collecting references on neural network approaches to natural language processing (articles dealing with speech recognition excluded), and would like some help! If enough interested parties respond, I will post the results. P.S.: please write to me directly (jfj at limsi.fr), and not to connectionists.
Thanx in advance, jfj From barryf at ee.su.OZ.AU Mon Dec 16 18:01:38 1991 From: barryf at ee.su.OZ.AU (Barry Flower) Date: Tue, 17 Dec 1991 10:01:38 +1100 Subject: ACNN92 Program Message-ID: <9112162301.AA25524@brutus.ee.su.OZ.AU> THE THIRD AUSTRALIAN CONFERENCE ON NEURAL NETWORKS (ACNN'92) CONFERENCE PROGRAM 3rd - 5th FEBRUARY 1992 AUSTRALIAN NATIONAL UNIVERSITY CANBERRA, AUSTRALIA ACNN'92 Organising Committee ~~~~~~~~~~~~~~~~~~~~~~~~~~ Conference Chairman Dr Marwan Jabri Director Systems Engineering & Design Automation Laboratory (SEDAL) School of Electrical Engineering University of Sydney Technical Program Co-Chairs Prof Bill Levick, Australian National University A/Prof Ah Chung Tsoi, University of Queensland Stream Chairs Prof Yanni Attikiouzel, University of Western Australia Prof Max Bennett, University of Sydney Prof Max Coltheart, Macquarie University Dr Marwan Jabri, University of Sydney Dr M Palaniswami, University of Melbourne Dr Stephen Pickard, University of Sydney Dr M Srinivasan, Australian National University A/Prof Ah Chung Tsoi, University of Queensland Local Arrangements Dr M Srinivasan, Australian National University Institutions Liaison Dr N Nandagopal, DSTO Sponsorship Dr Stephen Pickard, University of Sydney Publicity Mr Barry Flower, University of Sydney Publications Mr Philip Leong, University of Sydney Secretariat Mrs Agatha Shotam, University of Sydney ---------------------------------------------------------------------- ---------------------------------------------------------------------- For further information and registrations please contact: Mrs Agatha Shotam Secretariat ACNN'92 Sydney University Tel: (+61-2) 692 4214 Electrical Engineering Fax: (+61-2) 660 1228 NSW 2006 Australia Email: acnn92 at ee.su.oz.au. ---------------------------------------------------------------------- ---------------------------------------------------------------------- PROGRAMME ~~~~~~~ Monday, 3rd February 1992 9.00 - 10.00 am Registration 10.00 - 10.30 am Official Opening The Hon Ross Free, Minister for Science & Technology 10.30 - 11.00 am Morning Tea 11.00 - 12.30 pm Session 1 On the Existence of "Fast" and "Slow" Directionally Sensitive Motion Detector Neurons in Insects G A Horridge & Ljerka Marcelja Centre for Visual Sciences, RSBS Australian National University, Australia Neural Networks for the Detection of Motion Boundaries M V Srinivasan & P Sobey Centre for Visual Sciences, RSBS Australian National University, Australia On the Mechanism Underlying Movement Detection in the Fly Visual System A Bouzerdoum Department of Electrical & Electronic Engineering University of Adelaide, Australia 12.30 - 2.00 pm Lunch 2.00 - 3.30 pm Session 2 (invited) A Generalization Method for Back-Propagation using Fuzzy Sets B R Hunt, Y Y Qi & D DeKruger Department of Electrical & Computer Engineering University of Arizona, USA Connectionist Models of Musical Pattern Recognition C Stevens Department of Psychology University of Sydney, Australia Analog VLSI Implementation of Adaptive Algorithms by an Extended Hebbian Synapse Circuit T Morie, O Fujitsu and Y Amemiya NTT LSI Laboratories, Japan 3.30 - 4.00 pm Afternoon Tea 4.00 - 5.30 pm Session 3 Computer Simulation: An Aid to Understanding the Neuronal Circuits that Control the Behaviour of the Intestine J C Bornstein, J B Furness, H Kelly Department of Physiology University of Melbourne, Australia T O Neild & R A R Bywater Department of Physiology Monash University A Multi-Module Neural Network Approach for ICEG Classification Z Chi & 
M A Jabri Department of Electrical Engineering University of Sydney, Australia Circuit Complexity for Neural Computation Kai-Yeung Siu, V Roychowdhury and T Kailath Department of Electrical & Computer Engineering University of California, Irvine, USA 5.30 - 8.00 pm Poster Session 1 and Cocktails Tuesday, 4th February 1992 9.00 - 10.30 am Session 4 Sparse Associative Memory W G Gibson and J Robinson School of Mathematics & Statistics University of Sydney, Australia Robustness and Universal Approximation in Multilayer Feedforward Neural Networks P Diamond and I V Fomenko Mathematics Department The University of Queensland, Australia A Partially Correlated Higher Order Recurrent Neural Network P S Malin & M Palaniswami Department of Electrical Engineering University of Melbourne, Australia 10.30 - 11.00 am Morning Tea 11.00 - 12.30 pm Session 5 A Computer Simulation of Intestinal Motor Activity S J H Brookes Department of Physiology Flinders University, Australia High Precision Hybrid Analogue/Digital Synapses A Azhad & J Morris Department of Computer Science University of Tasmania, Australia Model of the Spinal Frog Neural System Underlying the Wiping Reflex Ranko Babich ETF Pristina Yugoslavia 12.30 - 2.30 pm Lunch Postgraduate Students Session coordinated by Janet Wiles, University of Queensland 2.30 - 4.00 pm Session 6 Panel on Challenging Issues in Neural Networks Research Panelists include: Y Attikiouzel, University of Western Australia T Caelli, University of Melbourne M Coltheart, Macquarie University B Hunt, University of Arizona B Levick, Australian National University 4.00 - 4.30 pm Afternoon Tea 4.30 - 6.00 pm Session 7 "Solving" Combinatorially Hard Problems by Combining Deterministic Annealing with Constrained Optimisation Paul Stolorz Theoretical Division Los Alamos National Laboratory, USA The Recognition Capability of RAM-Based Neural Networks R Bowmaker & G G Coghill School of Engineering University of Auckland, New Zealand Modelling the Human Text-to-Speech System Max Coltheart School of Behavioural Sciences Macquarie University 6.00 - 8.00 pm Poster Session 2 7.00 - 8.00 pm BBQ Wednesday, 5th February 1992 9.00 - 10.40 am Session 8 Missing Values in a Backpropagation Neural Net P Vamplew & A Adams Department of Computer Science University of Tasmania, Australia Training Limited Precision Feedforward Neural Networks Y Xie & M A Jabri Department of Electrical Engineering University of Sydney, Australia Modelling Robustness of FIR amd IIR Synapse Multilayer Perceptrons Andrew Back & A C Tsoi Department of Electrical Engineering University of Queensland, Australia Poster Highlights 10.40 - 11.00 am Morning Tea 11.00 - 12.30 pm Session 9 Low-Level Insect Vision: A Large but Accessible Biological Neural Network D Osorio & A C James Centre for Visual Sciences Australian National University, Australia Comparisons of Letter Recognition by Humans and Artificial Neural Networks Cyril Latimer Department of Psychology University of Sydney, Australia Evidence that the Adaptive Gain Control Exhibited by Neurons of the Striate Visual Cortex is a Co-operative Network Property T Maddess & T R Vidyasagar Centre for Visual Sciences Australian National University, Australia 12.30 - 2.00 pm Lunch 12.30 - 2.00 pm Poster Session 3 2.00 - 3.00 pm Session 10 Panel on Benefits of Neural Network Technologies to Australian Commercial Applications. 
Panelists include: A Bowles, BHP Research Melbourne D Nandagopal, DSTO Australia K Hubick, DITAC P Nickolls, Telectronics Pacing Systems R Smith, ATERB 3.00 - 3.30 pm Afternoon Tea 3.30 - 4.30 pm Session 11 Applications of Artificial Neural Networks in the Australian Defence Industry N Nandagopal Guided Weapons Division DSTO, Australia Low Power Analogue VLSI Implementation of a Feed-Forward Neural Network S J Pickard, M A Jabri, P H W Leong B Flower & P Henderson Department of Electrical Engineering University of Sydney, Australia 4.30 - 5.00 pm Closing Poster Session 1 Monday, 3rd February 1992 5.30 pm - 8.00 pm Entropy Production - a New Harmony Function for Hopfield Like Networks L Andrey Institute of Computer & Information Science Czechoslovak Academy of Sciences, Czechoslovakia A New Method of Training Neural Nets to Counteract Their Overgeneralization M Bahrami & K E Tait School of Electrical Engineering University of New South Wales The Development and Application of Dynamic Models in Process Control Using Neural Networks C J Chessari & G W Barton Department of Chemical Engineering University of Sydney Image Restoration in the Homogeneous ANN and the Role of Time Discreteness N S Belliustin Scientific Research Institute of Radiophysics, USSR V G Yakhno Institute of Applied Physics, USSR Neuronal Control of the Intestine - a Biological Neural Network S J H Brookes Department of Physiology & Ctr for Neuroscience Flinders University A Neural Network Approach to Phonocardiography I Cathers Department of Biological Sciences Cumberland College of Health Sciences Motion Analysis and Range Estimation Using Neural Networks M Cavaiuolo, A J S Yakovleff & C R Watson Electronic Research Laboratory DSTO, Australia Unsupervised Clustering Using Dynamic Competitive Learning S J Kia & G G Coghill Department of Electrical & Electronic Engineering University of Auckland, New Zealand Neural Net Simulation of the Peristaltic Reflex A D Coop & S J Redman Division of Neuroscience, JCSMR Australian National University Novel Applications of a Standard Recurrent Network Model H Debar CSEE/DCI, France B Dorizzi Institut National des Telecommunications, France A Self-Organizing Neural Tree Architecture L Y Fang & A Jennings Artificial Intelligence Section Telecom Australia Research Labs K Q-Q Li Department of Computer Science Monash University T Li & S Klasa Department of Computer Science Concordia University, Canada Computer Simulation of Spinal Cord Neural Circuits that Control Muscle Movement B P Graham Centre for Information Science Research Australian National University S J Redman Division of Neuroscience, JCSMR Australian National University Temporal Patterns of Auditory Responses: Implications for Encoding Mechanisms in the Ear K G Hill Developmental Neurobiology Group, RSBS Australian National University Emulation of the Neuro-morphological Functions of Biological Visual Receptive Fields for Medical Image Processing S K Hungenahally School of Microelectronics Griffith University COPE - A Hybrid Connectionist Production System Environment N K Kasabov Department of Computer Science University of Essex, UK Pruning Large Synaptic Weights A Kowalczyk Artificial Intelligence Section Telecom Australia Research Labs A Connectionist Model of Attentional Learning Using a Sequentially Allocatable Spotlight of Attention C Latimer & Z Schreter Department of Psychology University of Sydney An Analogue Low Power VLSI Neural Network P H W Leong & M A Jabri Department of Electrical Engineering University of Sydney 
Radar Target Recognition using ART2 D Nandagopal, A C Wright, N M Martin, R P Johnson, P Lozo & I Potter Guided Weapons Division DSTO, Australia An Expert System and a Neural Network Combine to Control the Robotic Arc Welding Process E Siores Mechanical Engineering Department University of Wollongong Poster Session 2 Tuesday, 4th February 1992 6.00 - 8.00 pm Nonlinear Adaptive Control Using an IIR MLP A Back Department of Electrical Engineering University of Queensland Neural Network Analysis of Geomagnetic Survey Data C J S deSilva & Y Attikiouzel Department of Electrical Engineering University of Western Australia Seismic Event Classification Using Self-Organizing Neural Networks F U Dowla, W J Maurer & S P Jarpe Lawrence Livermore National Laboratory, USA Task Based Pruning R Dunne Mathematics Programme Murdoch University N A Campbell & H T Kiiveri Division of Mathematics & Statistics C S I R O Training Continually Running Recurrent Networks and Neuron with Gain Using Weight Perturbation B G Flower & M A Jabri Department of Electrical Engineering University of Sydney Science with Neural Nets: An Application to Nuclear Physics S Gazula Department of Physics Washington University, USA Image Restoration with Iterative K-nearest Neighbor Operation with the Scheduled K size (IKOWSK) Y Hagihara Faculty of Technology Tokyo University of Agriculture & Technology, Japan The Influence of Output Bit Mapping on Convergence Time in M-ary PSK Neural Network Detectors J T Hefferan & S Reisenfeld School of Electrical Engineering University of Technology, Sydney Discriminant Functions and Aggregation Functions: Emulation and Generalization of Visual Receptive Fields S K Hungenahally School of Microelectronics Griffith University Selective Presentation Learning for Pattern Recognition by Back-Propagation Neural Networks K Kohara NTT Network Information Systems Laboratories, Japan Identification of Essential Attributes for Neural Network Classifier A Kowalczyk Artificial Intelligence Section Telecom Australia Research Labs Back Propagation and the N-2-N Encoder Problem R Lister Basser Department of Computer Science University of Sydney A Comparison of Transfer Functions for Feature Extracting Layers in the Neocognitron D R Lovell, A C Tsoi & T Downs Department of Electrical Engineering University of Queensland Target Cuing: A Heterogeneous Neural Network Approach H McCauley Naval Weapons Center, USA Some Word Recognition Experiments with a Modified Neuron Model M Saseetharan & M P Moody School of Electrical & Electronic Systems Engineering Queensland University of Technology Regularization and Spline Fitting by Analog Networks D Suter Department of Computer Science La Trobe University Information Transformation Across a Physiological Synapse R M Vickery, B D Gynther & M J Rowe School of Physiology & Pharmacology University of New South Wales Towards a Neural Network Implementation of Hoffman's Lie Algebra for Vision J Wiles Department of Psychology University of Queensland Approximation Theoretic Results for Neural Networks R C Williamson Department of Systems Engineering Australian National University U Helmke Department of Mathematics University of Regensburg, Germany Pattern Recognition by Using a Compound Eye-like Hybrid System S W Zhang, M Nagle & M V Srinivasan Centre for Visual Sciences, RSBS Australian National University Poster Session 3 Wednesday, 5th February 1992 12.30 - 2.00 pm Simplifying the Hopfield/Tank Algorithm in Solving the Travelling Salesman Problem W K Lai & G G Coghill School of 
Engineering University of Auckland Domain Classification of Language Using Neural Networks M Flower Artificial Intelligence Section Telecom Australia Research Labs A New Self-Organisation Strategy for Floorplan Design J Jiang & M A Jabri Department of Electrical Engineering University of Sydney HOCAM - A Content Addressable Memory using Higher Order Neural Networks M Palaniswami Department of Electrical Engineering University of Melbourne Mean Square Reconstruction Error Criterion in Biological Vision T R Pattison Department of Electrical Engineering University of Adelaide Making a Simple Recurrent Network a Self-Oscillator by Incremental Training S Phillips Department of Computer Science University of Queensland Neural Nets in Free Text Information Filtering J C Scholtes Department of Computational Linguistics University of Amsterdam Neural Network Learning with Opaque Mappings C J Thornton School of Cognitive & Comp Sci University of Sussex Neurocognitive Pattern Transmission; by Identity Mapping Networks J Tizard & C R Clark School of Social Sciences Flinders University of South Australia The Connectionist Sequential Machine: A General Model of Sequential Networks C Touzet & N Giambiasi LERI, EERIE, France Efficient Computation of Gabor Transform using A Neural Network H Wang & H Yan Department of Electrical Engineering University of Sydney Piecewise Linear Feedforward Neural Networks R C Williamson Department of Systems Engineering Australian National University P Bartlett Department of Electrical Engineering University of Queensland Functional Link Net and Simulated Annealing Approach for the Economic Dispatch of Electric Power K P Wong & C C Fung Department of Electrical Engineering University of Western Australia Non-linear Prediction for Resolution Enhancement of Band-limited Signals H Yan Department of Electrical Engineering University of Sydney Dynamics of Neural Networks H Yang Department of Computer Science La Trobe University A Probabilistic Neural Network Edge Detector for 2 Dimensional Gray Scale Images A Zaknich & Y Attikiouzel Department of Electrical Engineering University of Western Australia A Hybrid Approach to Isolated-Digit Recognition D Zhang & J B Millar Computer Sciences Lab, RSPSE Australian National University ---------------------------------END--------------------------------------- From bhaskar at theory.cs.psu.edu Tue Dec 17 09:46:14 1991 From: bhaskar at theory.cs.psu.edu (Bhaskar DasGupta) Date: Tue, 17 Dec 1991 09:46:14 -0500 Subject: recurrent nets. Message-ID: <9112171446.AA00298@omega.theory.cs.psu.edu> The following will appear as a concise paper in IEEE SouthEastcon 1992. Learning Capabalities of Recurrent Networks. Bhaskar DasGupta Computer Science Department Penn State. Brief summary: Recurrent Neural Networks are models of computation in which the underlying graph is directed ( possibly cyclic ), and each processor changes state according to some function computed according to its weighted summed inputs, either deterministically or probabilistically. Under arbitrary probabilistic update rules, such models can be as powerful as Probabilistic Turing Machines. For probabilistic models we can define the error probability as the maximum probability of reaching an incorrect output configuration. 
It is observed that: if the error probability is bounded, then such a network can be simulated by a deterministic finite automaton (with exponentially many states). For deterministic recurrent nets where each processor implements a threshold function: such a net may accept all P-complete language problems. However, restricting the weight-threshold relationship may result in accepting a weaker class, the NC class (problems which can be solved in poly-log time with polynomially many processors). The results are straightforward to derive, so I did not put the paper in the neuroprose archive. Thanks. Bhaskar From tesauro at watson.ibm.com Tue Dec 17 16:11:23 1991 From: tesauro at watson.ibm.com (Gerald Tesauro) Date: Tue, 17 Dec 91 16:11:23 EST Subject: TR available Message-ID: The following technical report is now available. (This is a long version of the paper to appear in the next NIPS proceedings.) To obtain a copy, send a message to "tesauro at watson.ibm.com" and be sure to include your PHYSICAL mail address. Practical Issues in Temporal Difference Learning Gerald Tesauro IBM Thomas J. Watson Research Center PO Box 704, Yorktown Heights, NY 10598 USA Abstract: This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD($\lambda$) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD($\lambda$) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex nontrivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating. From ingber at umiacs.UMD.EDU Wed Dec 18 09:39:02 1991 From: ingber at umiacs.UMD.EDU (Lester Ingber) Date: Wed, 18 Dec 1991 09:39:02 EST Subject: Generic mesoscopic neural networks ... neocortical interactions Message-ID: <9112181439.AA08115@dweezil.umiacs.UMD.EDU> *** Please do not forward to any other lists *** Generic mesoscopic neural networks based on statistical mechanics of neocortical interactions Lester Ingber A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions, demonstrating its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. This methodology also defines an algorithm to construct a mesoscopic neural network (MNN), based on realistic neocortical processes and parameters, to record patterns of brain activity and to compute the evolution of this system.
Furthermore, this new algorithm is quite generic, and can be used to similarly process information in other systems, especially, but not limited to, those amenable to modeling by mathematical physics techniques alternatively described by path-integral Lagrangians, Fokker-Planck equations, or Langevin rate equations. This methodology is made possible and practical by a confluence of techniques drawn from SMNI itself, modern methods of functional stochastic calculus defining nonlinear Lagrangians, Very Fast Simulated Re-Annealing (VFSR), and parallel-processing computation. I have placed the above preprint in the Neuroprose archive as ingber.mnn.ps.Z. To obtain this paper: local% ftp archive.cis.ohio-state.edu [local% ftp 128.146.8.52] Name (archive.cis.ohio-state.edu:yourloginname): anonymous Password (archive.cis.ohio-state.edu:anonymous): yourloginname ftp> cd pub/neuroprose ftp> binary ftp> get ingber.mnn.ps.Z ftp> quit local% uncompress ingber.mnn.ps.Z local% lpr [-P..] ingber.mnn.ps This will print out 8 pages on your PostScript laserprinter. If you do not have access to ftp, then send me an email request, and I will email you a PostScript-compressed-uuencoded ascii file with instructions on how to produce laserprinted copies, just requiring the additional first step of 'uudecode file'. Sorry, but I cannot take on the task of mailing out hardcopies of this paper. ------------------------------------------ | Prof. Lester Ingber | | ______________________ | | Science Transfer Corporation | | P.O. Box 857 703-759-2769 | | McLean, VA 22101 ingber at umiacs.umd.edu | ------------------------------------------ From Paul_Gleichauf at B.GP.CS.CMU.EDU Wed Dec 18 13:25:12 1991 From: Paul_Gleichauf at B.GP.CS.CMU.EDU (Paul_Gleichauf@B.GP.CS.CMU.EDU) Date: Wed, 18 Dec 91 13:25:12 EST Subject: Generic mesoscopic neural networks ... neocortical interactions In-Reply-To: Your message of "Wed, 18 Dec 91 09:39:02 EST." <9112181439.AA08115@dweezil.umiacs.UMD.EDU> Message-ID: <21254.693080712@B.GP.CS.CMU.EDU> Lester, I really think that you need more expertise with the Elan than I have. If I were buying Canon lenses I would save up a lot of pennies and concentrate on the L lens line. The $9500 400mm f2.8 USM looks particularly nice at the moment, that's about a million pennies. More seriously it sounds like you do not need wide angle capability. I would buy the best flash (430EZ I believe), the 55 (or is it 50?) mm macro, and the 80-200 f2.8 zoom. The macro can double for portraiture when necessary, you can bounce light with this flash and add an additional Canon macro-ringlight later. The zoom is suitable for action, but can be used for eliminating background in portraiture, or chasing kids all over the yard. It is also a very good race action lens at medium distances. Paul From Paul_Gleichauf at B.GP.CS.CMU.EDU Wed Dec 18 13:38:17 1991 From: Paul_Gleichauf at B.GP.CS.CMU.EDU (Paul_Gleichauf@B.GP.CS.CMU.EDU) Date: Wed, 18 Dec 91 13:38:17 EST Subject: Please disregard last message with my header, reply bug. Message-ID: <21528.693081497@B.GP.CS.CMU.EDU> My apologies to all. Paul From sontag at control.rutgers.edu Wed Dec 18 16:19:17 1991 From: sontag at control.rutgers.edu (sontag@control.rutgers.edu) Date: Wed, 18 Dec 91 16:19:17 EST Subject: Report available --computability with nn's Message-ID: <9112182119.AA03640@control.rutgers.edu> (Revised) Tech Report available from neuroprose: ON THE COMPUTATIONAL POWER OF NEURAL NETS Hava T. Siegelmann, Department of Computer Science Eduardo D.
Sontag, Department of Mathematics Rutgers University, New Brunswick, NJ 08903 This paper shows the Turing universality of first-order, finite neural nets. It updates the report placed there last Spring* with new results that include the simulation in LINEAR TIME of BINARY-tape machines (as opposed to the unary alphabets used in the previous version). The estimate of the number of neurons needed for universality is now lowered to 1,000 (from 100,000). *A summary of the older report appeared in: H. Siegelmann and E. Sontag, "Turing computability with neural nets," Applied Math. Letters 4 (1991): 77-80. ================ To obtain copies of the postscript file, please use Jordan Pollack's service: Example: unix> ftp archive.cis.ohio-state.edu (or ftp 128.146.8.52) Name (archive.cis.ohio-state.edu): anonymous Password (archive.cis.ohio-state.edu:anonymous): ftp> cd pub/neuroprose ftp> binary ftp> get siegelman.turing.ps.Z ftp> quit unix> uncompress siegelman.turing.ps.Z Now print "siegelman.turing.ps" as you would any other (postscript) file.

From yirgan at dendrite.cs.colorado.edu Wed Dec 18 17:42:08 1991 From: yirgan at dendrite.cs.colorado.edu (Juergen Schmidhuber) Date: Wed, 18 Dec 1991 15:42:08 -0700 Subject: New TR on unsupervised learning Message-ID: <199112182242.AA04036@thalamus.cs.Colorado.EDU> LEARNING FACTORIAL CODES BY PREDICTABILITY MINIMIZATION Jürgen Schmidhuber Department of Computer Science University of Colorado (Compact version of Technical Report CU-CS-565-91) ABSTRACT I present a novel general principle for unsupervised learning of distributed non-redundant internal representations of input patterns or input sequences. With a given set of representational units, each unit tries to react to the environment such that it minimizes its predictability by an adaptive predictor that sees all the other units. This encourages each unit to filter `abstract concepts' out of the environmental input such that these concepts are statistically independent of those upon which the other units focus. I discuss various simple yet potentially powerful implementations of the principle which aim at finding binary factorial codes (Barlow, 1989), i.e. codes where the probability of the occurrence of a particular input is simply the product of the probabilities of the corresponding code symbols. Unlike previous methods the novel principle has a potential for removing not only linear but also non-linear output redundancy. Methods for finding factorial codes automatically embed Occam's razor for finding codes using a minimal number of units. Illustrative experiments show that algorithms based on the principle of predictability minimization are practically feasible. The final part of this paper describes an entirely local algorithm that has a potential for learning unique representations of extended sequences. --------------------------------------------------------------------- To obtain a copy, do: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: neuron ftp> binary ftp> cd pub/neuroprose ftp> get schmidhuber.factorial.ps.Z ftp> bye unix> uncompress schmidhuber.factorial.ps.Z unix> lpr schmidhuber.factorial.ps --------------------------------------------------------------------- There is no hardcopy mailing list. I will read my mail only occasionally during the next three weeks or so.
Jurgen From karit at spine.hut.fi Thu Dec 19 07:53:28 1991 From: karit at spine.hut.fi (Kari Torkkola) Date: Thu, 19 Dec 91 14:53:28 +0200 Subject: Public domain LVQ-programs released Message-ID: <9112191253.AA10882@spine.hut.fi.hut.fi> ************************************************************************ * * * LVQ_PAK * * * * The * * * * Learning Vector Quantization * * * * Program Package * * * * Version 1.0 (December, 1991) * * * * Prepared by the * * LVQ Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1991 * * * ************************************************************************ Public-domain programs for Learning Vector Quantization (LVQ) algorithms are available via anonymous FTP on the Internet. "What is LVQ?", you may ask --- See the following reference, then: Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464-1480, 1990. In short, LVQ is a group of methods applicable to statistical pattern recognition, in which the classes are described by a relatively small number of codebook vectors, properly placed within each class zone such that the decision borders are approximated by the nearest-neighbor rule. Unlike in normal k-nearest-neighbor (k-nn) classification, the original samples are not used as codebook vectors, but they tune the latter. LVQ is concerned with the optimal placement of these codebook vectors into class zones. This package contains all the programs necessary for the correct application of certain LVQ algorithms in an arbitrary statistical classification or pattern recognition task. To this package two particular options for the algorithms, the LVQ1 and the LVQ2.1, have been selected. This is the very first release of the package, and updates will be available as soon as bugs are found and fixed. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the LVQ programs we have tried to use as simple code as possible. Therefore the programs are supposed to compile in various machines without any specific modifications made on the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX-environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows: 1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it. 2. Log in as user "anonymous" with your own e-mail address as password. 3. Change remote directory to "/pub/lvq_pak". 4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files. 
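For readers who want to see the LVQ1 rule described above in concrete form, here is a minimal sketch in Python. It is not code from LVQ_PAK; the function name, the fixed learning rate, and the toy data are illustrative assumptions only.

---------------------------------------------------------------------
import numpy as np

# Minimal sketch of one LVQ1 update step (illustrative, not LVQ_PAK code).
def lvq1_step(codebook, labels, x, y, alpha=0.05):
    # Move the nearest codebook vector toward x if its class matches y,
    # and away from x otherwise (nearest-neighbour rule on codebook vectors).
    dists = np.sum((codebook - x) ** 2, axis=1)   # squared distances to x
    c = int(np.argmin(dists))                     # winning codebook vector
    sign = 1.0 if labels[c] == y else -1.0
    codebook[c] += sign * alpha * (x - codebook[c])
    return c

# Toy usage: two classes in 2-D, four codebook vectors.
rng = np.random.default_rng(1)
codebook = rng.normal(size=(4, 2))
labels = np.array([0, 0, 1, 1])
for _ in range(100):
    y = int(rng.integers(0, 2))
    x = rng.normal(loc=3.0 * y, size=2)           # class 1 shifted to the right
    lvq1_step(codebook, labels, x, y)
---------------------------------------------------------------------

Roughly speaking, LVQ2.1 refines this by updating a pair of codebook vectors (the nearest one of the correct class and the nearest one of a wrong class) when a sample falls in a window around the decision border; see the package documentation for the exact rules implemented.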
The lvq_pak program package includes the following files:

- Documentation:
  README               short description of the package and installation instructions
  document.ps          documentation in (c) PostScript format
  document.ps.Z        same as above but compressed
  document.txt         documentation in ASCII format

- Source file archives (which contain the documentation, too):
  lvq_p1r0.exe         Self-extracting MS-DOS archive file
  lvq_pak-1.0.tar      UNIX tape archive file
  lvq_pak-1.0.tar.Z    same as above but compressed

An example of FTP access is given below:
unix> ftp cochlea.hut.fi (or 130.233.168.48)
Name: anonymous
Password:
ftp> cd /pub/lvq_pak
ftp> binary
ftp> get lvq_pak-1.0.tar.Z
ftp> quit
unix> uncompress lvq_pak-1.0.tar.Z
unix> tar xvfo lvq_pak-1.0.tar

See file README for further installation instructions. All comments concerning this package should be addressed to lvq at cochlea.hut.fi. ************************************************************************

From mike at psych.ualberta.ca Thu Dec 19 22:27:42 1991 From: mike at psych.ualberta.ca (Mike R. W. Dawson) Date: Thu, 19 Dec 1991 20:27:42 -0700 Subject: Connectionism & Motion Message-ID: The following paper has recently appeared in Psychological Review, and describes how a variant of Anderson's "brainstate-in-a-box" algorithm can be used to solve a particular information processing problem faced when apparent motion is perceived. If you're interested in a reprint, please contact me at the address below. ======================================================================= Dawson, M.R.W. (1991). The how and why of what went where in apparent motion: Modeling solutions to the motion correspondence problem. Psychological Review, 98(4), 569-603. A model that is capable of maintaining the identities of individuated elements as they move is described. It solves a particular problem of underdetermination, the motion correspondence problem, by simultaneously applying three constraints: the nearest neighbour principle, the relative velocity principle, and the element integrity principle. The model generates the same correspondence solutions as does the human visual system for a variety of displays, and many of its properties are consistent with what is known about the physiological mechanisms underlying human motion perception. The model can also be viewed as a proposal of how the identities of attentional tags are maintained by visual cognition, and thus it can be differentiated from a system that serves merely to detect movement. ============================================================================== -- Michael R. W. Dawson email: mike at psych.ualberta.ca Biological Computation Project Department of Psychology University of Alberta Edmonton, Alberta Tel: +1 403 492 5175 T6G 2E9, Canada Fax: +1 403 492 1768

From btan at bluering.cowan.edu.au Fri Dec 20 01:58:33 1991 From: btan at bluering.cowan.edu.au (btan@bluering.cowan.edu.au) Date: Fri, 20 Dec 91 14:58:33 +0800 Subject: Research topic needed Message-ID: <9112200658.AA22681@bluering.cowan.edu.au> Dear Neural Gurus, I am currently pursuing my doctorate degree in the area of Neural Networks; could any of the Neural Gurus help me to identify a research topic, please? Kindly email me directly at btan at cowan.edu.au and not connectionists. Thanks in advance. Merry Christmas and a Happy New Year. Yours faithfully, Boon Tan

From lux at ho.isl.titech.ac.jp Fri Dec 20 23:38:01 1991 From: lux at ho.isl.titech.ac.jp (Xuenong Lu) Date: Fri, 20 Dec 91 23:38:01 JST Subject: RESUME Message-ID: <9112201438.AA06945@beat> Hello!
This is Xuenong LU in Tokyo, Japan. I am now a PhD student at the Tokyo Institute of Technology, majoring in Information Processing and Electrical Engineering. As I will soon graduate and get my PhD in March, 1992, I would like to ask for your help to find a proper job for me. I would be very happy if I can find a job in the United States. Here I would like to give you a short inquiry about job openings.
-----------------------------------------------------------------------------
1. Name: Xuenong Lu
2. Address: Greenhouse 205#, Ibukino 52-1, Midori-ku, Yokohama, 227 JAPAN, Tel. +81-45-982-6367, Email: lux at ho.isl.titech.ac.jp
3. Status: PhD in E&E (available from March 1992)
4. Salary required: Over $20,000 yearly
5. Synopsis of Resume: Seeking an R&D or postdoctoral position in optics, neural networks, information processing or computer science. Strong skills in hardware and software related to UNIX, IBM-PC, NEC-PC and X-window, SUN-window and other window systems. Eight years of experience in programming with assembler, C, BASIC and FORTRAN. Good ability in speaking and writing in English and Japanese.
-------------------------------------------------------------------------------
Thank you very much for your great cooperation! Merry Christmas and a Happy New Year! Xuenong Lu from Tokyo, Japan, Dec 20, 1991

APPENDIX:
+---------------+
| R E S U M E   |
+---------------+
1. NAME: Xuenong Lu
2. SEX: Male
3. BIRTHDAY: January 29th, 1965
4. ORGANIZATION & STATUS: Ph.D student of Imaging Science and Engineering Laboratory, Tokyo Institute of Technology
5. CURRENT ADDRESS: Tokyo Institute of Technology (Honda Group), Imaging Science and Engineering Laboratory, Nagatsuta 4259, Midori-ku, Yokohama 227, Japan, Tel: +81-45-922-1111 ex. 2083, Fax: +81-45-921-1492, E-mail: lux at ho.isl.titech.ac.jp
6. PERMANENT ADDRESS: Greenhouse Room 205, Ibukino 52-1, Midori-ku, Yokohama 227, Japan, TEL. +81-45-982-6367
7. EDUCATION BACKGROUND
+-------+---+-----------------------------------+-------------+---------------+
| Date  |yrs| University and Location           | Major       | Degree        |
+-------+---+-----------------------------------+-------------+---------------+
|1982.9-| 4 | Zhejiang University, Hangzhou,    | Optical     | BS of         |
|1986.9 |   | Zhejiang, China                   | Engineering | Engineering   |
+-------+---+-----------------------------------+-------------+---------------+
|1986.9-| 2 | Tsinghua University,              | Information | MS of         |
|1988.9 |   | Beijing, China                    | Processing  | E & E         |
+-------+---+-----------------------------------+-------------+---------------+
|1989.4-| 3 | Tokyo Institute of Technology     | Electrical  | PhD of        |
|1992.3 |   | Tokyo, Japan                      | Engineering | E & E         |
+-------+---+-----------------------------------+-------------+---------------+
8. LANGUAGE BACKGROUND
(1) English: Have a very good command of it, not only in reading and listening but also in speaking and writing. (8 yrs learning)
(2) Japanese: Have a very good command of it, can read, speak, listen and write in Japanese with no problem. (6 yrs learning)
(3) Chinese: Native language
9.
SCHOLARSHIPS & AWARDS (1) Scholarship of Japanese Government ( The Ministry of Education, Science and Culture) from April 1989 to March 1992 in Tokyo Institute of Technology ( Tokyo, Japan) (2) Scholarship for undergraduate students from Chinese Government (Ministry of Education) from September 1982 to August 1986 in Zhejiang University (Hangzhou China) (3) Regarded as one of the Best Ten Outstanding Graduate Students in Zhejiang University, China (among 2,000 graduate students, August 1986, Hangzhou China) 10. FIELDS OF INTEREST (1) Information Processing: New methods to process information by either optical method or computer simulation: as image processing, optical computing systems, optical neural networks, pattern recognition, optical disk, etc (2) Neural Networks: the structure of neural networks, the hardware implementation of neural networks, artificial intelligence, the study of brain and the application of neural networks, the design of neural networks, new computer architecture by neural networks (3) Computer science: digital image processing, medical image processing, computer graphics, software development, database applications, etc 11. PUBLICATIONS & PROCEEDINGS (1) X. Lu, M. Yamaguchi, N. Ohyama, T. Honda, M. Oita, S. Tai, and K. Kyuma, " The optical implementation of the intelligent associative6 memory system", Optics Communications, ( to be published). (2) X. Lu, T. Honda, N. Ohyama, M. Wu, and G. Jin, "Optical configurations for solving equations using nonlinear etalons", Japanese Journal of Applied Physics, vol.29, No.10, pp.L1836-L1839(October 1990) (3) X. Lu, Y. Wang, M. Wu, and G. Jin, " The fabrication of a 25X25 multiple beam splitter", Optics Communications, Vol.72, No.3,4, pp.157-162 (July 1989) (4) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, J. Ohta, and K. Kyuma, " Re-association with logical constraints in Hopfield-type associative memory", (submitted to Optics Communications) (5) H. Asuma, X. Lu, T. Honda, and N. Ohyama, "The phase modulation features of liquid crystal panel", Japanese Journal of Optics, vol.20, No.2, pp.98-102 (1991) - Japanese (6) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, J. Ohta, and K. Kyuma, "Development of an intelligent optical associative memory system", Technical Digest of OSA Annual Meeting 1991, November 3-8, 1991, San Jose, USA, MII5, pp.39 (7) X. Lu, T. Honda, N. Ohyama, M. Wu, and G. Jin, "Digital optical computing model for solving equations", Proc. of 1990 International Topical Meeting on Optical Computing, April 8-12, 1990, Kobe, Japan, 10D1, pp.197-1984 (8) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, J. Ohta, and K. Kyuma, "The development of intelligent associative memory system (I)", Proceedings of the 22nd Joint Conference on Imaging Technology, 10-2. (9) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, S. Tai, and K. Kyuma, "The performance of intelligent associative memory", Extended Abstracts (The 38th Spring Meeting, 1991), The Japan Society of Applied Physics and Related Societies, pp.864, 31p-A-10 (10) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, J. Ohta, and K. Kyuma, " A proposal of intelligent associative memory system", Extended Abstracts (The 51st Autumn Meeting, 1990), The Japan Society of Applied Physics and Related Societies, pp.807, 28a-H-2 (11) H. Asuma, X. Lu, T. Honda, N. Okumura, T. Sonehara, and N. 
Ohyama, "The phase modulation feature of liquid crystal", Extended Abstracts (The 37 Spring Meeting, 1990), The Japan Society of Applied Physics and Related Societies, pp.770, 29p-D-10 (12) M. Yamaguchi, X. Lu, N. Ohyama, T. Honda, M. Oita, J. Ohta, and K. Kyuma, " The creativity of neural network", Extended Abstracts (The 52 Autumn Meeting, 1991), The Japan Society of Applied Physics and Related Societies, pp.818, 9a-ZH-1 (13) M. Oita, J. Ohta, and K. Kyuma, X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, "A proposal of intelligent associative memory (2)---the optical neural network", Extended Abstracts (The 38th Spring Meeting, 1991),The Japan Society of Applied Physics and Related Societies, pp.863,31p-A-9 (14) M. Oita, J. Ohta, and K. Kyuma, X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, "A proposal of intelligent associative memory (3) --- quantized training rule ", Extended Abstracts (The 52 Autumn Meeting, 1991), The Japan Society of Applied Physics and Related Societies, pp.820, 9a-ZH-7 From D4PBJSS0%EB0UB011.BITNET at BITNET.CC.CMU.EDU Fri Dec 20 18:13:03 1991 From: D4PBJSS0%EB0UB011.BITNET at BITNET.CC.CMU.EDU (Josep M Sopena) Date: Fri, 20 Dec 91 18:13:03 HOE Subject: Parsing embedded sentences ... Message-ID: <01GEC46ZDNXC9PM18C@BITNET.CC.CMU.EDU> The following paper is now available. To obtain a copy send a message to "d4pbjss0 at e0ub011.bitnet". ESRP: A DISTRIBUTED CONNECTIONIST PARSER THAT USES EMBEDDED SEQUENCES TO REPRESENT STRUCTURE Josep M Sopena Departament de Psicologia Basica Universitat de Barcelona In this paper we present a neural network that is able to compute a certain type of structure, that among other things allows it to adequately assign thematic roles, and find the antecedents of the traces, pro, PRO, anaphoras, pronouns, etc. for an extensive variety of syntactic structures. Up until now, the type of sentences that the network has been able to parse include: 1. 'That' sentences with several levels of embedding. John says that Mary thought that Peter was ill. 2.- Passive sentences. 3.- Relative sentences with several levels of embedding (center embedded). John loved the girl that the carpenter who the builder hated was seeing. The man that bought the car that Peter wanted was crazy. The man the woman the boy hates loves is running. 4.-Syntactic ambiguity in the attachment of PP's John saw a woman with a handbag with binoculars. 5.- Combinations of these four types of sentences: John bought the car that Peter thought the woman with a handbag wanted. The input consists of the sentence presented word by word. The patterns in the output represent the structure of the sentence. The structure is not represented by a static pattern but by a temporal course of patterns. This evolution of the output is based on different types of psychological evidence, and is as follows: the output is a sequence of simple semantic predicates (although it could be thought of in a more syntactical way). An element of the output sequence consists only of a single predicate, which always has to be complete. Since there are often omitted elements within the clauses (eg. Traces, PRO, pro etc.) the network retrieves these elements in order to complete the current predicate. These two mechanisms, segmentation into simple predicates and retreival of previously processed elements, are those which allow structure to be computed. 
In this way the structure is not conceived solely as a linear sequence of simple predicates, because using these mechanisms it is possible to form embedded sequences (embedded structures). The paper also includes empirical evidence that supports the model as a plausible psychological model. The NN is formed by two parallel modules that share all of the output and part of the input. The first module is a standard Elman network that maps the elements in the input with their predicate representation in the output and assigns the corresponding semantic roles. The second module is a modified Elman network with two hidden layers. The units of the first hidden layer (which is the copied layer) have a linear activation function. This type of network has a much greater short term memory capacity than a standard Elman network. It stores the sequence of predicates, retrieves the elements of the current predicate omitted in the input (traces, PRO, etc.) and the referents of pronouns and anaphors. When a pronoun or an anaphor appears in the input, the corresponding antecedent in the sentence, which has been retrieved from this second module, is placed in the output. This module also allows the network to build embedded sequences by retrieving former elements of the sequence. The two modules were simultaneously trained. There were no manipulations other than the changes of inputs and targets, as in the standard backpropagation algorithm. The network was trained with 3000 sentences built from a starting vocabulary of 1000 words. The number of sentences that it is possible to build starting from this vocabulary is $10^{15}$. The generalization was completely successful for a test set of 800 sentences representing the variety of syntactic patterns of the training set. The model bears some relationship with the idea of representing structure not only in space but in time as well (Hinton 1989) and with the RAAM networks of Pollack (1989). The shortcomings of this type of network are also discussed.

From saarinen at csrd.uiuc.edu Fri Dec 20 12:55:44 1991 From: saarinen at csrd.uiuc.edu (Sirpa Saarinen) Date: Fri, 20 Dec 91 11:55:44 CST Subject: Ill-conditioning in NNs (Tech. Rep.) Message-ID: <9112201755.AA06158@sp1.csrd.uiuc.edu> Technical report available: CSRD Report no. 1089 Ill-Conditioning in Neural Network Training Problems S. Saarinen, R. Bramley and G. Cybenko Center for Supercomputing Research and Development, University of Illinois, Urbana, IL, USA 61801 Abstract The training problem for feedforward neural networks is a nonlinear parameter estimation problem that can be solved by a variety of optimization techniques. Much of the literature on neural networks has focused on variants of gradient descent. The training of neural networks using such techniques is known to be a slow process, with more sophisticated techniques not always performing significantly better. In this paper, we show that feedforward neural networks can have ill-conditioned Hessians and that this ill-conditioning can be quite common. The analysis and experimental results in this paper lead to the conclusion that many network training problems are ill-conditioned and may not be solved more efficiently by higher order optimization methods. While our analyses are for completely connected layered networks, they extend to networks with sparse connectivity as well.
Our results suggest that neural networks can have considerable redundancy in parameterizing the function space in a neighborhood of a local minimum, independently of whether or not the solution has a small residual. If you wish to have this report, please write to nichols at csrd.uiuc.edu and ask for report 1089. Merry Christmas, Sirpa Saarinen saarinen at csrd.uiuc.edu

From back at s1.elec.uq.oz.au Mon Dec 23 14:37:37 1991 From: back at s1.elec.uq.oz.au (Andrew Back) Date: Mon, 23 Dec 91 14:37:37 EST Subject: Mackey-Glass Message-ID: <9112230337.AA07687@c14.elec.uq.oz.au> Can someone let me know if the Mackey-Glass chaotic time series has been modelled by a fully recurrent network (such as Williams and Zipser's structure)? I would like any references or performance results. Any help much appreciated. Please e-mail replies directly to: back at s1.elec.uq.oz.au Thanks, Andrew Back ---- Department of Electrical Engineering University of Queensland St. Lucia. 4072. Australia

From M160%eurokom.ie at BITNET.CC.CMU.EDU Mon Dec 23 11:17:00 1991 From: M160%eurokom.ie at BITNET.CC.CMU.EDU (Ronan Reilly ERC) Date: Mon, 23 Dec 1991 11:17 CET Subject: A connectionist parsing technique Message-ID: <9112231117.169315@eurokom.ie> Below is the abstract of a paper that is due to appear in the journal Network early next year. A compressed postscript version (reilly.parser.ps.Z) has been placed in Jordan Pollack's Neuroprose archive at Ohio State and can be retrieved in the usual way. Requests for hardcopy should be sent to: ronan_reilly at eurokom.ie Season's greetings, Ronan ========================================== A Connectionist Technique for On-Line Parsing Ronan Reilly Educational Research Centre St Patrick's College, Dublin 9 A technique is described that permits the on-line construction and dynamic modification of parse trees during the processing of sentence-like input. The approach is a combination of a simple recurrent network (SRN) and a recursive auto-associative memory (RAAM). The parsing technique involves teaching the SRN to build RAAM representations as it processes its input item-by-item. The approach is a potential component of a larger connectionist natural language processing system, and could also be used as a tool in the cognitive modelling of language understanding. Unfortunately, the modified SRN demonstrates a limited capacity for generalisation. ==========================================

From gordon at AIC.NRL.Navy.Mil Mon Dec 23 12:19:21 1991 From: gordon at AIC.NRL.Navy.Mil (gordon@AIC.NRL.Navy.Mil) Date: Mon, 23 Dec 91 12:19:21 EST Subject: workshop announcement Message-ID: <9112231719.AA19520@sun25.aic.nrl.navy.mil> CALL FOR PAPERS Informal Workshop on "Biases in Inductive Learning" To be held after the 1992 Machine Learning Conference Saturday, July 4, 1992 Aberdeen, Scotland All aspects of an inductive learning system can bias the learning process. Researchers to date have studied various biases in inductive learning such as algorithms, representations, background knowledge, and instance orders. The focus of this workshop is not to examine these biases in isolation. Instead, this workshop will examine how these biases influence each other and how they influence learning performance. For example, how can active selection of instances in concept learning influence PAC convergence? How might a domain theory affect an inductive learning algorithm? How does the choice of representational bias in a learner influence its algorithmic bias and vice versa?
The purpose of this workshop is to draw researchers from diverse areas to discuss the issue of biases in inductive learning. The workshop topic is a unifying theme for researchers working in the areas of reformulation, constructive induction, inverse resolution, PAC learning, EBL-SBL learning, and other areas. This workshop does not encourage papers describing system comparisons. Instead, the workshop encourages papers on the following topics:
- Empirical and analytical studies comparing different biases in inductive learning and their quantitative and qualitative influence on each other or on learning performance
- Studies of methods for dynamically adjusting biases, with a focus on the impact of these adjustments on other biases and on learning performance
- Analyses of why certain biases are more suitable for particular applications of inductive learning
- Issues that arise when integrating new biases into an existing inductive learning system
- Theory of inductive bias
Please send 4 hard copies of a paper (10-15 double-spaced pages, 12-point font) or (if you wish to attend, but not present a paper) a description of your current research to: Diana Gordon, Naval Research Laboratory, Code 5510, 4555 Overlook Ave. S.W., Washington, D.C. 20375-5000, USA. Email submissions to gordon at aic.nrl.navy.mil are also acceptable, but they must be in PostScript. FAX submissions will not be accepted. If you have any questions about the workshop, please send email to Diana Gordon at gordon at aic.nrl.navy.mil or call 202-767-2686.
Important Dates:
March 12 - Papers and research descriptions due
May 1 - Acceptance notification
June 1 - Final version of papers due
Program Committee:
Diana Gordon, Naval Research Laboratory
Dennis Kibler, University of California at Irvine
Larry Rendell, University of Illinois
Jude Shavlik, University of Wisconsin
William Spears, Naval Research Laboratory
Devika Subramanian, Cornell University
Paul Vitanyi, CWI and University of Amsterdam

From wray at ptolemy.arc.nasa.gov Mon Dec 23 13:57:28 1991 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Mon, 23 Dec 91 10:57:28 PST Subject: bayesian methods for back-propagation, update Message-ID: <9112231857.AA05676@ptolemy.arc.nasa.gov> To appear in the December 1991 issue of {\it Complex Systems}. An early draft appeared in the Neuroprose archive in July 1991, and was available as NASA Ames AI Research Branch TR FIA-91-22. The new version is considerably improved and contains new, updated and corrected material. A limited number of reprints are available by writing to Wray Buntine. (Andreas is currently on holidays!) Please only request one if your library doesn't get the Complex Systems journal. P.S. Distributing the first draft via connectionists allowed us to get all sorts of helpful feedback on the early version! ------------- Bayesian Back-Propagation
Wray L. Buntine
wray at ptolemy.arc.nasa.gov
RIACS & NASA Ames Research Center
Mail Stop 269-2
Moffett Field, CA 94035, USA

Andreas S. Weigend
andreas at psych.stanford.edu
Xerox Palo Alto Research Center
3333 Coyote Hill Rd.
Palo Alto, CA, 94304, USA

Connectionist feed-forward networks, trained with back-propagation, can be used both for non-linear regression and for (discrete one-of-$C$) classification.
This paper presents approximate Bayesian methods to statistical components of back-propagation: choosing a cost function and penalty term (interpreted as a form of prior probability), pruning insignificant weights, estimating the uncertainty of weights, predicting for new patterns (``out-of-sample''), estimating the uncertainty in the choice of this prediction (``error bars''), estimating the generalization error, comparing different network structures, and handling missing values in the training patterns. These methods extend some heuristic techniques suggested in the literature, and in most cases require a small additional factor in computation during back-propagation, or computation once back-propagation has finished. From Michael_Berthold at NL.CS.CMU.EDU Tue Dec 24 18:26:23 1991 From: Michael_Berthold at NL.CS.CMU.EDU (Michael_Berthold@NL.CS.CMU.EDU) Date: Tue, 24 Dec 91 18:26:23 EST Subject: TR available In-Reply-To: Your message of "Tue, 17 Dec 91 16:11:23 EST." Message-ID: <1338.693617183@NL.CS.CMU.EDU> I'm interested in your paper and would be glad if you can send a copy to me. Thanks, Michael Berthold Center for Machine Translation Carnegie Mellon University Smith Hall 106 5000 Forbes Ave. Pittsburgh, PA 15213 From Michael_Berthold at NL.CS.CMU.EDU Wed Dec 25 13:54:12 1991 From: Michael_Berthold at NL.CS.CMU.EDU (Michael_Berthold@NL.CS.CMU.EDU) Date: Wed, 25 Dec 91 13:54:12 EST Subject: Uoups, Sorry ! Message-ID: <3514.693687252@NL.CS.CMU.EDU> From gluck at pavlov.Rutgers.EDU Wed Dec 25 18:40:03 1991 From: gluck at pavlov.Rutgers.EDU (Mark Gluck) Date: Wed, 25 Dec 91 18:40:03 EST Subject: Graduate & Postdoctoral Study in Cognitive & Neural Bases of Learning at Rutgers Univ. Message-ID: <9112252340.AA01648@pavlov.rutgers.edu> -> Please Post or Distribute Graduate and Postdoctoral Training in the: COGNITIVE & NEURAL BASES OF LEARNING at the Center for Molecular & Behavioral Neuroscience Rutgers University; Newark, NJ Graduate and postdoctoral positions are available for those interested in joining our lab to pursue research and training in the cognitive and neural bases of learning and memory, with a special emphasis on computational neural-network theories of learning. Current research topics include: * Empirical and Computational Studies of Human Learning Experimental research involves studies of human learning and judgment -- especially classification learning -- motivated by a desire to evaluate adaptive network theories of human learning and better understand the relationship between animal and human learning. Theoretical (computational) work seeks to develop and extend adaptive network models of learning to more accurately reflect a wider range of animal and human learning behaviors. Applications of these behavioral models to analyses of the neural bases of animal and human learning are of particular interest. * Computational Models of the Neurobiology of Learning & Memory Understanding the neural bases of learning through computational models of neural circuits and systems, especially the cerebellar and hippocampal areas involved in classical Pavlovian conditioning of motor-reflex learning, is our primary goal. Related work seeks to understand hippocampal function in a wider range of animal and human learning behaviors. ______________________________________________________________________ Other Information: RESEARCH FACILITIES: A new center for graduate study and research in cognitive, behavioral, and molecular neuroscience. 
The program emphasizes interdisciplinary and integrative analyses of brain and behavior. Located in the new Aidekman Neuroscience Research Center, the C.M.B.N. has state-of-the-art communication facilities, computers, offices, and laboratories. LOCATION: Newark, New Jersey: 20 minutes from Manhattan but also close to rural New Jersey countryside. Other nearby universities and industry research labs with related research programs include: Rutgers (New Brunswick), NYU, Princeton, Columbia, Siemens, NEC, AT&T, Bellcore, and IBM. CURRENT FACULTY: Elizabeth Abercrombie, Gyorgi Buzsaki, Ian Creese, Mark Gluck, Howard Poizner, Margaret Shiffrar, Ralph Siegel, Paula Tallal, and James Tepper. Five additional faculty will be hired. SUPPORT: The Center has 10 state-funded postdoctoral positions with additional positions funded from grants and fellowships. The graduate program is research-oriented and leads to a Ph.D. in Behavioral and Neural Sciences; all students are fully funded. SELECTION CRITERIA & PREREQUISITES: Candidates with any (or all) of the following skills are encouraged to apply: (1) familiarity with neural-network theories and algorithms, (2) strong computational and analytic skills, and (3) experience with experimental methods in cognitive psychology. Evidence of prior research ability and strong writing skills are critical. ______________________________________________________________________ For more information on graduate or postdoctoral training in learning and memory at CMBN/Rutgers, please send a letter with a statement of your research and career interests, and a resume (C.V.), to: Dr. Mark A. Gluck Phone: (201) 648-1080 (x3221) Center for Molecular & Behavioral Neuroscience Rutgers University 197 University Ave. Newark, New Jersey 07102 Email: gluck at pavlov.rutgers.edu From Prahlad.Gupta at K.GP.CS.CMU.EDU Thu Dec 26 14:43:41 1991 From: Prahlad.Gupta at K.GP.CS.CMU.EDU (Prahlad.Gupta@K.GP.CS.CMU.EDU) Date: Thu, 26 Dec 91 14:43:41 EST Subject: Paper available in Neuroprose Message-ID: The following paper has been placed in the Neuroprose archive, as the file gupta.stress.ps.Z Comments are invited. Retrieval instructions follow the abstract below. Thanks to Jordan Pollack for making this facility available. -- Prahlad ------------------------------------------------------------------------- =========================================================== CONNECTIONIST MODELS & LINGUISTIC THEORY: INVESTIGATIONS OF STRESS SYSTEMS IN LANGUAGE PRAHLAD GUPTA DAVID S. TOURETZKY -------------------------- -------------------------- Dept. of Psychology School of Computer Science Carnegie Mellon University Carnegie Mellon University Pittsburgh, PA 15213 Pittsburgh, PA 15213 prahlad at cs.cmu.edu dst at cs.cmu.edu ============================================================ Abstract -------- This work describes the use of connectionist techniques to model the learning and assignment of linguistic stress. Our aim was to explore the ability of a simple perceptron to model the assignment of stress in individual words, and to consider, in light of this study, the relationship between the connectionist and theoretical linguistics approaches to investigating language. 
We first point out some interesting parallels between aspects of the model and the constructs and predictions of Metrical Phonology, the linguistic theory of stress: (1) the distribution of learning times obtained from perceptron experiments corresponds with theoretical predictions of "markedness," and (2) the weight patterns developed by perceptron learning bear a suggestive *structural* relationship to features of the linguistic analysis, particularly with regard to "iteration" and "metrical feet". We use the connectionist learning data to develop an analysis of linguistic stress based on perceptron-learnability. We develop a novel characterization of stress systems in terms of six parameters. These provide both a partial description of the stress pattern itself and a prediction of its learnability, without invoking abstract theoretical constructs such as "metrical feet." Our parameters encode linguistically salient concepts as well as concepts that have computational significance. These two sets of results suggest that simple connectionist learning techniques have the potential to complement, and provide computational validation for, abstract theoretical investigations of linguistic domains. We then examine why such methodologies should be of interest for linguistic theorizing. Our analysis began at a high level by observing inherent characteristics of various stress systems, much as theoretical linguistics does. However, our explanations changed substantially when we included a detailed account of the model's processing mechanisms. Our higher-level, theoretical account of stress was revealed as only an *approximation* to the lower-level computational account. Without the ability to open up the black boxes of the human processor, linguistic analyses are arguably analogous to our higher-level descriptions. This highlights the need for *computational grounding* of theory-building. In addition, we suggest that there are methodological problems underlying parameter-based approaches to learnability. These problems make it all the more important to seek sources of converging evidence such as is provided by computational models. ------------------------------------------------------------------------- To retrieve the paper by anonymous ftp: unix> ftp archive.cis.ohio-state.edu # (128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get gupta.stress.ps.Z ftp> quit unix> uncompress gupta.stress.ps.Z unix> lpr -P gupta.stress.ps ------------------------------------------------------------------------- From mauduit at ece.UCSD.EDU Thu Dec 26 17:47:13 1991 From: mauduit at ece.UCSD.EDU (Nicolas Mauduit) Date: Thu, 26 Dec 91 14:47:13 PST Subject: paper mauduit.lneuro.ps.Z available at archive.cis.ohio-state.edu Message-ID: <9112262247.AA09544@celece> The preprint of the following paper, to appear in IEEE Neural Networks, march 92 special issue on hardware, is available by ftp from the neuroprose archive at archive.cis.ohio-state.edu (file mauduit.lneuro.ps.Z): Lneuro 1.0: a piece of hardware LEGO for building neural network systems (to appear in IEEE Neural Networks, march 92 special issue on hardware) by Nicolas MAUDUIT UCSD, dept. ECE, EBU1 La Jolla, CA 92093-0407 USA Marc DURANTON LEP, div. 21 Jean GOBERT B.P. 15, 22, avenue Descartes Jacques-Ariel SIRAT 94453 Limeil Brevannes France Abstract: The state of our experiments on neural networks simulations on a parallel architecture is presented here. 
A digital architecture was selected, scalable and flexible enough to be useful for simulating various kinds of networks and paradigms. The computing device is based on an existing coarse grain parallel framework (INMOS Transputers), improved with finer grain parallel abilities through VLSI chips, called the Lneuro 1.0, for LEP neuromimetic circuit. The modular architecture of the circuit enables to build various kinds of boards to match the foreseen range of applications, or to increase the power of the system by adding more hardware. The resulting machine remains reconfigurable according to a specific problem to some extent, at the system level through the Transputers framework, as well as at the circuit level. A small scale machine has been realized using 16 Lneuros arranged in clusters composed of 4 circuits and a controller, to experimentally test the behaviour of this architecture (the communication, control, primitives required, etc.). Results are presented on an integer version of Kohonen feature maps. The speedup factor increases regularly with the number of clusters involved (up to a factor 80). Some ways to improve this family of neural networks simulation machines are also investigated. -------------------------------------------------------------------------------- The file can be obtained the usual way: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ... ftp> cd pub/neuroprose ftp> binary ftp> get mauduit.lneuro.ps.Z ftp> quit unix> uncompress mauduit.lneuro.ps.Z then print the file mauduit.lneuro.ps on a postscript printer Nicolas Mauduit ---------------------------------------------------------------- Nicolas Mauduit, Dept ECE | Phone (619) 534 6026 UCSD EBU 1 | FAX (619) 534 1225 San Diego CA 92093-0407 USA | Email mauduit at celece.ucsd.edu
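As a concrete illustration of the kind of computation benchmarked above, here is a minimal sketch of one integer (fixed-point) Kohonen feature-map update step in Python. It is purely illustrative and is not the Lneuro implementation; the scaling factor, the shift-based learning rate, and the one-dimensional neighbourhood are assumptions.

---------------------------------------------------------------------
import numpy as np

# Minimal integer Kohonen (SOM) update step -- illustrative only, not Lneuro code.
# Weights and inputs are plain integers standing for fractions with SCALE as the
# implicit denominator, in the spirit of a fixed-point hardware implementation.
SCALE = 256                       # fixed-point scaling factor (assumption)
rng = np.random.default_rng(0)

n_units, n_inputs = 16, 8         # small one-dimensional map, small input vector
weights = rng.integers(0, SCALE, size=(n_units, n_inputs), dtype=np.int32)

def som_step(x, weights, shift=3, neighbours=1):
    # Find the best-matching unit by squared distance in integer arithmetic.
    diff = weights - x
    dist = np.sum(diff * diff, axis=1)
    best = int(np.argmin(dist))
    lo = max(0, best - neighbours)
    hi = min(len(weights), best + neighbours + 1)
    # Move the winner and its map neighbours a fraction 2**-shift toward x,
    # using an arithmetic shift in place of a multiply by a learning rate.
    weights[lo:hi] += (x - weights[lo:hi]) >> shift
    return best

x = rng.integers(0, SCALE, size=n_inputs, dtype=np.int32)
print("best-matching unit:", som_step(x, weights))
---------------------------------------------------------------------

On a machine like the one described above, this kind of step would be distributed over clusters of circuits; the sketch only shows the per-step arithmetic, not the parallel mapping.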
Speakers & Topics Larry Maloney, (Cognitive & Neural Science, NYU): NN & Vision Richard Granger (Neurobio. of Learning & Mem, Irvine): NN & Cortical Processing Stephen Hanson (Siemens Research): NN & Event Perception Mark Gluck (Neuroscience, Rutgers): NN & Human Learning Lee Giles (NEC Corp.): NN & Temporal Sequences David Servan-Schreiber (W. Psychiatric Inst. & CMU): NN & Psychiatric Disorders __________________________________________________________ This one day symposium is part of the 17th Annual Interdisciplinary Conference. The full conference runs from January 12 - January 16, 1992 at Teton Village, Jackson Hole, Wyoming. The meeting is an attempt to facilitate the interchange of ideas among researchers in a variety of disciplines relating to cognitive, neuro, and computer science. For further information on the conference, contact: Interdisciplinary Conference Attn: Dr. George Sperling Psychology & Neural Sciences, NYU 6 Washington Place, Rm. 980 NYC, NY 10003 Email: gs at cns.nyu.edu From wahba at stat.wisc.edu Mon Dec 2 14:54:57 1991 From: wahba at stat.wisc.edu (Grace Wahba) Date: Mon, 2 Dec 91 13:54:57 -0600 Subject: cross validation Message-ID: <9112021954.AA03233@hera.stat.wisc.edu> Re: Cross validation and Generalized Cross Validation: These tools are widely used in the statistics literature to resolve the bias - variance tradeoff in various contexts, including some which might be considered ml, and many image reconstruction algorithms, especially those which might be considered deconvolution. There is also some work on edge detection. You can read all about it in the context of multivariate function estimation and regularization (in reproducing kernel Hilbert spaces) in "Spline Models for Observational Data" - (see *** below) including 30 pages of references, mostly to the smoothing and regularization literature. Thin plate splines (source of one of the popular radial basis functions) also get a chapter. As a statistician new to the ml literature I am having fun reading the ml mail and observe that there is a growing overlap between the ml the statistics literature and all would benefit from seeing what the other side is doing... ..***.... Spline Models for Observational Data, by Grace Wahba v. 59 in the CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, PA, March 1990. Softcover, 169 pages, bibliography, author index. ISBN 0-89871-244-0 List Price $24.75, SIAM or CBMS* Member Price $19.80 (Domestic 4th class postage free, UPS or Air extra) May be ordered from SIAM by mail, electronic mail, or phone: SIAM P. O. Box 7260 Philadelphia, PA 19101-7260 USA SIAM at wharton.upenn.edu 1-800-447-7426 (8:30-4:45 Eastern Standard Time, toll-free from the US, I dont know if this works overseas) May be ordered on American Express, Visa or Mastercard, or may be billed (extra charge). CBMS member organizations include AMATC, AMS, ASA, ASL, ASSM, IMS, MAA, NAM, NCSM, ORSA, SOA and TIMS. From ICPRK%ASUACAD.BITNET at BITNET.CC.CMU.EDU Mon Dec 2 16:40:59 1991 From: ICPRK%ASUACAD.BITNET at BITNET.CC.CMU.EDU (Peter Killeen) Date: Mon, 02 Dec 91 14:40:59 MST Subject: job ad for NIPS message board Message-ID: <01GDN7JGAM6O9EDC4T@BITNET.CC.CMU.EDU> Experimental Psychologist Arizona State University is recruiting an Associate Professor of Experimental Psychology. The successful candidate must have a Ph.D. in psychology with specialty in adaptive systems/neural networks. 
There was an article on connectionists recently from Gary Cottrell with some PCA references in it. I also have some work on Information Theory with implications on PCA (related to work of Linsker): M. D. Plumbley and F. Fallside, "An Information-Theoretic Approach to Unsupervised Connectionist Models", Proceedings of the 1988 Connectionist Models Summer School, pp239-245, Morgan Kaufmann, 1988 M. D. Plumbley "On Information Theory and Unsupervised Neural Networks", Tech Report CUED/F-INFENG/TR.78, Cambridge University Engineering Department, Cambridge CB2 1PZ, UK. August 1991. (Tech report version of my Thesis). The tech report shows that some of the PCA algorithms directly decrease information loss across the network as they progress. Mark Plumbley Tel: [+44|0] 71 873 2241/2234 Centre for Neural Networks Fax: [+44|0] 71 873 2017 Department of Mathematics/King's College London/Strand/London WC2R 2LS/UK From P.Refenes at cs.ucl.ac.uk Tue Dec 10 12:52:59 1991 From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk) Date: Tue, 10 Dec 91 17:52:59 +0000 Subject: PREPRINT Message-ID: CURRENCY EXCHANGE RATE PREDICTION & NEURAL NETWORK DESIGN STRATEGIES A. N. REFENES, M. AZEMA-BARAC, L. CHEN, & S. A. KAROUSSOS Department of Computer Science, University College London, Gower Street, WC1, 6BT, London, UK. ABSTRACT This paper describes a non trivial application in forecasting currency exchange rates, and its implementation using a multi-layer perceptron network. We show that with careful network design, the backpropagation learning procedure is an effective way of training neural networks for time series prediction. The choice of squashing function is an important design issue in achieving fast convergence and good generalisation performance. We evaluate the use of symmetric and asymmetric squashing functions in the learning procedure, and show that symmetric functions yield faster convergence and better generalisation performance. We derive analytic results to show the conditions under which symmetric squashing functions yield faster convergence, and to quantify the upper bounds on the convergence improvement. The network is evaluated both for long term forecasting without feed- back (i.e. only the forecast prices are used for the remaining trading days) and for short term forecasting with hourly feed-back. The network learns the training set near perfect and shows accurate prediction, making at least 22% profit on the last 60 trading days of 1989. =========================================================== From SAYEGH at CVAX.IPFW.INDIANA.EDU Tue Dec 10 20:25:45 1991 From: SAYEGH at CVAX.IPFW.INDIANA.EDU (SAYEGH@CVAX.IPFW.INDIANA.EDU) Date: Tue, 10 Dec 1991 20:25:45 EST Subject: conference announcement Message-ID: <911210202545.2020e131@CVAX.IPFW.INDIANA.EDU> CALL FOR PAPERS FIFTH CONFERENCE ON NEURAL NETWORKS AND PARALLEL DISTRIBUTED PROCESSING INDIANA UNIVERSITY-PURDUE UNIVERSITY 9, 10, 11 APRIL 1992 The Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University will be held on the Fort Wayne Campus, April 9, 10 and 11, 1992. Authors are invited to submit a one page abstract of current research in their area of Neural Networks Theory or Application before February 3, 1991. Notification of acceptance or rejection will be sent by February 28. Conference registration is $20 and students attend free. Some limited financial support might also be available to allow students to attend. 
Abstracts and inquiries should be addressed to: email: sayegh at ipfwcvax.bitnet US mail: Prof. Samir Sayegh Physics Department Indiana University-Purdue University Fort Wayne, IN 46805 From thomasp at informatik.tu-muenchen.de Wed Dec 11 06:55:26 1991 From: thomasp at informatik.tu-muenchen.de (Thomas) Date: Wed, 11 Dec 91 12:55:26 +0100 Subject: Pen-Based Computing and HWCR Message-ID: <91Dec11.125539met.34095@gshalle1.informatik.tu-muenchen.de> I'm testing several Notepad or "Pen-Based" computers and I wonder if anybody knows of NN-approaches to handwritten character recognition (HWCR) actually (or about to be) USED in Notepads from Grid, NCR, Momenta, MicroSlate, Tusk, Telepad, Samsung.. or within operating systems like PenPoint, Windows for Pens...etc. I'm aware of the fact, that NESTOR is developping HWCR-Software for Notepads but I know little else regarding their efforts. I'm also partly aware of the vast literature regarding HWCharacter/DigitR but I'm solely interested in PRACTICAL and commercially viable HWCR-approaches. Depending on feedback, I'll summarize responses to the net. Patrick Thomas N3 - Nachrichten Neuronale Netze From desa at cs.rochester.edu Wed Dec 11 17:25:03 1991 From: desa at cs.rochester.edu (desa@cs.rochester.edu) Date: Wed, 11 Dec 91 17:25:03 EST Subject: TR available in Neuroprose Archives Message-ID: <9112112225.AA02044@cyan.cs.rochester.edu> The following technical report has been placed in the neuroprose archive: Top-down teaching enables non-trivial clustering via competitive learning Virginia de Sa Dana Ballard desa at cs.rochester.edu dana at cs.rochester.edu Dept. of Computer Science University of Rochester Rochester, NY 14627-0226 Abstract: Unsupervised competitive learning classifies patterns based on similarity of their input representations. As it is not given external guidance, it has no means of incorporating task-specific information useful for classifying based on semantic similarity. This report describes a method of augmenting the basic competitive learning algorithm with a top-down teaching signal. This teaching signal removes the restriction inherent in unsupervised learning and allows high level structuring of the representation while maintaining the speed and biological plausibility of a local Hebbian style learning algorithm. Examples, using this algorithm in small problems, are presented and the function of the teaching input is illustrated geometrically. This work supports the hypothesis that cortical back-projections are important for the organization of sensory traces during learning. ----------------------------------------------------------------------- To retrieve by anonymous ftp: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:): anonymous Password (cheops.cis.ohio-state.edu:anonymous): ftp> cd pub/neuroprose ftp> binary ftp> get desa.top_down.ps.Z ftp> quit unix> uncompress desa.top_down.ps unix> lpr -P(your_local_postscript_printer) desa.top_down.ps Hard copy requests can be sent to tr at cs.rochester.edu or Technical Reports Dept. of Computer Science University of Rochester Rochester, NY 14627-0226 (There is a nominal $2 charge for hard copy TR's) From terry at jeeves.UCSD.EDU Wed Dec 11 22:08:41 1991 From: terry at jeeves.UCSD.EDU (Terry Sejnowski) Date: Wed, 11 Dec 91 19:08:41 PST Subject: Neural Computation 3:4 Message-ID: <9112120308.AA19887@jeeves.UCSD.EDU> Neural Computation Winter 1991, Volume 3, Issue 4 View Neural Network Classifiers Estimate Bayesian a Posteriori Probabilities Michael D. 
Richard and Richard P. Lippmann Note Lowering Variance of Decisions by Using Artificial Network Portfolios G. Mani Letters Oscillating Networks: Control of Burst Duration by Electrically Coupled Neurons L.F. Abbott, E. Marder, and S.L. Hooper A Computer Simulation of Oscillatory Behavior in Primary Visual Cortex Matthew A. Wilson and James M. Bower Segmentation, Binding, and Illusory Conjunctions D. Horn, D. Sagi, and M. Usher Contrastive Learning and Neural Oscillations Fernando Pineda and Pierre Baldi Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multi-Layer Networks Marwan Jabri and Barry Flower Predicting the Future: Advantages of Semi-Local Units Eric Hartman and James D. Keeler Improving the Generalisation Properties of Radial Basis Function Neural Networks Chris Bishop Temporal Evolution of Generalization during Learning in Linear Networks Pierre Baldi and Yves Chauvin Learning the Unlearnable Dan Nabutovsky and Eytan Domany Kolmogorov's Theorem is Relevant Vera Kurkov An Exponential Response Neural Net Shlomo Geva and Joaquin Sitte ----- SUBSCRIPTIONS - VOLUME 4 - BIMONTHLY (6 issues) ______ $40 Student ______ $65 Individual ______ $150 Institution Add $12 for postage and handling outside USA (+7% for Canada). (Back issues from Volumes 1-3 are available for $28 each.) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. (617) 253-2889. ----- From steensj at daimi.aau.dk Fri Dec 13 14:31:24 1991 From: steensj at daimi.aau.dk (steensj@daimi.aau.dk) Date: Fri, 13 Dec 91 14:31:24 MET Subject: Report available in Neuroprose Archives Message-ID: <9112131331.AA03566@bmw.daimi.aau.dk> ****** Please do not forward to other lists. Thank you ******* The following report has been placed in the neuroprose archives at Ohio State. Ftp instructions follow the abstract. ------------------------------------------------------------- A Conceptual Approach to Generalization in Dynamic Neural Networks Steen Sjogaard Computer Science Department Aarhus University DK-8000 Aarhus C. Denmark steensj at daimi.aau.dk ABSTRACT Inspired by the famous paper "Generalization as Search" by Tom Mitchell from 1982, a conceptual approach to generalization in artificial neural networks is proposed. The two most important ideas are (1) to consider the problem of forming a general description of a class of objects as a search problem, and (2) to divide the search space into a static and a dynamic part. These ideas are beneficial as they emphasize the evolution or process that a learner must undergo in order to discover a valid generalization. We find that this approach and the adapted conceptual framework provide a more varied and intuitively appealing view on generalization. Furthermore, a new cascade- correlation learning algorithm which is very similar to Fahlman and Lebiere's Cascade-Correlation Learning Architecture from 1990, is proposed. The capabilities of these two learning algorithms are discussed, and a direct comparison in terms of the conceptual framework is performed. Finally, the two algorithms are analyzed empirically, and it is demonstrated how the obtained results can be explained and discussed in terms of the conceptual framework. The empirical analyses are based on two experiments: The first experiment concerns the scaling behavior of the two network types, while the other experiment concerns a closer analysis of the representation that the two network types utilize for found generalizations. 
Both experiments show that the networks generated by the new algorithm perform better than the networks generated by the Cascade-Correlation Learning Architecture on the relatively simple geometric classification problem considered. ------------------------------------------------------------- To retrieve by anonymous ftp: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get sjogaard.concept.ps.Z ftp> quit unix> uncompress sjogaard.concept.ps.Z unix> lpr -P sjogaard.concept.ps /Steen From amari at sat.t.u-tokyo.ac.jp Sat Dec 14 15:13:17 1991 From: amari at sat.t.u-tokyo.ac.jp (Shun-ichi Amari) Date: Sat, 14 Dec 91 15:13:17 JST Subject: neural PCA; TRs on neural learning Message-ID: <9112140613.AA13370@sat.t.u-tokyo.ac.jp> In response to Strintzis' request on papers on neural PCA, I would like to note one old paper: S.Amari, "Neural theory of association and concept-formation", Biol. Cybern. vol. 26 (1977), pp.175-185. In this paper, a general neural learning scheme was treated. As an example, it was clearly analyzed (p.179) that a type of learning leads to the eigenvector corresponding to the maximal eigenvalue of the covariance matrix of input signals. I do not know if this is the first paper treating neural PCA. Maybe not. New Technical Reports are now available. Four Types of Learning Curves, by S.Amari, N.Fujita and S.Shinomoto, METR 91-04, May 1991, U. Tokyo Statistical Theory of Learning Curves under Entropic Loss Criterion, by S.Amari and N.Murata, METR 91-12, Nov. 1991, U. Tokyo. Both of them study the relation between the number of training examples and generalization errors. If there is a rush of paper requests, I am afraid that I cannot handle them all. From jfj at m53.limsi.fr Sun Dec 15 06:22:24 1991 From: jfj at m53.limsi.fr (Jean-Francois Jadouin) Date: Sun, 15 Dec 91 12:22:24 +0100 Subject: NNs and NLP Message-ID: <9112151122.AA13237@m53.limsi.fr> Hi. I am collecting references on neural network approaches to natural language processing (articles dealing with speech recognition excluded), and would like some help! If enough interested parties respond, I will post the results. P.S.: please write to me directly (jfj at limsi.fr), and not to connectionists.
Thanx in advance, jfj From barryf at ee.su.OZ.AU Mon Dec 16 18:01:38 1991 From: barryf at ee.su.OZ.AU (Barry Flower) Date: Tue, 17 Dec 1991 10:01:38 +1100 Subject: ACNN92 Program Message-ID: <9112162301.AA25524@brutus.ee.su.OZ.AU> THE THIRD AUSTRALIAN CONFERENCE ON NEURAL NETWORKS (ACNN'92) CONFERENCE PROGRAM 3rd - 5th FEBRUARY 1992 AUSTRALIAN NATIONAL UNIVERSITY CANBERRA, AUSTRALIA ACNN'92 Organising Committee ~~~~~~~~~~~~~~~~~~~~~~~~~~ Conference Chairman Dr Marwan Jabri Director Systems Engineering & Design Automation Laboratory (SEDAL) School of Electrical Engineering University of Sydney Technical Program Co-Chairs Prof Bill Levick, Australian National University A/Prof Ah Chung Tsoi, University of Queensland Stream Chairs Prof Yanni Attikiouzel, University of Western Australia Prof Max Bennett, University of Sydney Prof Max Coltheart, Macquarie University Dr Marwan Jabri, University of Sydney Dr M Palaniswami, University of Melbourne Dr Stephen Pickard, University of Sydney Dr M Srinivasan, Australian National University A/Prof Ah Chung Tsoi, University of Queensland Local Arrangements Dr M Srinivasan, Australian National University Institutions Liaison Dr N Nandagopal, DSTO Sponsorship Dr Stephen Pickard, University of Sydney Publicity Mr Barry Flower, University of Sydney Publications Mr Philip Leong, University of Sydney Secretariat Mrs Agatha Shotam, University of Sydney ---------------------------------------------------------------------- ---------------------------------------------------------------------- For further information and registrations please contact: Mrs Agatha Shotam Secretariat ACNN'92 Sydney University Tel: (+61-2) 692 4214 Electrical Engineering Fax: (+61-2) 660 1228 NSW 2006 Australia Email: acnn92 at ee.su.oz.au. ---------------------------------------------------------------------- ---------------------------------------------------------------------- PROGRAMME ~~~~~~~ Monday, 3rd February 1992 9.00 - 10.00 am Registration 10.00 - 10.30 am Official Opening The Hon Ross Free, Minister for Science & Technology 10.30 - 11.00 am Morning Tea 11.00 - 12.30 pm Session 1 On the Existence of "Fast" and "Slow" Directionally Sensitive Motion Detector Neurons in Insects G A Horridge & Ljerka Marcelja Centre for Visual Sciences, RSBS Australian National University, Australia Neural Networks for the Detection of Motion Boundaries M V Srinivasan & P Sobey Centre for Visual Sciences, RSBS Australian National University, Australia On the Mechanism Underlying Movement Detection in the Fly Visual System A Bouzerdoum Department of Electrical & Electronic Engineering University of Adelaide, Australia 12.30 - 2.00 pm Lunch 2.00 - 3.30 pm Session 2 (invited) A Generalization Method for Back-Propagation using Fuzzy Sets B R Hunt, Y Y Qi & D DeKruger Department of Electrical & Computer Engineering University of Arizona, USA Connectionist Models of Musical Pattern Recognition C Stevens Department of Psychology University of Sydney, Australia Analog VLSI Implementation of Adaptive Algorithms by an Extended Hebbian Synapse Circuit T Morie, O Fujitsu and Y Amemiya NTT LSI Laboratories, Japan 3.30 - 4.00 pm Afternoon Tea 4.00 - 5.30 pm Session 3 Computer Simulation: An Aid to Understanding the Neuronal Circuits that Control the Behaviour of the Intestine J C Bornstein, J B Furness, H Kelly Department of Physiology University of Melbourne, Australia T O Neild & R A R Bywater Department of Physiology Monash University A Multi-Module Neural Network Approach for ICEG Classification Z Chi & 
M A Jabri Department of Electrical Engineering University of Sydney, Australia Circuit Complexity for Neural Computation Kai-Yeung Siu, V Roychowdhury and T Kailath Department of Electrical & Computer Engineering University of California, Irvine, USA 5.30 - 8.00 pm Poster Session 1 and Cocktails Tuesday, 4th February 1992 9.00 - 10.30 am Session 4 Sparse Associative Memory W G Gibson and J Robinson School of Mathematics & Statistics University of Sydney, Australia Robustness and Universal Approximation in Multilayer Feedforward Neural Networks P Diamond and I V Fomenko Mathematics Department The University of Queensland, Australia A Partially Correlated Higher Order Recurrent Neural Network P S Malin & M Palaniswami Department of Electrical Engineering University of Melbourne, Australia 10.30 - 11.00 am Morning Tea 11.00 - 12.30 pm Session 5 A Computer Simulation of Intestinal Motor Activity S J H Brookes Department of Physiology Flinders University, Australia High Precision Hybrid Analogue/Digital Synapses A Azhad & J Morris Department of Computer Science University of Tasmania, Australia Model of the Spinal Frog Neural System Underlying the Wiping Reflex Ranko Babich ETF Pristina Yugoslavia 12.30 - 2.30 pm Lunch Postgraduate Students Session coordinated by Janet Wiles, University of Queensland 2.30 - 4.00 pm Session 6 Panel on Challenging Issues in Neural Networks Research Panelists include: Y Attikiouzel, University of Western Australia T Caelli, University of Melbourne M Coltheart, Macquarie University B Hunt, University of Arizona B Levick, Australian National University 4.00 - 4.30 pm Afternoon Tea 4.30 - 6.00 pm Session 7 "Solving" Combinatorially Hard Problems by Combining Deterministic Annealing with Constrained Optimisation Paul Stolorz Theoretical Division Los Alamos National Laboratory, USA The Recognition Capability of RAM-Based Neural Networks R Bowmaker & G G Coghill School of Engineering University of Auckland, New Zealand Modelling the Human Text-to-Speech System Max Coltheart School of Behavioural Sciences Macquarie University 6.00 - 8.00 pm Poster Session 2 7.00 - 8.00 pm BBQ Wednesday, 5th February 1992 9.00 - 10.40 am Session 8 Missing Values in a Backpropagation Neural Net P Vamplew & A Adams Department of Computer Science University of Tasmania, Australia Training Limited Precision Feedforward Neural Networks Y Xie & M A Jabri Department of Electrical Engineering University of Sydney, Australia Modelling Robustness of FIR amd IIR Synapse Multilayer Perceptrons Andrew Back & A C Tsoi Department of Electrical Engineering University of Queensland, Australia Poster Highlights 10.40 - 11.00 am Morning Tea 11.00 - 12.30 pm Session 9 Low-Level Insect Vision: A Large but Accessible Biological Neural Network D Osorio & A C James Centre for Visual Sciences Australian National University, Australia Comparisons of Letter Recognition by Humans and Artificial Neural Networks Cyril Latimer Department of Psychology University of Sydney, Australia Evidence that the Adaptive Gain Control Exhibited by Neurons of the Striate Visual Cortex is a Co-operative Network Property T Maddess & T R Vidyasagar Centre for Visual Sciences Australian National University, Australia 12.30 - 2.00 pm Lunch 12.30 - 2.00 pm Poster Session 3 2.00 - 3.00 pm Session 10 Panel on Benefits of Neural Network Technologies to Australian Commercial Applications. 
Panelists include: A Bowles, BHP Research Melbourne D Nandagopal, DSTO Australia K Hubick, DITAC P Nickolls, Telectronics Pacing Systems R Smith, ATERB 3.00 - 3.30 pm Afternoon Tea 3.30 - 4.30 pm Session 11 Applications of Artificial Neural Networks in the Australian Defence Industry N Nandagopal Guided Weapons Division DSTO, Australia Low Power Analogue VLSI Implementation of a Feed-Forward Neural Network S J Pickard, M A Jabri, P H W Leong B Flower & P Henderson Department of Electrical Engineering University of Sydney, Australia 4.30 - 5.00 pm Closing Poster Session 1 Monday, 3rd February 1992 5.30 pm - 8.00 pm Entropy Production - a New Harmony Function for Hopfield Like Networks L Andrey Institute of Computer & Information Science Czechoslovak Academy of Sciences, Czechoslovakia A New Method of Training Neural Nets to Counteract Their Overgeneralization M Bahrami & K E Tait School of Electrical Engineering University of New South Wales The Development and Application of Dynamic Models in Process Control Using Neural Networks C J Chessari & G W Barton Department of Chemical Engineering University of Sydney Image Restoration in the Homogeneous ANN and the Role of Time Discreteness N S Belliustin Scientific Research Institute of Radiophysics, USSR V G Yakhno Institute of Applied Physics, USSR Neuronal Control of the Intestine - a Biological Neural Network S J H Brookes Department of Physiology & Ctr for Neuroscience Flinders University A Neural Network Approach to Phonocardiography I Cathers Department of Biological Sciences Cumberland College of Health Sciences Motion Analysis and Range Estimation Using Neural Networks M Cavaiuolo, A J S Yakovleff & C R Watson Electronic Research Laboratory DSTO, Australia Unsupervised Clustering Using Dynamic Competitive Learning S J Kia & G G Coghill Department of Electrical & Electronic Engineering University of Auckland, New Zealand Neural Net Simulation of the Peristaltic Reflex A D Coop & S J Redman Division of Neuroscience, JCSMR Australian National University Novel Applications of a Standard Recurrent Network Model H Debar CSEE/DCI, France B Dorizzi Institut National des Telecommunications, France A Self-Organizing Neural Tree Architecture L Y Fang & A Jennings Artificial Intelligence Section Telecom Australia Research Labs K Q-Q Li Department of Computer Science Monash University T Li & S Klasa Department of Computer Science Concordia University, Canada Computer Simulation of Spinal Cord Neural Circuits that Control Muscle Movement B P Graham Centre for Information Science Research Australian National University S J Redman Division of Neuroscience, JCSMR Australian National University Temporal Patterns of Auditory Responses: Implications for Encoding Mechanisms in the Ear K G Hill Developmental Neurobiology Group, RSBS Australian National University Emulation of the Neuro-morphological Functions of Biological Visual Receptive Fields for Medical Image Processing S K Hungenahally School of Microelectronics Griffith University COPE - A Hybrid Connectionist Production System Environment N K Kasabov Department of Computer Science University of Essex, UK Pruning Large Synaptic Weights A Kowalczyk Artificial Intelligence Section Telecom Australia Research Labs A Connectionist Model of Attentional Learning Using a Sequentially Allocatable Spotlight of Attention C Latimer & Z Schreter Department of Psychology University of Sydney An Analogue Low Power VLSI Neural Network P H W Leong & M A Jabri Department of Electrical Engineering University of Sydney 
Radar Target Recognition using ART2 D Nandagopal, A C Wright, N M Martin, R P Johnson, P Lozo & I Potter Guided Weapons Division DSTO, Australia An Expert System and a Neural Network Combine to Control the Robotic Arc Welding Process E Siores Mechanical Engineering Department University of Wollongong Poster Session 2 Tuesday, 4th February 1992 6.00 - 8.00 pm Nonlinear Adaptive Control Using an IIR MLP A Back Department of Electrical Engineering University of Queensland Neural Network Analysis of Geomagnetic Survey Data C J S deSilva & Y Attikiouzel Department of Electrical Engineering University of Western Australia Seismic Event Classification Using Self-Organizing Neural Networks F U Dowla, W J Maurer & S P Jarpe Lawrence Livermore National Laboratory, USA Task Based Pruning R Dunne Mathematics Programme Murdoch University N A Campbell & H T Kiiveri Division of Mathematics & Statistics C S I R O Training Continually Running Recurrent Networks and Neuron with Gain Using Weight Perturbation B G Flower & M A Jabri Department of Electrical Engineering University of Sydney Science with Neural Nets: An Application to Nuclear Physics S Gazula Department of Physics Washington University, USA Image Restoration with Iterative K-nearest Neighbor Operation with the Scheduled K size (IKOWSK) Y Hagihara Faculty of Technology Tokyo University of Agriculture & Technology, Japan The Influence of Output Bit Mapping on Convergence Time in M-ary PSK Neural Network Detectors J T Hefferan & S Reisenfeld School of Electrical Engineering University of Technology, Sydney Discriminant Functions and Aggregation Functions: Emulation and Generalization of Visual Receptive Fields S K Hungenahally School of Microelectronics Griffith University Selective Presentation Learning for Pattern Recognition by Back-Propagation Neural Networks K Kohara NTT Network Information Systems Laboratories, Japan Identification of Essential Attributes for Neural Network Classifier A Kowalczyk Artificial Intelligence Section Telecom Australia Research Labs Back Propagation and the N-2-N Encoder Problem R Lister Basser Department of Computer Science University of Sydney A Comparison of Transfer Functions for Feature Extracting Layers in the Neocognitron D R Lovell, A C Tsoi & T Downs Department of Electrical Engineering University of Queensland Target Cuing: A Heterogeneous Neural Network Approach H McCauley Naval Weapons Center, USA Some Word Recognition Experiments with a Modified Neuron Model M Saseetharan & M P Moody School of Electrical & Electronic Systems Engineering Queensland University of Technology Regularization and Spline Fitting by Analog Networks D Suter Department of Computer Science La Trobe University Information Transformation Across a Physiological Synapse R M Vickery, B D Gynther & M J Rowe School of Physiology & Pharmacology University of New South Wales Towards a Neural Network Implementation of Hoffman's Lie Algebra for Vision J Wiles Department of Psychology University of Queensland Approximation Theoretic Results for Neural Networks R C Williamson Department of Systems Engineering Australian National University U Helmke Department of Mathematics University of Regensburg, Germany Pattern Recognition by Using a Compound Eye-like Hybrid System S W Zhang, M Nagle & M V Srinivasan Centre for Visual Sciences, RSBS Australian National University Poster Session 3 Wednesday, 5th February 1992 12.30 - 2.00 pm Simplifying the Hopfield/Tank Algorithm in Solving the Travelling Salesman Problem W K Lai & G G Coghill School of 
Engineering University of Auckland Domain Classification of Language Using Neural Networks M Flower Artificial Intelligence Section Telecom Australia Research Labs A New Self-Organisation Strategy for Floorplan Design J Jiang & M A Jabri Department of Electrical Engineering University of Sydney HOCAM - A Content Addressable Memory using Higher Order Neural Networks M Palaniswami Department of Electrical Engineering University of Melbourne Mean Square Reconstruction Error Criterion in Biological Vision T R Pattison Department of Electrical Engineering University of Adelaide Making a Simple Recurrent Network a Self-Oscillator by Incremental Training S Phillips Department of Computer Science University of Queensland Neural Nets in Free Text Information Filtering J C Scholtes Department of Computational Linguistics University of Amsterdam Neural Network Learning with Opaque Mappings C J Thornton School of Cognitive & Comp Sci University of Sussex Neurocognitive Pattern Transmission; by Identity Mapping Networks J Tizard & C R Clark School of Social Sciences Flinders University of South Australia The Connectionist Sequential Machine: A General Model of Sequential Networks C Touzet & N Giambiasi LERI, EERIE, France Efficient Computation of Gabor Transform using A Neural Network H Wang & H Yan Department of Electrical Engineering University of Sydney Piecewise Linear Feedforward Neural Networks R C Williamson Department of Systems Engineering Australian National University P Bartlett Department of Electrical Engineering University of Queensland Functional Link Net and Simulated Annealing Approach for the Economic Dispatch of Electric Power K P Wong & C C Fung Department of Electrical Engineering University of Western Australia Non-linear Prediction for Resolution Enhancement of Band-limited Signals H Yan Department of Electrical Engineering University of Sydney Dynamics of Neural Networks H Yang Department of Computer Science La Trobe University A Probabilistic Neural Network Edge Detector for 2 Dimensional Gray Scale Images A Zaknich & Y Attikiouzel Department of Electrical Engineering University of Western Australia A Hybrid Approach to Isolated-Digit Recognition D Zhang & J B Millar Computer Sciences Lab, RSPSE Australian National University ---------------------------------END--------------------------------------- From bhaskar at theory.cs.psu.edu Tue Dec 17 09:46:14 1991 From: bhaskar at theory.cs.psu.edu (Bhaskar DasGupta) Date: Tue, 17 Dec 1991 09:46:14 -0500 Subject: recurrent nets. Message-ID: <9112171446.AA00298@omega.theory.cs.psu.edu> The following will appear as a concise paper in IEEE SouthEastcon 1992. Learning Capabalities of Recurrent Networks. Bhaskar DasGupta Computer Science Department Penn State. Brief summary: Recurrent Neural Networks are models of computation in which the underlying graph is directed ( possibly cyclic ), and each processor changes state according to some function computed according to its weighted summed inputs, either deterministically or probabilistically. Under arbitrary probabilistic update rules, such models can be as powerful as Probabilistic Turing Machines. For probabilistic models we can define the error probability as the maximum probability of reaching an incorrect output configuration. 
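A minimal sketch of the kind of update step such a recurrent network performs may help fix ideas; it is illustrative only and not taken from the paper, and the synchronous update order, the logistic firing probability, and all names below are assumptions of this sketch.

import numpy as np

def step_deterministic(state, W, theta):
    # One synchronous update of a deterministic recurrent threshold net:
    # unit i fires iff its weighted summed input exceeds its threshold.
    return (W @ state > theta).astype(int)

def step_probabilistic(state, W, theta, rng):
    # Probabilistic variant: each unit fires with a probability that grows
    # with its weighted summed input (a logistic rule is assumed here).
    p = 1.0 / (1.0 + np.exp(-(W @ state - theta)))
    return (rng.random(state.shape[0]) < p).astype(int)

# Tiny example: iterate a 3-unit deterministic net for a few steps.
W = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])
theta = np.zeros(3)
s = np.array([1, 0, 1])
for _ in range(5):
    s = step_deterministic(s, W, theta)
print(s)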
It is observed: If the error probability is bounded then such a network can be simulated by a deterministic finite automaton (with exponentially many states). For deterministic recurrent nets where each processor implements a threshold function: it may accept all P-complete language problems. However, restricting the weight-threshold relationship may result in accepting a weaker class, the NC class (problems which can be solved in poly-log time with polynomially many processors). The results are straightforward to derive, so I did not put them in the neuroprose archive. Thanks. Bhaskar From tesauro at watson.ibm.com Tue Dec 17 16:11:23 1991 From: tesauro at watson.ibm.com (Gerald Tesauro) Date: Tue, 17 Dec 91 16:11:23 EST Subject: TR available Message-ID: The following technical report is now available. (This is a long version of the paper to appear in the next NIPS proceedings.) To obtain a copy, send a message to "tesauro at watson.ibm.com" and be sure to include your PHYSICAL mail address. Practical Issues in Temporal Difference Learning Gerald Tesauro IBM Thomas J. Watson Research Center PO Box 704, Yorktown Heights, NY 10598 USA Abstract: This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD($\lambda$) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD($\lambda$) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex nontrivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating. From ingber at umiacs.UMD.EDU Wed Dec 18 09:39:02 1991 From: ingber at umiacs.UMD.EDU (Lester Ingber) Date: Wed, 18 Dec 1991 09:39:02 EST Subject: Generic mesoscopic neural networks ... neocortical interactions Message-ID: <9112181439.AA08115@dweezil.umiacs.UMD.EDU> *** Please do not forward to any other lists *** Generic mesoscopic neural networks based on statistical mechanics of neocortical interactions Lester Ingber A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions, demonstrating its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. This methodology also defines an algorithm to construct a mesoscopic neural network (MNN), based on realistic neocortical processes and parameters, to record patterns of brain activity and to compute the evolution of this system.
Furthermore, this new algorithm is quite generic, and can be used to similarly process information in other systems, especially, but not limited to, those amenable to modeling by mathematical physics techniques alternatively described by path-integral Lagrangians, Fokker-Planck equations, or Langevin rate equations. This methodology is made possible and practical by a confluence of techniques drawn from SMNI itself, modern methods of functional stochastic calculus defining nonlinear Lagrangians, Very Fast Simulated Re-Annealing (VFSR), and parallel-processing computation. I have placed the above preprint in the Neuroprose archive as ingber.mnn.ps.Z. To obtain this paper: local% ftp archive.cis.ohio-state.edu [local% ftp 128.146.8.52] Name (archive.cis.ohio-state.edu:yourloginname): anonymous Password (archive.cis.ohio-state.edu:anonymous): yourloginname ftp> cd pub/neuroprose ftp> binary ftp> get ingber.mnn.ps.Z ftp> quit local% uncompress ingber.mnn.ps.Z local% lpr [-P..] ingber.mnn.ps This will print out 8 pages on your PostScript laserprinter. If you do not have access to ftp, then send me an email request, and I will email you a PostScript-compressed-uuencoded ascii file with instructions on how to produce laserprinted copies, just requiring the additional first step of 'uudecode file'. Sorry, but I cannot take on the task of mailing out hardcopies of this paper. ------------------------------------------ | Prof. Lester Ingber | | ______________________ | | Science Transfer Corporation | | P.O. Box 857 703-759-2769 | | McLean, VA 22101 ingber at umiacs.umd.edu | ------------------------------------------ From Paul_Gleichauf at B.GP.CS.CMU.EDU Wed Dec 18 13:25:12 1991 From: Paul_Gleichauf at B.GP.CS.CMU.EDU (Paul_Gleichauf@B.GP.CS.CMU.EDU) Date: Wed, 18 Dec 91 13:25:12 EST Subject: Generic mesoscopic neural networks ... neocortical interactions In-Reply-To: Your message of "Wed, 18 Dec 91 09:39:02 EST." <9112181439.AA08115@dweezil.umiacs.UMD.EDU> Message-ID: <21254.693080712@B.GP.CS.CMU.EDU> Lester, I really think that you need expertise with the Elan more than I have. If I were buying Canon lenses I would save up a lot of pennies and concentrate on the L lens line. The $9500 400mm f2.8 USM looks particularly nice at the moment; that's about a million pennies. More seriously, it sounds like you do not need wide angle capability. I would buy the best flash (430EZ I believe), the 55 (or is it 50?) mm macro, and the 80-200 f2.8 zoom. The macro can double for portraiture when necessary; you can bounce light with this flash and add an additional Canon macro-ringlight later. The zoom is suitable for action, but can be used for eliminating background in portraiture, or chasing kids all over the yard. It is also a very good race action lens at medium distances. Paul From Paul_Gleichauf at B.GP.CS.CMU.EDU Wed Dec 18 13:38:17 1991 From: Paul_Gleichauf at B.GP.CS.CMU.EDU (Paul_Gleichauf@B.GP.CS.CMU.EDU) Date: Wed, 18 Dec 91 13:38:17 EST Subject: Please disregard last message with my header, reply bug. Message-ID: <21528.693081497@B.GP.CS.CMU.EDU> My apologies to all. Paul From sontag at control.rutgers.edu Wed Dec 18 16:19:17 1991 From: sontag at control.rutgers.edu (sontag@control.rutgers.edu) Date: Wed, 18 Dec 91 16:19:17 EST Subject: Report available --computability with nn's Message-ID: <9112182119.AA03640@control.rutgers.edu> (Revised) Tech Report available from neuroprose: ON THE COMPUTATIONAL POWER OF NEURAL NETS Hava T. Siegelmann, Department of Computer Science Eduardo D.
Sontag, Department of Mathematics Rutgers University, New Brunswick, NJ 08903 This paper shows the Turing universality of first-order, finite neural nets. It updates the report placed there last Spring* with new results that include the simulation in LINEAR TIME of BINARY-tape machines (as opposed to the unary alphabets used in the previous version). The estimate of the number of neurons needed for universality is now lowered to 1,000 (from 100,000). *A summary of the older report appeared in: H. Siegelmann and E. Sontag, "Turing computability with neural nets," Applied Math. Letters 4 (1991): 77-80. ================ To obtain copies of the postscript file, please use Jordan Pollack's service: Example: unix> ftp archive.cis.ohio-state.edu (or ftp 128.146.8.52) Name (archive.cis.ohio-state.edu): anonymous Password (archive.cis.ohio-state.edu:anonymous): ftp> cd pub/neuroprose ftp> binary ftp> get siegelman.turing.ps.Z ftp> quit unix> uncompress siegelman.turing.ps.Z Now print "siegelman.turing.ps" as you would any other (postscript) file. From yirgan at dendrite.cs.colorado.edu Wed Dec 18 17:42:08 1991 From: yirgan at dendrite.cs.colorado.edu (Juergen Schmidhuber) Date: Wed, 18 Dec 1991 15:42:08 -0700 Subject: New TR on unsupervised learning Message-ID: <199112182242.AA04036@thalamus.cs.Colorado.EDU> LEARNING FACTORIAL CODES BY PREDICTABILITY MINIMIZATION Juergen Schmidhuber Department of Computer Science University of Colorado (Compact version of Technical Report CU-CS-565-91) ABSTRACT I present a novel general principle for unsupervised learning of distributed non-redundant internal representations of input patterns or input sequences. With a given set of representational units, each unit tries to react to the environment such that it minimizes its predictability by an adaptive predictor that sees all the other units. This encourages each unit to filter `abstract concepts' out of the environmental input such that these concepts are statistically independent of those upon which the other units focus. I discuss various simple yet potentially powerful implementations of the principle which aim at finding binary factorial codes (Barlow, 1989), i.e. codes where the probability of the occurrence of a particular input is simply the product of the probabilities of the corresponding code symbols. Unlike previous methods, the novel principle has a potential for removing not only linear but also non-linear output redundancy. Methods for finding factorial codes automatically embed Occam's razor for finding codes using a minimal number of units. Illustrative experiments show that algorithms based on the principle of predictability minimization are practically feasible. The final part of this paper describes an entirely local algorithm that has a potential for learning unique representations of extended sequences. --------------------------------------------------------------------- To obtain a copy, do: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: neuron ftp> binary ftp> cd pub/neuroprose ftp> get schmidhuber.factorial.ps.Z ftp> bye unix> uncompress schmidhuber.factorial.ps.Z unix> lpr schmidhuber.factorial.ps --------------------------------------------------------------------- There is no hardcopy mailing list. I will read my mail only occasionally during the next three weeks or so.
Jurgen From karit at spine.hut.fi Thu Dec 19 07:53:28 1991 From: karit at spine.hut.fi (Kari Torkkola) Date: Thu, 19 Dec 91 14:53:28 +0200 Subject: Public domain LVQ-programs released Message-ID: <9112191253.AA10882@spine.hut.fi.hut.fi> ************************************************************************ * * * LVQ_PAK * * * * The * * * * Learning Vector Quantization * * * * Program Package * * * * Version 1.0 (December, 1991) * * * * Prepared by the * * LVQ Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1991 * * * ************************************************************************ Public-domain programs for Learning Vector Quantization (LVQ) algorithms are available via anonymous FTP on the Internet. "What is LVQ?", you may ask --- See the following reference, then: Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464-1480, 1990. In short, LVQ is a group of methods applicable to statistical pattern recognition, in which the classes are described by a relatively small number of codebook vectors, properly placed within each class zone such that the decision borders are approximated by the nearest-neighbor rule. Unlike in normal k-nearest-neighbor (k-nn) classification, the original samples are not used as codebook vectors, but they tune the latter. LVQ is concerned with the optimal placement of these codebook vectors into class zones. This package contains all the programs necessary for the correct application of certain LVQ algorithms in an arbitrary statistical classification or pattern recognition task. To this package two particular options for the algorithms, the LVQ1 and the LVQ2.1, have been selected. This is the very first release of the package, and updates will be available as soon as bugs are found and fixed. This code is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Helsinki University of Technology. In the implementation of the LVQ programs we have tried to use as simple code as possible. Therefore the programs are supposed to compile in various machines without any specific modifications made on the code. All programs have been written in ANSI C. The programs are available in two archive formats, one for the UNIX-environment, the other for MS-DOS. Both archives contain exactly the same files. These files can be accessed via FTP as follows: 1. Create an FTP connection from wherever you are to machine "cochlea.hut.fi". The internet address of this machine is 130.233.168.48, for those who need it. 2. Log in as user "anonymous" with your own e-mail address as password. 3. Change remote directory to "/pub/lvq_pak". 4. At this point FTP should be able to get a listing of files in this directory with DIR and fetch the ones you want with GET. (The exact FTP commands you use depend on your local FTP program.) Remember to use the binary transfer mode for compressed files. 
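A minimal sketch of the LVQ1 rule outlined above may help readers decide whether the package fits their problem. It is illustrative Python only, not code from LVQ_PAK (which is written in ANSI C), and the function names, parameter values, and linearly decaying learning rate are assumptions of this sketch.

import numpy as np

def train_lvq1(codebook, codebook_labels, data, data_labels, alpha0=0.05, epochs=10):
    # LVQ1: move the nearest codebook vector toward a sample of the same
    # class and away from a sample of a different class, with a learning
    # rate that decays toward zero over training.
    m = codebook.astype(float).copy()
    total = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x, y in zip(data, data_labels):
            alpha = alpha0 * (1.0 - t / total)
            c = np.argmin(((m - x) ** 2).sum(axis=1))  # nearest codebook vector
            direction = 1.0 if codebook_labels[c] == y else -1.0
            m[c] += direction * alpha * (x - m[c])
            t += 1
    return m

def classify(codebook, codebook_labels, x):
    # Classification is the nearest-neighbour rule over the codebook vectors.
    return codebook_labels[np.argmin(((codebook - x) ** 2).sum(axis=1))]

# Tiny example with two Gaussian classes, codebook seeded from the data.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
codebook = data[[0, 1, 50, 51]].copy()
codebook_labels = labels[[0, 1, 50, 51]]
codebook = train_lvq1(codebook, codebook_labels, data, labels)
print(classify(codebook, codebook_labels, np.array([3.0, 3.0])))

The LVQ2.1 option also included in the package refines the decision borders further by updating the two nearest codebook vectors when a sample falls in a window between codebook vectors of different classes; the sketch above covers only LVQ1.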
The lvq_pak program package includes the following files: - Documentation: README short description of the package and installation instructions document.ps documentation in (c) PostScript format document.ps.Z same as above but compressed document.txt documentation in ASCII format - Source file archives (which contain the documentation, too): lvq_p1r0.exe Self-extracting MS-DOS archive file lvq_pak-1.0.tar UNIX tape archive file lvq_pak-1.0.tar.Z same as above but compressed An example of FTP access is given below unix> ftp cochlea.hut.fi (or 130.233.168.48) Name: anonymous Password: ftp> cd /pub/lvq_pak ftp> binary ftp> get lvq_pak-1.0.tar.Z ftp> quit unix> uncompress lvq_pak-1.0.tar.Z unix> tar xvfo lvq_pak-1.0.tar See file README for further installation instructions. All comments concerning this package shoud be addressed to lvq at cochlea.hut.fi. ************************************************************************ From mike at psych.ualberta.ca Thu Dec 19 22:27:42 1991 From: mike at psych.ualberta.ca (Mike R. W. Dawson) Date: Thu, 19 Dec 1991 20:27:42 -0700 Subject: Connectionism & Motion Message-ID: The following paper has recently appeared in Psychological Review, and describes how a variant of Anderson's "brainstate-in-a-box" algorithm can be used to solve a particular information processing problem faced when apparent motion is perceived. If you're interested in a reprint, please contact me at the address below. ======================================================================= Dawson, M.R.W. (1991). The how and why of what went where in apparent motion: Modeling solutions to the motion correspondence problem. Psychological Review, 98(4), 569-603. A model that is capable of maintaining the identities of individuated elements as they move is described. It solves a particular problem of underdetermination, the motion correspondence problem, by simultaneously applying three constraints: the nearest neighbour principle, the relative velocity principle, and the element integrity principle. The model generates the same correspondence solutions as does the human visual system for a variety of displays, and many of its properties are consistent with what is known about the physiological mechanisms underlying human motion perception. The model can also be viewed as a proposal of how the identities of attentional tags are maintained by visual cognition, and thus it can be differentiated from a system that serves merely to detect movement. ============================================================================== -- Michael R. W. Dawson email: mike at psych.ualberta.ca Biological Computation Project Department of Psychology University of Alberta Edmonton, Alberta Tel: +1 403 492 5175 T6G 2E9, Canada Fax: +1 403 492 1768 From btan at bluering.cowan.edu.au Fri Dec 20 01:58:33 1991 From: btan at bluering.cowan.edu.au (btan@bluering.cowan.edu.au) Date: Fri, 20 Dec 91 14:58:33 +0800 Subject: Research topic needed Message-ID: <9112200658.AA22681@bluering.cowan.edu.au> Dear Neural Gurus Currently, I am pursuing my doctorate degree in the area of Neural Networks, could any of Neural Gurus help me to identify research topic, please. Kindly email to me directly btan at cowan.edu.au and not to connectionists. Thanks in advance. Merry Christmas and a Happy New Year. Yours faithfully Boon Tan From lux at ho.isl.titech.ac.jp Fri Dec 20 23:38:01 1991 From: lux at ho.isl.titech.ac.jp (Xuenong Lu) Date: Fri, 20 Dec 91 23:38:01 JST Subject: RESUME Message-ID: <9112201438.AA06945@beat> Hello! 
This is Xuenong LU in Tokyo Japan. I am now a PhD student in Tokyo Institute of Technology, majored in Information Processing and Electrical Engineering. As I will soon graduate and get my PhD in March, 1992, I would like to ask for your help to find a proper job for me. I would be very happy if I can find a job in united states. Here I would like to give you a short inquiry about job openings. ----------------------------------------------------------------------------- 1. Name: Xuenong Lu 2. Address: Greenhouse 205#, Ibukino 52-1, Midori-ku, Yokohama, 227 JAPAN, Tel. +81-45-982-6367, Email:lux at ho.isl.titech.ac.jp 3. Status: PhD in E&E ( available from March 1992) 4. Salary required: Over 20,000$ yearly 5. Synopsis of Resume: Seeking a R&D or postdoctoral postition in optics, neural networks, information processing or computer science. Strong skills in hardware and software related UNIX, IBM-PC, NEC-PC and X-window, SUN-window and other widnow systems. Eight years of experience in programming with assembler, C, BASIC and FORTRAN. Good ability in Speaking and writing in English and Japanese. ------------------------------------------------------------------------------- Thank you very much for your great cooperation! Merry Christmas and a Happy New Year! Xuenong Lu from Tokyo Japan Dec 20, 1991 APPENDIX: +---------------+ | R E S U M E | +---------------+ 1. NAME: Xuenong Lu 2. SEX: Male 3. BIRTHDAY: January 29th, 1965 4. ORGANIZATION & STATUS: Ph.D student of Imaging Science and Engineering Laboratory, Tokyo Institute of Technology 5. CURRENT ADDRESS: Tokyo Institute of Technology (Honda Group) Imaging Science and Engineering Laboratory Nagatsuta 4259, Midori-ku, Yokohama 227, Japan Tel: +81-45-922-1111 ex. 2083 Fax: +81-45-921-1492 E-mail: lux at ho.isl.titech.ac.jp 6. PERMANENT ADDRESS Greenhouse Room 205, Ibukino 52-1, Midori-ku, Yokohama 227, Japan TEL.+81-45-982-6367 7. EDUCATION BACKGROUND +-------+---+-----------------------------------+-------------+---------------+ | Date |yrs| University and Location | Major | Degree | +-------+---+-----------------------------------+-------------+---------------+ |1982.9-| 4 | Zhejiang University, Hangzhou, | Optical | BS of | |1986.9 | | Zhejiang, China | Engineering | Engineering | +-------+---+-----------------------------------+-------------+---------------+ |1986.9-| 2 | Tsinghua Univeristy, | Information | MS of | |1988.9 | | Beijing, China | Processing | E & E | +-------+---+-----------------------------------+-------------+---------------+ |1989.4-| 3 | Tokyo Institute of Technology | Electrical | PhD of | |1992.3 | | Tokyo, Japan | Engineering | E & E | +-------+---+-----------------------------------+-------------+---------------+ 8. LANGUAGE BACKGROUND (1) English: Have a very good command of it, not only in reading and listening but also in speaking and writing. ( 8 yrs learning) (2) Japanese: Have a very good command of it, can read, speak, listen and write in Japanese with no problem. ( 6 yrs learning ) (3) Chinese: Native language 9. 
SCHOLARSHIPS & AWARDS (1) Scholarship of Japanese Government ( The Ministry of Education, Science and Culture) from April 1989 to March 1992 in Tokyo Institute of Technology ( Tokyo, Japan) (2) Scholarship for undergraduate students from Chinese Government (Ministry of Education) from September 1982 to August 1986 in Zhejiang University (Hangzhou China) (3) Regarded as one of the Best Ten Outstanding Graduate Students in Zhejiang University, China (among 2,000 graduate students, August 1986, Hangzhou China) 10. FIELDS OF INTEREST (1) Information Processing: New methods to process information by either optical method or computer simulation: as image processing, optical computing systems, optical neural networks, pattern recognition, optical disk, etc (2) Neural Networks: the structure of neural networks, the hardware implementation of neural networks, artificial intelligence, the study of brain and the application of neural networks, the design of neural networks, new computer architecture by neural networks (3) Computer science: digital image processing, medical image processing, computer graphics, software development, database applications, etc 11. PUBLICATIONS & PROCEEDINGS (1) X. Lu, M. Yamaguchi, N. Ohyama, T. Honda, M. Oita, S. Tai, and K. Kyuma, " The optical implementation of the intelligent associative6 memory system", Optics Communications, ( to be published). (2) X. Lu, T. Honda, N. Ohyama, M. Wu, and G. Jin, "Optical configurations for solving equations using nonlinear etalons", Japanese Journal of Applied Physics, vol.29, No.10, pp.L1836-L1839(October 1990) (3) X. Lu, Y. Wang, M. Wu, and G. Jin, " The fabrication of a 25X25 multiple beam splitter", Optics Communications, Vol.72, No.3,4, pp.157-162 (July 1989) (4) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, J. Ohta, and K. Kyuma, " Re-association with logical constraints in Hopfield-type associative memory", (submitted to Optics Communications) (5) H. Asuma, X. Lu, T. Honda, and N. Ohyama, "The phase modulation features of liquid crystal panel", Japanese Journal of Optics, vol.20, No.2, pp.98-102 (1991) - Japanese (6) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, J. Ohta, and K. Kyuma, "Development of an intelligent optical associative memory system", Technical Digest of OSA Annual Meeting 1991, November 3-8, 1991, San Jose, USA, MII5, pp.39 (7) X. Lu, T. Honda, N. Ohyama, M. Wu, and G. Jin, "Digital optical computing model for solving equations", Proc. of 1990 International Topical Meeting on Optical Computing, April 8-12, 1990, Kobe, Japan, 10D1, pp.197-1984 (8) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, J. Ohta, and K. Kyuma, "The development of intelligent associative memory system (I)", Proceedings of the 22nd Joint Conference on Imaging Technology, 10-2. (9) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, S. Tai, and K. Kyuma, "The performance of intelligent associative memory", Extended Abstracts (The 38th Spring Meeting, 1991), The Japan Society of Applied Physics and Related Societies, pp.864, 31p-A-10 (10) X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, M. Oita, J. Ohta, and K. Kyuma, " A proposal of intelligent associative memory system", Extended Abstracts (The 51st Autumn Meeting, 1990), The Japan Society of Applied Physics and Related Societies, pp.807, 28a-H-2 (11) H. Asuma, X. Lu, T. Honda, N. Okumura, T. Sonehara, and N. 
Ohyama, "The phase modulation feature of liquid crystal", Extended Abstracts (The 37 Spring Meeting, 1990), The Japan Society of Applied Physics and Related Societies, pp.770, 29p-D-10 (12) M. Yamaguchi, X. Lu, N. Ohyama, T. Honda, M. Oita, J. Ohta, and K. Kyuma, " The creativity of neural network", Extended Abstracts (The 52 Autumn Meeting, 1991), The Japan Society of Applied Physics and Related Societies, pp.818, 9a-ZH-1 (13) M. Oita, J. Ohta, and K. Kyuma, X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, "A proposal of intelligent associative memory (2)---the optical neural network", Extended Abstracts (The 38th Spring Meeting, 1991),The Japan Society of Applied Physics and Related Societies, pp.863,31p-A-9 (14) M. Oita, J. Ohta, and K. Kyuma, X. Lu, N. Ohyama, M. Yamaguchi, T. Honda, "A proposal of intelligent associative memory (3) --- quantized training rule ", Extended Abstracts (The 52 Autumn Meeting, 1991), The Japan Society of Applied Physics and Related Societies, pp.820, 9a-ZH-7 From D4PBJSS0%EB0UB011.BITNET at BITNET.CC.CMU.EDU Fri Dec 20 18:13:03 1991 From: D4PBJSS0%EB0UB011.BITNET at BITNET.CC.CMU.EDU (Josep M Sopena) Date: Fri, 20 Dec 91 18:13:03 HOE Subject: Parsing embedded sentences ... Message-ID: <01GEC46ZDNXC9PM18C@BITNET.CC.CMU.EDU> The following paper is now available. To obtain a copy send a message to "d4pbjss0 at e0ub011.bitnet". ESRP: A DISTRIBUTED CONNECTIONIST PARSER THAT USES EMBEDDED SEQUENCES TO REPRESENT STRUCTURE Josep M Sopena Departament de Psicologia Basica Universitat de Barcelona In this paper we present a neural network that is able to compute a certain type of structure, that among other things allows it to adequately assign thematic roles, and find the antecedents of the traces, pro, PRO, anaphoras, pronouns, etc. for an extensive variety of syntactic structures. Up until now, the type of sentences that the network has been able to parse include: 1. 'That' sentences with several levels of embedding. John says that Mary thought that Peter was ill. 2.- Passive sentences. 3.- Relative sentences with several levels of embedding (center embedded). John loved the girl that the carpenter who the builder hated was seeing. The man that bought the car that Peter wanted was crazy. The man the woman the boy hates loves is running. 4.-Syntactic ambiguity in the attachment of PP's John saw a woman with a handbag with binoculars. 5.- Combinations of these four types of sentences: John bought the car that Peter thought the woman with a handbag wanted. The input consists of the sentence presented word by word. The patterns in the output represent the structure of the sentence. The structure is not represented by a static pattern but by a temporal course of patterns. This evolution of the output is based on different types of psychological evidence, and is as follows: the output is a sequence of simple semantic predicates (although it could be thought of in a more syntactical way). An element of the output sequence consists only of a single predicate, which always has to be complete. Since there are often omitted elements within the clauses (eg. Traces, PRO, pro etc.) the network retrieves these elements in order to complete the current predicate. These two mechanisms, segmentation into simple predicates and retreival of previously processed elements, are those which allow structure to be computed. 
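As a purely illustrative aside on the output representation just described -- a temporal sequence of simple, complete predicates in which elements omitted from the input (traces, PRO, etc.) are restored -- the following Python sketch hand-codes such a target sequence for one sentence. The Predicate class, the role labels, and the example sentence are invented for illustration and are not ESRP's actual encoding.

    # Hypothetical illustration (not the ESRP encoding itself): a sentence
    # unfolds into a sequence of complete simple predicates, one per step,
    # with the omitted relativized object restored in the second predicate.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Predicate:
        head: str               # verb of the simple clause
        roles: Dict[str, str]   # thematic roles, every slot filled

    def target_sequence(sentence: str) -> List[Predicate]:
        """Hand-coded target for one example; a trained network would be
        expected to emit one such completed predicate per output step."""
        assert sentence == "John bought the car that Peter wanted"
        return [
            Predicate("bought", {"agent": "John", "patient": "car"}),
            # 'wanted' has no overt object in the input (a trace); the
            # completed predicate restores 'car' as its patient:
            Predicate("wanted", {"agent": "Peter", "patient": "car"}),
        ]

    for step, pred in enumerate(target_sequence("John bought the car that Peter wanted"), 1):
        print(step, pred.head, pred.roles)

Because each element of the sequence must be a complete predicate, an embedded clause simply contributes its own completed predicates to the sequence, which is what lets embedded structure be represented as an embedded sequence.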
In this way the structure is not conceived solely as a linear sequence of simple predicates, because using these mechanisms it is possible to form embedded sequences (embedded structures). The paper also includes empirical evidence that supports the model as a plausible psychological model.

The NN is formed by two parallel modules that share all of the output and part of the input. The first module is a standard Elman network that maps the elements in the input to their predicate representation in the output and assigns the corresponding semantic roles. The second module is a modified Elman network with two hidden layers. The units of the first hidden layer (which is the copied layer) have a linear activation function. This type of network has a much greater short-term memory capacity than a standard Elman network. It stores the sequence of predicates and retrieves the elements of the current predicate omitted in the input (traces, PRO, etc.) and the referents of pronouns and anaphoras. When a pronoun or an anaphora appears in the input, the corresponding antecedent in the sentence, which has been retrieved from this second module, is placed in the output. This module also allows the network to build embedded sequences by retrieving former elements of the sequence.

The two modules were trained simultaneously. There were no manipulations other than the changes of inputs and targets, as in the standard backpropagation algorithm. The network was trained with 3000 sentences built from a starting vocabulary of 1000 words. The number of sentences that it is possible to build from this vocabulary is 10^15. Generalization was completely successful for a test set of 800 sentences representing the variety of syntactic patterns of the training set.

The model bears some relationship to the idea of representing structure not only in space but in time as well (Hinton, 1989) and to the RAAM networks of Pollack (1989). The shortcomings of this type of network are also discussed.

From saarinen at csrd.uiuc.edu Fri Dec 20 12:55:44 1991 From: saarinen at csrd.uiuc.edu (Sirpa Saarinen) Date: Fri, 20 Dec 91 11:55:44 CST Subject: Ill-conditioning in NNs (Tech. Rep.) Message-ID: <9112201755.AA06158@sp1.csrd.uiuc.edu>

Technical report available: CSRD Report no. 1089

Ill-Conditioning in Neural Network Training Problems
S. Saarinen, R. Bramley and G. Cybenko
Center for Supercomputing Research and Development, University of Illinois, Urbana, IL, USA 61801

Abstract

The training problem for feedforward neural networks is a nonlinear parameter estimation problem that can be solved by a variety of optimization techniques. Much of the literature on neural networks has focused on variants of gradient descent. The training of neural networks using such techniques is known to be a slow process, with more sophisticated techniques not always performing significantly better. In this paper, we show that feedforward neural networks can have ill-conditioned Hessians and that this ill-conditioning can be quite common. The analysis and experimental results in this paper lead to the conclusion that many network training problems are ill-conditioned and may not be solved more efficiently by higher-order optimization methods. While our analyses are for completely connected layered networks, they extend to networks with sparse connectivity as well.
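To make the notion of an ill-conditioned Hessian concrete, here is a minimal numerical sketch; the tiny 1-2-1 sigmoid network, the data, and the finite-difference step below are illustrative assumptions and are not taken from the report. The quantity printed at the end, the ratio of the largest to the smallest Hessian eigenvalue magnitude, is what "ill-conditioned" refers to, and it is typically very large even for a network this small.

    # Illustrative only: estimate the Hessian of a squared-error loss for a
    # 1-2-1 sigmoid network by central finite differences and report its
    # eigenvalue spread (condition number).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(20, 1))
    Y = np.sin(3 * X[:, 0])                  # arbitrary smooth target

    def loss(w):
        W1, b1 = w[0:2].reshape(2, 1), w[2:4]
        W2, b2 = w[4:6], w[6]
        h = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))   # hidden activations (20, 2)
        out = h @ W2 + b2                            # network outputs (20,)
        return 0.5 * np.mean((out - Y) ** 2)

    w0 = rng.normal(scale=0.5, size=7)               # a typical small random point
    eps, n = 1e-4, 7
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (loss(w0 + eps*I[i] + eps*I[j]) - loss(w0 + eps*I[i] - eps*I[j])
                       - loss(w0 - eps*I[i] + eps*I[j]) + loss(w0 - eps*I[i] - eps*I[j])) / (4 * eps**2)

    eig = np.linalg.eigvalsh((H + H.T) / 2)          # symmetrize before eigendecomposition
    print("eigenvalues:", eig)
    print("condition number estimate:", np.abs(eig).max() / max(np.abs(eig).min(), 1e-16))

A large condition number means that curvature differs enormously between directions in weight space, so gradient descent must take steps sized for the steepest direction while crawling along the flattest ones; the report's point is that this situation is common and limits what even higher-order methods can gain.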
Our results suggest that neural networks can have considerable redundancy in parameterizing the function space in a neighborhood of a local minimum, independently of whether or not the solution has a small residual.

If you wish to have this report, please write to nichols at csrd.uiuc.edu and ask for report 1089.

Merry Christmas,
Sirpa Saarinen
saarinen at csrd.uiuc.edu

From back at s1.elec.uq.oz.au Mon Dec 23 14:37:37 1991 From: back at s1.elec.uq.oz.au (Andrew Back) Date: Mon, 23 Dec 91 14:37:37 EST Subject: Mackey-Glass Message-ID: <9112230337.AA07687@c14.elec.uq.oz.au>

Can someone let me know if the Mackey-Glass chaotic time series has been modelled by a fully recurrent network (such as Williams and Zipser's structure)? I would like any references or performance results. Any help much appreciated. Please e-mail replies directly to: back at s1.elec.uq.oz.au

Thanks,
Andrew Back
----
Department of Electrical Engineering
University of Queensland
St. Lucia. 4072. Australia

From M160%eurokom.ie at BITNET.CC.CMU.EDU Mon Dec 23 11:17:00 1991 From: M160%eurokom.ie at BITNET.CC.CMU.EDU (Ronan Reilly ERC) Date: Mon, 23 Dec 1991 11:17 CET Subject: A connectionist parsing technique Message-ID: <9112231117.169315@eurokom.ie>

Below is the abstract of a paper that is due to appear in the journal Network early next year. A compressed PostScript version (reilly.parser.ps.Z) has been placed in Jordan Pollack's Neuroprose archive at Ohio State and can be retrieved in the usual way. Requests for hardcopy should be sent to: ronan_reilly at eurokom.ie

Season's greetings,
Ronan

==========================================

A Connectionist Technique for On-Line Parsing

Ronan Reilly
Educational Research Centre
St Patrick's College, Dublin 9

A technique is described that permits the on-line construction and dynamic modification of parse trees during the processing of sentence-like input. The approach is a combination of a simple recurrent network (SRN) and a recursive auto-associative memory (RAAM). The parsing technique involves teaching the SRN to build RAAM representations as it processes its input item-by-item. The approach is a potential component of a larger connectionist natural language processing system, and could also be used as a tool in the cognitive modelling of language understanding. Unfortunately, the modified SRN demonstrates a limited capacity for generalisation.

==========================================

From gordon at AIC.NRL.Navy.Mil Mon Dec 23 12:19:21 1991 From: gordon at AIC.NRL.Navy.Mil (gordon@AIC.NRL.Navy.Mil) Date: Mon, 23 Dec 91 12:19:21 EST Subject: workshop announcement Message-ID: <9112231719.AA19520@sun25.aic.nrl.navy.mil>

CALL FOR PAPERS

Informal Workshop on "Biases in Inductive Learning"
To be held after the 1992 Machine Learning Conference
Saturday, July 4, 1992
Aberdeen, Scotland

All aspects of an inductive learning system can bias the learning process. Researchers to date have studied various biases in inductive learning such as algorithms, representations, background knowledge, and instance orders. The focus of this workshop is not to examine these biases in isolation. Instead, this workshop will examine how these biases influence each other and how they influence learning performance. For example, how can active selection of instances in concept learning influence PAC convergence? How might a domain theory affect an inductive learning algorithm? How does the choice of representational bias in a learner influence its algorithmic bias and vice versa?
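One of the biases named above, instance order, can be demonstrated in a few lines. The example below is invented for illustration and is not from the workshop announcement: a mistake-driven perceptron given a single pass over the same four training points ends up with different weight vectors depending only on the order of presentation.

    # Illustrative only: instance order as an inductive bias.  The same data,
    # presented in two different orders to a one-pass perceptron, leaves two
    # different final hypotheses.
    import numpy as np

    X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -0.5]])
    y = np.array([1, 1, -1, -1])

    def one_pass_perceptron(order, lr=1.0):
        w, b = np.zeros(2), 0.0
        for i in order:
            if y[i] * (X[i] @ w + b) <= 0:   # update only on a mistake
                w += lr * y[i] * X[i]
                b += lr * y[i]
        return w, b

    print(one_pass_perceptron([0, 1, 2, 3]))   # one hypothesis
    print(one_pass_perceptron([3, 2, 1, 0]))   # same data, different hypothesis

Both final hypotheses are consistent with the data seen so far; which one the learner ends up holding is determined purely by the presentation order, which is exactly the sense in which instance order acts as a bias.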
The purpose of this workshop is to draw researchers from diverse areas to discuss the issue of biases in inductive learning. The workshop topic is a unifying theme for researchers working in the areas of reformulation, constructive induction, inverse resolution, PAC learning, EBL-SBL learning, and other areas. This workshop does not encourage papers describing system comparisons. Instead, the workshop encourages papers on the following topics:

- Empirical and analytical studies comparing different biases in inductive learning and their quantitative and qualitative influence on each other or on learning performance
- Studies of methods for dynamically adjusting biases, with a focus on the impact of these adjustments on other biases and on learning performance
- Analyses of why certain biases are more suitable for particular applications of inductive learning
- Issues that arise when integrating new biases into an existing inductive learning system
- Theory of inductive bias

Please send 4 hard copies of a paper (10-15 double-spaced pages, 12-point font) or (if you wish to attend, but not present a paper) a description of your current research to:

Diana Gordon
Naval Research Laboratory, Code 5510
4555 Overlook Ave. S.W.
Washington, D.C. 20375-5000 USA

Email submissions to gordon at aic.nrl.navy.mil are also acceptable, but they must be in PostScript. FAX submissions will not be accepted. If you have any questions about the workshop, please send email to Diana Gordon at gordon at aic.nrl.navy.mil or call 202-767-2686.

Important Dates:
March 12 - Papers and research descriptions due
May 1 - Acceptance notification
June 1 - Final version of papers due

Program Committee:
Diana Gordon, Naval Research Laboratory
Dennis Kibler, University of California at Irvine
Larry Rendell, University of Illinois
Jude Shavlik, University of Wisconsin
William Spears, Naval Research Laboratory
Devika Subramanian, Cornell University
Paul Vitanyi, CWI and University of Amsterdam

From wray at ptolemy.arc.nasa.gov Mon Dec 23 13:57:28 1991 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Mon, 23 Dec 91 10:57:28 PST Subject: bayesian methods for back-propagation, update Message-ID: <9112231857.AA05676@ptolemy.arc.nasa.gov>

To appear in the December 1991 issue of {\it Complex Systems}. An early draft appeared in the Neuroprose archive in July 1991, and was available as NASA Ames AI Research Branch TR FIA-91-22. The new version is considerably improved and contains new, updated and corrected material. A limited number of reprints are available by writing to Wray Buntine. (Andreas is currently on holidays!) Please only request one if your library doesn't get the Complex Systems journal.

PS. Distributing the first draft via connectionists allowed us to get all sorts of helpful feedback on the early version!

-------------

Bayesian Back-Propagation

Wray L. Buntine                        Andreas S. Weigend
wray at ptolemy.arc.nasa.gov          andreas at psych.stanford.edu
RIACS & NASA Ames Research Center      Xerox Palo Alto Research Center
Mail Stop 269-2                        3333 Coyote Hill Rd.
Moffett Field, CA 94035, USA           Palo Alto, CA, 94304, USA

Connectionist feed-forward networks, trained with back-propagation, can be used both for non-linear regression and for (discrete one-of-$C$) classification.
This paper presents approximate Bayesian methods to statistical components of back-propagation: choosing a cost function and penalty term (interpreted as a form of prior probability), pruning insignificant weights, estimating the uncertainty of weights, predicting for new patterns (``out-of-sample''), estimating the uncertainty in the choice of this prediction (``error bars''), estimating the generalization error, comparing different network structures, and handling missing values in the training patterns. These methods extend some heuristic techniques suggested in the literature, and in most cases require a small additional factor in computation during back-propagation, or computation once back-propagation has finished. From Michael_Berthold at NL.CS.CMU.EDU Tue Dec 24 18:26:23 1991 From: Michael_Berthold at NL.CS.CMU.EDU (Michael_Berthold@NL.CS.CMU.EDU) Date: Tue, 24 Dec 91 18:26:23 EST Subject: TR available In-Reply-To: Your message of "Tue, 17 Dec 91 16:11:23 EST." Message-ID: <1338.693617183@NL.CS.CMU.EDU> I'm interested in your paper and would be glad if you can send a copy to me. Thanks, Michael Berthold Center for Machine Translation Carnegie Mellon University Smith Hall 106 5000 Forbes Ave. Pittsburgh, PA 15213 From Michael_Berthold at NL.CS.CMU.EDU Wed Dec 25 13:54:12 1991 From: Michael_Berthold at NL.CS.CMU.EDU (Michael_Berthold@NL.CS.CMU.EDU) Date: Wed, 25 Dec 91 13:54:12 EST Subject: Uoups, Sorry ! Message-ID: <3514.693687252@NL.CS.CMU.EDU> From gluck at pavlov.Rutgers.EDU Wed Dec 25 18:40:03 1991 From: gluck at pavlov.Rutgers.EDU (Mark Gluck) Date: Wed, 25 Dec 91 18:40:03 EST Subject: Graduate & Postdoctoral Study in Cognitive & Neural Bases of Learning at Rutgers Univ. Message-ID: <9112252340.AA01648@pavlov.rutgers.edu> -> Please Post or Distribute Graduate and Postdoctoral Training in the: COGNITIVE & NEURAL BASES OF LEARNING at the Center for Molecular & Behavioral Neuroscience Rutgers University; Newark, NJ Graduate and postdoctoral positions are available for those interested in joining our lab to pursue research and training in the cognitive and neural bases of learning and memory, with a special emphasis on computational neural-network theories of learning. Current research topics include: * Empirical and Computational Studies of Human Learning Experimental research involves studies of human learning and judgment -- especially classification learning -- motivated by a desire to evaluate adaptive network theories of human learning and better understand the relationship between animal and human learning. Theoretical (computational) work seeks to develop and extend adaptive network models of learning to more accurately reflect a wider range of animal and human learning behaviors. Applications of these behavioral models to analyses of the neural bases of animal and human learning are of particular interest. * Computational Models of the Neurobiology of Learning & Memory Understanding the neural bases of learning through computational models of neural circuits and systems, especially the cerebellar and hippocampal areas involved in classical Pavlovian conditioning of motor-reflex learning, is our primary goal. Related work seeks to understand hippocampal function in a wider range of animal and human learning behaviors. ______________________________________________________________________ Other Information: RESEARCH FACILITIES: A new center for graduate study and research in cognitive, behavioral, and molecular neuroscience. 
The program emphasizes interdisciplinary and integrative analyses of brain and behavior. Located in the new Aidekman Neuroscience Research Center, the C.M.B.N. has state-of-the-art communication facilities, computers, offices, and laboratories. LOCATION: Newark, New Jersey: 20 minutes from Manhattan but also close to rural New Jersey countryside. Other nearby universities and industry research labs with related research programs include: Rutgers (New Brunswick), NYU, Princeton, Columbia, Siemens, NEC, AT&T, Bellcore, and IBM. CURRENT FACULTY: Elizabeth Abercrombie, Gyorgi Buzsaki, Ian Creese, Mark Gluck, Howard Poizner, Margaret Shiffrar, Ralph Siegel, Paula Tallal, and James Tepper. Five additional faculty will be hired. SUPPORT: The Center has 10 state-funded postdoctoral positions with additional positions funded from grants and fellowships. The graduate program is research-oriented and leads to a Ph.D. in Behavioral and Neural Sciences; all students are fully funded. SELECTION CRITERIA & PREREQUISITES: Candidates with any (or all) of the following skills are encouraged to apply: (1) familiarity with neural-network theories and algorithms, (2) strong computational and analytic skills, and (3) experience with experimental methods in cognitive psychology. Evidence of prior research ability and strong writing skills are critical. ______________________________________________________________________ For more information on graduate or postdoctoral training in learning and memory at CMBN/Rutgers, please send a letter with a statement of your research and career interests, and a resume (C.V.), to: Dr. Mark A. Gluck Phone: (201) 648-1080 (x3221) Center for Molecular & Behavioral Neuroscience Rutgers University 197 University Ave. Newark, New Jersey 07102 Email: gluck at pavlov.rutgers.edu From Prahlad.Gupta at K.GP.CS.CMU.EDU Thu Dec 26 14:43:41 1991 From: Prahlad.Gupta at K.GP.CS.CMU.EDU (Prahlad.Gupta@K.GP.CS.CMU.EDU) Date: Thu, 26 Dec 91 14:43:41 EST Subject: Paper available in Neuroprose Message-ID: The following paper has been placed in the Neuroprose archive, as the file gupta.stress.ps.Z Comments are invited. Retrieval instructions follow the abstract below. Thanks to Jordan Pollack for making this facility available. -- Prahlad ------------------------------------------------------------------------- =========================================================== CONNECTIONIST MODELS & LINGUISTIC THEORY: INVESTIGATIONS OF STRESS SYSTEMS IN LANGUAGE PRAHLAD GUPTA DAVID S. TOURETZKY -------------------------- -------------------------- Dept. of Psychology School of Computer Science Carnegie Mellon University Carnegie Mellon University Pittsburgh, PA 15213 Pittsburgh, PA 15213 prahlad at cs.cmu.edu dst at cs.cmu.edu ============================================================ Abstract -------- This work describes the use of connectionist techniques to model the learning and assignment of linguistic stress. Our aim was to explore the ability of a simple perceptron to model the assignment of stress in individual words, and to consider, in light of this study, the relationship between the connectionist and theoretical linguistics approaches to investigating language. 
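For readers unfamiliar with the setup, the following is a minimal sketch of the kind of mapping being referred to: a single-layer perceptron trained to place stress in short words. The heavy/light syllable coding, the invented stress rule, and the plain perceptron learning rule below are assumptions made for illustration; they are not the representation or training regime used in the paper.

    # Toy sketch, not the authors' model: one linear unit per syllable position
    # learns an invented rule ("stress the leftmost heavy syllable, otherwise
    # the first") for three-syllable words coded by a heavy/light bit per syllable.
    import itertools
    import numpy as np

    words = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)

    def stress_target(word):
        heavies = [i for i, h in enumerate(word) if h == 1]
        pos = heavies[0] if heavies else 0
        t = np.zeros(3)
        t[pos] = 1.0
        return t

    T = np.array([stress_target(w) for w in words])
    W, b = np.zeros((3, 3)), np.zeros(3)

    for _ in range(100):                       # perceptron rule, unit by unit
        for x, t in zip(words, T):
            pred = ((W @ x + b) > 0).astype(float)
            W += np.outer(t - pred, x)
            b += t - pred

    for x in words:
        print(x, "-> stress on syllable", int(np.argmax(W @ x + b)) + 1)

It is the learning times and final weight patterns of runs like this, gathered across different stress systems, that the study compares against the constructs of linguistic theory.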
We first point out some interesting parallels between aspects of the model and the constructs and predictions of Metrical Phonology, the linguistic theory of stress: (1) the distribution of learning times obtained from perceptron experiments corresponds with theoretical predictions of "markedness," and (2) the weight patterns developed by perceptron learning bear a suggestive *structural* relationship to features of the linguistic analysis, particularly with regard to "iteration" and "metrical feet".

We use the connectionist learning data to develop an analysis of linguistic stress based on perceptron-learnability. We develop a novel characterization of stress systems in terms of six parameters. These provide both a partial description of the stress pattern itself and a prediction of its learnability, without invoking abstract theoretical constructs such as "metrical feet." Our parameters encode linguistically salient concepts as well as concepts that have computational significance.

These two sets of results suggest that simple connectionist learning techniques have the potential to complement, and provide computational validation for, abstract theoretical investigations of linguistic domains. We then examine why such methodologies should be of interest for linguistic theorizing. Our analysis began at a high level by observing inherent characteristics of various stress systems, much as theoretical linguistics does. However, our explanations changed substantially when we included a detailed account of the model's processing mechanisms. Our higher-level, theoretical account of stress was revealed as only an *approximation* to the lower-level computational account. Without the ability to open up the black boxes of the human processor, linguistic analyses are arguably analogous to our higher-level descriptions. This highlights the need for *computational grounding* of theory-building. In addition, we suggest that there are methodological problems underlying parameter-based approaches to learnability. These problems make it all the more important to seek sources of converging evidence such as is provided by computational models.

-------------------------------------------------------------------------

To retrieve the paper by anonymous ftp:

    unix> ftp archive.cis.ohio-state.edu   # (128.146.8.52)
    Name: anonymous
    Password: neuron
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get gupta.stress.ps.Z
    ftp> quit
    unix> uncompress gupta.stress.ps.Z
    unix> lpr gupta.stress.ps

-------------------------------------------------------------------------

From mauduit at ece.UCSD.EDU Thu Dec 26 17:47:13 1991 From: mauduit at ece.UCSD.EDU (Nicolas Mauduit) Date: Thu, 26 Dec 91 14:47:13 PST Subject: paper mauduit.lneuro.ps.Z available at archive.cis.ohio-state.edu Message-ID: <9112262247.AA09544@celece>

The preprint of the following paper, to appear in the IEEE Neural Networks March 92 special issue on hardware, is available by ftp from the neuroprose archive at archive.cis.ohio-state.edu (file mauduit.lneuro.ps.Z):

Lneuro 1.0: a piece of hardware LEGO for building neural network systems
(to appear in IEEE Neural Networks, March 92 special issue on hardware)

by

Nicolas MAUDUIT
UCSD, dept. ECE, EBU1
La Jolla, CA 92093-0407 USA

Marc DURANTON            LEP, div. 21
Jean GOBERT              B.P. 15, 22, avenue Descartes
Jacques-Ariel SIRAT      94453 Limeil Brevannes France

Abstract:

The state of our experiments on neural network simulations on a parallel architecture is presented here.
A digital architecture was selected, scalable and flexible enough to be useful for simulating various kinds of networks and paradigms. The computing device is based on an existing coarse-grain parallel framework (INMOS Transputers), enhanced with finer-grain parallel capabilities through VLSI chips called the Lneuro 1.0, for LEP neuromimetic circuit. The modular architecture of the circuit makes it possible to build various kinds of boards to match the foreseen range of applications, or to increase the power of the system by adding more hardware. The resulting machine remains to some extent reconfigurable for a specific problem, at the system level through the Transputer framework as well as at the circuit level. A small-scale machine has been realized using 16 Lneuros arranged in clusters composed of 4 circuits and a controller, to experimentally test the behaviour of this architecture (the communication, control, primitives required, etc.). Results are presented on an integer version of Kohonen feature maps. The speedup factor increases regularly with the number of clusters involved (up to a factor of 80). Some ways to improve this family of neural network simulation machines are also investigated.

--------------------------------------------------------------------------------

The file can be obtained the usual way:

    unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
    Name: anonymous
    Password: ...
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get mauduit.lneuro.ps.Z
    ftp> quit
    unix> uncompress mauduit.lneuro.ps.Z

Then print the file mauduit.lneuro.ps on a PostScript printer.

Nicolas Mauduit
----------------------------------------------------------------
Nicolas Mauduit, Dept ECE     | Phone (619) 534 6026
UCSD EBU 1                    | FAX   (619) 534 1225
San Diego CA 92093-0407 USA   | Email mauduit at celece.ucsd.edu
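As a closing illustration related to the Lneuro posting above: the phrase "an integer version of Kohonen feature maps" refers to carrying out the self-organizing-map update in fixed-point arithmetic. The sketch below is an invented example of such an update; the map size, scale factor, neighbourhood, learning rate, and data are all assumptions, and this is not the Lneuro implementation.

    # Illustrative only: a Kohonen feature-map training step done entirely in
    # integer (fixed-point) arithmetic, with weights scaled by 256.
    import numpy as np

    rng = np.random.default_rng(1)
    SCALE = 256                                           # 8 fractional bits
    weights = rng.integers(0, 2 * SCALE, size=(8, 8, 2))  # 8x8 map, 2-D inputs

    def train_step(x, lr_num=1, lr_den=4, radius=1):
        xq = np.round(x * SCALE).astype(np.int64)         # quantize the input
        d = ((weights - xq) ** 2).sum(axis=-1)            # integer squared distances
        bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        for i in range(max(0, bi - radius), min(8, bi + radius + 1)):
            for j in range(max(0, bj - radius), min(8, bj + radius + 1)):
                # w += (x - w) * lr, with lr = lr_num / lr_den, in integers
                weights[i, j] += (xq - weights[i, j]) * lr_num // lr_den
        return bi, bj

    for _ in range(1000):
        train_step(rng.uniform(0, 2, size=2))
    print(weights[:2, :2])                                # a corner of the trained map

Working with scaled integers like this is what makes the update cheap to implement in digital hardware such as the Lneuro chips, at the cost of quantization effects.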