From Connectionists-Request at CS.CMU.EDU Wed May 1 00:05:22 1991 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Wed, 01 May 91 00:05:22 EDT Subject: Bi-monthly Reminder Message-ID: <4685.673070722@B.GP.CS.CMU.EDU> This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. Happy hacking. -- Dave Touretzky & Scott Crowder --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject lately. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, and found the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new text books related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster. 
- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. - Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe. - Limericks should not be posted here. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU (Internet address 128.2.242.8). 2. Login as user anonymous with password your username. 3. 'cd' directly to one of the following directories: /usr/connect/connectionists/archives /usr/connect/connectionists/bibliographies 4. The archives and bibliographies directories are the ONLY ones you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into one of these two directories. Access will be denied to any others, including their parent directory. 5. The archives subdirectory contains back issues of the mailing list. Some bibliographies are in the bibliographies subdirectory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- How to FTP Files from the Neuroprose Archive -------------------------------------------- Anonymous FTP on cheops.cis.ohio-state.edu (128.146.8.62) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community. Researchers may place electronic versions of their preprints or articles in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype[.Z] where title is enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. Very large files (e.g. over 200k) must be squashed (with either a sigmoid function :) or the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transfering files. 
After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is attached as an appendix, and a shell script called Getps in the directory can perform the necessary retrieval operations. For further questions contact: Jordan Pollack Email: pollack at cis.ohio-state.edu Here is an example of naming and placing a file: gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps gvax> compress rosenblatt.reborn.ps gvax> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put rosenblatt.reborn.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. The INDEX sentence is "Boastful statements by the deceased leader of the neurocomputing field." Please let me know when it is ready to announce to Connectionists at cmu. BTW, I enjoyed reading your review of the new edition of Perceptrons! Frank ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "nn-bench-request at cs.cmu.edu". From peterc at chaos.cs.brandeis.edu Wed May 1 03:03:34 1991 From: peterc at chaos.cs.brandeis.edu (Peter Cariani) Date: Wed, 1 May 91 03:03:34 edt Subject: THRESHOLDS AND SUPEREXCITABILITY In-Reply-To: Lyle J. Borg-Graham's message of Wed, 24 Apr 91 12:26:52 EDT <9104241626.AA28990@wheat-chex> Message-ID: <9105010703.AA06223@chaos.cs.brandeis.edu> Dear Lyle, Although Raymond cites a fairly large number of empirical observations in many different types of systems, and the basic channel types participating in these systems are pretty much ubiquitous, I am not about to claim that a simple model accounts for the potentially very rich temporal dynamics of all neurons. Obviously there are many types of responses possible, but there seems to be a lack of functional models which utilize the temporal dynamics of single neurons (beyond synaptic delay & refractory period) to do information processing. Can Raymond's threshold results be accounted for via current models? I haven't seen any models where the afterpotentials have amplitudes sufficient to reduce the effective threshold by 50-70%, and I have yet to see this oscillatory behavior developed into a theory of coding (by interspike interval), except in the Lettvin papers I cited. In defense of simple models, they are often useful in developing the broader functional implications of a style of information processing.
If most neurons have (potentially) complex temporal characteristics, then we'd better work on our coupled oscillator models and maybe some adaptively tuned oscillator models if we are going to make any sense of it all. Peter Cariani From dave at cogsci.indiana.edu Thu May 2 00:05:31 1991 From: dave at cogsci.indiana.edu (David Chalmers) Date: Wed, 1 May 91 23:05:31 EST Subject: Technical Report available: High-Level Perception Message-ID: The following paper is available electronically from the Center for Research on Concepts and Cognition at Indiana University. HIGH-LEVEL PERCEPTION, REPRESENTATION, AND ANALOGY: A CRITIQUE OF ARTIFICIAL INTELLIGENCE METHODOLOGY David J. Chalmers, Robert M. French, and Douglas R. Hofstadter Center for Research on Concepts and Cognition Indiana University CRCC-TR-49 High-level perception -- the process of making sense of complex data at an abstract, conceptual level -- is fundamental to human cognition. Via high-level perception, chaotic environmental stimuli are organized into mental representations which are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception completely, by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models -- notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought -- and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that such artificial-intelligence models cannot be defended by supposing the existence of a "representation module" that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context. N.B. This is not a connectionist paper in the narrowest sense, but the representational issues discussed are very relevant to connectionism, and the advocated integration of perception and cognition is a key feature of many connectionist models. Also, philosophical motivation for the "quasi-connectionist" Copycat architecture is provided. ----------------------------------------------------------------------------- This paper may be retrieved by anonymous ftp from cogsci.indiana.edu (129.79.238.6). The file is cfh.perception.ps.Z, in the directory pub. To retrieve, follow the procedure below. unix> ftp cogsci.indiana.edu # (or ftp 129.79.238.6) ftp> Name: anonymous ftp> Password: [identification] ftp> cd pub ftp> binary ftp> get cfh.perception.ps.Z ftp> quit unix> uncompress cfh.perception.ps.Z unix> lpr -P(your_local_postscript_printer) cfh.perception.ps If you do not have access to ftp, hardcopies may be obtained by sending e-mail to dave at cogsci.indiana.edu. From obm8 at cs.kun.nl Thu May 2 08:10:27 1991 From: obm8 at cs.kun.nl (obm8@cs.kun.nl) Date: Thu, 2 May 91 14:10:27 +0200 Subject: NN Message-ID: <9105021210.AA11571@erato> WANTED: Information on NN in military systems We are students from the University of Nijmegen and we are searching for some kind of literature concerning the use of neural networks in military systems.
We are especially interested in articles which address the usage of NN and the constraints under which they have to operate. Here in the Netherlands it is pretty difficult to get some information about this. We would appreciate any reaction (as fast as possible because we're dealing with a deadline) on these matters. You can send it to: Parcival Willems, Paul Jones, obm8 at erato.cs.kun.nl From bridle at ai.toronto.edu Thu May 2 10:21:31 1991 From: bridle at ai.toronto.edu (John Bridle) Date: Thu, 2 May 1991 10:21:31 -0400 Subject: NN In-Reply-To: obm8@cs.kun.nl's message of Thu, 2 May 1991 08:10:27 -0400 <9105021210.AA11571@erato> Message-ID: <91May2.102145edt.230@neuron.ai.toronto.edu> You ask about data on NNs in military systems. My colleague Andrew Webb has published a paper "Potential Applications of NNs in Defence" or similar. It is basically a survey of literature on the subject. (Andrew does know a lot about NNs and some areas of defence electronics.) He is at webb at hermes.mod.uk Mention my name. -------------------------------------------------------------------------- From: John S Bridle Currently with: Geoff Hinton of: Speech Research Unit Dept of Computer Science Defence Research Agency University of Toronto Electronics Division RSRE St Andrews Road Toronto Ontario Great Malvern Canada Worcs. WR14 3PS (Until 15 May 1991) U.K. Email: bridle at ai.toronto.edu Email: bridle at hermes.mod.uk --------------------------------------------------------------------------- From lacher at lambda.cs.fsu.edu Thu May 2 13:37:17 1991 From: lacher at lambda.cs.fsu.edu (Chris Lacher) Date: Thu, 2 May 91 13:37:17 -0400 Subject: Hybrid Systems at IJCNN Singapore Message-ID: <9105021737.AA05663@lambda.cs.fsu.edu> To: Connectionists From: Chris Lacher (R. C. Lacher) lacher at cs.fsu.edu (904) 644-0058 (FAX) Subject: Hybrid Systems As you probably know, the International Joint Conference on Neural Networks has, for the first time in IJCNN91/Singapore, a submission category for hybrid systems research, officially titled ``Hybrid Systems (AI, Neural Networks, Fuzzy Systems)". Some have argued that a better descriptor will eventually be "General Intelligent Systems". In any case, coupled (loose or tight), composite, or hybrid systems are meant to be included in the concept. The conference is sponsored by IEEE and co-sponsored by INNS and will be held at the Westin Stamford and Westin Plaza Hotels in Singapore, November 18-21, 1991. This is a significant milestone, a response to and recognition of the growing importance of systems that integrate various machine intelligence technologies across traditional boundaries. This meeting will help define the field, its foundations, and its founders. I am writing to urge you to participate in this historically significant event. Full details on paper submissions are published as page 407 in the May, 1991, issue of IEEE Transactions on Neural Networks. Note that the deadline for RECEIPT of manuscripts is May 31, 1991. From wilson at magi.ncsl.nist.gov Fri May 3 12:31:10 1991 From: wilson at magi.ncsl.nist.gov (Charles Wilson x2080) Date: Fri, 3 May 91 12:31:10 EDT Subject: New Information Processing Technologies Message-ID: <9105031631.AA18304@magi.ncsl.nist.gov> As part of the US response to the Japanese ``Sixth Generation Computer'' initiative, specifically the ``New Information Processing Technologies'' (NIPT) cooperative initiative, the Advanced Systems Division of NIST is preparing a report on ``new information processing technologies''.
This report will be used to help set future US policy in these areas. These technologies include massively parallel computing, distributed processing, neural computing, optical computing, and fuzzy logic. This report will include: 1) identification of emerging technologies (are these the technologies which will provide ``human like response'' in computer systems); 2) assessment of economic impact of NIPT technologies (which technologies will move from toy systems to real systems and when); 3) assessment of present US position relative to Japan and how international collaborations will affect these positions; 4) national security considerations. The Japanese are particularly interested in funding US (largely university) participation. The report must be completed before May 31. Other agencies such as DARPA and NSF will be asked to comment. Interested US researchers are invited to respond by E-mail. Comments on items 1 and 2 from researchers working in these areas are particularly important. Responses received after May 15 have a lower probability of inclusion. C. L. Wilson (301) 975-2080 FAX (301) 590-0932 E-mail wilson at magi.ncsl.nist.gov PLEASE RESPOND TO THE ADDRESS ABOVE AND NOT TO THIS MAILING LIST. From atul at nynexst.com Fri May 3 17:48:32 1991 From: atul at nynexst.com (Atul Chhabra) Date: Fri, 3 May 91 17:48:32 EDT Subject: Lecolinet's PhD Thesis? Message-ID: <9105032148.AA00902@texas.nynexst.com> How can I get a copy of the following PhD thesis? E. Lecolinet, "Segmentation d'images de mots manuscrits: Application a la lecture de chaines de caracteres majuscules alphanumeriques et a la lecture de l'ecriture cursive," PhD thesis, University of Paris, March 1990. The thesis deals with the issue of segmentation in recognition of handwritten characters -- a topic familiar to many connectionists. In another place, I have seen this thesis referred to as: E. Lecolinet, "Segmentation et reconnaissance des codes postaux et des mots manuscrits," PhD thesis, Paris VI 1990. Better still, does anyone know of any English publications by this author? Thanks, Atul -- Atul K. Chhabra (atul at nynexst.com) Phone: (914)683-2786 Fax: (914)683-2211 NYNEX Science & Technology 500 Westchester Avenue White Plains, NY 10604 From schmidhu at informatik.tu-muenchen.dbp.de Mon May 6 06:43:22 1991 From: schmidhu at informatik.tu-muenchen.dbp.de (Juergen Schmidhuber) Date: 06 May 91 12:43:22+0200 Subject: New FKI-Report Message-ID: <9105061043.AA10479@kiss.informatik.tu-muenchen.de> Here is another one: --------------------------------------------------------------------- AN O(n^3) LEARNING ALGORITHM FOR FULLY RECURRENT NETWORKS Juergen Schmidhuber Technical Report FKI-151-91, May 6, 1991 The fixed-size storage learning algorithm for fully recurrent continually running networks (e.g. (Robinson + Fallside, 1987), (Williams + Zipser, 1988)) requires O(n^4) computations per time step, where n is the number of non-input units. We describe a method which computes exactly the same gradient and requires fixed-size storage of the same order as the previous algorithm. But the average time complexity per time step is O(n^3). --------------------------------------------------------------------- To obtain a copy, do: unix> ftp 131.159.8.35 Name: anonymous Password: your name, please ftp> binary ftp> cd pub/fki ftp> get fki151.ps.Z ftp> bye unix> uncompress fki151.ps.Z unix> lpr fki151.ps Please do not forget to leave your name (instead of your email address).
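As a concrete picture of the baseline the abstract refers to, the following is a minimal sketch of the standard fixed-size-storage gradient of Robinson + Fallside and Williams + Zipser, assuming a fully recurrent tanh network with no external inputs and every unit visible; it shows only the O(n^4)-per-step sensitivity update that the report improves to O(n^3), not the report's own algorithm, and all function and variable names are illustrative rather than taken from FKI-151-91.

import numpy as np

def rtrl_step(W, y, p, target, lr=0.1):
    """One time step of the fixed-size-storage (RTRL-style) gradient method
    for a fully recurrent net of n units with weight matrix W (n x n).
    p[k, i, j] carries the sensitivity d y_k / d w_ij forward in time."""
    n = W.shape[0]
    s = W @ y                        # net input to every unit
    y_new = np.tanh(s)               # new activations
    fprime = 1.0 - y_new ** 2        # derivative of tanh at s

    # Sensitivity update:
    #   p_new[k,i,j] = f'(s_k) * ( sum_l W[k,l] * p[l,i,j] + delta(k,i) * y[j] )
    # The einsum below is the O(n^4) step whose cost the report reduces.
    p_new = np.einsum('kl,lij->kij', W, p)
    p_new[np.arange(n), np.arange(n), :] += y
    p_new *= fprime[:, None, None]

    # Squared-error gradient, treating every unit as a target unit here.
    err = y_new - target
    grad = np.einsum('k,kij->ij', err, p_new)
    return W - lr * grad, y_new, p_new

# Tiny usage example:
# n = 4
# W = 0.1 * np.random.randn(n, n)
# y, p = np.zeros(n), np.zeros((n, n, n))
# for t in range(10):
#     W, y, p = rtrl_step(W, y, p, target=np.ones(n))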
NOTE: fki151.ps is designed for European A4 paper format (20.9cm x 29.6cm). In case of ftp-problems send email to schmidhu at informatik.tu-muenchen.de or contact Juergen Schmidhuber Institut fuer Informatik, Technische Universitaet Muenchen Arcisstr. 21 8000 Muenchen 2 GERMANY From lyle at ai.mit.edu Mon May 6 15:04:32 1991 From: lyle at ai.mit.edu (Lyle J. Borg-Graham) Date: Mon, 6 May 91 15:04:32 EDT Subject: THRESHOLDS AND SUPEREXCITABILITY In-Reply-To: Peter Cariani's message of Wed, 1 May 91 03:03:34 edt <9105010703.AA06223@chaos.cs.brandeis.edu> Message-ID: <9105061904.AA02824@peduncle> >Can Raymond's threshold results be accounted for via current models? We have modelled the time course of K+ currents which could in principle reduce 'threshold' in hippocampal cells (MIT AI Lab Technical Report 1161) (see also "Simulations suggest information processing roles for the diverse currents in hippocampal neurons", NIPS 1987 Proceedings, ed. D.Z. Anderson). >I haven't seen any models where the afterpotentials have amplitudes >sufficient to reduce the effective threshold by 50-70% Again, as I mentioned in a previous message, it might be useful to be more precise in the definition of 'threshold'. From the statement above, I assume you mean that the voltage difference between the afterpotential and the normal action potential (voltage) threshold is 50-70% smaller than that between the resting potential and threshold, thus reducing the threshold *current* by the same amount (assuming that the input impedance doesn't change, which it does). I am not familiar with the cited results, but I would also expect (as mentioned earlier) that the normal voltage threshold would be increased after the spike because of (a) incomplete re-activation of Na+ channels (which I believe is the classic mechanism cited for the refractory period) and (b) decreased input impedance due to activation of various channels (which means that a larger Na+ current is needed to start up the positive feedback underlying the upstroke of the spike). My suggestion as to the role of K+ currents underlying a super-excitable phase was that *inactivation* of these currents by depolarization might both reduce the decrease in impedance (b) *and* increase the "resting potential". Why is this minutia important? Well for one thing in real neurons inputs and intrinsic properties are mediated by conductance changes, which, in turn, interact non-linearly (as analyzed by Poggio, Torre, and Koch, among others). Whether or not these non-linear interactions are relevant depends on the model, but at least we should know enough about their general properties so that we can scope out the right context of the problem at the beginning. > In defense of simple models, they are often useful in developing the >broader functional implications of a style of information processing. If >most neurons have (potentially) complex temporal characteristics, then >we'd better work on our coupled oscillator models and maybe some >adaptively tuned oscillator models if we are going to make any sense >of it all. Absolutely. - Lyle From lyle at ai.mit.edu Mon May 6 17:01:47 1991 From: lyle at ai.mit.edu (Lyle J. Borg-Graham) Date: Mon, 6 May 91 17:01:47 EDT Subject: THRESHOLDS AND SUPEREXCITABILITY In-Reply-To: "Lyle J. Borg-Graham"'s message of Mon, 6 May 91 15:04:32 EDT <9105061904.AA02824@peduncle> Message-ID: <9105062101.AA02923@peduncle> Oops, I meant to say that the refractory period was due to incomplete *de-inactivation* of Na+ channel(s).
Lyle From david at cns.edinburgh.ac.uk Tue May 7 11:49:10 1991 From: david at cns.edinburgh.ac.uk (David Willshaw) Date: Tue, 7 May 91 11:49:10 BST Subject: NETWORK - contents of Volume 2, no 2 (May 1991) Message-ID: <6670.9105071049@subnode.cns.ed.ac.uk> The forthcoming May 1991 issue of NETWORK will contain the following papers: NETWORK Volume 2 Number 2 May 1991 Minimum-entropy coding with Hopfield networks H G E Hentschel and H B Barlow Cellular automation models of the CA3 region of the hippocampus E Pytte, G Grinstein and R D Traub Competitive learning, natural images and cortical cells C J StC Webber Adaptive fields: distributed representations of classically conditioned associations P F M J Verschure and A C C Coolen ``Quantum'' neural networks M Lewenstein and M Olko ---------------------- NETWORK welcomes research Papers and Letters where the findings have demonstrable relevance across traditional disciplinary boundaries. Research Papers can be of any length, if that length can be justified by content. Rarely, however, is it expected that a length in excess of 10,000 words will be justified. 2,500 words is the expected limit for research Letters. Articles can be published from authors' TeX source codes. NETWORK is published quarterly. The subscription rates are: Institution 125.00 POUNDS (US$220.00) Individual (UK) 17.30 POUNDS (Overseas) 20.50 POUNDS (US$37.90) For more details contact IOP Publishing Techno House Redcliffe Way Bristol BS1 6NX United Kingdom Telephone: 0272 297481 Fax: 0272 294318 Telex: 449149 INSTP G EMAIL: JANET: IOPPL at UK.AC.RL.GB From collins at z.ils.nwu.edu Tue May 7 14:47:40 1991 From: collins at z.ils.nwu.edu (Gregg Collins) Date: Tue, 7 May 91 13:47:40 CDT Subject: ML91 now takes credit cards! Message-ID: <9105071847.AA01002@z.ils.nwu.edu> That's right, you can now register for ML91 -- The Eighth International Workshop on Machine Learning -- using your Master Card, Visa, or American Express. We have adjusted the registration forms slightly to reflect this. Copies of the new forms appear below. (don't worry, though -- we'll still accept the old ones). Gregg Collins Lawrence Birnbaum ML91 Program Co-chairs *****************Conference Registration Form************************** ML91: The Eighth International Workshop on Machine Learning Conference Registration Form Please send this form to: Machine Learning 1991 The Institute for the Learning Sciences 1890 Maple Avenue Evanston, Illinois, 60201 USA Registration information (please type or print): Name: Address: Phone: Email: Payment information: Type of registration (check one): [ ] Student: $70 [ ] Other: $100 Registration is due May 22, 1991. If your registration will arrive after that date, please add a late fee of $25. You may pay either by check or by credit card. Checks should be made out to Northwestern University. If you are paying by credit card, please complete the following: Card type (circle): VISA MASTER CARD AMERICAN EXPRESS Card number: Expiration date: Signature: ********************Housing Registration Form************************** ML91: the Eighth International Workshop on Machine Learning On-campus Housing Registration Form On campus housing for ML91 is available at a cost of $69 per person for four nights, June 26-29. Rooms are double-occupancy only. Sorry, we cannot offer a reduced rate for shorter stays. 
To register for housing, please send this form to: Machine Learning 1991 The Institute for the Learning Sciences 1890 Maple Avenue Evanston, Illinois, 60201 USA Registration information (please type or print): Name: Sex: Address: Phone: Email: Name of roommate (if left blank, we will assign you a roommate): Payment: Housing is $69 per person, which you may pay either by check or by credit card. Checks should be made out to Northwestern University. If you are paying by credit card, please complete the following: Card type (circle): VISA MASTER CARD AMERICAN EXPRESS Card number: Expiration date: Signature: From CONNECT at nbivax.nbi.dk Wed May 8 03:40:00 1991 From: CONNECT at nbivax.nbi.dk (CONNECT@nbivax.nbi.dk) Date: Wed, 8 May 1991 09:40 +0200 (NBI, Copenhagen) Subject: International Journal of Neural Systems Vol 2, issues 1-2 Message-ID: Begin Message: ----------------------------------------------------------------------- INTERNATIONAL JOURNAL OF NEURAL SYSTEMS The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems.
It publishes original contributions on all aspects of this broad subject which involves physics, biology, psychology, computer science and engineering. Contributions include research papers, reviews and short communications. The journal presents a fresh undogmatic attitude towards this multidisciplinary field with the aim to be a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities. ISSN: 0129-0657 (IJNS) ---------------------------------- Contents of Volume 2, issues number 1-2 (1991): 1. H. Liljenstrom: Modelling the Dynamics of olfactory cortex effects of anatomically organised Propagation Delays. 2. S. Becker: Unsupervised learning Procedures for Neural Networks. 3. Yves Chauvin: Gradient Descent to Global minimal in a n-dimensional Landscape. 4. J. G. Taylor: Neural Network Capacity for Temporal Sequence Storage. 5. S. Z. Lerner and J. R. Deller: Speech Recognition by a self-organising feature finder. 6. Jefferey Lee Johnson: Modelling head end escape behaviour in the earthworm: the efferent arc and the end organ. 7. M.-Y. Chow, G. Bilbro and S. O. Yee: Application of Learning Theory for a Single Phase Induction Motor Incipient Fault Detector Artificial Neural Network. 8. J. Tomberg and K. Kaski: Some implementation of artificial neural networks using pulse-density Modulation Technique. 9. I. Kocher and R. Monasson: Generalisation error and dynamical efforts in a two-dimensional patches detector. 10. J. Schmidhuber and R. Huber: Learning to generate fovea Trajectories for attentive vision. 11. A. Hartstein: A back-propagation algorithm for a network fo neurons with Threshold Controlled Synapses. 12. M. Miller and E. N. Miranda: Stability of Multi-Layered Neural Networks. 13. J. Ariel Sirat: A Fast neural algorithm for principal components analysis and singular value Decomposition. 14. D. Stork: Review of book by J. Hertz, A. Krogh and R. Palmer. ---------------------------------- Editorial board: B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge) S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge) D. Stork (Stanford) (Book review editor) Associate editors: B. Baird (Berkeley) D. Ballard (University of Rochester) E. Baum (NEC Research Institute) S. Bjornsson (University of Iceland) J. M. Bower (CalTech) S. S. Chen (University of North Carolina) R. Eckmiller (University of Dusseldorf) J. L. Elman (University of California, San Diego) M. V. Feigelman (Landau Institute for Theoretical Physics) F. Fogelman-Soulie (Paris) K. Fukushima (Osaka University) A. Gjedde (Montreal Neurological Institute) S. Grillner (Nobel Institute for Neurophysiology, Stockholm) T. Gulliksen (University of Oslo) D. Hammerstrom (Oregon Graduate Institute) J. Hounsgaard (University of Copenhagen) B. A. Huberman (XEROX PARC) L. B. Ioffe (Landau Institute for Theoretical Physics) P. I. M. Johannesma (Katholieke Univ. Nijmegen) M. Jordan (MIT) G. Josin (Neural Systems Inc.) I. Kanter (Princeton University) J. H. Kaas (Vanderbilt University) A. Lansner (Royal Institute of Technology, Stockholm) A. Lapedes (Los Alamos) B. McWhinney (Carnegie-Mellon University) M. Mezard (Ecole Normale Superieure, Paris) J. Moody (Yale, USA) A. F. Murray (University of Edinburgh) J. P. Nadal (Ecole Normale Superieure, Paris) E. Oja (Lappeenranta University of Technology, Finland) N. Parga (Centro Atomico Bariloche, Argentina) S. Patarnello (IBM ECSEC, Italy) P. Peretto (Centre d'Etudes Nucleaires de Grenoble) C. Peterson (University of Lund) K. 
Plunkett (University of Aarhus) S. A. Solla (AT&T Bell Labs) M. A. Virasoro (University of Rome) D. J. Wallace (University of Edinburgh) D. Zipser (University of California, San Diego) ---------------------------------- CALL FOR PAPERS Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from The Editorial Secretariat, IJNS World Scientific Publishing Co. Pte. Ltd. 73, Lynton Mead, Totteridge London N20 8DH ENGLAND Telephone: (44)81-446-2461 or World Scientific Publishing Co. Inc. 687 Hardwell St. Teaneck New Jersey 07666 USA Telephone: (1)201-837-8858 or World Scientific Publishing Co. Pte. Ltd. Farrer Road, P. O. Box 128 SINGAPORE 9128 Telephone (65)382-5663 ----------------------------------------------------------------------- End Message From ring at cs.utexas.edu Wed May 8 17:16:31 1991 From: ring at cs.utexas.edu (Mark Ring) Date: Wed, 8 May 91 16:16:31 CDT Subject: Preprint: building sensory-motor hierarchies Message-ID: <9105082116.AA05640@ai.cs.utexas.edu> Recently there's been some interest on this mailing list regarding neural net hierarchies for sequence "chunking". I've placed a relevant paper in the Neuroprose Archive for public ftp. This is a (very slightly extended) copy of a paper to be published in the Proceedings of the Eighth International Workshop on Machine Learning. The paper summarizes the results to date of work begun a year and a half ago to create a system that automatically and incrementally constructs hierarchies of behaviors in neural nets. The purpose of the system is to develop continuously through the encapsulation, or "chunking," of learned behaviors. ---------------------------------------------------------------------- INCREMENTAL DEVELOPMENT OF COMPLEX BEHAVIORS THROUGH AUTOMATIC CONSTRUCTION OF SENSORY-MOTOR HIERARCHIES Mark Ring University of Texas at Austin This paper addresses the issue of continual, incremental development of behaviors in reactive agents. The reactive agents are neural-network based and use reinforcement learning techniques. A continually developing system is one that is constantly capable of extending its repertoire of behaviors. An agent increases its repertoire of behaviors in order to increase its performance in and understanding of its environment. Continual development requires an unlimited growth potential; that is, it requires a system that can constantly augment current behaviors with new behaviors, perhaps using the current ones as a foundation for those that come next. It also requires a process for organizing behaviors in meaningful ways and a method for assigning credit properly to sequences of behaviors, where each behavior may itself be an arbitrarily long sequence. The solution proposed here is hierarchical and bottom up. I introduce a new kind of neuron (termed a ``bion''), whose characteristics permit it to be automatically constructed into sensory-motor hierarchies as determined by experience. The bion is being developed to resolve the problems of incremental growth, temporal history limitation, network organization, and credit assignment among component behaviors. A longer, more detailed paper will be announced shortly. 
---------------------------------------------------------------------- Instructions to retrieve paper by ftp (no hard copies available at this time): % ftp cheops.cis.ohio-state.edu (or 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get ring.ml91.ps.Z ftp> bye % uncompress ring.ml91.ps.Z % lpr -P(your_postscript_printer) ring.ml91.ps ---------------------------------------------------------------------- DO NOT "reply" DIRECTLY TO THIS MESSAGE! If you have any questions or difficulties, please send e-mail to: ring at cs.utexas.edu. or send mail to: Mark Ring Department of Computer Sciences Taylor 2.124 University of Texas at Austin Austin, TX 78712 From jon at incsys.com Thu May 9 11:16:22 1991 From: jon at incsys.com (Jon Shultis) Date: Thu, 09 May 91 11:16:22 -0400 Subject: FYI Informal Computing Workshop Program Message-ID: <9105091517.AA14404@incsys.com> Workshop on Informal Computing 29-31 May 1991 Santa Cruz, California Program Wednesday 29 May Conversational Computing and Adaptive Languages 8:15 Opening Remarks, Jon Shultis, Incremental Systems 8:30 Natural Language Techniques in Formal Languages, David Mundie, Incremental Systems 9:30 Building and Exploiting a User Model In Natural Language Information Systems, Sandra Carberry, University of Delaware 10:30 Break 10:45 Informalism in Interfaces, Larry Reeker, Institutes for Defense Analyses 11:45 Natural Language Programming in Solving Problems of Search, Alan Biermann, Duke University 12:30 Lunch 13:45 Linguistic Structure from a Cognitive Grammar Perspective, Karen van Hoek, University of California at San Diego 14:45 Notational Formalisms, Computational Mechanisms: Models or Metaphors?
A Linguistic Perspective, Catherine Harris, University of California at San Diego 15:45 Break 16:00 Discussion 18:00 Break for dinner Thursday 30 May Informal Knowledge and Reasoning 8:15 What is Informalism?, David Fisher, Incremental Systems 9:15 Reaction in Real-Time Decision Making, Bruce D'Ambrosio, Oregon State University 10:15 Break 10:30 Decision Making with Informal, Plausible Reasoning, David Littman, George Mason University 11:15 Title to be announced, Tim Standish, University of California at Irvine 12:15 Lunch 13:30 Intensional Logic and the Metaphysics of Intensionality, Edward Zalta, Stanford University 14:30 Connecting Object to Symbol in Modeling Cognition, Stevan Harnad, Princeton University 15:30 Break 15:45 Discussion 17:45 Break 19:00 Banquet Friday 31 May Modeling and Interpretation 8:15 A Model of Modeling Based on Reference, Purpose and Cost-effectiveness, Jeff Rothenberg, RAND 9:15 Mathematical Modeling of Digital Systems, Donald Good, Computational Logic, Inc. 10:15 Break 10:30 Ideographs, Epistemic Types, and Interpretive Semantics, Jon Shultis, Incremental Systems 11:30 Discussion 12:30 Lunch and End of the Workshop 13:45 Steering Committee Meeting for Informalism '92 Conference, all interested participants are invited. From jcp at vaxserv.sarnoff.com Thu May 9 15:37:10 1991 From: jcp at vaxserv.sarnoff.com (John Pearson W343 x2385) Date: Thu, 9 May 91 15:37:10 EDT Subject: postmark deadline Message-ID: <9105091937.AA21511@sarnoff.sarnoff.com> All submittals to the 1991 NIPS conference and workshop must be POSTMARKED by May 17th. Express mail is not necessary. John Pearson Publicity Chairman, NIPS-91 jcp at as1.sarnoff.com From ASKROGH at nbivax.nbi.dk Fri May 10 07:00:00 1991 From: ASKROGH at nbivax.nbi.dk (Anders Krogh) Date: Fri, 10 May 1991 13:00 +0200 (NBI, Copenhagen) Subject: Bibliography Message-ID: <4F7256B300A0BD31@nbivax.nbi.dk> Bibliography in the Neuroprose Archive: Bibliography from the book "Introduction to the Theory of Neural Computation" by John Hertz, Anders Krogh, and Richard Palmer (Addison-Wesley, 1991) has been placed in the Neuroprose Archive. After a suggestion from Tali Tishby we decided to make the bibliography for our book publicly available in the Neuroprose Archive. The copyright of the book is owned by Addison-Wesley and this bibliography is placed in the public domain with their permission. We spent considerable effort on the bibliography while writing the book, and hope that other researchers will benefit from it. It is written in the TeX format developed for the book, and the file includes the macros needed to make it TeX-able. It should be fairly easy to adapt the macros and/or bibliographic entries for individual needs. If anyone converts it to BiBtex---or improves it in other ways---we encourage them to put the new version in the Neuroprose Archive, or e-mail it to one of the following addresses so that we can do so. askrogh at nbivax.nbi.dk palmer at phy.duke.edu Please note that we are not intending to update this bibliography, except to correct mistakes. It would be great if someone would maintain a complete online NN bibliography, but WE cannot. So please don't send us requests of the form "please add my paper ...". John Hertz, Anders Krogh, and Richard G. Palmer. 
-------------------------- To obtain copies from Neuroprose: unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get hertz.refs.tex.Z ftp> bye If you want to print it, do something like this (depending on your local system): unix> uncompress hertz.refs.tex unix> tex hertz.refs.tex unix> dvi2ps hertz.refs (the dvi to postscript converter) unix> lpr hertz.refs.ps From shoshi at thumper.bellcore.com Fri May 10 11:59:25 1991 From: shoshi at thumper.bellcore.com (Shoshana Hardt-Kornacki) Date: Fri, 10 May 91 11:59:25 edt Subject: CFP: Information Filtering Workshop Message-ID: <9105101559.AA13648@shalva.bellcore.com> Bellcore Workshop on High-Performance Information Filtering: Foundations, Architectures, and Applications November 5-7, 1991 Chester, New Jersey Information filtering can be viewed both as a way to control the flood of information that is received by an end-user, and as a way to target the information that is sent by information providers. The information carrier, which provides the appropriate connectivity between the information providers, the filter and the end-user, plays a major role in providing a cost-effective architecture which also ensures end-user privacy. The aim of the workshop is to examine issues that can advance the state-of-the-art in filter construction, usage, and evaluation, for various information domains, such as news, entertainment, advertising, and community information. We focus on creative approaches that take into consideration constraints imposed by realistic application contexts. Topics include but are not limited to: Taxonomy of information domains and their dynamics Information retrieval and indexing systems Information delivery architectures Cognitive models of end-user's interests and preferences Cognitive models for multimedia information processing Adaptive filtering agents and distributed filters Information theoretic approaches to filter performance evaluation The workshop is by invitation only. Please submit a 5-10 page paper (hardcopy only), summarizing the work you would like to present, or a one-page description of your interests and how they relate to this workshop. Demonstrations of existing prototypes are welcome. Proceedings will be available at the workshop. Workshop Chair: Shoshana Hardt-Kornacki (Bellcore) Workshop Program Committee: Bob Allen (Bellcore) Nick Belkin (Rutgers University) Louis Gomez (Bellcore) Tom Landauer (Bellcore) Bill Mansfield (Bellcore) Papers should be sent to: Shoshana Hardt-Kornacki Bell Communications Research 445 South Street, Morristown NJ, 07962. (201) 829-4528, shoshi at bellcore.com Papers due: July 15, 1991. Invitations sent: August 15, 1991. Workshop dates: November 5-7, 1991. From enorris at gmuvax2.gmu.edu Mon May 13 12:26:34 1991 From: enorris at gmuvax2.gmu.edu (Gene Norris) Date: Mon, 13 May 91 12:26:34 -0400 Subject: ANNA91 Program & Registration Message-ID: <9105131626.AA19674@gmuvax2.gmu.edu> ANNA 91 (Analysis of Neural Network Applications Conference) is the first of a planned series of conferences on the application of neural network technology to real-world problems. The conference, to be held at George Mason University in Fairfax, Virginia, is organized around the problem-solving process: domain analysis, design criteria, analytic approaches to network definition, evaluation methods, and lessons learned.
There will be two full-day tutorials on May 29th, addressing both fundamentals and advanced topics, followed by two days of presentations and panel sessions on May 30-31. The keynote speaker will be James Anderson of Brown University; Paul Werbos of the NSF will give the luncheon address on the first day, and Oliver Selfridge from GTE Laboratories will chair the rapporteur panel. Two panel sessions have also been scheduled: the first, chaired by Eugene Norris of GMU, will look back at the history of the technology, and the second, chaired by Jerry LR Chandler of NINCDS, will explore the probable state of the technology in the early 21st century. George Mason University is located in the Washington, D.C. area and is convenient both to Washington National and Dulles Airports. Attendance at the conference will be limited by facility space; reservations will be processed in order of arrival. Sponsors of the conference are ACM SIGART and ACM SIGBDP in cooperation with the International Neural Network Society and the Washington Evolutionary Systems Society. Institutional support is provided by GMU and NIH, with additional support from American Electronics Inc., CTA Inc., IKONIX, and TRW/Systems Division. Questions? Toni Shetler (Conference Chair), TRW FVA6/3444 PO Box 10400, Fairfax VA 22031 ------------------------ADVANCE PROGRAM----------------------------- Wednesday, May 29, 1991: 09:00 - 12:00 and 01:00 - 04:00 Tutorial 1: Neural Network Fundamentals Instructors - Judith Dayhoff, University of Maryland and Edward Page, Clemson University Tutorial 2: Real Brains for Modelers Instructor - Eugene Norris, George Mason University. Thursday AM, May 30, 1991 08:30 - 10:00 Welcome and Keynote Speaker Welcome - Toni Shetler, TRW/Systems Division Keynote Intro - Robert L. Stites, IKONIX Keynote - James Anderson, Brown University 10:00 - 10:30 Break 10:30 - 12:00 Panel Session 1: Now: Where Are We? Panel Chair - Eugene Norris, George Mason University Panelists - Craig Will, IDA, Tom Vogel, ERIM, Bill Marks, NIH 12:00 - 01:30 Luncheon Luncheon Intro - Robert L. Stites, IKONIX Speaker - Paul Werbos, NSF 01:30 - 03:30 Session 2: Domain Analysis Session Chair: Judith Dayhoff, University of Maryland % Synthetic aperture radar image formation with neural networks. Ted Frison, S. Walt McCandless, and Robert Runze. % Application of the recurrent neural network to the problem of language acquisition. Ryotaro Kamimura. % Protein classification using a neural network protein database (NNPDB) system. Cathy H. Wu, Adisorn Ermongkonchai, and Tzu-Chung Chang. % The object-oriented paradigm and neurocomputing. Paul S. Prueitt and Robert M. Craig. 03:30 - 04:00 Break 04:00 - 06:00 Session 3: Design Criteria Session Chair: Harry Erwin, TRW/Systems Division % Neural network-based decision support for incomplete database systems. B. Jin, A. R. Hurson, and L. L. Miller. % Spatial classification and multi-spectral fusion with neural networks. Craig Harston. % Neural network process control. Michael J. Piovoso, Aaron J. Owens, Allon Guez and Eva Nilssen. 06:00 - 07:00 Evening Reception Host: Kim McBrian, TRW/Command Support Division 07:00 - 09:00 Session 4: Analytic Approaches to Network Definition Session Chair: Gary C. Fleming, American Electronics, Inc. % A discrete-time neural network multitarget tracking data association algorithm. Oluseyi Olurotimi. % On the implementation of RB technique in neural networks. M. T. Musavi, K. B. Faris, K. H. Chan, and W. Ahmed. % Radiographic image compression: a neural approach.
Sridhar Narayan, Edward W. Page, and Gene A. Tagliarini. % Supervised adaptive resonance networks. Robert A. Baxter. Friday, May 31, 1991 08:00 - 10:00 Session 5: Lessons Learned, Feedback, and Design Implications Session Chair: Elias Awad, University of Virginia % Neural control of a nonlinear system with inherent time delays. Edward A. Rietman and Robert C. Frye. % Pattern mapping in pulse transmission neural networks. Judith Dayhoff. % Analysis of a biologically motivated neural network for character recognition. M. D. Garris, R. A. Wilkinson, and C. L. Wilson. % Optimization in cascaded Boltzmann machines with a temperature gradient: an alternative to simulated annealing. James P. Coughlin and Robert H. Baran. 10:00 - 10:30 Break 10:30 - 12:00 Panel Session 6: Where Will We Be in 1995, 2000, and 2010? Panel Chair: Jerry LR Chandler, NINCDS, Epilepsy Branch Panelists - Captain Steven Suddarth, USAF; James Templeman, George Washington University; Russell Eberhart, JHU/APL; Larry Hutton, JHU/APL; Robert Artigiani, USNA 12:00 - 01:00 Lunch Break 01:00 - 02:00 Session 7: Evaluation Session Chair: Larry K. Barrett, CTA, Inc. % A neural network for target classification using passive sonar. Robert H. Baran and James P. Coughlin. % Defect prediction with neural networks. Robert L. Stites, Bryan Ward, and Robert V. Walters. 02:00 - 03:30 Session 8: ANNA-91 Conference Wrap-up Session Chair: Toni Shetler Rapporteurs - Joseph Bigus, IBM; Oliver Selfridge, GTE Laboratories; and Harold Szu, NSWC -------------------------Registration & Hotel Forms ---------------------- CONFERENCE REGISTRATION Tutorial: fee Amount Name:_____________________________________________ member $150 ______ Address:__________________________________________ nonmember $200 ______ __________________________________________________ student $ 25 ______ __________________________________________________ circle ONE: Tutorial 1 (Dayhoff & Page) Tutorial 2 (Norris) Conference: fee Amount ______________________________________________ member $200 ______ ^Membership number & Society Affiliation nonmember $250 ______ student $ 25 ______ ________________________________________________ Faculty Advisor (full-time student registration) Total: ______ Mail to: ANNA 91 Conference Registration Toni Shetler TRW FVA6/3444 PO Box 10400 Fairfax, VA 22031 HOTEL REGISTRATION Circle choice and mail directly to the hotel (addresses below) Wellesley Inn Quality Inn Single/night +6.5% tax $44.00 $49.50 Double/night +6.5% tax $49.50 $59.50 Arrival Day/date ___________________ Departure ________________________ Name:___________________________________________ Address: _______________________________________ _______________________________________ _______________________________________ Telephone (include Area code) ___________________ MAIL FORM DIRECTLY TO YOUR HOTEL! Wellesley Inn Quality Inn US Rt 50 - 10327 Lee Highway US Rt 50 - 11180 Main Street Fairfax, VA 22030 Fairfax, VA 22030 (703) 359-2888 or (800) 654-2000 (703)591-5900 or (800) 223-1223 To guarantee late arrival, please forward one night's deposit or include your credit card number with expiration date. Reservations without guarantee will only be held until 6:00 PM on date of arrival. _________________________ ___________ ____________________ __________ Cardholder name type:AE,MC,...
credit card number expiration date

From Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU Mon May 13 13:31:04 1991
From: Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU (Scott.Fahlman@SEF1.SLISP.CS.CMU.EDU)
Date: Mon, 13 May 91 13:31:04 EDT
Subject: Two new Tech Reports
Message-ID:

The following two tech reports have been placed in the neuroprose database at Ohio State. Instructions for accessing them via anonymous FTP are included at the end of this message. (Maybe everyone should copy down these instructions once and for all so that we can stop repeating them with each announcement.)

---------------------------------------------------------------------------

Tech Report CMU-CS-91-100
The Recurrent Cascade-Correlation Architecture
Scott E. Fahlman

Recurrent Cascade-Correlation (RCC) is a recurrent version of the Cascade-Correlation learning architecture of Fahlman and Lebiere \cite{fahlman:cascor}. RCC can learn from examples to map a sequence of inputs into a desired sequence of outputs. New hidden units with recurrent connections are added to the network one at a time, as they are needed during training. In effect, the network builds up a finite-state machine tailored specifically for the current problem. RCC retains the advantages of Cascade-Correlation: fast learning, good generalization, automatic construction of a near-minimal multi-layered network, and the ability to learn complex behaviors through a sequence of simple lessons. The power of RCC is demonstrated on two tasks: learning a finite-state grammar from examples of legal strings, and learning to recognize characters in Morse code.

Note: This TR is essentially the same as the paper of the same name in the NIPS 3 proceedings (due to appear very soon). The TR version includes some additional experimental data and a few explanatory diagrams that had to be cut in the NIPS version.

---------------------------------------------------------------------------

Tech report CMU-CS-91-130
Learning with Limited Numerical Precision Using the Cascade-Correlation Algorithm
Markus Hoehfeld and Scott E. Fahlman

A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited numerical precision can be used with existing learning algorithms. We present an empirical study of the effects of limited precision in Cascade-Correlation networks on three different learning problems. We show that learning can fail abruptly as the precision of network weights or weight-update calculations is reduced below 12 bits. We introduce techniques for dynamic rescaling and probabilistic rounding that allow reliable convergence down to 6 bits of precision, with only a gradual reduction in the quality of the solutions.

Note: The experiments described here were conducted during a visit by Markus Hoehfeld to Carnegie Mellon in the fall of 1990. Markus Hoehfeld's permanent address is Siemens AG, ZFE IS INF 2, Otto-Hahn-Ring 6, W-8000 Munich 83, Germany.

---------------------------------------------------------------------------

To access these tech reports in postscript form via anonymous FTP, do the following:

unix> ftp cheops.cis.ohio-state.edu  (or, ftp 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get <filename>
ftp> quit
unix> uncompress <filename>
unix> lpr <filename>  (use the flag your printer needs for Postscript)

The TRs described above are stored as "fahlman.rcc.ps.Z" and "hoehfeld.precision.ps.Z". Older reports "fahlman.quickprop-tr.ps.Z" and "fahlman.cascor-tr.ps.Z" may also be of interest.
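The "probabilistic rounding" mentioned in the second report can be sketched in a few lines. The fragment below is a generic illustration under assumed conventions (a plain fixed-point grid with a chosen number of fractional bits), not the implementation from the report, which applies the idea to weight updates during Cascade-Correlation training:

import numpy as np

def probabilistic_round(x, bits=6, rng=None):
    # Quantize x onto a fixed-point grid with `bits` fractional bits.
    # Round up or down at random, with probability equal to the fractional
    # remainder, so the rounded value is unbiased in expectation.
    rng = rng or np.random.default_rng()
    step = 2.0 ** -bits
    scaled = np.asarray(x, dtype=float) / step
    low = np.floor(scaled)
    round_up = rng.random(scaled.shape) < (scaled - low)
    return (low + round_up) * step

# A weight update far smaller than the grid spacing (1/64 here) is not
# simply lost: on average the weight still moves by the right amount.
w, dw = 0.25, 0.001
samples = [probabilistic_round(w + dw, bits=6) for _ in range(1000)]
print(np.mean(samples))   # close to 0.251, despite the coarse grid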
Your local version of ftp and other unix utilities may be different. Consult your local system wizards for details.

---------------------------------------------------------------------------

Hardcopy versions are now being printed and will be available soon, but because of the high demand and tight budget, our school has (reluctantly) instituted a charge for mailing out tech reports in hardcopy: $3 per copy within the U.S. and $5 per copy elsewhere, and the payment must be in U.S. dollars. To order hardcopies, contact:

Ms. Catherine Copetas
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213 U.S.A.

From Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU Mon May 13 13:58:25 1991
From: Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU (Scott.Fahlman@SEF1.SLISP.CS.CMU.EDU)
Date: Mon, 13 May 91 13:58:25 EDT
Subject: Lisp Code for Recurrent Cascade-Correlation
Message-ID:

Simulation code for the Recurrent Cascade-Correlation (RCC) algorithm is now available for FTP via Internet. For now, only a Common Lisp version is available. This is the same version I've been using for my own experiments, except that a lot of non-portable display and user-interface code has been removed. It shouldn't be too hard to modify Scott Crowder's C-based simulator for Cascade-Correlation to implement the new algorithm, but the likely "volunteers" to do the conversion are all too busy right now. I'll send a follow-up notice whenever a C version becomes available.

Instructions for obtaining the code via Internet FTP are included at the end of this message. If people can't get it by FTP, contact me by E-mail and I'll try once to mail it to you. If it bounces or your mailer rejects such a large message, I don't have time to try a lot of other delivery methods. We are not prepared to distribute the software by floppy disk or tape -- don't ask.

I am maintaining an E-mail list of people using any of our simulators so that I can notify them of any changes or problems that occur. I would appreciate hearing about any interesting applications of this code, and will try to help with any problems people run into. Of course, if the code is incorporated into any products or larger systems, I would appreciate an acknowledgement of where it came from.

NOTE: This code is in the public domain. It is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Carnegie-Mellon University.

There are several other programs in the "code" directory mentioned below: Cascade-Correlation in Common Lisp and C, Quickprop in Common Lisp and C, the Migraine/Aspirin simulator from MITRE, and some simulation code written by Tony Robinson for the vowel benchmark he contributed to the benchmark collection.

-- Scott

***************************************************************************

For people (at CMU, MIT, and soon some other places) with access to the Andrew File System (AFS), you can access the files directly from directory "/afs/cs.cmu.edu/project/connect/code". This file system uses the same syntactic conventions as BSD Unix: case sensitive names, slashes for subdirectories, no version numbers, etc. The protection scheme is a bit different, but that shouldn't matter to people just trying to read these files.

For people accessing these files via FTP:

1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu". The internet address of this machine is 128.2.254.155, for those who need it.

2. Log in as user "anonymous" with no password.
You may see an error message that says "filenames may not have /.. in them" or something like that. Just ignore it. 3. Change remote directory to "/afs/cs/project/connect/code". Any subdirectories of this one should also be accessible. Parent directories may not be. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. The RCC simulator lives in file "rcc1.lisp". If you try to access this directory by FTP and have trouble, please contact me. The exact FTP commands you use to change directories, list files, etc., will vary from one version of FTP to another. From kddlab!atrp05.atr-la.atr.co.jp!nick at uunet.UU.NET Tue May 14 00:14:37 1991 From: kddlab!atrp05.atr-la.atr.co.jp!nick at uunet.UU.NET (Nick Campbell) Date: Tue, 14 May 91 13:14:37+0900 Subject: preprints and reports In-Reply-To: Juergen Schmidhuber's message of 30 Apr 91 9:17 +0200 <9104300717.AA03329(a)kiss.informatik.tu-muenchen.de> Message-ID: <9105140414.AA25653@atrp05.atr-la.atr.co.jp> Many thanks - the package of papers arrived today and I look forward to reading them soon. Nick From russ at oceanus.mitre.org Tue May 14 13:07:36 1991 From: russ at oceanus.mitre.org (Russell Leighton) Date: Tue, 14 May 91 13:07:36 EDT Subject: New Graphics for MITRE Neural Network Simulator Message-ID: <9105141707.AA20789@oceanus.mitre.org> Attention users of the MITRE Neural Network Simulator Aspirin/MIGRAINES Version 4.0 Version 5.0 of Aspirin/MIGRAINES is targeted for public distribution in late summer. This will include a graphic interface which will support X11, SunView, GL and NextStep. We are able to have such an interface because we are using the libraries of a scientific visualization software package called apE. Users interested in having this graphical interface should get a copy of apE2.1 **NOW** so that when Aspirin/MIGRAINES version 5.0 is released it can be used with the apE software. The apE software is available from the Ohio Supercomputing Center for a nominal charge (I believe it is now free for educational institutions, but I am not sure). Order forms can be ftp'd from "apE.osgp.osc.edu" (128.146.18.18) in the /pub/doc/info directory. The Good News: 1. The apE software is free (or nearly free). 2. The apE software is a very portable package. 3. The apE software supports many window systems. 4. You get source with the apE software. 5. The apE tool called "wrench" allows graphical programmimg, of a sort, by connecting boxes with data pipes. A neural network compute module (which A/M can automatically generate) can be used in these pipelines with other compute/graphics modules for pre/post processing. 6. We can get out of the computer graphics business. 7. Sexy data displays. 8. ApE is a nice visualization package, and the price is right. The Bad News: 1. You need more software than what comes with the Aspirin/MIGRAINES distribution (although, you can run without any graphics with the supplied software). 2. The apE software is not very fast and uses alot of memory. 3. apE2.1 is a big distribution Other features to expect in version 5.0: 1. Support for more platforms: Sun,SGI,DecStation,IBM RS/6000,Cray,Convex,Meiko,i860 based coprocessors,... 2. New features for Aspirin: - Quadratic connections (allows hyper-elliptical decision surfaces) - Auto-Regressive Nodes (allows each node to have an auto-regressive memory, with tunable feedback weights). 
- New file formats Russell Leighton INTERNET: russ at dash.mitre.org Russell Leighton MITRE Signal Processing Lab 7525 Colshire Dr. McLean, Va. 22102 USA From haussler at saturn.ucsc.edu Tue May 14 17:40:49 1991 From: haussler at saturn.ucsc.edu (David Haussler) Date: Tue, 14 May 91 14:40:49 -0700 Subject: COLT '91 conference program Message-ID: <9105142140.AA27812@saturn.ucsc.edu> CONFERENCE PROGRAM FOR COLT '91 : PLEASE POST AND DISTRIBUTE Workshop on Computational Learning Theory Monday, August 5 through Wednesday, August 7, 1991 University of California, Santa Cruz, California PROGRAM: Sunday, August 4th: Reception, 7:00 - 10:00 pm, Crown Merrill Multi-Purpose Room Monday, August 5th Session 1: 9:00 -- 10:20 Tracking Drifting Concepts Using Random Examples by David P. Helmbold and Philip M. Long Investigating the Distribution Assumptions in the Pac Learning Model by Peter L. Bartlett and Robert C. Williamson Simultaneous Learning and Estimation for Classes of Probabilities by Kevin Buescher and P.R. Kumar Learning by Smoothing: a morphological approach by Michael Woonkyung Kim Session 2: 11:00 -- 12:00 Unifying Bounds on the Sample Complexity of Bayesian Learning Using Information Theory and the VC Dimension by David Haussler, Michael Kearns and Robert E. Schapire Generalization Performance of Bayes Optimal Classification Algorithm for Learning a Perceptron by Manfred Opper and David Haussler Probably Almost Bayes Decisions by Paul Fischer, Stefan Polt, and Hans Ulrich Simon Session 3: 2:00 -- 3:00 Generalization and Learning, invited talk by Tom Cover Session 4: 3:30 -- 4:30 A Geometric Approach to Threshold Circuit Complexity by Vwani Roychowdhury, Kai-Yeung Siu, Alon Orlitsky, and Thomas Kailath Learning Curves in Large Neural Networks by H. Sompolinsky, H.S. Seung, and N. Tishby On the Learning of Infinitary Regular Sets by Oded Maler and Amir Pnueli Impromptu talks: 5:00 -- 6:00 Business Meeting: 8:00 Impromtu talks: 9:00 Tuesday, August 6 Session 5: 9:00 -- 10:20 Learning Monotone DNF with an Incomplete Membership Oracle by Dana Angluin and Donna K. Slonim Redundant Noisy Attributes, Attribute Errors, and Linear-threshold Learning Using Winnow by Nicholas Littlestone Learning in the presence of finitely or infinitely many irrelevant attributes by Avrim Blum, Lisa Hellerstein, and Nick Littlestone On-Line Learning with an Oblivious Environment and the Power of Randomization by Wolfgang Maass Session 6: 11:00 -- 12:00 Learning Monotone k\mu-DNF Formulas on Product Distributions by Thomas Hancock and Yishay Mansour Learning Probabilistic Read-once Formulas on Product Distributions by Robert E. Schapire Learning 2\mu-DNF Formulas and k\mu Decision Trees by Thomas R. Hancock Session 7: 2:00 -- 3:00 Invited talk by Rodney Brooks Session 8: 3:30 -- 4:30 Polynomial-Time Learning of Very Simple Grammars from Positive Data by Takashi Yokomori Relations Between Probabilistic and Team One-Shot Learners by Robert Daley, Leonard Pitt, Mahendran Velauthapillai, Todd Will When Oracles Do Not Help by Theodore A. Slaman and Robert M. Solovay Impromptu talks: 5:00 -- 6:00 Banquet: 6:30 Wednesday, August 7 Session 9: 9:00 -- 10:20 Approximation and Estimation Bounds for Artificial Neural Networks by Andrew R. Barron The VC-Dimension vs. the Statistical Capacity for Two Layer Networks with Binary Weights by Chuanyi Ji and Demetri Psaltis On Learning Binary Weights for Majority Functions by Santosh S. 
Venkatesh Evaluating the Performance of a Simple Inductive Procedure in the Presence of Overfitting Error by Andrew Nobel Session 10: 11:00 -- 12:00 Polynomial Learnability of Probabilistic Concepts with respect to the Kullback-Leibler Divergence by Naoki Abe, Jun-ichi Takeuchi, and Manfred K. Warmuth A Loss Bound Model for On-Line Stochastic Prediction Strategies by Kenji Yamanishi On the Complexity of Teaching by Sally A. Goldman and Michael J. Kearns Session 11: 2:00 -- 3:40 Improved Learning of AC^0 Functions by Merrick L. Furst, Jeffrey C. Jackson, and Sean W Smith Learning Read-Once Formulas over Fields and Extended Bases by Thomas Hancock and Lisa Hellerstein Fast Identification of Geometric Objects with Membership Queries by William J. Bultman and Wolfgang Maass Bounded degree graph inference from walks by Vijay Raghavan On the Complexity of Learning Strings and Sequences by Tao Jiang and Ming Li General Information: The workshop will be held on the UCSC campus, which is hidden away in the redwoods on the Pacific coast of Northern California. We encourage you to come early so that you will have time to enjoy the area. You can arrive on campus as early as Saturday, August 3. You may want to learn wind surfing on Monterey Bay, go hiking in the redwoods at Big Basin Redwoods State Park, see the elephant seals at Ano Nuevo State Park, visit the Monterey Bay Aquarium, or see a play at the Santa Cruz Shakespeare Festival on campus. The workshop is being held in-cooperation with ACM SICACT and SIGART, and with financial support from the Office of Naval Research. 1. Conference and room registration: Forms can be obtained by anonymous FTP, connect to midgard.ucsc.edu and look in the directory pub/colt. Alternatively, send E-mail to "colt at cis.ucsc.edu" for instructins on obtaining the forms by electronic mail. Fill out the forms and return them to us with your payment. It must be postmarked by June 24 and received by July 1 to obtain the early registration rate and guarantee the room. Conference attendance is limited by the available space, and late registrations may need to be returned. 2. Flight tickets: San Jose Airport is the closest, about a 45 minute drive. San Francisco Airport is about an hour and forty-five minutes away, but has slightly better flight connections. The International Travel Bureau (ITB -- ask for Peter) at (800) 525-5233 is the COLT travel agency and has discounts for some non-Saturday flights. 3. Transportation from the airport to Santa Cruz: The first option is to rent a car and drive south from San Jose on 880/17. When you get to Santa Cruz, take Route 1 (Mission St.) north. Turn right on Bay Street and follow the signs to UCSC. Commuters must purchase parking permits for 2.50/day from the parking office or the conference satellite office. Those staying on campus can pick up permits with their room keys. Various van services also connect Santa Cruz with the the San Francisco and San Jose airports. The Santa Cruz Airporter (408) 423-1214 (or (800)-223-4142 from the airport) has regularly scheduled trips (every two hours from 9am until 11pm from San Jose); Over The Hill Transportation (408) 426-4598 and ABC Transportation (408) 662-8177 travel on demand and should drop you off at the dorms. Call these services directly for reservations and prices. Peerless Stages (phone: (408) 423-1800) operates a regularly scheduled bus between the San Jose Airport and Downtown costing 4.30 and taking about an hour and a quarter. 
The number 1 bus serves the campus from the Santa Cruz metro center, ask the driver for the Crown-Merrill apartments. Your arrival : Enter the campus at the main entrance following Bay Street. Follow the main road, Coolidge Drive, up into the woods and continue until the second stop sign. Turn right and go up the hill. If you need a map, send E-mail to Jean (jean at cs.ucsc.edu). This road leads into the Crown/Merrill apartments. The whole route will be marked with signs. When you get to the campus, follow the All Conferences signs. As you enter the redwoods the signs will specify particular conferences, such as the International Dowsing Competition and COLT '91. The COLT '91 signs will lead you to the Crown/Merrill apartments. In the center of the apartment complex you will find the Crown/Merrill satellite office of the Conference Office. They will have your keys, meal cards, parking permits, and lots of information about what to do in Santa Cruz, If you get lost or have questions about your room: Call the Crown/Merrill satellite office at (408) 459-2611 . Someone will be at that number all the time, including Saturday and Sunday night. THE FUN PART The weather in August is mostly sunny with occasional summer fog. Bring T-shirts, slacks, shorts, and a sweater or light jacket, as it cools down at night. For information on the local bus routes and schedules, call the Metro center at (408) 425-8600. You can rent windsurfers and wet suits at Cowell Beach . Sherryl (home (408) 429-5730, message machine (408) 429-6033) should be able to arrange lessons and/or board rentals. The main road that leads into the campus is Bay Street. If you go in the opposite direction, away from campus, you will run into a T-intersection at the ocean at the end of Bay Street. Turn left and stay to the right. The road will lead you down to the Boardwalk. Cowell Beach is at the base of the Dream Inn on your right. If you turn right instead of left at the T-intersection at the bottom of Bay Street, you will be driving along Westcliff Drive overlooking the ocean. The road passes by the lighthouse (where you can watch seals and local surfing pros) and dead-ends at Natural Bridges State Park. Westcliff Drive also offers a wonderful paved walkway/bikeway, about 2 miles long. Big Basin Redwoods State Park is about a 45 minute drive from Santa Cruz and there are buses that leave from the downtown Metro Center. You can hike for hours and hours among giant redwoods on the 80 miles of trails. We recommend Berry Creek Falls (about 6 hours for good hikers), but even a half hour hike is worth it! Some of the tallest coastal redwoods on this planet can be found here: the Mother of the Forest is 101 meters (329 feet) high and is on the short (0.06 mile) Redwood trail. For park information call (408) 338-6132. This is your chance to see some Northern Elephant seals, the largest of the pinnipeds. Ano Nuevo State Park is one of the few places in the world where these seals go on land for breeding and molting (August is molting season). Ano Nuevo is located about 20 miles north of Santa Cruz on the coast (right up Highway 1). The park is open from 8am until sunset, but you should plan on arriving before 3pm to see the Elephant seals. Call Ano Nuevo State Park at (415)879-0595 for more information. At the Monterey Bay Aquarium , you can see Great White sharks, Leopard sharks, sea otters, rays, mollusks, and beautiful coral. 
It's open from 10am to 6pm, and is located about 40 miles south on Highway 1 in Monterey just off of Steinbeck's Cannery Row. For aquarium information call (408) 375-3333. Shakespeare Santa Cruz performances include: "A Midsummer Night's Dream" outside in the redwoods (2pm Saturday and Sunday); "Measure for Measure" (Saturday at 8pm); and "Our Town" (7:30 PM on Sunday). The box office can be reached after July 1 at (408)459-4168 and for general information call (408) 459-2121. Bring swimming trunks, tennis rackets, etc. You can get day passes for $2.50 (East Field House, Physical Education Office) to use the recreation facilities on campus. If you have questions regarding registration or accommodations, contact: Jean McKnight, COLT '91, Dept. of Computer Science, UCSC, Santa Cruz, CA 95064. Her emergency phone number is (408) 459-2303, but she prefers E-mail to jean at cs.ucsc.edu or facsimile at (408) 429-0146. As the program and registration forms are being distributed electronically, please post and/or distribute to your colleagues who might not be on our E-mail list. LaTex versions of the conference information, program, and registration forms can be obtained by anonymous ftp. Connect to midgard.ucsc.edu and look in the directory pub/colt. From gibson_w at maths.su.oz.au Wed May 15 22:58:03 1991 From: gibson_w at maths.su.oz.au (Bill Gibson) Date: Wed, 15 May 91 22:58:03 AES Subject: Job opportunity at Sydney Message-ID: <9105151258.AA02488@c721> We are currently advertising to fill a lectureship in the School of Mathematics and Statistics at the University of Sydney, in Australia. I am a member of a small group of researchers, which includes an experimental neurobiologist, which is working on biological applications of neural networks, with a particular interest in the hippocampus. The School is keen to expand further into the general area of mathematical biology, and this is an opportunity for someone with these interests to obtain a tenurable position. The text of the advertisement follows - I will be happy to provide further information on request. Bill Gibson LECTURER Ref. 17/04 School of Mathematics and Statistics The School has active research groups in pure mathematics (algebra, algebraic geometry, analysis, category theory, group theory), applied mathematics (mathematical modelling in various areas of biology, finance, earth sciences and solar astrophysics) and mathematical statistics (probability, theoretical and applied statistics, neurobiological modelling). Courses in mathematics are given at all undergraduate and postgraduate levels and include computer-based courses. Both research and teaching are supported by a large network of Apollo workstations, including several high performance processors and colour graphics systems. The appointee will have a strong research record in a field related to nonlinear systems and be prepared to teach courses at all levels, including computer-based courses. Research areas such as mathematical biology, neural networks, nonlinear waves and chaos are of particular interest. Appointments to lectureships have the potential to lead to tenure and are usually probationary for three years. Salary: $A33 163 - $A43 096 p.a. Closing: 4 July 1991 From SCHOLTES at ALF.LET.UVA.NL Wed May 15 13:41:00 1991 From: SCHOLTES at ALF.LET.UVA.NL (SCHOLTES) Date: Wed, 15 May 91 13:41 MET Subject: No subject Message-ID: <22C43B9F1A000066@BITNET.CC.CMU.EDU> TR Available on Recurrent Self-Organization in NLP: Kohonen Feature Maps in Natural Language Processing J.C. 
Scholtes
University of Amsterdam

Main points: showing the possibilities of Kohonen feature maps in symbolic applications by pushing self-organization; showing a different technique in connectionist NLP by using only (unsupervised) self-organization. Although the model is tested in an NLP context, the linguistic aspects of these experiments are probably less interesting than the connectionist ones. People requesting a copy should be aware of this.

Abstract

In the 1980s, backpropagation (BP) started the connectionist bandwagon in Natural Language Processing (NLP). Although initial results were good, some critical notes must be made about the blind application of BP. Most such systems add contextual and semantic features manually by structuring the input set. Moreover, these models form only a small subset of the brain structures known from the neural sciences. They do not adapt smoothly to a changing environment and can only learn input/output pairs. Although these disadvantages of the backpropagation algorithm are commonly known and accepted, other more plausible learning algorithms, such as unsupervised learning techniques, are still rare in the field of NLP. The main reason is the sharply increasing complexity of unsupervised learning methods when they are applied in the already complex field of NLP. However, recent efforts at implementing unsupervised language learning have been made, resulting in interesting conclusions (Elman and Ritter). Following this earlier work, a recurrent self-organizing model (based on an extension of the Kohonen feature map), capable of deriving contextual (and some semantic) information from scratch, is presented in detail. The model implements a first step towards an overall unsupervised language learning system. Simple linguistic tasks such as single word clustering (representation on the map), syntactic group formation, derivation of contextual structures, string prediction, grammatical correctness checking, word sense disambiguation and structure assignment are carried out in a number of experiments. The performance of the model is at least as good as that achieved with recurrent backpropagation, and at some points even better (e.g. unsupervised derivation of word classes and syntactic structures). Although preliminary, the first results are promising and show possibilities for other, even more biologically inspired language processing techniques such as real Hebbian, Genetic or Darwinistic models. Forthcoming research must overcome limitations still present in the extended Kohonen model, such as the absence of within-layer learning, restricted recurrence, the lack of look-ahead functions (absence of distributed or unsupervised buffering mechanisms) and limited support for an increased number of layers.

A copy can be obtained by sending an E-mail message to SCHOLTES at ALF.LET.UVA.NL. Please indicate whether you want a hard copy or a postscript file sent to you.

From sontag at control.rutgers.edu Wed May 15 13:04:59 1991
From: sontag at control.rutgers.edu (sontag@control.rutgers.edu)
Date: Wed, 15 May 91 13:04:59 EDT
Subject: Neural nets are universal computing devices
Message-ID: <9105151704.AA02096@control.rutgers.edu>

NEURAL NETS ARE UNIVERSAL COMPUTING DEVICES -- request for comments

We have proved that it is possible to build a recurrent net that simulates a universal Turing machine. We do not use high-order connections, nor do we require an unbounded number of neurons or an "external" memory such as a stack or a tape.
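In concrete terms, the nets in question (spelled out precisely in the bracketed paragraph below) update all processors synchronously through a saturated-linear activation. The fragment below is only a toy illustration of that update rule -- three processors with made-up rational weights, nothing like the actual Turing-machine construction:

from fractions import Fraction

def sat(x):
    # saturated-linear "sigmoid": 0 below 0, identity on [0,1], 1 above 1
    return Fraction(0) if x < 0 else Fraction(1) if x > 1 else x

def step(x, W, b):
    # synchronous update: x_i(t+1) = sat( sum_j W[i][j] * x_j(t) + b[i] )
    return [sat(sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# three processors with placeholder rational weights and biases
W = [[Fraction(1, 2), Fraction(0), Fraction(0)],
     [Fraction(1), Fraction(-1), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(1)]]
b = [Fraction(1, 4), Fraction(0), Fraction(-1, 2)]
x = [Fraction(1, 3), Fraction(0), Fraction(1)]
for t in range(5):
    x = step(x, W, b)
print(x)

Exact rational arithmetic (Python's fractions module here) is the point of the exercise: a single processor whose state is a rational number in [0,1] can hold an unbounded amount of information, which is how the construction dispenses with an external tape.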
The net is of the standard type, with linear interconnections and about 10^6 neurons. There was some discussion in this medium, some time ago, about questions of universality. At that time, besides classical references, mention was made of work by Pollack, Franklin/Garzon, Hartley/Szu, and Sun. It would appear that our conclusion is not contained in any of the above (which assume high-order connections or potentially infinitely many neurons). [More precisely: a ``net'' is as an interconnection of N synchronously evolving processors, each of which updates its state, a rational number, according to x(t+1) = s(...), where the expression inside is a linear combination (with biases) of the previous states of all processors. An "output processor" signals when the net has completed its computation by outputting a "1". The initial data, a natural number, is encoded via a fractional unary representation into the first processor; when the computation is completed, this same processor has encoded in it the result of the computation. (An alternative, which would give an entirely equivalent result, would be to define read-in and read-out maps.) As activation function we pick the simplest possible "sigmoid," namely the saturated-linear function s(x)=x if x is in [0,1], s(x)=0 for x<0, and s(x)=1 for x>1.] We would appreciate all comments/flames/etc about the technical result. (Philosophical discussions about the implications of these types of results have been extensively covered in previous postings.) A technical report is in preparation, and will be posted to connectionists when publicly available. Any extra references that we should be aware of, please let us know. Thanks a lot, -Hava Siegelman and Eduardo Sontag, Depts of Comp Sci and Math, Rutgers University. From blank at copper.ucs.indiana.edu Wed May 15 16:11:09 1991 From: blank at copper.ucs.indiana.edu (doug blank) Date: Wed, 15 May 91 15:11:09 EST Subject: Paper Available: RAAM Message-ID: Exploring the Symbolic/Subsymbolic Continuum: A Case Study of RAAM Douglas S. Blank (blank at iuvax.cs.indiana.edu) Lisa A. Meeden (meeden at iuvax.cs.indiana.edu) James B. Marshall (marshall at iuvax.cs.indiana.edu) Indiana University Computer Science and Cognitive Science Departments Abstract: This paper is an in-depth study of the mechanics of recursive auto-associative memory, or RAAM, an architecture developed by Jordan Pollack. It is divided into three main sections: an attempt to place the symbolic and subsymbolic paradigms on a common ground; an analysis of a simple RAAM; and a description of a set of experiments performed on simple "tarzan" sentences encoded by a larger RAAM. We define the symbolic and subsymbolic paradigms as two opposing corners of an abstract space of paradigms. This space, we propose, has roughly three dimensions: representation, composition, and functionality. By defining the differences in these terms, we are able to place actual models in the paradigm space, and compare these models in somewhat common terms. As an example of the subsymbolic corner of the space, we examine in detail the RAAM architecture, representations, compositional mechanisms, and functionality. In conjunction with other simple feed-forward networks, we create detectors, decoders and transformers which act holistically on the composed, distributed, continuous subsymbolic representations created by a RAAM. 
These tasks, although trivial for a symbolic system, are accomplished without the need to decode a composite structure into its constituent parts, as symbolic systems must do. The paper can be found in the neuroprose archive as blank.raam.ps.Z; a detailed example of how to retrieve the paper follows at the end of this message. A version of the paper will also appear in your local bookstores as a chapter in "Closing the Gap: Symbolism vs Connectionism," J. Dinsmore, editor; LEA, publishers. 1992. ---------------------------------------------------------------------------- % ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server (Ver Tue May 9 14:01 EDT 1989) ready. Name (cheops.cis.ohio-state.edu:): anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get blank.raam.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for blank.raam.ps.Z (173015 bytes). 226 Transfer complete. local: blank.raam.ps.Z remote: blank.raam.ps.Z 173015 bytes received in 1.6 seconds (1e+02 Kbytes/s) ftp> bye 221 Goodbye. % uncompress blank.raam.ps.Z % lpr blank.raam.ps ---------------------------------------------------------------------------- From dhw at t13.Lanl.GOV Thu May 16 11:33:47 1991 From: dhw at t13.Lanl.GOV (David Wolpert) Date: Thu, 16 May 91 09:33:47 MDT Subject: posting of Siegelman and Sontag Message-ID: <9105161533.AA02358@t13.lanl.gov> Dr.'s Siegelman and Sontag, You might be interested in an article of mine which appeared in Complex Systems last year ("A mathematical theory of generalization: part II", pp. 201-249, vol. 4). In it I describe experiments using essentially genetic algorithms to train recurrent nets whose output is signaled when a certain pre-determined node exceeds a threshold (I call this "output flagging"). This sounds very similar to the work you describe. In my work, the training was done in such a way as to minimize a cross-validation error (I call this "self-guessing error" in the paper), and automatically had zero learning error. This strategy was followed to try to achieve good generalization off of the learning set. Also, the individual nodes in the net weren't neurons in the conventional sense, but were rather parameterized input-output surfaces; the training involved changing not only the architecture of the whole net but also the surfaces at the nodes. An interesting advantage of this technique is that it allows you to have a node represent some environmental information, i.e., one of the input-output surfaces can be "hard-wired" and represent something in the environment (e.g., visual data). This allows you to train on one environment and then simply "slot in" another one later; the recurrent net "probes" the environment by sending various input values into this environment node and seeing what comes out. With this technique you don't have to worry about having input neurons represent the environmental data. The paper is part of what is essentially a thesis dump; in hindsight, it is not as well written as I'd like. You can probably safely skip most of the verbiage leading up to the description of the experiments. If you find the paper interesting, but confusing, I'd be more than happy to discuss it with you. 
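The "output flagging" convention described above is easy to state in code. The sketch below is schematic only: random weights, ordinary logistic units and an arbitrary threshold stand in for whatever architecture and training procedure are actually used.

import numpy as np

def run_until_flagged(W, b, x0, flag_unit, threshold=0.5, max_steps=1000):
    # Iterate x <- logistic(W x + b) until the designated flag unit
    # exceeds the threshold; return the final state and the step count.
    x = np.array(x0, dtype=float)
    for t in range(1, max_steps + 1):
        x = 1.0 / (1.0 + np.exp(-(W @ x + b)))
        if x[flag_unit] > threshold:
            return x, t
    return x, max_steps

rng = np.random.default_rng(0)
n = 8                                    # made-up network size
W = rng.normal(scale=1.5, size=(n, n))   # made-up recurrent weights
b = rng.normal(size=n)
state, steps = run_until_flagged(W, b, x0=np.zeros(n), flag_unit=n - 1)
print(steps, state[n - 1])

The only point is the halting convention: the net's answer is read off whenever the flag unit crosses the threshold, rather than after a fixed number of steps.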
Finally, as an unpublished extension of the work in the paper, I've proved that, with output flagging, a continuous-limit recurrent net consisting entirely of linear (!) neurons can mimic a universal computer. Again, this sounds very similar to the work you describe.

Please send me a copy of your paper when it's finished. Thanks.

David Wolpert (dhw at tweety.lanl.gov)

From pollack at cis.ohio-state.edu Thu May 16 17:46:27 1991
From: pollack at cis.ohio-state.edu (Jordan B Pollack)
Date: Thu, 16 May 91 17:46:27 -0400
Subject: Universality
In-Reply-To: Arun Jagota's message of Thu, 16 May 91 13:33:45 EDT <9105161733.AA05965@sybil.cs.Buffalo.EDU>
Message-ID: <9105162146.AA01724@dendrite.cis.ohio-state.edu>

Sontag refers to the Turing machine construction in a chapter of my unpublished thesis. I showed that it is SUFFICIENT to have rational values (to store the unbounded tape), multiplicative connections (to GATE the rational values), and thresholds (to make decisions). My construction was modelled on Minsky's stored program machine construction (using unbounded integers) in his book FINITE AND INFINITE MACHINES. I was not able to formally prove NECESSITY of the conjunction of parts.

My work was misinterpreted as a suggestion that we should build a Turing machine out of neurons, and then use it! The actual reason for this kind of exercise is to make sure we are not missing something as important to neural computing as, say, "Branch-If-Zero" is to conventional machines.

The chapter is available the usual way from neuroprose as pollack.neuring.ps.Z. If "the usual way" is difficult for you, you might try getting the file "Getps" the usual way, and then FTP'ing from neuroprose becomes quite easy.

From kroger at cognet.ucla.edu Sat May 18 22:47:39 1991
From: kroger at cognet.ucla.edu (James Kroger)
Date: Sat, 18 May 91 19:47:39 PDT
Subject: seeking info on using snowflakes diagrams
Message-ID: <9105190247.AA00246@scarecrow.cognet.ucla.edu>

I am posting this on behalf of Judea Pearl at UCLA. I apologize in advance for not summarizing his efforts to date in seeking this info; he didn't enumerate them in his request. If you have any info you would like to share, I'll be happy to relay it. ---Jim Kroger (kroger at cognet.ucla.edu)

----------

I would like to communicate with someone who is actively involved with the application of "snowflakes" diagrams in Neurobiological Signal Analysis. Thanks ----=======Judea

From AC1MPS at primea.sheffield.ac.uk Sat May 18 12:17:00 1991
From: AC1MPS at primea.sheffield.ac.uk (AC1MPS@primea.sheffield.ac.uk)
Date: Sat, 18 May 91 12:17:00
Subject: neural nets as universal computing devices
Message-ID:

Drs Siegelman and Sontag (and anyone else interested)

With regard to your note that "Neural nets are universal computing devices", you might be interested in some work described in a series of technical reports from Bruce MacLennan at Knoxville, Tennessee. MacLennan starts with the observation that when a very large number of processors are involved in some parallel/distributed system, it's sensible to model them and their calculations as if they are 'continuous' in extent and nature. The result is a theory of "field computation", which he uses as a framework for a theory of massively parallel analog computation. MacLennan has, I believe, also been investigating the possibility of designing a "universal" field computer.
If we think of MacLennan's model as the limiting case when the number of nodes/connections in a net becomes enormous, it would be interesting to know whether your universal model 'converges' to his, in some sense.

I would very much appreciate a copy of the technical report when it's available.

Mike Stannett               e-mail:
Formal Methods Group        AC1MPS @ primea.sheffield.ac.uk
Dept of Computer Science
The University
Sheffield S10 2TN
England

From schraudo at cs.UCSD.EDU Mon May 20 15:05:00 1991
From: schraudo at cs.UCSD.EDU (Nici Schraudolph)
Date: Mon, 20 May 91 12:05:00 PDT
Subject: Hertz/Krogh/Palmer bibliography: BibTeX version available
Message-ID: <9105201905.AA10458@beowulf.ucsd.edu>

I've converted the Hertz/Krogh/Palmer neural network bibliography to BibTeX format and uploaded it as hertz.refs.bib.Z to the neuroprose archive (anonymous ftp to cheops.cis.ohio-state.edu, directory pub/neuroprose). Like the original authors, I don't have the time to update or otherwise maintain this bibliography, except to fix obvious bugs. Is there anybody out there who would be willing to organize a continually updated bibliography?

Many thanks to Palmer, Krogh & Hertz for making their work available to us. Their original file (hertz.refs.tex.Z) was updated last Thursday to fix a few minor problems.

Happy citing,
--
Nicol N. Schraudolph, CSE Dept.  | work (619) 534-8187 | nici%cs at ucsd.edu
Univ. of California, San Diego   | FAX (619) 534-7029  | nici%cs at ucsd.bitnet
La Jolla, CA 92093-0114, U.S.A.  | home (619) 273-5261 | ...!ucsd!cs!nici

From Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU Mon May 20 22:24:30 1991
From: Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave.Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Mon, 20 May 91 22:24:30 EDT
Subject: information on NIPS 3
Message-ID: <7601.674792670@DST.BOLTZ.CS.CMU.EDU>

Below is information on the forthcoming NIPS 3 volume (proceedings of the 1990 NIPS conference), which will be available next month from Morgan Kaufmann. There is catalog information, followed by the complete table of contents, followed by ordering information should you wish to purchase a copy. (NIPS authors and attendees will be receiving free copies.) Normally NIPS proceedings come out in April, but this volume turned out to be much larger than the previous ones (because more papers were accepted), resulting in various technical and logistical difficulties which hopefully will not be repeated next year.

-- Dave Touretzky

................................................................

PUBLICATION ANNOUNCEMENT

ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS -3-
Edited by Richard P. Lippmann (M.I.T. Lincoln Labs), John E. Moody (Yale University), and David S. Touretzky (Carnegie Mellon University)
NIPS 3 (1991) ISBN 1-55860-184-8 $49.95 U.S. 1100 pages
MORGAN KAUFMANN PUBLISHERS, INC.

Table of Contents

Neurobiology......................................................1
"Studies of a Model for the Development and Regeneration of Eye-Brain Maps" J.D. Cowan and A.E. Friedman.................................3
"Development and Spatial Structure of Cortical Feature Maps: A Model Study" K. Obermayer, H. Ritter, and K. Schulten....................11
"Interaction Among Ocularity, Retinotopy and On-center/ Off-center Pathways" Shigeru Tanaka..............................................18
"Simple Spin Models for the Development of Ocular Dominance Columns and Iso-Orientation Patches" J.D. Cowan and A.E.
Friedman................................26 "A Recurrent Neural Network Model of Velocity Storage in the Vestibulo-Ocular Reflex" Thomas J. Anastasio.........................................32 "Self-organization of Hebbian Synapses in Hippocampal Neurons" Thomas H. Brown, Zachary F. Mainen, Anthony M. Zador, and Brenda J. Claiborne.........................................39 "Cholinergic Modulation May Enhance Cortical Associative Memory Function" Michael E. Hasselmo, Brooke P. Anderson, and James M. Bower.46 Neuro-Dynamics...................................................53 "Order Reduction for Dynamical Systems Describing the Behavior of Complex Neurons" Thomas B. Kepler, L.F. Abbott, and Eve Marder...............55 "Stochastic Neurodynamics" J.D. Cowan..................................................62 "Dynamics of Learning in Recurrent Feature-Discovery Networks" Todd K. Leen................................................70 "A Lagrangian Approach to Fixed Points" Eric Mjolsness and Willard L. Miranker......................77 "Associative Memory in a Network of Biological Neurons" Wulfram Gerstner............................................84 "CAM Storage of Analog Patterns and Continuous Sequences with 3N\u2\d Weights" Bill Baird and Frank Eeckman................................91 "Connection Topology and Dynamics in Lateral Inhibition Networks" C.M. Marcus, F.R. Waugh, and R.M. Westervelt................98 "Shaping the State Space Landscape in Recurrent Networks" Patrice Y. Simard, Jean Pierre Raysz, and Bernard Victorri.105 "Adjoint-Functions and Temporal Learning Algorithms in Neural Networks" N. Toomarian and J. Barhen....... .........................113 Oscillations....................................................121 "Phase-coupling in Two-Dimensional Networks of Interacting Oscillators" Ernst Niebur, Daniel M. Kammen, Christof Koch, Daniel Ruderman, and Heinz G. Schuster............................123 "Oscillation Onset in Neural Delayed Feedback" Andre Longtin............................................. 130 "Analog Computation at a Critical Point" Leonid Kruglyak and William Bialek.........................137 Temporal Reasoning..............................................145 "Modeling Time Varying Systems Using Hidden Control Neural Architecture" Esther Levin...............................................147 "The Tempo 2 Algorithm: Adjusting Time-Delays By Supervised Learning" Ulrich Bodenhausen and Alex Waibel.........................155 "A Theory for Neural Networks with Time Delays" Bert de Vries and Jose C. Principe.........................162 "ART2/BP Architecture for Adaptive Estimation of Dynamic Processes" Einar Sorheim.............................................169 "Statistical Mechanics of Temporal Association in Neural Networks" Andreas V.M. Herz, Zhaoping Li, and J. Leo van Hemmen......176 "Learning Time-varying Concepts" Anthony Kuh, Thomas Petsche, and Ronald L. Rivest..........183 "The Recurrent Cascade-Correlation Architecture" Scott E. Fahlman...........................................190 Speech..........................................................197 "Continuous Speech Recognition by Linked Predictive Neural Networks" Joe Tebelskis, Alex Waibel, Bojan Petek, and Otto Schmidbauer...........................................199 "A Recurrent Neural Network for Word Identification from Continuous Phoneme Strings" Robert B. Allen and Candace A. 
Kamm........................206 "Connectionist Approaches to the Use of Markov Models for Speech Recognition" Herve Bourlard, Nelson Morgan, and Chuck Wooters...........213 "Spoken Letter Recognition" Mark Fanty and Ronald Cole.................................220 "Speech Recognition Using Demi-Syllable Neural Prediction Model" Ken-ichi Iso and Takao Watanabe............................227 "RecNorm: Simultaneous Normalisation and Classification Applied to Speech Recognition" John S. Bridle and Stephen J. Cox..........................234 "Exploratory Feature Extraction in Speech Signals" Nathan Intrator............................................241 "Phonetic Classification and Recognition Using the Multi-Layer Perceptron" Hong C. Leung, James R. Glass, Michael S. Phillips, and Victor W. Zue.....................................................248 "From Speech Recognition to Spoken Language Understanding" Victor Zue, James Glass, David Goodine, Lynette Hirschman, Hong Leung, Michael Phillips, Joseph Polifroni, and Stephanie Seneff.....................................................255 "Speech Recognition using Connectionist Approaches" Khalid Choukri.............................................262 Signal Processing...............................................271 "Natural Dolphin Echo Recognition Using an Integrator Gateway Network" Herbert L. Roitblat, Patrick W.B. Moore, Paul E. Nachtigall, and Ralph H. Penner........................................273 "Signal Processing by Multiplexing and Demultiplexing in Neurons" David C. Tam...............................................282 "Applications of Neural Networks in Video Signal Processing" John C. Pearson, Clay D. Spence, and Ronald Sverdlove......289 Visual Processing...............................................297 "Discovering Viewpoint-Invariant Relationships That Characterize Objects" Richard S. Zemel and Geoffrey E. Hinton....................299 "A Neural Network Approach for Three-Dimensional Object Recognition" Volker Tresp...............................................306 "A Second-Order Translation, Rotation and Scale Invariant Neural Network" Shelly D.D. Goggin, Kristina M. Johnson, and Karl E. Gustafson..................................................313 "Learning to See Rotation and Dilation with a Hebb Rule" Martin I. Sereno and Margaret E. Sereno....................320 "Stereopsis by a Neural Network Which Learns the Constraints" Alireza Khotanzad and Ying-Wung Lee........................327 "Grouping Contours by Iterated Pairing Network" Amnon Shashua and Shimon Ullman............................335 "Neural Dynamics of Motion Segmentation and Grouping" Ennio Mingolla.............................................342 "A Multiscale Adaptive Network Model of Motion Computation in Primates" H. Taichi Wang, Bimal Mathur, and Christof Koch............349 "Qualitative Structure From Motion" Daphna Weinshall...........................................356 "Optimal Sampling of Natural Images" William Bialek, Daniel L. Ruderman, and A. Zee.............363 "A VLSI Neural Network for Color Constancy" Andrew Moore, John Allman, Geoffrey Fox, and Rodney Goodman....................................................370 "Optimal Filtering in the Salamander Retina" Fred Rieke, W. Geoffrey Owen, and William Bialek...........377 "A Four Neuron Circuit Accounts for Change Sensitive Inhibition in Salamander Retina" Jeffrey L. Teeters, Frank H. Eeckman, and Frank S. 
Werblin.384 "Feedback Synapse to Cone and Light Adaptation" Josef Skrzypek.............................................391 "An Analog VLSI Chip for Finding Edges from Zero-crossings" Wyeth Bair and Christof Koch...............................399 "A Delay-Line Based Motion Detection Chip" Tim Horiuchi, John Lazzaro, Andrew Moore, and Christof Koch..............................................406 Control and Navigation..........................................413 "Neural Networks Structured for Control Application to Aircraft Landing" Charles Schley, Yves Chauvin, Van Henkle, and Richard Golden.............................................415 "Real-time Autonomous Robot Navigation Using VLSI Neural Networks" Lionel Tarassenko, Michael Brownlow, Gillian Marshall, Jon Tombs, and Alan Murray.....................................422 "Rapidly Adapting Artificial Neural Networks for Autonomous Navigation" Dean A. Pomerleau..........................................429 "Learning Trajectory and Force Control of an Artificial Muscle Arm" Masazumi Katayama and Mitsuo Kawato........................436 "Proximity Effect Corrections in Electron Beam Lithography" Robert C. Frye, Kevin D. Cummings, and Edward A. Reitman...443 "Planning with an Adaptive World Model" Sebastian B. Thrun, Knut Moller, and Alexander Linden...........................................450 "A Connectionist Learning Control Architecture for Navigation" Jonathan R. Bachrach.......................................457 "Navigating Through Temporal Difference" Peter Dayan................................................464 "Integrated Modeling and Control Based on Reinforcement Learning" Richard S. Sutton..........................................471 "A Reinforcement Learning Variant for Control Scheduling" Aloke Guha.................................................479 "Adaptive Range Coding" Bruce E. Rosen, James M. Goodwin, and Jacques J. Vidal.....486 "Neural Network Implementation of Admission Control" Rodolfo A. Milito, Isabelle Guyon, and Sara A. Solla.......493 "Reinforcement Learning in Markovian and Non-Markovian Environments" Jurgen Schmidhuber.........................................500 "A Model of Distributed Sensorimotor Control in The Cockroach Escape Turn" R.D. Beer, G.J. Kacmarcik, R.E. Ritzmann, and H.J. Chiel...507 "Flight Control in the Dragonfly: A Neurobiological Simulation" William E. Faller and Marvin W. Luttges....................514 Applications....................................................521 "A Novel Approach to Prediction of the 3-Dimensional Structures" Henrik Fredholm, Henrik Bohr, Jakob Bohr, Soren Brunak, Rodney M.J. Cotterill, Benny Lautrup, and Steffen B. Petersen.....523 "Training Knowledge-Based Neural Networks to Recognize Genes" Michiel O. Noordewier, Geoffrey G. Towell, and Jude W. Shavlik....................................................530 "Neural Network Application to Diagnostics" Kenneth A. Marko...........................................537 "Lg Depth Estimation and Ripple Fire Characterization" John L. Perry and Douglas R. Baumgardt.....................544 "A B-P ANN Commodity Trader" Joseph E. Collard..........................................551 "Integrated Segmentation and Recognition of Hand-Printed Numerals" James D. Keeler, David E. Rumelhart, and Wee-Kheng Leow....557 "EMPATH: Face, Emotion, and Gender Recognition Using Holons" Garrison W. Cottrell and Janet Metcalfe....................564 "SEXNET: A Neural Network Identifies Sex From Human Faces" B.A. Golomb, D.T. Lawrence, and T.J. 
Sejnowski.............572
"A Neural Expert System with Automated Extraction of Fuzzy If-Then Rules" Yoichi Hayashi ..... 578
"Analog Neural Networks as Decoders" Ruth Erlanson and Yaser Abu-Mostafa ..... 585

Language and Cognition ..... 589

"Distributed Recursive Structure Processing" Geraldine Legendre, Yoshiro Miyata, and Paul Smolensky ..... 591
"Translating Locative Prepositions" Paul W. Munro and Mary Tabasko
"A Short-Term Memory Architecture for the Learning of Morphophonemic Rules" Michael Gasser and Chan-Do Lee ..... 605
"Exploiting Syllable Structure in a Connectionist Phonology Model" David S. Touretzky and Deirdre W. Wheeler ..... 612
"Language Induction by Phase Transition in Dynamical Recognizers" Jordan B. Pollack ..... 619
"Discovering Discrete Distributed Representations" Michael C. Mozer ..... 627
"Direct Memory Access Using Two Cues" Janet Wiles, Michael S. Humphreys, John D. Bain, and Simon Dennis ..... 635
"An Attractor Neural Network Model of Recall and Recognition" Eytan Ruppin and Yechezkel Yeshurun ..... 642
"ALCOVE: A Connectionist Model of Human Category Learning" John K. Kruschke ..... 649
"Spherical Units as Dynamic Consequential Regions" Stephen Jose Hanson and Mark A. Gluck ..... 656
"Connectionist Implementation of a Theory of Generalization" Roger N. Shepard and Sheila Kannappan ..... 665

Local Basis Functions ..... 673

"Adaptive Spline Networks" Jerome H. Friedman ..... 675
"Multi-Layer Perceptrons with B-Spline Receptive Field Functions" Stephen H. Lane, Marshall G. Flax, David A. Handelman, and Jack J. Gelfand ..... 684
"Bumptrees for Efficient Function, Constraint, and Classification Learning" Stephen M. Omohundro ..... 693
"Basis-Function Trees as a Generalization of Local Variable Selection Methods" Terence D. Sanger ..... 700
"Generalization Properties of Radial Basis Functions" Sherif M. Botros and Christopher G. Atkeson ..... 707
"Learning by Combining Memorization and Gradient Descent" John C. Platt ..... 714
"Sequential Adaptation of Radial Basis Function Neural Networks" V. Kadirkamanathan, M. Niranjan, and F. Fallside ..... 721
"Oriented Non-Radial Basis Functions for Image Coding and Analysis" Avijit Saha, Jim Christian, D.S. Tang, and Chuan-Lin Wu ..... 728
"Computing with Arrays of Bell-Shaped and Sigmoid Functions" Pierre Baldi ..... 735
"Discrete Affine Wavelet Transforms" Y.C. Pati and P.S. Krishnaprasad ..... 743
"Extensions of a Theory of Networks for Approximation and Learning" Federico Girosi, Tomaso Poggio, and Bruno Caprile ..... 750
"How Receptive Field Parameters Affect Neural Learning" Bartlett W. Mel and Stephen M. Omohundro ..... 757

Learning Systems ..... 765

"A Competitive Modular Connectionist Architecture" Robert A. Jacobs and Michael I. Jordan ..... 767
"Evaluation of Adaptive Mixtures of Competing Experts" Steven J. Nowlan and Geoffrey E. Hinton ..... 774
"A Framework for the Cooperation of Learning Algorithms" Leon Bottou and Patrick Gallinari ..... 781
"Connectionist Music Composition Based on Melodic and Stylistic Constraints" Michael C. Mozer and Todd Soukup ..... 789
"Using Genetic Algorithms to Improve Pattern Classification Performance" Eric I. Chang and Richard P. Lippmann ..... 797
"Evolution and Learning in Neural Networks" Ron Keesing and David G. Stork ..... 804
"Designing Linear Threshold Based Neural Network Pattern Classifiers" Terrence L. Fine ..... 811
"On Stochastic Complexity and Admissible Models for Neural Network Classifiers" Padhraic Smyth ..... 818
"Efficient Design of Boltzmann Machines" Ajay Gupta and Wolfgang Maass ..... 825
"Note on Learning Rate Schedules for Stochastic Optimization" Christian Darken and John Moody ..... 832
"Convergence of a Neural Network Classifier" John S. Baras and Anthony LaVigna ..... 839
"Learning Theory and Experiments with Competitive Networks" Griff L. Bilbro and David E. Van den Bou ..... 846
"Transforming Neural-Net Output Levels to Probability Distributions" John S. Denker and Yann leCun ..... 853
"Back Propagation is Sensitive to Initial Conditions" John F. Kolen and Jordan B. Pollack ..... 860
"Closed-Form Inversion of Backpropagation Networks" Michael L. Rossen ..... 868

Learning and Generalization ..... 873

"Generalization by Weight-Elimination with Application to Forecasting" Andreas S. Weigend, David E. Rumelhart, and Bernardo A. Huberman ..... 875
"The Devil and the Network" Sanjay Biswas and Santosh S. Venkatesh ..... 883
"Generalization Dynamics in LMS Trained Linear Networks" Yves Chauvin ..... 890
"Dynamics of Generalization in Linear Perceptrons" Anders Krogh and John A. Hertz ..... 897
"Constructing Hidden Units Using Examples and Queries" Eric B. Baum and Kevin J. Lang ..... 904
"Can Neural Networks do Better Than the Vapnik-Chervonenkis Bounds?" David Cohn and Gerald Tesauro ..... 911
"Second Order Properties of Error Surfaces" Yann Le Cun, Ido Kanter, and Sara A. Solla ..... 918
"Chaitin-Kolmogorov Complexity and Generalization in Neural Networks" Barak A. Pearlmutter and Ronald Rosenfeld ..... 925
"Asymptotic Slowing Down of the Nearest-Neighbor Classifier" Robert R. Snapp, Demetri Psaltis, and Santosh S. Venkatesh ..... 932
"Remarks on Interpolation and Recognition Using Neural Nets" Eduardo D. Sontag ..... 939
"*-Entropy and the Complexity of Feedforward Neural Networks" Robert C. Williamson ..... 946
"On The Circuit Complexity of Neural Networks" V.P. Roychowdhury, A. Orlitsky, K.Y. Siu, and T. Kailath ..... 953

Performance Comparisons ..... 961

"Comparison of Three Classification Techniques, CART, C4.5 and Multi-Layer Perceptrons" A.C. Tsoi and R.A. Pearson ..... 963
"Practical Characteristics of Neural Network and Conventional Pattern Classifiers" Kenney Ng and Richard P. Lippmann ..... 970
"Time Trials on Second-Order and Variable-Learning-Rate Algorithms" Richard Rohwer ..... 977
"Kohonen Networks and Clustering" Wesley Snyder, Daniel Nissman, David Van den Bout, and Griff Bilbro ..... 984

VLSI ..... 991

"VLSI Implementations of Learning and Memory Systems" Mark A. Holler ..... 993
"Compact EEPROM-based Weight Functions" A. Kramer, C.K. Sin, R. Chu, and P.K. Ko ..... 1001
"An Analog VLSI Splining Network" Daniel B. Schwartz and Vijay K. Samalam ..... 1008
"Relaxation Networks for Large Supervised Learning Problems" Joshua Alspector, Robert B. Allen, Anthony Jayakumar, Torsten Zeppenfeld, and Ronny Meir ..... 1015
"Design and Implementation of a High Speed CMAC Neural Network" W. Thomas Miller, III, Brian A. Box, Erich C. Whitney, and James M. Glynn ..... 1022
"Back Propagation Implementation" Hal McCartor ..... 1028
"Reconfigurable Neural Net Chip with 32K Connections" H.P. Graf, R. Janow, D. Henderson, and R. Lee ..... 1032
"Simulation of the Neocognitron on a CCD Parallel Processing Architecture" Michael L. Chuang and Alice M. Chiang ..... 1039
"VLSI Implementation of TInMANN" Matt Melton, Tan Phan, Doug Reeves, and Dave Van den Bout ..... 1046

Subject Index
Author Index

Advances in Neural Information Processing Systems Bibliography

NIPS 3 (1991)  ISBN 1-55860-184-8  $49.95 U.S.  1100 pages
NIPS 2 (1990)  ISBN 1-55860-100-7  $35.95 U.S.   853 pages
NIPS 1 (1989)  ISBN 1-55860-015-9  $35.95 U.S.   819 pages
Complete three volume set: NIPS 1, 2, & 3  ISBN 1-55860-189-9  $109.00 U.S.

Morgan Kaufmann Publishers, Inc.
_________________________________________________________________

Ordering Information: Shipping is available at cost, plus a nominal handling fee. In the U.S. and Canada, please add $3.50 for the first book and $2.50 for each additional book for surface shipping; for surface shipments to all other areas, please add $6.50 for the first book and $3.50 for each additional book. Air shipment is available outside North America for $45.00 on the first book and $25.00 on each additional book. We accept American Express, Master Card, Visa and personal checks drawn on US banks.

MORGAN KAUFMANN PUBLISHERS, INC.
Department B8
2929 Campus Drive, Suite 260
San Mateo, CA 94403 USA
Phone: (800) 745-7323 (Toll Free for North America) / (415) 578-9928
Fax: (415) 578-0672
email: morgan at unix.sri.com

From SIGUENZA at EMDCCI11.BITNET Tue May 21 16:41:55 1991 From: SIGUENZA at EMDCCI11.BITNET (Juan Alberto Sigenza Pizarro) Date: Tue, 21 May 91 16:41:55 HOE Subject: No subject Message-ID:

**************************************************************
*                      SUMMER SCHOOL                         *
*                                                            *
*  ARTIFICIAL NEURAL NETWORKS: FOUNDATIONS AND APPLICATIONS  *
*                                                            *
**************************************************************

Organizers:
Jose R. Dorronsoro (Instituto de Ingenieria del Conocimiento and Facultad de Ciencias de la Universidad Autonoma de Madrid)
Juan A.
Siguenza (Instituto de Ingenieria del Conocimiento and Facultad de Medicina de la Universidad Autonoma de Madrid)

*******PROGRAMME*******

July 22, 1991
*************
Introductory course on Neural Networks. Vicente Lopez (Instituto de Ingenieria del Conocimiento y Facultad de Ciencias de la UAM)
10:00 1st Session: Introduction to the different architectures.
12:00 2nd Session: Networks based on backpropagation algorithms.
17:00 3rd Session: Practical application exercises.

July 23, 1991
*************
10:00 Global optimization and genetic algorithms. A. Trias (AIA S.A. and Universidad de Barcelona)
12:00 Neurobiological models and artificial neural networks. R. Granger (Bonney Center for the Neurobiology of Learning and Memory, University of California, Irvine)
17:00 Practical demonstrations

July 24, 1991
*************
10:00 Present perspectives for the optical implementation of neural networks. D. Selviah (University College London)
12:00 Neural network learning algorithms. G. Tesauro (IBM, T.J. Watson Research Center, Yorktown

From slehar at park.bu.edu Tue May 21 11:48:38 1991 From: slehar at park.bu.edu (Steve Lehar) Date: Tue, 21 May 91 11:48:38 -0400 Subject: neural nets as universal computing devices In-Reply-To: connectionists@c.cs.cmu.edu's message of 20 May 91 11:26:19 GM Message-ID: <9105211548.AA03404@park.bu.edu>

Have you got a reference for McLennan's "field computation" idea? It sounds very interesting.

From mackay at hope.caltech.edu Tue May 21 13:40:57 1991 From: mackay at hope.caltech.edu (David MacKay) Date: Tue, 21 May 91 10:40:57 PDT Subject: New Bayesian work Message-ID: <9105211740.AA27625@hope.caltech.edu>

Two new papers available
------------------------

The papers that I presented at Snowbird this year are now available in the neuroprose archives. The titles:

[1] Bayesian interpolation (14 pages)
[2] A practical Bayesian framework for backprop networks (11 pages)

The first paper describes and demonstrates recent developments in Bayesian regularisation and model comparison. The second applies this framework to backprop. The first paper is a prerequisite for understanding the second. Abstracts and instructions for anonymous ftp follow. If you have problems obtaining the files by ftp, feel free to contact me.

David MacKay
Office: (818) 397 2805
Fax: (818) 792 7402
Email: mackay at hope.caltech.edu
Smail: Caltech 139-74, Pasadena, CA 91125

Abstracts
---------

Bayesian interpolation
----------------------

Although Bayesian analysis has been in use since Laplace, the Bayesian method of {\em model--comparison} has only recently been developed in depth. In this paper, the Bayesian approach to regularisation and model--comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other problems. Regularising constants are set by examining their posterior probability distribution. Alternative regularisers (priors) and alternative basis sets are objectively compared by evaluating the {\em evidence} for them. `Occam's razor' is automatically embodied by this framework. The way in which Bayes infers the values of regularising constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
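As a rough sketch of the quantities this abstract refers to (written here in generic notation that may differ from the paper's), the effective number of parameters and the evidence-based setting of a regularising constant can be summarised as follows, assuming a data misfit term $E_D$, a regulariser $E_W$ with weight $\alpha$, and a noise level $\beta$:

\begin{align*}
P(w \mid D, \alpha, \beta) \;&\propto\; \exp\bigl(-\beta E_D(w) - \alpha E_W(w)\bigr),\\
\gamma \;&=\; \sum_i \frac{\lambda_i}{\lambda_i + \alpha},
   \qquad \lambda_i \ \text{the eigenvalues of } \beta\,\nabla\nabla E_D \text{ at the most probable } w,\\
\alpha \;&\leftarrow\; \frac{\gamma}{2\,E_W(w_{\mathrm{MP}})},
   \qquad \beta \;\leftarrow\; \frac{N-\gamma}{2\,E_D(w_{\mathrm{MP}})},
\end{align*}

where $N$ is the number of data points and $\gamma$ is the "effective number of parameters determined by the data set" mentioned above. The full treatment, including the evaluation of the evidence itself, is in the papers.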
A practical Bayesian framework for backprop networks ---------------------------------------------------- A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for deletion of weights; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well--determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over--flexible and over--complex architectures. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well--matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. Instructions for obtaining copies by ftp from neuroprose: --------------------------------------------------------- unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get mackay.bayes-interpolation.ps.Z ftp> get mackay.bayes-backprop.ps.Z ftp> quit unix> [then `uncompress' files and lpr them.] From ethem at ICSI.Berkeley.EDU Tue May 21 13:56:40 1991 From: ethem at ICSI.Berkeley.EDU (Ethem Alpaydin) Date: Tue, 21 May 91 10:56:40 PDT Subject: New ICSI TR on incremental learning Message-ID: <9105211756.AA04169@icsib17.Berkeley.EDU> The following TR is available by anonymous net access at icsi-ftp.berkeley.edu (128.32.201.55) in postscript. Instructions to ftp and uncompress follow text. Hard copies may be requested by writing to either of the addresses below: ethem at icsi.berkeley.edu Ethem Alpaydin ICSI 1947 Center St. Suite 600 Berkeley CA 94704-1105 USA ------------------------------------------------------------------------------ GAL: Networks that grow when they learn and shrink when they forget Ethem Alpaydin International Computer Science Institute Berkeley, CA TR 91-032 Abstract Learning when limited to modification of some parameters has a limited scope; the capability to modify the system structure is also needed to get a wider range of the learnable. In the case of artificial neural networks, learning by iterative adjustment of synaptic weights can only succeed if the network designer predefines an appropriate network structure, i.e., number of hidden layers, units, and the size and shape of their receptive and projective fields. This paper advocates the view that the network structure should not, as usually done, be determined by trial-and-error but should be computed by the learning algorithm. Incremental learning algorithms can modify the network structure by addition and/or removal of units and/or links. A survey of current connectionist literature is given on this line of thought. ``Grow and Learn'' (GAL) is a new algorithm that learns an association at one-shot due to being incremental and using a local representation. During the so-called ``sleep'' phase, units that were previously stored but which are no longer necessary due to recent modifications are removed to minimize network complexity. 
The incrementally constructed network can later be fine-tuned off-line to improve performance. Another proposed method, which greatly increases recognition accuracy, is to train a number of networks and vote over their responses. The algorithm and its variants are tested on recognition of handwritten numerals and seem promising, especially in terms of learning speed. This makes the algorithm attractive for on-line learning tasks, e.g., in robotics. The biological plausibility of incremental learning is also discussed briefly.

Keywords: Incremental learning, supervised learning, classification, pruning, destructive methods, growth, constructive methods, nearest neighbor.

--------------------------------------------------------------------------
Instructions to ftp the above-mentioned TR (assuming you are under UNIX and have a postscript printer --- messages in parentheses indicate the system's responses):

ftp 128.32.201.55
(Connected to 128.32.201.55. 220 icsi-ftp (icsic) FTP server (Version 5.60 local) ready. Name (128.32.201.55:ethem):) anonymous
(331 Guest login ok, send ident as password. Password:) (your email address)
(230 Guest login Ok, access restrictions apply. ftp>) cd pub/techreports
(250 CWD command successful. ftp>) bin
(200 Type set to I. ftp>) get tr-91-032.ps.Z
(200 PORT command successful. 150 Opening BINARY mode data connection for tr-91-032.ps.Z (153915 bytes). 226 Transfer complete. local: tr-91-032.ps.Z remote: tr-91-032.ps.Z 153915 bytes received in 0.62 seconds (2.4e+02 Kbytes/s) ftp>) quit
(221 Goodbye.)
(back to Unix)
uncompress tr-91-032.ps.Z
lpr tr-91-032.ps

Happy reading, I hope you'll enjoy it.

From gary at cs.UCSD.EDU Tue May 21 21:27:56 1991 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Tue, 21 May 91 18:27:56 PDT Subject: paper available: Learning the past tense in a recurrent network Message-ID: <9105220127.AA18926@desi.ucsd.edu>

The following paper will appear in the Proceedings of the Thirteenth Annual Meeting of the Cognitive Science Society. It is now available in the neuroprose archive as cottrell.cogsci91.ps.Z.

Learning the past tense in a recurrent network: Acquiring the mapping from meaning to sounds

Garrison W. Cottrell (Computer Science Dept., UCSD, La Jolla, CA) and Kim Plunkett (Inst. of Psychology, University of Aarhus, Aarhus, Denmark)

The performance of a recurrent neural network in mapping a set of plan vectors, representing verb semantics, to associated sequences of phonemes, representing the phonological structure of verb morphology, is investigated. Several semantic representations are explored in an attempt to evaluate the role of verb synonymy and homophony in determining the patterns of error observed in the net's output performance. The model's performance offers several unexplored predictions for developmental profiles of young children acquiring English verb morphology.

To retrieve this from the neuroprose archive type the following:
ftp 128.146.8.62
anonymous
bi
cd pub/neuroprose
get cottrell.cogsci91.ps.Z
quit
uncompress cottrell.cogsci91.ps.Z
lpr cottrell.cogsci91.ps

Thanks again to Jordan Pollack for this great idea for net distribution.

gary cottrell 619-534-6640 Sec'y: 619-534-5288 FAX: 619-534-7029
Computer Science and Engineering C-014
UCSD, La Jolla, Ca.
92093 gary at cs.ucsd.edu (INTERNET)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!gary (USENET)
gcottrell at ucsd.edu (BITNET)

From mclennan at cs.utk.edu Tue May 21 22:07:04 1991 From: mclennan at cs.utk.edu (mclennan@cs.utk.edu) Date: Tue, 21 May 91 22:07:04 -0400 Subject: field computation papers Message-ID: <9105220207.AA00427@maclennan.cs.utk.edu>

There have been several requests for my papers on field computation. In addition to an early paper in the first IEEE ICNN (San Diego, 1987), there are several reports in the neuroprose directory:

maclennan.contincomp.ps.Z -- a short introduction
maclennan.fieldcomp.ps.Z -- the current most comprehensive report
maclennan.csa.ps.Z -- continuous spatial automata

Of course I will be happy to send out hardcopy of these papers or several others not in neuroprose.

Bruce MacLennan
Department of Computer Science
The University of Tennessee
Knoxville, TN 37996-1301
(615) 974-5067
maclennan at cs.utk.edu

Here are the directions for accessing files from neuroprose. Note that there is also in the directory a script called Getps that does all the work.

unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get maclennan.csa.ps.Z
ftp> quit
unix> uncompress maclennan.csa.ps.Z
unix> lpr maclennan.csa.ps (or however you print postscript)

From AC1MPS at primea.sheffield.ac.uk Wed May 22 12:40:37 1991 From: AC1MPS at primea.sheffield.ac.uk (AC1MPS@primea.sheffield.ac.uk) Date: Wed, 22 May 91 12:40:37 Subject: maclennan papers Message-ID:

Dear connectionists,

A number of you have asked for more information on MacLennan's work concerning field computation. He mentions that a number of tech. reports on field computation are stored in the neuroprose archive (he doesn't remember their filenames offhand, but they all mention his name). His e-mail address is currently maclennan @ cs.utk.edu

Technical reports and papers of which I know are as follows:

1987: Technology-independent design of neurocomputers: the universal field computer. Published in Proc. IEEE 1st conf. on neural networks, June 1987.

--------------------------------
Technical reports, Computer Science Department, University of Tennessee, Knoxville, TN 37916.

CS-89-83  Continuous computation: taking massive parallelism seriously  (June 1989)
CS-89-84  Outline of a theory of massively parallel analog computation  (June 1989)
CS-90-100 Field computation: a theoretical framework for massively parallel analog computation, Parts I-IV  (February 1990)
CS-90-121 Continuous spatial automata  (November 1990)
---------------------------------

Best wishes, Mike Stannett.

From fritzke at immd2.informatik.uni-erlangen.de Wed May 22 18:03:46 1991 From: fritzke at immd2.informatik.uni-erlangen.de (B. Fritzke) Date: Wed, 22 May 91 18:03:46 MET DST Subject: TR's available (via ftp) Message-ID: <9105221603.AA01521@faui28.informatik.uni-erlangen.de>

Hi there,

I have just placed two short papers in the Neuroprose Archive at cheops.cis.ohio-state.edu (128.146.8.62) in the directory pub/neuroprose. The files are:

fritzke.cell_structures.ps.Z (to be presented at ICANN-91 Helsinki)
fritzke.clustering.ps.Z (to be presented at IJCNN-91 Seattle)

They both deal with a new self-organizing network based on the model of Kohonen. The first one describes the model and the second one concentrates on an application.

LET IT GROW -- SELF-ORGANIZING FEATURE MAPS WITH PROBLEM DEPENDENT CELL STRUCTURE
Bernd FRITZKE

Abstract: The self-organizing feature maps introduced by T.
Kohonen use a cell array of fixed size and structure. In many cases this array is not able to model a given signal distribution properly. We present a method to construct two-dimensional cell structures during a self-organization process which are specially adapted to the underlying distribution: starting with a small number of cells, new cells are added successively. Thereby signal vectors drawn according to the (usually not explicitly known) probability distribution are used to determine where to insert or delete cells in the current structure. This process leads to problem dependent cell structures which model the given distribution with arbitrarily high accuracy.

UNSUPERVISED CLUSTERING WITH GROWING CELL STRUCTURES
Bernd FRITZKE

Abstract: A Neural Network model is presented which is able to detect clusters of similar patterns. The patterns are n-dimensional real number vectors according to an unknown probability distribution P(X). By evaluating sample vectors according to P(X), a two-dimensional cell structure is gradually built up which models the distribution. Through removal of cells corresponding to areas with low probability density, the structure is then split into several disconnected substructures. Each of them identifies one cluster of similar patterns. Not only the number of clusters is determined but also an approximation of the probability distribution inside each cluster. The accuracy of the cluster description increases linearly with the number of evaluated sample vectors.

Enjoy, Bernd

Bernd Fritzke ----------> e-mail: fritzke at immd2.informatik.uni-erlangen.de
University of Erlangen, CS IMMD II, Martensstr. 3, 8520 Erlangen (Germany)

From dtam at next-cns.neusc.bcm.tmc.edu Wed May 22 20:08:59 1991 From: dtam at next-cns.neusc.bcm.tmc.edu (David C. Tam) Date: Wed, 22 May 91 20:08:59 GMT-0600 Subject: Info on Snowflake diagrams for spike train analysis Message-ID: <9105230208.AA02196@next-cns.neusc.bcm.tmc.edu>

This is a brief summary of the information on "snowflake" diagrams in neurobiological signal analysis, in reply to the request by Judea Pearl (via kroger at cognet.ucla.edu).

The snowflake scatter diagram was one of the spike train analytical methods introduced by Donald Perkel, George Gerstein et al. in the 1970's to analyze the correlation between firing intervals among 3 neurons.

Background: Spike trains are time series of action potentials recorded from biological neurons. Since the firing times of spikes vary from trial to trial (i.e., they jitter in time), the analysis of the timing relationships between the firing of neurons requires specialized statistical methods that deal with pulse codes. The most often used statistic is correlation analysis (which was also developed by Perkel, Gerstein et al., earlier in the 1960's, to analyze spike train data).

Snowflake analysis and correlation analysis are similar in the following ways: Whereas correlation analysis establishes statistics for pair-wise correlation between 2 spike trains (neurons), snowflake analysis establishes statistics for 3-wise correlation among 3 neurons. Whereas correlation analysis establishes statistics for all higher-order firing intervals between neurons, snowflake analysis establishes statistics for only first-order intervals.
The snowflake diagram and the joint-interval histogram are similar in the following ways: Whereas the joint-interval scatter diagram has 2 orthogonal axes (in a 2-D plane) for displaying the adjacent cross-intervals between 2 neurons, the snowflake scatter diagram has 3 axes (each 120 degrees from the others) in a 2-D plane for displaying the adjacent cross-intervals between 3 neurons. They both establish first-order interval statistics.

I worked with Donald Perkel until his death, but George Gerstein is still at the Univ. of Pennsylvania. I have worked on numerous spike train analytical methods including the snowflake diagram. I have also developed other similar spike train analysis techniques, so further detailed questions can be directed to me (David Tam, e-mail: dtam at next-cns.neusc.bcm.tmc.edu) if needed.

Related references:
Perkel, D.H., Gerstein, G.L., Smith, M.S. and Tatton, W.G. (1975) Nerve-impulse patterns: a quantitative display technique for three neurons. Brain Research 100: 271-296.
Gerstein, G.L. and Perkel, D.H. (1972) Mutual temporal relationships among neuronal spike trains. Biophysical Journal 12: 453-473.
Perkel, D.H., Gerstein, G.L. and Moore, G.P. (1967) Neuronal spike trains and stochastic point processes. I. The single spike train. Biophysical Journal 7: 391-418.
Perkel, D.H., Gerstein, G.L. and Moore, G.P. (1967) Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophysical Journal 7: 419-440.
Tam, D.C., Ebner, T.J. and Knox, C.K. (1987) Conditional cross-interval correlation analyses with applications to simultaneously recorded cerebellar Purkinje neurons. Journal of Neurosci. Methods 23: 23-33.

From mike at park.bu.edu Wed May 22 22:54:40 1991 From: mike at park.bu.edu (mike@park.bu.edu) Date: Wed, 22 May 91 22:54:40 -0400 Subject: Bibliography Message-ID: <9105230254.AA13933@fenway.bu.edu>

I would like to compile a fairly large bibliographic database in BibTeX format, including the Krogh bibliographic database. If individuals mail me databases in either refer or BibTeX format, I will make an effort to merge the files and place the result in the neuroprose directory.

-- Michael Cohen (617-353-7857) Email: mike at bucasb.bu.edu
Smail: Center for Adaptive Systems, Boston University, 111 Cummington Street, RM 242, Boston, Mass 02215

From rsun at chaos.cs.brandeis.edu Thu May 23 16:32:23 1991 From: rsun at chaos.cs.brandeis.edu (Ron Sun) Date: Thu, 23 May 91 16:32:23 edt Subject: No subject Message-ID: <9105232032.AA20654@chaos.cs.brandeis.edu>

The following paper will appear in the Proc. of the 13th Annual Conference of the Cognitive Science Society. It is a revised version of an earlier TR entitled "Integrating Rules and Connectionism for Robust Reasoning".

Connectionist Models of Rule-Based Reasoning
Ron Sun
Brandeis University, Computer Science Department
rsun at cs.brandeis.edu

We investigate connectionist models of rule-based reasoning, and show that while such models usually carry out reasoning in exactly the same way as symbolic systems, they have more to offer in terms of commonsense reasoning. A connectionist architecture for commonsense reasoning, CONSYDERR, is proposed to account for common reasoning patterns and to remedy the brittleness problem in traditional rule-based systems. A dual representational scheme is devised, which utilizes both localist and distributed representations and explores the synergy resulting from the interaction between the two.
CONSYDERR is therefore capable of accounting for many difficult patterns in commonsense reasoning. This work shows that connectionist models of reasoning are not just ``implementations" of their symbolic counterparts, but better computational models of commonsense reasoning.

------------ FTP procedures ------------------------- (thanks to the service provided by Jordan Pollack)

ftp cheops.cis.ohio-state.edu
>name: anonymous
>password: neuron
>binary
>cd pub/neuroprose
>get sun.cogsci91.ps.Z
>quit
uncompress sun.cogsci91.ps.Z
lpr sun.cogsci91.ps

From schraudo at cs.UCSD.EDU Thu May 23 20:11:48 1991 From: schraudo at cs.UCSD.EDU (Nici Schraudolph) Date: Thu, 23 May 91 17:11:48 PDT Subject: hertz.refs.bib.Z -- key change Message-ID: <9105240011.AA09866@beowulf.ucsd.edu>

I've added the prefix "HKP:" (for Hertz, Krogh & Palmer) to all citation keys in hertz.refs.bib.Z, and uploaded the new version to pub/neuroprose. The prefix prevents key clashes when several BibTeX files are searched together (as in "\bibliography{recent.work,hertz.refs,own.papers}").

-- Nicol N. Schraudolph, CSE Dept. | work (619) 534-8187 | nici%cs at ucsd.edu
Univ. of California, San Diego | FAX (619) 534-7029 | nici%cs at ucsd.bitnet
La Jolla, CA 92093-0114, U.S.A. | home (619) 273-5261 | ...!ucsd!cs!nici

From erol at ehei.ehei.fr Thu May 23 12:33:53 1991 From: erol at ehei.ehei.fr (Erol Gelenbe) Date: Thu, 23 May 91 16:35:53 +2 Subject: Technical report on learning in recurrent networks Message-ID: <9105240700.AA21235@corton.inria.fr>

You may obtain a hard copy of the following tech report by sending me e-mail:

Learning in the Recurrent Random Network
by Erol Gelenbe
EHEI, 45 rue des Saints-Peres, 75006 Paris

This paper describes an "exact" learning algorithm for the recurrent random network model (see E. Gelenbe in Neural Computation, Vol 2, No 2, 1990). The algorithm is based on the delta rule for updating the network weights. Computationally, each step requires the solution of n non-linear equations (solved in time Kn where K is a constant) and 2n linear equations for the derivatives. Thus it is of O(n**3) complexity, where n is the number of neurons.

From bap at james.psych.yale.edu Fri May 24 09:14:54 1991 From: bap at james.psych.yale.edu (Barak Pearlmutter) Date: Fri, 24 May 91 09:14:54 -0400 Subject: Technical report on learning in recurrent networks In-Reply-To: Erol Gelenbe's message of Thu, 23 May 91 16:35:53 +2 <9105240700.AA21235@corton.inria.fr> Message-ID: <9105241314.AA26892@james.psych.yale.edu>

I would appreciate a copy. Thanks,

Barak Pearlmutter
Department of Psychology
P.O. Box 11A Yale Station
New Haven, CT 06520-7447

From jbarnden at NMSU.Edu Fri May 24 14:48:40 1991 From: jbarnden at NMSU.Edu (jbarnden@NMSU.Edu) Date: Fri, 24 May 91 12:48:40 MDT Subject: a book Message-ID: <9105241848.AA13844@NMSU.Edu>

CONNECTIONIST BOOK ANNOUNCEMENT
===============================

Barnden, J.A. & Pollack, J.B. (Eds). (1991). Advances in Connectionist and Neural Computation Theory, Vol. 1: High Level Connectionist Models. Norwood, N.J.: Ablex Publishing Corp.

------------------------------------------------
ISBN 0-89391-687-0
Location index QA76.5.H4815 1990
389 pp. Extensive subject index.
Cost $34.50 for individuals and course adoption.
For more information: jbarnden at nmsu.edu, pollack at cis.ohio-state.edu
------------------------------------------------

MAIN CONTENTS:

David Waltz -- Foreword
John A. Barnden & Jordan B. Pollack -- Introduction: problems for high level connectionism
David S. Touretzky -- Connectionism and compositional semantics
Michael G. Dyer -- Symbolic NeuroEngineering for natural language processing: a multilevel research approach
Lawrence Bookman & Richard Alterman -- Schema recognition for text understanding: an analog semantic feature approach
Eugene Charniak & Eugene Santos -- A context-free connectionist parser which is not connectionist, but then it is not really context-free either
Wendy G. Lehnert -- Symbolic/subsymbolic sentence analysis: exploiting the best of two worlds
James Hendler -- Developing hybrid symbolic/connectionist models
John A. Barnden -- Encoding complex symbolic data structures with some unusual connectionist techniques
Mark Derthick -- Finding a maximally plausible model of an inconsistent theory
Lokendra Shastri -- The relevance of connectionism to AI: a representation and reasoning perspective
Joachim Diederich -- Steps toward knowledge-intensive connectionist learning
Garrison W. Cottrell & Fu-Sheng Tsung -- Learning simple arithmetic procedures
Jiawei Hong & Xiaonan Tan -- The similarity between connectionist and other parallel computation models
Lawrence Birnbaum -- Complex features in planning and understanding: problems and opportunities for connectionism
Jordan Pollack & John Barnden -- Conclusion

From tap at ai.toronto.edu Fri May 24 16:48:42 1991 From: tap at ai.toronto.edu (Tony Plate) Date: Fri, 24 May 1991 16:48:42 -0400 Subject: techreport/preprint available Message-ID: <91May24.164846edt.785@neuron.ai.toronto.edu>

** Please do not forward to other newsgroups **

The following tech report is available by ftp from the neuroprose archive at cheops.cis.ohio-state.edu. It is an expanded version of the paper "Holographic Reduced Representations: Convolution Algebra for Compositional Distributed Representations" which is to appear in the Proceedings of the 12th International Joint Conference on Artificial Intelligence (1991).

Holographic Reduced Representations

Tony Plate
Department of Computer Science, University of Toronto
Toronto, Ontario, Canada, M5S 1A4
tap at ai.utoronto.ca

Technical Report CRG-TR-91-1
May 1991

Abstract

A solution to the problem of representing compositional structure using distributed representations is described. The method uses circular convolution to associate items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, frames, and reduced representations can be compressed into a fixed width vector. These representations are items in their own right, and can be used in constructing compositional structures. The noisy reconstructions given by convolution memories can be cleaned up by using a separate associative memory that has good reconstructive properties.

Three appendices are attached. The first discusses some of the mathematical properties of convolution memories. The second gives a more intuitive explanation of convolution memories and explores the relationship between approximate and exact inverses to the convolution operation. The third contains examples of calculations of the capacities and recall probabilities for convolution memories.

Here's what to do to get the file from neuroprose:

unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get plate.hrr.ps.Z
ftp> quit
unix> uncompress plate.hrr.ps.Z
unix> lpr plate.hrr.ps (or however you print postscript)

If you are unable to get the file in this way, or have trouble printing it, mail me (tap at ai.utoronto.ca), and I can send a hardcopy.
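For readers who want a concrete feel for circular-convolution binding before fetching the report, here is a minimal toy sketch in NumPy (my own illustration, not Plate's code; the vector dimension, the random item vectors, the FFT-based convolution and the involution-based approximate inverse are standard choices made for this sketch rather than anything prescribed by the tech report):

import numpy as np

def bind(a, b):
    # Circular convolution, computed via FFTs: the "binding" operation.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def approx_inverse(a):
    # Involution a*[i] = a[-i mod n], an approximate inverse under circular convolution.
    return np.concatenate(([a[0]], a[1:][::-1]))

def unbind(trace, cue):
    # Convolving the trace with the approximate inverse of the cue gives a
    # noisy reconstruction of whatever was bound to that cue.
    return bind(trace, approx_inverse(cue))

n = 512
rng = np.random.default_rng(0)
# Items are random vectors with elements drawn from N(0, 1/n).
role, filler, other_role, other_filler = rng.normal(0.0, 1.0 / np.sqrt(n), (4, n))

# Two role/filler bindings superimposed in a single fixed-width vector.
trace = bind(role, filler) + bind(other_role, other_filler)

# Decoding is noisy; a clean-up step (here: nearest neighbour by dot product
# over the known items) recovers the exact item.
noisy = unbind(trace, role)
items = {"filler": filler, "other_filler": other_filler}
print(max(items, key=lambda name: np.dot(noisy, items[name])))   # expected: filler

The last step stands in for the separate associative clean-up memory mentioned in the abstract; the convolution trace itself only ever returns a noisy version of the bound item.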
---------------- Tony Plate ---------------------- tap at ai.utoronto.ca ----- Department of Computer Science, University of Toronto, 10 Kings College Road, Toronto, Ontario, CANADA M5S 1A4 ---------------------------------------------------------------------------- From nzt at research.att.com Sat May 25 09:50:38 1991 From: nzt at research.att.com (nzt@research.att.com) Date: Sat, 25 May 91 09:50:38 EDT Subject: Preprints on Statistical Mechanics of Learning Message-ID: <9105251350.AA16962@minos.att.com> The following preprints are available by ftp from the neuroprose archive at cheops.cis.ohio-state.edu. 1. Statistical Mechanics of Learning from Examples I: General Formulation and Annealed Approximation 2. Statistical Mechanics of Learning from Examples II: Quenched Theory and Unrealizable Rules by: Sebastian Seung, Haim Sompolinsky, and Naftali Tishby This is a two part detailed analytical and numerical study of learning curves in large neural networks, using techniques of equilibrium statistical mechanics. Abstract - Part I Learning from examples in feedforward neural networks is studied using equilibrium statistical mechanics. Two simple approximations to the exact quenched theory are presented: the high temperature limit and the annealed approximation. Within these approximations, we study four models of perceptron learning of realizable target rules. In each model, the target rule is perfectly realizable because it is another perceptron of identical architecture. We focus on the generalization curve, i.e. the average generalization error as a function of the number of examples. The case of continuously varying weights is considered first, for both linear and boolean output units. In these two models, learning is gradual, with generalization curves that asymptotically obey inverse power laws. Two other model perceptrons, with weights that are constrained to be discrete, exhibit sudden learning. For a linear output, there is a first-order transition occurring at low temperatures, from a state of poor generalization to a state of good generalization. Beyond the transition, the generalization curve decays exponentially to zero. For a boolean output, the first order transition is to perfect generalization at all temperatures. Monte Carlo simulations confirm that these approximate analytical results are quantitatively accurate at high temperatures and qualitatively correct at low temperatures. For unrealizable rules the annealed approximation breaks down in general, as we illustrate with a final model of a linear perceptron with unrealizable threshold. Finally, we propose a general classification of generalization curves in models of realizable rules. Abstract - Part II Learning from examples in feedforward neural networks is studied using the replica method. We focus on the generalization curve, which is defined as the average generalization error as a function of the number of examples. For smooth networks, i.e. those with continuously varying weights and smooth transfer functions, the generalization curve is found to asymptotically obey an inverse power law. This implies that generalization curves in smooth networks are generically gradual. In contrast, for discrete networks, discontinuous learning transitions can occur. We illustrate both gradual and discontinuous learning with four single-layer perceptron models. In each model, a perceptron is trained on a perfectly realizable target rule, i.e. a rule that is generated by another perceptron of identical architecture. 
The replica method yields results that are qualitatively similar to the approximate results derived in Part I for these models. We study another class of perceptron models, in which the target rule is unrealizable because it is generated by a perceptron of mismatched architecture. In this class of models, the quenched disorder inherent in the random sampling of the examples plays an important role, yielding generalization curves that differ from those predicted by the simple annealed approximation of Part I. In addition, this disorder leads to the appearance of equilibrium spin glass phases, at least at low temperatures. Unrealizable rules also exhibit the phenomenon of overtraining, in which training at zero temperature produces inferior generalization to training at nonzero temperature.

Here's what to do to get the files from neuroprose:

unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get tishby.sst1.ps.Z
ftp> get tishby.sst2.ps.Z
ftp> quit
unix> uncompress tishby.sst*
unix> lpr tishby.sst* (or however you print postscript)

Sebastian Seung, Haim Sompolinsky, Naftali Tishby
----------------------------------------------------------------------------

From schmidhu at kiss.informatik.tu-muenchen.de Mon May 27 05:22:10 1991 From: schmidhu at kiss.informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Mon, 27 May 1991 11:22:10 +0200 Subject: Technical report on learning in recurrent networks Message-ID: <9105270922.AA03776@kiss.informatik.tu-muenchen.de>

Yes, send me your random recurrent network papers.

Juergen Schmidhuber
Institut fuer Informatik, Technische Universitaet Muenchen
Arcisstr. 21, 8000 Muenchen 2, GERMANY

From hwang at pierce.ee.washington.edu Mon May 27 11:04:31 1991 From: hwang at pierce.ee.washington.edu ( J. N. Hwang) Date: Mon, 27 May 91 08:04:31 PDT Subject: deadline extension of IJCNN'91 Singapore Message-ID: <9105271504.AA09946@pierce.ee.washington.edu.>

If you are inclined to wait until the last minute to submit conference papers, you will be happy to learn that the deadline for submission of papers to IJCNN'91 Singapore has been extended to June 30, 1991.

IJCNN'91 Publicity Committee

---------------------------------------------------------------------
IJCNN'91 SINGAPORE, CALL FOR PAPERS

CONFERENCE: The IEEE Neural Network Council and the International Neural Network Society (INNS) invite all persons interested in the field of Neural Networks to submit FULL PAPERS for possible presentation at the conference.

FULL PAPERS: must be received by "June 30", 1991. All submissions will be acknowledged by mail. Authors should submit their work via Air Mail or Express Courier so as to ensure timely arrival. Papers will be reviewed by senior researchers in the field, and all accepted papers will be published in full in the conference proceedings. The conference will host tutorials on Nov. 18, with tours probably arranged on Nov. 17 and Nov. 22, 1991. Conference sessions will be held from Nov. 19-21, 1991. Proposals for tutorial speakers & topics should be submitted to Professor Toshio Fukuda (address below) by Nov. 15, 1990.

TOPICS OF INTEREST: original, basic and applied papers in all areas of neural networks & their applications are being solicited.
FULL PAPERS may be submitted for consideration as oral or poster presentation in (but not limited to) the following sessions:

-- Associative Memory
-- Sensation & Perception
-- Electrical Neurocomputers
-- Sensormotor Control System
-- Image Processing
-- Supervised Learning
-- Invertebrate Neural Networks
-- Unsupervised Learning
-- Machine Vision
-- Neuro-Physiology
-- Neurocognition
-- Hybrid Systems (AI, Neural Networks, Fuzzy Systems)
-- Neuro-Dynamics
-- Optical Neurocomputers
-- Mathematical Methods
-- Optimization
-- Applications
-- Robotics

AUTHORS' SCHEDULE:
Deadline for submission of FULL PAPERS (camera ready): June 30, 1991
Notification of acceptance: Aug. 31, 1991

SUBMISSION GUIDELINES: Eight copies (One original and seven photocopies) are required for submission. Do not fold or staple the original, camera ready copy. Papers of no more than 6 pages, including figures, tables and references, should be written in English and only complete papers will be considered. Papers must be submitted camera-ready on 8 1/2" x 11" white bond paper with 1" margins on all four sides. They should be prepared by typewriter or letter quality printer in one-column format, single-spaced or similar type of 10 points or larger and should be printed on one side of the paper only. FAX submissions are not acceptable. Centered at the top of the first page should be the complete title, author name(s), affiliation(s) and mailing address(es). This is followed by a blank space and then the abstract, up to 15 lines, followed by the text.

In an accompanying letter, the following must be included:
-- Corresponding author: Name, Mailing Address, Telephone & FAX number
-- Presenter: Name, Mailing Address, Telephone & FAX number
-- Presentation preferred: Oral or Poster
-- Technical Session: 1st Choice, 2nd Choice

FOR SUBMISSION FROM JAPAN, SEND TO:
Professor Toshio Fukuda
Programme Chairman IJCNN'91 SINGAPORE
Dept. of Mechanical Engineering
Nagoya University, Furo-cho, Chikusa-Ku
Nagoya 464-01 Japan
(FAX: 81-52-781-9243)

FOR SUBMISSION FROM USA, SEND TO:
Ms Nomi Feldman
Meeting Management
5565 Oberlin Drive, Suite 110
San Diego, CA 92121
(FAX: 81-52-781-9243)

FOR SUBMISSION FROM REST OF THE WORLD, SEND TO:
Dr. Teck-Seng, Low
IJCNN'91 SINGAPORE
Communication Intl Associates Pte Ltd
44/46 Tanjong Pagar Road
Singapore 0208
(TEL: (65) 226-2838, FAX: (65) 226-2877, (65) 221-8916)
For more information: jbarnden at nmsu.edu, pollack at cis.ohio-state.edu ------------------------------------------------ From gary at cs.UCSD.EDU Tue May 28 21:16:08 1991 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Tue, 28 May 91 18:16:08 PDT Subject: list mixup Message-ID: <9105290116.AA25934@desi.ucsd.edu> There was a problem here of an undergraduate inadvertently adding the connectionists mailing list to our "talks" mailing list. The way it works here, anyone can do this. I have remedied the problem. Aplogies from UCSD, and you should not see any more announcements of linguistics colloquia at UCSD! g. From white at teetot.acusd.edu Wed May 29 14:51:32 1991 From: white at teetot.acusd.edu (Ray White) Date: Wed, 29 May 91 11:51:32 -0700 Subject: No subject Message-ID: <9105291851.AA03981@teetot.acusd.edu> This notice is to announce a short paper which will be presented at IJCNN-91 Seattle. COMPETITIVE HEBBIAN LEARNING Ray H. White Departments of Physics and Computer Science University of San Diego Abstract Of crucial importance for applications of unsupervised learning to systems of many nodes with a common set of inputs is how the nodes may be trained to collectively develop optimal response to the input. In this paper Competitive Hebbian Learning, a modified Hebbian-learning rule, is introduced. In Competitive Hebbian Learning the change in each connection weight is made proportional to the product of node and input activities multiplied by a factor which decreases with increasing activity on the other nodes. The individual nodes learn to respond to different components of the input activity while collectively developing maximal response. Several applications of Competitive Hebbian Learning are then presented to show examples of the power and versatility of this learning algorithm. This paper has been placed in Jordan Pollack's neuroprose archive at Ohio State, and may be retrieved by anonymous ftp. The title of the file there is white.comp-hebb.ps.Z and it may be retrieved by the usual procedure: local> ftp cheops.cis.ohio-state.edu (or ftp 128.146.8.62) Name(128.146.8.62:xxx) anonymous password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get white.comp-hebb.ps.Z ftp> quit local> uncompress white.comp-hebb.ps.Z local> lpr -P(your_local_postscript_printer) white.comp-hebb.ps Ray White (white at teetot.acusd.edu or white at cogsci.ucsd.edu) From CORTEX at buenga.bu.edu Thu May 30 13:01:00 1991 From: CORTEX at buenga.bu.edu (CORTEX@buenga.bu.edu) Date: Thu, 30 May 1991 13:01 EDT Subject: ANNOUNCEMENT: NEW BOOK Message-ID: <06F43BB160200576@buenga.bu.edu> ____________________________________________ COMPUTATIONAL-NEUROSCIENCE BOOK ANNOUNCEMENT -------------------------------------------- FROM THE RETINA TO THE NEOCORTEX SELECTED PAPERS OF DAVID MARR Edited by Lucia Vaina (1991) Distributer: Birkhauser Boston ________________________________________________________________________ ISBN 0-8176-3472-X ISBN 3-7643-3472-X Cost: $49 For more information: DORAN at SPINT.Compuserve.COM To order the book: call George Adelman at Birkhauser: (617) 876-2333. ________________________________________________________________________ MAIN CONTENTS: the book contains papers by David Marr, which are placed in the framework of current computational neuroscience by leaders in each of the subfields represented by these papers. (1) Early Papers 1. A Theory of Cerebellar Cortex [1969] with commentary by Thomas Thach 2. How the Cerebellum May be Used (with S. Blomfield) [1970] with commentary by Jack D. 
Cowan 3. Simple Memory: A Theory of Archicortex [1971] with commentaries by David Willshaw & Bruce McNaughton 4. A Theory of Cerebral Neocortex [1970] with commentary by Jack D. Cowan 5. A Computation of Lightness by the Primate Retina [1974] with commentary by Norberto M. Grzywacz (2) Binocular Depth Perception 6. A Note on the Computation of Binocular Disparity in a Symbolic, Low-Level Visual Processor [1974] 7. Cooperative Computation of Stereo Disparity (with T.Poggio) [1976] 8. Analysis of a Cooperative Stereo Algorithm (with G.Palm, T.Poggio) [1978] 9. A Computational Theory of Human Stereo Vision (with T. Poggio) [1979] with commentary on Binocular Depth Perception by Ellen C. Hildreth and W. Eric L. Grimson (3) David Marr: A Pioneer in Computational Neuroscience by Terrence J. Sejnowski (4) Epilogue: Remembering David Marr by former students and colleagues: Peter Rado, Tony Pay, G.S. Brindley, Benjamin Kaminer, Francis H. Crick, Whitman Richards, Tommy Poggio, Shimon Ullman, Ellen Hildreth. From gk at thp.Uni-Koeln.DE Fri May 31 07:09:38 1991 From: gk at thp.Uni-Koeln.DE (gk@thp.Uni-Koeln.DE) Date: Fri, 31 May 91 13:09:38 +0200 Subject: No subject Message-ID: <9105311109.AA17022@sun0.thp.Uni-Koeln.DE> ************* Please DO NOT post to other mailing list ****************** ************* Please DO NOT post to other mailing list ****************** Neural Network Simulations on the European Teraflop Computer Dear Colleagues, Scientist in Europe are actively seeking to build a Teraflop computer, i.e., a computer which can perform a million million floating point operations per second. In the USA there is a similar effort, however, the use of that machine is to be limited to the small group of physicist working on problems in Lattice Gauge Theory. The Steering Committee of the European Teraflop Initiative has decided to take a much broader view of scientific computing, and at its Geneva meeting of May 16 asked for proposals from other scientific fields. In particular, they asked if the group of researchers working under the broad heading of "neural networks" would be interested in using such a machine. Now, the current time scale for building this machine with the speed and memory of a 1000 Crays is 3-5 years, however, the architecture, which is not yet decided upon, will be influenced by the preliminary proposals put forward in the next few months. And so my question is, do we need Teraflop Computers for ``neural network '' research? In particular: 1) Who wants later (if at all) do simulations on such a machine? 2) Who wants to cooperate in the planning of the simulations ? 3) Which models and problems should be selected (biological systems or applications oriented problems)? 4) What types of algorithms would such simulations require? 5) Should we drop this plan because of competition in the same simulations from: (i) special purpose computers, (ii) ``secret'' industrial research, ...? Presumably the project would require a long planning period, heavy competition from other fields of research, and the willingness to program in a language fitted to the computer and not to us. Please send me your comments on these questions, preferably by e-mail, if you are interested. Please pass this announcement on to other researchers. 
Yours sincerely Gregory Kohring Institute for Theoretical Physics, Cologne University, D-5000 Koln 41, Germany, fax +49 221 470 5159 e-mail: gk at thp.uni-koeln.de ************* Please DO NOT post to other mailing list ****************** ************* Please DO NOT post to other mailing list ****************** From DUDZIAKM at isnet.inmos.COM Fri May 31 16:51:03 1991 From: DUDZIAKM at isnet.inmos.COM (MJD / NEURAL NETWORK R&D / SGS-THOMSON MICROELECTRONICS USA) Date: Fri, 31 May 91 14:51:03 MDT Subject: Posting FYI - I have tested this out and it is quite robust Message-ID: <29292.9105312051@inmos-c.inmos.com> PRESS RELEASE AND CORPORATION INTRODUCES FIRST HOLOGRAPHICALLY BASED NEUROCOMPUTING SYSTEM AND CORPORATION 4 Hughson St. Suite 305 Hamilton, Ontario Canada L8N 3Z1 phone (416) 570 0525 fax (416) 570 0498 AND Corporation based in Canada has developed a new technology related to the current field of artificial neural systems. This new field is referred to as holographic neural technology. The operational basis stems from holographic principles in the superposition or "enfolding" of information by convolution of complex vectors. An analogous, albeit far more limited, process occurs within interacting electromagnetic fields in the generation of optical holograms. Information as dealt with in the neural system represents analog stimulus-response patterns and the holographic process permits one to superimpose or enfold very large numbers of such patterns or analog mappings onto a single neuron cell. Analog stimulus-response associations are learned in one non- iterative transformation on exposing the neuron cell to a stimulus and desired response data field. Similarly, decoding or expression of a response is performed in one non-iterative transformation. The process exhibits the property of non-disturbance whereby previously learned mappings are minimally corrupted or influenced by subsequent learning. During learning of associations, the holographic neural process generates a true deterministic mapping of analog stimulus input to desired analog response. Large sets of such stimulus-response mappings are enfolded onto the identically same correlation set (array of complex vectors) and may be controlled in such a manner that these mappings have modifiable properties of generalization. Specifically, this generalization characteristic refers to a stimulus-response mapping in which input states circumscribed within a modifiable region of the stimulus locus will accurately regenerate the associated analog response. In addition, neuron cells may be configured to exhibit a range of dynamic memory profiles extending from short to long term memory. This feature applies variable decay within the correlation sets whereby encoded stimulus-response mappings may be attenuated at controlled rates. A single neuron cell employing the holographic principle displays vastly increased capabilities over much larger ANS networks employing standard gradient descent methods. AND Corporation has constructed an applications development system based on this new technology it has called the HNeT system (for Holographic NeTwork). This system executes within an INMOS transputer hardware platform. The HNeT development system permits the user to design an entire neural configuration which remains resident and executes within an INMOS transputer based co-processing board. 
The neural engine executes concurrently with the host resident application program, the host processor providing essentially I/O and operator interface services. Internally the neural engine is structured by the user as a configuration of cells having data field flow paths established between these cells. Data fields within the holographic system are structured as matrices of complex values representing analog stimulus and response data fields. The manner in which data sets, normally expressed as real-numbered values in the external domain, are mapped to complex data fields within the neural engine is not of necessary importance to the neural system designer as data transfer functions provided within the HNeT development system perform these data field conversions automatically. An extensive set of 'C' neural library routines provide the user flexibility in configuring up to 16K cells per transputer and 64K synaptic inputs per cell (memory limited). Functions for configuring these cells within the neural engine are provided within the HNeT library, and may be grouped into the following general categories: Neural cells - These cell types form the principle operational component within the neural engine, employing the holographic neural process for single pass encoding (learning) and decoding (expression) of analog stimulus-response associations. These cells generate the correlation sets or matrices which store the enfolded stimulus-response mappings. Operator cells - Cells may also be configured within the neural engine to perform a wide variety of complex vector transform operations and data manipulation over data fields. These cells principally perform preprocessing operations on data fields fed into neural cells. Input cells - These cells operate essentially as buffers within the neural engine for data transferred between the host application program and the transputer resident neural configuration. This category of cells may also be used to facilitate recurrent data flow structures within the neural engine. In configuring the neural engine the user has the flexibility of constructing a wide range of cell types within any definable configuration, and specifying any data flow path between these diverse cell types. Configuration of the neural engine is simple and straightforward using the programming convention provided for the HNeT neural development system. For instance, a powerful configuration comprised of two inputs cells and one neural cell (cortex), capable of encoding large numbers of analog stimulus- response mappings (potentially >> 64K mappings), can be configured using three function calls . i.e. stim = receptor(256,255); des_resp = buffer(1,1); output = cortex(des_resp, stim, ENDLIST); The above 'C' application code configures one cortex cell type within the neural engine receiving a stimulus field of 256 by 255 elements (stim), and returns a label to the output data field containing the generated response value (output) for that cell. This configuration may be set into operation using an execute instruction to perform either decoding only, or both decoding/encoding functions concurrently. The cortex cell has defined within its function variable list two input data fields, that is the stimulus field (stim) and the desired response data field (des_resp). On encoding the stimulus-to-desired-response association, an analog mapping is generated employing holographic neural principles, and this mapping enfolded onto the cortex cells correlation set. 
On a decoding cycle, the cortex cell generates the response from a stimulus field transformed through its correlation set, returning a label to that data field (output). An entire neural configuration may be constructed using the above convention of function calls to configure various cells and establishing data flow paths via labels returned by the configuration functions. The users application program performs principally I/O of stimulus-response data fields to the neural engine and establishes control over execution cycles for both learning and expression (response recall) operations. The transputer resident neural engine independently performs all the transform operations and data field transfers for the established neural configuration. The host IBM resources may be fully allocated to ancillary tasks such as peripheral and console interface, retrieval/storage of data, etc. This format allows maximum concurrency of operation. For the control engineer, the holographic neural system provides a new and powerful process whereby input states may be deterministically mapped to control output states over the extent of the control input or state space domain. The mapping of these analog control states is generated simply by training the neural engine. Realizing the property of non-disturbance exhibited by the holographic process, the neural configuration may be constructed to learn large sets of spacio-temporal patterns useful in robotics applications. The neural system designer may explicitly control and modify the generalization characteristics and resolution of this mapping space through design and modification of higher order statistics used within the system. In other words, stimulus-response control states are mapped out within the cell, and allowing the user to explicitly define the higher order characteristics or generalization properties of the neural system. Control states are encoded simply by presenting the neural engine with the suite of analog stimulus-response control actions to be learned. This technology is patent pending in North and South America, Europe, Britan, Asia, Australia and the USSR. The HNeT applications development system is commercially available for single transputer platforms at a base price of $7,450.00 US. For further information call or write to AND Corporation, address given above. From Connectionists-Request at CS.CMU.EDU Wed May 1 00:05:22 1991 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Wed, 01 May 91 00:05:22 EDT Subject: Bi-monthly Reminder Message-ID: <4685.673070722@B.GP.CS.CMU.EDU> This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. Happy hacking. 
From peterc at chaos.cs.brandeis.edu Wed May 1 03:03:34 1991 From: peterc at chaos.cs.brandeis.edu (Peter Cariani) Date: Wed, 1 May 91 03:03:34 edt Subject: THRESHOLDS AND SUPEREXCITABILITY In-Reply-To: Lyle J. Borg-Graham's message of Wed, 24 Apr 91 12:26:52 EDT <9104241626.AA28990@wheat-chex> Message-ID: <9105010703.AA06223@chaos.cs.brandeis.edu>

Dear Lyle, Although Raymond cites a fairly large number of empirical observations in many different types of systems, and the basic channel types participating in these systems are pretty much ubiquitous, I am not about to claim that a simple model accounts for the potentially very rich temporal dynamics of all neurons. Obviously there are many types of responses possible, but there seems to be a lack of functional models which utilize the temporal dynamics of single neurons (beyond synaptic delay & refractory period) to do information processing. Can Raymond's threshold results be accounted for via current models? I haven't seen any models where the afterpotentials have amplitudes sufficient to reduce the effective threshold by 50-70%, and I have yet to see this oscillatory behavior developed into a theory of coding (by interspike interval), except in the Lettvin papers I cited. In defense of simple models, they are often useful in developing the broader functional implications of a style of information processing. If most neurons have (potentially) complex temporal characteristics, then we'd better work on our coupled oscillator models and maybe some adaptively tuned oscillator models if we are going to make any sense of it all. Peter Cariani

From dave at cogsci.indiana.edu Thu May 2 00:05:31 1991 From: dave at cogsci.indiana.edu (David Chalmers) Date: Wed, 1 May 91 23:05:31 EST Subject: Technical Report available: High-Level Perception Message-ID: The following paper is available electronically from the Center for Research on Concepts and Cognition at Indiana University. HIGH-LEVEL PERCEPTION, REPRESENTATION, AND ANALOGY: A CRITIQUE OF ARTIFICIAL INTELLIGENCE METHODOLOGY David J. Chalmers, Robert M. French, and Douglas R. Hofstadter Center for Research on Concepts and Cognition Indiana University CRCC-TR-49 High-level perception -- the process of making sense of complex data at an abstract, conceptual level -- is fundamental to human cognition. Via high-level perception, chaotic environmental stimuli are organized into mental representations which are used throughout cognitive processing.
Much work in traditional artificial intelligence has ignored the process of high-level perception completely, by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models -- notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought -- and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that such artificial-intelligence models cannot be defended by supposing the existence of a "representation module" that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context.

N.B. This is not a connectionist paper in the narrowest sense, but the representational issues discussed are very relevant to connectionism, and the advocated integration of perception and cognition is a key feature of many connectionist models. Also, philosophical motivation for the "quasi-connectionist" Copycat architecture is provided.
-----------------------------------------------------------------------------
This paper may be retrieved by anonymous ftp from cogsci.indiana.edu (129.79.238.6). The file is cfh.perception.ps.Z, in the directory pub. To retrieve, follow the procedure below.

unix> ftp cogsci.indiana.edu   # (or ftp 129.79.238.6)
ftp> Name: anonymous
ftp> Password: [identification]
ftp> cd pub
ftp> binary
ftp> get cfh.perception.ps.Z
ftp> quit
unix> uncompress cfh.perception.ps.Z
unix> lpr -P(your_local_postscript_printer) cfh.perception.ps

If you do not have access to ftp, hardcopies may be obtained by sending e-mail to dave at cogsci.indiana.edu.

From obm8 at cs.kun.nl Thu May 2 08:10:27 1991 From: obm8 at cs.kun.nl (obm8@cs.kun.nl) Date: Thu, 2 May 91 14:10:27 +0200 Subject: NN Message-ID: <9105021210.AA11571@erato> WANTED: Information on NN in military systems. We are students from the University of Nijmegen and we are searching for literature concerning the use of neural networks in military systems, especially articles which address the usage of NN and the constraints under which they have to operate. Here in the Netherlands it is pretty difficult to get some information about this. We would appreciate any reaction (as fast as possible because we're dealing with a deadline) on these matters. You can send it to: Parcival Willems, Paul Jones, obm8 at erato.cs.kun.nl

From bridle at ai.toronto.edu Thu May 2 10:21:31 1991 From: bridle at ai.toronto.edu (John Bridle) Date: Thu, 2 May 1991 10:21:31 -0400 Subject: NN In-Reply-To: obm8@cs.kun.nl's message of Thu, 2 May 1991 08:10:27 -0400 <9105021210.AA11571@erato> Message-ID: <91May2.102145edt.230@neuron.ai.toronto.edu> You ask about data on NNs in military systems. My colleague Andrew Webb has published a paper "Potential Applications of NNs in Defence" or similar. It is basically a survey of literature on the subject. (Andrew does know a lot about NNs and some areas of defence electronics.) He is at webb at hermes.mod.uk. Mention my name.
--------------------------------------------------------------------------
From: John S Bridle
of: Speech Research Unit, Defence Research Agency, Electronics Division RSRE, St Andrews Road, Great Malvern, Worcs. WR14 3PS, U.K.
Email: bridle at hermes.mod.uk

Currently with: Geoff Hinton, Dept of Computer Science, University of Toronto, Toronto, Ontario, Canada (Until 15 May 1991)
Email: bridle at ai.toronto.edu
---------------------------------------------------------------------------

From lacher at lambda.cs.fsu.edu Thu May 2 13:37:17 1991 From: lacher at lambda.cs.fsu.edu (Chris Lacher) Date: Thu, 2 May 91 13:37:17 -0400 Subject: Hybrid Systems at IJCNN Singapore Message-ID: <9105021737.AA05663@lambda.cs.fsu.edu> To: Connectionists From: Chris Lacher (R. C. Lacher) lacher at cs.fsu.edu (904) 644-0058 (FAX) Subject: Hybrid Systems

As you probably know, the International Joint Conference on Neural Networks has, for the first time in IJCNN91/Singapore, a submission category for hybrid systems research, officially titled ``Hybrid Systems (AI, Neural Networks, Fuzzy Systems)". Some have argued that a better descriptor will eventually be "General Intelligent Systems". In any case, coupled (loose or tight), composite, or hybrid systems are meant to be included in the concept. The conference is sponsored by IEEE and co-sponsored by INNS and will be held at the Westin Stamford and Westin Plaza Hotels in Singapore, November 18-21, 1991. This is a significant milestone, a response to and recognition of the growing importance of systems that integrate various machine intelligence technologies across traditional boundaries. This meeting will help define the field, its foundations, and its founders. I am writing to urge you to participate in this historically significant event. Full details on paper submissions are published as page 407 in the May, 1991, issue of IEEE Transactions on Neural Networks. Note that the deadline for RECEIPT of manuscripts is May 31, 1991.

From wilson at magi.ncsl.nist.gov Fri May 3 12:31:10 1991 From: wilson at magi.ncsl.nist.gov (Charles Wilson x2080) Date: Fri, 3 May 91 12:31:10 EDT Subject: New Information Processing Technologies Message-ID: <9105031631.AA18304@magi.ncsl.nist.gov>

As part of the US response to the Japanese ``Sixth Generation Computer'' initiative, specifically the ``New Information Processing Technologies'' (NIPT) cooperative initiative, the Advanced Systems Division of NIST is preparing a report on ``new information processing technologies''. This report will be used to help set future US policy in these areas. These technologies include massively parallel computing, distributed processing, neural computing, optical computing, and fuzzy logic. This report will include:
1) identification of emerging technologies (are these the technologies which will provide ``human like response'' in computer systems);
2) assessment of economic impact of NIPT technologies (which technologies will move from toy systems to real systems and when);
3) assessment of present US position relative to Japan and how international collaborations will affect these positions.
4) national security considerations.
The Japanese are particularly interested in funding US (largely university) participation. The report must be completed before May 31. Other agencies such as DARPA and NSF will be asked to comment. Interested US researchers are invited to respond by E-mail. Comments on items 1 and 2 from researchers working in these areas are particularly important.
Responses received after May 15 have a lower probability of inclusion. C. L. Wilson (301) 975-2080 FAX (301) 590-0932 E-mail wilson at magi.ncsl.nist.gov PLEASE RESPOND TO THE ADDRESS ABOVE AND NOT TO THIS MAILING LIST.

From atul at nynexst.com Fri May 3 17:48:32 1991 From: atul at nynexst.com (Atul Chhabra) Date: Fri, 3 May 91 17:48:32 EDT Subject: Lecolinet's PhD Thesis? Message-ID: <9105032148.AA00902@texas.nynexst.com> How can I get a copy of the following PhD thesis? E. Lecolinet, "Segmentation D'Images de Mots Manuscrits: Application a la lecture de chaines de caracteres majuscules alphanumeriques et a la lecture de l'ecriture cursive," PhD thesis, University of Paris, March 1990. The thesis deals with the issue of segmentation in recognition of handwritten characters -- a topic familiar to many connectionists. In another place, I have seen this thesis referred to as: E. Lecolinet, "Segmentation et reconnaissance des codes postaux et des mots manuscrits," PhD thesis, Paris VI 1990. Better still, does anyone know of any English publications by this author? Thanks, Atul -- Atul K. Chhabra (atul at nynexst.com) Phone: (914)683-2786 Fax: (914)683-2211 NYNEX Science & Technology 500 Westchester Avenue White Plains, NY 10604

From schmidhu at informatik.tu-muenchen.dbp.de Mon May 6 06:43:22 1991 From: schmidhu at informatik.tu-muenchen.dbp.de (Juergen Schmidhuber) Date: 06 May 91 12:43:22+0200 Subject: New FKI-Report Message-ID: <9105061043.AA10479@kiss.informatik.tu-muenchen.de> Here is another one:
---------------------------------------------------------------------
AN O(n^3) LEARNING ALGORITHM FOR FULLY RECURRENT NETWORKS Juergen Schmidhuber Technical Report FKI-151-91, May 6, 1991

The fixed-size storage learning algorithm for fully recurrent continually running networks (e.g. (Robinson + Fallside, 1987), (Williams + Zipser, 1988)) requires O(n^4) computations per time step, where n is the number of non-input units. We describe a method which computes exactly the same gradient and requires fixed-size storage of the same order as the previous algorithm. But the average time complexity per time step is O(n^3).
---------------------------------------------------------------------
To obtain a copy, do:

unix> ftp 131.159.8.35
Name: anonymous
Password: your name, please
ftp> binary
ftp> cd pub/fki
ftp> get fki151.ps.Z
ftp> bye
unix> uncompress fki151.ps.Z
unix> lpr fki151.ps

Please do not forget to leave your name (instead of your email address). NOTE: fki151.ps is designed for European A4 paper format (20.9cm x 29.6cm). In case of ftp-problems send email to schmidhu at informatik.tu-muenchen.de or contact Juergen Schmidhuber Institut fuer Informatik, Technische Universitaet Muenchen Arcisstr. 21 8000 Muenchen 2 GERMANY

From lyle at ai.mit.edu Mon May 6 15:04:32 1991 From: lyle at ai.mit.edu (Lyle J. Borg-Graham) Date: Mon, 6 May 91 15:04:32 EDT Subject: THRESHOLDS AND SUPEREXCITABILITY In-Reply-To: Peter Cariani's message of Wed, 1 May 91 03:03:34 edt <9105010703.AA06223@chaos.cs.brandeis.edu> Message-ID: <9105061904.AA02824@peduncle>

> Can Raymond's threshold results be accounted for via current models?

We have modelled the time course of K+ currents which could in principle reduce 'threshold' in hippocampal cells (MIT AI Lab Technical Report 1161) (see also "Simulations suggest information processing roles for the diverse currents in hippocampal neurons", NIPS 1987 Proceedings, ed. D.Z. Anderson).
> I haven't seen any models where the afterpotentials have amplitudes sufficient to reduce the effective threshold by 50-70%

Again, as I mentioned in a previous message, it might be useful to be more precise in the definition of 'threshold'. From the statement above, I assume you mean that the voltage difference between the afterpotential and the normal action potential (voltage) threshold is 50-70% smaller than that between the resting potential and threshold, thus reducing the threshold *current* by the same amount (assuming that the input impedance doesn't change, which it does). I am not familiar with the cited results, but I would also expect (as mentioned earlier) that the normal voltage threshold would be increased after the spike because of (a) incomplete re-activation of Na+ channels (which I believe is the classic mechanism cited for the refractory period) and (b) decreased input impedance due to activation of various channels (which means that a larger Na+ current is needed to start up the positive feedback underlying the upstroke of the spike). My suggestion as to the role of K+ currents underlying a super-excitable phase was that *inactivation* of these currents by depolarization might both reduce the decrease in impedance (b) *and* increase the "resting potential". Why is this minutia important? Well for one thing in real neurons inputs and intrinsic properties are mediated by conductance changes, which, in turn, interact non-linearly (as analyzed by Poggio, Torre, and Koch, among others). Whether or not these non-linear interactions are relevant depends on the model, but at least we should know enough about their general properties so that we can scope out the right context of the problem at the beginning.

> In defense of simple models, they are often useful in developing the broader functional implications of a style of information processing. If most neurons have (potentially) complex temporal characteristics, then we'd better work on our coupled oscillator models and maybe some adaptively tuned oscillator models if we are going to make any sense of it all.

Absolutely. - Lyle

From lyle at ai.mit.edu Mon May 6 17:01:47 1991 From: lyle at ai.mit.edu (Lyle J. Borg-Graham) Date: Mon, 6 May 91 17:01:47 EDT Subject: THRESHOLDS AND SUPEREXCITABILITY In-Reply-To: "Lyle J. Borg-Graham"'s message of Mon, 6 May 91 15:04:32 EDT <9105061904.AA02824@peduncle> Message-ID: <9105062101.AA02923@peduncle> oops, I meant to say that the refractory period was due to incomplete *de-inactivation* of Na+ channel(s). Lyle

From david at cns.edinburgh.ac.uk Tue May 7 11:49:10 1991 From: david at cns.edinburgh.ac.uk (David Willshaw) Date: Tue, 7 May 91 11:49:10 BST Subject: NETWORK - contents of Volume 2, no 2 (May 1991) Message-ID: <6670.9105071049@subnode.cns.ed.ac.uk> The forthcoming May 1991 issue of NETWORK will contain the following papers:

NETWORK Volume 2 Number 2 May 1991
Minimum-entropy coding with Hopfield networks - H G E Hentschel and H B Barlow
Cellular automaton models of the CA3 region of the hippocampus - E Pytte, G Grinstein and R D Traub
Competitive learning, natural images and cortical cells - C J StC Webber
Adaptive fields: distributed representations of classically conditioned associations - P F M J Verschure and A C C Coolen
``Quantum'' neural networks - M Lewenstein and M Olko
----------------------
NETWORK welcomes research Papers and Letters where the findings have demonstrable relevance across traditional disciplinary boundaries.
Research Papers can be of any length, if that length can be justified by content. Rarely, however, is it expected that a length in excess of 10,000 words will be justified. 2,500 words is the expected limit for research Letters. Articles can be published from authors' TeX source codes. NETWORK is published quarterly. The subscription rates are: Institution 125.00 POUNDS (US$220.00) Individual (UK) 17.30 POUNDS (Overseas) 20.50 POUNDS (US$37.90) For more details contact IOP Publishing Techno House Redcliffe Way Bristol BS1 6NX United Kingdom Telephone: 0272 297481 Fax: 0272 294318 Telex: 449149 INSTP G EMAIL: JANET: IOPPL at UK.AC.RL.GB From collins at z.ils.nwu.edu Tue May 7 14:47:40 1991 From: collins at z.ils.nwu.edu (Gregg Collins) Date: Tue, 7 May 91 13:47:40 CDT Subject: ML91 now takes credit cards! Message-ID: <9105071847.AA01002@z.ils.nwu.edu> That's right, you can now register for ML91 -- The Eighth International Workshop on Machine Learning -- using your Master Card, Visa, or American Express. We have adjusted the registration forms slightly to reflect this. Copies of the new forms appear below. (don't worry, though -- we'll still accept the old ones). Gregg Collins Lawrence Birnbaum ML91 Program Co-chairs *****************Conference Registration Form************************** ML91: The Eighth International Workshop on Machine Learning Conference Registration Form Please send this form to: Machine Learning 1991 The Institute for the Learning Sciences 1890 Maple Avenue Evanston, Illinois, 60201 USA Registration information (please type or print): Name: Address: Phone: Email: Payment information: Type of registration (check one): [ ] Student: $70 [ ] Other: $100 Registration is due May 22, 1991. If your registration will arrive after that date, please add a late fee of $25. You may pay either by check or by credit card. Checks should be made out to Northwestern University. If you are paying by credit card, please complete the following: Card type (circle): VISA MASTER CARD AMERICAN EXPRESS Card number: Expiration date: Signature: ********************Housing Registration Form************************** ML91: the Eighth International Workshop on Machine Learning On-campus Housing Registration Form On campus housing for ML91 is available at a cost of $69 per person for four nights, June 26-29. Rooms are double-occupancy only. Sorry, we cannot offer a reduced rate for shorter stays. To register for housing, please send this form to: Machine Learning 1991 The Institute for the Learning Sciences 1890 Maple Avenue Evanston, Illinois, 60201 USA Registration information (please type or print): Name: Sex: Address: Phone: Email: Name of roomate (if left blank, we will assign you a roommate): Payment: Housing is $69 per person, which you may pay either by check or by credit card. Checks should be made out to Northwestern University. If you are paying by credit card, please complete the following: Card type (circle): VISA MASTER CARD AMERICAN EXPRESS Card number: Expiration date: Signature: From collins%z.ils.nwu.edu at VM.TCS.Tulane.EDU Tue May 7 14:47:40 1991 From: collins%z.ils.nwu.edu at VM.TCS.Tulane.EDU (Gregg Collins) Date: Tue, 7 May 91 13:47:40 CDT Subject: ML91 now takes credit cards! Message-ID: <9105071847.AA01002@z.ils.nwu.edu> That's right, you can now register for ML91 -- The Eighth International Workshop on Machine Learning -- using your Master Card, Visa, or American Express. We have adjusted the registration forms slightly to reflect this. Copies of the new forms appear below. 
(don't worry, though -- we'll still accept the old ones). Gregg Collins Lawrence Birnbaum ML91 Program Co-chairs *****************Conference Registration Form************************** ML91: The Eighth International Workshop on Machine Learning Conference Registration Form Please send this form to: Machine Learning 1991 The Institute for the Learning Sciences 1890 Maple Avenue Evanston, Illinois, 60201 USA Registration information (please type or print): Name: Address: Phone: Email: Payment information: Type of registration (check one): [ ] Student: $70 [ ] Other: $100 Registration is due May 22, 1991. If your registration will arrive after that date, please add a late fee of $25. You may pay either by check or by credit card. Checks should be made out to Northwestern University. If you are paying by credit card, please complete the following: Card type (circle): VISA MASTER CARD AMERICAN EXPRESS Card number: Expiration date: Signature: ********************Housing Registration Form************************** ML91: the Eighth International Workshop on Machine Learning On-campus Housing Registration Form On campus housing for ML91 is available at a cost of $69 per person for four nights, June 26-29. Rooms are double-occupancy only. Sorry, we cannot offer a reduced rate for shorter stays. To register for housing, please send this form to: Machine Learning 1991 The Institute for the Learning Sciences 1890 Maple Avenue Evanston, Illinois, 60201 USA Registration information (please type or print): Name: Sex: Address: Phone: Email: Name of roomate (if left blank, we will assign you a roommate): Payment: Housing is $69 per person, which you may pay either by check or by credit card. Checks should be made out to Northwestern University. If you are paying by credit card, please complete the following: Card type (circle): VISA MASTER CARD AMERICAN EXPRESS Card number: Expiration date: Signature: From CONNECT at nbivax.nbi.dk Wed May 8 03:40:00 1991 From: CONNECT at nbivax.nbi.dk (CONNECT@nbivax.nbi.dk) Date: Wed, 8 May 1991 09:40 +0200 (NBI, Copenhagen) Subject: International Journal of Neural Systems Vol 2, issues 1-2 Message-ID: Begin Message: ----------------------------------------------------------------------- INTERNATIONAL JOURNAL OF NEURAL SYSTEMS The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems. It publishes original contributions on all aspects of this broad subject which involves physics, biology, psychology, computer science and engineering. Contributions include research papers, reviews and short communications. The journal presents a fresh undogmatic attitude towards this multidisciplinary field with the aim to be a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities. ISSN: 0129-0657 (IJNS) ---------------------------------- Contents of Volume 2, issues number 1-2 (1991): 1. H. Liljenstrom: Modelling the Dynamics of olfactory cortex effects of anatomically organised Propagation Delays. 2. S. Becker: Unsupervised learning Procedures for Neural Networks. 3. Yves Chauvin: Gradient Descent to Global minimal in a n-dimensional Landscape. 4. J. G. Taylor: Neural Network Capacity for Temporal Sequence Storage. 5. S. Z. Lerner and J. R. Deller: Speech Recognition by a self-organising feature finder. 6. Jefferey Lee Johnson: Modelling head end escape behaviour in the earthworm: the efferent arc and the end organ. 7. M.-Y. Chow, G. 
Bilbro and S. O. Yee: Application of Learning Theory for a Single Phase Induction Motor Incipient Fault Detector Artificial Neural Network. 8. J. Tomberg and K. Kaski: Some implementation of artificial neural networks using pulse-density Modulation Technique. 9. I. Kocher and R. Monasson: Generalisation error and dynamical efforts in a two-dimensional patches detector. 10. J. Schmidhuber and R. Huber: Learning to generate fovea Trajectories for attentive vision. 11. A. Hartstein: A back-propagation algorithm for a network fo neurons with Threshold Controlled Synapses. 12. M. Miller and E. N. Miranda: Stability of Multi-Layered Neural Networks. 13. J. Ariel Sirat: A Fast neural algorithm for principal components analysis and singular value Decomposition. 14. D. Stork: Review of book by J. Hertz, A. Krogh and R. Palmer. ---------------------------------- Editorial board: B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge) S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge) D. Stork (Stanford) (Book review editor) Associate editors: B. Baird (Berkeley) D. Ballard (University of Rochester) E. Baum (NEC Research Institute) S. Bjornsson (University of Iceland) J. M. Bower (CalTech) S. S. Chen (University of North Carolina) R. Eckmiller (University of Dusseldorf) J. L. Elman (University of California, San Diego) M. V. Feigelman (Landau Institute for Theoretical Physics) F. Fogelman-Soulie (Paris) K. Fukushima (Osaka University) A. Gjedde (Montreal Neurological Institute) S. Grillner (Nobel Institute for Neurophysiology, Stockholm) T. Gulliksen (University of Oslo) D. Hammerstrom (Oregon Graduate Institute) J. Hounsgaard (University of Copenhagen) B. A. Huberman (XEROX PARC) L. B. Ioffe (Landau Institute for Theoretical Physics) P. I. M. Johannesma (Katholieke Univ. Nijmegen) M. Jordan (MIT) G. Josin (Neural Systems Inc.) I. Kanter (Princeton University) J. H. Kaas (Vanderbilt University) A. Lansner (Royal Institute of Technology, Stockholm) A. Lapedes (Los Alamos) B. McWhinney (Carnegie-Mellon University) M. Mezard (Ecole Normale Superieure, Paris) J. Moody (Yale, USA) A. F. Murray (University of Edinburgh) J. P. Nadal (Ecole Normale Superieure, Paris) E. Oja (Lappeenranta University of Technology, Finland) N. Parga (Centro Atomico Bariloche, Argentina) S. Patarnello (IBM ECSEC, Italy) P. Peretto (Centre d'Etudes Nucleaires de Grenoble) C. Peterson (University of Lund) K. Plunkett (University of Aarhus) S. A. Solla (AT&T Bell Labs) M. A. Virasoro (University of Rome) D. J. Wallace (University of Edinburgh) D. Zipser (University of California, San Diego) ---------------------------------- CALL FOR PAPERS Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from The Editorial Secretariat, IJNS World Scientific Publishing Co. Pte. Ltd. 73, Lynton Mead, Totteridge London N20 8DH ENGLAND Telephone: (44)81-446-2461 or World Scientific Publishing Co. Inc. 687 Hardwell St. Teaneck New Jersey 07666 USA Telephone: (1)201-837-8858 or World Scientific Publishing Co. Pte. Ltd. Farrer Road, P. O. 
Box 128 SINGAPORE 9128 Telephone (65)382-5663
-----------------------------------------------------------------------
End Message

From ring at cs.utexas.edu Wed May 8 17:16:31 1991 From: ring at cs.utexas.edu (Mark Ring) Date: Wed, 8 May 91 16:16:31 CDT Subject: Preprint: building sensory-motor hierarchies Message-ID: <9105082116.AA05640@ai.cs.utexas.edu>

Recently there's been some interest on this mailing list regarding neural net hierarchies for sequence "chunking". I've placed a relevant paper in the Neuroprose Archive for public ftp. This is a (very slightly extended) copy of a paper to be published in the Proceedings of the Eighth International Workshop on Machine Learning. The paper summarizes the results to date of work begun a year and a half ago to create a system that automatically and incrementally constructs hierarchies of behaviors in neural nets. The purpose of the system is to develop continuously through the encapsulation, or "chunking," of learned behaviors.
----------------------------------------------------------------------
INCREMENTAL DEVELOPMENT OF COMPLEX BEHAVIORS THROUGH AUTOMATIC CONSTRUCTION OF SENSORY-MOTOR HIERARCHIES Mark Ring University of Texas at Austin

This paper addresses the issue of continual, incremental development of behaviors in reactive agents. The reactive agents are neural-network based and use reinforcement learning techniques. A continually developing system is one that is constantly capable of extending its repertoire of behaviors. An agent increases its repertoire of behaviors in order to increase its performance in and understanding of its environment. Continual development requires an unlimited growth potential; that is, it requires a system that can constantly augment current behaviors with new behaviors, perhaps using the current ones as a foundation for those that come next. It also requires a process for organizing behaviors in meaningful ways and a method for assigning credit properly to sequences of behaviors, where each behavior may itself be an arbitrarily long sequence. The solution proposed here is hierarchical and bottom up. I introduce a new kind of neuron (termed a ``bion''), whose characteristics permit it to be automatically constructed into sensory-motor hierarchies as determined by experience. The bion is being developed to resolve the problems of incremental growth, temporal history limitation, network organization, and credit assignment among component behaviors. A longer, more detailed paper will be announced shortly.
----------------------------------------------------------------------
Instructions to retrieve the paper by ftp (no hard copies available at this time):

% ftp cheops.cis.ohio-state.edu   (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get ring.ml91.ps.Z
ftp> bye
% uncompress ring.ml91.ps.Z
% lpr -P(your_postscript_printer) ring.ml91.ps
----------------------------------------------------------------------
DO NOT "reply" DIRECTLY TO THIS MESSAGE! If you have any questions or difficulties, please send e-mail to: ring at cs.utexas.edu.
or send mail to: Mark Ring Department of Computer Sciences Taylor 2.124 University of Texas at Austin Austin, TX 78712
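As a very loose illustration of the bottom-up "chunking" idea discussed in the announcement above (and emphatically not the bion mechanism of the paper), one generic scheme replaces the most frequent adjacent pair of primitive actions in a behaviour sequence with a new composite action symbol; the function below is a hypothetical toy sketch under that assumption:

   /* Toy sketch of bottom-up chunking: replace the most frequent adjacent
      pair of actions with a new composite action symbol.  Illustrative
      only; action codes are assumed to lie in [0, MAXSYM). */
   #define MAXSYM 256

   /* Rewrites seq[0..len-1] in place and returns the new length. */
   int chunk_once(int *seq, int len, int new_sym)
   {
       static int count[MAXSYM][MAXSYM];
       int i, a = -1, b = -1, best = 0;

       for (i = 0; i < MAXSYM; i++)
           for (int j = 0; j < MAXSYM; j++)
               count[i][j] = 0;

       for (i = 0; i + 1 < len; i++)              /* tally adjacent pairs */
           if (++count[seq[i]][seq[i+1]] > best) {
               best = count[seq[i]][seq[i+1]];
               a = seq[i];
               b = seq[i+1];
           }
       if (best < 2)
           return len;                            /* nothing worth chunking */

       int out = 0;
       for (i = 0; i < len; ) {
           if (i + 1 < len && seq[i] == a && seq[i+1] == b) {
               seq[out++] = new_sym;              /* emit the composite action */
               i += 2;
           } else {
               seq[out++] = seq[i++];
           }
       }
       return out;
   }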
From jon at incsys.com Thu May 9 11:16:22 1991 From: jon at incsys.com (Jon Shultis) Date: Thu, 09 May 91 11:16:22 -0400 Subject: FYI Informal Computing Workshop Program Message-ID: <9105091517.AA14404@incsys.com> Workshop on Informal Computing 29-31 May 1991 Santa Cruz, California Program Wednesday 29 May Conversational Computing and Adaptive Languages 8:15 Opening Remarks, Jon Shultis, Incremental Systems 8:30 Natural Language Techniques in Formal Languages, David Mundie, Incremental Systems 9:30 Building and Exploiting a User Model In Natural Language Information Systems, Sandra Carberry, University of Delaware 10:30 Break 10:45 Informalism in Interfaces, Larry Reeker, Institutes for Defense Analyses 11:45 Natural Language Programming in Solving Problems of Search, Alan Biermann, Duke University 12:30 Lunch 13:45 Linguistic Structure from a Cognitive Grammar Perspective, Karen van Hoek, University of California at San Diego 14:45 Notational Formalisms, Computational Mechanisms: Models or Metaphors?
A Linguistic Perspective, Catherine Harris, University of California at San Diego 15:45 Break 16:00 Discussion 18:00 Break for dinner Thursday 30 May Informal Knowledge and Reasoning 8:15 What is Informalism?, David Fisher, Incremental Systems 9:15 Reaction in Real-Time Decision Making, Bruce D'Ambrosio, Oregon State University 10:15 Break 10:30 Decision Making with Informal, Plausible Reasoning, David Littman, George Mason University 11:15 Title to be announced, Tim Standish, University of California at Irvine 12:15 Lunch 13:30 Intensional Logic and the Metaphysics of Intensionality, Edward Zalta, Stanford University 14:30 Connecting Object to Symbol in Modeling Cognition, Stevan Harnad, Princeton University 15:30 Break 15:45 Discussion 17:45 Break 19:00 Banquet Friday 31 May Modeling and Interpretation 8:15 A Model of Modeling Based on Reference, Purpose and Cost-effectiveness, Jeff Rothenberg, RAND 9:15 Mathematical Modeling of Digital Systems, Donald Good, Computational Logic, Inc. 10:15 Break 10:30 Ideographs, Epistemic Types, and Interpretive Semantics, Jon Shultis, Incremental Systems 11:30 Discussion 12:30 Lunch and End of the Workshop 13:45 Steering Committee Meeting for Informalism '92 Conference, all interested participants are invited. From jcp at vaxserv.sarnoff.com Thu May 9 15:37:10 1991 From: jcp at vaxserv.sarnoff.com (John Pearson W343 x2385) Date: Thu, 9 May 91 15:37:10 EDT Subject: postmark deadline Message-ID: <9105091937.AA21511@sarnoff.sarnoff.com> All submittals to the 1991 NIPS conference and workshop must be POSTMARKED by May 17th. Express mail is not necessary. John Pearson Publicity Chairman, NIPS-91 jcp at as1.sarnoff.com From ASKROGH at nbivax.nbi.dk Fri May 10 07:00:00 1991 From: ASKROGH at nbivax.nbi.dk (Anders Krogh) Date: Fri, 10 May 1991 13:00 +0200 (NBI, Copenhagen) Subject: Bibliography Message-ID: <4F7256B300A0BD31@nbivax.nbi.dk> Bibliography in the Neuroprose Archive: Bibliography from the book "Introduction to the Theory of Neural Computation" by John Hertz, Anders Krogh, and Richard Palmer (Addison-Wesley, 1991) has been placed in the Neuroprose Archive. After a suggestion from Tali Tishby we decided to make the bibliography for our book publicly available in the Neuroprose Archive. The copyright of the book is owned by Addison-Wesley and this bibliography is placed in the public domain with their permission. We spent considerable effort on the bibliography while writing the book, and hope that other researchers will benefit from it. It is written in the TeX format developed for the book, and the file includes the macros needed to make it TeX-able. It should be fairly easy to adapt the macros and/or bibliographic entries for individual needs. If anyone converts it to BiBtex---or improves it in other ways---we encourage them to put the new version in the Neuroprose Archive, or e-mail it to one of the following addresses so that we can do so. askrogh at nbivax.nbi.dk palmer at phy.duke.edu Please note that we are not intending to update this bibliography, except to correct mistakes. It would be great if someone would maintain a complete online NN bibliography, but WE cannot. So please don't send us requests of the form "please add my paper ...". John Hertz, Anders Krogh, and Richard G. Palmer. 
--------------------------
To obtain copies from Neuroprose:

unix> ftp cheops.cis.ohio-state.edu   # (or ftp 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get hertz.refs.tex.Z
ftp> bye

If you want to print it, do something like this (depending on your local system):

unix> uncompress hertz.refs.tex
unix> tex hertz.refs.tex
unix> dvi2ps hertz.refs   (the dvi to postscript converter)
unix> lpr hertz.refs.ps

From shoshi at thumper.bellcore.com Fri May 10 11:59:25 1991 From: shoshi at thumper.bellcore.com (Shoshana Hardt-Kornacki) Date: Fri, 10 May 91 11:59:25 edt Subject: CFP: Information Filtering Workshop Message-ID: <9105101559.AA13648@shalva.bellcore.com>

Bellcore Workshop on High-Performance Information Filtering: Foundations, Architectures, and Applications November 5-7, 1991 Chester, New Jersey

Information filtering can be viewed both as a way to control the flood of information that is received by an end-user, and as a way to target the information that is sent by information providers. The information carrier, which provides the appropriate connectivity between the information providers, the filter and the end-user, plays a major role in providing a cost-effective architecture which also ensures end-user privacy. The aim of the workshop is to examine issues that can advance the state-of-the-art in filter construction, usage, and evaluation, for various information domains, such as news, entertainment, advertising, and community information. We focus on creative approaches that take into consideration constraints imposed by realistic application contexts. Topics include but are not limited to:

Taxonomy of information domains and their dynamics
Information retrieval and indexing systems
Information delivery architectures
Cognitive models of end-user's interests and preferences
Cognitive models for multimedia information processing
Adaptive filtering agents and distributed filters
Information theoretic approaches to filter performance evaluation

The workshop is by invitation only. Please submit a 5-10 page paper (hardcopy only), summarizing the work you would like to present, or a one-page description of your interests and how they relate to this workshop. Demonstrations of existing prototypes are welcome. Proceedings will be available at the workshop. Workshop Chair: Shoshana Hardt-Kornacki (Bellcore) Workshop Program Committee: Bob Allen (Bellcore) Nick Belkin (Rutgers University) Louis Gomez (Bellcore) Tom Landauer (Bellcore) Bill Mansfield (Bellcore) Papers should be sent to: Shoshana Hardt-Kornacki Bell Communications Research 445 South Street, Morristown NJ, 07962. (201) 829-4528, shoshi at bellcore.com Papers due: July 15, 1991. Invitations sent: August 15, 1991. Workshop dates: November 5-7, 1991.
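As a toy illustration of the end-user filtering task described in the call above (not any Bellcore system), a minimal filter might score each incoming item against a weighted keyword profile of the user's interests and keep only items that clear a threshold; the structure and function names below are assumptions made for this sketch:

   /* Minimal sketch of profile-based information filtering; illustrative only. */
   #include <string.h>

   struct interest { const char *term; double weight; };

   /* Credit the item for each profile term that appears in its text. */
   double score_item(const char *text, const struct interest *profile, int n)
   {
       double s = 0.0;
       for (int i = 0; i < n; i++)
           if (strstr(text, profile[i].term) != NULL)
               s += profile[i].weight;
       return s;
   }

   /* Keep the item only if its score reaches the user's threshold. */
   int keep_item(const char *text, const struct interest *profile, int n,
                 double threshold)
   {
       return score_item(text, profile, n) >= threshold;
   }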
There will be two full-day tutorials on May 29th, addressing both fundamentals and advanced topics, followed by two days of presentations and panel sessions on May 30-31. The keynote speaker will be James Anderson of Brown University; Paul Werbos of the NSF will give the luncheon address on the first day, and Oliver Selfridge from GTE Laboratories will chair the rapporteur panel. Two panel sessions have also been scheduled: the first, chaired by Eugene Norris of GMU, will look back at the history of the technology, and the second, chaired by Jerry LR Chandler of NINCDS, will explore the probable state of the technology in the early 21st century. George Mason University is located n the Washington, D.C. area and is convenient both to Washington National and Dulles Airports. Attendance at the conference will be limited by facility space; reservations will be processed in order of arrival. Sponsors of the conference are ACM SIGART and ACM SIGBDP in cooperation with the International Neural Network Society and the Washington Evolutionary Systems Society. Institutional support is provided by GMU and NIH, with additonal support from American Electronics Inc., CTA Inc., IKONIX, and TRW/Systems Division. Questions? Toni Shetler (Conference Chair), TRW FVA6/3444 PO Box 10400, Fairfax VA 22031 ------------------------ADVANCE PROGRAM----------------------------- Wednesday, May 29, 1991: 09:00 - 12:00 and 01:00 - 04:00 Tutorial 1: Neural Network Fundamentals Instructors - Judith Dayhoff, University of Maryland and Edward Page, Clemson University Tutorial 2: Real Brains for Modelers Instructor - Eugene Norris, George Mason University. Thursday AM, May 30, 1991 08:30 - 10:00 Welcome and Keynote Speaker Welcome - Toni Shetler, TRW/Systems Division Keynote Intro - Robert L. Stites, IKONIX Keynote - James Anderson, Brown University 10:00 - 10:30 Break 10:30 - 12:00 Panel Session 1: Now: Where Are We? Panel Chair - Eugene Norris, George Mason University Panelists - Craig Will, IDA, Tom Vogel, ERIM, Bill Marks, NIH 12:00 - 01:30 Luncheon Luncheon Intro - Robert L. Stites, IKONIX Speaker - Paul Werbos, NSF 01:30 - 03:30 Session 2: Domain Analysis Session Chair: Judith Dayhoff, University of Maryland % Synthetic aperture radar image formation with neural networks. Ted Frison, S. Walt McCandless, and Robert Runze. % Application of the recurrent neural network to the problem of language acquisition. Ryotaro Kamimura. % Protein classification using a neural network protein database (NNPDB) system. Cathy H. Wu, Adisorn Ermongkonchai, and Tzu-Chung Chang. % The object-oriented paradigm and neurocomputing. Paul S. Prueitt and Robert M. Craig. 03:30 - 04:00 Break 04:00 - 06:00 Session 3: Design Criteria Session Chair: Harry Erwin, TRW/Systems Division % Neural network-based decision support for incomplete database systems. B. Jin, A. R. Hurson, and L. L. Miller. % Spatial classification and multi-spectral fusion with neural networks. Craig Harston. % Neural network process control. Michael J. Piovoso, AaronJJ. Owens, Allon Guez and Eva Nilssen. 06:00 - 07:00 Evening Reception Host: Kim McBrian, TRW/Command Support Division 07:00 - 09:00 Session 4: Analytic Approaches to Network Definition Session Chair: Gary C. Fleming, American Electronics, Inc. % A discrete-time neural network multitarget tracking data association algorithm. Oluseyi Olurotimi. % On the implementation of RB technique in neural networks. M. T. Musavi, K. B. Faris, K. H. Chan, and W. Ahmed. % Radiographic image compression: a neural approach. 
Sridhar Narayan, Edward W. Page, and Gene A. Tagliarini. % Supervised adaptive resonance networks. Robert A. Baxter. Friday, May 31, 1991 08:00 - 10:00 Session 5: Lessons Learned, Feedback, and Design Implications Session Chair: Elias Awad, University of Virginia % Neural control of a nonlinear system with inherent time delays. Edward A. Rietman and Robert C. Frye. % Pattern mapping in pulse transmission neural networks. Judith Dayhoff. % Analysis of a biologically motivated neural network for character recognition. M. D. Garris, R. A. Wilkinson, and C. L. Wilson. % Optimization in cascaded Boltzman machines with a temperature gradient: an alternative to simulated annealing. James P. Coughlin and Robert H. Baran. 10:00 - 10:30 Break 10:30 - 12:00 Panel Session 6: Where Will We Be in 1995, 2000, and 2010? Panel Chair: Jerry LR Chandler, NINCDS, Epilepsy Branch Panelists - Captain Steven Suddarth, USAF; James Templeman, George Washington University; Russell Eberhart, JHU/APL; Larry Hutton, JHU/ APL; Robert Artigiani, USNA 12:00 - 01:00 Lunch Break 01:00 - 02:00 Session 7: Evaluation Session Chair: Larry K. Barrett, CTA, Inc. % A neural network for target classification using passive sonar. Robert H. Baran and James P. Coughlin. % Defect prediction with neural networks. Robert L. Stites, Bryan Ward, and Robert V. Walters. 02:00 - 03:30 Session 8: ANNA-91 Conference Wrap-up Session Chair: Toni Shetler Rapporteurs - Joseph Bigus, IBM; Oliver Selfridge, GTE Laboratories; and Harold Szu, NSWC -------------------------Registration & Hotel Forms ---------------------- CONFERENCE REGISTRATION Tutorial: fee Amount Name:_____________________________________________ member $150 ______ Address:__________________________________________ nonmember $200 ______ __________________________________________________ student $ 25 ______ __________________________________________________ circle ONE: Tutorial 1 (Dayhoff & Page) Tutorial 2 (Norris) Conference: fee Amount ______________________________________________ member $200 ______ ^Membership number & Society Affiliation nonmember $250 ______ student $ 25 ______ ________________________________________________ Faculty Advisor (full-time student registration) Total: ______ Mail to: ANNA 91 Conference Registration Toni SHetler TRW FVA6/3444 PO Box 10400 Fairfax, VA 22031 HOTEL REGISTRATION Circle choice and mail directly to the hotel (addresses below) Wellesley Inn Quality Inn Single/night +6.5% tax $44.00 $49.50 Double/night +6.5% tax $49.50 $59.50 Arrival Day/date ___________________ Departure ________________________ Name:___________________________________________ Address: _______________________________________ _______________________________________ _______________________________________ Telephone (include Area code) ___________________ MAIL FORM DIRECTLY TO YOUR HOTEL! Wellseley Inn Quality Inn US Rt 50 - 10327 Lee Highway US Rt 50 - 11180 Main Street Fairfax, VA 22030 Fairfax, VA 22030 (703) 359-2888 or (800) 654-2000 (703)591-5900 or (800) 223-1223 To Guarantee late arrival, please forward one night's deposit or include your credit card number with expiration date. Reservations without guarantee will only be held until 6:00 PM on date of arrival. _________________________ ___________ ____________________ __________ Cardholder name type:AE,MC,... 
From Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU Mon May 13 13:31:04 1991
From: Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU (Scott.Fahlman@SEF1.SLISP.CS.CMU.EDU)
Date: Mon, 13 May 91 13:31:04 EDT
Subject: Two new Tech Reports
Message-ID:

The following two tech reports have been placed in the neuroprose database at Ohio State. Instructions for accessing them via anonymous FTP are included at the end of this message. (Maybe everyone should copy down these instructions once and for all so that we can stop repeating them with each announcement.)

---------------------------------------------------------------------------

Tech Report CMU-CS-91-100
The Recurrent Cascade-Correlation Architecture
Scott E. Fahlman

Recurrent Cascade-Correlation (RCC) is a recurrent version of the Cascade-Correlation learning architecture of Fahlman and Lebiere \cite{fahlman:cascor}. RCC can learn from examples to map a sequence of inputs into a desired sequence of outputs. New hidden units with recurrent connections are added to the network one at a time, as they are needed during training. In effect, the network builds up a finite-state machine tailored specifically for the current problem. RCC retains the advantages of Cascade-Correlation: fast learning, good generalization, automatic construction of a near-minimal multi-layered network, and the ability to learn complex behaviors through a sequence of simple lessons. The power of RCC is demonstrated on two tasks: learning a finite-state grammar from examples of legal strings, and learning to recognize characters in Morse code.

Note: This TR is essentially the same as the paper of the same name in the NIPS 3 proceedings (due to appear very soon). The TR version includes some additional experimental data and a few explanatory diagrams that had to be cut in the NIPS version.

---------------------------------------------------------------------------

Tech Report CMU-CS-91-130
Learning with Limited Numerical Precision Using the Cascade-Correlation Algorithm
Markus Hoehfeld and Scott E. Fahlman

A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited numerical precision can be used with existing learning algorithms. We present an empirical study of the effects of limited precision in Cascade-Correlation networks on three different learning problems. We show that learning can fail abruptly as the precision of network weights or weight-update calculations is reduced below 12 bits. We introduce techniques for dynamic rescaling and probabilistic rounding that allow reliable convergence down to 6 bits of precision, with only a gradual reduction in the quality of the solutions.

Note: The experiments described here were conducted during a visit by Markus Hoehfeld to Carnegie Mellon in the fall of 1990. Markus Hoehfeld's permanent address is Siemens AG, ZFE IS INF 2, Otto-Hahn-Ring 6, W-8000 Munich 83, Germany.

---------------------------------------------------------------------------

To access these tech reports in postscript form via anonymous FTP, do the following:

unix> ftp cheops.cis.ohio-state.edu   (or, ftp 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get <filename>
ftp> quit
unix> uncompress <filename>
unix> lpr <filename>   (use flag your printer needs for Postscript)

The TRs described above are stored as "fahlman.rcc.ps.Z" and "hoehfeld.precision.ps.Z". Older reports "fahlman.quickprop-tr.ps.Z" and "fahlman.cascor-tr.ps.Z" may also be of interest.

Your local version of ftp and other unix utilities may be different. Consult your local system wizards for details.
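(An illustrative aside on TR CMU-CS-91-130 above: the probabilistic-rounding idea can be written down in a few lines. The sketch below is not the authors' code; the function name, the use of Python, and the purely fractional fixed-point format are assumptions made only for illustration.)

import math, random

def probabilistic_round(x, bits):
    # Quantize x onto a fixed-point grid with `bits` fractional bits.
    # Instead of always rounding to the nearest grid point, round up with
    # probability equal to the fractional remainder, so the result is
    # unbiased: E[probabilistic_round(x, bits)] == x.
    scale = 2 ** bits              # grid spacing is 1/scale
    y = x * scale
    lower = math.floor(y)
    p = y - lower                  # fractional part, in [0, 1)
    return (lower + (1 if random.random() < p else 0)) / scale

The point, as the abstract describes, is that small weight updates which a deterministic rounding rule would discard entirely still have the right effect on average, which is what lets training keep making progress at very low precision.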
---------------------------------------------------------------------------

Hardcopy versions are now being printed and will be available soon, but because of the high demand and tight budget, our school has (reluctantly) instituted a charge for mailing out tech reports in hardcopy: $3 per copy within the U.S. and $5 per copy elsewhere, and the payment must be in U.S. dollars. To order hardcopies, contact:

Ms. Catherine Copetas
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213 U.S.A.

From Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU Mon May 13 13:58:25 1991
From: Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU (Scott.Fahlman@SEF1.SLISP.CS.CMU.EDU)
Date: Mon, 13 May 91 13:58:25 EDT
Subject: Lisp Code for Recurrent Cascade-Correlation
Message-ID:

Simulation code for the Recurrent Cascade-Correlation (RCC) algorithm is now available for FTP via Internet. For now, only a Common Lisp version is available. This is the same version I've been using for my own experiments, except that a lot of non-portable display and user-interface code has been removed. It shouldn't be too hard to modify Scott Crowder's C-based simulator for Cascade-Correlation to implement the new algorithm, but the likely "volunteers" to do the conversion are all too busy right now. I'll send a follow-up notice whenever a C version becomes available.

Instructions for obtaining the code via Internet FTP are included at the end of this message. If people can't get it by FTP, contact me by E-mail and I'll try once to mail it to you. If it bounces or your mailer rejects such a large message, I don't have time to try a lot of other delivery methods. We are not prepared to distribute the software by floppy disk or tape -- don't ask.

I am maintaining an E-mail list of people using any of our simulators so that I can notify them of any changes or problems that occur. I would appreciate hearing about any interesting applications of this code, and will try to help with any problems people run into. Of course, if the code is incorporated into any products or larger systems, I would appreciate an acknowledgement of where it came from.

NOTE: This code is in the public domain. It is distributed without charge on an "as is" basis. There is no warranty of any kind by the authors or by Carnegie-Mellon University.

There are several other programs in the "code" directory mentioned below: Cascade-Correlation in Common Lisp and C, Quickprop in Common Lisp and C, the Migraine/Aspirin simulator from MITRE, and some simulation code written by Tony Robinson for the vowel benchmark he contributed to the benchmark collection.

-- Scott

***************************************************************************

For people (at CMU, MIT, and soon some other places) with access to the Andrew File System (AFS), you can access the files directly from directory "/afs/cs.cmu.edu/project/connect/code". This file system uses the same syntactic conventions as BSD Unix: case sensitive names, slashes for subdirectories, no version numbers, etc. The protection scheme is a bit different, but that shouldn't matter to people just trying to read these files.

For people accessing these files via FTP:

1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu". The internet address of this machine is 128.2.254.155, for those who need it.

2. Log in as user "anonymous" with no password.
You may see an error message that says "filenames may not have /.. in them" or something like that. Just ignore it.

3. Change remote directory to "/afs/cs/project/connect/code". Any subdirectories of this one should also be accessible. Parent directories may not be.

4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. The RCC simulator lives in file "rcc1.lisp".

If you try to access this directory by FTP and have trouble, please contact me. The exact FTP commands you use to change directories, list files, etc., will vary from one version of FTP to another.

From kddlab!atrp05.atr-la.atr.co.jp!nick at uunet.UU.NET Tue May 14 00:14:37 1991
From: kddlab!atrp05.atr-la.atr.co.jp!nick at uunet.UU.NET (Nick Campbell)
Date: Tue, 14 May 91 13:14:37+0900
Subject: preprints and reports
In-Reply-To: Juergen Schmidhuber's message of 30 Apr 91 9:17 +0200 <9104300717.AA03329@kiss.informatik.tu-muenchen.de>
Message-ID: <9105140414.AA25653@atrp05.atr-la.atr.co.jp>

Many thanks - the package of papers arrived today and I look forward to reading them soon.

Nick

From russ at oceanus.mitre.org Tue May 14 13:07:36 1991
From: russ at oceanus.mitre.org (Russell Leighton)
Date: Tue, 14 May 91 13:07:36 EDT
Subject: New Graphics for MITRE Neural Network Simulator
Message-ID: <9105141707.AA20789@oceanus.mitre.org>

Attention users of the MITRE Neural Network Simulator Aspirin/MIGRAINES Version 4.0

Version 5.0 of Aspirin/MIGRAINES is targeted for public distribution in late summer. This will include a graphic interface which will support X11, SunView, GL and NextStep. We are able to have such an interface because we are using the libraries of a scientific visualization software package called apE. Users interested in having this graphical interface should get a copy of apE2.1 **NOW** so that when Aspirin/MIGRAINES version 5.0 is released it can be used with the apE software. The apE software is available from the Ohio Supercomputing Center for a nominal charge (I believe it is now free for educational institutions, but I am not sure). Order forms can be ftp'd from "apE.osgp.osc.edu" (128.146.18.18) in the /pub/doc/info directory.

The Good News:
1. The apE software is free (or nearly free).
2. The apE software is a very portable package.
3. The apE software supports many window systems.
4. You get source with the apE software.
5. The apE tool called "wrench" allows graphical programming, of a sort, by connecting boxes with data pipes. A neural network compute module (which A/M can automatically generate) can be used in these pipelines with other compute/graphics modules for pre/post processing.
6. We can get out of the computer graphics business.
7. Sexy data displays.
8. ApE is a nice visualization package, and the price is right.

The Bad News:
1. You need more software than what comes with the Aspirin/MIGRAINES distribution (although, you can run without any graphics with the supplied software).
2. The apE software is not very fast and uses a lot of memory.
3. apE2.1 is a big distribution.

Other features to expect in version 5.0:
1. Support for more platforms: Sun, SGI, DecStation, IBM RS/6000, Cray, Convex, Meiko, i860-based coprocessors, ...
2. New features for Aspirin:
- Quadratic connections (allows hyper-elliptical decision surfaces)
- Auto-Regressive Nodes (allows each node to have an auto-regressive memory, with tunable feedback weights).
- New file formats Russell Leighton INTERNET: russ at dash.mitre.org Russell Leighton MITRE Signal Processing Lab 7525 Colshire Dr. McLean, Va. 22102 USA From haussler at saturn.ucsc.edu Tue May 14 17:40:49 1991 From: haussler at saturn.ucsc.edu (David Haussler) Date: Tue, 14 May 91 14:40:49 -0700 Subject: COLT '91 conference program Message-ID: <9105142140.AA27812@saturn.ucsc.edu> CONFERENCE PROGRAM FOR COLT '91 : PLEASE POST AND DISTRIBUTE Workshop on Computational Learning Theory Monday, August 5 through Wednesday, August 7, 1991 University of California, Santa Cruz, California PROGRAM: Sunday, August 4th: Reception, 7:00 - 10:00 pm, Crown Merrill Multi-Purpose Room Monday, August 5th Session 1: 9:00 -- 10:20 Tracking Drifting Concepts Using Random Examples by David P. Helmbold and Philip M. Long Investigating the Distribution Assumptions in the Pac Learning Model by Peter L. Bartlett and Robert C. Williamson Simultaneous Learning and Estimation for Classes of Probabilities by Kevin Buescher and P.R. Kumar Learning by Smoothing: a morphological approach by Michael Woonkyung Kim Session 2: 11:00 -- 12:00 Unifying Bounds on the Sample Complexity of Bayesian Learning Using Information Theory and the VC Dimension by David Haussler, Michael Kearns and Robert E. Schapire Generalization Performance of Bayes Optimal Classification Algorithm for Learning a Perceptron by Manfred Opper and David Haussler Probably Almost Bayes Decisions by Paul Fischer, Stefan Polt, and Hans Ulrich Simon Session 3: 2:00 -- 3:00 Generalization and Learning, invited talk by Tom Cover Session 4: 3:30 -- 4:30 A Geometric Approach to Threshold Circuit Complexity by Vwani Roychowdhury, Kai-Yeung Siu, Alon Orlitsky, and Thomas Kailath Learning Curves in Large Neural Networks by H. Sompolinsky, H.S. Seung, and N. Tishby On the Learning of Infinitary Regular Sets by Oded Maler and Amir Pnueli Impromptu talks: 5:00 -- 6:00 Business Meeting: 8:00 Impromtu talks: 9:00 Tuesday, August 6 Session 5: 9:00 -- 10:20 Learning Monotone DNF with an Incomplete Membership Oracle by Dana Angluin and Donna K. Slonim Redundant Noisy Attributes, Attribute Errors, and Linear-threshold Learning Using Winnow by Nicholas Littlestone Learning in the presence of finitely or infinitely many irrelevant attributes by Avrim Blum, Lisa Hellerstein, and Nick Littlestone On-Line Learning with an Oblivious Environment and the Power of Randomization by Wolfgang Maass Session 6: 11:00 -- 12:00 Learning Monotone k\mu-DNF Formulas on Product Distributions by Thomas Hancock and Yishay Mansour Learning Probabilistic Read-once Formulas on Product Distributions by Robert E. Schapire Learning 2\mu-DNF Formulas and k\mu Decision Trees by Thomas R. Hancock Session 7: 2:00 -- 3:00 Invited talk by Rodney Brooks Session 8: 3:30 -- 4:30 Polynomial-Time Learning of Very Simple Grammars from Positive Data by Takashi Yokomori Relations Between Probabilistic and Team One-Shot Learners by Robert Daley, Leonard Pitt, Mahendran Velauthapillai, Todd Will When Oracles Do Not Help by Theodore A. Slaman and Robert M. Solovay Impromptu talks: 5:00 -- 6:00 Banquet: 6:30 Wednesday, August 7 Session 9: 9:00 -- 10:20 Approximation and Estimation Bounds for Artificial Neural Networks by Andrew R. Barron The VC-Dimension vs. the Statistical Capacity for Two Layer Networks with Binary Weights by Chuanyi Ji and Demetri Psaltis On Learning Binary Weights for Majority Functions by Santosh S. 
Venkatesh
Evaluating the Performance of a Simple Inductive Procedure in the Presence of Overfitting Error by Andrew Nobel

Session 10: 11:00 -- 12:00
Polynomial Learnability of Probabilistic Concepts with respect to the Kullback-Leibler Divergence by Naoki Abe, Jun-ichi Takeuchi, and Manfred K. Warmuth
A Loss Bound Model for On-Line Stochastic Prediction Strategies by Kenji Yamanishi
On the Complexity of Teaching by Sally A. Goldman and Michael J. Kearns

Session 11: 2:00 -- 3:40
Improved Learning of AC^0 Functions by Merrick L. Furst, Jeffrey C. Jackson, and Sean W. Smith
Learning Read-Once Formulas over Fields and Extended Bases by Thomas Hancock and Lisa Hellerstein
Fast Identification of Geometric Objects with Membership Queries by William J. Bultman and Wolfgang Maass
Bounded degree graph inference from walks by Vijay Raghavan
On the Complexity of Learning Strings and Sequences by Tao Jiang and Ming Li

General Information:

The workshop will be held on the UCSC campus, which is hidden away in the redwoods on the Pacific coast of Northern California. We encourage you to come early so that you will have time to enjoy the area. You can arrive on campus as early as Saturday, August 3. You may want to learn wind surfing on Monterey Bay, go hiking in the redwoods at Big Basin Redwoods State Park, see the elephant seals at Ano Nuevo State Park, visit the Monterey Bay Aquarium, or see a play at the Santa Cruz Shakespeare Festival on campus. The workshop is being held in cooperation with ACM SIGACT and SIGART, and with financial support from the Office of Naval Research.

1. Conference and room registration: Forms can be obtained by anonymous FTP; connect to midgard.ucsc.edu and look in the directory pub/colt. Alternatively, send E-mail to "colt at cis.ucsc.edu" for instructions on obtaining the forms by electronic mail. Fill out the forms and return them to us with your payment. It must be postmarked by June 24 and received by July 1 to obtain the early registration rate and guarantee the room. Conference attendance is limited by the available space, and late registrations may need to be returned.

2. Flight tickets: San Jose Airport is the closest, about a 45 minute drive. San Francisco Airport is about an hour and forty-five minutes away, but has slightly better flight connections. The International Travel Bureau (ITB -- ask for Peter) at (800) 525-5233 is the COLT travel agency and has discounts for some non-Saturday flights.

3. Transportation from the airport to Santa Cruz: The first option is to rent a car and drive south from San Jose on 880/17. When you get to Santa Cruz, take Route 1 (Mission St.) north. Turn right on Bay Street and follow the signs to UCSC. Commuters must purchase parking permits for $2.50/day from the parking office or the conference satellite office. Those staying on campus can pick up permits with their room keys. Various van services also connect Santa Cruz with the San Francisco and San Jose airports. The Santa Cruz Airporter (408) 423-1214 (or (800)-223-4142 from the airport) has regularly scheduled trips (every two hours from 9am until 11pm from San Jose); Over The Hill Transportation (408) 426-4598 and ABC Transportation (408) 662-8177 travel on demand and should drop you off at the dorms. Call these services directly for reservations and prices. Peerless Stages (phone: (408) 423-1800) operates a regularly scheduled bus between the San Jose Airport and Downtown costing $4.30 and taking about an hour and a quarter.
The number 1 bus serves the campus from the Santa Cruz metro center, ask the driver for the Crown-Merrill apartments. Your arrival : Enter the campus at the main entrance following Bay Street. Follow the main road, Coolidge Drive, up into the woods and continue until the second stop sign. Turn right and go up the hill. If you need a map, send E-mail to Jean (jean at cs.ucsc.edu). This road leads into the Crown/Merrill apartments. The whole route will be marked with signs. When you get to the campus, follow the All Conferences signs. As you enter the redwoods the signs will specify particular conferences, such as the International Dowsing Competition and COLT '91. The COLT '91 signs will lead you to the Crown/Merrill apartments. In the center of the apartment complex you will find the Crown/Merrill satellite office of the Conference Office. They will have your keys, meal cards, parking permits, and lots of information about what to do in Santa Cruz, If you get lost or have questions about your room: Call the Crown/Merrill satellite office at (408) 459-2611 . Someone will be at that number all the time, including Saturday and Sunday night. THE FUN PART The weather in August is mostly sunny with occasional summer fog. Bring T-shirts, slacks, shorts, and a sweater or light jacket, as it cools down at night. For information on the local bus routes and schedules, call the Metro center at (408) 425-8600. You can rent windsurfers and wet suits at Cowell Beach . Sherryl (home (408) 429-5730, message machine (408) 429-6033) should be able to arrange lessons and/or board rentals. The main road that leads into the campus is Bay Street. If you go in the opposite direction, away from campus, you will run into a T-intersection at the ocean at the end of Bay Street. Turn left and stay to the right. The road will lead you down to the Boardwalk. Cowell Beach is at the base of the Dream Inn on your right. If you turn right instead of left at the T-intersection at the bottom of Bay Street, you will be driving along Westcliff Drive overlooking the ocean. The road passes by the lighthouse (where you can watch seals and local surfing pros) and dead-ends at Natural Bridges State Park. Westcliff Drive also offers a wonderful paved walkway/bikeway, about 2 miles long. Big Basin Redwoods State Park is about a 45 minute drive from Santa Cruz and there are buses that leave from the downtown Metro Center. You can hike for hours and hours among giant redwoods on the 80 miles of trails. We recommend Berry Creek Falls (about 6 hours for good hikers), but even a half hour hike is worth it! Some of the tallest coastal redwoods on this planet can be found here: the Mother of the Forest is 101 meters (329 feet) high and is on the short (0.06 mile) Redwood trail. For park information call (408) 338-6132. This is your chance to see some Northern Elephant seals, the largest of the pinnipeds. Ano Nuevo State Park is one of the few places in the world where these seals go on land for breeding and molting (August is molting season). Ano Nuevo is located about 20 miles north of Santa Cruz on the coast (right up Highway 1). The park is open from 8am until sunset, but you should plan on arriving before 3pm to see the Elephant seals. Call Ano Nuevo State Park at (415)879-0595 for more information. At the Monterey Bay Aquarium , you can see Great White sharks, Leopard sharks, sea otters, rays, mollusks, and beautiful coral. 
It's open from 10am to 6pm, and is located about 40 miles south on Highway 1 in Monterey just off of Steinbeck's Cannery Row. For aquarium information call (408) 375-3333. Shakespeare Santa Cruz performances include: "A Midsummer Night's Dream" outside in the redwoods (2pm Saturday and Sunday); "Measure for Measure" (Saturday at 8pm); and "Our Town" (7:30 PM on Sunday). The box office can be reached after July 1 at (408)459-4168 and for general information call (408) 459-2121. Bring swimming trunks, tennis rackets, etc. You can get day passes for $2.50 (East Field House, Physical Education Office) to use the recreation facilities on campus. If you have questions regarding registration or accommodations, contact: Jean McKnight, COLT '91, Dept. of Computer Science, UCSC, Santa Cruz, CA 95064. Her emergency phone number is (408) 459-2303, but she prefers E-mail to jean at cs.ucsc.edu or facsimile at (408) 429-0146. As the program and registration forms are being distributed electronically, please post and/or distribute to your colleagues who might not be on our E-mail list. LaTex versions of the conference information, program, and registration forms can be obtained by anonymous ftp. Connect to midgard.ucsc.edu and look in the directory pub/colt. From gibson_w at maths.su.oz.au Wed May 15 22:58:03 1991 From: gibson_w at maths.su.oz.au (Bill Gibson) Date: Wed, 15 May 91 22:58:03 AES Subject: Job opportunity at Sydney Message-ID: <9105151258.AA02488@c721> We are currently advertising to fill a lectureship in the School of Mathematics and Statistics at the University of Sydney, in Australia. I am a member of a small group of researchers, which includes an experimental neurobiologist, which is working on biological applications of neural networks, with a particular interest in the hippocampus. The School is keen to expand further into the general area of mathematical biology, and this is an opportunity for someone with these interests to obtain a tenurable position. The text of the advertisement follows - I will be happy to provide further information on request. Bill Gibson LECTURER Ref. 17/04 School of Mathematics and Statistics The School has active research groups in pure mathematics (algebra, algebraic geometry, analysis, category theory, group theory), applied mathematics (mathematical modelling in various areas of biology, finance, earth sciences and solar astrophysics) and mathematical statistics (probability, theoretical and applied statistics, neurobiological modelling). Courses in mathematics are given at all undergraduate and postgraduate levels and include computer-based courses. Both research and teaching are supported by a large network of Apollo workstations, including several high performance processors and colour graphics systems. The appointee will have a strong research record in a field related to nonlinear systems and be prepared to teach courses at all levels, including computer-based courses. Research areas such as mathematical biology, neural networks, nonlinear waves and chaos are of particular interest. Appointments to lectureships have the potential to lead to tenure and are usually probationary for three years. Salary: $A33 163 - $A43 096 p.a. Closing: 4 July 1991 From SCHOLTES at ALF.LET.UVA.NL Wed May 15 13:41:00 1991 From: SCHOLTES at ALF.LET.UVA.NL (SCHOLTES) Date: Wed, 15 May 91 13:41 MET Subject: No subject Message-ID: <22C43B9F1A000066@BITNET.CC.CMU.EDU> TR Available on Recurrent Self-Organization in NLP: Kohonen Feature Maps in Natural Language Processing J.C. 
Scholtes
University of Amsterdam

Main points:
- showing the possibilities of Kohonen feature maps in symbolic applications by pushing self-organization.
- showing a different technique in Connectionist NLP by using only (unsupervised) self-organization.

Although the model is tested in an NLP context, the linguistic aspects of these experiments are probably less interesting than the connectionist ones. People requesting a copy should be aware of this.

Abstract

In the 1980s, backpropagation (BP) started the connectionist bandwagon in Natural Language Processing (NLP). Although initial results were good, some critical notes must be made about the blind application of BP. Most such systems add contextual and semantic features manually by structuring the input set. Moreover, these models cover only a small subset of the brain structures known from the neural sciences. They do not adapt smoothly to a changing environment and can only learn input/output pairs. Although these disadvantages of the backpropagation algorithm are commonly known and accepted, other more plausible learning algorithms, such as unsupervised learning techniques, are still rare in the field of NLP. The main reason is the sharply increasing complexity of unsupervised learning methods when they are applied in the already complex field of NLP. However, recent efforts at implementing unsupervised language learning have been made, resulting in interesting conclusions (Elman and Ritter). Following this earlier work, a recurrent self-organizing model (based on an extension of the Kohonen feature map), capable of deriving contextual (and some semantic) information from scratch, is presented in detail. The model implements a first step towards an overall unsupervised language learning system. Simple linguistic tasks such as single word clustering (representation on the map), syntactic group formation, derivation of contextual structures, string prediction, grammatical correctness checking, word sense disambiguation and structure assignment are carried out in a number of experiments. The performance of the model is at least as good as that achieved with recurrent backpropagation, and at some points even better (e.g. unsupervised derivation of word classes and syntactic structures). Although preliminary, the first results are promising and show possibilities for other, even more biologically inspired language processing techniques such as real Hebbian, Genetic or Darwinistic models. Forthcoming research must overcome limitations still present in the extended Kohonen model, such as the absence of within-layer learning, restricted recurrence, the lack of look-ahead functions (absence of distributed or unsupervised buffering mechanisms) and limited support for an increased number of layers.

A copy can be obtained by sending an E-mail message to SCHOLTES at ALF.LET.UVA.NL. Please indicate whether you want a hard copy or a PostScript file sent to you.

From sontag at control.rutgers.edu Wed May 15 13:04:59 1991
From: sontag at control.rutgers.edu (sontag@control.rutgers.edu)
Date: Wed, 15 May 91 13:04:59 EDT
Subject: Neural nets are universal computing devices
Message-ID: <9105151704.AA02096@control.rutgers.edu>

NEURAL NETS ARE UNIVERSAL COMPUTING DEVICES -- request for comments

We have proved that it is possible to build a recurrent net that simulates a universal Turing machine. We do not use high-order connections, nor do we require an unbounded number of neurons or an "external" memory such as a stack or a tape.
The net is of the standard type, with linear interconnections and about 10^6 neurons. There was some discussion in this medium, some time ago, about questions of universality. At that time, besides classical references, mention was made of work by Pollack, Franklin/Garzon, Hartley/Szu, and Sun. It would appear that our conclusion is not contained in any of the above (which assume high-order connections or potentially infinitely many neurons).

[More precisely: a ``net'' is an interconnection of N synchronously evolving processors, each of which updates its state, a rational number, according to x(t+1) = s(...), where the expression inside is a linear combination (with biases) of the previous states of all processors. An "output processor" signals when the net has completed its computation by outputting a "1". The initial data, a natural number, is encoded via a fractional unary representation into the first processor; when the computation is completed, this same processor has encoded in it the result of the computation. (An alternative, which would give an entirely equivalent result, would be to define read-in and read-out maps.) As activation function we pick the simplest possible "sigmoid," namely the saturated-linear function s(x)=x if x is in [0,1], s(x)=0 for x<0, and s(x)=1 for x>1.]

We would appreciate all comments/flames/etc. about the technical result. (Philosophical discussions about the implications of these types of results have been extensively covered in previous postings.) A technical report is in preparation, and will be posted to connectionists when publicly available. Any extra references that we should be aware of, please let us know.

Thanks a lot,
-Hava Siegelman and Eduardo Sontag, Depts of Comp Sci and Math, Rutgers University.

From blank at copper.ucs.indiana.edu Wed May 15 16:11:09 1991
From: blank at copper.ucs.indiana.edu (doug blank)
Date: Wed, 15 May 91 15:11:09 EST
Subject: Paper Available: RAAM
Message-ID:

Exploring the Symbolic/Subsymbolic Continuum: A Case Study of RAAM

Douglas S. Blank (blank at iuvax.cs.indiana.edu)
Lisa A. Meeden (meeden at iuvax.cs.indiana.edu)
James B. Marshall (marshall at iuvax.cs.indiana.edu)
Indiana University, Computer Science and Cognitive Science Departments

Abstract: This paper is an in-depth study of the mechanics of recursive auto-associative memory, or RAAM, an architecture developed by Jordan Pollack. It is divided into three main sections: an attempt to place the symbolic and subsymbolic paradigms on a common ground; an analysis of a simple RAAM; and a description of a set of experiments performed on simple "tarzan" sentences encoded by a larger RAAM. We define the symbolic and subsymbolic paradigms as two opposing corners of an abstract space of paradigms. This space, we propose, has roughly three dimensions: representation, composition, and functionality. By defining the differences in these terms, we are able to place actual models in the paradigm space, and compare these models in somewhat common terms. As an example of the subsymbolic corner of the space, we examine in detail the RAAM architecture, representations, compositional mechanisms, and functionality. In conjunction with other simple feed-forward networks, we create detectors, decoders and transformers which act holistically on the composed, distributed, continuous subsymbolic representations created by a RAAM.
These tasks, although trivial for a symbolic system, are accomplished without the need to decode a composite structure into its constituent parts, as symbolic systems must do. The paper can be found in the neuroprose archive as blank.raam.ps.Z; a detailed example of how to retrieve the paper follows at the end of this message. A version of the paper will also appear in your local bookstores as a chapter in "Closing the Gap: Symbolism vs Connectionism," J. Dinsmore, editor; LEA, publishers. 1992. ---------------------------------------------------------------------------- % ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server (Ver Tue May 9 14:01 EDT 1989) ready. Name (cheops.cis.ohio-state.edu:): anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get blank.raam.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for blank.raam.ps.Z (173015 bytes). 226 Transfer complete. local: blank.raam.ps.Z remote: blank.raam.ps.Z 173015 bytes received in 1.6 seconds (1e+02 Kbytes/s) ftp> bye 221 Goodbye. % uncompress blank.raam.ps.Z % lpr blank.raam.ps ---------------------------------------------------------------------------- From dhw at t13.Lanl.GOV Thu May 16 11:33:47 1991 From: dhw at t13.Lanl.GOV (David Wolpert) Date: Thu, 16 May 91 09:33:47 MDT Subject: posting of Siegelman and Sontag Message-ID: <9105161533.AA02358@t13.lanl.gov> Dr.'s Siegelman and Sontag, You might be interested in an article of mine which appeared in Complex Systems last year ("A mathematical theory of generalization: part II", pp. 201-249, vol. 4). In it I describe experiments using essentially genetic algorithms to train recurrent nets whose output is signaled when a certain pre-determined node exceeds a threshold (I call this "output flagging"). This sounds very similar to the work you describe. In my work, the training was done in such a way as to minimize a cross-validation error (I call this "self-guessing error" in the paper), and automatically had zero learning error. This strategy was followed to try to achieve good generalization off of the learning set. Also, the individual nodes in the net weren't neurons in the conventional sense, but were rather parameterized input-output surfaces; the training involved changing not only the architecture of the whole net but also the surfaces at the nodes. An interesting advantage of this technique is that it allows you to have a node represent some environmental information, i.e., one of the input-output surfaces can be "hard-wired" and represent something in the environment (e.g., visual data). This allows you to train on one environment and then simply "slot in" another one later; the recurrent net "probes" the environment by sending various input values into this environment node and seeing what comes out. With this technique you don't have to worry about having input neurons represent the environmental data. The paper is part of what is essentially a thesis dump; in hindsight, it is not as well written as I'd like. You can probably safely skip most of the verbiage leading up to the description of the experiments. If you find the paper interesting, but confusing, I'd be more than happy to discuss it with you. 
Finally, as an unpublished extension of the work in the paper, I've proved that, with output flagging, a continuous-limit recurrent net consisting entirely of linear (!) neurons can mimic a universal computer. Again, this sounds very similar to the work you describe.

Please send me a copy of your paper when it's finished. Thanks.

David Wolpert (dhw at tweety.lanl.gov)

From pollack at cis.ohio-state.edu Thu May 16 17:46:27 1991
From: pollack at cis.ohio-state.edu (Jordan B Pollack)
Date: Thu, 16 May 91 17:46:27 -0400
Subject: Universality
In-Reply-To: Arun Jagota's message of Thu, 16 May 91 13:33:45 EDT <9105161733.AA05965@sybil.cs.Buffalo.EDU>
Message-ID: <9105162146.AA01724@dendrite.cis.ohio-state.edu>

Sontag refers to the Turing machine construction in a chapter of my unpublished thesis. I showed that it is SUFFICIENT to have rational values (to store the unbounded tape), Multiplicative connections (to GATE the rational values), and thresholds (to make decisions). My construction was modelled on Minsky's stored program machine construction (using unbounded integers) in his book FINITE AND INFINITE MACHINES. I was not able to formally prove NECESSITY of the conjunction of parts.

My work was misinterpreted as a suggestion that we should build a Turing machine out of neurons, and then use it! The actual reason for this kind of exercise is to make sure we are not missing something as important to neural computing as, say, "Branch-If-Zero" is to conventional machines.

The chapter is available the usual way from neuroprose as pollack.neuring.ps.Z. If "the usual way" is difficult for you, you might try getting the file "Getps" the usual way, and then FTP'ing from neuroprose becomes quite easy.

From kroger at cognet.ucla.edu Sat May 18 22:47:39 1991
From: kroger at cognet.ucla.edu (James Kroger)
Date: Sat, 18 May 91 19:47:39 PDT
Subject: seeking info on using snowflakes diagrams
Message-ID: <9105190247.AA00246@scarecrow.cognet.ucla.edu>

I am posting this on behalf of Judea Pearl at UCLA. I apologize in advance for not summarizing his efforts to date in seeking this info; he didn't enumerate them in his request. If you have any info you would like to share, I'll be happy to relay it.

---Jim Kroger (kroger at cognet.ucla.edu)

----------

I would like to communicate with someone who is actively involved with the application of "snowflakes" diagrams in Neurobiological Signal Analysis.

Thanks ----=======Judea

From AC1MPS at primea.sheffield.ac.uk Sat May 18 12:17:00 1991
From: AC1MPS at primea.sheffield.ac.uk (AC1MPS@primea.sheffield.ac.uk)
Date: Sat, 18 May 91 12:17:00
Subject: neural nets as universal computing devices
Message-ID:

Drs Siegelman and Sontag (and anyone else interested)

With regard to your note that "Neural nets are universal computing devices", you might be interested in some work described in a series of technical reports from Bruce MacLennan at Knoxville, Tennessee. MacLennan starts with the observation that when a very large number of processors are involved in some parallel/distributed system, it's sensible to model them and their calculations as if they are 'continuous' in extent and nature. The result is a theory of "field computation", which he uses as a framework for a theory of massively parallel analog computation. MacLennan has, I believe, also been investigating the possibility of designing a "universal" field computer.
If we think of MacLennan's model as the limiting case when the number of nodes/connections in a net becomes enormous, it would be interesting to know whether your universal model 'converges' to his, in some sense.

I would very much appreciate a copy of the technical report when it's available.

Mike Stannett                     e-mail: AC1MPS @ primea.sheffield.ac.uk
Formal Methods Group
Dept of Computer Science
The University
Sheffield S10 2TN
England

From schraudo at cs.UCSD.EDU Mon May 20 15:05:00 1991
From: schraudo at cs.UCSD.EDU (Nici Schraudolph)
Date: Mon, 20 May 91 12:05:00 PDT
Subject: Hertz/Krogh/Palmer bibliography: BibTeX version available
Message-ID: <9105201905.AA10458@beowulf.ucsd.edu>

I've converted the Hertz/Krogh/Palmer neural network bibliography to BibTeX format and uploaded it as hertz.refs.bib.Z to the neuroprose archive (anonymous ftp to cheops.cis.ohio-state.edu, directory pub/neuroprose). Like the original authors, I don't have the time to update or otherwise maintain this bibliography, except to fix obvious bugs. Is there anybody out there who would be willing to organize a continually updated bibliography?

Many thanks to Palmer, Krogh & Hertz for making their work available to us. Their original file (hertz.refs.tex.Z) was updated last Thursday to fix a few minor problems.

Happy citing,
--
Nicol N. Schraudolph, CSE Dept. | work (619) 534-8187 | nici%cs at ucsd.edu
Univ. of California, San Diego  | FAX  (619) 534-7029 | nici%cs at ucsd.bitnet
La Jolla, CA 92093-0114, U.S.A. | home (619) 273-5261 | ...!ucsd!cs!nici

From Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU Mon May 20 22:24:30 1991
From: Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave.Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Mon, 20 May 91 22:24:30 EDT
Subject: information on NIPS 3
Message-ID: <7601.674792670@DST.BOLTZ.CS.CMU.EDU>

Below is information on the forthcoming NIPS 3 volume (proceedings of the 1990 NIPS conference), which will be available next month from Morgan Kaufmann. There is catalog information, followed by the complete table of contents, followed by ordering information should you wish to purchase a copy. (NIPS authors and attendees will be receiving free copies.) Normally NIPS proceedings come out in April, but this volume turned out to be much larger than the previous ones (because more papers were accepted), resulting in various technical and logistical difficulties which hopefully will not be repeated next year.

-- Dave Touretzky

................................................................

PUBLICATION ANNOUNCEMENT

ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS -3-
Edited by Richard P. Lippmann (M.I.T. Lincoln Labs), John E. Moody (Yale University), and David S. Touretzky (Carnegie Mellon University)
NIPS 3 (1991) ISBN 1-55860-184-8 $49.95 U.S. 1100 pages
MORGAN KAUFMANN PUBLISHERS, INC.

Table of Contents

Neurobiology......................................................1
"Studies of a Model for the Development and Regeneration of Eye-Brain Maps" J.D. Cowan and A.E. Friedman.................................3
"Development and Spatial Structure of Cortical Feature Maps: A Model Study" K. Obermayer, H. Ritter, and K. Schulten....................11
"Interaction Among Ocularity, Retinotopy and On-center/Off-center Pathways" Shigeru Tanaka..............................................18
"Simple Spin Models for the Development of Ocular Dominance Columns and Iso-Orientation Patches" J.D. Cowan and A.E.
Friedman................................26 "A Recurrent Neural Network Model of Velocity Storage in the Vestibulo-Ocular Reflex" Thomas J. Anastasio.........................................32 "Self-organization of Hebbian Synapses in Hippocampal Neurons" Thomas H. Brown, Zachary F. Mainen, Anthony M. Zador, and Brenda J. Claiborne.........................................39 "Cholinergic Modulation May Enhance Cortical Associative Memory Function" Michael E. Hasselmo, Brooke P. Anderson, and James M. Bower.46 Neuro-Dynamics...................................................53 "Order Reduction for Dynamical Systems Describing the Behavior of Complex Neurons" Thomas B. Kepler, L.F. Abbott, and Eve Marder...............55 "Stochastic Neurodynamics" J.D. Cowan..................................................62 "Dynamics of Learning in Recurrent Feature-Discovery Networks" Todd K. Leen................................................70 "A Lagrangian Approach to Fixed Points" Eric Mjolsness and Willard L. Miranker......................77 "Associative Memory in a Network of Biological Neurons" Wulfram Gerstner............................................84 "CAM Storage of Analog Patterns and Continuous Sequences with 3N\u2\d Weights" Bill Baird and Frank Eeckman................................91 "Connection Topology and Dynamics in Lateral Inhibition Networks" C.M. Marcus, F.R. Waugh, and R.M. Westervelt................98 "Shaping the State Space Landscape in Recurrent Networks" Patrice Y. Simard, Jean Pierre Raysz, and Bernard Victorri.105 "Adjoint-Functions and Temporal Learning Algorithms in Neural Networks" N. Toomarian and J. Barhen....... .........................113 Oscillations....................................................121 "Phase-coupling in Two-Dimensional Networks of Interacting Oscillators" Ernst Niebur, Daniel M. Kammen, Christof Koch, Daniel Ruderman, and Heinz G. Schuster............................123 "Oscillation Onset in Neural Delayed Feedback" Andre Longtin............................................. 130 "Analog Computation at a Critical Point" Leonid Kruglyak and William Bialek.........................137 Temporal Reasoning..............................................145 "Modeling Time Varying Systems Using Hidden Control Neural Architecture" Esther Levin...............................................147 "The Tempo 2 Algorithm: Adjusting Time-Delays By Supervised Learning" Ulrich Bodenhausen and Alex Waibel.........................155 "A Theory for Neural Networks with Time Delays" Bert de Vries and Jose C. Principe.........................162 "ART2/BP Architecture for Adaptive Estimation of Dynamic Processes" Einar Sorheim.............................................169 "Statistical Mechanics of Temporal Association in Neural Networks" Andreas V.M. Herz, Zhaoping Li, and J. Leo van Hemmen......176 "Learning Time-varying Concepts" Anthony Kuh, Thomas Petsche, and Ronald L. Rivest..........183 "The Recurrent Cascade-Correlation Architecture" Scott E. Fahlman...........................................190 Speech..........................................................197 "Continuous Speech Recognition by Linked Predictive Neural Networks" Joe Tebelskis, Alex Waibel, Bojan Petek, and Otto Schmidbauer...........................................199 "A Recurrent Neural Network for Word Identification from Continuous Phoneme Strings" Robert B. Allen and Candace A. 
Kamm........................206 "Connectionist Approaches to the Use of Markov Models for Speech Recognition" Herve Bourlard, Nelson Morgan, and Chuck Wooters...........213 "Spoken Letter Recognition" Mark Fanty and Ronald Cole.................................220 "Speech Recognition Using Demi-Syllable Neural Prediction Model" Ken-ichi Iso and Takao Watanabe............................227 "RecNorm: Simultaneous Normalisation and Classification Applied to Speech Recognition" John S. Bridle and Stephen J. Cox..........................234 "Exploratory Feature Extraction in Speech Signals" Nathan Intrator............................................241 "Phonetic Classification and Recognition Using the Multi-Layer Perceptron" Hong C. Leung, James R. Glass, Michael S. Phillips, and Victor W. Zue.....................................................248 "From Speech Recognition to Spoken Language Understanding" Victor Zue, James Glass, David Goodine, Lynette Hirschman, Hong Leung, Michael Phillips, Joseph Polifroni, and Stephanie Seneff.....................................................255 "Speech Recognition using Connectionist Approaches" Khalid Choukri.............................................262 Signal Processing...............................................271 "Natural Dolphin Echo Recognition Using an Integrator Gateway Network" Herbert L. Roitblat, Patrick W.B. Moore, Paul E. Nachtigall, and Ralph H. Penner........................................273 "Signal Processing by Multiplexing and Demultiplexing in Neurons" David C. Tam...............................................282 "Applications of Neural Networks in Video Signal Processing" John C. Pearson, Clay D. Spence, and Ronald Sverdlove......289 Visual Processing...............................................297 "Discovering Viewpoint-Invariant Relationships That Characterize Objects" Richard S. Zemel and Geoffrey E. Hinton....................299 "A Neural Network Approach for Three-Dimensional Object Recognition" Volker Tresp...............................................306 "A Second-Order Translation, Rotation and Scale Invariant Neural Network" Shelly D.D. Goggin, Kristina M. Johnson, and Karl E. Gustafson..................................................313 "Learning to See Rotation and Dilation with a Hebb Rule" Martin I. Sereno and Margaret E. Sereno....................320 "Stereopsis by a Neural Network Which Learns the Constraints" Alireza Khotanzad and Ying-Wung Lee........................327 "Grouping Contours by Iterated Pairing Network" Amnon Shashua and Shimon Ullman............................335 "Neural Dynamics of Motion Segmentation and Grouping" Ennio Mingolla.............................................342 "A Multiscale Adaptive Network Model of Motion Computation in Primates" H. Taichi Wang, Bimal Mathur, and Christof Koch............349 "Qualitative Structure From Motion" Daphna Weinshall...........................................356 "Optimal Sampling of Natural Images" William Bialek, Daniel L. Ruderman, and A. Zee.............363 "A VLSI Neural Network for Color Constancy" Andrew Moore, John Allman, Geoffrey Fox, and Rodney Goodman....................................................370 "Optimal Filtering in the Salamander Retina" Fred Rieke, W. Geoffrey Owen, and William Bialek...........377 "A Four Neuron Circuit Accounts for Change Sensitive Inhibition in Salamander Retina" Jeffrey L. Teeters, Frank H. Eeckman, and Frank S. 
Werblin.384 "Feedback Synapse to Cone and Light Adaptation" Josef Skrzypek.............................................391 "An Analog VLSI Chip for Finding Edges from Zero-crossings" Wyeth Bair and Christof Koch...............................399 "A Delay-Line Based Motion Detection Chip" Tim Horiuchi, John Lazzaro, Andrew Moore, and Christof Koch..............................................406 Control and Navigation..........................................413 "Neural Networks Structured for Control Application to Aircraft Landing" Charles Schley, Yves Chauvin, Van Henkle, and Richard Golden.............................................415 "Real-time Autonomous Robot Navigation Using VLSI Neural Networks" Lionel Tarassenko, Michael Brownlow, Gillian Marshall, Jon Tombs, and Alan Murray.....................................422 "Rapidly Adapting Artificial Neural Networks for Autonomous Navigation" Dean A. Pomerleau..........................................429 "Learning Trajectory and Force Control of an Artificial Muscle Arm" Masazumi Katayama and Mitsuo Kawato........................436 "Proximity Effect Corrections in Electron Beam Lithography" Robert C. Frye, Kevin D. Cummings, and Edward A. Reitman...443 "Planning with an Adaptive World Model" Sebastian B. Thrun, Knut Moller, and Alexander Linden...........................................450 "A Connectionist Learning Control Architecture for Navigation" Jonathan R. Bachrach.......................................457 "Navigating Through Temporal Difference" Peter Dayan................................................464 "Integrated Modeling and Control Based on Reinforcement Learning" Richard S. Sutton..........................................471 "A Reinforcement Learning Variant for Control Scheduling" Aloke Guha.................................................479 "Adaptive Range Coding" Bruce E. Rosen, James M. Goodwin, and Jacques J. Vidal.....486 "Neural Network Implementation of Admission Control" Rodolfo A. Milito, Isabelle Guyon, and Sara A. Solla.......493 "Reinforcement Learning in Markovian and Non-Markovian Environments" Jurgen Schmidhuber.........................................500 "A Model of Distributed Sensorimotor Control in The Cockroach Escape Turn" R.D. Beer, G.J. Kacmarcik, R.E. Ritzmann, and H.J. Chiel...507 "Flight Control in the Dragonfly: A Neurobiological Simulation" William E. Faller and Marvin W. Luttges....................514 Applications....................................................521 "A Novel Approach to Prediction of the 3-Dimensional Structures" Henrik Fredholm, Henrik Bohr, Jakob Bohr, Soren Brunak, Rodney M.J. Cotterill, Benny Lautrup, and Steffen B. Petersen.....523 "Training Knowledge-Based Neural Networks to Recognize Genes" Michiel O. Noordewier, Geoffrey G. Towell, and Jude W. Shavlik....................................................530 "Neural Network Application to Diagnostics" Kenneth A. Marko...........................................537 "Lg Depth Estimation and Ripple Fire Characterization" John L. Perry and Douglas R. Baumgardt.....................544 "A B-P ANN Commodity Trader" Joseph E. Collard..........................................551 "Integrated Segmentation and Recognition of Hand-Printed Numerals" James D. Keeler, David E. Rumelhart, and Wee-Kheng Leow....557 "EMPATH: Face, Emotion, and Gender Recognition Using Holons" Garrison W. Cottrell and Janet Metcalfe....................564 "SEXNET: A Neural Network Identifies Sex From Human Faces" B.A. Golomb, D.T. Lawrence, and T.J. 
Sejnowski.............572 "A Neural Expert System with Automated Extraction of Fuzzy If-Then Rules" Yoichi Hayashi.............................................578 "Analog Neural Networks as Decoders" Ruth Erlanson and Yaser Abu-Mostafa........................585 Language and Cognition..........................................589 "Distributed Recursive Structure Processing" Geraldine Legendre, Yoshiro Miyata, and Paul Smolensky.....591 "Translating Locative Prepositions" Paul W. Munro and Mary Tabasko "A Short-Term Memory Architecture for the Learning of Morphophonemic Rules" Michael Gasser and Chan-Do Lee.............................605 "Exploiting Syllable Structure in a Connectionist Phonology Model" David S. Touretzky and Deirdre W. Wheeler..................612 "Language Induction by Phase Transition in Dynamical Recognizers" Jordan B. Pollack..........................................619 "Discovering Discrete Distributed Representations" Michael C. Mozer...........................................627 "Direct Memory Access Using Two Cues" Janet Wiles, Michael S. Humphreys, John D. Bain, and Simon Dennis...............................................635 "An Attractor Neural Network Model of Recall and Recognition" Eytan Ruppin and Yechezkel Yeshurun........................642 "ALCOVE: A Connectionist Model of Human Category Learning" John K. Kruschke...........................................649 "Spherical Units as Dynamic Consequential Regions" Stephen Jose Hanson and Mark A. Gluck......................656 "Connectionist Implementation of a Theory of Generalization" Roger N. Shepard and Sheila Kannappan......................665 Local Basis Functions...........................................673 "Adaptive Spline Networks" Jerome H. Friedman.........................................675 "Multi-Layer Perceptrons with B-Spline Receptive Field Functions" Stephen H. Lane, Marshall G. Flax, David A. Handelman, and Jack J. Gelfand............................................684 "Bumptrees for Efficient Function, Constraint, and Classification Learning" Stephen M. Omohundro.......................................693 "Basis-Function Trees as a Generalization of Local Variable Selection Methods" Terence D. Sanger..........................................700 "Generalization Properties of Radial Basis Functions" Sherif M. Botros and Christopher G. Atkeson................707 "Learning by Combining Memorization and Gradient Descent" John C. Platt..............................................714 "Sequential Adaptation of Radial Basis Function Neural Networks" V. Kadirkamanathan, M. Niranjan, and F. Fallside...........721 "Oriented Non-Radial Basis Functions for Image Coding and Analysis" Avijit Saha, Jim Christian, D.S. Tang, and Chuan-Lin Wu....728 "Computing with Arrays of Bell-Shaped and Sigmoid Functions" Pierre Baldi...............................................735 "Discrete Affine Wavelet Transforms" Y.C. Pati and P.S. Krishnaprasad................................743 "Extensions of a Theory of Networks for Approximation and Learning" Federico Girosi, Tomaso Poggio, and Bruno Caprile..........750 "How Receptive Field Parameters Affect Neural Learning" Bartlett W. Mel and Stephen M. Omohundro...................757 Learning Systems................................................765 "A Competitive Modular Connectionist Architecture" Robert A. Jacobs and Michael I. Jordan.....................767 "Evaluation of Adaptive Mixtures of Competing Experts" Steven J. Nowlan and Geoffrey E. 
Hinton....................774 "A Framework for the Cooperation of Learning Algorithms" Leon Bottou and Patrick Gallinari..........................781 "Connectionist Music Composition Based on Melodic and Stylistic Constraints" Michael C. Mozer and Todd Soukup...........................789 "Using Genetic Algorithms to Improve Pattern Classification Performance" Eric I. Chang and Richard P. Lippmann......................797 "Evolution and Learning in Neural Networks" Ron Keesing and David G. Stork.............................804 "Designing Linear Threshold Based Neural Network Pattern Classifiers" Terrence L. Fine...........................................811 "On Stochastic Complexity and Admissible Models for Neural Network Classifiers" Padhraic Smyth............................................ 818 "Efficient Design of Boltzmann Machines" Ajay Gupta and Wolfgang Maass..............................825 "Note on Learning Rate Schedules for Stochastic Optimization" Christian Darken and John Moody............................832 "Convergence of a Neural Network Classifier" John S. Baras and Anthony LaVigna..........................839 "Learning Theory and Experiments with Competitive Networks" Griff L. Bilbro and David E. Van den Bou...................846 "Transforming Neural-Net Output Levels to Probability Distributions" John S. Denker and Yann leCun..............................853 "Back Propagation is Sensitive to Initial Conditions" John F. Kolen and Jordan B. Pollack........................860 "Closed-Form Inversion of Backpropagation Networks" Michael L. Rossen..........................................868 Learning and Generalization.....................................873 "Generalization by Weight-Elimination with Application to Forecasting" Andreas S. Weigend, David E. Rumelhart, and Bernardo A. Huberman...................................................875 "The Devil and the Network" Sanjay Biswas and Santosh S. Venkatesh.....................883 "Generalization Dynamics in LMS Trained Linear Networks" Yves Chauvin...............................................890 "Dynamics of Generalization in Linear Perceptrons" Anders Krogh and John A. Hertz.............................897 "Constructing Hidden Units Using Examples and Queries" Eric B. Baum and Kevin J. Lang.............................904 Can Neural Networks do Better Than the Vapnik-Chervonenkis Bounds?" David Cohn and Gerald Tesauro..............................911 "Second Order Properties of Error Surfaces" Yann Le Cun, Ido Kanter, and Sara A. Solla.................918 "Chaitin-Kolmogorov Complexity and Generalization in Neural Networks" Barak A. Pearlmutter and Ronald Rosenfeld..................925 "Asymptotic Slowing Down of the Nearest-Neighbor Classifier" Robert R. Snapp, Demetri Psaltis, and Santosh S. Venkatesh..................................................932 "Remarks on Interpolation and Recognition Using Neural Nets" Eduardo D. Sontag..........................................939 "*-Entropy and the Complexity of Feedforward Neural Networks" Robert C. Williamson.......................................946 "On The Circuit Complexity of Neural Networks" V.P. Roychowdhury, A. Orlitsky, K.Y. Siu, and T. Kailath...953 Performance Comparisons.........................................961 "Comparison of Three Classification Techniques, CART, C4.5 and Multi-Layer Perceptrons" A.C. Tsoi and R.A. 
Pearson.................................963
"Practical Characteristics of Neural Network and Conventional Pattern Classifiers" Kenney Ng and Richard P. Lippmann..........................970
"Time Trials on Second-Order and Variable-Learning-Rate Algorithms" Richard Rohwer.............................................977
"Kohonen Networks and Clustering" Wesley Snyder, Daniel Nissman, David Van den Bout, and Griff Bilbro.....................................................984
VLSI............................................................991
"VLSI Implementations of Learning and Memory Systems" Mark A. Holler.............................................993
"Compact EEPROM-based Weight Functions" A. Kramer, C.K. Sin, R. Chu, and P.K. Ko..................1001
"An Analog VLSI Splining Network" Daniel B. Schwartz and Vijay K. Samalam...................1008
"Relaxation Networks for Large Supervised Learning Problems" Joshua Alspector, Robert B. Allen, Anthony Jayakumar, Torsten Zeppenfeld, and Ronny Meir................................1015
"Design and Implementation of a High Speed CMAC Neural Network" W. Thomas Miller, III, Brian A. Box, Erich C. Whitney, and James M. Glynn............................................1022
"Back Propagation Implementation" Hal McCartor..............................................1028
"Reconfigurable Neural Net Chip with 32K Connections" H.P. Graf, R. Janow, D. Henderson, and R. Lee.............1032
"Simulation of the Neocognitron on a CCD Parallel Processing Architecture" Michael L. Chuang and Alice M. Chiang.....................1039
"VLSI Implementation of TInMANN" Matt Melton, Tan Phan, Doug Reeves, and Dave Van den Bout.........................................1046
Subject Index
Author Index

Advances in Neural Information Processing Systems Bibliography
NIPS 3 (1991) ISBN 1-55860-184-8 $49.95 U.S. 1100 pages
NIPS 2 (1990) ISBN 1-55860-100-7 $35.95 U.S. 853 pages
NIPS 1 (1989) ISBN 1-55860-015-9 $35.95 U.S. 819 pages
Complete three volume set: NIPS 1, 2, & 3 ISBN 1-55860-189-9 $109.00 U.S.
Morgan Kaufmann Publishers, Inc.
_________________________________________________________________
Ordering Information: Shipping is available at cost, plus a nominal handling fee. In the U.S. and Canada, please add $3.50 for the first book and $2.50 for each additional book for surface shipping; for surface shipments to all other areas, please add $6.50 for the first book and $3.50 for each additional book. Air shipment is available outside North America for $45.00 on the first book and $25.00 on each additional book. We accept American Express, Master Card, Visa and personal checks drawn on US banks.
MORGAN KAUFMANN PUBLISHERS, INC.
Department B8
2929 Campus Drive, Suite 260
San Mateo, CA 94403 USA
Phone: (800) 745-7323 (Toll Free for North America) (415) 578-9928
Fax: (415) 578-0672
email: morgan at unix.sri.com

From SIGUENZA at EMDCCI11.BITNET Tue May 21 16:41:55 1991
From: SIGUENZA at EMDCCI11.BITNET (Juan Alberto Sigenza Pizarro)
Date: Tue, 21 May 91 16:41:55 HOE
Subject: No subject
Message-ID:

**************************************************************
*                        SUMMER SCHOOL                       *
*                                                            *
* ARTIFICIAL NEURAL NETWORKS: FOUNDATIONS AND APPLICATIONS.  *
*                                                            *
**************************************************************

Organizers:
Jose R. Dorronsoro (Instituto de Ingenieria del Conocimiento and Facultad de Ciencias de la Universidad Autonoma de Madrid)
Juan A. Siguenza (Instituto de Ingenieria del Conocimiento and Facultad de Medicina de la Universidad Autonoma de Madrid)

*******PROGRAMME*******

July 22, 1991
*************
Introductory course on Neural Networks. Vicente Lopez (Instituto de Ingenieria del Conocimiento and Facultad de Ciencias de la UAM)
10:00 1st Session: Introduction to the different architectures.
12:00 2nd Session: Networks based on backpropagation algorithms.
17:00 3rd Session: Practical application exercises.

July 23, 1991
*************
10:00 Global optimization and genetic algorithms. A. Trias (AIA S.A. and Universidad de Barcelona)
12:00 Neurobiological models and artificial neural networks. R. Granger (Bonney Center for the Neurobiology of Learning and Memory, University of California, Irvine)
17:00 Practical demonstrations

July 24, 1991
*************
10:00 Present perspectives for the optical implementation of neural networks. D. Selviah (University College London)
12:00 Neural network learning algorithms. G. Tesauro (IBM, T.J. Watson Research Center, Yorktown

From slehar at park.bu.edu Tue May 21 11:48:38 1991
From: slehar at park.bu.edu (Steve Lehar)
Date: Tue, 21 May 91 11:48:38 -0400
Subject: neural nets as universal computing devices
In-Reply-To: connectionists@c.cs.cmu.edu's message of 20 May 91 11:26:19 GM
Message-ID: <9105211548.AA03404@park.bu.edu>

Have you got a reference for MacLennan's "field computation" idea? It sounds very interesting.

From mackay at hope.caltech.edu Tue May 21 13:40:57 1991
From: mackay at hope.caltech.edu (David MacKay)
Date: Tue, 21 May 91 10:40:57 PDT
Subject: New Bayesian work
Message-ID: <9105211740.AA27625@hope.caltech.edu>

Two new papers available
------------------------
The papers that I presented at Snowbird this year are now available in the neuroprose archives. The titles:
[1] Bayesian interpolation (14 pages)
[2] A practical Bayesian framework for backprop networks (11 pages)
The first paper describes and demonstrates recent developments in Bayesian regularisation and model comparison. The second applies this framework to backprop. The first paper is a prerequisite for understanding the second. Abstracts and instructions for anonymous ftp follow. If you have problems obtaining the files by ftp, feel free to contact me.

David MacKay
Office: (818) 397 2805
Fax: (818) 792 7402
Email: mackay at hope.caltech.edu
Smail: Caltech 139-74, Pasadena, CA 91125

Abstracts
---------
Bayesian interpolation
----------------------
Although Bayesian analysis has been in use since Laplace, the Bayesian method of model comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularisation and model comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other problems. Regularising constants are set by examining their posterior probability distribution. Alternative regularisers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. `Occam's razor' is automatically embodied by this framework. The way in which Bayes infers the values of regularising constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
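As a rough, purely illustrative companion to this abstract: for a linear-in-parameters interpolation model, the evidence framework reduces to a few lines of numpy. The notation (alpha, beta, and gamma for the effective number of parameters) follows the usual Gull/MacKay conventions, but the toy basis, data, and function names below are assumptions of this sketch, not code from the paper.

import numpy as np

def evidence_step(Phi, t, alpha, beta):
    # One fixed-point update of the regularising constant alpha, the noise
    # level beta, and the log evidence, for basis matrix Phi and targets t.
    N, k = Phi.shape
    A = beta * Phi.T @ Phi + alpha * np.eye(k)        # posterior precision
    w_mp = beta * np.linalg.solve(A, Phi.T @ t)       # most probable weights
    E_W = 0.5 * w_mp @ w_mp                           # weight "energy"
    E_D = 0.5 * np.sum((Phi @ w_mp - t) ** 2)         # data misfit
    gamma = k - alpha * np.trace(np.linalg.inv(A))    # effective no. of parameters
    log_ev = (-alpha * E_W - beta * E_D
              - 0.5 * np.linalg.slogdet(A)[1]
              + 0.5 * k * np.log(alpha) + 0.5 * N * np.log(beta)
              - 0.5 * N * np.log(2 * np.pi))
    return gamma / (2 * E_W), (N - gamma) / (2 * E_D), log_ev, w_mp

# toy usage: a noisy sine interpolated with a (hypothetical) polynomial basis
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(30)
Phi = np.vander(x, 6, increasing=True)
alpha, beta = 1.0, 1.0
for _ in range(20):
    alpha, beta, log_ev, w_mp = evidence_step(Phi, t, alpha, beta)
print(alpha, beta, log_ev)

Iterating to a fixed point gives the most probable regularising constant and noise level; comparing log_ev across alternative basis sets is the model comparison the abstract describes, and gamma reports how many parameters the data actually determine.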
A practical Bayesian framework for backprop networks ---------------------------------------------------- A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for deletion of weights; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well--determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over--flexible and over--complex architectures. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well--matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. Instructions for obtaining copies by ftp from neuroprose: --------------------------------------------------------- unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get mackay.bayes-interpolation.ps.Z ftp> get mackay.bayes-backprop.ps.Z ftp> quit unix> [then `uncompress' files and lpr them.] From ethem at ICSI.Berkeley.EDU Tue May 21 13:56:40 1991 From: ethem at ICSI.Berkeley.EDU (Ethem Alpaydin) Date: Tue, 21 May 91 10:56:40 PDT Subject: New ICSI TR on incremental learning Message-ID: <9105211756.AA04169@icsib17.Berkeley.EDU> The following TR is available by anonymous net access at icsi-ftp.berkeley.edu (128.32.201.55) in postscript. Instructions to ftp and uncompress follow text. Hard copies may be requested by writing to either of the addresses below: ethem at icsi.berkeley.edu Ethem Alpaydin ICSI 1947 Center St. Suite 600 Berkeley CA 94704-1105 USA ------------------------------------------------------------------------------ GAL: Networks that grow when they learn and shrink when they forget Ethem Alpaydin International Computer Science Institute Berkeley, CA TR 91-032 Abstract Learning when limited to modification of some parameters has a limited scope; the capability to modify the system structure is also needed to get a wider range of the learnable. In the case of artificial neural networks, learning by iterative adjustment of synaptic weights can only succeed if the network designer predefines an appropriate network structure, i.e., number of hidden layers, units, and the size and shape of their receptive and projective fields. This paper advocates the view that the network structure should not, as usually done, be determined by trial-and-error but should be computed by the learning algorithm. Incremental learning algorithms can modify the network structure by addition and/or removal of units and/or links. A survey of current connectionist literature is given on this line of thought. ``Grow and Learn'' (GAL) is a new algorithm that learns an association at one-shot due to being incremental and using a local representation. During the so-called ``sleep'' phase, units that were previously stored but which are no longer necessary due to recent modifications are removed to minimize network complexity. 
The incrementally constructed network can later be fine-tuned off-line to improve performance. Another proposed method that greatly increases recognition accuracy is to train a number of networks and vote over their responses. The algorithm and its variants are tested on recognition of handwritten numerals and seem promising, especially in terms of learning speed. This makes the algorithm attractive for on-line learning tasks, e.g., in robotics. The biological plausibility of incremental learning is also discussed briefly.

Keywords: Incremental learning, supervised learning, classification, pruning, destructive methods, growth, constructive methods, nearest neighbor.
--------------------------------------------------------------------------
Instructions to ftp the above-mentioned TR (Assuming you are under UNIX and have a postscript printer --- messages in parentheses indicate the system's responses):

ftp 128.32.201.55
(Connected to 128.32.201.55. 220 icsi-ftp (icsic) FTP server (Version 5.60 local) ready. Name (128.32.201.55:ethem):) anonymous
(331 Guest login ok, send ident as password. Password:) (your email address)
(230 Guest login Ok, access restrictions apply. ftp>) cd pub/techreports
(250 CWD command successful. ftp>) bin
(200 Type set to I. ftp>) get tr-91-032.ps.Z
(200 PORT command successful. 150 Opening BINARY mode data connection for tr-91-032.ps.Z (153915 bytes). 226 Transfer complete. local: tr-91-032.ps.Z remote: tr-91-032.ps.Z 153915 bytes received in 0.62 seconds (2.4e+02 Kbytes/s) ftp>) quit
(221 Goodbye.)
(back to Unix)
uncompress tr-91-032.ps.Z
lpr tr-91-032.ps

Happy reading, I hope you'll enjoy it.

From gary at cs.UCSD.EDU Tue May 21 21:27:56 1991
From: gary at cs.UCSD.EDU (Gary Cottrell)
Date: Tue, 21 May 91 18:27:56 PDT
Subject: paper available: Learning the past tense in a recurrent network
Message-ID: <9105220127.AA18926@desi.ucsd.edu>

The following paper will appear in the Proceedings of the Thirteenth Annual Meeting of the Cognitive Science Society. It is now available in the neuroprose archive as cottrell.cogsci91.ps.Z.

Learning the past tense in a recurrent network: Acquiring the mapping from meaning to sounds

Garrison W. Cottrell             Kim Plunkett
Computer Science Dept.           Inst. of Psychology
UCSD                             University of Aarhus
La Jolla, CA                     Aarhus, Denmark

The performance of a recurrent neural network in mapping a set of plan vectors, representing verb semantics, to associated sequences of phonemes, representing the phonological structure of verb morphology, is investigated. Several semantic representations are explored in an attempt to evaluate the role of verb synonymy and homophony in determining the patterns of error observed in the net's output performance. The model's performance offers several unexplored predictions for developmental profiles of young children acquiring English verb morphology.

To retrieve this from the neuroprose archive type the following:
ftp 128.146.8.62
anonymous
bi
cd pub/neuroprose
get cottrell.cogsci91.ps.Z
quit
uncompress cottrell.cogsci91.ps.Z
lpr cottrell.cogsci91.ps

Thanks again to Jordan Pollack for this great idea for net distribution.

gary cottrell 619-534-6640 Sec'y: 619-534-5288 FAX: 619-534-7029
Computer Science and Engineering C-014
UCSD, La Jolla, Ca. 92093
gary at cs.ucsd.edu (INTERNET)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!gary (USENET)
gcottrell at ucsd.edu (BITNET)

From mclennan at cs.utk.edu Tue May 21 22:07:04 1991
From: mclennan at cs.utk.edu (mclennan@cs.utk.edu)
Date: Tue, 21 May 91 22:07:04 -0400
Subject: field computation papers
Message-ID: <9105220207.AA00427@maclennan.cs.utk.edu>

There have been several requests for my papers on field computation. In addition to an early paper in the first IEEE ICNN (San Diego, 1987), there are several reports in the neuroprose directory:
maclennan.contincomp.ps.Z -- a short introduction
maclennan.fieldcomp.ps.Z -- the current most comprehensive report
maclennan.csa.ps.Z -- continuous spatial automata
Of course I will be happy to send out hardcopy of these papers or several others not in neuroprose.

Bruce MacLennan
Department of Computer Science
The University of Tennessee
Knoxville, TN 37996-1301
(615)974-5067
maclennan at cs.utk.edu

Here are the directions for accessing files from neuroprose. Note that there is also in the directory a script called Getps that does all the work.
unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get maclennan.csa.ps.Z
ftp> quit
unix> uncompress maclennan.csa.ps.Z
unix> lpr maclennan.csa.ps (or however you print postscript)

From AC1MPS at primea.sheffield.ac.uk Wed May 22 12:40:37 1991
From: AC1MPS at primea.sheffield.ac.uk (AC1MPS@primea.sheffield.ac.uk)
Date: Wed, 22 May 91 12:40:37
Subject: maclennan papers
Message-ID:

Dear connectionists,
A number of you have asked for more information on MacLennan's work concerning field computation. He mentions that a number of tech. reports on field computation are stored in the neuroprose archive (he doesn't remember their filenames off hand, but they all mention his name). His e-mail address is currently maclennan @ cs.utk.edu
Technical reports and papers of which I know are as follows:
1987: Technology-independent design of neurocomputers: the universal field computer. Published in Proc. IEEE 1st conf. on neural networks, June 1987.
--------------------------------
Technical reports, Computer Science Department, University of Tennessee, Knoxville, TN 37916.
CS-89-83 Continuous computation: taking massive parallelism seriously. June 1989
CS-89-84 Outline of a theory of massively parallel analog computation. June 1989
CS-90-100 Field computation: a theoretical framework for massively parallel analog computation, Parts I-IV. February 1990
CS-90-121 Continuous spatial automata. November 1990
---------------------------------
Best wishes,
Mike Stannett.

From fritzke at immd2.informatik.uni-erlangen.de Wed May 22 18:03:46 1991
From: fritzke at immd2.informatik.uni-erlangen.de (B. Fritzke)
Date: Wed, 22 May 91 18:03:46 MET DST
Subject: TR's available (via ftp)
Message-ID: <9105221603.AA01521@faui28.informatik.uni-erlangen.de>

Hi there,
I have just placed two short papers in the Neuroprose Archive at cheops.cis.ohio-state.edu (128.146.8.62) in the directory pub/neuroprose. The files are:
fritzke.cell_structures.ps.Z (to be presented at ICANN-91 Helsinki)
fritzke.clustering.ps.Z (to be presented at IJCNN-91 Seattle)
They both deal with a new self-organizing network based on the model of Kohonen. The first one describes the model and the second one concentrates on an application.

LET IT GROW -- SELF-ORGANIZING FEATURE MAPS WITH PROBLEM DEPENDENT CELL STRUCTURE
Bernd FRITZKE

Abstract: The self-organizing feature maps introduced by T. Kohonen use a cell array of fixed size and structure. In many cases this array is not able to model a given signal distribution properly. We present a method to construct two-dimensional cell structures during a self-organization process which are specially adapted to the underlying distribution: Starting with a small number of cells, new cells are added successively. Thereby signal vectors according to the (usually not explicitly known) probability distribution are used to determine where to insert or delete cells in the current structure. This process leads to problem dependent cell structures which model the given distribution with arbitrarily high accuracy.

UNSUPERVISED CLUSTERING WITH GROWING CELL STRUCTURES
Bernd FRITZKE

Abstract: A Neural Network model is presented which is able to detect clusters of similar patterns. The patterns are n-dimensional real number vectors according to an unknown probability distribution P(X). By evaluating sample vectors according to P(X), a two-dimensional cell structure is gradually built up which models the distribution. Through removal of cells corresponding to areas with low probability density, the structure is then split into several disconnected substructures. Each of them identifies one cluster of similar patterns. Not only the number of clusters is determined but also an approximation of the probability distribution inside each cluster. The accuracy of the cluster description increases linearly with the number of evaluated sample vectors.

Enjoy,
Bernd

Bernd Fritzke ----------> e-mail: fritzke at immd2.informatik.uni-erlangen.de
University of Erlangen, CS IMMD II, Martensstr. 3, 8520 Erlangen (Germany)

From dtam at next-cns.neusc.bcm.tmc.edu Wed May 22 20:08:59 1991
From: dtam at next-cns.neusc.bcm.tmc.edu (David C. Tam)
Date: Wed, 22 May 91 20:08:59 GMT-0600
Subject: Info on Snowflake diagrams for spike train analysis
Message-ID: <9105230208.AA02196@next-cns.neusc.bcm.tmc.edu>

This is a brief summary of the information on "snowflake" diagrams in neurobiological signal analysis, in reply to the request by Judea Pearl (via kroger at cognet.ucla.edu). The snowflake scatter diagram was one of the spike train analytical methods introduced by Donald Perkel and George Gerstein et al. in the 1970s to analyze the correlation between firing intervals among 3 neurons.

Background: Spike trains are time-series of action potentials recorded from biological neurons. Since the firing times of spikes by neurons vary in time (i.e., they jitter in time), the analysis of the timing relationships between the firing of neurons requires specialized statistical methods that deal with pulse codes. The most often used statistic is correlation analysis (which was also developed by Donald Perkel and George Gerstein et al. earlier, in the 1960s, to analyze spike train data).

Snowflake analysis and correlation analysis are similar in the following ways: Whereas correlation analysis establishes statistics for pair-wise correlation between 2 spike trains (neurons), snowflake analysis establishes statistics for 3-wise correlation among 3 neurons. Whereas correlation analysis establishes statistics for all higher-order firing intervals between neurons, snowflake analysis establishes statistics for only first-order intervals.
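As a small, purely illustrative aside (my own construction, not code from Perkel and Gerstein): the adjacent, first-order cross-intervals that such a three-neuron scatter diagram accumulates can be computed as follows. The toy spike trains and names are assumptions of this sketch, and the projection onto the three 120-degree axes is omitted.

import numpy as np

def adjacent_cross_intervals(ref, other):
    # Signed interval from each reference spike to the nearest spike of `other`
    # (both arguments are sorted arrays of spike times).
    idx = np.searchsorted(other, ref)
    lo = other[np.clip(idx - 1, 0, len(other) - 1)]
    hi = other[np.clip(idx, 0, len(other) - 1)]
    return np.where(np.abs(ref - lo) < np.abs(hi - ref), lo - ref, hi - ref)

def snowflake_points(a, b, c):
    # Pairs of adjacent cross-intervals (A->B, A->C) for scatter plotting.
    return np.column_stack([adjacent_cross_intervals(a, b),
                            adjacent_cross_intervals(a, c)])

# toy usage: three jittered spike trains (times in seconds)
rng = np.random.default_rng(0)
a = np.cumsum(rng.exponential(0.1, 200))
b = np.sort(a + rng.normal(0.010, 0.005, a.size))   # B tends to follow A closely
c = np.sort(a + rng.normal(0.020, 0.010, a.size))   # C follows A more loosely
pts = snowflake_points(a, b, c)
print(pts[:5])

Each row pairs the interval from a spike in train A to the nearest spikes in trains B and C; accumulating such pairs is essentially the first-order statistic that the diagrams described here display.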
The snowflake diagram and the joint-interval histogram are similar in the following ways: Whereas the joint-interval scatter diagram has 2 orthogonal axes (in a 2-D plane) for displaying the adjacent cross-intervals between 2 neurons, the snowflake scatter diagram has 3 axes (each 120 degrees from the others) in a 2-D plane for displaying the adjacent cross-intervals between 3 neurons. They both establish first-order interval statistics.

I worked with Donald Perkel until his death; George Gerstein is still at Univ. of Penn. I have worked on numerous spike train analytical methods including the snowflake diagram. I have also developed other similar spike train analysis techniques, so further detailed questions can be directed to me (David Tam, e-mail: dtam at next-cns.neusc.bcm.tmc.edu) if needed.

Related references:
Perkel, D.H., Gerstein, G.L., Smith, M.S. and Tatton, W.G. (1975) Nerve-impulse patterns: a quantitative display technique for three neurons. Brain Research 100: 271-296.
Gerstein, G.L. and Perkel, D.H. (1972) Mutual temporal relationships among neuronal spike trains. Biophysical Journal 12: 453-473.
Perkel, D.H., Gerstein, G.L. and Moore, G.P. (1967) Neuronal spike trains and stochastic point process. I. The single spike train. Biophysical Journal 7: 391-418.
Perkel, D.H., Gerstein, G.L. and Moore, G.P. (1967) Neuronal spike trains and stochastic point process. II. Simultaneous spike trains. Biophysical Journal 7: 419-440.
Tam, D.C., Ebner, T.J. and Knox, C.K. (1987) Conditional cross-interval correlation analyses with applications to simultaneously recorded cerebellar Purkinje neurons. Journal of Neurosci. Methods 23: 23-33.

From mike at park.bu.edu Wed May 22 22:54:40 1991
From: mike at park.bu.edu (mike@park.bu.edu)
Date: Wed, 22 May 91 22:54:40 -0400
Subject: Bibliography
Message-ID: <9105230254.AA13933@fenway.bu.edu>

I would like to compile a fairly large bibliographic database in BibTeX format, including the Krogh bibliographic database. If individuals mail me databases in either refer or BibTeX format, I will make an effort to merge the files and place them in the neuroprose directory.
--
Michael Cohen
Center for Adaptive Systems, Boston University
Smail: 111 Cummington Street, RM 242, Boston, Mass 02215
Phone: 617-353-7857
Email: mike at bucasb.bu.edu

From rsun at chaos.cs.brandeis.edu Thu May 23 16:32:23 1991
From: rsun at chaos.cs.brandeis.edu (Ron Sun)
Date: Thu, 23 May 91 16:32:23 edt
Subject: No subject
Message-ID: <9105232032.AA20654@chaos.cs.brandeis.edu>

The following paper will appear in the Proc. 13th Annual Conference of the Cognitive Science Society. It is a revised version of an earlier TR entitled "Integrating Rules and Connectionism for Robust Reasoning".

Connectionist Models of Rule-Based Reasoning
Ron Sun
Brandeis University, Computer Science Department
rsun at cs.brandeis.edu

We investigate connectionist models of rule-based reasoning, and show that while such models usually carry out reasoning in exactly the same way as symbolic systems, they have more to offer in terms of commonsense reasoning. A connectionist architecture for commonsense reasoning, CONSYDERR, is proposed to account for common reasoning patterns and to remedy the brittleness problem in traditional rule-based systems. A dual representational scheme is devised, which utilizes both localist and distributed representations and explores the synergy resulting from the interaction between the two.
CONSYDERR is therefore capable of accounting for many difficult patterns in commonsense reasoning. This work shows that connectionist models of reasoning are not just "implementations" of their symbolic counterparts, but better computational models of commonsense reasoning.

------------ FTP procedures ------------------------- (thanks to the service provided by Jordan Pollack)
ftp cheops.cis.ohio-state.edu
>name: anonymous
>password: neuron
>binary
>cd pub/neuroprose
>get sun.cogsci91.ps.Z
>quit
uncompress sun.cogsci91.ps.Z
lpr sun.cogsci91.ps

From schraudo at cs.UCSD.EDU Thu May 23 20:11:48 1991
From: schraudo at cs.UCSD.EDU (Nici Schraudolph)
Date: Thu, 23 May 91 17:11:48 PDT
Subject: hertz.refs.bib.Z -- key change
Message-ID: <9105240011.AA09866@beowulf.ucsd.edu>

I've added the prefix "HKP:" (for Hertz, Krogh & Palmer) to all citation keys in hertz.refs.bib.Z, and uploaded the new version to pub/neuroprose. The prefix prevents key clashes when several BibTeX files are searched together (as in "\bibliography{recent.work,hertz.refs,own.papers}").
--
Nicol N. Schraudolph, CSE Dept.  | work (619) 534-8187 | nici%cs at ucsd.edu
Univ. of California, San Diego   | FAX  (619) 534-7029 | nici%cs at ucsd.bitnet
La Jolla, CA 92093-0114, U.S.A.  | home (619) 273-5261 | ...!ucsd!cs!nici

From erol at ehei.ehei.fr Thu May 23 12:33:53 1991
From: erol at ehei.ehei.fr (Erol Gelenbe)
Date: Thu, 23 May 91 16:35:53 +2
Subject: Technical report on learning in recurrent networks
Message-ID: <9105240700.AA21235@corton.inria.fr>

You may obtain a hard copy of the following tech report by sending me e-mail:

Learning in the Recurrent Random Network
by Erol Gelenbe
EHEI, 45 rue des Saints-Peres, 75006 Paris

This paper describes an "exact" learning algorithm for the recurrent random network model (see E. Gelenbe in Neural Computation, Vol 2, No 2, 1990). The algorithm is based on the delta rule for updating the network weights. Computationally, each step requires the solution of n non-linear equations (solved in time Kn where K is a constant) and 2n linear equations for the derivatives. Thus it is of O(n**3) complexity, where n is the number of neurons.

From bap at james.psych.yale.edu Fri May 24 09:14:54 1991
From: bap at james.psych.yale.edu (Barak Pearlmutter)
Date: Fri, 24 May 91 09:14:54 -0400
Subject: Technical report on learning in recurrent networks
In-Reply-To: Erol Gelenbe's message of Thu, 23 May 91 16:35:53 +2 <9105240700.AA21235@corton.inria.fr>
Message-ID: <9105241314.AA26892@james.psych.yale.edu>

I would appreciate a copy. Thanks,
Barak Pearlmutter
Department of Psychology
P.O. Box 11A Yale Station
New Haven, CT 06520-7447

From jbarnden at NMSU.Edu Fri May 24 14:48:40 1991
From: jbarnden at NMSU.Edu (jbarnden@NMSU.Edu)
Date: Fri, 24 May 91 12:48:40 MDT
Subject: a book
Message-ID: <9105241848.AA13844@NMSU.Edu>

CONNECTIONIST BOOK ANNOUNCEMENT
===============================
Barnden, J.A. & Pollack, J.B. (Eds). (1991). Advances in Connectionist and Neural Computation Theory, Vol. 1: High Level Connectionist Models. Norwood, N.J.: Ablex Publishing Corp.
------------------------------------------------
ISBN 0-89391-687-0
Location index QA76.5.H4815 1990
389 pp. Extensive subject index.
Cost $34.50 for individuals and course adoption.
For more information: jbarnden at nmsu.edu, pollack at cis.ohio-state.edu
------------------------------------------------
MAIN CONTENTS: David Waltz Foreword John A. Barnden & Jordan B. Pollack Introduction: problems for high level connectionism David S.
Touretzky Connectionism and compositional semantics Michael G. Dyer Symbolic NeuroEngineering for natural language processing: a multilevel research approach. Lawrence Bookman & Richard Alterman Schema recognition for text understanding: an analog semantic feature approach Eugene Charniak & Eugene Santos A context-free connectionist parser which is not connectionist, but then it is not really context-free either Wendy G. Lehnert Symbolic/subsymbolic sentence analysis: exploiting the best of two worlds. James Hendler Developing hybrid symbolic/connectionist models John A. Barnden Encoding complex symbolic data structures with some unusual connectionist techniques Mark Derthick Finding a maximally plausible model of an inconsistent theory Lokendra Shastri The relevance of connectionism to AI: a representation and reasoning perspective Joachim Diederich Steps toward knowledge-intensive connectionist learning Garrison W. Cottrell & Fu-Sheng Tsung Learning simple arithmetic procedures. Jiawei Hong & Xiaonan Tan The similarity between connectionist and other parallel computation models Lawrence Birnbaum Complex features in planning and understanding: problems and opportunities for connectionism Jordan Pollack & John Barnden Conclusion From tap at ai.toronto.edu Fri May 24 16:48:42 1991 From: tap at ai.toronto.edu (Tony Plate) Date: Fri, 24 May 1991 16:48:42 -0400 Subject: techreport/preprint available Message-ID: <91May24.164846edt.785@neuron.ai.toronto.edu> ** Please do not forward to other newsgroups ** The following tech-report is available by ftp from the neuroprose archive at cheops.cis.ohio-state.edu. It is an expanded version of the paper "Holographic Reduced Representations: Convolution Algebra for Compositional Distributed Representations" which is to appear in the Proceedings of the 12th International Joint Conference on Artificial Intelligence (1991). Holographic Reduced Representations Tony Plate Department of Computer Science, University of Toronto Toronto, Ontario, Canada, M5S 1A4 tap at ai.utoronto.ca Technical Report CRG-TR-91-1 May 1991 Abstract A solution to the problem of representing compositional structure using distributed representations is described. The method uses circular convolution to associate items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, frames, and reduced representations can be compressed into a fixed width vector. These representations are items in their own right, and can be used in constructing compositional struc- tures. The noisy reconstructions given by convolution memories can be cleaned up by using a separate associative memory that has good reconstructive properties. Three appendices are attached. The first discusses some of the mathematical properties of convolution memories. The second gives a more intuitive explanation of convolution memories and explores the relationship between approximate and exact inverses to the convolution operation. The third contains examples of cal- culations of the capacities and recall probabilities for convolution memories. Here's what to do to get the file from neuroprose. unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get plate.hrr.ps.Z ftp> quit unix> uncompress plate.hrr.ps.Z unix> lpr plate.hrr.ps (or however you print postscript) If you are unable to get the file in this way, or have trouble printing it, mail me (tap at ai.utoronto.ca), and I can send a hardcopy. 
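For readers who want to try the binding operation numerically before fetching the report, a minimal sketch of circular convolution and its approximate inverse via the FFT follows. It illustrates the general idea only; the vector dimension, element distribution, and names are assumptions of this sketch, not code from the technical report.

import numpy as np

def cconv(a, b):
    # circular convolution computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # circular correlation: convolution with the involution (approximate inverse) of a
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

n = 512
rng = np.random.default_rng(1)
role, filler = rng.normal(0.0, 1.0 / np.sqrt(n), (2, n))   # i.i.d. N(0, 1/n) elements

trace = cconv(role, filler)        # bind role and filler into a single n-vector
noisy_filler = ccorr(role, trace)  # decode a noisy reconstruction of the filler

# cosine similarity of the reconstruction with the original filler
print(noisy_filler @ filler / (np.linalg.norm(noisy_filler) * np.linalg.norm(filler)))

In a full system the noisy reconstruction would then be cleaned up by the separate associative item memory mentioned in the abstract.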
---------------- Tony Plate ---------------------- tap at ai.utoronto.ca ----- Department of Computer Science, University of Toronto, 10 Kings College Road, Toronto, Ontario, CANADA M5S 1A4 ---------------------------------------------------------------------------- From nzt at research.att.com Sat May 25 09:50:38 1991 From: nzt at research.att.com (nzt@research.att.com) Date: Sat, 25 May 91 09:50:38 EDT Subject: Preprints on Statistical Mechanics of Learning Message-ID: <9105251350.AA16962@minos.att.com> The following preprints are available by ftp from the neuroprose archive at cheops.cis.ohio-state.edu. 1. Statistical Mechanics of Learning from Examples I: General Formulation and Annealed Approximation 2. Statistical Mechanics of Learning from Examples II: Quenched Theory and Unrealizable Rules by: Sebastian Seung, Haim Sompolinsky, and Naftali Tishby This is a two part detailed analytical and numerical study of learning curves in large neural networks, using techniques of equilibrium statistical mechanics. Abstract - Part I Learning from examples in feedforward neural networks is studied using equilibrium statistical mechanics. Two simple approximations to the exact quenched theory are presented: the high temperature limit and the annealed approximation. Within these approximations, we study four models of perceptron learning of realizable target rules. In each model, the target rule is perfectly realizable because it is another perceptron of identical architecture. We focus on the generalization curve, i.e. the average generalization error as a function of the number of examples. The case of continuously varying weights is considered first, for both linear and boolean output units. In these two models, learning is gradual, with generalization curves that asymptotically obey inverse power laws. Two other model perceptrons, with weights that are constrained to be discrete, exhibit sudden learning. For a linear output, there is a first-order transition occurring at low temperatures, from a state of poor generalization to a state of good generalization. Beyond the transition, the generalization curve decays exponentially to zero. For a boolean output, the first order transition is to perfect generalization at all temperatures. Monte Carlo simulations confirm that these approximate analytical results are quantitatively accurate at high temperatures and qualitatively correct at low temperatures. For unrealizable rules the annealed approximation breaks down in general, as we illustrate with a final model of a linear perceptron with unrealizable threshold. Finally, we propose a general classification of generalization curves in models of realizable rules. Abstract - Part II Learning from examples in feedforward neural networks is studied using the replica method. We focus on the generalization curve, which is defined as the average generalization error as a function of the number of examples. For smooth networks, i.e. those with continuously varying weights and smooth transfer functions, the generalization curve is found to asymptotically obey an inverse power law. This implies that generalization curves in smooth networks are generically gradual. In contrast, for discrete networks, discontinuous learning transitions can occur. We illustrate both gradual and discontinuous learning with four single-layer perceptron models. In each model, a perceptron is trained on a perfectly realizable target rule, i.e. a rule that is generated by another perceptron of identical architecture. 
The replica method yields results that are qualitatively similar to the approximate results derived in Part I for these models. We study another class of perceptron models, in which the target rule is unrealizable because it is generated by a perceptron of mismatched architecture. In this class of models, the quenched disorder inherent in the random sampling of the examples plays an important role, yielding generalization curves that differ from those predicted by the simple annealed approximation of Part I. In addition this disorder leads to the appearance of equilibrium spin glass phases, at least at low temperatures. Unrealizable rules also exhibit the phenomenon of overtraining, in which training at zero temperature produces inferior generalization to training at nonzero temperature. Here's what to do to get the files from neuroprose: unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get tishby.sst1.ps.Z ftp> get tishby.sst2.ps.Z ftp> quit unix> uncompress tishby.sst* unix> lpr tishby.sst* (or however you print postscript) Sebastian Seung Haim Sompolinsky Naftali Tishby ---------------------------------------------------------------------------- From schmidhu at kiss.informatik.tu-muenchen.de Mon May 27 05:22:10 1991 From: schmidhu at kiss.informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Mon, 27 May 1991 11:22:10 +0200 Subject: Technical report on learning in recurrent networks Message-ID: <9105270922.AA03776@kiss.informatik.tu-muenchen.de> Yes, send me your random recurrent network papers. Juergen Schmidhuber Institut fuer Informatik, Technische Universitaet Muenchen Arcisstr. 21 8000 Muenchen 2 GERMANY From hwang at pierce.ee.washington.edu Mon May 27 11:04:31 1991 From: hwang at pierce.ee.washington.edu ( J. N. Hwang) Date: Mon, 27 May 91 08:04:31 PDT Subject: deadline extension of IJCNN'91 Singapore Message-ID: <9105271504.AA09946@pierce.ee.washington.edu.> If you are inclined to waiting to the last minute to submit conference papers, you will be happy to learn that the deadline for submission of papers to the IJCNN'91 Singapore has been extended to June 30, 1991. IJCNN'91 Publicity Committee --------------------------------------------------------------------- IJCNN'91 SINGAPORE, CALL FOR PAPERS CONFERENCE: The IEEE Neural Network Council and the international neural network society (INNS) invite all persons interested in the field of Neural Networks to submit FULL PAPERS for possible presentation at the conference. FULL PAPERS: must be received by "June 30", 1991. All submissions will be acknowledged by mail. Authors should submit their work via Air Mail or Express Courier so as to ensure timely arrival. Papers will be reviewed by senior researchers in the field, and all papers accepted will be published in full in the conference proceedings. The conference hosts tutorials on Nov. 18 and tours arranged probably on Nov. 17 and Nov. 22, 1991. Conference sess- ions will be held from Nov. 19-21, 1991. Proposals for tutorial speakers & topics should be submitted to Professor Toshio Fukuda (address below) by Nov. 15, 1990. TOPICS OF INTEREST: original, basic and applied papers in all areas of neural networks & their applications are being solicited. 
FULL PAPERS may be submitted for consideration as oral or poster pres- entation in (but not limited to) the following sessions: -- Associative Memory -- Sensation & Perception -- Electrical Neurocomputers -- Sensormotor Control System -- Image Processing -- Supervised Learning -- Invertebrate Neural Networks -- Unsupervised Learning -- Machine Vision -- Neuro-Physiology -- Neurocognition -- Hybrid Systems (AI, Neural -- Neuro-Dynamics Networks, Fuzzy Systems) -- Optical Neurocomputers -- Mathematical Methods -- Optimization -- Applications -- Robotics AUTHORS' SCHEDULE: Deadline for submission of FULL PAPERS (camera ready) June 30, 1991 Notification of acceptance Aug. 31, 1991 SUBMISSION GUIDELINES: Eight copies (One original and seven photocopies) are required for submission. Do not fold or staple the original, camera ready copy. Papers of no more than 6 pages, including figures, tables and references, should be written in English and only complete papers will be considered. Papers must be submitted camera-ready on 8 1/2" x 11" white bond paper with 1" margins on all four sides. They should be prepared by typewriter or letter quality printer in one-column format, single-spaced or similar type of 10 points or larger and should be printed on one side of the paper only. FAX submissions are not acceptable. Centered at the top of the first page should be the complete title, author name(s), affiliation(s) and mailing address(es). This is followed by a blank space and then the abstract, up to 15 lines, followed by the text. In an accompanying letter, the following must be included: -- Corresponding author: -- Presentation preferred: Name Oral Mailing Address Poster Telephone & FAX number -- Technical Session: -- Presenter: 1st Choice Name 2nd Choice Mailing Address Telephone & FAX number FOR SUBMISSION FROM JAPAN, SEND TO: Professor Toshio Fukuda Programme Chairman IJCNN'91 SINGAPORE Dept. of Mechanical Engineering Nagoya University, Furo-cho, Chikusa-Ku Nagoya 464-01 Japan. (FAX: 81-52-781-9243) FOR SUBMISSION FROM USA, SEND TO: Ms Nomi Feldman Meeting Management 5565 Oberlin Drive, Suite 110 San Diego, CA 92121 (FAX: 81-52-781-9243) FOR SUBMISSION FROM REST OF THE WORLD, SEND TO: Dr. Teck-Seng, Low IJCNN'91 SINGAPORE Communication Intl Associates Pte Ltd 44/46 Tanjong Pagar Road Singapore 0208 (TEL: (65) 226-2838, FAX: (65) 226-2877, (65) 221-8916) From jbarnden at NMSU.Edu Tue May 28 11:41:45 1991 From: jbarnden at NMSU.Edu (jbarnden@NMSU.Edu) Date: Tue, 28 May 91 09:41:45 MDT Subject: ordering of announced book Message-ID: <9105281541.AA28578@NMSU.Edu> ADDENDUM TO A BOOK ANNOUNCEMENT =============================== Several people have asked about ordering a copy of a book I announced recently. This message includes publisher's address and ordering-department phone number. Barnden, J.A. & Pollack, J.B. (Eds). (1991). Advances in Connectionist and Neural Computation Theory, Vol. 1: High Level Connectionist Models. Norwood, N.J.: Ablex Publishing Corp. 355 Chestnut Street, Norwood, NJ 07648-2090 Order Dept.: (201) 767-8455 ------------------------------------------------ ISBN 0-89391-687-0 Location index QA76.5.H4815 1990 389 pp. Extensive subject index. Cost $34.50 for individuals and course adoption. 
For more information: jbarnden at nmsu.edu, pollack at cis.ohio-state.edu ------------------------------------------------ From gary at cs.UCSD.EDU Tue May 28 21:16:08 1991 From: gary at cs.UCSD.EDU (Gary Cottrell) Date: Tue, 28 May 91 18:16:08 PDT Subject: list mixup Message-ID: <9105290116.AA25934@desi.ucsd.edu> There was a problem here of an undergraduate inadvertently adding the connectionists mailing list to our "talks" mailing list. The way it works here, anyone can do this. I have remedied the problem. Aplogies from UCSD, and you should not see any more announcements of linguistics colloquia at UCSD! g. From white at teetot.acusd.edu Wed May 29 14:51:32 1991 From: white at teetot.acusd.edu (Ray White) Date: Wed, 29 May 91 11:51:32 -0700 Subject: No subject Message-ID: <9105291851.AA03981@teetot.acusd.edu> This notice is to announce a short paper which will be presented at IJCNN-91 Seattle. COMPETITIVE HEBBIAN LEARNING Ray H. White Departments of Physics and Computer Science University of San Diego Abstract Of crucial importance for applications of unsupervised learning to systems of many nodes with a common set of inputs is how the nodes may be trained to collectively develop optimal response to the input. In this paper Competitive Hebbian Learning, a modified Hebbian-learning rule, is introduced. In Competitive Hebbian Learning the change in each connection weight is made proportional to the product of node and input activities multiplied by a factor which decreases with increasing activity on the other nodes. The individual nodes learn to respond to different components of the input activity while collectively developing maximal response. Several applications of Competitive Hebbian Learning are then presented to show examples of the power and versatility of this learning algorithm. This paper has been placed in Jordan Pollack's neuroprose archive at Ohio State, and may be retrieved by anonymous ftp. The title of the file there is white.comp-hebb.ps.Z and it may be retrieved by the usual procedure: local> ftp cheops.cis.ohio-state.edu (or ftp 128.146.8.62) Name(128.146.8.62:xxx) anonymous password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get white.comp-hebb.ps.Z ftp> quit local> uncompress white.comp-hebb.ps.Z local> lpr -P(your_local_postscript_printer) white.comp-hebb.ps Ray White (white at teetot.acusd.edu or white at cogsci.ucsd.edu) From CORTEX at buenga.bu.edu Thu May 30 13:01:00 1991 From: CORTEX at buenga.bu.edu (CORTEX@buenga.bu.edu) Date: Thu, 30 May 1991 13:01 EDT Subject: ANNOUNCEMENT: NEW BOOK Message-ID: <06F43BB160200576@buenga.bu.edu> ____________________________________________ COMPUTATIONAL-NEUROSCIENCE BOOK ANNOUNCEMENT -------------------------------------------- FROM THE RETINA TO THE NEOCORTEX SELECTED PAPERS OF DAVID MARR Edited by Lucia Vaina (1991) Distributer: Birkhauser Boston ________________________________________________________________________ ISBN 0-8176-3472-X ISBN 3-7643-3472-X Cost: $49 For more information: DORAN at SPINT.Compuserve.COM To order the book: call George Adelman at Birkhauser: (617) 876-2333. ________________________________________________________________________ MAIN CONTENTS: the book contains papers by David Marr, which are placed in the framework of current computational neuroscience by leaders in each of the subfields represented by these papers. (1) Early Papers 1. A Theory of Cerebellar Cortex [1969] with commentary by Thomas Thach 2. How the Cerebellum May be Used (with S. Blomfield) [1970] with commentary by Jack D. 
Cowan 3. Simple Memory: A Theory of Archicortex [1971] with commentaries by David Willshaw & Bruce McNaughton 4. A Theory of Cerebral Neocortex [1970] with commentary by Jack D. Cowan 5. A Computation of Lightness by the Primate Retina [1974] with commentary by Norberto M. Grzywacz (2) Binocular Depth Perception 6. A Note on the Computation of Binocular Disparity in a Symbolic, Low-Level Visual Processor [1974] 7. Cooperative Computation of Stereo Disparity (with T.Poggio) [1976] 8. Analysis of a Cooperative Stereo Algorithm (with G.Palm, T.Poggio) [1978] 9. A Computational Theory of Human Stereo Vision (with T. Poggio) [1979] with commentary on Binocular Depth Perception by Ellen C. Hildreth and W. Eric L. Grimson (3) David Marr: A Pioneer in Computational Neuroscience by Terrence J. Sejnowski (4) Epilogue: Remembering David Marr by former students and colleagues: Peter Rado, Tony Pay, G.S. Brindley, Benjamin Kaminer, Francis H. Crick, Whitman Richards, Tommy Poggio, Shimon Ullman, Ellen Hildreth. From gk at thp.Uni-Koeln.DE Fri May 31 07:09:38 1991 From: gk at thp.Uni-Koeln.DE (gk@thp.Uni-Koeln.DE) Date: Fri, 31 May 91 13:09:38 +0200 Subject: No subject Message-ID: <9105311109.AA17022@sun0.thp.Uni-Koeln.DE> ************* Please DO NOT post to other mailing list ****************** ************* Please DO NOT post to other mailing list ****************** Neural Network Simulations on the European Teraflop Computer Dear Colleagues, Scientist in Europe are actively seeking to build a Teraflop computer, i.e., a computer which can perform a million million floating point operations per second. In the USA there is a similar effort, however, the use of that machine is to be limited to the small group of physicist working on problems in Lattice Gauge Theory. The Steering Committee of the European Teraflop Initiative has decided to take a much broader view of scientific computing, and at its Geneva meeting of May 16 asked for proposals from other scientific fields. In particular, they asked if the group of researchers working under the broad heading of "neural networks" would be interested in using such a machine. Now, the current time scale for building this machine with the speed and memory of a 1000 Crays is 3-5 years, however, the architecture, which is not yet decided upon, will be influenced by the preliminary proposals put forward in the next few months. And so my question is, do we need Teraflop Computers for ``neural network '' research? In particular: 1) Who wants later (if at all) do simulations on such a machine? 2) Who wants to cooperate in the planning of the simulations ? 3) Which models and problems should be selected (biological systems or applications oriented problems)? 4) What types of algorithms would such simulations require? 5) Should we drop this plan because of competition in the same simulations from: (i) special purpose computers, (ii) ``secret'' industrial research, ...? Presumably the project would require a long planning period, heavy competition from other fields of research, and the willingness to program in a language fitted to the computer and not to us. Please send me your comments on these questions, preferably by e-mail, if you are interested. Please pass this announcement on to other researchers. 
Yours sincerely,
Gregory Kohring
Institute for Theoretical Physics, Cologne University, D-5000 Koln 41, Germany
fax +49 221 470 5159
e-mail: gk at thp.uni-koeln.de

************* Please DO NOT post to other mailing list ******************
************* Please DO NOT post to other mailing list ******************

From DUDZIAKM at isnet.inmos.COM Fri May 31 16:51:03 1991
From: DUDZIAKM at isnet.inmos.COM (MJD / NEURAL NETWORK R&D / SGS-THOMSON MICROELECTRONICS USA)
Date: Fri, 31 May 91 14:51:03 MDT
Subject: Posting FYI - I have tested this out and it is quite robust
Message-ID: <29292.9105312051@inmos-c.inmos.com>

PRESS RELEASE

AND CORPORATION INTRODUCES FIRST HOLOGRAPHICALLY BASED NEUROCOMPUTING SYSTEM

AND CORPORATION
4 Hughson St. Suite 305
Hamilton, Ontario
Canada L8N 3Z1
phone (416) 570 0525
fax (416) 570 0498

AND Corporation based in Canada has developed a new technology related to the current field of artificial neural systems. This new field is referred to as holographic neural technology. The operational basis stems from holographic principles in the superposition or "enfolding" of information by convolution of complex vectors. An analogous, albeit far more limited, process occurs within interacting electromagnetic fields in the generation of optical holograms. Information as dealt with in the neural system represents analog stimulus-response patterns, and the holographic process permits one to superimpose or enfold very large numbers of such patterns or analog mappings onto a single neuron cell.

Analog stimulus-response associations are learned in one non-iterative transformation on exposing the neuron cell to a stimulus and desired response data field. Similarly, decoding or expression of a response is performed in one non-iterative transformation. The process exhibits the property of non-disturbance whereby previously learned mappings are minimally corrupted or influenced by subsequent learning.

During learning of associations, the holographic neural process generates a true deterministic mapping of analog stimulus input to desired analog response. Large sets of such stimulus-response mappings are enfolded onto the same correlation set (array of complex vectors) and may be controlled in such a manner that these mappings have modifiable properties of generalization. Specifically, this generalization characteristic refers to a stimulus-response mapping in which input states circumscribed within a modifiable region of the stimulus locus will accurately regenerate the associated analog response. In addition, neuron cells may be configured to exhibit a range of dynamic memory profiles extending from short- to long-term memory. This feature applies variable decay within the correlation sets whereby encoded stimulus-response mappings may be attenuated at controlled rates. A single neuron cell employing the holographic principle displays vastly increased capabilities over much larger ANS networks employing standard gradient descent methods.

AND Corporation has constructed an applications development system based on this new technology which it has called the HNeT system (for Holographic NeTwork). This system executes on an INMOS transputer hardware platform. The HNeT development system permits the user to design an entire neural configuration which remains resident and executes within an INMOS transputer based co-processing board.
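To make the idea of enfolding stimulus-response mappings by superposition of complex vectors concrete, the following is a purely conceptual outside sketch: many analog associations are stored on one correlation vector and recalled in a single pass. The phase mapping, sizes, and names are assumptions of this illustration; it is not AND Corporation's proprietary HNeT implementation.

import numpy as np

def to_phase(x):
    # map analog values in [0, 1] onto unit-magnitude complex phasors
    return np.exp(2j * np.pi * x)

rng = np.random.default_rng(2)
n_pairs, n_stim = 100, 2048
S = to_phase(rng.random((n_pairs, n_stim)))   # stimulus fields as phasors
r = to_phase(rng.random(n_pairs))             # one analog response per stimulus

X = S.conj().T @ r                            # enfold all pairs onto one correlation vector

recall = (S @ X) / n_stim                     # single-pass decode for every stored stimulus
phase_err = np.angle(recall * r.conj())       # deviation from each encoded response phase
print(np.mean(np.abs(phase_err)))             # stays modest despite the superposition

The release's references to higher-order statistics and dynamic memory profiles correspond to refinements (higher-order product terms, controlled decay of the correlation set) that this sketch deliberately omits.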
The neural engine executes concurrently with the host resident application program, the host processor providing essentially I/O and operator interface services. Internally the neural engine is structured by the user as a configuration of cells having data field flow paths established between these cells. Data fields within the holographic system are structured as matrices of complex values representing analog stimulus and response data fields. The manner in which data sets, normally expressed as real-numbered values in the external domain, are mapped to complex data fields within the neural engine is not of critical importance to the neural system designer, as data transfer functions provided within the HNeT development system perform these data field conversions automatically.

An extensive set of 'C' neural library routines provides the user flexibility in configuring up to 16K cells per transputer and 64K synaptic inputs per cell (memory limited). Functions for configuring these cells within the neural engine are provided within the HNeT library, and may be grouped into the following general categories:

Neural cells - These cell types form the principal operational component within the neural engine, employing the holographic neural process for single-pass encoding (learning) and decoding (expression) of analog stimulus-response associations. These cells generate the correlation sets or matrices which store the enfolded stimulus-response mappings.

Operator cells - Cells may also be configured within the neural engine to perform a wide variety of complex vector transform operations and data manipulation over data fields. These cells principally perform preprocessing operations on data fields fed into neural cells.

Input cells - These cells operate essentially as buffers within the neural engine for data transferred between the host application program and the transputer resident neural configuration. This category of cells may also be used to facilitate recurrent data flow structures within the neural engine.

In configuring the neural engine the user has the flexibility of constructing a wide range of cell types within any definable configuration, and of specifying any data flow path between these diverse cell types. Configuration of the neural engine is simple and straightforward using the programming convention provided for the HNeT neural development system. For instance, a powerful configuration comprised of two input cells and one neural cell (cortex), capable of encoding large numbers of analog stimulus-response mappings (potentially >> 64K mappings), can be configured using three function calls:

stim = receptor(256,255);
des_resp = buffer(1,1);
output = cortex(des_resp, stim, ENDLIST);

The above 'C' application code configures one cortex cell type within the neural engine receiving a stimulus field of 256 by 255 elements (stim), and returns a label to the output data field containing the generated response value (output) for that cell. This configuration may be set into operation using an execute instruction to perform either decoding only, or both decoding and encoding functions concurrently. The cortex cell has two input data fields defined within its function variable list: the stimulus field (stim) and the desired response data field (des_resp). On encoding the stimulus-to-desired-response association, an analog mapping is generated employing holographic neural principles, and this mapping is enfolded onto the cortex cell's correlation set.
On a decoding cycle, the cortex cell generates the response from a stimulus field transformed through its correlation set, returning a label to that data field (output). An entire neural configuration may be constructed using the above convention of function calls to configure the various cells and to establish data flow paths via the labels returned by the configuration functions. The user's application program principally performs I/O of stimulus-response data fields to the neural engine and establishes control over execution cycles for both learning and expression (response recall) operations. The transputer resident neural engine independently performs all the transform operations and data field transfers for the established neural configuration. The host IBM resources may be fully allocated to ancillary tasks such as peripheral and console interface, retrieval/storage of data, etc. This format allows maximum concurrency of operation.

For the control engineer, the holographic neural system provides a new and powerful process whereby input states may be deterministically mapped to control output states over the extent of the control input or state space domain. The mapping of these analog control states is generated simply by training the neural engine. Owing to the property of non-disturbance exhibited by the holographic process, the neural configuration may be constructed to learn large sets of spatio-temporal patterns useful in robotics applications. The neural system designer may explicitly control and modify the generalization characteristics and resolution of this mapping space through design and modification of the higher order statistics used within the system. In other words, stimulus-response control states are mapped out within the cell, allowing the user to explicitly define the higher order characteristics or generalization properties of the neural system. Control states are encoded simply by presenting the neural engine with the suite of analog stimulus-response control actions to be learned.

This technology is patent pending in North and South America, Europe, Britain, Asia, Australia and the USSR. The HNeT applications development system is commercially available for single transputer platforms at a base price of $7,450.00 US. For further information call or write to AND Corporation, address given above.