From ftlee at suna0.cs.uiuc.edu Sun Apr 1 17:21:10 1990
From: ftlee at suna0.cs.uiuc.edu (ftlee@suna0.cs.uiuc.edu)
Date: Sun, 1 Apr 90 16:21:10 -0500
Subject: New Subscriber
Message-ID: <9004012121.AA01294@sunb7.cs.uiuc.edu>

Area of research is Passive Sonar Detection and Classification using Neural Networks - research is proprietary and classified.

Don/Lee

From marek at iuvax.cs.indiana.edu Sun Apr 1 23:22:03 1990
From: marek at iuvax.cs.indiana.edu (Marek Lugowski)
Date: Sun, 1 Apr 90 22:22:03 -0500
Subject: New Subscriber
Message-ID:

}Date: Sun, 1 Apr 90 16:21:10 -0500
}From: ftlee at suna0.cs.uiuc.edu
}Message-Id: <9004012121.AA01294 at sunb7.cs.uiuc.edu>
}To: connectionists at CS.CMU.EDU
}Subject: New Subscriber
}Cc: ftlee at cs.uiuc.edu
}
}Area of research is Passive Sonar Detection and Classification
}using Neural Networks - research is proprietary and classified.
}
}Don/Lee

Perhaps you should consider unsubscribing. This is an international list with no classified traffic or proprietary content. This list was created back in 1986 with the idea of sharing ideas. If you must do otherwise, this is of no interest to me as a connectionist. Do you do anything that is of interest?

-- Marek Lugowski (connectionist summer school '86)

From Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU Mon Apr 2 07:39:39 1990
From: Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU (Scott.Fahlman@SEF1.SLISP.CS.CMU.EDU)
Date: Mon, 02 Apr 90 07:39:39 EDT
Subject: Classified research
Message-ID:

    }Area of research is Passive Sonar Detection and Classification
    }using Neural Networks - research is proprietary and classified.

    Perhaps you should consider unsubscribing. This is an international
    list with no classified traffic or proprietary content. This list was
    created back in 1986 with the idea of sharing ideas. If you must do
    otherwise, this is of no interest to me as a connectionist. Do you do
    anything that is of interest? -- Marek Lugowski

I disagree with Marek Lugowski. Obviously, this list is not an appropriate forum for discussing research and ideas that are classified or proprietary -- it goes all over the world to all sorts of people. We would prefer that subscribers to this list share as many of their ideas as possible with the rest of us, but I do not think that we want to say that a legitimate researcher is unwelcome to participate in this group just because some portion of his or her ideas is not going to be shared with the rest of us immediately, or because he happens to work on classified problems. If we put in such a restriction, we would lose a large fraction of our participants.

There are a lot of people out there who read these messages but have never contributed to these discussions. Perhaps their work is proprietary, perhaps they want to publish their ideas initially in a journal or some other forum for which they get "credit", or perhaps they don't yet have anything to say. That's OK -- in fact, the current setup would not work if everyone felt compelled to "share" something, whether or not he had anything to say. When someone *does* have something he wants to say, this forum provides a medium by which he can address a large community of interested, legitimate researchers.

If your objection is to having any contact with military-sponsored research, perhaps *you* had better consider unsubscribing. The machines and networks upon which the roots of this list reside are paid for mostly by the U.S. Department of Defense. It is unfortunately the case, as of today, that most computer science research in the U.S.
-- not counting proprietary research within companies -- is paid for through DoD in one way or another. Some of us hope that will change, but it won't change overnight. In the meantime, it would be quite hypocritical for us to suggest that people doing classified research are not welcome even to listen to what goes on here.

-- Scott Fahlman

From MRE1%VMS.BRIGHTON.AC.UK at VMA.CC.CMU.EDU Mon Apr 2 14:38:00 1990
From: MRE1%VMS.BRIGHTON.AC.UK at VMA.CC.CMU.EDU (MRE1%VMS.BRIGHTON.AC.UK@VMA.CC.CMU.EDU)
Date: Mon, 2 Apr 90 14:38 BST
Subject: No subject
Message-ID:

I am writing to enquire about the neural network mailing list. Have I got the correct address?

From fritz_dg%ncsd.dnet at gte.com Mon Apr 2 15:17:25 1990
From: fritz_dg%ncsd.dnet at gte.com (fritz_dg%ncsd.dnet@gte.com)
Date: Mon, 2 Apr 90 15:17:25 -0400
Subject: literature
Message-ID: <9004021917.AA04100@bunny.gte.com>

Has anyone a good handle on the literature covering implementable connectionist models for invertebrate sensory-motor circuits? -- especially papers with details on models that have been made to work, not run-on streams of consciousness.

From marek at iuvax.cs.indiana.edu Mon Apr 2 11:26:50 1990
From: marek at iuvax.cs.indiana.edu (Marek Lugowski)
Date: Mon, 2 Apr 90 10:26:50 -0500
Subject: Classified research
Message-ID:

Scott, I think you could have read my message differently. My objection was to the absence of anything to share. I should perhaps have made this clear in 4 pages of submission, but (perhaps mistakenly) did not wish to take up bandwidth. Normally, when people introduce themselves on the list (if they choose to do so), they write about what is of interest to the list.

-- Marek

P.s. Do you still disagree with me?

From Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU Mon Apr 2 19:56:58 1990
From: Scott.Fahlman at SEF1.SLISP.CS.CMU.EDU (Scott.Fahlman@SEF1.SLISP.CS.CMU.EDU)
Date: Mon, 02 Apr 90 19:56:58 EDT
Subject: Classified research
In-Reply-To: Your message of Mon, 02 Apr 90 10:26:50 -0500.
Message-ID:

    My objection was to the absence of anything to share... Normally when
    people introduce themselves on the list (if they choose to do so) they
    write about what is of interest to the list.

Most people don't "introduce" themselves on the list, and I think that's probably a good thing. It would create a lot of traffic that is not of general interest. I thought that the message in question was sent to the whole list by accident.

I think there are a lot of people on this list in listen-only mode. I've got no objection to that -- it doesn't cost much and it serves a useful educational function.

-- Scott

From Michael.Witbrock at MJW.BOLTZ.CS.CMU.EDU Tue Apr 3 12:05:56 1990
From: Michael.Witbrock at MJW.BOLTZ.CS.CMU.EDU (Michael.Witbrock@MJW.BOLTZ.CS.CMU.EDU)
Date: Tue, 3 Apr 90 12:05:56 EDT
Subject: Talk of people unsubscribing.
Message-ID:

I used to be the maintainer of connectionists. When people ask to subscribe to it, they are asked to tell connectionists-request what they work on (to maintain connectionists as a group of people actually working in the field). I am fairly sure that the message that sparked this discussion was such a message, sent to connectionists instead of connectionists-request by mistake.

michael

From sayegh at ed.ecn.purdue.edu Tue Apr 3 18:00:25 1990
From: sayegh at ed.ecn.purdue.edu (Samir Sayegh)
Date: Tue, 3 Apr 90 17:00:25 -0500
Subject: List of Speakers 3rd Conf.
  NN & PDP Indiana-Purdue University
Message-ID: <9004032200.AA08806@ed.ecn.purdue.edu>

LIST OF SPEAKERS AND THEIR TOPICS
THIRD CONFERENCE ON NEURAL NETWORKS AND PARALLEL DISTRIBUTED PROCESSING
INDIANA-PURDUE UNIVERSITY

Thursday, April 12, 6-9:00 p.m.

INTEGRATED AUTONOMOUS NAVIGATION BY ADAPTIVE NEURAL NETWORKS
D.A. Pomerleau, Department of Computer Science, Carnegie Mellon University

APPLYING A HOPFIELD-STYLE NETWORK TO DEGRADED PRINTED TEXT RESTORATION
A. Jagota, Department of Computer Science, State University of New York-Buffalo

RECENT STUDIES WITH PARALLEL SELF-ORGANIZING HIERARCHICAL NEURAL NETWORKS
O.K. Ersoy and D. Hong, School of Electrical Engineering, Purdue University

INEQUALITIES, PERCEPTRONS AND ROBOTIC PATH PLANNING
S.I. Sayegh, Department of Physics, Indiana-Purdue University

GENETIC ALGORITHMS FOR FEATURE SELECTION FOR COUNTERPROPAGATION NETWORKS
F.Z. Brill and W.N. Martin, Department of Computer Science, University of Virginia

Friday, April 13, 6-9:00 p.m.

MULTI-SCALE VISION-BASED NAVIGATION ON DISTRIBUTED-MEMORY MIMD COMPUTERS
A.W. Ho and G.C. Fox, Caltech Concurrent Computation Program, California Institute of Technology

A NEURAL NETWORK WHICH ENABLES SPECIFICATION OF PRODUCTION RULES
N. Liu and K.J. Cios, The University of Toledo

PIECE-WISE LINEAR ESTIMATION OF MECHANICAL PROPERTIES OF MATERIALS WITH NEURAL NETWORKS
I.H. Shin, K.J. Cios, A. Vary and H.E. Kautz, The University of Toledo & NASA Lewis Research Center

MULTIPLE SENSOR INTEGRATION VIA NEURAL NETWORKS FOR ESTIMATING SURFACE ROUGHNESS AND BORE TOLERANCE IN CIRCULAR END MILLING - TIME DOMAIN
A.C. Okafor, M. Marcus and R. Tipirneni, Department of Mechanical & Aerospace Engineering & Engineering Mechanics, University of Missouri-Rolla

MULTIPLE SENSOR INTEGRATION VIA NEURAL NETWORKS FOR ESTIMATING SURFACE ROUGHNESS AND BORE TOLERANCE IN CIRCULAR END MILLING - FREQUENCY DOMAIN
A.C. Okafor, M. Marcus and R. Tipirneni, Department of Mechanical and Aerospace Engineering and Engineering Mechanics, University of Missouri-Rolla

Saturday, April 14, 9:00 a.m.-1:00 p.m.

SIMULATION OF A CORTICAL MODEL FOR THE ADULT CAT
E. Niebur and F. Worgotter, California Institute of Technology

LEARNING BY GRADIENT DESCENT IN FUNCTION SPACE
G. Mani, Department of Computer Science, University of Wisconsin-Madison

REAL TIME DYNAMIC RECOGNITION OF SPATIAL TEMPORAL PATTERNS
M.F. Tenorio, School of Electrical Engineering, Purdue University

A NEURAL ARCHITECTURE FOR COGNITIVE MAPS
M. Sonntag, Cognitive Science & Machine Intelligence Lab, University of Michigan

SUCCESSIVE REFINEMENT OF THE INTERNAL REPRESENTATIONS OF THE ENVIRONMENT IN CONNECTIONIST NETWORKS
Vasant Honavar and Leonard Uhr, Department of Computer Sciences, University of Wisconsin-Madison

For more information:
e-mail: sayegh at ed.ecn.purdue.edu  or  sayegh at ipfwcvax.bitnet
Voice: (219) 481-6157
FAX: (219) 481-6880

From marvit at hplpm.hpl.hp.com Wed Apr 4 14:52:41 1990
From: marvit at hplpm.hpl.hp.com (Peter Marvit)
Date: Wed, 04 Apr 90 11:52:41 PDT
Subject: literature
In-Reply-To: Your message of "Mon, 02 Apr 90 15:17:25 PDT." <9004021917.AA04100@bunny.gte.com>
Message-ID: <5121.639255161@hplpm.hpl.hp.com>

Although I do not "have a good handle" on connectionist models of invertebrate neural circuits, I assume the field is significant enough to warrant the entire paper session on that very subject at IJCNN this summer. Anyone on this list intending to present relevant material?
-Peter "Spineless" Marvit

From BOVET%FRMOP11.BITNET at VMA.CC.CMU.EDU Thu Apr 5 05:14:32 1990
From: BOVET%FRMOP11.BITNET at VMA.CC.CMU.EDU (BOVET JAMON BENHAMOU OTTOMANI)
Date: Thu, 05 Apr 90 09:14:32 GMT
Subject: INVERTEBRATE SENSORI-MOTOR CIRCUITS
Message-ID:

Like Fritz_dg, I am very interested in exchanging literature references on connectionist models for invertebrate sensory-motor circuits. But the first reference that immediately comes to mind is a preprint on the lamprey (unfortunately a vertebrate):

S. Grillner, A. Lansner, P. Wallen, O. Ekeberg, L. Brodin, H. Traven, & M. Stensmo, "The neural network underlying locomotion. Initiation, segmental burst generation and sensory entrainment, analyzed by simulation."

P. BOVET, LABO. NEUROSCIENCES, CNRS, MARSEILLE, FRANCE.

From mcgrawg at iuvax.cs.indiana.edu Thu Apr 5 15:50:17 1990
From: mcgrawg at iuvax.cs.indiana.edu (Gary McGraw)
Date: Thu, 5 Apr 90 14:50:17 -0500
Subject: Request for references.
Message-ID:

My current research involves training recurrent networks of various architectures to do a temporal pattern recognition task. I have attempted to train both fully and partially recurrent networks (using different learning rules) and am now interested in analyzing their behavior, trainability, etc. Does anyone know of any papers that compare two or more recurrent architectures' behavior given some common task? I am looking for papers similar to Cottrell and Tsung's "Learning Simple Arithmetic Procedures" from the proceedings of the 11th annual cognitive science conference. Thanks for your help.

Gary McGraw
Center for Research on Concepts and Cognition
Department of Computer Science
Indiana University

From ersoy at ee.ecn.purdue.edu Fri Apr 6 10:59:51 1990
From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy)
Date: Fri, 6 Apr 90 09:59:51 -0500
Subject: No subject
Message-ID: <9004061459.AA25694@ee.ecn.purdue.edu>

CALL FOR PAPERS AND REFEREES
HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24
NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES
KAILUA-KONA, HAWAII - JANUARY 9-11, 1991

The Neural Networks Track of HICSS-24 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications. Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings, which are published by the Computer Society of the IEEE. HICSS-24 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM).

Submissions are solicited in:
Supervised and Unsupervised Learning
Issues of Complexity and Scaling
Associative Memory
Self-Organization
Architectures
Optical, Electronic and Other Novel Implementations
Optimization
Signal/Image Processing and Understanding
Novel Applications

INSTRUCTIONS FOR SUBMITTING PAPERS

Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process.
Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s) and a 300-word abstract of the paper.

DEADLINES
Six copies of the manuscript are due by June 25, 1990.
Notification of accepted papers by September 1, 1990.
Accepted manuscripts, camera-ready, are due by October 3, 1990.

SEND SUBMISSIONS AND QUESTIONS TO
O. K. Ersoy
Purdue University
School of Electrical Engineering
W. Lafayette, IN 47907
(317) 494-6162

From russ at dash.mitre.org Fri Apr 6 09:44:36 1990
From: russ at dash.mitre.org (Russell Leighton)
Date: Fri, 6 Apr 90 09:44:36 EDT
Subject: Nettalk phonemes => WaveForms
Message-ID: <9004061344.AA02562@dash.mitre.org>

I have recently replicated the Nettalk experiment (using data from nnbench). Now I would like to play out the phonemes. Does anyone have any publicly available software to map phonemes to waveforms? Although the particular format of the waveforms is not that important, the ideal software would translate the phoneme tokens used in the Nettalk paper to waveforms in the format used in the sound files on a Sun SparcStation1. If no one has such software now, I think it might be useful to develop it for the community at large, since it would allow playback at your workstation.

Russ

NSFNET: russ at dash.mitre.org
Russell Leighton
MITRE Signal Processing Lab
7525 Colshire Dr.
McLean, Va. 22102 USA

From Connectionists-Request at CS.CMU.EDU Fri Apr 6 12:47:15 1990
From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU)
Date: Fri, 06 Apr 90 12:47:15 EDT
Subject: Fwd: Neural networks and transputers mail list
Message-ID: <20531.639420435@B.GP.CS.CMU.EDU>

I don't remember seeing this on the main list. I apologize if this is a duplicate post. Contact neurtran at isnet.inmos.com if you have any questions.

Scott Crowder
Connectionists-Request at cs.cmu.edu (ARPAnet)

------- Forwarded Message

From DUDZIAKM at isnet.inmos.COM Wed Apr 4 16:28:36 1990
From: DUDZIAKM at isnet.inmos.COM (Neurotechnology Center - Martin Dudziak)
Date: Wed, 4 Apr 90 14:28:36 MDT
Subject: Neural networks and transputers mail list
Message-ID: <178.9004042028@inmos-c.inmos.com>

Excuse me - I don't remember if I sent any information to you earlier, but in any case: there is a new mail list dedicated to issues of implementing neural networks using transputers and transputer-based hardware environments (i.e., specialized neural processors that act as co-processors with transputers, transputers and DSP chips like the A110, A121, etc.). This mail list is accessible as NEURTRAN at ISNET.INMOS.COM. Some earlier announcement(s) may have listed it as neurtran-request, but due to some site problems, that long a name won't work, so anyone who is interested in subscribing, getting info, making contributions, etc. should just communicate to neurtran at isnet.inmos.com.

Martin Dudziak, Moderator

------- End of Forwarded Message

From mm at cogsci.indiana.edu Fri Apr 6 17:26:02 1990
From: mm at cogsci.indiana.edu (Melanie Mitchell)
Date: Fri, 6 Apr 90 16:26:02 EST
Subject: Technical Report Available
Message-ID:

The following technical report is available from the Center for Research on Concepts and Cognition at Indiana University:

The Right Concept at the Right Time: How Concepts Emerge as Relevant in Response to Context-Dependent Pressures (CRCC Report 42)

Melanie Mitchell and Douglas R.
Hofstadter
Center for Research on Concepts and Cognition
Indiana University

Abstract

A central question about cognition is how, when faced with a situation, one explores possible ways of understanding and responding to it. In particular, how do concepts initially considered to be irrelevant, or not even considered at all, become relevant in response to pressures evoked by the understanding process itself? We describe a model of concepts and high-level perception in which concepts consist of a central region surrounded by a dynamic, nondeterministic "halo" of potential associations, in which relevance and degree of association change as processing proceeds. As the representation of a situation is constructed, associations arise and are considered in a probabilistic fashion according to a "parallel terraced scan", in which many routes toward understanding the situation are tested in parallel, each at a rate and to a depth reflecting ongoing evaluations of its promise. We describe Copycat, a computer program that implements this model in the context of analogy-making, and illustrate how the program's ability to flexibly bring in appropriate concepts for a given situation emerges from the mechanisms that we are proposing.

(This paper has been submitted to the 1990 Cognitive Science Society conference.)

To request copies of this report, send mail to mm at cogsci.indiana.edu or mm at iuvax.cs.indiana.edu, or write to:

Melanie Mitchell
Center For Research on Concepts and Cognition
Indiana University
510 N. Fess Street
Bloomington, Indiana 47408

From Connectionists-Request at CS.CMU.EDU Sat Apr 7 10:15:39 1990
From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU)
Date: Sat, 07 Apr 90 10:15:39 EDT
Subject: Fwd: please post
Message-ID: <2982.639497739@B.GP.CS.CMU.EDU>

------- Forwarded Message

From shriver at usl.edu Sat Apr 7 08:50:16 1990
From: shriver at usl.edu (Shriver Bruce D)
Date: Sat, 7 Apr 90 07:50:16 CDT
Subject: please post
Message-ID: <9004071250.AA26933@rouge.usl.edu>

Could you please post the following? Thank you, Bruce Shriver

===============================================================
This note is being separately posted on the following bulletin boards:
  connectionists
  neuron-digest
  neurtran
Please recommend other bulletin boards that you think are also appropriate.
===============================================================

I am interested in learning what experiences people have had using neural network chips. In an article that Colin Johnson did for PC AI's January/February 1990 issue, he listed the information given below about a number of NN chips (I've rearranged it in alphabetical order by company name). This list is undoubtedly incomplete (no efforts at universities and industrial research laboratories are listed, for example) and may have inaccuracies in it. Such a list would be more useful if it contained the name, address, phone number, FAX number, and electronic mail address of a contact person at each company. Information about the hardware and software support (interface and coprocessor boards, prototype development kits, simulators, development software, etc.) is missing. Additionally, pointers to researchers who are planning to or have actually been using these or similar chips would be extremely useful. I am interested in finding out the range of intended applications.
Could you please send me:
a) updates and corrections to the list
b) company contact information
c) hardware and software support information
d) information about plans to use or experiences with having used any of these chips (or chips that are not listed)

In a few weeks, if I get a sufficient response, I will resubmit an enhanced listing of this information to the bulletin boards to which I originally sent this note.

Thanks,
Bruce Shriver (shriver at usl.edu)

=================================================================
Company: Accotech
Chip Name: AK107
Description: an Intel 8051 digital microprocessor with its on-chip ROM coded for neural networks
Availability: available now

Company: Fujitsu Ltd.
Chip Name: MB4442
Description: one neuron chip capable of 70,000 connections per second
Availability: available in Japan now

Company: Hitachi Ltd.
Chip Name: none yet
Description: information encoded in pulse trains
Availability: experimental

Company: HNC Inc.
Chip Name: HNC-100X
Description: 100 million connections per second
Availability: Army battlefield computer

Company: HNC
Chip Name: HNC-200X
Description: 2.5 billion connections per second
Availability: Defense Advanced Research Projects Agency (DARPA) contract

Company: Intel Corp.
Chip Name: N64
Description: 2.5 connections per second 64-by-64-by-64 with 10,000 synapses
Availability: available now

Company: Micro Devices
Chip Name: MD1210
Description: fuzzy logic combined with neural networks in its fuzzy comparator chip
Availability: available now

Company: Motorola Inc.
Chip Name: none yet
Description: "whole brain" chip models senses, reflex, instinct - the "old brain"
Availability: late in 1990

Company: NASA, Jet Propulsion Laboratory (JPL)
Chip Name: none yet
Description: synapse is charge on capacitors that are refreshed from RAM
Availability: experimental

Company: NEC Corp.
Chip Name: uPD7281
Description: a data-flow chip set that NEC sells on a PC board with neural software
Availability: available in Japan

Company: Nestor Inc.
Chip Name: NNC
Description: 150 million connections per second, 150,000 connections
Availability: Defense Dept. contract due in 1991

Company: Nippon Telephone and Telegraph (NTT)
Chip Name: none yet
Description: massive array of 65,536 one-bit processors on 1024 chips
Availability: experimental

Company: Science Applications International Corp.
Chip Name: none yet
Description: information encoded in pulse trains
Availability: Defense Advanced Research Projects Agency (DARPA) contract

Company: Syntonic Systems Inc.
Chip Name: Dendros-1, Dendros-2
Description: each has 22 synapses; two are required, but any number can be used
Availability: available now

------- End of Forwarded Message

From sankar at caip.rutgers.edu Mon Apr 9 14:20:43 1990
From: sankar at caip.rutgers.edu (ananth sankar)
Date: Mon, 9 Apr 90 14:20:43 EDT
Subject: No subject
Message-ID: <9004091820.AA17500@caip.rutgers.edu>

From fineberg at enterprise.rutgers.edu Mon Apr 9 14:16:07 1990
From: fineberg at enterprise.rutgers.edu (Fineberg)
Date: Mon, 9 Apr 90 14:16:07 EDT
Subject: No subject
Message-ID: <9004091816.AA03931@enterprise.rutgers.edu>

Rutgers University CAIP Center
CAIP Neural Network Workshop
15-17 October 1990

A neural network workshop will be held during 15-17 October 1990 in East Brunswick, New Jersey under the sponsorship of the CAIP Center of Rutgers University. The theme of the workshop will be "Theory and Applications of Neural Networks" with particular emphasis on industrial applications.
Leaders in the field from both industrial organizations and universities will present the state of the art in neural networks. Attendance will be limited to about 90 persons.

Partial List of Speakers and Panel Chairmen

J. Alspector, Bellcore
A. Barto, University of Massachusetts
R. Brockett, Harvard University
K. Fukushima, Osaka University
S. Grossberg, Boston University
R. Hecht-Nielsen, HNC, San Diego
J. Hopfield, California Institute of Technology
S. Kung, Princeton University
F. Pineda, JPL, California Institute of Technology
R. Linsker, IBM, T. J. Watson Research Center
E. Sontag, Rutgers University
H. Stark, Illinois Institute of Technology
B. Widrow, Stanford University
Y. Zeevi, CAIP Center, Rutgers University and The Technion, Israel

The workshop will begin with registration at 8:30 AM on Monday, 15 October and end at 5:00 PM on Wednesday. There will be a dinner on Tuesday evening followed by special-topic discussion sessions. The $395 registration fee ($295 for participants from CAIP member organizations) includes the cost of the dinner. Participants are urged to remain in attendance throughout the entire period of the workshop. Proceedings of the workshop will subsequently be published in book form.

Individuals wishing to participate in the workshop should fill out the attached form and mail it to the address below. In addition to the formal presentations, there will be a limited number of poster papers. Interested parties should send a title and abstract to be considered for poster presentation. The papers should be submitted by July 31, 1990.

For further information, contact
Dr. Richard Mammone
Telephone: (201) 932-5554
Electronic Mail: mammone at caip.rutgers.edu
FAX: (201) 932-4775
Telex: 6502497820 mci

Rutgers University CAIP Center
CAIP Neural Network Workshop
15-17 October 1990

I would like to participate in the Neural Network Workshop. Please send registration details.

Title:________ Last:__________________________ First:____________________ Middle:______________________
Affiliation _________________________________________________________
Address _________________________________________________________
        _________________________________________________________
Business Telephone: (___)________________________
FAX: (___)_________________________________________
Electronic Mail:_______________________
Home Telephone: (___)______________________________

I am particularly interested in the following aspects of neural networks:
____________________________________________________________________________________________________
____________________________________________________________________________________________________

I would be interested in participating in a panel___, round-table discussion___ and/or in___ presenting a paper on the subject of__________________________________________. (Please attach a one-page title and abstract).

Please complete the above and mail this form to:
Neural Network Workshop
CAIP Center, Rutgers University
P.O. Box 1390
Piscataway, NJ 08855-1390 (USA)

From miyata at dendrite.Colorado.EDU Mon Apr 9 18:31:27 1990
From: miyata at dendrite.Colorado.EDU (Yoshiro Miyata)
Date: Mon, 9 Apr 90 16:31:27 MDT
Subject: Harmonic Grammar Part 1 & 2 - Technical Reports Available
Message-ID: <9004092231.AA08134@dendrite.colorado.edu>

------------------- PLEASE DO NOT FORWARD TO OTHER BBOARDS --------------------

The following 2 technical reports are available.
Please mail requests for copies to conn_tech_report at boulder.colorado.edu with only your name and physical address in the content of the mail. On the subject line, please indicate which report(s) you are requesting.

===============================================================================

Technical Report CU-CS-464-90

Harmonic Grammar - A formal multi-level connectionist theory of linguistic well-formedness: An application

Geraldine Legendre, Yoshiro Miyata, Paul Smolensky
University of Colorado at Boulder

We describe "harmonic grammar", a connectionist-based approach to formal theories of linguistic well-formedness. The general approach can be applied to various kinds of linguistic well-formedness, e.g., phonological and syntactic. Here, we address a syntactic problem: unaccusativity. Harmonic grammar is a two-level theory, involving a distributed, lower level connectionist network whose relevant aggregate computational behavior is described by a local, higher level network. The central hypothesis is that the connectionist well-formedness measure called "harmony" can be used to model linguistic well-formedness; what is crucial about the relation between the lower and higher level networks is that there is a harmony-preserving mapping between them: they are "isoharmonic" (at least approximately). A companion paper (Legendre, Miyata, & Smolensky, 1990) describes the theoretical basis for the two-level approach, starting from general connectionist principles. In this paper, we discuss the problem of unaccusativity, give a high-level characterization of harmonic syntax, and present a higher level network to account for unaccusativity data in French. We interpret this network as a fragment of the grammar and lexicon of French expressed in "soft rules." Of the 760 sentence types represented in our data, the network correctly predicts the acceptability in all but two cases. This coverage of real, problematic syntactic data greatly exceeds that of any other formal account of unaccusativity of which we are aware.

===============================================================================

Technical Report CU-CS-465-90

Harmonic Grammar - A formal multi-level connectionist theory of linguistic well-formedness: Theoretical foundations

Geraldine Legendre, Yoshiro Miyata, Paul Smolensky
University of Colorado at Boulder

In this paper, we derive the formalism of "harmonic grammar", a connectionist-based theory of linguistic well-formedness. Harmonic grammar is a two-level theory, involving a low level connectionist network using a particular kind of distributed representation, and a second, higher level network that uses local representations and which approximately and incompletely describes the aggregate computational behavior of the lower level network. The central hypothesis is that the connectionist well-formedness measure "harmony" can be used to model linguistic well-formedness; what is crucial about the relation between the lower and higher level networks is that there is a harmony-preserving mapping between them: they are "isoharmonic" (at least approximately). In a companion paper (Legendre, Miyata, & Smolensky, 1990), we apply harmonic grammar to a syntactic problem, unaccusativity, and show that the resulting network is capable of a degree of coverage of difficult data that is unparalleled by symbolic approaches of which we are aware: of the 760 sentence types represented in our data, the network correctly predicts the acceptability in all but two cases.
In the present paper, we describe the theoretical basis for the two-level approach, illustrating the general theory through the derivation from first principles of the unaccusativity network of Legendre, Miyata, & Smolensky (1990).

From yu at cs.utexas.edu Tue Apr 10 06:38:22 1990
From: yu at cs.utexas.edu (Yeong-Ho Yu)
Date: Tue, 10 Apr 90 05:38:22 CDT
Subject: Tech Reports Available
Message-ID: <9004101038.AA15616@ai.cs.utexas.edu>

The following two technical reports are available. They will appear in the Proceedings of IJCNN90.

----------------------------------------------------------------------

EXTRA OUTPUT BIASED LEARNING
Yeong-Ho Yu and Robert F. Simmons
AI Lab, The University of Texas at Austin
March 1990, AI90-128

ABSTRACT

One way to view feed-forward neural networks is to regard them as mapping functions from the input space to the output space. In this view, the immediate goal of back-propagation in training such a network is to find a correct mapping function among the set of all possible mapping functions of the given topology. However, finding a correct one is sometimes not an easy task, especially when there are local minima. Moreover, it is harder to train a network so that it can produce correct output not only for training patterns but also for novel patterns which the network has never seen before. This so-called generalization capability is poorly understood, and there is little guidance for achieving better generalization. This paper presents a unified viewpoint for the training and generalization of a feed-forward network, and a technique for improved training and generalization based on this viewpoint.

------------------------------------------------------------------------

DESCENDING EPSILON IN BACK-PROPAGATION: A TECHNIQUE FOR BETTER GENERALIZATION
Yeong-Ho Yu and Robert F. Simmons
AI Lab, The University of Texas at Austin
March 1990, AI90-130

ABSTRACT

There are two measures for the optimality of a trained feed-forward network for the given training patterns. One is the global error function, which is the sum of squared differences between target outputs and actual outputs over all output units of all training patterns. The most popular training method, back-propagation based on the Generalized Delta Rule, minimizes the value of this function. In this method, the smaller the global error is, the better the network is supposed to be. The other measure is the correctness ratio, which shows, when the network's outputs are converted into binary outputs, for what percentage of training patterns the network generates the correct binary outputs. In practice, this is the measure that often really matters. This paper argues that these two measures are not parallel and presents a technique with which the back-propagation method achieves a high correctness ratio. The results show that networks trained with this technique often exhibit high correctness ratios not only for the training patterns but also for novel patterns.
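For readers who want to experiment with the descending-epsilon idea before obtaining the report, here is a minimal sketch of one plausible reading of the abstract, written in Python with NumPy. It runs plain batch backpropagation on a one-hidden-layer sigmoid network, except that output errors already smaller than the current tolerance epsilon are counted as correct and contribute nothing, while epsilon shrinks linearly over training. The network size, learning rate, and epsilon schedule are arbitrary illustrative choices, not the authors' settings, and this is not the code the report describes.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def add_bias(a):
        # Append a constant-1 column so the weight matrices carry biases.
        return np.hstack([a, np.ones((a.shape[0], 1))])

    def train_descending_epsilon(X, T, n_hidden=4, lr=0.5,
                                 eps_start=0.4, eps_end=0.05,
                                 epochs=5000, seed=0):
        # Batch backprop on a 1-hidden-layer sigmoid net.  The only twist:
        # residuals already inside the current epsilon band are zeroed
        # (treated as correct), and the band shrinks as training proceeds.
        rng = np.random.default_rng(seed)
        W1 = rng.normal(0.0, 0.5, (X.shape[1] + 1, n_hidden))
        W2 = rng.normal(0.0, 0.5, (n_hidden + 1, T.shape[1]))
        Xb = add_bias(X)
        for epoch in range(epochs):
            eps = eps_start + (eps_end - eps_start) * epoch / (epochs - 1)
            H = sigmoid(Xb @ W1)
            Hb = add_bias(H)
            Y = sigmoid(Hb @ W2)
            err = T - Y
            err[np.abs(err) < eps] = 0.0   # "close enough" is left alone
            d2 = err * Y * (1.0 - Y)
            d1 = (d2 @ W2[:-1].T) * H * (1.0 - H)   # skip the bias row of W2
            W2 += lr * Hb.T @ d2
            W1 += lr * Xb.T @ d1
        return W1, W2

    if __name__ == "__main__":
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR toy task
        W1, W2 = train_descending_epsilon(X, T)
        Y = sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2)
        print("binary outputs:", (Y > 0.5).astype(int).ravel())

Measured by the correctness ratio of the abstract, only the thresholded outputs in the last line matter; the epsilon band simply stops backpropagation from polishing patterns that are already on the right side of the threshold.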
-----------------------------------------------------------------------

To obtain copies, either:

a) use the getps script (by Tony Plate and Jordan Pollack, posted on connectionists a few weeks ago)

b) unix> ftp cheops.cis.ohio-state.edu      # (or ftp 128.146.8.62)
   Name (cheops.cis.ohio-state.edu:): anonymous
   Password (cheops.cis.ohio-state.edu:anonymous): ftp
   ftp> cd pub/neuroprose
   ftp> binary
   ftp> get
   (remote-file) yu.output-biased.ps.Z
   (local-file) foo.ps.Z
   ftp> get
   (remote-file) yu.epsilon.ps.Z
   (local-file) bar.ps.Z
   ftp> quit
   unix> uncompress foo.ps.Z bar.ps.Z
   unix> lpr -P(your_local_postscript_printer) foo.ps bar.ps

c) If you have any problem accessing the directory above, send a request to yu at cs.utexas.edu or
   Yeong-Ho Yu
   AI Lab
   The University of Texas at Austin
   Austin, TX 78712.

------------------------------------------------------------------------

From ai-vie!georg at relay.EU.net Tue Apr 10 09:02:17 1990
From: ai-vie!georg at relay.EU.net (Georg Dorffner)
Date: Tue, 10 Apr 90 12:02:17 -0100
Subject: EMCSR 1990
Message-ID: <9004101002.AA01681@ai-vie.uucp>

Announcement

Tenth European Meeting on Cybernetics and Systems Research
April 17-20, 1990
University of Vienna, Austria

Symposium L: Parallel Distributed Processing in Humans and Machines
Chairs: David Touretzky (Carnegie Mellon), Georg Dorffner (Vienna)

The following papers will be presented:

Tuesday afternoon (Apr. 17)

INVITED LECTURE: A Computational Basis for Phonology
D. Touretzky, USA

On the Neural Connectance-Performance Relationship
G. Barna, P. Erdi, Hungary

Quasi-Optimized Learning Dynamics in Sparsely Connected Neural Network Models
K.E. Kuerten, Germany

Memorization and Deleting in Linear Neural Networks
A. Petrosino, F. Savastano, R. Tagliaferri, Italy

A Memory-Based Connectionist Network for Speech Recognition
C.-C. Chen, Belgium

Meta-Parsing in Neural Networks
A. Nijholt, The Netherlands

Parallel Data Assimilation in Knowledge Networks
A. Parodi, S. Khouas, France

Wednesday morning (Apr. 18)

Preprocessing of Musical Information and Examples of Applications for Neural Networks
G. Hipfinger, C. Linster, Austria

Symbolic Behavior and Code Generation: The Emergence of "Equivalence Relations" in Neural Networks
G.D.A. Brown, M. Oaksford, United Kingdom

Connectionism and Unsupervised Knowledge Representation
I.M. Havel, Czechoslovakia

On Learning Content-Blind Rules
C. Mannes, G. Dorffner, Austria

- * -

The conference will include other symposia on the following topics:
- General Systems Methodology
- Fuzzy Sets, Approximate Reasoning and Knowledge-Based Systems
- Designing and Systems
- Humanity, Architecture and Conceptualization
- Cybernetics in Biology and Medicine
- Cybernetics of Socio-Economic Systems
- Managing Change and Innovation
- Systems Engineering and Artificial Intelligence for Peace Research
- Communication and Computers
- Software Development for Systems Theory
- Artificial Intelligence
- Impacts of Artificial Intelligence
- Panel on Organizational Cybernetics, National Development Planning, and Large-Scale Social Experiments

- * -

Conference Fee: AS 2,900 (ca. $240, incl. proceedings); NO FEE for students with valid ID!

The proceedings will also be available from World Scientific Publishing Co., entitled "Cybernetics and Systems '90", R. Trappl (ed.).

Registration will be possible at the conference site (main building of the University of Vienna).
You can also contact:
EMCSR Conference Secretariat
Austrian Society for Cybernetic Studies
Schottengasse 3
A-1010 Vienna, Austria
Tel: +43 1 535 32 810
Fax: +43 1 63 06 52
Email: sek at ai-vie.uucp

From ftsung at UCSD.EDU Tue Apr 10 13:49:19 1990
From: ftsung at UCSD.EDU (Fu-Sheng Tsung)
Date: Tue, 10 Apr 90 10:49:19 PDT
Subject: invertebrate sensori-motor circuits
Message-ID: <9004101749.AA07631@kenallen.ucsd.edu>

I will be presenting our work on modeling the lobster's gastric circuit, which is a central pattern generator consisting of 11 neurons. We use Williams and Zipser's recurrent learning algorithm; the model network has one unit for each neuron, and the connectivity is constrained to be the same as that of the gastric circuit. The main result is that such a simple network of sigmoidal units can reproduce a good approximation of the oscillation generated by the in-vitro gastric circuit (with respect to phase and amplitude). Note that none of the units/neurons is oscillatory by itself. The learned oscillation is very stable, as is the real circuit. Experimentation with the model suggests that the network topology is intimately related to the phase relationships of the oscillations a network can (stably) generate. This is NOT a detailed model of the gastric neurons, as it models only the input/output function and the connectivity of the circuit.

References:

Fu-Sheng Tsung, Gary Cottrell, Allen Selverston, "Some Experiments On Learning Stable Network Oscillations" (to appear in IJCNN90, June, San Diego).

R. Williams & D. Zipser, "A learning algorithm for continually running, fully recurrent neural networks." Neural Computation, 1, 270-280 (1989).

Fu-Sheng Tsung
UCSD, tsung at cs.ucsd.edu

From kawahara at av-convex.ntt.jp Wed Apr 11 08:34:31 1990
From: kawahara at av-convex.ntt.jp (Hideki KAWAHARA)
Date: Wed, 11 Apr 90 21:34:31 +0900
Subject: Japan Neural Network Society meeting (List of titles)
Message-ID: <9004111234.AA19548@av-convex.ntt.jp>

The Japan Neural Network Society had its first joint technical meeting with the IEICE and the SICE Japan on 16-17 March 1990. The following is the list of titles presented. I hope this will give some understanding of neural network research activities in Japan and provide a pointer. The JNNS will also have its first annual conference on 10-12 September 1990.

Hideki Kawahara
NTT Basic Research Laboratories
3-9-11 Midori-cho, Musashino, Tokyo 180, JAPAN
Tel: +81 422 59 2276
Fax: +81 422 3393
kawahara at nttlab.ntt.jp (CSNET)

-----------------------------------------------------------
IEICE Technical Report (NC: Neurocomputing) contents
(IEICE: The Institute of Electronics, Information and Communication Engineers)
(SICE: The Society of Instrument and Control Engineers)
* The affiliation of the first author of each report is attached.
------------------------------------------------------------

Makoto KANO, KAWATO, UNO, SUZUKI: "Learning Trajectory Control of A Redundant Arm by Feedback-Error-Learning", IEICE Technical Report, NC89-61, Vol.89, No.463, pp.1-6, (1990-03).
*Faculty of Engineering Science, Osaka University

Hiroaki GOMI, KAWATO: "Learning Control of an Unstable System with Feedback Error Learning", IEICE Technical Report, NC89-62, Vol.89, No.463, pp.7-12, (1990-03).
*ATR Auditory and Visual Perception Research Laboratories

Masayuki NAKAMURA, UNO, SUZUKI, KAWATO: "Formation of Optimal Trajectory in Arm Movement Using Inverse Dynamics Model", IEICE Technical Report, NC89-63, Vol.89, No.463, pp.13-18, (1990-03).
*Faculty of Engineering, University of Tokyo

Motohiro KITANO, KAWATO, UNO, SUZUKI: "Optimal Trajectory Control by the Cascade Neural Network Model for Industrial Manipulator", IEICE Technical Report, NC89-64, Vol.89, No.463, pp.19-24, (1990-03).
*Faculty of Engineering Science, Osaka University

Makoto HIRAYAMA, KAWATO, JORDAN: "Speed-Accuracy Trade-off of Arm Movement Predicted by the Cascade Neural Network Model", IEICE Technical Report, NC89-65, Vol.89, No.463, pp.25-30, (1990-03). (In English)
*ATR Auditory Visual Perception Research Laboratories

Yoshinori UESAKA, TSUKADA: "On a Family of Acceptance Functions for Simulated Annealing", IEICE Technical Report, NC89-66, Vol.89, No.463, pp.31-36, (1990-03).
*Faculty of Science and Technology, Science University of Tokyo

Akira YAMASHITA, AKIYAMA, ANZAI: "Proposal of Novel Simulated Annealing Method based on the Entropy of the Neural Network", IEICE Technical Report, NC89-67, Vol.89, No.463, pp.37-42, (1990-03).
*Faculty of Science and Technology, Keio University

Haruhisa TAKAHASHI, TOMITA, KAWABATA: "Acquirement of Internal Representations and The Backpropagation Convergence Theorem", IEICE Technical Report, NC89-68, Vol.89, No.463, pp.43-48, (1990-03). (In English)
*The University of Electro-Communications

Haruhisa TAKAHASHI, ARAI, TOMITA: "Some Results in Stationary Recurrent Neural Networks", IEICE Technical Report, NC89-69, Vol.89, No.463, pp.49-54, (1990-03).
*The University of Electro-Communications

Masanobu MIYASHITA, TANAKA: "Application of thermodynamics in the potts spin system to the combinatorial optimization problems", IEICE Technical Report, NC89-70, Vol.89, No.463, pp.55-60, (1990-03).
*Fundamental Research Laboratories, NEC Corporation

Yuuichi SAKURABA, NAKAMOTO, MORIIZUMI: "Proposal of Learning Vector Quantization Method Using Fuzzy Theory", IEICE Technical Report, NC89-71, Vol.89, No.463, pp.61-66, (1990-03).
*Faculty of Engineering, Tokyo Institute of Technology

Koji KURATA: "On the Formation of Columnar and Hyper-Columnar Structures in Self-Organizing Models of Topographic Mappings", IEICE Technical Report, NC89-72, Vol.89, No.463, pp.67-72, (1990-03).
*Faculty of Engineering, University of Tokyo (to March/1990); Osaka University (from April/1990)

Shotaro AKAHO, AMARI: "On the Lower Bound of the Capacity of Three-Layer Networks Using the Sparse Encoding Method", IEICE Technical Report, NC89-73, Vol.89, No.463, pp.73-78, (1990-03).
*Faculty of Engineering, University of Tokyo

Hirofumi YANAI, SAWADA: "On associative recall by a randomly sparse model neural network", IEICE Technical Report, NC89-74, Vol.89, No.463, pp.79-84, (1990-03).
*Research Institute of Electrical Communication, Tohoku University

Tadashi KURATA, SAITOH: "Design of Parallel Distributed Processor for Neuralnet Simulator", IEICE Technical Report, NC89-75, Vol.89, No.463, pp.85-90, (1990-03).
*Faculty of Engineering, Chiba University

Hideki KATO, YOSHIZAWA, ICIKI: "A Parallel Neurocomputer Architecture with Ring Registers", IEICE Technical Report, NC89-76, Vol.89, No.463, pp.91-96, (1990-03).
*Fujitsu Laboratories Ltd.

Takafumi KAJIWARA, KITAYAMA: "Spread Spectrum Decoding Method Utilizing Neural Network", IEICE Technical Report, NC89-77, Vol.89, No.463, pp.97-100, (1990-03).
*NTT Transmission Systems Laboratories

Shigeo SATO, NISHIMURA, HAYAKAWA, IWASAKI, NAKAJIMA, MUROTA, MIKOSHIBA, SAWADA: "Implementation of Integrated Neural Elements and Their Application to An A/D Converter", IEICE Technical Report, NC89-78, Vol.89, No.463, pp.101-106, (1990-03).
*Research Institute of Electrical Communication, Tohoku University

Masahiro OKAMOTO: "Development of Biochemical Threshold-Logic Device Capable of Storing Short-Memory", IEICE Technical Report, NC89-79, Vol.89, No.463, pp.107-112, (1990-03). (In English)
*Kyushu Institute of Technology

Shuji AKIYAMA, SHIGEMATSU, IIJIMA, MATSUMOTO: "An Analysis and Modeling System Based on The Concurrent Observation of Neuron Network Activity of Hippocampus Slice", IEICE Technical Report, NC89-80, Vol.89, No.463, pp.113-118, (1990-03).
*Electrotechnical Laboratory

Y. SHIGEMATSU, AKIYAMA, MATSUMOTO: "Suppression, a necessary function for the synaptic plasticity of hippocampus", IEICE Technical Report, NC89-81, Vol.89, No.463, pp.119-122, (1990-03).
*Electrotechnical Laboratory

T. AIHARA, SUZUKI, TUKADA, KATO: "The Mechanism and a Model for LTP Induction in the Hippocampus", IEICE Technical Report, NC89-82, Vol.89, No.463, pp.123-128, (1990-03).
*Faculty of Engineering, Tamagawa University

Hiroyuki MIYAMOTO, FUKUSHIMA: "Recognition of Temporal Patterns by a Multi-layered Neural Network Model", IEICE Technical Report, NC89-83, Vol.89, No.463, pp.129-134, (1990-03).
*Faculty of Engineering Science, Osaka University

Tatsuo KITAJIMA, HARA: "Associative Memory and Learning in Nerve Cell", IEICE Technical Report, NC89-84, Vol.89, No.463, pp.135-140, (1990-03).
*Faculty of Engineering, Yamagata University

Ichiro SHIMADA, HARA: "Fractal Properties of Animal Behavior", IEICE Technical Report, NC89-85, Vol.89, No.463, pp.141-146, (1990-03).
*Tohoku University

Tetsuo FURUKAWA, YASUI: "Formation of center-surround opponent receptive fields through edge detection by backpropagation learning", IEICE Technical Report, NC89-86, Vol.89, No.463, pp.147-152, (1990-03).
*Faculty of Computer Science and Engineering, Kyusyu Institute of Technology

Yukihiro YOSHIDA, HIRAI: "A Model of Color Processing", IEICE Technical Report, NC89-87, Vol.89, No.463, pp.153-158, (1990-03).
*University of Tsukuba

Hiroshi NAKAJIMA, MIZUNO, HIDA, SAITO, TSUKADA: "Effect of the Percentage of the Coherent Movement of Visual Texture Components on the Recognition of Direction of Wide-Field Movement", IEICE Technical Report, NC89-88, Vol.89, No.463, pp.159-164, (1990-03).
*Faculty of Engineering, Tamagawa University

Teruhiko OHTOMO, T. HARA, OHUCHI, K. HARA: "Recognition of Hand-Written Chinese Characters Constructed by Radical and Non-radical Using Neural Network Models", IEICE Technical Report, NC89-90, Vol.89, No.464, pp.1-6, (1990-03).
*Faculty of Engineering, Yamagata University

Yoshihiro ARIYAMA, ITO, TSUKADA: "Alphanumeric Character Recognition by Neocognitron with Error Correct Training", IEICE Technical Report, NC89-91, Vol.89, No.464, pp.7-12, (1990-03).
*Faculty of Engineering, Tamagawa University

Hiroshi ISHIJIMA, NAGANO: "A Neural Network Model with Function to Grasp its Situation", IEICE Technical Report, NC89-92, Vol.89, No.464, pp.13-18, (1990-03).
*College of Engineering, Hosei University

Makoto HIRAHARA, NAGANO: "A neural network for fixation point selection", IEICE Technical Report, NC89-93, Vol.89, No.464, pp.19-24, (1990-03).
*College of Engineering, Hosei University

Masahiko HASEBE, OHNISHI, SUGIE: "Automatic Generation of a World Map for Autonomous Mobile Robot", IEICE Technical Report, NC89-94, Vol.89, No.464, pp.25-30, (1990-03).
*Faculty of Engineering, Nagoya University

Hidetatsu KAKENO, SUGIE: "FOCUSSED REGION SEGMENTATION FROM BLURED BACKGROUND USING D2G FILTERS", IEICE Technical Report, NC89-95, Vol.89, No.464, pp.31-36, (1990-03).
*Toyota College of Technology

Takashi FURUKAWA, ARITA, SUGIHARA, HIRAI: "Computer Map-reading using Neural Networks", IEICE Technical Report, NC89-96, Vol.89, No.464, pp.37-42, (1990-03).
*Electronics R & D Lab., Nippon Steel Co.

Takao MATSUMOTO, KOGA: "Study on a High-Speed Learning Method for Analog Neural Networks", IEICE Technical Report, NC89-97, Vol.89, No.464, pp.43-48, (1990-03).
*NTT Transmission Systems Laboratories

Kazuhisa NIKI, YAMADA: "Can backpropagation learning rule co-exist with Hebbian learning rule?", IEICE Technical Report, NC89-98, Vol.89, No.464, pp.49-54, (1990-03).
*Electrotechnical Laboratory

Masumi ISHIKAWA: "A General Structure Learning of Connectionist Models Using Forgetting", IEICE Technical Report, NC89-99, Vol.89, No.464, pp.55-60, (1990-03).
*Electrotechnical Laboratory

Naohiro TODA, HAGIWARA, USUI: "Data Fitting by Multilayered Neural Network -- Decision of Network Structure via Information Criterion --", IEICE Technical Report, NC89-100, Vol.89, No.464, pp.61-66, (1990-03).
*Toyohashi University of Technology

Akio TANAKA, YOSHIMURA: "Theoretical Analysis of a Three-Layer Neural Network with Spread Pattern Information Learning method", IEICE Technical Report, NC89-101, Vol.89, No.464, pp.67-72, (1990-03).
*International Institute for Advanced Study of Social Information Science (IIAS-SIS), Fujitsu Ltd.

Shin-ya MIYAZAKI, YONEKURA, TORIWAKI: "On the Capability for Geometrical Structure Analysis of Sample Distribution -- Relationship between Auto Associative Networks and PPN", IEICE Technical Report, NC89-102, Vol.89, No.464, pp.73-78, (1990-03).
*School of Engineering, Nagoya University

Shin SUZUKI, KAWAHARA: "Evaluating Neural Networks using Mean Curvature", IEICE Technical Report, NC89-103, Vol.89, No.464, pp.79-84, (1990-03).
*NTT Basic Research Laboratories

Masafumi HAGIWARA: "Back-propagation with Artificial Selection -- Reduction of the number of hidden units --", IEICE Technical Report, NC89-104, Vol.89, No.464, pp.85-90, (1990-03).
*Faculty of Science and Technology, Keio University

Hiroyuki ENDOH, IDE: "The influence of the number of units on the ability of learning", IEICE Technical Report, NC89-105, Vol.89, No.464, pp.91-96, (1990-03).
*Aoyama Gakuin University

Special lecture: Eiichi Iwai, IEICE Technical Report, NC89-89, Vol.89, No.463, pp.165-176, (1990-03).

From LUBTODI%YALEVM.BITNET at vma.CC.CMU.EDU Wed Apr 11 15:08:00 1990
From: LUBTODI%YALEVM.BITNET at vma.CC.CMU.EDU (LUBTODI%YALEVM.BITNET@vma.CC.CMU.EDU)
Date: Wed, 11 Apr 90 14:08 EST
Subject: emergent properties
Message-ID:

Emergent properties are one of the potential benefits of using a connectionist model to perform a task. For example, if the task is to classify objects, a connectionist algorithm can give gracefully degraded performance when the input is poor or the network is damaged. Graceful degradation and content addressability are two properties that often seem to be called emergent properties -- they come for free with a connectionist model. My questions are:

1. What other beneficial properties emerge from connectionist models?

2. Do different network architectures lead to different emergent properties?

3. There may also be emergent constraints on the task to be performed. Using the categorization example, perhaps only a certain number of categories can be formed given a certain number of hidden units. What constraints do emerge for the task-level description?
From my point of view, the relevance of connectionist models is increased when the emergent benefits of a model are those that humans have and the emergent constraints on task performance are also seen in humans. I am interested in hearing about (1) examples of emergent benefits and constraints in both low level and high level tasks, and (2) whether these emergent network properties are seen in people (or whatever species is being modelled).

Todd Lubart
LUBTODI at YALEVM

From turing%ctcs.leeds.ac.uk at NSFnet-Relay.AC.UK Thu Apr 12 23:38:09 1990
From: turing%ctcs.leeds.ac.uk at NSFnet-Relay.AC.UK (Turing Conference)
Date: Thu, 12 Apr 90 23:38:09 BST
Subject: Easter request
Message-ID: <11865.9004122238@ctcs.leeds.ac.uk>

I have been asked by a colleague at Leeds to send on this message:

I would be grateful if, like myself, you were able to respond to a letter received recently, in order to help Craig. He is a 7 year old boy who is in the Royal Marsden Hospital in London. Craig Shergold has a tumour on the brain and one on the spine and has very little time to live. It is his ambition to have an entry in the Guinness Book of Records for the largest number of "Get Well" cards ever received by an individual. Please send a card to:

Craig Shergold
56 Selby Road
CARSHALTON
Surrey SN6 1LD
United Kingdom

I would be grateful if you could send a copy of this letter to at least another 10 companies or individuals.

Yours sincerely,
Ian Mitchell Lambert
Department of Theology
University of Kent at Canterbury

From chrisley at parc.xerox.com Thu Apr 12 22:15:09 1990
From: chrisley at parc.xerox.com (Ron Chrisley)
Date: Thu, 12 Apr 90 19:15:09 PDT
Subject: Easter request
In-Reply-To: Turing Conference's message of Thu, 12 Apr 90 23:38:09 BST <11865.9004122238@ctcs.leeds.ac.uk>
Message-ID: <9004130215.AA21816@roo.parc.xerox.com>

Connectionists: As nice a gesture as it is, this "Easter request" should be ignored. I'm posting some news articles relevant to the message which indicate that NO MORE CARDS SHOULD BE SENT. Please do not waste any more bandwidth by forwarding the message to other mailing lists, bboards, or newsgroups.

Ask for Cards, and Ye Shall Receive and Receive and Receive
by Douglas Burns

WEST PALM BEACH, Fla. -- A 7-year-old English boy with cancer is finding that once a story hits the modern-day grapevine of fax machines and computer bulletin boards, it is impossible to stop. Critically ill with a rare brain tumor, Craig Shergold told his parents and nurses at a British hospital in September of his wish to be in the Guinness Book of World Records for owning the world's largest collection of post cards. The same wish was fulfilled only a year earlier for another English boy with cancer.

Once the news was out, it flowed through every conceivable medium to even the most unimaginable places on the globe. Budget Rent A Car in Miami got news about Craig from a Budget office in Gibraltar and sent one of their employees out to alert South Florida businesses. ``We also passed it around to all our offices in the nation,'' said Maria Borchers, director of travel marketing. Children's Wish International, a non-profit organization based in Atlanta, is also working to get cards for Craig. One of its appeals made its way to a computer bulletin board run by Bechtel, a Maryland-based company with an office in Palm Beach Gardens. ``We are getting 10,000 to 15,000 cards for Craig per day,'' said Arthur Stein, director of Children's Wish International.

But Craig doesn't want any more cards.
In November, he received a certificate from Guinness after his mountain-sized collection of 1.5 million cards broke the record set in 1988 by Mario Morby, a 13-year-old cancer victim. Since then, Craig's dream has become a logistical nightmare for his parents, phone operators and the Royal Marsden Hospital in Surrey, England. Monday, the unofficial count for Craig's collection reached 4 million, said Mark Young, a Guinness Publishing Ltd. spokesman. The hospital has set up a separate answering service to implore callers to refrain from sending more postcards. Despite pleas of mercy and reports in the media, hundreds of post cards continue to pour into the hospital every day. ``Thank you for being so kind,'' said Maria Dest, a nurse at Royal Marsden. ``But he really does not need any more post cards.''

Dest said that whenever a corporation gets wind of Craig's plight, the bundles of mail increase. ``As soon as it starts to slow down, it goes around again,'' she said. Dest would not discuss the specifics of his condition. ``His condition is deteriorating, but he is still able to talk and function,'' she said. Young, with Guinness, said he gets several calls every day from people who question whether Craig Shergold even exists. ``This is definitely legitimate and Craig will be in the 1990 Guinness Book,'' said Young. But because of the problems the two appeals have caused, Young said Guinness plans to discontinue the category. The public outpouring for Mario and now Craig surprised virtually everyone involved, he said. ``These two boys really captured the public imagination,'' Young said.

Daniel

It gets worse. Some quotes from newsgroups (I assume they are true):

"Guinness has announced that this category will not be included in future editions because attempts to break the record have taken several lives. One critically ill child suffocated when a stack of 500,000 cards fell over on him."

and

"ok guys, this story has been bouncing around the net for a while. yes it is true, but the kid has died already. the parents have requested that no further letters be sent. this story has been on soc.singles, misc.misc, and several other boards for the past few months. it has been debated for a long while! please don't start this over again, the kid has died there is no purpose for bouncing this any longer!!"

Ron Chrisley
chrisley at csli.stanford.edu
Xerox PARC SSL, Palo Alto, CA 94304, (415) 494-4728
New College, Oxford OX1 3BN, UK, (865) 793-484

From Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU Thu Apr 12 20:25:03 1990
From: Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave.Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Thu, 12 Apr 90 20:25:03 EDT
Subject: Easter request
In-Reply-To: Your message of Thu, 12 Apr 90 23:38:09 -0000. <11865.9004122238@ctcs.leeds.ac.uk>
Message-ID: <5842.639966303@DST.BOLTZ.CS.CMU.EDU>

> I would be grateful if, like myself, you were able to respond to a
> letter received recently, in order to help Craig. He is a 7 year old boy
> who is in the Royal Marsden Hospital in London.
>
> ... rest of blather about postcards for dying boy deleted ...
>
> I would be grateful if you could send a copy of this letter to at least
> another 10 companies or individuals.
> Ian Mitchell Lambert
> Department of Theology
> University of Kent at Canterbury

Chain letters are an abuse of the Internet, and CONNECTIONISTS is a private mailing list. If you ever post something like this to CONNECTIONISTS again, I will file a formal complaint with your site administrator.
I'm cc'ing this to the whole CONNECTIONISTS list to forestall a flame-fest, and to remind other readers that this is *not* acceptable behavior. -- Dave Touretzky

From ccm at DARWIN.CRITTERS.CS.CMU.EDU Thu Apr 12 22:49:51 1990
From: ccm at DARWIN.CRITTERS.CS.CMU.EDU (Christopher McConnell)
Date: Thu, 12 Apr 90 22:49:51 EDT
Subject: Easter request
In-Reply-To: Turing Conference's message of Thu, 12 Apr 90 23:38:09 BST <11865.9004122238@ctcs.leeds.ac.uk>
Message-ID:

He has broken the record and requests that no more cards be sent. Guinness is retiring the category.

From aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK Fri Apr 13 09:16:44 1990
From: aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK (Aaron Sloman)
Date: Fri, 13 Apr 90 09:16:44 BST
Subject: Easter request - DON'T PLEASE
Message-ID: <1164.9004130816@psuni.cogs.susx.ac.uk>

DO NOT RESPOND TO THIS REQUEST - THEY HAVE HAD ENOUGH

> .......... I would be grateful if, like myself, you were able to respond to a letter
> received recently, in order to help Craig. He is a 7-year-old boy who
> is in the Royal Marsden Hospital in London.
>
> Craig Shergold has a tumour on the brain and one on the spine and has very
> little time to live.
>
> It is his ambition to have an entry in the Guinness Book of Records for
> the largest number of "Get Well" cards ever received by an individual.
>
> Please send a card to: ......
> I would be grateful if you could send a copy of this letter to at least
> another 10 companies or individuals.

THE BOY HAS ACHIEVED HIS GOAL AND THE HOSPITAL HAS DESPERATELY REQUESTED (EVEN VIA BBC NEWS) THAT NO MORE CARDS BE SENT. THEY CANNOT COPE WITH THE FLOOD. If you have copied the letter to others, please ask them to ignore it. Aaron Sloman

From Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU Fri Apr 13 04:23:48 1990
From: Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave.Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Fri, 13 Apr 90 04:23:48 EDT
Subject: tech report available
Message-ID: <6200.639995028@DST.BOLTZ.CS.CMU.EDU>

*** DO NOT FORWARD TO OTHER MAILING LISTS ***
*** DO NOT FORWARD TO OTHER MAILING LISTS ***

Rules and Maps II: Recent Progress in Connectionist Symbol Processing

David S. Touretzky [1], Deirdre W. Wheeler [2], Gillette Elvgren III [1]
CMU-CS-90-112, March 1990

[1] School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
[2] Department of Linguistics, University of Pittsburgh, Pittsburgh, PA 15260

ABSTRACT

This report contains three papers on symbol processing in connectionist networks. The first two, ``A Computational Basis for Phonology'' and ``Rationale for a `Many Maps' Phonology Machine,'' present the latest results of our ongoing project to develop a connectionist explanation for the regularities and peculiarities of human phonological behavior. The third paper, ``Rule Representations in a Connectionist Chunker,'' introduces a new rule chunking architecture based on competitive learning, and compares its performance with that of a backpropagation-based chunker. Earlier work in these areas was described in report CMU-CS-89-158, ``Rules and Maps in Connectionist Symbol Processing.''

``A Computational Basis for Phonology'' and ``Rule Representations in a Connectionist Chunker'' will appear in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, the collected papers of the 1989 IEEE Conference on Neural Information Processing Systems - Natural and Synthetic, Denver, CO, November 1989, Morgan Kaufmann Publishers.
``Rationale for a `Many Maps' Phonology Machine'' will appear in the Proceedings of EMCSR-90: the Tenth European Meeting on Cybernetics and Systems Research, Vienna, Austria, April 1990, World Scientific Publishing Co.

................................................................

To order copies of this report, write to Ms. Catherine Copetas at the School of Computer Science, or send email to copetas+ at cs.cmu.edu. Be sure to include the tech report number, CMU-CS-90-112, in your message. There is no charge for this report.

From jai at blake.acs.washington.edu Sat Apr 14 00:19:33 1990
From: jai at blake.acs.washington.edu (Jai Choi)
Date: Fri, 13 Apr 90 21:19:33 -0700
Subject: TRs available
Message-ID: <9004140419.AA24872@blake.acs.washington.edu>

To whom it may concern: We would appreciate it if you could post the following announcement of two technical notes. Thanks in advance. Jai Choi.

==================================================================
Two Technical Notes Available
==================================================================

1. Query Learning Based on Boundary Search and Gradient Computation of Trained Multilayer Perceptrons

Jenq-Neng Hwang, Jai J. Choi, Seho Oh, Robert J. Marks II
Interactive Systems Design Lab.
Department of Electrical Engr., FT-10
University of Washington, Seattle, WA 98195

****** Abstract *******

In many machine learning applications, the source of the training data can be modeled as an oracle. An oracle has the ability, when presented with an example (query), to give a correct classification. The goal of efficient query learning is to obtain good training data from the oracle at low cost. This report presents a novel approach to query-based neural network learning. Consider a layered perceptron partially trained for binary classification. The single output neuron is trained to be either a 0 or a 1. A test decision is made by thresholding the output at, say, 0.5. The set of inputs that produce an output of 0.5 forms the classification boundary. We adopted an inversion algorithm for the neural network that allows generation of this boundary. In addition, for each boundary point, we can generate the classification gradient. The gradient provides a useful measure of the sharpness of the multi-dimensional decision surfaces. Using the boundary point and gradient information, conjugate input pair locations are generated and presented to an oracle for proper classification. This new data is used to further refine the classification boundary, thereby increasing the classification accuracy. The result can be a significant reduction in the training set cardinality in comparison with, for example, randomly generated data points. An application example to power security assessment is given. (To be presented at IJCNN'90, San Diego.)
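As a concrete illustration of the boundary-search idea, here is a minimal sketch in Python/NumPy. It is not the authors' code: the toy network, its weights, and every name and constant below are invented for the example. A small trained classifier is inverted by gradient descent on its input until the output sits at the 0.5 threshold; the boundary point and its classification gradient then yield a conjugate pair of queries for the oracle.

-----------------------------------------------------------------------
import numpy as np

# Toy, assumed-pretrained 2-4-1 sigmoid network (weights illustrative only).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)            # hidden activations
    y = sigmoid(W2 @ h + b2)[0]         # scalar output in (0, 1)
    return y, h

def output_input_gradient(x):
    # dy/dx by the chain rule through both layers.
    y, h = forward(x)
    delta = y * (1 - y) * W2[0] * h * (1 - h)   # backprop to hidden pre-activations
    return W1.T @ delta

def find_boundary_point(x0, target=0.5, lr=0.5, steps=500):
    # Network inversion: gradient descent on the INPUT until output ~= target.
    x = x0.copy()
    for _ in range(steps):
        y, _ = forward(x)
        x -= lr * (y - target) * output_input_gradient(x)  # descend (y-target)^2/2
    return x

x_b = find_boundary_point(rng.normal(size=2))
g = output_input_gradient(x_b)          # classification gradient: boundary sharpness
print("boundary point:", x_b, "output:", forward(x_b)[0])

# Conjugate query pair straddling the boundary, to be labeled by the oracle:
eps = 0.1
queries = [x_b + eps * g, x_b - eps * g]
print("conjugate queries for the oracle:", queries)
-----------------------------------------------------------------------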
**********************************************************************

2. Iterative Constrained Inversion of Neural Networks and its Applications

Jenq-Neng Hwang, Chi H. Chan

****** Abstract ******

This report presents a new approach to solving constrained inverse problems for a trained nonlinear mapping. These problems can be found in a wide variety of applications in dynamic control of nonlinear systems and nonlinear constrained optimization. The forward problem in a nonlinear functional mapping is to obtain the best approximation of the output vector given the input vector. The inverse problem, on the other hand, is to obtain the best approximation of the input vector given a specified output vector, i.e., to find the inverse function of the nonlinear mapping, which might not exist unless constraints are imposed. Most neural networks previously proposed for training the inverse mapping adopted either a one-way constraint perturbation or a two-stage learning procedure; both of these approaches are laborious and unreliable. Instead of using two neural networks to emulate the forward and inverse mappings separately, we applied the network inversion algorithm, which works directly on the network used to train the forward mapping, yielding the inverse mapping. Our approach uses one network to emulate both the forward and inverse nonlinear mappings without explicitly characterizing and implementing the inverse mapping. Furthermore, our single-network inversion approach allows one to iteratively locate the optimal inverted solution that also satisfies the constraints imposed on the inputs, and allows full exploitation of the sensitivity of the outputs to the inputs in a nonlinear mapping. (Presented at the 24th Conference on Information Sciences and Systems.)

******** For copies of the above two TRs ************

Send your physical address to Jai Choi, Dept. EE, FT-10, Univ. of Washington, Seattle, WA 98195, or "jai at blake.acs.washington.edu".
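The constrained-inversion idea can be sketched in the same toy setting as above; again this is an invented illustration, not the authors' code. The input is updated by the inversion gradient, and after every step it is projected back onto a feasible set, here a simple box constraint.

-----------------------------------------------------------------------
import numpy as np

rng = np.random.default_rng(1)
# Toy, assumed-pretrained forward model y = f(x): 2 inputs -> 1 output.
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def f(x):
    return sig(W2 @ sig(W1 @ x + b1) + b2)[0]

def grad_x(x):
    # d f / d x by the chain rule.
    h = sig(W1 @ x + b1)
    y = f(x)
    return W1.T @ (y * (1 - y) * W2[0] * h * (1 - h))

def constrained_invert(y_star, lo=-1.0, hi=1.0, lr=1.0, steps=1000):
    # Projected gradient descent: find x in [lo, hi]^2 with f(x) ~= y_star.
    x = rng.uniform(lo, hi, size=2)
    for _ in range(steps):
        x -= lr * (f(x) - y_star) * grad_x(x)   # descend on squared error
        x = np.clip(x, lo, hi)                  # project onto the constraint box
    return x

x_inv = constrained_invert(0.8)
print("inverted input:", x_inv, "f(x):", f(x_inv))
-----------------------------------------------------------------------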
From MURRE%HLERUL55.BITNET at vma.CC.CMU.EDU Wed Apr 18 14:18:00 1990
From: MURRE%HLERUL55.BITNET at vma.CC.CMU.EDU (MURRE%HLERUL55.BITNET@vma.CC.CMU.EDU)
Date: Wed, 18 Apr 90 14:18 MET
Subject: selective attention
Message-ID:

We noticed that there were some inquiries about connectionist work on selective attention and that our work was mentioned. This work will appear shortly in Cognitive Psychology: Phaf, R.H., A.H.C. Van der Heijden, and P.T.W. Hudson (1990) SLAM: A connectionist model for attention in visual selection tasks. Cognitive Psychology, in press. For those really interested, a limited number of offprints of this rather long paper will be available (after we have received the offprints ourselves). We have extended the attentional network to learning neural networks, incorporating some effects of attention on learning and memory. The main building block of these networks is the Categorizing And Learning Module (CALM). We have nearly finished a long report on this work, which will be announced for 'report requests' on this list shortly. In the meantime, those interested may want to check: Murre, J.M.J., R.H. Phaf, and G. Wolters (1989) CALM: a modular approach to supervised and unsupervised learning. IEEE-INNS, Proceedings of the International Joint Conference on Neural Networks, Washington DC, June 1989, Vol. 1, p. 649-656. A review of this paper by T.P. Vogl will appear in one of the forthcoming issues of Neural Network Review, accompanied by a reply from us. With these CALM modules we have constructed a model which shows a dissociation between implicit and explicit memory tasks (e.g., Schacter, 1987): Phaf, R.H., E. Postma, and G. Wolters (submitted) ELAN-1: a connectionist model for implicit and explicit memory tasks. For those really interested, some internal reports on this work are available.

R. Hans Phaf, Jacob M.J. Murre
E-mail: MURRE at HLERUL55.Bitnet
Address: Unit of Experimental and Theoretical Psychology, Leiden University, P.O. Box 9555, 2300 RB Leiden, The Netherlands

From gary%cs at ucsd.edu Wed Apr 18 14:30:08 1990
From: gary%cs at ucsd.edu (Gary Cottrell)
Date: Wed, 18 Apr 90 11:30:08 PDT
Subject: selective attention
In-Reply-To: MURRE%HLERUL55.BITNET@vma.CC.CMU.EDU's message of Wed, 18 Apr 90 14:18 MET
Message-ID: <9004181830.AA09286@desi.ucsd.edu>

Please send me a copy. If you get other requests from UCSD, direct them to me.

gary cottrell 619-534-6640 Sec'y: 619-534-5288 FAX: 619-534-7029
Computer Science and Engineering C-014
UCSD, La Jolla, Ca. 92093
gary at cs.ucsd.edu (ARPA)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!gary (USENET)
gcottrell at ucsd.edu (BITNET)

From stolcke%icsib12.Berkeley.EDU at jade.berkeley.edu Wed Apr 18 18:31:33 1990
From: stolcke%icsib12.Berkeley.EDU at jade.berkeley.edu (Andreas Stolcke)
Date: Wed, 18 Apr 90 18:31:33 BST
Subject: 2 TRs available
Message-ID: <9004190131.AA04915@icsib12.berkeley.edu.>

The following Technical Reports are available. Please refer to the end of this message for information on how to obtain them.

-------------------------------------------------------------------------------

MINIATURE LANGUAGE ACQUISITION: A TOUCHSTONE FOR COGNITIVE SCIENCE

Jerome A. Feldman, George Lakoff, Andreas Stolcke and Susan Hollbach Weber
International Computer Science Institute
Technical Report TR-90-009, March 1990

ABSTRACT

Cognitive Science, whose genesis was interdisciplinary, shows signs of reverting to a disjoint collection of fields. This paper presents a compact, theory-free task that inherently requires an integrated solution. The basic problem is learning a subset of an arbitrary natural language from picture-sentence pairs. We describe a very specific instance of this task and show how it presents fundamental (but not impossible) challenges to several areas of cognitive science including vision, language, inference and learning.

-------------------------------------------------------------------------------

LEARNING FEATURE-BASED SEMANTICS WITH SIMPLE RECURRENT NETWORKS

Andreas Stolcke
International Computer Science Institute
Technical Report TR-90-015, April 1990

ABSTRACT

The paper investigates the possibilities for using simple recurrent networks as transducers which map sequential natural language input into non-sequential feature-based semantics. The networks perform well on sentences containing a single main predicate (encoded by transitive verbs or prepositions) applied to multiple-feature objects (encoded as noun phrases with adjectival modifiers), and show robustness against ungrammatical inputs. A second set of experiments deals with sentences containing embedded structures. Here the network is able to process multiple levels of sentence-final embeddings but only one level of center-embedding. This turns out to be a consequence of the network's inability to retain information that is not reflected in the outputs over intermediate phases of processing. Two extensions to Elman's original recurrent network architecture are introduced.

-------------------------------------------------------------------------------

[Versions of these papers have been submitted to the 12th Annual Conference of the Cognitive Science Society.]

The reports can be obtained as compressed PostScript files from host cis.ohio-state.edu via anonymous ftp. The filenames are feldman.tr90-9.ps.Z and stolcke.tr90-15.ps.Z in directory /pub/neuroprose.
Hardcopies may be requested via e-mail to weber at icsi.berkeley.edu or stolcke at icsi.berkeley.edu, or by physical mail to one of the authors at the following address: International Computer Science Institute, 1947 Center Street, Suite 600, Berkeley, CA 94704, U.S.A.

------------------------------------------------------------------------------- Andreas Stolcke

From STIVA%IRMKANT.BITNET at vma.CC.CMU.EDU Fri Apr 20 10:18:43 1990
From: STIVA%IRMKANT.BITNET at vma.CC.CMU.EDU (Nolfi & Cecconi)
Date: Fri, 20 Apr 90 10:18:43 EDT
Subject: weight spaces
Message-ID:

We would like to submit this topic for discussion: A good way to understand neural network functioning is to see the learning process as a trajectory in the weight space. More specifically, given a task, we can imagine the process of learning as a movement on the error (fitness) surface of the weight space. The concept of local minima, for example, which derives from this idea, has been shown to be extremely useful. However, we know very little about weight spaces. This certainly comes from the fact that they are very complex to investigate. On the other hand, we think it would be useful to try to answer questions like: Are there regularities in the error surface? If so, are these regularities task dependent, or are there also regularities of a general type? How do learning algorithms differ from the point of view of the trajectory in the weight space? We would appreciate comments and possibly references about this. Nolfi & Cecconi
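One way to make such questions concrete on a small scale, offered purely as an illustration (task, network size, and learning rate below are arbitrary choices): train a tiny network, log the weight vector at every step, and sample the error surface on the plane spanned by the trajectory's endpoints.

-----------------------------------------------------------------------
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([0., 1., 1., 0.])                    # XOR task
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def error(w):
    # Unpack 9 weights into a 2-2-1 network and return mean squared error.
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    Y = sig(sig(X @ W1.T + b1) @ W2 + b2)
    return np.mean((Y - T) ** 2)

def grad(w, eps=1e-5):
    # Finite-difference gradient, for brevity.
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (error(w + d) - error(w - d)) / (2 * eps)
    return g

w = rng.normal(size=9)
trajectory = [w.copy()]
for _ in range(3000):                             # plain gradient descent
    w -= 2.0 * grad(w)
    trajectory.append(w.copy())
trajectory = np.array(trajectory)

# Error surface on the plane through the start and end of the trajectory:
u = trajectory[-1] - trajectory[0]                # direction of net movement
v = rng.normal(size=9)
v -= (v @ u) / (u @ u) * u                        # orthogonal in-plane direction
grid = [[error(trajectory[0] + a * u + b * v)
         for a in np.linspace(-0.5, 1.5, 21)]
        for b in np.linspace(-1.0, 1.0, 21)]
print("final error: %.4f" % error(w))
print("error range on the slice: %.4f to %.4f" % (np.min(grid), np.max(grid)))
-----------------------------------------------------------------------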
From sasha at alla.kodak.com Fri Apr 20 10:16:31 1990
From: sasha at alla.kodak.com (alex shustorovich)
Date: Fri, 20 Apr 90 10:16:31 EDT
Subject: weight spaces
Message-ID: <9004201416.AA04517@alla.kodak.com>

The following technical report seems to be relevant to this discussion:

______________________________________________________________________________

Reducing the Weight Space of a Net With Hidden Units to a Minimum Cone.

Alexander Shustorovich
Image Electronics Center, Eastman Kodak Company
901 Elmgrove Road, Rochester NY 14653-5719

ABSTRACT

In his recent talk on the theory of Back-propagation at IJCNN-89, Dr. Hecht-Nielsen made an important observation that any single meaningful combination of weights can be represented in the net in a huge number of variants due to the permutations of hidden units. He remarked that if it were possible to find a cone in the weight space such that the whole space is produced from this cone by permutations of axes corresponding to the permutations of the hidden units, it would greatly reduce the volume of space in which we have to organize the search for the solutions. In this paper such a cone is built. Besides the obvious benefits mentioned above, the same procedure enables the direct comparison of different solutions and trajectories in the weight space, that is, the analysis and comparison of functions performed by individual hidden units.

______________________________________________________________________________

This paper was accepted for poster presentation at INNC-90-Paris in July and it will appear in the proceedings. If you would like to have this TR now, send your request to the author. Alexander Shustorovich, email: sasha at alla.kodak.com

From solla%nordita.dk at vma.CC.CMU.EDU Sat Apr 21 13:23:57 1990
From: solla%nordita.dk at vma.CC.CMU.EDU (solla%nordita.dk@vma.CC.CMU.EDU)
Date: Sat, 21 Apr 90 19:23:57 +0200
Subject: Weight spaces
Message-ID: <9004211723.AA01648@astro.dk>

The concept of `weight space' has been shown to be a useful tool to explore the ensemble of all possible network configurations, or wirings, compatible with a fixed, given architecture. [1] Such spaces are indeed complex, both because of their high dimensionality and the roughness of the surface defined by the error function. It has been shown that different choices of the distance between the targets and the actual outputs can lead to error surfaces that are both generally smoother and steeper in the vicinity of the minima, resulting in an accelerated form of the back-propagation algorithm. [2] Full explorations of such weight spaces, or configuration spaces, define a probability distribution over the space of functions. Such a distribution is a complete characterization of the functional capabilities of the chosen architecture. [3] The entropy of such a prior distribution is a useful tool to characterize the functional diversity of the chosen ensemble. Monitoring the evolution of the probability distribution over the space of functions and its associated entropy during learning provides a quantitative measure of the emergence of generalization ability. [3,4]

[1] J.S. Denker, D.B. Schwartz, B.S. Wittner, S.A. Solla, R.E. Howard, L.D. Jackel, and J.J. Hopfield, `Automatic learning, rule extraction, and generalization', Complex Systems, Vol. 1, p. 877-922 (1987).
[2] S.A. Solla, E. Levin, and M. Fleisher, `Accelerated learning in layered neural networks', Complex Systems, Vol. 2, p. 625-639 (1988).
[3] S.A. Solla, `Learning and generalization in layered neural networks: the contiguity problem', in `Neural Networks from Models to Applications', ed. by L. Personnaz and G. Dreyfus, IDSET, Paris, p. 168-177 (1989).
[4] D.B. Schwartz, V.K. Samalam, S.A. Solla, and J.S. Denker, `Exhaustive learning', Neural Computation, MIT, in press.

===================================================================
Sorry! A copy of this message just went out without proper author identification! Here it is.
===================================================================

Sara A. Solla
Current address (until August 31st): Nordita, Blegdamsvej 17, DK-2100 Copenhagen, Denmark. solla at nordita.dk
Permanent address: AT&T Bell Laboratories, Holmdel, NJ 07733, USA. solla at homxa.att.com
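The distribution over functions described above can be estimated directly for a toy ensemble. A hedged sketch (not from the cited papers; the architecture, the Gaussian weight prior, and all sizes are arbitrary): sample weights for a small threshold network, record which of the 16 Boolean functions of two inputs each draw implements, and compute the entropy of the induced distribution.

-----------------------------------------------------------------------
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)

def boolean_function(w):
    # Which of the 16 Boolean functions a 2-2-1 threshold net with weights w computes.
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    H = (X @ W1.T + b1 > 0).astype(float)
    Y = (H @ W2 + b2 > 0).astype(int)
    return int("".join(map(str, Y)), 2)        # 4 output bits -> index 0..15

counts = np.zeros(16)
for _ in range(100000):                        # Monte Carlo over the weight prior
    counts[boolean_function(rng.normal(size=9))] += 1
p = counts / counts.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print("prior P over the 16 functions:", np.round(p, 3))
print("entropy: %.2f bits (maximum 4 bits)" % entropy)
-----------------------------------------------------------------------

Tracking how such a distribution sharpens when the ensemble is restricted to low-error weights is, in miniature, the kind of measure of emerging generalization ability described in references [3,4].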
From victor%FRLRI61.BITNET at CUNYVM.CUNY.EDU Sun Apr 22 08:07:12 1990
From: victor%FRLRI61.BITNET at CUNYVM.CUNY.EDU (victor%FRLRI61.BITNET@CUNYVM.CUNY.EDU)
Date: Sun, 22 Apr 90 14:07:12 +0200
Subject: Please delete me of your email list
Message-ID: <9004221207.AA06928@sun3c.lri.fr>
Cc:

Please delete me. victor at lri.lri.fr

From harnad at clarity.Princeton.EDU Sun Apr 22 20:49:13 1990
From: harnad at clarity.Princeton.EDU (Stevan Harnad)
Date: Sun, 22 Apr 90 20:49:13 EDT
Subject: BBS Call for Commentators: Dynamic Programming/Optimization
Message-ID: <9004230049.AA04749@reason.Princeton.EDU>

Below is the abstract of a forthcoming target article to appear in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. To be considered as a commentator or to suggest other appropriate commentators, please send email to: harnad at clarity.princeton.edu or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]. Please specify the aspect of the article that you are qualified and interested to comment upon. If you are not a current BBS Associate, please send your CV and/or the name of a current Associate who would be prepared to nominate you.

____________________________________________________________________

Modeling Behavioral Adaptations

Colin W. Clark
Institute of Applied Mathematics
University of British Columbia
Vancouver BC V6T 1Y4 Canada

Keywords: Dynamic programming; optimization; control theory; game theory; behavioral ecology; evolution; adaptation; fitness.

ABSTRACT: The behavioral landscape for any individual organism is a complex dynamical system consisting of the individual's own physiological and mental states and the state of the physical and biological environment in which it lives. To understand the adaptive significance of behavioral traits one must formulate, analyse and test simplified models of this complex landscape. The target article describes a technique of dynamic behavioral modeling with many desirable characteristics. There is an explicit treatment of state variables and their dynamics. Darwinian fitness is represented directly in terms of survival and reproduction. Behavioral decisions are modeled simultaneously and sequentially with biologically meaningful parameters and variables, generating empirically testable predictions. The technique has been applied to field and laboratory data in a wide variety of species and behaviors. Some limitations result from the unwieldiness of large-scale dynamic models in parameter estimation and numerical computation. (This article is a follow-up to a previous BBS paper by Houston & Macnamara, but it can be read independently.)
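For readers unfamiliar with the technique, the kind of state-variable dynamic program the abstract describes can be miniaturized as follows. This is a generic forager model with made-up numbers, not Clark's actual formulation: fitness-to-go F(x, t) is computed by backward induction over an energy-reserve state x, and the optimal behavioral decision (which food patch to use) falls out at each state and time.

-----------------------------------------------------------------------
import numpy as np

X_MAX, T_HORIZON = 10, 20       # energy reserve levels; time steps
# Two illustrative patch choices: (food gain, probability of finding food,
# per-step mortality risk). All numbers are invented.
PATCHES = [(1, 0.4, 0.00),      # safe but poor patch
           (3, 0.6, 0.02)]      # rich but risky patch
COST = 1                        # metabolic cost per time step

F = np.zeros((X_MAX + 1, T_HORIZON + 1))
F[1:, T_HORIZON] = 1.0          # terminal fitness: survive iff reserves > 0
policy = np.zeros((X_MAX + 1, T_HORIZON), dtype=int)

for t in range(T_HORIZON - 1, -1, -1):          # backward induction
    for x in range(1, X_MAX + 1):               # x = 0 means dead, F = 0
        values = []
        for gain, p_food, risk in PATCHES:
            x_fed = min(x - COST + gain, X_MAX)
            x_unfed = max(x - COST, 0)
            v = (1 - risk) * (p_food * F[x_fed, t + 1]
                              + (1 - p_food) * F[x_unfed, t + 1])
            values.append(v)
        F[x, t] = max(values)
        policy[x, t] = int(np.argmax(values))

# Typical prediction: take risks when reserves are low, play safe when high.
print("optimal patch by reserve level at t=0:", policy[1:, 0])
-----------------------------------------------------------------------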
From X92%DHDURZ1.BITNET at vma.CC.CMU.EDU Mon Apr 23 10:25:41 1990
From: X92%DHDURZ1.BITNET at vma.CC.CMU.EDU (Joachim Lammarsch)
Date: Mon, 23 Apr 90 10:25:41 CET
Subject: Unsubscribe
Message-ID:

Please delete the account Q89 @ DHDURZ1 from the distribution list. Kind regards, Joachim Lammarsch (NAD DHDURZ1)

From nelsonde%avlab.dnet at wrdc.af.mil Mon Apr 23 11:15:13 1990
From: nelsonde%avlab.dnet at wrdc.af.mil (nelsonde%avlab.dnet@wrdc.af.mil)
Date: Mon, 23 Apr 90 11:15:13 EDT
Subject: Call for Papers
Message-ID: <9004231515.AA00527@wrdc.af.mil>

I N T E R O F F I C E   M E M O R A N D U M

Date: 23-Apr-1990 11:06am EST
From: Dale E. Nelson NELSONDE
Dept: AAAT-1
Tel No: 57646
TO: Remote Addressee ( _LABDDN::"CONNECTIONISTS%CS.CMU.EDU at Q.CS.CMU.EDU" )
TO: Remote Addressee ( _LABDDN::"NEURON-REQUEST at HPLPM.HPL.HP.COM" )
Subject: Call for Papers

Please post the following announcement and call for papers.

---------------------------------------------------------------------------

AGARD
ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT
7 RUE ANCELLE - 92200 NEUILLY-SUR-SEINE - FRANCE
TELEPHONE: (1)47 38 57 65  TELEX: 610176 AGARD  TELEFAX: (1)47 38 57 99

AVP/46, 2 APRIL 1990

CALL FOR PAPERS for the SPRING, 1991 AVIONICS PANEL SYMPOSIUM ON MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS, to be held in LISBON, Portugal, 13-16 May 1991. This meeting will be UNCLASSIFIED. Abstracts must be received not later than 31 August 1990. Note: US & UK Authors must comply with National Clearance Procedures requirements for Abstracts and Papers.

THEME: MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS

A large amount of research is being conducted to develop and apply Machine Intelligence (MI) technology to aerospace applications. Machine Intelligence research covers the technical areas under the headings of Artificial Intelligence, Expert Systems, Knowledge Representation, Neural Networks and Machine Learning. This list is not all-inclusive. It has been suggested that this research will dramatically alter the design of aerospace electronics systems because MI technology enables automatic or semi-automatic operation and control. Some of the application areas where MI is being considered include sensor cueing, data and information fusion, command/control/communications/intelligence, navigation and guidance, pilot aiding, spacecraft and launch operations, and logistics support for aerospace electronics. For many routine jobs, it appears that MI systems would provide screened and processed data as well as recommended courses of action to human operators. MI technology will enable electronics systems or subsystems which adapt or correct for errors, and many of the paradigms have parallel implementations or use intelligent algorithms to increase the speed of response to near real time.
With all of the interest in MI research and the desire to expedite transition of the technology, it is appropriate to organize a symposium to present the results of efforts applying MI technology to aerospace electronics applications. The symposium will focus on applications research and development to determine the types of MI paradigms which are best suited to the wide variety of aerospace electronics applications. The symposium will be organized into separate sessions for the various aerospace electronics application areas. It is tentatively proposed that the sessions be organized as follows:

SESSION 1 - Offensive System Electronics (fire control systems, sensor cueing and control, signal/data/information fusion, machine vision, etc.)
SESSION 2 - Defensive System Electronics (electronic countermeasures, radar warning receivers, countermeasure resource management, situation awareness, fusion, etc.)
SESSION 3 - Command/Control/Communications/Intelligence - C3I (sensor control, signal/data/information fusion, etc.)
SESSION 4 - Navigation System Electronics (data filtering, sensor cueing and control, etc.)
SESSION 5 - Space Operations (launch and orbital)
SESSION 6 - Logistic Systems to Support Aerospace Electronics (on- and off-board systems, embedded training, diagnostics and prognostics, etc.)

GENERAL INFORMATION

This Meeting, supported by the Avionics Panel, will be held in Lisbon, Portugal on 13-16 May 1991. It is expected that 30 to 40 papers will be presented. Each author will normally have 20 minutes for presentation and 10 minutes for questions and discussions. Equipment will be available for projection of viewgraph transparencies, 35 mm slides, and 16 mm films. The audience will include Members of the Avionics Panel and 150 to 200 invited experts from the NATO nations. Attendance at AGARD Meetings is by invitation only from an AGARD National Delegate or Panel Member. Final manuscripts should be limited to no more than 16 pages including figures. Presentations at the meeting should be an extract of the final manuscript and not a reading of it. Complete instructions will be sent to authors of papers selected by the Technical Programme Committee. Authors submitting abstracts should ensure that financial support for attendance at the meeting will be available.

CLASSIFICATION: This meeting will be UNCLASSIFIED.

LANGUAGES: Papers may be written and presented either in English or French. Simultaneous interpretation will be provided between these two languages at all sessions. A copy of your prepared remarks (Oral Presentation) and visual aids should be provided to the AGARD staff at least one month prior to the meeting date. This procedure will ensure correct interpretation of your spoken words.

ABSTRACTS: Abstracts of papers offered for this Symposium are now invited and should conform with the following instructions:

LENGTH: 200 to 500 words
CONTENT: Scope of the Contribution & Relevance to the Meeting - Your abstract should fully represent your contribution
SUBMITTAL: To the Technical Programme Committee by all authors (US authors must comply with Attachment 1)
IDENTIFICATION: Author Information Form (Attachment 2) must be provided with your abstract
CLASSIFICATION: Abstracts must be unclassified

Your abstracts and Attachment 2 should be mailed in time to reach all members of the Technical Programme Committee, and the Executive, not later than 31 AUGUST 1990 (note the exception for US Authors). This date is important and must be met to ensure that your paper is considered.
Abstracts should be submitted in the following format:

TITLE OF PAPER
Name of Author
Organization or Company Affiliation
Address
Name of Co-Author
Organization or Company Affiliation
Address

The text of your ABSTRACT should start on this line.

PUBLICATIONS: The proceedings of this meeting will be published in a single-volume Conference Proceedings. The Conference Proceedings will include the papers which are presented at the meeting, the questions/discussion following each presentation, and a Technical Evaluation Report of the meeting. It should be noted that AGARD reserves the right to print in the Conference Proceedings any paper or material presented at the Meeting. The Conference Proceedings will be sent to the printer on or about July 1991. NOTE: Authors that fail to provide the required camera-ready manuscript by this date may not be published.

QUESTIONS concerning the technical programme should be addressed to the Technical Programme Committee. Administrative questions should be sent directly to the Avionics Panel Executive.

GENERAL SCHEDULE (Note: Exception for US Authors)

SUBMIT AUTHOR INFORMATION FORM - 31 AUG 90
SUBMIT ABSTRACT - 31 AUG 90
PROGRAMME COMMITTEE SELECTION OF PAPERS - 1 OCT 90
NOTIFICATION OF AUTHORS - OCT 90
RETURN AUTHOR REPLY FORM TO AGARD - IMMEDIATELY
START PUBLICATION/PRESENTATION CLEARANCE PROCEDURE - UPON NOTIFICATION
AGARD INSTRUCTIONS WILL BE SENT TO CONTRIBUTORS - OCT 90
MEETING ANNOUNCEMENT WILL BE PUBLISHED - JAN 91
SUBMIT CAMERA-READY MANUSCRIPT AND PUBLICATION/PRESENTATION CLEARANCE CERTIFICATE - to arrive at AGARD by 15 MAR 91
SEND ORAL PRESENTATION AND COPIES OF VISUAL AIDS TO THE AVIONICS PANEL EXECUTIVE - to arrive at AGARD by 19 APR 91
ALL PAPERS TO BE PRESENTED - 13-16 MAY 91

TECHNICAL PROGRAMME COMMITTEE

CHAIRMAN: Dr Charles H. KRUEGER Jr, Director, Systems Avionics Division, Wright Research and Development Center (AFSC), ATTN: AAA, Wright Patterson Air Force Base, Dayton, OH 45433, USA. Telephone: (513) 255-5218; Telefax: (513) 476-4020

MEMBERS:
Mr John J. BART, Technical Director, Directorate of Reliability & Compatibility, Rome Air Development Center (AFSC), GRIFFISS AFB, NY 13441, USA
Prof Dr A. Nejat INCE, Burumcuk sokak 7/10, P.K. 8, 06752 MALTEPE, ANKARA, Turkey
Mr J.M. BRICE, Directeur Technique, THOMSON TMS, B.P. 123, 38521 SAINT EGREVE CEDEX, France
Mr Edward M. LASSITER, Vice President, Space Flight Ops Program Group, P.O. Box 92957, LOS ANGELES, CA 90009-2957, USA
Mr L.L. DOPPING-HEPENSTAL, Head of Systems Development, BRITISH AEROSPACE PLC, Military Aircraft Limited, WARTON AERODROME, PRESTON, LANCS PR4 1AX, United Kingdom
Eng. Jose M.B.G. MASCARENHAS, C-924, C/O CINCIBERLANT HQ, 2780 OEIRAS, Portugal
Mr J. DOREY, Directeur des Etudes & Syntheses, O.N.E.R.A., 29 Av. de la Division Leclerc, 92320 CHATILLON CEDEX, France
Mr Dale NELSON, Wright Research & Development Center, ATTN: AAAT, Wright Patterson AFB, Dayton, OH 45433, USA
Mr David V. GAGGIN, Director, U.S. Army Avionics R&D Activity, ATTN: SAVAA-D, FT MONMOUTH, NJ 07703-5401, USA
Ir. H.A.T. TIMMERS, Head, Electronics Department, National Aerospace Laboratory, P.O. Box 90502, 1006 BM Amsterdam, Netherlands

AVIONICS PANEL EXECUTIVE: LTC James E. CLAY, US Army. Telephone: (33) (1) 47-38-57-65; Telex: 610176; Telefax: (33) (1) 47-38-57-99

MAILING ADDRESSES:
From Europe and Canada: AGARD, ATTN: AVIONICS PANEL, 7, rue Ancelle, 92200 Neuilly-sur-Seine, France
From United States: AGARD, ATTN: AVIONICS PANEL, APO NY 09777

ATTACHMENT 1 - FOR US AUTHORS ONLY
1. Authors of US papers involving work performed or sponsored by a US Government Agency must receive clearance from their sponsoring agency. These authors should allow at least six weeks for clearance from their sponsoring agency. Abstracts, notices of clearance by sponsoring agencies, and Attachment 2 should be sent to Mr GAGGIN to arrive not later than 15 AUGUST 1990.

2. All other US authors should forward abstracts and Attachment 2 to Mr GAGGIN to arrive before 31 JULY 1990. These contributors should include the following statements in the cover letter:
A. The work described was not performed under sponsorship of a US Government Agency.
B. The abstract is technically correct.
C. The abstract is unclassified.
D. The abstract does not violate any proprietary rights.

3. US authors should send their abstracts to Mr GAGGIN and Dr KRUEGER only. Abstracts should NOT be sent to non-US members of the Technical Programme Committee or the Avionics Panel Executive. ABSTRACTS OF PAPERS FROM US AUTHORS CAN ONLY BE SENT TO:
Mr David V. GAGGIN, Director, Avionics Research & Dev Activity, ATTN: SAVAA-D, Ft Monmouth, NJ 07703-5401. Telephone: (201) 544-4851 or AUTOVON: 995-4851
Dr Charles H. KRUEGER Jr, Director, Avionics Systems Div, Wright Research & Dev Center, ATTN: WRDC/AAA, Wright Patterson AFB, Dayton, OH 45433. Telephone: (513) 255-5218

4. US authors should send the Author Information Form (Attachment 2) to the Avionics Panel Executive, Mr GAGGIN, Dr KRUEGER, and each Technical Programme Committee Member, to meet the above deadlines.

5. Authors selected from the United States are reminded that their full papers must be cleared by an authorized national clearance office before they can be forwarded to AGARD. Clearance procedures should be started at least 12 weeks before the paper is to be mailed to AGARD. Mr GAGGIN will provide additional information at the appropriate time.

AUTHOR INFORMATION FORM FOR AUTHORS SUBMITTING AN ABSTRACT FOR THE AVIONICS PANEL SYMPOSIUM on MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS

INSTRUCTIONS
1. Authors should complete this form and send a copy to the Avionics Panel Executive and all Technical Programme Committee members by 31 AUGUST 1990.
2. Attach a copy of your abstract to these forms before they are mailed. US Authors must comply with ATTACHMENT 1 requirements.

a. Probable Title of Paper: ____________________________________________
   _______________________________________________________________________
b. Paper most appropriate for Session # ______________________________
c. Full Name of Author to be listed first on Programme, including Courtesy Title, First Name and/or Initials, Last Name & Nationality.
d. Name of Organization or Activity: _________________________________
   _______________________________________________________________________
e. Address for Return Correspondence: __________________________________
   __________________________________
   __________________________________
   Telephone Number: ____________________
   Telefax Number: ____________________
   Telex Number: ____________________
f. Names of Co-Authors including Courtesy Titles, First Name and/or Initials, Last Name, their Organization, and their nationality.
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________

__________          ____________________
Date                Signature

DUE NOT LATER THAN 15 AUGUST 1990

From Connectionists-Request at CS.CMU.EDU Mon Apr 23 09:22:53 1990
From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU)
Date: Mon, 23 Apr 90 09:22:53 EDT
Subject: Fwd: Re: weight space
Message-ID: <9058.640876973@B.GP.CS.CMU.EDU>

------- Forwarded Message

From bill at wayback.unm.edu Fri Apr 20 19:10:57 1990
From: bill at wayback.unm.edu (william horne)
Date: Fri, 20 Apr 90 17:10:57 MDT
Subject: weight space
Message-ID: <9004202310.AA07162@wayback.unm.edu>

Nolfi & Cecconi write:
>A good way to understand neural network functioning is to see the learning
>process as a trajectory in the weight space....
>However, we know very little about weight spaces.
>... it would be useful to try to answer questions like: Are there
>regularities in the error surface? If so, are these regularities task
>dependent, or are there also regularities of a general type? How do
>learning algorithms differ from the point of view of the trajectory in
>the weight space?

We have submitted an article to IEEE Trans. on Neural Networks which addresses exactly this problem. The title is: "Error Surfaces for Multi-layer Perceptrons", Hush, D., Salas, J. and Horne, B. I will send a copy out to anyone who desires it....

-bill horne

------- End of Forwarded Message

From pollack at cis.ohio-state.edu Mon Apr 23 14:55:23 1990
From: pollack at cis.ohio-state.edu (pollack@cis.ohio-state.edu)
Date: Mon, 23 Apr 90 14:55:23 -0400
Subject: (Slices through) Weight Space
Message-ID: <9004231855.AA00348@dendrite.cis.ohio-state.edu>

****** Do not forward to other b-boards or mailing lists. thank you ****

This tech report, with plenty of pretty pictures, addresses the relationship between initial and final points in weight space... Jordan

---------------------------------------------------------------------------

Back Propagation is Sensitive to Initial Conditions

John F. Kolen, Jordan B. Pollack
Report 90-JK-BPSIC
Laboratory for Artificial Intelligence Research
Computer and Information Science Department
The Ohio State University, Columbus, Ohio 43210, USA
kolen-j at cis.ohio-state.edu, pollack at cis.ohio-state.edu

Abstract

This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additional deterministic experiments were performed. The results of these experiments demonstrate the extreme sensitivity of back propagation to initial weight configuration.
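The Monte Carlo part of such an experiment is simple to reproduce in miniature. A sketch with a toy XOR task and arbitrary hyperparameters (an illustration, not the authors' code): draw initial weight vectors at several magnitudes and record how many gradient-descent steps each run needs, exposing the convergence-time variability the abstract describes.

-----------------------------------------------------------------------
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([0., 1., 1., 0.])                 # XOR
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def error(w):
    # 9 weights -> a 2-2-1 sigmoid network; mean squared error on XOR.
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    return np.mean((sig(sig(X @ W1.T + b1) @ W2 + b2) - T) ** 2)

def grad(w, eps=1e-5):
    # Finite-difference gradient, for brevity.
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (error(w + d) - error(w - d)) / (2 * eps)
    return g

def steps_to_converge(w, lr=2.0, tol=0.02, max_steps=5000):
    for n in range(max_steps):
        if error(w) < tol:
            return n
        w = w - lr * grad(w)
    return max_steps                           # treat as "did not converge"

rng = np.random.default_rng(0)
for scale in (0.1, 1.0, 5.0):                  # magnitude of the initial weights
    runs = [steps_to_converge(rng.normal(scale=scale, size=9)) for _ in range(20)]
    print("|w0| scale %.1f: median %4d steps, range %d-%d"
          % (scale, int(np.median(runs)), min(runs), max(runs)))
-----------------------------------------------------------------------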
-----------------------------------------------------------------------

This tech report is available by the usual method of anonymous FTP from cheops.cis.ohio-state.edu in pub/neuroprose as kolen.bpsic.tr.ps.Z, kolen.bpsic.fig1.ps.Z, kolen.bpsic.fig2.ps.Z, kolen.bpsic.fig3.ps.Z, kolen.bpsic.fig4.ps.Z, kolen.bpsic.fig5.ps.Z. Or, write for Report 90-JK-BPSIC to: Technical Report Librarian, Laboratory for AI Research, Ohio State University, 2036 Neil Ave., Columbus, OH 43210

From hsf at magi.ncsl.nist.gov Tue Apr 24 10:44:17 1990
From: hsf at magi.ncsl.nist.gov (Handprint Sample Form Account)
Date: Tue, 24 Apr 90 10:44:17 EDT
Subject: character recognition testing
Message-ID: <9004241444.AA25367@magi.ncsl.nist.gov>

The National Institute of Standards and Technology (NIST), formerly the National Bureau of Standards (NBS), has developed a database for testing handprint character recognition. The database is on an ISO-9660 formatted CD and is described briefly below. Please forward this to interested parties.

__________________________________________________________________

NIST Handprint Database

The NIST handprinted character database consists of 2100 pages of bilevel, black-and-white image data of handprinted numerals and text, with a total character count of over 1,000,000 characters. Data is compressed using CCITT G4 compression, and decompression software is provided in C. The total image database, in uncompressed form, contains about 3 Gigabytes of image data, with 273,000 numerals and 707,700 alphabetic characters. The handprinting sample was obtained from a selection of field data collection staff of the Bureau of the Census, with a geographic sampling corresponding to the population density of the United States. The geographical sampling was done because previous national samples of handprinted material have suggested that there are significant regional differences in handprinting style. Most of the individuals who participated in the sampling are accustomed to filling out forms relatively neatly, and so this sample may represent a "best possible" sample of handprinting. Even so, the range of characters and spatial placement of those characters is broad enough to present very difficult challenges to the image recognition systems currently available or likely to be available in the near future.

Typical Use

This test data set was designed for multiple uses in the area of image (character) recognition. The problem of computer recognition of document content from images is usually broken down into three operations. First, the relevant areas containing text are located; this is usually referred to as field isolation. Next, the entire field image containing one or more characters is broken into the images of individual characters; this process is usually referred to as segmentation. Finally, these isolated characters must be correctly interpreted. The images in the database are designed to test all three of the processes. The test data can be used for any one of the three operations, although it is important to recognize that the success of all subsequent steps in this process is dependent on the success of the previous steps.
For further information contact: Joan Sauerwine, 301-975-2208, FAX 301-975-2183.

From andreas at psych.Stanford.EDU Tue Apr 24 20:11:08 1990
From: andreas at psych.Stanford.EDU (Andreas Weigend)
Date: Tue, 24 Apr 90 17:11:08 PDT
Subject: preprint: Predicting the Future (Weigend, Huberman, Rumelhart)
Message-ID:

______________________________

PREDICTING THE FUTURE - A CONNECTIONIST APPROACH

Andreas S. Weigend [1], Bernardo A. Huberman [2], David E. Rumelhart [3]

______________________________

We investigate the effectiveness of connectionist networks for predicting the behavior of non-linear dynamical systems. We use feed-forward networks of the type used by Lapedes and Farber to obtain forecasts in the context of noisy real-world data from sunspots and computational ecosystems. The networks generate accurate future predictions from knowledge of the past and consistently outperform traditional statistical non-linear approaches to these problems. The problem of having too many weights compared to the number of data points (overfitting) is addressed by adding a term to the cost function that penalizes large weights. We show that this weight-elimination procedure successfully shrinks the net down. We compare different classes of activation functions and explain why the convergence of sigmoids is significantly better than the convergence of radial basis functions for higher-dimensional input. We suggest the use of the concept of mutual information to interpret the weights. We introduce two measures of non-linearity and compare the sunspot and ecosystem data to a series generated by a linear autoregressive model. The solution for the sunspot data is found to be moderately non-linear, the solution for the ecosystem prediction highly non-linear.

Submitted to "International Journal of Neural Systems"

If you would really like a copy of the preprint, send your physical address to: hershey at psych.stanford.edu (preprint number: Stanford-PDP-90-01, PARC-SSL-90-20)

[1] Physics Department, Stanford University, Stanford, CA 94305
[2] Dynamics of Computation Group, Xerox PARC, Palo Alto, CA 94304
[3] Psychology Department, Stanford University, Stanford, CA 94305

______________________________
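The weight-elimination penalty the abstract refers to is usually written as lambda * sum_i (w_i^2/w0^2) / (1 + w_i^2/w0^2): for |w_i| much smaller than w0 each term behaves like ordinary weight decay, while for |w_i| much larger than w0 it saturates at 1, so the penalty approximates a count of the surviving weights and pushes superfluous ones toward zero. A sketch of the term and its gradient (w0 and lambda are free parameters; illustrative code, not the authors'):

-----------------------------------------------------------------------
import numpy as np

def weight_elimination(w, w0=1.0, lam=1e-3):
    # Penalty lam * sum (w/w0)^2 / (1 + (w/w0)^2) and its gradient.
    r = (w / w0) ** 2
    penalty = lam * np.sum(r / (1 + r))
    grad = lam * (2 * w / w0 ** 2) / (1 + r) ** 2   # d penalty / d w
    return penalty, grad

# Usage inside any gradient-based training loop (error_grad is hypothetical):
#   total_grad = error_grad(w) + weight_elimination(w)[1]
w = np.array([-3.0, -0.1, 0.0, 0.2, 4.0])
print(weight_elimination(w))
-----------------------------------------------------------------------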
From AMR at IBM.COM Wed Apr 25 00:01:18 1990
From: AMR at IBM.COM (AMR@IBM.COM)
Date: Wed, 25 Apr 90 00:01:18 EDT
Subject: Connectionism and Linguistic Regularities
Message-ID:

Some time ago I was involved in a debate here about natural language and connectionism. I now have a specific question about a kind of linguistic phenomenon that, as far as I can see, connectionist models would have difficulty handling. The phenomenon is that in many languages some class of items (morphemes, words, phrases) behaves in a certain completely regular way, yet this way of doing things becomes irregular in the sense that new items do not behave the same way. I am not sure I can come up with very good examples from English, but it is as though the domain of irregular past tenses or irregular plurals were predictable in English (e.g., hypothetically, all monosyllabic nouns ending in the phonetic sequence u:s pluralize in i:s, or the like), yet when new words of this form enter the language they do not behave this way, and speakers have trouble recognizing the regularity on tests involving nonsense items of the right shape. There is a growing body of such examples in the linguistic literature, and to the extent that an explanation is sought, it is assumed to lie in some highly specific (perhaps innate) properties of the human linguistic faculty. This is what makes me sceptical of the ability of connectionist architectures to handle this kind of phenomenon, while at the same time, if they can, that would be a striking piece of evidence in favor of the connectionist approach. Alexis Manaster-Ramer, amr at ibm.com or amr at yktvmh.bitnet

P.S. Can someone please remind me of the email address to use for things such as getting people added to the mailing list? Thanks.

From gasser at iuvax.cs.indiana.edu Wed Apr 25 00:51:52 1990
From: gasser at iuvax.cs.indiana.edu (Michael Gasser)
Date: Tue, 24 Apr 90 23:51:52 -0500
Subject: tech report
Message-ID:

****** Do not forward to other b-boards or mailing lists. thank you ****
****** Do not forward to other b-boards or mailing lists. thank you ****
****** Do not forward to other b-boards or mailing lists. thank you ****

The following tech report is available.

---------------------------------------------------------------------------

Networks and Morphophonemic Rules Revisited

Michael Gasser, Chan-Do Lee
Report 307
Computer Science Department
Indiana University, Bloomington, IN 47405 USA
gasser at cs.indiana.edu, cdlee at cs.indiana.edu

Abstract

In the debate over the power of connectionist models to handle linguistic phenomena, considerable attention has been focused on the learning of simple morphophonemic rules. Rumelhart and McClelland's celebrated model of the acquisition of the English past tense (1986), which used a simple pattern associator to learn mappings from stems to past tense forms, was advanced as evidence that networks could learn to emulate rule-like linguistic behavior. Pinker and Prince's equally celebrated critique of the past-tense model (1988) argued forcefully that the model was inadequate on several grounds. For our purposes, these are (1) the fact that the model is not constrained in ways that human language learners clearly are and (2) the fact that, since the model cannot represent the notion "word", it cannot distinguish homophonous verbs. A further deficiency of the model, one not brought out by Pinker and Prince, is that it is not a processing account: the task that the network learns is that of associating forms with forms rather than that of producing forms given meanings or meanings given forms. This paper describes a model making use of an adaptation of a simple recurrent network which addresses all three objections to the earlier work on morphophonemic rule acquisition. The model learns to generate forms in one or another "tense", given arbitrary patterns representing "meanings", and to output the appropriate tense given forms. The inclusion of meanings in the network means that homophonous forms are distinguished. In addition, this network experiences difficulty learning reversal processes which do not occur in human language.

-----------------------------------------------------------------------

This is available in compressed PostScript form from the OSU neuroprose database:

unix> ftp cheops.cis.ohio-state.edu (or, ftp 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose/Inbox
ftp> binary
ftp> get morphophonemics.ps.Z
ftp> quit
unix> uncompress morphophonemics.ps.Z
unix> lpr morphophonemics.ps (with whatever your printer needs for PostScript)

[The report should be moved to pub/neuroprose soon.]
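For readers who have not seen one, the simple recurrent network this report (and the Stolcke TR announced earlier) builds on is compact to write down. A sketch of the Elman-style forward pass only, with invented layer sizes (not the code behind either report): the hidden layer receives the current input together with its own previous state via a copied-back context layer.

-----------------------------------------------------------------------
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 5, 8, 4                       # illustrative sizes
W_xh = rng.normal(scale=0.1, size=(N_HID, N_IN))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(N_HID, N_HID))  # context -> hidden
W_hy = rng.normal(scale=0.1, size=(N_OUT, N_HID))  # hidden -> output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def srn_forward(sequence):
    # Run an Elman net over a sequence of input vectors.
    context = np.zeros(N_HID)                      # previous hidden state
    outputs = []
    for x in sequence:
        hidden = sig(W_xh @ x + W_hh @ context)
        outputs.append(sig(W_hy @ hidden))
        context = hidden                           # copy-back 'context' layer
    return outputs

seq = [rng.random(N_IN) for _ in range(6)]         # a toy 6-step input sequence
print(srn_forward(seq)[-1])
-----------------------------------------------------------------------

Because the context carries only what the hidden layer computed at the previous step, information not reflected in the training targets tends to fade over intermediate steps, which is one plausible reading of the limitations both abstracts mention in connection with embeddings and reversal processes.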
From B344DSL at UTARLG.ARL.UTEXAS.EDU Tue Apr 24 23:55:00 1990
From: B344DSL at UTARLG.ARL.UTEXAS.EDU (B344DSL@UTARLG.ARL.UTEXAS.EDU)
Date: Tue, 24 Apr 90 22:55 CDT
Subject: Conference Announcement
Message-ID:

CALL FOR ABSTRACTS

NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE
Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND)
October 4-6, 1990, IBM, Westlake, TX
Abstracts due June 15, 1990

The Metroplex Institute for Neural Dynamics is an independent organization of industrial and academic interests within the Dallas/Fort Worth Metroplex. This is our fourth annual workshop, each being dedicated to a specific problem area, and all of them characterized by a balance of theoretical, applied, and biological interests. Past topics have included Sensory-Motor Coordination and Motivation, Emotion, and Goal Direction. Past speakers have included Harry Klopf, Richard Sutton, Karl Pribram, Harold Szu, Michael Kuperstein, Daniel Bullock, and James Houk.

This year's topic of Knowledge Representation and Inference will be given focus by its attempt to apply neural architectures within the more traditional rubrics of artificial intelligence and general computer science. This is not merely the application of neural networks to the problem domains of other approaches; more fundamentally, this workshop will explore how the connectionist approach can implement other theoretical frameworks and translate to other technical vocabularies. Subtopics will include:

-- Connectionist approaches to semantic and symbolic problems from AI
-- Architectures for evidential and case-based reasoning
-- Cognitive maps and their control of sequence and planning
-- Representations of logical primitives and constitutive relations

The 1988 MIND workshop on Motivation, Emotion, and Goal Direction in Neural Networks has culminated in a book, now in press at Erlbaum. This book is characterized by extensive cross-referencing of papers, arising from the associations of the workshop. We plan to generate a similar book from this workshop on Knowledge Representation and Inference. Abstracts must be submitted for review and will be available to participants at the workshop. Some of the presentations will then be developed into book chapters. In addition to oral presentations, there will be some space for poster presentations at the workshop.

Abstracts (2 or 3 paragraphs) must be submitted by June 15 to either:
Daniel S. Levine, Dept. of Mathematics, Univ. of Texas at Arlington, Arlington, TX 76019-9408, (817)-273-3598, b344dsl at utarlg.bitnet or b344dsl at utarlg.arl.utexas.edu
or
Manuel Aparicio, Mail Stop 03-04-40, IBM, 5 West Kirkwood Blvd., Roanoke, TX 76299-0001, (817)-962-5944

From bates at amos.ucsd.edu Wed Apr 25 17:02:38 1990
From: bates at amos.ucsd.edu (Elizabeth Bates)
Date: Wed, 25 Apr 90 14:02:38 PDT
Subject: Connectionism and Linguistic Regularities
Message-ID: <9004252102.AA10762@amos.ucsd.edu>

There are some detailed responses to the regular/irregular morpheme issue (initially raised by Pinker & Prince) in papers by Plunkett and Marchman, Marchman and Plunkett, and in a more recent paper by Brian MacWhinney. In a nutshell: they manage to get a homogeneous architecture to behave "as though" it had two mechanisms, one for irregulars (encapsulated, undergeneralized) and one for regulars (permeable, prone to overgeneralization). The secret lies in the statistical differences between regulars and irregulars (i.e. type/token ratios).
Write to marchman at amos.ucsd.edu and to brian+ at andrew.cmu.edu for copies of the relevant papers. -liz bates From gasser at iuvax.cs.indiana.edu Wed Apr 25 16:13:24 1990 From: gasser at iuvax.cs.indiana.edu (Michael Gasser) Date: Wed, 25 Apr 90 15:13:24 -0500 Subject: TR Message-ID: The report I advertised here recently, _Networks and Morphophonemic Rules Revisited_, is now in pub/neuroprose/gasser.morpho.ps.Z (compressed PostScript) in the OSU database. Please try the ftp option before requesting the paper from me. Michael Gasser gasser at cs.indiana.edu Computer Science Department (812) 855-7078 Indiana University Bloomington, IN 47405 USA From stolcke%icsib12.Berkeley.EDU at jade.berkeley.edu Wed Apr 25 14:18:28 1990 From: stolcke%icsib12.Berkeley.EDU at jade.berkeley.edu (Andreas Stolcke) Date: Wed, 25 Apr 90 14:18:28 BST Subject: TR printing problems Message-ID: <9004252118.AA06811@icsib12.berkeley.edu.> Some people had problems printing the postscript versions of the two ICSI tech reports announced recently (files feldman.tr90-9.ps.Z and stolcke.tr90-15.ps.Z in /pub/neuroprose on cis.ohio-state.edu). These problems were caused by some fonts not available on some PostScript printers. The problem has been fixed and people who haven't yet requested a hardcopy by mail are encouraged to obtain the new version via ftp. Requests for mailed copies will of course still be honored. Moral: Don't use any exotic PostScript fonts in your submissions to neuroprose. Either stick to what TeX provides or use the standard Times, Helvetica and Courier fonts. Andreas From LAUTRUP%nbivax.nbi.dk at vma.CC.CMU.EDU Thu Apr 26 07:18:00 1990 From: LAUTRUP%nbivax.nbi.dk at vma.CC.CMU.EDU (Benny Lautrup) Date: Thu, 26 Apr 90 13:18 +0200 (NBI, Copenhagen) Subject: No subject Message-ID: Begin Message: ----------------------------------------------------------------------- INTERNATIONAL JOURNAL OF NEURAL SYSTEMS The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems. It publishes original contributions on all aspects of this broad subject which involves physics, biology, psychology, computer science and engineering. Contributions include research papers, reviews and short communications. The journal presents a fresh undogmatic attitude towards this multidisciplinary field with the aim to be a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities. ISSN: 0129-0657 (IJNS) ---------------------------------- Contents of issue 1: 1. C. Peterson and B. Soderberg: A new Method for mapping Optimization Problems onto Neural Networks. 2. M. G. Paulin, M. E. Nelson and J. M. Bower: Dynamics of Compensatory Eye Movement Control: An Optimal Estimation Analysis of the Vestibulo-Ocular Reflex. 3. P. Peretto: Learning Learning Sets in Neural Networks. 4. B. A. Huberman: The Collective Brain. 5. S. Patarnello and P. Carnevali: A Neural Network Model to Simulate a conditioning Experiment. 6. J.-P. Nadal: Study of a Growth Algorithm for a Feed-Forward Network. 7. E. Oja: Neural Networks, Principal Components and Subspaces. 8. S. Bacci, G. Mato, and N. Parga: The Organization of Metastable States in a Neural Network with Hierarchical Patterns. 9. A. Lansner and O. Ekeberg: A One-layer Feedback Artificial Neural Network with a Bayesian Learning Rule. 10. J. Midtgaard and J. Hounsgaard: Nerve Cells as Source of Time Scale and Processing Density in Brain Function. 11. S. 
---------------------------------- Contents of issue 2: 1. P. Baldi and A. Atiya: Oscillations and synchronizations in neural networks: An exploration of the labeling hypothesis. 2. A. W. Smith and D. Zipser: Learning sequential structure with the real-time recurrent learning algorithm. 3. M. R. Davenport and G. W. Hoffmann: A recurrent neural network using tri-state hidden neurons to orthogonalize the memory space. 4. H. K. M. Yusuf, S. Rahman and H. Akhtar: Rats kept in environmental isolation for twelve months from weaning: Performance in maze learning and visual discrimination tasks and brain composition. 5. H. C. Card and W. R. Moore: VLSI devices and circuits for learning in neural networks. 6. L. Gislen, C. Peterson and B. Soderberg: "Teachers and classes" with neural networks. 7. A. E. Gunhan, L. P. Csernai, and J. Randrup: Unsupervised competitive learning in Purkinje networks. 8. H.-U. Bauer and T. Geisel: Motion detection and direction detection in local neural nets. ---------------------------------- Editorial board: B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge) S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge) D. Stork (Stanford) (Book review editor) Associate editors: B. Baird (Berkeley) D. Ballard (University of Rochester) E. Baum (NEC Research Institute) S. Bjornsson (University of Iceland) J. M. Bower (CalTech) S. S. Chen (University of North Carolina) R. Eckmiller (University of Dusseldorf) J. L. Elman (University of California, San Diego) M. V. Feigelman (Landau Institute for Theoretical Physics) F. Fogelman-Soulie (Paris) K. Fukushima (Osaka University) A. Gjedde (Montreal Neurological Institute) S. Grillner (Nobel Institute for Neurophysiology, Stockholm) T. Gulliksen (University of Oslo) D. Hammerstrom (Oregon Graduate Institute) J. Hounsgaard (University of Copenhagen) B. A. Huberman (XEROX PARC) L. B. Ioffe (Landau Institute for Theoretical Physics) P. I. M. Johannesma (Katholieke Univ. Nijmegen) M. Jordan (MIT) G. Josin (Neural Systems Inc.) I. Kanter (Princeton University) J. H. Kaas (Vanderbilt University) A. Lansner (Royal Institute of Technology, Stockholm) A. Lapedes (Los Alamos) B. MacWhinney (Carnegie-Mellon University) M. Mezard (Ecole Normale Superieure, Paris) A. F. Murray (University of Edinburgh) J. P. Nadal (Ecole Normale Superieure, Paris) E. Oja (Lappeenranta University of Technology, Finland) N. Parga (Centro Atomico Bariloche, Argentina) S. Patarnello (IBM ECSEC, Italy) P. Peretto (Centre d'Etudes Nucleaires de Grenoble) C. Peterson (University of Lund) K. Plunkett (University of Aarhus) S. A. Solla (AT&T Bell Labs) M. A. Virasoro (University of Rome) D. J. Wallace (University of Edinburgh) D. Zipser (University of California, San Diego) ---------------------------------- CALL FOR PAPERS Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from The Editorial Secretariat, IJNS World Scientific Publishing Co. Pte. Ltd. 73, Lynton Mead, Totteridge London N20 8DH ENGLAND Telephone: (44)1-446-2461 or World Scientific Publishing Co. Inc. 687 Hardwell St. Teaneck New Jersey 07666 USA Telephone: (1)201-837-8858 or World Scientific Publishing Co. Pte. Ltd. Farrer Road, P. O.
Box 128 SINGAPORE 9128 Telephone (65)278-6188 ----------------------------------------------------------------------- End Message From gaudiano at bucasb.bu.edu Wed Apr 25 22:58:35 1990 From: gaudiano at bucasb.bu.edu (gaudiano@bucasb.bu.edu) Date: Wed, 25 Apr 90 22:58:35 EDT Subject: Student Society Update Message-ID: <9004260258.AA27445@retina.bu.edu> Hello everyone, we are very excited about the overwhelming response to our student society. Our new name (International Student Society for Neural Networks, or ISSNNet) reflects the large number of interested people from all over the world. Over 400 people requested a copy of our first newsletter, almost one half from outside the U.S. If you had sent a request before April 10 but still have not received the newsletter, or if you have any other questions, please send a message to: issnnet at bucasb.bu.edu We have begun receiving membership requests (only $5 for the year), and some official donations. Please remember that we will only continue to send out future issues of the newsletter and other information to official members, so send us your membership form as soon as you can! Also, although we realize it was not clearly stated in the newsletter, YOU DON'T HAVE TO BE A STUDENT TO JOIN! We have many activities and programs that will be useful to everyone, and your non-student memberships will show your support for students. If you are going to IJCNN in San Diego or to INNC in Paris, come visit our booth. We will have T-shirts, newsletters, and information about some of our other events. We will also have an official ISSNNet meeting/social event at IJCNN (more details later). If you want to make donations or sponsor students presenting papers at NN conferences, send e-mail to the address above. We are in the process of becoming incorporated, and we should have our non-profit status sometime this fall. We have provisions in our bylaws for a flexible governing board to accommodate the international and dynamic nature of our society. Get involved! From tp at irst.it Thu Apr 26 11:12:20 1990 From: tp at irst.it (Tomaso Poggio) Date: Thu, 26 Apr 90 17:12:20 +0200 Subject: preprint: Predicting the Future (Weigend, Huberman, Rumelhart) In-Reply-To: Andreas Weigend's message of Tue, 24 Apr 90 17:11:08 PDT <9004250034.AA12221@life.ai.mit.edu> Message-ID: <9004261512.AA10081@caneva.irst.it> From kamil at wdl1.fac.ford.com Thu Apr 26 15:47:24 1990 From: kamil at wdl1.fac.ford.com (Kamil A Grajski) Date: Thu, 26 Apr 90 12:47:24 -0700 Subject: Summer Hiring at Ford Aerospace Message-ID: <9004261947.AA12402@wdl1.fac.ford.com> Hi, In the spirit of public-service, here is an unofficial announcement, an announcelette, if you please, that some people at Ford Aerospace (San Jose) might be looking for summer hires. ================================================================= 4/25/90 The Advanced Development Department of Ford Aerospace's Western Development Laboratories in San Jose historically has summer (May, June-August) job positions open to promising junior & senior undergraduates and graduate students. Broadly speaking, on-going projects are aimed at algorithm design and development (software & hardware) for real-time systems combining digital signal processing and classification. The classification component includes, but is NOT limited to, neural network technology. There is a statistical component which is looking at classical as well as some new non-parametric methods. On-going projects (funded by Ford Motor Co. and/or IR&D) involve: a.)
design, development and implementation of an on-board engine knock detector and classifier for aiding engine performance optimization - joint project with Ford Motor Co. - real real-time data! (Free rides in a Taurus!) b.) several related projects in real-time speech processing, e.g., speaker change detection, word spotting in continuous speech - we have home-grown, TIMIT, and other databases on-line. We are currently interested in the performance and applicability of recurrent architectures to ASR, developing synergy with HMMs, and some recent nonparametric statistical discriminant methods. c.) parallel computation - we have a 2K processor element SIMD machine from MasPar (the beta version) with possible limited access to 8K and 16K versions onto which we are porting a variety of DSP, neural network and statistical methodologies for production and in-house research efforts. We are emphasizing some neat approaches to writing "virtualized" code for multi-processor systems. (Preliminary results to be reported at IJCNN and INNC.) The office environment is a typical Silicon Valley set-up. There are shower facilities for that afternoon jog or bike-ride; loads of places to eat, etc. Send resume or note to: kamil at wdl1.fac.ford.com (TCP/IP:128.5.32.1). Dr. Kamil A. Grajski Ford Aerospace Advanced Development Department Mail Stop X-22 220 Henry Ford II Dr. (dig that address!) San Jose, CA 95161-9041 ================================================================== Kamil From jm2z+ at ANDREW.CMU.EDU Thu Apr 26 19:05:26 1990 From: jm2z+ at ANDREW.CMU.EDU (Javier Movellan) Date: Thu, 26 Apr 90 19:05:26 -0400 (EDT) Subject: PREPRINT: Contrastive Hebbian Message-ID: <8aBruqa00WBMQ2T21l@andrew.cmu.edu> This preprint has been placed in the account kindly provided by Ohio State. CONTRASTIVE HEBBIAN LEARNING IN INTERACTIVE NETWORKS Javier R. Movellan Department of Psychology Carnegie Mellon University Pittsburgh, Pa 15213 email: jm2z+ at andrew.cmu.edu Submitted to Neural Computation Interactive networks, as defined by Hopfield (1984), Grossberg (1978), and McClelland & Rumelhart (1981), may have an advantage over feed-forward architectures because of their completion properties, and flexibility in the treatment of units as inputs or outputs. Ackley, Hinton and Sejnowski (1985) derived a learning rule to train Boltzmann machines, which are discrete, interactive networks. Unfortunately, because of the discrete stochasticity of its units, Boltzmann learning is intolerably slow. Peterson and Anderson (1987) showed that Boltzmann machines with a large number of units can be approximated with deterministic networks whose logistic activations represent the average activation of discrete Boltzmann units (Mean Field Approximation). Under these conditions, a learning rule that I call Contrastive Hebbian Learning (CHL) was shown to be a good approximation to the Boltzmann weight update rule and to achieve learning speeds comparable to backpropagation. Hinton (1989) showed that for Mean Field networks, CHL is at least a first-order approximation to gradient descent on an error function. The purpose of this paper is to show that CHL works with any interactive network with bounded, continuous activation functions and symmetric weights. The approach taken does not presume the existence of Boltzmann machines whose behavior is approximated with mean field networks.
It is also shown that CHL performs gradient descent on a contrastive function of the same form investigated by Hinton (1989). The paper is divided into two sections and one appendix. In Section 1 I study the dynamics of the activations in interactive networks. Section 2 shows how to modify the weights for the stable states of the network to reproduce desired patterns of activations. The appendix contains mathematical details, and some comments on how to implement Contrastive Hebbian Learning in Interactive Networks. The format is LaTeX. Here are the instructions to get the file: unix> ftp cheops.cis.ohio-state.edu Name:anonymous Password:neuron ftp> cd pub/neuroprose ftp> get Movellan.CHL.LateX From INS_ATGE%JHUVMS.BITNET at vma.CC.CMU.EDU Fri Apr 27 02:35:00 1990 From: INS_ATGE%JHUVMS.BITNET at vma.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@vma.CC.CMU.EDU) Date: Fri, 27 Apr 90 01:35 EST Subject: Recurrent Linguistic Domain Papers? Message-ID: I recently entered into a discussion with a professor of Cognitive Science, who was of the opinion that connectionist models are not reasonable ways of explaining linguistic processing since "there is no way for such systems to temporarily 'save their state', perform some other computation, and then restore the prior state. As a result, they seem to be limited to...'finite state automata'" Since he only knows about feedforward-style models, I can understand his feelings. I am curious if anyone knows of a reference to recurrent connectionist models which show non-FSA behavior (linear bounded automata would be fine), or a reference which discusses connectionist models used in linguistic domains involving recursive grammars utilizing recurrent nets. -Tom From pollack at CIS.OHIO-STATE.EDU Fri Apr 27 12:58:41 1990 From: pollack at CIS.OHIO-STATE.EDU (pollack@CIS.OHIO-STATE.EDU) Date: Fri, 27 Apr 90 12:58:41 -0400 Subject: Recurrent Linguistic Domain Papers? In-Reply-To: INS_ATGE%JHUVMS.BITNET@vma.CC.CMU.EDU's message of Fri, 27 Apr 90 01:35 EST <9004271448.AA17935@cheops.cis.ohio-state.edu> Message-ID: <9004271658.AA00656@dendrite.cis.ohio-state.edu> Tom, It is clear that naive applications of connectionism usually lead to finite state models with limited representational abilities. Having been one of the first to build such a limited model (Pollack & Waltz 1982, 85), I've been working on this question, and have a couple of answers for your professor: In my 1987 Ph.D. thesis from the University of Illinois, I show how to build a TM with units having rational outputs, linear combinations, thresholds and multiplicative gating. ($6 for mccs-87-100 from TR librarian, CRL, NMSU, Las Cruces NM 88003) The work on RAAM (88 Cogsci conf and AIJ in press) shows how to get stacks and trees into fixed-width distributed representations. (pollack.newraam.ps.Z in pub/neuroprose) My "strange automata" paper (submitted) shows that a high-order recurrent network can bifurcate to chaos, becoming an infinite state machine (in what Chomsky class?) whose transitions are not arbitrary but are controlled by an underlying strange attractor.
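(As a toy illustration of the flavor of such constructions -- this is a standard rational-number stack encoding, sketched here in Python for illustration, and not the construction from the thesis itself: a binary stack lives in a single rational-valued "activation", and push, top, and pop need only the linear combinations and thresholds mentioned above. Empty-stack detection is ignored in this sketch.)

from fractions import Fraction

def push(s, bit):            # bit is 0 or 1
    return (s + bit) / 2     # shift the stack down, put bit on top

def top(s):
    return 1 if s >= Fraction(1, 2) else 0   # threshold read-out

def pop(s):
    return 2 * s - top(s)    # shift the stack back up

s = Fraction(0)
for b in [1, 0, 1]:          # push 1, then 0, then 1
    s = push(s, b)
print(top(s))                # 1 (last bit pushed)
s = pop(s)
print(top(s))                # 0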
Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 From ersoy at ee.ecn.purdue.edu Fri Apr 27 10:02:06 1990 From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy) Date: Fri, 27 Apr 90 09:02:06 -0500 Subject: No subject Message-ID: <9004271402.AA19432@ee.ecn.purdue.edu> CALL FOR PAPERS AND REFEREES HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24 NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES KAILUA-KONA, HAWAII - JANUARY 9-11, 1991 The Neural Networks Track of HICSS-24 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications. Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings, which is published by the Computer Society of the IEEE. HICSS-24 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM). Submissions are solicited in: Supervised and Unsupervised Learning Issues of Complexity and Scaling Associative Memory Self-Organization Architectures Optical, Electronic and Other Novel Implementations Optimization Signal/Image Processing and Understanding Novel Applications INSTRUCTIONS FOR SUBMITTING PAPERS Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s) and a 300-word abstract of the paper. DEADLINES Six copies of the manuscript are due by June 25, 1990. Notification of accepted papers by September 1, 1990. Accepted manuscripts, camera-ready, are due by October 3, 1990. SEND SUBMISSIONS AND QUESTIONS TO O. K. Ersoy Purdue University School of Electrical Engineering W. Lafayette, IN 47907 (317) 494-6162 From pollack at cis.ohio-state.edu Fri Apr 27 12:17:15 1990 From: pollack at cis.ohio-state.edu (pollack@cis.ohio-state.edu) Date: Fri, 27 Apr 90 12:17:15 -0400 Subject: Neuroprose lead time Message-ID: <9004271617.AA00631@dendrite.cis.ohio-state.edu> **Do not forward to other lists** Recently, some connectionists have placed their reports in the Inbox, notified me and, a couple of hours later, posted their TR announcement to the mailing list. I'm a fairly busy guy, and it takes at least a day or two to process my email and do simple chores like moving and renaming files, and keeping the INDEX file. I realize that you want your papers to be read, but please be patient or the connectionist preprint system will break down due to misinformation and the email traffic to fix it.
Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 (kolen.bpsic.tr.ps.Z has been fixed and kolen.bpsic.fig0.ps.Z added) From elman at amos.ucsd.edu Fri Apr 27 15:26:46 1990 From: elman at amos.ucsd.edu (Jeff Elman) Date: Fri, 27 Apr 90 12:26:46 PDT Subject: Recurrent Linguistic Domain Papers? Message-ID: <9004271926.AA23119@amos.ucsd.edu> I have done work along these lines, using a simple recurrent network. Nets have been trained on a variety of stimuli. Probably the most interesting simulations (for your purposes) are those which involve discovering a way to represent recursive grammatical structures. The networks succeed to a limited extent by implementing what I call "leaky" or "context-sensitive recursion" in which a state vector does the job normally done by a stack. Since the entire state vector is visible to the part of the network which computes the output function, information from different levels leaks. I believe this kind of leakage is just what is needed to account for natural language processing. For a copy of TRs reporting this work, send a note to 'yvonne at amos.ucsd.edu' asking for 8801 and 8903. Jeff Elman Dept. of Cognitive Science UCSD From schraudo%cs at ucsd.edu Fri Apr 27 17:27:54 1990 From: schraudo%cs at ucsd.edu (Nici Schraudolph) Date: Fri, 27 Apr 90 14:27:54 PDT Subject: Recurrent Linguistic Domain Papers? Message-ID: <9004272127.AA15861@beowulf.ucsd.edu> Jeff Elman has trained a simple recurrent prediction network on a corpus of sentences with embedded clauses produced by a recursive grammar. The net was required to remember noun/verb agreement across the embedded clauses; its capacity to do so showed limits similar to those of human linguistic capability: namely, performance degraded after about three levels of embedding, with center embeddings more adversely affected than tail recursions. These findings are reported in his tech report "Representation and Structure in Connectionist Models" (CRL TR 8903), available from the Center for Research in Language, UCSD, La Jolla, CA 92093-0108 (e-mail: jan at amos.ucsd.edu). -- Nici Schraudolph, C-014 nschraudolph at ucsd.edu University of California, San Diego nschraudolph at ucsd.bitnet La Jolla, CA 92093 ...!ucsd!nschraudolph From cole at cse.ogi.edu Fri Apr 27 20:22:06 1990 From: cole at cse.ogi.edu (Ron Cole) Date: Fri, 27 Apr 90 17:22:06 -0700 Subject: SPEECH RECOGNITION Message-ID: <9004280022.AA24108@cse.ogi.edu> COMPUTER SPEECH RECOGNITION: THE STATE OF THE ART A Four-Day Workshop Covering Current and Emerging Technologies in Computer Speech Recognition July 16-19, 1990 Oregon Graduate Institute Portland, Oregon INSTRUCTORS Ron Cole, Organizer Oregon Graduate Institute Les Atlas University of Washington Mark Fanty Oregon Graduate Institute Dan Hammerstrom Oregon Graduate Institute Kai-Fu Lee Carnegie Mellon University Stephanie Seneff Massachusetts Institute of Technology Victor Zue Massachusetts Institute of Technology COURSE SUMMARY "Computer Speech Recognition: The State of the Art" will cover current and emerging technologies in computer speech recognition. Leading experts in the field will describe today's most successful speech recognition systems and discuss the strengths and limitations of the technology underlying each.
Speech recognition systems examined in detail include EAR, an English Alphabet Recognizer that uses neural networks to achieve high recognition accuracy for spoken letters; SPHINX, a system that uses hidden Markov models to recognize continuous speech in a 1000-word vocabulary; and VOYAGER, a system that combines speech recognition with natural language understanding to converse with the speaker. Workshop participants will gain an understanding of the problems involved in speaker-independent computer speech recognition and the various research approaches. Topics to be presented in detail include the phonetic structure and variability of speech; signal representations; hidden Markov models; neural network processing techniques; and natural language understanding. The instructors are distinguished researchers who have produced state-of-the-art systems representing the different approaches to the problem. Live demonstrations of speech recognition algorithms will be provided throughout the course. The workshop fee is $995 per person. To receive more information and a registration form, contact: Department of Academic Services Oregon Graduate Institute 19600 NW von Neumann Dr. Beaverton, OR 97006 (503) 690-1137 fischer at admin.ogi.edu From reynolds at bucasd.bu.edu Sat Apr 21 20:17:47 1990 From: reynolds at bucasd.bu.edu (John Huntington Reynolds) Date: 22 Apr 90 00:17:47 GMT Subject: Temporal Pulse Codes Message-ID: I'm very interested in "multiple meaning" theories (e.g. Raymond and Lettvin, and now Optican and Richmond), the informational role that conduction blocks in axon arbors might play, and the function of temporally modulated pulse codes in general. I'm writing in order to gather references to related work. I'm really just getting my feet wet at this point -- I joined Steve Grossberg's Cognitive and Neural Systems program as a PhD student in September, and with courses and my R.A. work I've been too snowed under to really pursue these interests very fully. Work in temporal pulse encoding I am aware of includes Chung, Raymond, and Lettvin (1970) Multiple meanings in single visual units. Brain Behavior and Evolution 3:72-101. Gray, Konig, Engel, and Singer (1989) Oscillatory Responses in Cat Visual Cortex Exhibit inter-Columnar Synchronization Which Reflects Global Stimulus Properties. Nature Vol. 338, March 1989. Optican, Podell, Richmond, and Spitzer (1987) Temporal Encoding of Two-Dimensional Patterns by Single Units in Primate Inferior Temporal Cortex. (three part series) Journal of Neurophysiology. Vol 57, No 1, January 1987. Pratt, Gill (1990) Pulse Computation. PhD Thesis. MIT, January, 1990. Steve Raymond and Jerry Lettvin (1978) Aftereffects of activity in peripheral axons as a clue to nervous coding. In: Physiology and Pathobiology of Axons. SG Waxman, ed. Raven Press, New York. Richmond, Optican, and Gawne (1990) Neurons Use Multiple Messages Encoded in Temporally Modulated Spike Trains to Represent Pictures. Preprint of a chapter in Seeing Contour and Color ed. J. Kulikowski, Pergamon Press. ... and a lot of work that has been done in the area of temporal coding in the auditory nerve and cochlear nucleus (average localized synchrony response (ALSR) coding). I've finally reached a (brief) lull in my activities here, and I'd appreciate any advice you'd care to offer.
--thanks in advance, John Reynolds From sayegh at ed.ecn.purdue.edu Tue Apr 3 18:00:25 1990 From: sayegh at ed.ecn.purdue.edu (Samir Sayegh) Date: Tue, 3 Apr 90 17:00:25 -0500 Subject: List of Speakers 3rd Conf.
NN & PDP Indiana-Purdue University Message-ID: <9004032200.AA08806@ed.ecn.purdue.edu> LIST OF SPEAKERS AND THEIR TOPICS THIRD CONFERENCE ON NEURAL NETWORKS AND PARALLEL DISTRIBUTED PROCESSING INDIANA-PURDUE UNIVERSITY Thursday, April 12, 6-9:00 p.m. INTEGRATED AUTONOMOUS NAVIGATION BY ADAPTIVE NEURAL NETWORKS D.A. Pomerleau Department of Computer Science Carnegie Mellon University APPLYING A HOPFIELD-STYLE NETWORK TO DEGRADED PRINTED TEXT RESTORATION A. Jagota Department of Computer Science State University of New York-Buffalo RECENT STUDIES WITH PARALLEL SELF-ORGANIZING HIERARCHICAL NEURAL NETWORKS O.K. Ersoy and D. Hong School of Electrical Engineering Purdue University INEQUALITIES, PERCEPTRONS AND ROBOTIC PATH PLANNING S.I. Sayegh Department of Physics Indiana-Purdue University GENETIC ALGORITHMS FOR FEATURE SELECTION FOR COUNTERPROPAGATION NETWORKS F.Z. Brill and W.N. Martin Department of Computer Science University of Virginia Friday, April 13, 6-9:00 p.m. MULTI-SCALE VISION-BASED NAVIGATION ON DISTRIBUTED-MEMORY MIMD COMPUTERS A.W. Ho and G.C. Fox Caltech Concurrent Computation Program California Institute of Technology A NEURAL NETWORK WHICH ENABLES SPECIFICATION OF PRODUCTION RULES N. Liu and K.J. Cios The University of Toledo Friday, April 13, continued PIECE-WISE LINEAR ESTIMATION OF MECHANICAL PROPERTIES OF MATERIALS WITH NEURAL NETWORKS I.H. Shin, K.J. Cios, A. Vary and H.E. Kautz The University of Toledo & NASA Lewis Research Center MULTIPLE SENSOR INTEGRATION VIA NEURAL NETWORKS FOR ESTIMATING SURFACE ROUGHNESS AND BORE TOLERANCE IN CIRCULAR END MILLING - TIME DOMAIN A.C. Okafor, M. Marcus and R. Tipirneni Department of Mechanical & Aerospace Engineering & Engineering Mechanics University of Missouri-Rolla MULTIPLE SENSOR INTEGRATION VIA NEURAL NETWORKS FOR ESTIMATING SURFACE ROUGHNESS AND BORE TOLERANCE IN CIRCULAR END MILLING - FREQUENCY DOMAIN A.C. Okafor, M. Marcus and R. Tipirneni Department of Mechanical and Aerospace Engineering and Engineering Mechanics University of Missouri-Rolla Saturday, April 14, 9:00 a.m.-1:00 p.m. SIMULATION OF A CORTICAL MODEL FOR THE ADULT CAT E. Niebur and F. Worgotter California Institute of Technology LEARNING BY GRADIENT DESCENT IN FUNCTION SPACE G. Mani Department of Computer Science University of Wisconsin-Madison REAL TIME DYNAMIC RECOGNITION OF SPATIAL TEMPORAL PATTERNS M.F. Tenorio School of Electrical Engineering Purdue University A NEURAL ARCHITECTURE FOR COGNITIVE MAPS M. Sonntag Cognitive Science & Machine Intelligence Lab University of Michigan SUCCESSIVE REFINEMENT OF THE INTERNAL REPRESENTATIONS OF THE ENVIRONMENT IN CONNECTIONIST NETWORKS Vasant Honavar and Leonard Uhr Department of Computer Sciences University of Wisconsin-Madison For more information: e-mail: sayegh at ed.ecn.purdue.edu sayegh at ipfwcvax.bitnet Voice: (219) 481-6157 FAX : (219) 481-6880 From marvit at hplpm.hpl.hp.com Wed Apr 4 14:52:41 1990 From: marvit at hplpm.hpl.hp.com (Peter Marvit) Date: Wed, 04 Apr 90 11:52:41 PDT Subject: literature In-Reply-To: Your message of "Mon, 02 Apr 90 15:17:25 PDT." <9004021917.AA04100@bunny.gte.com> Message-ID: <5121.639255161@hplpm.hpl.hp.com> Although I do not "have a good handle" on connectionist models of invertebrate neural circuits, I assume there is a significant enough field to warrant the entire paper session on that very subject at IJCNN this summer. Anyone on this list intending to present relevant material?
-Peter "Spineless" Marvit From BOVET%FRMOP11.BITNET at VMA.CC.CMU.EDU Thu Apr 5 05:14:32 1990 From: BOVET%FRMOP11.BITNET at VMA.CC.CMU.EDU (BOVET JAMON BENHAMOU OTTOMANI) Date: Thu, 05 Apr 90 09:14:32 GMT Subject: INVERTEBRATE SENSORI-MOTOR CIRCUITS Message-ID: Like Fritz_dg, I am very interested in exchanging literature references on connectionist models for invertebrate sensory-motor circuits. But the first reference that immediately comes to mind is a preprint on the lamprey (unfortunately a vertebrate): S.Grillner, A.Lansner, P.Wallen, O.Ekeberg, L.Brodin, H.Traven, & M.Stensmo, The neural network underlying locomotion. Initiation, segmental burst generation and sensory entrainment, analyzed by simulation. P.BOVET,LABO.NEUROSCIENCES,CNRS,MARSEILLE,FRANCE. From mcgrawg at iuvax.cs.indiana.edu Thu Apr 5 15:50:17 1990 From: mcgrawg at iuvax.cs.indiana.edu (Gary McGraw) Date: Thu, 5 Apr 90 14:50:17 -0500 Subject: Request for references. Message-ID: My current research involves training recurrent networks of various architectures to do a temporal pattern recognition task. I have attempted to train both fully and partially recurrent networks (using different learning rules) and am now interested in analyzing their behavior, trainability, etc. Does anyone know of any papers that compare two or more recurrent architectures' behavior given some common task? I am looking for papers similar to Cottrell and Tsung's "Learning Simple Arithmetic Procedures" from the proceedings of the 11th annual cogsci conference. Thanks for your help. Gary McGraw Center for Research on Concepts and Cognition Department of Computer Science Indiana University From ersoy at ee.ecn.purdue.edu Fri Apr 6 10:59:51 1990 From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy) Date: Fri, 6 Apr 90 09:59:51 -0500 Subject: No subject Message-ID: <9004061459.AA25694@ee.ecn.purdue.edu> CALL FOR PAPERS AND REFEREES HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24 NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES KAILUA-KONA, HAWAII - JANUARY 9-11, 1991 The Neural Networks Track of HICSS-24 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications. Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings, which is published by the Computer Society of the IEEE. HICSS-24 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM). Submissions are solicited in: Supervised and Unsupervised Learning Issues of Complexity and Scaling Associative Memory Self-Organization Architectures Optical, Electronic and Other Novel Implementations Optimization Signal/Image Processing and Understanding Novel Applications INSTRUCTIONS FOR SUBMITTING PAPERS Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process.
Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s) and a 300-word abstract of the paper. DEADLINES Six copies of the manuscript are due by June 25, 1990. Notification of accepted papers by September 1, 1990. Accepted manuscripts, camera-ready, are due by October 3, 1990. SEND SUBMISSIONS AND QUESTIONS TO O. K. Ersoy Purdue University School of Electrical Engineering W. Lafayette, IN 47907 (317) 494-6162 From russ at dash.mitre.org Fri Apr 6 09:44:36 1990 From: russ at dash.mitre.org (Russell Leighton) Date: Fri, 6 Apr 90 09:44:36 EDT Subject: Nettalk phonemes => WaveForms Message-ID: <9004061344.AA02562@dash.mitre.org> I have recently replicated the Nettalk experiment (using data from nnbench). Now I would like to play out the phonemes. Does anyone have any publicly available software to map phonemes to waveforms? Although the particular format of the waveforms is not that important, the ideal software would translate the phoneme tokens used in the Nettalk paper to waveforms in the format used in the sound files on a Sun SparcStation1. If no one has such software now, I think it might be useful to develop it for the community at large, since it allows playback at your workstation. Russ NSFNET: russ at dash.mitre.org Russell Leighton MITRE Signal Processing Lab 7525 Colshire Dr. McLean, Va. 22102 USA From Connectionists-Request at CS.CMU.EDU Fri Apr 6 12:47:15 1990 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Fri, 06 Apr 90 12:47:15 EDT Subject: Fwd: Neural networks and transputers mail list Message-ID: <20531.639420435@B.GP.CS.CMU.EDU> I don't remember seeing this on the main list. I apologize if this is a duplicate post. Contact neurtran at isnet.inmos.com if you have any questions. Scott Crowder Connectionists-Request at cs.cmu.edu (ARPAnet) ------- Forwarded Message From DUDZIAKM at isnet.inmos.COM Wed Apr 4 16:28:36 1990 From: DUDZIAKM at isnet.inmos.COM (Neurotechnology Center - Martin Dudziak) Date: Wed, 4 Apr 90 14:28:36 MDT Subject: Neural networks and transputers mail list Message-ID: <178.9004042028@inmos-c.inmos.com> Excuse me - I don't remember if I sent any information to you earlier, but in any case: There is a new mail list dedicated to issues of implementing neural networks using transputers and transputer-based hardware environments (i.e., specialized neural processors that act as co-processors with transputers, transputers and DSP chips like the A110, A121, etc.). This mail list is accessible as NEURTRAN at ISNET.INMOS.COM. Some earlier announcement(s) may have listed it as neurtran-request, but due to some site problems, that long of a name won't work, so anyone who is interested in subscribing, getting info, making contributions, etc. should just communicate to neurtran at isnet.inmos.com. Martin Dudziak, Moderator ------- End of Forwarded Message From mm at cogsci.indiana.edu Fri Apr 6 17:26:02 1990 From: mm at cogsci.indiana.edu (Melanie Mitchell) Date: Fri, 6 Apr 90 16:26:02 EST Subject: Technical Report Available Message-ID: The following technical report is available from the Center for Research on Concepts and Cognition at Indiana University: The Right Concept at the Right Time: How Concepts Emerge as Relevant in Response to Context-Dependent Pressures (CRCC Report 42) Melanie Mitchell and Douglas R. Hofstadter Center for Research on Concepts and Cognition Indiana University Abstract A central question about cognition is how, when faced with a situation, one explores possible ways of understanding and responding to it. In particular, how do concepts initially considered to be irrelevant, or not even considered at all, become relevant in response to pressures evoked by the understanding process itself? We describe a model of concepts and high-level perception in which concepts consist of a central region surrounded by a dynamic nondeterministic "halo" of potential associations, in which relevance and degree of association change as processing proceeds. As the representation of a situation is constructed, associations arise and are considered in a probabilistic fashion according to a "parallel terraced scan", in which many routes toward understanding the situation are tested in parallel, each at a rate and to a depth reflecting ongoing evaluations of its promise. We describe Copycat, a computer program that implements this model in the context of analogy-making, and illustrate how the program's ability to flexibly bring in appropriate concepts for a given situation emerges from the mechanisms that we are proposing. (This paper has been submitted to the 1990 Cognitive Science Society conference.)
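(A loose toy rendering of the "parallel terraced scan" idea, invented here for illustration and in no way Copycat's actual code: candidate routes are deepened stochastically, each chosen with probability proportional to its continually re-evaluated promise, so the most promising route ends up explored to the greatest depth.)

import random

random.seed(1)

routes = {"a": 0.2, "b": 0.5, "c": 0.9}   # hidden true quality of each route
estimate = {r: 0.5 for r in routes}        # initial promise estimates
depth = {r: 0 for r in routes}             # how deeply each has been explored

for step in range(300):
    # pick a route with probability proportional to its estimated promise
    r = random.choices(list(routes), [estimate[x] for x in routes])[0]
    depth[r] += 1                          # deepen that route a little
    noisy = routes[r] + random.gauss(0, 0.1)
    # nudge the promise estimate toward the noisy evaluation
    estimate[r] = max(0.01, estimate[r] + 0.1 * (noisy - estimate[r]))

print(depth)   # the most promising route ends up explored most deeply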
To request copies of this report, send mail to mm at cogsci.indiana.edu or mm at iuvax.cs.indiana.edu or Melanie Mitchell Center For Research on Concepts and Cognition Indiana University 510 N. Fess Street Bloomington, Indiana 47408 From Connectionists-Request at CS.CMU.EDU Sat Apr 7 10:15:39 1990 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Sat, 07 Apr 90 10:15:39 EDT Subject: Fwd: please post Message-ID: <2982.639497739@B.GP.CS.CMU.EDU> ------- Forwarded Message From shriver at usl.edu Sat Apr 7 08:50:16 1990 From: shriver at usl.edu (Shriver Bruce D) Date: Sat, 7 Apr 90 07:50:16 CDT Subject: please post Message-ID: <9004071250.AA26933@rouge.usl.edu> Could you please post the following? Thank you, Bruce Shriver =============================================================== This note is being separately posted on the following bulletin boards: connectionists neuron-digest neurtran Please recommend other bulletin boards that you think are also appropriate. =============================================================== I am interested in learning what experiences people have had using neural network chips. In an article that Colin Johnson did for PC AI's January/February 1990 issue, he listed the information given below about a number of NN chips (I've rearranged it in alphabetical order by company name). This list is undoubtedly incomplete (no efforts at universities and industrial research laboratories are listed, for example) and may have inaccuracies in it. Such a list would be more useful if it identified the name, address, phone number, FAX number, and electronic mail address of a contact person at each company. Information about the hardware and software support (interface and coprocessor boards, prototype development kits, simulators, development software, etc.) is missing. Additionally, pointers to researchers who are planning to or have actually been using these or similar chips would be extremely useful. I am interested in finding out the range of intended applications.
Could you please send me: a) updates and corrections to the list b) company contact information c) hardware and software support information d) information about plans to use or experiences with having used any of these chips (or chips that are not listed) In a few weeks, if I get a sufficient response, I will resubmit an enhanced listing of this information to the bulletin boards to which I originally sent this note. Thanks, Bruce Shriver (shriver at usl.edu) ================================================================= Company: Accotech Chip Name: AK107 Description: an Intel 8051 digital microprocessor with its on-chip ROM coded for neural networks Availability: available now Company: Fujitsu Ltd. Chip Name: MB4442 Description: one neuron chip capable of 70,000 connections per second Availability: available in Japan now Company: Hitachi Ltd. Chip Name: none yet Description: information encoded in pulse trains Availability: experimental Company: HNC Inc. Chip Name: HNC-100X Description: 100 million connections per second Availability: Army battlefield computer Company: HNC Chip Name: HNC-200X Description: 2.5 billion connections per second Availability: Defense Advanced Research Projects Agency (DARPA) contract Company: Intel Corp Chip Name: N64 Description: 2.5 connections per second 64-by-64-by-64 with 10,000 synapses Availability: available now Company: Micro Devices Chip Name: MD1210 Description: fuzzy logic combined with neural networks in its fuzzy comparator chip Availability: available now Company: Motorola Inc. Chip Name: none yet Description: "whole brain" chip models senses, reflex, instinct - the "old brain" Availability: late in 1990 Company: NASA, Jet Propulsion Laboratory (JPL) Chip Name: none yet Description: synapse is charge on capacitors that are refreshed from RAM Availability: experimental Company: NEC Corp. Chip Name: uPD7281 Description: a data-flow chip set that NEC sells on PC board with neural software Availability: available in Japan Company: Nestor Inc. Chip Name: NNC Description: 150 million connections per second, 150,000 connections Availability: Defense Dept. contract due in 1991 Company: Nippon Telephone and Telegraph (NTT) Chip Name: none yet Description: massive array of 65,536 one-bit processors on 1024 chips Availability: experimental Company: Science Applications International Corp. Chip Name: none yet Description: information encoded in pulse trains Availability: Defense Advanced Research Projects Agency (DARPA) contract Company: Syntonic Systems Inc. Chip Name: Dendros-1 Dendros-2 Description: each has 22 synapses; two are required, but any number can be used Availability: available now ------- End of Forwarded Message From sankar at caip.rutgers.edu Mon Apr 9 14:20:43 1990 From: sankar at caip.rutgers.edu (ananth sankar) Date: Mon, 9 Apr 90 14:20:43 EDT Subject: No subject Message-ID: <9004091820.AA17500@caip.rutgers.edu> From fineberg at enterprise.rutgers.edu Mon Apr 9 14:16:07 1990 From: fineberg at enterprise.rutgers.edu (Fineberg) Date: Mon, 9 Apr 90 14:16:07 EDT Subject: No subject Message-ID: <9004091816.AA03931@enterprise.rutgers.edu> Rutgers University CAIP Center CAIP Neural Network Workshop 15-17 October 1990 A neural network workshop will be held during 15-17 October 1990 in East Brunswick, New Jersey under the sponsorship of the CAIP Center of Rutgers University. The theme of the workshop will be "Theory and Applications of Neural Networks" with particular emphasis on industrial applications.
Leaders in the field from both industrial organizations and universities will present the state-of-the-art in neural networks. Attendance will be limited to about 90 persons. Partial List of Speakers and Panel Chairmen J. Alspector, Bellcore A. Barto, University of Massachusetts R. Brockett, Harvard University K. Fukushima, Osaka University S. Grossberg, Boston University R. Hecht-Nielsen, HNC, San Diego J. Hopfield, California Institute of Technology S. Kung, Princeton University F. Pineda, JPL, California Institute of Technology R. Linsker, IBM, T. J. Watson Research Center E. Sontag, Rutgers University H. Stark, Illinois Institute of Technology B. Widrow, Stanford University Y. Zeevi, CAIP Center, Rutgers University and The Technion, Israel The workshop will begin with registration at 8:30 AM on Monday, 15 October and end at 5:00 PM on Wednesday. There will be a dinner on Tuesday evening followed by special-topic discussion sessions. The $395 registration fee ($295 for participants from CAIP member organizations) includes the cost of the dinner. Participants are urged to remain in attendance throughout the entire period of the workshop. Proceedings of the workshop will subsequently be published in book form. Individuals wishing to participate in the workshop should fill out the attached form and mail it to the address below. In addition to the formal presentations, there will be a limited number of poster papers. Interested parties should send a title and abstract to be considered for poster presentation. The papers should be submitted by July 31, 1990. For further information, contact Dr. Richard Mammone Telephone: (201)932-5554 Electronic Mail: mammone at caip.rutgers.edu FAX: (201)932-4775 Telex: 6502497820 mci Rutgers University CAIP Center CAIP Neural Network Workshop 15-17 October 1990 I would like to participate in the Neural Network Workshop. Please send registration details. Title:________ Last:__________________________ First:____________________ Middle:______________________ Affiliation _________________________________________________________ Address _________________________________________________________ _________________________________________________________ Business Telephone: (___)________________________ FAX: (___)_________________________________________ Electronic Mail:_______________________ Home Telephone:(___)______________________________ I am particularly interested in the following aspects of neural networks: ____________________________________________________________________________________________________ ____________________________________________________________________________________________________ I would be interested in participating in a panel___, round-table discussion and/or in___presenting a paper on the subject of__________________________________________. (Please attach a one-page title and abstract). Please complete the above and mail this form to: Neural Network Workshop CAIP Center, Rutgers University P.O. Box 1390 Piscataway, NJ 08855-1390 (USA) From miyata at dendrite.Colorado.EDU Mon Apr 9 18:31:27 1990 From: miyata at dendrite.Colorado.EDU (Yoshiro Miyata) Date: Mon, 9 Apr 90 16:31:27 MDT Subject: Harmonic Grammar Part 1 & 2 - Technical Reports Available Message-ID: <9004092231.AA08134@dendrite.colorado.edu> ------------------- PLEASE DO NOT FORWARD TO OTHER BBOARDS -------------------- The following 2 technical reports are available. Please mail requests for copies to: conn_tech_report at boulder.colorado.edu with only your name and physical address in the content of the mail. On the subject line, please indicate which report(s) you are requesting.
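(For orientation: the "harmony" both reports below build on is, in its usual connectionist form, a quadratic function of the activation vector, H(a) = 1/2 a^T W a for symmetric weights W, with higher harmony meaning a more well-formed state. A minimal sketch in Python -- the three-unit network here is invented purely for illustration:)

import numpy as np

# Harmony of an activation vector a under symmetric weights W:
# H(a) = 0.5 * a^T W a. The tiny network below is invented.
W = np.array([[0.0,  1.0, -1.0],
              [1.0,  0.0,  0.5],
              [-1.0, 0.5,  0.0]])   # symmetric, zero diagonal

def harmony(a, W):
    return 0.5 * a @ W @ a

print(harmony(np.array([1.0, 1.0, 0.0]), W))   # 1.0  -> relatively well-formed
print(harmony(np.array([1.0, 0.0, 1.0]), W))   # -1.0 -> ill-formed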
=============================================================================== Technical Report CU-CS-464-90 Harmonic Grammar - A formal multi-level connectionist theory of linguistic well-formedness: An application Geraldine Legendre Yoshiro Miyata Paul Smolensky University of Colorado at Boulder We describe "harmonic grammar", a connectionist-based approach to formal theories of linguistic well-formedness. The general approach can be applied to various kinds of linguistic well-formedness, e.g., phonological and syntactic. Here, we address a syntactic problem: unaccusativity. Harmonic grammar is a two-level theory, involving a distributed, lower level connectionist network whose relevant aggregate computational behavior is described by a local, higher level network. The central hypothesis is that the connectionist well-formedness measure called "harmony" can be used to model linguistic well-formedness; what is crucial about the relation between the lower and higher level networks is that there is a harmony-preserving mapping between them: they are "isoharmonic" (at least approximately). A companion paper (Legendre, Miyata, & Smolensky, 1990) describes the theoretical basis for the two-level approach, starting from general connectionist principles. In this paper, we discuss the problem of unaccusativity, give a high level characterization of harmonic syntax, and present a higher level network to account for unaccusativity data in French. We interpret this network as a fragment of the grammar and lexicon of French expressed in "soft rules." Of the 760 sentence types represented in our data, the network correctly predicts the acceptability in all but two cases. This coverage of real, problematic syntactic data greatly exceeds that of any other formal account of unaccusativity of which we are aware. =============================================================================== Technical Report CU-CS-465-90 Harmonic Grammar - A formal multi-level connectionist theory of linguistic well-formedness: Theoretical foundations Geraldine Legendre Yoshiro Miyata Paul Smolensky University of Colorado at Boulder In this paper, we derive the formalism of "harmonic grammar", a connectionist-based theory of linguistic well-formedness. Harmonic grammar is a two-level theory, involving a low level connectionist network using a particular kind of distributed representation, and a second, higher level network that uses local representations and which approximately and incompletely describes the aggregate computational behavior of the lower level network. The central hypothesis is that the connectionist well-formedness measure "harmony" can be used to model linguistic well-formedness; what is crucial about the relation between the lower and higher level networks is that there is a harmony-preserving mapping between them: they are "isoharmonic" (at least approximately). In a companion paper (Legendre, Miyata, & Smolensky, 1990), we apply harmonic grammar to a syntactic problem, unaccusativity, and show that the resulting network is capable of a degree of coverage of difficult data that is unparalleled by symbolic approaches of which we are aware: of the 760 sentence types represented in our data, the network correctly predicts the acceptability in all but two cases.
In the present paper, we describe the theoretical basis for the two-level approach, illustrating the general theory through the derivation from first principles of the unaccusativity network of Legendre, Miyata, & Smolensky (1990). From yu at cs.utexas.edu Tue Apr 10 06:38:22 1990 From: yu at cs.utexas.edu (Yeong-Ho Yu) Date: Tue, 10 Apr 90 05:38:22 CDT Subject: Tech Reports Available Message-ID: <9004101038.AA15616@ai.cs.utexas.edu> The following two technical reports are available. They will appear in the Proceedings of IJCNN90. ---------------------------------------------------------------------- EXTRA OUTPUT BIASED LEARNING Yeong-Ho Yu and Robert F. Simmons AI Lab, The University of Texas at Austin March 1990 AI90-128 ABSTRACT One way to view feed-forward neural networks is to regard them as mapping functions from the input space to the output space. In this view, the immediate goal of back-propagation in training such a network is to find a correct mapping function among the set of all possible mapping functions of the given topology. However, finding a correct one is sometimes not an easy task, especially when there are local minima. Moreover, it is harder to train a network so that it can produce correct output not only for training patterns but for novel patterns which the network has never seen before. This so-called generalization capability has been poorly understood, and there is little guidance for achieving better generalization. This paper presents a unified viewpoint for the training and generalization of a feed-forward network, and a technique for improved training and generalization based on this viewpoint. ------------------------------------------------------------------------ DESCENDING EPSILON IN BACK-PROPAGATION: A TECHNIQUE FOR BETTER GENERALIZATION Yeong-Ho Yu and Robert F. Simmons AI Lab, The University of Texas at Austin March 1990 AI90-130 ABSTRACT There are two measures for the optimality of a trained feed-forward network for the given training patterns. One is the global error function, which is the sum of squared differences between target outputs and actual outputs over all output units of all training patterns. The most popular training method, back-propagation based on the Generalized Delta Rule, minimizes the value of this function. In this method, the smaller the global error is, the better the network is supposed to be. The other measure is the correctness ratio, which shows, when the network's outputs are converted into binary outputs, for what percentage of training patterns the network generates the correct binary outputs. Actually, this is the measure that often really matters. This paper argues that those two measures are not parallel and presents a technique with which the back-propagation method results in a high correctness ratio. The results show that networks trained with this technique often exhibit high correctness ratios not only for the training patterns but also for novel patterns.
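(One plausible reading of the descending-epsilon idea, sketched in Python on a toy problem -- the details below are guesses for illustration, not the procedure from the report: output units already within epsilon of their targets contribute no error signal, and epsilon shrinks during training so the network concentrates on patterns that are still wrong in the binary sense.)

import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 1.0])    # toy task: 2-bit OR, one logistic unit

W = rng.normal(0, 0.1, 2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eps, lr = 0.4, 0.5                     # start with a permissive epsilon
for epoch in range(400):
    y = sigmoid(X @ W + b)
    err = T - y
    err[np.abs(err) < eps] = 0.0       # within epsilon of target: no error
    grad = err * y * (1.0 - y)         # delta rule for a logistic output
    W += lr * (X.T @ grad)
    b += lr * grad.sum()
    if epoch % 100 == 99:
        eps = max(eps - 0.1, 0.05)     # let epsilon descend over training
print(np.round(sigmoid(X @ W + b)))    # should print [0. 1. 1. 1.]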
-----------------------------------------------------------------------

To obtain copies, either:

a) use the getps script (by Tony Plate and Jordan Pollack, posted on connectionists a few weeks ago), or

b) retrieve them by anonymous ftp:

unix> ftp cheops.cis.ohio-state.edu   # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous):
ftp> cd pub/neuroprose
ftp> binary
ftp> get
(remote-file) yu.output-biased.ps.Z
(local-file) foo.ps.Z
ftp> get
(remote-file) yu.epsilon.ps.Z
(local-file) bar.ps.Z
ftp> quit
unix> uncompress foo.ps bar.ps
unix> lpr -P(your_local_postscript_printer) foo.ps bar.ps

c) If you have any problems accessing the directory above, send a request to yu at cs.utexas.edu or

Yeong-Ho Yu
AI Lab
The University of Texas at Austin
Austin, TX 78712.

------------------------------------------------------------------------

From ai-vie!georg at relay.EU.net Tue Apr 10 09:02:17 1990
From: ai-vie!georg at relay.EU.net (Georg Dorffner)
Date: Tue, 10 Apr 90 12:02:17 -0100
Subject: EMCSR 1990
Message-ID: <9004101002.AA01681@ai-vie.uucp>

Announcement

Tenth European Meeting on Cybernetics and Systems Research
April 17-20, 1990
University of Vienna, Austria

Symposium L: Parallel Distributed Processing in Humans and Machines
Chairs: David Touretzky (Carnegie Mellon), Georg Dorffner (Vienna)

The following papers will be presented:

Tuesday afternoon (Apr. 17)

INVITED LECTURE: A Computational Basis for Phonology
  D. Touretzky, USA
On the Neural Connectance-Performance Relationship
  G. Barna, P. Erdi, Hungary
Quasi-Optimized Learning Dynamics in Sparsely Connected Neural Network Models
  K.E. Kuerten, Germany
Memorization and Deleting in Linear Neural Networks
  A. Petrosino, F. Savastano, R. Tagliaferri, Italy
A Memory-Based Connectionist Network for Speech Recognition
  C.-C. Chen, Belgium
Meta-Parsing in Neural Networks
  A. Nijholt, The Netherlands
Parallel Data Assimilation in Knowledge Networks
  A. Parodi, S. Khouas, France

Wednesday morning (Apr. 18):

Preprocessing of Musical Information and Examples of Applications for Neural Networks
  G. Hipfinger, C. Linster, Austria
Symbolic Behavior and Code Generation: The Emergence of "Equivalence Relations" in Neural Networks
  G.D.A. Brown, M. Oaksford, United Kingdom
Connectionism and Unsupervised Knowledge Representation
  I.M. Havel, Czechoslovakia
On Learning Content-Blind Rules
  C. Mannes, G. Dorffner, Austria

- * -

The conference will include other symposia on the following topics:

- General Systems Methodology
- Fuzzy Sets, Approximate Reasoning and Knowledge-Based Systems
- Designing and Systems
- Humanity, Architecture and Conceptualization
- Cybernetics in Biology and Medicine
- Cybernetics of Socio-Economic Systems
- Managing Change and Innovation
- Systems Engineering and Artificial Intelligence for Peace Research
- Communication and Computers
- Software Development for Systems Theory
- Artificial Intelligence
- Impacts of Artificial Intelligence
- Panel on Organizational Cybernetics, National Development Planning, and Large-Scale Social Experiments

- * -

Conference Fee: AS 2,900 (ca. $240, incl. proceedings); NO FEE for students with valid id!

The proceedings will also be available from World Scientific Publishing Co., entitled "Cybernetics and Systems '90", R. Trappl (ed.)

Registration will be possible at the conference site (main building of the University of Vienna).
You can also contact:

EMCSR Conference Secretariat
Austrian Society for Cybernetic Studies
Schottengasse 3
A-1010 Vienna, Austria
Tel: +43 1 535 32 810
Fax: +43 1 63 06 52
Email: sek at ai-vie.uucp

From ftsung at UCSD.EDU Tue Apr 10 13:49:19 1990
From: ftsung at UCSD.EDU (Fu-Sheng Tsung)
Date: Tue, 10 Apr 90 10:49:19 PDT
Subject: invertebrate sensori-motor circuits
Message-ID: <9004101749.AA07631@kenallen.ucsd.edu>

I will be presenting our work on modeling the lobster's gastric circuit, which is a central pattern generator consisting of 11 neurons. We use Williams and Zipser's recurrent learning algorithm; the model network has one unit for each neuron, and the connectivity is constrained to be the same as that of the gastric circuit. The main result is that such a simple network of sigmoidal units can reproduce a good approximation of the oscillation generated by the in-vitro gastric circuit (w.r.t. phase and amplitude). Note that none of the units/neurons is oscillatory by itself. The learned oscillation is very stable, as is the real circuit. Experimentation with the model suggests that the network topology is intimately related to the phase relationships of the oscillations a network can (stably) generate. This is NOT a detailed model of the gastric neurons, as it models only the input/output function and the connectivity of the circuit.

Reference:

Fu-Sheng Tsung, Gary Cottrell, Allen Selverston, "Some Experiments On Learning Stable Network Oscillations." (to appear in IJCNN90, June, San Diego).

R. Williams & D. Zipser, "A learning algorithm for continually running, fully recurrent neural networks." Neural Computation, 1, 270-280 (1989).

Fu-Sheng Tsung
UCSD, tsung at cs.ucsd.edu

From kawahara at av-convex.ntt.jp Wed Apr 11 08:34:31 1990
From: kawahara at av-convex.ntt.jp (Hideki KAWAHARA)
Date: Wed, 11 Apr 90 21:34:31+0900
Subject: Japan Neural Network Society meeting (list of titles)
Message-ID: <9004111234.AA19548@av-convex.ntt.jp>

The Japan Neural Network Society had its first joint technical meeting with the IEICE and the SICE Japan on 16-17 March 1990. The following is the list of titles presented. I hope this will give some understanding of neural network research activities in Japan and provide a pointer. The JNNS will also have its first annual conference on 10-12 September 1990.

Hideki Kawahara
NTT Basic Research Laboratories
3-9-11 Midori-cho, Musashino, Tokyo 180, JAPAN
Tel: +81 422 59 2276
Fax: +81 422 3393
kawahara at nttlab.ntt.jp (CSNET)

-----------------------------------------------------------
IEICE Technical Report (NC: Neurocomputing) contents
(IEICE: The Institute of Electronics, Information and Communication Engineers)
(SICE: The Society of Instrument and Control Engineers)
* The affiliation of the first author of each report is attached.
------------------------------------------------------------

Makoto KANO, KAWATO, UNO, SUZUKI: "Learning Trajectory Control of A Redundant Arm by Feedback-Error-Learning", IEICE Technical Report, NC89-61, Vol.89, No.463, pp.1-6, (1990-03). *Faculty of Engineering Science, Osaka University

Hiroaki GOMI, KAWATO: "Learning Control of an Unstable System with Feedback Error Learning", IEICE Technical Report, NC89-62, Vol.89, No.463, pp.7-12, (1990-03). *ATR Auditory and Visual Perception Research Laboratories

Masayuki NAKAMURA, UNO, SUZUKI, KAWATO: "Formation of Optimal Trajectory in Arm Movement Using Inverse Dynamics Model", IEICE Technical Report, NC89-63, Vol.89, No.463, pp.13-18, (1990-03).
*Faculty of Engineering, University of Tokyo

Motohiro KITANO, KAWATO, UNO, SUZUKI: "Optimal Trajectory Control by the Cascade Neural Network Model for Industrial Manipulator", IEICE Technical Report, NC89-64, Vol.89, No.463, pp.19-24, (1990-03). *Faculty of Engineering Science, Osaka University

Makoto HIRAYAMA, KAWATO, JORDAN: "Speed-Accuracy Trade-off of Arm Movement Predicted by the Cascade Neural Network Model", IEICE Technical Report, NC89-65, Vol.89, No.463, pp.25-30, (1990-03). (In English) *ATR Auditory Visual Perception Research Laboratories

Yoshinori UESAKA, TSUKADA: "On a Family of Acceptance Functions for Simulated Annealing", IEICE Technical Report, NC89-66, Vol.89, No.463, pp.31-36, (1990-03). *Faculty of Science and Technology, Science University of Tokyo

Akira YAMASHITA, AKIYAMA, ANZAI: "Proposal of Novel Simulated Annealing Method based on the Entropy of the Neural Network", IEICE Technical Report, NC89-67, Vol.89, No.463, pp.37-42, (1990-03). *Faculty of Science and Technology, Keio University

Haruhisa TAKAHASHI, TOMITA, KAWABATA: "Acquirement of Internal Representations and The Backpropagation Convergence Theorem", IEICE Technical Report, NC89-68, Vol.89, No.463, pp.43-48, (1990-03). (In English) *The University of Electro-Communications

Haruhisa TAKAHASHI, ARAI, TOMITA: "Some Results in Stationary Recurrent Neural Networks", IEICE Technical Report, NC89-69, Vol.89, No.463, pp.49-54, (1990-03). *The University of Electro-Communications

Masanobu MIYASHITA, TANAKA: "Application of thermodynamics in the potts spin system to the combinatorial optimization problems", IEICE Technical Report, NC89-70, Vol.89, No.463, pp.55-60, (1990-03). *Fundamental Research Laboratories, NEC Corporation

Yuuichi SAKURABA, NAKAMOTO, MORIIZUMI: "Proposal of Learning Vector Quantization Method Using Fuzzy Theory", IEICE Technical Report, NC89-71, Vol.89, No.463, pp.61-66, (1990-03). *Faculty of Engineering, Tokyo Institute of Technology

Koji KURATA: "On the Formation of Columnar and Hyper-Columnar Structures in Self-Organizing Models of Topographic Mappings", IEICE Technical Report, NC89-72, Vol.89, No.463, pp.67-72, (1990-03). *Faculty of Engineering, University of Tokyo (to March/1990), Osaka University (from April/1990)

Shotaro AKAHO, AMARI: "On the Lower Bound of the Capacity of Three-Layer Networks Using the Sparse Encoding Method", IEICE Technical Report, NC89-73, Vol.89, No.463, pp.73-78, (1990-03). *Faculty of Engineering, University of Tokyo

Hirofumi YANAI, SAWADA: "On associative recall by a randomly sparse model neural network", IEICE Technical Report, NC89-74, Vol.89, No.463, pp.79-84, (1990-03). *Research Institute of Electrical Communication, Tohoku University

Tadashi KURATA, SAITOH: "Design of Parallel Distributed Processor for Neuralnet Simulator", IEICE Technical Report, NC89-75, Vol.89, No.463, pp.85-90, (1990-03). *Faculty of Engineering, Chiba University

Hideki KATO, YOSHIZAWA, ICIKI: "A Parallel Neurocomputer Architecture with Ring Registers", IEICE Technical Report, NC89-76, Vol.89, No.463, pp.91-96, (1990-03). *Fujitsu Laboratories Ltd.

Takafumi KAJIWARA, KITAYAMA: "Spread Spectrum Decoding Method Utilizing Neural Network", IEICE Technical Report, NC89-77, Vol.89, No.463, pp.97-100, (1990-03). *NTT Transmission Systems Laboratories

Shigeo SATO, NISHIMURA, HAYAKAWA, IWASAKI, NAKAJIMA, MUROTA, MIKOSHIBA, SAWADA: "Implementation of Integrated Neural Elements and Their Application to An A/D Converter", IEICE Technical Report, NC89-78, Vol.89, No.463, pp.101-106, (1990-03).
*Research Institute of Electrical Communication, Tohoku University

Masahiro OKAMOTO: "Development of Biochemical Threshold-Logic Device Capable of Storing Short-Memory", IEICE Technical Report, NC89-79, Vol.89, No.463, pp.107-112, (1990-03). (In English) *Kyushu Institute of Technology

Shuji AKIYAMA, SHIGEMATSU, IIJIMA, MATSUMOTO: "An Analysis and Modeling System Based on The Concurrent Observation of Neuron Network Activity of Hippocampus Slice", IEICE Technical Report, NC89-80, Vol.89, No.463, pp.113-118, (1990-03). *Electrotechnical Laboratory

Y. SHIGEMATSU, AKIYAMA, MATSUMOTO: "Suppression, a necessary function for the synaptic plasticity of hippocampus", IEICE Technical Report, NC89-81, Vol.89, No.463, pp.119-122, (1990-03). *Electrotechnical Laboratory

T. AIHARA, SUZUKI, TUKADA, KATO: "The Mechanism and a Model for LTP Induction in the Hippocampus", IEICE Technical Report, NC89-82, Vol.89, No.463, pp.123-128, (1990-03). *Faculty of Engineering, Tamagawa University

Hiroyuki MIYAMOTO, FUKUSHIMA: "Recognition of Temporal Patterns by a Multi-layered Neural Network Model", IEICE Technical Report, NC89-83, Vol.89, No.463, pp.129-134, (1990-03). *Faculty of Engineering Science, Osaka University

Tatsuo KITAJIMA, HARA: "Associative Memory and Learning in Nerve Cell", IEICE Technical Report, NC89-84, Vol.89, No.463, pp.135-140, (1990-03). *Faculty of Engineering, Yamagata University

Ichiro SHIMADA, HARA: "Fractal Properties of Animal Behavior", IEICE Technical Report, NC89-85, Vol.89, No.463, pp.141-146, (1990-03). *Tohoku University

Tetsuo FURUKAWA, YASUI: "Formation of center-surround opponent receptive fields through edge detection by backpropagation learning", IEICE Technical Report, NC89-86, Vol.89, No.463, pp.147-152, (1990-03). *Faculty of Computer Science and Engineering, Kyusyu Institute of Technology

Yukihiro YOSHIDA, HIRAI: "A Model of Color Processing", IEICE Technical Report, NC89-87, Vol.89, No.463, pp.153-158, (1990-03). *University of Tsukuba

Hiroshi NAKAJIMA, MIZUNO, HIDA, SAITO, TSUKADA: "Effect of the Percentage of the Coherent Movement of Visual Texture Components on the Recognition of Direction of Wide-Field Movement", IEICE Technical Report, NC89-88, Vol.89, No.463, pp.159-164, (1990-03). *Faculty of Engineering, Tamagawa University

Teruhiko OHTOMO, T.HARA, OHUCHI, K.HARA: "Recognition of Hand-Written Chinese Characters Constructed by Radical and Non-radical Using Neural Network Models", IEICE Technical Report, NC89-90, Vol.89, No.464, pp.1-6, (1990-03). *Faculty of Engineering, Yamagata University

Yoshihiro ARIYAMA, ITO, TSUKADA: "Alphanumeric Character Recognition by Neocognitron with Error Correct Training", IEICE Technical Report, NC89-91, Vol.89, No.464, pp.7-12, (1990-03). *Faculty of Engineering, Tamagawa University

Hiroshi ISHIJIMA, NAGANO: "A Neural Network Model with Function to Grasp its Situation", IEICE Technical Report, NC89-92, Vol.89, No.464, pp.13-18, (1990-03). *College of Engineering, Hosei University

Makoto HIRAHARA, NAGANO: "A neural network for fixation point selection", IEICE Technical Report, NC89-93, Vol.89, No.464, pp.19-24, (1990-03). *College of Engineering, Hosei University

Masahiko HASEBE, OHNISHI, SUGIE: "Automatic Generation of a World Map for Autonomous Mobile Robot", IEICE Technical Report, NC89-94, Vol.89, No.464, pp.25-30, (1990-03). *Faculty of Engineering, Nagoya University

Hidetatsu KAKENO, SUGIE: "FOCUSSED REGION SEGMENTATION FROM BLURRED BACKGROUND USING D2G FILTERS", IEICE Technical Report, NC89-95, Vol.89, No.464, pp.31-36, (1990-03).
*Toyota College of Technology

Takashi FURUKAWA, ARITA, SUGIHARA, HIRAI: "Computer Map-reading using Neural Networks", IEICE Technical Report, NC89-96, Vol.89, No.464, pp.37-42, (1990-03). *Electronics R & D Lab., Nippon Steel Co.

Takao MATSUMOTO, KOGA: "Study on a High-Speed Learning Method for Analog Neural Networks", IEICE Technical Report, NC89-97, Vol.89, No.464, pp.43-48, (1990-03). *NTT Transmission Systems Laboratories

Kazuhisa NIKI, YAMADA: "Can backpropagation learning rule co-exist with Hebbian learning rule?", IEICE Technical Report, NC89-98, Vol.89, No.464, pp.49-54, (1990-03). *Electrotechnical Laboratory

Masumi ISHIKAWA: "A General Structure Learning of Connectionist Models Using Forgetting", IEICE Technical Report, NC89-99, Vol.89, No.464, pp.55-60, (1990-03). *Electrotechnical Laboratory

Naohiro TODA, HAGIWARA, USUI: "Data Fitting by Multilayered Neural Network -- Decision of Network Structure via Information Criterion --", IEICE Technical Report, NC89-100, Vol.89, No.464, pp.61-66, (1990-03). *Toyohashi University of Technology

Akio TANAKA, YOSHIMURA: "Theoretical Analysis of a Three-Layer Neural Network with Spread Pattern Information Learning method", IEICE Technical Report, NC89-101, Vol.89, No.464, pp.67-72, (1990-03). *International Institute for Advanced Study of Social Information Science (IIAS-SIS), Fujitsu Ltd.

Shin-ya MIYAZAKI, YONEKURA, TORIWAKI: "On the Capability for Geometrical Structure Analysis of Sample Distribution -- Relationship between Auto Associative Networks and PPN", IEICE Technical Report, NC89-102, Vol.89, No.464, pp.73-78, (1990-03). *School of Engineering, Nagoya University

Shin SUZUKI, KAWAHARA: "Evaluating Neural Networks using Mean Curvature", IEICE Technical Report, NC89-103, Vol.89, No.464, pp.79-84, (1990-03). *NTT Basic Research Laboratories

Masafumi HAGIWARA: "Back-propagation with Artificial Selection -- Reduction of the number of hidden units --", IEICE Technical Report, NC89-104, Vol.89, No.464, pp.85-90, (1990-03). *Faculty of Science and Technology, Keio University

Hiroyuki ENDOH, IDE: "The influence of the number of units on the ability of learning", IEICE Technical Report, NC89-105, Vol.89, No.464, pp.91-96, (1990-03). *Aoyama Gakuin University

Special lecture: Eiichi Iwai, IEICE Technical Report, NC89-89, Vol.89, No.463, pp.165-176, (1990-03).

From LUBTODI%YALEVM.BITNET at vma.CC.CMU.EDU Wed Apr 11 15:08:00 1990
From: LUBTODI%YALEVM.BITNET at vma.CC.CMU.EDU (LUBTODI%YALEVM.BITNET@vma.CC.CMU.EDU)
Date: Wed, 11 Apr 90 14:08 EST
Subject: emergent properties
Message-ID:

Emergent properties are one of the potential benefits of using a connectionist model to perform a task. For example, if the task is to classify objects, a connectionist algorithm can give gracefully degraded performance when the input is poor or the network is damaged. Graceful degradation and content addressability are two properties that often seem to be called emergent properties -- they come for free with a connectionist model. My questions are:

1. What other beneficial properties emerge from connectionist models?

2. Do different network architectures lead to different emergent properties?

3. There may also be emergent constraints on the task to be performed. Using the categorization example, perhaps only a certain number of categories can be formed given a certain number of hidden units. What constraints do emerge for the task-level description?
From my point of view, the relevance of connectionist models is increased when the emergent benefits of a model are those that humans have, and the emergent constraints on task performance are also seen in humans. I am interested in hearing about (1) examples of emergent benefits and constraints in both low level and high level tasks, and (2) whether these emergent network properties are seen in people (or whatever species is being modelled).

Todd Lubart
LUBTODI at YALEVM

From turing%ctcs.leeds.ac.uk at NSFnet-Relay.AC.UK Thu Apr 12 23:38:09 1990
From: turing%ctcs.leeds.ac.uk at NSFnet-Relay.AC.UK (Turing Conference)
Date: Thu, 12 Apr 90 23:38:09 BST
Subject: Easter request
Message-ID: <11865.9004122238@ctcs.leeds.ac.uk>

I have been asked by a colleague at Leeds to send on this message:

I would be grateful if, like myself, you were able to respond to a letter received recently, in order to help Craig. He is a 7 year old boy who is in the Royal Marsden Hospital in London. Craig Shergold has a tumour on the brain and one on the spine and has very little time to live. It is his ambition to have an entry in the Guinness Book of Records for the largest number of "Get Well" cards ever received by an individual. Please send a card to:

Craig Shergold
56 Selby Road
CARSHALTON
Surrey SN6 1LD
United Kingdom

I would be grateful if you could send a copy of this letter to at least another 10 companies or individuals.

Yours sincerely,
Ian Mitchell Lambert
Department of Theology
University of Kent at Canterbury

From chrisley at parc.xerox.com Thu Apr 12 22:15:09 1990
From: chrisley at parc.xerox.com (Ron Chrisley)
Date: Thu, 12 Apr 90 19:15:09 PDT
Subject: Easter request
In-Reply-To: Turing Conference's message of Thu, 12 Apr 90 23:38:09 BST <11865.9004122238@ctcs.leeds.ac.uk>
Message-ID: <9004130215.AA21816@roo.parc.xerox.com>

Connectionists:

As nice a gesture as it is, this "Easter request" should be ignored. I'm posting some news articles relevant to the message which indicate that NO MORE CARDS SHOULD BE SENT. Please do not waste any more bandwidth by forwarding the message to other mailing lists, bboards, or newsgroups.

Ask for Cards, and Ye Shall Receive and Receive and Receive
by Douglas Burns

WEST PALM BEACH, Fla. -- A 7-year-old English boy with cancer is finding that once a story hits the modern-day grapevine of fax machines and computer bulletin boards, it is impossible to stop.

Critically ill with a rare brain tumor, Craig Shergold told his parents and nurses at a British hospital in September of his wish to be in the Guinness Book of World Records for owning the world's largest collection of post cards. The same wish was fulfilled only a year earlier for another English boy with cancer.

Once the news was out, it flowed through every conceivable medium to even the most unimaginable places on the globe. Budget Rent A Car in Miami got news about Craig from a Budget office in Gibraltar and sent one of their employees out to alert South Florida businesses. ``We also passed it around to all our offices in the nation,'' said Maria Borchers, director of travel marketing.

Children's Wish International, a non-profit organization based in Atlanta, is also working to get cards for Craig. One of its appeals made its way to a computer bulletin board run by Bechtel, a Maryland-based company with an office in Palm Beach Gardens. ``We are getting 10,000 to 15,000 cards for Craig per day,'' said Arthur Stein, director of Children's Wish International.

But Craig doesn't want any more cards.
In November, he received a certificate from Guinness after his mountain-sized collection of 1.5 million cards broke the record set in 1988 by Mario Morby, a 13-year-old cancer victim. Since then, Craig's dream has become a logistical nightmare for his parents, phone operators and the Royal Marsden Hospital in Surrey, England. Monday, the unofficial count for Craig's collection reached 4 million, said Mark Young, a Guinness Publishing Ltd. spokesman.

The hospital has set up a separate answering service to implore callers to refrain from sending more postcards. Despite pleas of mercy and reports in the media, hundreds of post cards continue to pour into the hospital every day. ``Thank you for being so kind,'' said Maria Dest, a nurse at Royal Marsden. ``But he really does not need any more post cards.'' Dest said that whenever a corporation gets wind of Craig's plight, the bundles of mail increase. ``As soon as it starts to slow down, it goes around again,'' she said. Dest would not discuss the specifics of his condition. ``His condition is deteriorating, but he is still able to talk and function,'' she said.

Young, with Guinness, said he gets several calls every day from people who question if Craig Shergold even exists. ``This is definitely legitimate and Craig will be in the 1990 Guinness Book,'' said Young. But because of the problems the two appeals have caused, Young said Guinness plans to discontinue the category. The public outpouring for Mario and now Craig surprised virtually everyone involved, he said. ``These two boys really captured the public imagination,'' Young said.

Daniel

It gets worse. Some quotes from newsgroups (I assume they are true):

"Guinness has announced that this category will not be included in future editions because attempts to break the record have taken several lives. One critically ill child suffocated when a stack of 500,000 cards fell over on him."

and

"ok guys, this story has been bouncing around the net for a while. yes it is true, but the kid has died already. the parents have requested that no further letters be sent. this story has been on soc.singles, misc.misc, and several other boards for the past few months. it has been debated for a long while! please don't start this over again, the kid has died there is no purpose for bouncing this any longer!!"

Ron Chrisley
chrisley at csli.stanford.edu
Xerox PARC SSL                  New College
Palo Alto, CA 94304             Oxford OX1 3BN, UK
(415) 494-4728                  (865) 793-484

From Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU Thu Apr 12 20:25:03 1990
From: Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave.Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Thu, 12 Apr 90 20:25:03 EDT
Subject: Easter request
In-Reply-To: Your message of Thu, 12 Apr 90 23:38:09 -0000. <11865.9004122238@ctcs.leeds.ac.uk>
Message-ID: <5842.639966303@DST.BOLTZ.CS.CMU.EDU>

> I would be grateful if, like myself, you were able to respond to a
> letter received recently, in order to help Craig. He is a 7 year old boy
> who is in the Royal Marsden Hospital in London.
>
> ... rest of blather about postcards for dying boy deleted ...
>
> I would be grateful if you could send a copy of this letter to at least
> another 10 companies or individuals.
> Ian Mitchell Lambert
> Department of Theology
> University of Kent at Canterbury

Chain letters are an abuse of the Internet, and CONNECTIONISTS is a private mailing list. If you ever post something like this to CONNECTIONISTS again, I will file a formal complaint with your site administrator.
I'm cc'ing this to the whole CONNECTIONISTS list to forestall a flame-fest, and to remind other readers that this is *not* acceptable behavior.

-- Dave Touretzky

From ccm at DARWIN.CRITTERS.CS.CMU.EDU Thu Apr 12 22:49:51 1990
From: ccm at DARWIN.CRITTERS.CS.CMU.EDU (Christopher McConnell)
Date: Thu, 12 Apr 90 22:49:51 EDT
Subject: Easter request
In-Reply-To: Turing Conference's message of Thu, 12 Apr 90 23:38:09 BST <11865.9004122238@ctcs.leeds.ac.uk>
Message-ID:

He has broken the record and requests that no more cards be sent. Guinness is retiring the category.

From aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK Fri Apr 13 09:16:44 1990
From: aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK (Aaron Sloman)
Date: Fri, 13 Apr 90 09:16:44 BST
Subject: Easter request - DON'T PLEASE
Message-ID: <1164.9004130816@psuni.cogs.susx.ac.uk>

DO NOT RESPOND TO THIS REQUEST - THEY HAVE HAD ENOUGH

> I would be grateful if, like myself, you were able to respond to a letter
> received recently, in order to help Craig. He is a 7 year old boy who
> is in the Royal Marsden Hospital in London.
>
> Craig Shergold has a tumour on the brain and one on the spine and has very
> little time to live.
>
> It is his ambition to have an entry in the Guinness Book of Records for
> the largest number of "Get Well" cards ever received by an individual.
>
> Please send a card to: ......

THE BOY HAS ACHIEVED HIS GOAL AND THE HOSPITAL HAS DESPERATELY REQUESTED (EVEN VIA BBC NEWS) THAT NO MORE CARDS BE SENT. THEY CANNOT COPE WITH THE FLOOD. If you have copied the letter to others, please ask them to ignore it.

Aaron Sloman

From Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU Fri Apr 13 04:23:48 1990
From: Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave.Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Fri, 13 Apr 90 04:23:48 EDT
Subject: tech report available
Message-ID: <6200.639995028@DST.BOLTZ.CS.CMU.EDU>

*** DO NOT FORWARD TO OTHER MAILING LISTS ***
*** DO NOT FORWARD TO OTHER MAILING LISTS ***

Rules and Maps II: Recent Progress in Connectionist Symbol Processing

David S. Touretzky [1]
Deirdre W. Wheeler [2]
Gillette Elvgren III [1]

CMU-CS-90-112
March 1990

[1] School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
[2] Department of Linguistics, University of Pittsburgh, Pittsburgh, PA 15260

ABSTRACT

This report contains three papers on symbol processing in connectionist networks. The first two, ``A Computational Basis for Phonology'' and ``Rationale for a `Many Maps' Phonology Machine,'' present the latest results of our ongoing project to develop a connectionist explanation for the regularities and peculiarities of human phonological behavior. The third paper, ``Rule Representations in a Connectionist Chunker,'' introduces a new rule chunking architecture based on competitive learning, and compares its performance with that of a backpropagation-based chunker. Earlier work in these areas was described in report CMU-CS-89-158, ``Rules and Maps in Connectionist Symbol Processing.''

``A Computational Basis for Phonology'' and ``Rule Representations in a Connectionist Chunker'' will appear in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, the collected papers of the 1989 IEEE Conference on Neural Information Processing Systems - Natural and Synthetic, Denver, CO, November 1989, Morgan Kaufmann Publishers.
``Rationale for a `Many Maps' Phonology Machine'' will appear in the Proceedings of EMCSR-90: the Tenth European Meeting on Cybernetics and Systems Research, Vienna, Austria, April 1990, World Scientific Publishing Co.

................................................................

To order copies of this report, write to Ms. Catherine Copetas at the School of Computer Science, or send email to copetas+ at cs.cmu.edu. Be sure to include the tech report number, CMU-CS-90-112, in your message. There is no charge for this report.

From jai at blake.acs.washington.edu Sat Apr 14 00:19:33 1990
From: jai at blake.acs.washington.edu (Jai Choi)
Date: Fri, 13 Apr 90 21:19:33 -0700
Subject: TRs available
Message-ID: <9004140419.AA24872@blake.acs.washington.edu>

To whom it may concern:

We would appreciate it if you would post the following, which advertises two technical notes. Thanks in advance.

Jai Choi.

==================================================================
Two Technical Notes Available
==================================================================

1. Query Learning Based on Boundary Search and Gradient Computation of Trained Multilayer Perceptrons

Jenq-Neng Hwang, Jai J. Choi, Seho Oh, Robert J. Marks II

Interactive Systems Design Lab.
Department of Electrical Engr., FT-10
University of Washington
Seattle, WA 98195

****** Abstract *******

In many machine learning applications, the source of the training data can be modeled as an oracle. An oracle has the ability, when presented with an example (query), to give a correct classification. Efficient query learning presents well-chosen examples to the oracle so as to obtain good training data at low cost. This report presents a novel approach to query-based neural network learning. Consider a layered perceptron partially trained for binary classification. The single output neuron is trained to be either a 0 or a 1. A test decision is made by thresholding the output at, say, 0.5. The set of inputs that produce an output of 0.5 forms the classification boundary. We adopted an inversion algorithm for the neural network that allows generation of this boundary. In addition, for each boundary point, we can generate the classification gradient. The gradient provides a useful measure of the sharpness of the multi-dimensional decision surfaces. Using the boundary point and gradient information, conjugate input pair locations are generated and presented to an oracle for proper classification. This new data is used to further refine the classification boundary, thereby increasing the classification accuracy. The result can be a significant reduction in the training set cardinality in comparison with, for example, randomly generated data points. An application example to power security assessment is given. (To be presented at IJCNN'90, San Diego.)

**********************************************************************

2. Iterative Constrained Inversion of Neural Networks and its Applications

Jenq-Neng Hwang, Chi H. Chan

****** Abstract ******

This report presents a new approach to solving the constrained inverse problem for a trained nonlinear mapping. Such problems can be found in a wide variety of applications in dynamic control of nonlinear systems and nonlinear constrained optimization. The forward problem in a nonlinear functional mapping is to obtain the best approximation of the output vector given the input vector.
The inverse problem, on the other hand, is to obtain the best approximation of the input vector given a specified output vector, i.e., to find the inverse function of the nonlinear mapping, which might not exist unless constraints are imposed. Most neural networks previously proposed for training the inverse mapping adopted either a one-way constraint perturbation or a two-stage learning scheme. Both of these approaches are laborious and unreliable. Instead of using two neural networks to emulate the forward and inverse mappings separately, we applied the network inversion algorithm, which works directly on the network used to train the forward mapping, yielding the inverse mapping. Our approach uses one network to emulate both the forward and inverse nonlinear mappings without explicitly characterizing and implementing the inverse mapping. Furthermore, our single network inversion approach allows us to iteratively locate the optimal inverted solution which also satisfies the constraints imposed on the inputs, and allows the best exploitation of the sensitivity measure of the inputs to outputs in a nonlinear mapping. (Presented at the 24th Conf. on Information Systems and Sciences.)

******** For copies of the above two TRs ************

Send your physical address to:

Jai Choi
Dept. EE, FT-10
Univ. of Washington
Seattle, WA 98195

or "jai at blake.acs.washington.edu".

From MURRE%HLERUL55.BITNET at vma.CC.CMU.EDU Wed Apr 18 14:18:00 1990
From: MURRE%HLERUL55.BITNET at vma.CC.CMU.EDU (MURRE%HLERUL55.BITNET@vma.CC.CMU.EDU)
Date: Wed, 18 Apr 90 14:18 MET
Subject: selective attention
Message-ID:

We noticed that there were some inquiries about connectionist work on selective attention and that our work was mentioned. This work will appear shortly in Cognitive Psychology:

Phaf, R.H., A.H.C. Van der Heijden, and P.T.W. Hudson (1990). SLAM: A connectionist model for attention in visual selection tasks. Cognitive Psychology, in press.

For those really interested, a limited number of offprints of this rather long paper will be available (after we have received the offprints ourselves).

We have extended the attentional network to learning neural networks, incorporating some effects of attention on learning and memory. The main building block of these networks is the Categorizing And Learning Module (CALM). We have nearly finished a long report on this work, which will be announced for 'report requests' in this list shortly. In the meantime, those interested may want to check:

Murre, J.M.J., R.H. Phaf, and G. Wolters (1989). CALM: a modular approach to supervised and unsupervised learning. IEEE-INNS, Proceedings of the International Joint Conference on Neural Networks, Washington DC, June 1989, Vol. 1, pp. 649-656.

A review of this paper by T.P. Vogl will appear in one of the forthcoming issues of Neural Network Review, accompanied by a reply from us.

With these CALM modules we have constructed a model which shows a dissociation between implicit and explicit memory tasks (e.g., Schacter, 1987):

Phaf, R.H., E. Postma, and G. Wolters (submitted). ELAN-1: a connectionist model for implicit and explicit memory tasks.

For those really interested, some internal reports on this work are available.

R. Hans Phaf
Jacob M.J. Murre

E-mail: MURRE at HLERUL55.Bitnet

Address:
Unit of Experimental and Theoretical Psychology
Leiden University
P.O. Box 9555
2300 RB Leiden
The Netherlands

From gary%cs at ucsd.edu Wed Apr 18 14:30:08 1990
From: gary%cs at ucsd.edu (Gary Cottrell)
Date: Wed, 18 Apr 90 11:30:08 PDT
Subject: selective attention
In-Reply-To: MURRE%HLERUL55.BITNET@vma.CC.CMU.EDU's message of Wed, 18 Apr 90 14:18 MET
Message-ID: <9004181830.AA09286@desi.ucsd.edu>

Please send me a copy. If you get other requests from UCSD, direct them to me.

gary cottrell 619-534-6640 Sec'y: 619-534-5288 FAX: 619-534-7029
Computer Science and Engineering C-014
UCSD, La Jolla, Ca. 92093
gary at cs.ucsd.edu (ARPA)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!gary (USENET)
gcottrell at ucsd.edu (BITNET)

From stolcke%icsib12.Berkeley.EDU at jade.berkeley.edu Wed Apr 18 18:31:33 1990
From: stolcke%icsib12.Berkeley.EDU at jade.berkeley.edu (Andreas Stolcke)
Date: Wed, 18 Apr 90 18:31:33 BST
Subject: 2 TRs available
Message-ID: <9004190131.AA04915@icsib12.berkeley.edu.>

The following Technical Reports are available. Please refer to the end of this message for information on how to obtain them.

-------------------------------------------------------------------------------

MINIATURE LANGUAGE ACQUISITION: A TOUCHSTONE FOR COGNITIVE SCIENCE

Jerome A. Feldman, George Lakoff, Andreas Stolcke and Susan Hollbach Weber
International Computer Science Institute
Technical Report TR-90-009
March 1990

ABSTRACT

Cognitive Science, whose genesis was interdisciplinary, shows signs of reverting to a disjoint collection of fields. This paper presents a compact, theory-free task that inherently requires an integrated solution. The basic problem is learning a subset of an arbitrary natural language from picture-sentence pairs. We describe a very specific instance of this task and show how it presents fundamental (but not impossible) challenges to several areas of cognitive science including vision, language, inference and learning.

-------------------------------------------------------------------------------

LEARNING FEATURE-BASED SEMANTICS WITH SIMPLE RECURRENT NETWORKS

Andreas Stolcke
International Computer Science Institute
Technical Report TR-90-015
April 1990

ABSTRACT

The paper investigates the possibilities for using simple recurrent networks as transducers which map sequential natural language input into non-sequential feature-based semantics. The networks perform well on sentences containing a single main predicate (encoded by transitive verbs or prepositions) applied to multiple-feature objects (encoded as noun phrases with adjectival modifiers), and show robustness against ungrammatical inputs. A second set of experiments deals with sentences containing embedded structures. Here the network is able to process multiple levels of sentence-final embeddings but only one level of center-embedding. This turns out to be a consequence of the network's inability to retain information that is not reflected in the outputs over intermediate phases of processing. Two extensions to Elman's original recurrent network architecture are introduced.

-------------------------------------------------------------------------------

[Versions of these papers have been submitted to the 12th Annual Conference of the Cognitive Science Society.]

The reports can be obtained as compressed PostScript files from host cis.ohio-state.edu via anonymous ftp. The filenames are feldman.tr90-9.ps.Z and stolcke.tr90-15.ps.Z in directory /pub/neuroprose. Hardcopies may be requested via e-mail to weber at icsi.berkeley.edu or stolcke at icsi.berkeley.edu or physical mail to one of the authors at the following address:

International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94704
U.S.A.

-------------------------------------------------------------------------------

Andreas Stolcke
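For readers who have not met the architecture named in TR-90-015, a minimal sketch of one step of an Elman-style simple recurrent network may help; the dimensions, the tanh nonlinearity, and all names below are illustrative assumptions, not details taken from the report:

----------------------------------------------------------------------
import numpy as np

def srn_step(x, h_prev, Wxh, Whh, Why, bh, by):
    # The previous hidden state is fed back as "context" alongside the input.
    h = np.tanh(x @ Wxh + h_prev @ Whh + bh)
    y = h @ Why + by                 # read out non-sequential feature units
    return h, y

rng = np.random.default_rng(0)
Wxh, Whh, Why = (rng.normal(scale=0.1, size=s) for s in [(5, 8), (8, 8), (8, 6)])
bh, by = np.zeros(8), np.zeros(6)

h = np.zeros(8)                      # empty context at the start of a sentence
for x in rng.normal(size=(4, 5)):    # a sequence of four input vectors
    h, y = srn_step(x, h, Wxh, Whh, Why, bh, by)
# y now holds the network's feature-based reading of the whole sequence.
----------------------------------------------------------------------

Because the only memory is the recycled hidden state, information never demanded at the output can fade over intermediate steps, which is the limitation on center-embedding the abstract describes.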
From STIVA%IRMKANT.BITNET at vma.CC.CMU.EDU Fri Apr 20 10:18:43 1990
From: STIVA%IRMKANT.BITNET at vma.CC.CMU.EDU (Nolfi & Cecconi)
Date: Fri, 20 Apr 90 10:18:43 EDT
Subject: weight spaces
Message-ID:

We would like to submit this topic for discussion:

A good way to understand neural network functioning is to see the learning process as a trajectory in the weight space. More specifically, given a task, we can imagine the process of learning as a movement on the error (fitness) surface of the weight space. The concept of local minima, for example, which derives from this idea, has been shown to be extremely useful. However, we know very little about weight spaces. This certainly comes from the fact that they are very complex to investigate. On the other hand, we think it would be useful to try to answer questions like: are there some kinds of regularities in the error surface? If this is the case, are these regularities task-dependent, or are there also general regularities? How do learning algorithms differ from the point of view of the trajectory in the weight space? We would appreciate comments and possibly references about this.

Nolfi & Cecconi

From sasha at alla.kodak.com Fri Apr 20 10:16:31 1990
From: sasha at alla.kodak.com (alex shustorovich)
Date: Fri, 20 Apr 90 10:16:31 EDT
Subject: weight spaces
Message-ID: <9004201416.AA04517@alla.kodak.com>

The following technical report seems to be relevant to this discussion:

______________________________________________________________________________

Reducing the Weight Space of a Net With Hidden Units to a Minimum Cone.

Alexander Shustorovich
Image Electronics Center, Eastman Kodak Company
901 Elmgrove Road, Rochester NY 14653-5719

ABSTRACT

In his recent talk on the theory of Back-propagation at IJCNN-89, Dr. Hecht-Nielsen made an important observation that any single meaningful combination of weights can be represented in the net in a huge number of variants due to the permutations of hidden units. He remarked that if it were possible to find a cone in the weight space such that the whole space is produced from this cone by permutations of axes corresponding to the permutations of the hidden units, it would greatly reduce the volume of space in which we have to organize the search for the solutions. In this paper such a cone is built. Besides the obvious benefits mentioned above, the same procedure enables the direct comparison of different solutions and trajectories in the weight space, that is, the analysis and comparison of functions performed by individual hidden units.

______________________________________________________________________________

This paper was accepted for poster presentation at INNC-90-Paris in July and it will appear in the proceedings. If you would like to have this TR now, send your request to the author.

Alexander Shustorovich, email: sasha at alla.kodak.com
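The permutation symmetry behind this construction is easy to exhibit directly. The sketch below is our own illustration of the idea, not the cone construction of the report; the sort key is an arbitrary choice that simply picks one canonical representative per equivalence class:

----------------------------------------------------------------------
import numpy as np

def canonicalize(W1, b1, W2):
    # Sort the hidden units of a one-hidden-layer net by (bias, incoming
    # weights); any permutation of the hidden units then maps to the same
    # canonical point in weight space.
    keys = np.hstack([b1[:, None], W1])   # one row of sort keys per hidden unit
    order = np.lexsort(keys.T[::-1])      # lexicographic sort, bias as primary key
    return W1[order], b1[order], W2[:, order]

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=(2, 4))

perm = rng.permutation(4)                          # relabel the hidden units:
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]    # same function, new weights

a = canonicalize(W1, b1, W2)
b = canonicalize(W1p, b1p, W2p)
assert all(np.allclose(x, y) for x, y in zip(a, b))
----------------------------------------------------------------------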
From solla%nordita.dk at vma.CC.CMU.EDU Sat Apr 21 13:23:57 1990
From: solla%nordita.dk at vma.CC.CMU.EDU (solla%nordita.dk@vma.CC.CMU.EDU)
Date: Sat, 21 Apr 90 19:23:57 +0200
Subject: Weight spaces
Message-ID: <9004211723.AA01648@astro.dk>

The concept of `weight space' has been shown to be a useful tool for exploring the ensemble of all possible network configurations, or wirings, compatible with a fixed, given architecture. [1] Such spaces are indeed complex, both because of their high dimensionality and the roughness of the surface defined by the error function. It has been shown that different choices of the distance between the targets and the actual outputs can lead to error surfaces that are both generally smoother and steeper in the vicinity of the minima, resulting in an accelerated form of the back-propagation algorithm. [2]

Full exploration of such weight spaces, or configuration spaces, defines a probability distribution over the space of functions. Such a distribution is a complete characterization of the functional capabilities of the chosen architecture. [3] The entropy of this prior distribution is a useful tool for characterizing the functional diversity of the chosen ensemble. Monitoring the evolution of the probability distribution over the space of functions and its associated entropy during learning provides a quantitative measure of the emergence of generalization ability. [3,4]

[1] J.S. Denker, D.B. Schwartz, B.S. Wittner, S.A. Solla, R.E. Howard, L.D. Jackel, and J.J. Hopfield, `Automatic learning, rule extraction, and generalization', Complex Systems, Vol. 1, pp. 877-922 (1987).
[2] S.A. Solla, E. Levin, and M. Fleisher, `Accelerated learning in layered neural networks', Complex Systems, Vol. 2, pp. 625-639 (1988).
[3] S.A. Solla, `Learning and generalization in layered neural networks: the contiguity problem', in `Neural networks from models to applications', ed. by L. Personnaz and G. Dreyfus, IDSET, Paris, pp. 168-177 (1989).
[4] D.B. Schwartz, V.K. Samalam, S.A. Solla, and J.S. Denker, `Exhaustive learning', Neural Computation, MIT, in press.

===================================================================
Sorry! A copy of this message just went out without proper author identification. Here it is.
===================================================================

Sara A. Solla

Current address (until August 31st):
Nordita
Blegdamsvej 17
DK-2100 Copenhagen
Denmark
solla at nordita.dk

Permanent address:
AT&T Bell Laboratories
Holmdel, NJ 07733, USA
solla at homxa.att.com
From victor%FRLRI61.BITNET at CUNYVM.CUNY.EDU Sun Apr 22 08:07:12 1990
From: victor%FRLRI61.BITNET at CUNYVM.CUNY.EDU (victor%FRLRI61.BITNET@CUNYVM.CUNY.EDU)
Date: Sun, 22 Apr 90 14:07:12 +0200
Subject: Please delete me from your email list
Message-ID: <9004221207.AA06928@sun3c.lri.fr>

Please delete me.
victor at lri.lri.fr

From harnad at clarity.Princeton.EDU Sun Apr 22 20:49:13 1990
From: harnad at clarity.Princeton.EDU (Stevan Harnad)
Date: Sun, 22 Apr 90 20:49:13 EDT
Subject: BBS Call for Commentators: Dynamic Programming/Optimization
Message-ID: <9004230049.AA04749@reason.Princeton.EDU>

Below is the abstract of a forthcoming target article to appear in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. To be considered as a commentator or to suggest other appropriate commentators, please send email to:

harnad at clarity.princeton.edu

or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]

Please specify the aspect of the article that you are qualified and interested to comment upon. If you are not a current BBS Associate, please send your CV and/or the name of a current Associate who would be prepared to nominate you.

____________________________________________________________________

Modeling Behavioral Adaptations

Colin W. Clark
Institute of Applied Mathematics
University of British Columbia
Vancouver BC V6T 1Y4 Canada

Keywords: Dynamic programming; optimization; control theory; game theory; behavioral ecology; evolution; adaptation; fitness.

ABSTRACT: The behavioral landscape for any individual organism is a complex dynamical system consisting of the individual's own physiological and mental states and the state of the physical and biological environment in which it lives. To understand the adaptive significance of behavioral traits one must formulate, analyse and test simplified models of this complex landscape. The target article describes a technique of dynamic behavioral modeling with many desirable characteristics. There is an explicit treatment of state variables and their dynamics. Darwinian fitness is represented directly in terms of survival and reproduction.
Behavioral decisions are modeled simultaneously and sequentially with biologically meaningful parameters and variables, generating empirically testable predictions. The technique has been applied to field and laboratory data in a wide variety of species and behaviors. Some limitations result from the unwieldiness of large-scale dynamic models in parameter estimation and numerical computation. (This article is a follow-up to a previous BBS paper by Houston & Macnamara, but it can be read independently.)

From X92%DHDURZ1.BITNET at vma.CC.CMU.EDU Mon Apr 23 10:25:41 1990
From: X92%DHDURZ1.BITNET at vma.CC.CMU.EDU (Joachim Lammarsch)
Date: Mon, 23 Apr 90 10:25:41 CET
Subject: Unsubscribe
Message-ID:

Please delete the account Q89 @ DHDURZ1 from the distribution list.

Kind regards,
Joachim Lammarsch (NAD DHDURZ1)

From nelsonde%avlab.dnet at wrdc.af.mil Mon Apr 23 11:15:13 1990
From: nelsonde%avlab.dnet at wrdc.af.mil (nelsonde%avlab.dnet@wrdc.af.mil)
Date: Mon, 23 Apr 90 11:15:13 EDT
Subject: Call for Papers
Message-ID: <9004231515.AA00527@wrdc.af.mil>

I N T E R O F F I C E   M E M O R A N D U M

Date: 23-Apr-1990 11:06am EST
From: Dale E. Nelson NELSONDE
Dept: AAAT-1
Tel No: 57646

TO: Remote Addressee ( _LABDDN::"CONNECTIONISTS%CS.CMU.EDU at Q.CS.CMU.EDU" )
TO: Remote Addressee ( _LABDDN::"NEURON-REQUEST at HPLPM.HPL.HP.COM" )

Subject: Call for Papers

Please post the following announcement and call for papers.

---------------------------------------------------------------------------

AGARD
ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT
7 RUE ANCELLE - 92200 NEUILLY-SUR-SEINE - FRANCE
TELEPHONE: (1) 47 38 57 65   TELEX: 610176 AGARD   TELEFAX: (1) 47 38 57 99

AVP/46   2 APRIL 1990

CALL FOR PAPERS

for the SPRING, 1991 AVIONICS PANEL SYMPOSIUM ON

MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS

to be held in LISBON, Portugal, 13-16 May 1991

This meeting will be UNCLASSIFIED.

Abstracts must be received not later than 31 August 1990. Note: US & UK authors must comply with National Clearance Procedures requirements for abstracts and papers.

THEME

MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS

A large amount of research is being conducted to develop and apply Machine Intelligence (MI) technology to aerospace applications. Machine Intelligence research covers the technical areas under the headings of Artificial Intelligence, Expert Systems, Knowledge Representation, Neural Networks and Machine Learning. This list is not all-inclusive. It has been suggested that this research will dramatically alter the design of aerospace electronics systems, because MI technology enables automatic or semi-automatic operation and control. Some of the application areas where MI is being considered include sensor cueing, data and information fusion, command/control/communications/intelligence, navigation and guidance, pilot aiding, spacecraft and launch operations, and logistics support for aerospace electronics. For many routine jobs, it appears that MI systems would provide screened and processed data as well as recommended courses of action to human operators. MI technology will enable electronics systems or subsystems which adapt to or correct for errors, and many of the paradigms admit parallel implementations or use intelligent algorithms to increase the speed of response to near real time.
With all of the interest in MI research and the desire to expedite transition of the technology, it is appropriate to organize a symposium to present the results of efforts applying MI technology to aerospace electronics applications. The symposium will focus on applications research and development to determine the types of MI paradigms which are best suited to the wide variety of aerospace electronics applications. The symposium will be organized into separate sessions for the various aerospace electronics application areas. It is tentatively proposed that the sessions be organized as follows:

SESSION 1 - Offensive System Electronics (fire control systems, sensor cueing and control, signal/data/information fusion, machine vision, etc.)

SESSION 2 - Defensive System Electronics (electronic countermeasures, radar warning receivers, countermeasure resource management, situation awareness, fusion, etc.)

SESSION 3 - Command/Control/Communications/Intelligence - C3I (sensor control, signal/data/information fusion, etc.)

SESSION 4 - Navigation System Electronics (data filtering, sensor cueing and control, etc.)

SESSION 5 - Space Operations (launch and orbital)

SESSION 6 - Logistic Systems to Support Aerospace Electronics (on- and off-board systems, embedded training, diagnostics and prognostics, etc.)

GENERAL INFORMATION

This meeting, supported by the Avionics Panel, will be held in Lisbon, Portugal on 13-16 May 1991. It is expected that 30 to 40 papers will be presented. Each author will normally have 20 minutes for presentation and 10 minutes for questions and discussion. Equipment will be available for projection of viewgraph transparencies, 35 mm slides, and 16 mm films. The audience will include Members of the Avionics Panel and 150 to 200 invited experts from the NATO nations. Attendance at AGARD Meetings is by invitation only from an AGARD National Delegate or Panel Member.

Final manuscripts should be limited to no more than 16 pages including figures. Presentations at the meeting should be an extract of the final manuscript and not a reading of it. Complete instructions will be sent to authors of papers selected by the Technical Programme Committee. Authors submitting abstracts should ensure that financial support for attendance at the meeting will be available.

CLASSIFICATION

This meeting will be UNCLASSIFIED.

LANGUAGES

Papers may be written and presented either in English or French. Simultaneous interpretation will be provided between these two languages at all sessions. A copy of your prepared remarks (oral presentation) and visual aids should be provided to the AGARD staff at least one month prior to the meeting date. This procedure will ensure correct interpretation of your spoken words.

ABSTRACTS

Abstracts of papers offered for this Symposium are now invited and should conform with the following instructions:

LENGTH: 200 to 500 words
CONTENT: Scope of the Contribution & Relevance to the Meeting - your abstract should fully represent your contribution
SUBMITTAL: To the Technical Programme Committee by all authors (US authors must comply with Attachment 1)
IDENTIFICATION: Author Information Form (Attachment 2) must be provided with your abstract
CLASSIFICATION: Abstracts must be unclassified

Your abstracts and Attachment 2 should be mailed in time to reach all members of the Technical Programme Committee, and the Executive, not later than 31 AUGUST 1990 (note the exception for US authors). This date is important and must be met to ensure that your paper is considered.
Abstracts should be submitted in the format shown below.

TITLE OF PAPER
Name of Author
Organization or Company Affiliation
Address

Name of Co-Author
Organization or Company Affiliation
Address

The text of your ABSTRACT should start on this line.

PUBLICATIONS

The proceedings of this meeting will be published in a single-volume Conference Proceedings. The Conference Proceedings will include the papers which are presented at the meeting, the questions/discussion following each presentation, and a Technical Evaluation Report of the meeting. It should be noted that AGARD reserves the right to print in the Conference Proceedings any paper or material presented at the Meeting. The Conference Proceedings will be sent to the printer on or about July 1991. NOTE: Authors that fail to provide the required camera-ready manuscript by this date may not be published.

QUESTIONS concerning the technical programme should be addressed to the Technical Programme Committee. Administrative questions should be sent directly to the Avionics Panel Executive.

GENERAL SCHEDULE (Note: exception for US authors)

EVENT                                                       DEADLINE
SUBMIT AUTHOR INFORMATION FORM                              31 AUG 90
SUBMIT ABSTRACT                                             31 AUG 90
PROGRAMME COMMITTEE SELECTION OF PAPERS                     1 OCT 90
NOTIFICATION OF AUTHORS                                     OCT 90
RETURN AUTHOR REPLY FORM TO AGARD                           IMMEDIATELY
START PUBLICATION/PRESENTATION CLEARANCE PROCEDURE          UPON NOTIFICATION
AGARD INSTRUCTIONS WILL BE SENT TO CONTRIBUTORS             OCT 90
MEETING ANNOUNCEMENT WILL BE PUBLISHED IN                   JAN 91
SUBMIT CAMERA-READY MANUSCRIPT AND PUBLICATION/
  PRESENTATION CLEARANCE CERTIFICATE, to arrive at AGARD by 15 MAR 91
SEND ORAL PRESENTATION AND COPIES OF VISUAL AIDS TO THE
  AVIONICS PANEL EXECUTIVE, to arrive at AGARD by           19 APR 91
ALL PAPERS TO BE PRESENTED                                  13-16 MAY 91

TECHNICAL PROGRAMME COMMITTEE

CHAIRMAN: Dr Charles H. KRUEGER Jr
Director, Systems Avionics Division
Wright Research and Development Center (AFSC), ATTN: AAA
Wright Patterson Air Force Base
Dayton, OH 45433, USA
Telephone: (513) 255-5218
Telefax: (513) 476-4020

Mr John J. BART
Technical Director, Directorate of Reliability & Compatibility
Rome Air Development Center (AFSC)
GRIFFISS AFB, NY 13441, USA

Prof Dr A. Nejat INCE
Burumcuk sokak 7/10
P.K. 8
06752 MALTEPE, ANKARA, Turkey

Mr J.M. BRICE
Directeur Technique
THOMSON TMS
B.P. 123
38521 SAINT EGREVE CEDEX, France

Mr Edward M. LASSITER
Vice President, Space Flight Ops Program Group
P.O. Box 92957
LOS ANGELES, CA 90009-2957, USA

Mr L.L. DOPPING-HEPENSTAL
Head of Systems Development
BRITISH AEROSPACE PLC, Military Aircraft Limited
WARTON AERODROME
PRESTON, LANCS PR4 1AX, United Kingdom

Eng. Jose M.B.G. MASCARENHAS
C-924, C/O CINCIBERLANT HQ
2780 OEIRAS, Portugal

Mr J. DOREY
Directeur des Etudes & Syntheses
O.N.E.R.A.
29 Av. de la Division Leclerc
92320 CHATILLON CEDEX, France

Mr Dale NELSON
Wright Research & Development Center, ATTN: AAAT
Wright Patterson AFB
Dayton, OH 45433, USA

Mr David V. GAGGIN
Director, U.S. Army Avionics R&D Activity
ATTN: SAVAA-D
FT MONMOUTH, NJ 07703-5401, USA

Ir. H.A.T. TIMMERS
Head, Electronics Department
National Aerospace Laboratory
P.O. Box 90502
1006 BM Amsterdam, Netherlands

AVIONICS PANEL EXECUTIVE

LTC James E. CLAY, US Army
Telephone: (33) (1) 47-38-57-65
Telex: 610176
Telefax: (33) (1) 47-38-57-99

MAILING ADDRESSES:

From Europe and Canada:        From United States:
AGARD                          AGARD
ATTN: AVIONICS PANEL           ATTN: AVIONICS PANEL
7, rue Ancelle                 APO NY 09777
92200 Neuilly-sur-Seine
France

ATTACHMENT 1 - FOR US AUTHORS ONLY

1.
Authors of US papers involving work performed or sponsored by a US Government agency must receive clearance from their sponsoring agency. These authors should allow at least six weeks for clearance from their sponsoring agency. Abstracts, notices of clearance by sponsoring agencies, and Attachment 2 should be sent to Mr GAGGIN to arrive not later than 15 AUGUST 1990.

2. All other US authors should forward abstracts and Attachment 2 to Mr GAGGIN to arrive before 31 JULY 1990. These contributors should include the following statements in the cover letter:
A. The work described was not performed under sponsorship of a US Government agency.
B. The abstract is technically correct.
C. The abstract is unclassified.
D. The abstract does not violate any proprietary rights.

3. US authors should send their abstracts to Mr GAGGIN and Dr KRUEGER only. Abstracts should NOT be sent to non-US members of the Technical Programme Committee or the Avionics Panel Executive. ABSTRACTS OF PAPERS FROM US AUTHORS CAN ONLY BE SENT TO:

Mr David V. GAGGIN
Director
Avionics Research & Dev Activity
ATTN: SAVAA-D
Ft Monmouth, NJ 07703-5401
Telephone: (201) 544-4851 or AUTOVON: 995-4851

Dr Charles H. KRUEGER Jr
Director, Avionics Systems Div
Wright Research & Dev Center
ATTN: WRDC/AAA
Wright Patterson AFB
Dayton, OH 45433
Telephone: (513) 255-5218

4. US authors should send the Author Information Form (Attachment 2) to the Avionics Panel Executive, Mr GAGGIN, Dr KRUEGER, and each Technical Programme Committee member, to meet the above deadlines.

5. Authors selected from the United States are reminded that their full papers must be cleared by an authorized national clearance office before they can be forwarded to AGARD. Clearance procedures should be started at least 12 weeks before the paper is to be mailed to AGARD. Mr GAGGIN will provide additional information at the appropriate time.

AUTHOR INFORMATION FORM
FOR AUTHORS SUBMITTING AN ABSTRACT FOR THE AVIONICS PANEL SYMPOSIUM on
MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS

INSTRUCTIONS

1. Authors should complete this form and send a copy to the Avionics Panel Executive and all Technical Programme Committee members by 31 AUGUST 1990.
2. Attach a copy of your abstract to these forms before they are mailed. US authors must comply with ATTACHMENT 1 requirements.

a. Probable Title of Paper: ____________________________________________
   _______________________________________________________________________
b. Paper most appropriate for Session # ______________________________
c. Full Name of Author to be listed first on Programme, including Courtesy Title, First Name and/or Initials, Last Name & Nationality: ______________________________
d. Name of Organization or Activity: _________________________________
   _______________________________________________________________________
e. Address for Return Correspondence:
   __________________________________   Telephone Number: ____________________
   __________________________________   Telefax Number:   ____________________
   __________________________________   Telex Number:     ____________________
f. Names of Co-Authors including Courtesy Titles, First Name and/or Initials, Last Name, their Organization, and their Nationality:
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________

__________           ____________________
Date                 Signature

DUE NOT LATER THAN 15 AUGUST 1990

From Connectionists-Request at CS.CMU.EDU Mon Apr 23 09:22:53 1990
From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU)
Date: Mon, 23 Apr 90 09:22:53 EDT
Subject: Fwd: Re: weight space
Message-ID: <9058.640876973@B.GP.CS.CMU.EDU>

------- Forwarded Message

From bill at wayback.unm.edu Fri Apr 20 19:10:57 1990
From: bill at wayback.unm.edu (william horne)
Date: Fri, 20 Apr 90 17:10:57 MDT
Subject: weight space
Message-ID: <9004202310.AA07162@wayback.unm.edu>

Nolfi & Cecconi write:

>A good way to understand neural network functioning is to see the learning
>process as a trajectory in the weight space....
>However, we know very little about weight spaces.
>... it would be useful to try to answer questions like: are there some kinds
>of regularities in the error surface? If so, are these regularities
>task-dependent, or are there also general regularities? How do learning
>algorithms differ from the point of view of the trajectory in the weight space?

We have submitted an article to IEEE Trans. on Neural Networks which addresses exactly this problem. The title is "Error Surfaces for Multi-layer Perceptrons", Hush, D., Salas, J. and Horne, B. I will send a copy out to anyone who desires it.

- -bill horne

------- End of Forwarded Message

From pollack at cis.ohio-state.edu Mon Apr 23 14:55:23 1990
From: pollack at cis.ohio-state.edu (pollack@cis.ohio-state.edu)
Date: Mon, 23 Apr 90 14:55:23 -0400
Subject: (Slices through) Weight Space
Message-ID: <9004231855.AA00348@dendrite.cis.ohio-state.edu>

****** Do not forward to other b-boards or mailing lists. thank you ****

This tech report, with plenty of pretty pictures, addresses the relationship between initial and final points in weight space...

Jordan

---------------------------------------------------------------------------

Back Propagation is Sensitive to Initial Conditions

John F. Kolen
Jordan B. Pollack
Report 90-JK-BPSIC
Laboratory for Artificial Intelligence Research
Computer and Information Science Department
The Ohio State University
Columbus, Ohio 43210, USA
kolen-j at cis.ohio-state.edu
pollack at cis.ohio-state.edu

Abstract

This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additional deterministic experiments were performed. The results of these experiments demonstrate the extreme sensitivity of back propagation to initial weight configuration.

-----------------------------------------------------------------------

This tech report is available by the usual method of anonymous FTP from cheops.cis.ohio-state.edu in pub/neuroprose as

kolen.bpsic.tr.ps.Z
kolen.bpsic.fig1.ps.Z
kolen.bpsic.fig2.ps.Z
kolen.bpsic.fig3.ps.Z
kolen.bpsic.fig4.ps.Z
kolen.bpsic.fig5.ps.Z

Or, write for Report 90-JK-BPSIC to:

Technical Report Librarian
Laboratory for AI Research
Ohio State University
2036 Neil Ave.
Columbus, OH 43210
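[Aside, not part of the announcement above: the Monte Carlo experiment described in the abstract is easy to reproduce in miniature. The sketch below trains a tiny 2-2-1 network on XOR from random initializations of varying magnitude and reports how convergence time varies; the network size, learning rate, and convergence criterion are arbitrary choices of mine, not Kolen & Pollack's.]

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def epochs_to_converge(scale, rng, lr=0.5, max_epochs=5000):
    W1 = rng.uniform(-scale, scale, (3, 2))   # input (+ bias) -> hidden
    W2 = rng.uniform(-scale, scale, (3, 1))   # hidden (+ bias) -> output
    Xb = np.hstack([X, np.ones((4, 1))])
    for epoch in range(max_epochs):
        H = sigmoid(Xb @ W1)
        Hb = np.hstack([H, np.ones((4, 1))])
        Y = sigmoid(Hb @ W2)
        E = Y - T
        if np.all(np.abs(E) < 0.4):           # crude "converged" criterion
            return epoch
        dY = E * Y * (1 - Y)                  # backprop for sum-squared error
        dH = (dY @ W2[:2].T) * H * (1 - H)
        W2 -= lr * Hb.T @ dY
        W1 -= lr * Xb.T @ dH
    return max_epochs                         # treat as "did not converge"

rng = np.random.default_rng(0)
for scale in (0.1, 0.5, 1.0, 2.0, 4.0):
    runs = [epochs_to_converge(scale, rng) for _ in range(20)]
    print(f"init scale {scale:3.1f}: mean epochs {np.mean(runs):7.1f}, "
          f"std {np.std(runs):7.1f}")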
From hsf at magi.ncsl.nist.gov Tue Apr 24 10:44:17 1990
From: hsf at magi.ncsl.nist.gov (Handprint Sample Form Account)
Date: Tue, 24 Apr 90 10:44:17 EDT
Subject: character recognition testing
Message-ID: <9004241444.AA25367@magi.ncsl.nist.gov>

The National Institute of Standards and Technology (NIST), formerly the National Bureau of Standards (NBS), has developed a database for testing handprint character recognition. The database is on an ISO-9660 formatted CD and is described briefly below. Please forward this to interested parties.

__________________________________________________________________

NIST Handprint Database

The NIST handprinted character database consists of 2100 pages of bilevel, black-and-white image data of hand-printed numerals and text, with a total character count of over 1,000,000 characters. Data is compressed using CCITT G4 compression, and decompression software is provided in C. The total image database, in uncompressed form, contains about 3 gigabytes of image data, with 273,000 numerals and 707,700 alphabetic characters.

The handprinting sample was obtained from a selection of field data collection staff of the Bureau of the Census, with a geographic sampling corresponding to the population density of the United States. The geographical sampling was done because previous national samples of handprinted material have suggested that there are significant regional differences in handprinting style. Most of the individuals who participated in the sampling are accustomed to filling out forms relatively neatly, and so this sample may represent a "best possible" sample of handprinting. Even so, the range of characters and spatial placement of those characters is broad enough to present very difficult challenges to the image recognition systems currently available or likely to be available in the near future.

Typical Use

This test data set was designed for multiple uses in the area of image (character) recognition. The problem of computer recognition of document content from images is usually broken down into three operations. First, the relevant areas containing text are located; this is usually referred to as field isolation. Next, the entire field image containing one or more characters is broken into the images of individual characters; this process is usually referred to as segmentation. Finally, these isolated characters must be correctly interpreted. The images in the database are designed to test all three of the processes. The test data can be used for any one of the three operations, although it is important to recognize that the success of all subsequent steps in this process is dependent on the success of the previous steps.

For further information contact:

Joan Sauerwine
301-975-2208
FAX 301-975-2183
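[Aside, not from the NIST announcement: the dependence of later stages on earlier ones can be made concrete with a toy calculation. The function name and per-stage accuracies below are invented, and stage errors are assumed independent, purely for illustration.]

# Toy illustration: a character is read correctly only if its field was
# isolated, it was segmented correctly, AND it was classified correctly,
# so per-stage accuracies multiply.
def overall_accuracy(isolation, segmentation, recognition):
    return isolation * segmentation * recognition

print(overall_accuracy(0.99, 0.95, 0.97))   # ~0.912, even with good stages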
From andreas at psych.Stanford.EDU Tue Apr 24 20:11:08 1990
From: andreas at psych.Stanford.EDU (Andreas Weigend)
Date: Tue, 24 Apr 90 17:11:08 PDT
Subject: preprint: Predicting the Future (Weigend, Huberman, Rumelhart)
Message-ID:

______________________________

PREDICTING THE FUTURE - A CONNECTIONIST APPROACH

Andreas S. Weigend [1]
Bernardo A. Huberman [2]
David E. Rumelhart [3]

______________________________

We investigate the effectiveness of connectionist networks for predicting the behavior of non-linear dynamical systems. We use feed-forward networks of the type used by Lapedes and Farber to obtain forecasts in the context of noisy real-world data from sunspots and computational ecosystems. The networks generate accurate future predictions from knowledge of the past and consistently outperform traditional statistical non-linear approaches to these problems.

The problem of having too many weights compared to the number of data points (overfitting) is addressed by adding a term to the cost function that penalizes large weights. We show that this weight-elimination procedure successfully shrinks the net down. We compare different classes of activation functions and explain why the convergence of sigmoids is significantly better than the convergence of radial basis functions for higher-dimensional input. We suggest the use of the concept of mutual information to interpret the weights.

We introduce two measures of non-linearity and compare the sunspot and ecosystem data to a series generated by a linear autoregressive model. The solution for the sunspot data is found to be moderately non-linear; the solution for the prediction of the ecosystem, highly non-linear.

Submitted to "International Journal of Neural Systems"

If you would really like a copy of the preprint, send your physical address to: hershey at psych.stanford.edu
(preprint number: Stanford-PDP-90-01, PARC-SSL-90-20)

[1] Physics Department, Stanford University, Stanford, CA 94305
[2] Dynamics of Computation Group, Xerox PARC, Palo Alto, CA 94304
[3] Psychology Department, Stanford University, Stanford, CA 94305

______________________________
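[Aside, not part of the announcement: a penalty term "that penalizes large weights" in the weight-elimination spirit can be sketched as below. The constants and code are my own illustration, not the authors'; the idea is that small weights cost roughly w^2 while large weights saturate toward a constant cost, so the optimizer is pushed to eliminate weights it does not really need.]

import numpy as np

def weight_elimination_penalty(w, w0=1.0, lam=0.01):
    # cost ~ lam * (w/w0)^2 / (1 + (w/w0)^2):
    # quadratic near zero, flat for |w| >> w0
    r = (w / w0) ** 2
    return lam * np.sum(r / (1.0 + r))

def total_cost(errors, w):
    # prediction error plus complexity penalty
    return np.sum(errors ** 2) + weight_elimination_penalty(w)

w = np.array([0.01, 0.5, 3.0])
print(weight_elimination_penalty(w))   # the big weight costs ~lam, the tiny one ~0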
From AMR at IBM.COM Wed Apr 25 00:01:18 1990
From: AMR at IBM.COM (AMR@IBM.COM)
Date: Wed, 25 Apr 90 00:01:18 EDT
Subject: Connectionism and Linguistic Regularities
Message-ID:

Some time ago I was involved in a debate here about NL and connectionism. I now have a specific question about a kind of linguistic phenomenon which I find it difficult to see how connectionist models would handle. The phenomenon is that in many languages some class of items (morphemes, words, phrases) behaves in a certain completely regular way, yet this way of doing things becomes irregular in the sense that new items do not behave the same way. I am not sure I can come up with very good examples from English, but it is as though the domain of irregular past tenses or irregular plurals were predictable in English (e.g., hypothetically, all monosyllabic nouns ending in the phonetic sequence u:s pluralize in i:s, or the like), yet when new words of this form enter the language they do not behave this way, and speakers have trouble recognizing the regularity on tests involving nonsense items of the right shape.

There is a growing body of such examples in the linguistic literature, and to the extent that an explanation is sought, it is assumed to lie in some highly specific (perhaps innate) properties of the human linguistic faculty. This is what makes me sceptical of the ability of connectionist architectures to handle this kind of phenomenon, while at the same time, if they can, that would be a striking piece of evidence in favor of the connectionist approach.

Alexis Manaster-Ramer
amr at ibm.com or amr at yktvmh.bitnet

P.S. Can someone please remind me of the email address to use for things such as getting people added to the mailing list? Thanks.

From gasser at iuvax.cs.indiana.edu Wed Apr 25 00:51:52 1990
From: gasser at iuvax.cs.indiana.edu (Michael Gasser)
Date: Tue, 24 Apr 90 23:51:52 -0500
Subject: tech report
Message-ID:

****** Do not forward to other b-boards or mailing lists. thank you ****

The following tech report is available.

---------------------------------------------------------------------------

Networks and Morphophonemic Rules Revisited

Michael Gasser
Chan-Do Lee
Report 307
Computer Science Department
Indiana University
Bloomington, IN 47405 USA
gasser at cs.indiana.edu, cdlee at cs.indiana.edu

Abstract

In the debate over the power of connectionist models to handle linguistic phenomena, considerable attention has been focused on the learning of simple morphophonemic rules. Rumelhart and McClelland's celebrated model of the acquisition of the English past tense (1986), which used a simple pattern associator to learn mappings from stems to past-tense forms, was advanced as evidence that networks could learn to emulate rule-like linguistic behavior. Pinker and Prince's equally celebrated critique of the past-tense model (1988) argued forcefully that the model was inadequate on several grounds. For our purposes, these are (1) the fact that the model is not constrained in ways that human language learners clearly are and (2) the fact that, since the model cannot represent the notion "word", it cannot distinguish homophonous verbs. A further deficiency of the model, one not brought out by Pinker and Prince, is that it is not a processing account: the task that the network learns is that of associating forms with forms, rather than that of producing forms given meanings or meanings given forms.

This paper describes a model making use of an adaptation of a simple recurrent network which addresses all three objections to the earlier work on morphophonemic rule acquisition. The model learns to generate forms in one or another "tense", given arbitrary patterns representing "meanings", and to output the appropriate tense given forms. The inclusion of meanings in the network means that homophonous forms are distinguished. In addition, this network experiences difficulty learning reversal processes which do not occur in human language.

-----------------------------------------------------------------------

This is available in compressed PostScript form from the OSU neuroprose database:

unix> ftp cheops.cis.ohio-state.edu (or, ftp 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose/Inbox
ftp> binary
ftp> get morphophonemics.ps.Z
ftp> quit
unix> uncompress morphophonemics.ps.Z
unix> lpr morphophonemics.ps
(with whatever your printer needs for PostScript)

[The report should be moved to pub/neuroprose soon.]
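[Aside for readers unfamiliar with the architecture family mentioned above: a simple recurrent network feeds a copy of its previous hidden state back in as extra input, which is what lets it process forms one element at a time. The sketch below is a generic forward pass of that family, with arbitrary sizes of my choosing; it is not Gasser & Lee's model.]

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 5, 8, 5
W_ih = rng.normal(0, 0.1, (n_in + n_hid, n_hid))   # input + context -> hidden
W_ho = rng.normal(0, 0.1, (n_hid, n_out))          # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def run_sequence(inputs):
    context = np.zeros(n_hid)      # "context units": copy of the last hidden state
    outputs = []
    for x in inputs:
        h = sigmoid(np.concatenate([x, context]) @ W_ih)
        outputs.append(sigmoid(h @ W_ho))
        context = h                # feed the hidden state back on the next step
    return outputs

sequence = [rng.uniform(0, 1, n_in) for _ in range(4)]
print(run_sequence(sequence)[-1])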
From B344DSL at UTARLG.ARL.UTEXAS.EDU Tue Apr 24 23:55:00 1990
From: B344DSL at UTARLG.ARL.UTEXAS.EDU (B344DSL@UTARLG.ARL.UTEXAS.EDU)
Date: Tue, 24 Apr 90 22:55 CDT
Subject: Conference Announcement
Message-ID:

CALL FOR ABSTRACTS

NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE
Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND)
October 4-6, 1990
IBM, Westlake, TX
Abstracts due June 15, 1990

The Metroplex Institute for Neural Dynamics is an independent organization of industrial and academic interests within the Dallas/Fort Worth Metroplex. This is our fourth annual workshop, each being dedicated to a specific problem area, and all of them characterized by a balance of theoretical, applied, and biological interests. Past topics have included Sensory-Motor Coordination and Motivation, Emotion, and Goal Direction. Past speakers have included Harry Klopf, Richard Sutton, Karl Pribram, Harold Szu, Michael Kuperstein, Daniel Bullock, and James Houk.

This year's topic of Knowledge Representation and Inference will be focused by its attempt to apply neural architectures within the more traditional rubrics of artificial intelligence and general computer science. This is not merely the application of neural networks to the problem domains of other approaches; more fundamentally, this workshop will explore how the connectionist approach can implement other theoretical frameworks and translate to other technical vocabularies. Subtopics will include:

-- Connectionist approaches to semantic and symbolic problems from AI
-- Architectures for evidential and case-based reasoning
-- Cognitive maps and their control of sequence and planning
-- Representations of logical primitives and constitutive relations

The 1988 MIND workshop on Motivation, Emotion, and Goal Direction in Neural Networks has culminated in a book, now in press at Erlbaum. This book is characterized by extensive cross-referencing of papers, arising from the associations of the workshop. We plan to generate a similar book from this workshop on Knowledge Representation and Inference. Abstracts must be submitted for review and will be available to participants at the workshop. Some of the presentations will then be developed into book chapters. In addition to oral presentations, there will be some space for poster presentations at the workshop.

Abstracts (2 or 3 paragraphs) must be submitted by June 15 to either:

Daniel S. Levine
Dept. of Mathematics
Univ. of Texas at Arlington
Arlington, TX 76019-9408
(817)-273-3598
b344dsl at utarlg.bitnet or b344dsl at utarlg.arl.utexas.edu

or

Manuel Aparicio
Mail Stop 03-04-40
IBM
5 West Kirkwood Blvd.
Roanoke, TX 76299-0001
(817)-962-5944

From bates at amos.ucsd.edu Wed Apr 25 17:02:38 1990
From: bates at amos.ucsd.edu (Elizabeth Bates)
Date: Wed, 25 Apr 90 14:02:38 PDT
Subject: Connectionism and Linguistic Regularities
Message-ID: <9004252102.AA10762@amos.ucsd.edu>

There are some detailed responses to the regular/irregular morpheme issue (initially raised by Pinker & Prince) in papers by Plunkett and Marchman, Marchman and Plunkett, and in a more recent paper by Brian MacWhinney. In a nutshell: they manage to get a homogeneous architecture to behave "as though" it had two mechanisms, one for irregulars (encapsulated, undergeneralized) and one for regulars (permeable, prone to overgeneralization). The secret lies in the statistical differences between regulars and irregulars (i.e., type/token ratios).

Write to marchman at amos.ucsd.edu and to brian+ at andrew.cmu.edu for copies of the relevant papers.

-liz bates
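[Aside, not part of Dr. Bates's message: "type/token ratio" is easy to make concrete. In the toy count below -- the word lists are invented -- the regular pattern has many distinct verbs (types), each seen a few times, while the irregulars are a few verbs repeated very often; this statistical asymmetry is what the papers above exploit.]

from collections import Counter

corpus = (["walked", "jumped", "talked", "painted", "looked"] * 2 +  # many regular types
          ["went"] * 20 + ["took"] * 15)                             # few, frequent irregulars

counts = Counter(corpus)
regular = {w: c for w, c in counts.items() if w.endswith("ed")}
irregular = {w: c for w, c in counts.items() if not w.endswith("ed")}

for name, group in (("regular", regular), ("irregular", irregular)):
    n_types, n_tokens = len(group), sum(group.values())
    print(f"{name:9s}: {n_types} types / {n_tokens} tokens "
          f"(type/token ratio {n_types / n_tokens:.2f})")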
From gasser at iuvax.cs.indiana.edu Wed Apr 25 16:13:24 1990
From: gasser at iuvax.cs.indiana.edu (Michael Gasser)
Date: Wed, 25 Apr 90 15:13:24 -0500
Subject: TR
Message-ID:

The report I advertised here recently, _Networks and Morphophonemic Rules Revisited_, is now in pub/neuroprose/gasser.morpho.ps.Z (compressed PostScript) in the OSU database. Please try the ftp option before requesting the paper from me.

Michael Gasser
gasser at cs.indiana.edu
Computer Science Department
(812) 855-7078
Indiana University
Bloomington, IN 47405 USA

From stolcke%icsib12.Berkeley.EDU at jade.berkeley.edu Wed Apr 25 14:18:28 1990
From: stolcke%icsib12.Berkeley.EDU at jade.berkeley.edu (Andreas Stolcke)
Date: Wed, 25 Apr 90 14:18:28 BST
Subject: TR printing problems
Message-ID: <9004252118.AA06811@icsib12.berkeley.edu.>

Some people had problems printing the PostScript versions of the two ICSI tech reports announced recently (files feldman.tr90-9.ps.Z and stolcke.tr90-15.ps.Z in /pub/neuroprose on cis.ohio-state.edu). These problems were caused by some fonts not being available on some PostScript printers. The problem has been fixed, and people who haven't yet requested a hardcopy by mail are encouraged to obtain the new version via ftp. Requests for mailed copies will of course still be honored.

Moral: Don't use any exotic PostScript fonts in your submissions to neuroprose. Either stick to what TeX provides or use the standard Times, Helvetica, and Courier fonts.

Andreas

From LAUTRUP%nbivax.nbi.dk at vma.CC.CMU.EDU Thu Apr 26 07:18:00 1990
From: LAUTRUP%nbivax.nbi.dk at vma.CC.CMU.EDU (Benny Lautrup)
Date: Thu, 26 Apr 90 13:18 +0200 (NBI, Copenhagen)
Subject: No subject
Message-ID:

Begin Message:
-----------------------------------------------------------------------

INTERNATIONAL JOURNAL OF NEURAL SYSTEMS

The International Journal of Neural Systems is a quarterly journal which covers information processing in natural and artificial neural systems. It publishes original contributions on all aspects of this broad subject, which involves physics, biology, psychology, computer science, and engineering. Contributions include research papers, reviews, and short communications. The journal presents a fresh, undogmatic attitude towards this multidisciplinary field, with the aim of being a forum for novel ideas and improved understanding of collective and cooperative phenomena with computational capabilities.

ISSN: 0129-0657 (IJNS)

----------------------------------

Contents of issue 1:

1. C. Peterson and B. Soderberg: A New Method for Mapping Optimization Problems onto Neural Networks.
2. M. G. Paulin, M. E. Nelson and J. M. Bower: Dynamics of Compensatory Eye Movement Control: An Optimal Estimation Analysis of the Vestibulo-Ocular Reflex.
3. P. Peretto: Learning Learning Sets in Neural Networks.
4. B. A. Huberman: The Collective Brain.
5. S. Patarnello and P. Carnevali: A Neural Network Model to Simulate a Conditioning Experiment.
6. J.-P. Nadal: Study of a Growth Algorithm for a Feed-Forward Network.
7. E. Oja: Neural Networks, Principal Components and Subspaces.
8. S. Bacci, G. Mato, and N. Parga: The Organization of Metastable States in a Neural Network with Hierarchical Patterns.
9. A. Lansner and O. Ekeberg: A One-layer Feedback Artificial Neural Network with a Bayesian Learning Rule.
10. J. Midtgaard and J. Hounsgaard: Nerve Cells as Source of Time Scale and Processing Density in Brain Function.
11. S. Chen: On Computational Vision.
----------------------------------

Contents of issue 2:

1. P. Baldi and A. Attiya: Oscillations and synchronizations in neural networks: An exploration of the labeling hypothesis.
2. A. W. Smith and D. Zipser: Learning sequential structure with the real-time recurrent learning algorithm.
3. M. R. Davenport and G. W. Hoffmann: A recurrent neural network using tri-state hidden neurons to orthogonalize the memory space.
4. H. K. M. Yusuf, S. Rahman and H. Akhtar: Rats kept in environmental isolation for twelve months from weaning: Performance in maze learning and visual discrimination tasks and brain composition.
5. H. C. Card and W. R. Moore: VLSI devices and circuits for learning in neural networks.
6. L. Gislen, C. Peterson and B. Soderberg: "Teachers and classes" with neural networks.
7. A. E. Gunhan, L. P. Csernai, and J. Randrup: Unsupervised competitive learning in Purkinje networks.
8. H.-U. Bauer and T. Geisel: Motion detection and direction detection in local neural nets.

----------------------------------

Editorial board:

B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge)
S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-charge)
D. Stork (Stanford) (Book review editor)

Associate editors:

B. Baird (Berkeley)
D. Ballard (University of Rochester)
E. Baum (NEC Research Institute)
S. Bjornsson (University of Iceland)
J. M. Bower (CalTech)
S. S. Chen (University of North Carolina)
R. Eckmiller (University of Dusseldorf)
J. L. Elman (University of San Diego)
M. V. Feigelman (Landau Institute for Theoretical Physics)
F. Fogelman-Soulie (Paris)
K. Fukushima (Osaka University)
A. Gjedde (Montreal Neurological Institute)
S. Grillner (Nobel Institute for Neurophysiology, Stockholm)
T. Gulliksen (University of Oslo)
D. Hammerstroem (University of Oregon)
J. Hounsgaard (University of Copenhagen)
B. A. Huberman (XEROX PARC)
L. B. Ioffe (Landau Institute for Theoretical Physics)
P. I. M. Johannesma (Katholieke Univ. Nijmegen)
M. Jordan (MIT)
G. Josin (Neural Systems Inc.)
I. Kanter (Princeton University)
J. H. Kaas (Vanderbilt University)
A. Lansner (Royal Institute of Technology, Stockholm)
A. Lapedes (Los Alamos)
B. McWhinney (Carnegie-Mellon University)
M. Mezard (Ecole Normale Superieure, Paris)
A. F. Murray (University of Edinburgh)
J. P. Nadal (Ecole Normale Superieure, Paris)
E. Oja (Lappeenranta University of Technology, Finland)
N. Parga (Centro Atomico Bariloche, Argentina)
S. Patarnello (IBM ECSEC, Italy)
P. Peretto (Centre d'Etudes Nucleaires de Grenoble)
C. Peterson (University of Lund)
K. Plunkett (University of Aarhus)
S. A. Solla (AT&T Bell Labs)
M. A. Virasoro (University of Rome)
D. J. Wallace (University of Edinburgh)
D. Zipser (University of California, San Diego)

----------------------------------

CALL FOR PAPERS

Original contributions consistent with the scope of the journal are welcome. Complete instructions as well as sample copies and subscription information are available from:

The Editorial Secretariat, IJNS
World Scientific Publishing Co. Pte. Ltd.
73, Lynton Mead, Totteridge
London N20 8DH
ENGLAND
Telephone: (44)1-446-2461

or

World Scientific Publishing Co. Inc.
687 Hardwell St.
Teaneck, New Jersey 07666
USA
Telephone: (1)201-837-8858

or

World Scientific Publishing Co. Pte. Ltd.
Farrer Road, P. O. Box 128
SINGAPORE 9128
Telephone: (65)278-6188

-----------------------------------------------------------------------

End Message
From gaudiano at bucasb.bu.edu Wed Apr 25 22:58:35 1990
From: gaudiano at bucasb.bu.edu (gaudiano@bucasb.bu.edu)
Date: Wed, 25 Apr 90 22:58:35 EDT
Subject: Student Society Update
Message-ID: <9004260258.AA27445@retina.bu.edu>

Hello everyone,

we are very excited about the overwhelming response to our student society. Our new name (International Student Society for Neural Networks, or ISSNNet) reflects the large number of interested people from all over the world. Over 400 people requested a copy of our first newsletter, almost one half from outside the U.S. If you sent a request before April 10 but still have not received the newsletter, or if you have any other questions, please send a message to: issnnet at bucasb.bu.edu

We have begun receiving membership requests (only $5 for the year), and some official donations. Please remember that we will only continue to send out future issues of the newsletter and other information to official members, so send us your membership form as soon as you can! Also, although we realize it was not clearly stated in the newsletter, YOU DON'T HAVE TO BE A STUDENT TO JOIN! We have many activities and programs that will be useful to everyone, and your non-student memberships will show your support for students.

If you are going to IJCNN in San Diego or to INNC in Paris, come visit our booth. We will have T-shirts, newsletters, and some of our other events. We will also have an official ISSNNet meeting/social event at IJCNN (more details later). If you want to make donations or sponsor students presenting papers at NN conferences, send e-mail to .

We are in the process of becoming incorporated, and we should have our non-profit status sometime this fall. We have provisions in our bylaws for a flexible governing board to accommodate the international and dynamic nature of our society. Get involved!

From tp at irst.it Thu Apr 26 11:12:20 1990
From: tp at irst.it (Tomaso Poggio)
Date: Thu, 26 Apr 90 17:12:20 +0200
Subject: preprint: Predicting the Future (Weigend, Huberman, Rumelhart)
In-Reply-To: Andreas Weigend's message of Tue, 24 Apr 90 17:11:08 PDT <9004250034.AA12221@life.ai.mit.edu>
Message-ID: <9004261512.AA10081@caneva.irst.it>

From kamil at wdl1.fac.ford.com Thu Apr 26 15:47:24 1990
From: kamil at wdl1.fac.ford.com (Kamil A Grajski)
Date: Thu, 26 Apr 90 12:47:24 -0700
Subject: Summer Hiring at Ford Aerospace
Message-ID: <9004261947.AA12402@wdl1.fac.ford.com>

Hi,

In the spirit of public service, here is an unofficial announcement, an announcelette, if you please, that some people at Ford Aerospace (San Jose) might be looking for summer hires.

=================================================================

4/25/90

The Advanced Development Department of Ford Aerospace's Western Development Laboratories in San Jose historically has summer (May/June-August) job positions open to promising junior and senior undergraduates and graduate students. Broadly speaking, on-going projects are aimed at algorithm design and development (software & hardware) for real-time systems combining digital signal processing and classification. The classification component includes, but is NOT limited to, neural network technology. There is a statistical component which is looking at classical as well as some new non-parametric methods. On-going projects (funded by Ford Motor Co. and/or IR&D) involve:
a.) Design, development, and implementation of an on-board engine knock detector and classifier for aiding engine performance optimization - a joint project with Ford Motor Co. - real real-time data! (Free rides in a Taurus!)

b.) Several related projects in real-time speech processing, e.g., speaker change detection and word spotting in continuous speech - we have home-grown, TIMIT, and other databases on-line. We are currently interested in the performance and applicability of recurrent architectures to ASR, developing synergy with HMMs, and some recent nonparametric statistical discriminant methods.

c.) Parallel computation - we have a 2K-processor-element SIMD machine from MasPar (the beta version), with possible limited access to 8K and 16K versions, onto which we are porting a variety of DSP, neural network, and statistical methodologies for production and in-house research efforts. We are emphasizing some neat approaches to writing "virtualized" code for multi-processor systems. (Preliminary results to be reported at IJCNN and INNC.)

The office environment is a typical Silicon Valley set-up. There are shower facilities for that afternoon jog or bike ride; loads of places to eat, etc.

Send resume or note to: kamil at wdl1.fac.ford.com (TCP/IP: 128.5.32.1)

Dr. Kamil A. Grajski
Ford Aerospace
Advanced Development Department
Mail Stop X-22
220 Henry Ford II Dr. (dig that address!)
San Jose, CA 95161-9041

==================================================================

Kamil

From jm2z+ at ANDREW.CMU.EDU Thu Apr 26 19:05:26 1990
From: jm2z+ at ANDREW.CMU.EDU (Javier Movellan)
Date: Thu, 26 Apr 90 19:05:26 -0400 (EDT)
Subject: PREPRINT: Contrastive Hebbian
Message-ID: <8aBruqa00WBMQ2T21l@andrew.cmu.edu>

This preprint has been placed in the account kindly provided by Ohio State.

CONTRASTIVE HEBBIAN LEARNING IN INTERACTIVE NETWORKS

Javier R. Movellan
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA 15213
email: jm2z+ at andrew.cmu.edu

Submitted to Neural Computation

Interactive networks, as defined by Hopfield (1984), Grossberg (1978), and McClelland & Rumelhart (1981), may have an advantage over feed-forward architectures because of their completion properties and their flexibility in the treatment of units as inputs or outputs. Ackley, Hinton and Sejnowski (1985) derived a learning rule to train Boltzmann machines, which are discrete, interactive networks. Unfortunately, because of the discrete stochasticity of their units, Boltzmann learning is intolerably slow. Peterson and Anderson (1987) showed that Boltzmann machines with a large number of units can be approximated with deterministic networks whose logistic activations represent the average activation of discrete Boltzmann units (mean field approximation). Under these conditions, a learning rule that I call Contrastive Hebbian Learning (CHL) was shown to be a good approximation to the Boltzmann weight update rule and to achieve learning speeds comparable to backpropagation. Hinton (1989) showed that for mean field networks, CHL is at least a first-order approximation to gradient descent on an error function.

The purpose of this paper is to show that CHL works with any interactive network with bounded, continuous activation functions and symmetric weights. The approach taken does not presume the existence of Boltzmann machines whose behavior is approximated with mean field networks. It is also shown that CHL performs gradient descent on a contrastive function of the same form investigated by Hinton (1989).

The paper is divided into two sections and one appendix. In Section 1, I study the dynamics of the activations in interactive networks. Section 2 shows how to modify the weights for the stable states of the network to reproduce desired patterns of activations. The appendix contains mathematical details, and some comments on how to implement Contrastive Hebbian Learning in interactive networks.

The format is LaTeX. Here are the instructions to get the file:

unix> ftp cheops.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> get Movellan.CHL.LateX
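[Aside, not from the preprint: the basic CHL recipe - settle the network twice, once with only the inputs clamped (the "minus" phase) and once with inputs and desired outputs clamped (the "plus" phase), then change each symmetric weight by the difference of the two Hebbian co-activation products - can be sketched as below. Everything here (network size, settling scheme, learning rate) is an invented toy, not Movellan's implementation.]

import numpy as np

rng = np.random.default_rng(0)
n = 6                            # units 0-1: inputs, 2-3: hidden, 4-5: outputs
W = rng.normal(0, 0.1, (n, n))
W = (W + W.T) / 2                # symmetric weights
np.fill_diagonal(W, 0.0)         # no self-connections

def settle(x, clamped, steps=30):
    # relax toward a fixed point, holding clamped units at their values
    for _ in range(steps):
        free = ~clamped
        net = W @ x
        x[free] = 1.0 / (1.0 + np.exp(-net[free]))
    return x

def chl_step(inp, target, lr=0.2):
    global W
    # minus phase: clamp inputs only; hidden and output units settle freely
    clamp_in = np.array([True, True, False, False, False, False])
    x_minus = settle(np.concatenate([inp, np.full(4, 0.5)]), clamp_in)
    # plus phase: clamp inputs AND desired outputs, let hidden units settle
    clamp_io = np.array([True, True, False, False, True, True])
    x_plus = x_minus.copy()
    x_plus[4:] = target
    x_plus = settle(x_plus, clamp_io)
    # contrastive Hebbian update: plus-phase minus minus-phase co-activations
    W += lr * (np.outer(x_plus, x_plus) - np.outer(x_minus, x_minus))
    np.fill_diagonal(W, 0.0)
    return x_minus[4:]           # the network's free-running prediction

for _ in range(200):             # learn one toy input-output association
    out = chl_step(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print("free-phase output after training:", out)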
From INS_ATGE%JHUVMS.BITNET at vma.CC.CMU.EDU Fri Apr 27 02:35:00 1990
From: INS_ATGE%JHUVMS.BITNET at vma.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@vma.CC.CMU.EDU)
Date: Fri, 27 Apr 90 01:35 EST
Subject: Recurrent Linguistic Domain Papers?
Message-ID:

I recently entered into a discussion with a professor of Cognitive Science, who was of the opinion that connectionist models are not reasonable ways of explaining linguistic processing, since "there is no way for such systems to temporarily 'save their state', perform some other computation, and then restore the prior state. As a result, they seem to be limited to... 'finite state automata'". Since he only knows about feedforward-style models, I can understand his feelings. I am curious whether anyone knows of a reference to recurrent connectionist models which show non-FSA behavior (linear bounded automata would be fine), or a reference which discusses connectionist models used in linguistic domains involving recursive grammars utilizing recurrent nets.

-Tom

From pollack at CIS.OHIO-STATE.EDU Fri Apr 27 12:58:41 1990
From: pollack at CIS.OHIO-STATE.EDU (pollack@CIS.OHIO-STATE.EDU)
Date: Fri, 27 Apr 90 12:58:41 -0400
Subject: Recurrent Linguistic Domain Papers?
In-Reply-To: INS_ATGE%JHUVMS.BITNET@vma.CC.CMU.EDU's message of Fri, 27 Apr 90 01:35 EST <9004271448.AA17935@cheops.cis.ohio-state.edu>
Message-ID: <9004271658.AA00656@dendrite.cis.ohio-state.edu>

Tom,

It is clear that naive applications of connectionism usually lead to finite-state models with limited representational abilities. Having been one of the first to build such a limited model (Pollack & Waltz 1982, 85), I've been working on this question, and have a couple of answers for your professor:

In my 1987 Ph.D. thesis from the University of Illinois, I show how to build a TM with units having rational outputs, linear combinations, thresholds, and multiplicative gating. ($6 for mccs-87-100 from TR Librarian, CRL, NMSU, Las Cruces, NM 88003)

The work on RAAM (88 CogSci conf and AIJ, in press) shows how to get stacks and trees into fixed-width distributed representations. (pollack.newraam.ps.Z in pub/neuroprose)

My "strange automata" paper (submitted) shows that a high-order recurrent network can bifurcate to chaos, becoming an infinite-state machine (in what Chomsky class?) whose transitions are not arbitrary but are controlled by an underlying strange attractor.

Jordan Pollack                  Assistant Professor
CIS Dept/OSU                    Laboratory for AI Research
2036 Neil Ave                   Email: pollack at cis.ohio-state.edu
Columbus, OH 43210              Fax/Phone: (614) 292-4890
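[Aside, mine rather than Pollack's: the flavor of "more than finite-state" recurrent computation is easy to see in a hand-built counter. A single unit whose rational-valued state moves up and down with the input can accept strings of the form a^n b^n - a language no finite-state automaton can recognize for unbounded n. The weights here are set by hand and nothing is learned; a trained recurrent net would realize the same counter only approximately in its continuous state vector.]

def accepts_anbn(string):
    state = 0.0                  # recurrent state: current nesting depth
    seen_b = False
    for ch in string:
        if ch == 'a':
            if seen_b:           # an 'a' after a 'b' is malformed
                return False
            state += 1.0         # "push": open one more level
        elif ch == 'b':
            seen_b = True
            state -= 1.0         # "pop": close one level
            if state < 0:
                return False     # more b's than a's so far
        else:
            return False
    return state == 0.0          # accept iff the counts balance

for s in ["ab", "aaabbb", "aabbb", "abab"]:
    print(repr(s), accepts_anbn(s))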
From ersoy at ee.ecn.purdue.edu Fri Apr 27 10:02:06 1990
From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy)
Date: Fri, 27 Apr 90 09:02:06 -0500
Subject: No subject
Message-ID: <9004271402.AA19432@ee.ecn.purdue.edu>

CALL FOR PAPERS AND REFEREES

HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24
NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES
KAILUA-KONA, HAWAII - JANUARY 9-11, 1991

The Neural Networks Track of HICSS-24 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations, and applications. Papers are invited that may be theoretical, conceptual, tutorial, or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings, which is published by the Computer Society of the IEEE. HICSS-24 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM).

Submissions are solicited in:

Supervised and Unsupervised Learning
Issues of Complexity and Scaling
Associative Memory
Self-Organization
Architectures
Optical, Electronic and Other Novel Implementations
Optimization
Signal/Image Processing and Understanding
Novel Applications

INSTRUCTIONS FOR SUBMITTING PAPERS

Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s), and a 300-word abstract of the paper.

DEADLINES

Six copies of the manuscript are due by June 25, 1990.
Notification of accepted papers by September 1, 1990.
Accepted manuscripts, camera-ready, are due by October 3, 1990.

SEND SUBMISSIONS AND QUESTIONS TO

O. K. Ersoy
Purdue University
School of Electrical Engineering
W. Lafayette, IN 47907
(317) 494-6162

From pollack at cis.ohio-state.edu Fri Apr 27 12:17:15 1990
From: pollack at cis.ohio-state.edu (pollack@cis.ohio-state.edu)
Date: Fri, 27 Apr 90 12:17:15 -0400
Subject: Neuroprose lead time
Message-ID: <9004271617.AA00631@dendrite.cis.ohio-state.edu>

**Do not forward to other lists**

Recently, some connectionists have placed their reports in the Inbox, notified me and, a couple of hours later, posted their TR announcement to the mailing list. I'm a fairly busy guy, and it takes at least a day or two to process my email and do simple chores like moving and renaming files, and keeping the INDEX file. I realize that you want your papers to be read, but please be patient, or the connectionist preprint system will break down due to misinformation and the email traffic to fix it.
Jordan Pollack                  Assistant Professor
CIS Dept/OSU                    Laboratory for AI Research
2036 Neil Ave                   Email: pollack at cis.ohio-state.edu
Columbus, OH 43210              Fax/Phone: (614) 292-4890

(kolen.bpsic.tr.ps.Z has been fixed, and kolen.bpsic.fig0.ps.Z added)

From elman at amos.ucsd.edu Fri Apr 27 15:26:46 1990
From: elman at amos.ucsd.edu (Jeff Elman)
Date: Fri, 27 Apr 90 12:26:46 PDT
Subject: Recurrent Linguistic Domain Papers?
Message-ID: <9004271926.AA23119@amos.ucsd.edu>

I have done work along these lines, using a simple recurrent network. Nets have been trained on a variety of stimuli. Probably the most interesting simulations (for your purposes) are those which involve discovering a way to represent recursive grammatical structures. The networks succeed to a limited extent by implementing what I call "leaky" or "context-sensitive recursion", in which a state vector does the job normally done by a stack. Since the entire state vector is visible to the part of the network which computes the output function, information from different levels leaks. I believe this kind of leakage is just what is needed to account for natural language processing.

For a copy of TR's reporting this work, send a note to 'yvonne at amos.ucsd.edu' asking for 8801 and 8903.

Jeff Elman
Dept. of Cognitive Science
UCSD

From schraudo%cs at ucsd.edu Fri Apr 27 17:27:54 1990
From: schraudo%cs at ucsd.edu (Nici Schraudolph)
Date: Fri, 27 Apr 90 14:27:54 PDT
Subject: Recurrent Linguistic Domain Papers?
Message-ID: <9004272127.AA15861@beowulf.ucsd.edu>

Jeff Elman has trained a simple recurrent prediction network on a corpus of sentences with embedded clauses produced by a recursive grammar. The net was required to remember noun/verb agreement across the embedded clauses; its capacity to do so showed limits similar to those of human linguistic capability: namely, performance degraded after about three levels of embedding, with center embeddings more adversely affected than tail recursions. These findings are reported in his tech report "Representation and Structure in Connectionist Models" (CRL TR 8903), available from the Center for Research in Language, UCSD, La Jolla, CA 92093-0108 (e-mail: jan at amos.ucsd.edu).

--
Nici Schraudolph, C-014                     nschraudolph at ucsd.edu
University of California, San Diego         nschraudolph at ucsd.bitnet
La Jolla, CA 92093                          ...!ucsd!nschraudolph

From cole at cse.ogi.edu Fri Apr 27 20:22:06 1990
From: cole at cse.ogi.edu (Ron Cole)
Date: Fri, 27 Apr 90 17:22:06 -0700
Subject: SPEECH RECOGNITION
Message-ID: <9004280022.AA24108@cse.ogi.edu>

COMPUTER SPEECH RECOGNITION: THE STATE OF THE ART

A Four-Day Workshop Covering Current and Emerging Technologies in Computer Speech Recognition

July 16-19, 1990
Oregon Graduate Institute
Portland, Oregon

INSTRUCTORS

Ron Cole, Organizer (Oregon Graduate Institute)
Les Atlas (University of Washington)
Mark Fanty (Oregon Graduate Institute)
Dan Hammerstrom (Oregon Graduate Institute)
Kai-Fu Lee (Carnegie Mellon University)
Stephanie Seneff (Massachusetts Institute of Technology)
Victor Zue (Massachusetts Institute of Technology)

COURSE SUMMARY

"Computer Speech Recognition: The State of the Art" will cover current and emerging technologies in computer speech recognition. Leading experts in the field will describe today's most successful speech recognition systems and discuss the strengths and limitations of the technology underlying each.
Speech recognition systems examined in detail include EAR, an English alphabet recognizer that uses neural networks to achieve high recognition accuracy for spoken letters; SPHINX, a system that uses hidden Markov models to recognize continuous speech in a 1000-word vocabulary; and VOYAGER, a system that combines speech recognition with natural language understanding to converse with the speaker.

Workshop participants will gain an understanding of the problems involved in speaker-independent computer speech recognition and the various research approaches. Topics to be presented in detail include the phonetic structure and variability of speech; signal representations; hidden Markov models; neural network processing techniques; and natural language understanding. The instructors are distinguished researchers who have produced state-of-the-art systems representing the different approaches to the problem. Live demonstrations of speech recognition algorithms will be provided throughout the course.

The workshop fee is $995 per person. To receive more information and a registration form contact:

Department of Academic Services
Oregon Graduate Institute
19600 NW von Neumann Dr.
Beaverton, OR 97006
(503) 690-1137
fischer at admin.ogi.edu

From reynolds at bucasd.bu.edu Sat Apr 21 20:17:47 1990
From: reynolds at bucasd.bu.edu (John Huntington Reynolds)
Date: 22 Apr 90 00:17:47 GMT
Subject: Temporal Pulse Codes
Message-ID:

I'm very interested in "multiple meaning" theories (e.g., Raymond and Lettvin, and now Optican and Richmond), the informational role that conduction blocks in axon arbors might play, and the function of temporally modulated pulse codes in general. I'm writing in order to gather references to related work. I'm really just getting my feet wet at this point -- I joined Steve Grossberg's Cognitive and Neural Systems program as a PhD student in September, and with courses and my R.A. work I've been too snowed under to really pursue these interests very fully.

Work in temporal pulse encoding I am aware of includes:

Chung, Raymond, and Lettvin (1970). Multiple meanings in single visual units. Brain Behavior and Evolution 3:72-101.

Gray, Konig, Engel, and Singer (1989). Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature, Vol. 338, March 1989.

Optican, Podell, Richmond, and Spitzer (1987). Temporal encoding of two-dimensional patterns by single units in primate inferior temporal cortex (three-part series). Journal of Neurophysiology, Vol. 57, No. 1, January 1987.

Pratt, Gill (1990). Pulse Computation. PhD thesis, MIT, January 1990.

Raymond, Steve, and Jerry Lettvin (1978). Aftereffects of activity in peripheral axons as a clue to nervous coding. In: Physiology and Pathobiology of Axons, S.G. Waxman, ed. Raven Press, New York.

Richmond, Optican, and Gawne (1990). Neurons use multiple messages encoded in temporally modulated spike trains to represent pictures. Preprint of a chapter in Seeing Contour and Color, ed. J. Kulikowski, Pergamon Press.

... and a lot of work that has been done in the area of temporal coding in the auditory nerve and cochlear nucleus (average localized synchrony response (ALSR) coding).

I've finally reached a (brief) lull in my activities here, and I'd appreciate any advice you'd care to offer.

--thanks in advance, John Reynolds