From gasser at iuvax.cs.indiana.edu Mon May 1 11:55:49 1989 From: gasser at iuvax.cs.indiana.edu (Michael Gasser) Date: Mon, 1 May 89 10:55:49 -0500 Subject: room sharing at IJCNN Message-ID: Post-doc at Indiana University looking for (preferably female) person to share room with at IJCNN. Contact Mayumi Koide at (812) 855-6828, (812) 339-5793, or koidem at gold.bacs.indiana.edu From ersoy at ee.ecn.purdue.edu Mon May 1 23:44:47 1989 From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy) Date: Mon, 1 May 89 22:44:47 -0500 Subject: No subject Message-ID: <8905020344.AA23700@ee.ecn.purdue.edu> CALL FOR PAPERS AND REFEREES HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 23 NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES KAILUA-KONA, HAWAII - JANUARY 3-6, 1990 The Neural Networks Track of HICSS-23 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications. Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings which is published by the Computer Society of the IEEE. HICSS-23 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society,and the Pacific Research Institute for Informaiton Systems and Management (PRIISM). Submissions are solicited in: Supervised and Unsupervised Learning Associative Memory Self-Organization Architectures Optical, Electronic and Other Novel Implementations Optimization Signal/Image Processing and Understanding Novel Applications INSTRUCTIONS FOR SUBMITTING PAPERS Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliations(s), complete physical and electronic address(es), telephone number(s) and a 300-word abstract of the paper. DEADLINES Six copies of the manuscript are due by June 10, 1989. Notification of accepted papers by September 1, 1989. Accpeted manuscripts, camera-ready, are due by October 3, 1989. SEND SUBMISSIONS AND QUESTIONS TO O. K. Ersoy H. H. Szu Purdue University Naval Research Laboratories School of Electrical Engineering Code 5709 W. Lafayette, IN 47907 4555 Overlook Ave., SE (317) 494-6162 Washington, DC 20375 E-Mail: ersoy at ee.ecn.purdue (202) 767-2407 From netlist at psych.Stanford.EDU Tue May 2 10:46:57 1989 From: netlist at psych.Stanford.EDU (Mark Gluck) Date: Tue, 2 May 89 07:46:57 PDT Subject: TODAY (5/2): Bruce McNaughton, Network Model of Hippocampus Message-ID: REMINDER TODAY: Stanford University Interdisciplinary Colloquium Series: Adaptive Networks and their Applications May 2nd (Tuesday, 3:30pm): ******************************************************************************** Hebb-Steinbuch-Marr Networks and the Role of Movement in Hippocampal Representations of Spatial Relations Bruce L. McNaughton Dept. 
of Psychology University of Colorado Campus Box 345 Boulder, CO 80309 ******************************************************************************** Room 380-380C (Rear courtyard behind Psych & Math Bldgs.) From daugman%charybdis at harvard.harvard.edu Tue May 2 10:56:56 1989 From: daugman%charybdis at harvard.harvard.edu (j daugman) Date: Tue, 2 May 89 10:56:56 EDT Subject: Vision and Image Analysis Message-ID: Request for Technical Reports and Papers (Second Request) In preparation for upcoming Reviews and Tutorials at 1989 Conferences, I would be grateful to receive copies of any papers or technical reports pertaining to applications of neural nets to vision and image analysis. (This repeats an earlier request sent out in February.) Please send any material to the following address. Thank you in advance. John Daugman 950 William James Hall Harvard University Cambridge, Mass. 02138 From mv10801 at uc.msc.umn.edu Wed May 3 15:25:17 1989 From: mv10801 at uc.msc.umn.edu (mv10801@uc.msc.umn.edu) Date: Wed, 3 May 89 14:25:17 CDT Subject: Share hotel room at IJCNN? Message-ID: <8905031925.AA14952@uc.msc.umn.edu> I would like to find a roommate to share at hotel room with at the IJCNN conference in Washington DC in June. Male non-smoker preferred. If you're interested, please contact me by e-mail or by phone. Thanks! --Jonathan Marshall mv10801 at uc.msc.umn.edu Center for Research in Learning, Perception, and Cognition 205 Elliott Hall University of Minnesota 612-331-6919 (eve/weekend/msg) Minneapolis, MN 55455 612-626-1565 (office) From hendler at icsib9.Berkeley.EDU Wed May 3 17:29:29 1989 From: hendler at icsib9.Berkeley.EDU (James Hendler) Date: Wed, 3 May 89 14:29:29 PDT Subject: sort of connectionist: Message-ID: <8905032129.AA03381@icsib9.> CALL FOR PAPERS CONNECTION SCIENCE (Journal of Neural Computing, Artificial Intelligence and Cognitive Research) Special Issue -- HYBRID SYMBOLIC/CONNECTIONIST SYSTEMS Connectionism has recently seen a major resurgence of interest among both artificial intelligence and cognitive science researchers. The spectrum of connectionist approaches is quite large, ranging from structured models, in which individual network units carry meaning, through distributed models of weighted networks with learning algorithms. Very encouraging results, particularly in ``low-level'' perceptual and signal processing tasks, are being reported across the entire spectrum of these models. Unfortunately, connectionist systems have had more limited success in those ``higher cognitive'' areas where symbolic models have traditionally shown promise: expert reasoning, planning, and natural language processing. While it may not be inherently impossible for purely connectionist approaches to handle complex reasoning tasks someday, it will require significant breakthroughs for this to happen. Similarly, getting purely symbolic systems to handle the types of perceptual reasoning that connectionist networks perform well would require major advances in AI. One approach to the integration of connectionist and symbolic techniques is the development of hybrid reasoning systems in which differing components can communicate in the solving of problems. This special issue of the journal Connection Science will focus on the state of the art in the development of such hybrid reasoners. Papers are solicited which focus on: Current artificial intelligence systems which use connectionist components in the reasoning tasks they perform. 
Theoretical or experimental results showing how symbolic computations can be implemented in, or augmented by, connectionist components. Cognitive studies which discuss the relationship between functional models of higher level cognition and the ``lower level'' implementations in the brain. The special issue will give special consideration to papers sharing the primary emphases of the Connection Science Journal which include: 1) Replicability of Results: results of simulation models should be reported in such a way that they are repeatable by any competent scientist in another laboratory. The journal will be sympathetic to the problems that replicability poses for large complex artificial intelligence programs. 2) Interdisciplinary research: the journal is by nature multidisciplinary and will accept articles from a variety of disciplines such as psychology, cognitive science, computer science, language and linguistics, artificial intelligence, biology, neuroscience, physics, engineering and philosophy. It will particularly welcome papers which deal with issues from two or more subject areas (e.g. vision and language). Papers submitted to the special issue will also be considered for publication in later editions of the journal. All papers will be refereed. The expected publication date for the special issue is Volume 2(1), March, 1990. DEADLINES: Submission of papers June 15, 1989 Reviews/decisions September 30, 1989 Final rewrites due December 15, 1989. Authors should send four copies of the article to: Prof. James A. Hendler Associate Editor, Connection Science Dept. of Computer Science University of Maryland College Park, MD 20742 USA Those interested in submitting articles are welcome to contact the editor via e-mail (hendler at brillig.umd.edu - US Arpa or CSnet) or in writing at the above address. From neilson%cs at ucsd.edu Wed May 3 18:42:16 1989 From: neilson%cs at ucsd.edu (Robert Hecht-Nielsen) Date: Wed, 3 May 89 15:42:16 PDT Subject: Volunteers Wanted for IJCNN Message-ID: <8905032242.AA15364@odin.UCSD.EDU> Request for volunteers for the upcoming International Conference on Neural Networks (IJCNN) June 18 - June 22 Requirements: In order to receive full admission to conference and the proceedings, you are required to work June 19 - June 22, one shift each day. On June 18 there will be tutorials presented all day. In order to see a tutorial, you must work that tutorial. See the information below on what tutorials are being presented. Shifts: There are 3 shifts: Morning, afternoon and evening. It is best that you work the same shift each day. Volunteers are organized into groups and you will, more than likely, be working with the same group each day. This allows at great deal of flexibility for everyone. If there is a paper being presented at the time of your shift, you can normally work it out with your group to see it. Last year I had no complaints from any of the volunteers regarding missing a paper which they wanted to view. Tutorials: The following tutorials are being presented: 1) Pattern Recognition - Prof. David Casasent 2) Adaptive Pattern Recognition - Prof. Leon Cooper 2) Vision - Prof. John Daugman 4) Neurobiology Review - Dr. Walter Freeman 5) Adaptive Sensory Motor Control - Prof. Stephen Grossberg 6) Dynamical Systems Review - Prof. Morris Hirsch 7) Neural Nets - Algorithms & Microhardware - Prof. John Hopfield 8) VLSI Technology and Neural Network Chips - Dr. Larry Jackel 9) Self-Organizing Feature Maps - Tuevo Kohonen 10) Associative Memory - Prof. 
Bart Kosko 11) Optical Neurocomputers - Prof. Demetri Psaltis 12) Starting a High-Tech Company - Peter Wallace 13) LMS Techniques in Neural Networks - Prof. Bernard Widrow 14) Reinforcement Learning - Prof. Ronald Williams If you want to work the tutorials, please return to me your preferences from 1 to 14 (1 being the one you want to see the most). Housing: Guest housing is available at the University of Maryland. It is about 30 minutes away from the hotel, but Washington, D.C. has a great "metro" system to get you to and from the conference. The cost of housing per night is $16.50 per person for a double room, or $22.50 for a single room. I will be getting more information on this, but you need to sign up as soon as possible, as these prices are quite reasonable for the area and the rooms will go quickly. General Meeting: A general meeting is scheduled at the hotel on Saturday, June 17, around 6:00 pm. You must attend this meeting! If there is a problem with you not being able to make the meeting, I need to know about it. When you contact me to commit yourself officially, I will need from you the following: 1) shift preference 2) tutorial preferences 3) housing preference (University Housing?) To expedite things, I can be contacted at work at (619) 573-7391 during 7:00am-2:00pm west coast time. You may also leave a message on my home phone (619) 942-2843. Thank you, Karen G. Haines IJCNN Tutorials Chairman neilson%cs at ucsd.edu From isabelle at neural.att.com Fri May 5 09:26:33 1989 From: isabelle at neural.att.com (isabelle@neural.att.com) Date: Fri, 5 May 89 09:26:33 EDT Subject: No subject Message-ID: <8905051325.AA29273@neural.UUCP> ....................................................................... ...............--- SPEECH --- SPEECH --- SPEECH ---.............. ....................................................................... I would like to know who has been working on the DARPA database for digit recognition. I am interested in the results that can be obtained by the different methods, including the connectionist methods. Contact Isabelle Guyon, AT&T Bell Labs, Holmdel, NJ 07733 (USA), (201) 949 3220, email isabelle at neural.att.com. Thanks in advance. Isabelle. From alexis%yummy at gateway.mitre.org Fri May 5 16:38:15 1989 From: alexis%yummy at gateway.mitre.org (Alexis Wieland) Date: Fri, 5 May 89 16:38:15 EDT Subject: Local Minima and XOR Message-ID: <8905052038.AA23517@yummy.mitre.org> I apologize for this rather late reply to a note; I was on vacation. In a note about searching weight spaces for local minima (I'm afraid I don't remember the name; it was from UCSD and was part of a NIPS paper) it was noted that they found local min's in a network for computing sin(). There was surprise, though, that there weren't any local min's for the strictly layered 2-in -> 2 -> 1-out XOR net. Many people have complained about getting caught in local min with this network .... It would seem that if you were trying to find a point where the gradient literally went to zero, you couldn't find it ... because it's out at infinity. That network (when it "works") creates a ridge (or valley) diagonally through the 2D input space. If both of the hidden units get stuck creating the same side of the ridge, then only 3 of the 4 points are correctly classified ... you're in the dastardly "local min." But since you can still get those three points more accurate (closer to 1 or 0) by pushing the arc value towards infinity, you'll always have a non-zero gradient.
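A minimal numerical sketch of this point, assuming one particular "stuck" configuration (both hidden units computing roughly x1 OR x2, and the output unit ORing them): at any finite weight scale, three of the four patterns keep improving as the weights grow, and the squared-error gradient shrinks towards zero but never reaches it; the apparent minimum lies at infinity.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR patterns and targets
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])

def error_and_grad_norm(s):
    # Assumed "stuck" configuration, scaled by s: both hidden units compute
    # (roughly) x1 OR x2, and the output unit ORs the two hidden units, so
    # only 3 of the 4 patterns can come out right.
    W1 = s * np.array([[1., 1.], [1., 1.]])   # input -> hidden weights
    b1 = s * np.array([-0.5, -0.5])
    W2 = s * np.array([1., 1.])               # hidden -> output weights
    b2 = -0.5 * s

    H = sigmoid(X @ W1 + b1)                  # hidden activations (4 x 2)
    y = sigmoid(H @ W2 + b2)                  # outputs (4,)
    E = 0.5 * np.sum((y - T) ** 2)

    # Ordinary backprop gradients of E with respect to all weights and biases.
    d_out = (y - T) * y * (1.0 - y)
    gW2 = H.T @ d_out
    gb2 = d_out.sum()
    d_hid = np.outer(d_out, W2) * H * (1.0 - H)
    gW1 = X.T @ d_hid
    gb1 = d_hid.sum(axis=0)
    gnorm = np.sqrt((gW1**2).sum() + (gb1**2).sum() + (gW2**2).sum() + gb2**2)
    return E, gnorm

for s in [1.0, 2.0, 4.0, 8.0, 16.0]:
    E, g = error_and_grad_norm(s)
    # The gradient norm shrinks as s grows but never reaches zero at finite s.
    print("scale %5.1f   error %.4f   grad norm %.3e" % (s, E, g))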
What you have is a corner of weight space that, once in, will not let you simply learn out of. A learning "black hole," it's something like a saddle point in weight space. By the way, it is therefore intuitively obvious why having more hidden units reduces the chances of getting stuck -- the more hidden units, the less likely they will *ALL* get stuck on the same side of the ridge. alexis wieland -- alexis%yummy at gateway.mitre.org From weili at wpi.wpi.edu Sun May 7 19:48:15 1989 From: weili at wpi.wpi.edu (Wei Li) Date: Sun, 7 May 89 19:48:15 edt Subject: topological structure recognition Message-ID: <8905072348.AA10961@wpi> Hi, I would like to know if there are some kinds of neural networks, which can be applied to recognize things with same topological structures but they are not rigid. Any references and comments are welcome. wei li EE. DEPT. Worcester Polytechnic Institute Worcester, MA 01609 (508) 755-2097 (H) e-mail address weili at wpi.wpi.edu From hollbach at cs.rochester.edu Mon May 8 13:11:01 1989 From: hollbach at cs.rochester.edu (Susan Weber) Date: Mon, 08 May 89 13:11:01 -0400 Subject: TR: direct inferences and figurative adjective-noun combinations Message-ID: <8905081711.AA09133@deneb.cs.rochester.edu> The following TR can be requested from peg at cs.rochester.edu. Please do not cc your request to the entire mailing list. A Structured Connectionist Approach to Direct Inferences and Figurative Adjective-Noun Combinations Susan Hollbach Weber University of Rochester Computer Science Department TR 289 Categories have internal structure sufficiently sophisticated to capture a variety of effects, ranging from the direct inferences arising from adjectival modification of nouns to the ability to comprehend figurative usages. The design of the internal structure of category representation is constrained by the model requirements of the connectionist implementation and by the observable behaviors exhibited in direct inferences. The former dictates the use of a spreading activation format, and the latter indicates some to the topology and connectivity of the resultant semantic network. The connectionist knowledge representation and inferencing scheme described in this report is based on the idea that categories and concepts are context sensitive and functionally structured. Each functional property value of a category motivates a distinct aspect of that category's internal structure. This model of cognition, as implemented in a structured connectionist knowledge representation system, permits the system to draw immediate inferences, and, when augmented with property inheritance mechanisms, mediated inferences about the full meaning of adjective-noun combinations. These inferences are used not only to understand the implicit references to correlated properties (a green peach is unripe) but also to make sense of figurative adjective uses, by drawing on the connotations of the adjective in literal contexts. 
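The flavour of the direct inferences mentioned above (a green peach being understood as unripe) can be conveyed by a toy spreading-activation network. The nodes, links and weights below are invented purely for illustration; they are not taken from Weber's model.

import numpy as np

nodes = ["peach", "green", "yellow", "unripe", "ripe", "sour", "sweet"]
idx = {n: i for i, n in enumerate(nodes)}
W = np.zeros((len(nodes), len(nodes)))

def link(a, b, w):
    # symmetric weighted link between two concept nodes
    W[idx[a], idx[b]] = W[idx[b], idx[a]] = w

link("peach", "ripe", 0.4);   link("peach", "unripe", 0.2)
link("green", "unripe", 0.8); link("yellow", "ripe", 0.8)
link("unripe", "sour", 0.7);  link("ripe", "sweet", 0.7)
link("unripe", "ripe", -0.9)  # rival property values inhibit each other

a = np.zeros(len(nodes))
clamped = {"peach": 1.0, "green": 1.0}   # the adjective-noun combination

for _ in range(30):                       # let activation spread and settle
    a = np.clip(0.8 * a + 0.2 * (W @ a), 0.0, 1.0)
    for n, v in clamped.items():
        a[idx[n]] = v

for n in nodes:
    print("%-7s %.2f" % (n, a[idx[n]]))
# "unripe" (and, via it, "sour") ends up active; "ripe" and "sweet" stay off.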
From hollbach at cs.rochester.edu Tue May 9 11:04:50 1989 From: hollbach at cs.rochester.edu (Susan Weber) Date: Tue, 09 May 89 11:04:50 -0400 Subject: please reconfirm TR orders Message-ID: <8905091504.AA00271@birch.cs.rochester.edu> Due to the cost of copying the 170 page report, the Computer Science Department is charging $7.50 for the TR A Structured Connectionist Approach to Direct Inferences and Figurative Adjective-Noun Combinations Susan Hollbach Weber Computer Science Department TR 289 University of Rochester So, if you have already ordered this TR from peg at cs.rochester.edu, please reconfirm your order, and you will be sent the report with a bill for $7.50. Thanks. From sereno%cogsci at ucsd.edu Tue May 9 16:13:50 1989 From: sereno%cogsci at ucsd.edu (Marty Sereno) Date: Tue, 9 May 89 13:13:50 PDT Subject: please reconfirm TR orders Message-ID: <8905092013.AA20300@cogsci.UCSD.EDU> From harris%cogsci at ucsd.edu Tue May 9 23:51:02 1989 From: harris%cogsci at ucsd.edu (Catherine Harris) Date: Tue, 9 May 89 20:51:02 PDT Subject: Report available Message-ID: <8905100351.AA25861@cogsci.UCSD.EDU> CONNECTIONIST EXPLORATIONS IN COGNITIVE LINGUISTICS Catherine L. Harris Department of Psychology and Program in Cognitive Science University of California, San Diego Abstract: Linguists working in the framework of cognitive linguistics have suggested that connectionist networks may provide a computational formalism well suited for the implementation of their theories. The appeal of these networks include the ability to extract the family resemblance structure inhering in a set of input patterns, to represent both rules and exceptions, and to integrate multiple sources of information in a graded fashion. The possible matches between cognitive linguistics and connectionism were explored in an implementation of the Brugman and Lakoff (1988) analysis of the diverse meanings of the preposition "over." Using a gradient-descent learning procedure, a network was trained to map patterns of the form "trajector verb (over) landmark" to feature-vectors representing the appropriate meaning of "over." Each word was identified as a unique item, but was not further semantically specified. The pattern set consisted of a distribution of form-meanings pairs that was meant to be evocative of English usage, in that the regularities implicit in the distribution spanned the spectrum from rules, to partial regularities, to exceptions. Under pressure to encode these regularities with limited resources, the nework used one hidden layer to recode the inputs into a set of abstract properties. Several of these categories, such as dimensionality of the trajector and vertical height of the landmark, correspond to properties B&L found to be important in determining which schema a given use of "over" evokes. This abstract recoding allowed the network to generalize to patterns outside the training set, to activate schemas to partial patterns, and to respond sensibly to "metaphoric" patterns. Furthermore, a second layer of hidden units self-organized into clusters which capture some of the qualities of the radial categories described by B&L. The paper concludes by describing the "rule-analogy continuum". Connectionist models are interesting systems for cognitive linguistics because they provide a mechanism for exploiting all points of this continuum. A short version of this paper will be published in The Proceedings of the Fifteenth Annual Meeting of the Berkeley Linguistics Society, 1989. 
Send requests to: harris%cogsci.ucsd.edu From watrous at ai.toronto.edu Mon May 15 12:28:12 1989 From: watrous at ai.toronto.edu (Raymond Watrous) Date: Mon, 15 May 89 12:28:12 EDT Subject: GRADSIM Simulator Update Message-ID: <89May15.122916edt.11248@ephemeral.ai.toronto.edu> Subject: GRADSIM Connectionist Network Simulator Version 1.7 Corrections: A bug in the line_search module was discovered and repaired; the bug could result in a line search which was not closed. Enhancements: The line_search algorithm was enhanced to conform to the one described in R. Fletcher, Practical Optimization (2nd edition), 1987. The absolute value of the slope is now used in the termination test; this results in termination being more carefully controlled by the slope criterion parameter. Testing: The line_search bug was confirmed in a test of the BFGS algorithm on the Rosenbrock function from 100 random starting points. The bug was evident in 18 cases. The correction of the bug was confirmed for these same 100 starting points. The enhanced algorithm was also tested and required slightly fewer iterations than the corrected algorithm. Checking: Two test results are now included in the archive for checking the simulator. 1. The results of the BFGS algorithm on the Rosenbrock function from the start point (-1.2, 1.0) are listed. This checks the bfgs and line_search modules. 2. The results of several iterations of the BFGS algorithm on a simple speech example. This checks the speech I/O, function and gradient evaluation modules. Make files are included in the archive for generating the simulator for these checks. Documentation: The GRADSIM simulator is briefly described in the University of Pennsylvania Tech Report MS-CIS-88-16, GRADSIM: A connectionist network simulator using gradient optimization techniques. An excerpt of the relevant part of this report is now included in the gradsim archive. Access: The GRADSIM simulator may be obtained in compressed tar format via anonymous ftp from linc.cis.upenn.edu as /pub/gradsim.tar.Z. The simulator may also be obtained in uuencoded tar format by email from carol at ai.toronto.edu. From mjw at CS.CMU.EDU Mon May 15 14:13:35 1989 From: mjw at CS.CMU.EDU (Michael Witbrock) Date: Mon, 15 May 89 14:13:35 -0400 (EDT) Subject: Found Volunteer. Message-ID: Ignore last message; meant for local mailing list. Even list maintainers mess up sometimes. michael (connectionists-request) From mjw at CS.CMU.EDU Mon May 15 14:10:56 1989 From: mjw at CS.CMU.EDU (Michael Witbrock) Date: Mon, 15 May 89 14:10:56 -0400 (EDT) Subject: Found Volunteer. Message-ID: Wow, I already got a volunteer. My heartfelt thanks to Dave Plaut. michael From THEPCAP%SELDC52.BITNET at VMA.CC.CMU.EDU Wed May 17 13:00:00 1989 From: THEPCAP%SELDC52.BITNET at VMA.CC.CMU.EDU (THEPCAP%SELDC52.BITNET@VMA.CC.CMU.EDU) Date: Wed, 17 May 89 13:00 O Subject: Technical Report Available Message-ID: LU TP 89-1 A NEW METHOD FOR MAPPING OPTIMIZATION PROBLEMS ONTO NEURAL NETWORKS Carsten Peterson and Bo Soderberg Department of Theoretical Physics, University of Lund Solvegatan 14A, S-22362 Lund, Sweden Submitted to International Journal of Neural Systems ABSTRACT: A novel modified method for obtaining approximate solutions to difficult optimization problems within the neural network paradigm is presented. We consider the graph partition and the travelling salesman problems.
The key new ingredient is a reduction of solution space by one dimension by using graded neurons, thereby avoiding the destructive redundancy that has plagued these problems when using straightforward neural network techniques. This approach maps the problems onto Potts glass rather than spin glass theories. A systematic prescription is given for estimating the phase transition temperatures in advance, which facilitates the choice of optimal parameters. This analysis, which is performed for both serial and synchronous updating of the mean field theory equations, makes it possible to consistently avoid chaotic bahaviour. When exploring this new technique numerically we find the results very encouraging; the quality of the solutions are in parity with those obtained by using optimally tuned simulated annealing heuristics. Our numerical study, which extends to 200-city problems, exhibits an impressive level of parameter insensitivity. ---------- For copies of this report send a request to THEPCAP at SELDC52 [don't forget to give your mailing address]. From eric at mcc.com Wed May 17 15:29:47 1989 From: eric at mcc.com (Eric Hartman) Date: Wed, 17 May 89 14:29:47 CDT Subject: TR announcement Message-ID: <8905171929.AA20103@legendre.aca.mcc.com> The following technical report is now available. Requests may be sent to eric at mcc.com or via physical mail to the MCC address below. --------------------------------------------------------------- MCC Technical Report Number: ACT-ST-146-89 Optoelectronic Implementation of Multi-Layer Neural Networks in a Single Photorefractive Crystal Carsten Peterson*, Stephen Redfield, James D. Keeler, and Eric Hartman Microelectronics and Computer Technology Corporation 3500 W. Balcones Center Dr. Austin, TX 78759-6509 Abstract: We present a novel, versatile optoelectronic neural network architecture for implementing supervised learning algorithms in photorefractive materials. The system is based on spatial multiplexing rather than the more commonly used angular multiplexing of the interconnect gratings. This simple, single-crystal architecture implements a variety of multi-layer supervised learning algorithms including mean-field-theory, back-propagation, and Marr-Albus-Kanerva style algorithms. Extensive simulations show how beam depletion, rescattering, absorption, and decay effects of the crystal are compensated for by suitably modified supervised learning algorithms. *Present Address: Department of Theoretical Physics, University of Lund, Solvegatan 14A, S-22362 Lund, Sweden. From srilata at aquinas.csl.uiuc.edu Wed May 17 16:27:46 1989 From: srilata at aquinas.csl.uiuc.edu (Srilata Raman) Date: Wed, 17 May 89 15:27:46 CDT Subject: No subject Message-ID: <8905172027.AA12293@aquinas> I would like to have a copy of the report,"New Method for Mapping Optimization Problems onto Neural networks"-Peterson & Soderberg. email : srilata at aquinas.csl.uiuc.edu Postal add : Coordinated Science Lab, Univ. of Illinois at Urbana Champaign, Urbana, IL 61801 USA. I request that another copy be sent to : Prof.L.M.Patnaik, Dept.of Computer Science & Automation, Indian Institute of Science, Bangalore 560012 INDIA. Thanks. From weili at wpi Mon May 15 22:16:49 1989 From: weili at wpi (Wei Li) Date: Mon, 15 May 89 22:16:49 edt Subject: hand writing character recognition Message-ID: <8905160216.AA08056@wpi.wpi.edu> Hi, I would like to get some pointers to the work on hand writing characters recognition by neural networks. Thanks in advance. Wei Li EE. DEPT. 
WPI 100 Institute Road Worcester, MA 01609 e-mail: weili at wpi.wpi.edu From DSC%UMDC.BITNET at VMA.CC.CMU.EDU Thu May 18 12:18:03 1989 From: DSC%UMDC.BITNET at VMA.CC.CMU.EDU (DSC%UMDC.BITNET@VMA.CC.CMU.EDU) Date: Thu, 18 May 89 12:18:03 EDT Subject: Software request Message-ID: Does anybody know where I can find an implementation of the Boltzmann machine that learns, and runs on: (1) MS-DOS machine; (2) MacIntosh; or (3) Symbolics??? Thanks very much. Kathy Laskey Decision Science Consortium, Inc. From lina at ai.mit.edu Thu May 18 16:17:15 1989 From: lina at ai.mit.edu (Lina Massone) Date: Thu, 18 May 89 16:17:15 EDT Subject: No subject Message-ID: <8905182017.AA01655@gelatinosa.ai.mit.edu> Re: handwriting Pietro Morasso at the University of Genoa - Italy has been doing some work on handwriting with nn. He set up two models. The first is a Kohonen-like self organizing network, the second is a multi-layer perceptron. Here is his address. Pietro Morasso Dept. of Communication, Computer and System Sciences University of Genoa Via All'Opera Pia, 11a 16145 Genoa Italy EMAIL: mcvax!i2unix!dist.unige.it!piero at uunet.uu.net Lina Massone From GINDI%GINDI at Venus.YCC.Yale.Edu Fri May 19 09:54:00 1989 From: GINDI%GINDI at Venus.YCC.Yale.Edu (GINDI%GINDI@Venus.YCC.Yale.Edu) Date: Fri, 19 May 89 09:54 EDT Subject: Tech reports available Message-ID: The following two tech reports are now available. Please send requests to GINDI at VENUS.YCC.YALE.EDU or by physical mail to: Gene Gindi Yale University Department of Electrical Engineering P.O. Box 2157 , Yale Station New Haven, CT 06520 ------------------------------------------------------------------------------- Yale University, Dept. Electrical Engineering Center for Systems Science TR- 8903 Neural Networks for Object Recognition within Compositional Hierarches: Initial Experiments Joachim Utans, Gene Gindi * Dept. Electrical Engineering Yale University P.O. Box 2157, Yale Station New Haven CT 06520 *(to whom correspondence should be addressed) Eric Mjolsness, P. Anandan Dept. Computer Science Yale University New Haven CT 06520 Abstract We describe experiments with TLville, a neural-network for object recognition. The task is to recognize, in a translation-invariant manner, simple stick figures. We formulate the recognition task as the problem of matching a graph of model nodes to a graph of data nodes. Model nodes are simply user-specified labels for objects such as "vertical stick" or "t-junction"; data nodes are parameter vectors, such as (x,y,theta), of entities in the data. We use an optimization approach where an appropriate objective function specifies both the graph-matching problem and an analog neural net to carry out the optimization. Since the graph structure of the data is not known a priori; it must be computed dynamically as part of the optimization. The match metrics are model-specific and are invoked selectively, as part of the optimization, as various candidate matches of model-to-data occur. The network supports notions of abstraction in that the model nodes express compositional hierarchies involving object-part relationships. Also, a data node matched to an whole object contains a dynamically computed parameter vector which is an abstraction summarizing the parameters of data nodes matched to the constituent parts of the whole. Terms in the match metric specify the desired abstraction. 
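(As an aside for readers new to this style of network, the fragment below sketches the general idea of match "neurons" relaxed by an optimization loop. The toy graphs, the node-compatibility bonus and the softmax relaxation are invented for illustration; they are not the objective function or dynamics used in the report.)

import numpy as np

# Model graph: node 0 joined to nodes 1 and 2.  Data graph: node 2 joined to
# nodes 0 and 1.  The intended correspondence is therefore 0->2, 1->0, 2->1.
A_model = np.array([[0, 1, 1],
                    [1, 0, 0],
                    [1, 0, 0]], dtype=float)
A_data  = np.array([[0, 0, 1],
                    [0, 0, 1],
                    [1, 1, 0]], dtype=float)

# Node-level "match metric": a small bonus for the parameter-compatible pairs.
C = 0.3 * np.array([[0, 0, 1],
                    [1, 0, 0],
                    [0, 1, 0]], dtype=float)

M = np.full((3, 3), 1.0 / 3.0)   # soft match variables, one row per model node
beta = 2.0                       # sharpness of the softmax relaxation

for _ in range(50):
    # Support for match (i,j): how well model node i's neighbours map onto
    # data node j's neighbours under the current soft assignment, plus the
    # node-level compatibility term.
    support = C + A_model @ M @ A_data
    M = np.exp(beta * support)
    M /= M.sum(axis=1, keepdims=True)   # each model node commits (softly) to one data node

print(np.round(M, 2))
print("assignment:", M.argmax(axis=1))  # settles on the correspondence [2, 0, 1]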
In addition, a solution to the problem of computing a transformation from retinal to object-centered coordinates to support recognition is offered by this kind of network; the transformation is contained as part of the objective function in the form of the match metric. In experiments, the network usually succeeds in recognizing single or multiple instances of a single composite model amid instances of non-models, but it gets trapped in unfavorable local minima of the 5th-order objective when multiple composite objects are encoded in the database. ------------------------------------------------------------------------------- Yale University, Dept. Electrical Engineering Center for Systems Science TR- 8908 Stickville: A Neural Net for Object Recognition via Graph Matching Grant Shumaker School of Medicine, Yale University, New Haven, CT 06510 Gene Gindi Department of Electrical Engineering, Yale University P.O. Box 2157 Yale Station, New Haven,CT 06520 (to whom correspondence should be addressed) Eric Mjolsness, P.Anandan Department of Computer Science, Yale University, New Haven, CT 06510 Abstract An objective function for model-based object recognition is formulated and used to specify a neural network whose dynamics carry out the optimization, and hence the recognition task. Models are specified as graphs that capture structural properties of shapes to be recognized. In addition, compositional (INA) and specialization (ISA) hierarchies are imposed on the models as an aid to indexing and are represented in the objective function as sparse matrices. Data are also represented as a graph. The optimization is a graph-matching procedure whose dynamical variables are ``neurons'' hypothesizing matches between data and model nodes. The dynamics are specified as a third-order Hopfield-style network augmented by hard constraints implemented by ``Lagrange multiplier'' neurons. Experimental results are shown for recognition in Stickville, a domain of 2-D stick figures. For small databases, the network successfully recognizes both an object and its specialization. ---------------------------------------------- From mcvax!ai-vie!georg at uunet.UU.NET Fri May 19 11:37:42 1989 From: mcvax!ai-vie!georg at uunet.UU.NET (Georg Dorffner) Date: Fri, 19 May 89 14:37:42 -0100 Subject: conference announcement - EMCSR 1990 Message-ID: <8905191237.AA02167@ai-vie.uucp> Announcement and Call for Papers EMCSR 90 TENTH EUROPEAN MEETING ON CYBERNETICS AND SYSTEMS RESEARCH April 17-20, 1990 University of Vienna, Austria Session M: Parallel Distributed Processing in Man and Machine Chairs: D.Touretzky (Carnegie Mellon, Pittsburgh, PA) G.Dorffner (Vienna, Austria) Other Sessions at the meeting will be: A: General Systems Methodology B: Fuzzy Sets, Approximate Reasoning and Knowledge-based Systems C: Designing and Systems D: Humanity, Architecture and Conceptualization E: Cybernetics in Biology and Medicine F: Cybernetics in Socio-Economic Systems G: Workshop: Managing Change: Institutional Transition in the Private and Public Sector H: Innovation Systems in Management and Public Policy I: Systems Engineering and Artificial Intelligence for Peace Research J: Communication and Computers K: Software Development for Systems Theory L: Artificial Intelligence N: Impacts of Artificial Intelligence The conference is organized by the Austrian Society for Cybernetic Studies (chair: Robert Trappl). SUBMISSION OF PAPERS: For symposium M, all contributions in the fields of PDP, connectionism, and neural networks are welcome. 
Acceptance of contributors will be determined on the basis of Draft Final Papers. These papers must not exceed 7 single-spaced A4 pages (maximum 50 lines, final size will be 8.5 x 6 inches), in English. They have to contain the final text to be submitted, however, graphs and pictures need not be of reproducible quality. The Draft Final Paper must carry the title, author(s) name(s), and affiliation in this order. Please specify the symposium in which you would like to present the paper (one of the letters above). Each scientist shall submit only 1 paper. Please send t h r e e copies of the Draft Final Paper to: EMCSR 90 - Conference Secretariat Austrian Society for Cybernetic Studies Schottengasse 3 A-1010 Vienna, Austria Deadline for submission: Oct 15, 1989 Authors will be notified about acceptance no later than Nov 20, 1989. They will then be provided with the detailed instructions for the preperation of the Final Paper. Proceedings containing all accepted papers will be printed. For further information write to the above address, call +43 222 535 32 810, or send email to: sec at ai-vie.uucp Questions concerning symposium M (Parallel Distributed Processing) can be directed to Georg Dorffner (same address as secretariat), email: georg at ai-vie.uucp From rich at gte.com Fri May 19 15:01:15 1989 From: rich at gte.com (Rich Sutton) Date: Fri, 19 May 89 15:01:15 EDT Subject: TD Model of Conditioning -- Paper Announcement Message-ID: <8905191901.AA18756@bunny.gte.com> Andy Barto and I have just completed a major new paper relating temporal-difference learning, as used, for example, in our pole-balancing learning controller, to classical conditioning in animals. The paper will appear in the forthcoming book ``Learning and Computational Neuroscience,'' edited by J.W. Moore and M. Gabriel, MIT Press. A preprint can be obtained by emailing to rich%gte.com at relay.cs.net with your physical-mail address. The paper has no abstract, but begins as follows: TIME-DERIVATIVE MODELS OF PAVLOVIAN REINFORCEMENT Richard S. Sutton GTE Laboratories Incorporated Andrew G. Barto University of Massachusetts This chapter presents a model of classical conditioning called the temporal-difference (TD) model. The TD model was originally developed as a neuron-like unit for use in adaptive networks (Sutton & Barto, 1987; Sutton, 1984; Barto, Sutton & Anderson, 1983). In this paper, however, we analyze it from the point of view of animal learning theory. Our intended audience is both animal learning researchers interested in computational theories of behavior and machine learning researchers interested in how their learning algorithms relate to, and may be constrained by, animal learning studies. We focus on what we see as the primary theoretical contribution to animal learning theory of the TD and related models: the hypothesis that reinforcement in classical conditioning is the time derivative of a composite association combining innate (US) and acquired (CS) associations. We call models based on some variant of this hypothesis ``time-derivative models'', examples of which are the models by Klopf (1988), Sutton & Barto (1981a), Moore et al (1986), Hawkins & Kandel (1984), Gelperin, Hopfield & Tank (1985), Tesauro (1987), and Kosko (1986); we examine several of these models in relation to the TD model. We also briefly explore relationships with animal learning theories of reinforcement, including Mowrer's drive-induction theory (Mowrer, 1960) and the Rescorla-Wagner model (Rescorla & Wagner, 1972). 
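In its most generic form, the time-derivative idea reads: the effective reinforcement at each time step is the US signal plus the change in the (discounted) composite CS-based prediction, and each CS association moves in proportion to that reinforcement times a trace of the CS. The sketch below only illustrates that general form; the trial structure, trace and parameter values are assumptions, not those used in the chapter.

import numpy as np

alpha, gamma, trace_decay = 0.1, 0.95, 0.8
V = np.zeros(1)                    # associative strength of a single CS

def run_trial(V):
    T = 30
    x = np.zeros((T, 1)); x[5:15, 0] = 1.0   # CS on from t=5 to t=14
    lam = np.zeros(T);    lam[14] = 1.0      # US delivered at the end of the CS
    xbar = np.zeros(1)                       # stimulus (eligibility) trace
    y_prev = 0.0
    for t in range(T):
        y = max(0.0, float(V @ x[t]))        # composite CS-based prediction
        delta = lam[t] + gamma * y - y_prev  # time-derivative reinforcement
        V = V + alpha * delta * xbar         # strengths move with the trace
        xbar = trace_decay * xbar + (1.0 - trace_decay) * x[t]
        y_prev = y
    return V

for n in range(1, 81):
    V = run_trial(V)
    if n % 20 == 0:
        print("after trial %2d: V = %.3f" % (n, float(V[0])))
# V climbs over trials towards an asymptote: the CS acquires a positive
# association from the temporally paired US.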
In this paper, we systematically analyze the inter-stimulus interval (ISI) dependency of time-derivative models, using realistic stimulus durations and both forward and backward CS--US intervals. The models' behaviors are compared with the empirical data for rabbit eyeblink (nictitating membrane) conditioning. We find that our earlier time-derivative model (Sutton & Barto, 1981a) has significant problems reproducing features of these data, and we briefly explore partial solutions in subsequent time-derivative models proposed by Moore et al. (1986), Klopf (1988), and Gelperin et al. (1985). The TD model was designed to eliminate these problems by relying on a slightly more complex time-derivative theory of reinforcement. In this paper, we motivate and explain this theory from the point of view of animal learning theory, and show that the TD model solves the ISI problems and other problems with simpler time-derivative models. Finally, we demonstrate the TD model's behavior in a range of conditioning paradigms including conditioned inhibition, primacy effects (Egger & Miller, 1962), facilitation of remote associations, and second-order conditioning. From gardner%dendrite at boulder.Colorado.EDU Tue May 23 12:25:12 1989 From: gardner%dendrite at boulder.Colorado.EDU (Phillip Gardner) Date: Tue, 23 May 89 10:25:12 MDT Subject: simulators using X11 Message-ID: <8905231625.AA10181@dendrite> Does anyone know of a connectionist simulator which uses X11 for its graphical interface? Thanks for any help you can give! Phil Gardner gardner at boulder.colorado.edu From french at cogsci.indiana.edu Tue May 23 12:41:17 1989 From: french at cogsci.indiana.edu (Bob French) Date: Tue, 23 May 89 11:41:17 EST Subject: Subcognition and the Limits of the Turing Test Message-ID: A pre-print of an article on subcognition and the Turing Test to appear in MIND: "Subcognition and the Limits of the Turing Test" Robert M. French Center for Research on Concepts and Cognition Indiana University Ostensibly a philosophy paper (to appear in MIND at the end of this year), this article is of special interest to connectionists. It argues that: i) as a REAL test for intelligence, the Turing Test is inappropriate in spite of arguments by some philosophers to the contrary; ii) only machines that have experienced the world as we have could pass the Test. This means that such machines would have to learn about the world in approximately the same way that we humans have -- by falling off bicycles, crossing streets, smelling sewage, tasting strawberries, etc. This is not a statement about the inherent inability of a computer to achieve intelligence, it is rather a comment about the use of the Turing Test as a means of testing for that intelligence; iii) (especially for connectionists) the physical, subcognitive and cognitive levels are INEXTRICABLY interwoven and it is impossible to tease them apart. This is ultimately the reason why no machine that had not experienced the world as we had could ever pass the Turing Test. The heart of the discussion of these issues revolves around humans' use of a vast associative network of concepts that operates, for the most part, below cognitive perceptual thresholds and that has been acquired over a lifetime of experience with the world. The Turing Test tests for the presence or absence of this HUMAN associative concept network, which explains why it would be so difficult -- although not theoretically impossible -- for any machine to pass the Test. 
This paper shows how a clever interrogator could always "peek behind the screen" to unmask a computer that had not experienced the world as we had by exploiting human abilities based on the use of this vast associative concept network, for example, our abilities to analogize and to categorize; This paper is short and non-technical but nevertheless focuses on issues that are of significant philosophical importance to AI researchers, and to connectionists in particular. If you would like a copy, please send your name and address to: Helga Keller C.R.C.C. 510 North Fess Bloomington, Indiana 47401 or send an e-mail request to helga at cogsci.indiana.edu - Bob French french at cogsci.indiana.edu From Dave.Touretzky at B.GP.CS.CMU.EDU Fri May 26 00:57:40 1989 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Fri, 26 May 89 00:57:40 EDT Subject: NIPS and Memorial Day Message-ID: <400.612161860@DST.BOLTZ.CS.CMU.EDU> A lot of us head-in-the-clouds research types have just found out that Monday, May 29, is Memorial Day, and there will be no FedEx or Express Mail pickups that day. But the submission deadline for NIPS abstracts is Tuesday, May 30. Technically I suppose that means we should send out our abstracts today instead of working on them over the weekend. I don't think that was the intent of the organizing committee, though. Everybody knows you set a Tuesday deadline so people can make the Monday afternoon FedEx pickup. Therefore, I intend to send a batch of CMU abstracts off to Kathie Hibbard on Tuesday, with a short note apologizing for their arriving a day late. Since the NIPS organizing committee is composed of such eminently reasonable and sympathetic folks, there will surely be no problem. Other people out there are apparently having similar concerns with the surprise appearance of Memorial Day, so I thought I'd post this note as as suggested remedy. There's a cc: to Kathie so she'll know to expect a few more FedEx packets on the 31st. -- Dave From KIRK at ibm.com Fri May 26 11:54:20 1989 From: KIRK at ibm.com (Scott Kirkpatrick) Date: 26 May 89 11:54:20 EDT Subject: No subject Message-ID: <052689.115420.kirk@ibm.com> Re: NIPS and Memorial Day ***** Reply to your mail of: 05/26 02:14:22 ************************** I have a confession to make. My head was sufficiently in the clouds when I selected a deadline for submission of abstracts and summaries for NIPS89 that I didn't realize that May has 31 days! It does, I have since learned, and I offer the extra day to all those who will be spending the Memorial Day weekend finishing their NIPS submissions. From Q89%DHDURZ1.BITNET at VMA.CC.CMU.EDU Tue May 30 11:08:39 1989 From: Q89%DHDURZ1.BITNET at VMA.CC.CMU.EDU (Gabriele Scheler) Date: Tue, 30 May 89 11:08:39 CET Subject: mailing-list inclusion Message-ID: Dear "connectionists", I am a computational linguist in Heidelberg and I am experimenting with connectionist models from time to time (disambiguation as constraint satis- faction). I would like to be a recipient of the mail that you send to in- terested persons. My e-mail address is included in the header, my postal address (in case you need it): Dr Gabriele Scheler Lehrstuhl fuer Computerlinguistik Karlstr. 
2 6900 Heidelberg Federal Republic of Germany Best regards, Gabriele Scheler From grumbach at ulysse.enst.fr Tue May 30 11:34:56 1989 From: grumbach at ulysse.enst.fr (Alain Grumbach) Date: Tue, 30 May 89 17:34:56 +0200 Subject: supervised learning Message-ID: <8905301534.AA17056@ulysse.enst.fr> Often I feel troubled while reading the phrases "supervised learning" or "unsupervised learning". What do they actually mean? Usually, they refer to the fact that, for learning purposes, the network outputs are given by the "teacher". But learning is a FUNCTION from EXAMPLE DATA to WEIGHTS. The information needed for the example data may include network inputs, or inputs and associated outputs. These outputs are input data for the learning phase. From this point of view, what would "unsupervised learning" be? I would put forward the idea that unsupervised learning takes place when the example data are not given by any teacher, i.e. when the system chooses (perhaps randomly) its examples within the example space. In human learning, there are many such cases: games, common skills (riding a bicycle) ... In the symbolic learning domain, for instance, the LEX system (Mitchell) designs itself the examples it will process. In neural networks, such unsupervised learning is rare. Why? Lastly, if we keep this idea, what should we call the kind of learning that the phrase "supervised learning" used to denote? I would put forward the expression "weak(ly) supervised learning". Summary: this is a proposal for more precise phrasing: - unsupervised learning: without any teacher information - weak(ly) supervised learning: the teacher gives input vectors only - (fully) supervised learning: the teacher gives input and output vectors. What is your feeling about this? grumbach @ ulysse.enst.fr (Alain GRUMBACH ENST Dept INF 46 rue Barrault 75634 PARIS Cedex 13 FRANCE) From Dave.Touretzky at B.GP.CS.CMU.EDU Wed May 31 21:53:16 1989 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Wed, 31 May 89 21:53:16 EDT Subject: tech report available Message-ID: <5456.612669196@DST.BOLTZ.CS.CMU.EDU> CONNECTIONISM AND COMPOSITIONAL SEMANTICS David S. Touretzky School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3890 Technical report CMU-CS-89-147 May 1989 Abstract: Quite a few interesting experiments have been done applying neural networks to natural language tasks. Without detracting from the value of these early investigations, this paper argues that current neural network architectures are too weak to solve anything but toy language problems. Their downfall is the need for ``dynamic inference,'' in which several pieces of information not previously seen together are dynamically combined to derive the meaning of a novel input. The first half of the paper defines a hierarchy of classes of connectionist models, from categorizers and associative memories to pattern transformers and dynamic inferencers. Some well-known connectionist models that deal with natural language are shown to be either categorizers or pattern transformers. The second half examines in detail a particular natural language problem: prepositional phrase attachment. Attaching a PP to an NP changes its meaning, thereby influencing other attachments. So PP attachment requires compositional semantics, and compositionality in non-toy domains requires dynamic inference. Mere pattern transformers cannot learn the PP attachment task without an exponential training set.
Connectionist-style computation still has many valuable ideas to offer, so this is not an indictment of connectionism's potential. It is an argument for a more sophisticated and more symbolic connectionist approach to language. An earlier version of this paper appeared in the Proceedings of the 1988 Connectionist Models Summer School. ================ TO ORDER COPIES of this tech report: send electronic mail to copetas at cs.cmu.edu, or write the School of Computer Science at the address above. ** Do not use your mailer's "reply" command. **
Lafayette, IN 47907 4555 Overlook Ave., SE (317) 494-6162 Washington, DC 20375 E-Mail: ersoy at ee.ecn.purdue (202) 767-2407 From netlist at psych.Stanford.EDU Tue May 2 10:46:57 1989 From: netlist at psych.Stanford.EDU (Mark Gluck) Date: Tue, 2 May 89 07:46:57 PDT Subject: TODAY (5/2): Bruce McNaughton, Network Model of Hippocampus Message-ID: REMINDER TODAY: Stanford University Interdisciplinary Colloquium Series: Adaptive Networks and their Applications May 2nd (Tuesday, 3:30pm): ******************************************************************************** Hebb-Steinbuch-Marr Networks and the Role of Movement in Hippocampal Representations of Spatial Relations Bruce L. McNaughton Dept. of Psychology University of Colorado Campus Box 345 Boulder, CO 80309 ******************************************************************************** Room 380-380C (Rear courtyard behind Psych & Math Bldgs.) From daugman%charybdis at harvard.harvard.edu Tue May 2 10:56:56 1989 From: daugman%charybdis at harvard.harvard.edu (j daugman) Date: Tue, 2 May 89 10:56:56 EDT Subject: Vision and Image Analysis Message-ID: Request for Technical Reports and Papers (Second Request) In preparation for upcoming Reviews and Tutorials at 1989 Conferences, I would be grateful to receive copies of any papers or technical reports pertaining to applications of neural nets to vision and image analysis. (This repeats an earlier request sent out in February.) Please send any material to the following address. Thank you in advance. John Daugman 950 William James Hall Harvard University Cambridge, Mass. 02138 From mv10801 at uc.msc.umn.edu Wed May 3 15:25:17 1989 From: mv10801 at uc.msc.umn.edu (mv10801@uc.msc.umn.edu) Date: Wed, 3 May 89 14:25:17 CDT Subject: Share hotel room at IJCNN? Message-ID: <8905031925.AA14952@uc.msc.umn.edu> I would like to find a roommate to share at hotel room with at the IJCNN conference in Washington DC in June. Male non-smoker preferred. If you're interested, please contact me by e-mail or by phone. Thanks! --Jonathan Marshall mv10801 at uc.msc.umn.edu Center for Research in Learning, Perception, and Cognition 205 Elliott Hall University of Minnesota 612-331-6919 (eve/weekend/msg) Minneapolis, MN 55455 612-626-1565 (office) From hendler at icsib9.Berkeley.EDU Wed May 3 17:29:29 1989 From: hendler at icsib9.Berkeley.EDU (James Hendler) Date: Wed, 3 May 89 14:29:29 PDT Subject: sort of connectionist: Message-ID: <8905032129.AA03381@icsib9.> CALL FOR PAPERS CONNECTION SCIENCE (Journal of Neural Computing, Artificial Intelligence and Cognitive Research) Special Issue -- HYBRID SYMBOLIC/CONNECTIONIST SYSTEMS Connectionism has recently seen a major resurgence of interest among both artificial intelligence and cognitive science researchers. The spectrum of connectionist approaches is quite large, ranging from structured models, in which individual network units carry meaning, through distributed models of weighted networks with learning algorithms. Very encouraging results, particularly in ``low-level'' perceptual and signal processing tasks, are being reported across the entire spectrum of these models. Unfortunately, connectionist systems have had more limited success in those ``higher cognitive'' areas where symbolic models have traditionally shown promise: expert reasoning, planning, and natural language processing. While it may not be inherently impossible for purely connectionist approaches to handle complex reasoning tasks someday, it will require significant breakthroughs for this to happen. 
Similarly, getting purely symbolic systems to handle the types of perceptual reasoning that connectionist networks perform well would require major advances in AI. One approach to the integration of connectionist and symbolic techniques is the development of hybrid reasoning systems in which differing components can communicate in the solving of problems. This special issue of the journal Connection Science will focus on the state of the art in the development of such hybrid reasoners. Papers are solicited which focus on: Current artificial intelligence systems which use connectionist components in the reasoning tasks they perform. Theoretical or experimental results showing how symbolic computations can be implemented in, or augmented by, connectionist components. Cognitive studies which discuss the relationship between functional models of higher level cognition and the ``lower level'' implementations in the brain. The special issue will give special consideration to papers sharing the primary emphases of the Connection Science Journal which include: 1) Replicability of Results: results of simulation models should be reported in such a way that they are repeatable by any competent scientist in another laboratory. The journal will be sympathetic to the problems that replicability poses for large complex artificial intelligence programs. 2) Interdisciplinary research: the journal is by nature multidisciplinary and will accept articles from a variety of disciplines such as psychology, cognitive science, computer science, language and linguistics, artificial intelligence, biology, neuroscience, physics, engineering and philosophy. It will particularly welcome papers which deal with issues from two or more subject areas (e.g. vision and language). Papers submitted to the special issue will also be considered for publication in later editions of the journal. All papers will be refereed. The expected publication date for the special issue is Volume 2(1), March, 1990. DEADLINES: Submission of papers June 15, 1989 Reviews/decisions September 30, 1989 Final rewrites due December 15, 1989. Authors should send four copies of the article to: Prof. James A. Hendler Associate Editor, Connection Science Dept. of Computer Science University of Maryland College Park, MD 20742 USA Those interested in submitting articles are welcome to contact the editor via e-mail (hendler at brillig.umd.edu - US Arpa or CSnet) or in writing at the above address. From neilson%cs at ucsd.edu Wed May 3 18:42:16 1989 From: neilson%cs at ucsd.edu (Robert Hecht-Nielsen) Date: Wed, 3 May 89 15:42:16 PDT Subject: Volunteers Wanted for IJCNN Message-ID: <8905032242.AA15364@odin.UCSD.EDU> Request for volunteers for the upcoming International Conference on Neural Networks (IJCNN) June 18 - June 22 Requirements: In order to receive full admission to conference and the proceedings, you are required to work June 19 - June 22, one shift each day. On June 18 there will be tutorials presented all day. In order to see a tutorial, you must work that tutorial. See the information below on what tutorials are being presented. Shifts: There are 3 shifts: Morning, afternoon and evening. It is best that you work the same shift each day. Volunteers are organized into groups and you will, more than likely, be working with the same group each day. This allows at great deal of flexibility for everyone. If there is a paper being presented at the time of your shift, you can normally work it out with your group to see it. 
Last year I had no complaints from any of the volunteers regarding missing a paper which they wanted to view. Tutorials: The following tutorials are being presented: 1) Pattern Recognition - Prof. David Casasent 2) Adaptive Pattern Recognition - Prof. Leon Cooper 3) Vision - Prof. John Daugman 4) Neurobiology Review - Dr. Walter Freeman 5) Adaptive Sensory Motor Control - Prof. Stephen Grossberg 6) Dynamical Systems Review - Prof. Morris Hirsch 7) Neural Nets - Algorithms & Microhardware - Prof. John Hopfield 8) VLSI Technology and Neural Network Chips - Dr. Larry Jackel 9) Self-Organizing Feature Maps - Teuvo Kohonen 10) Associative Memory - Prof. Bart Kosko 11) Optical Neurocomputers - Prof. Demetri Psaltis 12) Starting a High-Tech Company - Peter Wallace 13) LMS Techniques in Neural Networks - Prof. Bernard Widrow 14) Reinforcement Learning - Prof. Ronald Williams If you want to work the tutorials, please return to me your preferences from 1 to 14 (1 being the one you want to see the most). Housing: Guest housing is available at the University of Maryland. It is about 30 minutes away from the hotel, but Washington D.C. has a great "metro" system to get you to and from the conference. The cost of housing per night is $16.50 per person for a double room, or $22.50 for a single room. I will be getting more information on this, but you need to sign up as soon as possible, as these prices are quite reasonable for the area and the rooms will go quickly. General Meeting: A general meeting is scheduled at the hotel on Saturday, June 17, around 6:00 pm. You must attend this meeting! If there is a problem with you not being able to make the meeting, I need to know about it. When you contact me to commit yourself officially, I will need from you the following: 1) shift preference 2) tutorial preferences 3) housing preference (University Housing?) To expedite things, I can be contacted at work at (619) 573-7391 during 7:00am-2:00pm west coast time. You may also leave a message on my home phone (619) 942-2843. Thank-you, Karen G. Haines IJCNN Tutorials Chairman neilson%cs at ucsd.edu From isabelle at neural.att.com Fri May 5 09:26:33 1989 From: isabelle at neural.att.com (isabelle@neural.att.com) Date: Fri, 5 May 89 09:26:33 EDT Subject: No subject Message-ID: <8905051325.AA29273@neural.UUCP> ....................................................................... ...............--- SPEECH --- SPEECH --- SPEECH ---.............. ....................................................................... I would like to know who has been working on the DARPA data base for digit recognition. I am interested in the results that can be obtained by the different methods, including the connectionist methods. Contact Isabelle Guyon, AT&T Bell Labs, Holmdel, NJ 07733 (USA), (201) 949 3220, email isabelle at neural.att.com. Thanks in advance. Isabelle. From alexis%yummy at gateway.mitre.org Fri May 5 16:38:15 1989 From: alexis%yummy at gateway.mitre.org (Alexis Wieland) Date: Fri, 5 May 89 16:38:15 EDT Subject: Local Minima and XOR Message-ID: <8905052038.AA23517@yummy.mitre.org> I apologize for this rather late reply to a note; I was on vacation. In a note about searching weight spaces for local minima (I'm afraid I don't remember the name; it was from UCSD and was part of a NIPS paper) it was noted that they found local min's in a network for computing sin(). There was surprise, though, that there weren't any local min's for the strictly layered 2-in -> 2 -> 1-out XOR net.
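A minimal numerical sketch of that strictly layered 2-2-1 net (in Python, with made-up, purely hypothetical weights; not part of the original posting) illustrates the point developed below: when both hidden units commit to the same diagonal half-plane, three of the four XOR patterns come out right and the finite-difference gradient is small but nonzero, so the configuration is a trap without being a true zero-gradient minimum.

# Illustrative sketch only: a 2-2-1 sigmoid network for XOR evaluated at a
# "both hidden units on the same side of the ridge" configuration.  Three of
# the four patterns are classified correctly, yet the numerically estimated
# gradient is nonzero, so this is not a stationary point of the error.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])                # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, x):
    # w packs (W1: 2x2, b1: 2, W2: 2, b2: scalar) into one flat vector
    W1 = w[0:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];               b2 = w[8]
    h = sigmoid(x @ W1.T + b1)                    # hidden layer
    return sigmoid(h @ W2 + b2)                   # output unit

def error(w):
    y = np.array([forward(w, x) for x in X])
    return 0.5 * np.sum((y - T) ** 2)

def num_grad(w, eps=1e-6):
    g = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps; wm[i] -= eps
        g[i] = (error(wp) - error(wm)) / (2 * eps)
    return g

# Hypothetical "stuck" weights: both hidden units compute the same diagonal
# half-plane x1 + x2 > 0.5, so only 3 of the 4 points can be separated.
stuck = np.array([ 6.0, 6.0,   6.0, 6.0,          # W1 rows (identical)
                  -3.0, -3.0,                     # b1
                   8.0, 8.0,                      # W2
                  -4.0])                          # b2
print("error      :", error(stuck))
print("|gradient| :", np.linalg.norm(num_grad(stuck)))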
Many people have complained about getting caught in local min with this network .... It would seem that if you were trying to find a point where the gradient literally went to zero, you couldn't find it ... because it's out at infinity. That network (when it "works") creates a ridge (or valley) diagonally through the 2D input space. If both of the hidden units get stuck creating the same side of the ridge then only 3 of the 4 points are correctly classified ... you're in the dastardly "local min." But since you can still get those three points more accurate (closer to 1 or 0) by pushing the arc values towards infinity, you'll always have a non-zero gradient. What you have is a corner of weight space that, once in, will not let you simply learn out of. A learning "black hole," it's something like a saddle point in weight space. By the way, it is therefore intuitively obvious why having more hidden units reduces the chances of getting stuck -- the more hidden units, the less likely they will *ALL* get stuck on the same side of the ridge. alexis wieland -- alexis%yummy at gateway.mitre.org From weili at wpi.wpi.edu Sun May 7 19:48:15 1989 From: weili at wpi.wpi.edu (Wei Li) Date: Sun, 7 May 89 19:48:15 edt Subject: topological structure recognition Message-ID: <8905072348.AA10961@wpi> Hi, I would like to know if there are any kinds of neural networks that can be applied to recognize things that have the same topological structure but are not rigid. Any references and comments are welcome. wei li EE. DEPT. Worcester Polytechnic Institute Worcester, MA 01609 (508) 755-2097 (H) e-mail address weili at wpi.wpi.edu From hollbach at cs.rochester.edu Mon May 8 13:11:01 1989 From: hollbach at cs.rochester.edu (Susan Weber) Date: Mon, 08 May 89 13:11:01 -0400 Subject: TR: direct inferences and figurative adjective-noun combinations Message-ID: <8905081711.AA09133@deneb.cs.rochester.edu> The following TR can be requested from peg at cs.rochester.edu. Please do not cc your request to the entire mailing list. A Structured Connectionist Approach to Direct Inferences and Figurative Adjective-Noun Combinations Susan Hollbach Weber University of Rochester Computer Science Department TR 289 Categories have internal structure sufficiently sophisticated to capture a variety of effects, ranging from the direct inferences arising from adjectival modification of nouns to the ability to comprehend figurative usages. The design of the internal structure of category representation is constrained by the model requirements of the connectionist implementation and by the observable behaviors exhibited in direct inferences. The former dictates the use of a spreading activation format, and the latter indicates some of the topology and connectivity of the resultant semantic network. The connectionist knowledge representation and inferencing scheme described in this report is based on the idea that categories and concepts are context sensitive and functionally structured. Each functional property value of a category motivates a distinct aspect of that category's internal structure. This model of cognition, as implemented in a structured connectionist knowledge representation system, permits the system to draw immediate inferences, and, when augmented with property inheritance mechanisms, mediated inferences about the full meaning of adjective-noun combinations.
These inferences are used not only to understand the implicit references to correlated properties (a green peach is unripe) but also to make sense of figurative adjective uses, by drawing on the connotations of the adjective in literal contexts. From hollbach at cs.rochester.edu Tue May 9 11:04:50 1989 From: hollbach at cs.rochester.edu (Susan Weber) Date: Tue, 09 May 89 11:04:50 -0400 Subject: please reconfirm TR orders Message-ID: <8905091504.AA00271@birch.cs.rochester.edu> Due to the cost of copying the 170-page report, the Computer Science Department is charging $7.50 for the TR A Structured Connectionist Approach to Direct Inferences and Figurative Adjective-Noun Combinations Susan Hollbach Weber Computer Science Department TR 289 University of Rochester So, if you have already ordered this TR from peg at cs.rochester.edu, please reconfirm your order, and you will be sent the report with a bill for $7.50. Thanks. From sereno%cogsci at ucsd.edu Tue May 9 16:13:50 1989 From: sereno%cogsci at ucsd.edu (Marty Sereno) Date: Tue, 9 May 89 13:13:50 PDT Subject: please reconfirm TR orders Message-ID: <8905092013.AA20300@cogsci.UCSD.EDU> From harris%cogsci at ucsd.edu Tue May 9 23:51:02 1989 From: harris%cogsci at ucsd.edu (Catherine Harris) Date: Tue, 9 May 89 20:51:02 PDT Subject: Report available Message-ID: <8905100351.AA25861@cogsci.UCSD.EDU> CONNECTIONIST EXPLORATIONS IN COGNITIVE LINGUISTICS Catherine L. Harris Department of Psychology and Program in Cognitive Science University of California, San Diego Abstract: Linguists working in the framework of cognitive linguistics have suggested that connectionist networks may provide a computational formalism well suited for the implementation of their theories. The appeal of these networks includes the ability to extract the family resemblance structure inhering in a set of input patterns, to represent both rules and exceptions, and to integrate multiple sources of information in a graded fashion. The possible matches between cognitive linguistics and connectionism were explored in an implementation of the Brugman and Lakoff (1988) analysis of the diverse meanings of the preposition "over." Using a gradient-descent learning procedure, a network was trained to map patterns of the form "trajector verb (over) landmark" to feature-vectors representing the appropriate meaning of "over." Each word was identified as a unique item, but was not further semantically specified. The pattern set consisted of a distribution of form-meaning pairs that was meant to be evocative of English usage, in that the regularities implicit in the distribution spanned the spectrum from rules, to partial regularities, to exceptions. Under pressure to encode these regularities with limited resources, the network used one hidden layer to recode the inputs into a set of abstract properties. Several of these categories, such as dimensionality of the trajector and vertical height of the landmark, correspond to properties B&L found to be important in determining which schema a given use of "over" evokes. This abstract recoding allowed the network to generalize to patterns outside the training set, to activate schemas to partial patterns, and to respond sensibly to "metaphoric" patterns. Furthermore, a second layer of hidden units self-organized into clusters which capture some of the qualities of the radial categories described by B&L. The paper concludes by describing the "rule-analogy continuum".
Connectionist models are interesting systems for cognitive linguistics because they provide a mechanism for exploiting all points of this continuum. A short version of this paper will be published in The Proceedings of the Fifteenth Annual Meeting of the Berkeley Linguistics Society, 1989. Send requests to: harris%cogsci.ucsd.edu From watrous at ai.toronto.edu Mon May 15 12:28:12 1989 From: watrous at ai.toronto.edu (Raymond Watrous) Date: Mon, 15 May 89 12:28:12 EDT Subject: GRADSIM Simulator Update Message-ID: <89May15.122916edt.11248@ephemeral.ai.toronto.edu> Subject: GRADSIM Connectionist Network Simulator Version 1.7 Corrections: A bug in the line_search module was discovered and repaired; the bug could result in a line search which was not closed. Enhancements: The line_search algorithm was enhanced to conform to the one described in R. Fletcher, Practical Methods of Optimization (2nd edition), 1987. The absolute value of the slope is now used in the termination test; this results in termination being more carefully controlled by the slope criterion parameter. Testing: The line_search bug was confirmed in a test of the BFGS algorithm on the Rosenbrock function from 100 random starting points. The bug was evident in 18 cases. The correction of the bug was confirmed for these same 100 starting points. The enhanced algorithm was also tested and required slightly fewer iterations than the corrected algorithm. Checking: Two test results are now included in the archive for checking the simulator. 1. The results of the BFGS algorithm on the Rosenbrock function from the start point (-1.2, 1.0) are listed. This checks the bfgs and line_search modules. 2. The results of several iterations of the BFGS algorithm on a simple speech example. This checks the speech I/O, function and gradient evaluation modules. Make files are included in the archive for generating the simulator for these checks. Documentation: The GRADSIM simulator is briefly described in the University of Pennsylvania Tech Report MS-CIS-88-16, GRADSIM: A connectionist network simulator using gradient optimization techniques. An excerpt of the relevant part of this report is now included in the gradsim archive. Access: The GRADSIM simulator may be obtained in compressed tar format via anonymous ftp from linc.cis.upenn.edu as /pub/gradsim.tar.Z. The simulator may also be obtained in uuencoded tar format by email from carol at ai.toronto.edu. From mjw at CS.CMU.EDU Mon May 15 14:13:35 1989 From: mjw at CS.CMU.EDU (Michael Witbrock) Date: Mon, 15 May 89 14:13:35 -0400 (EDT) Subject: Found Volunteer. Message-ID: Ignore last message; meant for local mailing list. Even list maintainers mess up sometimes. michael (connectionists-request) From mjw at CS.CMU.EDU Mon May 15 14:10:56 1989 From: mjw at CS.CMU.EDU (Michael Witbrock) Date: Mon, 15 May 89 14:10:56 -0400 (EDT) Subject: Found Volunteer. Message-ID: Wow, I already got a volunteer. My heartfelt thanks to Dave Plaut.
michael From THEPCAP%SELDC52.BITNET at VMA.CC.CMU.EDU Wed May 17 13:00:00 1989 From: THEPCAP%SELDC52.BITNET at VMA.CC.CMU.EDU (THEPCAP%SELDC52.BITNET@VMA.CC.CMU.EDU) Date: Wed, 17 May 89 13:00 O Subject: Technical Report Available Message-ID: LU TP 89-1 A NEW METHOD FOR MAPPING OPTIMIZATION PROBLEMS ONTO NEURAL NETWORKS Carsten Peterson and Bo Soderberg Department of Theoretical Physics, University of Lund Solvegatan 14A, S-22362 Lund, Sweden Submitted to International Journal of Neural Systems ABSTRACT: A novel modified method for obtaining approximate solutions to difficult optimization problems within the neural network paradigm is presented. We consider the graph partition and the travelling salesman problems. The key new ingredient is a reduction of solution space by one dimension by using graded neurons, thereby avoiding the destructive redundancy that has plagued these problems when using straightforward neural network techniques. This approach maps the problems onto Potts glass rather than spin glass theories. A systematic prescription is given for estimating the phase transition temperatures in advance, which facilitates the choice of optimal parameters. This analysis, which is performed for both serial and synchronous updating of the mean field theory equations, makes it possible to consistently avoid chaotic behaviour. When exploring this new technique numerically we find the results very encouraging; the quality of the solutions is on a par with that obtained by using optimally tuned simulated annealing heuristics. Our numerical study, which extends to 200-city problems, exhibits an impressive level of parameter insensitivity. ---------- For copies of this report send a request to THEPCAP at SELDC52 [don't forget to give your mailing address]. From eric at mcc.com Wed May 17 15:29:47 1989 From: eric at mcc.com (Eric Hartman) Date: Wed, 17 May 89 14:29:47 CDT Subject: TR announcement Message-ID: <8905171929.AA20103@legendre.aca.mcc.com> The following technical report is now available. Requests may be sent to eric at mcc.com or via physical mail to the MCC address below. --------------------------------------------------------------- MCC Technical Report Number: ACT-ST-146-89 Optoelectronic Implementation of Multi-Layer Neural Networks in a Single Photorefractive Crystal Carsten Peterson*, Stephen Redfield, James D. Keeler, and Eric Hartman Microelectronics and Computer Technology Corporation 3500 W. Balcones Center Dr. Austin, TX 78759-6509 Abstract: We present a novel, versatile optoelectronic neural network architecture for implementing supervised learning algorithms in photorefractive materials. The system is based on spatial multiplexing rather than the more commonly used angular multiplexing of the interconnect gratings. This simple, single-crystal architecture implements a variety of multi-layer supervised learning algorithms including mean-field-theory, back-propagation, and Marr-Albus-Kanerva style algorithms. Extensive simulations show how beam depletion, rescattering, absorption, and decay effects of the crystal are compensated for by suitably modified supervised learning algorithms. *Present Address: Department of Theoretical Physics, University of Lund, Solvegatan 14A, S-22362 Lund, Sweden.
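As a rough illustration of the graded (Potts) neurons and mean field updating described in the Peterson & Soderberg abstract above, here is a small sketch of mean-field annealing for graph partition. The particular energy terms, the balance coefficient alpha, and the annealing schedule are illustrative choices only, not the prescription given in the report.

# Illustrative sketch (not code from either report): K-way graph partition by
# mean-field annealing with graded "Potts" neurons.  Each vertex i carries a
# probability vector v[i] over the K parts, updated as a softmax of its local
# field while the temperature T is lowered.  The field combines a cut-size
# term with a balance penalty; alpha and the schedule are arbitrary here.
import numpy as np

rng = np.random.default_rng(0)

def potts_partition(A, K, alpha=1.0, T=2.0, cooling=0.95, sweeps=200):
    N = A.shape[0]
    v = rng.random((N, K))
    v /= v.sum(axis=1, keepdims=True)             # random initial assignments
    target = N / K                                # desired part size
    for _ in range(sweeps):
        for i in rng.permutation(N):              # serial updating
            field = A[i] @ v - alpha * (v.sum(axis=0) - v[i] - target)
            e = np.exp((field - field.max()) / T)
            v[i] = e / e.sum()                    # Potts (softmax) neuron
        T *= cooling
    return v.argmax(axis=1)                       # harden at the end

# Toy example: two 4-cliques joined by a single edge should split 4 / 4.
A = np.zeros((8, 8))
A[:4, :4] = 1; A[4:, 4:] = 1; A[0, 4] = A[4, 0] = 1
np.fill_diagonal(A, 0)
print(potts_partition(A, K=2))

The softmax keeps each vertex's K assignment values summing to one, so one degree of freedom per vertex disappears; that is the "reduction of solution space by one dimension" that the abstract attributes to graded neurons, as opposed to K independent 0/1 units per vertex.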
From srilata at aquinas.csl.uiuc.edu Wed May 17 16:27:46 1989 From: srilata at aquinas.csl.uiuc.edu (Srilata Raman) Date: Wed, 17 May 89 15:27:46 CDT Subject: No subject Message-ID: <8905172027.AA12293@aquinas> I would like to have a copy of the report, "New Method for Mapping Optimization Problems onto Neural Networks" - Peterson & Soderberg. email: srilata at aquinas.csl.uiuc.edu Postal add: Coordinated Science Lab, Univ. of Illinois at Urbana Champaign, Urbana, IL 61801 USA. I request that another copy be sent to: Prof. L. M. Patnaik, Dept. of Computer Science & Automation, Indian Institute of Science, Bangalore 560012 INDIA. Thanks. From weili at wpi Mon May 15 22:16:49 1989 From: weili at wpi (Wei Li) Date: Mon, 15 May 89 22:16:49 edt Subject: hand writing character recognition Message-ID: <8905160216.AA08056@wpi.wpi.edu> Hi, I would like to get some pointers to work on handwritten character recognition by neural networks. Thanks in advance. Wei Li EE. DEPT. WPI 100 Institute Road Worcester, MA 01609 e-mail: weili at wpi.wpi.edu From DSC%UMDC.BITNET at VMA.CC.CMU.EDU Thu May 18 12:18:03 1989 From: DSC%UMDC.BITNET at VMA.CC.CMU.EDU (DSC%UMDC.BITNET@VMA.CC.CMU.EDU) Date: Thu, 18 May 89 12:18:03 EDT Subject: Software request Message-ID: Does anybody know where I can find an implementation of the Boltzmann machine that learns, and runs on: (1) an MS-DOS machine; (2) a Macintosh; or (3) a Symbolics??? Thanks very much. Kathy Laskey Decision Science Consortium, Inc. From lina at ai.mit.edu Thu May 18 16:17:15 1989 From: lina at ai.mit.edu (Lina Massone) Date: Thu, 18 May 89 16:17:15 EDT Subject: No subject Message-ID: <8905182017.AA01655@gelatinosa.ai.mit.edu> Re: handwriting Pietro Morasso at the University of Genoa - Italy has been doing some work on handwriting with nn. He set up two models: the first is a Kohonen-like self-organizing network, the second is a multi-layer perceptron. Here is his address. Pietro Morasso Dept. of Communication, Computer and System Sciences University of Genoa Via All'Opera Pia, 11a 16145 Genoa Italy EMAIL: mcvax!i2unix!dist.unige.it!piero at uunet.uu.net Lina Massone From GINDI%GINDI at Venus.YCC.Yale.Edu Fri May 19 09:54:00 1989 From: GINDI%GINDI at Venus.YCC.Yale.Edu (GINDI%GINDI@Venus.YCC.Yale.Edu) Date: Fri, 19 May 89 09:54 EDT Subject: Tech reports available Message-ID: The following two tech reports are now available. Please send requests to GINDI at VENUS.YCC.YALE.EDU or by physical mail to: Gene Gindi Yale University Department of Electrical Engineering P.O. Box 2157, Yale Station New Haven, CT 06520 ------------------------------------------------------------------------------- Yale University, Dept. Electrical Engineering Center for Systems Science TR-8903 Neural Networks for Object Recognition within Compositional Hierarchies: Initial Experiments Joachim Utans, Gene Gindi * Dept. Electrical Engineering Yale University P.O. Box 2157, Yale Station New Haven CT 06520 *(to whom correspondence should be addressed) Eric Mjolsness, P. Anandan Dept. Computer Science Yale University New Haven CT 06520 Abstract We describe experiments with TLville, a neural network for object recognition. The task is to recognize, in a translation-invariant manner, simple stick figures. We formulate the recognition task as the problem of matching a graph of model nodes to a graph of data nodes. Model nodes are simply user-specified labels for objects such as "vertical stick" or "t-junction"; data nodes are parameter vectors, such as (x,y,theta), of entities in the data.
We use an optimization approach where an appropriate objective function specifies both the graph-matching problem and an analog neural net to carry out the optimization. Since the graph structure of the data is not known a priori, it must be computed dynamically as part of the optimization. The match metrics are model-specific and are invoked selectively, as part of the optimization, as various candidate matches of model-to-data occur. The network supports notions of abstraction in that the model nodes express compositional hierarchies involving object-part relationships. Also, a data node matched to a whole object contains a dynamically computed parameter vector which is an abstraction summarizing the parameters of data nodes matched to the constituent parts of the whole. Terms in the match metric specify the desired abstraction. In addition, a solution to the problem of computing a transformation from retinal to object-centered coordinates to support recognition is offered by this kind of network; the transformation is contained as part of the objective function in the form of the match metric. In experiments, the network usually succeeds in recognizing single or multiple instances of a single composite model amid instances of non-models, but it gets trapped in unfavorable local minima of the 5th-order objective when multiple composite objects are encoded in the database. ------------------------------------------------------------------------------- Yale University, Dept. Electrical Engineering Center for Systems Science TR-8908 Stickville: A Neural Net for Object Recognition via Graph Matching Grant Shumaker School of Medicine, Yale University, New Haven, CT 06510 Gene Gindi Department of Electrical Engineering, Yale University P.O. Box 2157 Yale Station, New Haven, CT 06520 (to whom correspondence should be addressed) Eric Mjolsness, P. Anandan Department of Computer Science, Yale University, New Haven, CT 06510 Abstract An objective function for model-based object recognition is formulated and used to specify a neural network whose dynamics carry out the optimization, and hence the recognition task. Models are specified as graphs that capture structural properties of shapes to be recognized. In addition, compositional (INA) and specialization (ISA) hierarchies are imposed on the models as an aid to indexing and are represented in the objective function as sparse matrices. Data are also represented as a graph. The optimization is a graph-matching procedure whose dynamical variables are ``neurons'' hypothesizing matches between data and model nodes. The dynamics are specified as a third-order Hopfield-style network augmented by hard constraints implemented by ``Lagrange multiplier'' neurons. Experimental results are shown for recognition in Stickville, a domain of 2-D stick figures. For small databases, the network successfully recognizes both an object and its specialization.
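A generic sketch of the "match neurons plus objective function" idea used in both reports (not the actual TR-8903/TR-8908 objective, dynamics, or constraint handling): graded match variables between model and data nodes are driven by the support they receive from compatible neighbouring matches, with a two-way normalisation standing in for the hard one-to-one constraints and a temperature that is gradually lowered. The toy graphs, parameter values, and schedule are made up for illustration.

# Illustrative sketch only.  M[a, i] is a graded hypothesis that model node a
# matches data node i; it is pushed uphill on an edge-compatibility score and
# renormalised toward a one-to-one match (Sinkhorn-style) as T is lowered.
import numpy as np

rng = np.random.default_rng(1)

def match_graphs(Gm, Gd, T=1.0, cooling=0.9, outer=50, inner=20):
    nm, nd = Gm.shape[0], Gd.shape[0]             # assumes nm == nd here
    M = np.full((nm, nd), 1.0 / nd) + 0.01 * rng.random((nm, nd))  # break ties
    for _ in range(outer):
        Q = Gm @ M @ Gd                           # support from compatible neighbour matches
        M = np.exp((Q - Q.max(axis=1, keepdims=True)) / T)
        for _ in range(inner):                    # two-way normalisation
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
        T *= cooling                              # lower the temperature
    return M.argmax(axis=1)                       # data node assigned to each model node

# Toy example: a 4-node chain matched against itself with the nodes relabeled.
Gm = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
perm = [2, 0, 3, 1]
Gd = Gm[np.ix_(perm, perm)]
# A structure-consistent answer maps adjacent model nodes to adjacent data
# nodes; for a chain there are two such assignments (forward and reversed).
print(match_graphs(Gm, Gd))

The reports themselves go well beyond this bare sketch: the match metrics are model-specific, the hierarchies are encoded in the objective, and the hard constraints are enforced with Lagrange-multiplier neurons rather than a normalisation step.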
---------------------------------------------- From mcvax!ai-vie!georg at uunet.UU.NET Fri May 19 11:37:42 1989 From: mcvax!ai-vie!georg at uunet.UU.NET (Georg Dorffner) Date: Fri, 19 May 89 14:37:42 -0100 Subject: conference announcement - EMCSR 1990 Message-ID: <8905191237.AA02167@ai-vie.uucp> Announcement and Call for Papers EMCSR 90 TENTH EUROPEAN MEETING ON CYBERNETICS AND SYSTEMS RESEARCH April 17-20, 1990 University of Vienna, Austria Session M: Parallel Distributed Processing in Man and Machine Chairs: D.Touretzky (Carnegie Mellon, Pittsburgh, PA) G.Dorffner (Vienna, Austria) Other Sessions at the meeting will be: A: General Systems Methodology B: Fuzzy Sets, Approximate Reasoning and Knowledge-based Systems C: Designing and Systems D: Humanity, Architecture and Conceptualization E: Cybernetics in Biology and Medicine F: Cybernetics in Socio-Economic Systems G: Workshop: Managing Change: Institutional Transition in the Private and Public Sector H: Innovation Systems in Management and Public Policy I: Systems Engineering and Artificial Intelligence for Peace Research J: Communication and Computers K: Software Development for Systems Theory L: Artificial Intelligence N: Impacts of Artificial Intelligence The conference is organized by the Austrian Society for Cybernetic Studies (chair: Robert Trappl). SUBMISSION OF PAPERS: For symposium M, all contributions in the fields of PDP, connectionism, and neural networks are welcome. Acceptance of contributions will be determined on the basis of Draft Final Papers. These papers must not exceed 7 single-spaced A4 pages (maximum 50 lines, final size will be 8.5 x 6 inches), in English. They have to contain the final text to be submitted; however, graphs and pictures need not be of reproducible quality. The Draft Final Paper must carry the title, author(s) name(s), and affiliation in this order. Please specify the symposium in which you would like to present the paper (one of the letters above). Each scientist shall submit only 1 paper. Please send t h r e e copies of the Draft Final Paper to: EMCSR 90 - Conference Secretariat Austrian Society for Cybernetic Studies Schottengasse 3 A-1010 Vienna, Austria Deadline for submission: Oct 15, 1989 Authors will be notified about acceptance no later than Nov 20, 1989. They will then be provided with the detailed instructions for the preparation of the Final Paper. Proceedings containing all accepted papers will be printed. For further information write to the above address, call +43 222 535 32 810, or send email to: sec at ai-vie.uucp Questions concerning symposium M (Parallel Distributed Processing) can be directed to Georg Dorffner (same address as secretariat), email: georg at ai-vie.uucp From rich at gte.com Fri May 19 15:01:15 1989 From: rich at gte.com (Rich Sutton) Date: Fri, 19 May 89 15:01:15 EDT Subject: TD Model of Conditioning -- Paper Announcement Message-ID: <8905191901.AA18756@bunny.gte.com> Andy Barto and I have just completed a major new paper relating temporal-difference learning, as used, for example, in our pole-balancing learning controller, to classical conditioning in animals. The paper will appear in the forthcoming book ``Learning and Computational Neuroscience,'' edited by J.W. Moore and M. Gabriel, MIT Press. A preprint can be obtained by emailing to rich%gte.com at relay.cs.net with your physical-mail address. The paper has no abstract, but begins as follows: TIME-DERIVATIVE MODELS OF PAVLOVIAN REINFORCEMENT Richard S. Sutton GTE Laboratories Incorporated Andrew G.
Barto University of Massachusetts This chapter presents a model of classical conditioning called the temporal-difference (TD) model. The TD model was originally developed as a neuron-like unit for use in adaptive networks (Sutton & Barto, 1987; Sutton, 1984; Barto, Sutton & Anderson, 1983). In this paper, however, we analyze it from the point of view of animal learning theory. Our intended audience is both animal learning researchers interested in computational theories of behavior and machine learning researchers interested in how their learning algorithms relate to, and may be constrained by, animal learning studies. We focus on what we see as the primary theoretical contribution to animal learning theory of the TD and related models: the hypothesis that reinforcement in classical conditioning is the time derivative of a composite association combining innate (US) and acquired (CS) associations. We call models based on some variant of this hypothesis ``time-derivative models'', examples of which are the models by Klopf (1988), Sutton & Barto (1981a), Moore et al (1986), Hawkins & Kandel (1984), Gelperin, Hopfield & Tank (1985), Tesauro (1987), and Kosko (1986); we examine several of these models in relation to the TD model. We also briefly explore relationships with animal learning theories of reinforcement, including Mowrer's drive-induction theory (Mowrer, 1960) and the Rescorla-Wagner model (Rescorla & Wagner, 1972). In this paper, we systematically analyze the inter-stimulus interval (ISI) dependency of time-derivative models, using realistic stimulus durations and both forward and backward CS--US intervals. The models' behaviors are compared with the empirical data for rabbit eyeblink (nictitating membrane) conditioning. We find that our earlier time-derivative model (Sutton & Barto, 1981a) has significant problems reproducing features of these data, and we briefly explore partial solutions in subsequent time-derivative models proposed by Moore et al. (1986), Klopf (1988), and Gelperin et al. (1985). The TD model was designed to eliminate these problems by relying on a slightly more complex time-derivative theory of reinforcement. In this paper, we motivate and explain this theory from the point of view of animal learning theory, and show that the TD model solves the ISI problems and other problems with simpler time-derivative models. Finally, we demonstrate the TD model's behavior in a range of conditioning paradigms including conditioned inhibition, primacy effects (Egger & Miller, 1962), facilitation of remote associations, and second-order conditioning. From gardner%dendrite at boulder.Colorado.EDU Tue May 23 12:25:12 1989 From: gardner%dendrite at boulder.Colorado.EDU (Phillip Gardner) Date: Tue, 23 May 89 10:25:12 MDT Subject: simulators using X11 Message-ID: <8905231625.AA10181@dendrite> Does anyone know of a connectionist simulator which uses X11 for its graphical interface? Thanks for any help you can give! Phil Gardner gardner at boulder.colorado.edu From french at cogsci.indiana.edu Tue May 23 12:41:17 1989 From: french at cogsci.indiana.edu (Bob French) Date: Tue, 23 May 89 11:41:17 EST Subject: Subcognition and the Limits of the Turing Test Message-ID: A pre-print of an article on subcognition and the Turing Test to appear in MIND: "Subcognition and the Limits of the Turing Test" Robert M. 
French Center for Research on Concepts and Cognition Indiana University Ostensibly a philosophy paper (to appear in MIND at the end of this year), this article is of special interest to connectionists. It argues that: i) as a REAL test for intelligence, the Turing Test is inappropriate in spite of arguments by some philosophers to the contrary; ii) only machines that have experienced the world as we have could pass the Test. This means that such machines would have to learn about the world in approximately the same way that we humans have -- by falling off bicycles, crossing streets, smelling sewage, tasting strawberries, etc. This is not a statement about the inherent inability of a computer to achieve intelligence; it is rather a comment about the use of the Turing Test as a means of testing for that intelligence; iii) (especially for connectionists) the physical, subcognitive and cognitive levels are INEXTRICABLY interwoven and it is impossible to tease them apart. This is ultimately the reason why no machine that had not experienced the world as we had could ever pass the Turing Test. The heart of the discussion of these issues revolves around humans' use of a vast associative network of concepts that operates, for the most part, below cognitive perceptual thresholds and that has been acquired over a lifetime of experience with the world. The Turing Test tests for the presence or absence of this HUMAN associative concept network, which explains why it would be so difficult -- although not theoretically impossible -- for any machine to pass the Test. This paper shows how a clever interrogator could always "peek behind the screen" to unmask a computer that had not experienced the world as we had by exploiting human abilities based on the use of this vast associative concept network, for example, our abilities to analogize and to categorize. This paper is short and non-technical but nevertheless focuses on issues that are of significant philosophical importance to AI researchers, and to connectionists in particular. If you would like a copy, please send your name and address to: Helga Keller C.R.C.C. 510 North Fess Bloomington, Indiana 47401 or send an e-mail request to helga at cogsci.indiana.edu - Bob French french at cogsci.indiana.edu From Dave.Touretzky at B.GP.CS.CMU.EDU Fri May 26 00:57:40 1989 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Fri, 26 May 89 00:57:40 EDT Subject: NIPS and Memorial Day Message-ID: <400.612161860@DST.BOLTZ.CS.CMU.EDU> A lot of us head-in-the-clouds research types have just found out that Monday, May 29, is Memorial Day, and there will be no FedEx or Express Mail pickups that day. But the submission deadline for NIPS abstracts is Tuesday, May 30. Technically I suppose that means we should send out our abstracts today instead of working on them over the weekend. I don't think that was the intent of the organizing committee, though. Everybody knows you set a Tuesday deadline so people can make the Monday afternoon FedEx pickup. Therefore, I intend to send a batch of CMU abstracts off to Kathie Hibbard on Tuesday, with a short note apologizing for their arriving a day late. Since the NIPS organizing committee is composed of such eminently reasonable and sympathetic folks, there will surely be no problem. Other people out there are apparently having similar concerns with the surprise appearance of Memorial Day, so I thought I'd post this note as a suggested remedy.
There's a cc: to Kathie so she'll know to expect a few more FedEx packets on the 31st. -- Dave From KIRK at ibm.com Fri May 26 11:54:20 1989 From: KIRK at ibm.com (Scott Kirkpatrick) Date: 26 May 89 11:54:20 EDT Subject: No subject Message-ID: <052689.115420.kirk@ibm.com> Re: NIPS and Memorial Day ***** Reply to your mail of: 05/26 02:14:22 ************************** I have a confession to make. My head was sufficiently in the clouds when I selected a deadline for submission of abstracts and summaries for NIPS89 that I didn't realize that May has 31 days! It does, I have since learned, and I offer the extra day to all those who will be spending the Memorial Day weekend finishing their NIPS submissions. From Q89%DHDURZ1.BITNET at VMA.CC.CMU.EDU Tue May 30 11:08:39 1989 From: Q89%DHDURZ1.BITNET at VMA.CC.CMU.EDU (Gabriele Scheler) Date: Tue, 30 May 89 11:08:39 CET Subject: mailing-list inclusion Message-ID: Dear "connectionists", I am a computational linguist in Heidelberg and I am experimenting with connectionist models from time to time (disambiguation as constraint satisfaction). I would like to be a recipient of the mail that you send to interested persons. My e-mail address is included in the header, my postal address (in case you need it): Dr Gabriele Scheler Lehrstuhl fuer Computerlinguistik Karlstr. 2 6900 Heidelberg Federal Republic of Germany Best regards, Gabriele Scheler From grumbach at ulysse.enst.fr Tue May 30 11:34:56 1989 From: grumbach at ulysse.enst.fr (Alain Grumbach) Date: Tue, 30 May 89 17:34:56 +0200 Subject: supervised learning Message-ID: <8905301534.AA17056@ulysse.enst.fr> Often I feel troubled while reading the phrases "supervised learning" or "unsupervised learning". What do they actually mean? Usually, they refer to the fact that, for learning purposes, network outputs are given by the "teacher". But learning is a FUNCTION from EXAMPLE DATA to WEIGHTS. The information needed for example data may include network inputs, or inputs and associated outputs. These outputs are input data for the learning phase. Within this point of view, what would "unsupervised learning" be? I should put forward the idea that unsupervised learning takes place when example data are not given by any teacher, i.e. when the system chooses (maybe randomly) its examples within the example space. In human learning, there are many cases: games, common skills (riding a bicycle) ... In the symbolic learning domain, for instance, the LEX system (Mitchell) itself designs the examples it will process. In neural networks, such unsupervised learning is rare. Why? Lastly, if we keep this idea, how could we name the kind of learning that was previously called "supervised learning"? I should put forward the expression "weak(ly) supervised learning". Summary: this is a proposal for more precise phrasing: - unsupervised learning: without any teacher information - weak(ly) supervised learning: the teacher gives input vectors only - (fully) supervised learning: the teacher gives input and output vectors. What is your feeling about this? grumbach @ ulysse.enst.fr (Alain GRUMBACH ENST Dept INF 46 rue Barrault 75634 PARIS Cedex 13 FRANCE) From Dave.Touretzky at B.GP.CS.CMU.EDU Wed May 31 21:53:16 1989 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Wed, 31 May 89 21:53:16 EDT Subject: tech report available Message-ID: <5456.612669196@DST.BOLTZ.CS.CMU.EDU> CONNECTIONISM AND COMPOSITIONAL SEMANTICS David S.
Touretzky School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3890 Technical report CMU-CS-89-147 May 1989 Abstract: Quite a few interesting experiments have been done applying neural networks to natural language tasks. Without detracting from the value of these early investigations, this paper argues that current neural network architectures are too weak to solve anything but toy language problems. Their downfall is the need for ``dynamic inference,'' in which several pieces of information not previously seen together are dynamically combined to derive the meaning of a novel input. The first half of the paper defines a hierarchy of classes of connectionist models, from categorizers and associative memories to pattern transformers and dynamic inferencers. Some well-known connectionist models that deal with natural language are shown to be either categorizers or pattern transformers. The second half examines in detail a particular natural language problem: prepositional phrase attachment. Attaching a PP to an NP changes its meaning, thereby influencing other attachments. So PP attachment requires compositional semantics, and compositionality in non-toy domains requires dynamic inference. Mere pattern transformers cannot learn the PP attachment task without an exponential training set. Connectionist-style computation still has many valuable ideas to offer, so this is not an indictment of connectionism's potential. It is an argument for a more sophisticated and more symbolic connectionist approach to language. An earlier version of this paper appeared in the Proceedings of the 1988 Connectionist Models Summer School. ================ TO ORDER COPIES of this tech report: send electronic mail to copetas at cs.cmu.edu, or write the School of Computer Science at the address above. ** Do not use your mailer's "reply" command. **
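The combinatorial pressure behind the PP attachment argument in the abstract above can be made concrete with a short counting sketch (an illustrative aside, not material from the report): for a sentence of the form "V NP PP1 ... PPn" in which each PP may attach to the verb, the object NP, or the NP inside any earlier PP, the number of non-crossing attachment structures grows as the Catalan numbers, roughly 4**n, which is the kind of growth the abstract's "exponential training set" remark points to.

# Illustrative sketch only: brute-force enumeration of non-crossing PP
# attachments for "V NP PP1 ... PPn", checked against the Catalan numbers.
from itertools import product
from math import comb

def attachment_count(n):
    # PP_i sits at position 1 + i and may attach to site positions
    # 0 (the verb), 1 (the object NP), or 1 + j for any earlier PP_j.
    choices = [[0, 1] + [1 + j for j in range(1, i)] for i in range(1, n + 1)]
    count = 0
    for sites in product(*choices):
        arcs = [(sites[i], 2 + i) for i in range(n)]
        crossing = any(a1 < a2 < b1 < b2
                       for (a1, b1) in arcs for (a2, b2) in arcs)
        count += not crossing
    return count

def catalan(m):
    return comb(2 * m, m) // (m + 1)

for n in range(1, 7):
    # the two columns should agree: 2, 5, 14, 42, 132, 429
    print(n, attachment_count(n), catalan(n + 1))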