From INAM%MCGILLB.BITNET at VMA.CC.CMU.EDU Sat Apr 1 12:40:00 1989
From: INAM%MCGILLB.BITNET at VMA.CC.CMU.EDU (INAM000)
Date: Sat, 01 Apr 1989 12:40:00 EST
Subject: Book Reviews, Journal of Mathematical Psychology
Message-ID:

The purpose of this mailing is to (re)draw your attention to the fact that the Journal of Mathematical Psychology, published by Academic Press, publishes reviews of books in the general area of mathematical (social, biological, ...) science. For instance, in a forthcoming issue a review of the revised edition of Minsky and Papert's PERCEPTRONS will appear (written by Jordan Pollack).

The following is a partial list of books that we have recently received and that I would like to have reviewed for the Journal; those most relevant to this group are marked with asterisks. As you will see, most of them are edited readings, which are hard to review. However, if you are interested in reviewing one or more of the books, I would like to hear from you. Our reviews are additions to the literature, not "straight" reviews, so writing a review for us gives you an opportunity to express your views on a field of research. I would also like to be kept informed of new books in this general area that you think we should review (or at least list in our Books Received section). And, of course, one reward for writing a review is that you receive a complimentary copy of the book.

(SELECTED) Books Received

The following books have been received for review. We encourage readers to volunteer themselves as reviewers. We consider our reviews contributions to the literature, rather than "straight" reviews, and thus reviewers have considerable freedom in terms of the format, length, and content of their reviews. Readers who would like to review any of these or previously listed books should contact A. A. J. Marley, Department of Psychology, McGill University, 1205 Avenue Dr. Penfield, Montreal, Quebec H3A 1B1, Canada. (Email address: inam at musicb.mcgill.ca on BITNET.)

*Amit, D. J. Modelling Brain Function: The World of Attractor Neural Networks. Cambridge, England: Cambridge University Press, 1989. Pp. 500.

Collins, A., and Smith, E. E. Readings in Cognitive Science: A Perspective from Psychology and Artificial Intelligence. San Mateo, California: Morgan Kaufmann, 1988. 661pp.

*Cotterill, R. M. J. (Ed.). Computer Simulation in Brain Sciences. New York, New York: Cambridge University Press, 1988. 576pp. $65.00.

*Grossberg, S. (Ed.). Neural Networks and Natural Intelligence. Cambridge, Massachusetts: MIT Press, 1988. 637pp. $35.00.

Hirst, W. The Making of Cognitive Science: Essays in Honor of George A. Miller. New York, New York: Cambridge University Press, 1988. 288pp. $29.95.

Laird, P. D. Learning from Good and Bad Data. Norwell, Massachusetts: Kluwer Academic, 1988. 211pp.

*MacGregor, R. J. Neural and Brain Modeling. San Diego, California: Academic Press, 1987. 643pp. $95.50.

Ortony, A., Clore, G. L., and Collins, A. The Cognitive Structure of Emotions. New York, New York: Cambridge University Press, 1988. 175pp. $24.95.

*Richards, W. (Ed.). Natural Computation. Cambridge, Massachusetts: Bradford/MIT Press, 1988. 561pp. $25.00.

Shrobe, H. E., and the American Association for Artificial Intelligence (Eds.). Exploring Artificial Intelligence: Survey Talks from the National Conferences on Artificial Intelligence. San Mateo, California: Morgan Kaufmann, 1988. 693pp.

Vosniadou, S., and Ortony, A. Similarity and Analogical Reasoning. New York, New York: Cambridge University Press, 1988. 410pp. $44.50.

Wilkins, D. E. Practical Planning: Extending the Classical AI Planning Paradigm. San Mateo, California: Morgan Kaufmann, 1988. 205pp.

From Shastri at cis.upenn.edu Sat Apr 1 13:42:00 1989
From: Shastri at cis.upenn.edu (Lokendra Shastri)
Date: Sat, 1 Apr 89 13:42 EST
Subject: Connectionist AI? Workshop at IJCAI-89.
Call for participation
Message-ID: <8904011829.AA23053@central.cis.upenn.edu>

IJCAI-89 WORKSHOP CALL FOR PARTICIPATION

CONNECTIONIST AI?

Motivation and Agenda

The focus of the workshop is to define the critical issues that make up the problem of reconciling systematic, rule-governed processes with connectionist architectures. The intended outcome is a clearer statement of what the problem is, and increased cross-talk between the connectionist and AI research communities. Numerous claims and counterclaims have been made about the nature of connectionist models and how they relate to rule-governed behavior. We feel that some researchers tend to oversimplify connectionism and underestimate what it has to offer. At the same time, others make very strong claims about connectionism, tend to underestimate the complexity of the AI problem, and ignore insights obtained over years of research in AI and cognitive science. We also feel that some underlying problems in the discussions have never been raised. Through this workshop we hope to gain a better understanding of specific issues related to the integration of rules with connectionist processing approaches, and to be able to specify more clearly the critical problems that must be addressed if a reconciliation between the approaches is warranted.

Specific issues to be discussed

Introductory Discussions (Session I)

1. There are a number of variations on connectionism, such as parallel distributed processing, localist or structured connectionist models, and neural nets. What are the core aspects of connectionist models?

2. What is a rule? Aspects of rules to be addressed include the structure and representation of rules, and the control of rule-based processes.

Reconciling rules with connectionism -- the alternatives? (Session II)

1. Is there a clash between rules and connectionist architectures? It is often asserted that connectionist models are "non-symbolic" or "sub-symbolic", and hence fundamentally different from traditional AI approaches.
Examine this claim.

2. Should connectionist architectures compute rules? If so, what kind of rules? If not, how does one reconcile the approach with rules as characteristics of performance?

Can connectionism contribute to AI? (Session III)

1. It is claimed that connectionism just provides an interesting implementation paradigm. What is meant by "an implementation paradigm"? Can an implementation paradigm offer crucial insights into problems?

2. Evaluate the contributions made by recent work in connectionism to central problems in AI such as representation, reasoning, and learning.

Format

Our aim is to gather around 25 experts from within mainstream AI as well as connectionism to discuss the above issues in depth. The workshop will consist of three 3-hour discussion sessions spread over one and a half days. There will be no presentations, only moderated discussions.

Participation

Participation in the workshop is by invitation only and is limited to 25 persons. Anyone who has published on issues directly related to the workshop may apply. Please submit a two-page abstract outlining your position on one or more of the topics to be discussed, together with a list of your recent publications on any of these topics. The abstract should be in 12-point font (the size of this text) and double spaced. (References may extend beyond the two-page limit.) Send three copies of your submission by APRIL 17, 1989 to:

Lokendra Shastri
Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104

Organizers:

Helen Gigley
Army Audiology and Speech Center
Walter Reed Army Medical Center
Washington, D.C. 20012
hgigley at note.nsf.gov

Lokendra Shastri
Computer and Information Science Dept
University of Pennsylvania
Philadelphia, PA 19104
shastri at cis.upenn.edu

Alan Prince
Psychology Department
Brandeis University
Waltham, MA 02254
prince at brandeis.bitnet

From noel at CS.EXETER.AC.UK Wed Apr 5 16:49:13 1989
From: noel at CS.EXETER.AC.UK (Noel Sharkey)
Date: Wed, 5 Apr 89 16:49:13 BST
Subject: No subject
Message-ID: <3141.8904051549@entropy.cs.exeter.ac.uk>

What does one have to do nowadays to get on the connectionist mailing list? I was on it until I changed universities. Since then, despite repeated requests and one reply over a period of nearly 4 months, I have received nothing. Perhaps moving universities has turned me into a GARGOYLE or some other undesirable creature. Can someone out there help?

Yours desperately,

noel sharkey

Centre for Connection Science     JANET: noel at uk.ac.exeter.cs
Dept. Computer Science
University of Exeter              UUCP: !ukc!expya!noel
Exeter EX4 4PT Devon              BITNET: noel at cs.exeter.ac.uk.UKACRL
U.K.

From zemel at ai.toronto.edu Wed Apr 5 18:11:27 1989
From: zemel at ai.toronto.edu (Richard Zemel)
Date: Wed, 5 Apr 89 18:11:27 EDT
Subject: research reports available
Message-ID: <89Apr5.181134edt.10961@ephemeral.ai.toronto.edu>

The following two technical reports are now available. The first report describes the main ideas of TRAFFIC. It appeared in the Proceedings of the 1988 Connectionist Summer School, Morgan Kaufmann Publishers, edited by D. S. Touretzky, G. E. Hinton, and T. J. Sejnowski. The second report is a revised version of my Master's thesis. It contains a thorough description of the model, as well as implementation details and some experimental results. This report is rather long (~75 pages), so if you are curious about the model we'll send you the first one. On the other hand, if you want to plough through the details, ask specifically for the second one.
***************************************************************************

"TRAFFIC: A Model of Object Recognition Based On Transformations of Feature Instances"

Richard S. Zemel, Michael C. Mozer, Geoffrey E. Hinton
Department of Computer Science
University of Toronto

Technical report CRG-TR-88-7 (Sept. 1988)

ABSTRACT

Visual object recognition involves not only detecting the presence of salient features of objects, but ensuring that these features are in the appropriate relationships to one another. Recent connectionist models designed to recognize two-dimensional shapes independent of their orientation, position, and scale have primarily dealt with simple objects, and they have not represented the structural relations of these objects in an efficient manner. A new model is proposed that takes advantage of the fact that, given a rigid object and a particular feature of that object, there is a fixed viewpoint-independent transformation from the feature's reference frame to the object's. This fixed transformation can be expressed as a matrix multiplication that is efficiently implemented by a set of weights in a connectionist network. By using a hierarchy of these transformations, with increasing feature complexity in each successive layer, a network can recognize multiple objects in parallel.

******************************

"TRAFFIC: A Connectionist Model of Object Recognition"

Richard S. Zemel
Department of Computer Science
University of Toronto

Technical report CRG-TR-89-2 (March 1989)

ABSTRACT

Recent connectionist models designed to recognize two-dimensional shapes independent of their orientation, position, and scale have not represented the structural relations of the objects in an efficient manner. A new model is described that takes advantage of the fact that, given a rigid object and a particular feature of that object, there is a fixed viewpoint-independent transformation from the feature's reference frame to the object's.
This fixed transformation can be expressed as a matrix multiplication that is efficiently implemented by a set of weights in a connectionist network. The model, called TRAFFIC (a loose acronym for ``transforming feature instances''), uses a hierarchy of these transformations, with increasing feature complexity in each successive layer, in order to recognize multiple objects in parallel. An implementation of TRAFFIC is described, along with experimental results demonstrating the network's ability to recognize constellations of stars in a viewpoint-independent manner.

*************************************************************************

Copies of either report can be obtained by sending an email request to:

INTERNET: carol at ai.toronto.edu
UUCP: uunet!utai!carol
BITNET: carol at utorgpu

From bradley at ivy.Princeton.EDU Wed Apr 5 22:48:13 1989
From: bradley at ivy.Princeton.EDU (Bradley Dickinson)
Date: Wed, 5 Apr 89 21:48:13 EST
Subject: Inform. Theory paper nominations sought
Message-ID: <8904060248.AA11436@ivy.Princeton.EDU>

CALL FOR NOMINATIONS: IEEE INFORMATION THEORY SOCIETY PAPER AWARD

Nominations are sought for the 1989 IEEE Information Theory Society Paper Award. This award will be given for an outstanding publication in the field of interest* of the Society appearing in any journal during the preceding two calendar years (1987-1988). Nominations should consist of a complete citation for the publication, a brief description of the contribution of the paper, and a statement indicating why the paper is deserving of the Award. Nominations may be submitted to the Chair of the IT Society Awards Committee:

H. Vincent Poor
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
1101 West Springfield Avenue
Urbana, IL 61801 USA
E-mail: poor at uicsl.csl.uiuc.edu
Telephone: (217) 333-6449
Telefax: (217) 244-1764

or to any other member of the Committee:

Thomas M. Cover (cover at isl.stanford.edu)
Daniel J. Costello (dan at ndsun.uucp)
Shu Lin (University of Hawaii)
Sergio Verdu (verdu at ivy.princeton.edu)
Kung Yao (IAD3TAO%OAC.UCLA.EDU)

Nominations should be received by April 30, 1989, at the latest, and preferably by April 21, 1989.

*The field of interest of the Information Theory Society is the processing, transmission, storage and use of information and the foundations of the communication process. It specifically encompasses theoretical and certain applied aspects of coding, communications and communications networks, complexity and cryptography, detection and estimation, learning, Shannon Theory, and stochastic processes.

From cfields at NMSU.Edu Sun Apr 9 17:56:55 1989
From: cfields at NMSU.Edu (cfields@NMSU.Edu)
Date: Sun, 9 Apr 89 15:56:55 MDT
Subject: No subject
Message-ID: <8904092156.AA19088@NMSU.Edu>

_________________________________________________________________________

The following are abstracts of papers appearing in the second issue of the Journal of Experimental and Theoretical Artificial Intelligence, to appear in April, 1989. For submission information, please contact either of the editors:

Eric Dietrich
PACSS - Department of Philosophy
SUNY Binghamton
Binghamton, NY 13901
dietrich at bingvaxu.cc.binghamton.edu

Chris Fields
Box 30001/3CRL
New Mexico State University
Las Cruces, NM 88003-0001
cfields at nmsu.edu

JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia.

_________________________________________________________________________

Generating plausible diagnostic hypotheses with self-processing causal networks

Jonathan Wald, Martin Farach, Malle Tagamets, and James Reggia
Department of Computer Science, University of Maryland

A recently proposed connectionist methodology for diagnostic problem solving is critically examined for its ability to construct problem solutions. A sizeable causal network (56 manifestation nodes, 26 disorder nodes, 384 causal links) served as the basis of experimental simulations.
Initial results were discouraging, with less than two-thirds of the simulations leading to stable solution states (equilibria). Examination of these simulation results identified a critical period during simulations, and analysis of the connectionist model's activation rule during this period led to an understanding of the model's unstable oscillatory behavior. Decreasing the model's control parameters more slowly during the critical period resulted in all simulations reaching a stable equilibrium with plausible solutions. As a consequence of this work, it is possible to determine more rationally a schedule for control parameter variation during problem solving, and the way is now open for real-world experimental assessment of this problem-solving method.

_________________________________________________________________________

Organizing and integrating edge segments for texture discrimination

Kenzo Iwama and Anthony Maida
Department of Computer Science, Pennsylvania State University

We propose a psychologically and psychophysically motivated texture segmentation algorithm. The algorithm is implemented as a computer program which parses visual images into regions on the basis of texture. The program's output matches human judgements on a very large class of stimuli. The program and algorithm offer very detailed hypotheses of how humans might segment stimuli, and also suggest plausible alternatives to explanations presented in the literature. In particular, contrary to Julesz and Bergen (1983), the program does not use crossings as textons but does use corners as textons. Nonetheless, the program is able to account for the same data. The program accounts for much of the linking phenomena of Beck, Prazdny, and Rosenfeld (1983). It does so by matching structures between feature maps on the basis of spatial overlap. These same mechanisms are also used to account for the feature integration phenomena of Treisman (1985).
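[Editorial aside: the idea that texture regions can be told apart by local texton density, as in the abstract above, can be caricatured in a few lines of code. This is NOT the authors' program; the cell size, threshold, and toy "corner texton" maps below are invented purely for illustration.]

```python
import numpy as np

def texton_density(texton_map, cell=4):
    """Mean texton count over non-overlapping cell x cell windows."""
    h, w = texton_map.shape
    h, w = h - h % cell, w - w % cell
    m = texton_map[:h, :w]
    return m.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

def segment_by_density(texton_map, cell=4, thresh=0.25):
    """Label each cell 1 if its texton density exceeds thresh, else 0."""
    return (texton_density(texton_map, cell) > thresh).astype(int)

# Toy binary map of detected textons: left half texton-rich, right half sparse.
rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[:, :8] = rng.random((16, 8)) < 0.6   # ~60% of pixels carry a texton
img[:, 8:] = rng.random((16, 8)) < 0.05  # ~5% of pixels carry a texton
labels = segment_by_density(img)          # 4x4 grid of region labels
```

Real texton-based models of course differ in how textons are extracted and how region boundaries are localized; the point of the sketch is only that a density difference in some feature map is what makes two textures discriminable to such a scheme.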
----------------------------------------------------------------------------

Towards a paradigm shift in belief representation methodology

John Barnden
Computing Research Laboratory, New Mexico State University

Research programs must often divide issues into manageable sub-issues. The assumption is that an approach developed to cope with a sub-issue can later be integrated into an approach to the whole issue - possibly after some tinkering with the sub-approach, but without affecting its fundamental features. However, the present paper examines a case where an AI issue has been divided in a way that is apparently harmless and natural, but is actually fundamentally out of tune with the realities of the issue. As a result, some approaches developed for a certain sub-issue cannot be extended to a total approach without fundamental modification. The issue in question is that of modeling people's beliefs, hopes, intentions, and other ``propositional attitudes'', and/or interpreting natural language sentences that report propositional attitudes. Researchers have, quite understandably, de-emphasized the problem of dealing in detail with nested attitudes (e.g. hopes about beliefs, beliefs about intentions about beliefs), in favor of concentrating on the sub-issue of non-nested attitudes. Unfortunately, a wide variety of approaches to attitudes are prone to a deep but somewhat subtle problem when they are applied to nested attitudes. This problem can be very roughly described as an AI system's unwitting imputation of its own arcane ``theory'' of propositional attitudes to other agents. The details of this phenomenon have been published elsewhere by the author; the present paper merely sketches it, and concentrates instead on the methodological lessons to be drawn, both for propositional attitude research and, more tentatively, for AI in general.
The paper also summarizes an argument (presented more completely elsewhere) for an approach to attitude representation based in part on metaphors of mind that are commonly used by people. This proposed new research direction should ultimately coax propositional attitude research out of the logical armchair and into the psychological laboratory.

---------------------------------------------------------------------------

The graph of a boolean function

Frank Harary
Department of Computer Science, New Mexico State University

(Abstract not available)

___________________________________________________________________________

From jbower at bek-mc.caltech.edu Mon Apr 10 12:52:01 1989
From: jbower at bek-mc.caltech.edu (Jim Bower)
Date: Mon, 10 Apr 89 08:52:01 pst
Subject: CPGs
Message-ID: <8904101652.AA07358@bek-mc.caltech.edu>

At the just-completed Snowbird meeting on neural networks, several participants asked for more information on biological central pattern generators (CPGs). I recommend a book titled "Model Neural Networks and Behavior", edited by Al Selverston and published by Plenum Press in 1985. For those who don't know, CPGs as a class probably represent the most completely described real biological networks in terms of their anatomy, physiology, and, importantly, the behavior they control. As such they are well worth knowing about.

Jim Bower
Caltech

From mm at cogsci.indiana.edu Mon Apr 10 15:44:18 1989
From: mm at cogsci.indiana.edu (Melanie Mitchell)
Date: Mon, 10 Apr 89 13:44:18 CST
Subject: Technical report available
Message-ID:

The following report is available from the Center for Research on Concepts and Cognition at Indiana University:

The Role of Computational Temperature in a Computer Model of Concepts and Analogy-Making

Melanie Mitchell and Douglas R.
Hofstadter
Center For Research on Concepts and Cognition
Indiana University

Abstract

In this paper we discuss the role of computational temperature in Copycat, a computer model of the mental mechanisms underlying human concepts and analogy-making. Central features of Copycat's architecture are a high degree of parallelism, fine-grained distributed processing, competition, randomness, and an interaction of bottom-up perceptual pressures with an associative, overlapping, and context-sensitive conceptual network. In Copycat, computational temperature is used both to measure the amount and quality of perceptual organization created by the program as processing proceeds, and, reciprocally, to continuously control the degree of randomness in the system. In this paper we will discuss the role of temperature in two aspects of perception central to Copycat's behavior: (1) the emergence of a "parallel terraced scan", in which many possible courses of action are explored simultaneously, each at a speed and to a depth proportional to moment-to-moment estimates of its promise, and (2) the ability to restructure initial perceptions -- sometimes radically -- in order to arrive at a deeper, more essential understanding of a situation. We will also compare our notion of temperature to similar notions in other computational frameworks. Finally, an example will be given of how temperature is used in Copycat's creation of a subtle and insightful analogy.

For copies of this report, send a request for CRCC-89-1 to helga at cogsci.indiana.edu or to:

Helga Keller
Center for Research on Concepts and Cognition
Indiana University
Bloomington, Indiana 47408

From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Wed Apr 12 13:41:47 1989
From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (thanasis kehagias)
Date: Wed, 12 Apr 89 13:41:47 EDT
Subject: request from Dr. Lippman
Message-ID:

If Dr. Lippman is reading this, can you please send me a copy of your joint paper with Dr.
Huang:

"Comparison between Neural Net and Conventional Classifiers"

My address is:

Thanasis Kehagias
Division of Applied Mathematics
Brown University
Providence, RI 02912
USA

Thanks in advance.

From dario%TECHUNIX.BITNET at VMA.CC.CMU.EDU Thu Apr 13 08:47:57 1989
From: dario%TECHUNIX.BITNET at VMA.CC.CMU.EDU (Dario Ringach)
Date: Thu, 13 Apr 89 14:47:57 +0200
Subject: Texture Segmentation using the Boundary Contour System
Message-ID: <8904131247.AA10308@techunix.bitnet>

How can the Boundary Contour System segment textures with identical second-order statistics? I mean, first-order differences are easily discovered by the "contrast sensitive" cells at the first stage of the BCS (the OC filter), while the CC-loop can account for second-order (dipole) statistics; but how can the BCS segment textures, such as the ones presented by Julesz, which have even identical third-order statistics but are easily discriminable? Is the BCS/FCS model consistent with Julesz's texton theory? If so, in which way? Thanks!

-- Dario
dario at techunix.bitnet

From kube%cs at ucsd.edu Thu Apr 13 17:44:44 1989
From: kube%cs at ucsd.edu (Paul Kube)
Date: Thu, 13 Apr 89 14:44:44 PDT
Subject: Texture Segmentation using the Boundary Contour System
In-Reply-To: Dario Ringach's message of Thu, 13 Apr 89 14:47:57 +0200 <8904131247.AA10308@techunix.bitnet>
Message-ID: <8904132144.AA02793@kokoro.UCSD.EDU>

> Date: Thu, 13 Apr 89 14:47:57 +0200
> From: Dario Ringach
>
> How can the Boundary Contour System segment textures with identical
> second-order statistics?

If by "segment" you mean construct a Boundary Contour between the two textured regions, it probably won't.

> I mean, first-order differences are easily discovered by the "contrast
> sensitive" cells at the first stage of the BCS (the OC filter),

Yes.

> while the CC-loop can account for second-order (dipole) statistics;

The CC loop can construct linkages between texture elements with similar orientation, and so can, to some extent, "group" elements into regions with similar second-order statistics. However, there is nothing in the BC system that is sensitive to such differences in second-order statistics as may exist between regions, so it won't mark a boundary between them, so it won't segment them, except perhaps by accident (e.g. if end cuts of elements at the boundary between regions happen to line up in the right way).

> but how can the BCS segment textures, such as the ones presented by
> Julesz, which have even identical third-order statistics but are
> easily discriminable?

Indeed, but it doesn't even have the resources to segment correctly in the iso-first-order case.

> Is the BCS/FCS model consistent with Julesz's texton theory? If so, in
> which way?

I think it's most useful to see Grossberg and Mingolla (Perception and Psychophysics, 1985) as proposing a mechanism to account for some of the linking, subjective contour, and neon spreading phenomena that seem to be involved in some texture stimuli used by Beck. In my way of carving things up, this is a contribution to the texture *description* problem. Julesz has (almost) always studiously avoided these kinds of effects in his displays, so it has (almost) nothing to say about his work. In any case, it has nothing to say about the texture *discrimination* problem, which is exactly the problem of computing boundaries between differently textured image regions, given some representation which shows them to be different.

Let me add that when I discussed these issues with Mingolla about a year ago, he was optimistic that the BC/FC system might segment Julesz's stimuli, though it hadn't been tried. I was and remain pessimistic, though to my knowledge the experiments still haven't been done.
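[Editorial aside: for readers new to the terminology in this exchange, the "dipole" (second-order) statistics of a texture are just the joint distribution of pixel values at a fixed spatial offset. The helper below is my own illustration, not part of the BCS/FCS model or of Julesz's stimuli; it shows how two textures with identical first-order statistics can differ sharply in second-order statistics.]

```python
import numpy as np

def dipole_stats(img, dx, dy):
    """Joint distribution of binary pixel pairs separated by (dx, dy):
    returns [p00, p01, p10, p11]. Assumes dx, dy >= 0 and a 0/1 image."""
    h, w = img.shape
    a = img[:h - dy, :w - dx]      # "tail" pixel of each dipole
    b = img[dy:, dx:]              # "head" pixel, offset by (dx, dy)
    pairs = (2 * a + b).astype(int).ravel()
    return np.bincount(pairs, minlength=4) / pairs.size

# Horizontal stripes have the same first-order density as a 50% random
# texture, but a very different vertical dipole distribution:
stripes = np.zeros((8, 8), dtype=int)
stripes[::2, :] = 1
p = dipole_stats(stripes, dx=0, dy=1)
# vertically adjacent pixels always differ, so p00 = p11 = 0 here
```

Julesz's point, of course, was about texture pairs constructed so that even these second-order (and third-order) distributions agree, yet human observers still discriminate them effortlessly.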
--Paul Kube
kube at ucsd.edu

From pauls at boulder.Colorado.EDU Thu Apr 13 11:45:53 1989
From: pauls at boulder.Colorado.EDU (Paul Smolensky)
Date: Thu, 13 Apr 89 09:45:53 MDT
Subject: TR: Virtual Memories and Massive Generalization
Message-ID: <8904131545.AA20801@sigi.colorado.edu>

Virtual Memories and Massive Generalization in Connectionist Combinatorial Learning

Olivier Brousse & Paul Smolensky
Department of Computer Science & Institute of Cognitive Science
University of Colorado at Boulder

We report a series of experiments on connectionist learning that address a particularly pressing set of objections to the plausibility of connectionist learning as a model of human learning. Connectionist models have typically suffered from rather severe problems of inadequate generalization (where generalizations are significantly fewer than training inputs) and of interference of newly learned items with previously learned items. Taking a cue from the domains in which human learning dramatically overcomes such problems, we see that connectionist learning can indeed escape these problems in *combinatorially structured domains*. In the simple combinatorial domain of letter sequences, we find that a basic connectionist learning model trained on 50 six-letter sequences can correctly generalize to over 10,000 novel sequences. We also discover that the model exhibits over 1,000,000 *virtual memories*: new items which, although not correctly generalized, can be learned in a few presentations while leaving performance on the previously learned items intact. Virtual memories can be thought of as states which are not harmony maxima (energy minima) but which can become so with a few presentations, without interfering with existing harmony maxima. Like generalizations, virtual memories in combinatorial domains are largely novel combinations of familiar subpatterns extracted from the contexts in which they appear in the training set.
We conclude that, in combinatorial domains like language, connectionist learning is not as harmful to the empiricist position as typical connectionist learning experiments might suggest.

Submitted to the annual meeting of the Cognitive Science Society. Please send requests to conn_tech_report at boulder.Colorado.EDU and request report CU-CS-431-89. These will be available for mailing shortly.

From ersoy at ee.ecn.purdue.edu Fri Apr 14 10:24:52 1989
From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy)
Date: Fri, 14 Apr 89 09:24:52 -0500
Subject: No subject
Message-ID: <8904141424.AA11764@ee.ecn.purdue.edu>

CALL FOR PAPERS AND REFEREES

HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 23
NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES
KAILUA-KONA, HAWAII - JANUARY 3-6, 1990

The Neural Networks Track of HICSS-23 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications. Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings, which is published by the Computer Society of the IEEE. HICSS-23 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM).

Submissions are solicited in:

Supervised and Unsupervised Learning
Associative Memory
Self-Organization
Architectures
Optical, Electronic and Other Novel Implementations
Optimization
Signal/Image Processing and Understanding
Novel Applications

INSTRUCTIONS FOR SUBMITTING PAPERS

Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this.
Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, the full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s), and a 300-word abstract of the paper.

DEADLINES

Six copies of the manuscript are due by June 10, 1989.
Notification of accepted papers by September 1, 1989.
Accepted manuscripts, camera-ready, are due by October 3, 1989.

SEND SUBMISSIONS AND QUESTIONS TO

O. K. Ersoy
Purdue University
School of Electrical Engineering
W. Lafayette, IN 47907
(317) 494-6162
E-Mail: ersoy at ee.ecn.purdue

H. H. Szu
Naval Research Laboratories
Code 5709
4555 Overlook Ave., SE
Washington, DC 20375
(202) 767-2407

From mike at bucasb.BU.EDU Thu Apr 13 16:26:51 1989
From: mike at bucasb.BU.EDU (Michael Cohen)
Date: Thu, 13 Apr 89 16:26:51 EDT
Subject: network meeting announcement
Message-ID: <8904132026.AA01714@bucasb.bu.edu>

NEURAL NETWORK MODELS OF CONDITIONING AND ACTION
12th Symposium on Models of Behavior
Friday and Saturday, June 2 and 3, 1989
105 William James Hall, Harvard University
33 Kirkland Street, Cambridge, Massachusetts

PROGRAM COMMITTEE:
Michael Commons, Harvard Medical School
Stephen Grossberg, Boston University
John E.R. Staddon, Duke University

JUNE 2, 8:30AM--11:45AM
-----------------------
Daniel L. Alkon, ``Pattern Recognition and Storage by an Artificial Network Derived from Biological Systems''
John H. Byrne, ``Analysis and Simulation of Cellular and Network Properties Contributing to Learning and Memory in Aplysia''
William B. Levy, ``Synaptic Modification Rules in Hippocampal Learning''

JUNE 2, 1:00PM--5:15PM
----------------------
Gail A. Carpenter, ``Recognition Learning by a Hierarchical ART Network Modulated by Reinforcement Feedback''
Stephen Grossberg, ``Neural Dynamics of Reinforcement Learning, Selective Attention, and Adaptive Timing''
Daniel S. Levine, ``Simulations of Conditioned Perseveration and Novelty Preference from Frontal Lobe Damage''
Nestor A. Schmajuk, ``Neural Dynamics of Hippocampal Modulation of Classical Conditioning''

JUNE 3, 8:30AM--11:45AM
-----------------------
John W. Moore, ``Implementing Connectionist Algorithms for Classical Conditioning in the Brain''
Russell M. Church, ``A Connectionist Model of Scalar Timing Theory''
William S. Maki, ``Connectionist Approach to Conditional Discrimination: Learning, Short-Term Memory, and Attention''

JUNE 3, 1:00PM--5:15PM
----------------------
Michael L. Commons, ``Models of Acquisition and Preference''
John E.R. Staddon, ``Simple Parallel Model for Operant Learning with Application to a Class of Inference Problems''
Alliston K. Reid, ``Computational Models of Instrumental and Scheduled Performance''
Stephen Jose Hanson, ``Behavioral Diversity, Hypothesis Testing, and the Stochastic Delta Rule''
Richard S. Sutton, ``Time Derivative Models of Pavlovian Reinforcement''

FOR REGISTRATION INFORMATION SEE ATTACHED OR WRITE:

Dr. Michael L.
Commons Society for Quantitative Analysis of Behavior 234 Huron Avenue Cambridge, MA 02138 ---------------------------------------------------------------------- ---------------------------------------------------------------------- REGISTRATION FEE BY MAIL (Paid by check to Society for Quantitative Analysis of Behavior) (Postmarked by April 30, 1989) Name: ______________________________________________ Title: _____________________________________________ Affiliation: _______________________________________ Address: ___________________________________________ Telephone(s): ______________________________________ E-mail address: ____________________________________ ( ) Regular $35 ( ) Full-time student $25 School ____________________________________________ Graduate Date _____________________________________ Print Faculty Name ________________________________ Faculty Signature _________________________________ PREPAID 10-COURSE CHINESE BANQUET ON JUNE 2 ( ) $20 (add to pre-registration fee check) ----------------------------------------------------------------------------- (cut here and mail with your check to) Dr. Michael L. Commons Society for Quantitative Analysis of Behavior 234 Huron Avenue Cambridge, MA 02138 REGISTRATION FEE AT THE MEETING ( ) Regular $45 ( ) Full-time Student $30 (Students must show active student I.D. to receive this rate) ON SITE REGISTRATION 5:00--8:00PM, June 1, at the RECEPTION in Room 1550, William James Hall, 33 Kirkland Street, and 7:30--8:30AM, June 2, in the LOBBY of William James Hall. Registration by mail before April 30, 1989 is recommended as seating is limited HOUSING INFORMATION Rooms have been reserved in the name of the symposium ("Models of Behavior") for the Friday and Saturday nights at: Best Western Homestead Inn 220 Alewife Brook Parkway Cambridge, MA 02138 Single: $71 Double: $80 Call (617) 491-1890 or (800) 528-1234 and ask for the Group Sales desk. Reserve your room as soon as possible. 
The hotel will not hold them past May 1. Because of Harvard and MIT graduation ceremonies, space will fill up rapidly. Other nearby hotels: Howard Johnson's Motor Lodge 777 Memorial Drive Cambridge, MA 02139 (617) 492-7777 (800) 654-2000 Single: $115--$135 Double: $115--$135 Suisse Chalet 211 Concord Turnpike Parkway Cambridge, MA 02140 (617) 661-7800 (800) 258-1980 Single: $48.70 Double: $52.70 --------------------------------------------------------------------------- From cole at cse.ogc.edu Mon Apr 17 22:08:54 1989 From: cole at cse.ogc.edu (Ron Cole) Date: Mon, 17 Apr 89 19:08:54 -0700 Subject: POST-DOC: SPEECH & NEURAL NETS Message-ID: <8904180208.AA14232@ogccse.OGC.EDU> POST-DOCTORAL POSITION AT OREGON GRADUATE CENTER Speech Recognition with Neural Nets A post-doctoral position is available at the Oregon Graduate Center to study connectionist approaches to computer speech recognition, beginning Summer or Fall, 1989. The main requirements are (1) a strong background in the theory and application of neural networks, and (2) willingness to learn about the wonderful world of speech. Knowledge of computer speech recognition is helpful but not required; the PI has extensive experience in the area and is willing to teach the necessary skills. The goal of our research is to develop speech recognition algorithms that are motivated by research on hearing, acoustic phonetics and speech perception, and to compare performance of algorithms that use neural network classifiers to more traditional techniques. In the past year, our group has applied neural network classification to several problem areas in speaker-independent recognition of continuous speech: Pitch and formant tracking, segmentation, broad phonetic classification and fine phonetic discrimination. In addition, we have recently demonstrated the feasibility of using multi-layered networks to identify languages on the basis of their temporal characteristics (preprint available from vincew at ogc.cse.edu). 
OGC provides an excellent environment for research in speech recognition and neural networks, with state-of-the-art speech processing software (including Dick Lyon's cochleogram, a representation based on a computational model of the auditory system), speech databases, and simulation tools. The department has a Sequent Symmetry multiprocessor, Intel Hypercube and Cogent Research XTM parallel workstations, and the speech project has several dedicated Sun4 and Sun3 workstations. The speech group has close ties with Dan Hammerstrom's Cognitive Architecture Project at OGC, and with Les Atlas and his group at the University of Washington. OGC is located ten miles west of Portland on a spacious campus in the heart of Oregon's technology corridor. Nearby companies include Sequent, Intel, Tektronix, Cogent Research, Mentor Graphics, BiiN, NCUBE, and FPS Computing. The cultural attractions of Portland are close by, and the Columbia River Gorge, Oregon Coast and Cascade Mts (skiing through September) are less than 90 minutes away. Housing is inexpensive and quality of life is excellent. Please send resume to: Ronald Cole Computer Science and Engineering Oregon Graduate Center Beaverton, OR 97006 503 690 1159 From cole at cse.ogc.edu Tue Apr 18 15:57:43 1989 From: cole at cse.ogc.edu (Ron Cole) Date: Tue, 18 Apr 89 12:57:43 -0700 Subject: Address Correction Message-ID: <8904181957.AA14690@ogccse.OGC.EDU> The recent post-doc announcement (speech and neural nets) did not include the complete mailing address. It is: Ronald Cole Computer Science and Engineering Oregon Graduate Center 19600 N.W. Von Neumann Drive Beaverton, OR 97006-1999 From marvit%hplpm at hplabs.hp.com Thu Apr 20 21:44:15 1989 From: marvit%hplpm at hplabs.hp.com (Peter Marvit) Date: Thu, 20 Apr 89 18:44:15 PDT Subject: research reports available In-Reply-To: Your message of "Wed, 05 Apr 89 18:11:27 EDT." 
<89Apr5.181134edt.10961@ephemeral.ai.toronto.edu> Message-ID: <8904210144.AA01477@hplpm.HPL.HP.COM> From PSYKIMP at vms2.uni-c.dk Thu Apr 27 11:57:00 1989 From: PSYKIMP at vms2.uni-c.dk (PSYKIMP@vms2.uni-c.dk) Date: Thu, 27 Apr 89 16:57 +0100 Subject: Position available Message-ID: The Institute of Psychology, University of Aarhus, Denmark, is announcing a new position at the Associate Professor level. Applicants should document research within the area of psychology or Cognitive Science which involves the relation between information and computer technology, and psychological processes. Qualifications within the latter area - the relation to computer technology and psychology - will be given special consideration. For further details, please contact Dr. Kim Plunkett: psykimp at dkarh02.bitnet (Deadline for receipt of applications: June 2nd, 1989) From netlist at psych.Stanford.EDU Thu Apr 27 11:50:49 1989 From: netlist at psych.Stanford.EDU (Mark Gluck) Date: Thu, 27 Apr 89 08:50:49 PDT Subject: NEXT TUES (5/2): Bruce McNaughton on Neural Networks for Spatial Representation in Hippocampus Message-ID: Stanford University Interdisciplinary Colloquium Series: Adaptive Networks and their Applications May 2nd (Tuesday, 3:30pm): Room 380-380C ******************************************************************************** Hebb-Steinbuch-Marr Networks and the Role of Movement in Hippocampal Representations of Spatial Relations Bruce L. McNaughton Dept. of Psychology University of Colorado Campus Box 345 Boulder, CO 80309 ******************************************************************************** Abstract Over 15 years ago, Marr proposed models for associative learning and pattern completion in specific brain regions.
These models incorporated Hebb's postulate, the "learning matrix" concept of Steinbuch, recurrent excitation, and the assumptions that a few excitatory synapses are disproportionately powerful, and that inhibitory synapses divide postsynaptic excitation by the total input. These ideas provide a basis for understanding much of the circuitry and physiology of the hippocampus, and will be used to suggest how spatial relationships are coded there by forming conditional associations between location and movement representations originating in the inferotemporal and parietal cortical systems respectively. References: ----------- McNaughton, B. L. & Morris R.G.M. (1988). Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends in Neurosci. 10:408-415. McNaughton, B. L. & Nadel, L. (in press, 1989). Hebb-Marr networks and the neurobiological representation of action in space. To appear in M. Gluck & D. Rumelhart (Eds.), Neuroscience and Connectionist Models, Erlbaum: Hillsdale, NJ Additional Information: ---------------------- Location: Room 380-380C, which can be reached through the lower level between the Psychology and Mathematical Sciences buildings. Level: Technically oriented for persons working in related areas. Mailing lists: To be added to the network mailing list, netmail to netlist at psych.stanford.edu with "addme" as your subject header. For additional information, contact Mark Gluck (gluck at psych.stanford.edu). 
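The ingredients listed in the abstract above (Hebb's postulate, Steinbuch's "learning matrix", and inhibition that divides postsynaptic excitation by the total input) can be made concrete with a toy Willshaw-style associative net. This is a minimal sketch of that class of model, not McNaughton's actual network; the sizes, sparsity, and exact threshold rule are illustrative assumptions.

```python
import numpy as np

# Toy Hebb-Steinbuch-Marr associative net (illustrative sketch only).
rng = np.random.default_rng(0)
n, k, n_patterns = 64, 6, 10          # units, active units per pattern, patterns

patterns = np.zeros((n_patterns, n), dtype=int)
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = 1

# Hebb's postulate as a clipped outer product (Steinbuch's learning matrix):
# a synapse switches on once its two units have been co-active.
W = np.zeros((n, n), dtype=int)
for p in patterns:
    W |= np.outer(p, p)

def recall(cue):
    # Divisive inhibition: each unit's summed excitation is divided by the
    # total number of active input units, then thresholded near 1.
    excitation = W @ cue
    return (excitation / cue.sum() >= 1.0).astype(int)

# Pattern completion from a degraded cue: keep 3 of the 6 active units.
cue = patterns[0].copy()
active = np.flatnonzero(cue)
cue[active[3:]] = 0
completed = recall(cue)
```

Units belonging to the stored pattern receive input from every active cue unit, so their normalized excitation reaches 1 and the full pattern is restored; the division by total input is what makes the threshold independent of how much of the cue survives.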
From Christopher.McConnell at A.GP.CS.CMU.EDU Thu Apr 27 17:16:17 1989 From: Christopher.McConnell at A.GP.CS.CMU.EDU (Christopher.McConnell@A.GP.CS.CMU.EDU) Date: Thu, 27 Apr 89 17:16:17 EDT Subject: NEXT TUES (5/2): Bruce McNaughton on Neural Networks for Spatial Representation in Hippocampus Message-ID: From Shastri at cis.upenn.edu Sat Apr 1 13:42:00 1989 From: Shastri at cis.upenn.edu (Lokendra Shastri) Date: Sat, 1 Apr 89 13:42 EST Subject: Connectionist AI? Workshop at IJCAI-89. Call for participation Message-ID: <8904011829.AA23053@central.cis.upenn.edu> IJCAI-89 WORKSHOP CALL FOR PARTICIPATION CONNECTIONIST AI? Motivation and Agenda The focus of the workshop is to define the critical issues that arise in reconciling systematic rule-governed processes with connectionist architectures. The intended outcome of the workshop is to elaborate what the problem is and to motivate cross-talk between the connectionist and AI research communities. Numerous claims and counterclaims have been made about the nature of connectionist models and how they relate to rule-governed behavior. We feel that some researchers tend to oversimplify connectionism and underestimate what it has to offer. At the same time, some others make very strong claims about connectionism and tend to underestimate the complexity of the AI problem and ignore insights obtained over years of research in AI and cognitive science. We also feel that some underlying problems in the discussions have never been raised. Through this workshop we hope to gain a better understanding of specific issues related to the integration of rules with connectionist processing approaches and to be able to more clearly specify critical problems that need to be addressed if a reconciliation between the approaches is warranted. Specific issues to be discussed Introductory Discussions - (Session I) 1.
There are a number of variations on connectionism, such as parallel distributed processing, localist or structured connectionist models, and neural nets. What are the core aspects of connectionist models? 2. What is a rule? Aspects of rules to be addressed include the structure and representation of rules and the control of rule-based processes. Reconciling rules with connectionism -- the alternatives? (Session II) 1. Is there a clash between rules and connectionist architectures? It is often asserted that connectionist models are "non-symbolic" or "sub-symbolic", and hence fundamentally different from traditional AI approaches. Is this claim justified? 2. Should connectionist architectures compute rules? If so, what kind of rules? If not, how does one reconcile the approach with rules as characteristics of performance? Can connectionism contribute to AI? (Session III) 1. It is claimed that connectionism just provides an interesting implementation paradigm. What is meant by "an implementation paradigm"? Can an implementation paradigm offer crucial insights into problems? 2. Evaluate the contributions made by recent work in connectionism to central problems in AI such as representation, reasoning, and learning. Format Our aim is to gather around 25 experts from within mainstream AI as well as connectionism to discuss the above issues in depth. The workshop will consist of three 3-hour discussion sessions spread over one and a half days. There will be no presentations, only moderated discussions. Participation Participation in the workshop is by invitation only and is limited to 25 persons. Anyone who has published on issues directly related to the workshop may apply. Please submit a two-page abstract outlining your position on one or more of the topics to be discussed, together with a list of your recent publications on any of these topics. The abstract should be in 12-point font (the size of this text) and double-spaced. (References may extend beyond the two-page limit.)
Send three copies of your submission by APRIL 17, 1989 to: Lokendra Shastri Computer and Information Science University of Pennsylvania, Philadelphia, PA 19104. Organizers: Helen Gigley Lokendra Shastri Army Audiology and Speech Center Computer and Information Science Dept Walter Reed Army Medical Center University of Pennsylvania Washington, D.C. 20012 Philadelphia, PA 19104 hgigley at note.nsf.gov shastri at cis.upenn.edu Alan Prince Psychology Department Brandeis University Waltham, MA 02254 prince at brandeis.bitnet From noel at CS.EXETER.AC.UK Wed Apr 5 16:49:13 1989 From: noel at CS.EXETER.AC.UK (Noel Sharkey) Date: Wed, 5 Apr 89 16:49:13 BST Subject: No subject Message-ID: <3141.8904051549@entropy.cs.exeter.ac.uk> what does one have to do nowadays to get on the connectionist mailing list. i was on it until i changed universities. Since then, despite repeated requests and one reply over a period of nearly 4 months, i have received nothing. perhaps moving universities has turned me into a GARGOYLE or some other undesirable creature. Can someone out there help. yours desperately, noel sharkey Centre for Connection Science JANET: noel at uk.ac.exeter.cs Dept. Computer Science University of Exeter UUCP: !ukc!expya!noel Exeter EX4 4PT Devon BITNET: noel at cs.exeter.ac.uk.UKACRL U.K. From zemel at ai.toronto.edu Wed Apr 5 18:11:27 1989 From: zemel at ai.toronto.edu (Richard Zemel) Date: Wed, 5 Apr 89 18:11:27 EDT Subject: research reports available Message-ID: <89Apr5.181134edt.10961@ephemeral.ai.toronto.edu> The following two technical reports are now available. The first report describes the main ideas of TRAFFIC. It appeared in the Proceedings of the 1988 Connectionist Summer School, Morgan Kaufmann Publishers, edited by D.S. Touretzky, G.E. Hinton, and T.J. Sejnowski. The second report is a revised version of my Master's thesis. It contains a thorough description of the model, as well as implementation details and some experimental results.
This report is rather long (~75 pages), so if you are curious about the model we'll send you the first one. On the other hand, if you want to plough through the details, ask specifically for the second one. *************************************************************************** "TRAFFIC: A Model of Object Recognition Based On Transformations of Feature Instances" Richard S. Zemel, Michael C. Mozer, Geoffrey E. Hinton Department of Computer Science University of Toronto Technical report CRG-TR-88-7 (Sept. 1988) ABSTRACT Visual object recognition involves not only detecting the presence of salient features of objects, but ensuring that these features are in the appropriate relationships to one another. Recent connectionist models designed to recognize two-dimensional shapes independent of their orientation, position, and scale have primarily dealt with simple objects, and they have not represented structural relations of these objects in an efficient manner. A new model is proposed that takes advantage of the fact that given a rigid object, and a particular feature of that object, there is a fixed viewpoint-independent transformation from the feature's reference frame to the object's. This fixed transformation can be expressed as a matrix multiplication that is efficiently implemented by a set of weights in a connectionist network. By using a hierarchy of these transformations, with increasing feature complexity in each successive layer, a network can recognize multiple objects in parallel. ****************************** "TRAFFIC: A Connectionist Model of Object Recognition" Richard S. Zemel Department of Computer Science University of Toronto Technical report CRG-TR-89-2 (March 1989) ABSTRACT Recent connectionist models designed to recognize two-dimensional shapes independent of their orientation, position, and scale have not represented structural relations of the objects in an efficient manner.
A new model is described that takes advantage of the fact that given a rigid object, and a particular feature of that object, there is a fixed viewpoint-independent transformation from the feature's reference frame to the object's. This fixed transformation can be expressed as a matrix multiplication that is efficiently implemented by a set of weights in a connectionist network. The model, called TRAFFIC (a loose acronym for ``transforming feature instances''), uses a hierarchy of these transformations, with increasing feature complexity in each successive layer, in order to recognize multiple objects in parallel. An implementation of TRAFFIC is described, along with experimental results demonstrating the network's ability to recognize constellations of stars in a viewpoint-independent manner. ************************************************************************* Copies of either report can be obtained by sending an email request to: INTERNET: carol at ai.toronto.edu UUCP: uunet!utai!carol BITNET: carol at utorgpu From bradley at ivy.Princeton.EDU Wed Apr 5 22:48:13 1989 From: bradley at ivy.Princeton.EDU (Bradley Dickinson) Date: Wed, 5 Apr 89 21:48:13 EST Subject: Inform. Theory paper nominations sought Message-ID: <8904060248.AA11436@ivy.Princeton.EDU> CALL FOR NOMINATIONS: IEEE INFORMATION THEORY SOCIETY PAPER AWARD Nominations are sought for the 1989 IEEE Information Theory Society Paper Award. This award will be given for an outstanding publication in the field of interest* of the Society appearing in any journal during the preceding two calendar years (1987-1988). Nominations should consist of a complete citation for the publication, a brief description of the contribution of the paper, and a statement indicating why the paper is deserving of the Award. Nominations may be submitted to the Chair of the IT Society Awards Committee: H. 
Vincent Poor Coordinated Science Laboratory University of Illinois at Urbana-Champaign 1101 West Springfield Avenue Urbana, IL 61801 USA E-mail: poor at uicsl.csl.uiuc.edu Telephone: (217) 333-6449 Telefax: (217) 244-1764; or to any other member of the Committee: Thomas M. Cover (cover at isl.stanford.edu) Daniel J. Costello (dan at ndsun.uucp) Shu Lin (University of Hawaii) Sergio Verdu (verdu at ivy.princeton.edu ) Kung Yao (IAD3TAO%OAC.UCLA.EDU). Nominations should be received by April 30, 1989, at the latest, and preferably by April 21, 1989. *The field of interest of the Information Theory Society is the processing, transmission, storage and use of information and the foundations of the communication process. It specifically encompasses theoretical and certain applied aspects of coding, communications and communications networks, complexity and cryptography, detection and estimation, learning, Shannon Theory, and stochastic processes. From cfields at NMSU.Edu Sun Apr 9 17:56:55 1989 From: cfields at NMSU.Edu (cfields@NMSU.Edu) Date: Sun, 9 Apr 89 15:56:55 MDT Subject: No subject Message-ID: <8904092156.AA19088@NMSU.Edu> _________________________________________________________________________ The following are abstracts of papers appearing in the second issue of the Journal of Experimental and Theoretical Artificial Intelligence, to appear in April, 1989. 
For submission information, please contact either of the editors: Eric Dietrich Chris Fields PACSS - Department of Philosophy Box 30001/3CRL SUNY Binghamton New Mexico State University Binghamton, NY 13901 Las Cruces, NM 88003-0001 dietrich at bingvaxu.cc.binghamton.edu cfields at nmsu.edu JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia _________________________________________________________________________ Generating plausible diagnostic hypotheses with self-processing causal networks Jonathan Wald, Martin Farach, Malle Tagamets, and James Reggia Department of Computer Science, University of Maryland A recently proposed connectionist methodology for diagnostic problem solving is critically examined for its ability to construct problem solutions. A sizeable causal network (56 manifestation nodes, 26 disorder nodes, 384 causal links) served as the basis of experimental simulations. Initial results were discouraging, with less than two-thirds of simulations leading to stable solution states (equilibria). Examination of these simulation results identified a critical period during simulations, and analysis of the connectionist model's activation rule during this period led to an understanding of the model's nonstable oscillatory behavior. Slower decrease in the model's control parameters during the critical period resulted in all simulations reaching a stable equilibrium with plausible solutions. As a consequence of this work, it is possible to more rationally determine a schedule for control parameter variation during problem solving, and the way is now open for real-world experimental assessment of this problem solving method. 
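The abstract's central finding (that decreasing the model's control parameters more slowly during a critical period turns oscillation into stable equilibria) is an instance of a general settling phenomenon. The sketch below is a generic illustration of that phenomenon, not the Wald/Reggia causal network: a small symmetric net is settled by repeated activation updates, and the "control parameter" is taken to be an update gain that a schedule lowers over time. The network, schedules, and sizes are all assumptions for illustration.

```python
import numpy as np

# Toy settling network: symmetric random weights, tanh activation rule.
rng = np.random.default_rng(1)
n = 20
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0                    # symmetric connection weights
np.fill_diagonal(W, 0.0)

def settle(steps, gain_schedule):
    a = rng.uniform(-0.1, 0.1, size=n)           # initial activations
    for t in range(steps):
        g = gain_schedule(t)                     # control parameter at step t
        a = (1.0 - g) * a + g * np.tanh(W @ a)   # damped activation update
    return a

# Abrupt schedule: full-size synchronous updates throughout, which can
# leave the state bouncing between configurations.
a_fast = settle(200, lambda t: 1.0)
# Gradual schedule: the gain decays, so late updates are small and any
# oscillation around an equilibrium is damped out.
a_slow = settle(200, lambda t: 1.0 / (1.0 + 0.05 * t))
```

The gain plays the role the abstract assigns to the control parameters: holding it large keeps the dynamics lively early on, and shrinking it slowly through the critical period lets the state ease into a stable solution rather than overshooting it.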
_________________________________________________________________________ Organizing and integrating edge segments for texture discrimination Kenzo Iwama and Anthony Maida Department of Computer Science, Pennsylvania State University We propose a psychologically and psychophysically motivated texture segmentation algorithm. The algorithm is implemented as a computer program which parses visual images into regions on the basis of texture. The program's output matches human judgements on a very large class of stimuli. The program and algorithm offer very detailed hypotheses of how humans might segment stimuli, and also suggest plausible alternative explanations to those presented in the literature. In particular, contrary to Julesz and Bergen (1983), the program does not use crossings as textons and does use corners as textons. Nonetheless, the program is able to account for the same data. The program accounts for much of the linking phenomena of Beck, Prazdny, and Rosenfeld (1983). It does so by matching structures between feature maps on the basis of spatial overlap. These same mechanisms are also used to account for the feature integration phenomena of Treisman (1985). ---------------------------------------------------------------------------- Towards a paradigm shift in belief representation methodology John Barnden Computing Research Laboratory, New Mexico State University Research programs must often divide issues into manageable sub-issues. The assumption is that an approach developed to cope with a sub-issue can later be integrated into an approach to the whole issue - possibly after some tinkering with the sub-approach, but without affecting its fundamental features. However, the present paper examines a case where an AI issue has been divided in a way that is, apparently, harmless and natural, but is actually fundamentally out of tune with the realities of the issue.
As a result, some approaches developed for a certain sub-issue cannot be extended to a total approach without fundamental modification. The issue in question is that of modeling people's beliefs, hopes, intentions, and other ``propositional attitudes'', and/or interpreting natural language sentences that report propositional attitudes. Researchers have, quite understandably, de-emphasized the problem of dealing in detail with nested attitudes (e.g. hopes about beliefs, beliefs about intentions about beliefs), in favor of concentrating on the sub-issue of nonnested attitudes. Unfortunately, a wide variety of approaches to attitudes are prone to a deep but somewhat subtle problem when they are applied to nested attitudes. This problem can be very roughly described as an AI system's unwitting imputation of its own arcane ``theory'' of propositional attitudes to other agents. The details of this phenomenon have been published elsewhere by the author: the present paper merely sketches it, and concentrates instead on the methodological lessons to be drawn, both for propositional attitude research and, more tentatively, for AI in general. The paper also summarizes an argument (presented more completely elsewhere) for an approach to attitude representation based in part on metaphors of mind that are commonly used by people. This proposed new research direction should ultimately coax propositional attitude research out of the logical armchair and into the psychological laboratory.
--------------------------------------------------------------------------- The graph of a boolean function Frank Harary Department of Computer Science, New Mexico State University (Abstract not available) ___________________________________________________________________________ From jbower at bek-mc.caltech.edu Mon Apr 10 12:52:01 1989 From: jbower at bek-mc.caltech.edu (Jim Bower) Date: Mon, 10 Apr 89 08:52:01 pst Subject: CPGs Message-ID: <8904101652.AA07358@bek-mc.caltech.edu> At the just completed Snowbird meeting on neural networks, several participants asked for more information on biological central pattern generators (CPGs). I recommend a book titled "Model Neural Networks and Behavior" edited by Al Selverston published by Plenum Press in 1985. For those that don't know, CPGs as a class probably represent the most completely described real biological networks in terms of their anatomy, physiology, and, importantly, the behavior they control. As such they are well worth knowing about. Jim Bower Caltech From mm at cogsci.indiana.edu Mon Apr 10 15:44:18 1989 From: mm at cogsci.indiana.edu (Melanie Mitchell) Date: Mon, 10 Apr 89 13:44:18 CST Subject: Technical report available Message-ID: The following report is available from the Center for Research on Concepts and Cognition at Indiana University: The Role of Computational Temperature in a Computer Model of Concepts and Analogy-Making Melanie Mitchell and Douglas R. Hofstadter Center For Research on Concepts and Cognition Indiana University Abstract In this paper we discuss the role of computational temperature in Copycat, a computer model of the mental mechanisms underlying human concepts and analogy-making. Central features of Copycat's architecture are a high degree of parallelism, fine-grained distributed processing, competition, randomness, and an interaction of bottom-up perceptual pressures with an associative, overlapping, and context-sensitive conceptual network. 
In Copycat, computational temperature is used both to measure the amount and quality of perceptual organization created by the program as processing proceeds, and, reciprocally, to continuously control the degree of randomness in the system. In this paper we will discuss the role of temperature in two aspects of perception central to Copycat's behavior: (1) the emergence of a "parallel terraced scan", in which many possible courses of action are explored simultaneously, each at a speed and to a depth proportional to moment-to-moment estimates of its promise, and (2) the ability to restructure initial perceptions -- sometimes radically -- in order to arrive at a deeper, more essential understanding of a situation. We will also compare our notion of temperature to similar notions in other computational frameworks. Finally, an example will be given of how temperature is used in Copycat's creation of a subtle and insightful analogy. For copies of this report, send a request for CRCC-89-1 to helga at cogsci.indiana.edu or to Helga Keller Center for Research on Concepts and Cognition Indiana University Bloomington, Indiana, 47408 From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Wed Apr 12 13:41:47 1989 From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (thanasis kehagias) Date: Wed, 12 Apr 89 13:41:47 EDT Subject: request from Dr. Lippman Message-ID: if Dr. Lippman is reading this, can you please send me a copy of your joint paper with Dr. Huang: "Comparison between Neural Net and Conventional Classifiers" my address is: Thanasis Kehagias Division of Applied Mathematics Brown University Providence, RI 02912 USA thanks in advance. 
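The feedback loop in the Copycat abstract above (temperature measures the quality of perceptual organization and, reciprocally, controls the degree of randomness in the system's choices) can be illustrated with a minimal sketch. This is not Copycat's actual code: the power-law urgency weighting and the linear quality-to-temperature mapping are assumptions chosen only to show the mechanism.

```python
import random

def temperature_weighted_choice(urgencies, temperature):
    """Choose an index stochastically, shaped by temperature.

    High temperature -> choices nearly uniform (more randomness);
    low temperature -> the highest-urgency option dominates.
    The 1/temperature exponent is an assumption, not Copycat's formula.
    """
    weights = [u ** (1.0 / temperature) for u in urgencies]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def temperature_from_organization(quality):
    """Map organization quality in [0, 1] to a temperature: poor
    organization yields high temperature (more randomness); good
    organization cools the system so promising structures stabilize."""
    return 100.0 * (1.0 - quality) + 1.0
```

At low temperature a codelet with urgency 100 is chosen almost every time over one with urgency 1; at very high temperature the two are picked nearly equally often, which is the "parallel terraced scan" flavor of exploring many courses of action at once.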
From dario%TECHUNIX.BITNET at VMA.CC.CMU.EDU Thu Apr 13 08:47:57 1989 From: dario%TECHUNIX.BITNET at VMA.CC.CMU.EDU (Dario Ringach) Date: Thu, 13 Apr 89 14:47:57 +0200 Subject: Texture Segmentation using the Boundary Contour System Message-ID: <8904131247.AA10308@techunix.bitnet> How can the Boundary Contour System segment textures with identical second-order statistics? I mean, first-order differences are easily discovered by the "contrast sensitive" cells at the first stage of the BCS (the OC filter), while the CC-loop can account for second-order (dipole) statistics; but how can the BCS segment textures, such as the ones presented by Julesz, which have even identical third-order statistics but are easily discriminable? Is the BCS/FCS model consistent with Julesz's Textons theory? If so, in which way? Thanks! -- Dario dario at techunix.bitnet From kube%cs at ucsd.edu Thu Apr 13 17:44:44 1989 From: kube%cs at ucsd.edu (Paul Kube) Date: Thu, 13 Apr 89 14:44:44 PDT Subject: Texture Segmentation using the Boundary Contour System In-Reply-To: Dario Ringach's message of Thu, 13 Apr 89 14:47:57 +0200 <8904131247.AA10308@techunix.bitnet> Message-ID: <8904132144.AA02793@kokoro.UCSD.EDU> Date: Thu, 13 Apr 89 14:47:57 +0200 From: Dario Ringach How can the Boundary Contour System segment textures with identical second-order statistics? If by "segment" you mean construct a Boundary Contour between the two textured regions, it probably won't. I mean, first-order differences are easily discovered by the "contrast sensitive" cells at the first stage of the BCS (the OC filter), Yes. while the CC-loop can account for second-order (dipole) statistics; The CC loop can construct linkages between texture elements with similar orientation, and so can, to some extent, "group" elements into regions with similar second-order statistics. 
However there is nothing in the BC system that is sensitive to such differences in 2nd order statistics as may exist between regions, so it won't mark a boundary between them, so it won't segment them, except perhaps by accident (e.g. if end cuts of elements at the boundary between regions happen to line up in the right way). but how can the BCS segment textures, such as the ones presented by Julesz, which have even identical third-order statistics but are easily discriminable? Indeed, but it doesn't even have the resources to segment correctly in the iso-first-order case. Is the BCS/FCS model consistent with Julesz's Textons theory? If so, in which way? I think it's most useful to see Grossberg and Mingolla (Perception and Psychophysics 1985) as proposing a mechanism to account for some of the linking, subjective contour, and neon spreading phenomena that seem to be involved in some texture stimuli used by Beck. In my way of carving things up, this is a contribution to the texture *description* problem. Julesz has (almost) always studiously avoided these kinds of effects in his displays so it has (almost) nothing to say about his work. In any case, it has nothing to say about the texture *discrimination* problem, which is exactly the problem of computing boundaries between differently textured image regions, given some representation which shows them to be different. Let me add that when I discussed these issues with Mingolla about a year ago, he was optimistic that the BC/FC system might segment Julesz's stimuli, though it hadn't been tried. I was and remain pessimistic, though to my knowledge the experiments still haven't been done. 
--Paul Kube kube at ucsd.edu From pauls at boulder.Colorado.EDU Thu Apr 13 11:45:53 1989 From: pauls at boulder.Colorado.EDU (Paul Smolensky) Date: Thu, 13 Apr 89 09:45:53 MDT Subject: TR: Virtual Memories and Massive Generalization Message-ID: <8904131545.AA20801@sigi.colorado.edu> Virtual Memories and Massive Generalization in Connectionist Combinatorial Learning Olivier Brousse & Paul Smolensky Department of Computer Science & Institute of Cognitive Science University of Colorado at Boulder We report a series of experiments on connectionist learning that addresses a particularly pressing set of objections to the plausibility of connectionist learning as a model of human learning. Connectionist models have typically suffered from rather severe problems of inadequate generalization (where generalizations are significantly fewer than training inputs) and interference of newly learned items with previously learned items. Taking a cue from the domains in which human learning dramatically overcomes such problems, we see that indeed connectionist learning can escape these problems in *combinatorially structured domains.* In the simple combinatorial domain of letter sequences, we find that a basic connectionist learning model trained on 50 6-letter sequences can correctly generalize to over 10,000 novel sequences. We also discover that the model exhibits over 1,000,000 *virtual memories*: new items which, although not correctly generalized, can be learned in a few presentations while leaving performance on the previously learned items intact. Virtual memories can be thought of as states which are not harmony maxima (energy minima) but which can become so with a few presentations, without interfering with existing harmony maxima. Like generalizations, virtual memories in combinatorial memories are largely novel combinations of familiar subpatterns extracted from the contexts in which they appear in the training set. 
We conclude that, in combinatorial domains like language, connectionist learning is not as harmful to the empiricist position as typical connectionist learning experiments might suggest. Submitted to the annual meeting of the Cognitive Science Society. Please send requests to conn_tech_report at boulder.Colorado.EDU and request report CU-CS-431-89. These will be available for mailing shortly. From ersoy at ee.ecn.purdue.edu Fri Apr 14 10:24:52 1989 From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy) Date: Fri, 14 Apr 89 09:24:52 -0500 Subject: No subject Message-ID: <8904141424.AA11764@ee.ecn.purdue.edu> CALL FOR PAPERS AND REFEREES HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 23 NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES KAILUA-KONA, HAWAII - JANUARY 3-6, 1990 The Neural Networks Track of HICSS-23 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications. Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings, which is published by the Computer Society of the IEEE. HICSS-23 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM). Submissions are solicited in: Supervised and Unsupervised Learning Associative Memory Self-Organization Architectures Optical, Electronic and Other Novel Implementations Optimization Signal/Image Processing and Understanding Novel Applications INSTRUCTIONS FOR SUBMITTING PAPERS Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. 
Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s), and a 300-word abstract of the paper. DEADLINES Six copies of the manuscript are due by June 10, 1989. Notification of accepted papers by September 1, 1989. Accepted manuscripts, camera-ready, are due by October 3, 1989. SEND SUBMISSIONS AND QUESTIONS TO O. K. Ersoy, School of Electrical Engineering, Purdue University, W. Lafayette, IN 47907, (317) 494-6162, E-Mail: ersoy at ee.ecn.purdue; or H. H. Szu, Code 5709, Naval Research Laboratories, 4555 Overlook Ave., SE, Washington, DC 20375, (202) 767-2407 From mike at bucasb.BU.EDU Thu Apr 13 16:26:51 1989 From: mike at bucasb.BU.EDU (Michael Cohen) Date: Thu, 13 Apr 89 16:26:51 EDT Subject: network meeting announcement Message-ID: <8904132026.AA01714@bucasb.bu.edu> NEURAL NETWORK MODELS OF CONDITIONING AND ACTION 12th Symposium on Models of Behavior Friday and Saturday, June 2 and 3, 1989 105 William James Hall, Harvard University 33 Kirkland Street, Cambridge, Massachusetts PROGRAM COMMITTEE: Michael Commons, Harvard Medical School Stephen Grossberg, Boston University John E.R. Staddon, Duke University JUNE 2, 8:30AM--11:45AM ----------------------- Daniel L. Alkon, ``Pattern Recognition and Storage by an Artificial Network Derived from Biological Systems'' John H. Byrne, ``Analysis and Simulation of Cellular and Network Properties Contributing to Learning and Memory in Aplysia'' William B. Levy, ``Synaptic Modification Rules in Hippocampal Learning'' JUNE 2, 1:00PM--5:15PM ---------------------- Gail A. 
Carpenter, ``Recognition Learning by a Hierarchical ART Network Modulated by Reinforcement Feedback'' Stephen Grossberg, ``Neural Dynamics of Reinforcement Learning, Selective Attention, and Adaptive Timing'' Daniel S. Levine, ``Simulations of Conditioned Perseveration and Novelty Preference from Frontal Lobe Damage'' Nestor A. Schmajuk, ``Neural Dynamics of Hippocampal Modulation of Classical Conditioning'' JUNE 3, 8:30AM--11:45AM ----------------------- John W. Moore, ``Implementing Connectionist Algorithms for Classical Conditioning in the Brain'' Russell M. Church, ``A Connectionist Model of Scalar Timing Theory'' William S. Maki, ``Connectionist Approach to Conditional Discrimination: Learning, Short-Term Memory, and Attention'' JUNE 3, 1:00PM--5:15PM ---------------------- Michael L. Commons, ``Models of Acquisition and Preference'' John E.R. Staddon, ``Simple Parallel Model for Operant Learning with Application to a Class of Inference Problems'' Alliston K. Reid, ``Computational Models of Instrumental and Scheduled Performance'' Stephen Jose Hanson, ``Behavioral Diversity, Hypothesis Testing, and the Stochastic Delta Rule'' Richard S. Sutton, ``Time Derivative Models of Pavlovian Reinforcement'' FOR REGISTRATION INFORMATION SEE ATTACHED OR WRITE: Dr. Michael L. 
Commons Society for Quantitative Analysis of Behavior 234 Huron Avenue Cambridge, MA 02138 ---------------------------------------------------------------------- ---------------------------------------------------------------------- REGISTRATION FEE BY MAIL (Paid by check to Society for Quantitative Analysis of Behavior) (Postmarked by April 30, 1989) Name: ______________________________________________ Title: _____________________________________________ Affiliation: _______________________________________ Address: ___________________________________________ Telephone(s): ______________________________________ E-mail address: ____________________________________ ( ) Regular $35 ( ) Full-time student $25 School ____________________________________________ Graduate Date _____________________________________ Print Faculty Name ________________________________ Faculty Signature _________________________________ PREPAID 10-COURSE CHINESE BANQUET ON JUNE 2 ( ) $20 (add to pre-registration fee check) ----------------------------------------------------------------------------- (cut here and mail with your check to) Dr. Michael L. Commons Society for Quantitative Analysis of Behavior 234 Huron Avenue Cambridge, MA 02138 REGISTRATION FEE AT THE MEETING ( ) Regular $45 ( ) Full-time Student $30 (Students must show active student I.D. to receive this rate) ON SITE REGISTRATION 5:00--8:00PM, June 1, at the RECEPTION in Room 1550, William James Hall, 33 Kirkland Street, and 7:30--8:30AM, June 2, in the LOBBY of William James Hall. Registration by mail before April 30, 1989 is recommended as seating is limited HOUSING INFORMATION Rooms have been reserved in the name of the symposium ("Models of Behavior") for the Friday and Saturday nights at: Best Western Homestead Inn 220 Alewife Brook Parkway Cambridge, MA 02138 Single: $71 Double: $80 Call (617) 491-1890 or (800) 528-1234 and ask for the Group Sales desk. Reserve your room as soon as possible. 
The hotel will not hold them past May 1. Because of Harvard and MIT graduation ceremonies, space will fill up rapidly. Other nearby hotels: Howard Johnson's Motor Lodge 777 Memorial Drive Cambridge, MA 02139 (617) 492-7777 (800) 654-2000 Single: $115--$135 Double: $115--$135 Suisse Chalet 211 Concord Turnpike Parkway Cambridge, MA 02140 (617) 661-7800 (800) 258-1980 Single: $48.70 Double: $52.70 --------------------------------------------------------------------------- From cole at cse.ogc.edu Mon Apr 17 22:08:54 1989 From: cole at cse.ogc.edu (Ron Cole) Date: Mon, 17 Apr 89 19:08:54 -0700 Subject: POST-DOC: SPEECH & NEURAL NETS Message-ID: <8904180208.AA14232@ogccse.OGC.EDU> POST-DOCTORAL POSITION AT OREGON GRADUATE CENTER Speech Recognition with Neural Nets A post-doctoral position is available at the Oregon Graduate Center to study connectionist approaches to computer speech recognition, beginning Summer or Fall, 1989. The main requirements are (1) a strong background in the theory and application of neural networks, and (2) willingness to learn about the wonderful world of speech. Knowledge of computer speech recognition is helpful but not required; the PI has extensive experience in the area and is willing to teach the necessary skills. The goal of our research is to develop speech recognition algorithms that are motivated by research on hearing, acoustic phonetics and speech perception, and to compare performance of algorithms that use neural network classifiers to more traditional techniques. In the past year, our group has applied neural network classification to several problem areas in speaker-independent recognition of continuous speech: Pitch and formant tracking, segmentation, broad phonetic classification and fine phonetic discrimination. In addition, we have recently demonstrated the feasibility of using multi-layered networks to identify languages on the basis of their temporal characteristics (preprint available from vincew at ogc.cse.edu). 
OGC provides an excellent environment for research in speech recognition and neural networks, with state-of-the-art speech processing software (including Dick Lyon's cochleogram, a representation based on a computational model of the auditory system), speech databases, and simulation tools. The department has a Sequent Symmetry multiprocessor, Intel Hypercube and Cogent Research XTM parallel workstations, and the speech project has several dedicated Sun4 and Sun3 workstations. The speech group has close ties with Dan Hammerstrom's Cognitive Architecture Project at OGC, and with Les Atlas and his group at the University of Washington. OGC is located ten miles west of Portland on a spacious campus in the heart of Oregon's technology corridor. Nearby companies include Sequent, Intel, Tektronix, Cogent Research, Mentor Graphics, BiiN, NCUBE, and FPS Computing. The cultural attractions of Portland are close by, and the Columbia River Gorge, Oregon Coast and Cascade Mts (skiing through September) are less than 90 minutes away. Housing is inexpensive and quality of life is excellent. Please send resume to: Ronald Cole Computer Science and Engineering Oregon Graduate Center Beaverton, OR 97006 503 690 1159 From cole at cse.ogc.edu Tue Apr 18 15:57:43 1989 From: cole at cse.ogc.edu (Ron Cole) Date: Tue, 18 Apr 89 12:57:43 -0700 Subject: Address Correction Message-ID: <8904181957.AA14690@ogccse.OGC.EDU> The recent post-doc announcement (speech and neural nets) did not include the complete mailing address. It is: Ronald Cole Computer Science and Engineering Oregon Graduate Center 19600 N.W. Von Neumann Drive Beaverton, OR 97006-1999 From marvit%hplpm at hplabs.hp.com Thu Apr 20 21:44:15 1989 From: marvit%hplpm at hplabs.hp.com (Peter Marvit) Date: Thu, 20 Apr 89 18:44:15 PDT Subject: research reports available In-Reply-To: Your message of "Wed, 05 Apr 89 18:11:27 EDT." 
<89Apr5.181134edt.10961@ephemeral.ai.toronto.edu> Message-ID: <8904210144.AA01477@hplpm.HPL.HP.COM> From PSYKIMP at vms2.uni-c.dk Thu Apr 27 11:57:00 1989 From: PSYKIMP at vms2.uni-c.dk (PSYKIMP@vms2.uni-c.dk) Date: Thu, 27 Apr 89 16:57 +0100 Subject: Position available Message-ID: The Institute of Psychology, University of Aarhus, Denmark is announcing a new position at the Associate Professor level. Applicants should document research within the area of psychology or Cognitive Science which involves the relation between information and computer technology, and psychological processes. Qualifications within the latter area - the relation to computer technology and psychology - will be given special consideration. For further details, please contact Dr. Kim Plunkett: psykimp at dkarh02.bitnet (Deadline for receipt of applications: June 2nd, 1989) From netlist at psych.Stanford.EDU Thu Apr 27 11:50:49 1989 From: netlist at psych.Stanford.EDU (Mark Gluck) Date: Thu, 27 Apr 89 08:50:49 PDT Subject: NEXT TUES (5/2): Bruce McNaughton on Neural Networks for Spacial Representation in Hippocampus Message-ID: Stanford University Interdisciplinary Colloquium Series: Adaptive Networks and their Applications May 2nd (Tuesday, 3:30pm): Room 380-380C ******************************************************************************** Hebb-Steinbuch-Marr Networks and the Role of Movement in Hippocampal Representations of Spatial Relations Bruce L. McNaughton Dept. of Psychology University of Colorado Campus Box 345 Boulder, CO 80309 ******************************************************************************** Abstract Over 15 years ago, Marr proposed models for associative learning and pattern completion in specific brain regions. 
These models incorporated Hebb's postulate, the "learning matrix" concept of Steinbuch, recurrent excitation, and the assumptions that a few excitatory synapses are disproportionately powerful, and that inhibitory synapses divide postsynaptic excitation by the total input. These ideas provide a basis for understanding much of the circuitry and physiology of the hippocampus, and will be used to suggest how spatial relationships are coded there by forming conditional associations between location and movement representations originating in the inferotemporal and parietal cortical systems respectively. References: ----------- McNaughton, B. L. & Morris R.G.M. (1988). Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends in Neurosci. 10:408-415. McNaughton, B. L. & Nadel, L. (in press, 1989). Hebb-Marr networks and the neurobiological representation of action in space. To appear in M. Gluck & D. Rumelhart (Eds.), Neuroscience and Connectionist Models, Erlbaum: Hillsdale, NJ Additional Information: ---------------------- Location: Room 380-380C, which can be reached through the lower level between the Psychology and Mathematical Sciences buildings. Level: Technically oriented for persons working in related areas. Mailing lists: To be added to the network mailing list, netmail to netlist at psych.stanford.edu with "addme" as your subject header. For additional information, contact Mark Gluck (gluck at psych.stanford.edu). From Christopher.McConnell at A.GP.CS.CMU.EDU Thu Apr 27 17:16:17 1989 From: Christopher.McConnell at A.GP.CS.CMU.EDU (Christopher.McConnell@A.GP.CS.CMU.EDU) Date: Thu, 27 Apr 89 17:16:17 EDT Subject: NEXT TUES (5/2): Bruce McNaughton on Neural Networks for Spacial Representation in Hippocampus Message-ID:
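The ingredients McNaughton's abstract lists (Hebb's postulate, Steinbuch's learning matrix, and inhibitory synapses that divide postsynaptic excitation by the total input) can be sketched as a Willshaw-style binary associative memory. This is only an illustrative sketch, not the models presented in the talk; the pattern sizes and the unit threshold are assumptions.

```python
import numpy as np

def store(patterns_in, patterns_out):
    """Steinbuch/Willshaw-style binary learning matrix: following
    Hebb's postulate, a synapse is switched on whenever its pre- and
    postsynaptic units are active together on some training pair."""
    n_out, n_in = patterns_out.shape[1], patterns_in.shape[1]
    W = np.zeros((n_out, n_in), dtype=bool)
    for x, y in zip(patterns_in, patterns_out):
        W |= np.outer(y, x).astype(bool)
    return W

def recall(W, x, theta=1.0):
    """Divisive inhibition: each unit's summed excitation is divided
    by the total number of active inputs, so any subset of a stored
    cue drives its target units to exactly 1.0 (pattern completion)."""
    total = x.sum()
    if total == 0:
        return np.zeros(W.shape[0], dtype=int)
    return ((W @ x) / total >= theta).astype(int)
```

With stored pairs [1,1,0,0] -> [1,0,0] and [0,0,1,1] -> [0,1,0], even the partial cue [1,0,0,0] recalls [1,0,0]: the division normalizes the reduced excitation back up to threshold, which is the pattern-completion property Marr exploited.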