From wilson at Think.COM Thu Feb 1 11:02:42 1990
From: wilson at Think.COM (Stewart Wilson)
Date: Thu, 01 Feb 90 11:02:42 EST
Subject: SAB90 Call for Papers
Message-ID: <9002011602.AA14047@pozzo>

Dear colleagues,

Dr. Meyer and I would be very grateful if you would again distribute the following call for papers on your email list. It was distributed a month ago--this is for readers who may have missed it then. Thank you.

Sincerely,
Stewart Wilson

==============================================================================

                              Call for Papers

        SIMULATION OF ADAPTIVE BEHAVIOR: FROM ANIMALS TO ANIMATS

     An International Conference to be held in Paris, September 24-28, 1990

The object of the conference is to bring together researchers in ethology, ecology, cybernetics, artificial intelligence, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. The conference will focus particularly on simulation models in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals. Contact among scientists from diverse disciplines should contribute to better appreciation of each other's approaches and vocabularies, to cross-fertilization of fundamental and applied research, and to defining objectives, constraints, and challenges for future work.

Contributions treating any of the following topics from the perspective of adaptive behavior will receive special emphasis:

   Individual and collective behaviors
   Autonomous robots
   Action selection and behavioral sequences
   Hierarchical and parallel organizations
   Self organization of behavioral modules
   Conditioning, learning and induction
   Neural correlates of behavior
   Problem solving and planning
   Perception and motor control
   Goal directed behavior
   Motivation and emotion
   Neural networks and classifier systems
   Behavioral ontogeny
   Emergent structures and behaviors
   Cognitive maps and internal world models

Authors are requested to send two copies (hard copy only) of a full paper to each of the Conference chairmen:

   Jean-Arcady MEYER
   Groupe de Bioinformatique
   URA686, Ecole Normale Superieure
   46 rue d'Ulm
   75230 Paris Cedex 05
   France
   e-mail: meyer%FRULM63.bitnet@cunyvm.cuny.edu

   Stewart WILSON
   The Rowland Institute for Science
   100 Cambridge Parkway
   Cambridge, MA 02142
   USA
   e-mail: wilson at think.com

A brief preliminary letter to one chairman indicating the intention to participate--with the tentative title of the intended paper and a list of the topics addressed--would be appreciated for planning purposes. For conference information, please also contact one of the chairmen.

Conference committee:
   Conference Chair: J.A. Meyer, S. Wilson
   Organizing Committee and local arrangements: Groupe de BioInformatique, ENS, France: A. Guillot, J.A. Meyer, P. Tarroux, P. Vincens
   Program Committee: L. Booker, USA; R. Brooks, USA; P. Colgan, Canada; P. Greussay, France; D. McFarland, UK; L. Steels, Belgium; R. Sutton, USA; F. Toates, UK; D. Waltz, USA
Official Language: English

Important Dates:
   31 May 90       Submissions must be received by the chairmen
   30 June 90      Notification of acceptance or rejection
   31 August 90    Camera-ready revised versions due
   24-28 September 90   Conference dates

===============================================================================

From IP%IRMKANT.BITNET at VMA.CC.CMU.EDU Thu Feb 1 16:39:56 1990
From: IP%IRMKANT.BITNET at VMA.CC.CMU.EDU (IP%IRMKANT.BITNET@VMA.CC.CMU.EDU)
Date: Thu, 01 Feb 90 17:39:56 EDT
Subject: Change of ID name
Message-ID:

Our address is changed from RM0410 at IRMIAS to IP at IRMKANT. Please update the mailing list. Thanks.

ISTITUTO DI PSICOLOGIA CNR ROMA

From AEH at buenga.bu.edu Thu Feb 1 14:38:00 1990
From: AEH at buenga.bu.edu (AEH@buenga.bu.edu)
Date: Thu, 1 Feb 90 14:38 EST
Subject: unsubscribe
Message-ID:

Please delete me from the mailing list.

From russ at dash.mitre.org Fri Feb 2 09:10:08 1990
From: russ at dash.mitre.org (Russell Leighton)
Date: Fri, 2 Feb 90 09:10:08 EST
Subject: Research Positions at MITRE
Message-ID: <9002021410.AA27211@dash.mitre.org>

The MITRE Corporation Signal Processing Center is interviewing qualified candidates for positions in neural network research and pattern recognition. The MITRE Corporation Signal Processing Center has been involved with neural network research for over three years. In addition, the Signal Processing Center has groups doing active research in the areas of A.S.W., speech processing and high-speed computing.

We are seeking candidates with some of the following characteristics:

1. Experience in neural network research.
2. Familiarity with traditional pattern recognition, detection and estimation theory.
3. Strong programming abilities.
   - Unix
   - C, Fortran, Postscript
   - User interface (X11, NeWS)
4. Hardware experience, particularly parallel scientific computing.

U.S. citizenship is REQUIRED. Interested candidates please send resumes to:

   Russell Leighton
   MITRE Signal Processing Lab
   7525 Colshire Dr.
   McLean, Va. 22102 USA

From Connectionists-Request at CS.CMU.EDU Fri Feb 2 16:32:14 1990
From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU)
Date: Fri, 02 Feb 90 16:32:14 EST
Subject: Using the right address
Message-ID: <4152.633994334@B.GP.CS.CMU.EDU>

We have been getting a few too many administrative and other random requests sent to the entire list. Please send administrative requests (address changes, other list members' addresses, etc.) to me at: Connectionists-Request at CS.CMU.EDU (note the exact spelling!) and NOT: Connectionists at CS.CMU.EDU

To respond to the author of a message on the connectionists list, e.g., to order a copy of a tech report, use the "mail" command, NOT the "reply" command. Otherwise you will end up sending the message to the entire list, which REALLY annoys some people (like the maintainer, who will get the message several times).

Happy hacking.

Scott Crowder
Connectionists-Request at cs.cmu.edu (ARPAnet)

From aboulang at WILMA.BBN.COM Sat Feb 3 19:45:32 1990
From: aboulang at WILMA.BBN.COM (aboulang@WILMA.BBN.COM)
Date: Sat, 3 Feb 90 19:45:32 EST
Subject: Upcoming talk at BBN of interest
Message-ID:

BBN Systems and Technologies Corporation
Science Development Program
APPLIED & COMPUTATIONAL MATHEMATICS SEMINAR
---------------------------------------------------------------------------

TIME DELAYS, NOISE, AND NEURAL DYNAMICS
John G. Milton (telaces at uchimvs1.bitnet)
Assistant Professor
Department of Neurology
University of Chicago
Chicago, Illinois 60637

Wednesday February 14th, 10:30AM
2nd Floor Large Conference Room (6/273)
BBN, 10 Moulton St.

An intrinsic property of neural control mechanisms is the presence of time delays, which arise as a consequence of, for example, finite conduction times along the axon and across the synapse. A neural control mechanism which is amenable to manipulation and non-invasive monitoring is the pupil light reflex (PLR). Specifically, it is possible to "clamp" the PLR with external electronic feedback and thus compare prediction to experimental observation in a precisely controllable manner. The PLR is modeled with a first-order delay-differential equation (DDE) and the dynamics compared with those observed experimentally. Physiological considerations suggest the importance of considering:
   - DDEs with distributed and state-dependent delays,
   - second-order DDEs,
   - the influence of noise (stochastic DDEs).

I have an electronic mailing list for these announcements. If you would like to be on the list, send mail to: ABOULANGER at BBN.COM. For more information on this talk or the series, contact Albert Boulanger (617 873-3891).
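[Editorial aside: for readers who want a feel for the dynamics mentioned in the abstract above, the short sketch below integrates a first-order DDE with delayed negative feedback of the general kind used to model the PLR. It is a minimal illustration, not the speaker's model: the function name simulate_dde, the Hill-type feedback nonlinearity, and every parameter value are assumptions chosen only to exhibit delay-induced oscillations.]

   # Minimal sketch (illustrative assumptions throughout): Euler
   # integration of x'(t) = -x(t) + f(x(t - tau)), a first-order DDE
   # with delayed negative feedback.
   import numpy as np

   def simulate_dde(tau=2.0, gamma=3.5, theta=1.0, n=8, dt=0.01, t_end=100.0):
       lag = int(round(tau / dt))            # delay measured in time steps
       steps = int(round(t_end / dt)) + lag
       x = np.empty(steps + 1)
       x[:lag + 1] = 0.5                     # constant history on [-tau, 0]
       def f(u):                             # Hill-type negative feedback
           return gamma * theta**n / (theta**n + u**n)
       for k in range(lag, steps):
           x[k + 1] = x[k] + dt * (-x[k] + f(x[k - lag]))
       return x

   x = simulate_dde()
   print(x[-800::100])   # with a long enough delay the fixed point
                         # destabilizes and a limit-cycle oscillation appears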
From inesc!lba%alf at relay.EU.net Mon Feb 5 13:46:27 1990
From: inesc!lba%alf at relay.EU.net (Luis Borges de Almeida)
Date: Mon, 5 Feb 90 13:46:27 EST
Subject: Proceedings book available
Message-ID: <9002051346.AA01489@alf.inesc.pt>

The proceedings volume of the EURASIP Workshop on Neural Networks (Sesimbra, Portugal, 15-17 Feb. 1990) is already available from Springer-Verlag. It has been published in their Lecture Notes in Computer Science series, and the complete reference is:

   Neural Networks
   EURASIP Workshop 1990
   Sesimbra, Portugal, February 1990
   Proceedings
   L. B. Almeida and C. J. Wellekens (Eds.)
   Springer-Verlag, 1990

The volume contains two invited papers, by Eric Baum and George Cybenko, and the full contributions to the workshop, which were evaluated by an international technical committee, resulting in the acceptance of only 40% of the submissions. Below is the table of contents. Have a good reading!

---------------------------------------------------------------------

TABLE OF CONTENTS

PART I - Invited Papers
   When Are k-Nearest Neighbor and Back Propagation Accurate for Feasible Sized Sets of Examples? - E.B. Baum
   Complexity Theory of Neural Networks and Classification Problems - G. Cybenko

PART II - Theory, Algorithms
   Generalization Performance of Overtrained Back-Propagation Networks - Y. Chauvin
   Stability of the Random Neural Network Model - E. Gelenbe
   Temporal Pattern Recognition Using EBPS - M. Gori, G. Soda
   Markovian Spatial Properties of a Random Field Describing a Stochastic Neural Network: Sequential or Parallel Implementation? - T. Herve, O. Francois, J. Demongeot
   Chaos in Neural Networks - S. Renals
   The "Moving Targets" Training Algorithm - R. Rohwer
   Acceleration Techniques for the Backpropagation Algorithm - F.M. Silva, L.B. Almeida
   Rule-Injection Hints as a Means of Improving Network Performance and Learning Time - S.C. Suddarth, Y.L. Kergosien
   Inversion in Time - S. Thrun, A. Linden
   Cellular Neural Networks: Dynamic Properties and Adaptive Learning Algorithm - L. Vandenberghe, S. Tan, J. Vandewalle
   Improved Simulated Annealing, Boltzmann Machine, and Attributed Graph Matching - L. Xu, E. Oja

PART III - Speech Processing
   Artificial Dendritic Learning - T. Bell
   A Neural-Net Model of Human Short-Term Memory Development - G.D.A. Brown
   Large Vocabulary Speech Recognition Using Neural-Fuzzy and Concept Networks - N. Hataoka, A. Amano, T. Aritsuka, A. Ichikawa
   Speech Feature Extraction Using Neural Networks - M. Niranjan, F. Fallside
   Neural Network Based Continuous Speech Recognition by Combining Self Organizing Feature Maps and Hidden Markov Modeling - G. Rigoll

PART IV - Image Processing
   Ultra-Small Implementation of a Neural Halftoning Technique - T. Bernard, P. Garda, F. Devos, B. Zavidovique
   Application of Self-Organizing Networks to Signal Processing - J. Kennedy, P. Morasso
   A Study of Neural Network Applications to Signal Processing - S. Kollias

PART V - Implementation
   Simulation Machine and Integrated Implementation of Neural Networks: a Review of Methods, Problems and Realizations - C. Jutten, A. Guerin, J. Herault
   VLSI Implementation of an Associative Memory Based on Distributed Storage of Information - U. Rueckert

Luis B. Almeida
INESC                    Phone: +351-1-544607
Apartado 10105           Fax:   +351-1-525843
P-1017 Lisboa Codex
Portugal

lba at inesc.inesc.pt (from Europe)
lba%inesc.inesc.pt at uunet.uu.net (from outside Europe)
lba at inesc.uucp (if you have access to uucp)

From ersoy at ee.ecn.purdue.edu Mon Feb 5 10:22:56 1990
From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy)
Date: Mon, 5 Feb 90 10:22:56 -0500
Subject: No subject
Message-ID: <9002051522.AA11423@ee.ecn.purdue.edu>

CALL FOR PAPERS AND REFEREES

HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24
NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES
KAILUA-KONA, HAWAII - JANUARY 9-11, 1991

The Neural Networks Track of HICSS-24 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications.
Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings, which is published by the Computer Society of the IEEE. HICSS-24 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM).

Submissions are solicited in:
   Supervised and Unsupervised Learning
   Issues of Complexity and Scaling
   Associative Memory
   Self-Organization
   Architectures
   Optical, Electronic and Other Novel Implementations
   Optimization
   Signal/Image Processing and Understanding
   Novel Applications

INSTRUCTIONS FOR SUBMITTING PAPERS

Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s) and a 300-word abstract of the paper.

DEADLINES

A 300-word optional abstract may be submitted by April 30, 1990 by e-mail or mail. Feedback to authors concerning abstracts will be given by May 31, 1990. Six copies of the manuscript are due by June 25, 1990. Notification of accepted papers by September 1, 1990. Accepted manuscripts, camera-ready, are due by October 3, 1990.

SEND SUBMISSIONS AND QUESTIONS TO

   O. K. Ersoy
   Purdue University
   School of Electrical Engineering
   W. Lafayette, IN 47907
   (317) 494-6162

From aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK Sun Feb 4 14:11:17 1990
From: aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK (Aaron Sloman)
Date: Sun, 4 Feb 90 19:11:17 GMT
Subject: Turing 1990 Colloquium, 3-6 April 1990, Sussex University
Message-ID: <18538.9002041911@csunb.cogs.susx.ac.uk>

I have been asked to circulate information about this conference. NB - please do NOT use "reply". Email enquiries should go to turing at uk.ac.sussex.syma

-----------------------------------------------------------------------

TURING 1990 COLLOQUIUM

At the University of Sussex, Brighton, England
3rd - 6th April 1990

This Conference commemorates the 40th anniversary of the publication in Mind of Alan Turing's influential paper "Computing Machinery and Intelligence". It is hosted by the School of Cognitive and Computing Sciences at the University of Sussex and held under the auspices of the Mind Association. Additional support has been received from the Analysis Committee, the Aristotelian Society, The British Logic Colloquium, The International Union of History and Philosophy of Science, POPLOG, Philosophical Quarterly, and the SERC Logic for IT Initiative.

The aim of the Conference is to draw together people working in Philosophy, Logic, Computer Science, Artificial Intelligence, Cognitive Science and related fields, in order to celebrate the intellectual and technological developments which owe so much to Turing's seminal thought. Papers will be presented on the following themes: Alan Turing and the emergence of Artificial Intelligence, Logic and the Theory of Computation, The Church-Turing Thesis, The Turing Test, Connectionism, Mind and Content, Philosophy and Methodology of Artificial Intelligence and Cognitive Science.
Invited talks will be given by Paul Churchland, Joseph Ford, Robin Gandy, Clark Glymour, Douglas Hofstadter, J.R. Lucas, Donald Michie, Christopher Peacocke and Herbert Simon, while other prominent contributors include Robert French (Indiana), Beatrice de Gelder (Tilburg), Andrew Hodges (Oxford), Philip Pettit (ANU) and Aaron Sloman (Sussex).

Anyone wishing to attend this Conference should complete the enclosed form and send it to Andy Clark, TURING Registrations, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, England, U.K., enclosing a STERLING cheque or money order for the total amount payable, made out to "Turing 1990". We regret that we cannot accept payment in other currencies. The form should be returned not later than Thursday 1st March, 1990, after which an extra fee of #5.00 for late registration is payable and accommodation cannot be guaranteed. The conference will start at lunchtime on Tuesday 3rd April, 1990, and will end on Friday 6th April after tea. Final details will be sent to registered participants in February 1990.

Conference Organizing Committee: Andy Clark (Sussex University), David Holdcroft (Leeds University), Peter Millican (Leeds University), Steve Torrance (Middlesex Polytechnic)

___________________________________________________________________________

PROGRAMME OF INVITED SPEAKERS

   Paul CHURCHLAND (UCSD)          Title to be announced
   Joseph FORD (Georgia)           CHAOS: ITS PAST, ITS PRESENT, BUT MOSTLY ITS FUTURE
   Robin GANDY (Oxford)            HUMAN VERSUS MECHANICAL INTELLIGENCE
   Clark GLYMOUR (Carnegie-Mellon) COMPUTABILITY, CONCEPTUAL REVOLUTIONS AND THE LOGIC OF DISCOVERY
   Douglas HOFSTADTER (Indiana)    Title to be announced
   J.R. LUCAS (Oxford)             MINDS, MACHINES AND GODEL: A RETROSPECT
   Donald MICHIE (Turing Institute) MACHINE INTELLIGENCE - TURING AND AFTER
   Christopher PEACOCKE (Oxford)   PHILOSOPHICAL AND PSYCHOLOGICAL THEORIES OF CONCEPTS
   Herbert SIMON (Carnegie-Mellon) MACHINE AS MIND

____________________________________________________________________________

REGISTRATION DOCUMENT : TURING 1990

NAME AND TITLE : __________________________________________________________
INSTITUTION :    __________________________________________________________
STATUS :         __________________________________________________________
ADDRESS :        __________________________________________________________
                 __________________________________________________________
POSTCODE : _________________  COUNTRY : ____________________________

Any special requirements (eg. diet, disability) : _________________________

I wish to register for the Turing 1990 Colloquium and enclose a Sterling cheque or money order, payable to "Turing 1990", for the total amount listed below. Please ENTER AMOUNTS as appropriate.

1. Registration Fee (Compulsory):
   Mind Association Members                                 #30.00 ..............
   Full-time students                                       #30.00 ..............
     (enclose proof of status - e.g. letter from tutor)
   Academics (including retired academics)                  #50.00 ..............
   Non-Academics                                            #80.00 ..............
   Late Registration Fee (payable after 1st March)           #5.00 ..............

2. Full Board including all meals from Dinner on Tuesday
   3rd April to Lunch on Friday 6th April, except for
   Thursday evening                                         #84.00 ..............
   OR All meals from Dinner on Tuesday 3rd April to Lunch
   on Friday 6th April, except for Thursday evening         #33.00 ..............

3. Conference banquet in the Royal Pavilion, Brighton,
   on Thursday 5th April                                    #25.00 ..............
   OR Dinner in the University on Thursday 5th April         #6.00 ..............

4. Lunch on Tuesday 3rd April                                #6.00 ..............

5. Dinner on Friday 6th April                                #6.00 ..............

                                                   TOTAL #  ______________

Signed ________________________________  Date ______________________

Please return this form, with your cheque or money order (payable to "Turing 1990"), to:

   Dr Andy Clark
   Turing 90
   Cognitive and Computing Sciences,
   University of Sussex,
   Falmer, Brighton, BN1 9QH,
   England.

____________________________________________________________________________

From Connectionists-Request at CS.CMU.EDU Mon Feb 5 12:33:09 1990
From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU)
Date: Mon, 05 Feb 90 12:33:09 EST
Subject: Too much junk mail
Message-ID: <6199.634239189@B.GP.CS.CMU.EDU>

We have been getting too much junk mail sent to the entire list. Some of our overseas subscribers pay hard cash for every message they receive; let's keep the noise level to a minimum.

For administrative matters please use: Connectionists-Request at CS.CMU.EDU (note the exact spelling!) and NOT: Connectionists at CS.CMU.EDU

To respond to the author of a message on the connectionists list, e.g., to order a copy of his or her new tech report, use the "mail" command, NOT the "reply" command. Otherwise you will end up sending your message to the entire list, which REALLY annoys some people (especially the maintainer, who will get the message several times). The rest of us will just laugh at you behind your back.

Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe.

Happy hacking.

Scott Crowder
Connectionists-Request at cs.cmu.edu (ARPAnet)
From jose at neuron.siemens.com Tue Feb 6 18:28:54 1990
From: jose at neuron.siemens.com (Steve Hanson)
Date: Tue, 6 Feb 90 18:28:54 EST
Subject: NIPS-90 WORKSHOPS Call for Proposals
Message-ID: <9002062328.AA02485@neuron.siemens.com.siemens.com>

REQUEST FOR PROPOSALS
NIPS-90 Post-Conference Workshops
November 30 and December 1, 1990

Following the regular NIPS program, workshops on current topics in Neural Information Processing will be held on November 30 and December 1, 1990, at a ski resort near Denver. Proposals by qualified individuals interested in chairing one of these workshops are solicited.

Past topics have included: Rules and Connectionist Models; Speech; Vision; Neural Network Dynamics; Neurobiology; Computational Complexity Issues; Fault Tolerance in Neural Networks; Benchmarking and Comparing Neural Network Applications; Architectural Issues; Fast Training Techniques; VLSI; Control; Optimization; Statistical Inference; Genetic Algorithms.

The format of the workshops is informal. Beyond reporting on past research, their goal is to provide a forum for scientists actively working in the field to freely discuss current issues of concern and interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Specific open or controversial issues are encouraged and preferred as workshop topics.

Individuals interested in chairing a workshop must propose a topic of current interest and must be willing to accept responsibility for their group's discussion. Discussion leaders' responsibilities include: arranging brief informal presentations by experts working on the topic, moderating or leading the discussion, and reporting its high points, findings and conclusions to the group during evening plenary sessions, and in a short (2 page) summary.

Submission Procedure: Interested parties should submit a short proposal for a workshop of interest by May 17, 1990. Proposals should include a title and a short description of what the workshop is to address and accomplish. It should state why the topic is of interest or controversial, why it should be discussed and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications and evidence of scholarship in the field of interest.

Mail submissions to:

   Dr. Alex Waibel
   Attn: NIPS90 Workshops
   School of Computer Science
   Carnegie Mellon University
   Pittsburgh, PA 15213

Name, mailing address, phone number, and e-mail net address (if applicable) must be on all submissions.
Workshop Organizing Committee: Alex Waibel, Carnegie-Mellon, Workshop Chairman; Kathie Hibbard, University of Colorado, NIPS Local Arrangements; Howard Wachtel, University of Colorado, Workshop Local Arrangements.

PROPOSALS MUST BE RECEIVED BY MAY 17, 1990

Please post.

From jose at neuron.siemens.com Tue Feb 6 19:32:57 1990
From: jose at neuron.siemens.com (Steve Hanson)
Date: Tue, 6 Feb 90 19:32:57 EST
Subject: NIPS-90 CALL For Papers
Message-ID: <9002070032.AA02512@neuron.siemens.com.siemens.com>

CALL FOR PAPERS

IEEE Conference on Neural Information Processing Systems - Natural and Synthetic -
Monday, November 26 - Thursday, November 29, 1990
Denver, Colorado

This is the fourth meeting of an inter-disciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. Two days of focused workshops will follow at a nearby ski area (Nov 30-Dec 1).

Major categories, and examples of subcategories, for paper submissions are the following:

Neuroscience: Neurobiological models of development, cellular information processing, synaptic function, learning and memory. Studies and analyses of neurobiological systems.

Implementation and Simulation: Hardware implementation of neural nets, VLSI, optical computing, and practical issues for simulations and simulation tools.

Algorithms and Architectures: Description and experimental evaluation of new net architectures or learning algorithms: data representations, static and dynamic nets, modularity, rapid training, learning pattern sequences, implementing conventional algorithms.

Theory: Theoretical analysis of: learning, algorithms, generalization, complexity, scaling, capability, stability, dynamics, fault tolerance, sensitivity, relationship to conventional algorithms.

Cognitive Science & AI: Cognitive models or simulations of natural language understanding, problem solving, language acquisition, reasoning, skill acquisition, perception, motor control, categorization, or concept formation.

Applications: Neural networks applied to signal processing, speech, vision, character recognition, motor control, robotics, adaptive systems tasks.

Technical Program: Plenary, contributed and poster sessions will be held. There will be no parallel sessions. The full text of presented papers will be published.

Submission Procedures: Original research contributions are solicited, and will be carefully refereed. Authors must submit six copies of a 1000-word (or less) summary and six copies of a separate single-page 50-100 word abstract, clearly stating their results, by May 17, 1990. At the bottom of each abstract page and on the first summary page, indicate preference for oral or poster presentation and specify one of the above six broad categories and, if appropriate, sub-categories (for example: POSTER-Applications: Speech, ORAL-Implementation: Analog VLSI). Include the addresses of all authors at the front of the summary and the abstract, and indicate to which author correspondence should be addressed. Submissions that lack category information, separate abstract sheets, the required six copies, or author addresses, or that arrive late, will not be considered.

Mail Submissions To:
   John Moody
   NIPS*90 Submissions
   Department of Computer Science
   Yale University
   P.O. Box 2158 Yale Station
   New Haven, Conn. 06520

Mail Requests For Registration Material To:
   Kathie Hibbard
   NIPS*90 Local Committee
   Engineering Center
   University of Colorado
   Campus Box 425
   Boulder, CO 80309-0425
Organizing Committee:
   General Chair: Richard Lippmann, MIT Lincoln Labs
   Program Chair: John Moody, Yale
   Neurobiology Co-Chair: Terry Sejnowski, Salk
   Theory Co-Chair: Gerry Tesauro, IBM
   Implementation Co-Chair: Josh Alspector, Bellcore
   Cognitive Science and AI Co-Chair: Stephen Hanson, Siemens
   Architectures Co-Chair: Yann Le Cun, ATT Bell Labs
   Applications Co-Chair: Lee Giles, NEC
   Workshop Chair: Alex Waibel, CMU
   Workshop Local Arrangements: Howard Wachtel, U. Colorado
   Local Arrangements: Kathie Hibbard, U. Colorado
   Publicity: Stephen Hanson, Siemens
   Publications: David Touretzky, CMU
   Neurosciences Liaison: James Bower, Caltech
   IEEE Liaison: Edward Posner, Caltech
   APS Liaison: Larry Jackel, ATT Bell Labs
   Treasurer: Kristina Johnson, U. Colorado

DEADLINE FOR SUMMARIES & ABSTRACTS IS MAY 17, 1990

Please post.

From ersoy at ee.ecn.purdue.edu Wed Feb 7 09:32:59 1990
From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy)
Date: Wed, 7 Feb 90 09:32:59 -0500
Subject: No subject
Message-ID: <9002071432.AA04300@ee.ecn.purdue.edu>

CALL FOR PAPERS AND REFEREES

HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24
NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES
KAILUA-KONA, HAWAII - JANUARY 8-11, 1991

The Neural Networks Track of HICSS-24 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications.

Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings, which is published by the Computer Society of the IEEE. HICSS-24 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM).

Submissions are solicited in:
   Supervised and Unsupervised Learning
   Issues of Complexity and Scaling
   Associative Memory
   Self-Organization
   Architectures
   Optical, Electronic and Other Novel Implementations
   Optimization
   Signal/Image Processing and Understanding
   Novel Applications

INSTRUCTIONS FOR SUBMITTING PAPERS

Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s) and a 300-word abstract of the paper.

DEADLINES

A 300-word optional abstract may be submitted by April 30, 1990 by e-mail or mail. Feedback to authors concerning abstracts will be given by May 31, 1990. Six copies of the manuscript are due by June 25, 1990. Notification of accepted papers by September 1, 1990. Accepted manuscripts, camera-ready, are due by October 1, 1990.

SEND SUBMISSIONS AND QUESTIONS TO

   O. K. Ersoy
   Purdue University
   School of Electrical Engineering
   W. Lafayette, IN 47907
   (317) 494-6162
   E-Mail: ersoy at ee.ecn.purdue.edu

From ai-vie!georg at relay.EU.net Wed Feb 7 13:19:51 1990
From: ai-vie!georg at relay.EU.net (Georg Dorffner)
Date: Wed, 7 Feb 90 17:19:51 -0100
Subject: connectionism & AI conf.
Message-ID: <9002071619.AA02670@ai-vie.uucp>

Announcement and Call for Papers

Sixth Austrian Artificial Intelligence Conference
---------------------------------------------------------------
Connectionism in Artificial Intelligence and Cognitive Science
---------------------------------------------------------------

organized by the Austrian Society for Artificial Intelligence (OeGAI) in cooperation with the Gesellschaft fuer Informatik (GI, German Society for Computer Science), Section for Connectionism

Sep 18 - 21, 1990, Salzburg, Austria

Conference chair: Georg Dorffner (Univ. of Vienna, Austria)

Program committee:
   J. Diederich (GMD St. Augustin, Germany)
   C. Freksa (Techn. Univ. Munich, Germany)
   Ch. Lischka (GMD St. Augustin, Germany)
   A. Kobsa (Univ. of Saarland, Germany)
   M. Koehle (Techn. Univ. Vienna, Austria)
   B. Neumann (Univ. Hamburg, Germany)
   H. Schnelle (Univ. Bochum, Germany)
   Z. Schreter (Univ. Zurich, Switzerland)

Connectionism has recently become more and more influential as a basic paradigm and method for artificial intelligence and cognitive science. Although there is an abundance of conferences on artificial neural networks - the basis of connectionism - only a few meetings are devoted to modeling cognitive processes and building AI models with the novel approach. This conference is designed to fill this gap. It will bring together work in the field of neural networks for AI problems, but also basic aspects of massive parallelism and theoretical implications of the new paradigm. The program will consist of submitted papers, workshops, invited talks and panels.

IMPORTANT! The conference languages are German and English. Most of the conference will be held in German, but papers in English are welcome!

Scientific program: papers on the following topics, among others, are solicited:
   - networks in practical AI applications
   - connectionist "expert systems"
   - localist (structured) networks
   - localist and self-organizing approaches
   - explanation and interpretation of network behavior
   - hybrid systems
   - knowledge representation in neural networks
   - representation vs. behavior
   - validity of learning mechanisms
   - parallelism in humans and machines
   - associative inferences
   - connectionism and language processing
   - connectionism and pattern recognition
   - network simulation software as AI tool
   - neural networks and genetic algorithms
   - philosophical and epistemological implications
   - neural networks and robotics

Workshops:
   - massive parallelism and cognition (Ch. Lischka)
   - structured (localist) network models (J. Diederich)
   - connectionism in language processing

The workshops consist of short presentations and intensive discussions on the specialized topic. Presentations are usually invited, but can also be submitted. They will be open to all participants at the conference.

Panel: Explanation and transparency of connectionist systems
-------------------------------------------------------------

All submissions for the scientific program should consist of no more than 10 pages, for the workshops of no more than 5 pages. Languages - as mentioned above - are German and English. All accepted papers will be printed in a proceedings volume.

Send all submissions to:

   Georg Dorffner
   Dept. of Medical Cybernetics and Artificial Intelligence
   University of Vienna
   Freyung 6/2
   A-1010 Vienna, Austria

Deadlines:
   March 15, 1990: complete submission postmarked
   April 30, 1990: notification of acceptance / rejection
   June 1, 1990: deadline for camera-ready paper

System demonstrations are possible if the conference chair is notified early.

From honavar at cs.wisc.edu Wed Feb 7 17:42:48 1990
From: honavar at cs.wisc.edu (Vasant Honavar)
Date: Wed, 7 Feb 90 16:42:48 -0600
Subject: TR available by FTP
Message-ID: <9002072242.AA13257@goat.cs.wisc.edu>

**********DO NOT FORWARD TO OTHER BBOARDS**************
**********DO NOT FORWARD TO OTHER BBOARDS**************

The following tech report is available via ftp from cheops.cis.ohio-state.edu (courtesy Jordan Pollack). Here is what you need to do to get a copy:

   unix> ftp cheops.cis.ohio-state.edu
   Name: anonymous
   Password: neuron
   ftp> cd pub/neuroprose
   ftp> get
   (remote-file) honavar.control.ps.Z
   (local-file) foo.ps.Z
   ftp> quit
   unix> uncompress foo.ps
   unix> lpr -Pxx foo.ps     (xx is the name of your local postscript printer)

------------------------------------------------------------------------

Computer Sciences Technical Report #910, January 1990.

Coordination and Control Structures and Processes: Possibilities for Connectionist Networks (CN)

Vasant Honavar & Leonard Uhr
Computer Sciences Department
University of Wisconsin-Madison

Abstract

The absence of powerful control structures and processes that synchronize, coordinate, switch between, choose among, regulate, direct, modulate interactions between, and combine distinct yet interdependent modules of large connectionist networks (CN) is probably one of the most important reasons why such networks have not yet succeeded at handling difficult tasks (e.g. complex object recognition and description, complex problem-solving, planning).

In this paper we examine how CN built from large numbers of relatively simple neuron-like units can be given the ability to handle problems that in typical multi-computer networks and artificial intelligence programs - along with all other types of programs - are always handled using extremely elaborate and precisely worked out central control (coordination, synchronization, switching, etc.). We point out the several mechanisms for central control of this un-brain-like sort that CN already have built into them - albeit in hidden, often overlooked, ways.

We examine the kinds of control mechanisms found in computers, programs, fetal development, cellular function and the immune system, evolution, social organizations, and especially brains, that might be of use in CN. Particularly intriguing suggestions are found in the pacemakers, oscillators, and other local sources of the brain's complex partial synchronies; the diffuse, global effects of slow electrical waves and neurohormones; the developmental program that guides fetal development; communication and coordination within and among living cells; the working of the immune system; the evolutionary processes that operate on large populations of organisms; and the great variety of partially competing, partially cooperating controls found in small groups, organizations, and larger societies. All these systems are rich in control - but typically control that emerges from complex interactions of many local and diffuse sources. We explore how several different kinds of plausible control mechanisms might be incorporated into CN, and assess their potential benefits with respect to their cost.
From inesc!lba%alf at relay.EU.net Thu Feb 8 14:03:13 1990
From: inesc!lba%alf at relay.EU.net (Luis Borges de Almeida)
Date: Thu, 8 Feb 90 14:03:13 EST
Subject: EURASIP Workshop on NN - Emergency announcement
Message-ID: <9002081403.AA25467@alf.inesc.pt>

[I apologize to the many readers of this list who are not involved in the EURASIP workshop, but this was the means to get to many people fast, in an emergency. I hope you will understand. Thanks, Luis B. Almeida]

-----------------------------------------------------------------------

VERY URGENT

Dear workshop participant,

We are very sorry to inform you that, from what we have just learned, the Portuguese air traffic controllers have announced a strike from February 14 through February 18. This means there will be big trouble with transportation to/from Portugal. From our judgment of the situation, we would guess that the strike will not be called off. However, it is said that the Government might make a civilian requisition of the controllers.

Below are some indications of the possible measures that you could take to ensure your arrival on time, and your departure, in case the strike is maintained. Two points, however, are very important:

1) ACT QUICKLY - alternate transportation around those days will probably get full very fast.

2) LET US KNOW OF YOUR TRAVEL ARRANGEMENTS, AS SOON AS POSSIBLE - we will try to help minimize the consequences of this strike to our participants (the best ways to contact us are given at the end of this message).

Measures that you can take:

1 - Contact your travel agent, and have him make "protective reservations" for arrival on the 13th, and departure on the 19th. Don't forget to do that for all flights along your route. It is best to also keep your old reservations, in case the strike is called off. For extra lodging, if Hotel do Mar is full and can't help you, we can suggest Holiday Inn in Lisbon (phone +351-1-735093, 735123, 735222, 736018; fax +351-1-736572, 736672; telex 60330 HOLINN P). Mention that you are coming to a meeting organized by Inesc; they'll probably give you a special price.

2 - Make "protective reservations" for arrival on the 14th and/or departure on the 18th, in Madrid, instead of Lisbon. You can then use the train to/from Lisbon, but we will also try to arrange a bus if there are enough people in this situation. You can also choose to drive between Madrid and Sesimbra (about 600 km).

You can contact anyone in the local organizing committee: Luis B. Almeida, Ilda Goncalves, Joaquim S. Rodrigues, Fernando M. Silva, Joao Neto

Phone numbers: +351-1-544607, 545150
Fax: +351-1-525843 (may get quite busy, the next few days)
Telex: 15696 INESC P
E-mail: lba at inesc.inesc.pt (from Europe)
        lba%inesc.inesc.pt at uunet.uu.net (from outside Europe)
        lba at inesc.uucp (if you have access to uucp)
        {any backbone, uunet}!mcvax!inesc!lba (older, but should still work)

We (still) look forward to meeting you in Sesimbra.

Sincerely,
Luis B. Almeida
From harnad at Princeton.EDU Thu Feb 8 20:07:30 1990
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Thu, 8 Feb 90 20:07:30 EST
Subject: Searle/Pinker: BBS Call for Commentators
Message-ID: <9002090107.AA03347@reason.Princeton.EDU>

Below are the abstracts of two forthcoming target articles [Searle on consciousness, Pinker & Bloom on language] that are about to be circulated for commentary by Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal that provides Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator on one of these articles (please specify which), or to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]

____________________________________________________________________

(1) Searle: Consciousness & Explanation
(2) Pinker & Bloom: Language Evolution

---------------------------------------------------------------------

(1) CONSCIOUSNESS, EXPLANATORY INVERSION AND COGNITIVE SCIENCE

John R. Searle
Department of Philosophy
University of California
Berkeley, CA

Cognitive science typically postulates unconscious mental phenomena, computational or otherwise, to explain cognitive capacities. The mental phenomena in question are supposed to be inaccessible in principle to consciousness. I try to show that this is a mistake, because all unconscious intentionality must be accessible in principle to consciousness; we have no notion of intrinsic intentionality except in terms of its accessibility to consciousness. I call this claim the Connection Principle. The argument for it proceeds in six steps. The essential point is that intrinsic intentionality has aspectual shape: our mental representations represent the world under specific aspects, and these aspectual features are essential to a mental state's being the state that it is.

Once we recognize the Connection Principle, we see that it is necessary to perform an inversion on the explanatory models of cognitive science, an inversion analogous to the one evolutionary biology imposes on pre-Darwinian animistic modes of explanation. In place of the original intentionalistic explanations we have a combination of hardware and functional explanations. This radically alters the structure of explanation, because instead of a mental representation (such as a rule) causing the pattern of behavior it represents (such as rule-governed behavior), there is a neurophysiological cause of a pattern (such as a pattern of behavior), and the pattern plays a functional role in the life of the organism. What we mistakenly thought were descriptions of underlying mental principles in, for example, theories of vision and language, were in fact descriptions of functional aspects of systems, which will have to be explained by underlying neurophysiological mechanisms. In such cases what looks like mentalistic psychology is sometimes better construed as speculative neurophysiology. The moral is that the big mistake in cognitive science is not the overestimation of the computer metaphor (though that is indeed a mistake) but the neglect of consciousness.
---------------------------------------------------------------------

(2) NATURAL LANGUAGE AND NATURAL SELECTION

Steven Pinker and Paul Bloom
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology

Many have argued that the evolution of the human language faculty cannot be explained by Darwinian natural selection. Chomsky and Gould have suggested that language may have evolved as the byproduct of selection for other abilities or as a consequence of unknown laws of growth and form. Others have argued that a biological specialization for grammar is incompatible with Darwinian theory: Grammar shows no genetic variation, could not exist in any intermediate forms, confers no selective advantage, and would require more time and genomic space to evolve than is available. We show that these arguments depend on inaccurate assumptions about biology or language or both.

Evolutionary theory offers a clear criterion for attributing a trait to natural selection: complex design for a function with no alternative processes to explain the complexity. Human language meets this criterion: Grammar is a complex mechanism tailored to the transmission of propositional structures through a serial interface. Autonomous and arbitrary grammatical phenomena have been offered as counterexamples to the claim that language is an adaptation, but this reasoning is unsound: Communication protocols depend on arbitrary conventions that are adaptive as long as they are shared. Consequently, the child's acquisition of language should differ systematically from language evolution in the species; attempts to make analogies between them are misleading. Reviewing other arguments and data, we conclude that there is every reason to believe that a specialization for grammar evolved by a conventional neo-Darwinian process.

--------------------------------------------------------------------------

From P.Refenes at CS.UCL.AC.UK Fri Feb 9 10:12:42 1990
From: P.Refenes at CS.UCL.AC.UK (P.Refenes@CS.UCL.AC.UK)
Date: Fri, 9 Feb 90 15:12:42 GMT
Subject: No subject
Message-ID:

The Knowledge Engineering Review is planning a special issue on "Combined Symbolic & Numeric Processing Systems". Is there anyone out there with an interest in "theories and techniques for mixed symbolic/numeric processing systems" who is willing to write a comprehensive review paper? The notes for contributors follow.

==========================================================

The Knowledge Engineering Review
Published by Cambridge University Press

Special Issue on: Combined Symbolic & Numeric Processing Systems

NOTES FOR CONTRIBUTORS

Editor: Apostolos N. REFENES
   Department of Computer Science
   University College London
   Gower Street, London, WC1E 6BT, UK.

   BITNET:  refenes%uk.ac.ucl.cs at ukacrl
   UUCP:    {...mcvax!ukc!}ucl.cs!refenes
   ARPANet: refenes at cs.ucl.ac.uk

THE KNOWLEDGE ENGINEERING REVIEW - SPECIAL ISSUE ON
Inferences and Algorithms: co-operating in computation
(or: Symbols and Numbers: co-operating in computation)

THEME

The theme of this special issue of KER is to review developments in the subject of integrated symbolic and numeric computation. Combined symbolic and numeric computation is a prominent emerging subject. In Europe, ESPRIT is already funding a $15m project to investigate the integration of symbolic and numeric computation and is planning to issue a further call for a $20m type A project in Autumn this year.
In the USA, various funding agencies, like the DoD and NSF, have been heavily involved in supporting research into the integration of symbolic and numeric computing systems over the last few years.

Algorithmic (or numeric) computational methods are mostly used for low-level, data-driven computations to achieve problem solutions by exhaustive evaluation, and are based on static, hardwired decision-making procedures. The static nature and regularity of the knowledge, data, and control structures that are employed by such algorithmic methods permit their efficient mapping and execution on conventional supercomputers. However, the scale of the computation often increases non-linearly with the problem size and the strength of the data inter-dependencies.

Symbolic (or inference) computational methods have the capability to drastically reduce the required computations by using high-level, model-driven knowledge, and hypothesis-and-test techniques about the application domain. However, the irregularity, uncertainty, and dynamic nature of the knowledge, data, and control structures that are employed by symbolic methods present a major obstacle to their efficient mapping and execution on conventional parallel computers.

This has led many researchers to propose the development of integrated numeric and symbolic computation systems, which have the potential to achieve optimal solutions for large classes of problems, in which the algorithmic and symbolic components are engaged in close co-operation. The need for such interaction is particularly obvious in such applications as image understanding, speech recognition, weather forecasting, financial forecasting, the solution of partial differential equations, etc. In these applications, numeric software components are tightly coupled with their symbolic counterparts, which, in turn, have the power to feed back adjustable algorithm parameters, and hence support a "hypothesis-and-test" capability required to validate the numeric data (a minimal code sketch of such a coupling appears at the end of this announcement). It is this application domain that provided the motivation for developing theoretical frameworks, techniques, programming languages, and computer architectures to efficiently support both symbolic and numeric computation.

The special issue of The Knowledge Engineering Review (KER) aims to provide a comprehensive and timely review of the state of the art in integrated symbolic and numeric knowledge engineering systems. The special issue will cover the topics outlined in the next section.

TOPICS

There are four important topics that are related to the subject area of integrated symbolic and numeric computation. This special issue will have one comprehensive review paper on each of the topics, and a general overview article (or editorial) to link them together.

1. Theory and Techniques

Traditional theoretical frameworks for decision making are generally considered to be too restrictive for developing practical knowledge-based systems. The principal set of restrictions is that classical algorithmic decision theories and techniques do not address the need to reason about the decision process itself. Classical techniques cannot reflect on what the decision is, what the options are, what methods should be (or have been) used in making the decision, and so forth. Approaches that accommodate numerical methods but extend them with non-monotonic inference techniques are described extensively in the literature, e.g. [Coguen, Eberbach, Vanneschi, Fox et al.]. What is needed is an in-depth analysis, taxonomy and evaluation of these techniques.
This review of the theoretical approaches and background to integrated symbolic and numeric computation should be highly valuable to those involved in symbolic, numeric, and integrated symbolic-plus-numeric computation.

2. Applications

Here there would be a review of the applications which provide the stimulus, and which demonstrate techniques, for integrating symbolic and numeric computing components. Effectiveness considerations and performance gains should also be included where appropriate. Model applications may include: image understanding, weather forecasting, financial forecasting, expert systems for PDE solving, simulation, real-time process control, etc. The review article should expose the common ground that these applications share, the potential improvement in reasoning and computation efficiency, and the requirements that they impose on the theoretical frameworks, programming languages, and computer architectures.

3. Programming Languages

This would be a review of the programming languages which provide the means for integrating symbolic and numeric computations. The article(s) should describe the competing approaches, i.e. integration through homogenisation, and integration through interfacing heterogeneous systems; language design issues; features for parallelism; etc. Possible languages that might be considered are: Spinlog, Orient84K, LOOPS, Cornerstone, Solve, Parle, etc. A comparative analysis of the languages involved should be included.

4. Computer Architecture

This should give a comprehensive review of the novel computer architectures that are involved, their basic operating principles, their internal structure, a comparative analysis, etc. Possible architectures that might be considered are: PADMAVATI, SPRINT, ...

DEADLINES

March 15th - Extended Abstract.
April 30th - Full Manuscript.
From jbower at smaug.cns.caltech.edu Fri Feb 9 14:28:47 1990 From: jbower at smaug.cns.caltech.edu (Jim Bower) Date: Fri, 9 Feb 90 11:28:47 PST Subject: Summer Course Message-ID: <9002091928.AA02507@smaug.cns.caltech.edu> Summer Course Announcement Methods in Computational Neurobiology August 5th - September 1st Marine Biological Laboratory Woods Hole, MA This course is for advanced graduate students and postdoctoral fellows in neurobiology, physics, electrical engineering, computer science and psychology with an interest in "Computational Neuroscience." A background in programming (preferably in C or PASCAL) is highly desirable and basic knowledge of neurobiology is required. Limited to 20 students. This four-week course presents the basic techniques necessary to study single cells and neural networks from a computational point of view, emphasizing their possible function in information processing. The aim is to enable participants to simulate the functional properties of their particular system of study and to appreciate the advantages and pitfalls of this approach to understanding the nervous system. The first section of the course focuses on simulating the electrical properties of single neurons (compartmental models, active currents, interactions between synapses, calcium dynamics). The second section deals with the numerical and graphical techniques necessary for modeling biological neuronal networks. Examples are drawn from the invertebrate and vertebrate literature (visual system of the fly, learning in Hermissenda, mammalian olfactory and visual cortex). In the final section, connectionist neural networks relevant to perception and learning in the mammalian cortex, as well as network learning algorithms, will be analyzed and discussed from a neurobiological point of view. The course includes lectures each morning and a computer laboratory in the afternoons and evenings. The laboratory section is organized around GENESIS, the Neuronal Network simulator developed at the California Institute of Technology, running on 20 state-of-the-art, single-user, graphic color workstations. Students initially work with GENESIS-based tutorials and then are expected to work on a simulation project of their own choosing. Co-Directors: James M. Bower and Christof Koch, Computation and Neural Systems Program, California Institute of Technology 1990 summer faculty: Ken Miller UCSF Paul Adams Stony Brook Idan Segev Jerusalem David Rumelhart Stanford John Rinzel NIH Richard Andersen MIT David Van Essen Caltech Kevin Martin Oxford Al Selverston UCSD Nancy Kopell Boston U.
Avis Cohen Cornell Rodolfo Llinas NYU Tom Brown* Yale Norberto Grzywacz* MIT Terry Sejnowski UCSD/Salk Ted Adelson MIT *tentative Application deadline: May 15, 1990 Applications are evaluated by an admissions committee and individuals are notified of acceptance or non-acceptance by June 1. Tuition: $1,000 (includes room & board). Financial aid is available to qualified applicants. For further information contact: Admissions Coordinator Marine Biological Laboratory Woods Hole, MA 02543 (508) 548-3705, ext. 216 From KOKAR at northeastern.edu Fri Feb 9 14:58:00 1990 From: KOKAR at northeastern.edu (KOKAR@northeastern.edu) Date: Fri, 9 Feb 90 14:58 EST Subject: Conference on Intelligent Control Message-ID: The 5-th IEEE International Symposium on Intelligent Control Penn Tower Hotel, Philadelphia September 5 - 7, 1990 Sponsored by IEEE Control Society The IEEE International Symposium on Intelligent Control is the Annual Meeting dedicated to the problems of Control Systems associated with the combined Control/Artificial Intelligence theoretical paradigm. This particular meeting is dedicated to the Perception - Representation - Action Triad. The Symposium will consist of three mini-conferences: Perception as a Source of Knowledge for Control (Chair - H.Wechsler) Knowledge as a Core of Perception-Control Activities (Chair - S.Navathe) Decision and Control via Perception and Knowledge (Chair - H.Kwatny) interspersed with three plenary 2-hour Panel Discussions: I. On Perception in the Loop II. On Action in the Loop III. On Knowledge Representation in the Loop. Suggested topics of papers include, but are not limited to, the following list: - Intractable Control Problems in the Perception-Representation-Action Loop - Control with Perception Driven Representation - Multiple Modalities of Perception, and Their Use for Control - Control of Movements Required by Perception - Control of Systems with Complicated Dynamics - Intelligent Control for Interpretation in Biology and Psychology - Actively Building-up Representation Systems - Identification and Estimation of Complex Events in Unstructured Environment - Explanatory Procedures for Constructing Representations - Perception for Control of Goals, Subgoals, Tasks, Assignments - Mobility and Manipulation - Reconfigurable Systems - Intelligent Control of Power Systems - Intelligent Control in Automated Manufacturing - Perception Driven Actuation - Representations for Intelligent Controllers (geometry, physics, processes) - Robust Estimation in Intelligent Control - Decision Making Under Uncertainty - Discrete Event Systems - Computer-Aided Design of Intelligent Controllers - Dealing with Unstructured Environment - Learning and Adaptive Control Systems - Autonomous Systems - Intelligent Material Processing: Perception Based Reasoning D E A D L I N E S Extended abstracts (5 - 6 pages) should be submitted to: H. Kwatny, MEM Drexel University, Philadelphia, PA 19104 - CONTROL AREA S. Navathe, Comp. Sci., University of Florida, Gainesville, FL 32911 - KNOWLEDGE REPRESENTATION AREA H. Wechsler, George Mason University, Fairfax, VA 22030 - PERCEPTION AREA NO LATER THAN MARCH 1, 1990. Papers that are difficult to categorize, and/or related to all of these areas, as well as proposals for tutorials, invited sessions, demonstrations, etc., should be submitted to A. Meystel, ECE, Drexel University, Philadelphia, PA 19104, (215) 895-2220 before March 1, 1990.
REGISTRATION FEES:
                     On/before Aug. 5, 1990    After Aug. 5, 1990
  Student                   $ 50                     $ 70
  IEEE Member               $ 200                    $ 220
  Other                     $ 230                    $ 275
Cancellation fee: $ 20. Payment in US dollars only, by check. Payable to: IC 90. Send check and registration form to: Intelligent Control - 1990, Department of ECE, Drexel University, Philadelphia, PA 19104. From jfeldman%icsib2.Berkeley.EDU at jade.berkeley.edu Sun Feb 11 17:13:11 1990 From: jfeldman%icsib2.Berkeley.EDU at jade.berkeley.edu (Jerry Feldman) Date: Sun, 11 Feb 90 14:13:11 PST Subject: Advertisement Message-ID: <9002112213.AA02197@icsib2.berkeley.edu.> The International Computer Science Institute is pleased to announce that Italy and Switzerland have joined Germany as sponsor nations. Citizens of these countries are especially encouraged to inquire about permanent, visiting, or post-doctoral positions at the Institute. There are also open positions at all levels, including the most senior, that will be filled independent of nationality. In addition to its connectionist research, ICSI has programs in Complexity Theory, Realization of Massively Parallel Systems, and Very Large Distributed Systems. From mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu Mon Feb 12 13:53:45 1990 From: mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu (mclennan%MACLENNAN.CS.UTK.EDU@cs.utk.edu) Date: Mon, 12 Feb 90 14:53:45 EDT Subject: Tech Report Available Message-ID: <9002121953.AA05739@MACLENNAN.CS.UTK.EDU> *************** PLEASE DO NOT DISTRIBUTE TO OTHER LISTS *************** The following technical report (CS-90-99) is available. Requests for copies: library at cs.utk.edu Other correspondence: maclennan at cs.utk.edu ----------------------------------------------------------------------- Evolution of Communication in a Population of Simple Machines Bruce MacLennan Department of Computer Science The University of Tennessee Knoxville, TN 37996-1301 Technical Report CS-90-99 ABSTRACT We show that communication may evolve in a population of simple machines that are physically capable of sensing and modifying a shared environment, and for which there is selective pressure in favor of cooperative behavior. The emergence of communication was detected by comparing simulations in which communication was permitted with those in which it was suppressed. When communication was not suppressed we found that at the end of the experiment the average fitness of the population was 84% higher and had increased at a rate 30 times faster than when communication was suppressed. Furthermore, when communication was suppressed, the statistical association of symbols with situations was random, as is expected. In contrast, permitting communication led to very structured associations of symbols and situations, as determined by a variety of measures (e.g., coefficient of variation and entropy). Inspection of the structure of individual highly fit machines confirmed the statistical structure. We also investigated a simple kind of learning. This did not help when communication was suppressed, but when communication was permitted the resulting fitness was 845% higher and increased at a rate 80 times as fast as when it was suppressed. We argue that the experiments described here show a new way to investigate the emergence of communication, its function in populations of simple machines, and the structure of the resulting symbol systems.
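For readers who want a concrete picture of the suppressed-versus-permitted comparison described in the abstract above, here is a much-reduced sketch. The lookup-table agents, the cooperative payoff, and every parameter below are assumptions made for brevity; MacLennan's machines, shared environment, and measures are considerably richer than this.

----------------------------------------------------------------------
# Reduced illustration (all details assumed, not MacLennan's setup):
# evolve a population of table-driven "machines" under a cooperative
# payoff, once with communication permitted and once with it
# suppressed, then compare mean fitness.
import random

N_SITUATIONS = N_SYMBOLS = 4
POP, GENS, TRIALS = 40, 60, 30

def random_agent():
    # Genome: an emission table (situation -> symbol) and an
    # interpretation table (symbol -> guessed situation).
    return ([random.randrange(N_SYMBOLS) for _ in range(N_SITUATIONS)],
            [random.randrange(N_SITUATIONS) for _ in range(N_SYMBOLS)])

def fitness(agent, partner, suppress):
    # Cooperative payoff: a point whenever the listener correctly
    # decodes the speaker's private situation. Suppression replaces
    # the emitted symbol with noise, as in the suppressed condition.
    score = 0
    for _ in range(TRIALS):
        for speaker, listener in ((agent, partner), (partner, agent)):
            situation = random.randrange(N_SITUATIONS)
            symbol = speaker[0][situation]
            if suppress:
                symbol = random.randrange(N_SYMBOLS)
            score += int(listener[1][symbol] == situation)
    return score

def mutated(parent):
    emit, interp = list(parent[0]), list(parent[1])
    emit[random.randrange(N_SITUATIONS)] = random.randrange(N_SYMBOLS)
    interp[random.randrange(N_SYMBOLS)] = random.randrange(N_SITUATIONS)
    return (emit, interp)

def mean_fitness(pop, suppress):
    total = sum(fitness(a, random.choice(pop), suppress) for a in pop)
    return total / float(POP * 2 * TRIALS)

def evolve(suppress):
    pop = [random_agent() for _ in range(POP)]
    for _ in range(GENS):
        ranked = sorted(pop, reverse=True,
                        key=lambda a: fitness(a, random.choice(pop), suppress))
        # truncation selection: top half leaves two mutated copies each
        pop = [mutated(p) for p in ranked[:POP // 2] for _ in range(2)]
    return mean_fitness(pop, suppress)

random.seed(1)
print("communication permitted :", evolve(False))  # typically well above chance (0.25)
print("communication suppressed:", evolve(True))   # stays near chance
----------------------------------------------------------------------

Under the permitted condition the evolved emission tables become coordinated across the population, and measures such as the entropy of the symbol-situation association mentioned in the abstract can be computed directly from those tables.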
From Michael.Witbrock at CS.CMU.EDU Mon Feb 12 18:53:42 1990 From: Michael.Witbrock at CS.CMU.EDU (Michael.Witbrock@CS.CMU.EDU) Date: Mon, 12 Feb 90 18:53:42 -0500 (EST) Subject: Tech Report Announcement CMU-CS-89-208 Message-ID: Requests for copies should go to the address at the bottom of this post, not to the poster. PLEASE DO NOT FORWARD TO OTHER BULLETIN BOARDS. ============================================================================= An Implementation of Back-Propagation Learning on GF11, a Large SIMD Parallel Computer Michael Witbrock and Marco Zagha Carnegie Mellon University CMU-CS-89-208 December 1989 Current connectionist simulations require huge computational resources. We describe a neural network simulator for the IBM GF11, an experimental SIMD machine with 566 processors and a peak arithmetic performance of 11 Gigaflops. We present our parallel implementation of the backpropagation learning algorithm, techniques for increasing efficiency, performance measurements on the NetTalk text-to-speech benchmark, and a performance model for the simulator. Our simulator currently runs the back-propagation learning algorithm at 900 million connections per second, where each ``connection per second'' includes both a forward and backward pass. This figure was obtained on the machine when only 356 processors were working; with all 566 processors operational, our simulation will run at over one billion connections per second. We conclude that the GF11 is well-suited to neural network simulation, and we analyze our use of the machine to determine which features are the most important for high performance. ============================================================================= PLEASE DO NOT FORWARD TO OTHER BULLETIN BOARDS. Requests for copies should go to: C Copetas School of Computer Science Carnegie Mellon University Pittsburgh PA 15213-3890 USA or to copetas at cs.cmu.edu The technical report deals with implementation issues for fast parallel connectionist simulators. It may not be of any great interest to anyone not working in this area. ------------------------------------------------------------------------------ P.S. This TR is in postscript. Could the person who runs the ftp archive tell me how to go about submitting it? Thanks. From kddlab!soc.sdl.melco.co.jp!izui at UUNET.UU.NET Sat Feb 10 17:26:39 1990 From: kddlab!soc.sdl.melco.co.jp!izui at UUNET.UU.NET (Izui Yoshio) Date: Sat, 10 Feb 90 17:26:39 JST Subject: including me in mailing list Message-ID: <9002100826.AA00495@loame26.soc.sdl.melco.co.jp> Hello, could you include my address in your mailing list? Yoshio Izui Industrial Systems Lab. Mitsubishi Electric Corp. 8-1-1, Tsukaguchi Honmachi, Amagasaki, Hyogo, 661 Japan izui at soc.sdl.melco.co.jp Interests: application to the power engineering field; basic analysis of models. From ULI%MPI01.MPI.KUN.NL at VMA.CC.CMU.EDU Tue Feb 13 12:37:00 1990 From: ULI%MPI01.MPI.KUN.NL at VMA.CC.CMU.EDU (ULI%MPI01.MPI.KUN.NL@VMA.CC.CMU.EDU) Date: Tue, 13 Feb 90 12:37 MET Subject: job announcement, please post Message-ID: Position Available The Max-Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, is looking for a programmer to participate in a project entitled, ``Computational modeling of lexical representation and processes''. The task of the successful candidate will be to help develop and implement software for studying and simulating the processes of human speech perception and word recognition with artificial neural nets of different types.
A strong background in software development (good knowledge of C) and a good understanding of the mathematical/technical principles underlying neural nets are required. The position is to be filled starting in March 1990 and is limited to two years (up to BAT IIA on the German salary scale). Qualified applicants (with a university degree: Ph.D.) should send their curriculum vitae and two letters of recommendation by March 1, 1990 to: Uli Frauenfelder or Peter Wittenburg Max-Planck-Institute for Psycholinguistics Wundtlaan 1 NL-6525-XD Nijmegen, The Netherlands phone: 31-80-521-911 e-mail: uli at hnympi51.bitnet or pewi at hnympi51.bitnet. From lyn%CS.EXETER.AC.UK at VMA.CC.CMU.EDU Tue Feb 13 08:17:34 1990 From: lyn%CS.EXETER.AC.UK at VMA.CC.CMU.EDU (Lyn Shackleton) Date: Tue, 13 Feb 90 13:17:34 GMT Subject: Reviewers for Special Issue of Connection Science Message-ID: <8863.9002131317@exsc.cs.exeter.ac.uk> The Journal, Connection Science, is soon to announce a call for papers for a special issue on Simulations of Psychological Processes. So far the special editorial board consists of: James A. Anderson Walter Kintsch Dennis Norris David Rumelhart Noel Sharkey Others will be added to this list. We are now calling for REVIEWERS for the special issue. We would like to enlist volunteers from any area of psychology with experience in connectionist modeling. Please state name and area of expertise. lyn shackleton Centre for Connection Science JANET: lyn at uk.ac.exeter.cs Dept. Computer Science University of Exeter UUCP: !ukc!expya!lyn Exeter EX4 4PT Devon BITNET: lyn at cs.exeter.ac.uk.UKACRL U.K. From mani at grad1.cis.upenn.edu Tue Feb 13 12:15:22 1990 From: mani at grad1.cis.upenn.edu (D. R. Mani) Date: Tue, 13 Feb 90 12:15:22 EST Subject: Please add me to your mailing list Message-ID: <9002131715.AA27505@grad1.cis.upenn.edu> I am a graduate student in Computer Science at the University of Pennsylvania. I am interested in connectionist networks and would like my name to be included in your connectionist (e)mailing list. Thanks. D. R. Mani mani at grad1.cis.upenn.edu From carol at ai.toronto.edu Tue Feb 13 14:33:30 1990 From: carol at ai.toronto.edu (Carol Plathan) Date: Tue, 13 Feb 90 14:33:30 EST Subject: CRG-TR-90-2 request Message-ID: <90Feb13.143344est.10526@ephemeral.ai.toronto.edu> PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS OR MAILING LISTS ********************************************************** The following paper is an expanded version of one published in the NIPS'89 Proceedings. If you would like to receive a copy of this paper, reply to this message with your physical mailing address. (Please omit all other information from your message except your address). Those who have previously requested copies of this TR at NIPS are already on the mailing list and do NOT need to send a new request. ------------------------------------------------------------------------------- MAX LIKELIHOOD COMPETITION IN RBF NETWORKS Steven J. Nowlan Department of Computer Science University of Toronto 10 King's College Road Toronto, Canada M5S 1A4 Technical Report CRG-TR-90-2 One popular class of unsupervised algorithms is competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as gaussians) to a set of data-points.
The maximum likelihood fit of a model of this type suggests a ``softer'' form of competition, in which all competitors adapt in proportion to the relative probability that the input came from each competitor. I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and show that the soft model can give better performance with little additional computational cost. ------------------------------------------------------------------------------- From netbb at LONEX.RADC.AF.MIL Wed Feb 14 07:47:44 1990 From: netbb at LONEX.RADC.AF.MIL (Robert Russell) Date: Wed, 14 Feb 90 07:47:44 EST Subject: CRG-TR-90-2 request Message-ID: <9002141247.AA22192@lonex9.radc.af.mil> W. J. Buzz Szarek RADC/IRRA G.A.F.B., NY. 13441 From eisner%husc8 at harvard.harvard.edu Wed Feb 14 10:42:47 1990 From: eisner%husc8 at harvard.harvard.edu (Jason Eisner) Date: Wed, 14 Feb 90 10:42:47 EST Subject: CRG-TR-90-2 request Message-ID: Jason Eisner 60 Linnaean St. Harvard University Cambridge, MA 02138 From pollack at cis.ohio-state.edu Wed Feb 14 13:15:11 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Wed, 14 Feb 90 13:15:11 EST Subject: Neuroprose update Message-ID: <9002141815.AA02861@toto.cis.ohio-state.edu> Tony Plate & I wrote a script to make life easier for those people who don't like to ftp and uncompress. It is enclosed below, and whatever file you save it in should be made executable. (E.g., after saving and editing the file, do a "chmod +x filename" on it.) It is also stored as "Getps" in the neuroprose directory, where it will be maintained and improved. Also I'd like to take this opportunity to ask those who have stored postscript files there, or are planning to in the future, to send me mail with: 1) filename 2) way to contact author 3) single sentence abstract so I can Cons up an INDEX file. Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890
---------------cut here, save, and chmod +x----------
#!/bin/sh
########################################################################
# usage: getps <filename> [<printerflags>]
#
# A Script to get, uncompress, and print postscript
# files from the neuroprose directory on cheops.ohio-state.edu
#
# By Tony Plate & Jordan Pollack
########################################################################
if [ "$1" = "" ] ; then
    echo usage: $0 "<filename> [<printerflags>]"
    echo
    echo The filename must be exactly as it is in the archive, if your
    echo file is not found the first time, look in the file \"ftp.log\"
    echo for a list of files in the archive.
    echo
    echo The printerflags are used for the optional lpr command that
    echo is executed after the file is retrieved. A common use would
    echo be to use -P to specify a particular postscript printer.
    exit
fi
########################################################################
# set up script for ftp
# (the command sequence below is a reconstruction; it assumes an
# anonymous login and the archive directory pub/neuroprose)
########################################################################
cat > .ftp.script <<END
user anonymous guest
binary
cd pub/neuroprose
get $1 /tmp/$1
ls
quit
END
ftp -n cheops.ohio-state.edu < .ftp.script > ftp.log
rm -f .ftp.script
if [ ! -f /tmp/$1 ] ; then
    echo Failed to get file - please inspect ftp.log for list of available files
    exit
fi
########################################################################
# Uncompress if necessary
########################################################################
echo Retrieved /tmp/$1
case $1 in
    *.Z) echo Uncompressing /tmp/$1
         uncompress /tmp/$1
         FILE=`basename $1 .Z` ;;
    *)   FILE=$1
esac
########################################################################
# query to print file
########################################################################
echo -n "Send /tmp/$FILE to 'lpr $2' (y or n)? "
read x
case $x in
    [yY]*) echo Printing /tmp/$FILE
           lpr $2 /tmp/$FILE ;;
esac
echo File left in /tmp/$FILE
From ernst at aurel.cns.caltech.edu Wed Feb 14 21:53:11 1990 From: ernst at aurel.cns.caltech.edu (Ernst Niebur) Date: Wed, 14 Feb 90 18:53:11 PST Subject: Could you please add my address to your mailing list? Thank you Message-ID: <9002150253.AA00212@aurel.cns.caltech.edu> cc: ernst From mike at bucasb.bu.edu Thu Feb 15 18:57:28 1990 From: mike at bucasb.bu.edu (Michael Cohen) Date: Thu, 15 Feb 90 18:57:28 EST Subject: Wang Conference ATR Call for Papers Message-ID: <9002152357.AA01664@bucasb.bu.edu> CALL FOR PAPERS NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION MAY 11--13, 1990 Sponsored by the Center for Adaptive Systems, the Graduate Program in Cognitive and Neural Systems, and the Wang Institute of Boston University with partial support from The Air Force Office of Scientific Research This research conference at the cutting edge of neural network science and technology will bring together leading experts in academe, government, and industry to present their latest results on automatic target recognition in invited lectures and contributed posters. Automatic target recognition is a key process in systems designed for vision and image processing, speech and time series prediction, adaptive pattern recognition, and adaptive sensory-motor control and robotics. Invited lecturers include: JOE BROWN, Martin Marietta; GAIL CARPENTER, Boston Univ.; NABIL FARHAT, Univ. Pennsylvania; STEPHEN GROSSBERG, Boston Univ.; ROBERT HECHT-NIELSEN, HNC; KEN JOHNSON, Hughes Aircraft; PAUL KOLODZY, MIT Lincoln Lab; MICHAEL KUPERSTEIN, Neurogen; YANN LECUN, AT&T Bell Labs; CHRISTOPHER SCOFIELD, Nestor; STEVEN SIMMES, Science Applications International Co.; ALEX WAIBEL, Carnegie Mellon Univ.; ALLEN WAXMAN, MIT Lincoln Lab; FRED WEINGARD, Booz-Allen and Hamilton; BARBARA YOON, DARPA; CALL FOR PAPERS---ATR POSTER SESSION: A featured poster session on ATR neural network research will be held on May 12, 1990. Attendees who wish to present a poster should submit 3 copies of an extended abstract (1 single-spaced page), postmarked by March 1, 1990, for refereeing. Include with the abstract the name, address, and telephone number of the corresponding author. Mail to: ATR Poster Session, Neural Networks Conference, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. Authors will be informed of abstract acceptance by March 31, 1990. SITE: The Wang Institute possesses excellent conference facilities on a beautiful 220-acre campus. It is easily reached from Boston's Logan Airport and Route 128. REGISTRATION FEE: Regular attendee--$90; full-time student--$70. Registration fee includes admission to all lectures and poster session, meeting proceedings, one reception, two continental breakfasts, one lunch, one dinner, daily morning and afternoon coffee service.
STUDENT FELLOWSHIPS are available. For information, call (508) 649-9731. TO REGISTER: By phone, call (508) 649-9731; by mail, write for further information to: Neural Networks, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. From Dave.Touretzky at B.GP.CS.CMU.EDU Fri Feb 16 04:25:19 1990 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Fri, 16 Feb 90 04:25:19 EST Subject: repetitive conference announcements Message-ID: <21816.635160319@DST.BOLTZ.CS.CMU.EDU> I spoke with Michael Cohen at BU about the repetitive conference announcements which some subscribers to this list find annoying. He wasn't aware of the policy on CONNECTIONISTS about repetitive posts, and assures me it won't happen anymore. For those who don't know: the policy has always been that conferences should be announced just once on CONNECTIONISTS. We've relaxed this a little to permit one early call for papers and one late posting of registration info as the time of the conference draws near. That's as far as it goes. No flames, please. Let's all get back to work. -- Dave From P.Refenes at Cs.Ucl.AC.UK Fri Feb 16 10:16:00 1990 From: P.Refenes at Cs.Ucl.AC.UK (P.Refenes@Cs.Ucl.AC.UK) Date: Fri, 16 Feb 90 15:16:00 +0000 Subject: KOHONEN's FEATURE MAPS Message-ID: Does anyone out there have an implementation of Kohonen's feature map algorithm in a useful programming language (e.g. C, C++, LISP, Prolog, RCS, GENESYS, etc.)? If so, is it possible to get our hands on the sources? Thanks in advance, Paul Refenes. From bogus@does.not.exist.com Mon Feb 19 11:39:56 1990 From: bogus@does.not.exist.com () Date: 19 FEB 90 11:39:56 Subject: Forward of: Turing 1990 Colloquium, 3-6 April 1990, Sussex University Message-ID: <$969797332S0340D19900219T093956.0001.Mail-VE> From aarons%cogs.sussex.ac.uk%NSFnet-Relay.AC.UK at vma.CC.CMU.EDU Sun Feb 4 14:11:17 1990 From: aarons%cogs.sussex.ac.uk%NSFnet-Relay.AC.UK at vma.CC.CMU.EDU (aarons%cogs.sussex.ac.uk%NSFnet-Relay.AC.UK@vma.CC.CMU.EDU) Date: Sun, 4 Feb 90 19:11:17 GMT Subject: Turing 1990 Colloquium, 3-6 April 1990, Sussex University Message-ID: <18538.9002041911@csunb.cogs.susx.ac.uk> I have been asked to circulate information about this conference. NB - please do NOT use "reply". Email enquiries should go to turing at uk.ac.sussex.syma ----------------------------------------------------------------------- TURING 1990 COLLOQUIUM At the University of Sussex, Brighton, England 3rd - 6th April 1990 This Conference commemorates the 40th anniversary of the publication in Mind of Alan Turing's influential paper "Computing Machinery and Intelligence". It is hosted by the School of Cognitive and Computing Sciences at the University of Sussex and held under the auspices of the Mind Association. Additional support has been received from the Analysis Committee, the Aristotelian Society, The British Logic Colloquium, The International Union of History and Philosophy of Science, POPLOG, Philosophical Quarterly, and the SERC Logic for IT Initiative. The aim of the Conference is to draw together people working in Philosophy, Logic, Computer Science, Artificial Intelligence, Cognitive Science and related fields, in order to celebrate the intellectual and technological developments which owe so much to Turing's seminal thought.
Papers will be presented on the following themes: Alan Turing and the emergence of Artificial Intelligence, Logic and the Theory of Computation, The Church-Turing Thesis, The Turing Test, Connectionism, Mind and Content, Philosophy and Methodology of Artificial Intelligence and Cognitive Science. Invited talks will be given by Paul Churchland, Joseph Ford, Robin Gandy, Clark Glymour, Douglas Hofstadter, J.R. Lucas, Donald Michie, Christopher Peacocke and Herbert Simon, while other prominent contributors include Robert French (Indiana), Beatrice de Gelder (Tilburg), Andrew Hodges (Oxford), Philip Pettit (ANU) and Aaron Sloman (Sussex). Anyone wishing to attend this Conference should complete the enclosed form and send it to Andy Clark, TURING Registrations, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, England, U.K., enclosing a STERLING cheque or money order for the total amount payable, made out to "Turing 1990". We regret that we cannot accept payment in other currencies. The form should be returned not later than Thursday 1st March, 1990, after which an extra fee of #5.00 for late registration is payable and accommodation cannot be guaranteed. The conference will start at lunchtime on Tuesday 3rd April, 1990, and will end on Friday 6th April after tea. Final details will be sent to registered participants in February 1990. Conference Organizing Committee Andy Clark (Sussex University), David Holdcroft (Leeds University), Peter Millican (Leeds University), Steve Torrance (Middlesex Polytechnic) ___________________________________________________________________________ PROGRAMME OF INVITED SPEAKERS Paul CHURCHLAND (UCSD) Title to be announced Joseph FORD (Georgia) CHAOS : ITS PAST, ITS PRESENT, BUT MOSTLY ITS FUTURE Robin GANDY (Oxford) HUMAN VERSUS MECHANICAL INTELLIGENCE Clark GLYMOUR (Carnegie-Mellon) COMPUTABILITY, CONCEPTUAL REVOLUTIONS AND THE LOGIC OF DISCOVERY Douglas HOFSTADTER (Indiana) Title to be announced J.R. LUCAS (Oxford) MINDS, MACHINES AND GODEL : A RETROSPECT Donald MICHIE (Turing Institute) MACHINE INTELLIGENCE - TURING AND AFTER Christopher PEACOCKE (Oxford) PHILOSOPHICAL AND PSYCHOLOGICAL THEORIES OF CONCEPTS Herbert SIMON (Carnegie-Mellon) MACHINE AS MIND ____________________________________________________________________________ REGISTRATION DOCUMENT : TURING 1990 NAME AND TITLE : __________________________________________________________ INSTITUTION : _____________________________________________________________ STATUS : ________________________________________________________________ ADDRESS : ________________________________________________________________ ________________________________________________________________ POSTCODE : _________________ COUNTRY : ____________________________ Any special requirements (eg. diet, disability) : _________________________ I wish to register for the Turing 1990 Colloquium and enclose a Sterling cheque or money order, payable to "Turing 1990", for the total amount listed below : Please ENTER AMOUNTS as appropriate. 1. Registration Fee: Mind Association Members #30.00 .............. (Compulsory) Full-time students #30.00 .............. (enclose proof of status - e.g. letter from tutor) Academics (including retired academics) #50.00 .............. Non-Academics #80.00 .............. Late Registration Fee #5.00 .............. (payable after 1st March) 2. Full Board including all meals from Dinner #84.00 .............. 
on Tuesday 3rd April to Lunch on Friday 6th April, except for Thursday evening OR All meals from Dinner on Tuesday 3rd April #33.00 .............. to Lunch on Friday 6th April, except for Thursday evening 3. Conference banquet in the Royal Pavilion, #25.00 .............. Brighton on Thursday 5th April OR Dinner in the University on Thursday 5th April #6.00 .............. 4. Lunch on Tuesday 3rd April #6.00 .............. 5. Dinner on Friday 6th April #6.00 .............. ______________ TOTAL # ______________ Signed ________________________________ Date ______________________ Please return this form, with your cheque or money order (payable to "Turing 1990"), to: Dr Andy Clark Turing 90 Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton, BN1 9QH, England. ____________________________________________________________________________ ------------------------------------------------ Following comments provided by: KDBG100.KDBG100.NVE at BGUNVE ------------------------------------------------ From gaudiano at bucasb.bu.edu Mon Feb 19 12:33:45 1990 From: gaudiano at bucasb.bu.edu (gaudiano@bucasb.bu.edu) Date: Mon, 19 Feb 90 12:33:45 EST Subject: New Student Society Message-ID: <9002191733.AA24435@retina.bu.edu> This is the first official announcement of the: Neural Networks Student Society ------------------------------- The purpose of the Society is to (1) provide a means of exchanging information among students and young professionals within the area of Neural Networks; (2) create an opportunity for interaction between students and professionals from academia and industry; (3) encourage support from academia and industry for the advancement of students in the area of Neural Networks; (4) ensure that the interest of all students in the area of Neural Networks is taken into consideration by other societies and institutions that promote Neural Networks; and (5) lay down a solid, UNBIASED foundation upon which the study of Neural Networks will be developed into a self-contained discipline. The society is specifically intended to avoid discrimination based on age, sex, race, religion, national origin, annual income or graduate advisor. An organizational meeting was held at the Washington IJCNN meeting. We had about 60 students and other interested bodies there, and later that evening many of us went out to get to know each other over some fine ales. Many of the participants came from outside of the US, and the general consensus is that this is a society whose time has come. We have many action items on our agenda, including: 1) a quarterly newsletter 2) an e-mail newsgroup 3) a resume exchange service for neograduates in the field 4) summer co-ops with NN companies 5) a database of existing graduate programs in NNs 6) an ftp site for NN simulation code and other info 7) corporate sponsorships to support student travel expenses . . . n) many, many more activities and ideas A booth for our Society has been donated by the organizers of the Paris INNC conference (July 90), and we may also get one at the San Diego IJCNN conference. We will use the booth to advertise our society, and to promote student ideas and projects. More details will be given in the first newsletter, which is scheduled to come out March 21. It will include our official bylaws, and other introductory information.
WHO TO CONTACT: -------------- If you'd like to be on the mailing list to receive newsletters by surface or electronic mail, and you did not already give us your name at the IJCNN Washington meeting, send a note to: nnss-request at thalamus.bu.edu (Newsletter requests) with your name, affiliation, and address. The first issue will contain all the necessary information to become a member for the rest of the year. Once the Society becomes official, there will be a nominal annual fee (about $5) to help with costs for publications and activities, but for now you can get our info to see what it's all about at no cost. You will only remain a member if you are interested in the society and send in the official membership form. In the meantime, if you are thinking about a job in NNs in the near future, and would like information about our resume exchange program, send a message to: khaines at galileo.ece.cmu.edu and if you have general questions about the society (other than a request for the first newsletter, or about the resume service), send mail to: nnss at thalamus.bu.edu Also, if you are willing to volunteer some time to help with the society, send a note to: gaudiano at thalamus.bu.edu. We will definitely need some help at the upcoming conferences, and may also need some assistance with other odds & ends before that time. Finally, we will soon circulate a proposal for a new USENET newsgroup, so if you read usenet news keep your eyes open for an opportunity to vote in the next few weeks. Karen Haines and Paolo Gaudiano co-founders, NNSS From grumbach at ulysse.enst.fr Tue Feb 20 06:21:30 1990 From: grumbach at ulysse.enst.fr (Alain Grumbach) Date: Tue, 20 Feb 90 12:21:30 +0100 Subject: organization levels Message-ID: <9002201121.AA00583@ulysse.enst.fr> Working on hybrid symbolic-connectionist systems, I am wondering about the notion of "organization level", which underlies hybrid models. A lot of people use this phrase, from neuroscience to cognitive psychology, via computer science and Artificial Intelligence (Anderson, Newell, Simon, Hofstadter, Marr, Changeux, etc.). But has anybody heard of a formal description of it? (Formal but understandable!) Thank you. Alain Grumbach grumbach at inf.enst.fr From rudnick at cse.ogi.edu Tue Feb 20 23:44:35 1990 From: rudnick at cse.ogi.edu (Mike Rudnick) Date: Tue, 20 Feb 90 20:44:35 PST Subject: developmental aspects of NNs Message-ID: <9002210444.AA05798@cse.ogi.edu> Is anyone doing work on the developmental (as in developmental biology) aspects of NNs? I'm (at least somewhat) aware of Edelman & Reeke's work on neuronal group selection, and Wilson's papers on L-systems, but no other work. My particular interest is in using genetic search techniques for the design of artificial neural nets. I want to find a useful analog of developmental biology to aid in both the design of compact genetic codes and in converting those genetic codes to nets (more or less) ready to be trained. I'm eager to contact new people working in these areas, and would appreciate any pointers or references that may be appropriate. Thanks, Mike Rudnick Computer Science & Eng. Dept. Domain: rudnick at cse.ogi.edu Oregon Graduate Institute (was OGC) UUCP: {tektronix,verdix}!ogicse!rudnick 19600 N.W. von Neumann Dr. (503) 690-1121 X7390 (or X7309) Beaverton, OR. 97006-1999
97006-1999 From BOVET%FRMOP11.BITNET at VMA.CC.CMU.EDU Wed Feb 21 05:07:04 1990 From: BOVET%FRMOP11.BITNET at VMA.CC.CMU.EDU (BOVET JAMON BENHAMOU OTTOMANI) Date: Wed, 21 Feb 90 10:07:04 GMT Subject: organization levels Message-ID: Alain Grumbach is looking for a formal description of organization levels. I don't know of any formal answer to this question, but in Biology this concept seems clear: see for instance the wonderful book by Francois Jacob: La logique du vivant. But Biology is not a formal science. Thus it will not give a formal answer. In AI or in ANN I think that the question of organization levels is akin to the problem of a definition of complexity, which was already discussed here some months ago. Pierre Bovet, Laboratoire de Neurosciences Fonctionnelles, Marseille. From TGELDER%IUBACS.BITNET at VMA.CC.CMU.EDU Wed Feb 21 10:34:00 1990 From: TGELDER%IUBACS.BITNET at VMA.CC.CMU.EDU (TGELDER%IUBACS.BITNET@VMA.CC.CMU.EDU) Date: Wed, 21 Feb 90 10:34 EST Subject: organization levels Message-ID: Talk of "levels" (of organization, of complexity, of analysis, etc) is pervasive in discussions of connectionism, especially in relation to the brain on one hand and some kind of "symbolic" level on the other. Unfortunately there is no well-developed account of what levels really are, and what kinds there are, and so much of the discussion lacks sharp edges, to say the least. A fellow here at Indiana University, Allen Yu-Huong Houng, is writing his philosophy PhD dissertation on the concept of "level" in scientific theorizing with particular application to psychology and the relation of connectionist modeling to other modes of psychological explanation. He adopts and develops theories of levels from other fields, primarily biology, mainstream philosophy of science, and computer science. He has a good first draft of a comprehensive chapter on the concept of a level, and would probably be glad to distribute it to interested parties for discussion and critical feedback. He can be contacted at Department of Philosophy Indiana University Bloomington, IN 47405 or I can pass on email messages. Tim van Gelder tgelder at ucs.indiana.edu From carol at ai.toronto.edu Wed Feb 21 15:27:48 1990 From: carol at ai.toronto.edu (Carol Plathan) Date: Wed, 21 Feb 90 15:27:48 EST Subject: CRG-TR-90-1 request Message-ID: <90Feb21.152804est.10599@ephemeral.ai.toronto.edu> PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS OR MAILING LISTS ********************************************************** The following technical report is now available. If you'd like a copy please send me your real mail address (omitting all other information from your message). --------------------------------------------------------------------------- BUILDING ADAPTIVE INTERFACES WITH NEURAL NETWORKS: THE GLOVE-TALK PILOT STUDY S. Sidney Fels Department of Computer Science University of Toronto Toronto, Canada M5S 1A4 CRG-TR-90-1 Connectionist learning procedures can be used to interpret incoming data and to generate complex responses. To illustrate the potential of using these procedures for adaptive interfaces, a system using neural networks to convert hand gestures to speech in real-time was developed. A VPL DataGlove connected to five networks and a DECtalk speech synthesizer implements the hand-to-speech system. Using existing connectionist learning procedures, the complex mapping of hand movements to speech particular to a specific user is learned using data obtained in a simple training phase.
Based on a 203 gesture-to-word vocabulary, the noticeable word error rate is less than 1%. In addition, adaptive control of the speaking rate and word stress is available. The system is streamlined by using small, separate networks for each naturally defined subtask. Smaller networks reduce training and running times. This system demonstrates that connectionist learning procedures can be used to develop the complex mappings required in an adaptive interface. --------------------------------------------------------------------------- From turing%ctcs.leeds.ac.uk at NSFnet-Relay.AC.UK Wed Feb 21 11:42:37 1990 From: turing%ctcs.leeds.ac.uk at NSFnet-Relay.AC.UK (Turing Conference) Date: Wed, 21 Feb 90 16:42:37 GMT Subject: Turing 1990 Programme Message-ID: <4192.9002211642@ctcs.leeds.ac.uk> ____________________________________________________________________________ TURING 1990 COLLOQUIUM At the University of Sussex, Brighton, England 3rd - 6th April 1990 PROGRAMME OF SPEAKERS AND REGISTRATION DOCUMENTS ____________________________________________________________________________ INVITED SPEAKERS Paul CHURCHLAND (Philosophy, University of California at San Diego) Title to be announced Joseph FORD (Physics, Georgia Institute of Technology) CHAOS : ITS PAST, ITS PRESENT, BUT MOSTLY ITS FUTURE Robin GANDY (Mathematical Institute, Oxford) HUMAN VERSUS MECHANICAL INTELLIGENCE Clark GLYMOUR (Philosophy, Carnegie-Mellon) COMPUTABILITY, CONCEPTUAL REVOLUTIONS AND THE LOGIC OF DISCOVERY Andrew HODGES (Oxford, author of "Alan Turing: the enigma of intelligence") BACK TO THE FUTURE : ALAN TURING IN 1950 Douglas HOFSTADTER (Computer Science, Indiana) Title to be announced J.R. LUCAS (Merton College, Oxford) MINDS, MACHINES AND GODEL : A RETROSPECT Donald MICHIE (Turing Institute, Glasgow) MACHINE INTELLIGENCE - TURING AND AFTER Christopher PEACOCKE (Magdalen College, Oxford) PHILOSOPHICAL AND PSYCHOLOGICAL THEORIES OF CONCEPTS Herbert SIMON (Computer Science and Psychology, Carnegie-Mellon) MACHINE AS MIND ____________________________________________________________________________ OTHER SPEAKERS Most of the papers to be given at the Colloquium are interdisciplinary, and should hold considerable interest for those working in any area of Cognitive Science or related disciplines. However the papers below will be presented in paired parallel sessions, which have been arranged as far as possible to minimise clashes of subject area, so that those who have predominantly formal interests, for example, will be able to attend all of the papers which are most relevant to their work, and a similar point applies for those with mainly philosophical, psychological, or purely computational interests. Jonathan Cohen (The Queen's College, Oxford) "Does Belief Exist?" 
Mario Compiani (ENIDATA, Bologna, Italy) "Remarks on the Paradigms of Connectionism" Martin Davies (Philosophy, Birkbeck College, London) "Facing up to Eliminativism" Chris Fields (Computing Research Laboratory, New Mexico) "Measurement and Computational Description" Robert French (Center for Research on Concepts and Cognition, Indiana) "Subcognition and the Limits of the Turing Test" Beatrice de Gelder (Psychology and Philosophy, Tilburg, Netherlands) "Cognitive Science is Philosophy of Science Writ Small" Peter Mott (Computer Studies and Philosophy, Leeds) "A Grammar Based Approach to Commonsense Reasoning" Aaron Sloman (Cognitive and Computing Sciences, Sussex) "Beyond Turing Equivalence" Antony Galton (Computer Science, Exeter) "The Church-Turing Thesis: its Nature and Status" Ajit Narayanan (Computer Science, Exeter) "The Intentional Stance and the Imitation Game" Jon Oberlander and Peter Dayan (Centre for Cognitive Science, Edinburgh) "Altered States and Virtual Beliefs" Philip Pettit and Frank Jackson (Social Sciences Research, ANU, Canberra) "Causation in the Philosophy of Mind" Ian Pratt (Computer Science, Manchester) "Encoding Psychological Knowledge" Joop Schopman and Aziz Shawky (Philosophy, Utrecht, Netherlands) "Remarks on the Impact of Connectionism on our Thinking about Concepts" Murray Shanahan (Computing, Imperial College London) "Folk Psychology and Naive Physics" Iain Stewart (Computing Laboratory, Newcastle) "The Demise of the Turing Machine in Complexity Theory" Chris Thornton (Artificial Intelligence, Edinburgh) "Why Concept Learning is a Good Idea" Blay Whitby (Cognitive and Computing Sciences, Sussex) "The Turing Test: AI's Biggest Blind Alley?" ____________________________________________________________________________ TURING 1990 COLLOQUIUM At the University of Sussex, Brighton, England 3rd - 6th April 1990 This Conference commemorates the 40th anniversary of the publication in Mind of Alan Turing's influential paper "Computing Machinery and Intelligence". It is hosted by the School of Cognitive and Computing Sciences at the University of Sussex and held under the auspices of the Mind Association. Additional support has been received from the Analysis Committee, the Aristotelian Society, The British Logic Colloquium, The International Union of History and Philosophy of Science, POPLOG, Philosophical Quarterly, and the SERC Logic for IT Initiative. The aim of the Conference is to draw together people working in Philosophy, Logic, Computer Science, Artificial Intelligence, Cognitive Science and related fields, in order to celebrate the intellectual and technological developments which owe so much to Turing's seminal thought. Papers will be presented on the following themes: Alan Turing and the emergence of Artificial Intelligence, Logic and the Theory of Computation, The Church-Turing Thesis, The Turing Test, Connectionism, Mind and Content, Philosophy and Methodology of Artificial Intelligence and Cognitive Science. Invited talks will be given by Paul Churchland, Joseph Ford, Robin Gandy, Clark Glymour, Andrew Hodges, Douglas Hofstadter, J.R. Lucas, Donald Michie, Christopher Peacocke and Herbert Simon, and there are many other prominent contributors, whose names and papers are listed above.
Anyone wishing to attend this Conference should complete the form below and send it to Andy Clark, TURING 1990 Registrations, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, England, U.K., enclosing a STERLING cheque or money order for the total amount payable, made out to "Turing 1990". We regret that we cannot accept payment in other currencies. The form should be returned not later than Thursday 1st March 1990, after which an extra fee of #5.00 for late registration is payable and accommodation cannot be guaranteed. The conference will start after lunch on Tuesday 3rd April 1990, and it will end on Friday 6th April after tea. Final details will be sent to registered participants towards the end of February. Conference Organizing Committee Andy Clark (Cognitive and Computing Sciences, Sussex University) David Holdcroft (Philosophy, Leeds University) Peter Millican (Computer Studies and Philosophy, Leeds University) Steve Torrance (Information Systems, Middlesex Polytechnic) ___________________________________________________________________________ REGISTRATION DOCUMENT : TURING 1990 NAME AND TITLE : __________________________________________________________ INSTITUTION : _____________________________________________________________ STATUS : ________________________________________________________________ ADDRESS : ________________________________________________________________ ________________________________________________________________ POSTCODE : _________________ COUNTRY : ____________________________ Any special requirements (eg. diet, disability) : _________________________ I wish to register for the Turing 1990 Colloquium and enclose a Sterling cheque or money order, payable to "Turing 1990", for the total amount listed below : Please ENTER AMOUNTS as appropriate. 1. Registration Fee: Mind Association Members #30.00 .............. (Compulsory) Full-time students #30.00 .............. (enclose proof of status - e.g. letter from tutor) Academics (including retired academics) #50.00 .............. Non-Academics #80.00 .............. Late Registration Fee #5.00 .............. (payable after 1st March) 2. Full Board including all meals from Dinner #84.00 .............. on Tuesday 3rd April to Lunch on Friday 6th April, except for Thursday evening OR All meals from Dinner on Tuesday 3rd April #33.00 .............. to Lunch on Friday 6th April, except for Thursday evening 3. Conference banquet in the Royal Pavilion, #25.00 .............. Brighton on Thursday 5th April OR Dinner in the University on Thursday 5th April #6.00 .............. 4. Lunch on Tuesday 3rd April #6.00 .............. 5. Dinner on Friday 6th April #6.00 .............. ______________ TOTAL # ______________ Signed ________________________________ Date ______________________ Please return this form, with your cheque or money order (payable to "Turing 1990"), to: Dr Andy Clark, Turing 1990 Registrations, Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton, BN1 9QH, England. Email responses to: turing at uk.ac.sussex.syma ( from BITNET: turing at syma.sussex.ac.uk -NM ) ____________________________________________________________________________ IMPORTANT NOTICE FOR STUDENTS AND SUPERVISORS: The Analysis Committee has kindly made a donation to subsidise students who would benefit from attending the Colloquium but who might otherwise be unable to do so. 
The amount of any such subsidy will depend on the overall demand and the quality of the candidates, but it would certainly cover the registration fee and probably a proportion of the accommodation expenses. Interested parties should write immediately to Andy Clark at the address above, enclosing a brief supporting comment from a tutor or supervisor. ____________________________________________________________________________ PLEASE SEND ON THIS NOTICE to any researchers, lecturers or students in the fields of Artificial Intelligence, Cognitive Science, Computer Science, Logic, Mathematics, Philosophy or Psychology, in Britain or abroad, and to ANY APPROPRIATE BULLETIN BOARDS which have not previously displayed it. From p_j_angeline at cis.ohio-state.edu Wed Feb 21 20:53:53 1990 From: p_j_angeline at cis.ohio-state.edu (p_j_angeline@cis.ohio-state.edu) Date: Wed, 21 Feb 90 20:53:53 -0500 Subject: CRG-TR-90-1 request In-Reply-To: Carol Plathan's message of Wed, 21 Feb 90 15:27:48 EST <90Feb21.152804est.10599@ephemeral.ai.toronto.edu> Message-ID: <9002220153.AA11170@kant.cis.ohio-state.EDU> Peter J Angeline Computer and Information Science Department 228 Bolz Hall The Ohio State University Columbus, Oh 43210 From russ at dash.mitre.org Thu Feb 22 10:07:28 1990 From: russ at dash.mitre.org (Russell Leighton) Date: Thu, 22 Feb 90 10:07:28 EST Subject: CRG-TR-90-1 request In-Reply-To: Carol Plathan's message of Wed, 21 Feb 90 15:27:48 EST <90Feb21.152804est.10599@ephemeral.ai.toronto.edu> Message-ID: <9002221507.AA14409@dash.mitre.org> Russell Leighton MITRE Signal Processing Lab 7525 Colshire Dr. McLean, Va. 22102 USA From munnari!cluster.cs.su.oz.au!ray at uunet.uu.net Tue Feb 20 23:49:37 1990 From: munnari!cluster.cs.su.oz.au!ray at uunet.uu.net (munnari!cluster.cs.su.oz.au!ray@uunet.uu.net) Date: Wed, 21 Feb 90 15:49:37 +1100 Subject: pseudo-standard TSP coordinates Message-ID: <9002210451.3521@munnari.oz.au> I have received a number of requests for the city coordinates of the Traveling Salesman Problems I studied in my IJCNN-90-WASH paper (Lister, "Segment Reversal and the Traveling Salesman Problem"). Some of these requests arrived by physical mail. Apparently, some people have had trouble reaching my site with email. Below are the coordinates. They were originally authored by Hopfield and Tank, and Durbin and Willshaw, so in some sense they are pseudo-standard problems. I also have the coordinates for Angeniol et al's 1000 city problem (Neural Networks, Vol 1, No. 4, 1988), but I have decided not to clog the list with those. If you'd like it, mail me direct. 
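For readers who want to experiment with the coordinate sets below, here is a minimal sketch in Python of the segment-reversal move the paper refers to, i.e. the standard 2-opt step of reversing a stretch of the tour whenever doing so shortens it. This is not the code used in the paper, and all function names are invented for illustration; it simply pairs up the numbers in one of the blocks below and then greedily improves a random tour.

import math
import random

def parse_cities(text):
    # Pair up successive numbers from one of the coordinate blocks below.
    nums = [float(t) for t in text.split()]
    return list(zip(nums[0::2], nums[1::2]))

def tour_length(tour, cities):
    # Total length of the closed tour (last city connects back to the first).
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def segment_reversal(cities):
    # Greedy 2-opt: reverse tour[i+1 .. j] whenever replacing edges
    # (a,b) and (c,d) by (a,c) and (b,d) shortens the tour.
    n = len(cities)
    tour = list(range(n))
    random.shuffle(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = cities[tour[i]], cities[tour[i + 1]]
                c, d = cities[tour[j]], cities[tour[(j + 1) % n]]
                if (math.dist(a, c) + math.dist(b, d)
                        < math.dist(a, b) + math.dist(c, d) - 1e-12):
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                    improved = True
    return tour

# Example use: cities = parse_cities(text_of_30cities_block)
#              t = segment_reversal(cities); print(tour_length(t, cities))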
Raymond Lister Basser Department of Computer Science University of Sydney NSW 2006 AUSTRALIA Internet: ray at cs.su.oz.AU CSNET: ray%cs.su.oz at RELAY.CS.NET UUCP: {uunet,hplabs,pyramid,mcvax,ukc,nttlab}!munnari!cs.su.oz.AU!ray JANET: munnari!cs.su.oz.AU!ray at ukc :::::::::::::: 30cities :::::::::::::: 0.4384 0.6920 0.4232 0.2328 0.7186 0.6939 0.3956 0.1845 0.9529 0.4058 0.6321 0.0704 0.6094 0.2125 0.3693 0.5692 0.3325 0.6035 0.0774 0.8135 0.5412 0.1743 0.3966 0.1180 0.2036 0.4527 0.7645 0.8556 0.5043 0.0289 0.9983 0.0065 0.6888 0.8236 0.6012 0.6401 0.5931 0.6716 0.7744 0.7172 0.3720 0.8944 0.2682 0.4146 0.6187 0.2461 0.6836 0.5692 0.6604 0.5272 0.6240 0.5331 0.4605 0.8192 0.3530 0.4450 0.3808 0.2468 0.8341 0.3587 :::::::::::::: 50cities.1 :::::::::::::: 0.4350 0.8356 0.4504 0.8461 0.4880 0.8283 0.5206 0.9079 0.8438 0.9863 0.9154 0.9904 0.8509 0.8348 0.8650 0.7895 0.9097 0.7217 0.9081 0.6131 0.9606 0.5607 0.9392 0.5594 0.7785 0.5432 0.6971 0.6394 0.6762 0.6239 0.7351 0.5343 0.6936 0.3988 0.7471 0.3539 0.6873 0.2615 0.9646 0.2585 0.8945 0.0733 0.8161 0.1113 0.6992 0.0820 0.6663 0.0174 0.5019 0.0049 0.3867 0.0599 0.5105 0.1261 0.5249 0.4292 0.4732 0.3098 0.4469 0.2965 0.4090 0.2606 0.2520 0.2568 0.2981 0.1185 0.1856 0.1690 0.1212 0.0842 0.0305 0.2598 0.0122 0.2907 0.1973 0.3335 0.1793 0.4409 0.3065 0.4587 0.2727 0.5221 0.3110 0.6040 0.1983 0.5341 0.1168 0.6241 0.1193 0.6312 0.2285 0.7029 0.0758 0.8780 0.1366 0.9645 0.2382 0.8138 0.3470 0.7827 :::::::::::::: 50cities.2 :::::::::::::: 0.4392 0.9303 0.4636 0.9095 0.5587 0.8905 0.5896 0.8895 0.6714 0.9683 0.7009 0.8750 0.7359 0.8052 0.8656 0.9821 0.9885 0.9857 0.8480 0.8279 0.9206 0.7491 0.9227 0.5485 0.6813 0.6180 0.6116 0.5308 0.6287 0.4489 0.7495 0.4912 0.7487 0.4666 0.7553 0.3944 0.7273 0.2440 0.9231 0.2949 0.9739 0.2994 0.9195 0.2663 0.9452 0.0385 0.7966 0.0011 0.5617 0.0057 0.5495 0.2450 0.5049 0.2892 0.3496 0.2911 0.3627 0.2523 0.2893 0.1574 0.2269 0.0761 0.1633 0.1262 0.2573 0.2150 0.2502 0.3518 0.3181 0.3910 0.3904 0.5787 0.4167 0.6261 0.4296 0.7125 0.3492 0.6551 0.2413 0.6781 0.1562 0.5234 0.1057 0.6129 0.0307 0.6446 0.0474 0.9277 0.0499 0.9452 0.1515 0.8886 0.1792 0.9865 0.2848 0.9195 0.2619 0.8507 0.3088 0.8945 :::::::::::::: 50cities.3 :::::::::::::: 0.6112 0.6668 0.5856 0.7524 0.5759 0.7513 0.5434 0.8462 0.5759 0.9397 0.6453 0.9079 0.6843 0.8703 0.7668 0.8568 0.8143 0.8205 0.9806 0.9577 0.9746 0.7323 0.9883 0.6790 0.8011 0.6608 0.8252 0.6370 0.9003 0.4054 0.9032 0.3270 0.9007 0.2350 0.9628 0.1462 0.8175 0.1045 0.7817 0.1159 0.7478 0.1487 0.7049 0.1741 0.6702 0.1326 0.5940 0.0732 0.5198 0.1399 0.5346 0.2750 0.4146 0.2153 0.3946 0.1248 0.2412 0.0503 0.0584 0.0435 0.2849 0.1785 0.2857 0.4148 0.4591 0.5554 0.3606 0.5738 0.3056 0.7498 0.2734 0.6661 0.2525 0.5998 0.1497 0.6408 0.0759 0.5876 0.0263 0.5578 0.1066 0.7005 0.1790 0.7494 0.1471 0.7707 0.0550 0.8575 0.1761 0.9218 0.1731 0.9416 0.2609 0.9506 0.3572 0.8551 0.3911 0.9153 0.4660 0.8662 :::::::::::::: 50cities.4 :::::::::::::: 0.3055 0.3221 0.2637 0.2964 0.2868 0.2642 0.0540 0.1626 0.0252 0.1228 0.0129 0.0509 0.2389 0.0705 0.3103 0.0575 0.3322 0.0449 0.4481 0.0415 0.3293 0.1976 0.3460 0.2591 0.3893 0.2529 0.4708 0.2890 0.6871 0.3897 0.7786 0.4420 0.6620 0.2585 0.7870 0.1888 0.8040 0.1215 0.7347 0.0527 0.7994 0.0364 0.8278 0.0550 0.9797 0.1841 0.9653 0.4571 0.9677 0.6138 0.9496 0.7046 0.8630 0.6697 0.8912 0.6074 0.8107 0.6112 0.7588 0.6069 0.7871 0.7346 0.8768 0.9481 0.5963 0.9092 0.6702 0.7964 0.6152 0.7791 0.5838 0.6052 0.4474 0.6936 0.3191 0.6814 0.4197 0.9496 0.0804 0.9972 
0.1735 0.8953 0.1319 0.7672 0.0612 0.7509 0.0953 0.6800 0.0336 0.6561 0.0083 0.6188 0.0163 0.3977 0.1149 0.5242 0.2502 0.5212 0.2533 0.4084 :::::::::::::: 50cities.5 :::::::::::::: 0.5914 0.6804 0.7154 0.5778 0.9689 0.5379 0.8848 0.6140 0.8827 0.6550 0.9179 0.6539 0.9924 0.9412 0.8401 0.8829 0.7848 0.9271 0.7588 0.9832 0.5520 0.8777 0.5093 0.9542 0.4510 0.9937 0.3311 0.9481 0.3353 0.8654 0.2694 0.8634 0.3326 0.6630 0.3528 0.6490 0.3659 0.5905 0.2826 0.6649 0.2322 0.6742 0.2115 0.7012 0.2020 0.6797 0.0642 0.6555 0.1284 0.5410 0.0197 0.4416 0.0310 0.4211 0.1721 0.0503 0.2314 0.3136 0.1684 0.4706 0.2443 0.4476 0.3990 0.4982 0.4748 0.5165 0.4130 0.4298 0.4720 0.3862 0.4515 0.3005 0.4727 0.2226 0.5642 0.1322 0.5099 0.0289 0.6761 0.0197 0.7533 0.1484 0.7771 0.1843 0.8511 0.1881 0.9306 0.2243 0.9149 0.2319 0.8529 0.2334 0.7672 0.2705 0.6454 0.3365 0.6870 0.4466 0.6339 0.4510 :::::::::::::: 100cities :::::::::::::: 0.1637 0.6152 .0981 .5942 .1722 .5547 .1271 .4577 .0971 .4008 .0839 .3896 .1145 .3781 .1400 .2946 .1588 .2799 .1304 .2560 .0432 .1606 .2639 .1067 .3191 .0594 .3472 .1434 .3428 .2300 .3021 .2828 .2979 .3045 .2772 .3681 .2500 .4306 .2419 .4549 .3066 .4445 .3582 .3820 .3892 .3556 .3954 .4322 .4159 .4635 .4799 .5269 .5657 .4879 .5655 .4756 .4770 .4061 .5198 .4012 .5530 .3584 .5654 .4184 .6066 .4159 .6511 .3986 .6467 .3504 .6255 .3147 .5698 .2520 .4959 .2367 .4767 .1526 .4948 .1247 .5139 .1757 .5406 .1682 .5722 .1188 .7022 .2264 .7502 .2080 .7187 .1879 .8230 .1519 .7900 .0788 .8872 .0367 .9568 .0281 .9792 .1264 .9476 .1717 .9378 .2333 .8028 .2189 .7734 .2448 .6840 .2929 .7442 .3807 .7375 .4091 .7786 .4315 .8730 .4270 .9834 .5354 .8955 .5948 .8665 .6745 .7795 .7110 .7657 .6465 .7584 .5819 .6528 .6042 .5790 .6379 .6550 .6905 .6763 .7326 .7801 .7579 .7671 .7802 .7553 .8609 .8351 .8449 .9315 .8669 .8948 .9781 .8385 .9672 .6140 .9882 .6741 .8094 .6068 .7854 .5531 .7403 .5631 .7156 .5224 .6996 .4461 .7046 .4773 .7997 .4419 .9150 .3469 .9172 .2458 .9450 .2126 .9585 .2378 .9860 .1975 .9898 .0953 .9628 .0358 .9771 .0434 .9560 .1353 .8643 .2002 .8269 .2922 .8722 .3187 .7569 .3087 .5345 .2430 .5895 :::::::::::::: 318cities - from original Lin and Kernighan paper :::::::::::::: 71 63 1402 63 2733 63 71 94 1402 94 2733 94 370 142 1701 142 3032 142 1276 173 2607 173 3938 173 1213 205 2544 205 3875 205 69 213 1400 213 2731 213 69 244 1400 244 2731 244 630 276 1961 276 3292 276 732 283 2063 283 3394 283 69 362 1400 362 2731 362 69 394 1400 394 2731 394 370 449 1701 449 3032 449 1276 480 2607 480 3938 480 1213 512 2544 512 3875 512 157 528 1488 528 2819 528 630 583 1961 583 3292 583 732 591 2063 591 3394 591 654 638 1985 638 3316 638 496 638 1827 638 3158 638 314 638 1645 638 2976 638 142 638 1473 638 2804 638 142 669 1473 669 2804 669 315 677 1646 677 2977 677 496 677 1827 677 3158 677 654 677 1985 677 3316 677 654 709 1985 709 3316 709 496 709 1827 709 3158 709 315 709 1646 709 2977 709 142 701 1473 701 2804 701 220 764 1551 764 2882 764 189 811 1520 811 2851 811 173 843 1504 843 2835 843 370 858 1701 858 3032 858 1276 890 2607 890 3938 890 1213 921 2544 921 3875 921 630 992 1961 992 3292 992 732 1000 2063 1000 3394 1000 1276 1197 2607 1197 3938 1197 1213 1228 2544 1228 3875 1228 205 1276 1536 1276 2867 1276 630 1299 1961 1299 3292 1299 732 1307 2063 1307 3394 1307 654 1362 1985 1362 3316 1362 496 1362 1827 1362 3158 1362 291 1362 1622 1362 2953 1362 654 1425 1985 1425 3316 1425 496 1425 1827 1425 3158 1425 291 1425 1622 1425 2953 1425 173 1417 1504 1417 2835 1417 291 1488 1622 1488 2953 1488 496 1488 1827 
1488 3158 1488 654 1488 1985 1488 3316 1488 654 1551 1985 1551 3316 1551 496 1551 1827 1551 3158 1551 291 1551 1622 1551 2953 1551 291 1614 1622 1614 2953 1614 496 1614 1827 1614 3158 1614 654 1614 1985 1614 3316 1614 189 1732 1520 1732 2851 1732 1276 1811 2607 1811 3938 1811 1213 1843 2544 1843 3875 1843 630 1913 1961 1913 3292 1913 732 1921 2063 1921 3394 1921 370 2087 1701 2087 3032 2087 1276 2118 2607 2118 3938 2118 1213 2150 2544 2150 3875 2150 205 2189 1536 2189 2867 2189 189 2220 1520 2220 2851 2220 630 2220 1961 2220 3292 2220 732 2228 2063 2228 3394 2228 142 2244 1473 2244 2804 2244 315 2276 1646 2276 2977 2276 496 2276 1827 2276 3158 2276 654 2276 1985 2276 3316 2276 654 2315 1985 2315 3316 2315 496 2315 1827 2315 3158 2315 315 2315 1646 2315 2977 2315 142 2331 1473 2331 2804 2331 315 2346 1646 2346 2977 2346 496 2346 1827 2346 3158 2346 654 2346 1985 2346 3316 2346 142 2362 1473 2362 2804 2362 157 2402 1488 2402 2819 2402 220 2402 1551 2402 2882 2402 142 2480 1473 2480 2804 2480 370 2496 1701 2496 3032 2496 1276 2528 2607 2528 3938 2528 1213 2559 2544 2559 3875 2559 630 2630 1961 2630 3292 2630 732 2638 2063 2638 3394 2638 69 2756 1400 2756 2731 2756 69 2787 1400 2787 2731 2787 370 2803 1701 2803 3032 2803 1276 2835 2607 2835 3938 2835 1213 2966 2544 2966 3875 2966 69 2906 1400 2906 2731 2906 69 2937 1400 2937 2731 2937 630 2937 1961 2937 3292 2937 732 2945 2063 2945 3394 2945 1276 3016 2607 3016 3938 3016 69 3055 1400 3055 2731 3055 69 3087 1400 3087 2731 3087 220 606 1551 606 2882 606 370 1165 1701 1165 3032 1165 370 1780 1701 1780 3032 1780 -79 1417 -79 1496 4055 1693 From tenorio at ee.ecn.purdue.edu Thu Feb 22 12:51:04 1990 From: tenorio at ee.ecn.purdue.edu (Manoel Fernando Tenorio) Date: Thu, 22 Feb 90 12:51:04 EST Subject: seminar at Purdue Message-ID: <9002221751.AA07359@ee.ecn.purdue.edu> ------- Forwarded Message From: lhj (Leah Jamieson) Subject: seminar Please pass on to students and/or colleagues who might be interested. ------------------------------------------------------------- "Two Engineering Approaches to Speech Processing: Neural Networks and Analog VLSI" Moise Goldstein, Ph.D. Department of Electrical and Computer Engineering Johns Hopkins University Wednesday, Feb. 28, 1990 12:30 - 1:20 Heavilon Hall, Room 001 (Ground floor, northwest corner) Purdue University ------------------------------------------------------------- ------- End of Forwarded Message From ala at nada.kth.se Fri Feb 23 06:47:27 1990 From: ala at nada.kth.se (Anders Lansner) Date: Fri, 23 Feb 90 12:47:27 +0100 Subject: CRG-TR-90-1 request Message-ID: <9002231147.AAdraken21697@nada.kth.se> Anders Lansner NADA KTH S-100 44 Stockholm SWEDEN From mukesh%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK Fri Feb 23 12:12:49 1990 From: mukesh%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK (Mukesh Patel) Date: Fri, 23 Feb 90 17:12:49 GMT Subject: CRG-TR-90-1 request Message-ID: <22254.9002231712@rsunu.cogs.susx.ac.uk> Could somebody, somewhere please do something about Tech Report Requests that get mailed to *ALL* of us? This is crazy because it is costing a lot of money to somebody/everybody and it needlessly clutters up the system. Maybe a tutorial on "how-to-reply/request tech reports" might help? Mukesh The University of Sussex, Centre for Cognitive and Computing Sciences, Falmer, Brighton BN1 9QH, E Sussex, UK.
Phone: +44 273 606755 x3074 ARPA:mukesh%cogs.sussex.ac.uk at nfsnet-relay.ac.uk JANET:mukesh at uk.ac.sussex.cogs From thomasp at lan.informatik.tu-muenchen.dbp.de Fri Feb 23 14:52:00 1990 From: thomasp at lan.informatik.tu-muenchen.dbp.de (Patrick Thomas) Date: 23 Feb 90 18:52 -0100 Subject: Mathematical Tractability of Neural Nets Message-ID: <9002231852.AA08852@infovax.informatik.tu-muenchen.de> All neural nets which prove to be mathematically tractable (convergence, etc.) seem to be too trivial or too biologically remote to account for cerebral phenomena. It may be nice to prove the approximation capabilities of backprop or some kind of convergent behaviour exhibited by the ART networks. But (except perhaps for the ART-3 architecture?) they don't really deal with the complex interactions at synaptic levels already found by neurophysiologists, and especially the ART networks rely on a similarity measure which may be fundamentally inappropriate. So what's the alternative? Define and refine some rules concerning synaptic interactions (of local AND global kind), think about some rules governing signal integration by neuronal units and then let it run. What do you get by this type of self-organizing net? One thing for sure: mathematical intractability. This is the Edelman way (among others). Which way should be followed by someone interested in brain phenomena rather than in neural nets from an engineering point of view? Is it true that all mathematically tractable neural net approaches are inadequate and that an empirical/experimental stand should be taken? I would be grateful for comments on this. Patrick P.S.: The Bonhoeffer et al. results showing the non-locality of synaptic amplification, at least on the pre-synaptic side, seem to fit nicely with Edelman's "synapses-as-populations" approach. From Terry_Sejnowski at UCSD.EDU Wed Feb 21 12:46:46 1990 From: Terry_Sejnowski at UCSD.EDU (Terry Sejnowski) Date: Wed, 21 Feb 90 09:46:46 PST Subject: Levels Message-ID: <9002211746.AA26305@sdbio2.UCSD.EDU> There are at least three notions of levels that are commonly used to discuss the brain. Marr introduced levels of analysis -- computational, algorithmic, and implementational -- and thought they were independent of each other. In biology there are well defined levels of organization: molecular, synaptic, neuronal, networks, columns, maps, and systems. These can be characterized anatomically according to their spatial scale. Finally, one can distinguish levels of processing, from the sensory periphery toward higher processing centers. Although these centers can be ranked in a hierarchy according to latency, feedback connections allow information to flow in both directions. For a more detailed discussion on these three types of levels, and references, see Churchland and Sejnowski, Perspectives on Cognitive Neuroscience, Science 242, 741-745 (1988). Terry ----- From rr%cstr.edinburgh.ac.uk at NSFnet-Relay.AC.UK Sat Feb 24 10:50:12 1990 From: rr%cstr.edinburgh.ac.uk at NSFnet-Relay.AC.UK (Richard Rohwer) Date: Sat, 24 Feb 90 15:50:12 GMT Subject: organization levels Message-ID: <23907.9002241550@cstr.ed.ac.uk> > From: Alain Grumbach > [...] > I am wondering about the notion of "organization level", > [...] > But has anybody heard about a formal description of it? A serious attempt to mathematically formalize a notion of "level" within a broad formal theory of perception can be found in B. Bennett, D. Hoffman, and C.
Prakash, "Observer Mechanics, A Formal Theory of Perception", Academic Press (1989). See especially Ch. 9. > (formal but understandable !) Copious use of concepts and notation from modern analysis make the reading a bit tedious. But in my opinion, the underlying conceptual structure is novel, plausible, and provocative. I have an ambition to write a less careful but more direct "Readers' Digest condensed version" -- but I won't say when. Richard Rohwer JANET: rr at uk.ac.ed.cstr Centre for Speech Technology Research ARPA: rr%ed.cstr at nsfnet-relay.ac.uk Edinburgh University BITNET: rr at cstr.ed.ac.uk, 80, South Bridge rr%cstr.ed.UKACRL Edinburgh EH1 1HN, Scotland UUCP: ...!{seismo,decvax,ihnp4} !mcvax!ukc!cstr!rr From slehar at bucasb.bu.edu Sat Feb 24 15:06:11 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Sat, 24 Feb 90 15:06:11 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: connectionists@c.cs.cmu.edu's message of 24 Feb 90 06:31:26 GM Message-ID: <9002242006.AA15813@bucasd.bu.edu> I agree with your comment about mathematically tractable neural models. I am currently taking courses on neuropsychology and I am amazed at the depth of knowledge available about brain functionality from the medical point of view. Lesion studies show in great detail how specific brain areas interact to perform various tasks, and neurologists can predict with great accuracy what the effects of different lesions would be in specific locations. This level of understanding must be exploited in our neural models. The problem is that this global level of understanding is very heuristic, and cannot be directly implemented in a neural model. In order to bring together the low level mathematical models and the high level neurological knowledge we must advance both sciences in converging directions. In other words, neurologists must study the microscopic origins of the observed macroscopic phenomena, and neural modelers must design neural models to duplicate specific anatomical structures or behavioral elements. This is the primary thrust of Grossbergs work. Grossberg's neural models are based on neurological findings, and are designed to duplicate behavioral data. This, it seems to me, is the way to bring together the two sciences. From aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK Sun Feb 25 05:56:14 1990 From: aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK (Aaron Sloman) Date: Sun, 25 Feb 90 10:56:14 GMT Subject: Levels Message-ID: <21319.9002251056@csunb.cogs.susx.ac.uk> > From: Terry Sejnowski > There are at least three notions of levels ... > ... Marr introduced levels of analysis -- computatonal, algorithmic, > and implementational ... > ... In biology there are well defined levels of organization > molecular, synaptic, neuronal, networks, columns, maps, and systems... > ... Finally, one can distinguish levels of processing, from the > sensory periphery toward higher processing centers.... Two comments - (A) a critique of Marr and (B) a pointer to another notion of level: A. I think that Marr's analysis is somewhat confused, and would be best replaced by the following: a. What he called the "computational" level should be re-named the "task" level, without any presumption that there is only one such level: tasks form a hierarchy of sub-tasks. This is closely related to what software engineers call "requirements analysis", and has to take into account the nature of the environment, the behaviour that is to be achieved within it, including constraints such as speed. 
In the case of vision, Marr's main concern, requirements analysis would include a description of the relevant properties of light (or the optic array), visible surfaces, forms of visible motion, etc. as well as internal and external requirements of the organism e.g. recognition, generalisation, description, planning, explaining, control of actions, posture control, various kinds of visual reflexes (some trainable), reading, etc. Requirements analysis also includes investigation of trade-offs and priorities. E.g. in some conditions where there's a trade-off between speed and accuracy, getting a quick decision that has a good chance of being right may be more important than guaranteeing perfect accuracy. Internal requirements analysis would include description of other non-visual modules that require input from visual modules (e.g. for posture control, or fine control of movement through feedback loops - processes which don't necessarily require the same kind of visual information as e.g. recognition of objects). So there is not ONE requirement or task defining vision, but a rich multiplicity, which can vary from organism to organism (or machine). b. Then instead of Marr's two remaining levels, "algorithmic" and "implementational" (or physical mechanism), there would be a number of different layers of implementation, for each of which it is possible to distinguish design and implementation. How many layers there are, and which are implemented in software and which in hardware, is an empirical question and might vary from one organism or machine to another. Moreover, because vision is multi-functional there need not be one hierarchy of layers: instead there could be a fairly tangled network of tasks performed by a network of interrelated processes sharing some sub-mechanisms (e.g. retinas). ---------------------------------------------------- B: There's at least one other notion of level, not in Terry's list, that's worth mentioning, though it's related to his three and to levels of task analysis mentioned above. It is familiar to computer scientists, though it may need to be generalised before it can be applied to brains. I refer to the notion of a "virtual machine". For example, a particular programming language refers to a class of entities (e.g. words, strings, numbers, lists, trees, etc) and defines operations on these, that together define a virtual machine. A particular virtual machine can be implemented in a lower level virtual machine via an interpreter or compiler (with interestingly different consequences). The lower level virtual machine (e.g. the virtual machine that defines a VAX architecture) may itself be an abstraction that is implemented in some lower level machine (e.g. hardware, or a mixture of hardware and microcode). Processes in a computer can have many levels of virtual machine each implemented via a compiler or interpreter to a lower level or possibly more than one lower level, e.g. if two sorts of virtual machines are combined to implement a higher level hybrid. Circular organisation is possible if a low-level machine can invoke a sub-routine defined in a high level machine (e.g. for handling errors or interrupts).
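To make the layering concrete, here is a minimal sketch in Python of two virtual machines, one implemented in the other via a compiler. All names here are invented for the example, and it illustrates the general idea rather than any particular system: a tiny stack machine serves as the lower-level virtual machine, and a higher-level expression language exists only as patterns of instructions in it.

# Low-level virtual machine: a stack machine with three instructions.
def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# Higher-level virtual machine: nested tuples like ("add", x, y),
# implemented in the stack machine via a compiler.
def compile_expr(expr):
    if isinstance(expr, (int, float)):
        return [("PUSH", expr)]
    head, left, right = expr
    code = compile_expr(left) + compile_expr(right)
    code.append(("ADD", None) if head == "add" else ("MUL", None))
    return code

# The tuple ("add", 2, ("mul", 3, 4)) is a structure in the high-level
# machine; at the level below it there are only stack instructions.
print(run(compile_expr(("add", 2, ("mul", 3, 4)))))   # prints 14

Nothing in the hardware running this corresponds piecewise to the high-level tuples; they exist only as a pattern at the level below, which is the sense of "distributed" developed in the next paragraph.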
Different layers of virtual machine do not map in any simple way onto physically differentiable structures in the computer: indeed without changing the lowest level physical architecture one can implement very many different higher level virtual machines, though there will be some physical differences in the form of different patterns of bits in the virtual memory, or different patterns of switch states or magnetic molecule states in the physical memory. In this important sense virtual structures in high level virtual machines may be "distributed" over the memory of a conventional computer with no simple mapping from physical components to the virtual structures they represent. (This is especially clear in the case of sparse arrays, databases using inference, default values for slots, etc.) (This is why I think talk about "physical symbol systems" in AI is utterly misleading: most of the interesting symbol systems are virtual structures in virtual machines, not physical structures.) Similarly, I presume different abstract virtual machines can be implemented in neural nets, though the kind of implementation will be different. E.g. it does not seem appropriate to talk about a compiler or interpreter, at least at the lower levels. An example of such an abstract virtual machine implemented in a human brain would be one that can store and execute a long and complex sequence of instructions, such as reciting a poem, doing a dance, or playing a piano sonata from memory. Logical thinking (especially when done by an expert trained logician) would be another example. My expectation is that "connectionist" approaches to intelligence will begin to take off when this branch of AI has a good theory about the kinds of virtual machines that need to be implemented to achieve different sorts of intelligent systems, including a theory of how such virtual machines are layered and how they may be implemented in different kinds of neural networks (perhaps using the levels of organisation described by Terry). Aaron Sloman, School of Cognitive and Computing Sciences, Univ of Sussex, Brighton, BN1 9QH, England EMAIL aarons at cogs.sussex.ac.uk aarons%uk.ac.sussex.cogs at nsfnet-relay.ac.uk aarons%uk.ac.sussex.cogs%nsfnet-relay.ac.uk at relay.cs.net From tds at ai.mit.edu Sun Feb 25 12:16:48 1990 From: tds at ai.mit.edu (Terence D. Sanger) Date: Sun, 25 Feb 90 12:16:48 EST Subject: levels Message-ID: <9002251716.AA03032@globus-pallidus> It seems to me that there are many different ways of describing any phenomenon or algorithm in terms of levels. "Levels of abstraction" is probably only one, which might be interpreted to include both biological hardware levels of organization (receptors, synapses, neurons, Brodmann's areas) and the processing levels which a system goes through in order to interpret its environment (receptors, features, objects, interpretation). As Sejnowski points out, Marr's levels (implementation, algorithm, theory) are an additional type. Different concepts of level might have different theoretical uses, but when it comes to trying to find examples in the hardware of a system (see Aaron Sloman's note), there may not be that many possibilities. I would like to suggest another concept of level that might have some basis in biological hardware. I call it "anatomic levels". The idea is that the lower levels correspond to the local processing units which (due to physical constraints) have access only to a small portion of the total number of inputs and controls.
The higher levels progressively integrate input information from lower levels and coordinate the outputs from lower levels. An example would be segments of a spinal cord performing maximal processing based on the inputs and outputs available at that segment. Propriospinal communication would be the next level up, combining inputs and coordinating motion between a few levels. Sensory association cortex and perhaps supplementary motor cortex might (vaguely) correspond to higher levels which integrate sensory information across modalities and coordinate motor control across all spinal segments. Note that relatively sophisticated processing (according to some measures) might be occurring even within an individual spinal level. "Complete" interpretation of the available input and "optimal" control of the available outputs is theoretically possible, and might involve a good deal of processing at multiple "levels of abstraction". Does this have any relevance for computers? Perhaps in a distributed processing system where nodes do not have access to complete sensory information or control outputs, it would be useful to have an idea of how to integrate sensory information across the entire network and how to generate coordinated control. Terry Sanger MIT E25-534 Cambridge, MA 02139 tds at ai.mit.edu From bates at amos.ucsd.edu Sun Feb 25 15:39:47 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Sun, 25 Feb 90 12:39:47 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002252039.AA16585@amos.ucsd.edu> I must object to the argument that "Neurologists know with great specificity how to predict the effects of lesions...". In fact, the more we know about structural and functional brain imaging, the less clear the localization story becomes. For example, Basso et al. have reviewed CT or MRI records for 1,500 patients, and find that the classical teaching is contradicted (re type of lesion and type of aphasia) in at least 20% of the cases (e.g. fluent aphasias with a frontal lesion; non-fluent aphasias with a posterior lesion, and so forth). Antonio Damasio has a lovely paper that is titled something like "Where is Broca's area?" It turns out that it is not at all obvious where that is!! There are cases of syndromes with very specific behavioral impairments (e.g. the famous Hart, Berndt and Caramazza case who had a particularly severe problem naming fruits and vegetables; see also Warrington's patients). But there is a real mystery in that category-specific literature as well: most (though not all) of the reported cases of very very specific impairments come from patients with very global forms of encephalopathy, e.g. diffuse forms of brain injury. The real facts are that the localization story has been grossly OVERSOLD to the outside world. Insiders (e.g. members of the Academy of Aphasia) know all too well how approximate and often non-specific the relationships are between lesion site and syndrome type. For example, I suspect that many of you believe that there is a sound relationship between lesions to anterior cortex and damage to grammar (e.g. the Broca's-aphasia-as-agrammatism story). But how many know about the 1983 study by Linebarger et al. (followed by many replications, in several different languages) showing that so-called agrammatic Broca's aphasics can make spectacularly good and quite fine-grained judgments of grammaticality? How do you square that finding with the claim that Broca's area is the "grammar box"? 
In our own cross-linguistic research, we have found that Turkish Broca's aphasics look radically different from Serbo-Croatians, who look radically different from Italians, who look radically different from English-speakers, and so on. In studies with a Patient Group by Language Group design (e.g. Broca's and Wernicke's in several different languages) it is invariably the case that Language accounts for 4 - 5 times more variance than aphasia group! You can predict more of the linguistic behavior of a brain-damaged patient by knowing his premorbid language than you can by knowing his lesion site and/or his aphasic classification. These findings can ONLY be explained if we assume that a great deal (if not all) of linguistic knowledge is spared. Aphasics suffer from some poorly-understood problems in accessing and deploying this knowledge (and there are a lot of new proposals on board right now to try and explain the nature of these performance deficits). But the specificity is far less than textbook stories would have you believe. The Good News for Connectionists: the "real" data from patients with focal brain injury are in fact much more compatible with a neural network story (i.e. a story in which representations are broadly distributed and activated by degree) than they are with an old-fashioned 19th century Thing-In-a-Box theory of the brain. -liz bates From slehar at bucasb.bu.edu Sun Feb 25 21:14:41 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Sun, 25 Feb 90 21:14:41 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: Elizabeth Bates's message of Sun, 25 Feb 90 12:39:47 PST <9002252039.AA16585@amos.ucsd.edu> Message-ID: <9002260214.AA29908@bucasd.bu.edu> Thank you for your lengthy reply to my posting. I do not dispute the variability of functional organization between individuals' brains, and I am intrigued by the organizational differences based on language that you pointed out. My point was not that brains are identical enough that pinpointing a lesion can necessarily lead to an accurate prediction of deficits (although admittedly that is what I said). What I meant to say is that the functionality of areas has been identified to a level of detail that would surprise many "neural network" modellers. The fact that the 'task' of speech, for example is functionally divided into the components generally performed by Brocca's area (grammar and articulation), Wernicke's (meaning), angular gyrus (vocabulary), right hemisphere (prosidy), frontal areas (initiation of speech), motor strip (execution of speech) etc. is extremely interesting to the neural modeler, as it gives a clue as to how a parallel speech system can be organized, while leaving open the tantalizing question of the fine level microstructure required for such a system to be actually implemented. It is the specific functionality of each area that has been mapped in such detail, not the physical location of that area in any particular individual. (In other words, if Broccas area is pinpointed in a particular individual, then lesion of that area will produce predictable deficits) In fact the very variability of the actual locations of such areas is equally interesting, and provides further clues as to the underlying mechanisms. 
The fact that a lesion nearby can induce a functional area like Broccas area to 'move over' to an adjacent region really emphasizes the adaptability and variability of the system, and until we duplicate that type of adaptability, we will not have duplicated the functionality either. Thank you for all your references to interesting work -- I will preserve them for future reading. Steve Lehar From bates at amos.ucsd.edu Mon Feb 26 00:11:46 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Sun, 25 Feb 90 21:11:46 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002260511.AA18795@amos.ucsd.edu> But in fact, you still have the facts wrong: In richly-inflected languages, Wernicke's aphasics look just as bad as Broca's in the domain of grammar. The supposed grammar/semantics division is a peculiarity of English. When we first got these findings, I went back to Arnold Pick, the long-ago originator of the term "agrammatism." Pick worked with Czech & German patients -- and guess what? He in fact postulated two forms of agrammatism: non-fluent (anterior) and fluent (posterior). Of these two, he believed that the fluent form was the more interesting, revealing more about the point in processing (dynamically/temporally considered) at which assignment of grammatical forms is made. Yes, you are right, the brain is more than a bowl of oatmeal: there are lines running from the eyes to the occipital lobes, there is such a thing as a motor strip, and so on. And of course these things need to be taken into account by connectionist models. But even if you COULD pinpoint broca's area with precision for any given individual, that would not nail down for you ANY particular linguistic domain. Re right hemisphere language: Gazzaniga claims to have new evidence that the right hemisphere (in split brain folks) can make grammaticality judgments!! Where does that leave you? Broca's and Wernicke's BOTH have semantic problems (e.g. in priming) and BOTH have grammatical problems (as noted above). In short -- you have bought a used car. -liz bates From slehar at bucasb.bu.edu Mon Feb 26 10:25:38 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Mon, 26 Feb 90 10:25:38 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: Elizabeth Bates's message of Sun, 25 Feb 90 21:11:46 PST <9002260511.AA18795@amos.ucsd.edu> Message-ID: <9002261525.AA08045@bucasd.bu.edu> You say: "But even if you COULD pinpoint broca's area with precision for any given individual, that would not nail down for you ANY particular linguistic domain." Do you mean that if, for an English-speaking subject, Brocca's area is identified, located, and ablated, that we could not predict the resulting deficits? I don't know if we are splitting hairs here, I'm sure we both agree that the subject would become "Brocca's aphasic", a well defined syndrome with specific characteristics. True, those characteristics are defined in somewhat fuzzy terms, and even so, our patient is not guaranteed to suffer all the components of the defined syndrome. Indeed, immediately after the ablation the subject would begin to re-organize his functional areas to compensate for the loss, and the resulting mapping will be changing in time and very individualized.
Even in "normals" it is clear that every individual organizes his / her brain in their own fashon, so that the distribution of functionalities is somewhat individualized. I don't dispute any of these facts, and I'm not entirely certain what your criticism is. I suspect that you misunderstand my original contention. I did not mean to say that brain functionality is segmented into predictable and well defined spatial locations such that grammar, for instance, is performed exclusively in the grammar area, and nowhere else is grammer performed. Some functions are performed in more localized areas (including grammar) while other functions are performed in more distributed areas (spatial thinking, higher cognition, ...). These functionalities are flexible and adaptive, and even localized functions like grammar are not fully localized, but have fuzzy and overlapping boundaries, and receive influence from beyond those boundaries as well. My point is, that people who work in the field understand these things. That neurologsts are beginning to understand the fundamental principles of brain organization. The very points that you were making reflect a new insight into the ways of the brain that was hard to find ten years ago. In order to contradict my contention you would have to say "We don't know anything about brain organization, everything is confused." On the contrary, it is clear that we are beginning to get a good grasp of the global principles, even though those principles define a fuzzy and ill defined scheme. My point is that the neuropsychological understanding of the brain is quite good at a global level, where it has difficulties is at the fine grained level. How are the signals propagated within the brain in order to produce the kind of global organization that we observe? This, I say, is the question to be adressed by neural modelers, and my argument was that we should use the findings and insights of neuropsychology to guide the direction of research in neural networks. Stated another way, you yourself would be critical of a neural model that is brittle, inflexible, too clearly defined and localized, because you know that that is not the way it works in the brain. My point is simply that neural modelers should listen to people like you for guidance as to whether they are on the right track. That the science is ready for a coming together of the local mathematical models and the global neuropsychological ones. Surely you don't disagree with that? (O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O) (O)((O))((( slehar at bucasb.bu.edu )))((O))(O) (O)((O))((( Steve Lehar Boston University Boston MA )))((O))(O) (O)((O))((( (617) 424-7035 (H) (617) 353-6425 (W) )))((O))(O) (O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O) From bates at amos.ucsd.edu Mon Feb 26 14:02:18 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Mon, 26 Feb 90 11:02:18 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002261902.AA22328@amos.ucsd.edu> Do I think the brain is cottage cheese, all the same everywhere? No, of course not. And to be sure, there are clearcut differentiations by modality (visual cortex, etc.). But I indeed insist, based on all we now know, that EVEN IF WE COULD PINPOINT BROCA'S AREA (notice the spelling, only on "c") WE COULD NOT NECESSARILY PREDICT THE PATIENT'S BEHAVIOR. that is EXACTLY what the current data suggest. 
For example, there are age-related changes that occur WITHIN individuals, as follows: up to some time between 7 and 12 years of age (no one knows the cutoff), anterior and posterior lesions both result in a non-fluent aphasia; then things stabilize into the usual correlation between lesion site and aphasia type (a loose correlation at that); then again, some time after 50 (no one knows the cutoff), the pattern changes again, with the probability of a FLUENT aphasia going up even with an anterior lesion. As for the other issues: in fact there is no real evidence (not anymore...) linking grammar with a particular region within the left hemisphere. Grammatical errors occur in fluent and nonfluent patients, with lesions all over the left half of the brain. Your belief in the right hemisphere's role in prosody (note the spelling) is also an oversimplification. It isn't clear at all whether the current prosody results are more than a by-product of a right hemisphere bias for certain kinds of emotional signals. In short, the whole story is and remains MUCH less differentiated than you have been taught to believe. Is there specialization of some sort? Yes, of course, but we are so far off from mapping it out for language that it is a poor time to recommend that connectionists pied-pipe after neurologists (by the way, the people doing the best experimental work on aphasia tend not to be neurologists anyway; they are usually experimental psychologists working with some neurologist nearby to read the CT scans....). -liz bates From pa1490%sdcc13 at ucsd.edu Mon Feb 26 17:42:00 1990 From: pa1490%sdcc13 at ucsd.edu (Dave Scotese) Date: Mon, 26 Feb 90 14:42:00 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002262242.AA04062@sdcc13.UCSD.EDU> I am not very well versed in the whole idea of tractability or whatnot or even neural nets themselves. In my humble and perhaps erroneous model of what we are discussing, it seems that any simulation of cerebral activity would necessarily avoid convergence (= tractability?). This comes from my idea that if the cerebral activity in a human did converge, he would have stopped thinking. While this might be the goal of the devout follower of eastern philosophy, I think it is impossible. Sorry if my insight reflects the ramblings of a misguided simpleton, really, as I feel really uneducated when I read most of the stuff here. -Dave Scotese *%) From mesard at BBN.COM Mon Feb 26 22:34:46 1990 From: mesard at BBN.COM (mesard@BBN.COM) Date: Mon, 26 Feb 90 22:34:46 -0500 Subject: Convergence (was Re: Mathematical Tractability of Neural Nets) In-Reply-To: Dave Scotese's message dated Mon, 26 Feb 90 14:42:00 PST Message-ID: > This comes from my idea that if the cerebral activity > in a human did converge, he would have stopped thinking. While this > might be the goal of the devout follower of eastern philosophy, I > think it is impossible. There are two assumptions made in the case of artificial neural nets [or a large class of them anyway] that don't generally hold for a brain: 1) The set of input patterns is finite. 2) There is no random neural activity (aside from a random initial state). Remove these assumptions, and an ANN can quite easily be made to never converge. Impose these assumptions on a brain, and it would very likely stop thinking. (In fact, even one assumption may be enough to produce a sort of convergence. Consider the effects of solitary confinement, etc.)
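The force of assumption (2) is easy to see in a small simulation. Here is a minimal sketch in Python (not anyone's published model; all details are invented for illustration): a symmetric Hopfield-style net with deterministic asynchronous updates, which is guaranteed to settle to a fixed point, is compared with the same net when random activity is injected into the units.

import random

random.seed(0)
N = 8
# Symmetric weights with zero diagonal guarantee convergence
# under deterministic asynchronous updates (Hopfield, 1982).
W = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        W[i][j] = W[j][i] = random.uniform(-1, 1)

def step(s, noise=0.0):
    i = random.randrange(N)                  # asynchronous update
    h = sum(W[i][j] * s[j] for j in range(N))
    h += random.uniform(-noise, noise)       # violates assumption (2)
    s[i] = 1 if h >= 0 else -1

for noise in (0.0, 2.0):
    s = [random.choice((-1, 1)) for _ in range(N)]
    flips = 0
    for t in range(5000):
        old = list(s)
        step(s, noise)
        if s != old and t > 2500:
            flips += 1    # state changes in the second half of the run
    print("noise =", noise, "-> late flips =", flips)

# Typically prints 0 late flips with noise = 0.0 (the net has settled)
# and a large count with noise = 2.0 (it never converges).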
-- void Wayne_Mesard(); Mesard at BBN.COM Bolt Beranek and Newman, Cambridge, MA From slehar at bucasb.bu.edu Tue Feb 27 12:47:37 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Tue, 27 Feb 90 12:47:37 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: Elizabeth Bates's message of Mon, 26 Feb 90 11:02:18 PST <9002261902.AA22328@amos.ucsd.edu> Message-ID: <9002271747.AA26951@bucasd.bu.edu> On the subject of the transfer of knowledge from neurobiology to neural science you write: -> we are so far off from mapping it out for language that it is a poor -> time to recommend that connectionists pied-pipe after neurologists... Not only is it the right time, but whether you like it or not, it is already happening! Significant advances have already been made in the use of neurobiological and psychophysical data in models of vision, cognition, motor control and speech. (O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O) (O)((O))((( slehar at bucasb.bu.edu )))((O))(O) (O)((O))((( Steve Lehar Boston University Boston MA )))((O))(O) (O)((O))((( (617) 424-7035 (H) (617) 353-6425 (W) )))((O))(O) (O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O) From bates at amos.ucsd.edu Tue Feb 27 13:07:42 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Tue, 27 Feb 90 10:07:42 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002271807.AA05458@amos.ucsd.edu> I think you need to distinguish between neuroscience in general (where significant progress is being made in many areas), and the particular area of neurology, with particular reference to language. I think that someday, in retrospect, we will see that progress WAS made in the neurology of language during this period in our history, but much of that progress will prove to be the debunking of classic disconnection and localization theories. Witness, for example, the stunning papers by Posner, Pedersen, Fox, Raichle, etc. on metabolic activity during language use -- fascinating, but only marginally compatible with anything that we previously believed. Looking at a PET scan or an ERP study of "live" language use, one can only be impressed with HOW MUCH of the brain is very active during language use -- which, of course, fits with other anomalous findings that have been around but largely ignored (e.g. Ojemann's findings on the many many different points in the brain that can interrupt language processing when an electric stimulus is applied during cortical mapping prior to surgery). We are, without question, in a period of transition and serious rethinking. For example, Geoff Hinton and Tim Shallice (a former believer in old-fashioned localization) have been carrying out simulations in which a neural network is trained up on some language task and then "lesioned". Some very specific but totally unexpected "syndromes" fall out of randomly placed or indeed randomly distributed damage to the net. Specific syndromes can be a "local minimum", a fact about the mathematics of a distributed network rather than a result (of the typical sort) induced by "subtracting" some local and highly specific piece-of-the-machine. When you were trying to recommend "findings" by "neurologists" that connectionists should follow, you stressed some classic claims about Grammar (Broca's area), semantics (Wernicke's area), frontal lobs (that's lobes -- speech initiation), in short the Geschwind view that was so popular through the 1970's. 
That is the view that I am objecting to now, not the more general and indeed very fruitful union between neuroscience and computation. One cannot compare our knowledge of the visual system (which is extensive) with our knowledge of how the brain is organized for language (which is, right now, totally up for grabs). -liz bates From turk%picadilly.media.mit.edu at media-lab.media.mit.edu Tue Feb 27 13:57:40 1990 From: turk%picadilly.media.mit.edu at media-lab.media.mit.edu (Matthew Turk) Date: Tue, 27 Feb 90 13:57:40 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: slehar@bucasb.bu.edu's message of Tue, 27 Feb 90 12:47:37 EST <9002271747.AA26951@bucasd.bu.edu> Message-ID: <9002271857.AA01089@picadilly.media.mit.edu> > > Not only is it the right time, but whether you like it or not, it is > already happening! Significant advances have already been made in the > use of neurobiological and psychophysical data in models of vision, > cognition, motor control and speech. > > (O)((O))((( slehar at bucasb.bu.edu )))((O))(O) I hate to be a naysayer, but this sounds a bit too optimistic to me. I think the point was that neurobiologists don't know as much about the workings of the brain as connectionists often think (or hope, or tell others) they do -- the example given was language areas. I think we should be conservative in our claims, in any scientific endeavor, as to "significant advances". Perhaps this is a good forum to discuss in-house just what we think are currently the advances and gaps in connectionist models of vision, cognition, motor control, and speech. Since this is basically a "closed" group, we can afford to honestly point out shortcomings, and not just hype the field. Matthew Turk MIT Media Lab turk at media-lab.media.mit.edu 20 Ames St., E15-414 uunet!mit-amt!turk Cambridge, MA 02139 (617)253-0381 From jbower at smaug.cns.caltech.edu Tue Feb 27 19:05:30 1990 From: jbower at smaug.cns.caltech.edu (Jim Bower) Date: Tue, 27 Feb 90 16:05:30 PST Subject: Biology Message-ID: <9002280005.AA24297@smaug.cns.caltech.edu> The current exchange concerning the neural basis of language processing reflects a general and perhaps growing tension in this field between what is really known about biology, what is claimed to be known about biology (often in this field by those that don't actually do biology but synthesize selected biological facts), and the interests of engineers using the nervous system as a source of ideas for neural network implementations. A few comments seem appropriate: First, there is absolutely no question that our real understanding of how the nervous system works is extremely rudimentary. This is as true at the cognitive level as it is at the level of the neurobiological details. If someone states otherwise they are probably selling something. Second, understanding what is and is not known about biology requires a considerable commitment to the study of biology itself. Summary articles and general lectures at neural network conferences are not enough to develop the intuition necessary to interpret neurobiological data. This is especially true in the case of lesion and psychophysical data which are in any event problematically related to the actual structure of the brain. 
Third, while neurobiologists have collected and are continuing to collect massive amounts of structural information about the nervous system, our ignorance is such that it is very difficult to even know where to begin in relating the abstract imaginings of neurologists, cognitive psychologists or connectionists to neural structure. Yet it is the firm belief of some of us that the structure of the brain itself must guide these more abstract musings. Only hard work and cross-training will allow this correspondence to be made. Non-biologists should also keep in mind that the lack of formalism in biology is not related exclusively to the inclinations of biologists. It is also the case that we are studying the most complicated structures known anywhere. Physicists are still debating how to formally characterize the behavior of dripping faucets. With respect to the ongoing discussion of levels, for example, it is not at all clear that feedforward networks of the connectionist type are even a particularly appropriate metaphor for thinking about levels within the brain. This is especially true if a hierarchical organization is also implied. Specifically, the usual description of a sensory to motor path within the brain, with "lower levels" of local sensory processing units feeding "higher integrating levels" that in turn coordinate motor response, is certainly a vast oversimplification and quite possibly conceptually wrong. In the case of visual processing, the often mentioned but still poorly understood fact that there are 10 to 100 times more connections from the visual cortex to the geniculate than vice versa at least obscures any simple causal processing hierarchy. Further, the sensory to motor, lower to higher to effector view of the brain would seem to completely break down when one realizes that, under normal operating conditions (i.e., monkeys not in chairs looking at television screens), an animal itself controls the way it seeks data. This sensory acquisition process almost certainly reflects a complex and evolving understanding of the object being explored. Figuring out how the deepest levels of the brain control sensory acquisition and thus the neural flow of information through direct neural and indirect behavioral loops is likely to be an essential part of understanding how brains operate in the world. Clearly, our understanding of how the nervous system works will not only benefit from, but will be dependent on the fusion of computational and neurobiological research. However, any attempt to fake a fusion by smoothing over the facts, and proceeding at full pace without concern for the structural details of the nervous system itself, is likely to do more harm than good. Jim Bower Div. of Biology Computational Neural Systems Program Caltech From HORN%TAUNIVM.BITNET at VMA.CC.CMU.EDU Wed Feb 28 17:03:30 1990 From: HORN%TAUNIVM.BITNET at VMA.CC.CMU.EDU (David Horn) Date: Wed, 28 Feb 90 17:03:30 IST Subject: Convergence Message-ID: In-reply-to: Dave Scotese and Wayne Mesard We have demonstrated how a convergent neural network of the Hopfield type can turn into a system which displays an open-ended motion in pattern-space (the space of all its input memories). Its dynamical motion converges on a short-time scale, moving in the direction of an attractor, but escapes it leading to a non-convergent motion on a long time scale. Adding pointers connecting different memories we obtain a process which has some resemblance to associative thinking. 
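A rough sketch of this kind of itinerant dynamics (a toy model of my own with assumed parameters, not the published equations; the actual threshold mechanism is described below): store a few patterns in a Hopfield net and let each unit's threshold fatigue with its firing history, so that every attractor is eventually destabilized and the state keeps moving through pattern-space instead of settling.

import numpy as np

# Toy model only: theta_i tracks each unit's recent firing (a fatigue term).
# Once b/(1-lam) exceeds the attractor's field strength, no state can remain
# a fixed point, so the former attractors become transients.
rng = np.random.default_rng(1)
n, p = 100, 3
xi = rng.choice([-1.0, 1.0], size=(p, n))        # stored memory patterns
W = (xi.T @ xi) / n                              # Hebbian weights
np.fill_diagonal(W, 0.0)

s = xi[0].copy()                                 # start inside the first memory
theta = np.zeros(n)
lam, b = 0.9, 0.2                                # threshold decay and growth

for t in range(301):
    theta = lam * theta + b * s                  # threshold follows firing history
    s = np.where(W @ s - theta >= 0.0, 1.0, -1.0)
    if t % 30 == 0:
        print(t, np.round(xi @ s / n, 2))        # overlaps with the stored patterns

The printed overlaps converge toward a memory on the short time scale and then move away from it again; the pointer connections mentioned above are what steer such transitions toward associatively related memories.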
It is interesting to note that such a non-convergent behavior does not necessitate random neural activity. The way we made it work was by introducing dynamical thresholds as new degrees of freedom. The threshold is changed as a function of the firing history of the neuron to which it belongs (e.g. mimicking fatigue). This can lead to the destabilization of the attractors of the neural network, turning them into transients of its motion. References: D. Horn and M. Usher, Neural Networks with Dynamical Thresholds, Phys. Rev. A 40 (1989) 1036-1044; Motion in the Space of Memory Patterns, IJCNN (Washington meeting June 1989) I-61-66; Excitatory-Inhibitory Networks with Dynamical Thresholds, preprint. From pjh%compsci.stirling.ac.uk at NSFnet-Relay.AC.UK Wed Feb 28 04:54:59 1990 From: pjh%compsci.stirling.ac.uk at NSFnet-Relay.AC.UK (Peter J.B. Hancock) Date: 28 Feb 90 09:54:59 GMT (Wed) Subject: No subject Message-ID: <9002280954.AA05738@uk.ac.stir.cs.lira> I've been following your comments on localisation (or lack of it) of language with interest. I'd be further interested to know what you think of dissociation findings such as that reported by McCarthy and Warrington (Nature 334, 428-430). They have a patient who is apparently unable to identify animals from their spoken names, but quite able if presented with a picture. There are many other such dissociations. Do you think they are of general applicability, or is language processing so variable that they should be treated as one-offs? Peter Hancock From bates at amos.ucsd.edu Wed Feb 28 12:42:58 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Wed, 28 Feb 90 09:42:58 PST Subject: Biology Message-ID: <9002281742.AA14794@amos.ucsd.edu> Hear-Hear!! I fully (and humbly) concur with Bower's well-crafted statement, and happily leave this debate with that statement hopefully functioning as the final word. -liz bates From schmidhu at tumult.informatik.tu-muenchen.de Wed Feb 28 10:37:59 1990 From: schmidhu at tumult.informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Wed, 28 Feb 90 16:37:59 +0100 Subject: FKI-REPORTS AVAILABLE Message-ID: <9002281537.AA12412@kiss.informatik.tu-muenchen.de> Three reports on three quite different on-line algorithms for recurrent neural networks with external feedback (through a non-stationary environment) are available. A LOCAL LEARNING ALGORITHM FOR DYNAMIC FEEDFORWARD AND RECURRENT NETWORKS Juergen Schmidhuber FKI-Report 90-124 Most known learning algorithms for dynamic neural networks in non-stationary environments need global computations to perform credit assignment. These algorithms are either not local in time or not local in space. Those algorithms which are local in both time and space usually cannot deal sensibly with `hidden units'. In contrast, as far as we can judge by now, learning rules in biological systems with many `hidden units' are local in both space and time. In this paper we propose a parallel on-line learning algorithm which performs local computations only, yet still is designed to deal with hidden units and with units whose past activations are `hidden in time'. The approach is inspired by Holland's idea of the bucket brigade for classifier systems, which is transformed to run on a neural network with fixed topology. The result is a feedforward or recurrent `neural' dissipative system which is consuming `weight-substance' and permanently trying to distribute this substance onto its connections in an appropriate way. 
Experiments demonstrating the feasibility of the algorithm are reported. NETWORKS ADJUSTING NETWORKS Juergen Schmidhuber FKI-Report 90-125 An approach to spatiotemporal credit assignment in recurrent reinforcement learning networks is presented. The algorithm may be viewed as an application of Sutton's `Temporal Difference Methods' to the temporal evolution of recurrent networks. State transitions in a completely recurrent network are observed by a second non-recurrent adaptive network which receives as input the complete activation vectors of the recurrent one. Differences between successive state evaluations made by the second network provide update information for the recurrent network. In a reinforcement learning system an adaptive critic (like the one used in Barto, Sutton and Anderson's AHC algorithm) controls the temporal evolution of a recurrent network in a changing environment. This is done by letting the critic learn learning rates for a Hebb-like rule used to associate or disassociate successive states in the recurrent network. Only computations local in space and time take place. With a linear critic this scheme can be applied to tasks without linear solutions. It was successfully tested on a delayed XOR-problem, and a complicated pole balancing task with asymmetrically scaled inputs. We finally consider how in a changing environment a recurrent dynamic supervised learning critic can interact with a recurrent dynamic reinforcement learning network in order to improve its performance. MAKING THE WORLD DIFFERENTIABLE: ON USING SUPERVISED LEARNING FULLY RECURRENT NEURAL NETWORKS FOR DYNAMIC REINFORCEMENT LEARNING AND PLANNING IN NON-STATIONARY ENVIRONMENTS. Juergen Schmidhuber FKI-Report 90-126 First a brief introduction to supervised and reinforcement learning with recurrent networks in non-stationary environments is given. The introduction also covers the basic principle of SYSTEM IDENTIFICATION as employed by Munro, Robinson and Fallside, Werbos, Jordan, and Widrow. This principle allows one to employ supervised learning techniques for reinforcement learning. Then a very general on-line algorithm for a reinforcement learning neural network with internal and external feedback in a non-stationary reactive environment is described. Internal feedback is given by connections that allow cyclic activation flow through the network. External feedback is given by output actions that may change the state of the environment thus influencing subsequent input activations. The network's main goal is to receive as much reinforcement (or as little `pain') as possible. Arbitrary time lags between actions and later consequences are possible. Although the approach is based on `supervised' learning algorithms for fully recurrent dynamic networks, no teacher is required. An adaptive model of the environmental dynamics is constructed which includes a model of future reinforcement to be received. This model is used for learning goal-directed behavior. For reasons of efficiency the on-line algorithm CONCURRENTLY learns the model and learns to pursue the main goal. The algorithm is applied to the most difficult pole balancing problem ever given to any neural network. A connection to `META-learning' (learning how to learn) is noted. The possibility of using the model for learning by `mental simulation' of the environmental dynamics is investigated. The approach is compared to approaches based on Sutton's methods of temporal differences and Werbos' heuristic dynamic programming. 
Finally it is described how the algorithm can be augmented by dynamic CURIOSITY and BOREDOM. This can be done by introducing (delayed) reinforcement for controller actions that increase the model network's knowledge about the world. This in turn requires the model network to model its own ignorance. Please direct requests to schmidhu at lan.informatik.tu-muenchen.dbp.de Only if this does not work for some reason, try schmidhu at tumult.informatik.tu-muenchen.de Leave nothing but your physical address (subject: FKI-Reports). DO NOT USE `REPLY'. Of course, those who asked for copies at IJCNN in Washington will receive them without any further requests. From crg-tech-reports at cs.toronto.edu Wed Feb 28 15:03:31 1990 From: crg-tech-reports at cs.toronto.edu (crg-tech-reports@cs.toronto.edu) Date: Wed, 28 Feb 90 15:03:31 EST Subject: U of Toronto CRG-TR-90-3 announcement Message-ID: <90Feb28.150343est.10568@ephemeral.ai.toronto.edu> DO NOT FORWARD TO OTHER NEWSGROUPS OR MAILING LISTS *************************************************** The following technical report is now available. If you'd like a copy please send me your real mail address (omitting all other information from your message). Also, do not reply to the entire mailing list. ------------------------------------------------------------------------------- EXPERIMENTS ON DISCOVERING HIGH ORDER FEATURES WITH MEAN FIELD MODULES Conrad C. Galland & Geoffrey E. Hinton Department of Computer Science University of Toronto Toronto, Canada M5S 1A4 CRG-TR-90-3 A new form of the deterministic Boltzmann machine (DBM) learning procedure is presented which can efficiently train network modules to discriminate between input vectors according to some criterion. The new technique directly utilizes the free energy of these "mean field modules" to represent the probability that the criterion is met, the free energy being readily manipulated by the learning procedure. Although conventional deterministic Boltzmann learning fails to extract the higher order feature of shift at a network bottleneck, combining the new mean field modules with the mutual information objective function rapidly produces modules that perfectly extract this important higher order feature without direct external supervision. ------------------------------------------------------------------------------- 
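For readers who want to see the quantity being manipulated, here is a small sketch (my own notation and parameters, not the report's code) that settles a module to its mean-field fixed point p_i = sigma(sum_j w_ij p_j + b_i) and evaluates the standard mean-field free energy F = -sum_{i<j} w_ij p_i p_j - sum_i b_i p_i + sum_i [p_i ln p_i + (1-p_i) ln(1-p_i)]:

import numpy as np

# Hypothetical "mean field module": symmetric weights W, biases b, temperature 1.
rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle(W, b, steps=200):
    # Damped mean-field iteration toward p_i = sigmoid(sum_j W_ij p_j + b_i).
    p = np.full(len(b), 0.5)
    for _ in range(steps):
        p = 0.5 * p + 0.5 * sigmoid(W @ p + b)
    return p

def free_energy(W, b, p, eps=1e-12):
    # Mean-field energy minus entropy; the 0.5 corrects for counting each i<j pair twice.
    energy = -0.5 * p @ W @ p - b @ p
    entropy = -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps)).sum()
    return energy - entropy

n = 8
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.5, size=n)
p = settle(W, b)
print("settled free energy:", free_energy(W, b, p))

Because F is a smooth function of the weights, a settled module's free energy can itself be treated as a trainable output, which is the property the report exploits.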
From inesc!lba%alf at relay.EU.net Mon Feb 5 13:46:27 1990 From: inesc!lba%alf at relay.EU.net (Luis Borges de Almeida) Date: Mon, 5 Feb 90 13:46:27 EST Subject: Proceedings book available Message-ID: <9002051346.AA01489@alf.inesc.pt> The proceedings volume of the EURASIP Workshop on Neural Networks (Sesimbra, Portugal, 15-17 Feb. 1990) is already available from Springer-Verlag. It has been published in their Lecture Notes in Computer Science series, and the complete reference is: Neural Networks EURASIP Workshop 1990 Sesimbra, Portugal, February 1990 Proceedings L. B. Almeida and C. J. Wellekens (Eds.) Springer-Verlag, 1990 The volume contains two invited papers, by Eric Baum and George Cybenko, and the full contributions to the workshop, which were evaluated by an international technical committee, resulting in the acceptance of only 40% of the submissions. Below is the table of contents. Have a good reading! Luis B. Almeida INESC Phone: +351-1-544607 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.inesc.pt (from Europe) lba%inesc.inesc.pt at uunet.uu.net (from outside Europe) lba at inesc.uucp (if you have access to uucp) --------------------------------------------------------------------- TABLE OF CONTENTS PART I - Invited Papers When Are k-Nearest Neighbor and Back Propagation Accurate for Feasible Sized Sets of Examples? E.B. Baum Complexity Theory of Neural Networks and Classification Problems G. Cybenko PART II - Theory, Algorithms Generalization Performance of Overtrained Back-Propagation Networks Y. Chauvin Stability of the Random Neural Network Model E. Gelenbe Temporal Pattern Recognition Using EBPS M. Gori, G. Soda Markovian Spatial Properties of a Random Field Describing a Stochastic Neural Network: Sequential or Parallel Implementation? T. Herve, O. Francois, J. Demongeot Chaos in Neural Networks S. Renals The "Moving Targets" Training Algorithm R. Rohwer Acceleration Techniques for the Backpropagation Algorithm F.M. Silva, L.B. Almeida Rule-Injection Hints as a Means of Improving Network Performance and Learning Time S.C. Suddarth, Y.L. Kergosien Inversion in Time S. Thrun, A. Linden Cellular Neural Networks: Dynamic Properties and Adaptive Learning Algorithm L. Vandenberghe, S. Tan, J. Vandewalle Improved Simulated Annealing, Boltzmann Machine, and Attributed Graph Matching L. Xu, E. Oja PART III - Speech Processing Artificial Dendritic Learning T. Bell A Neural-Net Model of Human Short-Term Memory Development G.D.A. Brown Large Vocabulary Speech Recognition Using Neural-Fuzzy and Concept Networks N. Hataoka, A. Amano, T. Aritsuka, A. Ichikawa Speech Feature Extraction Using Neural Networks M. Niranjan, F. Fallside Neural Network Based Continuous Speech Recognition by Combining Self Organizing Feature Maps and Hidden Markov Modeling G. Rigoll PART IV - Image Processing Ultra-Small Implementation of a Neural Halftoning Technique T. Bernard, P. Garda, F. Devos, B. Zavidovique Application of Self-Organizing Networks to Signal Processing J. Kennedy, P. Morasso A Study of Neural Network Applications to Signal Processing S. Kollias PART V - Implementation Simulation Machine and Integrated Implementation of Neural Networks: a Review of Methods, Problems and Realizations C. Jutten, A. Guerin, J. Herault VLSI Implementation of an Associative Memory Based on Distributed Storage of Information U. Rueckert 
From ersoy at ee.ecn.purdue.edu Mon Feb 5 10:22:56 1990 From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy) Date: Mon, 5 Feb 90 10:22:56 -0500 Subject: No subject Message-ID: <9002051522.AA11423@ee.ecn.purdue.edu> CALL FOR PAPERS AND REFEREES HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24 NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES KAILUA-KONA, HAWAII - JANUARY 9-11, 1991 The Neural Networks Track of HICSS-24 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications. Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings which is published by the Computer Society of the IEEE. HICSS-24 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM). Submissions are solicited in: Supervised and Unsupervised Learning Issues of Complexity and Scaling Associative Memory Self-Organization Architectures Optical, Electronic and Other Novel Implementations Optimization Signal/Image Processing and Understanding Novel Applications INSTRUCTIONS FOR SUBMITTING PAPERS Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s) and a 300-word abstract of the paper. 
DEADLINES A 300-word optional abstract may be submitted by April 30, 1990 by e-mail or mail. Feedback to author concerning abstract will be given by May 31, 1990. Six copies of the manuscript are due by June 25, 1990. Notification of accepted papers by September 1, 1990. Accepted manuscripts, camera-ready, are due by October 3, 1990. SEND SUBMISSIONS AND QUESTIONS TO O. K. Ersoy Purdue University School of Electrical Engineering W. Lafayette, IN 47907 (317) 494-6162 From aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK Sun Feb 4 14:11:17 1990 From: aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK (Aaron Sloman) Date: Sun, 4 Feb 90 19:11:17 GMT Subject: Turing 1990 Colloquium, 3-6 April 1990, Sussex University Message-ID: <18538.9002041911@csunb.cogs.susx.ac.uk> I have been asked to circulate information about this conference. NB - please do NOT use "reply". Email enquiries should go to turing at uk.ac.sussex.syma ----------------------------------------------------------------------- TURING 1990 COLLOQUIUM At the University of Sussex, Brighton, England 3rd - 6th April 1990 This Conference commemorates the 40th anniversary of the publication in Mind of Alan Turing's influential paper "Computing Machinery and Intelligence". It is hosted by the School of Cognitive and Computing Sciences at the University of Sussex and held under the auspices of the Mind Association. Additional support has been received from the Analysis Committee, the Aristotelian Society, The British Logic Colloquium, The International Union of History and Philosophy of Science, POPLOG, Philosophical Quarterly, and the SERC Logic for IT Initiative. The aim of the Conference is to draw together people working in Philosophy, Logic, Computer Science, Artificial Intelligence, Cognitive Science and related fields, in order to celebrate the intellectual and technological developments which owe so much to Turing's seminal thought. Papers will be presented on the following themes: Alan Turing and the emergence of Artificial Intelligence, Logic and the Theory of Computation, The Church-Turing Thesis, The Turing Test, Connectionism, Mind and Content, Philosophy and Methodology of Artificial Intelligence and Cognitive Science. Invited talks will be given by Paul Churchland, Joseph Ford, Robin Gandy, Clark Glymour, Douglas Hofstadter, J.R. Lucas, Donald Michie, Christopher Peacocke and Herbert Simon, while other prominent contributors include Robert French (Indiana), Beatrice de Gelder (Tilburg), Andrew Hodges (Oxford), Philip Pettit (ANU) and Aaron Sloman (Sussex). Anyone wishing to attend this Conference should complete the enclosed form and send it to Andy Clark, TURING Registrations, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, England, U.K., enclosing a STERLING cheque or money order for the total amount payable, made out to "Turing 1990". We regret that we cannot accept payment in other currencies. The form should be returned not later than Thursday 1st March, 1990, after which an extra fee of #5.00 for late registration is payable and accommodation cannot be guaranteed. The conference will start at lunchtime on Tuesday 3rd April, 1990, and will end on Friday 6th April after tea. Final details will be sent to registered participants in February 1990. 
Conference Organizing Committee Andy Clark (Sussex University), David Holdcroft (Leeds University), Peter Millican (Leeds University), Steve Torrance (Middlesex Polytechnic) ___________________________________________________________________________ PROGRAMME OF INVITED SPEAKERS Paul CHURCHLAND (UCSD) Title to be announced Joseph FORD (Georgia) CHAOS : ITS PAST, ITS PRESENT, BUT MOSTLY ITS FUTURE Robin GANDY (Oxford) HUMAN VERSUS MECHANICAL INTELLIGENCE Clark GLYMOUR (Carnegie-Mellon) COMPUTABILITY, CONCEPTUAL REVOLUTIONS AND THE LOGIC OF DISCOVERY Douglas HOFSTADTER (Indiana) Title to be announced J.R. LUCAS (Oxford) MINDS, MACHINES AND GODEL : A RETROSPECT Donald MICHIE (Turing Institute) MACHINE INTELLIGENCE - TURING AND AFTER Christopher PEACOCKE (Oxford) PHILOSOPHICAL AND PSYCHOLOGICAL THEORIES OF CONCEPTS Herbert SIMON (Carnegie-Mellon) MACHINE AS MIND ____________________________________________________________________________ REGISTRATION DOCUMENT : TURING 1990 NAME AND TITLE : __________________________________________________________ INSTITUTION : _____________________________________________________________ STATUS : ________________________________________________________________ ADDRESS : ________________________________________________________________ ________________________________________________________________ POSTCODE : _________________ COUNTRY : ____________________________ Any special requirements (e.g. diet, disability) : _________________________ I wish to register for the Turing 1990 Colloquium and enclose a Sterling cheque or money order, payable to "Turing 1990", for the total amount listed below : Please ENTER AMOUNTS as appropriate. 1. Registration Fee: Mind Association Members #30.00 .............. (Compulsory) Full-time students #30.00 .............. (enclose proof of status - e.g. letter from tutor) Academics (including retired academics) #50.00 .............. Non-Academics #80.00 .............. Late Registration Fee #5.00 .............. (payable after 1st March) 2. Full Board including all meals from Dinner #84.00 .............. on Tuesday 3rd April to Lunch on Friday 6th April, except for Thursday evening OR All meals from Dinner on Tuesday 3rd April #33.00 .............. to Lunch on Friday 6th April, except for Thursday evening 3. Conference banquet in the Royal Pavilion, #25.00 .............. Brighton on Thursday 5th April OR Dinner in the University on Thursday 5th April #6.00 .............. 4. Lunch on Tuesday 3rd April #6.00 .............. 5. Dinner on Friday 6th April #6.00 .............. ______________ TOTAL # ______________ Signed ________________________________ Date ______________________ Please return this form, with your cheque or money order (payable to "Turing 1990"), to: Dr Andy Clark Turing 90 Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton, BN1 9QH, England. ____________________________________________________________________________ From Connectionists-Request at CS.CMU.EDU Mon Feb 5 12:33:09 1990 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Mon, 05 Feb 90 12:33:09 EST Subject: Too much junk mail Message-ID: <6199.634239189@B.GP.CS.CMU.EDU> We have been getting too much junk mail sent to the entire list. Some of our overseas subscribers pay hard cash for every message they receive; let's keep the noise level to a minimum. For administrative matters please use: Connectionists-Request at CS.CMU.EDU (note the exact spelling!) 
and NOT: Connectionists at CS.CMU.EDU To respond to the author of a message on the connectionists list, e.g., to order a copy of his or her new tech report, use the "mail" command, NOT the "reply" command. Otherwise you will end up sending your message to the entire list, which REALLY annoys some people (especially the maintainer who will get the message several times). The rest of us will just laugh at you behind your back. Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe. Happy hacking. Scott Crowder Connectionists-Request at cs.cmu.edu (ARPAnet) 
From jose at neuron.siemens.com Tue Feb 6 18:28:54 1990 From: jose at neuron.siemens.com (Steve Hanson) Date: Tue, 6 Feb 90 18:28:54 EST Subject: NIPS-90 WORKSHOPS Call for Proposals Message-ID: <9002062328.AA02485@neuron.siemens.com.siemens.com> REQUEST FOR PROPOSALS NIPS-90 Post-Conference Workshops November 30 and December 1, 1990 Following the regular NIPS program, workshops on current topics on Neural Information Processing will be held on November 30 and December 1, 1990, at a ski resort near Denver. Proposals by qualified individuals interested in chairing one of these workshops are solicited. Past topics have included: Rules and Connectionist Models; Speech; Vision; Neural Network Dynamics; Neurobiology; Computational Complexity Issues; Fault Tolerance in Neural Networks; Benchmarking and Comparing Neural Network Applications; Architectural Issues; Fast Training Techniques; VLSI; Control; Optimization, Statistical Inference, Genetic Algorithms. The format of the workshops is informal. Beyond reporting on past research, their goal is to provide a forum for scientists actively working in the field to freely discuss current issues of concern and interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Specific open or controversial issues are encouraged and preferred as workshop topics. Individuals interested in chairing a workshop must propose a topic of current interest and must be willing to accept responsibility for their group's discussion. Discussion leaders' responsibilities include arranging brief informal presentations by experts working on the topic, moderating or leading the discussion, and reporting its high points, findings and conclusions to the group during evening plenary sessions and in a short (2 page) summary. Submission Procedure: Interested parties should submit a short proposal for a workshop of interest by May 17, 1990. Proposals should include a title and a short description of what the workshop is to address and accomplish. It should state why the topic is of interest or controversial, why it should be discussed and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, list of publications and evidence of scholarship in the field of interest. Mail submissions to: Dr. Alex Waibel Attn: NIPS90 Workshops School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Name, mailing address, phone number, and e-mail net address (if applicable) must be on all submissions. 
Workshop Organizing Committee: Alex Waibel, Carnegie-Mellon, Workshop Chairman; Kathie Hibbard, University of Colorado, NIPS Local Arrangements; Howard Wachtel, University of Colorado, Workshop Local Arrangements; PROPOSALS MUST BE RECEIVED BY MAY 17, 1990 Please Post From jose at neuron.siemens.com Tue Feb 6 19:32:57 1990 From: jose at neuron.siemens.com (Steve Hanson) Date: Tue, 6 Feb 90 19:32:57 EST Subject: NIPS-90 CALL For Papers Message-ID: <9002070032.AA02512@neuron.siemens.com.siemens.com> CALL FOR PAPERS IEEE Conference on Neural Information Processing Systems -Natural and Synthetic- Monday, November 26 - Thursday, November 29, 1990 Denver, Colorado This is the fourth meeting of an inter-disciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. Two days of focused workshops will follow at a nearby ski area (Nov 30-Dec 1). Major categories and examples of subcategories for paper submissions are the following: Neuroscience: Neurobiological models of development, cellular information processing, synaptic function, learning and memory. Studies and analyses of neurobiological systems. Implementation and Simulation: Hardware implementation of neural nets. VLSI, Optical Computing, and practical issues for simulations and simulation tools. Algorithms and Architectures: Description and experimental evaluation of new net architectures or learning algorithms: data representations, static and dynamic nets, modularity, rapid training, learning pattern sequences, implementing conventional algorithms. Theory: Theoretical analysis of: learning, algorithms, generalization, complexity, scaling, capability, stability, dynamics, fault tolerance, sensitivity, relationship to conventional algorithms. Cognitive Science & AI: Cognitive models or simulations of natural language understanding, problem solving, language acquisition, reasoning, skill acquisition, perception, motor control, categorization, or concept formation. Applications: Neural Networks applied to signal processing, speech, vision, character recognition, motor control, robotics, adaptive systems tasks. Technical Program: Plenary, contributed and poster sessions will be held. There will be no parallel sessions. The full text of presented papers will be published. Submission Procedures: Original research contributions are solicited, and will be carefully refereed. Authors must submit six copies of a 1000-word (or less) summary and six copies of a separate single-page 50-100 word abstract clearly stating their results by May 17, 1990. At the bottom of each abstract page and on the first summary page indicate preference for oral or poster presentation and specify one of the above six broad categories and, if appropriate, sub-categories (For example: POSTER-Applications: Speech, ORAL-Implementation: Analog VLSI). Include the addresses of all authors at the front of the summary and the abstract, and indicate to which author correspondence should be addressed. Submissions that lack category information, separate abstract sheets, the required six copies, or author addresses, or that arrive late, will not be considered. Mail Submissions To: John Moody, NIPS*90 Submissions, Department of Computer Science, Yale University, P.O. Box 2158 Yale Station, New Haven, Conn. 06520. Mail Requests For Registration Material To: Kathie Hibbard, NIPS*90 Local Committee, Engineering Center, University of Colorado, Campus Box 425, Boulder, CO 80309-0425. 
Organizing Committee: General Chair: Richard Lippmann, MIT Lincoln Labs; Program Chair: John Moody, Yale; Neurobiology Co-Chair: Terry Sejnowski, Salk; Theory Co-Chair: Gerry Tesauro, IBM; Implementation Co-Chair: Josh Alspector, Bellcore; Cognitive Science and AI Co-Chair: Stephen Hanson, Siemens; Architectures Co-Chair: Yann Le Cun, ATT Bell Labs; Applications Co-Chair: Lee Giles, NEC; Workshop Chair: Alex Waibel, CMU; Workshop Local Arrangements, Howard Wachtel, U. Colorado; Local Arrangements, Kathie Hibbard, U. Colorado; Publicity: Stephen Hanson, Siemens; Publications: David Touretzky, CMU; Neurosciences Liaison: James Bower, Caltech; IEEE Liaison: Edward Posner, Caltech; APS Liaison: Larry Jackel, ATT Bell Labs; Treasurer: Kristina Johnson, U. Colorado; DEADLINE FOR SUMMARIES & ABSTRACTS IS MAY 17, 1990 please post From ersoy at ee.ecn.purdue.edu Wed Feb 7 09:32:59 1990 From: ersoy at ee.ecn.purdue.edu (Okan K Ersoy) Date: Wed, 7 Feb 90 09:32:59 -0500 Subject: No subject Message-ID: <9002071432.AA04300@ee.ecn.purdue.edu> CALL FOR PAPERS AND REFEREES HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 24 NEURAL NETWORKS AND RELATED EMERGING TECHNOLOGIES KAILUA-KONA, HAWAII - JANUARY 8-11, 1991 The Neural Networks Track of HICSS-24 will contain a special set of papers focusing on a broad selection of topics in the area of Neural Networks and Related Emerging Technologies. The presentations will provide a forum to discuss new advances in learning theory, associative memory, self-organization, architectures, implementations and applications. Papers are invited that may be theoretical, conceptual, tutorial or descriptive in nature. Those papers selected for presentation will appear in the Conference Proceedings which is published by the Computer Society of the IEEE. HICSS-24 is sponsored by the University of Hawaii in cooperation with the ACM, the Computer Society, and the Pacific Research Institute for Information Systems and Management (PRIISM). Submissions are solicited in: Supervised and Unsupervised Learning Issues of Complexity and Scaling Associative Memory Self-Organization Architectures Optical, Electronic and Other Novel Implementations Optimization Signal/Image Processing and Understanding Novel Applications INSTRUCTIONS FOR SUBMITTING PAPERS Manuscripts should be 22-26 typewritten, double-spaced pages in length. Do not send submissions that are significantly shorter or longer than this. Papers must not have been previously presented or published, nor currently submitted for journal publication. Each manuscript will be put through a rigorous refereeing process. Manuscripts should have a title page that includes the title of the paper, full name of its author(s), affiliation(s), complete physical and electronic address(es), telephone number(s) and a 300-word abstract of the paper. DEADLINES A 300-word optional abstract may be submitted by April 30, 1990 by e-mail or mail. Feedback to author concerning abstract will be given by May 31, 1990. Six copies of the manuscript are due by June 25, 1990. Notification of accepted papers by September 1, 1990. Accepted manuscripts, camera-ready, are due by October 1, 1990. SEND SUBMISSIONS AND QUESTIONS TO O. K. Ersoy Purdue University School of Electrical Engineering W. 
Lafayette, IN 47907 (317) 494-6162 E-Mail: ersoy at ee.ecn.purdue.edu From ai-vie!georg at relay.EU.net Wed Feb 7 13:19:51 1990 From: ai-vie!georg at relay.EU.net (Georg Dorffner) Date: Wed, 7 Feb 90 17:19:51 -0100 Subject: connectionism & AI conf. Message-ID: <9002071619.AA02670@ai-vie.uucp> Announcement and Call for Papers Sixth Austrian Artificial Intelligence Conference --------------------------------------------------------------- Connectionism in Artificial Intelligence and Cognitive Science --------------------------------------------------------------- organized by the Austrian Society for Artificial Intelligence (OeGAI) in cooperation with the Gesellschaft fuer Informatik (GI, German Society for Computer Science), Section for Connectionism Sep 18 - 21, 1990 Salzburg, Austria Conference chair: Georg Dorffner (Univ. of Vienna, Austria) Program committee: J. Diederich (GMD St. Augustin, Germany) C. Freksa (Techn. Univ. Munich, Germany) Ch. Lischka (GMD St.Augustin, Germany) A. Kobsa (Univ. of Saarland, Germany) M. Koehle (Techn. Univ. Vienna, Austria) B. Neumann (Univ. Hamburg, Germany) H. Schnelle (Univ. Bochum, Germany) Z. Schreter (Univ. Zurich, Switzerland) Connectionism has recently become more and more influential as a basic paradigm and method for artificial intelligence and cognitive science. Although there is an abundance of conferences on artificial neural networks - the basis of connectionism - only a few meetings are devoted to modeling cognitive processes and building AI models with the novel approach. This conference is designed to fill this gap. It will bring together work on neural networks for AI problems, as well as on basic aspects of massive parallelism and theoretical implications of the new paradigm. The program will consist of submitted papers, workshops, invited talks and panels. IMPORTANT! The conference languages are German and English. Most of the conference will be held in German, but papers in English are welcome! Scientific program: papers on the following topics, among others, are solicited: - networks in practical AI applications - connectionist "expert systems" - localist (structured) networks - localist and self-organizing approaches - explanation and interpretation of network behavior - hybrid systems - knowledge representation in neural networks - representation vs. behavior - validity of learning mechanisms - parallelism in humans and machines - associative inferences - connectionism and language processing - connectionism and pattern recognition - network simulation software as AI tool - neural networks and genetic algorithms - philosophical and epistemological implications - neural networks and robotics Workshops: - massive parallelism and cognition (Ch. Lischka) - structured (localist) network models (J. Diederich) - connectionism in language processing The workshops consist of short presentations and intensive discussions on the specialized topic. Presentations are usually invited, but can also be submitted. They will be open to all participants at the conference. Panel: Explanation and transparency of connectionist systems ------------------------------------------------------------- All submissions for the scientific program should consist of no more than 10 pages, for the workshops of no more than 5 pages. Languages - as mentioned above - are German and English. All accepted papers will be printed in a proceedings volume. Send all submissions to: Georg Dorffner Dept. 
Dept. of Medical Cybernetics and Artificial Intelligence
University of Vienna
Freyung 6/2
A-1010 Vienna, Austria

Deadlines:
Complete submission postmarked no later than March 15, 1990.
April 30, 1990: Notification of acceptance / rejection.
June 1, 1990: Deadline for camera-ready paper.

System demonstrations are possible, if the conference chair is notified early.

From honavar at cs.wisc.edu Wed Feb 7 17:42:48 1990
From: honavar at cs.wisc.edu (Vasant Honavar)
Date: Wed, 7 Feb 90 16:42:48 -0600
Subject: TR available by FTP
Message-ID: <9002072242.AA13257@goat.cs.wisc.edu>

**********DO NOT FORWARD TO OTHER BBOARDS**************
**********DO NOT FORWARD TO OTHER BBOARDS**************

The following tech report is available via ftp from cheops.cis.ohio-state.edu (courtesy Jordan Pollack). Here is what you need to do to get a copy:

unix> ftp cheops.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> get (remote-file) honavar.control.ps.Z (local-file) foo.ps.Z
ftp> quit
unix> uncompress foo.ps
unix> lpr -Pxx foo.ps     (xx is the name of your local postscript printer)

------------------------------------------------------------------------

Computer Sciences Technical Report #910, January 1990.

Coordination and Control Structures and Processes: Possibilities for Connectionist Networks (CN)

Vasant Honavar & Leonard Uhr
Computer Sciences Department
University of Wisconsin-Madison

Abstract

The absence of powerful control structures and processes that synchronize, coordinate, switch between, choose among, regulate, direct, modulate interactions between, and combine distinct yet interdependent modules of large connectionist networks (CN) is probably one of the most important reasons why such networks have not yet succeeded at handling difficult tasks (e.g. complex object recognition and description, complex problem-solving, planning). In this paper we examine how CN built from large numbers of relatively simple neuron-like units can be given the ability to handle problems that in typical multi-computer networks and artificial intelligence programs - along with all other types of programs - are always handled using extremely elaborate and precisely worked out central control (coordination, synchronization, switching, etc.). We point out the several mechanisms for central control of this un-brain-like sort that CN already have built into them - albeit in hidden, often overlooked, ways. We examine the kinds of control mechanisms found in computers, programs, fetal development, cellular function and the immune system, evolution, social organizations, and especially brains, that might be of use in CN. Particularly intriguing suggestions are found in the pacemakers, oscillators, and other local sources of the brain's complex partial synchronies; the diffuse, global effects of slow electrical waves and neurohormones; the developmental program that guides fetal development; communication and coordination within and among living cells; the working of the immune system; the evolutionary processes that operate on large populations of organisms; and the great variety of partially competing partially cooperating controls found in small groups, organizations, and larger societies. All these systems are rich in control - but typically control that emerges from complex interactions of many local and diffuse sources. We explore how several different kinds of plausible control mechanisms might be incorporated into CN, and assess their potential benefits with respect to their cost.
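[The control mechanisms surveyed in the abstract above are easiest to picture in miniature. The C sketch below is not from the report; it is a minimal, hypothetical illustration of one of the simplest such mechanisms: a gating signal that switches between, or blends, the outputs of two otherwise independent network modules.]

/* Minimal illustration (not from the report): a gating unit blends
 * the outputs of two small "modules" of a connectionist network.
 * The gate g in [0,1] plays the role of a simple central controller. */
#include <stdio.h>

#define N 3

/* One linear module: out = W * in */
void module(const double W[N][N], const double in[N], double out[N])
{
    int i, j;
    for (i = 0; i < N; i++) {
        out[i] = 0.0;
        for (j = 0; j < N; j++)
            out[i] += W[i][j] * in[j];
    }
}

int main(void)
{
    double Wa[N][N] = {{1,0,0},{0,1,0},{0,0,1}};   /* module A: identity */
    double Wb[N][N] = {{0,0,1},{0,1,0},{1,0,0}};   /* module B: reversal */
    double in[N] = {0.2, 0.5, 0.9};
    double a[N], b[N], out[N];
    double g = 0.8;   /* gating signal: 1 = module A only, 0 = B only */
    int i;

    module(Wa, in, a);
    module(Wb, in, b);
    for (i = 0; i < N; i++)
        out[i] = g * a[i] + (1.0 - g) * b[i];   /* gated blend */

    for (i = 0; i < N; i++)
        printf("out[%d] = %.3f\n", i, out[i]);
    return 0;
}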
From inesc!lba%alf at relay.EU.net Thu Feb 8 14:03:13 1990
From: inesc!lba%alf at relay.EU.net (Luis Borges de Almeida)
Date: Thu, 8 Feb 90 14:03:13 EST
Subject: EURASIP Workshop on NN - Emergency announcement
Message-ID: <9002081403.AA25467@alf.inesc.pt>

[I apologize to the many readers of this list who are not involved in the EURASIP workshop, but this was the means to get to many people fast in an emergency. I hope you will understand.

Thanks,
Luis B. Almeida]

-----------------------------------------------------------------------

VERY URGENT

Dear workshop participant,

We are very sorry to inform you that, from what we have just learned, the Portuguese air traffic controllers have announced a strike from February 14 through February 18. This means there will be serious trouble with transportation to/from Portugal. From our judgment of the situation, we would guess that the strike will not be called off. However, it is said that the Government might make a civilian requisition of the controllers.

Below are some indications of the possible measures that you could take to ensure your arrival on time, and your departure, in case the strike is maintained. Two points, however, are very important:

1) ACT QUICKLY - alternate transportation around those days will probably get full very fast.

2) LET US KNOW OF YOUR TRAVEL ARRANGEMENTS, AS SOON AS POSSIBLE - we will try to help minimize the consequences of this strike to our participants (the best ways to contact us are given at the end of this message).

Measures that you can take:

1 - Contact your travel agent, and have him make "protective reservations" for arrival on the 13th, and departure on the 19th. Don't forget to do that for all flights along your route. It is best to also keep your old reservations, in case the strike is called off. For extra lodging, if Hotel do Mar is full and can't help you, we can suggest the Holiday Inn in Lisbon (phone +351-1-735093, 735123, 735222, 736018; fax +351-1-736572, 736672; telex 60330 HOLINN P). Mention that you are coming to a meeting organized by Inesc; they'll probably give you a special price.

2 - Make "protective reservations" for arrival on the 14th and/or departure on the 18th, in Madrid, instead of Lisbon. You can then use the train to/from Lisbon, but we will also try to arrange a bus if there are enough people in this situation. You can also choose to drive between Madrid and Sesimbra (about 600 km).

You can contact anyone in the local organizing committee: Luis B. Almeida, Ilda Goncalves, Joaquim S. Rodrigues, Fernando M. Silva, Joao Neto

Phone numbers: +351-1-544607, 545150
Fax: +351-1-525843 (may get quite busy the next few days)
Telex: 15696 INESC P
E-mail: lba at inesc.inesc.pt (from Europe)
        lba%inesc.inesc.pt at uunet.uu.net (from outside Europe)
        lba at inesc.uucp (if you have access to uucp)
        {any backbone, uunet}!mcvax!inesc!lba (older, but should still work)

We (still) look forward to meeting you in Sesimbra.

Sincerely,
Luis B. Almeida

From harnad at Princeton.EDU Thu Feb 8 20:07:30 1990
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Thu, 8 Feb 90 20:07:30 EST
Subject: Searle/Pinker: BBS Call for Commentators
Message-ID: <9002090107.AA03347@reason.Princeton.EDU>

Below are the abstracts of two forthcoming target articles [Searle on consciousness, Pinker & Bloom on language] that are about to be circulated for commentary by Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal that provides Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator on one of these articles (please specify which), or to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to:

harnad at clarity.princeton.edu or harnad at pucc.bitnet

or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]

____________________________________________________________________

(1) Searle: Consciousness & Explanation
(2) Pinker & Bloom: Language Evolution

---------------------------------------------------------------------

(1) CONSCIOUSNESS, EXPLANATORY INVERSION AND COGNITIVE SCIENCE

by John R. Searle
Department of Philosophy
University of California
Berkeley, CA

Cognitive science typically postulates unconscious mental phenomena, computational or otherwise, to explain cognitive capacities. The mental phenomena in question are supposed to be inaccessible in principle to consciousness. I try to show that this is a mistake, because all unconscious intentionality must be accessible in principle to consciousness; we have no notion of intrinsic intentionality except in terms of its accessibility to consciousness. I call this claim the Connection Principle. The argument for it proceeds in six steps. The essential point is that intrinsic intentionality has aspectual shape: our mental representations represent the world under specific aspects, and these aspectual features are essential to a mental state's being the state that it is. Once we recognize the Connection Principle, we see that it is necessary to perform an inversion on the explanatory models of cognitive science, an inversion analogous to the one evolutionary biology imposes on pre-Darwinian animistic modes of explanation. In place of the original intentionalistic explanations we have a combination of hardware and functional explanations. This radically alters the structure of explanation, because instead of a mental representation (such as a rule) causing the pattern of behavior it represents (such as rule-governed behavior), there is a neurophysiological cause of a pattern (such as a pattern of behavior), and the pattern plays a functional role in the life of the organism. What we mistakenly thought were descriptions of underlying mental principles in, for example, theories of vision and language, were in fact descriptions of functional aspects of systems, which will have to be explained by underlying neurophysiological mechanisms. In such cases what looks like mentalistic psychology is sometimes better construed as speculative neurophysiology. The moral is that the big mistake in cognitive science is not the overestimation of the computer metaphor (though that is indeed a mistake) but the neglect of consciousness.
---------------------------------------------------------------------

(2) NATURAL LANGUAGE AND NATURAL SELECTION

Steven Pinker and Paul Bloom
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology

Many have argued that the evolution of the human language faculty cannot be explained by Darwinian natural selection. Chomsky and Gould have suggested that language may have evolved as the byproduct of selection for other abilities or as a consequence of unknown laws of growth and form. Others have argued that a biological specialization for grammar is incompatible with Darwinian theory: Grammar shows no genetic variation, could not exist in any intermediate forms, confers no selective advantage, and would require more time and genomic space to evolve than is available. We show that these arguments depend on inaccurate assumptions about biology or language or both. Evolutionary theory offers a clear criterion for attributing a trait to natural selection: complex design for a function with no alternative processes to explain the complexity. Human language meets this criterion: Grammar is a complex mechanism tailored to the transmission of propositional structures through a serial interface. Autonomous and arbitrary grammatical phenomena have been offered as counterexamples to the claim that language is an adaptation, but this reasoning is unsound: Communication protocols depend on arbitrary conventions that are adaptive as long as they are shared. Consequently, the child's acquisition of language should differ systematically from language evolution in the species; attempts to make analogies between them are misleading. Reviewing other arguments and data, we conclude that there is every reason to believe that a specialization for grammar evolved by a conventional neo-Darwinian process.

--------------------------------------------------------------------------

From P.Refenes at CS.UCL.AC.UK Fri Feb 9 10:12:42 1990
From: P.Refenes at CS.UCL.AC.UK (P.Refenes@CS.UCL.AC.UK)
Date: Fri, 9 Feb 90 15:12:42 GMT
Subject: No subject
Message-ID:

The Knowledge Engineering Review is planning a special issue on "Combined Symbolic & Numeric Processing Systems". Is there anyone out there with an interest in "theories and techniques for mixed symbolic/numeric processing systems" who is willing to write a comprehensive review paper? The notes for contributors follow.

==========================================================

The Knowledge Engineering Review
Published by Cambridge University Press

Special Issue on: Combined Symbolic & Numeric Processing Systems

NOTES FOR CONTRIBUTORS

Editor: Apostolos N. REFENES
Department of Computer Science
University College London
Gower Street, London, WC1 6BT, UK.

BITNET:  refenes%uk.ac.ucl.cs at ukacrl
UUCP:    {...mcvax!ukc!}ucl.cs!refenes
ARPANet: refenes at cs.ucl.ac.uk

THE KNOWLEDGE ENGINEERING REVIEW - SPECIAL ISSUE ON

Inferences and Algorithms: co-operating in computation
or
(Symbols and Numbers: co-operating in computation)

THEME

This special issue of KER reviews developments in integrated symbolic and numeric computation, a prominent emerging subject. In Europe, ESPRIT is already funding a $15m project to investigate the integration of symbolic and numeric computation and is planning to issue a further call for a $20m type A project in Autumn this year.
In the USA, various funding agencies, such as the DoD and NSF, have been heavily involved in supporting research into the integration of symbolic and numeric computing systems over the last few years.

Algorithmic (or numeric) computational methods are mostly used for low-level, data-driven computations to achieve problem solutions by exhaustive evaluation, and are based on static, hardwired decision-making procedures. The static nature and regularity of the knowledge, data, and control structures employed by such algorithmic methods permit their efficient mapping and execution on conventional supercomputers. However, the scale of the computation often increases non-linearly with the problem size and the strength of the data inter-dependencies.

Symbolic (or inference) computational methods have the capability to drastically reduce the required computations by using high-level, model-driven knowledge and hypothesis-and-test techniques about the application domain. However, the irregularity, uncertainty, and dynamic character of the knowledge, data, and control structures employed by symbolic methods present a major obstacle to their efficient mapping and execution on conventional parallel computers.

This has led many researchers to propose the development of integrated numeric and symbolic computation systems, which have the potential to achieve optimal solutions for large classes of problems in which the algorithmic and symbolic components are engaged in close co-operation. The need for such interaction is particularly obvious in applications such as image understanding, speech recognition, weather forecasting, financial forecasting, the solution of partial differential equations, etc. In these applications, numeric software components are tightly coupled with their symbolic counterparts, which, in turn, can feed back adjustable algorithm parameters and hence support the "hypothesis-and-test" capability required to validate the numeric data. It is this application domain that provided the motivation for developing theoretical frameworks, techniques, programming languages, and computer architectures to efficiently support both symbolic and numeric computation. This special issue of The Knowledge Engineering Review (KER) aims to provide a comprehensive and timely review of the state of the art in integrated symbolic and numeric knowledge engineering systems. The special issue will cover the topics outlined in the next section.

TOPICS

There are four important topics related to the subject area of integrated symbolic and numeric computation. This special issue will have one comprehensive review paper on each of the topics, and a general overview article (or editorial) to link them together.

1. Theory and Techniques

Traditional theoretical frameworks for decision making are generally considered to be too restrictive for developing practical knowledge-based systems. The principal restriction is that classical algorithmic decision theories and techniques do not address the need to reason about the decision process itself. Classical techniques cannot reflect on what the decision is, what the options are, what methods should be (or have been) used in making the decision, and so forth. Approaches that accommodate numerical methods but extend them with non-monotonic inference techniques are described extensively in the literature, e.g. [Goguen, Eberbach, Vanneschi, Fox et al.]. What is needed is an in-depth analysis, taxonomy and evaluation of these techniques.
This review of the theoretical approaches to, and background of, integrated symbolic and numeric computation should be highly valuable to those involved in symbolic, numeric, and integrated symbolic-plus-numeric computation.

2. Applications

Here there would be a review of the applications which provide the stimulus for, and demonstrate techniques of, integrating symbolic and numeric computing components. Effectiveness considerations and performance gains should also be included where appropriate. Model applications may include: image understanding, weather forecasting, financial forecasting, expert systems for PDE solving, simulation, real-time process control, etc. The review article should expose the common ground that these applications share, the potential improvement in reasoning and computation efficiency, and the requirements that they impose on theoretical frameworks, programming languages, and computer architecture.

3. Programming Languages

This would be a review of the programming languages which provide the means for integrating symbolic and numeric computations. The article(s) should describe the competing approaches, i.e. integration through homogenisation, and integration through interfacing heterogeneous systems, as well as language design issues, features for parallelism, etc. Possible languages that might be considered are: Spinlog, Orient84K, LOOPS, Cornerstone, Solve, Parle, etc. A comparative analysis of the languages involved should be included.

4. Computer Architecture

This should be a comprehensive review of the novel computer architectures involved: their basic operating principles, their internal structure, a comparative analysis, etc. Possible architectures that might be considered are: PADMAVATI, SPRINT, ...

DEADLINES

March 15th - Extended Abstract.
April 30th - Full Manuscript.

==================================================================
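[The hypothesis-and-test coupling described in the THEME section above can be caricatured in a few lines. The C sketch below is purely illustrative and is not from the call: a numeric relaxation loop plays the algorithmic component, while a rule that watches the residual and feeds back a smaller step size plays the symbolic one.]

/* Illustrative only: "hypothesis-and-test" coupling in miniature.
 * A numeric relaxation solves f(x)=0 while a symbolic-style rule
 * watches the residual and feeds back a new step size whenever the
 * hypothesis "the current step converges" fails. */
#include <stdio.h>
#include <math.h>

static double f(double x) { return x * x - 2.0; }   /* root: sqrt(2) */

int main(void)
{
    double x = 1.0, step = 1.0, prev = fabs(f(x));
    int iter;

    for (iter = 0; iter < 100 && fabs(f(x)) > 1e-10; iter++) {
        x -= step * f(x);                 /* numeric component     */
        if (fabs(f(x)) >= prev) {         /* symbolic test fails:  */
            step *= 0.5;                  /* feed back a parameter */
            printf("iter %d: halving step to %g\n", iter, step);
        }
        prev = fabs(f(x));
    }
    printf("x = %.10f after %d iterations\n", x, iter);
    return 0;
}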
From jbower at smaug.cns.caltech.edu Fri Feb 9 14:28:47 1990
From: jbower at smaug.cns.caltech.edu (Jim Bower)
Date: Fri, 9 Feb 90 11:28:47 PST
Subject: Summer Course
Message-ID: <9002091928.AA02507@smaug.cns.caltech.edu>

Summer Course Announcement

Methods in Computational Neurobiology
August 5th - September 1st
Marine Biological Laboratory
Woods Hole, MA

This course is for advanced graduate students and postdoctoral fellows in neurobiology, physics, electrical engineering, computer science and psychology with an interest in "Computational Neuroscience." A background in programming (preferably in C or PASCAL) is highly desirable and basic knowledge of neurobiology is required. Limited to 20 students.

This four-week course presents the basic techniques necessary to study single cells and neural networks from a computational point of view, emphasizing their possible function in information processing. The aim is to enable participants to simulate the functional properties of their particular system of study and to appreciate the advantages and pitfalls of this approach to understanding the nervous system.

The first section of the course focuses on simulating the electrical properties of single neurons (compartmental models, active currents, interactions between synapses, calcium dynamics). The second section deals with the numerical and graphical techniques necessary for modeling biological neuronal networks. Examples are drawn from the invertebrate and vertebrate literature (visual system of the fly, learning in Hermissenda, mammalian olfactory and visual cortex). In the final section, connectionist neural networks relevant to perception and learning in the mammalian cortex, as well as network learning algorithms, will be analyzed and discussed from a neurobiological point of view.

The course includes lectures each morning and a computer laboratory in the afternoons and evenings. The laboratory section is organized around GENESIS, the neuronal network simulator developed at the California Institute of Technology, running on 20 state-of-the-art, single-user, graphic color workstations. Students initially work with GENESIS-based tutorials and then are expected to work on a simulation project of their own choosing.
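[As a taste of what the first course section covers, here is a minimal single-compartment membrane model integrated with the forward Euler method, written in C. The parameter values are generic textbook numbers chosen for the example; they are not taken from the course materials or from GENESIS.]

/* One passive membrane compartment, forward Euler integration.
 * Governing equation: Cm * dV/dt = -(V - Eleak)/Rm + Iinj       */
#include <stdio.h>

int main(void)
{
    double Cm    = 1.0;     /* membrane capacitance, nF   */
    double Rm    = 10.0;    /* membrane resistance, Mohm  */
    double Eleak = -70.0;   /* leak reversal potential, mV */
    double Iinj  = 1.5;     /* injected current, nA       */
    double V     = Eleak;   /* start at rest              */
    double dt    = 0.05;    /* time step, ms              */
    double t;

    for (t = 0.0; t <= 50.0; t += dt) {
        double dV = (-(V - Eleak) / Rm + Iinj) / Cm * dt;
        V += dV;
    }
    /* steady state approaches Eleak + Iinj*Rm = -55 mV
     * (membrane time constant tau = Rm*Cm = 10 ms)      */
    printf("V(50 ms) = %.2f mV\n", V);
    return 0;
}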
Co-Directors: James M. Bower and Christof Koch, Computation and Neural Systems Program, California Institute of Technology

1990 summer faculty:
Ken Miller, UCSF; Paul Adams, Stony Brook; Idan Segev, Jerusalem; David Rumelhart, Stanford; John Rinzel, NIH; Richard Andersen, MIT; David Van Essen, Caltech; Kevin Martin, Oxford; Al Selverston, UCSD; Nancy Kopell, Boston U.; Avis Cohen, Cornell; Rodolfo Llinas, NYU; Tom Brown*, Yale; Norberto Grzywacz*, MIT; Terry Sejnowski, UCSD/Salk; Ted Adelson, MIT. (*tentative)

Application deadline: May 15, 1990. Applications are evaluated by an admissions committee and individuals are notified of acceptance or non-acceptance by June 1.

Tuition: $1,000 (includes room & board). Financial aid is available to qualified applicants.

For further information contact:
Admissions Coordinator
Marine Biological Laboratory
Woods Hole, MA 02543
(508) 548-3705, ext. 216

From KOKAR at northeastern.edu Fri Feb 9 14:58:00 1990
From: KOKAR at northeastern.edu (KOKAR@northeastern.edu)
Date: Fri, 9 Feb 90 14:58 EST
Subject: Conference on Intelligent Control
Message-ID:

The 5th IEEE International Symposium on Intelligent Control
Penn Tower Hotel, Philadelphia
September 5 - 7, 1990
Sponsored by the IEEE Control Society

The IEEE International Symposium on Intelligent Control is the annual meeting dedicated to the problems of control systems associated with the combined Control/Artificial Intelligence theoretical paradigm. This particular meeting is dedicated to the Perception - Representation - Action Triad. The Symposium will consist of three mini-conferences:

Perception as a Source of Knowledge for Control (Chair - H. Wechsler)
Knowledge as a Core of Perception-Control Activities (Chair - S. Navathe)
Decision and Control via Perception and Knowledge (Chair - H. Kwatny)

interspersed with three plenary 2-hour panel discussions:

I. On Perception in the Loop
II. On Action in the Loop
III. On Knowledge Representation in the Loop

Suggested paper topics include, but are not limited to:
- Intractable Control Problems in the Perception-Representation-Action Loop
- Control with Perception Driven Representation
- Multiple Modalities of Perception, and Their Use for Control
- Control of Movements Required by Perception
- Control of Systems with Complicated Dynamics
- Intelligent Control for Interpretation in Biology and Psychology
- Actively Building-up Representation Systems
- Identification and Estimation of Complex Events in Unstructured Environment
- Explanatory Procedures for Constructing Representations
- Perception for Control of Goals, Subgoals, Tasks, Assignments
- Mobility and Manipulation
- Reconfigurable Systems
- Intelligent Control of Power Systems
- Intelligent Control in Automated Manufacturing
- Perception Driven Actuation
- Representations for Intelligent Controllers (geometry, physics, processes)
- Robust Estimation in Intelligent Control
- Decision Making Under Uncertainty
- Discrete Event Systems
- Computer-Aided Design of Intelligent Controllers
- Dealing with Unstructured Environment
- Learning and Adaptive Control Systems
- Autonomous Systems
- Intelligent Material Processing: Perception Based Reasoning

D E A D L I N E S

Extended abstracts (5 - 6 pages) should be submitted to:
H. Kwatny, MEM, Drexel University, Philadelphia, PA 19104 - CONTROL AREA
S. Navathe, Comp. Sci., University of Florida, Gainesville, FL 32911 - KNOWLEDGE REPRESENTATION AREA
H. Wechsler, George Mason University, Fairfax, VA 22030 - PERCEPTION AREA

NO LATER THAN MARCH 1, 1990.

Papers that are difficult to categorize, and/or related to all of these areas, as well as proposals for tutorials, invited sessions, demonstrations, etc., should be submitted to A. Meystel, ECE, Drexel University, Philadelphia, PA 19104, (215) 895-2220 before March 1, 1990.
REGISTRATION FEES:

                On/before Aug. 5, 1990    After Aug. 5, 1990
Student         $  50                     $  70
IEEE Member     $ 200                     $ 220
Other           $ 230                     $ 275

Cancellation fee: $20. Payment in US dollars only, by check, payable to: IC 90. Send check and registration form to: Intelligent Control - 1990, Department of ECE, Drexel University, Philadelphia, PA 19104.

From jfeldman%icsib2.Berkeley.EDU at jade.berkeley.edu Sun Feb 11 17:13:11 1990
From: jfeldman%icsib2.Berkeley.EDU at jade.berkeley.edu (Jerry Feldman)
Date: Sun, 11 Feb 90 14:13:11 PST
Subject: Advertisement
Message-ID: <9002112213.AA02197@icsib2.berkeley.edu.>

The International Computer Science Institute is pleased to announce that Italy and Switzerland have joined Germany as sponsor nations. Citizens of these countries are especially encouraged to inquire about permanent, visiting, or post-doctoral positions at the Institute. There are also open positions at all levels, including the most senior, that will be filled independently of nationality. In addition to its connectionist research, ICSI has programs in Complexity Theory, Realization of Massively Parallel Systems, and Very Large Distributed Systems.

From mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu Mon Feb 12 13:53:45 1990
From: mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu (mclennan%MACLENNAN.CS.UTK.EDU@cs.utk.edu)
Date: Mon, 12 Feb 90 14:53:45 EDT
Subject: Tech Report Available
Message-ID: <9002121953.AA05739@MACLENNAN.CS.UTK.EDU>

*************** PLEASE DO NOT DISTRIBUTE TO OTHER LISTS ***************

The following technical report (CS-90-99) is available.
Requests for copies: library at cs.utk.edu
Other correspondence: maclennan at cs.utk.edu

-----------------------------------------------------------------------

Evolution of Communication in a Population of Simple Machines

Bruce MacLennan
Department of Computer Science
The University of Tennessee
Knoxville, TN 37996-1301

Technical Report CS-90-99

ABSTRACT

We show that communication may evolve in a population of simple machines that are physically capable of sensing and modifying a shared environment, and for which there is selective pressure in favor of cooperative behavior. The emergence of communication was detected by comparing simulations in which communication was permitted with those in which it was suppressed. When communication was not suppressed we found that at the end of the experiment the average fitness of the population was 84% higher and had increased at a rate 30 times faster than when communication was suppressed. Furthermore, when communication was suppressed, the statistical association of symbols with situations was random, as is expected. In contrast, permitting communication led to very structured associations of symbols and situations, as determined by a variety of measures (e.g., coefficient of variation and entropy). Inspection of the structure of individual highly fit machines confirmed the statistical structure. We also investigated a simple kind of learning. This did not help when communication was suppressed, but when communication was permitted the resulting fitness was 845% higher and increased at a rate 80 times as fast as when it was suppressed. We argue that the experiments described here show a new way to investigate the emergence of communication, its function in populations of simple machines, and the structure of the resulting symbol systems.
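[The "structured association of symbols with situations" that the abstract above quantifies can be illustrated with the entropy measure it mentions. This C sketch is not from the report; the two co-occurrence tables are invented for the example, and simply show how a structured one-symbol-per-situation code scores far lower entropy than a random one.]

/* Shannon entropy of a symbol/situation co-occurrence table.
 * Both 4x4 tables are made up for illustration only.          */
#include <stdio.h>
#include <math.h>

#define K 4

double entropy(const double counts[K][K])
{
    double total = 0.0, h = 0.0;
    int i, j;
    for (i = 0; i < K; i++)
        for (j = 0; j < K; j++)
            total += counts[i][j];
    for (i = 0; i < K; i++)
        for (j = 0; j < K; j++) {
            double p = counts[i][j] / total;
            if (p > 0.0)
                h -= p * log(p) / log(2.0);   /* in bits */
        }
    return h;
}

int main(void)
{
    double random_assoc[K][K] = {
        {4,4,4,4},{4,4,4,4},{4,4,4,4},{4,4,4,4}};   /* uniform: 4 bits  */
    double structured[K][K] = {
        {16,0,0,0},{0,16,0,0},{0,0,16,0},{0,0,0,16}}; /* diagonal: 2 bits */

    printf("entropy, random code:     %.2f bits\n", entropy(random_assoc));
    printf("entropy, structured code: %.2f bits\n", entropy(structured));
    return 0;
}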
From Michael.Witbrock at CS.CMU.EDU Mon Feb 12 18:53:42 1990
From: Michael.Witbrock at CS.CMU.EDU (Michael.Witbrock@CS.CMU.EDU)
Date: Mon, 12 Feb 90 18:53:42 -0500 (EST)
Subject: Tech Report Announcement CMU-CS-89-208
Message-ID:

Requests for copies should go to the address at the bottom of this post, not to the poster. PLEASE DO NOT FORWARD TO OTHER BULLETIN BOARDS.

=============================================================================

An Implementation of Back-Propagation Learning on GF11, a Large SIMD Parallel Computer

Michael Witbrock and Marco Zagha
Carnegie Mellon University
CMU-CS-89-208
December 1989

Current connectionist simulations require huge computational resources. We describe a neural network simulator for the IBM GF11, an experimental SIMD machine with 566 processors and a peak arithmetic performance of 11 Gigaflops. We present our parallel implementation of the backpropagation learning algorithm, techniques for increasing efficiency, performance measurements on the NetTalk text-to-speech benchmark, and a performance model for the simulator. Our simulator currently runs the back-propagation learning algorithm at 900 million connections per second, where each ``connection per second'' includes both a forward and backward pass. This figure was obtained on the machine when only 356 processors were working; with all 566 processors operational, our simulation will run at over one billion connections per second. We conclude that the GF11 is well-suited to neural network simulation, and we analyze our use of the machine to determine which features are the most important for high performance.

=============================================================================

PLEASE DO NOT FORWARD TO OTHER BULLETIN BOARDS.

Requests for copies should go to:

C Copetas
School of Computer Science
Carnegie Mellon University
Pittsburgh PA 15213-3890
USA

or to copetas at cs.cmu.edu

The technical report deals with implementation issues for fast parallel connectionist simulators. It may not be of any great interest to anyone not working in this area.

P.S. This TR is in postscript. Could the person who runs the ftp archive tell me how to go about submitting it? Thanks.

From kddlab!soc.sdl.melco.co.jp!izui at UUNET.UU.NET Sat Feb 10 17:26:39 1990
From: kddlab!soc.sdl.melco.co.jp!izui at UUNET.UU.NET (Izui Yoshio)
Date: Sat, 10 Feb 90 17:26:39 JST
Subject: including me in mailing list
Message-ID: <9002100826.AA00495@loame26.soc.sdl.melco.co.jp>

Hello, could you include my address in your mailing list?

Yoshio Izui
Industrial Systems Lab.
Mitsubishi Electric Corp.
8-1-1, Tsukaguchi Honmachi, Amagasaki, Hyogo, 661 Japan
izui at soc.sdl.melco.co.jp

Interests: application to the power engineering field; basic analysis of models.
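[As a quick sanity check on the throughput figures in the Witbrock & Zagha GF11 report above: the abstract's own extrapolation assumes throughput scales linearly from 356 to 566 processors. The small C program below just reproduces that arithmetic.]

/* Back-of-the-envelope check of the GF11 figures quoted above,
 * assuming linear scaling with processor count (the extrapolation
 * the abstract itself makes). */
#include <stdio.h>

int main(void)
{
    double measured_cps = 900e6;   /* connections/s on 356 processors */
    double used = 356.0, total = 566.0;
    double per_proc  = measured_cps / used;
    double projected = per_proc * total;

    printf("per processor:  %.2f M connections/s\n", per_proc / 1e6);
    printf("566 processors: %.2f G connections/s\n", projected / 1e9);
    return 0;   /* prints about 2.53 M and 1.43 G */
}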
From ULI%MPI01.MPI.KUN.NL at VMA.CC.CMU.EDU Tue Feb 13 12:37:00 1990
From: ULI%MPI01.MPI.KUN.NL at VMA.CC.CMU.EDU (ULI%MPI01.MPI.KUN.NL@VMA.CC.CMU.EDU)
Date: Tue, 13 Feb 90 12:37 MET
Subject: job announcement, please post
Message-ID:

Position Available

The Max-Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, is looking for a programmer to participate in a project entitled ``Computational modeling of lexical representation and processes''. The task of the successful candidate will be to help develop and implement software for studying and simulating the processes of human speech perception and word recognition with artificial neural nets of different types. A strong background in software development (good knowledge of C) and a good understanding of the mathematical/technical principles underlying neural nets are required. The position is to be filled starting in March 1990 and is limited to two years (up to BAT IIA on the German salary scale). Qualified applicants (with a university degree: Ph.D.) should send their curriculum vitae and two letters of recommendation by March 1, 1990 to:

Uli Frauenfelder or Peter Wittenburg
Max-Planck-Institute for Psycholinguistics
Wundtlaan 1
NL-6525-XD Nijmegen, The Netherlands
phone: 31-80-521-911
e-mail: uli at hnympi51.bitnet or pewi at hnympi51.bitnet

From lyn%CS.EXETER.AC.UK at VMA.CC.CMU.EDU Tue Feb 13 08:17:34 1990
From: lyn%CS.EXETER.AC.UK at VMA.CC.CMU.EDU (Lyn Shackleton)
Date: Tue, 13 Feb 90 13:17:34 GMT
Subject: Reviewers for Special Issue of Connection Science
Message-ID: <8863.9002131317@exsc.cs.exeter.ac.uk>

The journal Connection Science is soon to announce a call for papers for a special issue on Simulations of Psychological Processes. So far the special editorial board consists of:

James A. Anderson
Walter Kintsch
Dennis Norris
David Rumelhart
Noel Sharkey

Others will be added to this list. We are now calling for REVIEWERS for the special issue. We would like to enlist volunteers from any area of psychology with experience in connectionist modeling. Please state your name and area of expertise.

lyn shackleton
Centre for Connection Science
Dept. Computer Science
University of Exeter
Exeter EX4 4PT
Devon, U.K.

JANET:  lyn at uk.ac.exeter.cs
UUCP:   !ukc!expya!lyn
BITNET: lyn at cs.exeter.ac.uk.UKACRL

From mani at grad1.cis.upenn.edu Tue Feb 13 12:15:22 1990
From: mani at grad1.cis.upenn.edu (D. R. Mani)
Date: Tue, 13 Feb 90 12:15:22 EST
Subject: Please add me to your mailing list
Message-ID: <9002131715.AA27505@grad1.cis.upenn.edu>

I am a graduate student in Computer Science at the University of Pennsylvania. I am interested in connectionist networks and would like my name to be included in your connectionist (e)mailing list. Thanks.

D. R. Mani
mani at grad1.cis.upenn.edu

From carol at ai.toronto.edu Tue Feb 13 14:33:30 1990
From: carol at ai.toronto.edu (Carol Plathan)
Date: Tue, 13 Feb 90 14:33:30 EST
Subject: CRG-TR-90-2 request
Message-ID: <90Feb13.143344est.10526@ephemeral.ai.toronto.edu>

PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS OR MAILING LISTS
**********************************************************

The following paper is an expanded version of one published in the NIPS'89 Proceedings. If you would like to receive a copy of this paper, reply to this message with your physical mailing address. (Please omit all other information from your message except your address.) Those who have previously requested copies of this TR at NIPS are already on the mailing list and do NOT need to send a new request.

-------------------------------------------------------------------------------

MAX LIKELIHOOD COMPETITION IN RBF NETWORKS

Steven J. Nowlan
Department of Computer Science
University of Toronto
10 King's College Road
Toronto, Canada M5S 1A4

Technical Report CRG-TR-90-2

One popular class of unsupervised algorithms is that of competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as gaussians) to a set of data-points.
From netbb at LONEX.RADC.AF.MIL Wed Feb 14 07:47:44 1990
From: netbb at LONEX.RADC.AF.MIL (Robert Russell)
Date: Wed, 14 Feb 90 07:47:44 EST
Subject: CRG-TR-90-2 request
Message-ID: <9002141247.AA22192@lonex9.radc.af.mil>

W. J. Buzz Szarek
RADC/IRRA
G.A.F.B., NY. 13441

From eisner%husc8 at harvard.harvard.edu Wed Feb 14 10:42:47 1990
From: eisner%husc8 at harvard.harvard.edu (Jason Eisner)
Date: Wed, 14 Feb 90 10:42:47 EST
Subject: CRG-TR-90-2 request
Message-ID:

Jason Eisner
60 Linnaean St.
Harvard University
Cambridge, MA 02138

From pollack at cis.ohio-state.edu Wed Feb 14 13:15:11 1990
From: pollack at cis.ohio-state.edu (Jordan B Pollack)
Date: Wed, 14 Feb 90 13:15:11 EST
Subject: Neuroprose update
Message-ID: <9002141815.AA02861@toto.cis.ohio-state.edu>

Tony Plate & I wrote a script to make life easier for those people who don't like to ftp and uncompress. It is enclosed below, and whatever file you save it in should be made executable (e.g. after saving it to a file, do a "chmod +x filename" on it). It is also stored as "Getps" in the neuroprose directory, where it will be maintained and improved. Also, I'd like to take this opportunity to ask those who have stored postscript files there, or are planning to in the future, to send me mail with: 1) filename, 2) a way to contact the author, and 3) a single-sentence abstract, so I can Cons up an INDEX file.
Jordan Pollack, Assistant Professor
CIS Dept/OSU, Laboratory for AI Research, 2036 Neil Ave, Columbus, OH 43210
Email: pollack at cis.ohio-state.edu Fax/Phone: (614) 292-4890

---------------cut here, save, and chmod +x----------
#!/bin/sh
########################################################################
# usage: getps <filename> [printerflags]
#
# A Script to get, uncompress, and print postscript
# files from the neuroprose directory on cheops.ohio-state.edu
#
# By Tony Plate & Jordan Pollack
########################################################################
if [ "$1" = "" ] ; then
echo "usage: $0 <filename> [printerflags]"
echo
echo The filename must be exactly as it is in the archive, if your
echo file is not found the first time, look in the file \"ftp.log\"
echo for a list of files in the archive.
echo
echo The printerflags are used for the optional lpr command that
echo is executed after the file is retrieved. A common use would
echo be to use -P to specify a particular postscript printer.
exit
fi
########################################################################
# set up script for ftp
# (the anonymous-ftp session below is a reconstruction; adjust the
# login password and directory to suit your site)
########################################################################
cat > .ftp.script <<END
user anonymous neuron@
binary
cd pub/neuroprose
get $1 /tmp/$1
quit
END
ftp -n cheops.ohio-state.edu < .ftp.script > ftp.log
rm -f .ftp.script
if [ ! -f /tmp/$1 ] ; then
echo Failed to get file - please inspect ftp.log for list of available files
exit
fi
########################################################################
# Uncompress if necessary
########################################################################
echo Retrieved /tmp/$1
case $1 in
*.Z) echo Uncompressing /tmp/$1
     uncompress /tmp/$1
     FILE=`basename $1 .Z`
     ;;
*)   FILE=$1
esac
########################################################################
# query to print file
########################################################################
echo -n "Send /tmp/$FILE to 'lpr $2' (y or n)? "
read x
case $x in
[yY]*) echo Printing /tmp/$FILE
       lpr $2 /tmp/$FILE
       ;;
esac
echo File left in /tmp/$FILE
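Typical use, once the above is saved and made executable (the archive filename here is made up for illustration): "Getps lastname.paper.ps.Z -Pps1" fetches the file into /tmp, uncompresses it, and offers to send it to lpr with the -Pps1 flag passed through.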
From ernst at aurel.cns.caltech.edu Wed Feb 14 21:53:11 1990
From: ernst at aurel.cns.caltech.edu (Ernst Niebur)
Date: Wed, 14 Feb 90 18:53:11 PST
Subject: Could you please add my address to your mailing list? Thank you
Message-ID: <9002150253.AA00212@aurel.cns.caltech.edu>

cc:ernst

From mike at bucasb.bu.edu Thu Feb 15 18:57:28 1990
From: mike at bucasb.bu.edu (Michael Cohen)
Date: Thu, 15 Feb 90 18:57:28 EST
Subject: Wang Conference ATR Call for Papers
Message-ID: <9002152357.AA01664@bucasb.bu.edu>

CALL FOR PAPERS
NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION
MAY 11--13, 1990
Sponsored by the Center for Adaptive Systems, the Graduate Program in Cognitive and Neural Systems, and the Wang Institute of Boston University, with partial support from The Air Force Office of Scientific Research

This research conference at the cutting edge of neural network science and technology will bring together leading experts in academe, government, and industry to present their latest results on automatic target recognition in invited lectures and contributed posters. Automatic target recognition is a key process in systems designed for vision and image processing, speech and time series prediction, adaptive pattern recognition, and adaptive sensory-motor control and robotics. Invited lecturers include: JOE BROWN, Martin Marietta; GAIL CARPENTER, Boston Univ.; NABIL FARHAT, Univ. Pennsylvania; STEPHEN GROSSBERG, Boston Univ.; ROBERT HECHT-NIELSEN, HNC; KEN JOHNSON, Hughes Aircraft; PAUL KOLODZY, MIT Lincoln Lab; MICHAEL KUPERSTEIN, Neurogen; YANN LECUN, AT&T Bell Labs; CHRISTOPHER SCOFIELD, Nestor; STEVEN SIMMES, Science Applications International Co.; ALEX WAIBEL, Carnegie Mellon Univ.; ALLEN WAXMAN, MIT Lincoln Lab; FRED WEINGARD, Booz-Allen and Hamilton; BARBARA YOON, DARPA.

CALL FOR PAPERS---ATR POSTER SESSION: A featured poster session on ATR neural network research will be held on May 12, 1990. Attendees who wish to present a poster should submit 3 copies of an extended abstract (1 single-spaced page), postmarked by March 1, 1990, for refereeing. Include with the abstract the name, address, and telephone number of the corresponding author. Mail to: ATR Poster Session, Neural Networks Conference, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. Authors will be informed of abstract acceptance by March 31, 1990.

SITE: The Wang Institute possesses excellent conference facilities on a beautiful 220-acre campus. It is easily reached from Boston's Logan Airport and Route 128.

REGISTRATION FEE: Regular attendee--$90; full-time student--$70. Registration fee includes admission to all lectures and poster session, meeting proceedings, one reception, two continental breakfasts, one lunch, one dinner, daily morning and afternoon coffee service.
STUDENT FELLOWSHIPS are available. For information, call (508) 649-9731.
TO REGISTER: By phone, call (508) 649-9731; by mail, write for further information to: Neural Networks, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879.

From Dave.Touretzky at B.GP.CS.CMU.EDU Fri Feb 16 04:25:19 1990
From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU)
Date: Fri, 16 Feb 90 04:25:19 EST
Subject: repetitive conference announcements
Message-ID: <21816.635160319@DST.BOLTZ.CS.CMU.EDU>

I spoke with Michael Cohen at BU about the repetitive conference announcements which some subscribers to this list find annoying. He wasn't aware of the policy on CONNECTIONISTS about repetitive posts, and assures me it won't happen anymore. For those who don't know: the policy has always been that conferences should be announced just once on CONNECTIONISTS. We've relaxed this a little to permit one early call for papers and one late posting of registration info as the time of the conference draws near. That's as far as it goes. No flames, please. Let's all get back to work. -- Dave

From P.Refenes at Cs.Ucl.AC.UK Fri Feb 16 10:16:00 1990
From: P.Refenes at Cs.Ucl.AC.UK (P.Refenes@Cs.Ucl.AC.UK)
Date: Fri, 16 Feb 90 15:16:00 +0000
Subject: KOHONEN's FEATURE MAPS
Message-ID:

Does anyone out there have an implementation of Kohonen's feature map algorithm in a useful programming language (e.g. C, C++, LISP, Prolog, RCS, GENESYS, etc.)? If so, is it possible to get our hands on the sources? Thanks in advance, Paul Refenes.
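On the Kohonen question above, a generic sketch in C of one training step of a one-dimensional feature map (an illustration of the algorithm, not a pointer to tested sources; the sizes and parameter names are arbitrary):

#define UNITS 50   /* number of map units */
#define DIM    2   /* input dimension */

double w[UNITS][DIM];   /* codebook (weight) vectors */

/* One Kohonen update: find the best-matching unit for input x, then
   move it and its neighbours within `radius` map positions toward x
   by learning rate `alpha`. In practice alpha and radius are both
   decreased gradually over the course of training. */
void som_step(const double x[], double alpha, int radius)
{
    int best = 0, i, j;
    double bestd = 1e30;
    for (i = 0; i < UNITS; i++) {        /* nearest codebook vector */
        double d = 0.0;
        for (j = 0; j < DIM; j++) {
            double diff = x[j] - w[i][j];
            d += diff * diff;
        }
        if (d < bestd) { bestd = d; best = i; }
    }
    for (i = best - radius; i <= best + radius; i++) {
        if (i < 0 || i >= UNITS) continue;   /* clip at the map edges */
        for (j = 0; j < DIM; j++)
            w[i][j] += alpha * (x[j] - w[i][j]);
    }
}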
From bogus@does.not.exist.com Mon Feb 19 11:39:56 1990
From: bogus@does.not.exist.com ()
Date: 19 FEB 90 11:39:56
Subject: Forward of: Turing 1990 Colloquium, 3-6 April 1990, Sussex University
Message-ID: <$969797332S0340D19900219T093956.0001.Mail-VE>

From aarons%cogs.sussex.ac.uk%NSFnet-Relay.AC.UK at vma.CC.CMU.EDU Sun Feb 4 14:11:17 1990
From: aarons%cogs.sussex.ac.uk%NSFnet-Relay.AC.UK at vma.CC.CMU.EDU (aarons%cogs.sussex.ac.uk%NSFnet-Relay.AC.UK@vma.CC.CMU.EDU)
Date: Sun, 4 Feb 90 19:11:17 GMT
Subject: Turing 1990 Colloquium, 3-6 April 1990, Sussex University
Message-ID: <18538.9002041911@csunb.cogs.susx.ac.uk>

I have been asked to circulate information about this conference. NB - please do NOT use "reply". Email enquiries should go to turing at uk.ac.sussex.syma
-----------------------------------------------------------------------
TURING 1990 COLLOQUIUM
At the University of Sussex, Brighton, England, 3rd - 6th April 1990

This Conference commemorates the 40th anniversary of the publication in Mind of Alan Turing's influential paper "Computing Machinery and Intelligence". It is hosted by the School of Cognitive and Computing Sciences at the University of Sussex and held under the auspices of the Mind Association. Additional support has been received from the Analysis Committee, the Aristotelian Society, The British Logic Colloquium, The International Union of History and Philosophy of Science, POPLOG, Philosophical Quarterly, and the SERC Logic for IT Initiative. The aim of the Conference is to draw together people working in Philosophy, Logic, Computer Science, Artificial Intelligence, Cognitive Science and related fields, in order to celebrate the intellectual and technological developments which owe so much to Turing's seminal thought. Papers will be presented on the following themes: Alan Turing and the emergence of Artificial Intelligence, Logic and the Theory of Computation, The Church-Turing Thesis, The Turing Test, Connectionism, Mind and Content, Philosophy and Methodology of Artificial Intelligence and Cognitive Science.

Invited talks will be given by Paul Churchland, Joseph Ford, Robin Gandy, Clark Glymour, Douglas Hofstadter, J.R. Lucas, Donald Michie, Christopher Peacocke and Herbert Simon, while other prominent contributors include Robert French (Indiana), Beatrice de Gelder (Tilburg), Andrew Hodges (Oxford), Philip Pettit (ANU) and Aaron Sloman (Sussex).

Anyone wishing to attend this Conference should complete the enclosed form and send it to Andy Clark, TURING Registrations, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, England, U.K., enclosing a STERLING cheque or money order for the total amount payable, made out to "Turing 1990". We regret that we cannot accept payment in other currencies. The form should be returned not later than Thursday 1st March, 1990, after which an extra fee of #5.00 for late registration is payable and accommodation cannot be guaranteed. The conference will start at lunchtime on Tuesday 3rd April, 1990, and will end on Friday 6th April after tea. Final details will be sent to registered participants in February 1990.

Conference Organizing Committee: Andy Clark (Sussex University), David Holdcroft (Leeds University), Peter Millican (Leeds University), Steve Torrance (Middlesex Polytechnic)
___________________________________________________________________________
PROGRAMME OF INVITED SPEAKERS
Paul CHURCHLAND (UCSD) Title to be announced
Joseph FORD (Georgia) CHAOS : ITS PAST, ITS PRESENT, BUT MOSTLY ITS FUTURE
Robin GANDY (Oxford) HUMAN VERSUS MECHANICAL INTELLIGENCE
Clark GLYMOUR (Carnegie-Mellon) COMPUTABILITY, CONCEPTUAL REVOLUTIONS AND THE LOGIC OF DISCOVERY
Douglas HOFSTADTER (Indiana) Title to be announced
J.R. LUCAS (Oxford) MINDS, MACHINES AND GODEL : A RETROSPECT
Donald MICHIE (Turing Institute) MACHINE INTELLIGENCE - TURING AND AFTER
Christopher PEACOCKE (Oxford) PHILOSOPHICAL AND PSYCHOLOGICAL THEORIES OF CONCEPTS
Herbert SIMON (Carnegie-Mellon) MACHINE AS MIND
____________________________________________________________________________
REGISTRATION DOCUMENT : TURING 1990
NAME AND TITLE : __________________________________________________________
INSTITUTION : _____________________________________________________________
STATUS : ________________________________________________________________
ADDRESS : ________________________________________________________________
          ________________________________________________________________
POSTCODE : _________________ COUNTRY : ____________________________
Any special requirements (eg. diet, disability) : _________________________
I wish to register for the Turing 1990 Colloquium and enclose a Sterling cheque or money order, payable to "Turing 1990", for the total amount listed below. Please ENTER AMOUNTS as appropriate.
1. Registration Fee (Compulsory):
   Mind Association Members #30.00 ..............
   Full-time students #30.00 .............. (enclose proof of status - e.g. letter from tutor)
   Academics (including retired academics) #50.00 ..............
   Non-Academics #80.00 ..............
   Late Registration Fee (payable after 1st March) #5.00 ..............
2. Full Board including all meals from Dinner #84.00 ..............
on Tuesday 3rd April to Lunch on Friday 6th April, except for Thursday evening
   OR All meals from Dinner on Tuesday 3rd April to Lunch on Friday 6th April, except for Thursday evening #33.00 ..............
3. Conference banquet in the Royal Pavilion, Brighton on Thursday 5th April #25.00 ..............
   OR Dinner in the University on Thursday 5th April #6.00 ..............
4. Lunch on Tuesday 3rd April #6.00 ..............
5. Dinner on Friday 6th April #6.00 ..............
TOTAL # ______________
Signed ________________________________ Date ______________________
Please return this form, with your cheque or money order (payable to "Turing 1990"), to: Dr Andy Clark, Turing 90, Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton, BN1 9QH, England.
____________________________________________________________________________

From gaudiano at bucasb.bu.edu Mon Feb 19 12:33:45 1990
From: gaudiano at bucasb.bu.edu (gaudiano@bucasb.bu.edu)
Date: Mon, 19 Feb 90 12:33:45 EST
Subject: New Student Society
Message-ID: <9002191733.AA24435@retina.bu.edu>

This is the first official announcement of the:
Neural Networks Student Society
-------------------------------
The purpose of the Society is to (1) provide a means of exchanging information among students and young professionals within the area of Neural Networks; (2) create an opportunity for interaction between students and professionals from academia and industry; (3) encourage support from academia and industry for the advancement of students in the area of Neural Networks; (4) ensure that the interest of all students in the area of Neural Networks is taken into consideration by other societies and institutions that promote Neural Networks; and (5) lay down a solid, UNBIASED foundation upon which the study of Neural Networks will be developed into a self-contained discipline. The society is specifically intended to avoid discrimination based on age, sex, race, religion, national origin, annual income or graduate advisor.

An organizational meeting was held at the Washington IJCNN meeting. We had about 60 students and other interested bodies there, and later that evening many of us went out to get to know each other over some fine ales. Many of the participants came from outside of the US, and the general consensus is that this is a society whose time has come. We have many action items on our agenda, including: 1) a quarterly newsletter, 2) an e-mail newsgroup, 3) a resume exchange service for new graduates in the field, 4) summer co-ops with NN companies, 5) a database of existing graduate programs in NNs, 6) an ftp site for NN simulation code and other info, 7) corporate sponsorships to support student travel expenses . . . n) many, many more activities and ideas.

A booth for our Society has been donated by the organizers of the Paris INNC conference (July 90), and we may also get one at the San Diego IJCNN conference. We will use the booth to advertise our society, and to promote student ideas and projects. More details will be given in the first newsletter, which is scheduled to come out March 21. It will include our official bylaws, and other introductory information.
WHO TO CONTACT:
--------------
If you'd like to be on the mailing list to receive newsletters by surface or electronic mail, and you did not already give us your name at the IJCNN Washington meeting, send a note to: nnss-request at thalamus.bu.edu (Newsletter requests) with your name, affiliation, and address. The first issue will contain all the necessary information to become a member for the rest of the year. Once the Society becomes official, there will be a nominal annual fee (about $5) to help with costs for publications and activities, but for now you can get our info to see what it's all about at no cost. You will only remain a member if you are interested in the society and send in the official membership form. In the meantime, if you are thinking about a job in NNs in the near future, and would like information about our resume exchange program, send a message to: khaines at galileo.ece.cmu.edu and if you have general questions about the society (other than a request for the first newsletter, or about the resume service), send mail to: nnss at thalamus.bu.edu Also, if you are willing to volunteer some time to help with the society, send a note to: gaudiano at thalamus.bu.edu We will definitely need some help at the upcoming conferences, and may also need some assistance with other odds & ends before that time. Finally, we will soon circulate a proposal for a new USENET newsgroup, so if you read usenet news, keep your eyes open for an opportunity to vote in the next few weeks.
Karen Haines and Paolo Gaudiano, co-founders, NNSS

From grumbach at ulysse.enst.fr Tue Feb 20 06:21:30 1990
From: grumbach at ulysse.enst.fr (Alain Grumbach)
Date: Tue, 20 Feb 90 12:21:30 +0100
Subject: organization levels
Message-ID: <9002201121.AA00583@ulysse.enst.fr>

Working on hybrid symbolic-connectionist systems, I am wondering about the notion of "organization level", which underlies hybrid models. A lot of people use this phrase, from neuroscience to cognitive psychology via computer science and Artificial Intelligence (Anderson, Newell, Simon, Hofstadter, Marr, Changeux, etc.). But has anybody heard about a formal description of it? (Formal but understandable!) Thank you.
Alain Grumbach grumbach at inf.enst.fr

From rudnick at cse.ogi.edu Tue Feb 20 23:44:35 1990
From: rudnick at cse.ogi.edu (Mike Rudnick)
Date: Tue, 20 Feb 90 20:44:35 PST
Subject: developmental aspects of NNs
Message-ID: <9002210444.AA05798@cse.ogi.edu>

Is anyone doing work on the developmental (as in developmental biology) aspects of NNs? I'm (at least somewhat) aware of Edelman & Reeke's work on neuronal group selection, and Wilson's papers on L-systems, but no other work. My particular interest is in using genetic search techniques for the design of artificial neural nets. I want to find a useful analog of developmental biology to aid in both the design of compact genetic codes and in converting those genetic codes to nets (more or less) ready to be trained. I'm eager to contact new people working in these areas, and would appreciate any pointers or references that may be appropriate. Thanks,
Mike Rudnick
Computer Science & Eng. Dept., Oregon Graduate Institute (was OGC)
19600 N.W. von Neumann Dr., Beaverton, OR.
97006-1999
Domain: rudnick at cse.ogi.edu UUCP: {tektronix,verdix}!ogicse!rudnick (503) 690-1121 X7390 (or X7309)

From BOVET%FRMOP11.BITNET at VMA.CC.CMU.EDU Wed Feb 21 05:07:04 1990
From: BOVET%FRMOP11.BITNET at VMA.CC.CMU.EDU (BOVET JAMON BENHAMOU OTTOMANI)
Date: Wed, 21 Feb 90 10:07:04 GMT
Subject: organization levels
Message-ID:

Alain Grumbach is looking for a formal description of organization levels. I don't know of any formal answer to this question, but in biology this concept seems clear: see for instance the wonderful book by Francois Jacob, La logique du vivant. But biology is not a formal science, so it will not give a formal answer. In AI or in ANNs, I think the question of organization levels is akin to the problem of a definition of complexity, which was already discussed here some months ago.
Pierre Bovet, Laboratoire de Neurosciences Fonctionnelles, Marseille.

From TGELDER%IUBACS.BITNET at VMA.CC.CMU.EDU Wed Feb 21 10:34:00 1990
From: TGELDER%IUBACS.BITNET at VMA.CC.CMU.EDU (TGELDER%IUBACS.BITNET@VMA.CC.CMU.EDU)
Date: Wed, 21 Feb 90 10:34 EST
Subject: organization levels
Message-ID:

Talk of "levels" (of organization, of complexity, of analysis, etc.) is pervasive in discussions of connectionism, especially in relation to the brain on one hand and some kind of "symbolic" level on the other. Unfortunately there is no well-developed account of what levels really are, and what kinds there are, and so much of the discussion lacks sharp edges, to say the least. A fellow here at Indiana University, Allen Yu-Huong Houng, is writing his philosophy PhD dissertation on the concept of "level" in scientific theorizing, with particular application to psychology and the relation of connectionist modeling to other modes of psychological explanation. He adopts and develops theories of levels from other fields, primarily biology, mainstream philosophy of science, and computer science. He has a good first draft of a comprehensive chapter on the concept of a level, and would probably be glad to distribute it to interested parties for discussion and critical feedback. He can be contacted at: Department of Philosophy, Indiana University, Bloomington, IN 47405, or I can pass on email messages.
Tim van Gelder tgelder at ucs.indiana.edu

From carol at ai.toronto.edu Wed Feb 21 15:27:48 1990
From: carol at ai.toronto.edu (Carol Plathan)
Date: Wed, 21 Feb 90 15:27:48 EST
Subject: CRG-TR-90-1 request
Message-ID: <90Feb21.152804est.10599@ephemeral.ai.toronto.edu>

PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS OR MAILING LISTS
**********************************************************
The following technical report is now available. If you'd like a copy, please send me your real mail address (omitting all other information from your message).
---------------------------------------------------------------------------
BUILDING ADAPTIVE INTERFACES WITH NEURAL NETWORKS: THE GLOVE-TALK PILOT STUDY
S. Sidney Fels
Department of Computer Science, University of Toronto, Toronto, Canada M5S 1A4
CRG-TR-90-1

Connectionist learning procedures can be used to interpret incoming data and to generate complex responses. To illustrate the potential of using these procedures for adaptive interfaces, a system using neural networks to convert hand gestures to speech in real-time was developed. A VPL DataGlove connected to five networks and a DECtalk speech synthesizer implements the hand-to-speech system. Using existing connectionist learning procedures, the complex mapping of hand movements to speech particular to a specific user is learned using data obtained in a simple training phase.
Based on a 203 gesture-to-word vocabulary, the noticeable word error rate is less than 1%. In addition, adaptive control of the speaking rate and word stress is available. The system is streamlined by using small, separate networks for each naturally defined subtask. Smaller networks reduce training and running times. This system demonstrates that connectionist learning procedures can be used to develop the complex mappings required in an adaptive interface. --------------------------------------------------------------------------- From turing%ctcs.leeds.ac.uk at NSFnet-Relay.AC.UK Wed Feb 21 11:42:37 1990 From: turing%ctcs.leeds.ac.uk at NSFnet-Relay.AC.UK (Turing Conference) Date: Wed, 21 Feb 90 16:42:37 GMT Subject: Turing 1990 Programme Message-ID: <4192.9002211642@ctcs.leeds.ac.uk> ____________________________________________________________________________ TURING 1990 COLLOQUIUM At the University of Sussex, Brighton, England 3rd - 6th April 1990 PROGRAMME OF SPEAKERS AND REGISTRATION DOCUMENTS ____________________________________________________________________________ INVITED SPEAKERS Paul CHURCHLAND (Philosophy, University of California at San Diego) Title to be announced Joseph FORD (Physics, Georgia Institute of Technology) CHAOS : ITS PAST, ITS PRESENT, BUT MOSTLY ITS FUTURE Robin GANDY (Mathematical Institute, Oxford) HUMAN VERSUS MECHANICAL INTELLIGENCE Clark GLYMOUR (Philosophy, Carnegie-Mellon) COMPUTABILITY, CONCEPTUAL REVOLUTIONS AND THE LOGIC OF DISCOVERY Andrew HODGES (Oxford, author of "Alan Turing: the enigma of intelligence") BACK TO THE FUTURE : ALAN TURING IN 1950 Douglas HOFSTADTER (Computer Science, Indiana) Title to be announced J.R. LUCAS (Merton College, Oxford) MINDS, MACHINES AND GODEL : A RETROSPECT Donald MICHIE (Turing Institute, Glasgow) MACHINE INTELLIGENCE - TURING AND AFTER Christopher PEACOCKE (Magdalen College, Oxford) PHILOSOPHICAL AND PSYCHOLOGICAL THEORIES OF CONCEPTS Herbert SIMON (Computer Science and Psychology, Carnegie-Mellon) MACHINE AS MIND ____________________________________________________________________________ OTHER SPEAKERS Most of the papers to be given at the Colloquium are interdisciplinary, and should hold considerable interest for those working in any area of Cognitive Science or related disciplines. However the papers below will be presented in paired parallel sessions, which have been arranged as far as possible to minimise clashes of subject area, so that those who have predominantly formal interests, for example, will be able to attend all of the papers which are most relevant to their work, and a similar point applies for those with mainly philosophical, psychological, or purely computational interests. Jonathan Cohen (The Queen's College, Oxford) "Does Belief Exist?" 
Mario Compiani (ENIDATA, Bologna, Italy) "Remarks on the Paradigms of Connectionism" Martin Davies (Philosophy, Birkbeck College, London) "Facing up to Eliminativism" Chris Fields (Computing Research Laboratory, New Mexico) "Measurement and Computational Description" Robert French (Center for Research on Concepts and Cognition, Indiana) "Subcognition and the Limits of the Turing Test" Beatrice de Gelder (Psychology and Philosophy, Tilburg, Netherlands) "Cognitive Science is Philosophy of Science Writ Small" Peter Mott (Computer Studies and Philosophy, Leeds) "A Grammar Based Approach to Commonsense Reasoning" Aaron Sloman (Cognitive and Computing Sciences, Sussex) "Beyond Turing Equivalence" Antony Galton (Computer Science, Exeter) "The Church-Turing Thesis: its Nature and Status" Ajit Narayanan (Computer Science, Exeter) "The Intentional Stance and the Imitation Game" Jon Oberlander and Peter Dayan (Centre for Cognitive Science, Edinburgh) "Altered States and Virtual Beliefs" Philip Pettit and Frank Jackson (Social Sciences Research, ANU, Canberra) "Causation in the Philosophy of Mind" Ian Pratt (Computer Science, Manchester) "Encoding Psychological Knowledge" Joop Schopman and Aziz Shawky (Philosophy, Utrecht, Netherlands) "Remarks on the Impact of Connectionism on our Thinking about Concepts" Murray Shanahan (Computing, Imperial College London) "Folk Psychology and Naive Physics" Iain Stewart (Computing Laboratory, Newcastle) "The Demise of the Turing Machine in Complexity Theory" Chris Thornton (Artificial Intelligence, Edinburgh) "Why Concept Learning is a Good Idea" Blay Whitby (Cognitive and Computing Sciences, Sussex) "The Turing Test: AI's Biggest Blind Alley?" ____________________________________________________________________________ TURING 1990 COLLOQUIUM At the University of Sussex, Brighton, England 3rd - 6th April 1990 This Conference commemorates the 40th anniversary of the publication in Mind of Alan Turing's influential paper "Computing Machinery and Intelligence". It is hosted by the School of Cognitive and Computing Sciences at the University of Sussex and held under the auspices of the Mind Association. Additional support has been received from the Analysis Committee, the Aristotelian Society, The British Logic Colloquium, The International Union of History and Philosophy of Science, POPLOG, Philosophical Quarterly, and the SERC Logic for IT Initiative. The aim of the Conference is to draw together people working in Philosophy, Logic, Computer Science, Artificial Intelligence, Cognitive Science and related fields, in order to celebrate the intellectual and technological developments which owe so much to Turing's seminal thought. Papers will be presented on the following themes: Alan Turing and the emergence of Artificial Intelligence, Logic and the Theory of Computation, The Church- Turing Thesis, The Turing Test, Connectionism, Mind and Content, Philosophy and Methodology of Artificial Intelligence and Cognitive Science. Invited talks will be given by Paul Churchland, Joseph Ford, Robin Gandy, Clark Glymour, Andrew Hodges, Douglas Hofstadter, J.R. Lucas, Donald Michie, Christopher Peacocke and Herbert Simon, and there are many other prominent contributors, whose names and papers are listed above. 
Anyone wishing to attend this Conference should complete the form below and send it to Andy Clark, TURING 1990 Registrations, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, England, U.K., enclosing a STERLING cheque or money order for the total amount payable, made out to "Turing 1990". We regret that we cannot accept payment in other currencies. The form should be returned not later than Thursday 1st March 1990, after which an extra fee of #5.00 for late registration is payable and accommodation cannot be guaranteed. The conference will start after lunch on Tuesday 3rd April 1990, and it will end on Friday 6th April after tea. Final details will be sent to registered participants towards the end of February. Conference Organizing Committee Andy Clark (Cognitive and Computing Sciences, Sussex University) David Holdcroft (Philosophy, Leeds University) Peter Millican (Computer Studies and Philosophy, Leeds University) Steve Torrance (Information Systems, Middlesex Polytechnic) ___________________________________________________________________________ REGISTRATION DOCUMENT : TURING 1990 NAME AND TITLE : __________________________________________________________ INSTITUTION : _____________________________________________________________ STATUS : ________________________________________________________________ ADDRESS : ________________________________________________________________ ________________________________________________________________ POSTCODE : _________________ COUNTRY : ____________________________ Any special requirements (eg. diet, disability) : _________________________ I wish to register for the Turing 1990 Colloquium and enclose a Sterling cheque or money order, payable to "Turing 1990", for the total amount listed below : Please ENTER AMOUNTS as appropriate. 1. Registration Fee: Mind Association Members #30.00 .............. (Compulsory) Full-time students #30.00 .............. (enclose proof of status - e.g. letter from tutor) Academics (including retired academics) #50.00 .............. Non-Academics #80.00 .............. Late Registration Fee #5.00 .............. (payable after 1st March) 2. Full Board including all meals from Dinner #84.00 .............. on Tuesday 3rd April to Lunch on Friday 6th April, except for Thursday evening OR All meals from Dinner on Tuesday 3rd April #33.00 .............. to Lunch on Friday 6th April, except for Thursday evening 3. Conference banquet in the Royal Pavilion, #25.00 .............. Brighton on Thursday 5th April OR Dinner in the University on Thursday 5th April #6.00 .............. 4. Lunch on Tuesday 3rd April #6.00 .............. 5. Dinner on Friday 6th April #6.00 .............. ______________ TOTAL # ______________ Signed ________________________________ Date ______________________ Please return this form, with your cheque or money order (payable to "Turing 1990"), to: Dr Andy Clark, Turing 1990 Registrations, Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton, BN1 9QH, England. Email responses to: turing at uk.ac.sussex.syma ( from BITNET: turing at syma.sussex.ac.uk -NM ) ____________________________________________________________________________ IMPORTANT NOTICE FOR STUDENTS AND SUPERVISORS: The Analysis Committee has kindly made a donation to subsidise students who would benefit from attending the Colloquium but who might otherwise be unable to do so. 
The amount of any such subsidy will depend on the overall demand and the quality of the candidates, but it would certainly cover the registration fee and probably a proportion of the accommodation expenses. Interested parties should write immediately to Andy Clark at the address above, enclosing a brief supporting comment from a tutor or supervisor. ____________________________________________________________________________ PLEASE SEND ON THIS NOTICE to any researchers, lecturers or students in the fields of Artificial Intelligence, Cognitive Science, Computer Science, Logic, Mathematics, Philosophy or Psychology, in Britain or abroad, and to ANY APPROPRIATE BULLETIN BOARDS which have not previously displayed it. From p_j_angeline at cis.ohio-state.edu Wed Feb 21 20:53:53 1990 From: p_j_angeline at cis.ohio-state.edu (p_j_angeline@cis.ohio-state.edu) Date: Wed, 21 Feb 90 20:53:53 -0500 Subject: CRG-TR-90-1 request In-Reply-To: Carol Plathan's message of Wed, 21 Feb 90 15:27:48 EST <90Feb21.152804est.10599@ephemeral.ai.toronto.edu> Message-ID: <9002220153.AA11170@kant.cis.ohio-state.EDU> Peter J Angeline Computer and Information Science Department 228 Bolz Hall The Ohio State University Columbus, Oh 43210 From russ at dash.mitre.org Thu Feb 22 10:07:28 1990 From: russ at dash.mitre.org (Russell Leighton) Date: Thu, 22 Feb 90 10:07:28 EST Subject: CRG-TR-90-1 request In-Reply-To: Carol Plathan's message of Wed, 21 Feb 90 15:27:48 EST <90Feb21.152804est.10599@ephemeral.ai.toronto.edu> Message-ID: <9002221507.AA14409@dash.mitre.org> Russell Leighton MITRE Signal Processing Lab 7525 Colshire Dr. McLean, Va. 22102 USA From munnari!cluster.cs.su.oz.au!ray at uunet.uu.net Tue Feb 20 23:49:37 1990 From: munnari!cluster.cs.su.oz.au!ray at uunet.uu.net (munnari!cluster.cs.su.oz.au!ray@uunet.uu.net) Date: Wed, 21 Feb 90 15:49:37 +1100 Subject: pseudo-standard TSP coordinates Message-ID: <9002210451.3521@munnari.oz.au> I have received a number of requests for the city coordinates of the Traveling Salesman Problems I studied in my IJCNN-90-WASH paper (Lister, "Segment Reversal and the Traveling Salesman Problem"). Some of these requests arrived by physical mail. Apparently, some people have had trouble reaching my site with email. Below are the coordinates. They were originally authored by Hopfield and Tank, and Durbin and Willshaw, so in some sense they are pseudo-standard problems. I also have the coordinates for Angeniol et al's 1000 city problem (Neural Networks, Vol 1, No. 4, 1988), but I have decided not to clog the list with those. If you'd like it, mail me direct. 
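For anyone planning to experiment with the coordinate lists below, a minimal sketch in C (an illustration, not the code used in the paper) of tour length and a single segment reversal, the basic 2-opt move studied there:

#include <math.h>

#define N 30            /* e.g. the 30-city problem below */
double cx[N], cy[N];    /* city coordinates, to be filled from the lists */
int tour[N];            /* current tour: a permutation of 0..N-1 */

double tour_length(void)
{
    double len = 0.0;
    int i;
    for (i = 0; i < N; i++) {
        int a = tour[i], b = tour[(i + 1) % N];
        double dx = cx[a] - cx[b], dy = cy[a] - cy[b];
        len += sqrt(dx * dx + dy * dy);
    }
    return len;
}

/* Reverse the tour segment between positions i and j (with i < j).
   Accepting a reversal only when it shortens the tour is the
   classical 2-opt improvement step. */
void reverse_segment(int i, int j)
{
    while (i < j) {
        int t = tour[i]; tour[i] = tour[j]; tour[j] = t;
        i++; j--;
    }
}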
Raymond Lister Basser Department of Computer Science University of Sydney NSW 2006 AUSTRALIA Internet: ray at cs.su.oz.AU CSNET: ray%cs.su.oz at RELAY.CS.NET UUCP: {uunet,hplabs,pyramid,mcvax,ukc,nttlab}!munnari!cs.su.oz.AU!ray JANET: munnari!cs.su.oz.AU!ray at ukc :::::::::::::: 30cities :::::::::::::: 0.4384 0.6920 0.4232 0.2328 0.7186 0.6939 0.3956 0.1845 0.9529 0.4058 0.6321 0.0704 0.6094 0.2125 0.3693 0.5692 0.3325 0.6035 0.0774 0.8135 0.5412 0.1743 0.3966 0.1180 0.2036 0.4527 0.7645 0.8556 0.5043 0.0289 0.9983 0.0065 0.6888 0.8236 0.6012 0.6401 0.5931 0.6716 0.7744 0.7172 0.3720 0.8944 0.2682 0.4146 0.6187 0.2461 0.6836 0.5692 0.6604 0.5272 0.6240 0.5331 0.4605 0.8192 0.3530 0.4450 0.3808 0.2468 0.8341 0.3587 :::::::::::::: 50cities.1 :::::::::::::: 0.4350 0.8356 0.4504 0.8461 0.4880 0.8283 0.5206 0.9079 0.8438 0.9863 0.9154 0.9904 0.8509 0.8348 0.8650 0.7895 0.9097 0.7217 0.9081 0.6131 0.9606 0.5607 0.9392 0.5594 0.7785 0.5432 0.6971 0.6394 0.6762 0.6239 0.7351 0.5343 0.6936 0.3988 0.7471 0.3539 0.6873 0.2615 0.9646 0.2585 0.8945 0.0733 0.8161 0.1113 0.6992 0.0820 0.6663 0.0174 0.5019 0.0049 0.3867 0.0599 0.5105 0.1261 0.5249 0.4292 0.4732 0.3098 0.4469 0.2965 0.4090 0.2606 0.2520 0.2568 0.2981 0.1185 0.1856 0.1690 0.1212 0.0842 0.0305 0.2598 0.0122 0.2907 0.1973 0.3335 0.1793 0.4409 0.3065 0.4587 0.2727 0.5221 0.3110 0.6040 0.1983 0.5341 0.1168 0.6241 0.1193 0.6312 0.2285 0.7029 0.0758 0.8780 0.1366 0.9645 0.2382 0.8138 0.3470 0.7827 :::::::::::::: 50cities.2 :::::::::::::: 0.4392 0.9303 0.4636 0.9095 0.5587 0.8905 0.5896 0.8895 0.6714 0.9683 0.7009 0.8750 0.7359 0.8052 0.8656 0.9821 0.9885 0.9857 0.8480 0.8279 0.9206 0.7491 0.9227 0.5485 0.6813 0.6180 0.6116 0.5308 0.6287 0.4489 0.7495 0.4912 0.7487 0.4666 0.7553 0.3944 0.7273 0.2440 0.9231 0.2949 0.9739 0.2994 0.9195 0.2663 0.9452 0.0385 0.7966 0.0011 0.5617 0.0057 0.5495 0.2450 0.5049 0.2892 0.3496 0.2911 0.3627 0.2523 0.2893 0.1574 0.2269 0.0761 0.1633 0.1262 0.2573 0.2150 0.2502 0.3518 0.3181 0.3910 0.3904 0.5787 0.4167 0.6261 0.4296 0.7125 0.3492 0.6551 0.2413 0.6781 0.1562 0.5234 0.1057 0.6129 0.0307 0.6446 0.0474 0.9277 0.0499 0.9452 0.1515 0.8886 0.1792 0.9865 0.2848 0.9195 0.2619 0.8507 0.3088 0.8945 :::::::::::::: 50cities.3 :::::::::::::: 0.6112 0.6668 0.5856 0.7524 0.5759 0.7513 0.5434 0.8462 0.5759 0.9397 0.6453 0.9079 0.6843 0.8703 0.7668 0.8568 0.8143 0.8205 0.9806 0.9577 0.9746 0.7323 0.9883 0.6790 0.8011 0.6608 0.8252 0.6370 0.9003 0.4054 0.9032 0.3270 0.9007 0.2350 0.9628 0.1462 0.8175 0.1045 0.7817 0.1159 0.7478 0.1487 0.7049 0.1741 0.6702 0.1326 0.5940 0.0732 0.5198 0.1399 0.5346 0.2750 0.4146 0.2153 0.3946 0.1248 0.2412 0.0503 0.0584 0.0435 0.2849 0.1785 0.2857 0.4148 0.4591 0.5554 0.3606 0.5738 0.3056 0.7498 0.2734 0.6661 0.2525 0.5998 0.1497 0.6408 0.0759 0.5876 0.0263 0.5578 0.1066 0.7005 0.1790 0.7494 0.1471 0.7707 0.0550 0.8575 0.1761 0.9218 0.1731 0.9416 0.2609 0.9506 0.3572 0.8551 0.3911 0.9153 0.4660 0.8662 :::::::::::::: 50cities.4 :::::::::::::: 0.3055 0.3221 0.2637 0.2964 0.2868 0.2642 0.0540 0.1626 0.0252 0.1228 0.0129 0.0509 0.2389 0.0705 0.3103 0.0575 0.3322 0.0449 0.4481 0.0415 0.3293 0.1976 0.3460 0.2591 0.3893 0.2529 0.4708 0.2890 0.6871 0.3897 0.7786 0.4420 0.6620 0.2585 0.7870 0.1888 0.8040 0.1215 0.7347 0.0527 0.7994 0.0364 0.8278 0.0550 0.9797 0.1841 0.9653 0.4571 0.9677 0.6138 0.9496 0.7046 0.8630 0.6697 0.8912 0.6074 0.8107 0.6112 0.7588 0.6069 0.7871 0.7346 0.8768 0.9481 0.5963 0.9092 0.6702 0.7964 0.6152 0.7791 0.5838 0.6052 0.4474 0.6936 0.3191 0.6814 0.4197 0.9496 0.0804 0.9972 
0.1735 0.8953 0.1319 0.7672 0.0612 0.7509 0.0953 0.6800 0.0336 0.6561 0.0083 0.6188 0.0163 0.3977 0.1149 0.5242 0.2502 0.5212 0.2533 0.4084 :::::::::::::: 50cities.5 :::::::::::::: 0.5914 0.6804 0.7154 0.5778 0.9689 0.5379 0.8848 0.6140 0.8827 0.6550 0.9179 0.6539 0.9924 0.9412 0.8401 0.8829 0.7848 0.9271 0.7588 0.9832 0.5520 0.8777 0.5093 0.9542 0.4510 0.9937 0.3311 0.9481 0.3353 0.8654 0.2694 0.8634 0.3326 0.6630 0.3528 0.6490 0.3659 0.5905 0.2826 0.6649 0.2322 0.6742 0.2115 0.7012 0.2020 0.6797 0.0642 0.6555 0.1284 0.5410 0.0197 0.4416 0.0310 0.4211 0.1721 0.0503 0.2314 0.3136 0.1684 0.4706 0.2443 0.4476 0.3990 0.4982 0.4748 0.5165 0.4130 0.4298 0.4720 0.3862 0.4515 0.3005 0.4727 0.2226 0.5642 0.1322 0.5099 0.0289 0.6761 0.0197 0.7533 0.1484 0.7771 0.1843 0.8511 0.1881 0.9306 0.2243 0.9149 0.2319 0.8529 0.2334 0.7672 0.2705 0.6454 0.3365 0.6870 0.4466 0.6339 0.4510 :::::::::::::: 100cities :::::::::::::: 0.1637 0.6152 .0981 .5942 .1722 .5547 .1271 .4577 .0971 .4008 .0839 .3896 .1145 .3781 .1400 .2946 .1588 .2799 .1304 .2560 .0432 .1606 .2639 .1067 .3191 .0594 .3472 .1434 .3428 .2300 .3021 .2828 .2979 .3045 .2772 .3681 .2500 .4306 .2419 .4549 .3066 .4445 .3582 .3820 .3892 .3556 .3954 .4322 .4159 .4635 .4799 .5269 .5657 .4879 .5655 .4756 .4770 .4061 .5198 .4012 .5530 .3584 .5654 .4184 .6066 .4159 .6511 .3986 .6467 .3504 .6255 .3147 .5698 .2520 .4959 .2367 .4767 .1526 .4948 .1247 .5139 .1757 .5406 .1682 .5722 .1188 .7022 .2264 .7502 .2080 .7187 .1879 .8230 .1519 .7900 .0788 .8872 .0367 .9568 .0281 .9792 .1264 .9476 .1717 .9378 .2333 .8028 .2189 .7734 .2448 .6840 .2929 .7442 .3807 .7375 .4091 .7786 .4315 .8730 .4270 .9834 .5354 .8955 .5948 .8665 .6745 .7795 .7110 .7657 .6465 .7584 .5819 .6528 .6042 .5790 .6379 .6550 .6905 .6763 .7326 .7801 .7579 .7671 .7802 .7553 .8609 .8351 .8449 .9315 .8669 .8948 .9781 .8385 .9672 .6140 .9882 .6741 .8094 .6068 .7854 .5531 .7403 .5631 .7156 .5224 .6996 .4461 .7046 .4773 .7997 .4419 .9150 .3469 .9172 .2458 .9450 .2126 .9585 .2378 .9860 .1975 .9898 .0953 .9628 .0358 .9771 .0434 .9560 .1353 .8643 .2002 .8269 .2922 .8722 .3187 .7569 .3087 .5345 .2430 .5895 :::::::::::::: 318cities - from original Lin and Kernighan paper :::::::::::::: 71 63 1402 63 2733 63 71 94 1402 94 2733 94 370 142 1701 142 3032 142 1276 173 2607 173 3938 173 1213 205 2544 205 3875 205 69 213 1400 213 2731 213 69 244 1400 244 2731 244 630 276 1961 276 3292 276 732 283 2063 283 3394 283 69 362 1400 362 2731 362 69 394 1400 394 2731 394 370 449 1701 449 3032 449 1276 480 2607 480 3938 480 1213 512 2544 512 3875 512 157 528 1488 528 2819 528 630 583 1961 583 3292 583 732 591 2063 591 3394 591 654 638 1985 638 3316 638 496 638 1827 638 3158 638 314 638 1645 638 2976 638 142 638 1473 638 2804 638 142 669 1473 669 2804 669 315 677 1646 677 2977 677 496 677 1827 677 3158 677 654 677 1985 677 3316 677 654 709 1985 709 3316 709 496 709 1827 709 3158 709 315 709 1646 709 2977 709 142 701 1473 701 2804 701 220 764 1551 764 2882 764 189 811 1520 811 2851 811 173 843 1504 843 2835 843 370 858 1701 858 3032 858 1276 890 2607 890 3938 890 1213 921 2544 921 3875 921 630 992 1961 992 3292 992 732 1000 2063 1000 3394 1000 1276 1197 2607 1197 3938 1197 1213 1228 2544 1228 3875 1228 205 1276 1536 1276 2867 1276 630 1299 1961 1299 3292 1299 732 1307 2063 1307 3394 1307 654 1362 1985 1362 3316 1362 496 1362 1827 1362 3158 1362 291 1362 1622 1362 2953 1362 654 1425 1985 1425 3316 1425 496 1425 1827 1425 3158 1425 291 1425 1622 1425 2953 1425 173 1417 1504 1417 2835 1417 291 1488 1622 1488 2953 1488 496 1488 1827 
1488 3158 1488 654 1488 1985 1488 3316 1488 654 1551 1985 1551 3316 1551 496 1551 1827 1551 3158 1551 291 1551 1622 1551 2953 1551 291 1614 1622 1614 2953 1614 496 1614 1827 1614 3158 1614 654 1614 1985 1614 3316 1614 189 1732 1520 1732 2851 1732 1276 1811 2607 1811 3938 1811 1213 1843 2544 1843 3875 1843 630 1913 1961 1913 3292 1913 732 1921 2063 1921 3394 1921 370 2087 1701 2087 3032 2087 1276 2118 2607 2118 3938 2118 1213 2150 2544 2150 3875 2150 205 2189 1536 2189 2867 2189 189 2220 1520 2220 2851 2220 630 2220 1961 2220 3292 2220 732 2228 2063 2228 3394 2228 142 2244 1473 2244 2804 2244 315 2276 1646 2276 2977 2276 496 2276 1827 2276 3158 2276 654 2276 1985 2276 3316 2276 654 2315 1985 2315 3316 2315 496 2315 1827 2315 3158 2315 315 2315 1646 2315 2977 2315 142 2331 1473 2331 2804 2331 315 2346 1646 2346 2977 2346 496 2346 1827 2346 3158 2346 654 2346 1985 2346 3316 2346 142 2362 1473 2362 2804 2362 157 2402 1488 2402 2819 2402 220 2402 1551 2402 2882 2402 142 2480 1473 2480 2804 2480 370 2496 1701 2496 3032 2496 1276 2528 2607 2528 3938 2528 1213 2559 2544 2559 3875 2559 630 2630 1961 2630 3292 2630 732 2638 2063 2638 3394 2638 69 2756 1400 2756 2731 2756 69 2787 1400 2787 2731 2787 370 2803 1701 2803 3032 2803 1276 2835 2607 2835 3938 2835 1213 2966 2544 2966 3875 2966 69 2906 1400 2906 2731 2906 69 2937 1400 2937 2731 2937 630 2937 1961 2937 3292 2937 732 2945 2063 2945 3394 2945 1276 3016 2607 3016 3938 3016 69 3055 1400 3055 2731 3055 69 3087 1400 3087 2731 3087 220 606 1551 606 2882 606 370 1165 1701 1165 3032 1165 370 1780 1701 1780 3032 1780 -79 1417 -79 1496 4055 1693

From tenorio at ee.ecn.purdue.edu Thu Feb 22 12:51:04 1990
From: tenorio at ee.ecn.purdue.edu (Manoel Fernando Tenorio)
Date: Thu, 22 Feb 90 12:51:04 EST
Subject: seminar at Purdue
Message-ID: <9002221751.AA07359@ee.ecn.purdue.edu>

------- Forwarded Message
From: lhj (Leah Jamieson)
Subject: seminar
Please pass on to students and/or colleagues who might be interested.
-------------------------------------------------------------
"Two Engineering Approaches to Speech Processing: Neural Networks and Analog VLSI"
Moise Goldstein, Ph.D.
Department of Electrical and Computer Engineering, Johns Hopkins University
Wednesday, Feb. 28, 1990, 12:30 - 1:20
Heavilon Hall, Room 001 (Ground floor, northwest corner)
Purdue University
-------------------------------------------------------------
------- End of Forwarded Message

From ala at nada.kth.se Fri Feb 23 06:47:27 1990
From: ala at nada.kth.se (Anders Lansner)
Date: Fri, 23 Feb 90 12:47:27 +0100
Subject: CRG-TR-90-1 request
Message-ID: <9002231147.AAdraken21697@nada.kth.se>

Anders Lansner, NADA KTH, S-100 44 Stockholm, SWEDEN

From mukesh%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK Fri Feb 23 12:12:49 1990
From: mukesh%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK (Mukesh Patel)
Date: Fri, 23 Feb 90 17:12:49 GMT
Subject: CRG-TR-90-1 request
Message-ID: <22254.9002231712@rsunu.cogs.susx.ac.uk>

Could somebody, somewhere please do something about Tech Report Requests that get mailed to *ALL* of us? This is crazy because it is costing a lot of money to somebody/everybody and it needlessly clutters up the system. Maybe a tutorial on "how to reply to/request tech reports" might help?
Mukesh
The University of Sussex, Centre for Cognitive and Computing Sciences, Falmer, Brighton BN1 9QH, E Sussex, UK.
Phone: +44 273 606755 x3074 ARPA: mukesh%cogs.sussex.ac.uk at nsfnet-relay.ac.uk JANET: mukesh at uk.ac.sussex.cogs

From thomasp at lan.informatik.tu-muenchen.dbp.de Fri Feb 23 14:52:00 1990
From: thomasp at lan.informatik.tu-muenchen.dbp.de (Patrick Thomas)
Date: 23 Feb 90 18:52 -0100
Subject: Mathematical Tractability of Neural Nets
Message-ID: <9002231852.AA08852@infovax.informatik.tu-muenchen.de>

All neural nets which prove to be mathematically tractable (convergence, etc.) seem to be too trivial or too biologically remote to account for cerebral phenomena. It may be nice to prove the approximation capabilities of backprop or some kind of convergent behaviour exhibited by the ART networks. But (except perhaps for the ART-3 architecture?) they don't really deal with the complex interactions at synaptic levels already found by neurophysiologists, and the ART networks especially rely on a similarity measure which may be fundamentally inappropriate. So what's the alternative? Define and refine some rules concerning synaptic interactions (of local AND global kinds), think about some rules governing signal integration by neuronal units, and then let it run. What do you get with this type of self-organizing net? One thing for sure: mathematical intractability. This is the Edelman way (among others). Which way should someone follow who is interested in brain phenomena rather than in neural nets from an engineering point of view? Is it true that all mathematically tractable neural net approaches are inadequate and that an empirical/experimental stand should be taken? I would be grateful for comments on this.
Patrick
P.S.: The Bonhoeffer et al. results showing the non-locality of synaptic amplification, at least on the pre-synaptic side, seem to fit nicely with Edelman's "synapses-as-populations" approach.

From Terry_Sejnowski at UCSD.EDU Wed Feb 21 12:46:46 1990
From: Terry_Sejnowski at UCSD.EDU (Terry Sejnowski)
Date: Wed, 21 Feb 90 09:46:46 PST
Subject: Levels
Message-ID: <9002211746.AA26305@sdbio2.UCSD.EDU>

There are at least three notions of levels that are commonly used to discuss the brain. Marr introduced levels of analysis -- computational, algorithmic, and implementational, and thought they were independent of each other. In biology there are well defined levels of organization: molecular, synaptic, neuronal, networks, columns, maps, and systems. These can be characterized anatomically according to their spatial scale. Finally, one can distinguish levels of processing, from the sensory periphery toward higher processing centers. Although these centers can be ranked in a hierarchy according to latency, feedback connections allow information to flow in both directions. For a more detailed discussion of these three types of levels, and references, see Churchland and Sejnowski, Perspectives on Cognitive Neuroscience, Science 242, 741-745 (1988).
Terry
-----

From rr%cstr.edinburgh.ac.uk at NSFnet-Relay.AC.UK Sat Feb 24 10:50:12 1990
From: rr%cstr.edinburgh.ac.uk at NSFnet-Relay.AC.UK (Richard Rohwer)
Date: Sat, 24 Feb 90 15:50:12 GMT
Subject: organization levels
Message-ID: <23907.9002241550@cstr.ed.ac.uk>

> From: Alain Grumbach
> [...]
> I am wondering about the notion of "organization level",
> [...]
> But has anybody heard about a formal description of it?
A serious attempt to mathematically formalize a notion of "level" within a broad formal theory of perception can be found in B. Bennett, D. Hoffman, and C.
Prakash, "Observer Mechanics, A Formal Theory of Perception", Academic Press (1989). See especially Ch. 9.
> (Formal but understandable!)
Copious use of concepts and notation from modern analysis makes the reading a bit tedious. But in my opinion, the underlying conceptual structure is novel, plausible, and provocative. I have an ambition to write a less careful but more direct "Readers' Digest condensed version" -- but I won't say when.
Richard Rohwer, Centre for Speech Technology Research, Edinburgh University, 80 South Bridge, Edinburgh EH1 1HN, Scotland
JANET: rr at uk.ac.ed.cstr ARPA: rr%ed.cstr at nsfnet-relay.ac.uk BITNET: rr at cstr.ed.ac.uk, rr%cstr.ed.UKACRL UUCP: ...!{seismo,decvax,ihnp4}!mcvax!ukc!cstr!rr

From slehar at bucasb.bu.edu Sat Feb 24 15:06:11 1990
From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu)
Date: Sat, 24 Feb 90 15:06:11 EST
Subject: Mathematical Tractability of Neural Nets
In-Reply-To: connectionists@c.cs.cmu.edu's message of 24 Feb 90 06:31:26 GM
Message-ID: <9002242006.AA15813@bucasd.bu.edu>

I agree with your comment about mathematically tractable neural models. I am currently taking courses on neuropsychology and I am amazed at the depth of knowledge available about brain functionality from the medical point of view. Lesion studies show in great detail how specific brain areas interact to perform various tasks, and neurologists can predict with great accuracy what the effects of different lesions would be in specific locations. This level of understanding must be exploited in our neural models. The problem is that this global level of understanding is very heuristic, and cannot be directly implemented in a neural model. In order to bring together the low level mathematical models and the high level neurological knowledge, we must advance both sciences in converging directions. In other words, neurologists must study the microscopic origins of the observed macroscopic phenomena, and neural modelers must design neural models to duplicate specific anatomical structures or behavioral elements. This is the primary thrust of Grossberg's work. Grossberg's neural models are based on neurological findings, and are designed to duplicate behavioral data. This, it seems to me, is the way to bring together the two sciences.

From aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK Sun Feb 25 05:56:14 1990
From: aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK (Aaron Sloman)
Date: Sun, 25 Feb 90 10:56:14 GMT
Subject: Levels
Message-ID: <21319.9002251056@csunb.cogs.susx.ac.uk>

> From: Terry Sejnowski
> There are at least three notions of levels ...
> ... Marr introduced levels of analysis -- computational, algorithmic,
> and implementational ...
> ... In biology there are well defined levels of organization
> molecular, synaptic, neuronal, networks, columns, maps, and systems...
> ... Finally, one can distinguish levels of processing, from the
> sensory periphery toward higher processing centers....

Two comments - (A) a critique of Marr and (B) a pointer to another notion of level:

A. I think that Marr's analysis is somewhat confused, and would be best replaced by the following:

a. What he called the "computational" level should be re-named the "task" level, without any presumption that there is only one such level: tasks form a hierarchy of sub-tasks. This is closely related to what software engineers call "requirements analysis", and has to take into account the nature of the environment and the behaviour that is to be achieved within it, including constraints such as speed.
In the case of vision, Marr's main concern, requirements analysis would include description of the relevant properties of light (or the optic array), visible surfaces, forms of visible motion, etc., as well as internal and external requirements of the organism, e.g. recognition, generalisation, description, planning, explaining, control of actions, posture control, various kinds of visual reflexes (some trainable), reading, etc. Requirements analysis also includes investigation of trade-offs and priorities. E.g. in some conditions where there's a trade-off between speed and accuracy, getting a quick decision that has a good chance of being right may be more important than guaranteeing perfect accuracy. Internal requirements analysis would include description of other non-visual modules that require input from visual modules (e.g. for posture control, or fine control of movement through feedback loops - processes which don't necessarily require the same kind of visual information as e.g. recognition of objects). So there is not ONE requirement or task defining vision, but a rich multiplicity, which can vary from organism to organism (or machine).

b. Then instead of Marr's two remaining levels, "algorithmic" and "implementational" (or physical mechanism), there would be a number of different layers of implementation, for each of which it is possible to distinguish design and implementation. How many layers there are, and which are implemented in software and which in hardware, is an empirical question and might vary from one organism or machine to another. Moreover, because vision is multi-functional there need not be one hierarchy of layers: instead there could be a fairly tangled network of tasks performed by a network of interrelated processes sharing some sub-mechanisms (e.g. retinas).
----------------------------------------------------
B: There's at least one other notion of level, not in Terry's list, that's worth mentioning, though it's related to his three and to the levels of task analysis mentioned above. It is familiar to computer scientists, though it may need to be generalised before it can be applied to brains. I refer to the notion of a "virtual machine". For example, a particular programming language refers to a class of entities (e.g. words, strings, numbers, lists, trees, etc.) and defines operations on these, that together define a virtual machine. A particular virtual machine can be implemented in a lower level virtual machine via an interpreter or compiler (with interestingly different consequences). The lower level virtual machine (e.g. the virtual machine that defines a VAX architecture) may itself be an abstraction that is implemented in some lower level machine (e.g. hardware, or a mixture of hardware and microcode). Processes in a computer can have many levels of virtual machine, each implemented via a compiler or interpreter to a lower level, or possibly more than one lower level, e.g. if two sorts of virtual machines are combined to implement a higher level hybrid. Circular organisation is possible if a low-level machine can invoke a sub-routine defined in a high level machine (e.g. for handling errors or interrupts).
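A toy illustration of this layering (a made-up example, not Sloman's): a three-instruction stack machine whose ``program'' is just data interpreted by a C program, which is itself running on whatever lower-level machines implement C.

#include <stdio.h>

/* Instruction set of a tiny high-level virtual machine. */
enum { PUSH, ADD, PRINT, HALT };

/* The interpreter: each iteration realizes one step of the
   higher-level machine in terms of the lower-level one. */
void run(const int *code)
{
    int stack[64], sp = 0;
    for (;;) {
        switch (*code++) {
        case PUSH:  stack[sp++] = *code++; break;
        case ADD:   sp--; stack[sp-1] += stack[sp]; break;
        case PRINT: printf("%d\n", stack[--sp]); break;
        case HALT:  return;
        }
    }
}

int main(void)
{
    /* A "program" for the virtual machine: compute 2+3 and print it.
       There is no single physical component corresponding to the
       virtual stack; it is distributed over ordinary memory. */
    int prog[] = { PUSH, 2, PUSH, 3, ADD, PRINT, HALT };
    run(prog);
    return 0;
}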
Different layers of virtual machine do not map in any simple way onto physically differentiable structures in the computer: indeed, without changing the lowest level physical architecture one can implement very many different higher level virtual machines, though there will be some physical differences in the form of different patterns of bits in the virtual memory, or different patterns of switch states or magnetic molecule states in the physical memory. In this important sense virtual structures in high level virtual machines may be "distributed" over the memory of a conventional computer, with no simple mapping from physical components to the virtual structures they represent. (This is especially clear in the case of sparse arrays, databases using inference, default values for slots, etc.) (This is why I think talk about "physical symbol systems" in AI is utterly misleading: most of the interesting symbol systems are virtual structures in virtual machines, not physical structures.)

Similarly, I presume different abstract virtual machines can be implemented in neural nets, though the kind of implementation will be different. E.g. it does not seem appropriate to talk about a compiler or interpreter, at least at the lower levels. An example of such an abstract virtual machine implemented in a human brain would be one that can store and execute a long and complex sequence of instructions, such as reciting a poem, doing a dance, or playing a piano sonata from memory. Logical thinking (especially when done by an expert trained logician) would be another example. My expectation is that "connectionist" approaches to intelligence will begin to take off when this branch of AI has a good theory about the kinds of virtual machines that need to be implemented to achieve different sorts of intelligent systems, including a theory of how such virtual machines are layered and how they may be implemented in different kinds of neural networks (perhaps using the levels of organisation described by Terry).

Aaron Sloman, School of Cognitive and Computing Sciences, Univ of Sussex, Brighton, BN1 9QH, England
EMAIL aarons at cogs.sussex.ac.uk aarons%uk.ac.sussex.cogs at nsfnet-relay.ac.uk aarons%uk.ac.sussex.cogs%nsfnet-relay.ac.uk at relay.cs.net

From tds at ai.mit.edu Sun Feb 25 12:16:48 1990
From: tds at ai.mit.edu (Terence D. Sanger)
Date: Sun, 25 Feb 90 12:16:48 EST
Subject: levels
Message-ID: <9002251716.AA03032@globus-pallidus>

It seems to me that there are many different ways of describing any phenomenon or algorithm in terms of levels. "Levels of abstraction" is probably only one, which might be interpreted to include both biological hardware levels of organization (receptors, synapses, neurons, Brodmann's areas) and the processing levels which a system goes through in order to interpret its environment (receptors, features, objects, interpretation). As Sejnowski points out, Marr's levels (implementation, algorithm, theory) are an additional type. Different concepts of level might have different theoretical uses, but when it comes to trying to find examples in the hardware of a system (see Aaron Sloman's note), there may not be that many possibilities. I would like to suggest another concept of level that might have some basis in biological hardware. I call it "anatomic levels". The idea is that the lower levels correspond to the local processing units which (due to physical constraints) have access only to a small portion of the total number of inputs and controls.
The higher levels progressively integrate input information from lower levels and coordinate the outputs from lower levels. An example would be segments of a spinal cord performing maximal processing based on the inputs and outputs available at each segment. Propriospinal communication would be the next level up, combining inputs and coordinating motion between a few levels. Sensory association cortex and perhaps supplementary motor cortex might (vaguely) correspond to higher levels which integrate sensory information across modalities and coordinate motor control across all spinal segments. Note that relatively sophisticated processing (according to some measures) might be occurring even within an individual spinal level. "Complete" interpretation of the available input and "optimal" control of the available outputs is theoretically possible, and might involve a good deal of processing at multiple "levels of abstraction". Does this have any relevance for computers? Perhaps in a distributed processing system where nodes do not have access to complete sensory information or control outputs, it would be useful to have an idea of how to integrate sensory information across the entire network and how to generate coordinated control. Terry Sanger MIT E25-534 Cambridge, MA 02139 tds at ai.mit.edu From bates at amos.ucsd.edu Sun Feb 25 15:39:47 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Sun, 25 Feb 90 12:39:47 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002252039.AA16585@amos.ucsd.edu> I must object to the argument that "Neurologists know with great specificity how to predict the effects of lesions...". In fact, the more we know about structural and functional brain imaging, the less clear the localization story becomes. For example, Basso et al. have reviewed CT or MRI records for 1,500 patients, and find that the classical teaching is contradicted (re type of lesion and type of aphasia) in at least 20% of the cases (e.g. fluent aphasias with a frontal lesion, non-fluent aphasias with a posterior lesion, and so forth). Antonio Damasio has a lovely paper that is titled something like "Where is Broca's area?" It turns out that it is not at all obvious where that is!! There are cases of syndromes with very specific behavioral impairments (e.g. the famous Hart, Berndt and Caramazza case, who had a particularly severe problem naming fruits and vegetables; see also Warrington's patients). But there is a real mystery in that category-specific literature as well: most (though not all) of the reported cases of very, very specific impairments come from patients with very global forms of encephalopathy, e.g. diffuse forms of brain injury. The real facts are that the localization story has been grossly OVERSOLD to the outside world. Insiders (e.g. members of the Academy of Aphasia) know all too well how approximate and often non-specific the relationships are between lesion site and syndrome type. For example, I suspect that many of you believe that there is a sound relationship between lesions to anterior cortex and damage to grammar (e.g. the Broca's-aphasia-as-agrammatism story). But how many know about the 1983 study by Linebarger et al. (followed by many replications, in several different languages) showing that so-called agrammatic Broca's aphasics can make spectacularly good and quite fine-grained judgments of grammaticality? How do you square that finding with the claim that Broca's area is the "grammar box"?
In our own cross-linguistic research, we have found that Turkish Broca's aphasics look radically different from Serbo-Croatians, who look radically different from Italians, who look radically different from English-speakers, and so on. In studies with a Patient Group by Language Group design (e.g. Broca's and Wernicke's in several different languages) it is invariably the case that Language accounts for 4-5 times more variance than aphasia group! You can predict more of the linguistic behavior of a brain-damaged patient by knowing his premorbid language than you can by knowing his lesion site and/or his aphasic classification. These findings can ONLY be explained if we assume that a great deal (if not all) of linguistic knowledge is spared. Aphasics suffer from some poorly-understood problems in accessing and deploying this knowledge (and there are a lot of new proposals on board right now to try and explain the nature of these performance deficits). But the specificity is far less than textbook stories would have you believe. The Good News for Connectionists: the "real" data from patients with focal brain injury are in fact much more compatible with a neural network story (i.e. a story in which representations are broadly distributed and activated by degree) than they are with an old-fashioned 19th century Thing-In-a-Box theory of the brain. -liz bates From slehar at bucasb.bu.edu Sun Feb 25 21:14:41 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Sun, 25 Feb 90 21:14:41 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: Elizabeth Bates's message of Sun, 25 Feb 90 12:39:47 PST <9002252039.AA16585@amos.ucsd.edu> Message-ID: <9002260214.AA29908@bucasd.bu.edu> Thank you for your lengthy reply to my posting. I do not dispute the variability of functional organization between individuals' brains, and I am intrigued by the organizational differences based on language that you pointed out. My point was not that brains are identical enough that pinpointing a lesion can necessarily lead to an accurate prediction of deficits (although admittedly that is what I said). What I meant to say is that the functionality of areas has been identified to a level of detail that would surprise many "neural network" modellers. The fact that the 'task' of speech, for example, is functionally divided into the components generally performed by Brocca's area (grammar and articulation), Wernicke's (meaning), angular gyrus (vocabulary), right hemisphere (prosidy), frontal areas (initiation of speech), motor strip (execution of speech), etc., is extremely interesting to the neural modeler, as it gives a clue as to how a parallel speech system can be organized, while leaving open the tantalizing question of the fine level microstructure required for such a system to be actually implemented. It is the specific functionality of each area that has been mapped in such detail, not the physical location of that area in any particular individual. (In other words, if Brocca's area is pinpointed in a particular individual, then a lesion of that area will produce predictable deficits.) In fact the very variability of the actual locations of such areas is equally interesting, and provides further clues as to the underlying mechanisms.
The fact that a lesion nearby can induce a functional area like Brocca's area to 'move over' to an adjacent region really emphasizes the adaptability and variability of the system, and until we duplicate that type of adaptability, we will not have duplicated the functionality either. Thank you for all your references to interesting work -- I will preserve them for future reading. Steve Lehar From yaski at ntt-20.ntt.jp Mon Feb 26 10:14:50 1990 From: yaski at ntt-20.ntt.jp (Yasuki Saito) Date: Mon, 26 Feb 90 10:14:50 I Subject: Levels Message-ID: <12569294135.18.YASKI@NTT-20.NTT.JP> e ------- From bates at amos.ucsd.edu Mon Feb 26 00:11:46 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Sun, 25 Feb 90 21:11:46 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002260511.AA18795@amos.ucsd.edu> But in fact, you still have the facts wrong: In richly-inflected languages, Wernicke's aphasics look just as bad as Broca's in the domain of grammar. The supposed grammar/semantics division is a peculiarity of English. When we first got these findings, I went back to Arnold Pick, the long-ago originator of the term "agrammatism." Pick worked with Czech & German patients -- and guess what? He in fact postulated two forms of agrammatism: non-fluent (anterior) and fluent (posterior). Of these two, he believed that the fluent form was the more interesting, revealing more about the point in processing (dynamically/temporally considered) at which assignment of grammatical forms is made. Yes, you are right, the brain is more than a bowl of oatmeal: there are lines running from the eyes to the occipital lobes, there is such a thing as a motor strip, and so on. And of course these things need to be taken into account by connectionist models. But even if you COULD pinpoint broca's area with precision for any given individual, that would not nail down for you ANY particular linguistic domain. Re right hemisphere language: Gazzaniga claims to have new evidence that the right hemisphere (in split brain folks) can make grammaticality judgments!! Where does that leave you? Broca's and Wernicke's BOTH have semantic problems (e.g. in priming) and BOTH have grammatical problems (as noted above). In short -- you have bought a used car. -liz bates From slehar at bucasb.bu.edu Mon Feb 26 10:25:38 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Mon, 26 Feb 90 10:25:38 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: Elizabeth Bates's message of Sun, 25 Feb 90 21:11:46 PST <9002260511.AA18795@amos.ucsd.edu> Message-ID: <9002261525.AA08045@bucasd.bu.edu> You say: "But even if you COULD pinpoint broca's area with precision for any given individual, that would not nail down for you ANY particular linguistic domain." Do you mean that if, for an English-speaking subject, Brocca's area is identified, located, and ablated, we could not predict the resulting deficits? I don't know if we are splitting hairs here; I'm sure we both agree that the subject would become "Brocca's aphasic", a well-defined syndrome with specific characteristics. True, those characteristics are defined in somewhat fuzzy terms, and even so, our patient is not guaranteed to suffer all the components of the defined syndrome. Indeed, immediately after the ablation the subject would begin to re-organize his functional areas to compensate for the loss, and the resulting mapping will be changing in time and very individualized.
Even in "normals" it is clear that every individual organizes his/her brain in their own fashion, so that the distribution of functionalities is somewhat individualized. I don't dispute any of these facts, and I'm not entirely certain what your criticism is. I suspect that you misunderstand my original contention. I did not mean to say that brain functionality is segmented into predictable and well-defined spatial locations such that grammar, for instance, is performed exclusively in the grammar area, and nowhere else is grammar performed. Some functions are performed in more localized areas (including grammar) while other functions are performed in more distributed areas (spatial thinking, higher cognition, ...). These functionalities are flexible and adaptive, and even localized functions like grammar are not fully localized, but have fuzzy and overlapping boundaries, and receive influence from beyond those boundaries as well. My point is that people who work in the field understand these things: that neurologists are beginning to understand the fundamental principles of brain organization. The very points that you were making reflect a new insight into the ways of the brain that was hard to find ten years ago. In order to contradict my contention you would have to say "We don't know anything about brain organization, everything is confused." On the contrary, it is clear that we are beginning to get a good grasp of the global principles, even though those principles define a fuzzy and ill-defined scheme. My point is that the neuropsychological understanding of the brain is quite good at the global level; where it has difficulties is at the fine-grained level. How are the signals propagated within the brain in order to produce the kind of global organization that we observe? This, I say, is the question to be addressed by neural modelers, and my argument was that we should use the findings and insights of neuropsychology to guide the direction of research in neural networks. Stated another way, you yourself would be critical of a neural model that is brittle, inflexible, too clearly defined and localized, because you know that that is not the way it works in the brain. My point is simply that neural modelers should listen to people like you for guidance as to whether they are on the right track, and that the science is ready for a coming together of the local mathematical models and the global neuropsychological ones. Surely you don't disagree with that? (O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O) (O)((O))((( slehar at bucasb.bu.edu )))((O))(O) (O)((O))((( Steve Lehar Boston University Boston MA )))((O))(O) (O)((O))((( (617) 424-7035 (H) (617) 353-6425 (W) )))((O))(O) (O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O) From bates at amos.ucsd.edu Mon Feb 26 14:02:18 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Mon, 26 Feb 90 11:02:18 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002261902.AA22328@amos.ucsd.edu> Do I think the brain is cottage cheese, all the same everywhere? No, of course not. And to be sure, there are clearcut differentiations by modality (visual cortex, etc.). But I indeed insist, based on all we now know, that EVEN IF WE COULD PINPOINT BROCA'S AREA (notice the spelling, only one "c") WE COULD NOT NECESSARILY PREDICT THE PATIENT'S BEHAVIOR. That is EXACTLY what the current data suggest.
For example, there are age-related changes that occur WITHIN individuals, as follows: up to some time between 7 and 12 years of age (no one knows the cutoff), anterior and posterior lesions both result in a non-fluent aphasia; then things stabilize into the usual correlation between lesion site and aphasia type (a loose correlation at that); then again, some time after 50 (no one knows the cutoff), the pattern changes again, with the probability of a FLUENT aphasia going up even with an anterior lesion. As for the other issues: in fact there is no real evidence (not anymore...) linking grammar with a particular region within the left hemisphere. Grammatical errors occur in fluent and nonfluent patients, with lesions all over the left half of the brain. Your belief in the right hemisphere's role in prosody (note the spelling) is also an oversimplification. It isn't clear at all whether the current prosody results are more than a by-product of a right hemisphere bias for certain kinds of emotional signals. In short, the whole story is and remains MUCH less differentiated than you have been taught to believe. Is there specialization of some sort? Yes, of course, but we are so far off from mapping it out for language that it is a poor time to recommend that connectionists pied-pipe after neurologists (by the way, the people doing the best experimental work on aphasia tend not to be neurologists anyway; they are usually experimental psychologists working with some neurologist nearby to read the CT scans....). -liz bates From pa1490%sdcc13 at ucsd.edu Mon Feb 26 17:42:00 1990 From: pa1490%sdcc13 at ucsd.edu (Dave Scotese) Date: Mon, 26 Feb 90 14:42:00 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002262242.AA04062@sdcc13.UCSD.EDU> I am not very well versed in the whole idea of tractability or whatnot or even neural nets themselves. In my humble and perhaps erroneous model of what we are discussing, it seems that any simulation of cerebral activity would necessarily avoid convergence (= tractability?). This comes from my idea that if the cerebral activity in a human did converge, he would have stopped thinking. While this might be the goal of the devout follower of eastern philosophy, I think it is impossible. Sorry if my insight reflects the ramblings of a misguided simpleton, really, as I feel really uneducated when I read most of the stuff here. -Dave Scotese *%) From mesard at BBN.COM Mon Feb 26 22:34:46 1990 From: mesard at BBN.COM (mesard@BBN.COM) Date: Mon, 26 Feb 90 22:34:46 -0500 Subject: Convergence (was Re: Mathematical Tractability of Neural Nets) In-Reply-To: Dave Scotese's message dated Mon, 26 Feb 90 14:42:00 PST Message-ID:
> This comes from my idea that if the cerebral activity
> in a human did converge, he would have stopped thinking. While this
> might be the goal of the devout follower of eastern philosophy, I
> think it is impossible.
There are two assumptions made in the case of artificial neural nets [or a large class of them anyway] that don't generally hold for a brain:
1) The set of input patterns is finite.
2) There is no random neural activity (aside from a random initial state).
Remove these assumptions, and an ANN can quite easily be made to never converge. Impose these assumptions on a brain, and it would very likely stop thinking. (In fact, even one assumption may be enough to produce a sort of convergence. Consider the effects of solitary confinement, etc.)
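A toy sketch makes the point concrete (a small Hopfield-style net with random symmetric weights; the sizes and noise level are arbitrary, and this is an illustration, not a brain model): with deterministic asynchronous updates the state must settle into a fixed point, while a little injected random activity keeps it flipping indefinitely.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2            # symmetric weights, and
    np.fill_diagonal(W, 0)       # no self-connections: the noiseless
                                 # asynchronous dynamics must converge

    def sweep(s, noise):
        """One asynchronous update sweep; returns how many units flipped."""
        flips = 0
        for i in rng.permutation(n):
            h = W[i] @ s + noise * rng.standard_normal()
            new = 1.0 if h >= 0 else -1.0
            flips += int(new != s[i])
            s[i] = new
        return flips

    for noise in (0.0, 5.0):
        s = rng.choice([-1.0, 1.0], size=n)
        trace = [sweep(s, noise) for _ in range(50)]
        print("noise=%.1f  flips per sweep (last 10):" % noise, trace[-10:])

With noise=0.0 the flip counts drop to zero within a few sweeps (a fixed point); with noise=5.0 they never do.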
-- void Wayne_Mesard(); Mesard at BBN.COM Bolt Beranek and Newman, Cambridge, MA From slehar at bucasb.bu.edu Tue Feb 27 12:47:37 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Tue, 27 Feb 90 12:47:37 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: Elizabeth Bates's message of Mon, 26 Feb 90 11:02:18 PST <9002261902.AA22328@amos.ucsd.edu> Message-ID: <9002271747.AA26951@bucasd.bu.edu> On the subject of the transfer of knowledge from neurobiology to neural science you write:
-> we are so far off from mapping it out for language that it is a poor
-> time to recommend that connectionists pied-pipe after neurologists...
Not only is it the right time, but whether you like it or not, it is already happening! Significant advances have already been made in the use of neurobiological and psychophysical data in models of vision, cognition, motor control and speech. (O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O) (O)((O))((( slehar at bucasb.bu.edu )))((O))(O) (O)((O))((( Steve Lehar Boston University Boston MA )))((O))(O) (O)((O))((( (617) 424-7035 (H) (617) 353-6425 (W) )))((O))(O) (O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O) From bates at amos.ucsd.edu Tue Feb 27 13:07:42 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Tue, 27 Feb 90 10:07:42 PST Subject: Mathematical Tractability of Neural Nets Message-ID: <9002271807.AA05458@amos.ucsd.edu> I think you need to distinguish between neuroscience in general (where significant progress is being made in many areas), and the particular area of neurology, with particular reference to language. I think that someday, in retrospect, we will see that progress WAS made in the neurology of language during this period in our history, but much of that progress will prove to be the debunking of classic disconnection and localization theories. Witness, for example, the stunning papers by Posner, Petersen, Fox, Raichle, etc. on metabolic activity during language use -- fascinating, but only marginally compatible with anything that we previously believed. Looking at a PET scan or an ERP study of "live" language use, one can only be impressed with HOW MUCH of the brain is very active during language use -- which, of course, fits with other anomalous findings that have been around but largely ignored (e.g. Ojemann's findings on the many, many different points in the brain that can interrupt language processing when an electric stimulus is applied during cortical mapping prior to surgery). We are, without question, in a period of transition and serious rethinking. For example, Geoff Hinton and Tim Shallice (a former believer in old-fashioned localization) have been carrying out simulations in which a neural network is trained up on some language task and then "lesioned". Some very specific but totally unexpected "syndromes" fall out of randomly placed or indeed randomly distributed damage to the net. Specific syndromes can be a "local minimum", a fact about the mathematics of a distributed network rather than a result (of the typical sort) induced by "subtracting" some local and highly specific piece-of-the-machine. When you were trying to recommend "findings" by "neurologists" that connectionists should follow, you stressed some classic claims about grammar (Broca's area), semantics (Wernicke's area), frontal lobs (that's lobes -- speech initiation), in short the Geschwind view that was so popular through the 1970's.
That is the view that I am objecting to now, not the more general and indeed very fruitful union between neuroscience and computation. One cannot compare our knowledge of the visual system (which is extensive) with our knowledge of how the brain is organized for language (which is, right now, totally up for grabs). -liz bates From turk%picadilly.media.mit.edu at media-lab.media.mit.edu Tue Feb 27 13:57:40 1990 From: turk%picadilly.media.mit.edu at media-lab.media.mit.edu (Matthew Turk) Date: Tue, 27 Feb 90 13:57:40 EST Subject: Mathematical Tractability of Neural Nets In-Reply-To: slehar@bucasb.bu.edu's message of Tue, 27 Feb 90 12:47:37 EST <9002271747.AA26951@bucasd.bu.edu> Message-ID: <9002271857.AA01089@picadilly.media.mit.edu>
>
> Not only is it the right time, but whether you like it or not, it is
> already happening! Significant advances have already been made in the
> use of neurobiological and psychophysical data in models of vision,
> cognition, motor control and speech.
>
> (O)((O))((( slehar at bucasb.bu.edu )))((O))(O)
I hate to be a naysayer, but this sounds a bit too optimistic to me. I think the point was that neurobiologists don't know as much about the workings of the brain as connectionists often think (or hope, or tell others) they do -- the example given was language areas. I think we should be conservative in our claims, in any scientific endeavor, as to "significant advances". Perhaps this is a good forum to discuss in-house just what we think are currently the advances and gaps in connectionist models of vision, cognition, motor control, and speech. Since this is basically a "closed" group, we can afford to honestly point out shortcomings, and not just hype the field. Matthew Turk MIT Media Lab turk at media-lab.media.mit.edu 20 Ames St., E15-414 uunet!mit-amt!turk Cambridge, MA 02139 (617)253-0381 From jbower at smaug.cns.caltech.edu Tue Feb 27 19:05:30 1990 From: jbower at smaug.cns.caltech.edu (Jim Bower) Date: Tue, 27 Feb 90 16:05:30 PST Subject: Biology Message-ID: <9002280005.AA24297@smaug.cns.caltech.edu> The current exchange concerning the neural basis of language processing reflects a general and perhaps growing tension in this field between what is really known about biology, what is claimed to be known about biology (often in this field by those who don't actually do biology but synthesize selected biological facts), and the needs of engineers interested in using the nervous system as a source of ideas for neural network implementations. A few comments seem appropriate: First, there is absolutely no question that our real understanding of how the nervous system works is extremely rudimentary. This is as true at the cognitive level as it is at the level of the neurobiological details. If someone states otherwise, they are probably selling something. Second, understanding what is and is not known about biology requires a considerable commitment to the study of biology itself. Summary articles and general lectures at neural network conferences are not enough to develop the intuition necessary to interpret neurobiological data. This is especially true in the case of lesion and psychophysical data, which are in any event problematically related to the actual structure of the brain.
Third, while neurobiologists have collected and are continuing to collect massive amounts of structural information about the nervous system, our ignorance is such that it is very difficult to even know where to begin in relating the abstract imaginings of neurologists, cognitive psychologists or connectionists to neural structure. Yet it is the firm belief of some of us that the structure of the brain itself must guide these more abstract musings. Only hard work and cross-training will allow this correspondence to be made. Non-biologists should also keep in mind that the lack of formalism in biology is not related exclusively to the inclinations of biologists. It is also the case that we are studying the most complicated structures known anywhere. Physicists are still debating how to formally characterize the behavior of dripping faucets. With respect to the ongoing discussion of levels, for example, it is not at all clear that feedforward networks of the connectionist type are even a particularly appropriate metaphor for thinking about levels within the brain. This is especially true if a hierarchical organization is also implied. Specifically, the usual description of a sensory to motor path within the brain, with "lower levels" of local sensory processing units feeding "higher integrating levels" that in turn coordinate motor response, is certainly a vast oversimplification and quite possibly conceptually wrong. In the case of visual processing, the often-mentioned but still not understood fact that there are 10 to 100 times more connections from the visual cortex to the geniculate than vice versa at least obscures any simple causal processing hierarchy. Further, the sensory to motor, lower to higher to effector view of the brain would seem to completely break down when one realizes that, under normal operating conditions (i.e., monkeys not in chairs looking at television screens), an animal itself controls the way it seeks data. This sensory acquisition process almost certainly reflects a complex and evolving understanding of the object being explored. Figuring out how the deepest levels of the brain control sensory acquisition and thus the neural flow of information through direct neural and indirect behavioral loops is likely to be an essential part of understanding how brains operate in the world. Clearly, our understanding of how the nervous system works will not only benefit from, but will be dependent on, the fusion of computational and neurobiological research. However, any attempt to fake a fusion by smoothing over the facts, and proceeding at full pace without concern for the structural details of the nervous system itself, is likely to do more harm than good. Jim Bower Div. of Biology Computational Neural Systems Program Caltech From HORN%TAUNIVM.BITNET at VMA.CC.CMU.EDU Wed Feb 28 17:03:30 1990 From: HORN%TAUNIVM.BITNET at VMA.CC.CMU.EDU (David Horn) Date: Wed, 28 Feb 90 17:03:30 IST Subject: Convergence Message-ID: In-reply-to: Dave Scotese and Wayne Mesard We have demonstrated how a convergent neural network of the Hopfield type can turn into a system which displays an open-ended motion in pattern-space (the space of all its input memories). Its dynamical motion converges on a short time scale, moving in the direction of an attractor, but escapes it, leading to a non-convergent motion on a long time scale. Adding pointers connecting different memories, we obtain a process which has some resemblance to associative thinking.
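A cartoon version of this behavior (far simpler than the published model cited below; all sizes and constants are arbitrary) can be written in a few lines: a Hopfield net with Hebbian weights, plus a threshold that tracks each unit's recent firing, as described in the next paragraph.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 3
    X = rng.choice([-1.0, 1.0], size=(p, n))   # p random stored patterns
    W = (X.T @ X) / n                          # Hebbian weights
    np.fill_diagonal(W, 0)

    for b in (0.0, 0.3):                       # b = strength of the fatigue term
        s, theta = X[0].copy(), np.zeros(n)
        overlap = []
        for t in range(40):
            theta = 0.9 * theta + b * s        # threshold tracks each unit's
            s = np.sign(W @ s - theta)         # recent firing ("fatigue")
            s[s == 0] = 1.0
            overlap.append(X[0] @ s / n)       # overlap with stored pattern 0
        print("b=%.1f overlaps:" % b, np.round(overlap[::5], 2))

With b = 0.0 the overlap with the stored pattern stays pinned at 1.0 (a stable attractor); with the fatigue term switched on, the state leaves the attractor after a few steps and never settles.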
It is interesting to note that such a non-convergent behavior does not necessitate random neural activity. The way we made it work was by introducing dynamical thresholds as new degrees of freedom. The threshold changes as a function of the firing history of the neuron to which it belongs (e.g. mimicking fatigue). This can lead to the destabilization of the attractors of the neural network, turning them into transients of its motion. References: D. Horn and M. Usher, Neural Networks with Dynamical Thresholds, Phys. Rev. A 40 (1989) 1036-1044; Motion in the Space of Memory Patterns, IJCNN (Washington meeting, June 1989) I-61-66; Excitatory-Inhibitory Networks with Dynamical Thresholds, preprint. From pjh%compsci.stirling.ac.uk at NSFnet-Relay.AC.UK Wed Feb 28 04:54:59 1990 From: pjh%compsci.stirling.ac.uk at NSFnet-Relay.AC.UK (Peter J.B. Hancock) Date: 28 Feb 90 09:54:59 GMT (Wed) Subject: No subject Message-ID: <9002280954.AA05738@uk.ac.stir.cs.lira> I've been following your comments on localisation (or lack of it) of language with interest. I'd be further interested to know what you think of dissociation findings such as that reported by McCarthy and Warrington (Nature 334, 428-430). They have a patient who is apparently unable to identify animals from their spoken names, but quite able if presented with a picture. There are many other such dissociations. Do you think they are of general applicability, or is language processing so variable that they should be treated as one-offs? Peter Hancock From bates at amos.ucsd.edu Wed Feb 28 12:42:58 1990 From: bates at amos.ucsd.edu (Elizabeth Bates) Date: Wed, 28 Feb 90 09:42:58 PST Subject: Biology Message-ID: <9002281742.AA14794@amos.ucsd.edu> Hear-Hear!! I fully (and humbly) concur with Bower's well-crafted statement, and happily leave this debate with that statement hopefully functioning as the final word. -liz bates From schmidhu at tumult.informatik.tu-muenchen.de Wed Feb 28 10:37:59 1990 From: schmidhu at tumult.informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Wed, 28 Feb 90 16:37:59 +0100 Subject: FKI-REPORTS AVAILABLE Message-ID: <9002281537.AA12412@kiss.informatik.tu-muenchen.de> Three reports on three quite different on-line algorithms for recurrent neural networks with external feedback (through a non-stationary environment) are available. A LOCAL LEARNING ALGORITHM FOR DYNAMIC FEEDFORWARD AND RECURRENT NETWORKS Juergen Schmidhuber FKI-Report 90-124 Most known learning algorithms for dynamic neural networks in non-stationary environments need global computations to perform credit assignment. These algorithms either are not local in time or not local in space. Those algorithms which are local in both time and space usually cannot deal sensibly with `hidden units'. In contrast, as far as we can judge by now, learning rules in biological systems with many `hidden units' are local in both space and time. In this paper we propose a parallel on-line learning algorithm which performs local computations only, yet still is designed to deal with hidden units and with units whose past activations are `hidden in time'. The approach is inspired by Holland's idea of the bucket brigade for classifier systems, which is transformed to run on a neural network with fixed topology. The result is a feedforward or recurrent `neural' dissipative system which consumes `weight-substance' and permanently tries to distribute this substance onto its connections in an appropriate way.
Experiments demonstrating the feasibility of the algorithm are reported. NETWORKS ADJUSTING NETWORKS Juergen Schmidhuber FKI-Report 90-125 An approach to spatiotemporal credit assignment in recurrent reinforcement learning networks is presented. The algorithm may be viewed as an application of Sutton's `Temporal Difference Methods' to the temporal evolution of recurrent networks. State transitions in a completely recurrent network are observed by a second non-recurrent adaptive network which receives as input the complete activation vectors of the recurrent one. Differences between successive state evaluations made by the second network provide update information for the recurrent network. In a reinforcement learning system an adaptive critic (like the one used in Barto, Sutton and Anderson's AHC algorithm) controls the temporal evolution of a recurrent network in a changing environment. This is done by letting the critic learn learning rates for a Hebb-like rule used to associate or disassociate successive states in the recurrent network. Only computations local in space and time take place. With a linear critic this scheme can be applied to tasks without linear solutions. It was successfully tested on a delayed XOR-problem, and a complicated pole balancing task with asymmetrically scaled inputs. We finally consider how, in a changing environment, a recurrent dynamic supervised learning critic can interact with a recurrent dynamic reinforcement learning network in order to improve its performance. MAKING THE WORLD DIFFERENTIABLE: ON USING SUPERVISED LEARNING WITH FULLY RECURRENT NEURAL NETWORKS FOR DYNAMIC REINFORCEMENT LEARNING AND PLANNING IN NON-STATIONARY ENVIRONMENTS. Juergen Schmidhuber FKI-Report 90-126 First a brief introduction to supervised and reinforcement learning with recurrent networks in non-stationary environments is given. The introduction also covers the basic principle of SYSTEM IDENTIFICATION as employed by Munro, Robinson and Fallside, Werbos, Jordan, and Widrow. This principle makes it possible to employ supervised learning techniques for reinforcement learning. Then a very general on-line algorithm for a reinforcement learning neural network with internal and external feedback in a non-stationary reactive environment is described. Internal feedback is given by connections that allow cyclic activation flow through the network. External feedback is given by output actions that may change the state of the environment, thus influencing subsequent input activations. The network's main goal is to receive as much reinforcement (or as little `pain') as possible. Arbitrary time lags between actions and later consequences are possible. Although the approach is based on `supervised' learning algorithms for fully recurrent dynamic networks, no teacher is required. An adaptive model of the environmental dynamics is constructed which includes a model of future reinforcement to be received. This model is used for learning goal-directed behavior. For reasons of efficiency the on-line algorithm CONCURRENTLY learns the model and learns to pursue the main goal. The algorithm is applied to the most difficult pole balancing problem ever given to any neural network. A connection to `META-learning' (learning how to learn) is noted. The possibility of using the model for learning by `mental simulation' of the environmental dynamics is investigated. The approach is compared to approaches based on Sutton's methods of temporal differences and Werbos' heuristic dynamic programming.
Finally, it is described how the algorithm can be augmented by dynamic CURIOSITY and BOREDOM. This can be done by introducing (delayed) reinforcement for controller actions that increase the model network's knowledge about the world. This in turn requires the model network to model its own ignorance. Please direct requests to schmidhu at lan.informatik.tu-muenchen.dbp.de Only if this does not work for some reason, try schmidhu at tumult.informatik.tu-muenchen.de Leave nothing but your physical address (subject: FKI-Reports). DO NOT USE `REPLY'. Of course, those who asked for copies at IJCNN in Washington will receive them without any further requests. From crg-tech-reports at cs.toronto.edu Wed Feb 28 15:03:31 1990 From: crg-tech-reports at cs.toronto.edu (crg-tech-reports@cs.toronto.edu) Date: Wed, 28 Feb 90 15:03:31 EST Subject: U of Toronto CRG-TR-90-3 announcement Message-ID: <90Feb28.150343est.10568@ephemeral.ai.toronto.edu> DO NOT FORWARD TO OTHER NEWSGROUPS OR MAILING LISTS *************************************************** The following technical report is now available. If you'd like a copy, please send me your real mail address (omitting all other information from your message). Also, do not reply to the entire mailing list. ------------------------------------------------------------------------------- EXPERIMENTS ON DISCOVERING HIGH ORDER FEATURES WITH MEAN FIELD MODULES Conrad C. Galland & Geoffrey E. Hinton Department of Computer Science University of Toronto Toronto, Canada M5S 1A4 CRG-TR-90-3 A new form of the deterministic Boltzmann machine (DBM) learning procedure is presented which can efficiently train network modules to discriminate between input vectors according to some criterion. The new technique directly utilizes the free energy of these "mean field modules" to represent the probability that the criterion is met, the free energy being readily manipulated by the learning procedure. Although conventional deterministic Boltzmann learning fails to extract the higher order feature of shift at a network bottleneck, combining the new mean field modules with the mutual information objective function rapidly produces modules that perfectly extract this important higher order feature without direct external supervision. -------------------------------------------------------------------------------
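A closing note for readers unfamiliar with the temporal difference principle cited in FKI-Report 90-125 above ("differences between successive state evaluations ... provide update information"): the core update fits in a few lines. The sketch below is generic TD(0) on a toy random-walk task, not the algorithm of any of the reports; all names and constants are invented for the example.

    import numpy as np

    # Generic TD(0) state evaluation on a 5-state random walk.  The
    # difference between successive evaluations (the TD error) is the
    # only learning signal; reward arrives solely at the right end.

    rng = np.random.default_rng(0)
    n_states, alpha, gamma = 5, 0.1, 1.0
    V = np.zeros(n_states + 2)             # states 0 and 6 are absorbing

    for episode in range(5000):
        s = 3                              # start in the middle state
        while s not in (0, n_states + 1):
            s2 = s + rng.choice([-1, 1])   # unbiased random step
            r = 1.0 if s2 == n_states + 1 else 0.0
            V[s] += alpha * (r + gamma * V[s2] - V[s])   # TD update
            s = s2

    print(np.round(V[1:-1], 2))            # approaches [0.17 ... 0.83]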