From mozer at neuron.Colorado.EDU Mon Jan 1 14:45:04 1990
From: mozer at neuron.Colorado.EDU (Michael C. Mozer)
Date: Mon, 1 Jan 90 12:45:04 MST
Subject: tech report announcement
Message-ID: <9001011945.AA07817@neuron.colorado.edu>

PLEASE MAIL REQUESTS FOR COPIES TO: conn_tech_report at boulder.colorado.edu
PLEASE DO NOT FORWARD TO OTHER BBOARDS.
===============================================================================

Discovering the Structure of a Reactive Environment by Exploration

Michael C. Mozer, University of Colorado, Boulder
Jonathan Bachrach, University of Massachusetts, Amherst

Tech Report CU-CS-451-89, December 1989

Consider a robot wandering around an unfamiliar environment, performing actions and sensing the resulting environmental states. The robot's task is to construct an internal model of its environment, a model that will allow it to predict the consequences of its actions and to determine what sequences of actions to take to reach particular goal states. Rivest and Schapire (1987a, 1987b; Schapire, 1988) have studied this problem and have designed a symbolic algorithm to strategically explore and infer the structure of "finite state" environments. The heart of this algorithm is a clever representation of the environment called an "update graph." We have developed a connectionist implementation of the update graph using a highly specialized network architecture. With back propagation learning and a trivial exploration strategy -- choosing random actions -- the connectionist network can outperform the Rivest and Schapire algorithm on simple problems. The network has the additional strength that it can accommodate stochastic environments. Perhaps the greatest virtue of the connectionist approach is that it suggests generalizations of the update graph representation that do not arise from a traditional, symbolic perspective.

This report also serves to set up the ultimate connectionist light bulb joke, which goes something like this: How many connectionist networks does it take to change a light bulb? Only one, but it requires about 6,000 trials.

From gary%cs at UCSD.EDU Mon Jan 1 18:46:06 1990
From: gary%cs at UCSD.EDU (Gary Cottrell)
Date: Mon, 1 Jan 90 15:46:06 PST
Subject: tech report announcement
Message-ID: <9001012346.AA01217@desi.UCSD.EDU>

No, Mike, it goes like this: How many connectionist networks does it take to change a lightbulb?

From carol at ai.toronto.edu Tue Jan 2 11:21:20 1990
From: carol at ai.toronto.edu (Carol Plathan)
Date: Tue, 2 Jan 90 11:21:20 EST
Subject: CRG-TR-89-5 available
Message-ID: <90Jan2.112126est.10744@ephemeral.ai.toronto.edu>

PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS OR MAILING LISTS
**********************************************************

The following paper was presented at a recent meeting of the Acoustical Society of America:

Context-Modulated Discrimination of Similar Vowels Using Second-Order Connectionist Networks

R. Watrous
Dept. of Computer Science, University of Toronto and Siemens Corporate Research

Discrimination of two vowels in the context of voiced and unvoiced stop consonants is investigated using connectionist networks. Separate discrimination networks were generated from samples of the vowel centers of [e,ae] for the six contexts [b,d,g,p,t,k] for one speaker. A single context-independent network was similarly generated. The context-specific error rate was 1%, whereas the context-independent error rate was 9%.
A method for merging isomorphic context-specific networks into a single network is described that uses singular value decomposition to find a minimal basis for the set of context-specific weight vectors. Context-dependent linear combinations of the basis vectors may then be computed using second-order network units. Compact networks can thus be obtained in which the vowel discrimination surfaces are modulated by the phonetic context. In one experiment, as the number of basis vectors was reduced from 6 to 3, the error rate increased from 1% to 3%. A context-modulated network with three basis vectors and a context-independent network were also trained on the same data using standard methods of nonlinear optimization. The discrimination error rate using the context-independent network was as low as 2.6%, whereas the context-specific recognition error rate was as low as 0.3%. It is concluded that compact context-sensitive connectionist networks which result in very high phoneme discrimination accuracy can be constructed and trained.

---------
To obtain a copy of this paper please send your real mailing address to carol at ai.toronto.edu
---------

From wilson at Think.COM Tue Jan 2 14:25:39 1990
From: wilson at Think.COM (Stewart Wilson)
Date: Tue, 02 Jan 90 14:25:39 EST
Subject: SAB90 Call for Papers
Message-ID: <9001021925.AA05846@pozzo>

Dear colleagues, Dr. Meyer and I would be very grateful if you would distribute the following call for papers on your email list. Sincerely, Stewart Wilson

==============================================================================
==============================================================================

Call for Papers

SIMULATION OF ADAPTIVE BEHAVIOR: FROM ANIMALS TO ANIMATS
An International Conference to be held in Paris
September 24-28, 1990

The object of the conference is to bring together researchers in ethology, ecology, cybernetics, artificial intelligence, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. The conference will focus particularly on simulation models in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals. Contact among scientists from diverse disciplines should contribute to better appreciation of each other's approaches and vocabularies, to cross-fertilization of fundamental and applied research, and to defining objectives, constraints, and challenges for future work.

Contributions treating any of the following topics from the perspective of adaptive behavior will receive special emphasis.
Individual and collective behaviors
Action selection and behavioral sequences
Conditioning, learning and induction
Neural correlates of behavior
Perception and motor control
Motivation and emotion
Behavioral ontogeny
Cognitive maps and internal world models
Autonomous robots
Hierarchical and parallel organizations
Self organization of behavioral modules
Problem solving and planning
Goal directed behavior
Neural networks and classifier systems
Emergent structures and behaviors

Authors are requested to send two copies (hard copy only) of a full paper to each of the Conference chairmen:

Jean-Arcady MEYER
Groupe de Bioinformatique
URA686.Ecole Normale Superieure
46 rue d'Ulm
75230 Paris Cedex 05
France
e-mail: meyer%FRULM63.bitnet@cunyvm.cuny.edu

Stewart WILSON
The Rowland Institute for Science
100 Cambridge Parkway
Cambridge, MA 02142
USA
e-mail: wilson at think.com

A brief preliminary letter to one chairman indicating the intention to participate--with the tentative title of the intended paper and a list of the topics addressed--would be appreciated for planning purposes. For conference information, please also contact one of the chairmen.

Conference committee:
Conference Chair: J.A. Meyer, S. Wilson
Organizing Committee and local arrangements: Groupe de BioInformatique, ENS, France: A. Guillot, J.A. Meyer, P. Tarroux, P. Vincens
Program Committee: L. Booker, USA; R. Brooks, USA; P. Colgan, Canada; P. Greussay, France; D. McFarland, UK; L. Steels, Belgium; R. Sutton, USA; F. Toates, UK; D. Waltz, USA

Official Language: English

Important Dates
31 May 90: Submissions must be received by the chairmen
30 June 90: Notification of acceptance or rejection
31 August 90: Camera-ready revised versions due
24-28 September 90: Conference dates

===============================================================================
===============================================================================

From kersten%scf.decnet at nwc.navy.mil Wed Jan 3 01:00:00 1990
From: kersten%scf.decnet at nwc.navy.mil (SCF::KERSTEN)
Date: 2 Jan 90 23:00:00 PDT
Subject: newsletter
Message-ID:

Dear Connectionist: I wondered how one gets on the distribution list for the connectionists? Is it free?? Is it possible to obtain a statement of purpose of the newsletter, who contributes, etc.

From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Wed Jan 3 14:13:56 1990
From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (thanasis kehagias)
Date: Wed, 03 Jan 90 14:13:56 EST
Subject: No subject
Message-ID:

i would like the following information from kind fellow netters that do speech research: reading some of the current TR's on the use of nn's for speech recognition i could not figure out just how many parameters are involved in the training problem and how much time it takes to train them. i would be interested in some statement of the form: 2000 parameters, 17 days of sun time. maybe also how the training time scales with the size of the optimization problem ... i will be happy to collect answers and post to the list. please send me personal mail. also, please answer only from your own personal research experience. thanasis

From bharucha at eleazar.dartmouth.edu Wed Jan 3 14:45:10 1990
From: bharucha at eleazar.dartmouth.edu (Jamshed Bharucha)
Date: Wed, 3 Jan 90 14:45:10 -0500
Subject: summer institute in cognitive neuroscience
Message-ID: <9001031945.AA02873@eleazar.dartmouth.edu>

JAMES S.
MCDONNELL FOUNDATION
THIRD ANNUAL SUMMER INSTITUTE IN COGNITIVE NEUROSCIENCE
Dartmouth College and Medical School
July 2 - July 13, 1990

The Third Annual Summer Institute will be held from July 2 through July 13, 1990. The two-week course will examine how information about the brain bears on issues in cognitive science, and how approaches in cognitive science apply to neuroscience research. A distinguished faculty will lecture on current topics in perception and language; laboratories and demonstrations will offer practical experience with cognitive neuropsychology experiments, connectionist/computational modeling, and neuroanatomy. At every stage, the relationship between cognitive issues and underlying neural circuits will be explored. The Institute directors will be Michael Gazzaniga, George A. Miller, Wolf Singer and Gordon Shepherd. Applications are invited from beginning and established researchers. The Foundation is providing limited support for travel expenses and room/board.

Visiting Faculty Include: Sheila Blumstein, Eric Knudsen, Kenneth Nakayama, Patricia A. Carpenter, Mark Konishi, Steven Pinker, Albert M. Galaburda, Stephen M. Kosslyn, Michael I. Posner, Lila Gleitman, Marta Kutas, Marcus E. Raichle, David H. Hubel, Ralph Linsker, Pasko Rakic, Marcel A. Just, Margaret Livingstone, Gordon M. Shepherd, Jon H. Kaas, James McClelland, Wolf Singer, Herbert P. Killackey, George A. Miller, Barry E. Stein, Vernon B. Mountcastle

Host Faculty: Kathleen Baynes, Carol A. Fowler, Patricia A. Reuter-Lorenz, Jamshed Bharucha, Michael S. Gazzaniga, Mark J. Tramo, Robert Fendrich, Howard C. Hughes, George L. Wolford

For further information please send email to reuter-lorenz at mac.dartmouth.edu

APPLICATIONS MUST BE POSTMARKED BY JANUARY 12, 1990

*************************Application form***********************
NAME:
INSTITUTIONAL AFFILIATION:
POSITION:
HOME ADDRESS:
WORK ADDRESS:
TELEPHONES:

Housing and some meal costs will be covered by the Foundation. There will also be limited travel support available. Please indicate the percent of your need for this support: ___ %

APPLICATION DEADLINE: Postmarked by January 12, 1990
NOTIFICATION OF ACCEPTANCE: March 5, 1990

PLEASE SEND THIS FORM, TOGETHER WITH:
1. A one-page statement explaining why you wish to attend.
2. A curriculum vitae.
3. Two letters of recommendation.
(Supporting materials will be accepted until February 1).

Send applications to the following address (do not email applications):
Dr. M.S. Gazzaniga
McDonnell Summer Institute
HB 7915-A
Dartmouth Medical School
Hanover, New Hampshire 03756
******************************************************************

From reggia at cs.UMD.EDU Thu Jan 4 16:26:42 1990
From: reggia at cs.UMD.EDU (James A. Reggia)
Date: Thu, 4 Jan 90 16:26:42 -0500
Subject: call for papers
Message-ID: <9001042126.AA20909@mimsy.UMD.EDU>

CALL FOR PAPERS: Connectionist/neural models in medicine

The 14th Annual Symposium on Computer Applications in Medical Care (Nov 4 - 7, 1990) will have sessions dealing with connectionist modelling research relevant to biomedicine. Previous papers have included presentations of new learning methods, models of portions of the nervous system of specific organisms, methods for classification and diagnosis of medical disorders, models of higher cortical functions and their disorders (e.g., aphasia, dyslexia, dementia), and methods for device control. Papers on these and a much broader range of topics are sought. Manuscripts are due March 1, 1990.
For a copy of instructions for authors or any further information contact

SCAMC Office of CME
George Washington University Medical Center
2300 K Street, NW
Washington, DC 20037

or by phone at (202) 994-8928.

From ajr%engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK Fri Jan 5 11:00:15 1990
From: ajr%engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK (Tony Robinson)
Date: Fri, 5 Jan 90 16:00:15 GMT
Subject: problems with large training sets
Message-ID: <1806.9001051600@dsl.eng.cam.ac.uk>

Dear Connectionists: I have a problem which I believe is shared by many others. In taking error propagation networks out of the "toy problem" domain, and into the "real world", the number of examples in the training set increases rapidly. For weight updating, true gradient descent requires calculating the partial gradient from every element in the training set and taking a small step in the opposite direction to the total gradient. Both these requirements are impractical when the training set is large. Adaptive step size techniques can give an order of magnitude decrease in computation over a fixed scaling of the gradient and, for initial training, small subsets can give a sufficiently accurate estimation of the gradient. My problem is that I don't have an adaptive step size algorithm that works on the noisy gradient obtained from a subset of the training set. Does anyone have any ideas? (I'd be glad to coordinate suggestions and short summaries of published work and post back to the list.) To kick off, my best technique to date is included below. Thanks, Tony Robinson. (ajr at eng.cam.ac.uk)

From jbower at smaug.cns.caltech.edu Fri Jan 5 17:28:03 1990
From: jbower at smaug.cns.caltech.edu (Jim Bower)
Date: Fri, 5 Jan 90 14:28:03 PST
Subject: GENESIS
Message-ID: <9001052228.AA00516@smaug.cns.caltech.edu>

Software availability announcement

GENESIS (GEneral NEural SImulation System) and XODUS (X-windows Output and Display Utility for Simulations)

This combined neural simulation system is now available for general distribution via FTP from Caltech. The software was developed to support the simulation of neural systems ranging from complex models of single neurons to simulations of large networks made up of more abstract neuronal components. For the last two years GENESIS has provided the basis for laboratory courses in neural simulation at both Caltech and the Marine Biological Laboratory in Woods Hole, MA. Most current GENESIS applications involve realistic simulations of biological neural systems; however, it has also been used to model more abstract networks. The system is not, however, a particularly efficient way to construct and run simple feedforward back propagation type simulations. More information on the simulator and its interface can be obtained from an article by our group in last year's NIPS proceedings (Wilson, M.A., Bhalla, U.S., Uhley, J.D., and Bower, J.M. 1989. GENESIS: A system for simulating neural networks. In: Advances in Neural Information Processing Systems. D. Touretzky, editor. Morgan Kaufmann, San Mateo, CA. 485-492.). The interface will also be the subject of an oral presentation at the upcoming USENIX meeting in Washington D.C.

GENESIS and XODUS are written in C and run on SUN and DEC graphics workstations under UNIX (version 4.0 and up) and X-windows (version 11). The software requires 14 meg of disk space and the tar file is approximately 1 meg. Full source for the simulator is available via FTP from genesis.cns.caltech.edu (131.215.135.64).
To acquire FTP access to this machine it is necessary to first register for distribution by using telnet or rlogin to log in under user "genesis" and then follow the instructions (e.g., 'telnet genesis.cns.caltech.edu' and log in as 'genesis'). When necessary, tapes can be provided for a small handling fee ($50). Those requiring tapes should send requests to genesis-req at caltech.bitnet. Any other questions about the system or its distribution should also be sent to this address.

The current distribution includes full source for both GENESIS and XODUS as well as three tutorial simulations (squid axon, multicell, visual cortex). Documentation for these tutorials as well as three papers describing the structure of the simulator are also included. As described in more detail in the "readme" file at the FTP address, those interested in developing new GENESIS applications are encouraged to become registered members of the GENESIS users group (BABEL) for an additional one-time $200 registration fee. As a registered user, one is provided documentation on the simulator itself (currently in an early stage), access to additional simulator components, bug report listings, and access to a user's bulletin board. In addition we are establishing a depository for additional completed simulations.

Finally, it should be noted that this software system, which currently consists of approximately 60,000 lines of code and represents almost four years of programming effort, is being provided for general distribution as a public service to the neural network and computational neuroscience communities. We make no claims as to the quality or functionality of this software for any purpose whatsoever and its release does not constitute a commitment on our part to provide support of any kind.

Jim Bower

From terry%sdbio2 at ucsd.edu Sat Jan 6 20:42:07 1990
From: terry%sdbio2 at ucsd.edu (Terry Sejnowski)
Date: Sat, 6 Jan 90 17:42:07 PST
Subject: Neural Computation, Vol. 1, No. 4
Message-ID: <9001070142.AA15178@sdbio2.UCSD.EDU>

Neural Computation, Volume 1, Number 4

Reviews:
  Learning in artificial neural networks
    Halbert White

Notes:
  Representation properties of networks: Kolmogorov's theorem is irrelevant
    Federico Girosi and Tomaso Poggio
  Sigmoids distinguish more efficiently than heavisides
    Eduardo D. Sontag

Letters:
  How cortical interconnectedness varies with network size
    Charles F. Stevens
  A canonical microcircuit for neocortex
    Rodney J. Douglas, Kevan A. C. Martin, and David Whitteridge
  Synthetic neural circuits using current-domain signal representations
    Andreas G. Andreou and Kwabena A. Boahen
  Random neural networks with negative and positive signals and product form solution
    Erol Gelenbe
  Nonlinear optimization using generalized Hopfield networks
    Athanasios G. Tsirukis, Gintaras V. Reklaitis, and Manoel F. Tenorio
  Discrete synchronous neural algorithm for minimization
    Hyuk Lee
  Approximation of boolean functions by sigmoidal networks: Part I: XOR and other two-variable functions
    E. K. Blum
  Backpropagation applied to handwritten zip code recognition
    Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel
  A subgrouping strategy that reduces complexity and speeds up learning in recurrent networks
    David Zipser
  Unification as constraint satisfaction in structured connectionist networks
    Andreas Stolcke

SUBSCRIPTIONS: This will be the last opportunity to receive all issues for volume 1. Subscriptions for volume 2 will not receive back issues for volume 1.
Single back issues will, however, be available for $25. each.

               Volume 1        Volume 2
Student        ______ $35.     ______ $40.
Individual     ______ $45.     ______ $50.
Institution    ______ $90.     ______ $100.

Add $9. for postage outside USA and Canada surface mail or $17. for air mail. MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. (617) 253-2889.
-----

From whart%cs at ucsd.edu Mon Jan 8 11:40:33 1990
From: whart%cs at ucsd.edu (Bill Hart)
Date: Mon, 8 Jan 90 08:40:33 PST
Subject: New Subscriber
Message-ID: <9001081640.AA01571@beowulf.UCSD.EDU>

Please add me to the connectionists mailing list. Thanks.

From der at thorin.stanford.edu Mon Jan 8 19:01:50 1990
From: der at thorin.stanford.edu (Dave Rumelhart)
Date: Mon, 8 Jan 90 16:01:50 PST
Subject: Neural Computation, Vol. 1, No. 4
In-Reply-To: Terry Sejnowski's message of Sat, 6 Jan 90 17:42:07 PST <9001070142.AA15178@sdbio2.UCSD.EDU>
Message-ID: <9001090001.AA16001@thorin>

From SATINDER at cs.umass.EDU Tue Jan 9 12:30:00 1990
From: SATINDER at cs.umass.EDU (SATINDER@cs.umass.EDU)
Date: Tue, 9 Jan 90 12:30 EST
Subject: Technical Reports available
Message-ID: <9001091728.AA23874@crash.cs.umass.edu>

**********DO NOT FORWARD TO OTHER BBOARDS**************
**********DO NOT FORWARD TO OTHER BBOARDS**************
**********DO NOT FORWARD TO OTHER BBOARDS**************

Two technical reports are available. Send requests to Ms. Connie Smith, Department of Computer and Information Science, University of Massachusetts, Amherst MA 01003. Or via e-mail to: Smith at cs.umass.EDU. Do not send requests to the sender of this message.

A PostScript version of Technical Report 89-118 should be available via ftp from cheops.cis.ohio-state.edu as described by Pollack in previous messages to this bboard. It is stored as "houk.control.ps.Z" in pub/neuroprose. The extension ".Z" is for the compressed form.

****************************************************************

An Adaptive Sensorimotor Network Inspired by the Anatomy and Physiology of the Cerebellum

James C. Houk
Department of Physiology, Northwestern University, Chicago, IL 60611

Satinder P. Singh, Charles Fisher, Andrew G. Barto
Department of Computer and Information Science, University of Massachusetts, Amherst, MA 01003

COINS Technical Report 89-108, November 1989

Abstract: In this report we review the anatomy and physiology of the cerebellum, stressing new knowledge about information processing in cerebellar circuits, novel biophysical properties of Purkinje neurons and cellular mechanisms for adjusting synaptic weights. We then explore the impact of these ideas on designs for adaptive sensorimotor networks. A network is proposed that is comprised of an array of adjustable pattern generators. Each pattern generator in the array produces an element of a composite motor program. Motor programs can be stored, retrieved and executed using adjustable pattern generator modules.

**********************************************************************

Cooperative Control of Limb Movements by the Motor Cortex, Brainstem and Cerebellum

James C. Houk
Department of Physiology, Northwestern University, Chicago, IL 60611

COINS Technical Report 89-118, December 1989

Abstract: The model of sensory-motor coordination proposed here involves two primary processes that are bound together by positive feedback loops. One primary process links sensory triggers to potential movements. While this process may occur at other sites, I emphasize the role of combinatorial maps in the motor cortex in this report.
Combinatorial maps make it possible for many different stimuli to trigger many different motor programs, and for preferential linkages to be associatively formed. A second primary process stores motor programs and regulates their expression. The programs are believed to be stored in the cerebellar cortex, in the synaptic weights between parallel fibers and Purkinje cells. Positive feedback loops between the motor cortex and the cerebellum bind the combinatorial maps to the motor programs. The capability for self-sustained activity in these loops is the postulated driving force for generating programs, whereas inhibition from cerebellar Purkinje cells is the main mechanism that regulates their expression. Execution of a program is triggered when a sensory input succeeds in initiating regenerative loop activity. **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** From paul at NMSU.Edu Wed Jan 10 01:00:33 1990 From: paul at NMSU.Edu (paul@NMSU.Edu) Date: Tue, 9 Jan 90 23:00:33 MST Subject: No subject Message-ID: <9001100600.AA05592@NMSU.Edu> Updated CFP: PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS SUBJECT: Please post the following in your Laboratory/Department/Journal: Cut--------------------------------------------------------------------------- SUBJECT: Please post the following in your Laboratory/Department/Journal: CALL FOR PAPERS Pragmatics in Artificial Intelligence 5th Rocky Mountain Conference on Artificial Intelligence (RMCAI-90) Las Cruces, New Mexico, USA, June 28-30, 1990 PRAGMATICS PROBLEM: The problem of pragmatics in AI is one of developing theories, models, and implementations of systems that make effective use of contextual information to solve problems in changing environments. CONFERENCE GOAL: This conference will provide a forum for researchers from all subfields of AI to discuss the problem of pragmatics in AI. The implications that each area has for the others in tackling this problem are of particular interest. COOPERATION: American Association for Artificial Intelligence (AAAI) Association for Computing Machinery (ACM) Special Interest Group in Artificial Intelligence (SIGART) IEEE Computer Society U S WEST Advanced Technologies and the Rocky Mountain Society for Artificial Intelligence (RMSAI) SPONSORSHIP: Association for Computing Machinery (ACM) Special Interest Group in Artificial Intelligence (SIGART) U S WEST Advanced Technologies and the Rocky Mountain Society for Artificial Intelligence (RMSAI) INVITED SPEAKERS: The following researchers have agreed to present papers at the conference: *Martin Casdagli, Los Alamos National Laboratory, Los Alamos USA *Arthur Cater, University College Dublin, Ireland EC *Jerry Feldman, University of California at Berkeley, Berkeley USA & International Computer Science Institute, Berkeley USA *Barbara Grosz, Harvard University, Cambridge USA *James Martin, University of Colorado at Boulder, Boulder USA *Derek Partridge, University of Exeter, United Kingdom EC *Philip Stenton, Hewlett Packard, United Kingdom EC *Robert Wilensky, University of California at Berkeley Berkeley USA THE LAND OF ENCHANTMENT: Las Cruces, lies in THE LAND OF ENCHANTMENT (New Mexico), USA and is situated in the Rio Grande Corridor with the scenic Organ Mountains overlooking the city. 
The city is close to Mexico, Carlsbad Caverns, and White Sands National Monument. There are a number of Indian Reservations and Pueblos in the Land Of Enchantment and the cultural and scenic cities of Taos and Santa Fe lie to the north. New Mexico has an interesting mixture of Indian, Mexican and Spanish culture. There is quite a variation of Mexican and New Mexican food to be found here too. GENERAL INFORMATION: The Rocky Mountain Conference on Artificial Intelligence is a major regional forum in the USA for scientific exchange and presentation of AI research. The conference emphasizes discussion and informal interaction as well as presentations. The conference encourages the presentation of completed research, ongoing research, and preliminary investigations. Researchers from both within and outside the region are invited to participate. Some travel awards will be available for qualified applicants. FORMAT FOR PAPERS: Submitted papers should be double spaced and no more than 5 pages long. E-mail versions will not be accepted. Papers will be published in the proceedings and there is the possibility of a published book. Send 3 copies of your paper to: Paul Mc Kevitt, Program Chairperson, RMCAI-90, Computing Research Laboratory (CRL), Dept. 3CRL, Box 30001, New Mexico State University, Las Cruces, NM 88003-0001, USA. DEADLINES: Paper submission: April 1st, 1990 Pre-registration: April 1st, 1990 Notice of acceptance: May 1st, 1990 Final papers due: June 1st, 1990 LOCAL ARRANGEMENTS: Local Arrangements Chairperson, RMCAI-90. (same postal address as above). INQUIRIES: Inquiries regarding conference brochure and registration form should be addressed to the Local Arrangements Chairperson. Inquiries regarding the conference program should be addressed to the Program Chairperson. Local Arrangements Chairperson: E-mail: INTERNET: rmcai at nmsu.edu Phone: (+ 1 505)-646-5466 Fax: (+ 1 505)-646-6218. Program Chairperson: E-mail: INTERNET: paul at nmsu.edu Phone: (+ 1 505)-646-5109 Fax: (+ 1 505)-646-6218. TOPICS OF INTEREST: You are invited to submit a research paper addressing Pragmatics in AI, with any of the following orientations: Philosophy, Foundations and Methodology Knowledge Representation Neural Networks and Connectionism Genetic Algorithms, Emergent Computation, Nonlinear Systems Natural Language and Speech Understanding Problem Solving, Planning, Reasoning Machine Learning Vision and Robotics Applications PROGRAM COMMITTEE: *John Barnden, New Mexico State University (Connectionism, Beliefs, Metaphor processing) *Hans Brunner, U S WEST Advanced Technologies (Natural language interfaces, Dialogue interfaces) *Martin Casdagli, Los Alamos National Laboratory (Dynamical systems, Artificial neural networks, Applications) *Mike Coombs, New Mexico State University (Problem solving, Adaptive systems, Planning) *Thomas Eskridge, Lockheed Missile and Space Co. 
(Analogy, Problem solving)
*Chris Fields, New Mexico State University (Neural networks, Nonlinear systems, Applications)
*Roger Hartley, New Mexico State University (Knowledge Representation, Planning, Problem Solving)
*Victor Johnson, New Mexico State University (Genetic Algorithms)
*Paul Mc Kevitt, New Mexico State University (Natural language interfaces, Dialogue modeling)
*Joe Pfeiffer, New Mexico State University (Computer Vision, Parallel architectures)
*Keith Phillips, University of Colorado at Colorado Springs (Computer vision, Mathematical modelling)
*Yorick Wilks, New Mexico State University (Natural language processing, Knowledge representation)
*Scott Wolff, U S WEST Advanced Technologies (Intelligent tutoring, User interface design, Cognitive modeling)

REGISTRATION:
Pre-Registration: Professionals: $50.00; Students $30.00 (Pre-Registration cutoff date is April 1st 1990)
Registration: Professionals: $70.00; Students $50.00 (Copied proof of student status is required).

Registration form (IN BLOCK CAPITALS). Enclose payment made out to New Mexico State University. (ONLY checks in US dollars will be accepted). Send to the following address (MARKED REGISTRATION):

Local Arrangements Chairperson, RMCAI-90
Computing Research Laboratory
Dept. 3CRL, Box 30001, NMSU
Las Cruces, NM 88003-0001, USA.

Name:_______________________________ E-mail_____________________________ Phone__________________________
Affiliation: ____________________________________________________
Fax: ____________________________________________________
Address: ____________________________________________________
____________________________________________________
____________________________________________________
COUNTRY__________________________________________

Organizing Committee RMCAI-90:
Paul Mc Kevitt, Research Scientist, CRL
Yorick Wilks, Director, CRL

cut------------------------------------------------------------------------

From dambrosi at turing Wed Jan 10 14:44:22 1990
From: dambrosi at turing (dambrosi@turing)
Date: Wed, 10 Jan 90 11:44:22 PST
Subject: Call For Papers - Uncertainty in AI
Message-ID: <9001101944.AA04813@turing.CS.ORST.EDU>

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
CALL FOR PAPERS: SIXTH CONFERENCE ON UNCERTAINTY IN AI
Cambridge, Mass., July 27th-29th, 1990 (preceding the AAAI-90 Conference)
DEADLINE: MARCH 12, 1990
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The sixth annual Conference on Uncertainty in AI is concerned with the full gamut of approaches to automated and interactive reasoning and decision making under uncertainty, including both quantitative and qualitative methods. We invite original contributions on fundamental theoretical issues, on the development of software tools embedding approximate reasoning theories, and on the validation of such theories and technologies on challenging applications. Topics of particular interest include:

- Semantics of qualitative and quantitative uncertainty representations
- The role of uncertainty in deductive, abductive, defeasible, or analogical (case-based) reasoning
- Control of reasoning; planning under uncertainty
- Comparison and integration of qualitative and quantitative schemes
- Knowledge engineering tools and techniques for building approximate reasoning systems
- User Interface: explanation and summarization of uncertain information
- Applications of approximate reasoning techniques

Papers will be carefully refereed.
All accepted papers will be included in the proceedings, which will be available at the conference. Papers may be accepted for presentation in plenary sessions or poster sessions. Four copies of each paper should be sent to the Program Chair by March 12, 1990. Acceptance will be sent by May 4, 1990. Final camera-ready papers, incorporating reviewers' comments, will be due by May 31, 1990. There will be an eight-page limit on the camera-ready copy (with a few extra pages available for a nominal fee).
----------------------------------------------------------------------------

Program Chair:
Piero P. Bonissone, UAI-90
General Electric
Corporate Research and Development
1 River Rd., Bldg. K1-5C32A
Schenectady, NY 12301
(518) 387-5155
Bonissone at crd.ge.com

General Chair:
Max Henrion
Rockwell Science Center
Palo Alto Facility
444 High Street
Palo Alto, Ca 94301
(415) 325-1892
Henrion at sumex-aim.stanford.edu

Program Committee: Peter Cheeseman, Paul Cohen, Laveen Kanal, Henry Kyburg, John Lemmer, Tod Levitt, Ramesh Patil, Judea Pearl, Enrique Ruspini, Glenn Shafer, Lotfi Zadeh
----------------------------------------------------------------------------
------- End of Forwarded Message

From mike at bucasb.bu.edu Fri Jan 12 02:06:41 1990
From: mike at bucasb.bu.edu (Michael Cohen)
Date: Fri, 12 Jan 90 02:06:41 EST
Subject: Call for Papers Wang Conference
Message-ID: <9001120706.AA28881@bucasb.bu.edu>

CALL FOR PAPERS

NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION
MAY 11--13, 1990

Sponsored by the Center for Adaptive Systems, the Graduate Program in Cognitive and Neural Systems, and the Wang Institute of Boston University with partial support from The Air Force Office of Scientific Research

This research conference at the cutting edge of neural network science and technology will bring together leading experts in academe, government, and industry to present their latest results on automatic target recognition in invited lectures and contributed posters.
Invited lecturers include: JOE BROWN, Martin Marietta, "Multi-Sensor ATR using Neural Nets" GAIL CARPENTER, Boston University, "Target Recognition by Adaptive Resonance: ART for ATR" NABIL FARHAT, University of Pennsylvania, "Bifurcating Networks for Target Recognition" STEPHEN GROSSBERG, Boston University, "Recent Results on Self-Organizing ATR Networks" ROBERT HECHT-NIELSEN, HNC, "Spatiotemporal Attention Focusing by Expectation Feedback" KEN JOHNSON, Hughes Aircraft, "The Application of Neural Networks to the Acquisition and Tracking of Maneuvering Tactical Targets in High Clutter IR Imagery" PAUL KOLODZY, MIT Lincoln Laboratory, "A Multi-Dimensional ATR System" MICHAEL KUPERSTEIN, Neurogen, "Adaptive Sensory-Motor Coordination using the INFANT Controller" YANN LECUN, AT&T Bell Labs, "Structured Back Propagation Networks for Handwriting Recognition" CHRISTOPHER SCOFIELD, Nestor, "Neural Network Automatic Target Recognition by Active and Passive Sonar Signals" STEVEN SIMMES, Science Applications International Co., "Massively Parallel Approaches to Automatic Target Recognition" ALEX WAIBEL, Carnegie Mellon University, "Patterns, Sequences and Variability: Advances in Connectionist Speech Recognition" ALLEN WAXMAN, MIT Lincoln Laboratory, "Invariant Learning and Recognition of 3D Objects from Temporal View Sequences" FRED WEINGARD, Booz-Allen and Hamilton, "Current Status and Results of Two Major Government Programs in Neural Network-Based ATR" BARBARA YOON, DARPA, "DARPA Artificial Neural Networks Technology Program: Automatic Target Recognition" CALL FOR PAPERS---ATR POSTER SESSION: A featured poster session on ATR neural network research will be held on May 12, 1990. Attendees who wish to present a poster should submit 3 copies of an extended abstract (1 single-spaced page), postmarked by March 1, 1990, for refereeing. Include with the abstract the name, address, and telephone number of the corresponding author. Mail to: ATR Poster Session, Neural Networks Conference, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. Authors will be informed of abstract acceptance by March 31, 1990. SITE: The Wang Institute possesses excellent conference facilities on a beautiful 220-acre campus. It is easily reached from Boston's Logan Airport and Route 128. REGISTRATION FEE: Regular attendee--$90; full-time student--$70. Registration fee includes admission to all lectures and poster session, abstract book, one reception, two continental breakfasts, one lunch, one dinner, daily morning and afternoon coffee service. STUDENTS FELLOWSHIPS are available. For information, call (508) 649-9731. TO REGISTER: By phone, call (508) 649-9731; by mail, write for further information to: Neural Networks, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Fri Jan 12 04:46:15 1990 From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (thanasis kehagias) Date: Fri, 12 Jan 90 04:46:15 EST Subject: i hope i am asking this at the right place... Message-ID: a friend of mine wants to set up a serious/fairly big speech+neural nets project $ (say 100-200 K) he asked me about equipment to buy. anyone wants to put in their two cents? i could see a big bucks proposal based around sun workstations and a smaller one based around, say, 386 or 486 Dos machines. i seem to recall the SUN sparc station has a bunch of built-in DSP functions. 
i assume a waveform editor (software), a mike, earphones and amplifier, some kind of A/D - D/A converter and maybe some central file server with a lot of hard disk space to store a speech database. i would not bother all of you with such a marginal message, but the person in question has to meet a deadline for a proposal and he was quite pressed for time. we do not want him to miss the chance to spend all this good money, do we? if anyone has some names and/or ballpark estimates for prices for the above items, and maybe some other stuff i forgot, please mail me .... thanasis

From mike at bucasb.bu.edu Fri Jan 12 02:07:49 1990
From: mike at bucasb.bu.edu (Michael Cohen)
Date: Fri, 12 Jan 90 02:07:49 EST
Subject: Neural Net Course
Message-ID: <9001120707.AA28974@bucasb.bu.edu>

NEURAL NETWORKS: FROM FOUNDATIONS TO APPLICATIONS
May 6--11, 1990

Sponsored by the Center for Adaptive Systems, the Graduate Program in Cognitive and Neural Systems, and the Wang Institute of Boston University with partial support from The Air Force Office of Scientific Research

This in-depth, systematic, 5-day course is based upon the world's leading graduate curriculum in the technology, computation, mathematics, and biology of neural networks. Developed at the Center for Adaptive Systems (CAS) and the Graduate Program in Cognitive and Neural Systems (CNS) of Boston University, twenty-eight hours of the course will be taught by six CAS/CNS faculty. Three distinguished guest lecturers will present eight hours of the course.

COURSE OUTLINE

MAY 7, 1990
-----------
---Morning Session (Professor Stephen Grossberg)
Historical Overview, Content Addressable Memory, Competitive Decision Making, Associative Learning
---Afternoon Session (Professors Michael Jordan (MIT) and Ennio Mingolla)
Combinatorial Optimization, Perceptrons, Introduction to Back Propagation, Recent Developments of Back Propagation

MAY 8, 1990
-----------
---Morning Session (Professors Gail Carpenter and Stephen Grossberg)
Adaptive Pattern Recognition, Introduction to Adaptive Resonance Theory, Analysis of ART 1
---Afternoon Session (Professor Gail Carpenter)
Analysis of ART 2, Analysis of ART 3, Self-Organization of Invariant Pattern Recognition Codes, Neocognitron

MAY 9, 1990
-----------
---Morning Session (Professors Stephen Grossberg and Ennio Mingolla)
Vision and Image Processing
---Afternoon Session (Professors Daniel Bullock, Michael Cohen, and Stephen Grossberg)
Adaptive Sensory-Motor Control and Robotics, Speech Perception and Production

MAY 10, 1990
------------
---Morning Session (Professors Michael Cohen, Stephen Grossberg, and John Merrill)
Speech Perception and Production, Reinforcement Learning and Prediction
---Afternoon Session (Professors Stephen Grossberg and John Merrill and Dr. Robert Hecht-Nielsen, HNC)
Reinforcement Learning and Prediction, Recent Developments in the Neurocomputer Industry

MAY 11, 1990
------------
---Morning Session (Dr. Federico Faggin, Synaptics Inc.)
VLSI Implementation of Neural Networks

TO REGISTER: By phone, call (508) 649-9731; by mail, write for further information to: Neural Networks, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. For further information about registration and STUDENT FELLOWSHIPS, see below.

REGISTRATION FEE: Regular attendee--$950; full-time student--$250. Registration fee includes five days of tutorials, course notebooks, one reception, five continental breakfasts, five lunches, four dinners, daily morning and afternoon coffee service, evening discussion sessions.
STUDENT FELLOWSHIPS supporting travel, registration, and lodging for the Course are available to full-time graduate students in a PhD program. Applications must be postmarked by March 1, 1990. Send curriculum vitae, a one-page essay describing your interest in neural networks, and a letter from a faculty advisor to: Student Fellowships, Neural Networks Course, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879.

From jbower at smaug.cns.caltech.edu Fri Jan 12 12:24:45 1990
From: jbower at smaug.cns.caltech.edu (Jim Bower)
Date: Fri, 12 Jan 90 09:24:45 PST
Subject: Flame
Message-ID: <9001121724.AA15622@smaug.cns.caltech.edu>

I must say that I for one am getting tired of being inundated with literature from the center of "the world's greatest graduate program in the technology, computation, and biology of neural networks". Despite the "renowned" nature of the faculty, and the "extraordinary" apparent range of their expertise, enough is enough. If the problem is a limited number of applicants, maybe someone should lower the price. Certainly, all "industry leading neural architects" should be able to understand that. Or, dare I suggest it, maybe the "world's leading graduate curriculum" should be somewhat modified so that it seems more likely that the subject matter can actually be covered in a "self-contained systematic" fashion. Presumably, faculty on "the cutting edge" "who know the field as only its creators can" would be up to this task without compromising the week of its "rare intellectual excitement".

Jim Bower

From AMR at ibm.com Fri Jan 12 15:04:58 1990
From: AMR at ibm.com (AMR@ibm.com)
Date: Fri, 12 Jan 90 15:04:58 EST
Subject: No subject
Message-ID:

This is Alexis Manaster-Ramer at T.J. Watson. At Stevan Harnad's suggestion, I direct the following question to you: is there a formal result about the equivalence (or, God forbid, nonequivalence) of connectionist models and Turing machines? If so, I would appreciate a reference or a clue towards one. Alexis

From cbond at amber.bacs.indiana.edu Fri Jan 12 16:05:00 1990
From: cbond at amber.bacs.indiana.edu (cbond@amber.bacs.indiana.edu)
Date: 12 Jan 90 16:05:00 EST
Subject: Neural Nets and Targeting
Message-ID:

Remove me immediately from this mailing list. There is quite enough slime around here to put up with without having to be exposed to noxious, militaristic garbage in my mail queue, or the filth who would participate in such trash. If I ever receive anything from this mailing list again, I will be in touch with root at cs.cmu.edu.

From Dave.Touretzky at B.GP.CS.CMU.EDU Fri Jan 12 21:16:05 1990
From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU)
Date: Fri, 12 Jan 90 21:16:05 EST
Subject: Flame
In-Reply-To: Your message of Fri, 12 Jan 90 09:24:45 -0800. <9001121724.AA15622@smaug.cns.caltech.edu>
Message-ID: <5050.632196965@DST.BOLTZ.CS.CMU.EDU>

Jim, have you heard George Lakoff's story about the origin of the Japanese phrase "bellybutton makes tea"? In the Japanese tea ceremony, the water must be heated to a precise temperature just short of boiling. If done correctly, instead of seeing the bubbles and turbulence associated with true boiling, the surface of the water merely rises and falls quickly while remaining smooth. In Japanese society it is impolite to laugh openly at someone, even when they're making a fool of themselves. Instead one laughs inwardly while maintaining a calm exterior.
Since the Japanese see the belly as the metaphorical center of the self, when one is laughing inside while trying to keep a straight face it is the belly that shows one's true feelings. They say the belly gently shakes with suppressed laughter. Hence, "bellybutton makes tea." I just wanted to share this little story with you in case you were wondering why no one else bothers to respond to certain posts. -- Dave From hinton at ai.toronto.edu Sat Jan 13 12:58:35 1990 From: hinton at ai.toronto.edu (Geoffrey Hinton) Date: Sat, 13 Jan 90 12:58:35 EST Subject: email total war Message-ID: <90Jan13.125854est.10519@ephemeral.ai.toronto.edu> I think it would be a pity if the connectionists email facility that Dave Touretzky and CMU have generously provided was cluttered up by a war between the BU camp and its rivals. The neural network community has already suffered a lot from this split, and it would be nice if we could de-escalate it. Maybe some people could refrain from claiming publicly and at great length to be the best group in the world, and, as Dave suggests, others could resist the temptation to publicly object. I have been known to indulge in public polemics myself, but I now think its a mistake. Geoff From rik%cs at ucsd.edu Sat Jan 13 19:01:52 1990 From: rik%cs at ucsd.edu (Rik Belew) Date: Sat, 13 Jan 90 16:01:52 PST Subject: Flame Message-ID: <9001140001.AA02919@roland.UCSD.EDU> Dave, That was absolutely perfect invocation of another culture. I think it applies equally well to some reactions other than laughter (like my reactions to the target recognition conference, for example). That traditional Japanese culture be useful in new-wave Email culture is striking, too. Connectionists has grown into a pretty big, valuable mailing list, and often this means that it is time for a moderator to step in and ... moderate. But I for one find real value in the sometimes-brawling nature of this group and think something would be lost if we took this step, and I would prefer to hit my own delete key rather than having someone else try to do it for me. Rik Belew (rik at cs.ucsd.edu) From Dave.Touretzky at B.GP.CS.CMU.EDU Sun Jan 14 01:39:03 1990 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Sun, 14 Jan 90 01:39:03 EST Subject: instructions to NIPS authors Message-ID: <6163.632299143@DST.BOLTZ.CS.CMU.EDU> I apologize for sending this message to NIPS authors to the entire CONNECTIONISTS list, but it's the most efficient way to reach people quickly. The deadline for NIPS papers to arrive at Morgan Kaufmann is this Wednesday, January 17. There have been some requests for clarifications of the paper format, so here they are. First, although the instructions claim that the paper title is boldface, the LaTeX code for \maketitle actually produces an italic title. Go with the code, not the documentation. Use the italic title. Second, note that the LaTeX \author command doesn't work in the NIPS style file. That's why the instructions say to format the author list manually. This is necessary because the format will differ depending on the number of authors. * For a single-author paper, just center the name and address. Simple. * For a two-author paper where the authors are from different institutions, I suggest the following formatting code: \begin{center} \parbox{2in}{\begin{center} {\bf David S. Touretzky}\\ School of Computer Science\\ Carnegie Mellon University\\ Pittsburgh, PA 15213 \end{center}} \hfill \parbox{2in}{\begin{center} {\bf Deirdre W. 
Wheeler}\\ Department of Linguistics\\ University of Pittsburgh\\ Pittsburgh, PA 15260 \end{center}} \end{center} If the authors are from the same institution, do it this way: \begin{center} {\bf Ajay N. Jain \ \ \ Alex H. Waibel}\\ School of Computer Science\\ Carnegie Mellon University\\ Pittsburgh, PA 15213 \end{center} * For a three-author paper, the instructions advise centering the name and address of the first author, making the second author flush-left, and the third author flush-right. Some people have objected that putting the first author in the middle makes it look like he/she is really the second author. If you wish to deviate from the suggested format for three-author papers, go ahead. Do whatever you think is reasonable. * If you have more than three authors, or some other special problem not covered by the above, use your own best judgement about format. Remember that your paper should be at most 8 pages long, and you need to turn in an indexing form and "Permission to Publish" form along with your camera-ready copy. The proceedings are due out in April. They can be ordered now from Morgan Kaufmann Publishers, Inc., San Mateo, CA. An example of a correct citation to this proceedings is: Barto, A. G., and Sutton, R. S. (1990) Sequential decision problems and neural networks. In D. S. Touretzky (ed.), {\it Advances in Neural Information Processing Systems 2.} San Mateo, CA: Morgan Kaufmann. -- Dave From zeiden at cs.wisc.edu Sun Jan 14 10:59:28 1990 From: zeiden at cs.wisc.edu (Matthew Zeidenberg) Date: Sun, 14 Jan 90 09:59:28 -0600 Subject: email total war Message-ID: <9001141559.AA18442@ai.cs.wisc.edu> I do not agree that people should not publicly object to militarist uses of science. As most people know, Japan is a relatively conformist society. Open discourse is a strength of the West compared to Japan. Many netters (myself included) object to such uses as "Neural Networks in Automatic Target Recognition" and the general prevalence of DOD and ONR funding in CS as opposed to NSF. It leads to a militarist focus. Where's science's "Peace Dividend"? Matt Zeidenberg From noel%CS.EXETER.AC.UK at VMA.CC.CMU.EDU Sun Jan 14 12:27:23 1990 From: noel%CS.EXETER.AC.UK at VMA.CC.CMU.EDU (Noel Sharkey) Date: Sun, 14 Jan 90 17:27:23 GMT Subject: No subject In-Reply-To: AMR@com.ibm.almaden's message of Fri, 12 Jan 90 15:04:58 EST <10496.9001130347@expya.cs.exeter.ac.uk Message-ID: <10907.9001141727@entropy.cs.exeter.ac.uk> I keep hearing that there is a formal result about the equivalence of connect. models and Turing machines, but I have never seen it. I always hear from someone who knows someone else who was told by someone who knew someone else whose friend thought that one of their workmates had once worked with someone who knew of a formal proof by someone whose name they couldn`t quite remember, but they were at one of the big league universities. Please can I see it as well if there is one? Though I am sure I would have seen it by now. noel From jbower at smaug.cns.caltech.edu Sun Jan 14 16:03:55 1990 From: jbower at smaug.cns.caltech.edu (Jim Bower) Date: Sun, 14 Jan 90 13:03:55 PST Subject: Flame Message-ID: <9001142103.AA17201@smaug.cns.caltech.edu> Maybe it wasn't clear. I, like many others, judging from my recent personal email, tolerated in silence the first several postings of these announcements on the connectionists mailing list. 
Further, I also have no interest in cluttering connectionists with nontechnical garbage, and believe that a free and open discourse should be maintained. While the form of my objection may not have been entirely appropriate, the intent was precisely to induce some self-regulation in those who, in my view, were taking advantage of both connectionists and even the larger field. Without self-regulation, free forums don't work. Finally, I must object to the suggestion that my reaction has to do with factionalism. Any event sponsored by any institution whose advertising is insulting to other workers in the field should not be tolerated no matter what one thinks of one's belly button. That said, I suggest we all go back to work.

Jim Bower (ommmmmmmmmmmm...)

From honavar at cs.wisc.edu Sun Jan 14 16:38:58 1990
From: honavar at cs.wisc.edu (Vasant Honavar)
Date: Sun, 14 Jan 90 15:38:58 -0600
Subject: Turing Equivalence of Connectionist Nets
Message-ID: <9001142138.AA26012@goat.cs.wisc.edu>

On the Turing-equivalence of connectionist networks:

Warren S. McCulloch & Walter H. Pitts, 1943. "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133. Chicago: University of Chicago Press. (Reprinted in "Embodiments of Mind", 1988. Cambridge, MA: MIT Press).

There is also a paper by Kleene in a collected volume titled "Automata Studies" published by the Princeton University Press (sorry, I don't have the exact citation) in the '50s or '60s which addresses similar issues.

The basic argument is: Consider a net which includes (but is not limited to) a set of nodes (possibly infinite in number) that serve as input and/or output nodes (a role akin to that of the infinite tape in the Turing machine). Any such net can compute only those numbers that can be computed by the Turing machine. Furthermore, each of the numbers computable by the Turing machine can be computed by such a net. However, connectionist networks that we build are finite machines, just as the general purpose computers modeled on Turing machines are finite machines, and are therefore equivalent to Turing machines with finite tapes.

Vasant Honavar
honavar at cs.wisc.edu

From kube%cs at ucsd.edu Sun Jan 14 21:16:49 1990
From: kube%cs at ucsd.edu (Paul Kube)
Date: Sun, 14 Jan 90 18:16:49 PST
Subject: on the Turing computability of networks
In-Reply-To: Noel Sharkey's message of Sun, 14 Jan 90 17:27:23 GMT <10907.9001141727@entropy.cs.exeter.ac.uk>
Message-ID: <9001150216.AA17819@kokoro.UCSD.EDU>

   From: Noel Sharkey
   Date: Sun, 14 Jan 90 17:27:23 GMT

   I keep hearing that there is a formal result about the equivalence of connect. models and Turing machines, but I have never seen it.

If you don't restrict the class of connectionist models and consider networks whose state transitions are solutions to algebraic differential equations (this class is universal for analog computation in some sense; see references below), then it's pretty obvious that a connectionist network can simulate a universal digital machine. For example, the Sun on my desk is implemented as an analog network. Going the other way is a little trickier, since you have to make decisions about how to represent real numbers on Turing machine tapes etc., but there are reasonable ways to do this.
Here the main results are Pour-El's that there are ADE's defined in terms of computable functions which have UNcomputable solutions from computable initial conditions (the shockwave solution to the wave equation has this property, in fact), and Vergis, Steiglitz and Dickinson's that if the second derivative of the solution is bounded, then the system is efficiently Turing simulatable. So if you bound second derivatives of network state (which amounts to bounding things like acceleration, force, energy, bandwidth, etc. in physical systems), then networks and Turing machines are equivalent. Beyond this, one would like a hierarchy of connectionist models ordered by computational power along the lines of what you get in classical automata theory, but I don't know of anything done on this since _Perceptrons_ (so far as I'm aware, none of the recent universal-approximator results separate any classes). --Paul Kube kube at cs.ucsd.edu ----------------------------------------------------------------------------- References: %A Anastasios Vergis %A Kenneth Steiglitz %A Bradley Dickinson %T The complexity of analog computation %J Mathematics and Computers in Simulation %V 28 %D 1986 %P 91-113 %A Marian Boykan Pour-El %A Ian Richards %T Noncomputability in models of physical phenomena %J International Journal of Theoretical Physics %V 21 %D 1982 %P 553-555 %X lots more interesting stuff in this number of the journal %A Marian Boykan Pour-El %A Ian Richards %T A computable ordinary differential equation which possesses no computable solution %J Ann. Math. Logic %V 17 %D 1979 %P 61-90 %A Lee A. Rubel %T A universal differential equation %J Bulletin of the American Mathematical Society %V 4 %D 1981 %P 345-349 %A Marian Boykan Pour-El %A Ian Richards %T The wave equation with computable initial data such that its unique solution is not computable %J Advances in Mathematics %V 39 %D 1981 %P 215-239 %A A. Grzegorczyk %T On the definitions of computable real continuous functions %J Fundamenta Mathematicae %V 44 %D 1957 %P 61-71 %A A. M. Turing %T On computable numbers, with an application to the Entscheidungsproblem %J Proceedings of the London Mathematics Society %V 42 %N 2 %D 1936-7 %P 230-265 From Dave.Touretzky at B.GP.CS.CMU.EDU Sun Jan 14 21:30:52 1990 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Sun, 14 Jan 90 21:30:52 EST Subject: No subject In-Reply-To: Your message of Sun, 14 Jan 90 17:27:23 +0000. <10907.9001141727@entropy.cs.exeter.ac.uk> Message-ID: <6961.632370652@DST.BOLTZ.CS.CMU.EDU> "Connectionist models" is too vague a term to use when discussing Turing equivalence. Let's agree that AND, OR, and NOT gates are sufficiently close to simulated "neurons" for our purposes. Then a connectionist net composed of thse "neurons" can simulate a Vax. If you believe a Vax is Turing-equivalent, then connectionist nets are too. Of course Vaxen don't really have infinite-length tapes. That's okay; connectionist nets don't have an infinite number of neurons either. Jordan Pollack has shown that a connectionist net with a finite number of units can simulate a Turing machine if one assumes the neurons can perform infinite-precision arithmetic. The trick is to use the neuron activation levels to simulate a two-counter machine, which is formally equivalent to a Turing machine. (See any basic automata theory textbook for the proof.) 
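One way to picture the counter trick is the following minimal sketch. It assumes idealized units with exact (unboundedly precise) rational activations and is only an illustration, not the construction from Pollack's thesis; the names (CounterUnit, etc.) are made up for this example. A counter holding the value n is represented by the activation 2**-n, so increment, decrement, and the zero test each reduce to a single multiplication or threshold comparison:

from fractions import Fraction

class CounterUnit:
    # Idealized unit: counter value n is held as the exact activation 2**-n.
    def __init__(self):
        self.a = Fraction(1)              # n = 0  <=>  activation 1

    def increment(self):                  # n -> n+1 : halve the activation
        self.a = self.a / 2

    def decrement(self):                  # n -> n-1 : double the activation (only if n > 0)
        if not self.is_zero():
            self.a = self.a * 2

    def is_zero(self):                    # threshold test: activation > 1/2 iff n == 0
        return self.a > Fraction(1, 2)

# Two such units give a two-counter machine, hence Turing power,
# but only because the activation can be unboundedly precise.
c = CounterUnit()
for _ in range(5):
    c.increment()
print(c.a, c.is_zero())                   # prints: 1/32 False

The point of the sketch is that the unbounded tape has been traded for unbounded precision in a fixed number of units, which is exactly the assumption being debated in this thread.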
Now, if you don't like AND gates as neurons, and you don't like units that represent infinite precision numbers as neurons, then you have to define what sorts of neurons you do like. My guess is that any kind of neuron you define can be used to build boolean AND/OR/NOT gates, and from there one proceeds via the Vax argument to Turing equivalence. A different kind of question is whether a connectionist net with a particular architecture can simulate a Turing machine in a particular way. For instance, can DCPS, Touretzky and Hinton's Distributed Connectionist Production System, simulate a Turing machine by representing the cells of the tape as working memory elements? The answer to this question is left as an exercise for the reader. -- Dave From jose at neuron.siemens.com Mon Jan 15 10:55:14 1990 From: jose at neuron.siemens.com (Steve Hanson) Date: Mon, 15 Jan 90 10:55:14 EST Subject: utms and connex Message-ID: <9001151555.AA06772@neuron.siemens.com.siemens.com> Another sort of interesting question concerning turing equivalence is learning. The sort of McCulloch and Pitts question and others that are usually posed concerns representation, can a net "with cycles" (cf. M&P, 1943) represent a turing machine? Another question along these lines is can a net learn to represent a turing machine (see williams and zipser for example) and under what conditions etc... anybody thought about that? Are there standard references from the learnability/complexity literature or more recent ones? Steve From pollack at cis.ohio-state.edu Mon Jan 15 13:27:55 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Mon, 15 Jan 90 13:27:55 EST Subject: Neuring Machine In-Reply-To: Dave.Touretzky@B.GP.CS.CMU.EDU's message of Sun, 14 Jan 90 21:30:52 EST <6961.632370652@DST.BOLTZ.CS.CMU.EDU> Message-ID: <9001151827.AA00263@toto.cis.ohio-state.edu> McCulloch and Pitts showed that their network could act as the finite state control of a Turing Machine, acting upon an EXTERNAL tape. How could you get the unbounded tape INSIDE a machine? For integer machines, assume unbounded registers. For cellular automata, assume an unbounded lattice. For connectionist models, assume that two units could encode the tape as binary rationals. As Dave alluded to, in my May 1987 thesis*, I showed that a Turing Machine could be constructed out of a network with rational weights and outputs, linear combinations, thresholds, and multiplicative connections. This was a demonstration of the sufficiency of these elements in combination, not a formal proof of the necessity of any single element. (But, without multiplicative connections, it could take an unbounded amount of time to gate rational outputs, which is necessary to move both ways on the tape in response to threshold logic; without thresholds, its tough to make a decision; and without pure linear combinations, its hard not to lose information...) This is just like the argument that with a couple of registers which can hold unbounded integers, and the operators Increment, Decrement, and Branch-on-zero, a stored program machine can universally compute. I think that other folk, such as Szu and Abu-Mostafa have also worked on the theoretical two-neuron tape. Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 * CS Dept, Univ. of Illinois. Available for $6 as MCCS-87-100 from Librarian/Computing Research Lab/NMSU/Las Cruces, NM 88003. 
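To give a feel for how an unbounded counter (and, through the two-counter construction, an unbounded tape) can live inside a single bounded activation, here is a toy sketch. The encoding is my own for illustration and is not the construction in the thesis: a counter holding n is stored as the rational activation 2**-n, increment and decrement are multiplications by fixed rational weights, and branch-on-zero is a threshold -- just the ingredient list above (rational values, multiplicative connections, thresholds, linear pieces).

from fractions import Fraction

HALF, TWO = Fraction(1, 2), Fraction(2)

def inc(act):
    return act * HALF                        # count n -> n+1, i.e. 2**-n -> 2**-(n+1)

def dec(act):
    return act * TWO                         # count n -> n-1 (only applied when n > 0)

def is_zero(act):
    return act > Fraction(3, 4)              # threshold "unit": fires iff the count is 0

def decode(act):
    return act.denominator.bit_length() - 1  # read n back out, for printing only

if __name__ == "__main__":
    a, b = Fraction(1, 2**5), Fraction(1)    # counter A holds 5, counter B holds 0
    while not is_zero(a):                    # branch-on-zero is just a threshold test
        a, b = dec(a), inc(b)                # transfer A into B one step at a time
    print(decode(a), decode(b))              # prints: 0 5

Two such units plus a finite control already give the two-counter machine Dave mentioned; the unboundedness is paid for in precision rather than in size, which is exactly the trade-off under debate here.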
From rr%eusip.edinburgh.ac.uk at NSFnet-Relay.AC.UK Mon Jan 15 11:18:30 1990 From: rr%eusip.edinburgh.ac.uk at NSFnet-Relay.AC.UK (Richard Rohwer) Date: Mon, 15 Jan 90 16:18:30 GMT Subject: NNs & TMs Message-ID: <6693.9001151618@eusip.ed.ac.uk> One way to emulate a Turing machine with a neural net is to hand-build little subnets for NAND gates and FLIP FLOPS, and then wire them up to do the job. This observation led me to contemplate a project no student seems to want to take up (and I don't want to do it anytime soon)-- so I'll put it up for grabs in case anyone would be keen on it. It should be tedious but straightforward to write a compiler which turns C-code (or your favorite language) into weight matrices. Why bother? Well, training algorithms are still in a pretty primitive state when it comes to training nets to do complex temporal tasks; eg., parsing. A weight matrix compiler would at least provide an automatic way to initialize weight matrices to do anything a conventional program can do, albeit without using distributed representations in an interesting or efficient way. But perhaps something like minimizing an entropy measure from such a starting point could lead to something interesting. Richard Rohwer JANET: rr at uk.ac.ed.eusip Centre for Speech Technology Research ARPA: rr%ed.eusip at nsfnet-relay.ac.uk Edinburgh University BITNET: rr at eusip.ed.ac.uk, 80, South Bridge rr%eusip.ed.UKACRL Edinburgh EH1 1HN, Scotland UUCP: ...!{seismo,decvax,ihnp4} !mcvax!ukc!eusip!rr From hendler at cs.UMD.EDU Mon Jan 15 11:07:21 1990 From: hendler at cs.UMD.EDU (Jim Hendler) Date: Mon, 15 Jan 90 11:07:21 -0500 Subject: No subject Message-ID: <9001151607.AA29919@dormouse.cs.UMD.EDU> I'm a little confused by Dave's argument, and some of the others I've seen. There is a difference (and a formal one) between performing computation and being Turing equivalent. It is easily done (and has been in the past) to build a small general purpose computer out of tinkertoys (a tinkertoy computer was on display in the computer museum in Boston for a while -- pretty limited, but a computer none the less). Does this mean that tinkertoys are Turing complete? My Sun, built out of wires and etc. is also not equivalent to a Turing machine, it has limited state, etc. There are very definitely Turing computable functions that my Sun can't compute. To say that a Turing machine could be built from connectionist components is not to argue that they are formally equivalent. The only demonstration of equivalence I've seen is the result Dave cites from Jordan Pollack (see his thesis) in which he shows a reduction of a Turing machien to a particular connectionist network (given infinite precision integers). This is not a "simulation" of a TM, but rather a full-fledged reduction. So Dave is only "almost" right (I suspect just using language loosely), the question is whether connectionist networks with particular architectures are Turing equivalents -- in the formal sense (not whether they "simulate" a TM in the weaker sense that tinkertoys do). Anyone know of any proofs of this class? 
(Note, it is important to recognize that any such thing must be able to represent the arbitrarily large tape of the Turing machine) -Jim Hendler From rudnick at cse.ogi.edu Mon Jan 15 20:03:14 1990 From: rudnick at cse.ogi.edu (Mike Rudnick) Date: Mon, 15 Jan 90 17:03:14 PST Subject: tech report/bib: GA/ANN Message-ID: <9001160103.AA07080@cse.ogi.edu> The following tech report/bibliography is available: A Bibliography of the Intersection of Genetic Search and Artificial Neural Networks Mike Rudnick Department of Computer Science and Engineering Oregon Graduate Institute Technical Report No. CS/E 90-001 January 1990 This is a fairly informal bibliography of work relating artificial neural networks (ANNs) and genetic search. It is a collection of books, papers, presentations, reports, and the like, which I've come across in the course of pursuing my interest in using genetic search techniques for the design of ANNs and in operating an electronic mailing list on GA/ANN. The bibliography contains no references which I feel relate solely to ANNs or GAs (genetic algorithms). To receive a copy, simply request the report by name and number; send email to kelly at cse.ogi.edu or smail to: Kelly Atkinson Department of Computer Science and Engineering Oregon Graduate Institute 19600 NW Von Neumann Dr.
Beaverton, OR 97006-1999 Mike Rudnick From INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU Tue Jan 16 00:59:00 1990 From: INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU) Date: Tue, 16 Jan 90 00:59 EST Subject: List Address, ICJNN, and Turing Machines Message-ID: I am posting this for Jurgen Schmidhuber (schmidhu at tumult.informatik.tu-muenchen.de). He told me at ICJNN that he was getting this mailing list but was having problems sending submissions to it...is there a special addressing method he needs from Germany besides the .edu? Not to waste bandwith, I'll add that I seem to remember neural nets being trained to be Turing Machines in Zisper (et al.)'s work on recurrent network backpropogation learning. -Thomas Edwards tedwards at cmsun.nrl.navy.mil ins_atge at jhuvms.BITNET ins_atge at jhunix.hcf.jhu.edu From terry%sdbio2 at ucsd.edu Tue Jan 16 01:12:54 1990 From: terry%sdbio2 at ucsd.edu (Terry Sejnowski) Date: Mon, 15 Jan 90 22:12:54 PST Subject: Summer Course on Learning Message-ID: <9001160612.AA07677@sdbio2.UCSD.EDU> Summer Course on COMPUTATIONAL NEUROSCIENCE: LEARNING AND MEMORY July 14-17, 1990 Cold Spring Harbor Laboratory Organizers: Michael Jordan, MIT Terrence Sejnowski, Salk Institute and UCSD This is an intensive laboratory and lecture course that will examine computational approaches to problems in learning and memory. Problems and techniques from both neuroscience and cognitive science will be covered, including learning procedures that have been developed recently for neural network models. The course will include a computer-based laboratory so that students can actively explore computational issues. Students will be able to informally interact with the lecturers. A strong grounding in mathematics and previous exposure to neurobiology is essential for students. Partial list of Instructors: Richard Sutton Yan LeCun David Rumelhart Tom Brown Jack Byrne Richard Durbin Gerald Tesauro Stephen Lisberger Ralph Linsker John Moody Nelson Donnegan Chris Atkeson Tomaso Poggio DEADLINE: MARCH 15, 1990 Applications and additional information may be obtained from: Registrar Cold Spring Harbor Laboratory Cold Spring Harbor, New York 11724 Tuition, Room and Board: $1,390. ----- From kube%cs at ucsd.edu Tue Jan 16 02:15:59 1990 From: kube%cs at ucsd.edu (Paul Kube) Date: Mon, 15 Jan 90 23:15:59 PST Subject: Neuring Machine Message-ID: <9001160715.AA18893@kokoro.UCSD.EDU> Date: Mon, 15 Jan 90 13:27:55 EST From: Jordan B Pollack How could you get the unbounded tape INSIDE a machine? For integer machines, assume unbounded registers. For cellular automata, assume an unbounded lattice. For connectionist models, assume that two units could encode the tape as binary rationals. Or assume an unbounded network. Date: Mon, 15 Jan 90 11:07:21 -0500 From: Jim Hendler I'm a little confused by Dave's argument, and some of the others I've seen. My Sun, built out of wires and etc. is not equivalent to a Turing machine, it has limited state, etc. See above. To say that a Turing machine could be built from connectionist components is not to argue that they are formally equivalent. Showing how you can simulate the operation of a universal TM in another machine is usually how you show that TM's are no more powerful than the other machine. The only demonstration of equivalence I've seen is the result Dave cites from Jordan Pollack (see his thesis) in which he shows a reduction of a Turing machien to a particular connectionist network (given infinite precision integers). 
This is not a "simulation" of a TM, but rather a full-fledged reduction. Why you think it's okay to suppose you have infinite-precision neurons but not okay to suppose you have infinitely many neurons is a mystery to me, since each seems about as impossible as the other. --Paul kube at cs.ucsd.edu From rudnick at cse.ogi.edu Tue Jan 16 12:51:51 1990 From: rudnick at cse.ogi.edu (Mike Rudnick) Date: Tue, 16 Jan 90 09:51:51 PST Subject: ANN fault tol. Message-ID: <9001161751.AA12369@cse.ogi.edu> Below is a synopsis of the references/material I received in response to an earlier request for pointers to work on the fault tolerance of artificial neural networks. Although there has been some work done relating directly to ANN models, most of the work appears to have been motivated by VLSI implementation and fault tolerance concerns. Apparently, and this is speculation on my part, the folklore that artificial neural networks are fault tolerant derives mostly from the fact that they resemble biological neural networks, which generally don't stop working when a few neurons die here and there. Although it looks like I'm not going to be doing ANN fault tolerance as my dissertation topic, I can't help but feel this line of research contains a number of outstanding phd topics. Mike Rudnick Computer Science & Eng. Dept. Domain: rudnick at cse.ogi.edu Oregon Graduate Institute (was OGC) UUCP: {tektronix,verdix}!ogicse!rudnick 19600 N.W. von Neumann Dr. (503) 690-1121 X7390 (or X7309) Beaverton, OR. 97006-1999 ----- From: platt at synaptics.com (John Platt) Well, one of the original papers about building a neural network in analog VLSI had a chip where about half of then synapses were broken, but the chip still worked. Look at ``VLSI Architecutres for Implementation of Neural Networks'' by Massimo A. Sivilotti, Michael Emerling, and Carver A. Mead, in ``Neural Networks for Computing'', AIP Conference Proceedings 151, John S. Denker, ed., pp. 408-413 ----- From: Jonathan Mills You might be interested in a paper submitted to the 20th Symposium on Multiple-Valued Logic titled "Lukasiewicz Logic Arrays", describing work done by M. G. Beavers, C. A. Daffinger and myself. These arrays (LLAs for short) can be used with other circuit components to fabricate neural nets, expert systems, fuzzy inference engines, sparse distributed memories and so forth. They are analog circuits, massively parallel, based on my work on inference cellular automata, and are inherently fault-tolerant. In simulations I have conducted, the LLAs produce increasingly noisy output as individual processors fail, or as groups of processors randomly experience stuck-at-one and/or stuck-at-zero faults. While we have much more work to do, it does appear that with some form of averaging the output of an LLA can be preserved without noticeable error with up to one-third of the processors faulty (as long as paths exist from some inputs to the output). If the absolute value of the output is taken, a chain of pulses results so that a failing LLA will signal its graceful degradation. VLSI implementations of LLAs are described in the paper, with an example device submitted to MOSIS, and due back in January 1990. We are aware of the work of Alspector et. al. and Graf et. al., which is specific to neural architectures. Our work is more general in that it arises from a logic with both algebraic and logical semantics, lending the dual semantics (and its generality) to the resulting device. 
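As a generic illustration of the style of fault-injection experiment described in several of these replies (silence units at random, then watch performance fall off gradually rather than catastrophically), here is a small self-contained sketch. It is my own toy setup with made-up parameters, not the LLA simulation or any of the studies cited; it stores patterns as distributed codes read out by best match, so that no single unit is critical.

import random

random.seed(0)
N_UNITS, N_PATTERNS = 200, 10
codes = [[random.choice((-1, 1)) for _ in range(N_UNITS)] for _ in range(N_PATTERNS)]

def recall(probe, alive):
    """Best-matching stored code, using only the surviving units."""
    def match(code):
        return sum(code[i] * probe[i] for i in alive)
    return max(range(N_PATTERNS), key=lambda p: match(codes[p]))

for frac_dead in (0.0, 0.25, 0.5, 0.75, 0.9):
    alive = [i for i in range(N_UNITS) if random.random() > frac_dead]
    correct = sum(recall(codes[p], alive) == p for p in range(N_PATTERNS))
    print("%2d%% of units stuck at zero: %d/%d patterns recalled"
          % (int(frac_dead * 100), correct, N_PATTERNS))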
LLAs can also be integrated with the receptor circuits of Mead, leading to a design project here for a single circuit that emulates the first several levels of the visual system, not simply the retina. This is almost necessary because I can put over 2,000 processors on a single chip, but haven't the input pins to drive them! Thus, a chip that uses fewer processors with the majority of inputs generated on chip is quite attractive -- especially since even with faults I'll still get a valid result from the computational part of the device. Sincerely, Jonathan Wayne Mills Assistant Professor Computer Science Department Indiana University Bloomington, Indiana 47405 (812) 331-8533 ----- From: risto at CS.UCLA.EDU (Risto Miikkulainen) I did a brief analysis of the fault tolerance of distributed representations. In short, as more units are removed from the representation, the performance degrades linearly. This result is documented in a paper I submitted to Cognitive Science a few days ago: Risto Miikkulainen and Michael G. Dyer (1989). Natural Language Processing with Modular Neural Networks and Distributed Lexicon. Some preliminary results are mentioned in: @InProceedings{miikkulainen:cmss, author = "Risto Miikkulainen and Michael G. Dyer", title = "Encoding Input/Output Representations in Connectionist Cognitive Systems", booktitle = "Proceedings of the 1988 Connectionist Models Summer School", year = "1989", editor = "David S. Touretzky and Geoffrey E. Hinton and Terrence J. Sejnowski", publisher = KAUF, address = KAUF-ADDR, } ----- "Implementation of Fault Tolerant Control Algorithms Using Neural Networks", systematix, Inc., Report Number 4007-1000-08-89, August 1989. ----- From: kddlab!tokyo-gas.co.jp!hajime at uunet.uu.net > " A study of high reliable systems > against electric noises and element failures " > > -- Apllication of neural network systems -- > > ISNCR '89 means "International Symposium on Noise and Clutter Rejection > in Radars and Image Processing in 1989". > It was held in Kyoto, JAPAN from Nov.13 to Nov.17. Hajime FURUSAWA JUNET: hajime at tokyo-gas.co.jp Masayuki KADO JUNET: kado at tokyo-gas.co.jp Research & Development Institute Tokyo Gas Co., Ltd. 1-16-25 Shibaura, Minato-Ku Tokyo 105 JAPAN ----- From: Mike Carter "Operational Fault Tolerance of CMAC Networks", NIPS-89, by Mikeael J. Carter, Frank Rudolph, and Adam Nucci, University of New Hampshire Mike Carter also says he has a non-technical overview of NN fault tolerance which he wrote some time ago which contains references to papers which have some association with fault tolerance (although only 1 of which had fault tolerance as its focus). ----- From: Martin Emmerson I am working on simulating faults in neural-networks using a program running on a Sun (Unix and C). I am particularly interested in qualitative methods for assessing performance of a network and also faults that might occur in a real VLSI implementation. From pollack at cis.ohio-state.edu Tue Jan 16 12:33:33 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Tue, 16 Jan 90 12:33:33 -0500 Subject: Neuring Machine In-Reply-To: Paul Kube's message of Mon, 15 Jan 90 23:15:59 PST <9001160715.AA18893@kokoro.UCSD.EDU> Message-ID: <9001161733.AA07782@giza.cis.ohio-state.edu> >> Why you think it's okay to suppose you have infinite-precision neurons >> but not okay to suppose you have infinitely many neurons is a mystery to me, >> since each seems about as impossible as the other. 
>> --Paul >> kube at cs.ucsd.edu There are two reasons not to assume an unbounded network: 1) Each new unit doesnt just add more memory, but also adds "control." 2) The Neuron Factory would still be "outside" the system, which is the original problem with McPitt's tape. Also, there IS a difference between infinite and unbounded (but still finite in practice). Various proofs of the "universal approximation" of neural networks (such as White, et al) depend on an unbounded (but not infinite) number of hidden units. Finally, there is also a difference between a theoretical argument about competency and a practical argument about what machines can be physically built. Since, as someone in AI, I have always simulated every connectionist model I've researched (including the Neuring machine), Paul Kube (along with several readers of my thesis) seemed to take my theoretical argument (about a sufficient set of primitive elements) as a programme of building neural networks in a physically impossible and very stilted fashion. Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 From hare at amos.ucsd.edu Tue Jan 16 13:28:35 1990 From: hare at amos.ucsd.edu (Mary Hare) Date: Tue, 16 Jan 90 10:28:35 PST Subject: Technical report available Message-ID: <9001161828.AA22358@amos.ucsd.edu> "The Role of Similarity in Hungarian Vowel Harmony: A connectionist account" Technical Report CRL-9004 Mary Hare Department of Linguistics & Center for Research in Language Over the last 10 years, the assimilation process referred to as vowel harmony has served as a test case for a number of proposals in phonological theory. Current autosegmental approaches successfully capture the intuition that vowel harmony is a dynamic process involving the interaction of a sequence of vowels; still, no theoretical analysis has offered a non-stipulative account of the inconsistent behavior of the so-called "transparent", or disharmonic, segments. The current paper proposes a connectionist processing account of the vowel harmony phenomenon, using data from Hungarian. The strength of this account is that it demonstrates that the same general principle of assimilation which underlies the behavior of the "harmonic" forms accounts as well for the apparently exceptional "transparent" cases, without stipulation. The account proceeds in three steps. After presenting the data and current theoretical analyses, the paper describes the model of sequential processing introduced by Jordan (1986), and motivates this as a model of assimilation processes in phonology. The paper then presents the results of a series of parametric studies that were run with this model, using arbitrary bit patterns as stimuli. These results establish certain conditions on assimilation in a network of this type. Finally, these findings are related to the Hungarian data, where the same conditions are shown to predict the correct pattern of behavior for both the regular harmonic and irregular transparent vowels. ---------------------------------------- Copies of this report may be obtained by sending an email request for TR CRL-9004 to 'yvonne at amos.ucsd.edu', or surface mail to the Center for Research in Language, C-008; University of California, San Diego; La Jolla CA 92093. 
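For readers who have not seen the Jordan (1986) sequential architecture the report builds on, here is a minimal sketch of the wiring: the state units hold a decayed copy of the network's previous output, fed back as extra input on the next time step. The weights below are random and untrained and the layer sizes are arbitrary; this shows only the recurrence scheme, not the vowel-harmony model itself.

import math, random

random.seed(1)
N_IN, N_HID, N_OUT, DECAY = 4, 6, 3, 0.5

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def layer(weights, vec):
    """One layer of logistic units: out_i = sigmoid(sum_j w_ij * vec_j)."""
    return [1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(row, vec))))
            for row in weights]

W_hid = rand_matrix(N_HID, N_IN + N_OUT)     # hidden layer sees input plus state units
W_out = rand_matrix(N_OUT, N_HID)

def run_sequence(inputs):
    state = [0.0] * N_OUT
    outputs = []
    for x in inputs:
        hidden = layer(W_hid, x + state)
        out = layer(W_out, hidden)
        state = [DECAY * s + o for s, o in zip(state, out)]   # output fed back with decay
        outputs.append(out)
    return outputs

if __name__ == "__main__":
    seq = [[random.choice((0.0, 1.0)) for _ in range(N_IN)] for _ in range(5)]
    for t, out in enumerate(run_sequence(seq)):
        print(t, [round(o, 2) for o in out])

Because the feedback comes from the output layer rather than the hidden layer, information that the teaching signal filters out of the output is unavailable later on, which is the contrast with Elman-style hidden-state recurrence drawn later in this thread.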
From kube%cs at ucsd.edu Tue Jan 16 18:44:38 1990 From: kube%cs at ucsd.edu (Paul Kube) Date: Tue, 16 Jan 90 15:44:38 PST Subject: Neuring Machine In-Reply-To: Jordan B Pollack's message of Tue, 16 Jan 90 12:33:33 -0500 <9001161733.AA07782@giza.cis.ohio-state.edu> Message-ID: <9001162344.AA19527@kokoro.UCSD.EDU> Date: Tue, 16 Jan 90 12:33:33 -0500 From: Jordan B Pollack There are two reasons not to assume an unbounded network: 1) Each new unit doesnt just add more memory, but also adds "control." Yes, you'd get two different connectionist models in the two cases (inifinite size vs. infinite precision). But they'd both be connectionist models, more or less equally abstract, so for arguments about theoretical reduction of TM's to networks they seem prima facie as good. That's all I was getting at. 2) The Neuron Factory would still be "outside" the system, which is the original problem with McPitt's tape. It seems to me the issue turns on whether scaling the machine up to handle bigger problems makes the machine of a different class. So usually everybody agrees Jim's Sun is a universal machine; adding more memory (and maybe an addressing mode to the 68020) is "trivial". Adding unbounded memory to a finite state machine, or another stack to a PDA, are nontrivial changes. I would think that adding more neurons to a net keeps it a neural net, though you could put restrictions on the nets you're interested in to prevent that. Since no natural computational classification of nets is yet known, how you define your model class is up to you; but the original question seemed to be about Turing equivalence of unrestricted connectionist networks. Also, there IS a difference between infinite and unbounded (but still finite in practice). Various proofs of the "universal approximation" of neural networks (such as White, et al) depend on an unbounded (but not infinite) number of hidden units. Isn't it just a matter of how you order the quantifiers? Every TM computation requires only finite tape, but no finite tape will suffice for every TM computation. Similarly in White et al every approximation bound on a Borel-measurable function can be satisfied with a finite net, but no finite net can achieve every approximation bound on every function. Paul Kube (along with several readers of my thesis) seemed to take my theoretical argument (about a sufficient set of primitive elements) as a programme of building neural networks in a physically impossible and very stilted fashion. Sorry, I didn't mean to imply that you were trying to do anything impossible, or even stilted! I think that results on computational power of abstract models are interesting, and that questions about their relation to practical machines are useful to think about. But in the absence of any more concrete results, maybe we should give it a rest, and get back to discussing advertising and marketing strategies for our graduate programs. :-) --Paul kube at cs.ucsd.edu From dfausett at zach.fit.edu Tue Jan 16 10:55:03 1990 From: dfausett at zach.fit.edu ( Donald W. Fausett) Date: Tue, 16 Jan 90 10:55:03 EST Subject: utms and connex Message-ID: <9001161555.AA24092@zach.fit.edu> Are you familiar with the following paper? It may address some of your questions. Max Garzon and Stan Franklin, "Neural Computability II", IJCNN, v. 1, 631-637, 1989. The authors claim to present a general framework "... 
within which the computability of solutions to problems by various types of automata networks (neural networks and cellular automata included) can be compared and their complexity analized." -- Don Fausett From gary%cs at ucsd.edu Tue Jan 16 20:58:28 1990 From: gary%cs at ucsd.edu (Gary Cottrell) Date: Tue, 16 Jan 90 17:58:28 PST Subject: List Address, ICJNN, and Turing Machines Message-ID: <9001170158.AA12394@desi.UCSD.EDU> Zipser & Williams trained a net to be the Finite State control of a turing machine, something which is quite different. I did something like that in my paper with Fu-Sheng Tsung in IJCNN89. We trained a network to learn to be a while loop with an if-then in it. The network was adding multi-digit numbers. We also showed that Elman's networks are more powerful than Jordan's because a n output-recurrent network can forget things about its input that are not reflected on its output. I.e., the output, due to the teaching signal, may filter information that you need. This was easily demonstrated by reversing the order of two statements in the while loop, which turned it into a program that one could learn and not the other. gary cottrell 619-534-6640 Sec'y: 619-534-5288 FAX: 619-534-7029 Computer Science and Engineering C-014 UCSD, La Jolla, Ca. 92093 gary at cs.ucsd.edu (ARPA) {ucbvax,decvax,akgua,dcdwest}!sdcsvax!gary (USENET) gcottrell at ucsd.edu (BITNET) From DUFOSSE%FRMOP11.BITNET at VMA.CC.CMU.EDU Wed Jan 17 04:41:57 1990 From: DUFOSSE%FRMOP11.BITNET at VMA.CC.CMU.EDU (DUFOSSE Michel) Date: Wed, 17 Jan 90 09:41:57 GMT Subject: help please Message-ID: I cannot connect to NEURON-REQUEST at HPLAB.HP.COM in order to ask for registration to NEURON mailing list ? Would you help me please? thak thank you (DUFOSSE at FRMOP11.BITNET) From slehar at bucasb.bu.edu Wed Jan 17 16:43:20 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Wed, 17 Jan 90 16:43:20 EST Subject: Technical report available In-Reply-To: connectionists@c.cs.cmu.edu's message of 17 Jan 90 03:39:03 GM Message-ID: <9001172143.AA00899@thalamus.bu.edu> Az Ipafai papnak fa pipaya van azert az Ipafai papi pipa papi fa pipa! "the priest of Ipafa has a wooden pipe therefore the Ipafay priestly pipe is a priestly wooden pipe!" From hendler at cs.umd.edu Wed Jan 17 14:04:34 1990 From: hendler at cs.umd.edu (Jim Hendler) Date: Wed, 17 Jan 90 14:04:34 -0500 Subject: Turing Machines and Conenctionist networks Message-ID: <9001171904.AA03022@dormouse.cs.UMD.EDU> For what it is worth, I've just been having a chat with my friend down the hall, a learning theorist. We've sketched out a proof that shows that non-recurrent back-propagation learning cannot be Turing equivalent (we can show a class of Turing computable functions which such a machine could not learn - this is even assuming perfect generalization from the training set to an infinite function). Recurrent BP might or might not, depending on details of the learning algorithms which we'll have to think about. cheers Jim H. From schraudo%cs at ucsd.edu Thu Jan 18 13:08:17 1990 From: schraudo%cs at ucsd.edu (Nici Schraudolph) Date: Thu, 18 Jan 90 10:08:17 PST Subject: Turing Machines and Conenctionist networks Message-ID: <9001181808.AA08107@beowulf.UCSD.EDU> >From: Jim Hendler >We've sketched out a proof that shows that non-recurrent >back-propagation learning cannot be Turing equivalent (we >can show a class of Turing computable functions which such >a machine could not learn [...] 
Wait a minute - did anybody ever claim that backprop could LEARN any Turing-computable function? It seems clear that this is not the case: given that backprop is a gradient descent method, all you have to do is construct a function whose solution in weight space is surrounded by a local minimum "moat" in the error surface. The question was whether a neural net could COMPUTE any Turing- -computable function, given the right set of weights A PRIORI. The answer to that depends on what class of architectures you mean by "neural net": in general such nets are obviously Turing equivalent since you can construct a Turing Machine from connec- tionist components; more restricted classes such as one hidden layer feedforward nets are where it gets interesting. -- Nici Schraudolph, C-014 nschraudolph at ucsd.edu University of California, San Diego nschraudolph at ucsd.bitnet La Jolla, CA 92093 ...!ucsd!nschraudolph From jlm+ at ANDREW.CMU.EDU Thu Jan 18 13:15:17 1990 From: jlm+ at ANDREW.CMU.EDU (James L. McClelland) Date: Thu, 18 Jan 90 13:15:17 -0500 (EST) Subject: Turing Machines and Conenctionist networks In-Reply-To: <9001171904.AA03022@dormouse.cs.UMD.EDU> References: <9001171904.AA03022@dormouse.cs.UMD.EDU> Message-ID: <0ZhUSp200jWD80V2Fh@andrew.cmu.edu> The recent exchanges prompt me to reflect on how much time it's worth spending on the issue of Turing equivalence. One of the characteristics of connectionist models is the distinctly different style of processing and storage that they provide relative to conventional architectures. One of the motivations for thinking that these characteristics might be worth pursuing is that Turing equivalence is not a guarantee of capturing the kinds of intelligence that people exhibit but Turing machines do not, such as: Speech perception, pattern recognition, retrieval of contextually relevant information from memory, language understanding, and intuitive thinking. We need to start thinking about ways of going beyond Turing equivalence to find tests that can indicate the sufficiency of mechanisms to exhibit natural cognitive capabilities like those enumerated above. Turing equivalence has in my opinion virtually nothing to do with this matter. In principle results about what can be computed using discrete unambiguous sequences of symbols and a totally infallible, infinite memory will not help us much in understanding how we cope in real time with mutiple, graded and uncertain cues. The need for robustness in performance and learning in the face of an ambiguous world is not addressed by Turing equivalence, yet every time we understand a spoken sentence these issues of robustness arise! How do humans made of connectoplasm achieve Turing equivalence? The equivalence exists at a MACROLEVEL, and should not be sought in the microstructure (the units and connections). As whole organisms, we certainly can compute any computable function. The procedures that make up the microstructure of each step in such a computation are, I would submit, finite and probabilistic. But we can string sequences of such steps together, particularly with the aid of real external memory (pencils and paper, etc), and enough error checking, to compute anything we want. More on these and related matters may be found in Chapters 1 and 4 of Vol 1 and Chapter 14 of Vol 2 of the PDP books. 
-- Jay McClelland From alexis at CS.UCLA.EDU Thu Jan 18 13:15:31 1990 From: alexis at CS.UCLA.EDU (Alexis Wieland) Date: Thu, 18 Jan 90 10:15:31 -0800 Subject: Turing Machines and Conenctionist networks In-Reply-To: Jim Hendler's message of Wed, 17 Jan 90 14:04:34 -0500 <9001171904.AA03022@dormouse.cs.UMD.EDU> Message-ID: <9001181815.AA13186@oahu.cs.ucla.edu> Well sure, recurrent networks are unquestionably needed to do powerful stuff, you're not going to implement a Turing machine with feed-forward networks alone. Recurrence allows memory as found in a computer. To really harness that power you want to "learn" the recurrent part of the network as well, allowing the system to decide where and when it needs some memory. BP (as well as ART, vect. quant, Boltzman, etc, etc) are all quite interesting and powerful, but if we can't figure out how to make a net learn where and when recurrence / memory is needed ... connectionism will probably die out again fairly soon. alexis. (UCLA) alexis at cs.ucla.edu or (MITRE Corp.) alexis at yummy.mitre.org From Michael.Witbrock at CS.CMU.EDU Thu Jan 18 19:50:06 1990 From: Michael.Witbrock at CS.CMU.EDU (Michael.Witbrock@CS.CMU.EDU) Date: Thu, 18 Jan 90 19:50:06 -0500 (EST) Subject: In-Reply-To: <9001151607.AA29919@dormouse.cs.UMD.EDU> References: <9001151607.AA29919@dormouse.cs.UMD.EDU> Message-ID: Jim Hendler argues for the non turing equivalence of SUNs on the basis of their backing store limitation. A SUN with access to unlimited memory (rather difficult to arrange given the limited address space of its mpu) could be shown to be turing equivalent, however. Similarly, it should be possible to prove the result for a network which can grow extra units and connections in unlimited number. This would be more satisfying than Jordan Pollack's neuring machine. michael From slehar at bucasb.bu.edu Fri Jan 19 00:18:31 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Fri, 19 Jan 90 00:18:31 EST Subject: Turing Machines and Conenctionist networks In-Reply-To: connectionists@c.cs.cmu.edu's message of 19 Jan 90 01:01:01 GM Message-ID: <9001190518.AA10272@thalamus.bu.edu> IN REPLY TO YOUR POSTING about neural nets vs. Turing machines: WELL SAID! I was getting really tired of the debate, and I cannot agree with you more! By the way, you're not THE McClelland of PDP fame are you? What a pleasure to exchange electrons with you! I enjoyed your book very much and recommend it highly to people all the time. From pollack at cis.ohio-state.edu Fri Jan 19 00:15:18 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Fri, 19 Jan 90 00:15:18 EST Subject: Turing Machines and Connectionist networks In-Reply-To: "James L. McClelland"'s message of Thu, 18 Jan 90 13:15:17 -0500 (EST) <0ZhUSp200jWD80V2Fh@andrew.cmu.edu> Message-ID: <9001190515.AA02576@toto.cis.ohio-state.edu> I disagree with Jay, and have disagreed since before I read (and quoted in my thesis) his bit about paper and pencil which appeared in chapter 4. Certainly, while isomorphism to the STRUCTURE to a Turing machine should not be a concern of connectionist research, the question of the FUNCTIONAL isomorphism (in idealized models) is a real concern. Especially in language processing, where fixed-width systems with non-recursive representations aren't even playing the game! Humans don't need paper and pencil to understand embedded clauses. 
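The embedded-clause point can be made concrete with the skeleton of center embedding, strings of the form a^n b^n: every clause opened must eventually be closed, at arbitrary depth. A toy contrast (my own illustration, not Pollack's): a recognizer with an unbounded counter handles every depth, while the same recognizer with its memory capped at k, roughly what a fixed-width, finite-precision representation amounts to, fails as soon as the depth exceeds k.

def accepts_with_counter(s):
    """Accept a^n b^n for every n: unbounded memory."""
    depth, seen_b = 0, False
    for c in s:
        if c == "a":
            if seen_b:
                return False
            depth += 1
        elif c == "b":
            seen_b = True
            depth -= 1
            if depth < 0:
                return False
    return depth == 0 and seen_b

def accepts_with_k_states(s, k):
    """Same recognizer, but the counter saturates at k: bounded memory."""
    depth, seen_b = 0, False
    for c in s:
        if c == "a":
            if seen_b:
                return False
            depth = min(depth + 1, k)          # cannot count past k
        elif c == "b":
            seen_b = True
            depth -= 1
            if depth < 0:
                return False
    return depth == 0 and seen_b

if __name__ == "__main__":
    for n in (1, 3, 5, 8):
        s = "a" * n + "b" * n
        print(n, accepts_with_counter(s), accepts_with_k_states(s, 4))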
At CMU Summer School 1986, I pointed out these generative capacity limits in all known connectionist models, and predicted that "the first such model to attract the attention of Chomskyists would get the authors shot out at the knees." While psychological realism and robustness are not built-in features of conventional stored program computers, recursive representations and computations are "of the nature" of cognitive and real-world tasks, and cannot be wished away simply because we don't YET know how to achieve them simultaneously with the good qualities of existing connectionist architectures. I certainly agree with Jay that recursive-like powers can emerge from a massively parallel iterative "microstructure". Witness Wolfram's automata or Mandelbrot's set. Neither are finite OR probabilistic, however, which may give us a clue... Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 From jose at neuron.siemens.com Fri Jan 19 09:42:08 1990 From: jose at neuron.siemens.com (Steve Hanson) Date: Fri, 19 Jan 90 09:42:08 EST Subject: Turing Machines and Conenctionist networks Message-ID: <9001191442.AA03448@neuron.siemens.com.siemens.com> mathematical query...is it contradictory that feedforward networks are claimed BOTH to be able to approximate any real valued function mapping and NOT be able (as Hendler suggests) be turing equivalent? Cannot specific turing machines be seen as a real valued function mapping? Are there any mathematicians out there that can explain this to me please. Steve From dave at cogsci.indiana.edu Fri Jan 19 19:03:20 1990 From: dave at cogsci.indiana.edu (David Chalmers) Date: Fri, 19 Jan 90 19:03:20 EST Subject: Turing Machines and Conenctionist networks Message-ID: Steve Hanson asks: >mathematical query...is it contradictory that feedforward >networks are claimed BOTH to be able to approximate any >real valued function mapping and NOT be able (as Hendler suggests) >be turing equivalent? Cannot specific turing machines be >seen as a real valued function mapping? Feedforward networks can approximate functions from any *finite* domain. Turing equivalence requires the ability to compute general recursive functions defined on *infinite* domains (such as the natural numbers). The only ways that I can see to allow connectionist networks to handle functions on infinite domains are: (1) Arbitrarily high-precision inputs and processing; or (2) Arbitrarily large numbers of input units (along with arbitrarily large network size); or (3) Inputs extended over arbitrarily large periods of time. (Of course a feed-forward network would be no good here. We'd need some form of recurrence to preserve information.) Note that we never need *infinite* precision/size/time, as some have suggested. We merely need the ability to extend precision/size/time to an arbitrarily large (but still finite) extent, depending on the problem and the input. Infinite precision etc would give us something new again -- the ability to handle functions from *uncountable* domains (not even Turing machines can do this). Incidentally, of the methods above, I think that (3) is the most plausible. But the importance of Turing equivalence to cognition is questionable. Dave Chalmers (dave at cogsci.indiana.edu) Center for Research on Concepts and Cognition Indiana University. 
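To make option (3) concrete, here is a toy example (mine, not from the posting). Parity of a bit string is a single function on an infinite domain, strings of every length. A feedforward net has a fixed number of input units and so only ever sees one string length, but a small recurrent circuit, hand-wired here from threshold units rather than learned, folds the input in over time and handles any length with the same fixed machinery.

def step(weights, bias, *x):
    """One linear threshold unit over 0/1 inputs."""
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

def xor_unit(a, b):
    """XOR assembled from three threshold units."""
    h1 = step([1, -1], -0.5, a, b)        # a AND NOT b
    h2 = step([-1, 1], -0.5, a, b)        # (NOT a) AND b
    return step([1, 1], -0.5, h1, h2)     # h1 OR h2

def recurrent_parity(bits):
    state = 0
    for x in bits:                        # arbitrarily long input, presented over time
        state = xor_unit(state, x)        # the state carries the running parity
    return state

if __name__ == "__main__":
    for s in ("1", "1011", "101100111011"):
        print(s, recurrent_parity(int(c) for c in s))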
From terry%sdbio2 at ucsd.edu Sat Jan 20 00:35:46 1990 From: terry%sdbio2 at ucsd.edu (Terry Sejnowski) Date: Fri, 19 Jan 90 21:35:46 PST Subject: Cold Spring Harbor - Date Correction Message-ID: <9001200535.AA19462@sdbio2.UCSD.EDU> (Incorrect dates were given in previous listing) Summer Course on COMPUTATIONAL NEUROSCIENCE: LEARNING AND MEMORY July 14-27, 1990 Cold Spring Harbor Laboratory Organizers: Michael Jordan, MIT Terrence Sejnowski, Salk Institute and UCSD This is an intensive laboratory and lecture course that will examine computational approaches to problems in learning and memory. Problems and techniques from both neuroscience and cognitive science will be covered, including learning procedures that have been developed recently for neural network models. The course will include a computer-based laboratory so that students can actively explore computational issues. Students will be able to informally interact with the lecturers. A strong grounding in mathematics and previous exposure to neurobiology is essential for students. Instructors: Eric Baum (NEC) Richard Sutton (GTE) Yan LeCun (ATT) David Rumelhart (Stanford) Tom Brown (Yale) Jack Byrne (Univ. Texas Houston) Richard Durbin (Stanford) Gerald Tesauro (IBM) Stephen Lisberger (UC San Farancisco) Ralph Linsker (IBM) John Moody (Yale) Nelson Donegan (Yale) Chris Atkeson (MIT) Tomaso Poggio (MIT) Mike Kearns (MIT) DEADLINE: MARCH 15, 1990 Applications and additional information may be obtained from: Registrar Cold Spring Harbor Laboratory Cold Spring Harbor, New York 11724 Tuition, Room and Board: $1,390. Partial Scholarships available to qualified students. ----- From noel%CS.EXETER.AC.UK at VMA.CC.CMU.EDU Sun Jan 21 08:21:31 1990 From: noel%CS.EXETER.AC.UK at VMA.CC.CMU.EDU (Noel Sharkey) Date: Sun, 21 Jan 90 13:21:31 GMT Subject: turing equivalence Message-ID: <12411.9001211321@entropy.cs.exeter.ac.uk> I have been pleased with the response to the Turing equivalence issue both on the net and in my personal mail. I have not had time to digest all of this yet, but i have learned a few fundamental things that i didn`t know and will pass them on by and by. But i still have not seen a real formal proof. Jay McClelland does not think that this is a very worthwhile pursuit because "Turing equivalence is not a guarantee of capturing the kinds of intelligence that people exhibit but Turing machines do not, such as: Speech perception, pattern recognition, retrieval of contextually relevant information from memory, language understanding and intuitive thinking." While this may be true, we do not know whether something like Turing equivalence is a NECESSARY condition for the performance of such human phenomena. Jay says, and I agree, that, "We need to start thinking of ways of going beyond Turing equivalence." But the question here must be, how do we know when we have gone beyond Turing equivalence without first having found out whether or not we have it. I am working in connectionism because I am interested in explanations of human cognition and certainly, at present, connectionism offers a new and exciting approach. I think from this perspective it is useful the use of external memory aids etc. by humans is interesting - Turing discusses this himself. However, computer science has been developing formal analytic tools for a long time, let us not throw all these insights away because of a lot of flag waving enthusiasm. 
If we can find formal equivalences then we know where our new theory stands in relation to the old and we can demonstrate its power without descending into waffleware. Having said this, I wouldn't like to spend too much time on it myself. noel p.s. if this puts people off writing to the net about turing equivalence, I would still be very happy to have your replies directed to me personally. From lakoff at cogsci.berkeley.edu Sun Jan 21 17:47:20 1990 From: lakoff at cogsci.berkeley.edu (George Lakoff) Date: Sun, 21 Jan 90 14:47:20 -0800 Subject: No subject Message-ID: <9001212247.AA06634@cogsci.berkeley.edu> Subject: McClelland and Turing I agree strongly with Jay McClelland's comments about the irrelevance of Turing computability for issues in cognitive science. I would add one major point: Discussions of computability in general ignore the CONTENT of what is computed, in particular, the content of all natural language concepts. To me, the most important part of studies in neural grounding of the conceptual system, is the promise it holds out for accounting for the content of concepts, not just the form of representations and the characteristics of computability. As soon as discussion is directed to details of content, it becomes clear that computability discussions get us virtually nowhere. George Lakoff From gary%cs at ucsd.edu Mon Jan 22 16:49:13 1990 From: gary%cs at ucsd.edu (Gary Cottrell) Date: Mon, 22 Jan 90 13:49:13 PST Subject: turing equivalence Message-ID: <9001222149.AA03430@desi.UCSD.EDU> Noel writes: Jay McClelland does not think that this is a very worthwhile pursuit because "Turing equivalence is not a guarantee of capturing the kinds of intelligence that people exhibit but Turing machines do not, such as: [long list...] While this may be true, we do not know whether something like Turing equivalence is a NECESSARY condition for the performance of such human phenomena. Jay says, and I agree, that, "We need to start thinking of ways of going beyond Turing equivalence." [end of noel] Wait, this is crazy unless you believe in Quantum machines or some such [which is a perfectly reasonable response to the following, but for now, let's pretend it's not]. If you are a normal computer scientis/AI/PDP/Cognitive Science researcher, then you believe the basic assumption underlying AI that >>thinking is a kind of computation<<. All of the known kinds of computation are equivalent to the kind performed by a TM. So if we could show that a PDP net is equivalent to a TM, then we would have captured all of those things Jay was talking about. The problem is the proof is not constructive. If anything, what we need to do is find the class of functions that are easily *learnable* by particular PDP architectures. This will be a *subset* of the things computable by a TM. Hence, proving TM equivalence is not NECESSARY, however, it sure would be SUFFICIENT to show that it isn't crazy to try to find a learning algorithm and an architecture that could learn some particular class of problems, since whatever we would want to compute is certainly do-able by a neural net. Interesting work along these lines has been done by Servan-Schreiber et al., & Jeff Elman, where they show that certain kinds of FSM's are hard for simple recurrent nets to learn, and Jeff shows that a net can learn the syntactic structure of English, but is poor at center embedding, while being fine with tail recursive clauses. 
gary cottrell From lakoff at cogsci.berkeley.edu Tue Jan 23 04:20:27 1990 From: lakoff at cogsci.berkeley.edu (George Lakoff) Date: Tue, 23 Jan 90 01:20:27 -0800 Subject: No subject Message-ID: <9001230920.AA11360@cogsci.berkeley.edu> Response to Harlan: By ``content'' I have in mind cases like the following: (1) Color: The nature and distribution of color categories has been shown to depend on the neurophysiology of color vision. This is not just a matter of computation by the neurons, but of what they are hooked up to. (See my Women, Fire, and Dangerous Things, pp. 24 - 30.) (2) Basic-level categories, whose properties depend on gestalt perception, motor programs, and imagining capacity. Here we have not just a matter of abstract computation but again a matter of how the neurons doing the computing are hooked up to the body. (3) Spatial relations, e.g., in, out, to, from, through, over, and all others. Here the visual system, rather than the olfactory system, will count. Again, simply looking at abstract computations doesn't help. (4) Emotional concepts, like anger, which are partly understood via complex metaphorical mappings, but which are constrained by phsyiology. See Women, Fire, Case study 1. (5) Cultural concepts, like marriage. (6) Scientific concepts like relativity. I simply do not see how pure computation tells us anything whatever about the content of the concepts -- not just what inference patterns they occur in relative to other concepts, but the nature of, say, GREEN as opposed to ACROSS or SCARED, as well as the various properties of the concepts, e.g., their prototype structure, their place in the basic-level hierarchy, their associated mental images, whether they are metaphorically constituted and if so how they are understood, etc. As a cognitive scientist, I am concerned with all these issues and a myriad of other ones of greater complexity. Discussions of abstract computability issues, as interesting as they are in themselves, just don't help here. I am interested in connectionism partly because it holds out the promise of insight into the neural grounding of concepts and into the thousands of issues in conceptual analysis that require an understanding of such grounding. Turing computability is a technical issue and is of some technical interest, but has nothing whatever to say to the visceral issues concerning the content of concepts. George From shastri at central.cis.upenn.edu Tue Jan 23 09:10:20 1990 From: shastri at central.cis.upenn.edu (shastri@central.cis.upenn.edu) Date: Tue, 23 Jan 90 09:10:20 -0500 Subject: (New Tech. Report) From Simple Associations to Systematic Reasoning Message-ID: <9001231410.AA26064@central.cis.upenn.edu> The following report may be of interest to some of you. Please direct e-mail requests to: dawn at central.cis.upenn.edu --------------------------------------------- From Simple Associations to Systematic Reasoning: A connectionist representation of rules, variables and dynamic bindings Lokendra Shastri and Venkat Ajjanagadde Computer and Information Science Department University of Pennsylvania Philadelphia, PA 19104 December 1989 Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency --- as though these inferences are a reflex response of their cognitive apparatus. The work presented in this paper is a step toward a computational account of this remarkable reasoning ability. 
We describe how a connectionist system made up of simple and slow neuron-like elements can encode millions of facts and rules involving n-ary predicates and variables, and yet perform a variety of inferences within hundreds of milliseconds. We observe that an efficient reasoning system must represent and propagate, dynamically, a large number of variable bindings. The proposed system does so by propagating rhythmic patterns of activity wherein dynamic bindings are represented as the in-phase, i.e., synchronous, firing of appropriate nodes. The mechanisms for representing and propagating dynamic bindings are biologically plausible. Neurophysiological evidence suggests that similar mechanisms may in fact be used by the brain to represent and process sensorimotor information. From marek at iuvax.cs.indiana.edu Tue Jan 23 10:42:00 1990 From: marek at iuvax.cs.indiana.edu (Marek Lugowski) Date: Tue, 23 Jan 90 10:42:00 -0500 Subject: Computational Metabolism on the Connection Machine and Other Stories... Message-ID: Indiana University Computer Science Departamental Colloquium Computational Metabolism on a Connection Machine and Other Stories... --------------------------------------------------------------------- Elisabeth M. Freeman, Eric T. Freeman & Marek W. Lugowski graduate students, Computer Science Department Indiana University Wednesday, 31 January 1990, 7:30 p.m. Ballantine Hall 228 Indiana University campus, Bloomington, Indiana This is work in progress, to be shown at the Artificial Life Workshop II, Santa Fe, February 5-9, 1990. Connection Machine (CM) is a supercomputer for massive parallelism. Computational Metabolism (ComMet) is such computation. ComMet is a tiling where tiles swap places with neighbors or change their state when noticing their neighbors. ComMet is a programmable digital liquid. Reference: Artificial Life, C. Langton, ed., "Computational Metabolism: Towards Biological Geometries for Computing", M. Lugowski, pp. 343-368, Addison-Wesley, Reading, MA: 1989, ISBN 0-201-09356-1/paperbound. Emergent mosaics: ---------------- This class of ComMet instances arise from generalizing the known ComMet solution of Dijkstra's Dutch Flag problem. This has implications for cryptology and noise-resistant data encodings. We observed deterministic and indeterministic behavior intertwined, apparently a function of geometry. A preliminary computational theory of metaphor: ---------------------------------------------- We are working on a theory of metaphor as transformations within ComMet. Metaphor is loosely defined as expressing one entity in terms of another, and so it must underlie categorization and perception. We postulate that well-defined elementary events capable of spawning an emergent computation are needed to encode the process of metaphor. We use ComMet to effect this. A generalization of Prisoner's Dilemma (PD) for computational ethics: --------------------------------------------------------------------- The emergence of cooperation in iterated PD interactions is known. We propose a further generalization of PD into a communication between two potentially complex but not necessarily aware of each other agents. These agents are expressed as initial configurations of ComMet spatially arranged to allow communication through tile propagation and tile state change. 
Connection Machine (CM) implementation: -------------------------------------- We will show a video animation of our results, obtained on a 16k-processor CM, including emergent mosaics, thus confirmed after we predicted them theoretically. Our CM program computes in 3 minutes what took 7 days to do on a Lisp Machine. Our output is a 128x128 color pixel map. Our code will run in virtual mode, if need be, with up to 32 ComMet tiles per CM processor, yielding a 2M-tile tiling (over 2 million tiles) on a 64k-processor CM. From gary%cs at ucsd.edu Tue Jan 23 13:47:11 1990 From: gary%cs at ucsd.edu (Gary Cottrell) Date: Tue, 23 Jan 90 10:47:11 PST Subject: Hype-a-mania strikes Message-ID: <9001231847.AA04455@desi.UCSD.EDU> Jordan Pollack wrote to me: you write: >>Interesting work along these lines has been done by Servan-Schreiber >>et al., & Jeff Elman, where they show that certain kinds of FSM's >>are hard for simple recurrent nets to learn, and Jeff shows that >>a net can learn the syntactic structure of English, but is poor >>at center embedding, while being fine with tail recursive clauses. Isn't this overclaiming elman's work just a TAD, gary? And SS showed that SRN's didn't have a hope of doing unbounded dependencies unless they were statistically differentiated. jordan Mea Culpa!!! Yes, I'm sorry, I should not have said "learns the syntactic structure of english". I should have said "learns the structure of sentences generated by a grammar with embedded clauses". Re: Servan-schreiber's work, I thought I *did* say that "certain FSM's are hard to learn" about SS et al. In any case, all they showed was that theye were hard to learn using their training scheme. Other training schemes may work, such as Combined Subset Training (Tsung & Cottrell, 1989, IJCNN; Cottrell & Tsung, 1989, Cog Sci Proc), which is similar to Jeff Elman's technique of starting with simple sentences and progressively adding more complex ones. Hype-a-mania strikes deep... Into your life it will creep... gary cottrell From jcollins at shalamanser.cs.uiuc.edu Wed Jan 24 04:07:50 1990 From: jcollins at shalamanser.cs.uiuc.edu (John Collins) Date: Wed, 24 Jan 90 03:07:50 CST Subject: turing equivalence Message-ID: <9001240907.AA07251@shalamanser.cs.uiuc.edu> Proving that a particular connectionist architecture is turing equivalent may convince some skeptics that we are not completely wasting our time, but in fact it shouldn't. A computer built of tinker-toys might be turing equivalent, but that implies neither that it is an interesting model of cognition, nor that it is capable of generating any useful results IN A REASONABLE AMOUNT OF TIME. For some 30+ years AI researchers have had at their disposal universal computers which are for all purposes turing equivalent. So why has AI failed to achieve its lofty goals? Clearly turing equivalence is no guarantee of success in modeling cognition. The term "equivalent" is misleading. How can my PC be "equivalent" to a Cray, and yet be so much slower? A large part of cognition involves interacting with the real world in real time; turing equivalence tells us nothing about the speed, efficiency, or appropriateness of computations given our noisy and uncertain world. I am convinced that neural nets ARE turing equivalent; but dispite this, I remain optimistic that connectionism will inevitably succeed where GOFAI has failed. 
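To show the shape of such an incremental schedule, here is a sketch on a deliberately trivial task; it is not Combined Subset Training or Elman's actual procedure, only an illustration of "start with short sequences and progressively add longer ones". A single recurrent logistic unit learns to report whether a 1 has occurred so far in a binary sequence, with the training set widened in stages.

import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def make_batch(length, n=50):
    """Random binary sequences; the target at each step: has a 1 occurred yet?"""
    batch = []
    for _ in range(n):
        seq = [random.choice((0, 1)) for _ in range(length)]
        seen, targ = 0, []
        for x in seq:
            seen = seen or x
            targ.append(seen)
        batch.append((seq, targ))
    return batch

def train(stages, lr=0.5, epochs=200):
    """Gradient descent on one recurrent logistic unit, widening the data in stages."""
    w_x, w_y, b = 0.0, 0.0, 0.0
    data = []
    for stage in stages:                          # curriculum: add longer examples
        data = data + stage
        for _ in range(epochs):
            for seq, targ in data:
                y = 0.0
                for x, t in zip(seq, targ):
                    y_prev = y
                    y = sigmoid(w_x * x + w_y * y_prev + b)
                    g = (y - t) * y * (1.0 - y)   # squared-error gradient, truncated:
                    w_x -= lr * g * x             # y_prev is treated as a constant input
                    w_y -= lr * g * y_prev
                    b -= lr * g
    return w_x, w_y, b

def accuracy(params, length, n=200):
    w_x, w_y, b = params
    correct = total = 0
    for seq, targ in make_batch(length, n):
        y = 0.0
        for x, t in zip(seq, targ):
            y = sigmoid(w_x * x + w_y * y + b)
            correct += int((y > 0.5) == bool(t))
            total += 1
    return correct / float(total)

if __name__ == "__main__":
    stages = [make_batch(2), make_batch(5), make_batch(12)]
    params = train(stages)
    print("trained weights:", [round(p, 2) for p in params])
    print("accuracy on length-30 sequences:", round(accuracy(params, 30), 3))

The shaping notes from Allen and Nowlan further on follow the same pattern: begin where the problem is easy and expand outward from there.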
;-) John Collins jcollins at cs.uiuc.edu
From tenorio at ee.ecn.purdue.edu Wed Jan 24 16:43:52 1990 From: tenorio at ee.ecn.purdue.edu (Manoel Fernando Tenorio) Date: Wed, 24 Jan 90 16:43:52 EST Subject: NEO-cognitron Message-ID: <9001242143.AA09343@ee.ecn.purdue.edu> Bcc: --------
Of the early second generation algorithms, the Neocognitron is one that I have heard very little about lately. I have two questions: - Can anyone point me to any work that uses the algorithm but is not by its original designer? - Has anyone implemented the algorithm in C or some other easily available language? The algorithm claims to use microfeatures for classification, and I would like to compare it with other algorithms such as the SONN and the GDR. Thanks. --ft.
From hbs at lucid.com Wed Jan 24 17:12:18 1990 From: hbs at lucid.com (Harlan Sexton) Date: Wed, 24 Jan 90 14:12:18 PST Subject: concepts In-Reply-To: George Lakoff's message of Tue, 23 Jan 90 01:20:27 -0800 <9001230920.AA11360@cogsci.berkeley.edu> Message-ID: <9001242212.AA00867@kent-state>
I think that we may be talking about the same thing using slightly different language. Obviously what a neuron computes can't be legitimately considered a "concept", but I had intended to convey the idea that what the totality of them compute was the concept. In other words, what it means to say that something has a given color can be defined operationally without regard to introspection of an individual (in other words, we don't need to depend on a definition of just one person or on access (which is currently unavailable) to internal states of mind of a collection of people). My contention is that it is possible (only in principle at the moment, and of course I may be wrong) to construct a machine that is operationally equivalent to a person as far as "green" is concerned. Of such a machine I would then say that it understood the concept of green. I know from a much simpler class of problems how the interconnections and protocols of processors can allow a network of really simple things to do much more complex computations, and I even "understand" in a sense how this works for these cases. What I don't understand is how this could be extended to the sorts of AI problems that neuro-computers are aimed at, but I believe that it is knowable. --Harlan
From rba at flash.bellcore.com Wed Jan 24 21:03:29 1990 From: rba at flash.bellcore.com (Robert B Allen) Date: Wed, 24 Jan 90 21:03:29 EST Subject: No subject Message-ID: <9001250203.AA07130@flash.bellcore.com> Subject: shaping recurrent nets
The technique of initially training recurrent nets with short sequences and gradually introducing longer sequences has been previously described in: Allen, R.B. Adaptive Training of Connectionist State Machines. ACM Computer Science Conference, Louisville, Feb, 1989, 428.
From nowlan at ai.toronto.edu Thu Jan 25 08:34:35 1990 From: nowlan at ai.toronto.edu (Steven J. Nowlan) Date: Thu, 25 Jan 90 08:34:35 EST Subject: shaping recurrent nets In-Reply-To: Your message of Wed, 24 Jan 90 21:03:29 -0500. Message-ID: <90Jan25.083303est.10527@ephemeral.ai.toronto.edu>
Another related form of shaping is described in: Nowlan, S.J. Gain Variation in Recurrent Error Propagation Networks. Complex Systems 2 (1988) 305-320. In this case, a robust attractor for a recurrent network is developed by first training from initial states near the attractor, and then gradually increasing the distance of initial states from the attractor.
- Steve
From AMR at IBM.COM Thu Jan 25 20:36:20 1990 From: AMR at IBM.COM (AMR@IBM.COM) Date: Thu, 25 Jan 90 20:36:20 EST Subject: Turing machines = connectionist models (?) Message-ID:
Having started the whole debate about the Turing equivalence of connectionist models, I feel grateful to the many contributors to the ensuing debate. I also feel compelled to point out that, in view of the obvious confusion and disunity concerning what ought to be a simple mathematical question, somebody needs to try to set the record straight. I am sure this will take some time and the efforts of many, but let me try to start the ball rolling.
(1) George Lakoff's comment about the irrelevance of this issue in light of the fact that it does not address the question of "content" esp. of natural language concepts bothers me because all the talk about "content" (alias "intentionality", I guess) is so much handwaving in the absence of any hint (not to mention a full-blown account) of what this is supposed to be. If we grant that there is no more to human beings than mortal flesh, then there is no currently available basis in any empirical science, any branch of mathematics, or I suspect (but am not sure) any branch of philosophy for such a concept. All we can say is that, in virtue of the architecture of human beings, certain causal connections exist (or tend to exist, to be precise, for there are always abnormal cases such as perhaps autism or worse) between certain states of an environment and certain states (as well as certain external actions) of human beings in that environment. There is no magic in this, no soul, and nothing that distinguishes human beings crucially from other living beings or from machines. Perhaps that is wrong, but it is not enough to speculate about something like "content" or "intentionality". One has to try to make sense of it, and I know of no such attempt that does not either (a) work equally well for robots as it does for human beings (e.g., Harnad's proposals about "grounding") or (b) fail to show how the factors they are talking about might be relevant (e.g., Searle's suggestion that it might be crucial that human beings are biological in nature. The answer surely is that this might be crucial, but that Searle has failed to show not only that it is but even how it might be). I would argue that in order to understand how human beings function, we need to take the environment into account but the same applies to all animals and to many non-biological systems, including robots. So, while grounding in some sense may be necessary, it is not sufficient to explain anything about the uniqueness of human mentality and behavior (e.g., natural language).
(2) Given what we know from the theory of computation, even though grounding is necessary, it does not follow that there is any useful theoretical difference between a program simulating the interaction of a being with an environment and a robot interacting with a real environment. In practice, of course, the task of simulating a realistic environment may be so complex that it would make more sense in certain cases to build the robot and let it run wild than to attempt such a simulation, but in other cases the cost and complexity of building the robot as opposed to writing the program are such that it is more reasonable to do it the other way around.
In real research, both strategies must be used, and it should be obvious that the same reasoning shows that, as far as we know, there is no theoretical difference between a human being in a real environment and a human being (or a piece of one, such as the proverbial brain in the vat) in a suitably simulated environment, but that there may be tremendous practical advantages to one or the other approach depending on the particular problem we are studying. But, again, "grounding" does not allow us to differentiate biological from nonbiological or human from nonhuman.
(3) In light of the above, it seems to me that, while the classical tools provided by the theory of computation may not be enough, they are the best that we have got in the way of tools for making sense of the issues.
(4) There is some confusion about Turing machines, which possess infinite memory, and physically realizable machines, which do not. This makes a lot of difference in one way because the real lesson of the theory of computation is not that, if human beings are algorithmic beings, then they are equivalent to Turing machines, but rather that they would be equivalent to finite-state machines. The same applies to physically realized computers and any physically realized connectionist hardware that anyone might care to assemble.
From jti at AI.MIT.EDU Fri Jan 26 11:23:00 1990 From: jti at AI.MIT.EDU (Jeff Inman) Date: Fri, 26 Jan 90 11:23 EST Subject: handwaving and "content" In-Reply-To: <9001260836.AA06806@life.ai.mit.edu> Message-ID: <19900126162316.2.JTI@WORKER-3.AI.MIT.EDU>
Date: Thu, 25 Jan 90 20:36:20 EST From: AMR at ibm.com (1) George Lakoff's comment about the irrelevance of this issue in light of the fact that it does not address the question of "content" esp. of natural language concepts bothers me because all the talk about "content" (alias "intentionality", I guess) is so much handwaving in the absence of any hint (not to mention a full-blown account) of what this is supposed to be. If we grant that there is no more to human beings than mortal flesh, then there is no currently available basis in any empirical science, any branch of mathematics, or I suspect (but am not sure) any branch of philosophy for such a concept. All we can say is that, in virtue of the architecture of human beings, certain causal connections exist (or tend to exist, to be precise, for there are always abnormal cases such as perhaps autism or worse) between certain states of an environment and certain states (as well as certain external actions) of human beings in that environment. There is no magic in this, no soul, and nothing that distinguishes human beings crucially from other living beings or from machines.
Please pardon my philosophical intrusion in this technical forum, but I must respond to your statement. I think you have touched on a critical issue that underlies much of AI, cognitive science, etc. It is good that we examine this issue occasionally, because we may eventually have to face fundamentalist picketers, machines that don't "want" to be powered off, or machines that produce wonderful music, art, science, etc. I appreciate this recap, as it provides focus for the discussion. That the issue is popular can be seen by the fact that it appears in the form of a pair of "dueling" articles, in this month's Scientific American. For my money, however, the crucial idea appears in a profile of Claude Shannon [pg 22], where he says "we're machines, and we think, don't we?".
It is unnecessary to devalue the human experience, as you do (above), by attempting to *reduce* it through its contingency in physicality. If humans are machines (as we agree they are), then that indicates that physicality is more complicated than a lot of science has acknowledged, rather than indicating that experience is really "nothing", or that biology is really based in lifeless material. The latter point seems equally to be "handwaving" to me. I agree that "there is nothing that distinguishes human beings .. from other living beings or from machines", or even from clouds of hydrogen atoms. A little more complexity perhaps, or perhaps not, but nothing significant on the grand scale. As you suggest elsewhere, these systems are all embedded in a certain specific universe, and I take that to be extremely important. I also agree that there are causal connections at the root of all phenomena (though the important ones are systemic and difficult or impossible to isolate). Somehow, plain ordinary matter is capable of "experiencing", or whatever it is that we do. As scientists, let's take it from there. Please note, I am not suggesting at all that there are some concepts that shouldn't be examined. By all means, let's investigate this fascinating field. I'd just like to see us remain open to discovering things that we don't already know. Thanks, Jeff
From gary%cs at ucsd.edu Fri Jan 26 15:30:54 1990 From: gary%cs at ucsd.edu (Gary Cottrell) Date: Fri, 26 Jan 90 12:30:54 PST Subject: reference Message-ID: <9001262030.AA09026@desi.UCSD.EDU>
A number of people have asked me where the work was of Jeff Elman's that I was referring to. Here it is: Representation and Structure in Connectionist models CRL TR 8903, available from Center for Research in Language C-008 UCSD La Jolla, Ca 92093
The grammar is: S -> NP VP "." NP -> PropN | N | N RC VP -> V (NP) RC -> who NP VP | who VP (NP) N -> boy, girl, cat, dog, boys, girls, cats, dogs PropN -> John, Mary V -> chase, chases, 9 others Also number agreement is enforced and there are transitivity requirements. The success criterion is that the network successfully predict the possible classes of the next word. Thus it has to remember number agreement between the subj and the verb across embedded clauses, and only predict verbs of the proper number. gary
From cfields at NMSU.Edu Fri Jan 26 16:02:10 1990 From: cfields at NMSU.Edu (cfields@NMSU.Edu) Date: Fri, 26 Jan 90 14:02:10 MST Subject: No subject Message-ID: <9001262102.AA26830@NMSU.Edu> _________________________________________________________________________
The following are abstracts of papers appearing in the fourth issue of the Journal of Experimental and Theoretical Artificial Intelligence, which appeared in November, 1989. The next issue, 2(1), will be published in March, 1990. For submission information, please contact either of the editors: Eric Dietrich, PACSS - Department of Philosophy, SUNY Binghamton, Binghamton, NY 13901, dietrich at bingvaxu.cc.binghamton.edu; or Chris Fields, Box 30001/3CRL, New Mexico State University, Las Cruces, NM 88003-0001, cfields at nmsu.edu. JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia _________________________________________________________________________
Problem solving architecture at the knowledge level.
Jon Sticklen, AI/KBS Group, CPS Department, Michigan State University, East Lansing, MI 48824, USA
The concept of an identifiable "knowledge level" has proven to be important by shifting emphasis from purely representational issues to implementation-free descriptions of problem solving. The knowledge level proposal enables retrospective analysis of existing problem-solving agents, but sheds little light on how theories of problem solving can make predictive statements while remaining aloof from implementation details. In this report, we discuss the knowledge level architecture, a proposal which extends the concepts of Newell and which enables verifiable prediction. The only prerequisite for application of our approach is that a problem solving agent must be decomposable into the cooperative actions of a number of more primitive subagents. Implications for our work are in two areas. First, at the practical level, our framework provides a means for guiding the development of AI systems which embody previously-understood problem-solving methods. Second, at the foundations of AI level, our results provide a focal point about which a number of pivotal ideas of AI are merged to yield a new perspective on knowledge-based problem solving. We conclude with a discussion of how our proposal relates to other threads of current research.
With commentaries by: William Clancey: "Commentary on Jon Sticklen's 'Problem solving architecture at the knowledge level'". James Hendler: "Below the knowledge level architecture". Brian Slator: "Decomposing meat: A commentary on Sticklen's 'Problem solving architecture at the knowledge level'". and Sticklen's response. __________________________________________________________________________
Natural language analysis by stochastic optimization: A progress report on Project APRIL Geoffrey Sampson, Robin Haigh, and Eric Atwell, Centre for Computer Analysis of Language and Speech, Department of Linguistics & Phonetics, University of Leeds, Leeds LS2 9JT, UK.
Parsing techniques based on rules defining grammaticality are difficult to use with authentic natural-language inputs, which are often grammatically messy. Instead, the APRIL system seeks a labelled tree structure which maximizes a numerical measure of conformity to statistical norms derived from a sample of parsed text. No distinction between legal and illegal trees arises: any labelled tree has a value. Because the search space is large and has an irregular geometry, APRIL seeks the best tree using simulated annealing, a stochastic optimization technique. Beginning with an arbitrary tree, many randomly-generated local modifications are considered and adopted or rejected according to their effect on tree-value: acceptance decisions are made probabilistically, subject to a bias against adverse moves which is very weak at the outset but is made to increase as the random walk through the search space continues. This enables the system to converge on the global optimum without getting trapped in local optima. Performance of an early version of the APRIL system on authentic inputs has been yielding analyses with a mean accuracy of 75%, using a schedule which increases processing linearly with sentence length; modifications currently being implemented should eliminate many of the remaining errors.
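For readers who have not met simulated annealing before, the acceptance schedule described in the APRIL abstract can be sketched generically as follows; the cooling schedule and the toy state and score used in the usage line are placeholders, not APRIL's actual tree-scoring machinery.

    import math
    import random

    def anneal(initial, neighbour, value, steps=10000):
        """Generic simulated annealing for a maximization problem: always
        accept an improving move; accept a worsening move with a probability
        that shrinks as the temperature falls."""
        current = best = initial
        for t in range(1, steps + 1):
            temperature = 1.0 / t                  # placeholder cooling schedule
            candidate = neighbour(current)
            delta = value(candidate) - value(current)
            if delta >= 0 or random.random() < math.exp(delta / temperature):
                current = candidate
                if value(current) > value(best):
                    best = current
        return best

    # toy usage: the "tree" is just an integer and the "value" a smooth score
    best = anneal(0, lambda x: x + random.choice([-1, 1]), lambda x: -(x - 3) ** 2)

Early on the temperature is high, so even adverse modifications are frequently accepted; as it falls, the bias against them grows, which is the behaviour the abstract describes.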
_________________________________________________________________________
On designing a visual system (Towards a Gibsonian computational model of vision) Aaron Sloman, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QN, UK
This paper contrasts the standard (in AI) "modular" theory of the nature of vision with a more general theory of vision as involving multiple functions and multiple relationships with other subsystems of an intelligent system. The modular theory (e.g. as expounded by Marr) treats vision as entirely, and permanently, concerned with the production of a limited range of descriptions of visual surfaces, for a central database; while the "labyrinthine" design allows any output that a visual system can be trained to associate reliably with features of an optic array and allows forms of learning that set up new communication channels. The labyrinthine theory turns out to have much in common with J. J. Gibson's theory of affordances, while not eschewing information processing as he did. It also seems to fit better than the modular theory with neurophysiological evidence of rich interconnectivity within and between subsystems in the brain. Some of the trade-offs between different designs are discussed in order to provide a unifying framework for future empirical investigations and engineering design studies. However, the paper is more about requirements than detailed designs. ________________________________________________________________________
From watrous at ai.toronto.edu Fri Jan 26 16:17:57 1990 From: watrous at ai.toronto.edu (Raymond Watrous) Date: Fri, 26 Jan 90 16:17:57 EST Subject: Request for Data Message-ID: <90Jan26.161758est.11301@ephemeral.ai.toronto.edu>
The classic data of Peterson/Barney (1952), consisting of formant and pitch values for 10 vowels in the [hVd] context for 76 speakers, has been used in several comparisons of classifiers, connectionist and otherwise (e.g. Huang/Lippmann, Moody, Bridle, Nowlan). The data used in these comparisons was a subset of the original data digitized by Huang/Lippmann from the original paper, from which the speaker identity was not recoverable. I have received a version of the original data courtesy of Ann Syrdal (AT&T Bell Labs) which is organized by speaker and vowel. However, this data is also incomplete in that 1 speaker is missing, as are 6 tokens of the [o] vowel. Can anyone supply the complete, original data set? (I would be happy to supplement what I have from printed listings, or punched cards, if the data is not on line.) Thanks. Ray Watrous
From AMR at ibm.com Fri Jan 26 16:37:49 1990 From: AMR at ibm.com (AMR@ibm.com) Date: Fri, 26 Jan 90 16:37:49 EST Subject: handwaving and "content" Message-ID:
My point is not that there is nothing that distinguishes people from, say, machines of the usual sort. In fact, lots of things do. Rather, I contend that the sort of vague statements that people make about content or semantics or intentionality or the less vague ones about grounding do not help make that distinction. Nor do any simplistic claims about biology vs. non-biology. So, my point is to try to find where the difference does lie, and to show that some initially appealing proposals for this are either wrong (in the case of grounding) or too vague to be either right or wrong (in some of the other cases).
The only hope that I see for making the distinction is in terms of some precise notions that either already exist or more likely (as my slogan suggests) remain to be developed within a formal discipline such as the theory of computation.
From AMR at ibm.com Fri Jan 26 16:43:51 1990 From: AMR at ibm.com (AMR@ibm.com) Date: Fri, 26 Jan 90 16:43:51 EST Subject: Turing machines = connectionist models Message-ID:
My point was not to claim that biological systems are indistinguishable from robots. Rather it is that a variety of recent proposals about how they are distinguished do not make the grade. In particular, grounding as I understand Harnad's proposal is not even intended to make the distinction. I think he contends that robots are just as grounded as people are, but that disembodied (non-robotic) programs are not. I think the distinction between robots and programs is very great in practice but not at all in principle (as I tried to elucidate), and I should make it clear, if I have not yet, that I don't think that existing or currently imaginable robots are any closer to being human-like than are existing or currently imaginable programs. So, for me, the crucial question is precisely what makes people different from baboons, or robots, or programs. And grounding does not help us answer this. The question of how biological systems differ from nonbiological is equally interesting, but that has to do with the fundamental problem of WHAT IS LIFE not (or at least not obviously) with the fundamental problem of WHAT IS INTELLIGENCE. The claim that somehow people have minds but robots and programs do not BECAUSE people are biological in nature (a claim that Searle seems close to making) seems prima facie unlikely, since we do not attribute minds to cells or to amoebas, which are biological entities, and in any case the claim has not been either articulated precisely or defended even in outline. Additional proposals have been made, having to do with intentionality or semantics, for example, but these again do not seem to help. They are either vague or they do not distinguish people from robots or perhaps do no more than recapitulate the problem without throwing any new light on it. I should finally say that when I urge the study of the theory of computation on people, I do not mean to disparage other branches of mathematics. If there are other branches of mathematics that are equally or more useful here, I would like to know more about them. I would just say that (a) such results are ultimately going to have to be put together with what we already have in the theory of computation, to the undoubted benefit of the latter, (b) such results should, if they are to be helpful, make the distinctions I have been referring to (and I would like to see an example of this), (c) such results will I think be equally uncomfortable for the handwavers as the classic results from the theory of computation precisely because they will tell us in precise and mathematical terms what kind of machine (different in some important way from other kinds of machine) people actually are, and (d) (referring back to (b)) such results are unlikely, it seems to me, to show either that only a biological system could be intelligent or that only a net of some kind could. The question of whether connectionist models are equivalent (in various senses) to finite-state machines, then, is not intended to be the central question.
Most of the questions are actually empirical, but the fact remains that we need a precise framework for carrying on the empirical and theoretical work, and, in any case, we cannot hope for useful results from work which starts out by flouting the few truths that we do know for sure from the mathematics of the theory of computation.
From elman at amos.ucsd.edu Fri Jan 26 23:37:00 1990 From: elman at amos.ucsd.edu (Jeff Elman) Date: Fri, 26 Jan 90 20:37:00 PST Subject: TR announcement: W. Levelt, Multilayer FF Nets & Turing machines Message-ID: <9001270437.AA04565@amos.ucsd.edu>
I am forwarding the following Tech Report announcement on behalf of Pim Levelt, Max-Planck-Institute for Psycholinguistics. Note that requests for reprints should go to 'pim at hnympi51.bitnet' -- not to me! Jeff --------------------------------------------------------------------
Jeff Elman suggested I announce the existence of the following new paper: Willem J.M. Levelt, ARE MULTILAYER FEEDFORWARD NETWORKS EFFECTIVELY TURING MACHINES? (Paper for conference on "Domains of Mental Functioning: Attempts at a synthesis". Center for Interdisciplinary Research, Bielefeld, December 4-8, 1989. Proceedings to be published in Psychological Research, 1990).
Abstract: Can connectionist networks implement any symbolic computation? That would be the case if networks have effective Turing machine power. It has been claimed that a recent mathematical result by Hornik, Stinchcombe and White on the generative power of multilayer feedforward networks has that implication. The present paper considers whether that claim is correct. It is shown that finite approximation measures, as used in Hornik et al.'s proof, are not adequate for capturing the infinite recursiveness of recursive functions. Therefore, the result is irrelevant to the issue at hand. Willem Levelt, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands, e-mail: PIM at HNYMPI51 (on BITNET). (PS. Jeff Elman and Don Norman own copies of the paper)
From harnad at Princeton.EDU Sat Jan 27 16:54:30 1990 From: harnad at Princeton.EDU (Stevan Harnad) Date: Sat, 27 Jan 90 16:54:30 EST Subject: robots and simulation Message-ID: <9001272154.AA01941@reason.Princeton.EDU>
Alexis Manaster-Ramer AMR at ibm.com wrote: > I know of no... attempt [to make sense of "content" or > "intentionality"] that does not either (a) work equally well for robots > as it does for human beings (e.g., Harnad's proposals about > "grounding")... [G]rounding as I understand Harnad's proposal is > not... intended [to distinguish biological systems > from robots]. I think he contends that robots are just as grounded as > people are, but that disembodied (non-robotic) programs are not.
You're quite right that the grounding proposal (in "The Symbol Grounding Problem," Physica D 1990, in press) does not distinguish robots from biological systems -- because biological systems ARE robots of a special kind. That's why I've called this position "robotic functionalism" (in opposition to "symbolic functionalism"). But you leave out a crucial distinction that I DO make, over and over: that between ordinary, dumb robots, and those that have the capacity to pass the Total Turing Test [TTT] (i.e., perform and behave in the world for a lifetime indistinguishably from the way we do). Grounding is trivial without TTT-power. And the difference is like night and day.
(And being a methodological epiphenomenalist, I think that's about as much as you can say about "content" or "intentionality.") > Given what we know from the theory of computation, even though > grounding is necessary, it does not follow that there is any useful > theoretical difference between a program simulating the interaction > of a being with an environment and a robot interacting with a real > environment. As explained quite explicitly in "Minds, Machines and Searle" (J. Exp. Theor. A.I. 1(1), 1989), there is indeed no "theoretical difference," in that all the INFORMATION is there in a simulation, but there is another difference (and this applies only to TTT-scale robots), one that doesn't seem to be given full justice by calling it merely a "practical" difference, namely, that simulated minds can no more think than simulated planes can fly or simulated fires can burn. And don't forget that TTT-scale simulations have to contend with the problem of encoding all the possible real-world contingencies a TTT-scale robot would be able to handle, and how; a lot to pack into a pure symbol cruncher... Stevan Harnad
From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Sat Jan 27 20:45:08 1990 From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (thanasis kehagias) Date: Sat, 27 Jan 90 20:45:08 EST Subject: probability learning mini bibliography Message-ID:
at the risk of becoming boring ... a while ago i had asked for pointers to probability learning by neural nets. there were very few replies to the query, which i summarize below. format is bibtex, as usual. (remark: one of the netters -cannot recall his name- remarked that mean field theory can be interpreted as probability learning. prima facie this sounds right but i did not follow his lead. people interested in mf theory, i have included some references in my dynamic neural nets bibliography, available by FTP from this site.) usual disclaimer: this bibliography is far from complete and if somebody's work is not included, do not flame me, send me the reference ... thanasis
@article{kn:And83a, title ="Cognitive and Psychological Computation with Neural Models", author ="J.A. Anderson", journal ="IEEE Trans. on Systems, Man and Cybernetics", volume ="SMC-13", year ="1983" }
@article{kn:And77a, title ="Distinctive Features, Categorical Perception and Learning: some Applications of a Neural Model", author ="J.A. Anderson and others", journal ="Psychological Review", year ="1977", volume ="84", pages ="413-451" }
@inproceedings{kn:Gol87a, title ="Probabilistic Characterization of Neural Model Computation", booktitle ="Neural Information Processing Systems", author ="R.M. Golden", editor ="J.S. Denker", year ="1987", organization ="American Institute for Physics" }
@INCOLLECTION{KN:HIN86A, title ="Learning and Relearning in Boltzmann Machines", booktitle ="Parallel Distributed Processing", author ="G. Hinton and T. Sejnowski", volume ="1", year ="1986", PUBLISHER ="MIT" }
@inproceedings{kn:Pea86a, title ="G-Maximization: An Unsupervised Learning Procedure for discovering Regularities", booktitle ="Neural Networks for Computing", author ="B. Pearlmutter and G. Hinton", editor ="J.S. Denker", year ="1986", pages ="333-338", organization ="American Institute for Physics" }
@techreport{kn:Lan89a, author ="A. Lansner and O. Ekeberg", title ="A One-Layer Feedback Artificial Neural Network with a Bayesian Learning Rule", number ="TRITA-NA-P8910", institution ="Roy. Inst. of Technology, Stockholm", year ="1989" }
@TECHREPORT{KN:SUN89A, AUTHOR ="R. Sun", TITLE ="The Discrete Neuronal Model and the Probabilistic Discrete Neuronal Model", NUMBER ="?", INSTITUTION ="Computer Sc. Dept., Brandeis Un.", year ="1989" }
SUN", TITLE ="The Discrete Neuronal Model and the Probabilistic Discrete Neuronal Model", NUMBER ="?", INSTITUTION ="Computer Sc. Dept., Brandeis Un.", year ="1989" } @incollection{kn:Smo86a, title ="Information Processing in Dynamical Systems", booktitle ="Parallel Distributed Processing", author ="P. Smolensky", volume ="1", year ="1986", PUBLISHER ="MIT" } @article{kn:Sol88a, author ="S. Solla", title ="Accelerated Learning Experiments in Layered Neural Networks", journal ="Complex Systems", year ="1988", volume ="2", } @techreport{kn:Sus88a, author ="H. Sussman", title ="On the Convergence of Learning Algorithms for Boltzmann Machines", number ="sycon-88-03", institution ="Rutgers Center for Systems and Control", year ="1988" } @inproceedings{kn:Sun89a, author= "R. Sun", title ="The Discrete Neuronal model and the Probabilistic Discrete Neuronal Model", booktitle ="Int. Neural Network Conf.", year ="1989" } @TECHREPORT{KN:WIL86A, author ="R. Williams", title ="Reinforcement learning in connectionist networks", number ="TR 8605", organization ="ICS, University of California, San Diego", year ="1986" } From Bill_McKellin at mtsg.ubc.ca Sun Jan 28 21:17:36 1990 From: Bill_McKellin at mtsg.ubc.ca (Bill_McKellin@mtsg.ubc.ca) Date: Sun, 28 Jan 90 18:17:36 PST Subject: No subject Message-ID: <2029417@mtsg.ubc.ca> SUB CONNECTIONISTS-REQUEST BILL MCKELLIN From AMR at IBM.COM Sun Jan 28 23:25:02 1990 From: AMR at IBM.COM (AMR@IBM.COM) Date: Sun, 28 Jan 90 23:25:02 EST Subject: robots and simulation Message-ID: (1) I am glad that SOME issues are getting clarified, to wit, I hope that everybody that has been confusing Harnad's arguments about grounding with other people's (perhaps Searle or Lakoff's) arguments about content, intentionality, and/or semantics will finally accept that there is a difference. (2) I omitted to refer to Harnad's distinction between ordinary robots, ones that fail the Total Turing Test, and and the theoretical ones that do pass the TTT, for two unrelated reasons. One was that I was not trying to present a complete accoun, merely, to raise certain issues, clarify certain points, and answer certain objections that had arisen. The other was that I do not agree with Harnad on this issue, and that for a number of reasons. First, I believe that a Searlean argument is still possible even for a robot that passes the TTT. Two, the TTT is much too strong since no one human being can pass it for another, and we would not be surprised I think to find an intelligent species of Martians or what have you that would, obviously, fail abysmally on the TTT but might pass a suitable version of the ordinary Turing Test. Third, the TTT is still a criterion of equivalence that is based exclusively on I/O, and I keep pointing out that that is not the right basis for judging whether two systems are equivalent (I won't belabor this last point, because that is the main thing that I have to say that is new, and I would hope to address it in detail in the near future, assuming there is interest in it out there.) (3) Likewise, I omitted to refer to Harnad's position on simulation because (a) I thought I could get away with it and (b) because I do not agree with that one either. The reason I disagree is that I regard simulation of a system X by a system Y as a situation in which system Y is VIEWED by an investigator as sufficiently like X with respect to a certain (usually very specific and limited) characteristic to be a useful model of X. 
In other words, the simulation is something which in no sense does what the original thing does. However, a hypothetical program (like the one presupposed by Searle in his Chinese room argument) that uses Chinese like a native speaker to engage in a conversation that its interlocutor finds meaningful and satisfying would be doing more than simulating the linguistic and conversational abilities of a human Chinese speaker; it would actually be duplicating these. In addition--and perhaps this is even more important--the use of the term simulation with respect to an observable, external behavior (I/O behavior again) is one thing, its use with reference to nonobservable stuff like thought, feeling, or intelligence is quite another. Thus, we know what it would mean to duplicate (i.e., simulate to perfection) the use of a human language; we do not know what it would mean to duplicate (or even simulate partially) something that is not observable like thought or intelligence or feeling. That in fact is precisely the open question. And, again, it seems to me that the relevant issue here is what notion of equivalence we employ. In a nutshell, the point is that everybody (incl. Harnad) seems to be operating with notions of equivalence that are based on I/O behavior even though everybody would, I hope, agree that the phenomena we call intelligence (likewise thought, feeling, consciousness) are NOT definable in I/O terms. That is, I am assuming here that "everybody" has accepted the implications of Searle's argument at least to the extent that IF A PROGRAM BEHAVES LIKE A HUMAN BEING, IT NEED NOT FOLLOW THAT IT THINKS, FEELS, ETC., LIKE ONE. Searle, of course, goes further (without I think any justification) to contend that IF A PROGRAM BEHAVES LIKE A HUMAN BEING, IT IS NOT POSSIBLE THAT IT THINKS, FEELS, ETC., LIKE ONE. The question that no one has been able to answer though is, if the two behave the same, in what sense are they not equivalent, and that, of course, is where we need to insist that we are no longer talking about I/O equivalence. This is, of course, where Turing (working in the heyday of behaviorism) made his mistake in proposing the Turing Test.
From J.Kingdon at Cs.Ucl.AC.UK Sun Jan 28 12:45:04 1990 From: J.Kingdon at Cs.Ucl.AC.UK (J.Kingdon@Cs.Ucl.AC.UK) Date: Sun, 28 Jan 90 17:45:04 +0000 Subject: join list Message-ID: I am a postgraduate research student working on the mathematical theory of connectionist models. I wondered if it would be possible to have my name added to the mailing list of this group. Many thanks, Jason Kingdon. jason at uk.ac.ucl.cs
From elman at amos.ucsd.edu Sat Jan 27 01:02:27 1990 From: elman at amos.ucsd.edu (Jeff Elman) Date: Fri, 26 Jan 90 22:02:27 PST Subject: 2nd call: Connectionist Models Summer School Message-ID: <9001270602.AA04988@amos.ucsd.edu> * Please post * January 26, 1990 2nd call ANNOUNCEMENT & SOLICITATION FOR APPLICATIONS CONNECTIONIST MODELS SUMMER SCHOOL / SUMMER 1990 UCSD La Jolla, California
The next Connectionist Models Summer School will be held at the University of California, San Diego from June 19 to 29, 1990. This will be the third session in the series which was held at Carnegie Mellon in the summers of 1986 and 1988. Previous summer schools have been extremely successful, and we look forward to the 1990 session with anticipation of another exciting summer school.
The summer school will offer courses in a variety of areas of connectionist modelling, with emphasis on computational neuroscience, cognitive models, and hardware implementation. A variety of leaders in the field will serve as Visiting Faculty (the list of invited faculty appears below). In addition to daily lectures, there will be a series of shorter tutorials and public colloquia. Proceedings of the summer school will be published the following fall by Morgan-Kaufmann (previous proceedings appeared as 'Proceedings of the 1988 Connectionist Models Summer School', Ed., David Touretzky, Morgan-Kaufmann). As in the past, participation will be limited to graduate students enrolled in PhD. programs (full- or part-time). Admission will be on a competitive basis. Tuition is subsidized for all students and scholarships are available to cover housing costs ($250).
Applications should include the following: (1) A statement of purpose, explaining major areas of interest and prior background in connectionist modeling (if any). (2) A description of a problem area you are interested in modeling. (3) A list of relevant coursework, with instructors' names and grades. (4) Names of the three individuals whom you will be asking for letters of recommendation (see below). (5) If you are requesting support for housing, please include a statement explaining the basis for need. Please also arrange to have letters of recommendation sent directly from three individuals who know your current work.
Applications should be sent to: Marilee Bateman Institute for Neural Computation, B-047 University of California, San Diego La Jolla, CA 92093 (619) 534-7880 / marilee at sdbio2.ucsd.edu All application material must be received by March 15, 1990. Decisions about acceptance and scholarship awards will be announced April 1. If you have further questions, contact Marilee Bateman (address above), or one of the members of the Organizing Committee: Jeff Elman, UCSD, elman at amos.ucsd.edu; Terry Sejnowski, UCSD/Salk Institute, terry at sdbio2.ucsd.edu; Geoff Hinton, Toronto, hinton at ai.toronto.edu; Dave Touretzky, CMU, touretzky at cs.cmu.edu. ------------
INVITED FACULTY: Yaser Abu-Mostafa (CalTech) Richard Lippmann (MIT Lincoln Labs) Dana Ballard (Rochester) Shawn Lockery (Salk) Andy Barto (UMass/Amherst) Jay McClelland (CMU) Rik Belew (UCSD) Carver Mead (CalTech) Gail Carpenter (BU) David Rumelhart (Stanford) Patricia Churchland (UCSD) Terry Sejnowski (UCSD/Salk) Gary Cottrell (UCSD) Marty Sereno (UCSD) Jack Cowan (Chicago) Al Selverston (UCSD) Richard Durbin (Stanford) Marty Sereno (UCSD) Jeff Elman (UCSD) Paul Smolensky (Colorado) Jerry Feldman (ICSI/UCB) David Tank (Bell Labs) Geoffrey Hinton (Toronto) David Touretzky (Carnegie Mellon) Michael Jordan (MIT) Halbert White (UCSD) Teuvo Kohonen (Helsinki) Ron Williams (Northeastern) George Lakoff (UCB) David Zipser (UCSD)
From harnad at Princeton.EDU Mon Jan 29 09:25:52 1990 From: harnad at Princeton.EDU (Stevan Harnad) Date: Mon, 29 Jan 90 09:25:52 EST Subject: Redirecting flow Message-ID: <9001291425.AA06811@cognito.Princeton.EDU>
I will not reply to Alexis Manaster-Ramer's (amr at ibm.com) posting about grounding on connectionists because I do not think the discussion belongs here. His original query about whether nets were equivalent to Turing Machines was appropriate for this list, but now the discussion has gone too far afield. I will be replying to him on the symbol grounding list, which I maintain.
If anyone wants to follow the discussion, write me and I'll add your name to the list. --- Stevan Harnad From rr%cstr.edinburgh.ac.uk at nsfnet-relay.ac.uk Sun Jan 28 08:19:09 1990 From: rr%cstr.edinburgh.ac.uk at nsfnet-relay.ac.uk (Richard Rohwer) Date: Sun, 28 Jan 90 13:19:09 GMT Subject: Apologizing in advance... Message-ID: <17869.9001281319@cstr.ed.ac.uk> A CONNECTIONIST THEORY OF ONTOLOGY Richard Rohwer 3 May 1989 OK, all you philosophical bullshitters! Here's the definitive theory on life, the universe, and connectionism. Let's start by trashing the silly idea that computer simulations of thunderstorms just aren't wet enough. Well allright, they're not wet enough on the Met's wimpy CRAYs. But those CRAYs are only simulating part of the thunderstorm experience. They cut up the storm into a gridwork of little fluid elements, each of which is just a bunch of numbers. As the simulated thunderstorm rages along, the numbers in the fluid elements change this way and that according to some approximated physical laws. In order to save on computational expense, only part of the structure of a thunderstorm is simulated-- the structure which exists on scales large compared to the grid size. Of course that simulation isn't wet. My claim is that a more complete simulation is necessary. You will probably object that even if I simulate every subatomic particle in every detail, then it's still just a bunch of numbers and it doesn't feel wet. That's because you probably think I mean to leave the observer out of the simulation. Well I don't. Of course the simulation doesn't seem wet to the programmer looking at all the reams of numbers. The interaction between the storm and the observer is quite important in the business of making the observer wet, so it's no good leaving that out of the simulation. The programmer cannot be made wet by the simulated thunderstorm any more readily than an observer watching an Amazonian thunderstorm through a telescope in the Martian desert. But that doesn't mean the jungle's inhabitants keep dry. So we simulate, atom-by-atom, an observer standing in the simulated thunderstorm. Perhaps now you object that without actually being the simulated observer, we cannot be sure that the simulated observer is the least bit conscious and feels anything. In that case, you are probably one of those silly solipsists, in which case I write you off as a hopeless case. True it is, I must assume at this juncture that given enough machinery of brain, mind will be present. And I assume, furthermore, that the formal connectionist structure of the brain's machinery is all that is required to support a mind; that it's the program running on the pattern of connections that does the trick, and it really doesn't matter what the machinery is made of. In particular, it really doesn't matter that the machinery happens to be quite a lot of silicon chips constituting a really groovy computer. So there you have it. Given just a few innocent assumptions (which can be solidly proven using a few pints of real ale) we have established that, when set up properly, simulated thunderstorms really are wet. But we can push this a little further. What's the purpose of this super-duper computer that runs the simulation? It's just there to provide an instantiation of a formal mathematical system in which the simulation is expressed. The simulation doesn't really need a physical device on which to be simulated; it only needs the formal mathematical system. And that's cheap. 
It exists as a mathematical tautology, just like any formal system. So we can throw away the physical computer and still keep the thunderstorm together with all its wetness. Physics can be built out of mathematics using this connectionist theory of ontology: To get the machinery required to support a mind, it's enough for a connectionist mathematical formalism (augmented with a physics formalism) to be a theoretical possibility. Have another pint. Now consider two simulations of the same thunderstorm-observer system running on different machines. Are these different systems? Certainly not. The machines serve as a representation/communication medium between the world of the simulated thunderstorm and the world of our own subjective experience, much as coordinates serve to represent vectors. The simulated thunderstorm exists for the simulated observer regardless of whether we bother to simulate it, rather like a vector exists regardless of whether we give it coordinates. But we cannot learn much about the simulated system without using a simulator, much as we cannot write down a vector without using coordinates. And the simulated system is the same system regardless of which, or how many simulators we use to "view" it, much as a vector is independent of the coordinate system or systems used to represent it. So any simulatable world with simulatable observers exists physically for those observers just as the world for which we are observers exists physically for us. So there must be quite a lot of physical worlds out there, including many which are quite like our own, except for some fine, distinguishing details. So how similar do these other worlds have to be to our own in order for us to be able to physically observe them, just as we physically observe our own world? Surely not every minute detail is important. In some sense, we must be partially aware of worlds which are very similar to ours; minds which are very similar to ours. But then that makes our world more like an ensemble of similar worlds of which we are partially aware. For my money, this is what quantum interference is all about. It behooves me to derive the quantitative quantum formalism from this point of view but alas, (due to a looming hangover) I can only support the idea with some qualitative points: a) Quantum theory is at pains not to ascribe physical reality to anything which has not been carefully observed, and b) In its purest form (ie., without collapse of the wavefunction), quantum measurement theory is a many-worlds theory. Richard Rohwer JANET: rr at uk.ac.ed.cstr Centre for Speech Technology Research ARPA: rr%ed.cstr at nsfnet-relay.ac.uk Edinburgh University BITNET: rr at cstr.ed.ac.uk, 80, South Bridge rr%cstr.ed.UKACRL Edinburgh EH1 1HN, Scotland UUCP: ...!{seismo,decvax,ihnp4} !mcvax!ukc!cstr!rr From AMR at ibm.com Mon Jan 29 12:03:06 1990 From: AMR at ibm.com (AMR@ibm.com) Date: Mon, 29 Jan 90 12:03:06 EST Subject: No subject Message-ID: I just realized that I am only getting stuff if it is forwarded by a kind soul. Could someone put me on the list? From pollack at cis.ohio-state.edu Tue Jan 30 01:12:37 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Tue, 30 Jan 90 01:12:37 EST Subject: Are thoughts REALLY Random? 
In-Reply-To: Richard Rohwer's message of Sun, 28 Jan 90 13:19:09 GMT <17869.9001281319@cstr.ed.ac.uk> Message-ID: <9001300612.AA00163@toto.cis.ohio-state.edu> **Don't Forward this one**
Richard just reminded me that I recently read Penrose's unfortunately named book, making a nice Dancing Woolly (that's Wu Li) appear like Surly Lion (that's Searlie). Which, by the way, reminds me of a topic to bring up for discussion, now that Turing has finally been grounded, at least symbolically. You see, Penrose went on and on about his new-age religious experience of accessing the Platonic Universe of Absolutely True Mathematical Ideas (He obviously didn't finish Jaynes), and how complex and beautiful mathematical structures, such as Mandelbrot's set, complex numbers, and his own tilings, are in "there" to be discovered, rather than invented, and would continue to exist whether we found them or not. This universe seems like a pretty big space to me; if only I could dip into it to create publications! So, consider when a computation (or a mathematician) transfers a collection of "information" from the infinite (but ghostly) Platonic Universe into our own finite (but corporeal) universe, by generating a fractal picture or writing down a brand-new theorem. How can this bunch of bits be quantified? Apologizing for the brutal reduction of others' lifework, I know about counting binary distinctions (Shannon), I know about counting instructions (Kolmogorov), I know that only a truly random string is really complex (Chaitin), I know that machines can be ordered lexicographically (Godel), and I even know that the computable languages form a strict hierarchy (Chomsky). However, since programs of the same size, when executed, can yield structures with vastly different APPARENT randomness and bit-counts, some useful measure of the "Platonic density" seems to be missing from the complexity menu. I find it difficult to believe that the Mandelbrot set is only as complex as the program to "access" it! Can somebody please help me make sense out of this? Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890
From drosen at psych.Stanford.EDU Tue Jan 30 02:32:08 1990 From: drosen at psych.Stanford.EDU (Daniel Rosen) Date: Mon, 29 Jan 90 23:32:08 PST Subject: join list Message-ID: Could you please add my name to the connectionist mailing list. Thanks a lot. -Daniel Rosen drosen at psych.stanford.edu
From Scott.Fahlman at B.GP.CS.CMU.EDU Tue Jan 30 06:13:48 1990 From: Scott.Fahlman at B.GP.CS.CMU.EDU (Scott.Fahlman@B.GP.CS.CMU.EDU) Date: Tue, 30 Jan 90 06:13:48 EST Subject: Are thoughts REALLY Random? In-Reply-To: Your message of Tue, 30 Jan 90 01:12:37 -0500. <9001300612.AA00163@toto.cis.ohio-state.edu> Message-ID: ** I guess this shouldn't be forwarded either, since Jordan doesn't want ** ** the initial message to be. **
However, since programs of the same size, when executed, can yield structures with vastly different APPARENT randomness and bit-counts, some useful measure of the "Platonic density" seems to be missing from the complexity menu. I find it difficult to believe that the Mandelbrot set is only as complex as the program to "access" it! Can somebody please help me make sense out of this?
As a medium in which useful information is to be "mined" from an apparently random heap of bits, do you see any fundamental difference between the Mandelbrot set and the proverbial infinite number of monkeys with typewriters? Offhand, I don't see any fundamental difference, as long as we are sure that the Mandelbrot set does not systematically exclude any particular set of bit patterns as it folds itself into a pattern of infinite variability. Both are very low-density forms of "information ore", and probably not worth mining for that reason. Sure, anything you might want is "in there" in some useless sense, but at these low densities the work of rejecting the nonsense is certainly greater than the work of creating the patterns you want in some more direct way. (-: Some have claimed the same thing for the typical collection of papers currently being written in this field. :-) Maybe the right measure of Platonic density is something like the expected length of the address (M bits) that you would need to point to a specific N-bit pattern that you want to locate somewhere in this infinite heap of not-exactly-random bits. If M >= N on the average, then the structure is not of any particular use as a generator. You're better off storing the N-bit patterns directly than storing the M-bit address along with a Mandelbrot chip. And if you want to systematically search a space, to make sure that you visit all possibilities eventually, you're better off searching the N-bit space of bit-patterns directly than wandering through the Mandelbroth. Even if you exhaustively search an M-bit subspace of the Mandelbrot set, you have no guarantee that your pattern is in there. Any reason to believe that M < N for Mandelbrot sets or monkey type? I've seen no compelling arguments that this should be so. If M >= N for all these sets, then worrying about which has greater or less density seems foolish -- they're all useless. -- Scott
From smk at flash.bellcore.com Tue Jan 30 09:17:50 1990 From: smk at flash.bellcore.com (Selma M Kaufman) Date: Tue, 30 Jan 90 09:17:50 EST Subject: No subject Message-ID: <9001301417.AA23985@flash.bellcore.com> Subject: TR Available - Connectionist Language Users
Connectionist Language Users Robert B. Allen Bellcore The Connectionist Language Users (CLUES) paradigm employs neural learning algorithms to develop reactive intelligent agents. In much of the research reported here, these agents "use" language to answer questions and to interact with their environment. The model is applied to simple examples of generating verbal descriptions, answering questions, pronoun reference, labeling actions, and verbal interactions between agents. In addition the agents are shown to be able to model other intelligent activities such as planning, grammars and simple analogies, and an adaptive pedagogy is introduced. Overall, these networks provide a natural account of many aspects of language use. _______ This report, which integrates 2 years of work and numerous shorter papers, takes the next step. 'Grounded' recurrent networks are applied to a wide range of linguistic problems. Request reports from: smk at bellcore.com
From pollack at cis.ohio-state.edu Tue Jan 30 12:59:32 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Tue, 30 Jan 90 12:59:32 EST Subject: Are thoughts REALLY Random?
In-Reply-To: Scott.Fahlman@B.GP.CS.CMU.EDU's message of Tue, 30 Jan 90 06:13:48 EST <9001301114.AA21049@cheops.cis.ohio-state.edu> Message-ID: <9001301759.AA00428@toto.cis.ohio-state.edu>
(Background: Scott is commenting less on the question than on the unspoken subtext, which is my idea of building a very large reconstructive memory based on quasi-inverting something like the Mandelbrot set. Given a figure, find a pointer; then only store the pointer and simple reconstruction function; This has been mentioned twice in print, in a survey article in AI Review, and in NIPS 1988. Yesterday's note was certainly related; but I wanted to ignore the search question right now!)
>> Do you see any fundamental difference between the >> Mandelbrot set and the proverbial infinite number of monkeys with >> typewriters? I think that the difference is that the initial-conditions/reduced descriptions/pointers to the Mandelbrot set can be precisely stored by physical computers. This leads to a replicability of "access" not available to the monkeys.
>> Maybe the right measure of Platonic density is something like the expected >> length of the address (M bits) that you would need to point to a specific >> N-bit pattern that you want to locate somewhere in this infinite heap of >> not-exactly-random bits. Thanks! Not bad for a starting point! The Platonic Complexity (ratio of N/M) would decrease to 0 at the GIGO limit, and increase to infinity if it took effectively 0 bits to access arbitrary information. This is very satisfying.
>> Why shouldn't M be much greater than N? Normally, we computer types live with a density of 1, as we convert symbolic information into bit-packed data-structures. Thus we already have lots of systems with PC=1! Also I can point to systems with PC <1 (Bank teller machines) and with PC>1 (Postscript).
Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890
From kroger at cognet.ucla.edu Tue Jan 30 18:14:15 1990 From: kroger at cognet.ucla.edu (James Kroger) Date: Tue, 30 Jan 90 15:14:15 PST Subject: No subject Message-ID: <9001302314.AA04216@tinman.cognet.ucla.edu> Hi. May I receive a copy of "Connectionist Language Users?" Jim Kroger Dept. of Psychology UCLA Los Angeles, Ca. 90024 Thanks. --Jim K.
From tenorio at ee.ecn.purdue.edu Wed Jan 31 08:53:42 1990 From: tenorio at ee.ecn.purdue.edu (Manoel Fernando Tenorio) Date: Wed, 31 Jan 90 08:53:42 EST Subject: Are thoughts REALLY Random? In-Reply-To: Your message of Tue, 30 Jan 90 12:59:32 EST. <9001301759.AA00428@toto.cis.ohio-state.edu> Message-ID: <9001311353.AA24293@ee.ecn.purdue.edu> --------
The ideas of minimum description length (MDL), Kolmogorov-Chaitin complexity, and other measures are certainly related in some way. But the fact that some nonlinear systems can display chaotic behavior, and through that from a finite and small description generate infinite output (or very large), is at least puzzling. One interesting and simple chaotic equation, when plotted in a polar system, displayed a behavior that looked like a wheel marked at a point, moving at a period which, when divided by the perimeter of the wheel, gave an irrational number. So that the point, when the wheel was sampled periodically, was NEVER (infinite precision) at the same point; never periodic. We are accustomed to ideas of semi-periodicity such as a sine wave modulated by a music signal.
But for truly non-repetitive behavior, the length of the music signal has to be the same as that of the modulated signal. So this chaotic business seems even more amazing. I read a while ago in IEEE Computer about a group at a university around the DC area (reference, please), funded by DARPA, that was doing image compression using chaotic series at amazing ratios. Rissanen has shown the relationship among image compression, system identification, prediction, etc., using measures of complexity. It would be great if the same ideas from the chaotic image compression, in some form, could be applied to our problems of memory, description and computation.

Unfortunately, in my experience, a property of these systems seems to hinder the best of efforts. In trying to reduce a string to its generator, a simple change in precision of parameters or initial conditions can send you in the wrong direction. It is a MANY-to-one mapping in the worst sense, making the inverse process almost hopeless. It is interesting to see that we painted ourselves into a linear corner, and now, forced to think about a nonlinear universe, we are confronted with phenomena that our theories don't support. Good luck to the one who is trying to break this code.

--ft.

< Manoel Fernando Tenorio (tenorio at ee.ecn.purdue.edu) > < MSEE233D > < School of Electrical Engineering > < Purdue University > < W. Lafayette, IN, 47907 >
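A minimal numerical sketch of the two effects Tenorio describes -- an irrational rotation whose sampled phase never exactly repeats, and the parameter sensitivity that makes inverting a chaotic generator nearly hopeless. The constants below (the rotation increment and the logistic-map parameter) are arbitrary illustrative choices, not taken from any of the systems mentioned above:

import math

# An irrational rotation: advance the phase of a marked point on a wheel by an
# irrational fraction of a turn per sample; the sampled phase never exactly repeats.
alpha = math.sqrt(2) - 1                      # arbitrary irrational increment per sample
phases = [(n * alpha) % 1.0 for n in range(10000)]
print("distinct sampled phases:", len(set(round(p, 12) for p in phases)))   # all 10000 are distinct

# Parameter sensitivity of a chaotic generator (logistic map): two parameters that
# differ only in the ninth decimal place soon give sequences that bear no resemblance,
# which is why recovering the generator from its output is so hard.
def logistic(r, x0, n):
    x, xs = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = logistic(3.9, 0.5, 60)
b = logistic(3.9 + 1e-9, 0.5, 60)
print("divergence after 60 iterations:", abs(a[-1] - b[-1]))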
From lab at maxine.wpi.edu Wed Jan 31 10:31:56 1990 From: lab at maxine.wpi.edu (Lee Becker) Date: Wed, 31 Jan 90 10:31:56 -0500 Subject: clues Message-ID: <9001311531.AA27797@maxine.wpi.edu>

I'd very much like to have a copy of your tech report on CLUES. Lee A. Becker, Dept. of Computer Science, Worcester Polytechnic Inst. Worcester, MA 01609 THANKS

From rsun at chaos.cs.brandeis.edu.cs.brandeis.edu Wed Jan 31 16:29:27 1990 From: rsun at chaos.cs.brandeis.edu.cs.brandeis.edu (Ron Sun) Date: Wed, 31 Jan 90 16:29:27 -0500 Subject: No subject Message-ID: <9001312129.AA01945@chaos>

Fuzzy logic (or fuzzy set theory in general) has had widespread impact on many different areas. It bears significant resemblances to connectionist models. Thus it is only natural to investigate systematically the similarities and differences between them, and how they can join forces towards a new kind of AI. I am especially interested in knowing whether there are ways we can take things from fuzzy logic and apply them in high-level connectionist cognitive modeling. In any case, connectionist models must deal with the same problem: uncertainty and vagueness (fuzziness), which is what FL is for. In terms of using evolutionary and learning algorithms (GAs, for example), these numbers (degrees of membership, etc.) might be useful. There is some work that I know of, e.g. Kosko's paper in the J. of Approximate Reasoning, and my own (in Proc. 11th CogSci Conf. and Proc. IEA/AIE-89).

My questions are: 1) Any other work in the area (the relation between the two; the combination of the two)? Pointers? References? 2) Any comments and opinions regarding the combination of connectionism and fuzzy logic? 3) Any conferences, symposia or workshops dealing extensively with the issue (the combination of the two, not each individually)?

Ron Sun Brandeis University Computer Science Waltham, MA 02254 rsun%cs.brandeis.edu at relay.cs.net
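One small, self-contained illustration of the resemblance Sun mentions: a graded fuzzy membership can be computed by a single sigmoid unit, and the usual fuzzy connectives (min/max/complement) then combine the graded unit outputs. The membership functions, weights, and crossover points below are arbitrary choices made only for the example:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Membership in the fuzzy set "tall", computed by a single sigmoid unit; the weight
# and bias play the role of the membership function's slope and crossover point.
def tall(height_cm, w=0.3, crossover=175.0):
    return sigmoid(w * (height_cm - crossover))

# Membership in the fuzzy set "heavy", same idea.
def heavy(weight_kg, w=0.25, crossover=80.0):
    return sigmoid(w * (weight_kg - crossover))

# The usual fuzzy connectives applied to the graded unit outputs.
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

t, h = tall(182.0), heavy(90.0)
print("tall = %.3f  heavy = %.3f" % (t, h))
print("tall AND heavy = %.3f" % fuzzy_and(t, h))
print("tall OR NOT heavy = %.3f" % fuzzy_or(t, fuzzy_not(h)))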
From kersten%scf.decnet at nwc.navy.mil Wed Jan 3 01:00:00 1990 From: kersten%scf.decnet at nwc.navy.mil (SCF::KERSTEN) Date: 2 Jan 90 23:00:00 PDT Subject: newletter Message-ID:

Dear Connectionist: I wondered how one gets on the distribution list for the connectionist? Is it free?? Is it possible to obtain a statement of purpose of the newsletter, who contributes, etc.
From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Wed Jan 3 14:13:56 1990 From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (thanasis kehagias) Date: Wed, 03 Jan 90 14:13:56 EST Subject: No subject Message-ID: i would like the following information from kind fellow netters that do speech research: reading some of the current TR's on the use of nn's for speech recognition i could not figure just how many parameters are involved in the training problem and how much time it takes to train them. i would be interested in some statement of the form: 2000 paramete 17 days of sun time. maybe also how the training time scales with the size of the optimization problem ... i will be happy to collect answers and post to the list. please send me personal mail. also , please answer only from your own personal research experience. thanasis From bharucha at eleazar.dartmouth.edu Wed Jan 3 14:45:10 1990 From: bharucha at eleazar.dartmouth.edu (Jamshed Bharucha) Date: Wed, 3 Jan 90 14:45:10 -0500 Subject: summer institute in cognitive neuroscience Message-ID: <9001031945.AA02873@eleazar.dartmouth.edu> JAMES S. MCDONNEL FOUNDATION THIRD ANNUAL SUMMER INSTITUTE IN COGNITIVE NEUROSCIENCE Dartmouth College and Medical School July 2 - July 13, 1990 The Third Annual Summer Institute will be held from July 2 through July 13, 1990. The two week course will examine how information about the brain bears on issues in cognitive science, and how approaches in cognitive science apply to neuroscience research. A distinguished faculty will lecture on current topics in perception and language; laboratories and demonstrations will offer practical experience with cognitive neuropsychology experiments, connectionist/computational modeling, and neuroanatomy. At every stage, the relationship between cognitive issues and underlying neural circuits will be explored. The Institute directors will be Michael Gazzaniga, George A. Miller, Wolf Singer and Gordon Shepherd. Applications are invited from beginning and established researchers. The Foundation is providing limited support for travel expenses and room/board. Visiting Faculty Include: Sheila Blumstein Eric Knudsen Kenneth Nakayama Patricia A. Carpenter Mark Konishi Steven Pinker Albert M. Galaburda Stephen M. Kosslyn Michael I. Posner Lila Gleitman Marta Kutas Marcus E. Raichle David H. Hubel Ralph Linsker Pasko Rakic Marcel A. Just Margaret Livingstone Gordon M. Shepherd Jon H. Kaas James McClelland Wolf Singer Herbert P. Killackey George A. Miller Barry E. Stein Vernon B. Mountcastle Host Faculty: Kathleen Baynes Carol A. Fowler Patricia A. Reuter-Lorenz Jamshed Bharucha Michael S. Gazzaniga Mark J. Tramo Robert Fendrich Howard C. Hughes George L. Wolford For further information please send email to reuter-lorenz at mac.dartmouth.edu APPLICATIONS MUST BE POSTMARKED BY JANUARY 12, 1990 *************************Application form*********************** NAME: INSTITUTIONAL AFFILIATION: POSITION: HOME ADDRESS: WORK ADDRESS: TELEPHONES: Housing and some meal costs will be covered by the Foundation. There will also be limited travel support available. Please indicate the percent of your need for this support: ___ % APPLICATION DEADLINE: Postmarked by January 12, 1990 NOTIFICATION OF ACCEPTANCE: March 5, 1990 PLEASE SEND THIS FORM, TOGETHER WITH: 1. A one-page statement explaining why you wish to attend. 2. A curriculum vitae. 3. Two letters of recommendation. (Supporting materials will be accepted until February 1). 
Send applications to the following address (do not email applications): Dr. M.S. Gazzaniga McDonnell Summer Institute HB 7915-A Dartmouth Medical School Hanover, New Hampshire 03756 ******************************************************************

From reggia at cs.UMD.EDU Thu Jan 4 16:26:42 1990 From: reggia at cs.UMD.EDU (James A. Reggia) Date: Thu, 4 Jan 90 16:26:42 -0500 Subject: call for papers Message-ID: <9001042126.AA20909@mimsy.UMD.EDU>

CALL FOR PAPERS: Connectionist/neural models in medicine

The 14th Annual Symposium on Computer Applications in Medical Care (Nov 4 - 7, 1990) will have sessions dealing with connectionist modelling research relevant to biomedicine. Previous papers have included presentations of new learning methods, models of portions of the nervous system of specific organisms, methods for classification and diagnosis of medical disorders, models of higher cortical functions and their disorders (e.g., aphasia, dyslexia, dementia), and methods for device control. Papers on these and a much broader range of topics are sought. Manuscripts are due March 1, 1990. For a copy of instructions for authors or any further information contact SCAMC Office of CME, George Washington University Medical Center, 2300 K Street, NW, Washington, DC 20037, or by phone: (202) 994-8928.

From ajr%engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK Fri Jan 5 11:00:15 1990 From: ajr%engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK (Tony Robinson) Date: Fri, 5 Jan 90 16:00:15 GMT Subject: problems with large training sets Message-ID: <1806.9001051600@dsl.eng.cam.ac.uk>

Dear Connectionists: I have a problem which I believe is shared by many others. In taking error propagation networks out of the "toy problem" domain and into the "real world", the number of examples in the training set increases rapidly. For weight updating, true gradient descent requires calculating the partial gradient from every element in the training set and taking a small step in the opposite direction to the total gradient. Both these requirements are impractical when the training set is large. Adaptive step size techniques can give an order of magnitude decrease in computation over a fixed scaling of the gradient and, for initial training, small subsets can give a sufficiently accurate estimation of the gradient. My problem is that I don't have an adaptive step size algorithm that works on the noisy gradient obtained from a subset of the training set. Does anyone have any ideas? (I'd be glad to coordinate suggestions and short summaries of published work and post back to the list.) To kick off, my best technique to date is included below. Thanks, Tony Robinson. (ajr at eng.cam.ac.uk)

From jbower at smaug.cns.caltech.edu Fri Jan 5 17:28:03 1990 From: jbower at smaug.cns.caltech.edu (Jim Bower) Date: Fri, 5 Jan 90 14:28:03 PST Subject: GENESIS Message-ID: <9001052228.AA00516@smaug.cns.caltech.edu>

Software availability announcement: GENESIS (GEneral NEural SImulation System) and XODUS (X-windows Output and Display Utility for Simulations)

This combined neural simulation system is now available for general distribution via FTP from Caltech. The software was developed to support the simulation of neural systems ranging from complex models of single neurons to simulations of large networks made up of more abstract neuronal components. For the last two years GENESIS has provided the basis for laboratory courses in neural simulation at both Caltech and the Marine Biological Laboratory in Woods Hole, MA.
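(Returning briefly to Tony Robinson's question above about adapting the step size on noisy subset gradients: one common heuristic is to smooth the gradient estimate and grow the step while successive estimates agree in sign, shrinking it when they disagree. The sketch below is only an illustration with arbitrary constants and is not the technique Robinson refers to; per-weight variants along the lines of Jacobs' delta-bar-delta rule are the more usual form.)

import random

def adaptive_minibatch_descent(grad_fn, w, data, batch_size=32, lr=0.01,
                               grow=1.1, shrink=0.5, beta=0.9, steps=1000):
    # grad_fn(w, batch) must return the gradient of the error summed over `batch`
    # with respect to the weight vector `w` (both plain lists of floats).
    # `beta`, `grow` and `shrink` are arbitrary constants for this sketch.
    smoothed = [0.0] * len(w)                 # exponentially smoothed gradient estimate
    for _ in range(steps):
        batch = random.sample(data, min(batch_size, len(data)))
        g = grad_fn(w, batch)
        # Grow the step while the new (noisy) gradient mostly agrees in sign with
        # the smoothed estimate; shrink it when the agreement breaks down.
        agreements = sum(1 for gi, si in zip(g, smoothed) if gi * si > 0.0)
        lr = lr * grow if agreements > len(w) / 2 else lr * shrink
        smoothed = [beta * si + (1.0 - beta) * gi for si, gi in zip(smoothed, g)]
        w = [wi - lr * si for wi, si in zip(w, smoothed)]
    return w

With a routine that returns the network error gradient over a batch, this can be dropped in wherever a fixed learning rate would otherwise be used.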
Most current GENESIS applications involve realistic simulations of biological neural systems, however, it has also been used to model more abstract networks. The system is not, however, a particularly efficient way to construct and run simple feedforward back propagation type simulations. More information on the simulator and its interface can be obtained from an article by our group in last years NIPS proceedings. (Wilson, M.A., Bhalla, U.S., Uhley, J.D., and Bower, J.M. 1989 GENESIS: A system for simulating neural networks. In: Advances in Neural information processing systems. D. Touretzky, editor. Morgan Kaufmann, San Mateo, CA. 485-492. ). The interface will also be the subject of an oral presentation at the upcoming USENIX meeting in Washington D.C. GENESIS and XODUS are written in C and run on SUN and DEC graphics work stations under UNIX (version 4.0 and up), and X-windows (version 11). The software requires 14 meg of disk space and the tar file is approximately 1 meg. Full source for the simulator is available via FTP from genesis.cns.caltech.edu (131.215.135.64). To acquire FTP access to this machine it is necessary to first register for distribution by using telnet or rlogin to login under user "genesis" and then follow the instructions (e.g. 'telnet genesis.cns.caltech.edu' and ' login as 'genesis'). When necessary, tapes can be provided for a small handling fee ($50). Those requiring tapes should send requests to genesis-req at caltech.bitnet. Any other questions about the system or its distribution should also be sent to this address. The current distribution includes full source for both GENESIS and XODUS as well as three tutorial simulations (squid axon, multicell, visual cortex). Documentation for these tutorials as well as three papers describing the structure of the simulator are also included. As described in more detail in the "readme" file at the FTP address, those interested in developing new GENESIS applications are encouraged to become registered members of the GENESIS users group (BABEL) for an additional one time $200 registration fee. As a registered user, one is provided documentation on the simulator itself (currently in an early stage), access to additional simulator components, bug report listings, and access to a user's bulletin board. In addition we are establishing a depository for additional completed simulations. Finally, it should be noted that this software system, which currently consists of approximately 60,000 lines of code, and represents almost four years of programing effort, is being provided for general distribution as a public service to the neural network and computational neuroscience communities. We make no claims as to the quality or functionality of this software for any purpose whatsoever and its release does not constitute a commitment on our part to provide support of any kind. Jim Bower From terry%sdbio2 at ucsd.edu Sat Jan 6 20:42:07 1990 From: terry%sdbio2 at ucsd.edu (Terry Sejnowski) Date: Sat, 6 Jan 90 17:42:07 PST Subject: Neural Computation, Vol. 1, No. 4 Message-ID: <9001070142.AA15178@sdbio2.UCSD.EDU> Neural Computation Volume 1, Number 4 Reviews: Learning in artificial neural networks Halbert White Notes: Representation properties of networks: Kolmogorov's theorem is irrelevant Frederico Girosi and Tomaso Poggio Sigmoids distinguish more efficiently than heavisides Eduardo D. Sontag Letters: How cortical interconnectedness varies with network size Charles F. Stevens A canonical microcircuit for neocortex Rodney J. 
Douglas, Kevan A. C. Martin, and David Whitteridge Snythetic neural circuits using current-domain signal representations Andreas G. Andreou and Kwabena A. Boahen Random neural networks with negative and positive signals and product form solution Erol Gelenbe Nonlinear optimization using generalized Hopfield networks Athanasios G. Tsirukis, Gintaras V. Reklaitis, and Manoel F. Tenorio Discrete Synchronous neural algorithm for minimization Hyuk Lee Approximation of boolean functions by sigmoidal networks: Part I: XOR and other two-variable functions E. K. Blum Backpropagation applied to handwritten zip code recognition Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel A subgrouping strategy that reduces complexity and speeds up learning in recurrent networks David Zipser Unification as constraint satisfaction in structured connectionist networks Andreas Stolcke SUBSCRIPTIONS: This will be the last opportunity to receive all issues for volume 1. Subscriptions for volume 2 will not receive back issues for volume 1. Back issues of single issues will, however, be available for $25. each. Volume 1 Volume 2 ______ $35. ______ $40. Student ______ $45. ______ $50. Individual ______ $90. ______ $100. Institution Add $9. for postage outside USA and Canada surface mail or $17. for air mail. MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. (617) 253-2889. ----- From whart%cs at ucsd.edu Mon Jan 8 11:40:33 1990 From: whart%cs at ucsd.edu (Bill Hart) Date: Mon, 8 Jan 90 08:40:33 PST Subject: New Subscriber Message-ID: <9001081640.AA01571@beowulf.UCSD.EDU> Please add me to the connectionists mailing list. Thanks. From der at thorin.stanford.edu Mon Jan 8 19:01:50 1990 From: der at thorin.stanford.edu (Dave Rumelhart) Date: Mon, 8 Jan 90 16:01:50 PST Subject: Neural Computation, Vol. 1, No. 4 In-Reply-To: Terry Sejnowski's message of Sat, 6 Jan 90 17:42:07 PST <9001070142.AA15178@sdbio2.UCSD.EDU> Message-ID: <9001090001.AA16001@thorin> From SATINDER at cs.umass.EDU Tue Jan 9 12:30:00 1990 From: SATINDER at cs.umass.EDU (SATINDER@cs.umass.EDU) Date: Tue, 9 Jan 90 12:30 EST Subject: Technical Reports available Message-ID: <9001091728.AA23874@crash.cs.umass.edu> **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** Two technical reports are available. Send requests to Ms. Connie Smith, Department of Computer and Information Science, University of Massachusetts, Amherst MA 01003. Or via e-mail to: Smith at cs.umass.EDU. Do not sent requests to the sender of this message. A postscript version of Technical Report 89-118 should be available via ftp from cheops.cis.ohio-state.edu as described by Pollack in previous messages to this bboard. It is stored as "houk.control.ps.Z" in pub/neuroprose. The extension ".Z" is for the compressed form. **************************************************************** An Adaptive Sensorimotor Network Inspired by the Anatomy and Physiology of the Cerebellum James C. Houk Department of Physiology Northwestern University, Chicago, IL 60611 Satinder P. Singh, Charles Fisher, Andrew G. 
Barto Department of Computer and Information Science University of Massachusetts, Amherst, MA 01003 COINS Technical Report 89-108 November 1989 Abstract: In this report we review the anatomy and physiology of the cerebellum, stressing new knowledge about information processing in cerebellar circuits, novel biophysical properties of Purkinje neurons and cellular mechanisms for adjusting synaptic weights. We then explore the impact of these ideas on designs for adaptive sensorimotor networks. A network is proposed that is comprised of an array of adjustable pattern generators. Each pattern generator in the array produces an element of a composite motor program. Motor programs can be stored, retrieved and executed using adjustable pattern generator modules. ********************************************************************** Cooperative Control of Limb Movements by the Motor Cortex, Brainstem and Cerebellum James C. Houk Department of Physiology Northwestern University, Chicago, IL 60611 COINS Technical Report 89-118 December 1989 Abstract: The model of sensory-motor coordination proposed here involves two primary processes that are bound together by positive feedback loops. One primary process links sensory triggers to potential movements. While this process may occur at other sites, I emphasize the role of combinatorial maps in the motor cortex in this report. Combinatorial maps make it possible for many different stimuli to trigger many different motor programs, and for preferential linkages to be associatively formed. A second primary process stores motor programs and regulates their expression. The programs are believed to be stored in the cerebellar cortex, in the synaptic weights between parallel fibers and Purkinje cells. Positive feedback loops between the motor cortex and the cerebellum bind the combinatorial maps to the motor programs. The capability for self-sustained activity in these loops is the postulated driving force for generating programs, whereas inhibition from cerebellar Purkinje cells is the main mechanism that regulates their expression. Execution of a program is triggered when a sensory input succeeds in initiating regenerative loop activity. **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** From paul at NMSU.Edu Wed Jan 10 01:00:33 1990 From: paul at NMSU.Edu (paul@NMSU.Edu) Date: Tue, 9 Jan 90 23:00:33 MST Subject: No subject Message-ID: <9001100600.AA05592@NMSU.Edu> Updated CFP: PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS IN AI PRAGMATICS SUBJECT: Please post the following in your Laboratory/Department/Journal: Cut--------------------------------------------------------------------------- SUBJECT: Please post the following in your Laboratory/Department/Journal: CALL FOR PAPERS Pragmatics in Artificial Intelligence 5th Rocky Mountain Conference on Artificial Intelligence (RMCAI-90) Las Cruces, New Mexico, USA, June 28-30, 1990 PRAGMATICS PROBLEM: The problem of pragmatics in AI is one of developing theories, models, and implementations of systems that make effective use of contextual information to solve problems in changing environments. CONFERENCE GOAL: This conference will provide a forum for researchers from all subfields of AI to discuss the problem of pragmatics in AI. 
The implications that each area has for the others in tackling this problem are of particular interest. COOPERATION: American Association for Artificial Intelligence (AAAI) Association for Computing Machinery (ACM) Special Interest Group in Artificial Intelligence (SIGART) IEEE Computer Society U S WEST Advanced Technologies and the Rocky Mountain Society for Artificial Intelligence (RMSAI) SPONSORSHIP: Association for Computing Machinery (ACM) Special Interest Group in Artificial Intelligence (SIGART) U S WEST Advanced Technologies and the Rocky Mountain Society for Artificial Intelligence (RMSAI) INVITED SPEAKERS: The following researchers have agreed to present papers at the conference: *Martin Casdagli, Los Alamos National Laboratory, Los Alamos USA *Arthur Cater, University College Dublin, Ireland EC *Jerry Feldman, University of California at Berkeley, Berkeley USA & International Computer Science Institute, Berkeley USA *Barbara Grosz, Harvard University, Cambridge USA *James Martin, University of Colorado at Boulder, Boulder USA *Derek Partridge, University of Exeter, United Kingdom EC *Philip Stenton, Hewlett Packard, United Kingdom EC *Robert Wilensky, University of California at Berkeley Berkeley USA THE LAND OF ENCHANTMENT: Las Cruces, lies in THE LAND OF ENCHANTMENT (New Mexico), USA and is situated in the Rio Grande Corridor with the scenic Organ Mountains overlooking the city. The city is close to Mexico, Carlsbad Caverns, and White Sands National Monument. There are a number of Indian Reservations and Pueblos in the Land Of Enchantment and the cultural and scenic cities of Taos and Santa Fe lie to the north. New Mexico has an interesting mixture of Indian, Mexican and Spanish culture. There is quite a variation of Mexican and New Mexican food to be found here too. GENERAL INFORMATION: The Rocky Mountain Conference on Artificial Intelligence is a major regional forum in the USA for scientific exchange and presentation of AI research. The conference emphasizes discussion and informal interaction as well as presentations. The conference encourages the presentation of completed research, ongoing research, and preliminary investigations. Researchers from both within and outside the region are invited to participate. Some travel awards will be available for qualified applicants. FORMAT FOR PAPERS: Submitted papers should be double spaced and no more than 5 pages long. E-mail versions will not be accepted. Papers will be published in the proceedings and there is the possibility of a published book. Send 3 copies of your paper to: Paul Mc Kevitt, Program Chairperson, RMCAI-90, Computing Research Laboratory (CRL), Dept. 3CRL, Box 30001, New Mexico State University, Las Cruces, NM 88003-0001, USA. DEADLINES: Paper submission: April 1st, 1990 Pre-registration: April 1st, 1990 Notice of acceptance: May 1st, 1990 Final papers due: June 1st, 1990 LOCAL ARRANGEMENTS: Local Arrangements Chairperson, RMCAI-90. (same postal address as above). INQUIRIES: Inquiries regarding conference brochure and registration form should be addressed to the Local Arrangements Chairperson. Inquiries regarding the conference program should be addressed to the Program Chairperson. Local Arrangements Chairperson: E-mail: INTERNET: rmcai at nmsu.edu Phone: (+ 1 505)-646-5466 Fax: (+ 1 505)-646-6218. Program Chairperson: E-mail: INTERNET: paul at nmsu.edu Phone: (+ 1 505)-646-5109 Fax: (+ 1 505)-646-6218. 
TOPICS OF INTEREST: You are invited to submit a research paper addressing Pragmatics in AI, with any of the following orientations: Philosophy, Foundations and Methodology Knowledge Representation Neural Networks and Connectionism Genetic Algorithms, Emergent Computation, Nonlinear Systems Natural Language and Speech Understanding Problem Solving, Planning, Reasoning Machine Learning Vision and Robotics Applications PROGRAM COMMITTEE: *John Barnden, New Mexico State University (Connectionism, Beliefs, Metaphor processing) *Hans Brunner, U S WEST Advanced Technologies (Natural language interfaces, Dialogue interfaces) *Martin Casdagli, Los Alamos National Laboratory (Dynamical systems, Artificial neural networks, Applications) *Mike Coombs, New Mexico State University (Problem solving, Adaptive systems, Planning) *Thomas Eskridge, Lockheed Missile and Space Co. (Analogy, Problem solving) *Chris Fields, New Mexico State University (Neural networks, Nonlinear systems, Applications) *Roger Hartley, New Mexico State University (Knowledge Representation, Planning, Problem Solving) *Victor Johnson, New Mexico State University (Genetic Algorithms) *Paul Mc Kevitt, New Mexico State University (Natural language interfaces, Dialogue modeling) *Joe Pfeiffer, New Mexico State University (Computer Vision, Parallel architectures) *Keith Phillips, University of Colorado at Colorado Springs (Computer vision, Mathematical modelling) *Yorick Wilks, New Mexico State University (Natural language processing, Knowledge representation) *Scott Wolff, U S WEST Advanced Technologies (Intelligent tutoring, User interface design, Cognitive modeling) REGISTRATION: Pre-Registration: Professionals: $50.00; Students $30.00 (Pre-Registration cutoff date is April 1st 1990) Registration: Professionals: $70.00; Students $50.00 (Copied proof of student status is required). Registration form (IN BLOCK CAPITALS). Enclose payment made out to New Mexico State University. (ONLY checks in US dollars will be accepted). Send to the following address (MARKED REGISTRATION): Local Arrangements Chairperson, RMCAI-90 Computing Research Laboratory Dept. 3CRL, Box 30001, NMSU Las Cruces, NM 88003-0001, USA. 
Name:_______________________________ E-mail_____________________________ Phone__________________________ Affiliation: ____________________________________________________ Fax: ____________________________________________________ Address: ____________________________________________________ ____________________________________________________ ____________________________________________________ COUNTRY__________________________________________ Organizing Committee RMCAI-90: Paul Mc Kevitt Yorick Wilks Research Scientist Director CRL CRL cut------------------------------------------------------------------------ From dambrosi at turing Wed Jan 10 14:44:22 1990 From: dambrosi at turing (dambrosi@turing) Date: Wed, 10 Jan 90 11:44:22 PST Subject: Call For Papers - Uncertainty in AI Message-ID: <9001101944.AA04813@turing.CS.ORST.EDU> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CALL FOR PAPERS: SIXTH CONFERENCE ON UNCERTAINTY IN AI Cambridge, Mass., July 27th-29th, 1990 (preceding the AAAI-90 Conference) DEADLINE: MARCH 12, 1990 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The sixth annual Conference on Uncertainty in AI is concerned with the full gamut of approaches to automated and interactive reasoning and decision making under uncertainty, including both quantitative and qualitative methods. We invite original contributions on fundamental theoretical issues, on the development of software tools embedding approximate reasoning theories, and on the validation of such theories and technologies on challenging applications. Topics of particular interest include: - - Semantics of qualitative and quantitative uncertainty representations - - The role of uncertainty in deductive, abductive, defeasible, or analogical (case-based) reasoning - - Control of reasoning; planning under uncertainty - - Comparison and integration of qualitative and quantitative schemes - - Knowledge engineering tools and techniques for building approximate reasoning systems - - User Interface: explanation and summarization of uncertain infromation - - Applications of approximate reasoning techniques Papers will be carefully refereed. All accepted papers will be included in the proceedings, which will be available at the conference. Papers may be accepted for presentation in plenary sessions or poster sessions. Four copies of each paper should be sent to the Program Chair by March 12, 1990. Acceptance will be sent by May 4, 1990. Final camera-ready papers, incorporating reviewers' comments, will be due by May 31, 1990. There will be an eight page limit on the camera-ready copy (with a few extra pages available for a nominal fee.) ---------------------------------------------------------------------------- Program Chair: General Chair: Piero P. Bonissone, UAI-90 Max Henrion, General Electric Rockwell Science Center, Corporate Research and Development, Palo Alto Facility, 1 River Rd., Bldg. 
K1-5C32A, 444 High Street, Schenectady, NY 12301 Palo Alto, Ca 94301 (518) 387-5155 (415) 325-1892 Bonissone at crd.ge.com Henrion at sumex-aim.stanford.edu Program Committee: Peter Cheeseman, Paul Cohen, Laveen Kanal, Henry Kyburg, John Lemmer, Tod Levitt, Ramesh Patil, Judea Pearl, Enrique Ruspini, Glenn Shafer, Lofti Zadeh ---------------------------------------------------------------------------- ------- End of Forwarded Message From mike at bucasb.bu.edu Fri Jan 12 02:06:41 1990 From: mike at bucasb.bu.edu (Michael Cohen) Date: Fri, 12 Jan 90 02:06:41 EST Subject: Call for Papers Wang Conference Message-ID: <9001120706.AA28881@bucasb.bu.edu> CALL FOR PAPERS NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION MAY 11--13, 1990 Sponsored by the Center for Adaptive Systems, the Graduate Program in Cognitive and Neural Systems, and the Wang Institute of Boston University with partial support from The Air Force Office of Scientific Research This research conference at the cutting edge of neural network science and technology will bring together leading experts in academe, government, and industry to present their latest results on automatic target recognition in invited lectures and contributed posters. Invited lecturers include: JOE BROWN, Martin Marietta, "Multi-Sensor ATR using Neural Nets" GAIL CARPENTER, Boston University, "Target Recognition by Adaptive Resonance: ART for ATR" NABIL FARHAT, University of Pennsylvania, "Bifurcating Networks for Target Recognition" STEPHEN GROSSBERG, Boston University, "Recent Results on Self-Organizing ATR Networks" ROBERT HECHT-NIELSEN, HNC, "Spatiotemporal Attention Focusing by Expectation Feedback" KEN JOHNSON, Hughes Aircraft, "The Application of Neural Networks to the Acquisition and Tracking of Maneuvering Tactical Targets in High Clutter IR Imagery" PAUL KOLODZY, MIT Lincoln Laboratory, "A Multi-Dimensional ATR System" MICHAEL KUPERSTEIN, Neurogen, "Adaptive Sensory-Motor Coordination using the INFANT Controller" YANN LECUN, AT&T Bell Labs, "Structured Back Propagation Networks for Handwriting Recognition" CHRISTOPHER SCOFIELD, Nestor, "Neural Network Automatic Target Recognition by Active and Passive Sonar Signals" STEVEN SIMMES, Science Applications International Co., "Massively Parallel Approaches to Automatic Target Recognition" ALEX WAIBEL, Carnegie Mellon University, "Patterns, Sequences and Variability: Advances in Connectionist Speech Recognition" ALLEN WAXMAN, MIT Lincoln Laboratory, "Invariant Learning and Recognition of 3D Objects from Temporal View Sequences" FRED WEINGARD, Booz-Allen and Hamilton, "Current Status and Results of Two Major Government Programs in Neural Network-Based ATR" BARBARA YOON, DARPA, "DARPA Artificial Neural Networks Technology Program: Automatic Target Recognition" CALL FOR PAPERS---ATR POSTER SESSION: A featured poster session on ATR neural network research will be held on May 12, 1990. Attendees who wish to present a poster should submit 3 copies of an extended abstract (1 single-spaced page), postmarked by March 1, 1990, for refereeing. Include with the abstract the name, address, and telephone number of the corresponding author. Mail to: ATR Poster Session, Neural Networks Conference, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. Authors will be informed of abstract acceptance by March 31, 1990. SITE: The Wang Institute possesses excellent conference facilities on a beautiful 220-acre campus. It is easily reached from Boston's Logan Airport and Route 128. 
REGISTRATION FEE: Regular attendee--$90; full-time student--$70. Registration fee includes admission to all lectures and poster session, abstract book, one reception, two continental breakfasts, one lunch, one dinner, daily morning and afternoon coffee service. STUDENTS FELLOWSHIPS are available. For information, call (508) 649-9731. TO REGISTER: By phone, call (508) 649-9731; by mail, write for further information to: Neural Networks, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Fri Jan 12 04:46:15 1990 From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (thanasis kehagias) Date: Fri, 12 Jan 90 04:46:15 EST Subject: i hope i am asking this at the right place... Message-ID: a friend of mine wants to set up a serious/fairly big speech+neural nets project $ (say 100-200 K) he asked me about equipment to buy. anyone wants to put in their two cents? i could see a big bucks proposal based around sun workstations and a smaller one based around, say, 386 or 486 Dos machines. i seem to recall the SUN sparc station has a bunch of built-in DSP functions. i assume a waveform editor (software) mike, earphones and amplifier, some kind of A/D - D/A converter and maybe some central file server with a lot of hard disk space to store a speech database. i would not bother all of you with such a marginal message, but the person in question has to meet a deadline for a proposal and he was quite pressed for time. we do not want him to miss the chance to spend all this good money , do we ? if anyone has some names and/or ballpark estimates for prices for the above items, and maybe some other stuff i forgot, please mail me .... thanasis From mike at bucasb.bu.edu Fri Jan 12 02:07:49 1990 From: mike at bucasb.bu.edu (Michael Cohen) Date: Fri, 12 Jan 90 02:07:49 EST Subject: Neural Net Course Message-ID: <9001120707.AA28974@bucasb.bu.edu> NEURAL NETWORKS: FROM FOUNDATIONS TO APPLICATIONS May 6--11, 1989 Sponsored by the Center for Adaptive Systems, the Graduate Program in Cognitive and Neural Systems, and the Wang Institute of Boston University with partial support from The Air Force Office of Scientific Research This in-depth, systematic, 5-day course is based upon the world's leading graduate curriculum in the technology, computation, mathematics, and biology of neural networks. Developed at the Center for Adaptive Systems (CAS) and the Graduate Program in Cognitive and Neural Systems (CNS) of Boston University, twenty-eight hours of the course will be taught by six CAS/CNS faculty. Three distinguished guest lecturers will present eight hours of the course. 
COURSE OUTLINE MAY 7, 1990 ----------- ---Morning Session (Professor Stephen Grossberg) Historical Overview, Content Addressable Memory, Competitive Decision Making, Associative Learning ---Afternoon Session (Professors Michael Jordan (MIT) and Ennio Mingolla) Combinational Optimization, Perceptrons, Introduction to Back Propagation, Recent Developments of Back Propagation MAY 8, 1990 ----------- ---Morning Session (Professors Gail Carpenter and Stephen Grossberg) Adaptive Pattern Recognition, Introduction to Adaptive Resonance Theory, Analysis of ART 1 ---Afternoon Session (Professor Gail Carpenter) Analysis of ART 2, Analysis of ART 3, Self-Organization of Invariant Pattern Recognition Codes, Neocognitron MAY 9, 1990 ----------- ---Morning Session (Professors Stephen Grossberg and Ennio Mingolla) Vision and Image Processing ---Afternoon Session (Professors Daniel Bullock, Michael Cohen, and Stephen Grossberg) Adaptive Sensory-Motor Control and Robotics, Speech Perception and Production MAY 10, 1990 ------------ ---Morning Session (Professors Michael Cohen, Stephen Grossberg, and John Merrill) Speech Perception and Production, Reinforcement Learning and Prediction ---Afternoon Session (Professors Stephen Grossberg and John Merrill and Dr. Robert Hecht-Nielsen, HNC) Reinforcement Learning and Prediction, Recent Developments in the Neurocomputer Industry MAY 11, 1990 ------------ ---Morning Session (Dr. Federico Faggin, Synaptics Inc.) VLSI Implementation of Neural Networks TO REGISTER: By phone, call (508) 649-9731; by mail, write for further information to: Neural Networks, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. For further information about registration and STUDENT FELLOWSHIPS, see below. REGISTRATION FEE: Regular attendee--$950; full-time student--$250. Registration fee includes five days of tutorials, course notebooks, one reception, five continental breakfasts, five lunches, four dinners, daily morning and afternoon coffee service, evening discussion sessions. STUDENT FELLOWSHIPS supporting travel, registration, and lodging for the Course are available to full-time graduate students in a PhD program. Applications must be postmarked by March 1, 1990. Send curriculum vitae, a one-page essay describing your interest in neural networks, and a letter from a faculty advisor to: Student Fellowships, Neural Networks Course, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. From jbower at smaug.cns.caltech.edu Fri Jan 12 12:24:45 1990 From: jbower at smaug.cns.caltech.edu (Jim Bower) Date: Fri, 12 Jan 90 09:24:45 PST Subject: Flame Message-ID: <9001121724.AA15622@smaug.cns.caltech.edu> I must say that I for one am getting tired of being inundated with literature from the center of "the worlds greatest graduate program in the technology, computation, and biology of neural networks". Dispite the "renowned" nature of the faculty, and the "extraordinary" apparent range of their expertise, enough is enough. If the problem is a limited number of applicants, maybe someone should lower the price. Certainly, all "industry leading neural architects" should be able to understand that. Or, dare I suggest it, maybe the "worlds leading graduate curriculum" should be somewhat modified so that is seems more likely that the subject matter can actually be covered in a "self-contained systematic" fashion. 
Presumably, faculty on "the cutting edge" "who know the field as only its creators can" would be up to this task without compromising the week of its "rare intellectual excitment". Jim Bower From AMR at ibm.com Fri Jan 12 15:04:58 1990 From: AMR at ibm.com (AMR@ibm.com) Date: Fri, 12 Jan 90 15:04:58 EST Subject: No subject Message-ID: This is Alexis Manaster-Ramer at T.J. Watson. At Stevan Harnad's suggestion, I direct the following question to you: is there a formal result about the equivalence (or, God forbid, nonequivalence) of connectionist models and Turing machines? If so, I would appreciate a reference or a clue towards one. Alexis From cbond at amber.bacs.indiana.edu Fri Jan 12 16:05:00 1990 From: cbond at amber.bacs.indiana.edu (cbond@amber.bacs.indiana.edu) Date: 12 Jan 90 16:05:00 EST Subject: Neural Nets and Targeting Message-ID: Remove me immediately from this mailing list. There is quite enough slime around here to put up with without having to be exposed to noxious, militaristic garbage in my mail queue, or the filth who would participate in such trash. If I ever receive anything from this mailing list again, I will be in touch with root at cs.cmu.edu. From Dave.Touretzky at B.GP.CS.CMU.EDU Fri Jan 12 21:16:05 1990 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Fri, 12 Jan 90 21:16:05 EST Subject: Flame In-Reply-To: Your message of Fri, 12 Jan 90 09:24:45 -0800. <9001121724.AA15622@smaug.cns.caltech.edu> Message-ID: <5050.632196965@DST.BOLTZ.CS.CMU.EDU> Jim, have you heard George Lakoff's story about the origin of the Japanese phrase "bellybutton makes tea"? In the Japanese tea ceremony, the water must be heated to a precise temperature just short of boiling. If done correctly, instead of seeing the bubbles and turbulence associated with true boiling, the surface of the water merely rises and falls quickly while remaining smooth. In Japanese society it is impolite to laugh openly at someone, even when they're making a fool of themself. Instead one laughs inwardly while maintaining a calm exterior. Since the Japanese see the belly as the metaphorical center of the self, when one is laughing inside while trying to keep a straight face it is the belly that shows one's true feelings. They say the belly gently shakes with suppressed laughter. Hence, "bellybutton makes tea." I just wanted to share this little story with you in case you were wondering why no one else bothers to respond to certain posts. -- Dave From hinton at ai.toronto.edu Sat Jan 13 12:58:35 1990 From: hinton at ai.toronto.edu (Geoffrey Hinton) Date: Sat, 13 Jan 90 12:58:35 EST Subject: email total war Message-ID: <90Jan13.125854est.10519@ephemeral.ai.toronto.edu> I think it would be a pity if the connectionists email facility that Dave Touretzky and CMU have generously provided was cluttered up by a war between the BU camp and its rivals. The neural network community has already suffered a lot from this split, and it would be nice if we could de-escalate it. Maybe some people could refrain from claiming publicly and at great length to be the best group in the world, and, as Dave suggests, others could resist the temptation to publicly object. I have been known to indulge in public polemics myself, but I now think its a mistake. Geoff From rik%cs at ucsd.edu Sat Jan 13 19:01:52 1990 From: rik%cs at ucsd.edu (Rik Belew) Date: Sat, 13 Jan 90 16:01:52 PST Subject: Flame Message-ID: <9001140001.AA02919@roland.UCSD.EDU> Dave, That was absolutely perfect invocation of another culture. 
I think it applies equally well to some reactions other than laughter (like my reactions to the target recognition conference, for example). That traditional Japanese culture be useful in new-wave Email culture is striking, too. Connectionists has grown into a pretty big, valuable mailing list, and often this means that it is time for a moderator to step in and ... moderate. But I for one find real value in the sometimes-brawling nature of this group and think something would be lost if we took this step, and I would prefer to hit my own delete key rather than having someone else try to do it for me. Rik Belew (rik at cs.ucsd.edu) From Dave.Touretzky at B.GP.CS.CMU.EDU Sun Jan 14 01:39:03 1990 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Sun, 14 Jan 90 01:39:03 EST Subject: instructions to NIPS authors Message-ID: <6163.632299143@DST.BOLTZ.CS.CMU.EDU> I apologize for sending this message to NIPS authors to the entire CONNECTIONISTS list, but it's the most efficient way to reach people quickly. The deadline for NIPS papers to arrive at Morgan Kaufmann is this Wednesday, January 17. There have been some requests for clarifications of the paper format, so here they are. First, although the instructions claim that the paper title is boldface, the LaTeX code for \maketitle actually produces an italic title. Go with the code, not the documentation. Use the italic title. Second, note that the LaTeX \author command doesn't work in the NIPS style file. That's why the instructions say to format the author list manually. This is necessary because the format will differ depending on the number of authors. * For a single-author paper, just center the name and address. Simple. * For a two-author paper where the authors are from different institutions, I suggest the following formatting code: \begin{center} \parbox{2in}{\begin{center} {\bf David S. Touretzky}\\ School of Computer Science\\ Carnegie Mellon University\\ Pittsburgh, PA 15213 \end{center}} \hfill \parbox{2in}{\begin{center} {\bf Deirdre W. Wheeler}\\ Department of Linguistics\\ University of Pittsburgh\\ Pittsburgh, PA 15260 \end{center}} \end{center} If the authors are from the same institution, do it this way: \begin{center} {\bf Ajay N. Jain \ \ \ Alex H. Waibel}\\ School of Computer Science\\ Carnegie Mellon University\\ Pittsburgh, PA 15213 \end{center} * For a three-author paper, the instructions advise centering the name and address of the first author, making the second author flush-left, and the third author flush-right. Some people have objected that putting the first author in the middle makes it look like he/she is really the second author. If you wish to deviate from the suggested format for three-author papers, go ahead. Do whatever you think is reasonable. * If you have more than three authors, or some other special problem not covered by the above, use your own best judgement about format. Remember that your paper should be at most 8 pages long, and you need to turn in an indexing form and "Permission to Publish" form along with your camera-ready copy. The proceedings are due out in April. They can be ordered now from Morgan Kaufmann Publishers, Inc., San Mateo, CA. An example of a correct citation to this proceedings is: Barto, A. G., and Sutton, R. S. (1990) Sequential decision problems and neural networks. In D. S. Touretzky (ed.), {\it Advances in Neural Information Processing Systems 2.} San Mateo, CA: Morgan Kaufmann. 
-- Dave

From zeiden at cs.wisc.edu Sun Jan 14 10:59:28 1990 From: zeiden at cs.wisc.edu (Matthew Zeidenberg) Date: Sun, 14 Jan 90 09:59:28 -0600 Subject: email total war Message-ID: <9001141559.AA18442@ai.cs.wisc.edu>

I do not agree that people should not publicly object to militarist uses of science. As most people know, Japan is a relatively conformist society. Open discourse is a strength of the West compared to Japan. Many netters (myself included) object to such uses as "Neural Networks in Automatic Target Recognition" and the general prevalence of DOD and ONR funding in CS as opposed to NSF. It leads to a militarist focus. Where's science's "Peace Dividend"? Matt Zeidenberg

From noel%CS.EXETER.AC.UK at VMA.CC.CMU.EDU Sun Jan 14 12:27:23 1990 From: noel%CS.EXETER.AC.UK at VMA.CC.CMU.EDU (Noel Sharkey) Date: Sun, 14 Jan 90 17:27:23 GMT Subject: No subject In-Reply-To: AMR@com.ibm.almaden's message of Fri, 12 Jan 90 15:04:58 EST <10496.9001130347@expya.cs.exeter.ac.uk> Message-ID: <10907.9001141727@entropy.cs.exeter.ac.uk>

I keep hearing that there is a formal result about the equivalence of connect. models and Turing machines, but I have never seen it. I always hear from someone who knows someone else who was told by someone who knew someone else whose friend thought that one of their workmates had once worked with someone who knew of a formal proof by someone whose name they couldn't quite remember, but they were at one of the big league universities. Please can I see it as well if there is one? Though I am sure I would have seen it by now. noel

From jbower at smaug.cns.caltech.edu Sun Jan 14 16:03:55 1990 From: jbower at smaug.cns.caltech.edu (Jim Bower) Date: Sun, 14 Jan 90 13:03:55 PST Subject: Flame Message-ID: <9001142103.AA17201@smaug.cns.caltech.edu>

Maybe it wasn't clear. I, like many others, judging from my recent personal email, tolerated in silence the first several postings of these announcements on the connectionists mailing list. Further, I also have no interest in cluttering connectionists with nontechnical garbage, and believe that a free and open discourse should be maintained. While the form of my objection may not have been entirely appropriate, the intent was precisely to induce some self-regulation in those who, in my view, were taking advantage of both connectionists and even the larger field. Without self-regulation, free forums don't work. Finally, I must object to the suggestion that my reaction has to do with factionalism. Any event sponsored by any institution whose advertising is insulting to other workers in the field should not be tolerated, no matter what one thinks of one's belly button. That said, I suggest we all go back to work. Jim Bower (ommmmmmmmmmmm...)

From honavar at cs.wisc.edu Sun Jan 14 16:38:58 1990 From: honavar at cs.wisc.edu (Vasant Honavar) Date: Sun, 14 Jan 90 15:38:58 -0600 Subject: Turing Equivalence of Connectionist Nets Message-ID: <9001142138.AA26012@goat.cs.wisc.edu>

On the Turing-equivalence of connectionist networks: Warren S. McCulloch & Walter H. Pitts, 1943. "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics, Vol. 5, pp 115-133. Chicago: University of Chicago Press. (Reprinted in "Embodiments of Mind", 1988. Cambridge, MA: MIT Press). There is also a paper by Kleene in a collected volume titled "Automata Studies" published by the Princeton University Press (sorry, I don't have the exact citation) in the '50s or '60s which addresses similar issues.
The basic argument is: Consider a net which includes (but is not limited to) a set of nodes (possibly infinite in number) that serve as input and/or output nodes (a role akin to that of the inifinite tape in the Turing machine). Any such net can compute only those numbers that can be computed by the Turing machine. Furthermore, each of the numbers computable by the Turing machine can be computed by such a net. However, connectionist networks that we build are finite machines just as the general purpose computers modeled on Turing machines are finite machines and are therefore equivalent to Turing machines with finite tapes. Vasant Honavar honavar at cs.wisc.edu From kube%cs at ucsd.edu Sun Jan 14 21:16:49 1990 From: kube%cs at ucsd.edu (Paul Kube) Date: Sun, 14 Jan 90 18:16:49 PST Subject: on the Turing computability of networks In-Reply-To: Noel Sharkey's message of Sun, 14 Jan 90 17:27:23 GMT <10907.9001141727@entropy.cs.exeter.ac.uk> Message-ID: <9001150216.AA17819@kokoro.UCSD.EDU> From: Noel Sharkey Date: Sun, 14 Jan 90 17:27:23 GMT I keep hearing that there is a formal result about the equivalence of connect. models and Turing machines, but I have never seen it. If you don't restrict the class of connectionist models and consider networks whose state transitions are solutions to algebraic differential equations (this class is universal for analog computation in some sense; see references below), then it's pretty obvious that a connectionist network can simulate a universal digital machine. For example, the Sun on my desk is implemented as an analog network. Going the other way is a little trickier, since you have to make decisions about how to represent real numbers on Turing machine tapes etc., but there are reasonable ways to do this. Here the main results are Pour-El's that there are ADE's defined in terms of computable functions which have UNcomputable solutions from computable initial conditions (the shockwave solution to the wave equation has this property, in fact), and Vergis, Steiglitz and Dickinson's that if the second derivative of the solution is bounded, then the system is efficiently Turing simulatable. So if you bound second derivatives of network state (which amounts to bounding things like acceleration, force, energy, bandwidth, etc. in physical systems), then networks and Turing machines are equivalent. Beyond this, one would like a hierarchy of connectionist models ordered by computational power along the lines of what you get in classical automata theory, but I don't know of anything done on this since _Perceptrons_ (so far as I'm aware, none of the recent universal-approximator results separate any classes). --Paul Kube kube at cs.ucsd.edu ----------------------------------------------------------------------------- References: %A Anastasios Vergis %A Kenneth Steiglitz %A Bradley Dickinson %T The complexity of analog computation %J Mathematics and Computers in Simulation %V 28 %D 1986 %P 91-113 %A Marian Boykan Pour-El %A Ian Richards %T Noncomputability in models of physical phenomena %J International Journal of Theoretical Physics %V 21 %D 1982 %P 553-555 %X lots more interesting stuff in this number of the journal %A Marian Boykan Pour-El %A Ian Richards %T A computable ordinary differential equation which possesses no computable solution %J Ann. Math. Logic %V 17 %D 1979 %P 61-90 %A Lee A. 
Rubel %T A universal differential equation %J Bulletin of the American Mathematical Society %V 4 %D 1981 %P 345-349 %A Marian Boykan Pour-El %A Ian Richards %T The wave equation with computable initial data such that its unique solution is not computable %J Advances in Mathematics %V 39 %D 1981 %P 215-239 %A A. Grzegorczyk %T On the definitions of computable real continuous functions %J Fundamenta Mathematicae %V 44 %D 1957 %P 61-71 %A A. M. Turing %T On computable numbers, with an application to the Entscheidungsproblem %J Proceedings of the London Mathematics Society %V 42 %N 2 %D 1936-7 %P 230-265 From Dave.Touretzky at B.GP.CS.CMU.EDU Sun Jan 14 21:30:52 1990 From: Dave.Touretzky at B.GP.CS.CMU.EDU (Dave.Touretzky@B.GP.CS.CMU.EDU) Date: Sun, 14 Jan 90 21:30:52 EST Subject: No subject In-Reply-To: Your message of Sun, 14 Jan 90 17:27:23 +0000. <10907.9001141727@entropy.cs.exeter.ac.uk> Message-ID: <6961.632370652@DST.BOLTZ.CS.CMU.EDU> "Connectionist models" is too vague a term to use when discussing Turing equivalence. Let's agree that AND, OR, and NOT gates are sufficiently close to simulated "neurons" for our purposes. Then a connectionist net composed of thse "neurons" can simulate a Vax. If you believe a Vax is Turing-equivalent, then connectionist nets are too. Of course Vaxen don't really have infinite-length tapes. That's okay; connectionist nets don't have an infinite number of neurons either. Jordan Pollack has shown that a connectionist net with a finite number of units can simulate a Turing machine if one assumes the neurons can perform infinite-precision arithmetic. The trick is to use the neuron activation levels to simulate a two-counter machine, which is formally equivalent to a Turing machine. (See any basic automata theory textbook for the proof.) Now, if you don't like AND gates as neurons, and you don't like units that represent infinite precision numbers as neurons, then you have to define what sorts of neurons you do like. My guess is that any kind of neuron you define can be used to build boolean AND/OR/NOT gates, and from there one proceeds via the Vax argument to Turing equivalence. A different kind of question is whether a connectionist net with a particular architecture can simulate a Turing machine in a particular way. For instance, can DCPS, Touretzky and Hinton's Distributed Connectionist Production System, simulate a Turing machine by representing the cells of the tape as working memory elements? The answer to this question is left as an exercise for the reader. -- Dave From jose at neuron.siemens.com Mon Jan 15 10:55:14 1990 From: jose at neuron.siemens.com (Steve Hanson) Date: Mon, 15 Jan 90 10:55:14 EST Subject: utms and connex Message-ID: <9001151555.AA06772@neuron.siemens.com.siemens.com> Another sort of interesting question concerning turing equivalence is learning. The sort of McCulloch and Pitts question and others that are usually posed concerns representation, can a net "with cycles" (cf. M&P, 1943) represent a turing machine? Another question along these lines is can a net learn to represent a turing machine (see williams and zipser for example) and under what conditions etc... anybody thought about that? Are there standard references from the learnability/complexity literature or more recent ones? 
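As a concrete illustration of the counter-in-an-activation trick Dave describes above, here is a minimal sketch in C (not Pollack's actual construction): a counter value n is stored as the activation 2^-n of a single unit, so increment and decrement are single multiplicative steps and the zero test is just a comparison against 1.0. With exact rational arithmetic, two such counters plus a finite control suffice for Turing power; with ordinary doubles the activation underflows after about a thousand increments, which is precisely the idealization under debate.

#include <stdio.h>

typedef double unit;   /* stand-in for an exact rational activation */

static unit inc(unit a)     { return a * 0.5; }                  /* n -> n+1      */
static unit dec(unit a)     { return a < 1.0 ? a * 2.0 : 1.0; }  /* n -> max(n-1,0) */
static int  is_zero(unit a) { return a == 1.0; }                 /* n == 0 ?      */

int main(void)
{
    unit c = 1.0;   /* counter holds 0 */
    int i, n = 0;

    for (i = 0; i < 5; i++)
        c = inc(c);                  /* counter now holds 5 */
    while (!is_zero(c)) {
        c = dec(c);                  /* count back down, tallying the steps */
        n++;
    }
    printf("counter held %d\n", n);  /* prints: counter held 5 */
    return 0;
}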
Steve From pollack at cis.ohio-state.edu Mon Jan 15 13:27:55 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Mon, 15 Jan 90 13:27:55 EST Subject: Neuring Machine In-Reply-To: Dave.Touretzky@B.GP.CS.CMU.EDU's message of Sun, 14 Jan 90 21:30:52 EST <6961.632370652@DST.BOLTZ.CS.CMU.EDU> Message-ID: <9001151827.AA00263@toto.cis.ohio-state.edu> McCulloch and Pitts showed that their network could act as the finite state control of a Turing Machine, acting upon an EXTERNAL tape. How could you get the unbounded tape INSIDE a machine? For integer machines, assume unbounded registers. For cellular automata, assume an unbounded lattice. For connectionist models, assume that two units could encode the tape as binary rationals. As Dave alluded to, in my May 1987 thesis*, I showed that a Turing Machine could be constructed out of a network with rational weights and outputs, linear combinations, thresholds, and multiplicative connections. This was a demonstration of the sufficiency of these elements in combination, not a formal proof of the necessity of any single element. (But, without multiplicative connections, it could take an unbounded amount of time to gate rational outputs, which is necessary to move both ways on the tape in response to threshold logic; without thresholds, its tough to make a decision; and without pure linear combinations, its hard not to lose information...) This is just like the argument that with a couple of registers which can hold unbounded integers, and the operators Increment, Decrement, and Branch-on-zero, a stored program machine can universally compute. I think that other folk, such as Szu and Abu-Mostafa have also worked on the theoretical two-neuron tape. Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 * CS Dept, Univ. of Illinois. Available for $6 as MCCS-87-100 from Librarian/Computing Research Lab/NMSU/Las Cruces, NM 88003. From rr%eusip.edinburgh.ac.uk at NSFnet-Relay.AC.UK Mon Jan 15 11:18:30 1990 From: rr%eusip.edinburgh.ac.uk at NSFnet-Relay.AC.UK (Richard Rohwer) Date: Mon, 15 Jan 90 16:18:30 GMT Subject: NNs & TMs Message-ID: <6693.9001151618@eusip.ed.ac.uk> One way to emulate a Turing machine with a neural net is to hand-build little subnets for NAND gates and FLIP FLOPS, and then wire them up to do the job. This observation led me to contemplate a project no student seems to want to take up (and I don't want to do it anytime soon)-- so I'll put it up for grabs in case anyone would be keen on it. It should be tedious but straightforward to write a compiler which turns C-code (or your favorite language) into weight matrices. Why bother? Well, training algorithms are still in a pretty primitive state when it comes to training nets to do complex temporal tasks; eg., parsing. A weight matrix compiler would at least provide an automatic way to initialize weight matrices to do anything a conventional program can do, albeit without using distributed representations in an interesting or efficient way. But perhaps something like minimizing an entropy measure from such a starting point could lead to something interesting. 
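To make Richard's hand-built subnets concrete, here is a small sketch (the particular weights are one convenient choice, not taken from his posting): a single linear-threshold unit with weights -1, -1 and bias +1.5 computes NAND, and since NAND is universal, a C-to-weight-matrix compiler could in principle emit nothing but such units, with two of them cross-coupled for the flip flop he mentions.

#include <stdio.h>

/* One linear-threshold unit: fires iff its net input is positive. */
static int threshold_unit(double w1, double w2, double bias, int x1, int x2)
{
    return (w1 * x1 + w2 * x2 + bias) > 0.0;
}

/* With weights -1, -1 and bias +1.5 the unit is a NAND gate. */
static int nand_unit(int x1, int x2)
{
    return threshold_unit(-1.0, -1.0, 1.5, x1, x2);
}

int main(void)
{
    int a, b;
    for (a = 0; a <= 1; a++)
        for (b = 0; b <= 1; b++)
            printf("NAND(%d,%d) = %d\n", a, b, nand_unit(a, b));
    return 0;
}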
Richard Rohwer JANET: rr at uk.ac.ed.eusip Centre for Speech Technology Research ARPA: rr%ed.eusip at nsfnet-relay.ac.uk Edinburgh University BITNET: rr at eusip.ed.ac.uk, 80, South Bridge rr%eusip.ed.UKACRL Edinburgh EH1 1HN, Scotland UUCP: ...!{seismo,decvax,ihnp4} !mcvax!ukc!eusip!rr From hendler at cs.UMD.EDU Mon Jan 15 11:07:21 1990 From: hendler at cs.UMD.EDU (Jim Hendler) Date: Mon, 15 Jan 90 11:07:21 -0500 Subject: No subject Message-ID: <9001151607.AA29919@dormouse.cs.UMD.EDU> I'm a little confused by Dave's argument, and some of the others I've seen. There is a difference (and a formal one) between performing computation and being Turing equivalent. It is easily done (and has been in the past) to build a small general purpose computer out of tinkertoys (a tinkertoy computer was on display in the computer museum in Boston for a while -- pretty limited, but a computer none the less). Does this mean that tinkertoys are Turing complete? My Sun, built out of wires and etc. is also not equivalent to a Turing machine, it has limited state, etc. There are very definitely Turing computable functions that my Sun can't compute. To say that a Turing machine could be built from connectionist components is not to argue that they are formally equivalent. The only demonstration of equivalence I've seen is the result Dave cites from Jordan Pollack (see his thesis) in which he shows a reduction of a Turing machine to a particular connectionist network (given infinite precision integers). This is not a "simulation" of a TM, but rather a full-fledged reduction. So Dave is only "almost" right (I suspect just using language loosely), the question is whether connectionist networks with particular architectures are Turing equivalents -- in the formal sense (not whether they "simulate" a TM in the weaker sense that tinkertoys do). Anyone know of any proofs of this class? (Note, it is important to recognize that any such thing must be able to represent the arbitrarily large tape of the Turing machine) -Jim Hendler
From rudnick at cse.ogi.edu Mon Jan 15 20:03:14 1990 From: rudnick at cse.ogi.edu (Mike Rudnick) Date: Mon, 15 Jan 90 17:03:14 PST Subject: tech report/bib: GA/ANN Message-ID: <9001160103.AA07080@cse.ogi.edu> The following tech report/bibliography is available: A Bibliography of the Intersection of Genetic Search and Artificial Neural Networks Mike Rudnick Department of Computer Science and Engineering Oregon Graduate Institute Technical Report No. CS/E 90-001 January 1990 This is a fairly informal bibliography of work relating artificial neural networks (ANNs) and genetic search. It is a collection of books, papers, presentations, reports, and the like, which I've come across in the course of pursuing my interest in using genetic search techniques for the design of ANNs and in operating an electronic mailing list on GA/ANN. The bibliography contains no references which I feel relate solely to ANNs or GAs (genetic algorithms). To receive a copy, simply request the report by name and number; send email to kelly at cse.ogi.edu or smail to: Kelly Atkinson Department of Computer Science and Engineering Oregon Graduate Institute 19600 NW Von Neumann Dr. Beaverton, OR 97006-1999 Mike Rudnick From INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU Tue Jan 16 00:59:00 1990 From: INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU) Date: Tue, 16 Jan 90 00:59 EST Subject: List Address, ICJNN, and Turing Machines Message-ID: I am posting this for Jurgen Schmidhuber (schmidhu at tumult.informatik.tu-muenchen.de). He told me at ICJNN that he was getting this mailing list but was having problems sending submissions to it...is there a special addressing method he needs from Germany besides the .edu? Not to waste bandwidth, I'll add that I seem to remember neural nets being trained to be Turing Machines in Zipser (et al.)'s work on recurrent network backpropagation learning. -Thomas Edwards tedwards at cmsun.nrl.navy.mil ins_atge at jhuvms.BITNET ins_atge at jhunix.hcf.jhu.edu From terry%sdbio2 at ucsd.edu Tue Jan 16 01:12:54 1990 From: terry%sdbio2 at ucsd.edu (Terry Sejnowski) Date: Mon, 15 Jan 90 22:12:54 PST Subject: Summer Course on Learning Message-ID: <9001160612.AA07677@sdbio2.UCSD.EDU> Summer Course on COMPUTATIONAL NEUROSCIENCE: LEARNING AND MEMORY July 14-17, 1990 Cold Spring Harbor Laboratory Organizers: Michael Jordan, MIT Terrence Sejnowski, Salk Institute and UCSD This is an intensive laboratory and lecture course that will examine computational approaches to problems in learning and memory. Problems and techniques from both neuroscience and cognitive science will be covered, including learning procedures that have been developed recently for neural network models. The course will include a computer-based laboratory so that students can actively explore computational issues. Students will be able to informally interact with the lecturers.
A strong grounding in mathematics and previous exposure to neurobiology is essential for students. Partial list of Instructors: Richard Sutton Yan LeCun David Rumelhart Tom Brown Jack Byrne Richard Durbin Gerald Tesauro Stephen Lisberger Ralph Linsker John Moody Nelson Donnegan Chris Atkeson Tomaso Poggio DEADLINE: MARCH 15, 1990 Applications and additional information may be obtained from: Registrar Cold Spring Harbor Laboratory Cold Spring Harbor, New York 11724 Tuition, Room and Board: $1,390. ----- From kube%cs at ucsd.edu Tue Jan 16 02:15:59 1990 From: kube%cs at ucsd.edu (Paul Kube) Date: Mon, 15 Jan 90 23:15:59 PST Subject: Neuring Machine Message-ID: <9001160715.AA18893@kokoro.UCSD.EDU> Date: Mon, 15 Jan 90 13:27:55 EST From: Jordan B Pollack How could you get the unbounded tape INSIDE a machine? For integer machines, assume unbounded registers. For cellular automata, assume an unbounded lattice. For connectionist models, assume that two units could encode the tape as binary rationals. Or assume an unbounded network. Date: Mon, 15 Jan 90 11:07:21 -0500 From: Jim Hendler I'm a little confused by Dave's argument, and some of the others I've seen. My Sun, built out of wires and etc. is not equivalent to a Turing machine, it has limited state, etc. See above. To say that a Turing machine could be built from connectionist components is not to argue that they are formally equivalent. Showing how you can simulate the operation of a universal TM in another machine is usually how you show that TM's are no more powerful than the other machine. The only demonstration of equivalence I've seen is the result Dave cites from Jordan Pollack (see his thesis) in which he shows a reduction of a Turing machien to a particular connectionist network (given infinite precision integers). This is not a "simulation" of a TM, but rather a full-fledged reduction. Why you think it's okay to suppose you have infinite-precision neurons but not okay to suppose you have infinitely many neurons is a mystery to me, since each seems about as impossible as the other. --Paul kube at cs.ucsd.edu From rudnick at cse.ogi.edu Tue Jan 16 12:51:51 1990 From: rudnick at cse.ogi.edu (Mike Rudnick) Date: Tue, 16 Jan 90 09:51:51 PST Subject: ANN fault tol. Message-ID: <9001161751.AA12369@cse.ogi.edu> Below is a synopsis of the references/material I received in response to an earlier request for pointers to work on the fault tolerance of artificial neural networks. Although there has been some work done relating directly to ANN models, most of the work appears to have been motivated by VLSI implementation and fault tolerance concerns. Apparently, and this is speculation on my part, the folklore that artificial neural networks are fault tolerant derives mostly from the fact that they resemble biological neural networks, which generally don't stop working when a few neurons die here and there. Although it looks like I'm not going to be doing ANN fault tolerance as my dissertation topic, I can't help but feel this line of research contains a number of outstanding phd topics. Mike Rudnick Computer Science & Eng. Dept. Domain: rudnick at cse.ogi.edu Oregon Graduate Institute (was OGC) UUCP: {tektronix,verdix}!ogicse!rudnick 19600 N.W. von Neumann Dr. (503) 690-1121 X7390 (or X7309) Beaverton, OR. 97006-1999 ----- From: platt at synaptics.com (John Platt) Well, one of the original papers about building a neural network in analog VLSI had a chip where about half of then synapses were broken, but the chip still worked. 
Look at ``VLSI Architecutres for Implementation of Neural Networks'' by Massimo A. Sivilotti, Michael Emerling, and Carver A. Mead, in ``Neural Networks for Computing'', AIP Conference Proceedings 151, John S. Denker, ed., pp. 408-413 ----- From: Jonathan Mills You might be interested in a paper submitted to the 20th Symposium on Multiple-Valued Logic titled "Lukasiewicz Logic Arrays", describing work done by M. G. Beavers, C. A. Daffinger and myself. These arrays (LLAs for short) can be used with other circuit components to fabricate neural nets, expert systems, fuzzy inference engines, sparse distributed memories and so forth. They are analog circuits, massively parallel, based on my work on inference cellular automata, and are inherently fault-tolerant. In simulations I have conducted, the LLAs produce increasingly noisy output as individual processors fail, or as groups of processors randomly experience stuck-at-one and/or stuck-at-zero faults. While we have much more work to do, it does appear that with some form of averaging the output of an LLA can be preserved without noticeable error with up to one-third of the processors faulty (as long as paths exist from some inputs to the output). If the absolute value of the output is taken, a chain of pulses results so that a failing LLA will signal its graceful degradation. VLSI implementations of LLAs are described in the paper, with an example device submitted to MOSIS, and due back in January 1990. We are aware of the work of Alspector et. al. and Graf et. al., which is specific to neural architectures. Our work is more general in that it arises from a logic with both algebraic and logical semantics, lending the dual semantics (and its generality) to the resulting device. LLAs can also be integrated with the receptor circuits of Mead, leading to a design project here for a single circuit that emulates the first several levels of the visual system, not simply the retina. This is almost necessary because I can put over 2,000 processors on a single chip, but haven't the input pins to drive them! Thus, a chip that uses fewer processors with the majority of inputs generated on chip is quite attractive -- especially since even with faults I'll still get a valid result from the computational part of the device. Sincerely, Jonathan Wayne Mills Assistant Professor Computer Science Department Indiana University Bloomington, Indiana 47405 (812) 331-8533 ----- From: risto at CS.UCLA.EDU (Risto Miikkulainen) I did a brief analysis of the fault tolerance of distributed representations. In short, as more units are removed from the representation, the performance degrades linearly. This result is documented in a paper I submitted to Cognitive Science a few days ago: Risto Miikkulainen and Michael G. Dyer (1989). Natural Language Processing with Modular Neural Networks and Distributed Lexicon. Some preliminary results are mentioned in: @InProceedings{miikkulainen:cmss, author = "Risto Miikkulainen and Michael G. Dyer", title = "Encoding Input/Output Representations in Connectionist Cognitive Systems", booktitle = "Proceedings of the 1988 Connectionist Models Summer School", year = "1989", editor = "David S. Touretzky and Geoffrey E. Hinton and Terrence J. Sejnowski", publisher = KAUF, address = KAUF-ADDR, } ----- "Implementation of Fault Tolerant Control Algorithms Using Neural Networks", systematix, Inc., Report Number 4007-1000-08-89, August 1989. 
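As a toy version of the unit-lesioning analyses mentioned in several of the replies above (hypothetical, not taken from any of the cited papers), the sketch below stores a value redundantly across a layer of hidden units and zeroes units a batch at a time; because each unit carries an equal share of the answer, the readout error grows roughly linearly with the number of dead units, the graceful degradation usually claimed for distributed codes.

#include <stdio.h>

#define NHIDDEN 100

int main(void)
{
    double hidden[NHIDDEN];
    double target = 1.0;
    int i, lesioned;

    for (i = 0; i < NHIDDEN; i++)
        hidden[i] = target;                 /* fully redundant distributed code */

    for (lesioned = 0; lesioned <= NHIDDEN; lesioned += 20) {
        double sum = 0.0;
        double output;
        for (i = 0; i < NHIDDEN; i++)
            sum += (i < lesioned) ? 0.0 : hidden[i];   /* dead units contribute nothing */
        output = sum / NHIDDEN;                        /* linear readout */
        printf("%3d units lesioned: output %.2f, error %.2f\n",
               lesioned, output, target - output);
    }
    return 0;
}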
----- From: kddlab!tokyo-gas.co.jp!hajime at uunet.uu.net > " A study of high reliable systems > against electric noises and element failures " > > -- Apllication of neural network systems -- > > ISNCR '89 means "International Symposium on Noise and Clutter Rejection > in Radars and Image Processing in 1989". > It was held in Kyoto, JAPAN from Nov.13 to Nov.17. Hajime FURUSAWA JUNET: hajime at tokyo-gas.co.jp Masayuki KADO JUNET: kado at tokyo-gas.co.jp Research & Development Institute Tokyo Gas Co., Ltd. 1-16-25 Shibaura, Minato-Ku Tokyo 105 JAPAN ----- From: Mike Carter "Operational Fault Tolerance of CMAC Networks", NIPS-89, by Mikeael J. Carter, Frank Rudolph, and Adam Nucci, University of New Hampshire Mike Carter also says he has a non-technical overview of NN fault tolerance which he wrote some time ago which contains references to papers which have some association with fault tolerance (although only 1 of which had fault tolerance as its focus). ----- From: Martin Emmerson I am working on simulating faults in neural-networks using a program running on a Sun (Unix and C). I am particularly interested in qualitative methods for assessing performance of a network and also faults that might occur in a real VLSI implementation. From pollack at cis.ohio-state.edu Tue Jan 16 12:33:33 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Tue, 16 Jan 90 12:33:33 -0500 Subject: Neuring Machine In-Reply-To: Paul Kube's message of Mon, 15 Jan 90 23:15:59 PST <9001160715.AA18893@kokoro.UCSD.EDU> Message-ID: <9001161733.AA07782@giza.cis.ohio-state.edu> >> Why you think it's okay to suppose you have infinite-precision neurons >> but not okay to suppose you have infinitely many neurons is a mystery to me, >> since each seems about as impossible as the other. >> --Paul >> kube at cs.ucsd.edu There are two reasons not to assume an unbounded network: 1) Each new unit doesnt just add more memory, but also adds "control." 2) The Neuron Factory would still be "outside" the system, which is the original problem with McPitt's tape. Also, there IS a difference between infinite and unbounded (but still finite in practice). Various proofs of the "universal approximation" of neural networks (such as White, et al) depend on an unbounded (but not infinite) number of hidden units. Finally, there is also a difference between a theoretical argument about competency and a practical argument about what machines can be physically built. Since, as someone in AI, I have always simulated every connectionist model I've researched (including the Neuring machine), Paul Kube (along with several readers of my thesis) seemed to take my theoretical argument (about a sufficient set of primitive elements) as a programme of building neural networks in a physically impossible and very stilted fashion. Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 From hare at amos.ucsd.edu Tue Jan 16 13:28:35 1990 From: hare at amos.ucsd.edu (Mary Hare) Date: Tue, 16 Jan 90 10:28:35 PST Subject: Technical report available Message-ID: <9001161828.AA22358@amos.ucsd.edu> "The Role of Similarity in Hungarian Vowel Harmony: A connectionist account" Technical Report CRL-9004 Mary Hare Department of Linguistics & Center for Research in Language Over the last 10 years, the assimilation process referred to as vowel harmony has served as a test case for a number of proposals in phonological theory. 
Current autosegmental approaches successfully capture the intuition that vowel harmony is a dynamic process involving the interaction of a sequence of vowels; still, no theoretical analysis has offered a non-stipulative account of the inconsistent behavior of the so-called "transparent", or disharmonic, segments. The current paper proposes a connectionist processing account of the vowel harmony phenomenon, using data from Hungarian. The strength of this account is that it demonstrates that the same general principle of assimilation which underlies the behavior of the "harmonic" forms accounts as well for the apparently exceptional "transparent" cases, without stipulation. The account proceeds in three steps. After presenting the data and current theoretical analyses, the paper describes the model of sequential processing introduced by Jordan (1986), and motivates this as a model of assimilation processes in phonology. The paper then presents the results of a series of parametric studies that were run with this model, using arbitrary bit patterns as stimuli. These results establish certain conditions on assimilation in a network of this type. Finally, these findings are related to the Hungarian data, where the same conditions are shown to predict the correct pattern of behavior for both the regular harmonic and irregular transparent vowels. ---------------------------------------- Copies of this report may be obtained by sending an email request for TR CRL-9004 to 'yvonne at amos.ucsd.edu', or surface mail to the Center for Research in Language, C-008; University of California, San Diego; La Jolla CA 92093. From kube%cs at ucsd.edu Tue Jan 16 18:44:38 1990 From: kube%cs at ucsd.edu (Paul Kube) Date: Tue, 16 Jan 90 15:44:38 PST Subject: Neuring Machine In-Reply-To: Jordan B Pollack's message of Tue, 16 Jan 90 12:33:33 -0500 <9001161733.AA07782@giza.cis.ohio-state.edu> Message-ID: <9001162344.AA19527@kokoro.UCSD.EDU> Date: Tue, 16 Jan 90 12:33:33 -0500 From: Jordan B Pollack There are two reasons not to assume an unbounded network: 1) Each new unit doesnt just add more memory, but also adds "control." Yes, you'd get two different connectionist models in the two cases (inifinite size vs. infinite precision). But they'd both be connectionist models, more or less equally abstract, so for arguments about theoretical reduction of TM's to networks they seem prima facie as good. That's all I was getting at. 2) The Neuron Factory would still be "outside" the system, which is the original problem with McPitt's tape. It seems to me the issue turns on whether scaling the machine up to handle bigger problems makes the machine of a different class. So usually everybody agrees Jim's Sun is a universal machine; adding more memory (and maybe an addressing mode to the 68020) is "trivial". Adding unbounded memory to a finite state machine, or another stack to a PDA, are nontrivial changes. I would think that adding more neurons to a net keeps it a neural net, though you could put restrictions on the nets you're interested in to prevent that. Since no natural computational classification of nets is yet known, how you define your model class is up to you; but the original question seemed to be about Turing equivalence of unrestricted connectionist networks. Also, there IS a difference between infinite and unbounded (but still finite in practice). Various proofs of the "universal approximation" of neural networks (such as White, et al) depend on an unbounded (but not infinite) number of hidden units. 
Isn't it just a matter of how you order the quantifiers? Every TM computation requires only finite tape, but no finite tape will suffice for every TM computation. Similarly in White et al every approximation bound on a Borel-measurable function can be satisfied with a finite net, but no finite net can achieve every approximation bound on every function. Paul Kube (along with several readers of my thesis) seemed to take my theoretical argument (about a sufficient set of primitive elements) as a programme of building neural networks in a physically impossible and very stilted fashion. Sorry, I didn't mean to imply that you were trying to do anything impossible, or even stilted! I think that results on computational power of abstract models are interesting, and that questions about their relation to practical machines are useful to think about. But in the absence of any more concrete results, maybe we should give it a rest, and get back to discussing advertising and marketing strategies for our graduate programs. :-) --Paul kube at cs.ucsd.edu From dfausett at zach.fit.edu Tue Jan 16 10:55:03 1990 From: dfausett at zach.fit.edu ( Donald W. Fausett) Date: Tue, 16 Jan 90 10:55:03 EST Subject: utms and connex Message-ID: <9001161555.AA24092@zach.fit.edu> Are you familiar with the following paper? It may address some of your questions. Max Garzon and Stan Franklin, "Neural Computability II", IJCNN, v. 1, 631-637, 1989. The authors claim to present a general framework "... within which the computability of solutions to problems by various types of automata networks (neural networks and cellular automata included) can be compared and their complexity analized." -- Don Fausett From gary%cs at ucsd.edu Tue Jan 16 20:58:28 1990 From: gary%cs at ucsd.edu (Gary Cottrell) Date: Tue, 16 Jan 90 17:58:28 PST Subject: List Address, ICJNN, and Turing Machines Message-ID: <9001170158.AA12394@desi.UCSD.EDU> Zipser & Williams trained a net to be the Finite State control of a turing machine, something which is quite different. I did something like that in my paper with Fu-Sheng Tsung in IJCNN89. We trained a network to learn to be a while loop with an if-then in it. The network was adding multi-digit numbers. We also showed that Elman's networks are more powerful than Jordan's because a n output-recurrent network can forget things about its input that are not reflected on its output. I.e., the output, due to the teaching signal, may filter information that you need. This was easily demonstrated by reversing the order of two statements in the while loop, which turned it into a program that one could learn and not the other. gary cottrell 619-534-6640 Sec'y: 619-534-5288 FAX: 619-534-7029 Computer Science and Engineering C-014 UCSD, La Jolla, Ca. 92093 gary at cs.ucsd.edu (ARPA) {ucbvax,decvax,akgua,dcdwest}!sdcsvax!gary (USENET) gcottrell at ucsd.edu (BITNET) From DUFOSSE%FRMOP11.BITNET at VMA.CC.CMU.EDU Wed Jan 17 04:41:57 1990 From: DUFOSSE%FRMOP11.BITNET at VMA.CC.CMU.EDU (DUFOSSE Michel) Date: Wed, 17 Jan 90 09:41:57 GMT Subject: help please Message-ID: I cannot connect to NEURON-REQUEST at HPLAB.HP.COM in order to ask for registration to NEURON mailing list ? Would you help me please? 
thak thank you (DUFOSSE at FRMOP11.BITNET) From slehar at bucasb.bu.edu Wed Jan 17 16:43:20 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Wed, 17 Jan 90 16:43:20 EST Subject: Technical report available In-Reply-To: connectionists@c.cs.cmu.edu's message of 17 Jan 90 03:39:03 GM Message-ID: <9001172143.AA00899@thalamus.bu.edu> Az Ipafai papnak fa pipaya van azert az Ipafai papi pipa papi fa pipa! "the priest of Ipafa has a wooden pipe therefore the Ipafay priestly pipe is a priestly wooden pipe!" From hendler at cs.umd.edu Wed Jan 17 14:04:34 1990 From: hendler at cs.umd.edu (Jim Hendler) Date: Wed, 17 Jan 90 14:04:34 -0500 Subject: Turing Machines and Conenctionist networks Message-ID: <9001171904.AA03022@dormouse.cs.UMD.EDU> For what it is worth, I've just been having a chat with my friend down the hall, a learning theorist. We've sketched out a proof that shows that non-recurrent back-propagation learning cannot be Turing equivalent (we can show a class of Turing computable functions which such a machine could not learn - this is even assuming perfect generalization from the training set to an infinite function). Recurrent BP might or might not, depending on details of the learning algorithms which we'll have to think about. cheers Jim H. From schraudo%cs at ucsd.edu Thu Jan 18 13:08:17 1990 From: schraudo%cs at ucsd.edu (Nici Schraudolph) Date: Thu, 18 Jan 90 10:08:17 PST Subject: Turing Machines and Conenctionist networks Message-ID: <9001181808.AA08107@beowulf.UCSD.EDU> >From: Jim Hendler >We've sketched out a proof that shows that non-recurrent >back-propagation learning cannot be Turing equivalent (we >can show a class of Turing computable functions which such >a machine could not learn [...] Wait a minute - did anybody ever claim that backprop could LEARN any Turing-computable function? It seems clear that this is not the case: given that backprop is a gradient descent method, all you have to do is construct a function whose solution in weight space is surrounded by a local minimum "moat" in the error surface. The question was whether a neural net could COMPUTE any Turing- -computable function, given the right set of weights A PRIORI. The answer to that depends on what class of architectures you mean by "neural net": in general such nets are obviously Turing equivalent since you can construct a Turing Machine from connec- tionist components; more restricted classes such as one hidden layer feedforward nets are where it gets interesting. -- Nici Schraudolph, C-014 nschraudolph at ucsd.edu University of California, San Diego nschraudolph at ucsd.bitnet La Jolla, CA 92093 ...!ucsd!nschraudolph From jlm+ at ANDREW.CMU.EDU Thu Jan 18 13:15:17 1990 From: jlm+ at ANDREW.CMU.EDU (James L. McClelland) Date: Thu, 18 Jan 90 13:15:17 -0500 (EST) Subject: Turing Machines and Conenctionist networks In-Reply-To: <9001171904.AA03022@dormouse.cs.UMD.EDU> References: <9001171904.AA03022@dormouse.cs.UMD.EDU> Message-ID: <0ZhUSp200jWD80V2Fh@andrew.cmu.edu> The recent exchanges prompt me to reflect on how much time it's worth spending on the issue of Turing equivalence. One of the characteristics of connectionist models is the distinctly different style of processing and storage that they provide relative to conventional architectures. 
One of the motivations for thinking that these characteristics might be worth pursuing is that Turing equivalence is not a guarantee of capturing the kinds of intelligence that people exhibit but Turing machines do not, such as: Speech perception, pattern recognition, retrieval of contextually relevant information from memory, language understanding, and intuitive thinking. We need to start thinking about ways of going beyond Turing equivalence to find tests that can indicate the sufficiency of mechanisms to exhibit natural cognitive capabilities like those enumerated above. Turing equivalence has in my opinion virtually nothing to do with this matter. In principle results about what can be computed using discrete unambiguous sequences of symbols and a totally infallible, infinite memory will not help us much in understanding how we cope in real time with mutiple, graded and uncertain cues. The need for robustness in performance and learning in the face of an ambiguous world is not addressed by Turing equivalence, yet every time we understand a spoken sentence these issues of robustness arise! How do humans made of connectoplasm achieve Turing equivalence? The equivalence exists at a MACROLEVEL, and should not be sought in the microstructure (the units and connections). As whole organisms, we certainly can compute any computable function. The procedures that make up the microstructure of each step in such a computation are, I would submit, finite and probabilistic. But we can string sequences of such steps together, particularly with the aid of real external memory (pencils and paper, etc), and enough error checking, to compute anything we want. More on these and related matters may be found in Chapters 1 and 4 of Vol 1 and Chapter 14 of Vol 2 of the PDP books. -- Jay McClelland From alexis at CS.UCLA.EDU Thu Jan 18 13:15:31 1990 From: alexis at CS.UCLA.EDU (Alexis Wieland) Date: Thu, 18 Jan 90 10:15:31 -0800 Subject: Turing Machines and Conenctionist networks In-Reply-To: Jim Hendler's message of Wed, 17 Jan 90 14:04:34 -0500 <9001171904.AA03022@dormouse.cs.UMD.EDU> Message-ID: <9001181815.AA13186@oahu.cs.ucla.edu> Well sure, recurrent networks are unquestionably needed to do powerful stuff, you're not going to implement a Turing machine with feed-forward networks alone. Recurrence allows memory as found in a computer. To really harness that power you want to "learn" the recurrent part of the network as well, allowing the system to decide where and when it needs some memory. BP (as well as ART, vect. quant, Boltzman, etc, etc) are all quite interesting and powerful, but if we can't figure out how to make a net learn where and when recurrence / memory is needed ... connectionism will probably die out again fairly soon. alexis. (UCLA) alexis at cs.ucla.edu or (MITRE Corp.) alexis at yummy.mitre.org From Michael.Witbrock at CS.CMU.EDU Thu Jan 18 19:50:06 1990 From: Michael.Witbrock at CS.CMU.EDU (Michael.Witbrock@CS.CMU.EDU) Date: Thu, 18 Jan 90 19:50:06 -0500 (EST) Subject: In-Reply-To: <9001151607.AA29919@dormouse.cs.UMD.EDU> References: <9001151607.AA29919@dormouse.cs.UMD.EDU> Message-ID: Jim Hendler argues for the non turing equivalence of SUNs on the basis of their backing store limitation. A SUN with access to unlimited memory (rather difficult to arrange given the limited address space of its mpu) could be shown to be turing equivalent, however. Similarly, it should be possible to prove the result for a network which can grow extra units and connections in unlimited number. 
This would be more satisfying than Jordan Pollack's neuring machine. michael From slehar at bucasb.bu.edu Fri Jan 19 00:18:31 1990 From: slehar at bucasb.bu.edu (slehar@bucasb.bu.edu) Date: Fri, 19 Jan 90 00:18:31 EST Subject: Turing Machines and Conenctionist networks In-Reply-To: connectionists@c.cs.cmu.edu's message of 19 Jan 90 01:01:01 GM Message-ID: <9001190518.AA10272@thalamus.bu.edu> IN REPLY TO YOUR POSTING about neural nets vs. Turing machines: WELL SAID! I was getting really tired of the debate, and I cannot agree with you more! By the way, you're not THE McClelland of PDP fame are you? What a pleasure to exchange electrons with you! I enjoyed your book very much and recommend it highly to people all the time. From pollack at cis.ohio-state.edu Fri Jan 19 00:15:18 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Fri, 19 Jan 90 00:15:18 EST Subject: Turing Machines and Connectionist networks In-Reply-To: "James L. McClelland"'s message of Thu, 18 Jan 90 13:15:17 -0500 (EST) <0ZhUSp200jWD80V2Fh@andrew.cmu.edu> Message-ID: <9001190515.AA02576@toto.cis.ohio-state.edu> I disagree with Jay, and have disagreed since before I read (and quoted in my thesis) his bit about paper and pencil which appeared in chapter 4. Certainly, while isomorphism to the STRUCTURE to a Turing machine should not be a concern of connectionist research, the question of the FUNCTIONAL isomorphism (in idealized models) is a real concern. Especially in language processing, where fixed-width systems with non-recursive representations aren't even playing the game! Humans don't need paper and pencil to understand embedded clauses. At CMU Summer School 1986, I pointed out these generative capacity limits in all known connectionist models, and predicted that "the first such model to attract the attention of Chomskyists would get the authors shot out at the knees." While psychological realism and robustness are not built-in features of conventional stored program computers, recursive representations and computations are "of the nature" of cognitive and real-world tasks, and cannot be wished away simply because we don't YET know how to achieve them simultaneously with the good qualities of existing connectionist architectures. I certainly agree with Jay that recursive-like powers can emerge from a massively parallel iterative "microstructure". Witness Wolfram's automata or Mandelbrot's set. Neither are finite OR probabilistic, however, which may give us a clue... Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 From jose at neuron.siemens.com Fri Jan 19 09:42:08 1990 From: jose at neuron.siemens.com (Steve Hanson) Date: Fri, 19 Jan 90 09:42:08 EST Subject: Turing Machines and Conenctionist networks Message-ID: <9001191442.AA03448@neuron.siemens.com.siemens.com> mathematical query...is it contradictory that feedforward networks are claimed BOTH to be able to approximate any real valued function mapping and NOT be able (as Hendler suggests) be turing equivalent? Cannot specific turing machines be seen as a real valued function mapping? Are there any mathematicians out there that can explain this to me please. 
Steve From dave at cogsci.indiana.edu Fri Jan 19 19:03:20 1990 From: dave at cogsci.indiana.edu (David Chalmers) Date: Fri, 19 Jan 90 19:03:20 EST Subject: Turing Machines and Conenctionist networks Message-ID: Steve Hanson asks: >mathematical query...is it contradictory that feedforward >networks are claimed BOTH to be able to approximate any >real valued function mapping and NOT be able (as Hendler suggests) >be turing equivalent? Cannot specific turing machines be >seen as a real valued function mapping? Feedforward networks can approximate functions from any *finite* domain. Turing equivalence requires the ability to compute general recursive functions defined on *infinite* domains (such as the natural numbers). The only ways that I can see to allow connectionist networks to handle functions on infinite domains are: (1) Arbitrarily high-precision inputs and processing; or (2) Arbitrarily large numbers of input units (along with arbitrarily large network size); or (3) Inputs extended over arbitrarily large periods of time. (Of course a feed-forward network would be no good here. We'd need some form of recurrence to preserve information.) Note that we never need *infinite* precision/size/time, as some have suggested. We merely need the ability to extend precision/size/time to an arbitrarily large (but still finite) extent, depending on the problem and the input. Infinite precision etc would give us something new again -- the ability to handle functions from *uncountable* domains (not even Turing machines can do this). Incidentally, of the methods above, I think that (3) is the most plausible. But the importance of Turing equivalence to cognition is questionable. Dave Chalmers (dave at cogsci.indiana.edu) Center for Research on Concepts and Cognition Indiana University. From terry%sdbio2 at ucsd.edu Sat Jan 20 00:35:46 1990 From: terry%sdbio2 at ucsd.edu (Terry Sejnowski) Date: Fri, 19 Jan 90 21:35:46 PST Subject: Cold Spring Harbor - Date Correction Message-ID: <9001200535.AA19462@sdbio2.UCSD.EDU> (Incorrect dates were given in previous listing) Summer Course on COMPUTATIONAL NEUROSCIENCE: LEARNING AND MEMORY July 14-27, 1990 Cold Spring Harbor Laboratory Organizers: Michael Jordan, MIT Terrence Sejnowski, Salk Institute and UCSD This is an intensive laboratory and lecture course that will examine computational approaches to problems in learning and memory. Problems and techniques from both neuroscience and cognitive science will be covered, including learning procedures that have been developed recently for neural network models. The course will include a computer-based laboratory so that students can actively explore computational issues. Students will be able to informally interact with the lecturers. A strong grounding in mathematics and previous exposure to neurobiology is essential for students. Instructors: Eric Baum (NEC) Richard Sutton (GTE) Yan LeCun (ATT) David Rumelhart (Stanford) Tom Brown (Yale) Jack Byrne (Univ. Texas Houston) Richard Durbin (Stanford) Gerald Tesauro (IBM) Stephen Lisberger (UC San Farancisco) Ralph Linsker (IBM) John Moody (Yale) Nelson Donegan (Yale) Chris Atkeson (MIT) Tomaso Poggio (MIT) Mike Kearns (MIT) DEADLINE: MARCH 15, 1990 Applications and additional information may be obtained from: Registrar Cold Spring Harbor Laboratory Cold Spring Harbor, New York 11724 Tuition, Room and Board: $1,390. Partial Scholarships available to qualified students. 
----- From noel%CS.EXETER.AC.UK at VMA.CC.CMU.EDU Sun Jan 21 08:21:31 1990 From: noel%CS.EXETER.AC.UK at VMA.CC.CMU.EDU (Noel Sharkey) Date: Sun, 21 Jan 90 13:21:31 GMT Subject: turing equivalence Message-ID: <12411.9001211321@entropy.cs.exeter.ac.uk> I have been pleased with the response to the Turing equivalence issue both on the net and in my personal mail. I have not had time to digest all of this yet, but i have learned a few fundamental things that i didn`t know and will pass them on by and by. But i still have not seen a real formal proof. Jay McClelland does not think that this is a very worthwhile pursuit because "Turing equivalence is not a guarantee of capturing the kinds of intelligence that people exhibit but Turing machines do not, such as: Speech perception, pattern recognition, retrieval of contextually relevant information from memory, language understanding and intuitive thinking." While this may be true, we do not know whether something like Turing equivalence is a NECESSARY condition for the performance of such human phenomena. Jay says, and I agree, that, "We need to start thinking of ways of going beyond Turing equivalence." But the question here must be, how do we know when we have gone beyond Turing equivalence without first having found out whether or not we have it. I am working in connectionism because I am interested in explanations of human cognition and certainly, at present, connectionism offers a new and exciting approach. I think from this perspective it is useful the use of external memory aids etc. by humans is interesting - Turing discusses this himself. However, computer science has been developing formal analytic tools for a long time, let us not throw all these insights away because of a lot of flag waving enthusiasm. If we can find formal equivalences then we know where our new theory stands in relation to the old and we can demonstrate its power without descending into waffleware. Having said this, I wouldn't like to spend too much time on it myself. noel p.s. if this puts people off writing to the net about turing equivalence, I would still be very happy to have your replies directed to me personally. From lakoff at cogsci.berkeley.edu Sun Jan 21 17:47:20 1990 From: lakoff at cogsci.berkeley.edu (George Lakoff) Date: Sun, 21 Jan 90 14:47:20 -0800 Subject: No subject Message-ID: <9001212247.AA06634@cogsci.berkeley.edu> Subject: McClelland and Turing I agree strongly with Jay McClelland's comments about the irrelevance of Turing computability for issues in cognitive science. I would add one major point: Discussions of computability in general ignore the CONTENT of what is computed, in particular, the content of all natural language concepts. To me, the most important part of studies in neural grounding of the conceptual system, is the promise it holds out for accounting for the content of concepts, not just the form of representations and the characteristics of computability. As soon as discussion is directed to details of content, it becomes clear that computability discussions get us virtually nowhere. George Lakoff From gary%cs at ucsd.edu Mon Jan 22 16:49:13 1990 From: gary%cs at ucsd.edu (Gary Cottrell) Date: Mon, 22 Jan 90 13:49:13 PST Subject: turing equivalence Message-ID: <9001222149.AA03430@desi.UCSD.EDU> Noel writes: Jay McClelland does not think that this is a very worthwhile pursuit because "Turing equivalence is not a guarantee of capturing the kinds of intelligence that people exhibit but Turing machines do not, such as: [long list...] 
While this may be true, we do not know whether something like Turing equivalence is a NECESSARY condition for the performance of such human phenomena. Jay says, and I agree, that, "We need to start thinking of ways of going beyond Turing equivalence." [end of noel] Wait, this is crazy unless you believe in Quantum machines or some such [which is a perfectly reasonable response to the following, but for now, let's pretend it's not]. If you are a normal computer scientis/AI/PDP/Cognitive Science researcher, then you believe the basic assumption underlying AI that >>thinking is a kind of computation<<. All of the known kinds of computation are equivalent to the kind performed by a TM. So if we could show that a PDP net is equivalent to a TM, then we would have captured all of those things Jay was talking about. The problem is the proof is not constructive. If anything, what we need to do is find the class of functions that are easily *learnable* by particular PDP architectures. This will be a *subset* of the things computable by a TM. Hence, proving TM equivalence is not NECESSARY, however, it sure would be SUFFICIENT to show that it isn't crazy to try to find a learning algorithm and an architecture that could learn some particular class of problems, since whatever we would want to compute is certainly do-able by a neural net. Interesting work along these lines has been done by Servan-Schreiber et al., & Jeff Elman, where they show that certain kinds of FSM's are hard for simple recurrent nets to learn, and Jeff shows that a net can learn the syntactic structure of English, but is poor at center embedding, while being fine with tail recursive clauses. gary cottrell From lakoff at cogsci.berkeley.edu Tue Jan 23 04:20:27 1990 From: lakoff at cogsci.berkeley.edu (George Lakoff) Date: Tue, 23 Jan 90 01:20:27 -0800 Subject: No subject Message-ID: <9001230920.AA11360@cogsci.berkeley.edu> Response to Harlan: By ``content'' I have in mind cases like the following: (1) Color: The nature and distribution of color categories has been shown to depend on the neurophysiology of color vision. This is not just a matter of computation by the neurons, but of what they are hooked up to. (See my Women, Fire, and Dangerous Things, pp. 24 - 30.) (2) Basic-level categories, whose properties depend on gestalt perception, motor programs, and imagining capacity. Here we have not just a matter of abstract computation but again a matter of how the neurons doing the computing are hooked up to the body. (3) Spatial relations, e.g., in, out, to, from, through, over, and all others. Here the visual system, rather than the olfactory system, will count. Again, simply looking at abstract computations doesn't help. (4) Emotional concepts, like anger, which are partly understood via complex metaphorical mappings, but which are constrained by phsyiology. See Women, Fire, Case study 1. (5) Cultural concepts, like marriage. (6) Scientific concepts like relativity. I simply do not see how pure computation tells us anything whatever about the content of the concepts -- not just what inference patterns they occur in relative to other concepts, but the nature of, say, GREEN as opposed to ACROSS or SCARED, as well as the various properties of the concepts, e.g., their prototype structure, their place in the basic-level hierarchy, their associated mental images, whether they are metaphorically constituted and if so how they are understood, etc. 
As a cognitive scientist, I am concerned with all these issues and a myriad of other ones of greater complexity. Discussions of abstract computability issues, as interesting as they are in themselves, just don't help here. I am interested in connectionism partly because it holds out the promise of insight into the neural grounding of concepts and into the thousands of issues in conceptual analysis that require an understanding of such grounding. Turing computability is a technical issue and is of some technical interest, but has nothing whatever to say to the visceral issues concerning the content of concepts. George From shastri at central.cis.upenn.edu Tue Jan 23 09:10:20 1990 From: shastri at central.cis.upenn.edu (shastri@central.cis.upenn.edu) Date: Tue, 23 Jan 90 09:10:20 -0500 Subject: (New Tech. Report) From Simple Associations to Systematic Reasoning Message-ID: <9001231410.AA26064@central.cis.upenn.edu> The following report may be of interest to some of you. Please direct e-mail requests to: dawn at central.cis.upenn.edu --------------------------------------------- From Simple Associations to Systematic Reasoning: A connectionist representation of rules, variables and dynamic bindings Lokendra Shastri and Venkat Ajjanagadde Computer and Information Science Department University of Pennsylvania Philadelphia, PA 19104 December 1989 Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency --- as though these inferences are a reflex response of their cognitive apparatus. The work presented in this paper is a step toward a computational account of this remarkable reasoning ability. We describe how a connectionist system made up of simple and slow neuron-like elements can encode millions of facts and rules involving n-ary predicates and variables, and yet perform a variety of inferences within hundreds of milliseconds. We observe that an efficient reasoning system must represent and propagate, dynamically, a large number of variable bindings. The proposed system does so by propagating rhythmic patterns of activity wherein dynamic bindings are represented as the in-phase, i.e., synchronous, firing of appropriate nodes. The mechanisms for representing and propagating dynamic bindings are biologically plausible. Neurophysiological evidence suggests that similar mechanisms may in fact be used by the brain to represent and process sensorimotor information. From marek at iuvax.cs.indiana.edu Tue Jan 23 10:42:00 1990 From: marek at iuvax.cs.indiana.edu (Marek Lugowski) Date: Tue, 23 Jan 90 10:42:00 -0500 Subject: Computational Metabolism on the Connection Machine and Other Stories... Message-ID: Indiana University Computer Science Departamental Colloquium Computational Metabolism on a Connection Machine and Other Stories... --------------------------------------------------------------------- Elisabeth M. Freeman, Eric T. Freeman & Marek W. Lugowski graduate students, Computer Science Department Indiana University Wednesday, 31 January 1990, 7:30 p.m. Ballantine Hall 228 Indiana University campus, Bloomington, Indiana This is work in progress, to be shown at the Artificial Life Workshop II, Santa Fe, February 5-9, 1990. Connection Machine (CM) is a supercomputer for massive parallelism. Computational Metabolism (ComMet) is such computation. ComMet is a tiling where tiles swap places with neighbors or change their state when noticing their neighbors. ComMet is a programmable digital liquid. Reference: Artificial Life, C. 
Langton, ed., "Computational Metabolism: Towards Biological Geometries for Computing", M. Lugowski, pp. 343-368, Addison-Wesley, Reading, MA: 1989, ISBN 0-201-09356-1/paperbound. Emergent mosaics: ---------------- This class of ComMet instances arise from generalizing the known ComMet solution of Dijkstra's Dutch Flag problem. This has implications for cryptology and noise-resistant data encodings. We observed deterministic and indeterministic behavior intertwined, apparently a function of geometry. A preliminary computational theory of metaphor: ---------------------------------------------- We are working on a theory of metaphor as transformations within ComMet. Metaphor is loosely defined as expressing one entity in terms of another, and so it must underlie categorization and perception. We postulate that well-defined elementary events capable of spawning an emergent computation are needed to encode the process of metaphor. We use ComMet to effect this. A generalization of Prisoner's Dilemma (PD) for computational ethics: --------------------------------------------------------------------- The emergence of cooperation in iterated PD interactions is known. We propose a further generalization of PD into a communication between two potentially complex but not necessarily aware of each other agents. These agents are expressed as initial configurations of ComMet spatially arranged to allow communication through tile propagation and tile state change. Connection Machine (CM) implementation: -------------------------------------- We will show a video animation of our results, obtained on a 16k-processor CM, including emergent mosaics, thus confirmed after we predicted them theoretically. Our CM program computes in 3 minutes what took 7 days to do on a Lisp Machine. Our output is a 128x128 color pixel map. Our code will run in virtual mode, if need be, with up to 32 ComMet tiles per CM processor, yielding a 2M-tile tiling (over 2 million tiles) on a 64k-processor CM. From gary%cs at ucsd.edu Tue Jan 23 13:47:11 1990 From: gary%cs at ucsd.edu (Gary Cottrell) Date: Tue, 23 Jan 90 10:47:11 PST Subject: Hype-a-mania strikes Message-ID: <9001231847.AA04455@desi.UCSD.EDU> Jordan Pollack wrote to me: you write: >>Interesting work along these lines has been done by Servan-Schreiber >>et al., & Jeff Elman, where they show that certain kinds of FSM's >>are hard for simple recurrent nets to learn, and Jeff shows that >>a net can learn the syntactic structure of English, but is poor >>at center embedding, while being fine with tail recursive clauses. Isn't this overclaiming elman's work just a TAD, gary? And SS showed that SRN's didn't have a hope of doing unbounded dependencies unless they were statistically differentiated. jordan Mea Culpa!!! Yes, I'm sorry, I should not have said "learns the syntactic structure of english". I should have said "learns the structure of sentences generated by a grammar with embedded clauses". Re: Servan-schreiber's work, I thought I *did* say that "certain FSM's are hard to learn" about SS et al. In any case, all they showed was that theye were hard to learn using their training scheme. Other training schemes may work, such as Combined Subset Training (Tsung & Cottrell, 1989, IJCNN; Cottrell & Tsung, 1989, Cog Sci Proc), which is similar to Jeff Elman's technique of starting with simple sentences and progressively adding more complex ones. Hype-a-mania strikes deep... Into your life it will creep... 
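For concreteness, here is a minimal sketch of what such a shaping schedule looks like. It is not the Combined Subset Training procedure and not Elman's actual setup; the toy prediction task, the network sizes, the learning rate, and the three-stage schedule are all assumptions made for illustration. The only point it demonstrates is the schedule itself: a simple recurrent net (context layer equal to a copy of the previous hidden state, one-step truncated backpropagation) is trained first on the shortest sequences, and longer ones are added to the training pool in stages.

import numpy as np

rng = np.random.default_rng(0)
n_sym, n_hid = 3, 10                       # symbols 0 and 1 plus an end marker 2
W_xh = rng.normal(0.0, 0.3, (n_sym, n_hid))
W_hh = rng.normal(0.0, 0.3, (n_hid, n_hid))
W_hy = rng.normal(0.0, 0.3, (n_hid, n_sym))

def one_hot(i):
    v = np.zeros(n_sym)
    v[i] = 1.0
    return v

def make_seq(n):
    # toy "sentence": alternate 0 1 0 1 ... for n steps, then the end marker
    return [t % 2 for t in range(n)] + [2]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(seqs, epochs=300, lr=0.5):
    global W_xh, W_hh, W_hy
    for _ in range(epochs):
        for seq in seqs:
            h = np.zeros(n_hid)
            for t in range(len(seq) - 1):
                x, target = one_hot(seq[t]), one_hot(seq[t + 1])
                ctx = h                               # context = previous hidden state
                h = np.tanh(x @ W_xh + ctx @ W_hh)
                y = sigmoid(h @ W_hy)
                d_y = (y - target) * y * (1.0 - y)    # squared-error output delta
                d_h = (W_hy @ d_y) * (1.0 - h * h)    # one-step truncated backprop
                W_hy -= lr * np.outer(h, d_y)
                W_xh -= lr * np.outer(x, d_h)
                W_hh -= lr * np.outer(ctx, d_h)

def accuracy(seq):
    h, correct = np.zeros(n_hid), 0
    for t in range(len(seq) - 1):
        h = np.tanh(one_hot(seq[t]) @ W_xh + h @ W_hh)
        correct += int(np.argmax(h @ W_hy) == seq[t + 1])
    return correct / (len(seq) - 1)

# Shaping: start with the shortest sequences, then add longer ones to the pool.
pool = []
for length in (2, 4, 8):
    pool.append(make_seq(length))
    train(pool)
print("next-symbol accuracy on an unseen length-16 sequence:", accuracy(make_seq(16)))

Nothing hangs on the particular task; the only point is the growth of the training pool from easy to hard.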
gary cottrell From jcollins at shalamanser.cs.uiuc.edu Wed Jan 24 04:07:50 1990 From: jcollins at shalamanser.cs.uiuc.edu (John Collins) Date: Wed, 24 Jan 90 03:07:50 CST Subject: turing equivalence Message-ID: <9001240907.AA07251@shalamanser.cs.uiuc.edu> Proving that a particular connectionist architecture is turing equivalent may convince some skeptics that we are not completely wasting our time, but in fact it shouldn't. A computer built of tinker-toys might be turing equivalent, but that implies neither that it is an interesting model of cognition, nor that it is capable of generating any useful results IN A REASONABLE AMOUNT OF TIME. For some 30+ years AI researchers have had at their disposal universal computers which are for all purposes turing equivalent. So why has AI failed to achieve its lofty goals? Clearly turing equivalence is no guarantee of success in modeling cognition. The term "equivalent" is misleading. How can my PC be "equivalent" to a Cray, and yet be so much slower? A large part of cognition involves interacting with the real world in real time; turing equivalence tells us nothing about the speed, efficiency, or appropriateness of computations given our noisy and uncertain world. I am convinced that neural nets ARE turing equivalent; but despite this, I remain optimistic that connectionism will inevitably succeed where GOFAI has failed. ;-) John Collins jcollins at cs.uiuc.edu From tenorio at ee.ecn.purdue.edu Wed Jan 24 16:43:52 1990 From: tenorio at ee.ecn.purdue.edu (Manoel Fernando Tenorio) Date: Wed, 24 Jan 90 16:43:52 EST Subject: NEO-cognitron Message-ID: <9001242143.AA09343@ee.ecn.purdue.edu> Bcc: -------- Of the early second generation algorithms, the Neocognitron is one that I have heard very little about lately. I have two questions: - Can anyone point me to any work that uses the algorithm but was not done by its original designer? - Has anyone implemented the algorithm in C or some other easily available language? The algorithm claims to use microfeatures for classification, and I would like to compare it with other algorithms such as the SONN and the GDR. Thanks. --ft. From hbs at lucid.com Wed Jan 24 17:12:18 1990 From: hbs at lucid.com (Harlan Sexton) Date: Wed, 24 Jan 90 14:12:18 PST Subject: concepts In-Reply-To: George Lakoff's message of Tue, 23 Jan 90 01:20:27 -0800 <9001230920.AA11360@cogsci.berkeley.edu> Message-ID: <9001242212.AA00867@kent-state> I think that we may be talking about the same thing using slightly different language. Obviously what a neuron computes can't be legitimately considered a "concept", but I had intended to convey the idea that what the totality of them compute was the concept. In other words, what it means to say that something has a given color can be defined operationally without regard to introspection of an individual (in other words, we don't need to depend on a definition of just one person or on access (which is currently unavailable) to internal states of mind of a collection of people). My contention is that it is possible (only in principle at the moment, and of course I may be wrong) to construct a machine that is operationally equivalent to a person as far as "green" is concerned. Of such a machine I would then say that it understood the concept of green. I know from a much simpler class of problems how the interconnections and protocols of processors can allow a network of really simple things to do much more complex computations, and I even "understand" in a sense how this works for these cases.
What I don't understand is how this could be extended to sorts of AI problems that neuro-computers are aimed at, but I believe that it is knowable. --Harlan From rba at flash.bellcore.com Wed Jan 24 21:03:29 1990 From: rba at flash.bellcore.com (Robert B Allen) Date: Wed, 24 Jan 90 21:03:29 EST Subject: No subject Message-ID: <9001250203.AA07130@flash.bellcore.com> Subject: shaping recurrent nets The technique of initially training recurrent nets with short sequences and gradually introducing longer sequences has been previously described in: Allen, R.B. Adaptive Training of Connectionist State Machines. ACM Computer Science Conference, Louisville, Feb, 1989, 428. From nowlan at ai.toronto.edu Thu Jan 25 08:34:35 1990 From: nowlan at ai.toronto.edu (Steven J. Nowlan) Date: Thu, 25 Jan 90 08:34:35 EST Subject: shaping recurrent nets In-Reply-To: Your message of Wed, 24 Jan 90 21:03:29 -0500. Message-ID: <90Jan25.083303est.10527@ephemeral.ai.toronto.edu> Another related form of shaping is described in: Nowlan, S.J. Gain Variation in Recurrent Error Propagation Networks. Complex Systems 2 (1988) 305-320. In this case, a robust attractor for a recurrent network is developed by first training from initial states near the attractor, and then gradually increasing the distance of initial states from the attractor. - Steve From AMR at IBM.COM Thu Jan 25 20:36:20 1990 From: AMR at IBM.COM (AMR@IBM.COM) Date: Thu, 25 Jan 90 20:36:20 EST Subject: Turing machines = connectionist models (?) Message-ID: Having started the whole debate about the Turing equivalence of connectionist models, I feel grateful to the many contributors to the ensuing debate. I also feel compelled to point out that, in view of the obvious confusion and disunity concerning what ought to be a simple mathematical question, somebody needs to try to set the record straight. I am sure this will take some time and the efforts of many, but let me try to start the ball rolling. (1) George Lakoff's comment about the irrelevance of this issue in light of the fact that it does not address the question of "content" esp. of natural language concepts bothers me because all the talk about "content" (alias "intentionality", I guess) is so much hand- waving in the absence of any hint (not to mention a full-blown account) of what this is supposed to be. If we grant that there is no more to human beings that mortal flesh, then there is no currently available basis in any empirical science, any branch of mathematics, or I suspect (but am not sure) any branch of philosophy for such a concept. All we can say is that, in virtue of the architecture of human beings, certain causal connections exist (or tend to exist, to be precise, for there are always abnormal cases such as perhaps autism or worse) between certain states of an environment and certain states (as well as certain external actions) of human beings in that environment. There is no magic in this, no soul, and nothing that distinguishes human beings crucially from other living beings or from machines. Perhaps that is wrong but it is enough to speculate about something like "content" or "intentionality". One has to try to make sense of it, and I know of no such attempt that does not either (a) work equally well for robots as it does for human beings (e.g., Harnad's proposals about "grounding") or (b) fail to show how the factors they are talking about might be relevant (e.g., Searle's suggestion that it might be crucial that human beings are biological in nature. 
The answer surely is that this might be crucial, but that Searle has failed to show not only that it is but even how it might be). I would argue that in order to understand how human beings function, we need to take the environment into account but the same applies to all animals and to many non-biological systems, including robots. So, while grounding in some sense may be necessary, it is not sufficient to explain anything about the uniqueness of human mentality and behavior (e.g., natural language). (2) Given what we know from the theory of computation, even though grounding is necessary, it does not follow that there is any useful theoretical difference between a program simulating the interaction of a being with an environment and a robot interacting with a real environment. In practice, of course, the task of simulating a realistic environment may be so complex that it would make more sense in certain cases to build the robot and let it run wild than to attempt such a simulation, but in other cases the cost and complexity of building the robot as opposed to writing the program are such that it is more reasonable to do it the other way around. In real research, both strategies must be used, and it should be obvious that the same reasoning shows that, as far as we know, there is no theoretical difference between a human being in a real environment and a human being (or a piece of one, such as the proverbial brain in the vat) in a suitably simulated environment, but that there may be tremendous practical advantages to one or the other approach depending on the particular problem we are studying. But, again, "grounding" does not allow us to differentiate biological from nonbiological or human from nonhuman. (3) In light of the above, it seems to me that, while the classical tools provided by the theory of computation may not be enough, they are the best that we have got in the way of tools for making sense of the issues. (4) There is some confusion between Turing machines, which possess infinite memory, and physically realizable machines, which do not. This makes a lot of difference in one way because the real lesson of the theory of computation is not that, if human beings are algorithmic beings, then they are equivalent to Turing machines, but rather that they would be equivalent to finite-state machines. The same applies to physically realized computers and any physically realized connectionist hardware that anyone might care to assemble. From jti at AI.MIT.EDU Fri Jan 26 11:23:00 1990 From: jti at AI.MIT.EDU (Jeff Inman) Date: Fri, 26 Jan 90 11:23 EST Subject: handwaving and "content" In-Reply-To: <9001260836.AA06806@life.ai.mit.edu> Message-ID: <19900126162316.2.JTI@WORKER-3.AI.MIT.EDU> Date: Thu, 25 Jan 90 20:36:20 EST From: AMR at ibm.com (1) George Lakoff's comment about the irrelevance of this issue in light of the fact that it does not address the question of "content" esp. of natural language concepts bothers me because all the talk about "content" (alias "intentionality", I guess) is so much hand- waving in the absence of any hint (not to mention a full-blown account) of what this is supposed to be. If we grant that there is no more to human beings that mortal flesh, then there is no currently available basis in any empirical science, any branch of mathematics, or I suspect (but am not sure) any branch of philosophy for such a concept.
All we can say is that, in virtue of the architecture of human beings, certain causal connections exist (or tend to exist, to be precise, for there are always abnormal cases such as perhaps autism or worse) between certain states of an environment and certain states (as well as certain external actions) of human beings in that environment. There is no magic in this, no soul, and nothing that distinguishes human beings crucially from other living beings or from machines. Please pardon my philosophical intrusion in this technical forum, but I must respond to your statement. I think you have touched on a critical issue that underlies much of AI, cognitive science, etc. It is good that we examine this issue occassionally, because we may eventually have to face fundamentalist picketers, machines that don't "want" to be powered off, or machines that produce wonderful music, art, science, etc. I appreciate this recap, as it provides focus for the discussion. That the issue is popular can be seen by the fact that it appears in the form of a pair of "dueling" articles, in this month's Scientific American. For my money, however, the crucial idea appears in a profile of Claude Shannon [pg 22], where he says "we're machines, and we think, don't we?". It is uneccessary to devalue the human experience, as you do (above), by attempting to *reduce* it through its contigency in physicality. If humans are machines (as we agree they are), then that indicates that physicality is more complicated than a lot of science has acknowledged, rather than indicating that experience is really "nothing", or that biology is really based in lifeless material. The latter point seems equally to be "handwaving" to me. I agree that "there is nothing that distinguishes human beings .. from other living beings or from machines", or even from clouds of hydrogen atoms. A little more complexity perhaps, or perhaps not, but nothing significant on the grand scale. As you suggest elsewhere, these systems are all embedded in a certain specific universe, and I take that to be extremely important. I also agree that there are causal connections at the root of all phenomena (though the important ones are systemic and difficult or impossible to isolate). Somehow, plain ordinary matter is capable of "experiencing", or whatever it is that we do. As scientists, let's take it from there. Please note, I am not suggesting at all that there are some concepts that shouldn't be examined. By all means, let's investigate this fascinating field. I'd just like to see us remain open to discovering things that we don't already know. Thanks, Jeff From gary%cs at ucsd.edu Fri Jan 26 15:30:54 1990 From: gary%cs at ucsd.edu (Gary Cottrell) Date: Fri, 26 Jan 90 12:30:54 PST Subject: reference Message-ID: <9001262030.AA09026@desi.UCSD.EDU> A number of people have asked me where the work was of Jeff Elman's that I was referring to. Here it is: Representation and Structure in Connectionist models CRL TR 8903, available from Center for Research in Language C-008 UCSD La Jolla, Ca 92093 The grammar is: S -> NP VP "." NP -> PropN | N | N RC VP -> V (NP) RC -> who NP VP | who VP (NP) N -> boy, girl cat dog boys girls cats dogs PropN -> John, Mary V -> chase, chases, 9 others Also number agreement is enforced and there are transitivity requirements. The success criterion is that the network successfully predict the possible classes of the next word. 
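To make the shape of such a training corpus concrete, here is a small sketch of a generator for a grammar of roughly this form. It is not Elman's actual grammar: the transitivity requirements are dropped, most of the verb vocabulary here is invented, and the branching probabilities and the embedding-depth limit are assumptions added so the sketch stays small. It does enforce the number agreement discussed just below, including agreement between a head noun and its verb across an intervening relative clause.

import random

random.seed(1)

NOUNS  = {"sg": ["boy", "girl", "cat", "dog"], "pl": ["boys", "girls", "cats", "dogs"]}
VERBS  = {"sg": ["chases", "sees", "hears"],   "pl": ["chase", "see", "hear"]}
PROPER = ["John", "Mary"]

def np_(depth):
    # NP -> PropN | N | N RC ; returns (words, number of the head noun)
    r = random.random()
    if r < 0.3:
        return [random.choice(PROPER)], "sg"
    num = random.choice(["sg", "pl"])
    head = [random.choice(NOUNS[num])]
    if r < 0.6 or depth >= 2:                 # limit embedding depth
        return head, num
    return head + rc(num, depth + 1), num

def vp(num, depth):
    # VP -> V (NP) ; the verb agrees with its subject's number
    words = [random.choice(VERBS[num])]
    if random.random() < 0.5:
        obj, _ = np_(depth)
        words += obj
    return words

def rc(head_num, depth):
    # RC -> who NP V   (object relative: the head noun is the object of V)
    #     | who VP     (subject relative: the head noun is the subject of VP)
    if random.random() < 0.5:
        subj, subj_num = np_(depth)
        return ["who"] + subj + [random.choice(VERBS[subj_num])]
    return ["who"] + vp(head_num, depth)

def sentence():
    subj, num = np_(0)
    return " ".join(subj + vp(num, 0)) + " ."

for _ in range(5):
    print(sentence())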
Thus it has to remember number agreement between the subj and the verb across embedded clauses, and only predict verbs of the proper number. gary From cfields at NMSU.Edu Fri Jan 26 16:02:10 1990 From: cfields at NMSU.Edu (cfields@NMSU.Edu) Date: Fri, 26 Jan 90 14:02:10 MST Subject: No subject Message-ID: <9001262102.AA26830@NMSU.Edu> _________________________________________________________________________ The following are abstracts of papers appearing in the fourth issue of the Journal of Experimental and Theoretical Artificial Intelligence, which appeared in November, 1989. The next issue, 2(1), will be published in March, 1990. For submission information, please contact either of the editors: Eric Dietrich Chris Fields PACSS - Department of Philosophy Box 30001/3CRL SUNY Binghamton New Mexico State University Binghamton, NY 13901 Las Cruces, NM 88003-0001 dietrich at bingvaxu.cc.binghamton.edu cfields at nmsu.edu JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia _________________________________________________________________________ Problem solving architecture at the knowledge level. Jon Sticklen, AI/KBS Group, CPS Department, Michigan State University, East Lansine, MI 48824, USA The concept of an identifiable "knowledge level" has proven to be important by shifting emphasis from purely representational issues to implementation-free decsriptions of problem solving. The knowledge level proposal enables retrospective analysis of existing problem-solving agents, but sheds little light on how theories of problem solving can make predictive statements while remaining aloof from implementation details. In this report, we discuss the knowledge level architecture, a proposal which extends the concepts of Newell and which enables verifiable prediction. The only prerequisite for application of our approach is that a problem solving agent must be decomposable to the cooperative actions of a number of more primitive subagents. Implications for our work are in two areas. First, at the practical level, our framework provides a means for guiding the development of AI systems which embody previously-understood problem-solving methods. Second, at the foundations of AI level, our results provide a focal point about which a number of pivotal ideas of AI are merged to yield a new perspective on knowledge-based problem solving. We conclude with a discussion of how our proposal relates to other threads of current research. With commentaries by: William Clancy: "Commentary on Jon Stcklen's 'Problem solving architecture at the knowledge level'". James Hendler: "Below the knowledge level architecture". Brian Slator: "Decomposing meat: A commentary on Sticklen's 'Problem solving architecture at the knowledge level'". and Sticklen's response. __________________________________________________________________________ Natural language analysis by stochastic optimization: A progress report on Project APRIL Geoffrey Sampson, Robin Haigh, and Eric Atwell, Centre for Computer Analysis of Language and Speech, Department of Linguistics & Phonetics, University of Leeds, Leeds LS2 9JT, UK. Parsing techniques based on rules defining grammaticality are difficult to use with authentic natural-language inputs, which are often grammatically messy. Instead, the APRIL systems seeks a labelled tree structure which maximizes a numerical measure of conformity to statistical norms derived from a sample of parsed text. 
No distinction between legal and illegal trees arises: any labelled tree has a value. Because the search space is large and has an irregular geometry, APRIL seeks the best tree using simulated annealing, a stochastic optmization technique. Beginning with an arbitrary tree, many randomly-generated local modifications are considered and adopted or rejected according to their effect on tree-value: acceptance decisions are made probabilistically, subject to a bias against adverse moves which is very weak at the outset but is made to increase as the random walk through the search space continues. This enables the system to converge on the global optimum without getting trapped in local optima. Performance of an early verson of the APRIL system on authentic inputs had been yielding analyses with a mean accuracy of 75%, using a schedule which increases processing linearly with sentence length; modifications currently being implemented should eliminate many of the remaining errors. _________________________________________________________________________ On designing a visual system (Towards a Gibsonian computational model of vision) Aaron Sloman, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QN, UK This paper contrasts the standard (in AI) "modular" theory of the nature of vision with a more general theory of vision as involving multiple functions and multiple relationships with other subsystems of an intelligent system. The modular theory (e.g. as expounded by Marr) treats vision as entirely, and permanently, concerned with the production of a limited range of descriptions of visual surfaces, for a central database; while the "labyrithine" design allows any output that a visual system can be trained to associate reliably with features of an optic array and allows forms of learning that set up new communication channels. The labyrithine theory turns out to have much in common with J. J. Gibson's theory of affordances, while not eschewing information processing as he did. It also seems to fit better than the modular theory with neurophysiological evidence of rich interconnectivity within and between subsystems in the brain. Some of the trade-offs between different designs are discussed in order to provide a unifying framework for future empirical investigations and engineering design studies. However, the paper is more about requirements than detailed designs. ________________________________________________________________________ From watrous at ai.toronto.edu Fri Jan 26 16:17:57 1990 From: watrous at ai.toronto.edu (Raymond Watrous) Date: Fri, 26 Jan 90 16:17:57 EST Subject: Request for Data Message-ID: <90Jan26.161758est.11301@ephemeral.ai.toronto.edu> The classic data of Peterson/Barney (1952), consisting of formant and pitch values for 10 vowels in the [hVd] context for 76 speakers, has been used in several comparisons of classifiers, connectionist and otherwise (eg. Huang/Lippmann, Moody, Bridle, Nowlan). The data used in these comparisons was a subset of the original data digitized by Huang/Lippmann from the original paper, from which the speaker identity was not recoverable. I have received a version of the original data courtesy of Ann Syrdal (AT&T Bell Labs) which is organized by speaker and vowel. However, this data is also incomplete in that 1 speaker is missing, as are 6 tokens of the [o] vowel. Can anyone supply the complete, original data set? 
(I would be happy to supplement what I have from printed listings, or punched cards, if the data is not on line.) Thanks. Ray Watrous From AMR at ibm.com Fri Jan 26 16:37:49 1990 From: AMR at ibm.com (AMR@ibm.com) Date: Fri, 26 Jan 90 16:37:49 EST Subject: handwaving and "content" Message-ID: My point is not that there is nothing that distinguishes people from, say, machines of the usual sort. In fact, lots of things do. Rather, I contend that the sort of vague statements that people make about content or semantics or intentionality or the less vague ones about grounding do not help make that distinction. Nor do any simplistic claims about biology vs. non-biology. So, my point is to try to find where the difference does lie, and to show that some initially appealing proposals for this are either wrong (in the case of grounding) or too vague to be either right or wrong (in some of the other cases). The only hope that I see for making the distinction is in terms of some precise notions that either already exist or more likely (as my slogan suggests) remain to be developed within a formal discipline such as the theory of computation. From AMR at ibm.com Fri Jan 26 16:43:51 1990 From: AMR at ibm.com (AMR@ibm.com) Date: Fri, 26 Jan 90 16:43:51 EST Subject: Turing machines = connectionist models Message-ID: My point was not to claim that biological systems are indistinguishable from robots. Rather, it is that a variety of recent proposals about how they are distinguished do not make the grade. In particular, grounding as I understand Harnad's proposal is not even intended to make the distinction. I think he contends that robots are just as grounded as people are, but that disembodied (non-robotic) programs are not. I think the distinction between robots and programs is very great in practice but not at all in principle (as I tried to elucidate) and I should make it clear if I have not yet that I don't think that existing or currently imaginable robots are any closer to being human-like than are existing or currently imaginable programs. So, for me, the crucial question is precisely what makes people different from baboons, or robots, or programs. And grounding does not help us answer this. The question of how biological systems differ from nonbiological is equally interesting, but that has to do with the fundamental problem of WHAT IS LIFE not (or at least not obviously) with the fundamental problem of WHAT IS INTELLIGENCE. The claim that somehow people have minds but robots and programs do not BECAUSE people are biological in nature (a claim that Searle seems close to making) seems prima facie unlikely, since we do not attribute minds to cells or to amoebas, which are biological entities, and in any case the claim has not been either articulated precisely or defended even in outline. Additional proposals have been made, having to do with intentionality or semantics, for example, but these again do not seem to help. They are either vague or they do not distinguish people from robots or perhaps do no more than recapitulate the problem without throwing any new light on it. I should finally say that when I urge the study of the theory of computation on people, I do not mean to disparage other branches of mathematics. If there are other branches of mathematics that are equally or more useful here, I would like to know more about them.
I would just say that (a) such results are ultimately going to have to be put together with what we already have in the theory of computation, to the undoubted benefit of the latter, (b) such results should, if they are to be helpful, make the distinctions I have been referring to (and I would like to see an example of this), (c) such results will I think be equally unconfortable for the handwavers as the classic results from the theory of computation precisely because they will tell us in precise and mathematical terms what kind of machine (different in some important way from other kinds of machine) people actually are, and (d) (referring back to (b) such results are unlikely, it seems to me, to show either that only a biological system could be intelligent or that only a net of some kind could. The question of whether connectionist models are equivalent (in various senses) to finite-state machines, then, is not intended to be the central question. Most of the questions are actually empirical, but the fact remains that we need a precise framework for carrying onn the empirical and theoretical work, and, in any case, we cannot hope for useful results from work which starts out by flouting the few truths that we do know for sure from the mathematics of the theory of computation. From elman at amos.ucsd.edu Fri Jan 26 23:37:00 1990 From: elman at amos.ucsd.edu (Jeff Elman) Date: Fri, 26 Jan 90 20:37:00 PST Subject: TR announcement: W. Levelt, Multilayer FF Nets & Turing machines Message-ID: <9001270437.AA04565@amos.ucsd.edu> I am forwarding the following Tech Report announcement on behalf of Pim Levelt, Max-Planck-Institute for Psycholinguistics. Note that requests for reprints should go to 'pim at hnympi51.bitnet' -- not to me! Jeff -------------------------------------------------------------------- Jeff Elman suggested I announce the existence of the following new paper: Willem J.M. Levelt, ARE MULTILAYER FEEDFORWARD NETWORKS EFFECTIVELY TURING MACHINES? (Paper for conference on "Domains of Mental Functioning: Attempts at a synthesis". Center for Interdisciplinary Research, Bielefeld, December 4-8, 1989. Proceedings to be published in Psychological Research, 1990). Abstract: Can connectionist networks implement any symbolic computation? That would be the case if networks have effective Turing machine power. It has been claimed that a recent mathematical result by Hornik, Stinchcombe and White on the generative power of multilayer feedforward networks has that implication. The present paper considers whether that claim is correct. It is shown that finite approximation measures, as used in Hornik et al.'s proof, are not adequate for capturing the infinite recursiveness of recursive functions. Therefore, the result is irrelevant to the issue at hand. Willem Levelt, Max Plank Institute for Psycholinguistics, Nijmegen, The Netherlands, e-mail: PIM at HNYMPI51 (on BITNET). (PS. Jeff Elman and Don Norman own copies of the paper) From harnad at Princeton.EDU Sat Jan 27 16:54:30 1990 From: harnad at Princeton.EDU (Stevan Harnad) Date: Sat, 27 Jan 90 16:54:30 EST Subject: robots and simulation Message-ID: <9001272154.AA01941@reason.Princeton.EDU> Alexis Manaster-Ramer AMR at ibm.com wrote: > I know of no... attempt [to make sense of "content" or > "intentionality"] that does not either (a) work equally well for robots > as it does for human beings (e.g., Harnad's proposals about > "grounding")... [G]rounding as I understand Harnad's proposal is > not... 
intended to [to distinguish biological systems > from robots]. I think he contends that robots are just as grounded as > people are, but that disembodied (non-robotic) programs are not. You're quite right that the grounding proposal (in "The Symbol Grounding Problem," Physica D 1990, in press) does not distinguish robots from biological systems -- because biological systems ARE robots of a special kind. That's why I've called this position "robotic functionalism" (in opposition to "symbolic functionalism"). But you leave out a crucial distinction that I DO make, over and over: that between ordinary, dumb robots, and those that have the capacity to pass the Total Turing Test [TTT] (i.e., perform and behave in the world for a lifetime indistinguishably from the way we do). Grounding is trivial without TTT-power. And the difference is like night and day. (And being a methodological epiphenomenalist, I think that's about as much as you can say about "content" or "intentionality.") > Given what we know from the theory of computation, even though > grounding is necessary, it does not follow that there is any useful > theoretical difference between a program simulating the interaction > of a being with an environment and a robot interacting with a real > environment. As explained quite explicitly in "Minds, Machines and Searle" (J. Exp. Theor. A.I. 1(1), 1989), there is indeed no "theoretical difference," in that all the INFORMATION is there in a simulation, but there is another difference (and this applies only to TTT-scale robots), one that doesn't seem to be given full justice by calling it merely a "practical" difference, namely, that simulated minds can no more think than simulated planes can fly or simulated fires can burn. And don't forget that TTT-scale simulations have to contend with the problem of encoding all the possible real-world contingencies a TTT-scale robot would be able to handle, and how; a lot to pack into a pure symbol cruncher... Stevan Harnad From ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU Sat Jan 27 20:45:08 1990 From: ST401843%BROWNVM.BITNET at VMA.CC.CMU.EDU (thanasis kehagias) Date: Sat, 27 Jan 90 20:45:08 EST Subject: probability learning mini bibliography Message-ID: at the risk of becoming boring ... a while ago i had asked pointers to probability learning by neural nets. there were very few replies to the query, which i summarize below. format is bibtex, as usual. (remark: one of the netters -cannot recall his name- remarked that mean field theory can be onterpreted as probability learning. prima facie this sounds right but i did not follow his lead. people interested in mf theory, i have included some references in my dynamic neural nets bibliography, available by FTP from this site.) usual disclaimer: this bibliography is far from complete and if somebody's work is not included, do not flame me , send me the refernce ... thanasis @article{kn:And83a, title ="Cognitive and Psychological Computation with Neural Models", author ="J.A. Anderson", journal ="IEEE Trans. on Systems, Man and Cybernetics", volume ="SMC-13", year ="1983" } @article{kn:And77a, title ="Distinctive Features, Categorical Perception and Learning:some Applications of a Neural Model ", author ="J.A. Anderson and others ", journal ="Psychological Review", year ="1977", volume ="84", page ="413-451", } @inproceedings{kn:Gol87a, title ="Probabilistic Characterization of Neural Model Computation", booktitle ="Neural Information Processing Systems", author ="R.M. Golden", editor ="J.S. 
Denker", year ="1987", organization ="American Institute for Physics" } @INCOLLECTION{KN:HIN86A, title ="Learning and Relearning in Boltzmann Machines", booktitle ="Parallel Distributed Processing", author ="G. Hinton and T. Sejnowski", volume ="1", year ="1986", PUBLISHER ="MIT" } @inproceedings{kn:Pea86a, title ="G-Maximization:An Unsupervised Learning Procedure for discovering Regularities", booktitle ="Neural Networks for Computing", author ="B. Pearlmutter and G. Hinton", editor ="J.S. Denker", year ="1986", pages ="333-338", organization ="American Institute for Physics" } @techreport{kn:Lan89a, author ="A. Lansner and O. Ekeberg", title ="A One-Layer Feedback Artificial Neural Network with a Bayesian Leraning Rule", number ="TRITA-NA-P8910", institution ="Roy. Inst. of Technology, Stockholm", year ="1989" } @TECHREPORT{KN:SUN89A, AUTHOR ="R. SUN", TITLE ="The Discrete Neuronal Model and the Probabilistic Discrete Neuronal Model", NUMBER ="?", INSTITUTION ="Computer Sc. Dept., Brandeis Un.", year ="1989" } @incollection{kn:Smo86a, title ="Information Processing in Dynamical Systems", booktitle ="Parallel Distributed Processing", author ="P. Smolensky", volume ="1", year ="1986", PUBLISHER ="MIT" } @article{kn:Sol88a, author ="S. Solla", title ="Accelerated Learning Experiments in Layered Neural Networks", journal ="Complex Systems", year ="1988", volume ="2", } @techreport{kn:Sus88a, author ="H. Sussman", title ="On the Convergence of Learning Algorithms for Boltzmann Machines", number ="sycon-88-03", institution ="Rutgers Center for Systems and Control", year ="1988" } @inproceedings{kn:Sun89a, author= "R. Sun", title ="The Discrete Neuronal model and the Probabilistic Discrete Neuronal Model", booktitle ="Int. Neural Network Conf.", year ="1989" } @TECHREPORT{KN:WIL86A, author ="R. Williams", title ="Reinforcement learning in connectionist networks", number ="TR 8605", organization ="ICS, University of California, San Diego", year ="1986" } From Bill_McKellin at mtsg.ubc.ca Sun Jan 28 21:17:36 1990 From: Bill_McKellin at mtsg.ubc.ca (Bill_McKellin@mtsg.ubc.ca) Date: Sun, 28 Jan 90 18:17:36 PST Subject: No subject Message-ID: <2029417@mtsg.ubc.ca> SUB CONNECTIONISTS-REQUEST BILL MCKELLIN From AMR at IBM.COM Sun Jan 28 23:25:02 1990 From: AMR at IBM.COM (AMR@IBM.COM) Date: Sun, 28 Jan 90 23:25:02 EST Subject: robots and simulation Message-ID: (1) I am glad that SOME issues are getting clarified, to wit, I hope that everybody that has been confusing Harnad's arguments about grounding with other people's (perhaps Searle or Lakoff's) arguments about content, intentionality, and/or semantics will finally accept that there is a difference. (2) I omitted to refer to Harnad's distinction between ordinary robots, ones that fail the Total Turing Test, and and the theoretical ones that do pass the TTT, for two unrelated reasons. One was that I was not trying to present a complete accoun, merely, to raise certain issues, clarify certain points, and answer certain objections that had arisen. The other was that I do not agree with Harnad on this issue, and that for a number of reasons. First, I believe that a Searlean argument is still possible even for a robot that passes the TTT. Two, the TTT is much too strong since no one human being can pass it for another, and we would not be surprised I think to find an intelligent species of Martians or what have you that would, obviously, fail abysmally on the TTT but might pass a suitable version of the ordinary Turing Test. 
Third, the TTT is still a criterion of equivalence that is based exclusively on I/O, and I keep pointing out that that is not the right basis for judging whether two systems are equivalent (I won't belabor this last point, because that is the main thing that I have to say that is new, and I would hope to address it in detail in the near future, assuming there is interest in it out there.) (3) Likewise, I omitted to refer to Harnad's position on simulation because (a) I thought I could get away with it and (b) because I do not agree with that one either. The reason I disagree is that I regard simulation of a system X by a system Y as a situation in which system Y is VIEWED by an investigator as sufficiently like X with respect to a certain (usually very specific and limited) characteristic to be a useful model of X. In other words, the simulation is something which in no sense does what the original thing does. However, a hypothetical program (like the one presupposed by Searle in his Chinese room argument) that uses Chinese like a native speaker to engage in a conversation that its interlocutor finds meaningful and satisfying would be doing more than simulating the linguistic and conversational abilities of a human Chinese speaker; it would actually be duplicating these. In addition--and perhaps this is even more important--the use of the term simulation with respect to an observable, external behavior (I/O behavior again) is one thing, its use with reference to nonobservable stuff like thought, feeling, or intelligence is quite another. Thus, we know what it would mean to duplicate (i.e., simulate to perfection) the use of a human language; we do not know what it would mean to duplicate (or even simulate partially) something that is not observable like thought or intelligence or feeling. That in fact is precisely the open question. And, again, it seems to me that the relevant issue here is what notion of equivalence we employ. In a nutshell, the point is that everybody (incl. Harnad) seems to be operating with notions of equivalence that are based on I/O behavior even though everybody would, I hope, agree that the phenomenon we call intelligence (likewise thought, feeling, consciousness) are NOT definable in I/O terms. That is, I am assuming here that "everybody" has accepted the implications of Searle's argument at least to the extent that IF A PROGRAM BEHAVES LIKE A HUMAN BEING, IT NEED NOT FOLLOW THAT IT THINKS, FEELS, ETC., LIKE ONE. Searle, of course, goes further (without I think any justification) to contend that IF A A PROGRAM BEHAVES LIKE A HUMAN BEING, IT IS NOT POSSIBLE THAT IT THINKS, FEELS, ETC., LIKE ONE. The question that no one has been able to answer though is, if the two behave the same, in what sense are they not equivalent, and that, of course, is where we need to insist that we are no longer talking about I/O equivalence. This is, of course, where Turing (working in the heyday of behaviorism) made his mistake in proposing the Turing Test. From J.Kingdon at Cs.Ucl.AC.UK Sun Jan 28 12:45:04 1990 From: J.Kingdon at Cs.Ucl.AC.UK (J.Kingdon@Cs.Ucl.AC.UK) Date: Sun, 28 Jan 90 17:45:04 +0000 Subject: join list Message-ID: I am a postgraduate research student working on the mathematical theory of connectionist models. I wondered if it would be possible to have my name added to the mailing list of this group. Many thanks, Jason Kingdon. 
jason at uk.ac.ucl.cs From elman at amos.ucsd.edu Sat Jan 27 01:02:27 1990 From: elman at amos.ucsd.edu (Jeff Elman) Date: Fri, 26 Jan 90 22:02:27 PST Subject: 2nd call: Connectionist Models Summer School Message-ID: <9001270602.AA04988@amos.ucsd.edu> * Please post * January 26, 1990 2nd call ANNOUNCEMENT & SOLICITATION FOR APPLICATIONS CONNECTIONIST MODELS SUMMER SCHOOL / SUMMER 1990 UCSD La Jolla, California The next Connectionist Models Summer School will be held at the University of California, San Diego from June 19 to 29, 1990. This will be the third session in the series which was held at Carnegie Mellon in the summers of 1986 and 1988. Previous summer schools have been extremely successful, and we look forward to the 1990 session with anticipation of another exciting summer school. The summer school will offer courses in a variety of areas of connectionist modelling, with emphasis on computational neuroscience, cognitive models, and hardware implementation. A variety of leaders in the field will serve as Visiting Faculty (the list of invited faculty appears below). In addition to daily lectures, there will be a series of shorter tutorials and public colloquia. Proceedings of the summer school will be published the following fall by Morgan-Kaufmann (previous proceedings appeared as 'Proceedings of the 1988 Connectionist Models Summer School', Ed., David Touretzky, Morgan-Kaufmann). As in the past, participation will be limited to graduate students enrolled in PhD. programs (full- or part-time). Admission will be on a competitive basis. Tuition is subsidized for all students and scholarships are available to cover housing costs ($250). Applications should include the following: (1) A statement of purpose, explaining major areas of interest and prior background in connectionist modeling (if any). (2) A description of a problem area you are interested in modeling. (3) A list of relevant coursework, with instructors' names and grades. (4) Names of the three individuals whom you will be asking for letters of recommendation (see below). (5) If you are requesting support for housing, please include a statement explaining the basis for need. Please also arrange to have letters of recommendation sent directly from three individuals who know your current work. Applications should be sent to: Marilee Bateman Institute for Neural Computation, B-047 University of California, San Diego La Jolla, CA 92093 (619) 534-7880 / marilee at sdbio2.ucsd.edu All application material must be received by March 15, 1990. Decisions about acceptance and scholarship awards will be announced April 1. If you have further questions, contact Marilee Bateman (address above), or one of the members of the Organizing Committee.
Jeff Elman Terry Sejnowski UCSD UCSD/Salk Institute elman at amos.ucsd.edu terry at sdbio2.ucsd.edu Geoff Hinton Dave Touretzky Toronto CMU hinton at ai.toronto.edu touretzky at cs.cmu.edu ------------ INVITED FACULTY: Yaser Abu-Mostafa (CalTech) Richard Lippmann (MIT Lincoln Labs) Dana Ballard (Rochester) Shawn Lockery (Salk) Andy Barto (UMass/Amherst) Jay McClelland (CMU) Rik Belew (UCSD) Carver Mead (CalTech) Gail Carpenter (BU) David Rumelhart (Stanford) Patricia Churchland (UCSD) Terry Sejnowski (UCSD/Salk) Gary Cottrell (UCSD) Marty Sereno (UCSD) Jack Cowan (Chicago) Al Selverston (UCSD) Richard Durbin (Stanford) Marty Sereno (UCSD) Jeff Elman (UCSD) Paul Smolensky (Colorado) Jerry Feldman (ICSI/UCB) David Tank (Bell Labs) Geoffrey Hinton (Toronto) David Touretzky (Carnegie Mellon) Michael Jordan (MIT) Halbert White (UCSD) Teuvo Kohonen (Helsinki) Ron Williams (Northeastern) George Lakoff (UCB) David Zipser (UCSD) From harnad at Princeton.EDU Mon Jan 29 09:25:52 1990 From: harnad at Princeton.EDU (Stevan Harnad) Date: Mon, 29 Jan 90 09:25:52 EST Subject: Redirecting flow Message-ID: <9001291425.AA06811@cognito.Princeton.EDU> I will not reply to Alexis Manaster-Ramer's (amr at ibm.com) posting about grounding on connectionists because I do not think the discussion belongs here. His original query about whether nets were equivalent to Turing Machines was appropriate for this list, but now the discussion has gone too far afield. I will be replying to him on the symbol grounding list, which I maintain. If anyone wants to follow the discussion, write me and I'll add your name to the list. --- Stevan Harnad From rr%cstr.edinburgh.ac.uk at nsfnet-relay.ac.uk Sun Jan 28 08:19:09 1990 From: rr%cstr.edinburgh.ac.uk at nsfnet-relay.ac.uk (Richard Rohwer) Date: Sun, 28 Jan 90 13:19:09 GMT Subject: Apologizing in advance... Message-ID: <17869.9001281319@cstr.ed.ac.uk> A CONNECTIONIST THEORY OF ONTOLOGY Richard Rohwer 3 May 1989 OK, all you philosophical bullshitters! Here's the definitive theory on life, the universe, and connectionism. Let's start by trashing the silly idea that computer simulations of thunderstorms just aren't wet enough. Well allright, they're not wet enough on the Met's wimpy CRAYs. But those CRAYs are only simulating part of the thunderstorm experience. They cut up the storm into a gridwork of little fluid elements, each of which is just a bunch of numbers. As the simulated thunderstorm rages along, the numbers in the fluid elements change this way and that according to some approximated physical laws. In order to save on computational expense, only part of the structure of a thunderstorm is simulated-- the structure which exists on scales large compared to the grid size. Of course that simulation isn't wet. My claim is that a more complete simulation is necessary. You will probably object that even if I simulate every subatomic particle in every detail, then it's still just a bunch of numbers and it doesn't feel wet. That's because you probably think I mean to leave the observer out of the simulation. Well I don't. Of course the simulation doesn't seem wet to the programmer looking at all the reams of numbers. The interaction between the storm and the observer is quite important in the business of making the observer wet, so it's no good leaving that out of the simulation. The programmer cannot be made wet by the simulated thunderstorm any more readily than an observer watching an Amazonian thunderstorm through a telescope in the Martian desert. 
But that doesn't mean the jungle's inhabitants keep dry. So we simulate, atom-by-atom, an observer standing in the simulated thunderstorm. Perhaps now you object that without actually being the simulated observer, we cannot be sure that the simulated observer is the least bit conscious and feels anything. In that case, you are probably one of those silly solipsists, in which case I write you off as a hopeless case. True it is, I must assume at this juncture that given enough machinery of brain, mind will be present. And I assume, furthermore, that the formal connectionist structure of the brain's machinery is all that is required to support a mind; that it's the program running on the pattern of connections that does the trick, and it really doesn't matter what the machinery is made of. In particular, it really doesn't matter that the machinery happens to be quite a lot of silicon chips constituting a really groovy computer. So there you have it. Given just a few innocent assumptions (which can be solidly proven using a few pints of real ale) we have established that, when set up properly, simulated thunderstorms really are wet. But we can push this a little further. What's the purpose of this super-duper computer that runs the simulation? It's just there to provide an instantiation of a formal mathematical system in which the simulation is expressed. The simulation doesn't really need a physical device on which to be simulated; it only needs the formal mathematical system. And that's cheap. It exists as a mathematical tautology, just like any formal system. So we can throw away the physical computer and still keep the thunderstorm together with all its wetness. Physics can be built out of mathematics using this connectionist theory of ontology: To get the machinery required to support a mind, it's enough for a connectionist mathematical formalism (augmented with a physics formalism) to be a theoretical possibility. Have another pint. Now consider two simulations of the same thunderstorm-observer system running on different machines. Are these different systems? Certainly not. The machines serve as a representation/communication medium between the world of the simulated thunderstorm and the world of our own subjective experience, much as coordinates serve to represent vectors. The simulated thunderstorm exists for the simulated observer regardless of whether we bother to simulate it, rather like a vector exists regardless of whether we give it coordinates. But we cannot learn much about the simulated system without using a simulator, much as we cannot write down a vector without using coordinates. And the simulated system is the same system regardless of which, or how many simulators we use to "view" it, much as a vector is independent of the coordinate system or systems used to represent it. So any simulatable world with simulatable observers exists physically for those observers just as the world for which we are observers exists physically for us. So there must be quite a lot of physical worlds out there, including many which are quite like our own, except for some fine, distinguishing details. So how similar do these other worlds have to be to our own in order for us to be able to physically observe them, just as we physically observe our own world? Surely not every minute detail is important. In some sense, we must be partially aware of worlds which are very similar to ours; minds which are very similar to ours. 
But then that makes our world more like an ensemble of similar worlds of which we are partially aware. For my money, this is what quantum interference is all about. It behooves me to derive the quantitative quantum formalism from this point of view but alas, (due to a looming hangover) I can only support the idea with some qualitative points: a) Quantum theory is at pains not to ascribe physical reality to anything which has not been carefully observed, and b) In its purest form (ie., without collapse of the wavefunction), quantum measurement theory is a many-worlds theory. Richard Rohwer JANET: rr at uk.ac.ed.cstr Centre for Speech Technology Research ARPA: rr%ed.cstr at nsfnet-relay.ac.uk Edinburgh University BITNET: rr at cstr.ed.ac.uk, 80, South Bridge rr%cstr.ed.UKACRL Edinburgh EH1 1HN, Scotland UUCP: ...!{seismo,decvax,ihnp4} !mcvax!ukc!cstr!rr From AMR at ibm.com Mon Jan 29 12:03:06 1990 From: AMR at ibm.com (AMR@ibm.com) Date: Mon, 29 Jan 90 12:03:06 EST Subject: No subject Message-ID: I just realized that I am only getting stuff if it is forwarded by a kind soul. Could someone put me on the list? From pollack at cis.ohio-state.edu Tue Jan 30 01:12:37 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Tue, 30 Jan 90 01:12:37 EST Subject: Are thoughts REALLY Random? In-Reply-To: Richard Rohwer's message of Sun, 28 Jan 90 13:19:09 GMT <17869.9001281319@cstr.ed.ac.uk> Message-ID: <9001300612.AA00163@toto.cis.ohio-state.edu> **Don't Forward this one** Richard just reminded me that I recently read Penrose's unfortunately named book, making a nice Dancing Woolly (thats Wu Li) appear like Surly Lion (thats Searlie). Which, by the way, reminds me of a topic to bring up for discussion, now that Turing has finally been grounded, at least symbolically. You see, Penrose went on and on about his new-age religious experience of accessing the Platonic Universe of Absolutely True Mathematical Ideas (He obviously didn't finish Jaynes), and how complex and beautiful mathematical structures, such as Mandelbrot's set, complex numbers, and his own tilings, are in "there" to be discovered, rather than invented, and would continue to exist whether we found them or not. This universe seems like a pretty big space to me; if only I could dip into it to create publications! So, consider when a computation (or a mathematician) transfers a collection of "information" from the infinite (but ghostly) Platonic Universe into our own finite (but corporeal) universe, by generating a fractal picture or writing down a brand-new theorem. How can this bunch of bits be quantified? Apologizing for the brutal reduction of other's lifework, I know about counting binary distinctions (Shannon), I know about counting instructions (Kolmogorov), I know that only a truly random string is really complex (Chaitin), I know that machines can be ordered lexicographically (Godel), and I even know that the computable languages form a strict hierarchy (Chomsky). However, since programs of the same size, when executed, can yield structures with vastly different APPARENT randomness and bit-counts, some useful measure of the "Platonic density" seems to be missing from the complexity menu. I find it difficult to believe that the Mandelbrot set is only as complex as the program to "access" it! Can somebody please help me make sense out of this? 
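To see the gap being pointed at, here is one such "access program" written out in full: a standard escape-time rendering of the Mandelbrot set (the grid size, iteration cap, and character ramp are arbitrary choices). The source is about a dozen lines, while the output already shows the familiar, apparently inexhaustible boundary structure; in Kolmogorov terms the picture is nevertheless no more complex than this program plus its few constants, which is exactly the tension raised above.

# Escape-time rendering of the Mandelbrot set: a few lines of source,
# an output of apparently unbounded intricacy.
WIDTH, HEIGHT, MAX_ITER = 78, 30, 60

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        c = complex(-2.2 + 3.0 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
        z, n = 0j, 0
        while abs(z) <= 2.0 and n < MAX_ITER:
            z = z * z + c
            n += 1
        line += "#" if n == MAX_ITER else " .:-=+*%@"[n % 9]
    print(line)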
Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 From drosen at psych.Stanford.EDU Tue Jan 30 02:32:08 1990 From: drosen at psych.Stanford.EDU (Daniel Rosen) Date: Mon, 29 Jan 90 23:32:08 PST Subject: join list Message-ID: Could you please add my name to the connectionist mailing list. Thanks a lot. -Daniel Rosen drosen at psych.stanford.edu From Scott.Fahlman at B.GP.CS.CMU.EDU Tue Jan 30 06:13:48 1990 From: Scott.Fahlman at B.GP.CS.CMU.EDU (Scott.Fahlman@B.GP.CS.CMU.EDU) Date: Tue, 30 Jan 90 06:13:48 EST Subject: Are thoughts REALLY Random? In-Reply-To: Your message of Tue, 30 Jan 90 01:12:37 -0500. <9001300612.AA00163@toto.cis.ohio-state.edu> Message-ID: ** I guess this shouldn't be forwarded either, since Jordan doesn't want ** ** the initial message to be. ** However, since programs of the same size, when executed, can yield structures with vastly different APPARENT randomness and bit-counts, some useful measure of the "Platonic density" seems to be missing from the complexity menu. I find it difficult to believe that the Mandelbrot set is only as complex as the program to "access" it! Can somebody please help me make sense out of this? As a medium in which useful information is to be "mined" from an apparently random heap of bits, do you see any fundamental difference between the Mandelbrot set and the proverbial infinite number of monkeys with typewriters? Offhand, I don't see any fundamental difference, as long as we are sure that the Mandelbrot set does not sytematically exclude any particular set of bit patterns as it folds itself into a pattern of infinite variability. Both are very low-density forms of "information ore", and probably not worth mining for that reason. Sure, anything you might want is "in there" in some useless sense, but at these low densities the work of rejecting the nonsense is certainly greater than the work of creating the patterns you want in some more direct way. (-: Some have claimed the same thing for the typical collection of papers currently being written in this field. :-) Maybe the right measure of Platonic density is something like the expected length of the address (M bits) that you would need to point to a specific N-bit pattern that you want to locate somewhere in this infinite heap of not-exactly-random bits. If M >= N on the average, then the structure is not of any particular use as a generator. You're better off storing the N-bit patterns directly than storing the M-bit adrress along with a Madelbrot chip. And if you want to systematically search a space, to make sure that you visit all possibilities eventually, you're better off searching the N-bit space of bit-patterns directly than wandering through the Mandelbroth. Even if you exhaustively search an M-bit subspace of the Mandelbrot set, you have no guarantee that your pattern is in there. Any reason to believe that M < N for Mandelbrot sets or monkey type? I've seen no compelling arguments that this should be so. If M >= N for all these sets, then worrying about which has greater or less density seems foolish -- they're all useless. -- Scott From smk at flash.bellcore.com Tue Jan 30 09:17:50 1990 From: smk at flash.bellcore.com (Selma M Kaufman) Date: Tue, 30 Jan 90 09:17:50 EST Subject: No subject Message-ID: <9001301417.AA23985@flash.bellcore.com> Subject: TR Available - Connectionist Language Users Connectionist Language Users Robert B. 
From smk at flash.bellcore.com Tue Jan 30 09:17:50 1990
From: smk at flash.bellcore.com (Selma M Kaufman)
Date: Tue, 30 Jan 90 09:17:50 EST
Subject: No subject
Message-ID: <9001301417.AA23985@flash.bellcore.com>

Subject: TR Available - Connectionist Language Users

Connectionist Language Users

Robert B. Allen
Bellcore

The Connectionist Language Users (CLUES) paradigm employs neural learning algorithms to develop reactive intelligent agents. In much of the research reported here, these agents "use" language to answer questions and to interact with their environment. The model is applied to simple examples of generating verbal descriptions, answering questions, pronoun reference, labeling actions, and verbal interactions between agents. In addition, the agents are shown to be able to model other intelligent activities such as planning, grammars, and simple analogies, and an adaptive pedagogy is introduced. Overall, these networks provide a natural account of many aspects of language use.

_______

This report, which integrates 2 years of work and numerous shorter papers, takes the next step: 'grounded' recurrent networks are applied to a wide range of linguistic problems.

Request reports from: smk at bellcore.com

From pollack at cis.ohio-state.edu Tue Jan 30 12:59:32 1990
From: pollack at cis.ohio-state.edu (Jordan B Pollack)
Date: Tue, 30 Jan 90 12:59:32 EST
Subject: Are thoughts REALLY Random?
In-Reply-To: Scott.Fahlman@B.GP.CS.CMU.EDU's message of Tue, 30 Jan 90 06:13:48 EST <9001301114.AA21049@cheops.cis.ohio-state.edu>
Message-ID: <9001301759.AA00428@toto.cis.ohio-state.edu>

(Background: Scott is commenting less on the question than on the unspoken subtext, which is my idea of building a very large reconstructive memory based on quasi-inverting something like the Mandelbrot set. Given a figure, find a pointer; then only store the pointer and a simple reconstruction function. This has been mentioned twice in print, in a survey article in AI Review and in NIPS 1988. Yesterday's note was certainly related, but I wanted to ignore the search question right now!)

>> Do you see any fundamental difference between the
>> Mandelbrot set and the proverbial infinite number of monkeys with
>> typewriters?

I think that the difference is that the initial-conditions/reduced descriptions/pointers to the Mandelbrot set can be precisely stored by physical computers. This leads to a replicability of "access" not available to the monkeys.

>> Maybe the right measure of Platonic density is something like the expected
>> length of the address (M bits) that you would need to point to a specific
>> N-bit pattern that you want to locate somewhere in this infinite heap of
>> not-exactly-random bits.

Thanks! Not bad for a starting point! The Platonic Complexity (ratio of N/M) would decrease to 0 at the GIGO limit, and increase to infinity if it took effectively 0 bits to access arbitrary information. This is very satisfying.

>> Why shouldn't M be much greater than N?

Normally, we computer types live with a density of 1, as we convert symbolic information into bit-packed data structures. Thus we already have lots of systems with PC=1! Also, I can point to systems with PC<1 (bank teller machines) and with PC>1 (PostScript).

Jordan Pollack                        Assistant Professor
CIS Dept/OSU                          Laboratory for AI Research
2036 Neil Ave                         Email: pollack at cis.ohio-state.edu
Columbus, OH 43210                    Fax/Phone: (614) 292-4890
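Pollack's PC = N/M ratio is easy to make concrete for the PC > 1 case he mentions. The toy calculation below compares a short PostScript-style description of a filled circle (M bits of text) against the one-bit-per-pixel raster it stands for (N bits). The particular drawing command, grid size, and radius are invented for this example and are not taken from anything above.

```python
# Toy illustration of "Platonic Complexity" PC = N/M: a short procedural
# description versus the bitmap it names. Sizes here are arbitrary choices.

description = "newpath 128 128 90 0 360 arc fill"    # the stored program text
M = len(description.encode("ascii")) * 8              # description length in bits

SIZE = 256
# The bitmap the description stands for: a filled circle of radius 90
# rasterized onto a SIZE x SIZE grid, one bit per pixel.
bitmap = [
    [1 if (x - 128) ** 2 + (y - 128) ** 2 <= 90 ** 2 else 0 for x in range(SIZE)]
    for y in range(SIZE)
]
N = SIZE * SIZE                                        # image size in bits

print(f"{sum(map(sum, bitmap))} pixels set in the rendered image")
print(f"M = {M} bits of description, N = {N} bits of image, PC = N/M ~ {N / M:.0f}")
# PC comes out in the hundreds: storing the pointer-plus-generator wins.
# A raw memory dump of the same image would instead sit at PC = 1.
```

The same arithmetic run on an encrypted or checksummed record, where the stored form is longer than the usable content, gives PC < 1, which is roughly the bank-teller-machine end of Pollack's scale.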
From kroger at cognet.ucla.edu Tue Jan 30 18:14:15 1990
From: kroger at cognet.ucla.edu (James Kroger)
Date: Tue, 30 Jan 90 15:14:15 PST
Subject: No subject
Message-ID: <9001302314.AA04216@tinman.cognet.ucla.edu>

Hi. May I receive a copy of "Connectionist Language Users"?

Jim Kroger
Dept. of Psychology
UCLA
Los Angeles, CA 90024

Thanks. --Jim K.

From tenorio at ee.ecn.purdue.edu Wed Jan 31 08:53:42 1990
From: tenorio at ee.ecn.purdue.edu (Manoel Fernando Tenorio)
Date: Wed, 31 Jan 90 08:53:42 EST
Subject: Are thoughts REALLY Random?
In-Reply-To: Your message of Tue, 30 Jan 90 12:59:32 EST. <9001301759.AA00428@toto.cis.ohio-state.edu>
Message-ID: <9001311353.AA24293@ee.ecn.purdue.edu>

--------

The ideas of minimum description length (MDL), Kolmogorov-Chaitin complexity, and other measures are certainly related in some way. But the fact that some nonlinear systems can display chaotic behavior, and through that generate infinite (or very large) output from a finite and small description, is at least puzzling.

One interesting and simple chaotic equation, when plotted in a polar system, displayed a behavior that looked like a wheel marked at a point, moving with a period which, when divided by the perimeter of the wheel, gave an irrational number. So the marked point, when the wheel was sampled periodically, was NEVER (at infinite precision) at the same place; never periodic. We are accustomed to ideas of semi-periodicity, such as a sine wave modulated by a music signal; but for truly non-repetitive behavior, the length of the music signal has to be the same as that of the modulated signal. So this chaotic business seems even more amazing.

I read a while ago in IEEE Computer about a group at a university around the DC area (reference, please), funded by DARPA, that was doing image compression using chaotic series at amazing ratios. Rissanen has shown the relationship among image compression, system identification, prediction, etc., using measures of complexity. It would be great if the same ideas from chaotic image compression could, in some form, be applied to our problems of memory, description, and computation.

Unfortunately, in my experience, a property of these systems seems to hinder the best of efforts. In trying to reduce a string to its generator, a simple change in the precision of parameters or initial conditions can send you in the wrong direction. It is a MANY-to-one mapping in the worst sense, making the inverse process almost hopeless.

It is interesting to see that we painted ourselves into a linear corner, and now, forced to think about a nonlinear universe, we are confronted with phenomena that our theories don't support. Good luck to the one who is trying to break this code.

--ft.

< Manoel Fernando Tenorio (tenorio at ee.ecn.purdue.edu) >
< MSEE233D                                             >
< School of Electrical Engineering                     >
< Purdue University                                    >
< W. Lafayette, IN, 47907                              >
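Tenorio's point about the fragility of inverting a string back to its generator can be illustrated with any chaotic map. The sketch below uses the logistic map, a stand-in chosen only for convenience and not the system he describes: two orbits whose initial conditions (their compressed descriptions) differ in the ninth decimal place become uncorrelated within a few dozen steps.

```python
# Sensitivity of a chaotic generator to its "description": two logistic-map
# orbits that start 1e-9 apart soon disagree completely, which is the
# fragility that makes the inverse problem (string -> generator) so hard.

def logistic_orbit(x0: float, r: float = 3.9, steps: int = 60) -> list:
    """Iterate x_{t+1} = r * x_t * (1 - x_t) and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000000)
b = logistic_orbit(0.400000001)   # generator differs only in the 9th decimal

for t in range(0, 61, 10):
    print(f"t = {t:2d}   |a - b| = {abs(a[t] - b[t]):.6f}")
# By t ~ 40 the two trajectories are effectively unrelated, so recovering the
# exact initial condition from a finite-precision observation of the output
# is, as Tenorio says, almost hopeless.
```

Running it shows the difference growing from 1e-9 to order one, a direct picture of why a tiny error in the recovered parameters sends the reconstruction in the wrong direction.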
From lab at maxine.wpi.edu Wed Jan 31 10:31:56 1990
From: lab at maxine.wpi.edu (Lee Becker)
Date: Wed, 31 Jan 90 10:31:56 -0500
Subject: clues
Message-ID: <9001311531.AA27797@maxine.wpi.edu>

I'd very much like to have a copy of your tech report on CLUES.

Lee A. Becker, Dept. of Computer Science, Worcester Polytechnic Inst., Worcester, MA 01609

THANKS

From rsun at chaos.cs.brandeis.edu.cs.brandeis.edu Wed Jan 31 16:29:27 1990
From: rsun at chaos.cs.brandeis.edu.cs.brandeis.edu (Ron Sun)
Date: Wed, 31 Jan 90 16:29:27 -0500
Subject: No subject
Message-ID: <9001312129.AA01945@chaos>

Fuzzy logic (or fuzzy set theory in general) has had widespread impact on many different areas, and it bears significant resemblances to connectionist models. Thus it is only natural to investigate systematically the similarities and differences between them, and how they can join forces towards a new kind of AI. In particular, I am interested in knowing whether there are ways we can take things from fuzzy logic and apply them in high-level connectionist cognitive modeling. In any case, connectionist models must deal with the same problem, uncertainty and vagueness (fuzziness), which is what fuzzy logic is for. In terms of using evolutionary and learning algorithms (GAs, for example), these numbers (degrees of membership, etc.) might be useful.

There is some work that I know of, e.g. Kosko's paper in the J. of Approx. Reasoning, and my own (in Proc. 11th CogSci Conf. and Proc. IEA/AIE-89).

My questions are:
1) Any other work in the area (on the relation between the two, or the combination of the two)? Pointers? References?
2) Any comments and opinions regarding the combination of connectionism and fuzzy logic?
3) Any conferences, symposia, or workshops dealing extensively with the issue (the combination of the two, not each individually)?

Ron Sun
Brandeis University
Computer Science
Waltham, MA 02254
rsun%cs.brandeis.edu at relay.cs.net
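One simple way to picture the kind of combination Ron Sun is asking about, offered only as an illustrative sketch and not as the construction in Kosko's paper or in Sun's own work, is to let fuzzy membership functions supply graded inputs to a network unit and use a t-norm as a soft conjunction. The linguistic terms, centers, and widths below are invented for the example.

```python
import math

# Illustrative fuzzy/connectionist hybrid unit: inputs become degrees of
# membership in linguistic terms, and a product t-norm plays the role of the
# unit's activation. Terms and parameters are made up for this sketch.

def membership(x: float, center: float, width: float) -> float:
    """Gaussian degree of membership of x in a fuzzy set around `center`."""
    return math.exp(-((x - center) / width) ** 2)

def fuzzy_rule_unit(temperature: float, humidity: float) -> float:
    """Degree to which 'temperature is HIGH AND humidity is LOW' holds."""
    high_temp = membership(temperature, center=30.0, width=5.0)
    low_humid = membership(humidity, center=20.0, width=10.0)
    return high_temp * low_humid          # product t-norm as soft conjunction

print(fuzzy_rule_unit(temperature=31.0, humidity=25.0))   # high degree: rule largely satisfied
print(fuzzy_rule_unit(temperature=15.0, humidity=80.0))   # near zero: rule clearly violated
```

Because the membership functions are differentiable, degrees like these could in principle be tuned by the same gradient methods used for ordinary connectionist weights, which is one reason the two frameworks look compatible.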