From sontag at hilbert.rutgers.edu Sun Sep 2 11:11:44 1990 From: sontag at hilbert.rutgers.edu (Eduardo Sontag) Date: Sun, 2 Sep 90 11:11:44 EDT Subject: Book recently published Message-ID: <9009021511.AA05380@hilbert.rutgers.edu> The following textbook in control and systems theory may be useful to those working on neural nets, especially if interested in recurrent nets and other dynamic behavior. The level is begining-graduate; it is written in a careful mathematical style, but its contents should be accessible to anyone with a good undergraduate-level math background including calculus, linear algebra, and differential equations: Eduardo D. Sontag, __Mathematical Control Theory: Deterministic Finite Dimensional Systems__ Springer, New York, 1990. (396+xiii pages) Some highlights: ** Introductory chapter describing intuitively modern control theory ** Automata and linear systems covered in a *unified* fashion ** Dynamic programming, including variants such as forward programming ** Passing from dynamic i/o data to internal recurrent state representations ** Stability, including Lyapunov functions ** Tracking of time-varying signals ** Kalman filtering as deterministic optimal observation ** Linear optimal control, including Riccati equations ** Determining internal states from input/output experiments ** Classification of internal state representations under equivalence ** Frequency domain considerations: Nyquist criterion, transfer functions ** Feedback, as a general concept, and linear feedback; pole-shifting ** Volterra series ** Appendix: differential equation theorems ** Appendix: singular values and related matters ** Detailed bibliography (400 up-to-date entries) ** Large computer-generated index Some data: Springer-Verlag, ISBN: 0-387-97366-4; 3-540-97366-4 Series: Textbooks in Applied Mathematics, Number 6. Hardcover, $39.00 [Can be ordered in the USA from 1-800-SPRINGER (in NJ, 201-348-4033)] From ai-vie!georg at relay.EU.net Sat Sep 1 10:18:04 1990 From: ai-vie!georg at relay.EU.net (Georg Dorffner) Date: Sat, 1 Sep 90 13:18:04 -0100 Subject: connectionism conference Message-ID: <9009011118.AA02971@ai-vie.uucp> Sixth Austrian Artificial Intelligence Conference --------------------------------------------------------------- Connectionism in Artificial Intelligence and Cognitive Science --------------------------------------------------------------- organized by the Austrian Society for Artificial Intelligence (OGAI) in cooperation with the Gesellschaft fuer Informatik (GI, German Society for Computer Science), Section for Connectionism Sep 18 - 21, 1990 Hotel Schaffenrath Salzburg, Austria Conference chair: Georg Dorffner (Univ. of Vienna, Austria) Program committee: J. Diederich (GMD St. Augustin, Germany) C. Freksa (Techn. Univ. Munich, Germany) Ch. Lischka (GMD St.Augustin, Germany) A. Kobsa (Univ. of Saarland, Germany) M. Koehle (Techn. Univ. Vienna, Austria) B. Neumann (Univ. Hamburg, Germany) H. Schnelle (Univ. Bochum, Germany) Z. Schreter (Univ. Zurich, Switzerland) invited lectures: Paul Churchland (UCSD) Gary Cottrell (UCSD) Noel Sharkey (Univ. of Exeter) Workshops: Massive Parallelism and Cognition Localist Network Models Connectionism and Language Processing Panel: Explanation and Transparency of Connectionist Systems IMPORTANT! The conference languages are German and English. Below, the letter 'E' indicates that a talk or workshop will be held in English. 
===================================================================== Scientific Program (Wed, Sep 19 til Fri, Sep 21): Wednesday, Sep 19, 1990: U. Schade (Univ. Bielefeld) Kohaerenz und Monitor in konnektionistischen Sprachproduktionsmodellen C. Kunze (Ruhr-Univ. Bochum) A Syllable-Based Net-Linguistic Approach to Lexical Access R. Wilkens, H. Schnelle (Ruhr-Univ. Bochum) A Connectionist Parser for Context-Free Phrase Structure Grammars S.C.Kwasny (Washington Univ. St.Louis), K.A.Faisal (King Fahd Univ. Dhahran) Overcoming Limitations of Rule-based Systems: An Example of a Hybrid Deterministic Parser (E) N. Sharkey (Univ. of Exeter), eingeladener Vortrag Connectionist Representation for Natural Language: Old and New (E) Workshop: Connectionism and Language Processing (chair: H. Schnelle) (E) T. van Gelder (Indiana University) Connectionism and Language Processing H. Schnelle (Ruhr-Univ. Bochum) Connectionism for Cognitive Linguistics G. Dorffner (Univ. Wien, Oest. Forschungsinst. f. AI) A Radical View on Connectionist Language Modeling R. Deffner, K. Eder, H. Geiger (Kratzer Automatisierung Muenchen) Word Recognition as a First Step Towards Natural Language Processing with Artificial Neural Networks N. Sharkey (Univ. of Exeter) Implementing Soft Preferences for Structural Disambiguation Paul Churchland, UCSD (invited talk) Some Further Thoughts on Learning and Conceptual Change (E) -------------------------------------------------------------------------- Thursday, Sep 20,1990: G. Cottrell, UCSD (invited talk) Will Connectionism replace symbolic AI? (E) T. van Gelder (Indiana Univ.) Why Distributed Representation is Inherently Non-Symbolic (E) M. Kurthen, D.B. Linke, P. Hamilton (Univ. Bonn) Connectionist Cognition M. Mohnhaupt (Univ. Hamburg) On the Importance of Pictorial Representations for the Symbolic/Subsymbolic Distinction M. Rotter, G. Dorffner (Univ. Wien, Oest. Forschungsinst. f. AI) Struktur und Konzeptrelationen in verteilten Netzwerken C. Mannes (Oest. Forschungsinst. f. AI) Learning Sensory-Motor Coordination by Experimentation and Reinforcement Learning A. Standfuss, K. Moeller, J. Funke (Univ. Bonn) Wissenserwerb ueber dynamische Systeme: Befunde konnektionistischer Modellierung Workshop: Massive Parallelism and Cognition (chair: C. Lischka) (E) C. Lischka (GMD St. Augustin) Massive Parallelism and Cognition: An Introduction T. Goschke (Univ. Osnabrueck) Representation of Implicit Knowledge in Massively Parallel Architectures G. Helm (Univ. Muenchen) Pictorial Representations in Connectionist Systems M. Kurthen (Univ. Bonn) Connectionist Cognition: A Summary S. Thrun, K. Moeller (Univ. Bonn), A. Linden (GMD St. Augustin) Adaptive Look-Ahead Planning Panel: Explanation and Transparency of Connectionist Systems (E) speakers: J. Diederich, C. Lischka (GMD), G. Goerz (Univ. Hamburg), P. Churchland (UCSD), --------------------------------------------------------------------- Friday, Sep 21, 1990: Workshop: Localist Network Models (chair: J. Diederich) (E) S. Hoelldobler (ICSI Berkeley) On High-Level Inferencing and the Variable Binding Problem in Connectionist Networks J. Diederich (GMD St.Augustin, UC Davis) Recruitment vs. Backpropagation Learning: An Empirical Study on Re-Learning in Connectionist Networks W.M. Rayburn, J. Diederich (UC Davis) Some Remarks on Emotion, Cognition, and Connectionist Systems G. Paass (GMD St. Augustin) A Stochastic EM Learning Algorithm for Structured Probabilistic Neural Networks T. Waschulzik, H. 
Geiger (Kratzer Automatisierung Muenchen) Eine Entwicklungsmethodik fuer strukturierte konnektionistische Systeme G. Cottrell (UCSD) Why Localist Connectionism is a Mistake A.N. Refenes (Univ. College London) ConSTrainer: A Generic Toolkit for Connectionist Dataset Selection (E) J.L. van Hemmen, W. Gerstner(TU Muenchen), A. Herz, R. Kuehn, B. Sulzer, M. Vaas (Univ. Heidelberg) Encoding and Decoding of Patterns which are Correlated in Space and Time R. Salomon (TU Berlin) Beschleunigtes Lernen durch adaptive Regelung der Lernrate bei back-propagation in feed-forward Netzen T. Waschulzik, H. Geiger (Kratzer Automatisierung Muenchen) Theorie und Anwendung strukturierter konnektionistischer Systeme H. Bischof, A. Pinz (Univ.f.Bodenkultur Wien) Verwendung von neuralen Netzwerken zur Klassifikation natuerlicher Objekte am Beispiel der Baumerkennung aus Farb-Infrarot-Luftbildern. H.G. Ziegeler, K.W. Kratky (Univ. Wien) A Connectionist Realization Applying Knowledge-Compilation and Auto-Segmentation in a Symbolic Assignment Problem A. Lebeda, M. Koehle (TU Wien) Buchstabenerkennung unter Beruecksichtigung von kontextueller Information ======================================================================== Registration: Please send the following form to: Georg Dorffner Inst.f. Med. Kybernetik und Artificial Intelligence Universitaet Wien Freyung 6/2 A-1010 Vienna, Austria For further questions write to the same address or contact directly Georg Dorffner (Tel: +43 1 535 32 810, Fax: +43 1 63 06 52, email: georg at ai-vie.uucp) ------------------------------------------------------------------------ Connectionism in AI and Cognitive Science (KONNAI) Registration Application Form: I herewith apply for registration at the 6th Austrian AI conference Name: __________________________________________________________________ Address: _______________________________________________________________ _______________________________________________________________ _______________________________________________________________ Telephone: __________________________________ email: _____________________ I will participate in the following events: o Plenary lectures, scient. program, Panel AS 1.950,-- (DM 280,--) reduced price for OGAI members AS 1.800,-- (DM 260,--) reduced price for students (with ID!) AS 1.000,-- (DM 150,--) --------------- Amount: _______________ o Workshops (price is included in conference fee) o Massive Parallelism and Cognition o Localist Network Models o Connectionism and Language Processing o I want to demonstrate a program and need the following hardware and software: __________________________________________________ o I transfer the money to the checking account of the OGAI at the Ersten Oesterreichischen Spar-Casse-Bank, No. 004-71186 o I am sending a eurocheque o I need an invoice signature: ____________________________________ ====================================================================== Accommodation: The conference will be held at Hotel Schaffenrath, Alpenstrasse 115, A-5020 Salzburg. No rooms are available any more at that hotel. You can, however, send the form below to the Hotel Schaffenrath, who will forward the reservation to another nearby hotel. ===================================================================== Connectionism in AI and Cognitive Science (KONNAI) Hotel reservation I want a room from __________________ to _______________________ (day of arrival) (day of departure) o single AS 640,-- incl. breakfast o double AS 990,-- incl.
breakfast o three beds AS 1200,-- incl. breakfast Name: ________________________________________________________________ Address: _____________________________________________________________ _____________________________________________________________ _____________________________________________________________ Telephone: __________________________________ From N.E.Sharkey at cs.exeter.ac.uk Tue Sep 4 14:28:22 1990 From: N.E.Sharkey at cs.exeter.ac.uk (Noel Sharkey) Date: Tue, 4 Sep 90 14:28:22 BST Subject: PSYCHOLOGICAL PROCESSES Message-ID: <20234.9009041328@entropy.cs.exeter.ac.uk> I have been getting a lot of enquiries about the special issue of Connection Science on psychological processes (I posted the announcement months ago and of course people have lost it). So here it is again, folks. noel ******************** CALL FOR PAPERS ****************** CONNECTION SCIENCE SPECIAL ISSUE CONNECTIONIST MODELLING OF PSYCHOLOGICAL PROCESSES EDITOR Noel Sharkey SPECIAL BOARD Jim Anderson Andy Barto Thomas Bever Glyn Humphreys Walter Kintsch Dennis Norris Kim Plunkett Ronan Reilly Dave Rumelhart Antony Sanford The journal Connection Science would like to encourage submissions from researchers modelling psychological data or conducting experiments comparing models within the connectionist framework. Papers of this nature may be submitted to our regular issues or to the special issue. Authors wishing to submit papers to the special issue should mark them SPECIAL PSYCHOLOGY ISSUE. Good quality papers not accepted for the special issue may appear in later regular issues. DEADLINE FOR SUBMISSION 12th October, 1990. Notification of acceptance or rejection will be by the end of December/beginning of January. From fogler at sirius.unm.edu Wed Sep 5 12:01:08 1990 From: fogler at sirius.unm.edu (fogler@sirius.unm.edu) Date: Wed, 5 Sep 90 10:01:08 MDT Subject: feature extraction in the striate cortex Message-ID: <9009051601.AA15312@sirius.unm.edu> I am investigating feature extraction for object recognition that mimics in some fashion the algorithms and perhaps architectures that occur in animal vision. I have read a number of papers and/or books by Hubel, Wiesel, Marr, Poggio, Ullman, Grimson, Wilson and others. I am looking for papers that relate to the orientation columns in the striate cortex and how they might be interconnected for the purpose of feature extraction. Specifically, I am looking for information on the encoding scheme(s) used to represent the orientation angles of features. My background is in algorithms and hardware for signal processing, computer vision, and neural networks. I welcome comments and suggestions. Joe Fogler EECE Department University of New Mexico fogler at wayback.unm.edu From mikek at boulder.Colorado.EDU Wed Sep 5 12:14:27 1990 From: mikek at boulder.Colorado.EDU (Mike Kranzdorf) Date: Wed, 5 Sep 90 10:14:27 MDT Subject: Mactivation 3.3 on new ftp site Message-ID: <9009051614.AA21672@fred.colorado.edu> Mactivation version 3.3 is available via anonymous ftp on alumni.Colorado.EDU (internet address 128.138.240.32) The file is in /pub and is called mactivation.3.3.sit.hqx (It is StuffIt'ed and BinHex'ed) To get it, try this: ftp alumni.Colorado.EDU anonymous binary cd /pub get mactivation.3.3.sit.hqx Then get it to your Mac and use StuffIt to uncompress it and BinHex 4.0 to make it back into an application. If you can't make ftp work, or you want a copy with the nice MS Word docs, then send $5 to: Mike Kranzdorf P.O.
Box 1379 Nederland, CO 80466-1379 USA For those who don't know about Mactivation, here's the summary: Mactivation is an introductory neural network simulator which runs on all Apple Macintosh computers. A graphical interface provides direct access to units, connections, and patterns. Basic concepts of network operations can be explored, with many low level parameters available for modification. Back-propagation is not supported (coming in 4.0) A user's manual containing an introduction to connectionist networks and program documentation is included. The ftp version includes a plain text file, while the MS Word version available from the author contains nice graphics and footnotes. The program may be freely copied, including for classroom distribution. --mikek internet: mikek at boulder.colorado.edu uucp:{ncar|nbires}!boulder!mikek AppleLink: oblio From fritz_dg%ncsd.dnet at gte.com Wed Sep 5 09:38:48 1990 From: fritz_dg%ncsd.dnet at gte.com (fritz_dg%ncsd.dnet@gte.com) Date: Wed, 5 Sep 90 09:38:48 -0400 Subject: voice discrimination Message-ID: <9009051338.AA19239@bunny.gte.com> I'm looking for references on neural network research on voice discrimination, that is, telling one language and/or speaker apart from another without necessarily understanding the words. Any leads at all will be appreciated. I will summarize & return to the list any responses. Thanks. Dave Fritz fritz_dg%ncsd at gte.com ----------------------------------------------- From sankar at caip.rutgers.edu Wed Sep 5 14:54:29 1990 From: sankar at caip.rutgers.edu (ananth sankar) Date: Wed, 5 Sep 90 14:54:29 EDT Subject: No subject Message-ID: <9009051854.AA25553@caip.rutgers.edu> Please address any further requests for the technical report: "Tree Structured Neural Nets" to Barbara Daniels CAIP Center Brett and Bowser Roads P.O. Box 1390 Piscataway, NJ 08855-1390 Please enclose a check made out to "Rutgers University CAIP Center" for $5:00 if you are from the USA or Canada and for $8:00 if you are from any other country. Thank you. Ananth Sankar From kruschke at ucs.indiana.edu Wed Sep 5 16:46:00 1990 From: kruschke at ucs.indiana.edu (KRUSCHKE,JOHN,PSY) Date: 5 Sep 90 15:46:00 EST Subject: speech spectrogram recognition Message-ID: I have a student interested in connectionist models of speech *spectrogram* recognition. As speech is not my area of expertise, I'm hoping you can suggest references to us. I realize that there are a tremendous number of papers on the topic of connectionist speech recognition, so references to good survey papers and especially good recent papers, which cite earlier work, would be most appreciated. Thanks. --John Kruschke (No need to "reply" to the whole list; you can send directly to: kruschke at ucs.indiana.edu ) From birnbaum at fido.ils.nwu.edu Fri Sep 7 11:03:56 1990 From: birnbaum at fido.ils.nwu.edu (Lawrence Birnbaum) Date: Fri, 7 Sep 90 10:03:56 CDT Subject: ML91 -- THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING Message-ID: <9009071503.AA05805@fido.ils.nwu.edu> ML91 -- THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING CALL FOR WORKSHOP PROPOSALS AND PRELIMINARY CALL FOR PAPERS On behalf of the organizing committee, we are pleased to solicit proposals for the workshops that will constitute ML91, the Eighth International Workshop on Machine Learning, to be held in late June, 1991, at Northwestern University, Evanston, Illinois, USA. We anticipate choosing six workshops to be held in parallel over the three days of the meeting. 
Our goal in evaluating workshop proposals is to ensure high quality and broad coverage of work in machine learning. Workshop committees -- which will operate for the most part independently in selecting work to be presented at ML91 -- should include two to four people, preferably at different institutions. The organizing committee may select some workshops as proposed, or may suggest changes or combinations of proposals in order to achieve the goals of quality and balance. Proposals are due October 10, 1990, preferably by email to: ml91 at ils.nwu.edu although hardcopy may also be sent to the following address: ML91 Northwestern University The Institute for the Learning Sciences 1890 Maple Avenue Evanston, IL 60201 USA fax (708) 491-5258 Please include the following information: 1. Workshop topic 2. Names, addresses, and positions of workshop committee members 3. Brief description of topic 4. Workshop format 5. Justification for workshop, including assessment of breadth of appeal Workshop format is somewhat flexible, and may include invited talks, panel discussions, short presentations, and even small working group meetings. However, it is expected that the majority of time will be devoted to technical presentations of 20 to 30 minutes in length, and we encourage the inclusion of a poster session in each workshop. Each workshop will be allocated approximately 100 pages in the Proceedings, and papers to be published must have a minimum length of (most likely) 4 to 5 pages in double column format. Workshop committee members should be aware of these space limitations in designing their workshops. We encourage proposals in all areas of machine learning, including induction, explanation-based learning, connectionist and neural net models, adaptive control, pattern recognition, computational models of human learning, perceptual learning, genetic algorithms, computational approaches to teaching informed by learning theories, scientific theory formation, etc. Proposals centered around research problems that can fruitfully be addressed from a variety of perspectives are particularly welcome. The workshops to be held at ML91 will be announced towards the end of October. In the meantime, we would like to announce a preliminary call for papers; the submission deadline is February 1, 1991. Authors should bear in mind the space limitations described above. On behalf of the organizing committee, Larry Birnbaum Gregg Collins Program co-chairs, ML91 (This announcement is being sent/posted to ML-LIST, CONNECTIONISTS, ALife, PSYCOLOQUY, NEWS.ANNOUNCE.CONFERENCES, COMP.AI, COMP.AI.EDU, COMP.AI.NEURAL-NETS, COMP.ROBOTICS, and SCI.PSYCHOLOGY. We encourage readers to forward it to any other relevant mailing list or bulletin board.) From kayama at CS.UCLA.EDU Fri Sep 7 17:49:18 1990 From: kayama at CS.UCLA.EDU (Masahiro Kayama) Date: Fri, 7 Sep 90 14:49:18 -0700 Subject: Request Message-ID: <9009072149.AA12711@oahu.cs.ucla.edu> How do you do? I am Masahiro Kayama, a visiting scholar at UCLA from Hitachi Ltd., Japan. Mr. Michio Morioka, who is a visiting researcher at CMT, introduced your group to me. I would like to attend the following meeting, but I have not obtained detailed information: the IASTED International Symposium on Machine Learning and Neural Networks, which will be held on October 10-12, 1990 at the New York Penta Hotel. In particular, the address, telephone number, and fax number of the Penta Hotel, as well as the program of the conference, would be helpful. If you have the above information, could you please reply to me?
I can be reached by e-mail at "kayama at cs.ucla.edu". Thank you. From Masahiro Kayama. From LIFY447 at IV3.CC.UTEXAS.EDU Sat Sep 8 14:53:03 1990 From: LIFY447 at IV3.CC.UTEXAS.EDU (Steve Chandler) Date: Sat, 8 Sep 1990 13:53:03 CDT Subject: Request to join Message-ID: <900908135303.22800c31@IV3.CC.UTEXAS.EDU> Would you please add me to the connectionists list? I was on it in a previous guise at the Univ. of Idaho, but had you drop me there recently in anticipation of my sabbatical here at the University of Texas. My interests include connectionist modeling of natural language acquisition and processing. I've recently had a paper accepted on connectionist lexical modeling, and my sabbatical project involves connectionist modeling of child language acquisition. Thanks. Steve Chandler From B344DSL at UTARLG.UTARL.EDU Sat Sep 8 19:41:00 1990 From: B344DSL at UTARLG.UTARL.EDU (B344DSL@UTARLG.UTARL.EDU) Date: Sat, 8 Sep 90 18:41 CDT Subject: No subject Message-ID: <393D65648E1F002A83@utarlg.utarl.edu> Announcement NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND) October 4-6, 1990 IBM Westlake, TX (near Dallas - Fort Worth Airport) Conference Organizers: Daniel Levine, University of Texas at Arlington (Mathematics) Manuel Aparicio, IBM Application Solutions Division Speakers will include: James Anderson, Brown University (Psychology) Jean-Paul Banquet, Hospital de la Salpetriere, Paris John Barnden, New Mexico State University (Computer Science) Claude Cruz, Plexus Systems Incorporated Robert Dawes, Martingale Research Corporation Richard Golden, University of Texas at Dallas (Human Development) Janet Metcalfe, Dartmouth College (Psychology) Jordan Pollack, Ohio State University (Computer Science) Karl Pribram, Radford University (Brain Research Institute) Lokendra Shastri, University of Pennsylvania (Computer Science) Topics will include: Connectionist models of semantic comprehension. Architectures for evidential and case-based reasoning. Connectionist approaches to symbolic problems in AI such as truth maintenance and dynamic binding. Representations of logical primitives, data structures, and constitutive relations. Biological mechanisms for knowledge representation and knowledge-based planning. We plan to follow the talks by a structured panel discussion on the questions: Can neural networks do numbers? Will architectures for pattern matching also be useful for precise reasoning, planning, and inference? Tutorial Session: Robert Dawes, President of Martingale Research Corporation, will present a three-hour tutorial on neurocomputing the evening of October 3. This preparation for the workshop will be free of charge to all pre-registrants.
------------------------------------------------------------------------------- Registration Form NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND) Name: _____________________________________________________ Affiliation: ______________________________________________ Address: __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ Telephone number: _________________________________________ Electronic mail: __________________________________________ Conference fee enclosed (please check appropriate line): $50 for MIND members before September 30 ______ $60 for MIND members on/after September 30 ______ $60 for non-members before September 30 ______ $70 for non-members on/after September 30 ______ $10 for student MIND members any time ______ $20 for student non-members any time ______ Tutorial session (check if you plan to attend): ______ Note: This is free of charge to pre-registrants. Suggested Hotels: Solana Marriott Hotel. Next to IBM complex, with continuous shuttle bus available to meeting site; ask for MIND conference rate of $80/night. Call (817) 430-3848 or (800) 228-9290. Campus Inn, Arlington. 30 minutes from conference, but rides are available if needed; $39.55 for single/night. Call (817) 860-2323. American Airlines. Minus 40% on coach or 5% over and above Super Saver. Call (800)-433-1790 for specific information and reservations, under Star File #02oz76 for MIND Conference. Conference programs, maps, and other information will be mailed to pre-registrants in mid-September. Please send this form with check or money order to: Dr. Manuel Aparicio IBM Mail Stop 03-04-40 5 West Kirkwood Blvd. Roanoke, TX 76299-0001 (817) 962-5944 From todd at galadriel.Stanford.EDU Sun Sep 9 03:44:56 1990 From: todd at galadriel.Stanford.EDU (Peter Todd) Date: Sun, 09 Sep 90 00:44:56 PDT Subject: Request for music and connectionism articles Message-ID: I am currently preparing the "summary of current work" section for our book Music and Connectionism (to appear next spring, MIT Press, Gareth Loy co-editor), so I wanted to ask mailing-list members for references to any work in this area we may have missed. If you know of any papers or unpublished efforts (or simply names of researchers) concerning connectionist/neural network/PDP approaches and applications to musical problems or domains (other than Bharucha, Kohonen, J.P. Lewis, Leman, Lischka, or the authors in our two issues of the Computer Music Journal), I would greatly appreciate hearing about them, and having the chance to spread their work to a wider audience. I will post word when our book becomes available around March. Thanks for your help-- Peter Todd Psychology Dept. Stanford U. 
From plunkett at amos.ucsd.edu Mon Sep 10 14:37:33 1990 From: plunkett at amos.ucsd.edu (Kim Plunkett) Date: Mon, 10 Sep 90 11:37:33 PDT Subject: No subject Message-ID: <9009101837.AA02230@amos.ucsd.edu> The following TR is now available: From Rote Learning to System Building: Acquiring Verb Morphology in Children and Connectionist Nets Kim Plunkett University of Aarhus Denmark Virginia Marchman Center for Research in Language University of California, San Diego Abstract The traditional account of the acquisition of English verb morphology supposes that a dual mechanism architecture underlies the transition from early rote learning processes (in which past tense forms of verbs are correctly produced) to the systematic treatment of verbs (in which irregular verbs are prone to error). A connectionist account supposes that this transition can occur in a single mechanism (in the form of a neural network) driven by gradual quantitative changes in the size of the training set to which the network is exposed. In this paper, a series of simulations is reported in which a multi-layered perceptron learns to map verb stems to past tense forms analogous to the mappings found in the English past tense system. By expanding the training set in a gradual, incremental fashion and evaluating network performance on both trained and novel verbs at successive points in learning, we demonstrate that the network undergoes reorganizations that result in a shift from a mode of rote learning to a systematic treatment of verbs. Furthermore, we show that this reorganizational transition is contingent upon a critical mass in the training set and is sensitive to the phonological sub-regularities characterizing the irregular verbs. The optimal levels of performance achieved in this series of simulations compared to previous work derive from the incremental training procedures exploited in the current simulations. The pattern of errors observed is compared to those of children acquiring the English past tense, as well as children's performance on experimental studies with nonsense verbs. Incremental learning procedures are discussed in light of theories of cognitive development. It is concluded that a connectionist approach offers a viable alternative account of the acquisition of English verb morphology, given the current state of empirical evidence relating to processes of acquisition in young children. Copies of the TR can be obtained by contacting "staight at amos.ucsd.edu" and requesting CRL TR #9020. Please remember to provide your hardmail address. Alternatively, a compressed PostScript file is available by anonymous ftp from "amos.ucsd.edu" (internet address 128.54.16.43). The relevant file is "crl_tr9020.ps.Z" and is in the directory "~ftp/pub". Kim Plunkett From oruiz at fi.upm.es Wed Sep 12 09:34:00 1990 From: oruiz at fi.upm.es (Oscar Ruiz) Date: 12 Sep 90 15:34 +0200 Subject: neural efficiency Message-ID: <54*oruiz@fi.upm.es> Subject: Neural efficiency. For some time I have been interested in the efficiency of neural network (NN) fitting, but I still have very little information about this matter - I got a bibliography with articles and books about this matter, but I cannot find them here in Spain, and the mail is (exasperatingly) slow. I have heard that NN fitting is an NP-complete (perhaps even NP-hard) problem.
I know what this means in discrete problems, but I am not sure what it means in continuous problems, as in the case of a NN whose units have a continuous activation function (e.g.: a sigmoid), where the exact fitness is, in general, impossible. Efficiency problems can be formalized in terms of Turing Machines, which are essentially discrete objects. But how can it be done with continuous problems? On the other hand, reference (1) below states that generally the number of steps needed to optimize a function of n variables, with a given relative error, is an exponential function of n (see for a more rigorous formulation of this result below). Since fitting a neural network is equivalent to minimizing its error function (whose variables are the NN weights), the search for an efficient general method to fit the weights in a NN is doomed to failure (except for some particular cases). I would like to know if this is right. Reference: (1) Nemirovsky et al.: Problem Complexity and Method Efficiency in Optimization (John Wiley & Sons). (In this book, concrete classes of problems and the class of methods corresponding to these problems are considered. Each of these methods applied to the class of problems considered is characterized by its laboriousness and error, i.e. by upper bounds -over the problems of the class- for the number of steps in its work on the problem and by the error of the result. The complexity N(v) of a given class of problems is defined as the least possible laboriousness of a method which solves every problem of the class with a relative error not exceeding v. The main result is the following: For the complexity N(v) of the class of all extremal problems with k-times continuously differential functionals on a compact field G in E^n, the lower bound c(k,G)(1/v)^(n/k) holds both for the ordinary -deterministic- methods of solution and for random-search methods.) Miguel A. Lerma Sancho Davila 18 28028 MADRID - SPAIN  From rich at gte.com Wed Sep 12 11:36:48 1990 From: rich at gte.com (Rich Sutton) Date: Wed, 12 Sep 90 11:36:48 -0400 Subject: Reinforcement Learning -- Special Issue of Machine Learning Journal Message-ID: <9009121536.AA09672@bunny.gte.com> ---------------------------------------------------------------------- CALL FOR PAPERS The journal Machine Learning will be publishing a special issue on REINFORCEMENT LEARNING in 1991. By "reinforcement learning" I mean trial-and-error learning from performance feedback without an explicit teacher other than the external environment. Of particular interest is the learning of mappings from situation to action in this way. Reinforcement learning has most often been studied within connectionist or classifier-system (genetic) paradigms, but it need not be. Manuscripts must be received by March 1, 1991, to assure full consideration. One copy should be mailed to the editor: Richard S. Sutton GTE Laboratories, MS-44 40 Sylvan Road Waltham, MA 02254 USA In addition, four copies should be mailed to: Karen Cullen MACH Editorial Office Kluwer Academic Publishers 101 Philip Drive Assinippi Park Norwell, MA 02061 USA Papers will be subject to the standard review process. ------------------------------------------------------------------------ From sankar at caip.rutgers.edu Thu Sep 13 10:40:18 1990 From: sankar at caip.rutgers.edu (ananth sankar) Date: Thu, 13 Sep 90 10:40:18 EDT Subject: Room-mate at NIPS Message-ID: <9009131440.AA04973@caip.rutgers.edu> I will be attending this years NIPS conference and workshop. 
Anyone interested in sharing a room, please get in touch with me as soon as possible. Thanks, Ananth Sankar CAIP Center Rutgers University Brett and Bowser Roads P.O. Box 1390 Piscataway, NJ 08855-1390 Phone: (201)932-5549 (off) (609)936-9024 (res) From plunkett at amos.ucsd.edu Thu Sep 13 16:49:38 1990 From: plunkett at amos.ucsd.edu (Kim Plunkett) Date: Thu, 13 Sep 90 13:49:38 PDT Subject: No subject Message-ID: <9009132049.AA24317@amos.ucsd.edu> Please note that Jordan Pollack has kindly posted a recently announced TR on the neuroprose directory under "plunkett.tr9020.ps.Z". Just to remind you of the contents, the abstract follows: ===================================================================== From Rote Learning to System Building: Acquiring Verb Morphology in Children and Connectionist Nets Kim Plunkett University of Aarhus Denmark Virginia Marchman Center for Research in Language University of California, San Diego Abstract The traditional account of the acquisition of English verb morphology supposes that a dual mechanism architecture underlies the transition from early rote learning processes (in which past tense forms of verbs are correctly produced) to the systematic treatment of verbs (in which irregular verbs are prone to error). A connectionist account supposes that this transition can occur in a single mechanism (in the form of a neural network) driven by gradual quantitative changes in the size of the training set to which the network is exposed. In this paper, a series of simulations is reported in which a multi-layered perceptron learns to map verb stems to past tense forms analogous to the mappings found in the English past tense system. By expanding the training set in a gradual, incremental fashion and evaluating network performance on both trained and novel verbs at successive points in learning, we demonstrate that the network undergoes reorganizations that result in a shift from a mode of rote learning to a systematic treatment of verbs. Furthermore, we show that this reorganizational transition is contingent upon a critical mass in the training set and is sensitive to the phonological sub-regularities characterizing the irregular verbs. The optimal levels of performance achieved in this series of simulations compared to previous work derive from the incremental training procedures exploited in the current simulations. The pattern of errors observed is compared to those of children acquiring the English past tense, as well as children's performance on experimental studies with nonsense verbs. Incremental learning procedures are discussed in light of theories of cognitive development. It is concluded that a connectionist approach offers a viable alternative account of the acquisition of English verb morphology, given the current state of empirical evidence relating to processes of acquisition in young children. From tgd at turing.CS.ORST.EDU Thu Sep 13 17:08:26 1990 From: tgd at turing.CS.ORST.EDU (Tom Dietterich) Date: Thu, 13 Sep 90 14:08:26 PDT Subject: Tech Report Available Message-ID: <9009132108.AA04726@turing.CS.ORST.EDU> The following tech report is available in compressed postscript format from the neuroprose archive at Ohio State. A Comparison of ID3 and Backpropagation for English Text-to-Speech Mapping Thomas G.
Dietterich Hermann Hild Ghulum Bakiri Department of Computer Science Oregon State University Corvallis, OR 97331-3102 Abstract The performance of the error backpropagation (BP) and decision tree (ID3) learning algorithms was compared on the task of mapping English text to phonemes and stresses. Under the distributed output code developed by Sejnowski and Rosenberg, it is shown that BP consistently out-performs ID3 on this task by several percentage points. Three hypotheses explaining this difference were explored: (a) ID3 is overfitting the training data, (b) BP is able to share hidden units across several output units and hence can learn the output units better, and (c) BP captures statistical information that ID3 does not. We conclude that only hypothesis (c) is correct. By augmenting ID3 with a simple statistical learning procedure, the performance of BP can be approached but not matched. More complex statistical procedures can improve the performance of both BP and ID3 substantially. A study of the residual errors suggests that there is still substantial room for improvement in learning methods for text-to-speech mapping. This is an expanded version of a short paper that appeared at the Seventh International Conference on Machine Learning at Austin TX in June. To retrieve via FTP, use the following procedure: unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name (cheops.cis.ohio-state.edu:): anonymous Password (cheops.cis.ohio-state.edu:anonymous): ftp> cd pub/neuroprose ftp> binary ftp> get (remote-file) dietterich.comparison.ps.Z (local-file) foo.ps.Z ftp> quit unix> uncompress foo.ps unix> lpr -P(your_local_postscript_printer) foo.ps From skrzypek at CS.UCLA.EDU Thu Sep 13 18:25:48 1990 From: skrzypek at CS.UCLA.EDU (Dr. Josef Skrzypek) Date: Thu, 13 Sep 90 15:25:48 PDT Subject: NN AND VISION -IJPRAI-special issue Message-ID: <9009132225.AA27988@retina.cs.ucla.edu> Because of repeat enquiries about the special issue of IJPRAI (Intl. J. of Pattern Recognition and AI) I am posting the announcement again. IJPRAI CALL FOR PAPERS IJPRAI We are organizing a special issue of IJPRAI (Intl. Journal of Pattern Recognition and Artificial Intelligence) dedicated to the subject of neural networks in vision and pattern recognition. Papers will be refereed. The plan calls for the issue to be published in the fall of 1991. I would like to invite your participation. DEADLINE FOR SUBMISSION: 10th of December, 1990 VOLUME TITLE: Neural Networks in Vision and Pattern Recognition VOLUME GUEST EDITORS: Prof. Josef Skrzypek and Prof. Walter Karplus Department of Computer Science, 3532 BH UCLA Los Angeles CA 90024-1596 Email: skrzypek at cs.ucla.edu or karplus at cs.ucla.edu Tel: (213) 825 2381 Fax: (213) UCLA CSD DESCRIPTION The capabilities of neural architectures (supervised and unsupervised learning, feature detection and analysis through approximate pattern matching, categorization and self-organization, adaptation, soft constraints, and signal based processing) suggest new approaches to solving problems in vision, image processing and pattern recognition as applied to visual stimuli. The purpose of this special issue is to encourage further work and discussion in this area. The volume will include both invited and submitted peer-reviewed articles. We are seeking submissions from researchers in relevant fields, including, natural and artificial vision, scientific computing, artificial intelligence, psychology, image processing and pattern recognition. 
"We encourage submission of: 1) detailed presentations of models or supporting mechanisms, 2) formal theoretical analyses, 3) empirical and methodological studies. 4) critical reviews of neural networks applicability to various subfields of vision, image processing and pattern recognition. Submitted papers may be enthusiastic or critical on the applicability of neural networks to processing of visual information. The IJPRAI journal would like to encourage submissions from both , researchers engaged in analysis of biological systems such as modeling psychological/neurophysiological data using neural networks as well as from members of the engineering community who are synthesizing neural network models. The number of papers that can be included in this special issue will be limited. Therefore, some qualified papers may be encouraged for submission to the regular issues of IJPRAI. SUBMISSION PROCEDURE Submissions should be sent to Josef Skrzypek, by 12-10-1990. The suggested length is 20-22 double-spaced pages including figures, references, abstract and so on. Format details, etc. will be supplied on request. Authors are strongly encouraged to discuss ideas for possible submissions with the editors. The Journal is published by the World Scientific and was established in 1986. Thank you for your considerations. From kanal at cs.UMD.EDU Thu Sep 13 20:34:25 1990 From: kanal at cs.UMD.EDU (Laveen N. KANAL) Date: Thu, 13 Sep 90 20:34:25 -0400 Subject: Notice of Technical Reports Message-ID: <9009140034.AA21259@mimsy.UMD.EDU> What follows is the abstract of a TR printed this summer which has been subitted for publication. Also included in this message are the titles of two earlier reports by the sdame authors which were put out in Dec. 1988 but whic may be of interest now in view of some titles I have seen on the net. UMIACS-TR-90-99 July 1990 CS-TR-2508 ASYMMETRIC MEAN-FIELD NEURAL NETWORKS FOR MULTIPROCESSOR SCHECDULING Benjamin J. Hellstrom Laveen N. Kanal Abstract Hopfield and Tank's proposed technique for embedding optimization problems, such as the travelling salesman, in mean-field thermodynamic networks suffers from several restrictions. In particular, each discrete optimization problem must be reduced to the minimization of a 0-1 Hamiltonian. Hopfield and Tank's technique yields fully-connected networks of functionally homogeneous visible units with low-order symmetric connections. We present a program-constructive approach to embedding difficult problems in neural networks. Our derivation method overcomes the Hamiltonian reducibility requirement and promotes networks with functionally heterogeneous hidden units and asymmetric connections of both low and high-order. The underlying mechanism involves the decomposition of arbitrary problem energy gradients into piecewise linear functions which can be modeled as the outputs of sets of hidden units. To illustrate our method, we derive thermodynamic mean-field neural networks for multiprocessor scheduling. The performance of these networks is analyzed by observing phase transitions and several improvements are suggested. Tuned networks of up to 2400 units are shown to yield very good, and often exact solutions. The earlier reports are CS-TR-2149 Dec. 1988 by Hellstrom and Kanal, titled " Linear Programming Approaches to Learning in Thermodynamic Models of Neural Networks" Cs-TR-2150, Dec. 1988 by Hellstrom and Kanal, titled " Encoding via Meta-Stable Activation Levels: A Case Study of the 3-1-3 Encoder". 
Reports are available free while the current supply lasts, after which they will be available (for a small charge) from the publications group at the Computer Science Center of the Univ. of Maryland, College Park, Md., 20742. The address for the current supply is: Prof. L.N. Kanal, Dept. of Computer Science, A.V. Williams Bldg, Univ. of Maryland, College Park, MD. 20742. L.K. From erol at ehei.ehei.fr Mon Sep 17 11:20:28 1990 From: erol at ehei.ehei.fr (Erol Gelenbe) Date: Mon, 17 Sep 90 15:22:28 +2 Subject: Application of the random neural network model to NP-Hard problems Message-ID: <9009171421.AA20868@inria.inria.fr> We are doing work on the application of the random neural network model, introduced in two recent papers in the journal Neural Computation (E. Gelenbe: Vol. 1, No. 4, and Vol. 2, No. 2), to combinatorial optimisation problems. Our first results concern the Graph Covering problem. We have considered 400 graphs drawn at random, with 20, 50 and 100 nodes. Over this sample we observe that: - The random neural network solution (which is purely analytic, i.e. not simulated as with the Hopfield network) provides on the average better results than the usual heuristic (the greedy algorithm), and considerably better results than the Hopfield-Tank approach. - The random neural network solution is more time consuming than the greedy algorithm, but considerably less time consuming than the Hopfield-Tank approach. A report can be obtained by writing or e-mailing me: erol at ehei.ehei.fr Erol Gelenbe EHEI 45 rue des Saints-Peres 75006 Paris, France From mikek at wasteheat.colorado.edu Mon Sep 17 11:31:50 1990 From: mikek at wasteheat.colorado.edu (Mike Kranzdorf) Date: Mon, 17 Sep 90 09:31:50 -0600 Subject: Mactivation Word docs coming to ftp Message-ID: <9009171531.AA13946@wasteheat.colorado.edu> I thought Connectionists might be interested in the end result of this, specifically that I will be posting a new copy of Mactivation 3.3 including MS Word documentation to alumni.colorado.edu real soon now. Date: Sun, 16 Sep 90 03:37:36 GMT-0600 From: james at visual2.tamu.edu (James Saxon) Message-Id: <9009160937.AA25939 at visual2.tamu.edu> To: mikek at boulder.colorado.edu *** Subject: Mactivation Documentation I was going to post this to the net but I figured I'd let you do it if you feel it's necessary. If you're going to give out the bloody program, you might as well have just stuck in the decent readable documentation because nobody in their right mind is going to pay $5.00 for it. It's really a cheap move and if you don't replace the ftp file you might just lose all your business because I, like many others, just started playing with the package. I don't see any macros for learning repetitive things and so I was going to give up because I don't want to spend all day trying to figure out how to not switch from the mouse to the keyboard trying to set the layer outputs for everything... And then I'm certainly not going to turn to an unformatted Geneva document just to prove that the program is not very powerful... So you can decide what you want to do but I suggest not making everybody pissed off at you. --- I sincerely apologize if my original posting gave the impression that I was trying to make money from this. Mactivation, along with all the documentation, has been available via ftp for over 3 years now. Since I recently had to switch ftp machines here, I thought I would save some bandwidth and post a smaller copy (in fact this was suggested by several people).
Downloading these things over a 1200 baud modem is very slow. The point of documentation in this case is to be able to use the program, and I still think a text file does fine. The $5 request was not for prettier docs, but for the disk, the postage, and my time. I get plenty of letters saying "Thank you for letting me avoid ftp", and that was the idea. The $5 actually started as an alternative for people who didn't want to bother sending me a disk and a self-addressed stamped envelope, which used to be part of my offer. However, I got too many 5 1/4" disks and unstamped envelopes, so I dropped that option this round. --- I am presently collecting NN software for a class that my professor is teaching here at A&M and will keep your program around for the students but I warn them about the user's manual. :-0 And while this isn't a contest, your program will be competing with the Rochester Connectionist Simulator, SFINX, DESCARTES, and a bunch more... Lucky I don't have MacBrain... which if you haven't seen, you should. Of course, that's $1000, but the manual's free. --- If you think you're getting MacBrain for free or a Mac version of the Rochester Simulator, then don't bother downloading Mactivation. You will be disappointed. I wrote Mactivation for myself, and it is not supported by a company or a university. It's not for research, it's an introduction which can be used to teach some basics. (Actually you can do research, but only on the effects of low-level parameters on small nets. As a point of interest, my research involved making optical neural nets out of spatial light modulators, and these parameters were important while the ability to make large or complex nets was not.) --- James Saxon Scientific Visualization Laboratory Texas A&M University james at visual2.tamu.edu --- ***The end result of this is that I will post a new copy complete with the Word docs. I am not a proficient telecommunicator though, so it may take a week or so. I apologize for the delay. --mikek From stucki at cis.ohio-state.edu Mon Sep 17 13:53:33 1990 From: stucki at cis.ohio-state.edu (David J Stucki) Date: Mon, 17 Sep 90 13:53:33 -0400 Subject: Application of the random neural network model to NP-Hard problems In-Reply-To: Erol Gelenbe's message of Mon, 17 Sep 90 15:22:28 +2 <9009171421.AA20868@inria.inria.fr> Message-ID: <9009171753.AA13351@retina.cis.ohio-state.edu> I would like a copy of the report you advertised on connectionists. thanks, David J Stucki Dept. of Computer and Information Science 2036 Neil Avenue Mall Columbus, Ohio 43210 From kris at boulder.Colorado.EDU Mon Sep 17 19:24:33 1990 From: kris at boulder.Colorado.EDU (Kris Johnson) Date: Mon, 17 Sep 90 17:24:33 MDT Subject: Mactivation Word docs coming to ftp Message-ID: <9009172324.AA10374@fred.colorado.edu> sounds like james at visual2 asks a lot for not much From gmdzi!st at relay.EU.net Sat Sep 15 11:21:06 1990 From: gmdzi!st at relay.EU.net (Sebastian Thrun) Date: Sat, 15 Sep 90 13:21:06 -0200 Subject: No subject Message-ID: <9009151121.AA02208@gmdzi.UUCP> The following might be interesting for everybody who works with the PDP backpropagation simulator and has access to a Connection Machine: ******************************************************** ** ** ** PDP-Backpropagation on the Connection Machine ** ** ** ******************************************************** For testing our new Connection Machine CM/2 I extended the PDP backpropagation simulator by Rumelhart, McClelland et al.
with a parallel training procedure for the Connection Machine (Interface C/Paris, Version 5). Following some ideas by R.M. Faber and A. Singer, I simply made use of the inherent parallelism of the training set: Each processor on the Connection Machine (there are at most 65536) evaluates the forward and backward propagation phase for one training pattern only. Thus the whole training set is evaluated in parallel and the training time does not depend on the size of this set any longer. Especially for large training sets this reduces the training time greatly. For example: I trained a network with 28 nodes, 133 links and 23 biases to approximate the differential equations for the pole balancing task adopted from Anderson's dissertation. With a training set of 16384 patterns, using the conventional "strain" command, one learning epoch took about 110.6 seconds on a SUN 4/110 - the Connection Machine with this SUN on the frontend managed the same in 0.076 seconds. --> This reduces one week of exhaustive training to approximately seven minutes! (By parallelizing the networks themselves, similar acceleration can be achieved also with smaller training sets.) -------------- The source is written in C (Interface to Connection Machine: PARIS) and can easily be embedded into the PDP software package. All original functions of the simulator are left untouched - it is also still possible to use the extended version without a Connection Machine. If you want to have the source, please mail me! Sebastian Thrun, st at gmdzi.uucp You can also obtain the source via ftp: ftp 129.26.1.90 Name: anonymous Password: ftp> cd pub ftp> cd gmd ftp> get pdp-cm.c ftp> bye From marvit at hplpm.hpl.hp.com Tue Sep 18 17:58:12 1990 From: marvit at hplpm.hpl.hp.com (Peter Marvit) Date: Tue, 18 Sep 90 14:58:12 PDT Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009182158.AA20192@hplpm.hpl.hp.com> A fellow at lunch today asked a seemingly innocuous question. I am embarrassed to say, I do not know the answer. I assume some theoretical work has been done on this subject, but I'm ignorant. So: Are neural networks Turing equivalent? Broadcast responses are fine, e-mail responses will be summarized. -Peter "Definitely non-Turing" From Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU Tue Sep 18 19:02:08 1990 From: Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave.Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Tue, 18 Sep 90 19:02:08 EDT Subject: Are Neural Nets Turing Equivalent? In-Reply-To: Your message of Tue, 18 Sep 90 14:58:12 -0700. <9009182158.AA20192@hplpm.hpl.hp.com> Message-ID: <8770.653698928@DST.BOLTZ.CS.CMU.EDU> The Turing equivalence question has come up on this list before. Here's a simple answer: No finite machine is Turing equivalent. This rules out any computer that physically exists in the real world. You can make non-finite neural nets by assuming, say, numbers with unbounded precision. Jordan Pollack showed in his thesis how to encode a Turing machine tape as two binary fractions, each of which was an activation value of a "neuron". This is no more ridiculous than assuming a tape of unbounded length. If you are willing to allow nets to have an unbounded number of units, then you can use finite precision units to simulate the tape and perhaps build a Turing machine that way; it would depend on whether you view the wiring scheme of the infinite neural net as having a finite or infinite description. (Classical Turing machines have a finite description because you don't have to specify each square of the infinite tape individually.)
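As a concrete illustration of the two-fraction idea, here is a toy Python sketch (an illustration only, not the construction from Pollack's thesis; storing each bit b as the base-4 digit 2b+1 is just one convenient way to keep the encoding unambiguous and the values in [0,1)):

    from fractions import Fraction

    def push(x, b):   # put bit b on top of the stack encoded by x
        return (Fraction(2*b + 1) + x) / 4

    def top(x):       # read the top bit (assumes a non-empty stack)
        return (int(4*x) - 1) // 2

    def pop(x):       # remove the top bit
        return 4*x - (2*top(x) + 1)

    # A tape is two such stacks: the squares to the left of the head and the
    # squares from the head rightwards.  Moving the head right pops from
    # `right` and pushes onto `left` (and symmetrically for a left move).
    left, right = Fraction(0), Fraction(0)
    for b in (1, 0, 1):           # write a tape reading 1,0,1 from the head rightwards
        right = push(right, b)    # (the last bit pushed ends up under the head)
    print(top(right))             # 1, the symbol under the head
    left, right = push(left, top(right)), pop(right)   # move the head one square right
    print(top(right))             # 0

Every tape operation is ordinary arithmetic on two rational numbers, which is why the two "tape neurons" need unbounded precision rather than an unbounded number of units.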
If you view the tape as external to the Turing machine, then all that's left inside is a finite state automaton, and those can easily be implemented with neural nets. -- Dave From sun at umiacs.UMD.EDU Tue Sep 18 21:24:40 1990 From: sun at umiacs.UMD.EDU (Guo-Zheng Sun) Date: Tue, 18 Sep 90 21:24:40 -0400 Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009190124.AA00328@neudec.umiacs.UMD.EDU> Recently, we studied the computability of neural nets and proved sevaral theorems. Basically, the results are that (1) Given an arbitrary Turing machine there exists a uniform recurrent neural net with second-order connection weights which can simulate it. (2) Therefore, neural nets can simulate universal Turing machines. The preprint will be available soon. Guo-Zheng Sun Institute for Advanced Computer Studies University of Maryland From sontag at control.rutgers.edu Tue Sep 18 21:38:13 1990 From: sontag at control.rutgers.edu (sontag@control.rutgers.edu) Date: Tue, 18 Sep 90 21:38:13 BST Subject: Reference to article on Neural Nets for cancer diagnosis Message-ID: <9009190138.AA13055@control.rutgers.edu> People in this list may be interested in reading the latest (September) SIAM News. The leading front-page article is about cancer diagnosis via neural nets (the title says "linear programming", but the text explains the relation to nn's). The method appears to be extremely succesful, almost 100% accurate for breast cancer. An outline of the algorithm is as follows (as I understood it from the article): if the data is not linearly separable, then first sandwich the intersection of the convex hulls of the training data between two hyperplanes. Ignore the rest (already separated), restrict to this sandwich region, and iterate. The authors prove (in the references) that this gives a polynomial time algorithm (essentially using LP for each sandwich construction; the "size" is unclear), presumably under the assumption that a polyhedral boundary exists. The references are to various papers in SIAM publications and IEEE/IT. The authors are Olvi L. Mangasarian (olvi at cs.wisc.edu) from the Math and CS depts at Madison, and W.H. Wolberg from the Wisconsin medical school. -eduardo From harnad at clarity.Princeton.EDU Wed Sep 19 11:04:46 1990 From: harnad at clarity.Princeton.EDU (Stevan Harnad) Date: Wed, 19 Sep 90 11:04:46 EDT Subject: Anderson/Cognition: BBS Call for Commentators Message-ID: <9009191504.AA15649@psycho.Princeton.EDU> Below is the abstract of a forthcoming target article to appear in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal that provides Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator on this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you are selected as a commentator. ____________________________________________________________________ IS HUMAN COGNITION ADAPTIVE? John R. 
Anderson Psychology Department Carnegie Mellon University Pittsburgh, PA 15213-3890 ABSTRACT: Can the output of human cognition be predicted from the assumption that it is an optimal response to the information-processing demands of the environment? A methodology called rational analysis is described for deriving predictions about cognitive phenomena using optimization assumptions. The predictions flow from the statistical structure of the environment and not the assumed structure of the mind. Bayesian inference is used, assuming that people start with a weak prior model of the world which they integrate with experience to develop stronger models of specific aspects of the world. Cognitive performance maximizes the difference between the expected gain and cost of mental effort. (1) Memory performance can be predicted on the assumption that retrieval seeks a maximal trade-off between the probability of finding the relevant memories and the effort required to do so; in (2) categorization performance there is a similar trade-off between accuracy in predicting object features and the cost of hypothesis formation; in (3) causal inference the trade-off is between accuracy in predicting future events and the cost of hypothesis formation; and in (4) problem solving it is between the probability of achieving goals and the cost of both external and mental problem-solving search. The implementation of these rational prescriptions in a neurally plausible architecture is also discussed. ------------------ A draft is retrievable by anonymous ftp from princeton.edu in directory /ftp/pub/harnad as compressed file anderson.article.Z Retrieve using "binary". Use scribe to print. This can't be done from Bitnet directly, but there is a fileserver called bitftp at pucc.bitnet that will do it for you. Send it the one line message: help File must be uncompressed after receipt. From pollack at cis.ohio-state.edu Wed Sep 19 10:47:30 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Wed, 19 Sep 90 10:47:30 -0400 Subject: Are Neural Nets Turing Equivalent? In-Reply-To: Guo-Zheng Sun's message of Tue, 18 Sep 90 21:24:40 -0400 <9009190124.AA00328@neudec.umiacs.UMD.EDU> Message-ID: <9009191447.AA05299@dendrite.cis.ohio-state.edu> In my 1987 dissertation, as Touretzky pointed out, I assumed rational output values between 0 and 1 for two neurons in order to represent an unbounded binary tape. Besides that assumption, the construction of the "neuring machine" (sorry) required linear combinations, thresholds, and multiplicative connections. Linear combinations are subsumed by multiplicative connections and a bias unit. Without thresholds you can't make a decision, and without multiplicative connections, you can't (efficiently) gate rational values, which is necessary for moving in both directions on the tape. Proofs of computability should not necessarily be used as architectures to build upon further (which I think a few people misunderstood my thesis to imply), but as an indication of what collection of primitives are necessary in a machine. One wouldn't want to build a theoretical stored program computer without some sort of conditional branch, or a practical stored program computer without a connection between program and data memory. I took this result to argue that higher-order connections are crucial to general purpose neural-style computation. It is interesting to note that GZ Sun's theorem involves second-order connection weights. It probably involves thresholds as well.
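A toy Python illustration of the encoding discussed above (just the tape trick, not the full "neuring machine" construction; every detail below is invented for illustration): the two halves of an unbounded binary tape live in two rational "activation values" in [0, 1), and the head moves by shifting bits between them with multiplications by 2 and 1/2, which is where the multiplicative (gating) connections come in.

from fractions import Fraction

# Each tape half is a number in [0, 1); the bit nearest the head is the most
# significant fractional bit.  Moving the head is pure arithmetic.
def push(half, bit):
    """Prepend a bit next to the head: half' = (half + bit) / 2."""
    return (half + bit) / 2

def pop(half):
    """Remove and return the bit nearest the head."""
    doubled = half * 2
    bit = int(doubled >= 1)            # 0 or 1
    return bit, doubled - bit

left = right = Fraction(0)             # a blank (all-zero) tape
for b in (1, 0, 1):                    # write three bits to the right of the head
    right = push(right, b)             # (the last bit pushed ends up nearest the head)

for _ in range(2):                     # move the head right twice:
    bit, right = pop(right)            # bits stream out of the right half ...
    left = push(left, bit)             # ... and into the left half
print("left half:", left, " right half:", right)   # -> 1/4 and 1/2

With unbounded precision the two numbers can hold an arbitrarily long tape; with finite precision the same scheme collapses into a (large) finite state machine, which is exactly the distinction being argued over in this thread.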
I temporarily posted a revised version of the chapter of my thesis in neuroprose, as pollack.neuring.ps.Z Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 From elman at amos.ucsd.edu Wed Sep 19 11:59:49 1990 From: elman at amos.ucsd.edu (Jeff Elman) Date: Wed, 19 Sep 90 08:59:49 PDT Subject: job announcement: UCSD Cognitive Science Message-ID: <9009191559.AA10400@amos.ucsd.edu> Assistant Professor Cognitive Science UNIVERSITY OF CALIFORNIA, SAN DIEGO The Department of Cognitive Science at UCSD expects to receive permission to hire one person at the assistant professor level (tenure-track). We seek someone whose interests cut across conventional disciplines. The Department takes a broadly based approach covering experimental, theoretical, and computational investigations of the biological basis of cognition, cognition in individuals and social groups, and machine intelligence. Candidates should send a vita, reprints, a short letter describing their background and interests, and names and addresses of at least three references to: UCSD Search Committee/Cognitive Science 0515e 9500 Gilman Dr. La Jolla, CA 92093-0515-e Applications must be received prior to January 15, 1991. Salary will be commensurate with experience and qualifications, and will be based upon UC pay schedules. Women and minorities are especially encouraged to apply. The University of California, San Diego is an Affirmative Action/Equal Opportunity Employer. From FRANKLINS%MEMSTVX1.BITNET at VMA.CC.CMU.EDU Wed Sep 19 17:01:00 1990 From: FRANKLINS%MEMSTVX1.BITNET at VMA.CC.CMU.EDU (FRANKLINS%MEMSTVX1.BITNET@VMA.CC.CMU.EDU) Date: Wed, 19 Sep 90 16:01 CDT Subject: Are Neural Nets Turing Equivalent? Message-ID: Here are some additional references on the "Turing equivalence of neural networks question". The term "neural network" will refer to networks that are discrete in time and in activation values. In their first paper, McCulloch and Pitts showed that logical gates can easily be simulated by threshold networks. They also claimed, but did not prove, Turing equivalence. W.S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity", Bull. Math. Biophys. 5(1943) 115--133. Hartley and Szu noted that finite neural networks were computationally equivalent to finite state machines. They also asserted Turing equivalence of potentially infinite (unbounded) neural networks and sketched a proof that a Turing machine can be simulated by a neural network. R. Hartley and H. Szu, "A Comparison of the Computational Power of Neural Network Models", in Proc. IEEE First International Conference on Neural Networks (1987) III 17--22. Max Garzon and I gave a detailed description of a neural network simulation of an arbitrary Turing machine. The network would stabilize if and only if the Turing machine halts. Thus the stability problem for neural networks turns out to be Turing unsolvable. One could argue, I think, that our unbounded neural network simulation of a Turing machine even has a finite description. Stan Franklin and Max Garzon, "Neural Computability" in O. M. Omidvar, ed., Progress In Neural Networks, vol 1, Ablex, Norwood NJ, 1990. Unbounded neural networks (without finite descriptions) are strictly more powerful than Turing machines. Such a beast, if there were one, could solve the halting problem, for example, by essentially reducing it to a lookup table. 
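For the McCulloch and Pitts observation cited above, that logical gates are easily simulated by threshold units, here is a small Python sketch with hand-picked weights and thresholds (illustrative only; nothing below is taken from the 1943 paper).

# A unit fires (outputs 1) iff the weighted sum of its inputs reaches its threshold.
def threshold_unit(weights, theta):
    return lambda *inputs: int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

AND = threshold_unit((1, 1), 2)    # needs both inputs on
OR  = threshold_unit((1, 1), 1)    # needs at least one input on
NOT = threshold_unit((-1,), 0)     # an inhibitory weight flips the input

for a in (0, 1):
    for b in (0, 1):
        print(a, b, " AND:", AND(a, b), " OR:", OR(a, b))
print("NOT 0:", NOT(0), "  NOT 1:", NOT(1))

Any fixed boolean circuit can be assembled from such gates; the hard part of the equivalence claims above is the unbounded tape, since a strictly finite assembly of gates never gets past finite-state power.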
But neural networks are computationally equivalent to cellular automata for graphs of finite bandwidth. Max and I proved this using a universal neural network. Max Garzon and Stan Franklin, "Computation on graphs", in O. M. Omidvar, ed., Progress in Neural Networks, vol 2, Ablex, Norwood NJ, 1990, to appear. Max Garzon and Stan Franklin, "Neural computability II", Proc. 3rd Int. Joint Conf. on Neural Networks, Washington, D.C. 1989 I, 631-637 Stan Franklin Math Sciences Memphis State Memphis TN 38152 BITNET:franklins at memstvx1 From peterc at chaos.cs.brandeis.edu Thu Sep 20 03:17:05 1990 From: peterc at chaos.cs.brandeis.edu (Peter Cariani) Date: Thu, 20 Sep 90 03:17:05 edt Subject: Are Neural Nets Turing Equivalent? In-Reply-To: Peter Marvit's message of Tue, 18 Sep 90 14:58:12 PDT <9009182158.AA20192@hplpm.hpl.hp.com> Message-ID: <9009200717.AA09682@chaos.cs.brandeis.edu> Dear Peter "Definitely non-Turing", When you say "Are neural networks Turing equivalent?" are you talking about strictly finite neural networks (finite # elements, finite & discrete state sets for each element) or are you allowing for potentially-infinite neural networks (indefinitely extendible # elements and/or state sets)? The first, I think, are equivalent to finite state automata (or Turing machines with fixed, finite tapes) while the second would be equivalent to Turing machines with potentially infinite tapes. I would argue that potentially infinite tapes are purely Platonic constructions; no physically realized (not to mention humanly usable) automaton can have an indefinitely-extendible tape and operate without temporal bounds (i.e. the stability of the physical computational device, the lifespan of the human observer(s)). For this reason, it could be argued that potentially-infinite automata (and the whole realm of computability considerations) really have no relevance to real-world computational problems, whereas finite automata and computational complexity (including speed & reliability issues) have everything to do with real-world computation. Does anyone have an example where computability issues (Godel's Proof, for example) have any bearing whatsoever on the problems we daily encounter with our finite machines? Do computability considerations in any way constrain what we can (or cannot) do beyond the constraints already imposed by finite memory, limited processing speed, and imperfectly reliable elements? -Peter "Definitely non-Turing, but possibly for different reasons" P.S. If we consider neural nets as physical adaptive devices rather than purely formal constructions (as in the early Perceptrons and Sceptrons, which actually had real sensors attached to the computational part), then there are contingent measurement processes, which, strictly speaking, are not formal operations. Turing machines, finite or potentially infinite, simply don't sense anything beyond what's already on their tapes and/or in their state-transition tables, while robotically implemented neural nets operate contingent upon (often unpredictable) external events and circumstances. From uhr at cs.wisc.edu Thu Sep 20 13:26:34 1990 From: uhr at cs.wisc.edu (Leonard Uhr) Date: Thu, 20 Sep 90 12:26:34 -0500 Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009201726.AA00355@thor.cs.wisc.edu> Peter Cariani's description of the equivalence of NNs to finite state automata - but to Turing machines when given a potentially infinite capacity (either the traditional memory tape or unbounded processors) - is good, and nicely simple.
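A hedged Python sketch of the finite case just described: the transition table of a two-state parity recognizer is wired into two layers of threshold units (one conjunction unit per table entry, then one disjunction unit per next state). States and input symbols are one-hot, and everything here is invented for illustration.

# A strictly finite recurrent net of threshold units acting as a finite state
# automaton: nothing more, nothing less.
def threshold(z, theta):
    return 1 if z >= theta else 0

STATES = ["even", "odd"]
SYMBOLS = [0, 1]
TABLE = {("even", 0): "even", ("even", 1): "odd",
         ("odd", 0): "odd", ("odd", 1): "even"}

def net_step(state_vec, input_vec):
    # Layer 1: one conjunction unit per (state, symbol) pair.
    pair_units = {(s, a): threshold(state_vec[i] + input_vec[j], 2)
                  for i, s in enumerate(STATES)
                  for j, a in enumerate(SYMBOLS)}
    # Layer 2: one disjunction unit per next state.
    return [threshold(sum(v for (s, a), v in pair_units.items()
                          if TABLE[(s, a)] == nxt), 1)
            for nxt in STATES]

state = [1, 0]                                   # start in "even"
for bit in [1, 1, 0, 1]:
    input_vec = [1, 0] if bit == 0 else [0, 1]
    state = net_step(state, input_vec)
print("final state:", STATES[state.index(1)])    # three 1s seen -> "odd"

Any finite state automaton can be wired up this way, which is the sense in which a strictly finite net buys exactly finite-state power and no more.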
Two comments: NN processors execute directly on what flows into them; TMs interpret processes stored on the memory tape - so it's more natural to think of NNs with potentially infinite numbers of processors, rather than memory plus the processors now interpreting. You can add sensors to a TM as well as an NN - and will need to for the same reasons. As soon as you actually realize a "potentially infinite" TM you must give it sensors that in effect make the real world the tape (e.g., TV, with motors to move it from place to place). So there's really no difference. Len Uhr From ANDERSON%BROWNCOG.BITNET at mitvma.mit.edu Fri Sep 21 15:00:00 1990 From: ANDERSON%BROWNCOG.BITNET at mitvma.mit.edu (ANDERSON%BROWNCOG.BITNET@mitvma.mit.edu) Date: Fri, 21 Sep 90 15:00 EDT Subject: Technical Report Message-ID: A technical report is available: "Why, having so many neurons, do we have so few thoughts?" Technical Report 90-1 Brown University Department of Cognitive and Linguistic Sciences James A. Anderson Department of Cognitive and Linguistic Sciences Box 1978 Brown University Providence, RI 02912 This is a chapter to appear in: Relating Theory and Data Edited by W.E. Hockley and S. Lewandowsky, Hillsdale, NJ: Erlbaum (LEA) Abstract Experimental cognitive psychology often involves recording two quite distinct kinds of data. The first is whether the computation itself is done correctly or incorrectly and the second records how long it took to get an answer. Neural network computations are often loosely described as being `brain-like.' This suggests that it might be possible to model experimental reaction time data simply by seeing how long it takes for the network to generate the answer and error data by looking at the computed results in the same system. Simple feedforward nets usually do not give direct computation time data. However, network models realizing dynamical systems can give `reaction times' directly by noting the time required for the network computation to be completed. In some cases genuine random processes are necessary to generate differing reaction times, but in other cases deterministic, noise-free systems can also give distributions of reaction times. This report can be obtained by sending an email message to: LI700008 at brownvm.BITNET or anderson at browncog.BITNET and asking for Cognitive Science Technical Report 90-1 on reaction times, or by sending a note by regular mail to the address above. From sun at umiacs.UMD.EDU Thu Sep 20 20:39:54 1990 From: sun at umiacs.UMD.EDU (Guo-Zheng Sun) Date: Thu, 20 Sep 90 20:39:54 -0400 Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009210039.AA01890@neudec.umiacs.UMD.EDU> In addition to Peter Cariani's description, I would like to make a short comment: Is any finite state machine with a potentially unlimited number of states (e.g. a recurrent neural net state machine with potentially unbounded precision) equivalent to a Turing machine? The answer is certainly "NO", because the classical definition of a Turing machine requires both an infinite tape and a set of processing rules with finite description. Therefore, whether we say a neural net is equivalent to a finite automaton with a potentially unlimited number of states or equivalent to a Turing machine depends on whether we can find a set of transition rules with a finite description (or, in Touretzky's words, "the wiring scheme with finite description").
Guo-Zheng Sun From bmb at Think.COM Thu Sep 20 21:01:26 1990 From: bmb at Think.COM (bmb@Think.COM) Date: Thu, 20 Sep 90 21:01:26 EDT Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009210101.AA01186@regin.think.com> As I recall, another interesting point that somebody (I don't remember who it was) brought up in our last discussion about this topic is the possibility of a difference in the capabilities of analog and digital implementations of neural nets (or of any other type of computation for that matter). This speculation is based on the work of Pour-El and Richards at the Department of Mathematics at the University of Minnesota. The general idea is as follows: Real numbers that can be produced by a Turing machine are called "computable." There must be a countable number of these, since there is a countable number of Turing machines. (Note: Since the real numbers are uncountable, there's clearly lots more of them than there are computable numbers.) Now, Pour-El and Richards showed that the wave equation, with computable initial conditions, evolving for a computable length of time, can give rise to noncomputable values of the dependent variable. This, of course, makes one wonder whether or not the same is true for other continuum equations of mathematical physics, such as those that govern the passage of signals through wires (basically the wave equation with complicated boundary conditions and other bells and whistles) and semiconductors (ditto plus some nonlinearities). Since analog computation takes place in the presumably continuous real world where physical processes are governed by continuum equations, whereas digital circuitry goes through great pains to wash out all but the "binariness" of signals, one might conclude that there is a possibility that analog circuitry can do things that digital circuitry can't. This conclusion is far from clear, but I'd say it's definitely worth thinking about. In "The Emperor's New Mind," Penrose cites the above work in his attack on Strong AI (since it seems to imply that there is a stronger notion of computability than Turing equivalence). In any event, here's the reference in case anybody's interested: M. Pour-El, I. Richards, Advances in Mathematics, Vol. 39, pp. 215-239 (1981). Bruce Boghosian Thinking Machines Corporation bmb at think.com From CFoster at cogsci.edinburgh.ac.uk Fri Sep 21 11:14:26 1990 From: CFoster at cogsci.edinburgh.ac.uk (CFoster@cogsci.edinburgh.ac.uk) Date: Fri, 21 Sep 90 11:14:26 BST Subject: Are Neural Nets Turing Equivalent Message-ID: <17705.9009211014@scott.cogsci.ed.ac.uk> Given the discussion thus far, and in particular Dave Touretzky and Peter Cariani's comments on the discrepancy between theoretical computer science and inherently finite real cognitive and computational systems, you may be interested in the following. I have just submitted my Ph.D. thesis 'Algorithms, Abstraction and Implementation: A Massively Multilevel Theory of Strong Equivalence of Complex Systems'. It is a formalisation of a notion of algorithms that can be used across languages, hardware and architectures (notably connectionist and classical -- this was my starting point), and that can ground a stronger equivalence than just input-output (weak) equivalence. I spell this out as equivalence in terms of states which systems pass through -- at a level of description. The main point is this: I started with the assumption of finiteness. Algorithms are defined as finite sets of finite sequences of finite states. 
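A minimal Python sketch of the kind of strictly finite machine at issue here: an ordinary Turing-style controller, but over a tape of fixed length, so that inputs which drive the head off either end are simply left undefined. The little rule table below (a machine that appends a 1 to a block of 1s) is invented purely for illustration.

# (state, symbol) -> (symbol to write, head move, new state)
RULES = {
    ("scan", 1): (1, +1, "scan"),
    ("scan", 0): (1, 0, "halt"),
}

def run_finite_tape(tape, state="scan", head=0, max_steps=100):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            return tape
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
        if not 0 <= head < len(tape):
            raise ValueError("ran off the finite tape: input undefined")
    raise ValueError("no halt within the step bound")

print(run_finite_tape([1, 1, 0, 0, 0]))          # -> [1, 1, 1, 0, 0]
try:
    run_finite_tape([1, 1, 1, 1, 1])
except ValueError as e:
    print("second input:", e)                    # runs off the end, so undefined here

Bounding the tape in this way is what turns full Turing computability into the more modest notion developed below.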
In trying to relate them to Turing computability, I ended up characterising the class of functions computed (or at least described) by them as 'bounded computable functions' or, to put it another way, as the class of functions computed by 'finite tape machines'. In contrast to some common usage, a finite tape machine (a Turing machine with a finite tape) is NOT the same as a finite state machine. The latter is generally defined over all integers and actually gets its constraints from restrictions on HOW it reads the infinite tape, not from its restriction to finitely many states at all. A general Turing machine only has finitely many states of course, so this cannot be the interesting distinction. Somewhat surprisingly, the super-finite bounded computable functions do not even seem to be a subset of computable functions, but possibly of partially computable functions. This is because any inputs not defined for the finite tape machine may actually be defined and cause the system to halt with a sensible output at a lower level of description (depending on the implementation), but then again they may not be defined there either. We just don't know what happens to them. This is quite similar to the case for unexpected inputs to actual computer systems. C. Foster From GOLDFARB%UNB.CA at VMA.CC.CMU.EDU Thu Sep 27 16:28:09 1990 From: GOLDFARB%UNB.CA at VMA.CC.CMU.EDU (GOLDFARB%UNB.CA@VMA.CC.CMU.EDU) Date: Thu, 27 Sep 90 17:28:09 ADT Subject: Turing Machines and New Reconfigurable Learning Machines Message-ID: In connection with the discussion on the relation between Turing machines (TM) and neural nets (NN), I would like to draw your attention to my recent paper in Pattern Recognition Vol.23 No.6 (June 1990) pp.595-616, "On the Foundations of Intelligent Processes I: An Evolving Model for Pattern Learning" (as well as several other submitted papers). In it I proposed a finite model of a learning machine (LM) which could be viewed as a far-reaching "symbolic" generalization of the NN and which is more powerful than any other known finite machine (including the NN). This learning power is achieved only because the LM embodies the first model of a reconfigurable machine, i.e., a machine that can learn any set of classes by modifying its set of operations. In retrospect, the idea of the reconfigurable machine appears to be quite natural and the only realistic alternative if we want a finite machine to achieve potentially unbounded learning capabilities in some infinite environment. The new model is quite different from the known NNs, and I expect some may not see any similarity immediately. NNs operate on vectors, while the LM can operate on any chosen pattern representation (vectors, strings, trees, etc.; discrete or continuous). NNs have essentially two groups of numeric operations: the accumulating additive operations and the transmitting nonlinear operations (the second group does not "actively" participate in the learning, since no weights are associated with these operations). In addition, the structure (global and local) of the NN imposes restrictions on the variety of metric point transformations that can be realized by each layer as well as by the NN itself (think of each layer as changing the metric structure of the input vector space; see section 5.5 of Y.-H. Pao, "Adaptive Pattern Recognition and Neural Networks", Addison-Wesley, 1989). In the proposed model the operations are not necessarily numeric operations but rather they correspond to the chosen pattern representation.
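A toy Python illustration, entirely invented and much cruder than the model in the paper, of operations that match the pattern representation rather than being numeric: string patterns compared by edit operations, each operation carrying a weight, so that changing the weighting changes which training class a test pattern falls nearest to.

# Weighted edit distance: the "operations" are insertions, deletions and
# substitutions, each with its own (learnable) weight.  Classes and strings
# below are hypothetical.
def weighted_edit_distance(a, b, w_ins, w_del, w_sub):
    d = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        d[i][0] = d[i - 1][0] + w_del
    for j in range(1, len(b) + 1):
        d[0][j] = d[0][j - 1] + w_ins
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,
                          d[i][j - 1] + w_ins,
                          d[i - 1][j - 1] + sub)
    return d[len(a)][len(b)]

class_a, class_b, test = ["abc"], ["abxd"], "abcd"
for weights in [(1.0, 1.0, 0.5), (0.5, 0.5, 3.0)]:       # two candidate weightings
    da = min(weighted_edit_distance(test, p, *weights) for p in class_a)
    db = min(weighted_edit_distance(test, p, *weights) for p in class_b)
    print(weights, "-> nearest class:", "A" if da < db else "B")   # prints B, then A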
During learning, to find the best separation of the training classes, the LM seeks an optimal weighting scheme for its current set of operations. If the current set of operations is not sufficient, new operations, formed as the compositions of the current operations, are gradually introduced (with the help of several functions defined on the current space of weights) and the optimization process is repeated. Even for vector patterns, one of the most important distinctions between the NN and the LM is that the operations realized by the nodes of the NN are connected in a fixed manner while the operations of the LM are decoupled (to the extent that they have to be decoupled) and they may grow in number as the need arises during learning. There is a strong evidence that the learning stage for the LM always converges, is much more efficient, and produces a much smaller machine than the NN. From jose at learning.siemens.com Fri Sep 28 08:04:54 1990 From: jose at learning.siemens.com (Steve Hanson) Date: Fri, 28 Sep 90 08:04:54 EDT Subject: NIPS PROGRAM --Correction Message-ID: <9009281204.AA09239@learning.siemens.com.siemens.com> We had inadvertently excluded some of the posters from the preliminary program. We apologize for any confusion that may have caused. --Steve Hanson Below is a complete and correct version of the NIPS preliminary program. ------------------------------------------- NIPS 1990 Preliminary Program, November 26-29, Denver, Colorado Monday, November 26, 1990 12:00 PM: Registration Begins 6:30 PM: Reception and Conference Banquet 8:30 PM: After Banquet Talk, "Cortical Memory Systems in Humans", by Antonio Damasio. Tuesday, November 27, 1990 7:30 AM: Continental Breakfast 8:30 AM: Oral Session 1: Learning and Memory 10:30 AM: Break 11:00 AM: Oral Session 2: Navigation and Planning 12:35 PM: Poster Preview Session I, Demos 2:30 PM: Oral Session 3: Temporal and Real Time Processing 4:10 PM: Break 4:40 PM: Oral Session 4: Representation, Learning, and Generalization I 6:40 PM: Free 7:30 PM: Refreshments and Poster Session I Wednesday, November 28, 1990 7:30 AM: Continental Breakfast 8:30AM: Oral Session 5: Visual Processing 10:20 AM: Break 10:50 AM: Oral Session 6: Speech Processing 12:20 PM: Poster Preview Session II, Demos 2:30 PM: Oral Session 7: Representation, Learning, and Generalization II 4:10 PM: Break 4:40 PM: Oral Session 8: Control 6:40 PM: Free 7:30 PM: Refreshments and Poster Session II Thursday, November 29, 1990 7:30 AM: Continental Breakfast 8:30 AM: Oral Session 9: Self-Organization and Unsupervised Learning 10:20 AM: Break 10:50 AM: Session Continues 12:10 PM: Conference Adjourns 5:00 PM Reception and Registration for Post-Conference Workshop (Keystone, CO) Friday, November 30 -- Saturday, December 1, 1990 Post-Conference Workshops at Keystone ------------------------------------------------------------------------------ ORAL PROGRAM Monday, November 26, 1990 12:00 PM: Registration Begins 6:30 PM: Reception and Conference Banquet 8:30 PM: After Banquet Talk, "Cortical Memory Systems in Humans", by Antonio Damasio. Tuesday, November 27, 1990 7:30 AM: Continental Breakfast ORAL SESSION 1: LEARNING AND MEMORY Session Chair: John Moody, Yale University. 8:30 AM: "Multiple Components of Learning and Memory in Aplysia", by Thomas Carew. 9:00 AM: "VLSI Implementations of Learning and Memory Systems: A Review", by Mark Holler. 9:30 AM: "A Short-Term Memory Architecture for the Learning of Morphophonemic Rules", by Michael Gasser and Chan-Do Lee. 
9:50 AM "Short Term Active Memory: A Recurrent Network Model of the Neural Mechanism", by David Zipser. 10:10 AM "Direct Memory Access Using Two Cues: Finding the Intersection of Sets in a Connectionist Model", by Janet Wiles, Michael Humphreys and John Bain. 10:30 AM Break ORAL SESSION 2: NAVIGATION AND PLANNING Session Chair: Lee Giles, NEC Research. 11:00 AM "Real-Time Autonomous Robot Navigation Using VLSI Neural Networks", by Alan Murray, Lionel Tarassenko and Michael Brownlow. 11:20 AM "Planning with an Adaptive World Model" by Sebastian B. Thrun, Knutt Moller and Alexander Linden . 11:40 AM "A Connectionist Learning Control Architecture for Navigation", by Jonathan Bachrach. 12:00 PM Spotlight on Language: Posters La1 and La3. 12:10 PM Spotlight on Applications: Posters App1, App6, App7, App10, and App11. 12:35 PM Poster Preview Session I, Demos ORAL SESSION 3: TEMPORAL AND REAL TIME PROCESSING Session Chair: Josh Alspector, Bellcore 2:30 PM "Learning and Adaptation in Real Time Systems", by Carver Mead. 3:00 PM "Applications of Neural Networks in Video Signal Processing", by John Pearson. 3:30 PM "Predicting the Future: A Connectionist Approach", by Andreas S. Weigend, Bernardo Huberman and David E. Rumelhart. 3:50 PM "Algorithmic Musical Composition with Melodic and Stylistic Constraints", by Michael Mozer and Todd Soukup. 4:10 PM Break ORAL SESSION 4: REPRESENTATION, LEARNING, AND GENERALIZATION I Session Chair: Gerry Tesauro, IBM Research Labs. 4:40 PM "An Overview of Representation and Convergence Results for Multilayer Feedforward Networks", by Hal White . 5:10 PM "A Simplified Linear-Threshold-Based Neural Network Pattern Classifier", by Terrence L. Fine. 5:30 PM "A Novel approach to predicition of the 3-dimensional structures of protein backbones by neural networks", by H. Bohr, J. Bohr, S. Brunak, R.M.J. Cotterill, H. Fredholm, B. Lautrup and S.B. Petersen. 5:50 PM "On the Circuit Complexity of Neural Networks", by Vwani Roychowdhury, Kai- Yeung Siu, Alon Orlitsky and Thomas Kailath . 6:10 PM Spotlight on Learning and Generalization: Posters LG2, LG3, LG8, LS2, LS5, and LS8. 6:40 PM Free 7:30 PM Refreshments and Poster Session I Wednesday, November 28, 1990 7:30 AM Continental Breakfast ORAL SESSION 5: VISUAL PROCESSING Session Chair: Yann Le Cun, AT&T Bell Labs 8:30 AM "Neural Dynamics of Motion Segmentation", by Ennio Mingolla. 9:00 AM "VLSI Implementation of a Network for Color Constancy", by Andrew Moore, John Allman, Geoffrey Fox and Rodney Goodman. 9:20 AM "Optimal Filtering in the Salamander Retina", by Fred Rieke, Geoffrey Owen and William Bialek. 9:40 AM "Grouping Contour Elements Using a Locally Connected Network", by Amnon Shashua and Shimon Ullman. 10:00 AM Spotlight on Visual Motion Processing: Posters VP3, VP6, VP9, and VP12. 10:20 AM Break ORAL SESSION 6: SPEECH PROCESSING Session Chair: Richard Lippmann, MIT Lincoln Labs 10:50 AM "From Speech Recognition to Understanding: Development of the MIT, SUMMIT, and VOYAGER Systems", by James Glass. 11:20 PM "Speech Recognition using Connectionist Approaches", by K.Chouki, S. Soudoplatoff, A. Wallyn, F. Bimbot and H. Valbret. 11:40 AM "Continuous Speech Recognition Using Linked Predictive Neural Networks", by Joe Tebelskis, Alex Waibel and Bojan Petek. 12:00 PM Spotlight on Speech and Signal Processing: Posters Sig1, Sig2, Sp2, and Sp7. 12:20 PM Poster Preview Session II, Demos ORAL SESSION 7: REPRESENTATION, LEARNING AND GENERALIZATION II Session Chair: Steve Hanson, Siemens Research. 
2:30 PM "Learning and Understanding Functions of Many Variables Through Adaptive Spline Networks", by Jerome Friedman. 3:00 PM "Connectionist Modeling of Generalization and Classification", by Roger Shepard. 3:30 PM "Bumptrees for Efficient Function, Constraint, and Classification Learning", by Stephen M.Omohundro. 3:50 PM "Generalization Properties of Networks using the Least Mean Square Algorithm", by Yves Chauvin. 4:10 PM Break ORAL SESSION 8: CONTROL Session Chair: David Touretzky, Carnegie-Mellon University. 4:40 PM "Neural Network Application to Diagnostics and Control of Vehicle Control Systems", by Kenneth Marko. 5:10 PM "Neural Network Models Reveal the Organizational Principles of the Vestibulo- Ocular Reflex and Explain the Properties of its Interneurons", by T.J. Anastasio. 5:30 PM "A General Network Architecture for Nonlinear Control Problems", by Charles Schley, Yves Chauvin, Van Henkle and Richard Golden. 5:50 PM "Design and Implementation of a High Speed CMAC Neural Network Using Programmable CMOS Logic Cell Arrays", by W. Thomas Miller, Brain A. Box, Erich C. Whitney and James M. Glynn. 6:10 PM Spotlight on Control: Posters CN2, CN6, and CN7. 6:25 PM Spotlight on Oscillations: Posters Osc1, Osc2, and Osc3. 6:40 PM Free 7:30 PM Refreshments and Poster Session II Thursday, November 29, 1990 7:30 AM Continental Breakfast ORAL SESSION 9: SELF ORGANIZATION AND UNSUPERVISED LEARNING Session Chair: Terry Sejnowki, The Salk Institute. 8:30 AM "Self-Organization in a Developing Visual Pattern", by Martha Constantine-Paton. 9:00 AM "Models for the Development of Eye-Brain Maps", by Jack Cowan. 9:20 AM "VLSI Implementation of TInMANN", by Matt Melton, Tan Pahn and Doug Reeves. 9:40 AM "Fast Adaptive K-Means Clustering", by Chris Darken and John Moody. 10:00 AM "Learning Theory and Experiments with Competitive Networks", by Griff Bilbro and David Van den Bout. 10:20 AM Break 10:50 AM "Self-Organization and Non-Linear Processing in Hippocampal Neurons", by Thomas H. Brown, Zachary Mainen, Anthony Zador and Brenda Claiborne. 11:10 AM "Weight-Space Dynamics of Recurrent Hebbian Networks", by Todd K. Leen. 11:30 AM "Discovering and Using the Single Viewpoint Constraint", by Richard S. Zemel and Geoffrey Hinton. 11:50 AM "Task Decompostion Through Competition in A Modular Connectionist Architecture: The What and Where Vision Tasks", by Robert A. Jacobs, Michael Jordan and Andrew Barto. 12:10 PM Conference Adjourns 5:00 PM Post-Conference Workshop Begins (Keystone, CO) ------------------------------------------------------------------------------ POSTER PROGRAM POSTER SESSION I Tuesday, November 27 (* denotes poster spotlight) APPLICATIONS App1* "A B-P ANN Commodity Trader", by J.E. Collard. App2 "Analog Neural Networks as Decoders", by Ruth A. Erlanson and Yaser Abu- Mostafa. App3 "Proximity Effect Corrections in Electron Beam Lithography Using a Neural Network", by Robert C. Frye, Kevin Cummings and Edward Rietman. App4 "A Neural Expert System with Automated Extraction of Fuzzy IF-THEN Rules and Its Application to Medical Diagnosis", by Yoichi Hayashi. App5 "Integrated Segmentation and Recognition of Machine and Hand--printed Characters", by James D. Keeler, Eric Hartman and Wee-Hong Leow. App6* "Training Knowledge-Based Neural Networks to Recognize Genes in DNA Sequences", by Michael O. Noordewier, Geoffrey Towell and Jude Shavlik. App7* "Seismic Event Identification Using Artificial Neural Networks", by John L. Perry and Douglas Baumgardt. 
App8 "Rapidly Adapting Artificial Neural Networks for Autonomous Navigation", by Dean A. Pomerleau. App9 "Sequential Adaptation of Radial Basis Function Neural Networks and its Application to Time-series Prediction", by V. Kadirkamanathan, M. Niranjan and F. Fallside. App10* "EMPATH: Face, Emotion, and Gender Recognition Using Holons", by Garrison W. Cottrell and Janet Metcalf. App11* "Sexnet: A Neural Network Identifies Sex from Human Faces", by B. Golomb, D. Lawrence and T.J. Sejnowski. EVOLUTION AND LEARNING EL1 "Using Genetic Algorithm to Improve Pattern Classification Performance", by Eric I. Chang and Richard P. Lippmann. EL2 "Evolution and Learning in Neural Networks: The Number and Distribution of Learning Trials Affect the Rate of Evolution", by Ron Kessing and David Stork. LANGUAGE La1* "Harmonic Grammar", by Geraldine Legendre, Yoshiro Miyata and Paul Smolensky. La2 "Translating Locative Prepostions", by Paul Munro and Mary Tabasko. La3* "Language Acquisition via Strange Automata", by Jordon B. Pollack. La4 "Exploiting Syllable Structure in a Connectionist Phonology Model", by David S. Touretzky and Deirdre Wheeler. LEARNING AND GENERALIZATION LG1 "Generalization Properties of Radial Basis Functions", by Sherif M.Botros and C.G. Atkeson. LG2* "Neural Net Algorithms That Learn In Polynomial Time From Examples and Queries", by Eric Baum. LG3* "Looking for the gap: Experiments on the cause of exponential generalization", by David Cohn and Geral Tesauro. LG4 "Dynamics of Generalization in Linear Perceptrons ", by A. Krogh and John Hertz. LG5 "Second Order Properties of Error Surfaces, Learning Time, and Generalization", by Yann LeCun, Ido Kanter and Sara Solla. LG6 "Kolmogorow Complexity and Generalization in Neural Networks", by Barak A. Pearlmutter and Ronal Rosenfeld. LG7 "Learning Versus Generalization in a Boolean Neural Network", by Johathan Shapiro. LG8* "On Stochastic Complexity and Admissible Models for Neural Network Classifiers", by Padhraic Smyth. LG9 "Asympotic slowing down of the nearest-neighbor classifier", by Robert R. Snapp, Demetri Psaltis and Santosh Venkatesh. LG10 "Remarks on Interpolation and Recognition Using Neural Nets", by Eduardo D. Sontag. LG11 "Epsilon-Entropy and the Complexity of Feedforward Neural Networks", by Robert C. Williamson. LEARNING SYSTEMS LS1 "Analysis of the Convergence Properties of Kohonen's LVQ", by John S. Baras and Anthony LaVigna. LS2* "A Framework for the Cooperation of Learning Algorithms", by Leon Bottou and Patrick Gallinari. LS3 "Back-Propagation is Sensitive to Initial Conditions", by John F. Kolen and Jordan Pollack. LS4 "Discovering Discrete Distributed Representations with Recursive Competitive Learning", by Michael C. Mozer. LS5* "From Competitive Learning to Adaptive Mixtures of Experts", by Steven J. Nowlan and Geoffrey Hinton. LS6 "ALCOVE: A connectionist Model of Category Learning", by John K. Kruschke. LS7 "Transforming NN Output Activation Levels to Probability Distributions", by John S. Denker and Yann LeCunn. LS8* "Closed-Form Inversion of Backropagation Networks: Theory and Optimization Issues", by Michael L. Rossen. LOCALIZED BASIS FUNCTIONS LBF1 "Computing with Arrays of Bell Shaped Functions Bernstein Polynomials and the Heat Equation", by Pierre Baldi. LBF2 "Function Approximation Using Multi-Layered Neural Networks with B-Spline Receptive Fields", by Stephen H. Lane, David Handelman, Jack Gelfand and Marshall Flax. LBF3 "A Resource-Allocating Neural Network for Function Interpolation" by John Platt. 
LBF4 "Adaptive Range Coding", by B.E. Rosen, J.M. Goodwin and J.J. Vidal. LBF5 "Oriented Nonradial Basis Function Networks for Image Coding and Analysis", by Avi Saha, Jim christian, D.S. Tang and Chuan-Lin Wu. LBF6 "A Tree-Structured Network for Approximation on High-Dimensional Spaces", by T. Sanger. LBF7 "Spherical Units as Dynamic Reconfigurable Consequential Regions and their Implications for Modeling Human Learning and Generalization", by Stephen Jose Hanson and Mark Gluck. LBF8 "Feedforward Neural Networks: Analysis and Synthesis Using Discrete Affine Wavelet Transformations", by Y.C. Pati and P.S. Krishnaprasad. LBF9 "A Network that Learns from Unreliable Data and Negative Examples", by Fredico Girosi, Tomaso Poggio and Bruno Caprile. LBF10 "How Receptive Field Parameters Affect Neural Learning", by Bartlett W. Mel and Stephen Omohundro. MEMORY SYSTEMS MS1 "The Devil and the Network: What Sparsity Implies to Robustness and Memory", by Sanjay Biswas and Santosh Venkatesh. MS2 "Cholinergic modulation selective for intrinsic fiber synapses may enhance associative memory properties of piriform cortex", by Michael E. Hasselmo, Brooke Anderson and James Bower. MS3 "Associative Memory in a Network of 'Biological' Neurons", by Wulfram Gerstner. MS4 "A Learning Rule for Guaranteed CAM Storage of Analog Patterns and Continuous Sequences in a Network of 3N^2 Weights", by William Baird. VLSI IMPLEMENTATIONS VLSI1 "A Highly Compact Linear Weight Function Based on the use of EEPROMs", by A. Krammer, C.K. Sin, R. Chu and P.K. Ko. VLSI2 "Back Propagation Implementation on the Adaptive Solutions Neurocomputer Chip", Hal McCartor. VLSI3 "Analog Non-Volatile VLSI Neural Network Chip and Back-Propagation Training", by Simon Tam, Bhusan Gupta, Hernan A. Castro and Mark Holler. VLSI4 "An Analog VLSI Splining Circuit", by D.B. Schwartz and V.K. Samalam. VLSI5 "Reconfigurable Neural Net Chip with 32k Connections", by H.P.Graf and D. Henderson. VLSI6 "Relaxation Networks for Large Supervised Learning Problems", by Joshua Alspector, Robert Allan and Anthony Jayakumare. POSTER SESSION II Wednesday, November 28 (* denotes poster spotlight) CONTROL AND NAVIGATION CN1 "A Reinforcement Learning Variant for Control Scheduling", by Aloke Guha. CN2* "Learning Trajectory and Force Control of an Artificial Muscle Arm by Parallel- Hierarchical Neural Network Model", by Masazumi Katayama and Mitsuo Kawato. CN3 "Identification and Control of a Queueing System with Neural Networks", by Rodolfo A. Milito, Isabelle Guyon and Sara Solla. CN4 "Conditioning And Spatial Learning Tasks", by Peter Dayan. CN5 "Reinforcement Learning in Non-Markovian Environments", by Jurgen Schmidhuber. CN6* "A Model for Distributed Sensorimotor Control of the Cockroach Escape Turn", by Randall D. Beer, Gary Kacmarcik, Roy Ritzman and Hillel Chiel. CN7* "Flight Control in the Dragonfly: A Neurobiological Simulation", by W.E. Faller and M.W. Luttges. CN8 "Integrated Modeling and Control Based on Reinforcement Learning and Dynamic Programming", by Richard S. Sutton. DEVELOPMENT Dev1 "Development of the Spatial Structure of Cortical Feature Maps: A Model Study", by K. Obermayer and H. Ritter and K. Schulten. Dev2 "Interaction Among Ocular Dominance, Retinotopic Order and On-Center/Off- Center Pathways During Development", by Shiqeru Tanaka. Dev3 "Simple Spin Models for the development of Ocular Dominance and Iso-Orientation Columns", by Jack Cowan. 
NEURODYNAMICS ND1 "Reduction of Order for Systems of Equations Describing the Behavior of Complex Neurons", by T.B.Kepler, L.F. Abbot and E. Marder. ND2 "An Attractor Neural Network Model of Recall and Recognition", by E. Ruppin, Y. Yeshurun. ND3 "Stochastic Neurodynamics", by Jack Cowan. ND4 "A Method for the Efficient Design of Boltzman Machines for Classification Problems", by Ajay Gupta and Wolfgang Maass. ND5 "Analog Neural Networks that are Parallel and Stable", by C.M. Marcus, F.R. Waugh and R.M. Westervelt. ND6 "A Lagrangian Approach to Fixpoints ", by Eric Mjolsness and Willard Miranker. ND7 "Shaping the State Space Landscape in Recurrent Networks", by Patrice Y. Simard, Jean Pierre Raysz and Bernard Victorri. ND8 "Adjoint-Operators and non-Adiabatic Learning Algorithms in Neural Networks", by N. Toomarian and J. Barhen. OSCILLATIONS Osc1* "Connectivity and Oscillations in Two Dimensional Models of Neural Populations", by Daniel M. Kammen, Ernst Niebur and Christof Koch. Osc2* "Oscillation Onset in Neural Delayed Feedback", by Andre Longtin. Osc3* "Analog Computation at a Critical Point: A Novel Function for Neuronal Oscillations? ", by Leonid Kruglyak. PERFORMANCE COMPARISONS PC1 "Comparison of three classification techniques, Cart, C4.5 and multi-layer perceptions", by A.C. Tsoi and R.A. Pearson. PC2 "A Comparative Study of the Practical Characteristics of Neural Network and Conventional Pattern Classifiers", by Kenny Ng and Richard Lippmann. PC3 "Time Trials on Second-Order and Variable-Learning-Rate Algorithms", by Richard Rohwer. PC4 "Kohonen Networks and Clustering: Comparative Performance in Color Clusterng", by Wesley Snyder, Daniel Nissman, David Van den Bout and Griff Bilbro. SIGNAL PROCESSING Sig1* "Natural Dolphin Echo Recognition Using An Integrator Gateway Network", by H. L. Roitblat, P.W.B. Moore, R.H. Penner and P.E. Nachtigall. Sig2* "Signal Processing by Multiplexing and Demultiplexing in Neurons", by David C. Tam. SPEECH PROCESSING Sp1 "A Temporal Neural Network for Word Identification from Continuous Phoneme Strings", by Robert B. Allen and Candace Kamm. Sp2* "Connectionist Approaches to the use of Markov Models for Speech Recognition", by H.Bourlard and N. Morgan. Sp3 "The Temp 2 Algorithm: Adjusting Time-Delays by Supervised Learning", by Ulrich Bodenhausen. Sp4 "Spoken Letter Recognition", by Mark Fanty and Ronald A.Cole. Sp5 "Speech Recognition Using Demi-Syllable Neural Prediction Model", by Ken-ichi Iso and Takao Watanabe. Sp6 "RECNORM: Simultaneous Normalisation and Classification Applied to Speech Recognition", by John S. Bridle and Steven Cox. Sp7* "Exploratory Feature Extraction in Speech Signals", by Nathan Intrator. SP8 "Detection and Classification of Phonemes Using Context-Independent Error Back- Propagation", by Hong C. Leung, James R. Glass, Michael S. Phillips and Victor W. Zue. TEMPORAL PROCESSING TP1 "Modeling Time Varying Systems Using a Hidden Control Neural Network Architecture", by Esther Levin. TP2 "A New Neural Network Model for Temporal Processing", by Bert de Vries and Jose Principe. TP3 "ART2/BP architecture for adaptive estimation of dynamic processes", by Einar Sorheim. TP4 "Statistical Mechanics of Temporal Association in Neural Networks with Delayed Interaction", by Andreas V.M. Herz, Zahoping Li, Wulfram Gerstner and J. Leo van Hemmen. TP5 "Learning Time Varying Concepts", by Anthony Kuh and Thomas Petsche. TP6 "The Recurrent Cascade-Correlation Architecture" by Scott E. Fahlman. 
VISUAL PROCESSING VP1 "Steropsis by Neural Networks Which Learn the Constraints", by Alireza Khotanzad and Ying-Wung Lee. VP2 "A Neural Network Approach for Three-Dimensional Object Recognition", by Volker Tresp. VP3* "A Multiresolution Network Model of Motion Computation in Primates", by H. Taichi Wang, Bimal Mathur and Christof Koch. VP4 "A Second-Order Translation, Rotation and Scale Invariant Neural Network ", by Shelly D.D.Goggin, Kristina Johnson and Karl Gustafson. VP5 "Optimal Sampling of Natural Images: A Design Principle for the Visual System?", by William Bialek, Daniel Ruderman and A. Zee. VP6* "Learning to See Rotation and Dilation with a Hebb Rule", by Martin I. Sereno and Margaret E. Sereno. VP7 "Feedback Synapse to Cone and Light Adaptation", by Josef Skrzypek. VP8 "A Four Neuron Circuit Accounts for Change Sensitive Inhibition in Salamander Retina", by J.L. Teeters, F. H. Eeckman, G.W. Maguire, S.D. Eliasof and F.S. Werblin. VP9* "Qualitative structure from motion", by Daphana Weinshall. VP10 "An Analog VLSI Chip for Finding Edges from Zero-Crossings", by Wyeth Bair. VP11 "A CCD Parallel Processing Architecture and Simulation of CCD Implementation of the Neocognitron", by Michael Chuang. VP12* "A Correlation-based Motion Detection Chip", by Timothy Horiuchi, John Lazzaro, Andy Moore and Christof Koch. ------- From jagota at cs.Buffalo.EDU Fri Sep 28 17:11:27 1990 From: jagota at cs.Buffalo.EDU (Arun Jagota) Date: Fri, 28 Sep 90 17:11:27 EDT Subject: Tech Report Available Message-ID: <9009282111.AA19160@sybil.cs.Buffalo.EDU> *************** DO NOT FORWARD TO OTHER BBOARDS***************** *************** DO NOT FORWARD TO OTHER BBOARDS***************** The following technical report is available: The Hopfield-style network as a Maximal-Cliques Graph Machine Arun Jagota (jagota at cs.buffalo.edu) Department of Computer Science State University Of New York At Buffalo TR 90-25 ABSTRACT The Hopfield-style network, a variant of the popular Hopfield neural network, has earlier been shown to have fixed points (stable states) that correspond 1-1 with the maximal cliques of the underlying graph. The network sequentially transforms an initial state (set of vertices) to a final state (maximal clique) via certain greedy operations. It has also been noted that this network can be used to store simple, undirected graphs. In the following paper, we exploit these properties to view the Hopfield-style Network as a Maximal Clique Graph Machine. We show that certain problems can be reduced to finding Maximal Cliques on graphs in such a way that the network computations lead to the desired solutions. The theory of NP-Completeness shows us one such problem, SAT, that can be reduced to the Clique problem. In this paper, we show how this reduction allows us to answer certain questions about a CNF formula, via network computations on the corresponding maximal cliques. We also present a novel transformation of finite regular languages to Cliques in graphs and discuss which (language) questions can be answered and how. Our main general result is that we have expanded the problem-solving ability of the Hopfield-style network without detracting from its simplicity and while preserving its feasibility of hardware implementation. 
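A rough Python sketch of the greedy dynamic the abstract describes, stated directly on vertex sets rather than as network equations; the example graph and the tie-breaking rule are invented here and are not taken from the report.

# State = a set of vertices.  Drop members that conflict with the rest, then add
# every vertex consistent with what remains; the process can only stop at a
# maximal clique of the graph.
GRAPH = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3, 5}, 5: {4}}

def to_maximal_clique(state):
    state = set(state)
    while True:                                   # shrink until the set is a clique
        bad = [v for v in state if not (state - {v}) <= GRAPH[v]]
        if not bad:
            break
        state.discard(max(bad, key=lambda v: len(state - GRAPH[v])))   # worst offender first
    changed = True
    while changed:                                # grow greedily to maximality
        changed = False
        for v in GRAPH:
            if v not in state and state <= GRAPH[v]:
                state.add(v)
                changed = True
    return state

print(to_maximal_clique({1, 4}))                  # settles on a maximal clique, e.g. {2, 3, 4}

Every run ends on a maximal clique, which is the 1-1 correspondence with stable states referred to in the abstract.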
------------------------------------------------------------------------ The report is available in compressed PostScript form by anonymous ftp as follows: unix> ftp cheops.cis.ohio-state.edu (or, ftp 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get jagota.tr90-25.ps.Z ftp> quit unix> uncompress jagota.tr90-25.ps.Z unix> lpr jagota.tr90-25.ps (use flag [if any] your printer needs for Postscript) ------------------------------------------------------------------------ Due to cost of surface mail, I request use of 'ftp' facility whenever convenient. The report, however, is also available by surface mail. I am also willing to transmit the LaTeX sources by e-mail. Send requests (for one or the other) by e-mail to jagota at cs.buffalo.edu. Please do not reply with 'r' or 'R' to this message. Arun Jagota jagota at cs.buffalo.edu Dept Of Computer Science 226 Bell Hall, State University Of New York At Buffalo, NY - 14260 *************** DO NOT FORWARD TO OTHER BBOARDS***************** *************** DO NOT FORWARD TO OTHER BBOARDS***************** From tsejnowski at UCSD.EDU Sat Sep 29 18:55:49 1990 From: tsejnowski at UCSD.EDU (Terry Sejnowski) Date: Sat, 29 Sep 90 15:55:49 PDT Subject: Neural Computation 2:3 Message-ID: <9009292255.AA22679@sdbio2.UCSD.EDU> NEURAL COMPUTATION Volume 2, Number 3 Review: Parallel Distributed Approaches to Combinatorial Optimization -- Benchmark Studies on the Traveling Salesman Problem Carsten Peterson Note: Faster Learning for Dynamical Recurrent Backpropagation Yan Fang and Terrence J. Sejnowski Letters: A Dynamical Neural Network Model of Sensorimotor Transformations in the Leech Shawn R. Lockery, Yan Fang, and Terrence J. Sejnowski Control of Neuronal Output by Inhibition At the Axon Initial Segment Rodney J. Douglas and Kevan A. C. Martin Feature Linking Via Synchronization Among Distributed Assemblies: Results From Cat Visual Cortex and From Simulations R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. Dicke Toward a Theory of Early Visual Processing Joseph J. Atick and A. Norman Redlich Derivation of Hebbian Equations From a Nonlinear Model Kenneth D. Miller Spontaneous Development of Modularity in Simple Cortical Models Alex Chernjavsky and John Moody The Bootstrap Widrow-Hoff Rule As a Cluster-Formation Algorithm Geoffrey E. Hinton and Steven J. Nowlan The Effects of Precision Constraints in a Back-Propagation Learning Network Paul W. Hollis, John S. Harper, and John J. Paulos Exhaustive Learning D. B. Schwartz, Sarah A. Solla, V. K. Samalam, and J. S. Denker A Method for Designing Neural Networks Using Non-Linear Multivariate Analysis: Application to Speaker-Independent Vowel Recognition Toshio Irino and Hideki Kawahara SUBSCRIPTIONS: Volume 2 ______ $35 Student ______ $50 Individual ______ $100 Institution Add $12. for postage outside USA and Canada surface mail. Add $18. for air mail. (Back issues of volume 1 are available for $25 each.) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. (617) 253-2889. ----- From John.Hampshire at SPEECH2.CS.CMU.EDU Sun Sep 30 20:28:16 1990 From: John.Hampshire at SPEECH2.CS.CMU.EDU (John.Hampshire@SPEECH2.CS.CMU.EDU) Date: Sun, 30 Sep 90 20:28:16 EDT Subject: MLP classifiers == Bayes Message-ID: EQUIVALENCE PROOFS FOR MULTI-LAYER PERCEPTRON CLASSIFIERS AND THE BAYESIAN DISCRIMINANT FUNCTION John B. Hampshire II and Barak A. 
Pearlmutter Carnegie Mellon University -------------------------------- We show the conditions necessary for an MLP classifier to yield (optimal) Bayesian classification performance. Background: ========== Back in 1973, Duda and Hart showed that a simple perceptron trained with the Mean-Squared Error (MSE) objective function would minimize the squared approximation error to the Bayesian discriminant function. If the two-class random vector (RV) being classified were linearly separable, then the MSE-trained perceptron would produce outputs that converged to the a posteriori probabilities of the RV, given an asymptotically large set of statistically independent training samples of the RV. Since then, a number of connectionists have re-stated this proof in various forms for MLP classifiers. What's new: ========== We show (in painful mathematical detail) that the proof holds not just for MSE-trained MLPs, it also holds for MLPs trained with any of two broad classes of objective functions. The number of classes associated with the input RV is arbitrary, as is the dimensionality of the RV, and the specific parameterization of the MLP. Again, we state the conditions necessary for Bayesian equivalence to hold. The first class of "reasonable error measures" yields Bayesian performance by producing MLP outputs that converge to the a posterioris of the RV. MSE and a number of information theoretic learning rules leading to the Cross Entropy objective function are familiar examples of reasonable error measures. The second class of objective functions, known as Classification Figures of Merit (CFM), yield (theoretically limited) Bayesian performance by producing MLP outputs that reflect the identity of the largest a posteriori of the input RV. How to get a copy: ================= To appear in the "Proceedings of the 1990 Connectionist Models Summer School," Touretzky, Elman, Sejnowski, and Hinton, eds., San Mateo, CA: Morgan Kaufmann, 1990. This text will be available at NIPS in late November. If you can't wait, pre-prints may be obtained from the OSU connectionist literature database using the following procedure: % ftp cheops.cis.ohio-state.edu (or, ftp 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get hampshire.bayes90.ps.Z 261245 bytes sent in 9.9 seconds (26 Kbytes/s) ftp> quit % uncompress hampshire.bayes90.ps.Z % lpr hampshire.bayes90.ps
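A small Python check of the flavor of that result, in the spirit of the Duda and Hart background cited in the abstract rather than of the new proofs themselves; the data distribution, the single-unit network and the training settings below are arbitrary inventions. A sigmoid unit trained by plain MSE on two overlapping Gaussian classes ends up approximating the true posterior probability of class 1.

import numpy as np

# Two classes with equal priors: x ~ N(-1, 1) for class 0 and x ~ N(+1, 1) for class 1.
rng = np.random.default_rng(1)
n = 4000
labels = (rng.random(n) < 0.5).astype(float)
x = np.where(labels == 1.0, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, b, lr = 0.0, 0.0, 0.5
for _ in range(5000):                            # batch gradient descent on the MSE objective
    y = sigmoid(w * x + b)
    delta = (y - labels) * y * (1.0 - y)         # d(MSE)/d(pre-activation), up to a constant
    w -= lr * np.mean(delta * x)
    b -= lr * np.mean(delta)

def bayes_posterior(t):                          # exact posterior P(class 1 | x = t)
    p1 = np.exp(-0.5 * (t - 1.0) ** 2)
    p0 = np.exp(-0.5 * (t + 1.0) ** 2)
    return p1 / (p0 + p1)

for t in (-2.0, 0.0, 1.5):
    print(f"x = {t:+.1f}   net output = {sigmoid(w * t + b):.2f}   Bayes posterior = {bayes_posterior(t):.2f}")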
Pinz (Univ.f.Bodenkultur Wien) Verwendung von neuralen Netzwerken zur Klassifikation natuerlicher Objekte am Beispiel der Baumerkennung aus Farb-Infrarot-Luftbildern. H.G. Ziegeler, K.W. Kratky (Univ. Wien) A Connectionist Realization Applying Knowledge-Compilation and Auto-Segmentation in a Symbolic Assignment Problem A. Lebeda, M. Koehle (TU Wien) Buchstabenerkennung unter Beruecksichtigung von kontextueller Information ======================================================================== Registration: Please send the following form to: Georg Dorffner Inst.f. Med. Kybernetik und Artificial Intelligence Universitaet Wien Freyung 6/2 A-1010 Vienna, Austria For further questions, write to the same address or contact Georg Dorffner directly (Tel: +43 1 535 32 810, Fax: +43 1 63 06 52, email: georg at ai-vie.uucp) ------------------------------------------------------------------------ Connectionism in AI and Cognitive Science (KONNAI) Registration Application Form: I herewith apply for registration at the 6th Austrian AI conference Name: __________________________________________________________________ Address: _______________________________________________________________ _______________________________________________________________ _______________________________________________________________ Telephone: __________________________________ email: _____________________ I will participate in the following events: o Plenary lectures, scient. program, Panel AS 1.950,-- (DM 280,--) reduced price for OGAI members AS 1.800,-- (DM 260,--) reduced price for students (with ID!) AS 1.000,-- (DM 150,--) --------------- Amount: _______________ o Workshops (price is included in conference fee) o Massive Parallelism and Cognition o Localist Network Models o Connectionism and Language Processing o I want to demonstrate a program and need the following hardware and software: __________________________________________________ o I will transfer the money to the checking account of the OGAI at the Ersten Oesterreichischen Spar-Casse-Bank, No. 004-71186 o I am sending a Eurocheque o I need an invoice signature: ____________________________________ ====================================================================== Accommodation: The conference will be held at Hotel Schaffenrath, Alpenstrasse 115, A-5020 Salzburg. No rooms are available any more at that hotel. You can, however, send the form below to the Hotel Schaffenrath, who will forward the reservation to another nearby hotel. ===================================================================== Connectionism in AI and Cognitive Science (KONNAI) Hotel reservation I want a room from __________________ to _______________________ (day of arrival) (day of departure) o single AS 640,-- incl. breakfast o double AS 990,-- incl. breakfast o three beds AS 1200,-- incl.
breakfast Name: ________________________________________________________________ Address: _____________________________________________________________ _____________________________________________________________ _____________________________________________________________ Telephone: __________________________________ From N.E.Sharkey at cs.exeter.ac.uk Tue Sep 4 14:28:22 1990 From: N.E.Sharkey at cs.exeter.ac.uk (Noel Sharkey) Date: Tue, 4 Sep 90 14:28:22 BST Subject: PSYCHOLOGICAL PROCESSES Message-ID: <20234.9009041328@entropy.cs.exeter.ac.uk> I have been getting a lot of enquiries about the special issue of Connection Science on psychological processes (I sent the announcement months ago and of course people have lost it). So here it is again, folks. noel ******************** CALL FOR PAPERS ****************** CONNECTION SCIENCE SPECIAL ISSUE CONNECTIONIST MODELLING OF PSYCHOLOGICAL PROCESSES EDITOR Noel Sharkey SPECIAL BOARD Jim Anderson Andy Barto Thomas Bever Glyn Humphreys Walter Kintsch Dennis Norris Kim Plunkett Ronan Reilly Dave Rumelhart Antony Sanford The journal Connection Science would like to encourage submissions from researchers modelling psychological data or conducting experiments comparing models within the connectionist framework. Papers of this nature may be submitted to our regular issues or to the special issue. Authors wishing to submit papers to the special issue should mark them SPECIAL PSYCHOLOGY ISSUE. Good quality papers not accepted for the special issue may appear in later regular issues. DEADLINE FOR SUBMISSION 12th October, 1990. Notification of acceptance or rejection will be by the end of December/beginning of January. From fogler at sirius.unm.edu Wed Sep 5 12:01:08 1990 From: fogler at sirius.unm.edu (fogler@sirius.unm.edu) Date: Wed, 5 Sep 90 10:01:08 MDT Subject: feature extraction in the striate cortex Message-ID: <9009051601.AA15312@sirius.unm.edu> I am investigating feature extraction for object recognition that mimics in some fashion the algorithms and perhaps architectures that occur in animal vision. I have read a number of papers and/or books by Hubel, Wiesel, Marr, Poggio, Ullman, Grimson, Wilson and others. I am looking for papers that relate to the orientation columns in the striate cortex and how they might be interconnected for the purpose of feature extraction. Specifically, I am looking for information on the encoding scheme(s) used to represent the orientation angles of features. My background is in algorithms and hardware for signal processing, computer vision, and neural networks. I welcome comments and suggestions. Joe Fogler EECE Department University of New Mexico fogler at wayback.unm.edu From mikek at boulder.Colorado.EDU Wed Sep 5 12:14:27 1990 From: mikek at boulder.Colorado.EDU (Mike Kranzdorf) Date: Wed, 5 Sep 90 10:14:27 MDT Subject: Mactivation 3.3 on new ftp site Message-ID: <9009051614.AA21672@fred.colorado.edu> Mactivation version 3.3 is available via anonymous ftp on alumni.Colorado.EDU (internet address 128.138.240.32) The file is in /pub and is called mactivation.3.3.sit.hqx (It is compressed with StuffIt and BinHex'ed) To get it, try this: ftp alumni.Colorado.EDU anonymous binary cd /pub get mactivation.3.3.sit.hqx Then get it to your Mac and use StuffIt to uncompress it and BinHex 4.0 to make it back into an application. If you can't make ftp work, or you want a copy with the nice MS Word docs, then send $5 to: Mike Kranzdorf P.O.
Box 1379 Nederland, CO 80466-1379 USA For those who don't know about Mactivation, here's the summary: Mactivation is an introductory neural network simulator which runs on all Apple Macintosh computers. A graphical interface provides direct access to units, connections, and patterns. Basic concepts of network operations can be explored, with many low level parameters available for modification. Back-propagation is not supported (coming in 4.0) A user's manual containing an introduction to connectionist networks and program documentation is included. The ftp version includes a plain text file, while the MS Word version available from the author contains nice graphics and footnotes. The program may be freely copied, including for classroom distribution. --mikek internet: mikek at boulder.colorado.edu uucp:{ncar|nbires}!boulder!mikek AppleLink: oblio From fritz_dg%ncsd.dnet at gte.com Wed Sep 5 09:38:48 1990 From: fritz_dg%ncsd.dnet at gte.com (fritz_dg%ncsd.dnet@gte.com) Date: Wed, 5 Sep 90 09:38:48 -0400 Subject: voice discrimination Message-ID: <9009051338.AA19239@bunny.gte.com> I'm looking for references on neural network research on voice discrimination, that is, telling one language and/or speaker apart from another without necessarily understanding the words. Any leads at all will be appreciated. I will summarize & return to the list any responses. Thanks. Dave Fritz fritz_dg%ncsd at gte.com ----------------------------------------------- From sankar at caip.rutgers.edu Wed Sep 5 14:54:29 1990 From: sankar at caip.rutgers.edu (ananth sankar) Date: Wed, 5 Sep 90 14:54:29 EDT Subject: No subject Message-ID: <9009051854.AA25553@caip.rutgers.edu> Please address any further requests for the technical report: "Tree Structured Neural Nets" to Barbara Daniels CAIP Center Brett and Bowser Roads P.O. Box 1390 Piscataway, NJ 08855-1390 Please enclose a check made out to "Rutgers University CAIP Center" for $5:00 if you are from the USA or Canada and for $8:00 if you are from any other country. Thank you. Ananth Sankar From kruschke at ucs.indiana.edu Wed Sep 5 16:46:00 1990 From: kruschke at ucs.indiana.edu (KRUSCHKE,JOHN,PSY) Date: 5 Sep 90 15:46:00 EST Subject: speech spectrogram recognition Message-ID: I have a student interested in connectionist models of speech *spectrogram* recognition. As speech is not my area of expertise, I'm hoping you can suggest references to us. I realize that there are a tremendous number of papers on the topic of connectionist speech recognition, so references to good survey papers and especially good recent papers, which cite earlier work, would be most appreciated. Thanks. --John Kruschke (No need to "reply" to the whole list; you can send directly to: kruschke at ucs.indiana.edu ) From birnbaum at fido.ils.nwu.edu Fri Sep 7 11:03:56 1990 From: birnbaum at fido.ils.nwu.edu (Lawrence Birnbaum) Date: Fri, 7 Sep 90 10:03:56 CDT Subject: ML91 -- THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING Message-ID: <9009071503.AA05805@fido.ils.nwu.edu> ML91 -- THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING CALL FOR WORKSHOP PROPOSALS AND PRELIMINARY CALL FOR PAPERS On behalf of the organizing committee, we are pleased to solicit proposals for the workshops that will constitute ML91, the Eighth International Workshop on Machine Learning, to be held in late June, 1991, at Northwestern University, Evanston, Illinois, USA. We anticipate choosing six workshops to be held in parallel over the three days of the meeting. 
Our goal in evaluating workshop proposals is to ensure high quality and broad coverage of work in machine learning. Workshop committees -- which will operate for the most part independently in selecting work to be presented at ML91 -- should include two to four people, preferably at different institutions. The organizing committee may select some workshops as proposed, or may suggest changes or combinations of proposals in order to achieve the goals of quality and balance. Proposals are due October 10, 1990, preferably by email to: ml91 at ils.nwu.edu although hardcopy may also be sent to the following address: ML91 Northwestern University The Institute for the Learning Sciences 1890 Maple Avenue Evanston, IL 60201 USA fax (708) 491-5258 Please include the following information: 1. Workshop topic 2. Names, addresses, and positions of workshop committee members 3. Brief description of topic 4. Workshop format 5. Justification for workshop, including assessment of breadth of appeal Workshop format is somewhat flexible, and may include invited talks, panel discussions, short presentations, and even small working group meetings. However, it is expected that the majority of time will be devoted to technical presentations of 20 to 30 minutes in length, and we encourage the inclusion of a poster session in each workshop. Each workshop will be allocated approximately 100 pages in the Proceedings, and papers to be published must have a minimum length of (most likely) 4 to 5 pages in double column format. Workshop committee members should be aware of these space limitations in designing their workshops. We encourage proposals in all areas of machine learning, including induction, explanation-based learning, connectionist and neural net models, adaptive control, pattern recognition, computational models of human learning, perceptual learning, genetic algorithms, computational approaches to teaching informed by learning theories, scientific theory formation, etc. Proposals centered around research problems that can fruitfully be addressed from a variety of perspectives are particularly welcome. The workshops to be held at ML91 will be announced towards the end of October. In the meantime, we would like to announce a preliminary call for papers; the submission deadline is February 1, 1991. Authors should bear in mind the space limitations described above. On behalf of the organizing committee, Larry Birnbaum Gregg Collins Program co-chairs, ML91 (This announcement is being sent/posted to ML-LIST, CONNECTIONISTS, ALife, PSYCOLOQUY, NEWS.ANNOUNCE.CONFERENCES, COMP.AI, COMP.AI.EDU, COMP.AI.NEURAL-NETS, COMP.ROBOTICS, and SCI.PSYCHOLOGY. We encourage readers to forward it to any other relevant mailing list or bulletin board.) From kayama at CS.UCLA.EDU Fri Sep 7 17:49:18 1990 From: kayama at CS.UCLA.EDU (Masahiro Kayama) Date: Fri, 7 Sep 90 14:49:18 -0700 Subject: Request Message-ID: <9009072149.AA12711@oahu.cs.ucla.edu> How do you do? I am Masahiro Kayama, a visiting scholar at UCLA from Hitachi Ltd., Japan. Mr. Michio Morioka, who is a visiting researcher of CMT, introduced your group to me. I would like to attend the following meeting, but I have not obtained detailed information: the IASTED International Symposium on Machine Learning and Neural Networks, which will be held on October 10-12, 1990 at the New York Penta Hotel. In particular, the address, telephone number, and fax number of the Penta Hotel, and the program of the conference, would be most helpful. If you have the above information, could you please reply to me?
I can be reached by e-mail at "kayama at cs.ucla.edu". Thank you. From Masahiro Kayama. From LIFY447 at IV3.CC.UTEXAS.EDU Sat Sep 8 14:53:03 1990 From: LIFY447 at IV3.CC.UTEXAS.EDU (Steve Chandler) Date: Sat, 8 Sep 1990 13:53:03 CDT Subject: Request to join Message-ID: <900908135303.22800c31@IV3.CC.UTEXAS.EDU> Would you please add me to the connectionists list? I was on it in a previous guise at the Univ. of Idaho, but had you drop me there recently in anticipation of my sabbatical here at the University of Texas. My interests include connectionist modeling of natural language acquisition and processing. I've recently had a paper accepted on connectionist lexical modeling, and my sabbatical project involves connectionist modeling of child language acquisition. Thanks. Steve Chandler From B344DSL at UTARLG.UTARL.EDU Sat Sep 8 19:41:00 1990 From: B344DSL at UTARLG.UTARL.EDU (B344DSL@UTARLG.UTARL.EDU) Date: Sat, 8 Sep 90 18:41 CDT Subject: No subject Message-ID: <393D65648E1F002A83@utarlg.utarl.edu> Announcement NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND) October 4-6, 1990 IBM Westlake, TX (near Dallas - Fort Worth Airport) Conference Organizers: Daniel Levine, University of Texas at Arlington (Mathematics) Manuel Aparicio, IBM Application Solutions Division Speakers will include: James Anderson, Brown University (Psychology) Jean-Paul Banquet, Hospital de la Salpetriere, Paris John Barnden, New Mexico State University (Computer Science) Claude Cruz, Plexus Systems Incorporated Robert Dawes, Martingale Research Corporation Richard Golden, University of Texas at Dallas (Human Development) Janet Metcalfe, Dartmouth College (Psychology) Jordan Pollack, Ohio State University (Computer Science) Karl Pribram, Radford University (Brain Research Institute) Lokendra Shastri, University of Pennsylvania (Computer Science) Topics will include: Connectionist models of semantic comprehension. Architectures for evidential and case-based reasoning. Connectionist approaches to symbolic problems in AI such as truth maintenance and dynamic binding. Representations of logical primitives, data structures, and constitutive relations. Biological mechanisms for knowledge representation and knowledge-based planning. We plan to follow the talks with a structured panel discussion on the questions: Can neural networks do numbers? Will architectures for pattern matching also be useful for precise reasoning, planning, and inference? Tutorial Session: Robert Dawes, President of Martingale Research Corporation, will present a three-hour tutorial on neurocomputing the evening of October 3. This preparation for the workshop will be free of charge to all pre-registrants.
------------------------------------------------------------------------------- Registration Form NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND) Name: _____________________________________________________ Affiliation: ______________________________________________ Address: __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ Telephone number: _________________________________________ Electronic mail: __________________________________________ Conference fee enclosed (please check appropriate line): $50 for MIND members before September 30 ______ $60 for MIND members on/after September 30 ______ $60 for non-members before September 30 ______ $70 for non-members on/after September 30 ______ $10 for student MIND members any time ______ $20 for student non-members any time ______ Tutorial session (check if you plan to attend): ______ Note: This is free of charge to pre-registrants. Suggested Hotels: Solana Marriott Hotel. Next to IBM complex, with continuous shuttle bus available to meeting site; ask for MIND conference rate of $80/night. Call (817) 430-3848 or (800) 228-9290. Campus Inn, Arlington. 30 minutes from conference, but rides are available if needed; $39.55 for single/night. Call (817) 860-2323. American Airlines. Minus 40% on coach or 5% over and above Super Saver. Call (800)-433-1790 for specific information and reservations, under Star File #02oz76 for MIND Conference. Conference programs, maps, and other information will be mailed to pre-registrants in mid-September. Please send this form with check or money order to: Dr. Manuel Aparicio IBM Mail Stop 03-04-40 5 West Kirkwood Blvd. Roanoke, TX 76299-0001 (817) 962-5944 From todd at galadriel.Stanford.EDU Sun Sep 9 03:44:56 1990 From: todd at galadriel.Stanford.EDU (Peter Todd) Date: Sun, 09 Sep 90 00:44:56 PDT Subject: Request for music and connectionism articles Message-ID: I am currently preparing the "summary of current work" section for our book Music and Connectionism (to appear next spring, MIT Press, Gareth Loy co-editor), so I wanted to ask mailing-list members for references to any work in this area we may have missed. If you know of any papers or unpublished efforts (or simply names of researchers) concerning connectionist/neural network/PDP approaches and applications to musical problems or domains (other than Bharucha, Kohonen, J.P. Lewis, Leman, Lischka, or the authors in our two issues of the Computer Music Journal), I would greatly appreciate hearing about them, and having the chance to spread their work to a wider audience. I will post word when our book becomes available around March. Thanks for your help-- Peter Todd Psychology Dept. Stanford U. 
From plunkett at amos.ucsd.edu Mon Sep 10 14:37:33 1990 From: plunkett at amos.ucsd.edu (Kim Plunkett) Date: Mon, 10 Sep 90 11:37:33 PDT Subject: No subject Message-ID: <9009101837.AA02230@amos.ucsd.edu> The following TR is now available: From Rote Learning to System Building: Acquiring Verb Morphology in Children and Connectionist Nets Kim Plunkett University of Aarhus Denmark Virginia Marchman Center for Research in Language University of California, San Diego Abstract The traditional account of the acquisition of English verb morphology supposes that a dual mechanism architecture underlies the transition from early rote learning processes (in which past tense forms of verbs are correctly produced) to the systematic treatment of verbs (in which irregular verbs are prone to error). A connectionist account supposes that this transition can occur in a single mechanism (in the form of a neural network) driven by gradual quantitative changes in the size of the training set to which the network is exposed. In this paper, a series of simulations is reported in which a multi-layered perceptron learns to map verb stems to past tense forms analogous to the mappings found in the English past tense system. By expanding the training set in a gradual, incremental fashion and evaluating network performance on both trained and novel verbs at successive points in learning, we demonstrate that the network undergoes reorganizations that result in a shift from a mode of rote learning to a systematic treatment of verbs. Furthermore, we show that this reorganizational transition is contingent upon a critical mass in the training set and is sensitive to the phonological sub-regularities characterizing the irregular verbs. The optimal levels of performance achieved in this series of simulations compared to previous work derive from the incremental training procedures exploited in the current simulations. The pattern of errors observed is compared to those of children acquiring the English past tense, as well as children's performance on experimental studies with nonsense verbs. Incremental learning procedures are discussed in light of theories of cognitive development. It is concluded that a connectionist approach offers a viable alternative account of the acquisition of English verb morphology, given the current state of empirical evidence relating to processes of acquisition in young children. Copies of the TR can be obtained by contacting "staight at amos.ucsd.edu" and requesting CRL TR #9020. Please remember to provide your hardmail address. Alternatively, a compressed PostScript file is available by anonymous ftp from "amos.ucsd.edu" (internet address 128.54.16.43). The relevant file is "crl_tr9020.ps.Z" and is in the directory "~ftp/pub". Kim Plunkett
I know what this means in discrete problems, but I am not sure what it means in continuous problems, as in the case of a NN whose units have a continuous activation function (e.g.: a sigmoid), where the exact fitness is, in general, impossible. Efficiency problems can be formalized in terms of Turing Machines, which are essentially discrete objects. But how can it be done with continuous problems? On the other hand, reference (1) below states that generally the number of steps needed to optimize a function of n variables, with a given relative error, is an exponential function of n (see for a more rigorous formulation of this result below). Since fitting a neural network is equivalent to minimizing its error function (whose variables are the NN weights), the search for an efficient general method to fit the weights in a NN is doomed to failure (except for some particular cases). I would like to know if this is right. Reference: (1) Nemirovsky et al.: Problem Complexity and Method Efficiency in Optimization (John Wiley & Sons). (In this book, concrete classes of problems and the class of methods corresponding to these problems are considered. Each of these methods applied to the class of problems considered is characterized by its laboriousness and error, i.e. by upper bounds -over the problems of the class- for the number of steps in its work on the problem and by the error of the result. The complexity N(v) of a given class of problems is defined as the least possible laboriousness of a method which solves every problem of the class with a relative error not exceeding v. The main result is the following: For the complexity N(v) of the class of all extremal problems with k-times continuously differential functionals on a compact field G in E^n, the lower bound c(k,G)(1/v)^(n/k) holds both for the ordinary -deterministic- methods of solution and for random-search methods.) Miguel A. Lerma Sancho Davila 18 28028 MADRID - SPAIN  From rich at gte.com Wed Sep 12 11:36:48 1990 From: rich at gte.com (Rich Sutton) Date: Wed, 12 Sep 90 11:36:48 -0400 Subject: Reinforcement Learning -- Special Issue of Machine Learning Journal Message-ID: <9009121536.AA09672@bunny.gte.com> ---------------------------------------------------------------------- CALL FOR PAPERS The journal Machine Learning will be publishing a special issue on REINFORCEMENT LEARNING in 1991. By "reinforcement learning" I mean trial-and-error learning from performance feedback without an explicit teacher other than the external environment. Of particular interest is the learning of mappings from situation to action in this way. Reinforcement learning has most often been studied within connectionist or classifier-system (genetic) paradigms, but it need not be. Manuscripts must be received by March 1, 1991, to assure full consideration. One copy should be mailed to the editor: Richard S. Sutton GTE Laboratories, MS-44 40 Sylvan Road Waltham, MA 02254 USA In addition, four copies should be mailed to: Karen Cullen MACH Editorial Office Kluwer Academic Publishers 101 Philip Drive Assinippi Park Norwell, MA 02061 USA Papers will be subject to the standard review process. ------------------------------------------------------------------------ From sankar at caip.rutgers.edu Thu Sep 13 10:40:18 1990 From: sankar at caip.rutgers.edu (ananth sankar) Date: Thu, 13 Sep 90 10:40:18 EDT Subject: Room-mate at NIPS Message-ID: <9009131440.AA04973@caip.rutgers.edu> I will be attending this years NIPS conference and workshop. 
Anyone interested in sharing a room, please get in touch with me as soon as possible. Thanks, Ananth Sankar CAIP Center Rutgers University Brett and Bowser Roads P.O. Box 1390 Piscataway, NJ 08855-1390 Phone: (201)932-5549 (off) (609)936-9024 (res) From plunkett at amos.ucsd.edu Thu Sep 13 16:49:38 1990 From: plunkett at amos.ucsd.edu (Kim Plunkett) Date: Thu, 13 Sep 90 13:49:38 PDT Subject: No subject Message-ID: <9009132049.AA24317@amos.ucsd.edu> Please note that Jordan Pollack has kindly posted a recently announced TR on the neuroprose directory under "plunkett.tr9020.ps.Z". Just to remind you of the contents, the abstract follows: ===================================================================== From Rote Learning to System Building: Acquiring Verb Morphology in Children and Connectionist Nets Kim Plunkett University of Aarhus Denmark Virginia Marchman Center for Research in Language University of California, San Diego Abstract The traditional account of the acquisition of English verb morphology supposes that a dual mechanism architecture underlies the transition from early rote learning processes (in which past tense forms of verbs are correctly produced) to the systematic treatment of verbs (in which irregular verbs are prone to error). A connectionist account supposes that this transition can occur in a single mechanism (in the form of a neural network) driven by gradual quantitative changes in the size of the training set to which the network is exposed. In this paper, a series of simulations is reported in which a multi-layered perceptron learns to map verb stems to past tense forms analogous to the mappings found in the English past tense system. By expanding the training set in a gradual, incremental fashion and evaluating network performance on both trained and novel verbs at successive points in learning, we demonstrate that the network undergoes reorganizations that result in a shift from a mode of rote learning to a systematic treatment of verbs. Furthermore, we show that this reorganizational transition is contingent upon a critical mass in the training set and is sensitive to the phonological sub-regularities characterizing the irregular verbs. The optimal levels of performance achieved in this series of simulations compared to previous work derive from the incremental training procedures exploited in the current simulations. The pattern of errors observed is compared to those of children acquiring the English past tense, as well as children's performance on experimental studies with nonsense verbs. Incremental learning procedures are discussed in light of theories of cognitive development. It is concluded that a connectionist approach offers a viable alternative account of the acquisition of English verb morphology, given the current state of empirical evidence relating to processes of acquisition in young children. From tgd at turing.CS.ORST.EDU Thu Sep 13 17:08:26 1990 From: tgd at turing.CS.ORST.EDU (Tom Dietterich) Date: Thu, 13 Sep 90 14:08:26 PDT Subject: Tech Report Available Message-ID: <9009132108.AA04726@turing.CS.ORST.EDU> The following tech report is available in compressed postscript format from the neuroprose archive at Ohio State. A Comparison of ID3 and Backpropagation for English Text-to-Speech Mapping Thomas G.
Dietterich Hermann Hild Ghulum Bakiri Department of Computer Science Oregon State University Corvallis, OR 97331-3102 Abstract The performance of the error backpropagation (BP) and decision tree (ID3) learning algorithms was compared on the task of mapping English text to phonemes and stresses. Under the distributed output code developed by Sejnowski and Rosenberg, it is shown that BP consistently out-performs ID3 on this task by several percentage points. Three hypotheses explaining this difference were explored: (a) ID3 is overfitting the training data, (b) BP is able to share hidden units across several output units and hence can learn the output units better, and (c) BP captures statistical information that ID3 does not. We conclude that only hypothesis (c) is correct. By augmenting ID3 with a simple statistical learning procedure, the performance of BP can be approached but not matched. More complex statistical procedures can improve the performance of both BP and ID3 substantially. A study of the residual errors suggests that there is still substantial room for improvement in learning methods for text-to-speech mapping. This is an expanded version of a short paper that appeared at the Seventh International Conference on Machine Learning at Austin TX in June. To retrieve via FTP, use the following procedure: unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name (cheops.cis.ohio-state.edu:): anonymous Password (cheops.cis.ohio-state.edu:anonymous): ftp> cd pub/neuroprose ftp> binary ftp> get (remote-file) dietterich.comparison.ps.Z (local-file) foo.ps.Z ftp> quit unix> uncompress foo.ps unix> lpr -P(your_local_postscript_printer) foo.ps From skrzypek at CS.UCLA.EDU Thu Sep 13 18:25:48 1990 From: skrzypek at CS.UCLA.EDU (Dr. Josef Skrzypek) Date: Thu, 13 Sep 90 15:25:48 PDT Subject: NN AND VISION -IJPRAI-special issue Message-ID: <9009132225.AA27988@retina.cs.ucla.edu> Because of repeat enquiries about the special issue of IJPRAI (Intl. J. of Pattern Recognition and AI) I am posting the announcement again. IJPRAI CALL FOR PAPERS IJPRAI We are organizing a special issue of IJPRAI (Intl. Journal of Pattern Recognition and Artificial Intelligence) dedicated to the subject of neural networks in vision and pattern recognition. Papers will be refereed. The plan calls for the issue to be published in the fall of 1991. I would like to invite your participation. DEADLINE FOR SUBMISSION: 10th of December, 1990 VOLUME TITLE: Neural Networks in Vision and Pattern Recognition VOLUME GUEST EDITORS: Prof. Josef Skrzypek and Prof. Walter Karplus Department of Computer Science, 3532 BH UCLA Los Angeles CA 90024-1596 Email: skrzypek at cs.ucla.edu or karplus at cs.ucla.edu Tel: (213) 825 2381 Fax: (213) UCLA CSD DESCRIPTION The capabilities of neural architectures (supervised and unsupervised learning, feature detection and analysis through approximate pattern matching, categorization and self-organization, adaptation, soft constraints, and signal based processing) suggest new approaches to solving problems in vision, image processing and pattern recognition as applied to visual stimuli. The purpose of this special issue is to encourage further work and discussion in this area. The volume will include both invited and submitted peer-reviewed articles. We are seeking submissions from researchers in relevant fields, including, natural and artificial vision, scientific computing, artificial intelligence, psychology, image processing and pattern recognition. 
"We encourage submission of: 1) detailed presentations of models or supporting mechanisms, 2) formal theoretical analyses, 3) empirical and methodological studies. 4) critical reviews of neural networks applicability to various subfields of vision, image processing and pattern recognition. Submitted papers may be enthusiastic or critical on the applicability of neural networks to processing of visual information. The IJPRAI journal would like to encourage submissions from both , researchers engaged in analysis of biological systems such as modeling psychological/neurophysiological data using neural networks as well as from members of the engineering community who are synthesizing neural network models. The number of papers that can be included in this special issue will be limited. Therefore, some qualified papers may be encouraged for submission to the regular issues of IJPRAI. SUBMISSION PROCEDURE Submissions should be sent to Josef Skrzypek, by 12-10-1990. The suggested length is 20-22 double-spaced pages including figures, references, abstract and so on. Format details, etc. will be supplied on request. Authors are strongly encouraged to discuss ideas for possible submissions with the editors. The Journal is published by the World Scientific and was established in 1986. Thank you for your considerations. From kanal at cs.UMD.EDU Thu Sep 13 20:34:25 1990 From: kanal at cs.UMD.EDU (Laveen N. KANAL) Date: Thu, 13 Sep 90 20:34:25 -0400 Subject: Notice of Technical Reports Message-ID: <9009140034.AA21259@mimsy.UMD.EDU> What follows is the abstract of a TR printed this summer which has been subitted for publication. Also included in this message are the titles of two earlier reports by the sdame authors which were put out in Dec. 1988 but whic may be of interest now in view of some titles I have seen on the net. UMIACS-TR-90-99 July 1990 CS-TR-2508 ASYMMETRIC MEAN-FIELD NEURAL NETWORKS FOR MULTIPROCESSOR SCHECDULING Benjamin J. Hellstrom Laveen N. Kanal Abstract Hopfield and Tank's proposed technique for embedding optimization problems, such as the travelling salesman, in mean-field thermodynamic networks suffers from several restrictions. In particular, each discrete optimization problem must be reduced to the minimization of a 0-1 Hamiltonian. Hopfield and Tank's technique yields fully-connected networks of functionally homogeneous visible units with low-order symmetric connections. We present a program-constructive approach to embedding difficult problems in neural networks. Our derivation method overcomes the Hamiltonian reducibility requirement and promotes networks with functionally heterogeneous hidden units and asymmetric connections of both low and high-order. The underlying mechanism involves the decomposition of arbitrary problem energy gradients into piecewise linear functions which can be modeled as the outputs of sets of hidden units. To illustrate our method, we derive thermodynamic mean-field neural networks for multiprocessor scheduling. The performance of these networks is analyzed by observing phase transitions and several improvements are suggested. Tuned networks of up to 2400 units are shown to yield very good, and often exact solutions. The earlier reports are CS-TR-2149 Dec. 1988 by Hellstrom and Kanal, titled " Linear Programming Approaches to Learning in Thermodynamic Models of Neural Networks" Cs-TR-2150, Dec. 1988 by Hellstrom and Kanal, titled " Encoding via Meta-Stable Activation Levels: A Case Study of the 3-1-3 Encoder". 
Reports are available free while the current supply lasts, after which they will be available (for a small charge) from the publications group at the Computer Science Center of the Univ. of Maryland, College Park, Md., 20742. The address for the current supply is: Prof. L.N. Kanal, Dept. of Computer Science, A.V. Williams Bldg, Univ. of Maryland, College Park, MD. 20742. L.K. From erol at ehei.ehei.fr Mon Sep 17 11:20:28 1990 From: erol at ehei.ehei.fr (Erol Gelenbe) Date: Mon, 17 Sep 90 15:22:28 +2 Subject: Application of the random neural network model to NP-Hard problems Message-ID: <9009171421.AA20868@inria.inria.fr> We are doing work on the application of the random neural network model, introduced in two recent papers of the journal Neural Computation (E. Gelenbe: Vol. 1, No. 4, and Vol. 2, No. 2), to combinatorial optimisation problems. Our first results concern the Graph Covering problem. We have considered 400 graphs drawn at random, with 20, 50 and 100 nodes. Over this sample we observe that: - The random neural network solution (which is purely analytic, i.e. not simulated as with the Hopfield network) provides on the average better results than the usual heuristic (the greedy algorithm), and considerably better results than the Hopfield-Tank approach. - The random neural network solution is more time consuming than the greedy algorithm, but considerably less time consuming than the Hopfield-Tank approach. A report can be obtained by writing to me or e-mailing me: erol at ehei.ehei.fr Erol Gelenbe EHEI 45 rue des Saints-Peres 75006 Paris, France From mikek at wasteheat.colorado.edu Mon Sep 17 11:31:50 1990 From: mikek at wasteheat.colorado.edu (Mike Kranzdorf) Date: Mon, 17 Sep 90 09:31:50 -0600 Subject: Mactivation Word docs coming to ftp Message-ID: <9009171531.AA13946@wasteheat.colorado.edu> I thought Connectionists might be interested in the end result of this, specifically that I will be posting a new copy of Mactivation 3.3 including MS Word documentation to alumni.colorado.edu real soon now. Date: Sun, 16 Sep 90 03:37:36 GMT-0600 From: james at visual2.tamu.edu (James Saxon) Message-Id: <9009160937.AA25939 at visual2.tamu.edu> To: mikek at boulder.colorado.edu *** Subject: Mactivation Documentation I was going to post this to the net but I figured I'd let you do it if you feel it's necessary. If you're going to give out the bloody program, you might as well have just stuck in the decent readable documentation because nobody in their right mind is going to pay $5.00 for it. It's really a cheap move and if you don't replace the ftp file you might just lose all your business because I, like many others, just started playing with the package. I don't see any macros for learning repetitive things and so I was going to give up because I don't want to spend all day trying to figure out how to not switch from the mouse to the keyboard trying to set the layer outputs for everything... And then I'm certainly not going to turn to an unformatted Geneva document just to prove that the program is not very powerful... So you can decide what you want to do but I suggest not making everybody pissed off at you. --- I sincerely apologize if my original posting gave the impression that I was trying to make money from this. Mactivation, along with all the documentation, has been available via ftp for over 3 years now. Since I recently had to switch ftp machines here, I thought I would save some bandwidth and post a smaller copy (in fact this was suggested by several people).
Downloading these things over a 1200 baud modem is very slow. The point of documentation in this case is to be able to use the program, and I still think a text file does fine. The $5 request was not for prettier docs, but for the disk, the postage, and my time. I get plenty of letters saying "Thank you for letting me avoid ftp", and that was the idea. The $5 actually started as an alternative for people who didn't want to bother sending me a disk and a self-addressed stamped envelope, which used to be part of my offer. However, I got too many 5 1/4" disks and unstamped envelopes, so I dropped that option this round. --- I am presently collecting NN software for a class that my professor is teaching here at A&M and will keep your program around for the students but I warn them about the user's manual. :-0 And while this isn't a contest, your program will be competing with the Rochester Connectionist Simulator, SFINX, DESCARTES, and a bunch more... Lucky I don't have MacBrain... which if you haven't seen, you should. Of course, that's $1000, but the manual's free. --- If you think you're getting MacBrain for free or a Mac version of the Rochester Simulator, then don't bother downloading Mactivation. You will be disappointed. I wrote Mactivation for myself, and it is not supported by a company or a university. It's not for research, it's an introduction which can be used to teach some basics. (Actually you can do research, but only on the effects of low-level parameters on small nets. As a point of interest, my research involved making optical neural nets out of spatial light modulators, and these parameters were important while the ability to make large or complex nets was not.) --- James Saxon Scientific Visualization Laboratory Texas A&M University james@visual2.tamu.edu --- ***The end result of this is that I will post a new copy complete with the Word docs. I am not a proficient telecommunicator though, so it may take a week or so. I apologize for the delay. --mikek From stucki at cis.ohio-state.edu Mon Sep 17 13:53:33 1990 From: stucki at cis.ohio-state.edu (David J Stucki) Date: Mon, 17 Sep 90 13:53:33 -0400 Subject: Application of the random neural network model to NP-Hard problems In-Reply-To: Erol Gelenbe's message of Mon, 17 Sep 90 15:22:28 +2 <9009171421.AA20868@inria.inria.fr> Message-ID: <9009171753.AA13351@retina.cis.ohio-state.edu> I would like a copy of the report you advertised on connectionists. thanks, David J Stucki Dept. of Computer and Information Science 2036 Neil Avenue Mall Columbus, Ohio 43210 From kris at boulder.Colorado.EDU Mon Sep 17 19:24:33 1990 From: kris at boulder.Colorado.EDU (Kris Johnson) Date: Mon, 17 Sep 90 17:24:33 MDT Subject: Mactivation Word docs coming to ftp Message-ID: <9009172324.AA10374@fred.colorado.edu> sounds like james at visual2 asks a lot for not much From gmdzi!st at relay.EU.net Sat Sep 15 11:21:06 1990 From: gmdzi!st at relay.EU.net (Sebastian Thrun) Date: Sat, 15 Sep 90 13:21:06 -0200 Subject: No subject Message-ID: <9009151121.AA02208@gmdzi.UUCP> The following might be interesting for everybody who works with the PDP backpropagation simulator and has access to a Connection Machine: ******************************************************** ** ** ** PDP-Backpropagation on the Connection Machine ** ** ** ******************************************************** For testing our new Connection Machine CM/2 I extended the PDP backpropagation simulator by Rumelhart, McClelland et al.
with a parallel training procedure for the Connection Machine (Interface C/Paris, Version 5). Following some ideas by R.M. Faber and A. Singer, I simply made use of the inherent parallelism of the training set: each processor on the Connection Machine (there are at most 65536) evaluates the forward and backward propagation phase for one training pattern only. Thus the whole training set is evaluated in parallel and the training time does not depend on the size of this set any longer. Especially for large training sets this reduces the training time greatly. For example: I trained a network with 28 nodes, 133 links and 23 biases to approximate the differential equations for the pole balancing task adopted from Anderson's dissertation. With a training set of 16384 patterns, using the conventional "strain" command, one learning epoch took about 110.6 seconds on a SUN 4/110 - the Connection Machine with this SUN on the frontend managed the same in 0.076 seconds. --> This reduces one week of exhaustive training to approximately seven minutes! (By parallelizing the networks themselves, similar acceleration can be achieved also with smaller training sets.) -------------- The source is written in C (Interface to Connection Machine: PARIS) and can easily be embedded into the PDP software package. All original functions of the simulator are left untouched - it is also still possible to use the extended version without a Connection Machine. If you want to have the source, please mail me! Sebastian Thrun, st at gmdzi.uucp You can also obtain the source via ftp: ftp 129.26.1.90 Name: anonymous Password: ftp> cd pub ftp> cd gmd ftp> get pdp-cm.c ftp> bye From marvit at hplpm.hpl.hp.com Tue Sep 18 17:58:12 1990 From: marvit at hplpm.hpl.hp.com (Peter Marvit) Date: Tue, 18 Sep 90 14:58:12 PDT Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009182158.AA20192@hplpm.hpl.hp.com> A fellow at lunch today asked a seemingly innocuous question. I am embarrassed to say I do not know the answer. I assume some theoretical work has been done on this subject, but I'm ignorant. So: Are neural networks Turing equivalent? Broadcast responses are fine, e-mail responses will be summarized. -Peter "Definitely non-Turing" From Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU Tue Sep 18 19:02:08 1990 From: Dave.Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave.Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Tue, 18 Sep 90 19:02:08 EDT Subject: Are Neural Nets Turing Equivalent? In-Reply-To: Your message of Tue, 18 Sep 90 14:58:12 -0700. <9009182158.AA20192@hplpm.hpl.hp.com> Message-ID: <8770.653698928@DST.BOLTZ.CS.CMU.EDU> The Turing equivalence question has come up on this list before. Here's a simple answer: No finite machine is Turing equivalent. This rules out any computer that physically exists in the real world. You can make non-finite neural nets by assuming, say, numbers with unbounded precision. Jordan Pollack showed in his thesis how to encode a Turing machine tape as two binary fractions, each of which was an activation value of a "neuron". This is no more ridiculous than assuming a tape of unbounded length. If you are willing to allow nets to have an unbounded number of units, then you can use finite precision units to simulate the tape and perhaps build a Turing machine that way; it would depend on whether you view the wiring scheme of the infinite neural net as having a finite or infinite description. (Classical Turing machines have a finite description because you don't have to specify each square of the infinite tape individually.)
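[Aside: for readers who have not seen Pollack's construction, the sketch below is one purely illustrative way - not his actual network - to picture a binary tape packed into two numbers in [0,1), one per side of the head, each treated as a stack of bits. The C double here merely stands in for the unbounded-precision activation value the argument assumes.]

/* Each tape half is the binary fraction 0.b1b2b3..., with b1 the symbol nearest the head. */
#include <stdio.h>

double push(double stack, int b) {          /* write a bit onto the top of a stack */
    return (stack + b) / 2.0;
}

double pop(double stack, int *b) {          /* read and remove the top bit */
    double two = 2.0 * stack;
    *b = (two >= 1.0) ? 1 : 0;              /* a threshold decision */
    return two - *b;
}

int main(void) {
    double left = 0.0, right = 0.0;         /* blank tape on both sides of the head */
    int bit;
    /* Write 1, 0, 1 while moving right: written symbols pile up on the left half. */
    left = push(left, 1);
    left = push(left, 0);
    left = push(left, 1);
    /* Move the head one square left: the top of the left stack crosses to the right. */
    left = pop(left, &bit);
    right = push(right, bit);
    printf("left = %f, right = %f, last bit read = %d\n", left, right, bit);
    return 0;
}

(Head moves are just pops from one fraction and pushes onto the other, which is why exact, unbounded-precision values are doing all the work in the construction.)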
If you view the tape as external to the Turing machine, then all that's left inside is a finite state automaton, and those can easily be implemented with neural nets. -- Dave From sun at umiacs.UMD.EDU Tue Sep 18 21:24:40 1990 From: sun at umiacs.UMD.EDU (Guo-Zheng Sun) Date: Tue, 18 Sep 90 21:24:40 -0400 Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009190124.AA00328@neudec.umiacs.UMD.EDU> Recently, we studied the computability of neural nets and proved several theorems. Basically, the results are that (1) Given an arbitrary Turing machine there exists a uniform recurrent neural net with second-order connection weights which can simulate it. (2) Therefore, neural nets can simulate universal Turing machines. The preprint will be available soon. Guo-Zheng Sun Institute for Advanced Computer Studies University of Maryland From sontag at control.rutgers.edu Tue Sep 18 21:38:13 1990 From: sontag at control.rutgers.edu (sontag@control.rutgers.edu) Date: Tue, 18 Sep 90 21:38:13 BST Subject: Reference to article on Neural Nets for cancer diagnosis Message-ID: <9009190138.AA13055@control.rutgers.edu> People on this list may be interested in reading the latest (September) SIAM News. The leading front-page article is about cancer diagnosis via neural nets (the title says "linear programming", but the text explains the relation to nn's). The method appears to be extremely successful, almost 100% accurate for breast cancer. An outline of the algorithm is as follows (as I understood it from the article): if the data is not linearly separable, then first sandwich the intersection of the convex hulls of the training data between two hyperplanes. Ignore the rest (already separated), restrict to this sandwich region, and iterate. The authors prove (in the references) that this gives a polynomial time algorithm (essentially using LP for each sandwich construction; the "size" is unclear), presumably under the assumption that a polyhedral boundary exists. The references are to various papers in SIAM publications and IEEE/IT. The authors are Olvi L. Mangasarian (olvi at cs.wisc.edu) from the Math and CS depts at Madison, and W.H. Wolberg from the Wisconsin medical school. -eduardo From harnad at clarity.Princeton.EDU Wed Sep 19 11:04:46 1990 From: harnad at clarity.Princeton.EDU (Stevan Harnad) Date: Wed, 19 Sep 90 11:04:46 EDT Subject: Anderson/Cognition: BBS Call for Commentators Message-ID: <9009191504.AA15649@psycho.Princeton.EDU> Below is the abstract of a forthcoming target article to appear in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal that provides Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator on this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you are selected as a commentator. ____________________________________________________________________ IS HUMAN COGNITION ADAPTIVE? John R.
Anderson Psychology Department Carnegie Mellon University Pittsburgh, PA 15213-3890 ABSTRACT: Can the output of human cognition be predicted from the assumption that it is an optimal response to the information-processing demands of the environment? A methodology called rational analysis is described for deriving predictions about cognitive phenomena using optimization assumptions. The predictions flow from the statistical structure of the environment and not the assumed structure of the mind. Bayesian inference is used, assuming that people start with a weak prior model of the world which they integrate with experience to develop stronger models of specific aspects of the world. Cognitive performance maximizes the difference between the expected gain and cost of mental effort. (1) Memory performance can be predicted on the assumption that retrieval seeks a maximal trade-off between the probability of finding the relevant memories and the effort required to do so; in (2) categorization performance there is a similar trade-off between accuracy in predicting object features and the cost of hypothesis formation; in (3) causal inference the trade-off is between accuracy in predicting future events and the cost of hypothesis formation; and in (4) problem solving it is between the probability of achieving goals and the cost of both external and mental problem-solving search. The implementation of these rational prescriptions in a neurally plausible architecture is also discussed. ------------------ A draft is retrievable by anonymous ftp from princeton.edu in directory /ftp/pub/harnad as compressed file anderson.article.Z Retrieve using "binary". Use scribe to print. This can't be done from Bitnet directly, but there is a fileserver called bitftp at pucc.bitnet that will do it for you. Send it the one-line message: help. The file must be uncompressed after receipt. From pollack at cis.ohio-state.edu Wed Sep 19 10:47:30 1990 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Wed, 19 Sep 90 10:47:30 -0400 Subject: Are Neural Nets Turing Equivalent? In-Reply-To: Guo-Zheng Sun's message of Tue, 18 Sep 90 21:24:40 -0400 <9009190124.AA00328@neudec.umiacs.UMD.EDU> Message-ID: <9009191447.AA05299@dendrite.cis.ohio-state.edu> In my 1987 dissertation, as Touretzky pointed out, I assumed rational output values between 0 and 1 for two neurons in order to represent an unbounded binary tape. Besides that assumption, the construction of the "neuring machine" (sorry) required linear combinations, thresholds, and multiplicative connections. Linear combinations are subsumed by multiplicative connections and a bias unit. Without thresholds you can't make a decision, and without multiplicative connections, you can't (efficiently) gate rational values, which is necessary for moving in both directions on the tape. Proofs of computability should not necessarily be used as architectures to build upon further (which I think a few people misunderstood my thesis to imply), but as an indication of what collection of primitives are necessary in a machine. One wouldn't want to build a theoretical stored program computer without some sort of conditional branch, or a practical stored program computer without a connection between program and data memory. I took this result to argue that higher-order connections are crucial to general purpose neural-style computation. It is interesting to note that GZ Sun's theorem involves second-order connection weights. It probably involves thresholds as well.
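[Aside: a toy illustration of the gating point above, with weights of my own choosing rather than anything from the dissertation: given a bias input clamped to 1, a single second-order unit can pass through either of two analog values under the control of a 0/1 signal, which is the kind of routing needed to move tape contents in either direction.]

/* One second-order unit: output = sum_jk w[j][k] * x[j] * x[k].
   Inputs: x[0] = bias clamped to 1, x[1] = gate g in {0,1}, x[2] = a, x[3] = b. */
#include <stdio.h>

double second_order_unit(const double x[4], const double w[4][4]) {
    double out = 0.0;
    for (int j = 0; j < 4; j++)
        for (int k = 0; k < 4; k++)
            out += w[j][k] * x[j] * x[k];
    return out;
}

int main(void) {
    double w[4][4] = {{0}};
    w[1][2] =  1.0;                           /* contributes  g*a  */
    w[0][3] =  1.0;                           /* contributes  1*b  */
    w[1][3] = -1.0;                           /* contributes -g*b  */
    double pass_a[4] = {1.0, 1.0, 0.7, 0.3};  /* gate = 1          */
    double pass_b[4] = {1.0, 0.0, 0.7, 0.3};  /* gate = 0          */
    printf("gate=1 -> %.2f\n", second_order_unit(pass_a, w));   /* prints 0.70 (= a) */
    printf("gate=0 -> %.2f\n", second_order_unit(pass_b, w));   /* prints 0.30 (= b) */
    return 0;
}

(The output is g*a + (1-g)*b; a purely first-order unit with a threshold cannot pass an arbitrary analog value through conditionally in this way.)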
I temporarily posted a revised version of the chapter of my thesis in neuroprose, as pollack.neuring.ps.Z Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Fax/Phone: (614) 292-4890 From elman at amos.ucsd.edu Wed Sep 19 11:59:49 1990 From: elman at amos.ucsd.edu (Jeff Elman) Date: Wed, 19 Sep 90 08:59:49 PDT Subject: job announcement: UCSD Cognitive Science Message-ID: <9009191559.AA10400@amos.ucsd.edu> Assistant Professor Cognitive Science UNIVERSITY OF CALIFORNIA, SAN DIEGO The Department of Cognitive Science at UCSD expects to receive permission to hire one person at the assistant professor level (tenure-track). We seek someone whose interests cut across conventional disciplines. The Department takes a broadly based approach covering experimental, theoretical, and computational investigations of the biological basis of cognition, cognition in individuals and social groups, and machine intelligence. Candidates should send a vita, reprints, a short letter describing their background and interests, and names and addresses of at least three references to: UCSD Search Committee/Cognitive Science 0515e 9500 Gilman Dr. La Jolla, CA 92093-0515-e Applications must be received prior to January 15, 1991. Salary will be commensurate with experience and qualifications, and will be based upon UC pay schedules. Women and minorities are especially encouraged to apply. The University of California, San Diego is an Affirmative Action/Equal Opportunity Employer. From FRANKLINS%MEMSTVX1.BITNET at VMA.CC.CMU.EDU Wed Sep 19 17:01:00 1990 From: FRANKLINS%MEMSTVX1.BITNET at VMA.CC.CMU.EDU (FRANKLINS%MEMSTVX1.BITNET@VMA.CC.CMU.EDU) Date: Wed, 19 Sep 90 16:01 CDT Subject: Are Neural Nets Turing Equivalent? Message-ID: Here are some additional references on the "Turing equivalence of neural networks question". The term "neural network" will refer to networks that are discrete in time and in activation values. In their first paper, McCulloch and Pitts showed that logical gates can easily be simulated by threshold networks. They also claimed, but did not prove, Turing equivalence. W.S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity", Bull. Math. Biophys. 5(1943) 115--133. Hartley and Szu noted that finite neural networks were computationally equivalent to finite state machines. They also asserted Turing equivalence of potentially infinite (unbounded) neural networks and sketched a proof that a Turing machine can be simulated by a neural network. R. Hartley and H. Szu, "A Comparison of the Computational Power of Neural Network Models", in Proc. IEEE First International Conference on Neural Networks (1987) III 17--22. Max Garzon and I gave a detailed description of a neural network simulation of an arbitrary Turing machine. The network would stabilize if and only if the Turing machine halts. Thus the stability problem for neural networks turns out to be Turing unsolvable. One could argue, I think, that our unbounded neural network simulation of a Turing machine even has a finite description. Stan Franklin and Max Garzon, "Neural Computability" in O. M. Omidvar, ed., Progress In Neural Networks, vol 1, Ablex, Norwood NJ, 1990. Unbounded neural networks (without finite descriptions) are strictly more powerful than Turing machines. Such a beast, if there were one, could solve the halting problem, for example, by essentially reducing it to a lookup table. 
But neural networks are computationally equivalent to cellular automata for graphs of finite bandwidth. Max and I proved this using a universal neural network. Max Garzon and Stan Franklin, "Computation on graphs", in O. M. Omidvar, ed., Progress in Neural Networks, vol 2, Ablex, Norwood NJ, 1990, to appear. Max Garzon and Stan Franklin, "Neural computability II", Proc. 3rd Int. Joint. Conf. on Neural Networks, Washington, D.C. 1989 I, 631-637 Stan Franklin Math Sciences Memphis State Memphis TN 38152 BITNET:franklins at memstvx1 From peterc at chaos.cs.brandeis.edu Thu Sep 20 03:17:05 1990 From: peterc at chaos.cs.brandeis.edu (Peter Cariani) Date: Thu, 20 Sep 90 03:17:05 edt Subject: Are Neural Nets Turing Equivalent? In-Reply-To: Peter Marvit's message of Tue, 18 Sep 90 14:58:12 PDT <9009182158.AA20192@hplpm.hpl.hp.com> Message-ID: <9009200717.AA09682@chaos.cs.brandeis.edu> Dear Peter "Definitely non-Turing", When you say "Are neural networks Turing equivalent?" are you talking about strictly finite neural networks (finite # elements, finite & discrete state sets for each element) or are you allowing for potentially-infinite neural networks (indefinitely extendible # elements and/or state sets)? The first I think are equivalent to finite state automata (or Turing machines with fixed, finite tapes) while the second would be equivalent to Turing machines with potentially infinite tapes. I would argue that potentially infinite tapes are purely Platonic constructions; no physically realized (not to mention humanly usable) automaton can have an indefinitely- extendible tape and operate without temporal bounds (i.e. the stability of the physical computational device, the lifespan of the human observer(s)). For this reason, it could be argued that potentially-infinite automata (and the whole realm of computability considerations) really have no relevance to real-world computational problems, whereas finite automata and computational complexity (including speed & reliability issues) have everything to do with real-world computation. Does anyone have an example where computability issues (Godel's Proof, for example) have any bearing whatsoever on the problems we daily encounter with our finite machines? Do computability considerations in any way constrain what we can (or cannot) do beyond those already imposed by finite memory, limited processing speed, and imperfectly reliable elements? -Peter "Definitely non-Turing, but possibly for different reasons" P.S. If we consider neural nets as physical adaptive devices rather than purely formal constructions (as in the early Perceptrons and Sceptrons, which actually had real sensors attached to the computational part), then there are contingent measurement processes, which, strictly speaking are not formal operations. Turing machines, finite or potentially infinite, simply don't sense anything beyond what's already on their tapes and/or in their state-transition tables, while robotically implemented neural nets operate contingent upon (often unpredictable) external events and circumstances. From uhr at cs.wisc.edu Thu Sep 20 13:26:34 1990 From: uhr at cs.wisc.edu (Leonard Uhr) Date: Thu, 20 Sep 90 12:26:34 -0500 Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009201726.AA00355@thor.cs.wisc.edu> Peter Cariani's description of the equivalence of NN to finite state automata, but to Turing machines when given a potentially infinite ability (either the traditional memory tape or unbounded processors) is good, and nicely simple. 
Two comments: NN processors execute directly on what flows into them; TMs interpret processes stored on the memory tape - so it's more natural to think of NNs with potentially infinite numbers of processors, rather than memory plus the processors now interpreting. You can add sensors to a TM as well as an NN - and will need to for the same reasons. As soon as you actually realize a "potentially infinite" TM you must give it sensors that in effect make the real world the tape (e.g., TV, with motors to move it from place to place). So there's really no difference. Len Uhr From ANDERSON%BROWNCOG.BITNET at mitvma.mit.edu Fri Sep 21 15:00:00 1990 From: ANDERSON%BROWNCOG.BITNET at mitvma.mit.edu (ANDERSON%BROWNCOG.BITNET@mitvma.mit.edu) Date: Fri, 21 Sep 90 15:00 EDT Subject: Technical Report Message-ID: A technical report is available: "Why, having so many neurons, do we have so few thoughts?" Technical Report 90-1 Brown University Department of Cognitive and Linguistic Sciences James A. Anderson Department of Cognitive and Linguistic Sciences Box 1978 Brown University Providence, RI 02912 This is a chapter to appear in: Relating Theory and Data Edited by W.E. Hockley and S. Lewandowsky, Hillsdale, NJ: Erlbaum (LEA) Abstract Experimental cognitive psychology often involves recording two quite distinct kinds of data. The first is whether the computation itself is done correctly or incorrectly, and the second is how long it took to get an answer. Neural network computations are often loosely described as being `brain-like.' This suggests that it might be possible to model experimental reaction time data simply by seeing how long it takes for the network to generate the answer, and error data by looking at the computed results in the same system. Simple feedforward nets usually do not give direct computation time data. However, network models realizing dynamical systems can give `reaction times' directly by noting the time required for the network computation to be completed. In some cases genuine random processes are necessary to generate differing reaction times, but in other cases deterministic, noise-free systems can also give distributions of reaction times. This report can be obtained by sending an email message to: LI700008 at brownvm.BITNET or anderson at browncog.BITNET and asking for Cognitive Science Technical Report 90-1 on reaction times, or by sending a note by regular mail to the address above. From sun at umiacs.UMD.EDU Thu Sep 20 20:39:54 1990 From: sun at umiacs.UMD.EDU (Guo-Zheng Sun) Date: Thu, 20 Sep 90 20:39:54 -0400 Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009210039.AA01890@neudec.umiacs.UMD.EDU> In addition to Peter Cariani's description, I would like to make a short comment: Is any finite state machine with a potentially unlimited number of states (e.g. a recurrent neural net state machine with potentially unbounded precision) equivalent to a Turing machine? The answer is certainly "NO", because the classical definition of a Turing machine requires both an infinite tape and a set of processing rules with finite description. Therefore, whether we say a neural net is equivalent to a finite automaton with a potentially unlimited number of states or equivalent to a Turing machine depends on whether we can find a set of transition rules with finite description (or, in Touretzky's words, a "wiring schedule" with finite description).
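Sun's criterion is easy to illustrate: a recurrent unit's transition rule can have a fixed, finite description even though, at unbounded precision, its reachable state set grows without bound. A small sketch with illustrative constants of my own (not from Sun's work):

from fractions import Fraction
from itertools import product

# One recurrent unit with the finitely described update rule
#     x[t+1] = 0.5 * x[t] + 0.5 * u[t]
# reaches a different state for every distinct binary input history when run
# at unbounded precision, yet the rule itself is just two rational constants.
def run(inputs, x=Fraction(0)):
    for u in inputs:
        x = Fraction(1, 2) * x + Fraction(1, 2) * u
    return x

for n in range(1, 6):
    states = {run(seq) for seq in product([0, 1], repeat=n)}
    print(f"input length {n}: {len(states)} distinct reachable states")
# Prints 2, 4, 8, 16, 32: the state set grows without bound while the
# transition rule stays the same two weights.

An unbounded state set by itself only gives an infinite-state automaton; the question raised above is whether the rules that exploit that state can be written down finitely, as they can here.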
Guo-Zheng Sun From bmb at Think.COM Thu Sep 20 21:01:26 1990 From: bmb at Think.COM (bmb@Think.COM) Date: Thu, 20 Sep 90 21:01:26 EDT Subject: Are Neural Nets Turing Equivalent? Message-ID: <9009210101.AA01186@regin.think.com> As I recall, another interesting point that somebody (I don't remember who it was) brought up in our last discussion about this topic is the possibility of a difference in the capabilities of analog and digital implementations of neural nets (or of any other type of computation for that matter). This speculation is based on the work of Pour-El and Richards at the Department of Mathematics at the University of Minnesota. The general idea is as follows: Real numbers that can be produced by a Turing machine are called "computable." There must be a countable number of these, since there is a countable number of Turing machines. (Note: Since the real numbers are uncountable, there's clearly lots more of them than there are computable numbers.) Now, Pour-El and Richards showed that the wave equation, with computable initial conditions, evolving for a computable length of time, can give rise to noncomputable values of the dependent variable. This, of course, makes one wonder whether or not the same is true for other continuum equations of mathematical physics, such as those that govern the passage of signals through wires (basically the wave equation with complicated boundary conditions and other bells and whistles) and semiconductors (ditto plus some nonlinearities). Since analog computation takes place in the presumably continuous real world where physical processes are governed by continuum equations, whereas digital circuitry goes through great pains to wash out all but the "binariness" of signals, one might conclude that there is a possibility that analog circuitry can do things that digital circuitry can't. This conclusion is far from clear, but I'd say it's definitely worth thinking about. In "The Emperor's New Mind," Penrose cites the above work in his attack on Strong AI (since it seems to imply that there is a stronger notion of computability than Turing equivalence). In any event, here's the reference in case anybody's interested: M. Pour-El, I. Richards, Advances in Mathematics, Vol. 39, pp. 215-239 (1981). Bruce Boghosian Thinking Machines Corporation bmb at think.com From CFoster at cogsci.edinburgh.ac.uk Fri Sep 21 11:14:26 1990 From: CFoster at cogsci.edinburgh.ac.uk (CFoster@cogsci.edinburgh.ac.uk) Date: Fri, 21 Sep 90 11:14:26 BST Subject: Are Neural Nets Turing Equivalent Message-ID: <17705.9009211014@scott.cogsci.ed.ac.uk> Given the discussion thus far, and in particular Dave Touretzky and Peter Cariani's comments on the discrepancy between theoretical computer science and inherently finite real cognitive and computational systems, you may be interested in the following. I have just submitted my Ph.D. thesis 'Algorithms, Abstraction and Implementation: A Massively Multilevel Theory of Strong Equivalence of Complex Systems'. It is a formalisation of a notion of algorithms that can be used across languages, hardware and architectures (notably connectionist and classical -- this was my starting point), and that can ground a stronger equivalence than just input-output (weak) equivalence. I spell this out as equivalence in terms of states which systems pass through -- at a level of description. The main point is this: I started with the assumption of finiteness. Algorithms are defined as finite sets of finite sequences of finite states. 
In trying to relate them to Turing computability, I ended up characterising the class of functions computed (or at least described) by them as 'bounded computable functions' or, to put it another way, as the class of functions computed by 'finite tape machines'. In contrast to some common usage, a finite tape machine (a Turing machine with a finite tape) is NOT the same as a finite state machine. The latter is generally defined over all integers and actually gets its constraints from restrictions on HOW it reads the infinite tape, not from its restriction to finitely many states at all. A general Turing machine only has finitely many states of course, so this cannot be the interesting distinction. Somewhat surprisingly, the super-finite bounded computable functions do not even seem to be a subset of computable functions, but possibly of partially computable functions. This is because any inputs not defined for the finite tape machine may actually be defined and cause the system to halt with a sensible output at a lower level of description (depending on the implementation), but then again they may not be defined there either. We just don't know what happens to them. This is quite similar to the case for unexpected inputs to actual computer systems. C. Foster From GOLDFARB%UNB.CA at VMA.CC.CMU.EDU Thu Sep 27 16:28:09 1990 From: GOLDFARB%UNB.CA at VMA.CC.CMU.EDU (GOLDFARB%UNB.CA@VMA.CC.CMU.EDU) Date: Thu, 27 Sep 90 17:28:09 ADT Subject: Turing Machines and New Reconfigurable Learning Machines Message-ID: In connection with the discussion on the relation between Turing machines (TM) and neural nets (NN), I would like to draw your attention to my recent paper in Pattern Recognition, Vol. 23, No. 6 (June 1990), pp. 595-616, "On the Foundations of Intelligent Processes I: An Evolving Model for Pattern Learning" (as well as several other submitted papers). In it I proposed a finite model of a learning machine (LM) which could be viewed as a far-reaching "symbolic" generalization of the NN and which is more powerful than any other known finite machine (including the NN). This learning power is achieved only because the LM embodies the first model of a reconfigurable machine, i.e., a machine that can learn any set of classes by modifying its set of operations. In retrospect, the idea of the reconfigurable machine appears to be quite natural and the only realistic alternative if we want a finite machine to achieve potentially unbounded learning capabilities in some infinite environment. The new model is quite different from the known NNs, and I expect some may not see any similarity immediately. NNs operate on vectors, while the LM can operate on any chosen pattern representation (vectors, strings, trees, etc.; discrete or continuous). NNs have essentially two groups of numeric operations: the accumulating additive operations and the transmitting nonlinear operations (the second group does not "actively" participate in the learning, since no weights are associated with these operations). In addition, the structure (global and local) of the NN imposes restrictions on the variety of metric point transformations that can be realized by each layer as well as by the NN itself (think of each layer as changing the metric structure of the input vector space; see section 5.5 of Y.-H. Pao, "Adaptive Pattern Recognition and Neural Networks", Addison-Wesley, 1989). In the proposed model the operations are not necessarily numeric operations but rather they correspond to the chosen pattern representation.
During learning, to find the best separation of the training classes, the LM seeks an optimal weighting scheme for its current set of operations. If the current set of operations is not sufficient, new operations, formed as the compositions of the current operations, are gradually introduced (with the help of several functions defined on the current space of weights) and the optimization process is repeated. Even for vector patterns, one of the most important distinctions between the NN and the LM is that the operations realized by the nodes of the NN are connected in a fixed manner while the operations of the LM are decoupled (to the extent that they have to be decoupled) and they may grow in number as the need arises during learning. There is a strong evidence that the learning stage for the LM always converges, is much more efficient, and produces a much smaller machine than the NN. From jose at learning.siemens.com Fri Sep 28 08:04:54 1990 From: jose at learning.siemens.com (Steve Hanson) Date: Fri, 28 Sep 90 08:04:54 EDT Subject: NIPS PROGRAM --Correction Message-ID: <9009281204.AA09239@learning.siemens.com.siemens.com> We had inadvertently excluded some of the posters from the preliminary program. We apologize for any confusion that may have caused. --Steve Hanson Below is a complete and correct version of the NIPS preliminary program. ------------------------------------------- NIPS 1990 Preliminary Program, November 26-29, Denver, Colorado Monday, November 26, 1990 12:00 PM: Registration Begins 6:30 PM: Reception and Conference Banquet 8:30 PM: After Banquet Talk, "Cortical Memory Systems in Humans", by Antonio Damasio. Tuesday, November 27, 1990 7:30 AM: Continental Breakfast 8:30 AM: Oral Session 1: Learning and Memory 10:30 AM: Break 11:00 AM: Oral Session 2: Navigation and Planning 12:35 PM: Poster Preview Session I, Demos 2:30 PM: Oral Session 3: Temporal and Real Time Processing 4:10 PM: Break 4:40 PM: Oral Session 4: Representation, Learning, and Generalization I 6:40 PM: Free 7:30 PM: Refreshments and Poster Session I Wednesday, November 28, 1990 7:30 AM: Continental Breakfast 8:30AM: Oral Session 5: Visual Processing 10:20 AM: Break 10:50 AM: Oral Session 6: Speech Processing 12:20 PM: Poster Preview Session II, Demos 2:30 PM: Oral Session 7: Representation, Learning, and Generalization II 4:10 PM: Break 4:40 PM: Oral Session 8: Control 6:40 PM: Free 7:30 PM: Refreshments and Poster Session II Thursday, November 29, 1990 7:30 AM: Continental Breakfast 8:30 AM: Oral Session 9: Self-Organization and Unsupervised Learning 10:20 AM: Break 10:50 AM: Session Continues 12:10 PM: Conference Adjourns 5:00 PM Reception and Registration for Post-Conference Workshop (Keystone, CO) Friday, November 30 -- Saturday, December 1, 1990 Post-Conference Workshops at Keystone ------------------------------------------------------------------------------ ORAL PROGRAM Monday, November 26, 1990 12:00 PM: Registration Begins 6:30 PM: Reception and Conference Banquet 8:30 PM: After Banquet Talk, "Cortical Memory Systems in Humans", by Antonio Damasio. Tuesday, November 27, 1990 7:30 AM: Continental Breakfast ORAL SESSION 1: LEARNING AND MEMORY Session Chair: John Moody, Yale University. 8:30 AM: "Multiple Components of Learning and Memory in Aplysia", by Thomas Carew. 9:00 AM: "VLSI Implementations of Learning and Memory Systems: A Review", by Mark Holler. 9:30 AM: "A Short-Term Memory Architecture for the Learning of Morphophonemic Rules", by Michael Gasser and Chan-Do Lee. 
9:50 AM "Short Term Active Memory: A Recurrent Network Model of the Neural Mechanism", by David Zipser. 10:10 AM "Direct Memory Access Using Two Cues: Finding the Intersection of Sets in a Connectionist Model", by Janet Wiles, Michael Humphreys and John Bain. 10:30 AM Break ORAL SESSION 2: NAVIGATION AND PLANNING Session Chair: Lee Giles, NEC Research. 11:00 AM "Real-Time Autonomous Robot Navigation Using VLSI Neural Networks", by Alan Murray, Lionel Tarassenko and Michael Brownlow. 11:20 AM "Planning with an Adaptive World Model" by Sebastian B. Thrun, Knutt Moller and Alexander Linden . 11:40 AM "A Connectionist Learning Control Architecture for Navigation", by Jonathan Bachrach. 12:00 PM Spotlight on Language: Posters La1 and La3. 12:10 PM Spotlight on Applications: Posters App1, App6, App7, App10, and App11. 12:35 PM Poster Preview Session I, Demos ORAL SESSION 3: TEMPORAL AND REAL TIME PROCESSING Session Chair: Josh Alspector, Bellcore 2:30 PM "Learning and Adaptation in Real Time Systems", by Carver Mead. 3:00 PM "Applications of Neural Networks in Video Signal Processing", by John Pearson. 3:30 PM "Predicting the Future: A Connectionist Approach", by Andreas S. Weigend, Bernardo Huberman and David E. Rumelhart. 3:50 PM "Algorithmic Musical Composition with Melodic and Stylistic Constraints", by Michael Mozer and Todd Soukup. 4:10 PM Break ORAL SESSION 4: REPRESENTATION, LEARNING, AND GENERALIZATION I Session Chair: Gerry Tesauro, IBM Research Labs. 4:40 PM "An Overview of Representation and Convergence Results for Multilayer Feedforward Networks", by Hal White . 5:10 PM "A Simplified Linear-Threshold-Based Neural Network Pattern Classifier", by Terrence L. Fine. 5:30 PM "A Novel approach to predicition of the 3-dimensional structures of protein backbones by neural networks", by H. Bohr, J. Bohr, S. Brunak, R.M.J. Cotterill, H. Fredholm, B. Lautrup and S.B. Petersen. 5:50 PM "On the Circuit Complexity of Neural Networks", by Vwani Roychowdhury, Kai- Yeung Siu, Alon Orlitsky and Thomas Kailath . 6:10 PM Spotlight on Learning and Generalization: Posters LG2, LG3, LG8, LS2, LS5, and LS8. 6:40 PM Free 7:30 PM Refreshments and Poster Session I Wednesday, November 28, 1990 7:30 AM Continental Breakfast ORAL SESSION 5: VISUAL PROCESSING Session Chair: Yann Le Cun, AT&T Bell Labs 8:30 AM "Neural Dynamics of Motion Segmentation", by Ennio Mingolla. 9:00 AM "VLSI Implementation of a Network for Color Constancy", by Andrew Moore, John Allman, Geoffrey Fox and Rodney Goodman. 9:20 AM "Optimal Filtering in the Salamander Retina", by Fred Rieke, Geoffrey Owen and William Bialek. 9:40 AM "Grouping Contour Elements Using a Locally Connected Network", by Amnon Shashua and Shimon Ullman. 10:00 AM Spotlight on Visual Motion Processing: Posters VP3, VP6, VP9, and VP12. 10:20 AM Break ORAL SESSION 6: SPEECH PROCESSING Session Chair: Richard Lippmann, MIT Lincoln Labs 10:50 AM "From Speech Recognition to Understanding: Development of the MIT, SUMMIT, and VOYAGER Systems", by James Glass. 11:20 PM "Speech Recognition using Connectionist Approaches", by K.Chouki, S. Soudoplatoff, A. Wallyn, F. Bimbot and H. Valbret. 11:40 AM "Continuous Speech Recognition Using Linked Predictive Neural Networks", by Joe Tebelskis, Alex Waibel and Bojan Petek. 12:00 PM Spotlight on Speech and Signal Processing: Posters Sig1, Sig2, Sp2, and Sp7. 12:20 PM Poster Preview Session II, Demos ORAL SESSION 7: REPRESENTATION, LEARNING AND GENERALIZATION II Session Chair: Steve Hanson, Siemens Research. 
2:30 PM "Learning and Understanding Functions of Many Variables Through Adaptive Spline Networks", by Jerome Friedman. 3:00 PM "Connectionist Modeling of Generalization and Classification", by Roger Shepard. 3:30 PM "Bumptrees for Efficient Function, Constraint, and Classification Learning", by Stephen M.Omohundro. 3:50 PM "Generalization Properties of Networks using the Least Mean Square Algorithm", by Yves Chauvin. 4:10 PM Break ORAL SESSION 8: CONTROL Session Chair: David Touretzky, Carnegie-Mellon University. 4:40 PM "Neural Network Application to Diagnostics and Control of Vehicle Control Systems", by Kenneth Marko. 5:10 PM "Neural Network Models Reveal the Organizational Principles of the Vestibulo- Ocular Reflex and Explain the Properties of its Interneurons", by T.J. Anastasio. 5:30 PM "A General Network Architecture for Nonlinear Control Problems", by Charles Schley, Yves Chauvin, Van Henkle and Richard Golden. 5:50 PM "Design and Implementation of a High Speed CMAC Neural Network Using Programmable CMOS Logic Cell Arrays", by W. Thomas Miller, Brain A. Box, Erich C. Whitney and James M. Glynn. 6:10 PM Spotlight on Control: Posters CN2, CN6, and CN7. 6:25 PM Spotlight on Oscillations: Posters Osc1, Osc2, and Osc3. 6:40 PM Free 7:30 PM Refreshments and Poster Session II Thursday, November 29, 1990 7:30 AM Continental Breakfast ORAL SESSION 9: SELF ORGANIZATION AND UNSUPERVISED LEARNING Session Chair: Terry Sejnowki, The Salk Institute. 8:30 AM "Self-Organization in a Developing Visual Pattern", by Martha Constantine-Paton. 9:00 AM "Models for the Development of Eye-Brain Maps", by Jack Cowan. 9:20 AM "VLSI Implementation of TInMANN", by Matt Melton, Tan Pahn and Doug Reeves. 9:40 AM "Fast Adaptive K-Means Clustering", by Chris Darken and John Moody. 10:00 AM "Learning Theory and Experiments with Competitive Networks", by Griff Bilbro and David Van den Bout. 10:20 AM Break 10:50 AM "Self-Organization and Non-Linear Processing in Hippocampal Neurons", by Thomas H. Brown, Zachary Mainen, Anthony Zador and Brenda Claiborne. 11:10 AM "Weight-Space Dynamics of Recurrent Hebbian Networks", by Todd K. Leen. 11:30 AM "Discovering and Using the Single Viewpoint Constraint", by Richard S. Zemel and Geoffrey Hinton. 11:50 AM "Task Decompostion Through Competition in A Modular Connectionist Architecture: The What and Where Vision Tasks", by Robert A. Jacobs, Michael Jordan and Andrew Barto. 12:10 PM Conference Adjourns 5:00 PM Post-Conference Workshop Begins (Keystone, CO) ------------------------------------------------------------------------------ POSTER PROGRAM POSTER SESSION I Tuesday, November 27 (* denotes poster spotlight) APPLICATIONS App1* "A B-P ANN Commodity Trader", by J.E. Collard. App2 "Analog Neural Networks as Decoders", by Ruth A. Erlanson and Yaser Abu- Mostafa. App3 "Proximity Effect Corrections in Electron Beam Lithography Using a Neural Network", by Robert C. Frye, Kevin Cummings and Edward Rietman. App4 "A Neural Expert System with Automated Extraction of Fuzzy IF-THEN Rules and Its Application to Medical Diagnosis", by Yoichi Hayashi. App5 "Integrated Segmentation and Recognition of Machine and Hand--printed Characters", by James D. Keeler, Eric Hartman and Wee-Hong Leow. App6* "Training Knowledge-Based Neural Networks to Recognize Genes in DNA Sequences", by Michael O. Noordewier, Geoffrey Towell and Jude Shavlik. App7* "Seismic Event Identification Using Artificial Neural Networks", by John L. Perry and Douglas Baumgardt. 
App8 "Rapidly Adapting Artificial Neural Networks for Autonomous Navigation", by Dean A. Pomerleau. App9 "Sequential Adaptation of Radial Basis Function Neural Networks and its Application to Time-series Prediction", by V. Kadirkamanathan, M. Niranjan and F. Fallside. App10* "EMPATH: Face, Emotion, and Gender Recognition Using Holons", by Garrison W. Cottrell and Janet Metcalf. App11* "Sexnet: A Neural Network Identifies Sex from Human Faces", by B. Golomb, D. Lawrence and T.J. Sejnowski. EVOLUTION AND LEARNING EL1 "Using Genetic Algorithm to Improve Pattern Classification Performance", by Eric I. Chang and Richard P. Lippmann. EL2 "Evolution and Learning in Neural Networks: The Number and Distribution of Learning Trials Affect the Rate of Evolution", by Ron Kessing and David Stork. LANGUAGE La1* "Harmonic Grammar", by Geraldine Legendre, Yoshiro Miyata and Paul Smolensky. La2 "Translating Locative Prepostions", by Paul Munro and Mary Tabasko. La3* "Language Acquisition via Strange Automata", by Jordon B. Pollack. La4 "Exploiting Syllable Structure in a Connectionist Phonology Model", by David S. Touretzky and Deirdre Wheeler. LEARNING AND GENERALIZATION LG1 "Generalization Properties of Radial Basis Functions", by Sherif M.Botros and C.G. Atkeson. LG2* "Neural Net Algorithms That Learn In Polynomial Time From Examples and Queries", by Eric Baum. LG3* "Looking for the gap: Experiments on the cause of exponential generalization", by David Cohn and Geral Tesauro. LG4 "Dynamics of Generalization in Linear Perceptrons ", by A. Krogh and John Hertz. LG5 "Second Order Properties of Error Surfaces, Learning Time, and Generalization", by Yann LeCun, Ido Kanter and Sara Solla. LG6 "Kolmogorow Complexity and Generalization in Neural Networks", by Barak A. Pearlmutter and Ronal Rosenfeld. LG7 "Learning Versus Generalization in a Boolean Neural Network", by Johathan Shapiro. LG8* "On Stochastic Complexity and Admissible Models for Neural Network Classifiers", by Padhraic Smyth. LG9 "Asympotic slowing down of the nearest-neighbor classifier", by Robert R. Snapp, Demetri Psaltis and Santosh Venkatesh. LG10 "Remarks on Interpolation and Recognition Using Neural Nets", by Eduardo D. Sontag. LG11 "Epsilon-Entropy and the Complexity of Feedforward Neural Networks", by Robert C. Williamson. LEARNING SYSTEMS LS1 "Analysis of the Convergence Properties of Kohonen's LVQ", by John S. Baras and Anthony LaVigna. LS2* "A Framework for the Cooperation of Learning Algorithms", by Leon Bottou and Patrick Gallinari. LS3 "Back-Propagation is Sensitive to Initial Conditions", by John F. Kolen and Jordan Pollack. LS4 "Discovering Discrete Distributed Representations with Recursive Competitive Learning", by Michael C. Mozer. LS5* "From Competitive Learning to Adaptive Mixtures of Experts", by Steven J. Nowlan and Geoffrey Hinton. LS6 "ALCOVE: A connectionist Model of Category Learning", by John K. Kruschke. LS7 "Transforming NN Output Activation Levels to Probability Distributions", by John S. Denker and Yann LeCunn. LS8* "Closed-Form Inversion of Backropagation Networks: Theory and Optimization Issues", by Michael L. Rossen. LOCALIZED BASIS FUNCTIONS LBF1 "Computing with Arrays of Bell Shaped Functions Bernstein Polynomials and the Heat Equation", by Pierre Baldi. LBF2 "Function Approximation Using Multi-Layered Neural Networks with B-Spline Receptive Fields", by Stephen H. Lane, David Handelman, Jack Gelfand and Marshall Flax. LBF3 "A Resource-Allocating Neural Network for Function Interpolation" by John Platt. 
LBF4 "Adaptive Range Coding", by B.E. Rosen, J.M. Goodwin and J.J. Vidal. LBF5 "Oriented Nonradial Basis Function Networks for Image Coding and Analysis", by Avi Saha, Jim christian, D.S. Tang and Chuan-Lin Wu. LBF6 "A Tree-Structured Network for Approximation on High-Dimensional Spaces", by T. Sanger. LBF7 "Spherical Units as Dynamic Reconfigurable Consequential Regions and their Implications for Modeling Human Learning and Generalization", by Stephen Jose Hanson and Mark Gluck. LBF8 "Feedforward Neural Networks: Analysis and Synthesis Using Discrete Affine Wavelet Transformations", by Y.C. Pati and P.S. Krishnaprasad. LBF9 "A Network that Learns from Unreliable Data and Negative Examples", by Fredico Girosi, Tomaso Poggio and Bruno Caprile. LBF10 "How Receptive Field Parameters Affect Neural Learning", by Bartlett W. Mel and Stephen Omohundro. MEMORY SYSTEMS MS1 "The Devil and the Network: What Sparsity Implies to Robustness and Memory", by Sanjay Biswas and Santosh Venkatesh. MS2 "Cholinergic modulation selective for intrinsic fiber synapses may enhance associative memory properties of piriform cortex", by Michael E. Hasselmo, Brooke Anderson and James Bower. MS3 "Associative Memory in a Network of 'Biological' Neurons", by Wulfram Gerstner. MS4 "A Learning Rule for Guaranteed CAM Storage of Analog Patterns and Continuous Sequences in a Network of 3N^2 Weights", by William Baird. VLSI IMPLEMENTATIONS VLSI1 "A Highly Compact Linear Weight Function Based on the use of EEPROMs", by A. Krammer, C.K. Sin, R. Chu and P.K. Ko. VLSI2 "Back Propagation Implementation on the Adaptive Solutions Neurocomputer Chip", Hal McCartor. VLSI3 "Analog Non-Volatile VLSI Neural Network Chip and Back-Propagation Training", by Simon Tam, Bhusan Gupta, Hernan A. Castro and Mark Holler. VLSI4 "An Analog VLSI Splining Circuit", by D.B. Schwartz and V.K. Samalam. VLSI5 "Reconfigurable Neural Net Chip with 32k Connections", by H.P.Graf and D. Henderson. VLSI6 "Relaxation Networks for Large Supervised Learning Problems", by Joshua Alspector, Robert Allan and Anthony Jayakumare. POSTER SESSION II Wednesday, November 28 (* denotes poster spotlight) CONTROL AND NAVIGATION CN1 "A Reinforcement Learning Variant for Control Scheduling", by Aloke Guha. CN2* "Learning Trajectory and Force Control of an Artificial Muscle Arm by Parallel- Hierarchical Neural Network Model", by Masazumi Katayama and Mitsuo Kawato. CN3 "Identification and Control of a Queueing System with Neural Networks", by Rodolfo A. Milito, Isabelle Guyon and Sara Solla. CN4 "Conditioning And Spatial Learning Tasks", by Peter Dayan. CN5 "Reinforcement Learning in Non-Markovian Environments", by Jurgen Schmidhuber. CN6* "A Model for Distributed Sensorimotor Control of the Cockroach Escape Turn", by Randall D. Beer, Gary Kacmarcik, Roy Ritzman and Hillel Chiel. CN7* "Flight Control in the Dragonfly: A Neurobiological Simulation", by W.E. Faller and M.W. Luttges. CN8 "Integrated Modeling and Control Based on Reinforcement Learning and Dynamic Programming", by Richard S. Sutton. DEVELOPMENT Dev1 "Development of the Spatial Structure of Cortical Feature Maps: A Model Study", by K. Obermayer and H. Ritter and K. Schulten. Dev2 "Interaction Among Ocular Dominance, Retinotopic Order and On-Center/Off- Center Pathways During Development", by Shiqeru Tanaka. Dev3 "Simple Spin Models for the development of Ocular Dominance and Iso-Orientation Columns", by Jack Cowan. 
NEURODYNAMICS ND1 "Reduction of Order for Systems of Equations Describing the Behavior of Complex Neurons", by T.B.Kepler, L.F. Abbot and E. Marder. ND2 "An Attractor Neural Network Model of Recall and Recognition", by E. Ruppin, Y. Yeshurun. ND3 "Stochastic Neurodynamics", by Jack Cowan. ND4 "A Method for the Efficient Design of Boltzman Machines for Classification Problems", by Ajay Gupta and Wolfgang Maass. ND5 "Analog Neural Networks that are Parallel and Stable", by C.M. Marcus, F.R. Waugh and R.M. Westervelt. ND6 "A Lagrangian Approach to Fixpoints ", by Eric Mjolsness and Willard Miranker. ND7 "Shaping the State Space Landscape in Recurrent Networks", by Patrice Y. Simard, Jean Pierre Raysz and Bernard Victorri. ND8 "Adjoint-Operators and non-Adiabatic Learning Algorithms in Neural Networks", by N. Toomarian and J. Barhen. OSCILLATIONS Osc1* "Connectivity and Oscillations in Two Dimensional Models of Neural Populations", by Daniel M. Kammen, Ernst Niebur and Christof Koch. Osc2* "Oscillation Onset in Neural Delayed Feedback", by Andre Longtin. Osc3* "Analog Computation at a Critical Point: A Novel Function for Neuronal Oscillations? ", by Leonid Kruglyak. PERFORMANCE COMPARISONS PC1 "Comparison of three classification techniques, Cart, C4.5 and multi-layer perceptions", by A.C. Tsoi and R.A. Pearson. PC2 "A Comparative Study of the Practical Characteristics of Neural Network and Conventional Pattern Classifiers", by Kenny Ng and Richard Lippmann. PC3 "Time Trials on Second-Order and Variable-Learning-Rate Algorithms", by Richard Rohwer. PC4 "Kohonen Networks and Clustering: Comparative Performance in Color Clusterng", by Wesley Snyder, Daniel Nissman, David Van den Bout and Griff Bilbro. SIGNAL PROCESSING Sig1* "Natural Dolphin Echo Recognition Using An Integrator Gateway Network", by H. L. Roitblat, P.W.B. Moore, R.H. Penner and P.E. Nachtigall. Sig2* "Signal Processing by Multiplexing and Demultiplexing in Neurons", by David C. Tam. SPEECH PROCESSING Sp1 "A Temporal Neural Network for Word Identification from Continuous Phoneme Strings", by Robert B. Allen and Candace Kamm. Sp2* "Connectionist Approaches to the use of Markov Models for Speech Recognition", by H.Bourlard and N. Morgan. Sp3 "The Temp 2 Algorithm: Adjusting Time-Delays by Supervised Learning", by Ulrich Bodenhausen. Sp4 "Spoken Letter Recognition", by Mark Fanty and Ronald A.Cole. Sp5 "Speech Recognition Using Demi-Syllable Neural Prediction Model", by Ken-ichi Iso and Takao Watanabe. Sp6 "RECNORM: Simultaneous Normalisation and Classification Applied to Speech Recognition", by John S. Bridle and Steven Cox. Sp7* "Exploratory Feature Extraction in Speech Signals", by Nathan Intrator. SP8 "Detection and Classification of Phonemes Using Context-Independent Error Back- Propagation", by Hong C. Leung, James R. Glass, Michael S. Phillips and Victor W. Zue. TEMPORAL PROCESSING TP1 "Modeling Time Varying Systems Using a Hidden Control Neural Network Architecture", by Esther Levin. TP2 "A New Neural Network Model for Temporal Processing", by Bert de Vries and Jose Principe. TP3 "ART2/BP architecture for adaptive estimation of dynamic processes", by Einar Sorheim. TP4 "Statistical Mechanics of Temporal Association in Neural Networks with Delayed Interaction", by Andreas V.M. Herz, Zahoping Li, Wulfram Gerstner and J. Leo van Hemmen. TP5 "Learning Time Varying Concepts", by Anthony Kuh and Thomas Petsche. TP6 "The Recurrent Cascade-Correlation Architecture" by Scott E. Fahlman. 
VISUAL PROCESSING VP1 "Steropsis by Neural Networks Which Learn the Constraints", by Alireza Khotanzad and Ying-Wung Lee. VP2 "A Neural Network Approach for Three-Dimensional Object Recognition", by Volker Tresp. VP3* "A Multiresolution Network Model of Motion Computation in Primates", by H. Taichi Wang, Bimal Mathur and Christof Koch. VP4 "A Second-Order Translation, Rotation and Scale Invariant Neural Network ", by Shelly D.D.Goggin, Kristina Johnson and Karl Gustafson. VP5 "Optimal Sampling of Natural Images: A Design Principle for the Visual System?", by William Bialek, Daniel Ruderman and A. Zee. VP6* "Learning to See Rotation and Dilation with a Hebb Rule", by Martin I. Sereno and Margaret E. Sereno. VP7 "Feedback Synapse to Cone and Light Adaptation", by Josef Skrzypek. VP8 "A Four Neuron Circuit Accounts for Change Sensitive Inhibition in Salamander Retina", by J.L. Teeters, F. H. Eeckman, G.W. Maguire, S.D. Eliasof and F.S. Werblin. VP9* "Qualitative structure from motion", by Daphana Weinshall. VP10 "An Analog VLSI Chip for Finding Edges from Zero-Crossings", by Wyeth Bair. VP11 "A CCD Parallel Processing Architecture and Simulation of CCD Implementation of the Neocognitron", by Michael Chuang. VP12* "A Correlation-based Motion Detection Chip", by Timothy Horiuchi, John Lazzaro, Andy Moore and Christof Koch. ------- From jagota at cs.Buffalo.EDU Fri Sep 28 17:11:27 1990 From: jagota at cs.Buffalo.EDU (Arun Jagota) Date: Fri, 28 Sep 90 17:11:27 EDT Subject: Tech Report Available Message-ID: <9009282111.AA19160@sybil.cs.Buffalo.EDU> *************** DO NOT FORWARD TO OTHER BBOARDS***************** *************** DO NOT FORWARD TO OTHER BBOARDS***************** The following technical report is available: The Hopfield-style network as a Maximal-Cliques Graph Machine Arun Jagota (jagota at cs.buffalo.edu) Department of Computer Science State University Of New York At Buffalo TR 90-25 ABSTRACT The Hopfield-style network, a variant of the popular Hopfield neural network, has earlier been shown to have fixed points (stable states) that correspond 1-1 with the maximal cliques of the underlying graph. The network sequentially transforms an initial state (set of vertices) to a final state (maximal clique) via certain greedy operations. It has also been noted that this network can be used to store simple, undirected graphs. In the following paper, we exploit these properties to view the Hopfield-style Network as a Maximal Clique Graph Machine. We show that certain problems can be reduced to finding Maximal Cliques on graphs in such a way that the network computations lead to the desired solutions. The theory of NP-Completeness shows us one such problem, SAT, that can be reduced to the Clique problem. In this paper, we show how this reduction allows us to answer certain questions about a CNF formula, via network computations on the corresponding maximal cliques. We also present a novel transformation of finite regular languages to Cliques in graphs and discuss which (language) questions can be answered and how. Our main general result is that we have expanded the problem-solving ability of the Hopfield-style network without detracting from its simplicity and while preserving its feasibility of hardware implementation. 
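A toy illustration of the dynamics described in the abstract (my own sketch, not the construction or parameters of TR 90-25): store a graph with small positive weights on edges and strongly negative weights on non-edges; greedy asynchronous threshold updates from an initial set of active vertices then settle on a maximal clique.

import itertools

def clique_dynamics(n, edges, state, w_inhibit=-5):
    # +1 weight on edges, large negative weight on non-edges (toy choice).
    w = [[0] * n for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        w[i][j] = w[j][i] = 1 if (i, j) in edges or (j, i) in edges else w_inhibit
    changed = True
    while changed:                       # asynchronous updates until no unit changes
        changed = False
        for i in range(n):
            net = sum(w[i][j] * state[j] for j in range(n))
            new = 1 if net + 1 > 0 else 0    # +1 bias: join unless an active non-neighbor inhibits
            if new != state[i]:
                state[i], changed = new, True
    return [i for i in range(n) if state[i]]

# Graph on 5 vertices whose maximal cliques are {0,1,2}, {2,3}, {3,4}.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
print(clique_dynamics(5, edges, [1, 0, 0, 0, 0]))   # settles on [0, 1, 2]
print(clique_dynamics(5, edges, [0, 0, 0, 0, 1]))   # settles on [3, 4]

Different initial vertex sets steer the greedy dynamics to different maximal cliques, which is presumably the handle the reductions described in the report (SAT, regular languages) rely on.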
------------------------------------------------------------------------ The report is available in compressed PostScript form by anonymous ftp as follows: unix> ftp cheops.cis.ohio-state.edu (or, ftp 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get jagota.tr90-25.ps.Z ftp> quit unix> uncompress jagota.tr90-25.ps.Z unix> lpr jagota.tr90-25.ps (use flag [if any] your printer needs for Postscript) ------------------------------------------------------------------------ Due to cost of surface mail, I request use of 'ftp' facility whenever convenient. The report, however, is also available by surface mail. I am also willing to transmit the LaTeX sources by e-mail. Send requests (for one or the other) by e-mail to jagota at cs.buffalo.edu. Please do not reply with 'r' or 'R' to this message. Arun Jagota jagota at cs.buffalo.edu Dept Of Computer Science 226 Bell Hall, State University Of New York At Buffalo, NY - 14260 *************** DO NOT FORWARD TO OTHER BBOARDS***************** *************** DO NOT FORWARD TO OTHER BBOARDS***************** From tsejnowski at UCSD.EDU Sat Sep 29 18:55:49 1990 From: tsejnowski at UCSD.EDU (Terry Sejnowski) Date: Sat, 29 Sep 90 15:55:49 PDT Subject: Neural Computation 2:3 Message-ID: <9009292255.AA22679@sdbio2.UCSD.EDU> NEURAL COMPUTATION Volume 2, Number 3 Review: Parallel Distributed Approaches to Combinatorial Optimization -- Benchmark Studies on the Traveling Salesman Problem Carsten Peterson Note: Faster Learning for Dynamical Recurrent Backpropagation Yan Fang and Terrence J. Sejnowski Letters: A Dynamical Neural Network Model of Sensorimotor Transformations in the Leech Shawn R. Lockery, Yan Fang, and Terrence J. Sejnowski Control of Neuronal Output by Inhibition At the Axon Initial Segment Rodney J. Douglas and Kevan A. C. Martin Feature Linking Via Synchronization Among Distributed Assemblies: Results From Cat Visual Cortex and From Simulations R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. Dicke Toward a Theory of Early Visual Processing Joseph J. Atick and A. Norman Redlich Derivation of Hebbian Equations From a Nonlinear Model Kenneth D. Miller Spontaneous Development of Modularity in Simple Cortical Models Alex Chernjavsky and John Moody The Bootstrap Widrow-Hoff Rule As a Cluster-Formation Algorithm Geoffrey E. Hinton and Steven J. Nowlan The Effects of Precision Constraints in a Back-Propagation Learning Network Paul W. Hollis, John S. Harper, and John J. Paulos Exhaustive Learning D. B. Schwartz, Sarah A. Solla, V. K. Samalam, and J. S. Denker A Method for Designing Neural Networks Using Non-Linear Multivariate Analysis: Application to Speaker-Independent Vowel Recognition Toshio Irino and Hideki Kawahara SUBSCRIPTIONS: Volume 2 ______ $35 Student ______ $50 Individual ______ $100 Institution Add $12. for postage outside USA and Canada surface mail. Add $18. for air mail. (Back issues of volume 1 are available for $25 each.) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. (617) 253-2889. ----- From John.Hampshire at SPEECH2.CS.CMU.EDU Sun Sep 30 20:28:16 1990 From: John.Hampshire at SPEECH2.CS.CMU.EDU (John.Hampshire@SPEECH2.CS.CMU.EDU) Date: Sun, 30 Sep 90 20:28:16 EDT Subject: MLP classifiers == Bayes Message-ID: EQUIVALENCE PROOFS FOR MULTI-LAYER PERCEPTRON CLASSIFIERS AND THE BAYESIAN DISCRIMINANT FUNCTION John B. Hampshire II and Barak A. 
Pearlmutter Carnegie Mellon University -------------------------------- We show the conditions necessary for an MLP classifier to yield (optimal) Bayesian classification performance. Background: ========== Back in 1973, Duda and Hart showed that a simple perceptron trained with the Mean-Squared Error (MSE) objective function would minimize the squared approximation error to the Bayesian discriminant function. If the two-class random vector (RV) being classified were linearly separable, then the MSE-trained perceptron would produce outputs that converged to the a posteriori probabilities of the RV, given an asymptotically large set of statistically independent training samples of the RV. Since then, a number of connectionists have re-stated this proof in various forms for MLP classifiers. What's new: ========== We show (in painful mathematical detail) that the proof holds not just for MSE-trained MLPs, it also holds for MLPs trained with any of two broad classes of objective functions. The number of classes associated with the input RV is arbitrary, as is the dimensionality of the RV, and the specific parameterization of the MLP. Again, we state the conditions necessary for Bayesian equivalence to hold. The first class of "reasonable error measures" yields Bayesian performance by producing MLP outputs that converge to the a posterioris of the RV. MSE and a number of information theoretic learning rules leading to the Cross Entropy objective function are familiar examples of reasonable error measures. The second class of objective functions, known as Classification Figures of Merit (CFM), yield (theoretically limited) Bayesian performance by producing MLP outputs that reflect the identity of the largest a posteriori of the input RV. How to get a copy: ================= To appear in the "Proceedings of the 1990 Connectionist Models Summer School," Touretzky, Elman, Sejnowski, and Hinton, eds., San Mateo, CA: Morgan Kaufmann, 1990. This text will be available at NIPS in late November. If you can't wait, pre-prints may be obtained from the OSU connectionist literature database using the following procedure: % ftp cheops.cis.ohio-state.edu (or, ftp 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get hampshire.bayes90.ps.Z 261245 bytes sent in 9.9 seconds (26 Kbytes/s) ftp> quit % uncompress hampshire.bayes90.ps.Z % lpr hampshire.bayes90.ps
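For readers who would like to see the claim numerically before wading into the painful mathematical detail, here is a toy sketch (my own code, not the authors'): a single sigmoid unit trained with cross-entropy -- one of the "reasonable error measures" above -- on class labels drawn from two known one-dimensional Gaussians should produce outputs close to the Bayesian a posteriori probability P(class 1 | x). The class means, priors, learning rate, and iteration count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 20000
labels = rng.random(n) < 0.5                      # equal priors
x = np.where(labels, rng.normal(+1.0, 1.0, n),    # class 1 ~ N(+1, 1)
                     rng.normal(-1.0, 1.0, n))    # class 0 ~ N(-1, 1)
t = labels.astype(float)

w, b = 0.0, 0.0                                   # single sigmoid unit
for _ in range(2000):                             # batch gradient descent
    y = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((y - t) * x)               # cross-entropy gradient
    b -= 0.5 * np.mean(y - t)

def true_posterior(z):                            # Bayes' rule, equal priors, unit variances
    p1 = np.exp(-0.5 * (z - 1.0) ** 2)
    p0 = np.exp(-0.5 * (z + 1.0) ** 2)
    return p1 / (p0 + p1)

test = np.linspace(-3.0, 3.0, 7)
net = 1.0 / (1.0 + np.exp(-(w * test + b)))
print(np.round(net, 3))                           # network outputs
print(np.round(true_posterior(test), 3))          # true posteriors: the rows should be close

Swapping the cross-entropy gradient for the MSE gradient (multiply (y - t) by y * (1 - y)) gives the Duda-and-Hart case described in the background section, with the same limiting behavior claimed there.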