From Connectionists-Request at cs.cmu.edu Thu Sep 1 00:05:14 1994 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Thu, 01 Sep 94 00:05:14 -0400 Subject: Bi-monthly Reminder Message-ID: <26184.778392314@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated July 18, 1994. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the number of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & David Redish --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help.
A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stands for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU 2. Login as user anonymous, with your username as the password. 3. 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- Using Mosaic and the World Wide Web ----------------------------------- You can also access these files using the following URL: http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives ---------------------------------------------------------------------- The NEUROPROSE Archive ---------------------- Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory and announce their availability; other interested researchers can then rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage merging existing bodies of work into the repository, or using this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. The current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix.
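The manual FTP steps above are easy to script. Here is a minimal sketch (in modern Python, assuming only the standard ftplib module and the unix "uncompress" utility on your path; the host, directory, and example filename are the ones given above, and the e-mail address is a placeholder):

from ftplib import FTP
import subprocess

HOST = "b.gp.cs.cmu.edu"
DIRECTORY = "/afs/cs/project/connect/connect-archives"
FILENAME = "arch.8802.Z"                  # the earliest month, as an example

ftp = FTP(HOST)
ftp.login("anonymous", "yourname@yoursite")    # password: your own address
ftp.cwd(DIRECTORY)                             # cd DIRECTLY into the archive directory
with open(FILENAME, "wb") as f:
    ftp.retrbinary("RETR " + FILENAME, f.write)    # a binary-mode transfer
ftp.quit()

subprocess.run(["uncompress", FILENAME], check=True)   # removes the .Z affix

The same skeleton works for the NEUROPROSE and NN-Bench collections described elsewhere in this note; only the host and directory change.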
To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples. ANNOUNCING YOUR PAPER It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to catch errors caused by dependence on your local postscript libraries. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "paper available: Solving Towers of Hanoi with ART-4". Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL and other mailing systems will also be posted as they are debugged and become available. At any time, for any reason, the author may request that their paper be updated or removed. For further questions contact: Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Phone: (614) 292-4890 APPENDIX: Here is an example of naming and placing a file: unix> compress myname.title.ps unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put myname.title.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for myname.title.ps.Z 226 Transfer complete. 100000 bytes sent in 1.414 seconds ftp> quit 221 Goodbye. unix> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry: myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read Let me know when it is in place so I can announce it to Connectionists at cmu.
^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous", with your username as the password. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "nn-bench-request at cs.cmu.edu".  From bernabe at cnm.us.es Fri Sep 2 06:44:45 1994 From: bernabe at cnm.us.es (Bernabe Linares B.) Date: Fri, 2 Sep 94 12:44:45 +0200 Subject: DISCUSSION: applications for real-time clustering chips Message-ID: <9409021044.AA08951@sparc1.cnm.us.es> This message is in hopes of starting a discussion about the usefulness of real-time clustering chips. In our institution we are developing a real-time clustering chip for possible application in speech recognition, where, as the speaker changes, a certain adaptation needs to be performed. I have identified in the literature several ways of doing this (see refs. [1]-[7] below). Our chip is able to cluster 100-binary-pixel input patterns into up to 18 different categories. By assembling an NxM array of these chips, Nx100-binary-pixel patterns can be clustered into up to Mx18 categories. Patterns are classified (and the corresponding clusters are re-adapted) in less than 1 microsecond. The discussion I would like to start is whether or not this kind of chip is useful for this or other applications. If you have experience with any application in which it would be desirable to have a real-time clustering chip, please enter the discussion. If possible, indicate the type of patterns that would need to be real-time-clustered (binary, digital, or analog), the speed at which this clustering would be desirable, and any other requirements you would have. Please describe briefly your application and provide some typical references (if possible). I am not aware of other chips of this nature that are reported in the literature. For a brief description of our chip please see ref. [8] (a copy is available in the neuroprose archive as pub/neuroprose/bernabe.art1chip.ps.Z). Dr. Bernabe Linares-Barranco National Microelectronics Center (CNM) Ed. CICA, Av. Reina Mercedes s/n 41012 Sevilla, SPAIN Phone: 34-5-4239923 FAX: 34-5-4624506 E-mail: bernabe at cnm.us.es References: [1] J. B. Hampshire and A. H. Waibel, "The Meta-Pi network: connectionist rapid adaptation for high-performance multi-speaker phoneme recognition," ICASSP'90, pp. 165-168, vol. 1, 1990. [2] Y. Gang and J. P. Haton, "Signal-to-string conversion based on high likelihood regions using embedded dynamic processing," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 13, No. 3, pp.
297-302, March 1991. [3] G. Rigoll, "Baseform adaptation for large vocabulary hidden Markov model based speech recognition systems", ICASSP'90, pp. 141-144, vol. 1. [4] S. Cox, "Speaker adaptation in speech recognition using linear regression techniques," Electronics Letters, vol. 28, No. 22, pp. 2093-2094, 22 Oct. 1992. [5] P. G. Bamberg and M. A. Mandel, "Adaptable phoneme-based models for large-vocabulary speech recognition," Speech Communication, vol. 10, No. 5-6, pp. 437-451, Dec. 1991. [6] X. Huang and K. F. Lee, "On speaker-independent, speaker-dependent, and speaker-adaptive speech recognition," IEEE Trans. on Speech and Audio Processing, vol. 1, No. 2, pp. 150-157, April 1993. [7] M. Witbrock and P. Haffner, "Rapid connectionist speaker adaptation," ICASSP'92, pp. 453-456, vol. 1. [8] T. Serrano, B. Linares-Barranco, and J. L. Huertas, "A CMOS VLSI Analog Current-Mode High-Speed ART1 Chip," Proc. of the 1994 IEEE Int. Conference on Neural Networks, Orlando, Florida, July 1994, pp. 1912-1916, vol. 3.  From ersoy at ecn.purdue.edu Fri Sep 2 13:51:55 1994 From: ersoy at ecn.purdue.edu (Okan K Ersoy) Date: Fri, 2 Sep 1994 12:51:55 -0500 Subject: Call for papers: European Conference on Circuit Theory and Design Message-ID: <199409021751.MAA08841@dynamo.ecn.purdue.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 6630 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/98e78b94/attachment.ksh From henders at linc.cis.upenn.edu Sat Sep 3 11:03:53 1994 From: henders at linc.cis.upenn.edu (Jamie Henderson) Date: Sat, 3 Sep 1994 11:03:53 -0400 Subject: dissertation available on connectionist NLP/temporal synchrony Message-ID: <199409031503.LAA04825@linc.cis.upenn.edu> FTP-host: linc.cis.upenn.edu FTP-filename: pub/henderson/thesis.ps.Z FTP-filename: pub/henderson/chapter1.ps.Z FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/Thesis/henderson.thesis.ps.Z The following dissertation on the feasibility and implications of using temporal synchrony variable binding to do syntactic parsing is available by anonymous ftp or as a technical report. The complete dissertation (196 pages) can be ftp'ed from archive.cis.ohio-state.edu in pub/neuroprose/Thesis/ as henderson.thesis.ps.Z, or from linc.cis.upenn.edu in pub/henderson/ as thesis.ps.Z. The first chapter of this document is a summary (42 pages), and it can be ftp'ed from linc.cis.upenn.edu in pub/henderson/ as chapter1.ps.Z. The complete dissertation is also available as a technical report (IRCS Report #94-12) by contacting Jodi Kerper (jbkerper at central.cis.upenn.edu, (215) 898-0362). Comments welcome! - Jamie Henderson University of Pennsylvania -------- Description Based Parsing in a Connectionist Network James Henderson Mitchell Marcus (Supervisor) Recent developments in connectionist architectures for symbolic computation have made it possible to investigate parsing in a connectionist network while still taking advantage of the large body of work on parsing in symbolic frameworks. This dissertation investigates syntactic parsing in the temporal synchrony variable binding model of symbolic computation in a connectionist network. This computational architecture solves the basic problem with previous connectionist architectures, while keeping their advantages. However, the architecture does have some limitations, which impose computational constraints on parsing in this architecture.
This dissertation argues that, despite these constraints, the architecture is computationally adequate for syntactic parsing, and that these constraints make significant linguistic predictions. To make these arguments, the nature of the architecture's limitations is first characterized as a set of constraints on symbolic computation. This allows the feasibility and implications of parsing in the architecture to be investigated at the same level of abstraction as virtually all other investigations of syntactic parsing. Then a specific parsing model is developed and implemented in the architecture. The extensive use of partial descriptions of phrase structure trees is crucial to the ability of this model to recover the syntactic structure of sentences within the constraints. Finally, this parsing model is tested on those phenomena which are of particular concern given the constraints, and on an approximately unbiased sample of sentences to check for unforeseen difficulties. The results show that this connectionist architecture is powerful enough for syntactic parsing. They also show that some linguistic phenomena are predicted by the limitations of this architecture. In particular, explanations are given for many cases of unacceptable center embedding, and for several significant constraints on long distance dependencies. These results give evidence for the cognitive significance of this computational architecture and parsing model. This work also shows how the advantages of both connectionist and symbolic techniques can be unified in natural language processing applications. By analyzing how low level biological and computational considerations influence higher level processing, this work has furthered our understanding of the nature of language and how it can be efficiently and effectively processed.  From crg at ida.his.se Mon Sep 5 06:22:51 1994 From: crg at ida.his.se (Connectionist) Date: Mon, 5 Sep 94 12:22:51 +0200 Subject: SCC-95 call for papers: Deadline Oct 1st! Message-ID: <9409051022.AA13464@mhost.ida.his.se> ------------------------------------------------------------- DEADLINE EXTENDED TO OCTOBER 1ST, 1994! ------------------------------------------------------------- THE SECOND SWEDISH CONFERENCE ON CONNECTIONISM The Connectionist Research Group University of Skovde, SWEDEN March 2-4, 1995 in Skovde, Sweden CALL FOR PAPERS SPEAKERS Michael Mozer University of Colorado, USA Ronan Reilly University College Dublin, Ireland Paul Smolensky University of Colorado, USA David Touretzky Carnegie Mellon University, USA This list is still being completed. PROGRAM COMMITTEE Jim Bower California Inst. of Technology, USA Harald Brandt Ellemtel, Sweden Ron Chrisley University of Sussex, UK Gary Cottrell University of California, San Diego, USA Georg Dorffner University of Vienna, Austria Tim van Gelder Australian National University, Australia Agneta Gulz University of Skovde, Sweden Olle Gallmo Uppsala University, Sweden Tommy Garling Goteborg University, Sweden Dan Hammerstrom Adaptive Solutions Inc., USA Jim Hendler University of Maryland, USA Erland Hjelmquist Goteborg University, Sweden Anders Lansner Royal Inst. of Techn., Stockholm, Sweden Reiner Lenz Linkoping University, Sweden Ajit Narayanan University of Exeter, UK Jordan Pollack Ohio State University, USA Noel Sharkey University of Sheffield, UK Bertil Svensson Chalmers Inst.
of Technology, Sweden Tere Vaden University of Tampere, Finland SCOPE OF THE CONFERENCE Understanding neural information processing properties characterizes the field of connectionism, also known as Artificial Neural Networks (ANN). The rapid growth, expansion and great popularity of connectionism are motivated by the new way of approaching and understanding the problems of artificial intelligence, and by its applicability in many real-world applications. There are a number of subfields of connectionism, among which we distinguish the following. The importance of a "Theory of connectionism" cannot be overstressed. The interest in theoretical analysis of neuronal models, and in the complex dynamics of network architectures, grows rapidly. It is often argued that abstract neural network models are best understood by analysing their computational properties with respect to their biological counterparts. A clear theoretical approach to developing neural models also provides insight into the dynamics, learning, functionality and probabilities of different connectionist networks. "Cognitive connectionism" bridges the gap between the theory of connectionism and cognitive science by modelling higher order brain functions from psychology using methods offered by connectionist models. The findings of this field are often evaluated by their neuropsychological validity and not by their functional applicability. Sometimes the field of connectionism is referred to as the "new AI". Its applicability in AI has spawned a belief that AI will benefit from a good understanding of neural information processing capabilities. The subfield "Connectionism and artificial intelligence" is also concerned with the distinction between connectionist and symbolic representations. The wide applicability and problem-solving abilities of neural networks are demonstrated in "Real-world computing". Robotics, vision, speech and neural hardware are some of the topics in this field. "The philosophy of connectionism" is concerned with such diverse questions as the mind-body problem and the relations between distributed representations, their semantics and their implications for intelligent behaviour. Experimental studies in "Neurobiology" have implications for the validity and design of new, artificial neural architectures. This branch of connectionism addresses topics such as self-organisation, modelling of cortex, and associative memory models. A number of internationally renowned keynote speakers will be invited to give plenary talks on the subjects listed above. GUIDELINES FOR PAPER SUBMISSIONS Instructions for submissions of manuscripts: Papers may be submitted, in three (3) copies, to one of the following sessions. ~ Theory of connectionism ~ Cognitive connectionism ~ Connectionism and artificial intelligence ~ Real-world computing ~ The philosophy of connectionism ~ Neurobiology A note should state the principal author and an email address (if any). It should also indicate which session the paper is submitted to. Length: Papers must be a maximum of ten (10) pages long (including figures and references); the text area should be 6.5 inches by 9 inches (including footnotes but excluding page numbers); and the text should be in a 12-point font. Template and style files conforming to these specifications, for several text formatting programs, will be available to authors of accepted papers. Deadline: Papers must be received by Saturday, October 1st, 1994 to ensure reviewing.
All submitted papers will be reviewed by members of the program committee on the basis of technical quality, research significance, novelty and clarity. The principal author will be notified of acceptance no later than Tuesday, November 1st, 1994. Proceedings: All accepted papers will appear in the conference proceedings. CONFERENCE CHAIRS Lars Niklasson, Mikael Boden lars.niklasson at ida.his.se mikael.boden at ida.his.se PLEASE ADDRESS ALL CORRESPONDENCE TO: "SCC-95" The Connectionist Research Group University of Skovde P.O. Box 408 541 28 Skovde, SWEDEN E-mail: crg at ida.his.se Tel. +46 (0)500 464600 Fax. +46 (0)500 464725  From adriaan at phil.ruu.nl Tue Sep 6 09:25:27 1994 From: adriaan at phil.ruu.nl (Adriaan Tijsseling) Date: Tue, 6 Sep 1994 15:25:27 +0200 Subject: mailing list for connectionist cognitive psychology Message-ID: <199409061325.PAA09157@laurel.stud.phil.ruu.nl> A new mailing list has been started. The main aim of this mailing list is to provide information about connectionist research in cognitive psychology. The mailing list is primarily intended for discussion of issues relating to cognitive psychology and connectionism and for the dissemination of information directly relevant to researchers in the fields of cognitive science. Examples of such information are announcements of new techreports, dissertations, theses, conferences, or seminars, and information about articles, research, or bibliographic issues. Requests to have a name added to the list, and similar administrative matters, should be sent to the following address: cogpsy-request at phil.ruu.nl Contributions can be sent to: cogpsy at phil.ruu.nl -- Adriaan Tijsseling, Department of Cognitive Artificial Intelligence and Department of Psychonomy, Utrecht University, The Netherlands.  From shultz at hebb.psych.mcgill.ca Tue Sep 6 09:46:10 1994 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Tue, 6 Sep 94 09:46:10 EDT Subject: No subject Message-ID: <9409061346.AA27883@hebb.psych.mcgill.ca> Subject: Paper available: A connectionist model of the development of velocity, time, and distance concepts Date: 6 Sept '94 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/buckingham.velocity.ps.Z ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: A connectionist model of the development of velocity, time, and distance concepts (6 pages) David Buckingham & Thomas R. Shultz Department of Psychology McGill University 1205 Penfield Avenue Montreal, Quebec, Canada H3A 1B1 Abstract Connectionist simulations of children's acquisition of velocity (v), time (t), and distance (d) concepts were conducted using a generative algorithm, cascade-correlation (Fahlman & Lebiere, 1990). Diagnoses of network rules were consistent with the developmental course of children's concepts (Wilkening, 1981, 1982) and predicted some new stages as well. Networks integrated the defining dimensions of the concepts first by identity rules (e.g., v = d), then additive rules (e.g., v = d-t), and finally multiplicative rules (e.g., v = d/t). Psychological effects of differential memory demands were also simulated. It is argued that cascade-correlation implements an explicit mechanism of developmental change involving incremental learning and qualitative increases in representational power. The paper has been published in the 1994 Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society (pp. 72-77). Hillsdale, NJ: Lawrence Erlbaum.
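To make the rule diagnosis concrete, here is a small illustrative sketch (in Python; the "network" function below is a hypothetical stand-in for a trained cascade-correlation net, not the authors' model). Candidate rules are scored by how well they correlate with the model's velocity judgments over a factorial design of distances and times:

import math

distances = [1.0, 2.0, 3.0]
times = [1.0, 2.0, 3.0]
stimuli = [(d, t) for d in distances for t in times]    # factorial design

rules = {
    "identity (v = d)":         lambda d, t: d,
    "additive (v = d - t)":     lambda d, t: d - t,
    "multiplicative (v = d/t)": lambda d, t: d / t,
}

network = lambda d, t: d - t    # stand-in for a trained net's velocity judgments

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

outputs = [network(d, t) for d, t in stimuli]
for name, rule in rules.items():
    preds = [rule(d, t) for d, t in stimuli]
    print("%-27s r = %.3f" % (name, pearson(outputs, preds)))

The best-correlating rule is taken as the stage the network is in; the factorial design is what separates, for example, v = d from v = d - t, since both agree whenever t is held constant.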
Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get buckingham.velocity.ps.Z ftp> quit unix> uncompress buckingham.velocity.ps.Z Thanks to Jordan Pollack for maintaining this archive. Tom Shultz  From paolo at mcculloch.ing.unifi.it Tue Sep 6 11:23:12 1994 From: paolo at mcculloch.ing.unifi.it (Paolo Frasconi) Date: Tue, 6 Sep 94 17:23:12 +0200 Subject: Announce: WWW Neural Networks site Message-ID: <9409061523.AA28857@mcculloch.ing.unifi.it> The following page has been established at Dipartimento di Sistemi e Informatica (University of Florence, Italy): URL: http://www-dsi.ing.unifi.it/neural/home.html The page contains a description of research currently undertaken by our group and links to a collection of our papers that can be retrieved as postscript files. Links to other similar services around the world are also provided. --- Paolo Frasconi Universita' di Firenze Dipartimento di Sistemi tel: +39 (55) 479-6361 e Informatica fax: +39 (55) 479-6363 Via di Santa Marta 3 50139 Firenze (Italy) http://www-dsi.ing.unifi.it/~paolo/  From shawn_mikiten at biad23.uthscsa.edu Wed Sep 7 15:32:23 1994 From: shawn_mikiten at biad23.uthscsa.edu (shawn mikiten) Date: 7 Sep 94 15:32:23 U Subject: Brain Mapping Conference po Message-ID: Brain Mapping Conference post The upcoming Brain Mapping Database Conference on December 4 & 5 will be in San Antonio, TX. Anyone involved in, or interested in, developing databases in brain mapping and/or behavior is welcome to apply. If you have access to the WWW, the URL is: http://biad38.uthscsa.edu/brainmap/brainmap94.html ============================================== BrainMap '94 Conference Database Development in Brain and Behavior Session Topics - Community Databases - Anatomical Spaces - Local and Emerging Databases - Database Federation Featured Databases - BrainMap - Genesis - Brain Browser - NeuroDatabase - CHILDES - SP Map Speakers - Floyd Bloom The Scripps Research Institute La Jolla, CA - Fred Bookstein University of Michigan Ann Arbor, MI - James Bower Caltech Pasadena, CA - George Carman Salk Institute, Vision Center Lab San Diego, CA - Verne Caviness Harvard University, Massachusetts General Hospital Boston, MA - Anders Dale University of California at San Diego La Jolla, CA - Hunter Downs University of Texas Health Science Center at San Antonio Research Imaging Center San Antonio, TX - Heather Drury Washington University School of Medicine St.
Louis, MO - Alan Evans Montreal Neurological Institute Montreal Quebec, Canada - Chris Fields Institute for Genomic Research Gaithersburg, MD - Peter Fox University of Texas Health Science Center at San Antonio Research Imaging Center San Antonio, TX - Jack Lancaster University of Texas Health Science Center at San Antonio Research Imaging Center San Antonio, TX - Brian MacWhinney Carnegie Mellon University Pittsburgh, PA - John Mazziotta University of California at Los Angeles Los Angeles, CA - Mark Rapport Medical College of Ohio Toledo, OH - Robert Robbins National Institute of Medicine Maryland, VA - Per Roland Karolinska Institute Stockholm, Sweden - James Schwaber Neurobiology Lab Wilmington, DE - Martin Sereno University of California at San Diego La Jolla, CA - David Van Essen Washington University School of Medicine St. Louis, MO - Steven Wertheim New England Regional Primate Resource Center Southborough, MA - Roger Wood University of California at Los Angeles Los Angeles, CA _____________________________________ Attendance will be limited. Scientists developing databases in brain and/or behavior will receive preference. Applications must be received by September 15, 1994. Fill out the following application form to apply. An Advisory Board was recruited to provide external critique of the BrainMap concept, its implementation, and overall organization of the project. To apply, contact Sally Faulk - faulk at uthscsa.edu BrainMap '94 Research Imaging Center The University of Texas Health Science Center 7703 Floyd Curl Drive San Antonio, TX 78284-6240 (210) 567-8070 (voice) (210) 567-8074 (fax) ===================================== BrainMap '94 Application Form Database Development in Brain and Behavior ________________________________________________ December 4 and 5, 1994, San Antonio, Texas Co-Organizers: Peter T. Fox, M.D. and Jack L. Lancaster, Ph.D. To: Sally Faulk, BrainMap '94 Research Imaging Center-UTHSCSA 7703 Floyd Curl Drive San Antonio, TX 78284-6240 210-567-8070 (voice) 210-567-8074 (fax) faulk at uthscsa.edu (e-mail) Applications must be received by September 15, 1994 Name/Title:_________________________________________________________ Affiliation:________________________________________________________ Address:____________________________________________________________ ____________________________________________________________________ ____________________________________________________________________ FAX:_______________________ Phone:_______________________ E-Mail:_____________________________________________________________ Do you do functional brain mapping? yes________ no________ Modalities: PET________ MRI________ ERP________ MEG________ OTHER________ Have you used a database of brain anatomy or function? yes________ no________ Describe:___________________________________________________________ ____________________________________________________________________ Are you on the Internet?____________________________________________ What computer do you use?___________________________________________ What operating system?______________________________________________ Do you use TCP/IP?
yes________ no________ Comments:___________________________________________________________ ____________________________________________________________________ ____________________________________________________________________ ============================================== BrainMap '94 Database Development in Brain and Behavior December 4 & 5, 1994 The Plaza San Antonio Hotel ______________________________________ This is a preliminary draft of the agenda and may change without notice. December 4, 1994, Sunday - a.m. 7:30 Continental Breakfast Foyer Hidalgo Ballroom Session I Hidalgo Ballroom Community Databases 8:00 Welcoming Comments Dean James Young University of Texas Health Science Center at San Antonio 8:10 General Introduction Peter Fox University of Texas Health Science Center at San Antonio 8:20 BrainMap Peter Fox University of Texas Health Science Center at San Antonio 9:10 BrainMap - A User's Perspective TBA 9:20 Genesis James Bower Caltech 10:10 Genesis - A User's Perspective TBA 10:20 Break 10:35 Development of the CHILDES Database Brian MacWhinney Carnegie Mellon University 11:25 The CHILDES Database - A User's Perspective TBA 11:35 Discussion 12:00 Lunch Session II Hidalgo Ballroom Local and Emerging Databases 1:30 Brain Browser Floyd Bloom The Scripps Research Institute 2:00 NeuroDatabase Steven Wertheim New England Regional Primate Resource Center 2:30 Rat Atlas James Schwaber Computational Neurobiology Lab 3:00 Structural Probability Database John Mazziotta University of California at Los Angeles 3:30 Break 3:45 Connectivity Database David Van Essen Washington University School of Medicine 4:15 Atlas Derived Database Per Roland Karolinska Institute 4:45 Discussion General Reception 6:30 - 8:00 PM December 5, 1994, Monday - a.m. 7:30 Continental Breakfast Session III Hidalgo Ballroom Theoretical Underpinnings: Anatomical Spaces 8:00 History of Talairach Atlases Mark Rapport Medical College of Ohio 8:30 Twelve-Parameter Affine Spatial Normalization Roger Wood University of California at Los Angeles 8:50 Convex Hull Spatial Normalization Hunter Downs University of Texas Health Science Center at San Antonio 9:10 Thin-Plate Spline Spatial Normalization Fred Bookstein University of Michigan 9:30 Spherical Projections Alan Evans Montreal Neurological Institute 9:50 Cortical Surface Relaxation Martin Sereno & Anders Dale University of California at San Diego 10:10 Cortical Surface Flattening Heather Drury Washington University School of Medicine 10:30 Break 10:45 Cortical Surface Metric Unfolding George Carman Salk Institute, Vision Center Lab 11:25 Sulcal Functional Correspondence Verne Caviness Harvard University, Massachusetts General Hospital 11:45 Discussion 12:15 Lunch Session IV Hidalgo Ballroom Database Integration 1:45 Introductory Statement: A Database Federation Peter Fox & Jack Lancaster University of Texas Health Science Center at San Antonio 2:00 The Genome Experience: Is Retroactive Integration Possible? Robert Robbins National Institute of Medicine 2:30 Models for Proactive Integration Chris Fields Institute for Genomic Research 3:30 BrainMap: A Potential 'Hub' for a Federation of Neuroscience Databases? Jack Lancaster University of Texas Health Science Center at San Antonio 5:00 Discussion 6:00 End of Workshop III =============================================== BrainMap '94 Advisory Board ______________________________ Peter T. Fox, M.D. Research Imaging Center UTHSCSA Jack L. Lancaster, Ph.D. Research Imaging Center UTHSCSA Christian Burks, Ph.D.
Los Alamos National Laboratory George Carman, Ph.D. The Salk Institute Alan Evans, Ph.D. Montreal Neurological Institute Richard Frackowiak, M.D. Hammersmith Hospital, London Karl Friston, M.D. Hammersmith Hospital, London Patricia Goldman-Rakic, Ph.D. Yale University Balazs Gulyas, M.D., Ph.D. The Karolinska Institute Paul C. Lauterbur, Ph.D. University of Illinois Brian MacWhinney, Ph.D. Carnegie Mellon University John Mazziotta, M.D., Ph.D. University of California M-Marsel Mesulam, M.D. Northwestern University Medical Center Lawrence Parsons, Ph.D. The University of Texas Austin Steven E. Petersen, Ph.D. Washington University Michael Posner, Ph.D. The University of Oregon Marcus E. Raichle, M.D. Washington University Robert J. Robbins, M.D. Johns Hopkins University Per Roland, M.D. The Karolinska Institute Bruce Rosen, M.D. Harvard University Arthur Toga, Ph.D. University of California Leslie G. Ungerleider, Ph.D. National Institute of Mental Health David Van Essen, Ph.D. Washington University C.C. Wood, Ph.D. Los Alamos National Laboratory  From mav at psych.psy.uq.oz.au Wed Sep 7 21:51:44 1994 From: mav at psych.psy.uq.oz.au (Simon Dennis) Date: Thu, 8 Sep 1994 11:51:44 +1000 (EST) Subject: Australasian Cognitive Science Conference Preliminary Announcement Message-ID: - Preliminary Announcement - 3rd Conference of the Australasian Cognitive Science Society at The University of Queensland Brisbane, Australia April 18-20, 1995 (Preceding the 22nd Australian Experimental Psychology Conference) Abstracts Due: December 2, 1994 This meeting of the Australasian Cognitive Science Society follows the second conference which was hosted by the University of Melbourne in 1993. The 1995 Conference precedes the Experimental Psychology Conference which will also be held at the University of Queensland (April 21-23, 1995). VENUE The venue for the Conference is Emmanuel College which is located on the campus of the University of Queensland. The campus, only 4 kms from the centre of Brisbane, is spacious and leafy with exotic and colourful subtropical vegetation, and is surrounded on three sides by a sweeping bend of the Brisbane River. Brisbane, with a population of 1.4 million, is Australia's third largest city, and is the capital of the state of Queensland - the "Sunshine State". Although it is Australia's fastest growing city, it retains and cherishes an enviable lifestyle influenced by its sunny, subtropical climate. It is a scenic and cosmopolitan city of palm studded parks, colourful gardens, shady verandahs, riverside walks and cafes, and al fresco dining. The Brisbane River snakes lazily through the city from the forest clad foothills of the Great Dividing Range which frame the city to the west, to the Pacific Ocean which frames it to the east. Within 90 minutes' drive of Brisbane are rainforested mountains, pristine Pacific beaches, tranquil sand islands, and buzzing coastal resorts. Brisbane is the gateway to Queensland which, one fifth the size of the USA, encompasses the Simpson Desert in the west, the tropical rainforests of the north, and of course the Great Barrier Reef along its beautiful Pacific coast. ACCOMMODATION Accommodation has been booked at King's College, University of Queensland, which is adjacent to the conference venue. Rooms will be allocated on a first-registered, first-served basis. Approximate cost per person per night is $40. Hotel accommodation will also be arranged.
SCIENTIFIC PROGRAMME The aim of the Conference is to promote the interests of the multidisciplinary field of Cognitive Science. The participation of scholars from all areas of Cognitive Science is invited, including: - Computer Science - Linguistics - Neuroscience - Philosophy - Psychology Additionally, the Conference aims to promote applications of Cognitive Science and encourage participation from researchers in the Asia-Pacific region. The Scientific Programme will include oral and poster presentations, together with symposia. SYMPOSIA Suggestions for symposia are invited and should be forwarded to the Conference Chair with an abstract and list of speakers. EXHIBITS The Conference will include displays of recent publications and applied projects, and a symposium or workshop on applications. SOCIAL PROGRAMME The proposed Social Programme includes opening and closing receptions, poster-session refreshments, and a Conference dinner. SPONSORSHIP The University of Queensland has agreed to contribute toward Conference costs. Additional sponsors are being sought to expand Conference activities, encourage attendance of grant holders, fund awards, and enable invitation of a keynote speaker. REGISTRATION There will be a reduced registration fee for those who wish to attend both CogSci'95 and the Experimental Psychology Conference (EPC'95). Joint registration forms will be distributed with the Call for Papers in late September. A discount will apply for registration fees received by February 28, 1995. SUBMISSION OF ABSTRACTS/PAPERS All abstracts/papers will be refereed. It is intended that selected papers will be published. The submission of full papers will be required if they are to be considered for publication, and will be reviewed. If publication is not desired, abstracts only are required and will not be reviewed externally. The deadline for submission of papers or abstracts is Friday December 2, 1994. Notification of acceptance for the conference as poster, oral presentation or symposium paper will be forwarded February 15, 1995. The deadline for submission of revised papers for publication is May 15, 1995. TIMETABLE & CLOSING DATES: Call for Papers Sept. '94 Submission of papers/abstracts 2/12/94 Notification of acceptance for Conference/book 15/2/95 Registration with discount 28/2/95 Submission of revised paper for publication 15/5/95 CogSci'95 ORGANISING COMMITTEE CONFERENCE CHAIR: Graeme Halford, Psychology, UQ SECRETARIAT: Kerry Chalmers (Sec), Psychology, UQ Glenda Andrews (Treas), Psychology, UQ Rebecca Farley (Admin), Psychology, UQ PROGRAMME & AWARDS: Doug Saddy, Psychology & English, UQ Janet Wiles, Psychology & Comp. Sci, UQ Simon Dennis, Psychology, UQ Terry Dartnall, Computing & IT, Griffith Marilyn Ford, Computing & IT, Griffith Ottmar Lipp, Psychology, UQ PUBLICATIONS & EDITING: Ellen Watson, Philosophy, UQ Terry Dartnall, Computing & IT, Griffith Peter Slezak, Philosophy, UNSW Doug Saddy, Psychology & English, UQ NOTICES & MEMBERSHIP: Kate Stevens, Psychology, UQ Rebecca Farley, Psychology, UQ SOCIAL PROGRAMME: Helen Purchase, Computer Science, UQ VENUE & ACCOMMODATION: Len Dalgleish, Psychology, UQ Helen Purchase, Computer Science, UQ SPONSORSHIP: Joachim Diederich, Computer Science, QUT INCORPORATION: Alan Hayes, Education, UQ Helen Purchase, Computer Science, UQ EXPRESSION OF INTEREST: Please send me the Call for Papers and Registration form for CogSci'95.
Name: ___________________________________ Address: _________________________________ _________________________________ _________________________________ Phone: _________________________________ E-mail: _________________________________ E-mail or detach slip and post to:- Rebecca Farley CogSci'95 - Department of Psychology University of Queensland Brisbane QLD Australia 4072 Phone: (+617) 365 6230 Fax: (+617) 365 4466 E-mail: cogsci95 at psy.uq.oz.au *****************************  From smagt at fwi.uva.nl Thu Sep 8 06:04:46 1994 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Thu, 8 Sep 1994 12:04:46 +0200 (MET DST) Subject: neurobotics WWW at U of Amsterdam Message-ID: <199409081004.AA06105@carol.fwi.uva.nl> The department of Autonomous Systems at the University of Amsterdam has been working on a WWW page describing our work in neural networks and robotics. Though we may change its look, its contents are well enough established to announce it. You can find it at http://carol.fwi.uva.nl/~smagt/neuro Please send reactions and comments to smagt at fwi.uva.nl, or krose at fwi.uva.nl. Patrick van der Smagt Department of Computer Systems University of Amsterdam  From moody at chianti.cse.ogi.edu Thu Sep 8 14:01:28 1994 From: moody at chianti.cse.ogi.edu (John Moody) Date: Thu, 8 Sep 94 11:01:28 -0700 Subject: Neural Networks in the Capital Markets Workshop Message-ID: <9409081801.AA14003@chianti.cse.ogi.edu> ******************************************************************* --- Registration Package and Preliminary Program --- NNCM-94 Second International Workshop NEURAL NETWORKS in the CAPITAL MARKETS Thursday-Friday, November 17-18, 1994 with tutorials on Wednesday, November 16, 1994 The Ritz-Carlton Hotel, Pasadena, California, U.S.A. Sponsored by Caltech and London Business School Neural networks have now been applied to a number of live systems in the capital markets, and in many cases have demonstrated better performance than competing approaches. Because of the overwhelming interest in the first NNCM workshop held in London in November 1993, the second annual NNCM workshop will be held November 17-18, 1994, in Pasadena, California. This is a research meeting where original, high-quality contributions to the field are presented and discussed. In addition, a day of introductory tutorials (Wednesday, November 16) will be included to familiarize audiences of different backgrounds with the financial aspects, and the mathematical aspects, of the field. --Invited Speakers: The workshop will feature invited talks by four internationally recognized researchers: Dr. Andrew Lo, MIT Sloan School Dr. Paul Refenes, London Business School Dr. Robert Shiller, Yale University Dr. Hal White, UC San Diego --Contributed Papers: NNCM-94 will have 4 oral sessions and 2 poster sessions with more than 40 contributed papers presented by academicians and practitioners from both the neural networks side and the capital markets side. Each paper has been refereed by 4 experts in the field. The areas of the accepted papers include: Stock and bond valuation and trading, asset allocation and portfolio management, real trading using neural networks, foreign exchange rate prediction, option pricing, univariate time series analysis, neural network methodology, statistical analysis and hints, theory of forecasting, and neural network modeling. --Tutorials: Before the main program, there will be a day of tutorials on Wednesday, November 16, 1994. 
The morning session will focus on the financial side and the afternoon session will focus on the mathematical side. -Morning Session- Dynamics of Trading and Market Microstructure Dr. Larry Harris, University of Southern California Empirical Research on Market Inefficiencies Dr. Blake LeBaron, University of Wisconsin -Afternoon Session- Neural Networks, Time Series, and Finance Dr. John Moody, Oregon Graduate Institute Statistical Inference for Neural Networks Dr. Brian Ripley, Oxford University We are very pleased to have tutors of such caliber help bring new audiences from different backgrounds up to speed in this hybrid area. --Schedule Outline: Wednesday, November 16: 8:00-12:15 Tutorials I & II 1:30-5:45 Tutorials III & IV Thursday, November 17: 8:30-11:30 Oral Session I 11:30-2:00 Luncheon & Poster Session I 2:00-5:00 Oral Session II Friday, November 18: 8:30-11:30 Oral Session III 11:30-2:00 Luncheon & Poster Session II 2:00-5:00 Oral Session IV --Location: The workshop will be held at the Ritz-Carlton Huntington Hotel in Pasadena, within two miles from the Caltech campus. One of the most beautiful hotels in the U.S., the Ritz is a 35-minute drive from Los Angeles International Airport (LAX) with nonstop flights from most major cities in North America, Europe, the Far East, Australia, and South America. Home of Caltech, Pasadena has recently become a major dining/hangout center for Southern California with the growth of its `Old Town', built along the styles of the 1950's. Among the cultural attractions of Pasadena are the Norton Simon Museum, the Huntington Library/ Gallery/Gardens, and a number of theaters including the Ambassador Theater. --Organizing Committee: Dr. Y. Abu-Mostafa, California Institute of Technology Dr. A. Atiya, Cairo University Dr. N. Biggs, London School of Economics Dr. D. Bunn, London Business School Dr. B. LeBaron, University of Wisconsin Dr. A. Lo, MIT Sloan School Dr. J. Moody, Oregon Graduate Institute Dr. A. Refenes, London Business School Dr. M. Steiner, Universitaet Munster Dr. A. Timmermann, Birkbeck College, London Dr. A. Weigend, University of Colorado Dr. H. White, University of California, San Diego --Registration and Hotel Reservation: Registration is done by mail on a first-come, first-served basis (last year we had to return the checks to more than 50 people for lack of space). To ensure your place at the workshop, please send the enclosed registration form and payment as soon as possible to Ms. Lucinda Acosta, Caltech 116-81, Pasadena, CA 91125, U.S.A. Please make checks payable to Caltech. Hotel reservations are made by contacting the Ritz-Carlton Hotel directly. Their phone number is (818) 568-3900 and fax number is (818) 792-6613. Please mention that you are with NNCM-94 in order to get the (very) special rate that we negotiated. The rate is $79+taxes ($99 with $20 credited by NNCM-94 upon registration) per room (single or double occupancy) per night. Please make the hotel reservation IMMEDIATELY as the rate is based on availability. --Secretariat: For further information, please contact the NNCM-94 secretariat Ms. Lucinda Acosta, Caltech 116-81, Pasadena, CA 91125, U.S.A. 
e-mail: lucinda at sunoptics.caltech.edu , phone (818) 395-4843, fax (818) 568-8437 ******************************************************************* -- NNCM-94 Registration Form -- Title:--------- Name:------------------------------------ Mailing Address:------------------------------------------- ----------------------------------------------------------- e-mail:---------------------------- fax:------------------- ********Please circle the applicable fees and write the total******** --Main Conference (November 17-18): Registration fee $500 Discounted fee for academicians $250 (letter on university letterhead required) Discounted fee for full-time students $125 (letter from registrar or faculty advisor required) --Tutorials (November 16): You must be registered for the main conference in order to register for the tutorials. Morning Session Only $100 Afternoon Session Only $100 Both Sessions $150 Full-time students $50 (letter from registrar or faculty advisor required) TOTAL: $-------- Please include payment (check or money order in US currency). PLEASE MAKE CHECK PAYABLE TO CALTECH. --Hotel Reservation: Please contact the Ritz-Carlton Huntington Hotel in Pasadena directly. The phone number is (818) 568-3900 and the fax number is (818) 792-6613. Ask for the NNCM-94 rate. We have negotiated an (incredible) rate of $79+taxes ($99 with $20 credited by NNCM-94 upon registration) per room (single or double occupancy) per night, based on availability. ********Please mail your completed registration form and payment to******** Ms. Lucinda Acosta, Caltech 116-81, Pasadena, CA 91125, U.S.A.  From duch at phys.uni.torun.pl Fri Sep 9 03:50:26 1994 From: duch at phys.uni.torun.pl (Wlodzislaw Duch) Date: Fri, 9 Sep 1994 09:50:26 +0200 (MET DST) Subject: Neuroprose paper announcement Message-ID: <9409090750.AA14398@class1.phys.uni.torun.pl> A non-text attachment was scrubbed... Name: not available Type: text Size: 1866 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/cbfc0bd5/attachment.ksh From duch at phys.uni.torun.pl Fri Sep 9 03:51:34 1994 From: duch at phys.uni.torun.pl (Wlodzislaw Duch) Date: Fri, 9 Sep 1994 09:51:34 +0200 (MET DST) Subject: paper announcement: Solution to fundamental problems of cognitive science Message-ID: <9409090751.AA14409@class1.phys.uni.torun.pl> A non-text attachment was scrubbed... Name: not available Type: text Size: 926 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/086d35ef/attachment.ksh From brunak at cbs.dth.dk Fri Sep 9 13:43:19 1994 From: brunak at cbs.dth.dk (Soren Brunak) Date: Fri, 9 Sep 94 13:43:19 METDST Subject: Neural network model of the genetic code Message-ID: Neural network model of the genetic code is strongly correlated to the GES scale of amino acid transfer free energies N. Tolstrup, J. Toftgaard, J. Engelbrecht and S. Brunak Centre for Biological Sequence Analysis Department of Physical Chemistry The Technical University of Denmark DK-2800 Lyngby, Denmark Journal of Molecular Biology, to appear. Abstract A neural network trained to classify the 61 nucleotide triplets of the genetic code into twenty amino acid categories develops in its internal representation a pattern matching the relative cost of transferring amino acids with satisfied backbone hydrogen bonds from water to an environment of dielectric constant of roughly 2.0. 
Such environments are typically found in lipid membranes or in the interior of proteins. In learning the mapping between the codons and the categories, the network groups the amino acids according to the scale of transfer free energies developed by Engelman, Goldman and Steitz. Several other scales based on internal preference statistics also agree reasonably well with the network grouping. The network is able to relate the structure of the genetic code to quantifications of amino acid hydrophobicity-philicity more systematically than the numerous attempts made earlier. Due to its inherent non-linearity, the code is also shown to impose decisive constraints on algorithmic analysis of the protein coding potential of DNA. To obtain a copy, do: unix> ftp 129.142.74.40 (or ftp virus.fki.dth.dk) Name: anonymous Password: (your email address, please) ftp> binary ftp> cd pub ftp> get gcode.ps.gz ftp> bye unix> gunzip gcode.ps.gz unix> lpr gcode.ps URL ftp://virus.fki.dth.dk/pub/gcode.ps.gz  From terry at salk.edu Fri Sep 9 23:29:35 1994 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 9 Sep 94 20:29:35 PDT Subject: ** Walter Heiligenberg ** Message-ID: <9409100329.AA12544@salk.edu> Walter Heiligenberg died in the USAir crash on September 8 near Pittsburgh. He was returning from Germany and planning to attend a neuroscience retreat. Walter was a pioneer in modeling neural systems and made major contributions to our understanding of the jamming avoidance response in electric fish. He wrote about this system in his book "Neural Nets in Electric Fish", published by the MIT Press. His tragic death is a great personal loss to his many friends and colleagues. We will all miss him greatly. Terry -----  From yeh at harlequin.co.uk Sat Sep 10 12:27:57 1994 From: yeh at harlequin.co.uk (Yehouda Harpaz) Date: Sat, 10 Sep 94 12:27:57 BST Subject: the implementation of cognition in the human brain Message-ID: <14527.9409101127@wratting.cam.harlequin.co.uk> I have put, at the addresses below, my ideas about the way the cognition system is actually implemented in the brain. The text is not publication ready, but readable. I would appreciate comments. The text is oriented towards psychology and neurobiology, but input from a connectionist point of view would also be useful, and I think connectionists will find it interesting to read. The main points, from the point of view of connectionism, are: 1) Concentrating on thinking, as opposed to feature recognition. 2) The suggestion that learning in the brain is directed by a global mechanism (which is described in the text), and is largely independent of local features. www: http://www.mrc-cpe.cam.ac.uk/yh1/cognition.html anonymous ftp: ftp.mrc-cpe.cam.ac.uk => pub/yh1/cognition.ps Thanks Yehouda Harpaz  From arbib at pollux.usc.edu Mon Sep 12 12:37:06 1994 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Mon, 12 Sep 1994 09:37:06 -0700 Subject: John Szentagothai Message-ID: <199409121637.JAA02879@pollux.usc.edu> John Szentagothai, the neuroanatomist, died at his home in Budapest on the morning of Thursday, September 8th. He had arisen early to work on a book, taken breakfast, and then sat down before going in to the Institute - and died immediately. He was almost 82. Professor Szentagothai has played a leading role in neuroanatomy for many decades, having already established a strong reputation prior to World War II.
In the years since then, he has been active in neuroscience in general, and in Hungarian science in particular, where he created a strong, and international, school of Hungarian neuroanatomists, as well as serving as a vigorous president of the Hungarian Academy of Sciences. His concern for his country continued with a recent term as a member of the Hungarian parliament. Of his many contributions to neuroscience, perhaps two are best known to modelers - his 1969 book on "The Cerebellum as a Neuronal Machine" (with Eccles and Ito) inspired Marr and Albus and many other cerebellar modelers; his 1974/5 book on "Conceptual Models of Neural Organization", and related articles, did much to extend our view of the modular and columnar organization of the brain. His enthusiasm for exposition and his quest to understand the brain continued undiminished until the day he died. I am grateful that his voice was heard for so long, but saddened indeed that I shall not hear it again. Michael Arbib  From Patrik.Floreen at cs.Helsinki.FI Tue Sep 13 04:34:35 1994 From: Patrik.Floreen at cs.Helsinki.FI (Patrik Floreen) Date: Tue, 13 Sep 1994 11:34:35 +0300 Subject: Paper on Neuroprose: Complexity Issues in Discrete Hopfield Networks Message-ID: <199409130834.LAA02415@skiathos.Helsinki.FI> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/floreen.hopfield.ps.Z ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: P. Floreen and P. Orponen: Complexity Issues in Discrete Hopfield Networks (55 pages) Patrik Floreen Department of Computer Science P.O.Box 26 (Teollisuuskatu 23) FIN-00014 University of Helsinki floreen at cs.helsinki.fi Abstract We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories. The text is a draft chapter for the forthcoming book "The Computational and Learning Complexity of Neural Networks: Advanced Topics" (ed. Ian Parberry). Instructions for ftp retrieval of this paper are given below. Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get floreen.hopfield.ps.Z ftp> quit unix> uncompress floreen.hopfield.ps.Z Thanks to Jordan Pollack for maintaining this archive. Patrik Floreen  From bradtke at picard.gteds.gte.com Tue Sep 13 08:43:00 1994 From: bradtke at picard.gteds.gte.com (Steve Bradtke) Date: Tue, 13 Sep 1994 08:43:00 -0400 Subject: Paper on neuroprose archives Message-ID: <199409131245.IAA12909@harvey.gte.com> ftp://archive.cis.ohio-state.edu/pub/neuroprose/bradtke.RLforLQ.ps.Z FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/bradtke.rlforlq.ps.Z Adaptive Linear Quadratic Control Using Policy Iteration (19 pages) CMPSCI Technical Report 94-49 Steven J. Bradtke (1), B. Erik Ydstie (2), and Andrew G.
Barto (1) (1) Computer Science Department University of Massachusetts Amherst, MA 01003 (2) Department of Chemical Engineering Carnegie Mellon University Pittsburgh, PA 15213 bradtke at cs.umass.edu ydstie at andrew.cmu.edu barto at cs.umass.edu Abstract In this paper we present stability and convergence results for Dynamic Programming-based reinforcement learning applied to Linear Quadratic Regulation (LQR). The specific algorithm we analyze is based on Q-learning and it is proven to converge to the optimal controller provided that the underlying system is controllable and a particular signal vector is persistently excited. The performance of the algorithm is illustrated by applying it to a model of a flexible beam. Instructions for ftp retrieval of this paper are given below. Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get bradtke.rlforlq.ps.Z ftp> quit unix> uncompress bradtke.rlforlq.ps.Z Thanks to Jordan Pollack for maintaining this archive. Steve Bradtke ======================================================================= Steve Bradtke (813) 978-6285 GTE Data Services DC F4M Internet: One E. Telecom Parkway bradtke@[138.83.42.66]@gte.com Temple Terrace, FL 33637 bradtke at cs.umass.edu =======================================================================  From bradtke at picard.gteds.gte.com Tue Sep 13 08:44:40 1994 From: bradtke at picard.gteds.gte.com (Steve Bradtke) Date: Tue, 13 Sep 1994 08:44:40 -0400 Subject: Thesis on neuroprose archives Message-ID: <199409131247.IAA12992@harvey.gte.com> ftp://archive.cis.ohio-state.edu/pub/neuroprose/thesis/bradtke.thesis.ps.Z FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/thesis/bradtke.thesis.Z Incremental Dynamic Programming for On-Line Adaptive Optimal Control (133 pages) CMPSCI Technical Report 94-62 Steven J. Bradtke Computer Science Department University of Massachusetts Amherst, MA 01003 bradtke at cs.umass.edu Abstract Reinforcement learning algorithms based on the principles of Dynamic Programming (DP) have enjoyed a great deal of recent attention both empirically and theoretically. These algorithms have been referred to generically as Incremental Dynamic Programming (IDP) algorithms. IDP algorithms are intended for use in situations where the information or computational resources needed by traditional dynamic programming algorithms are not available. IDP algorithms attempt to find a global solution to a DP problem by incrementally improving local constraint satisfaction properties as experience is gained through interaction with the environment. This class of algorithms is not new, going back at least as far as Samuel's adaptive checkers-playing programs, but the links to DP have only been noted and understood very recently. This dissertation expands the theoretical and empirical understanding of IDP algorithms and increases their domain of practical application. We address a number of issues concerning the use of IDP algorithms for on-line adaptive optimal control. We present a new algorithm, Real-Time Dynamic Programming, that generalizes Korf's Learning Real-Time A* to a stochastic domain, and show that it has computational advantages over conventional DP approaches to such problems. We then describe several new IDP algorithms based on the theory of Least Squares function approximation. 
Finally, we begin the extension of IDP theory to continuous domains by considering the problem of Linear Quadratic Regulation. We present an algorithm based on Policy Iteration and Watkins' Q-functions and prove convergence of the algorithm (under the appropriate conditions) to the optimal policy. This is the first result proving convergence of a DP-based reinforcement learning algorithm to the optimal policy for any continuous domain. We also demonstrate that IDP algorithms cannot be applied blindly to problems from continuous domains, even such simple domains as Linear Quadratic Regulation. Instructions for ftp retrieval of this paper are given below. Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose/thesis ftp> binary ftp> get bradtke.thesis.Z ftp> quit unix> uncompress bradtke.thesis.Z Thanks to Jordan Pollack for maintaining this archive. Steve Bradtke ======================================================================= Steve Bradtke (813) 978-6285 GTE Data Services DC F4M Internet: One E. Telecom Parkway bradtke@[138.83.42.66]@gte.com Temple Terrace, FL 33637 bradtke at cs.umass.edu =======================================================================  From rsun at cs.ua.edu Tue Sep 13 16:23:03 1994 From: rsun at cs.ua.edu (Ron Sun) Date: Tue, 13 Sep 1994 15:23:03 -0500 Subject: No subject Message-ID: <9409132023.AA32546@athos.cs.ua.edu> *** Announcing a new book *** available from Kluwer Academic Publishers: COMPUTATIONAL ARCHITECTURES INTEGRATING NEURAL AND SYMBOLIC PROCESSES: A PERSPECTIVE ON THE STATE OF THE ART Edited by Ron Sun and Larry Bookman ISBN 0-7923-9517-4 (Order information is at the end of this message) ------------------------------------------- The focus of this book is on a currently emerging body of research --- computational architectures integrating neural and symbolic processes. There has been a great deal of work in integrating neural and symbolic processes, both from a cognitive and/or applicational viewpoint. The editors of this book intend to address the underlying architectural aspects of this integration. In order to provide a basis for a deeper understanding of existing divergent approaches and provide insight for further developments in this field, the book presents (1) an examination of specific architectures (grouped together according to their approaches), their strengths and weaknesses, why they work, and what they predict, and (2) a critique/comparison of these approaches. The book will be of use to researchers, graduate students, and interested laymen, in areas such as cognitive science, artificial intelligence, computer science, cognitive psychology, and neurocomputing, in keeping up to date with the newest research trends. It can also serve as a comprehensive, in-depth introduction to this newly emerging field. A unique feature of the book is a comprehensive bibliography at the end of the book. --------------------------------------------
TABLE OF CONTENTS

Foreword by Michael Arbib
Preface by Ron Sun and Larry Bookman

Chapter 1 An Introduction: On Symbolic Processing in Neural Networks, by Ron Sun
(Introduction / Brief Review / Existing Approaches / Issues and Difficulties / Future Directions, Or Where Should We Go From Here? / Overview of the Chapters / Summary)

Part I Localist Architectures

Chapter 2 Complex Symbol-Processing in Conposit, A Transiently Localist Connectionist Architecture, by John A. Barnden
(Introduction / The Johnson-Laird Theory and Its Challenges / Mental Models in Conposit / Connectionist Realization of Conposit / Coping with the Johnson-Laird Challenge / Simulation Runs / Discussion / Summary)

Chapter 3 A Structured Connectionist Approach to Inferencing and Retrieval, by Trent E. Lange
(Introduction / Language Understanding and Memory Retrieval Models / Inferencing in ROBIN / Episodic Retrieval in REMIND / Future Work / Summary)

Chapter 4 Hierarchical Architectures for Reasoning, by R.C. Lacher and K.D. Nguyen
(Introduction / Computational Networks: A General Setting for Distributed Computations / Type x00 Computational Networks / Expert Systems / Expert Networks / Neural Networks / Summary)

Part II Distributed Architectures

Chapter 5 Subsymbolic Parsing of Embedded Structures, by Risto Miikkulainen
(Introduction / Overview of Subsymbolic Sentence Processing / The SPEC Architecture / Experiments / Discussion / Summary)

Chapter 6 Towards Instructable Connectionist Systems, by David C. Noelle and Garrison W. Cottrell
(Introduction / Systematic Action / Linguistic Interaction / Learning By Instruction / Summary)

Chapter 7 An Internal Report for Connectionists, by Noel E. Sharkey and Stuart A. Jackson
(Introduction / The Origins of Connectionist Representation / Representation and Decision Space / Discussion / Summary)

Part III Combined Architectures

Chapter 8 A Two-Level Hybrid Architecture for Structuring Knowledge for Commonsense Reasoning, by Ron Sun
(Introduction / Developing A Two-Level Architecture / Fine-Tuning the Structure / Experiments / Comparisons with Other Approaches / Summary)

Chapter 9 A Framework for Integrating Relational and Associational Knowledge for Comprehension, by Lawrence A. Bookman
(Introduction / Overview of LeMICON / Text Comprehension / Encoding Semantic Memory / Representation of Semantic Constraints / Experiments and Results / Algorithm / Summary)

Chapter 10 Examining a Hybrid Connectionist/Symbolic System for the Analysis of Ballistic Signals, by Charles Lin and James Hendler
(Introduction / Related Work in Hybrid Systems / Description of the SCRuFFY Architecture / Analysis of Ballistic Signals / Future Work / Conclusion)

Part IV Commentaries

Chapter 11 Symbolic Artificial Intelligence and Numeric Artificial Neural Networks: Towards a Resolution of the Dichotomy, by Vasant Honavar
(Introduction / Shared Foundations of SAI and NANN / Knowledge Representation Revisited / A Closer Look at SAI and NANN / Integration of SAI and NANN / Summary)

Chapter 12 Connectionist Natural Language Processing: A Status Report, by Michael G. Dyer
(Introduction / Dynamic Bindings / Functional Bindings and Structured Pattern Matching / Encoding and Accessing Recursive Structures / Forming Lexical Memories / Forming Semantic and Episodic Memories / Role of Working Memory / Routing and Control / Grounding Language in Perception / Future Directions / Conclusions)

Appendix Bibliography of Connectionist Models with Symbolic Processing
Author Index
Subject Index
--------------------------------------------- To order: ISBN 0-7923-9517-4 Kluwer, Order Dept. P.O.B. 358 Accord Station, Hingham, MA 02018-0358 (617) 871-6600 FAX: (617) 871-6528 e-mail: Kluwer at world.std.com ---------------------------------------------  From arbib at pollux.usc.edu Thu Sep 15 19:32:28 1994 From: arbib at pollux.usc.edu (Michael A.
Arbib) Date: Thu, 15 Sep 1994 16:32:28 -0700 Subject: Two Positions Available: Data Bases, Visualization, and Simulation for Brain Research Message-ID: <199409152332.QAA25280@pollux.usc.edu> Professors Michael Arbib (Director), Michel Baudry, Theodore Berger, Peter Danzig, Shahram Ghandeharizadeh, Scott Grafton, Dennis McLeod, Thomas McNeill, Larry Swanson, and Richard Thompson have secured a Program Project grant from the Human Brain Project (a consortium of federal agencies led by the National Institute of Mental Health) for a 5 year project, "Neural Plasticity: Data and Computational Structures", to be conducted at the University of Southern California. The Project will combine research on databases with the development of tools for database construction and data recovery from multiple databases, simulation tools, and visualization tools for both rat neuroanatomy and human brain imaging. These tools will be used to construct databases for research at USC and elsewhere on mechanisms of neural plasticity in basal ganglia, cerebellum, and hippocampus. The grant will also support a core of neuroscience research (both experimental and computational) linked to several ongoing research programs to explore how experiments can be enhanced when coupled to databases enriched with powerful tools for modeling and visualization. The project is a major expression of USC's approach to the study of the brain which locates neuroscience in the context of a broad interdisciplinary program in Neural, Informational, and Behavioral Sciences (NIBS). The grant provides funding for two computer professionals to help us develop a system integrating databases, discovery tools, visualization and simulation for neuroscience. The DATA DEVELOPER is to function as a "knowledge engineer" helping neuroscientists explicate data and system needs. Experience is required with WWW's httpd and Mosaic, UNIX, and Macintosh software. A background in neuroscience, while welcome, is not required. We do require proven communication skills and ability to analyze scientific data, with at least three years professional experience. The SYSTEMS PROGRAMMER must have at least three years experience programming and developing object-oriented databases, including UNIX, C++, and DBMS experience. Experience with graphics, simulation tools and Internet protocols would be welcome. We require demonstrated ability to package software for public distribution, using multiple platforms. Send CV, references, and letter addressing the above qualifications to Paulina Tagle, Center for Neural Engineering, USC, Los Angeles, CA 90089-2520; Fax (213) 740-5687 paulina at pollux.usc.edu. USC is an equal opportunity employer.  From goodman at unr.edu Fri Sep 16 14:09:41 1994 From: goodman at unr.edu (Phil Goodman) Date: Fri, 16 Sep 1994 11:09:41 -0700 (PDT) Subject: Position Announcement Message-ID: <9409161809.AA15685@equinox.unr.edu> ******* Preliminary Position Announcement ******* NEURAL NETWORK METHODOLOGIST -- VISITING or RESEARCH FACULTY MEMBER (Basic and Applied Research; 100% of Time Protected for Project-Related and Independent Research) Center for Biomedical Modeling Research University of Nevada, Reno The University of Nevada Center for Biomedical Modeling Research (CBMR), located at the base of the Sierra Nevada Mountains near Lake Tahoe, is an interdisciplinary research project involving the Departments of Medicine, Electrical Engineering, and Computer Science. 
Under federal funding, CBMR faculty and collaborators apply neural network and advanced probabilistic/statistical concepts to large health care databases. In particular, they are developing methods to: (1) improve the accuracy of predicting surgical mortality, (2) interpret nonlinearities and interactions among predictors, and (3) manage missing data. The CBMR seeks a PhD (or equivalent) methodologist trained in advanced artificial neural network theory, model generalization, probability and statistical theory, and C programming. This person will have major responsibility for the design of techniques that improve the ability of nonlinear models to generalize, and will supervise several C programmers to implement concepts into software (much of the resulting software will be freely distributed for use in many fields). Working knowledge of decision theory, Bayesian statistics, bootstrap, ROC analysis, or imputation of missing data is desirable. Starting date is November 15, with an expected duration of at least 2 years. Appointment possibilities include: * Research Assistant Professor (non-tenure track) * Visiting Professor (Assistant/Associate/Full) (salary could be added to available sabbatical or other partial funding) Funding is also available for a graduate student to work under the faculty member, and possibly a post-doctoral position. The position will remain open until filled. The University of Nevada employs only U.S. citizens and aliens lawfully authorized to work in the United States. AA-EOE. If interested, please send (by written, faxed, or plain-text electronic mail): a cover letter detailing your qualifications, and a resume that includes the names and phone numbers of three references. ______________________________________________________________________________ Philip H. Goodman, MD, MS E-mail: goodman at unr.edu Associate Professor of Medicine, Electrical Engineering, & Computer Science University of Nevada Center for Biomedical Modeling Research World-Wide Web: http://www.scs.unr.edu/~cbmr/ Washoe Medical Center, 77 Pringle Way, Reno, Nevada 89520 USA Voice: +1 702 328-4869 FAX: +1 702 328-4871 ______________________________________________________________________________  From esann at dice.ucl.ac.be Fri Sep 16 13:58:00 1994 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Fri, 16 Sep 1994 19:58:00 +0200 Subject: Neural Processing Letters: WWW and FTP servers available Message-ID: <9409161747.AA02377@ns1.dice.ucl.ac.be> ***************************** * Neural Processing Letters * ***************************** Neural Processing Letters is a new rapid publication journal in the field of neural networks. This journal gives authors the possibility to publish with very short delays (less than 3 months) new ideas, original developments and work in progress in all aspects of the artificial neural networks field, including, but not restricted to, theoretical developments, biological models, new formal models, learning, applications, software and hardware developments, and prospective research. Because of the short delays of publication, the journal informs the readers about the LATEST developments and results in the field, and should be used by all researchers concerned with neural networks to follow the CURRENT status of the research in their area. Information on Neural Processing Letters is now available through WWW (Mosaic server) and anonymous FTP.
If you do not have access to FTP or WWW, please don't hesitate to contact the publisher directly for more information. FTP server: ftp.dice.ucl.ac.be directory: /pub/neural-nets/NPL WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html Publisher: D facto publications 45 rue Masui B-1210 Brussels Belgium Phone: + 32 2 245 43 63 Fax: + 32 2 245 46 94 _____________________________ D facto publications - conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 _____________________________  From bradtke at picard.gteds.gte.com Fri Sep 16 12:22:47 1994 From: bradtke at picard.gteds.gte.com (Steve Bradtke) Date: Fri, 16 Sep 1994 12:22:47 -0400 Subject: Thesis on neuroprose archives (corrected repost) Message-ID: <199409161625.MAA17435@harvey.gte.com> This repost corrects the directory path to the document, and the document name. I apologize for any problems that the last posting may have caused. Steve ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/bradtke.thesis.ps.Z FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/Thesis/bradtke.thesis.ps.Z Incremental Dynamic Programming for On-Line Adaptive Optimal Control (133 pages) CMPSCI Technical Report 94-62 Steven J. Bradtke Computer Science Department University of Massachusetts Amherst, MA 01003 bradtke at cs.umass.edu Abstract Reinforcement learning algorithms based on the principles of Dynamic Programming (DP) have enjoyed a great deal of recent attention both empirically and theoretically. These algorithms have been referred to generically as Incremental Dynamic Programming (IDP) algorithms. IDP algorithms are intended for use in situations where the information or computational resources needed by traditional dynamic programming algorithms are not available. IDP algorithms attempt to find a global solution to a DP problem by incrementally improving local constraint satisfaction properties as experience is gained through interaction with the environment. This class of algorithms is not new, going back at least as far as Samuel's adaptive checkers-playing programs, but the links to DP have only been noted and understood very recently. This dissertation expands the theoretical and empirical understanding of IDP algorithms and increases their domain of practical application. We address a number of issues concerning the use of IDP algorithms for on-line adaptive optimal control. We present a new algorithm, Real-Time Dynamic Programming, that generalizes Korf's Learning Real-Time A* to a stochastic domain, and show that it has computational advantages over conventional DP approaches to such problems. We then describe several new IDP algorithms based on the theory of Least Squares function approximation. Finally, we begin the extension of IDP theory to continuous domains by considering the problem of Linear Quadratic Regulation. We present an algorithm based on Policy Iteration and Watkins' Q-functions and prove convergence of the algorithm (under the appropriate conditions) to the optimal policy. This is the first result proving convergence of a DP-based reinforcement learning algorithm to the optimal policy for any continuous domain. We also demonstrate that IDP algorithms cannot be applied blindly to problems from continuous domains, even such simple domains as Linear Quadratic Regulation. Instructions for ftp retrieval of this paper are given below. Please do not reply directly to this message.
FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose/Thesis ftp> binary ftp> get bradtke.thesis.ps.Z ftp> quit unix> uncompress bradtke.thesis.ps.Z Thanks to Jordan Pollack for maintaining this archive. Steve Bradtke ======================================================================= Steve Bradtke (813) 978-6285 GTE Data Services DC F4M Internet: One E. Telecom Parkway bradtke@[138.83.42.66]@gte.com Temple Terrace, FL 33637 bradtke at cs.umass.edu =======================================================================  From massone at mimosa.eecs.nwu.edu Fri Sep 16 14:59:58 1994 From: massone at mimosa.eecs.nwu.edu (Lina Massone) Date: Fri, 16 Sep 94 13:59:58 CDT Subject: Paper on dynamics of superior colliculus & recurrent backprop. Message-ID: <9409161859.AA16551@mimosa.eecs.nwu.edu> ftp-host: archive.cis.ohio-state.edu ftp-file: massone.colliculus.ps.Z The following paper has been placed in the connectionist archive. The paper has been accepted for publication in "Network". (Note: the paper takes a LONG time to print because of the figures.) Local Dynamic Interactions in the Collicular Motor Map: A Neural Network Model Lina L.E. Massone and Tony Khoshaba In this paper we explore the possibility that some of the dynamic properties of the neural activity in the gaze-related motor map (located in the intermediate layers of the superior colliculus) might be mediated by local interactions between movement-related neurons and fixation neurons. More specifically, the goal of this research is to demonstrate, from a computational standpoint, which classes of dynamic behaviors of the collicular neurons can be obtained without the intervention of feedback signals, and hence to begin exploring to what extent the gaze system needs feedback in order to operate. We modeled: (a) The collicular motor map as a dynamical system realized with a recurrent neural network. (b) The dynamics of the neural activity in the map as the transients of that system towards an equilibrium configuration that the network learned with a recurrent learning algorithm (recurrent backpropagation). The results of our simulations demonstrate: (1) That the transients of the trained network are hill-flattening patterns as observed by some experimenters in the burst-neuron layer of the superior colliculus of rhesus monkeys. This result was obtained despite the fact that the learning algorithm did not specify what the network's transients should be. (2) That the connections in the trained network are excitatory within the fixation zone of the motor map and inhibitory elsewhere. (3) That the results of the learning are robust in the face of changes in the connectivity pattern and in the initialization of the weights, but that a local connectivity pattern favors the network's stability. (4) That nonlinearity is required in order to obtain meaningful dynamic behaviors. (5) That the trained network is robust to abnormal stimulation patterns such as noisy and multiple stimuli and that when multiple stimuli are utilized the response of the network remains a stereotyped flattening one. The results of the learning point out the possibility that the dynamics of the burst-neuron layer of the superior colliculus might be locally regulated rather than feedback-driven, and that the action of the feedback may be confined to the layer of the buildup neurons.
The results of the multiple-stimulation experiment support the hypothesis, already put forward by one of the authors in a previous work (Massone in press), that the averaging of the direction of movement following double stimulation of the motor map (Robinson 1972) does not occur at the level of the motor map. This paper also constitutes a study of the properties and responses of recurrent backpropagation under various choices for the network's and algorithm's parameters.  From arantza at cogs.susx.ac.uk Sat Sep 17 17:00:00 1994 From: arantza at cogs.susx.ac.uk (Arantza Etxeberria) Date: Sat, 17 Sep 94 17:00 BST Subject: Workshop Announcement Message-ID: A non-text attachment was scrubbed... Name: not available Type: x-sun-attachment Size: 22451 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/9d84a4dc/attachment.ksh From marshall at cs.unc.edu Sun Sep 18 19:51:29 1994 From: marshall at cs.unc.edu (Jonathan A. Marshall) Date: Sun, 18 Sep 1994 19:51:29 -0400 Subject: Cognitive Science faculty job at Duke University Message-ID: <199409182351.TAA13811@marshall.cs.unc.edu> [Please reply to the address below, not to the poster.] ---------------------------------------------------------------------- From: greg at psych.duke.edu (Gregory Lockhead) Subject: Cognitive Science faculty job at Duke University Duke University announces a tenure-track assistant professorship in Cognitive Science. Specialty areas to be considered include but are not limited to: attention, imagery, memory, motor control, and vision in humans. Some combination of computational, developmental, experimental, mathematical, or neuroscience perspectives is preferred. Send a letter of application and at least three letters of recommendation to: Cognitive Science Search Committee Department of Psychology: Experimental Duke University Durham, NC 27708, USA. Applications received by December 1, 1994 will be guaranteed consideration. Duke University is an Equal Opportunity/Affirmative Action Employer. ----------------------------------------------------------------------  From Adriaan.Tijsseling at phil.ruu.nl Mon Sep 19 07:13:26 1994 From: Adriaan.Tijsseling at phil.ruu.nl (Adriaan Tijsseling) Date: Mon, 19 Sep 1994 13:13:26 +0200 (MET DST) Subject: Thesis available on Categorization Message-ID: <199409191113.NAA15756@laurel.stud.phil.ruu.nl> The following thesis is available by anonymous ftp from ftp.phil.ruu.nl: A Hybrid Framework for Categorization a Master's Thesis by: Adriaan Tijsseling, Dept. of Cognitive Artificial Intelligence, Faculty of Philosophy, Utrecht University. The thesis is in the directory /pub/papers/tijsseling-Cat94.ps.Z Login as "anonymous" or "ftp" and use your email address as password. The file is 900 Kb compressed, 131 pages long. ABSTRACT: The thesis proposes a hybrid framework for categorization based on the framework by Stevan Harnad. First an extensive review is given of the state of the art in categorization and categorical perception research. The theory approach to categorization is argued to be the most promising paradigm. Second, Harnad's framework is treated, in which categorical perception and higher order categorization are combined in one explanatory model. I suggest a way to incorporate the theory approach. Third, this framework is related to the symbolic/connectionist paradigms. I argue that a hybrid approach together with a sensori-motor component is the best option so far for implementing the framework.
A second part of the thesis investigates the connectionist part of the hybrid framework. Here neural networks consisting of several CALM Map modules (based on the Categorization and Learning Module) are trained to learn to topologically order lines of various orientations. It is shown that this network exhibits categorical perception. Adriaan Tijsseling  From david_field at qmrelay.mail.cornell.edu Mon Sep 19 11:34:23 1994 From: david_field at qmrelay.mail.cornell.edu (david field) Date: 19 Sep 1994 11:34:23 -0400 Subject: Position available Message-ID: Subject: Time:3:39 PM OFFICE MEMO Position available Date:9/18/94 The following position will be available in 1995. Candidates with connectionist and computational approaches to cognitive phenomena are especially encouraged to apply. There is a possibility that a second position in this area will also become available. ______________________________ Cognitive Psychologist Cornell University The Department of Psychology at Cornell University is considering candidates for a tenure-track assistant professorship in cognition. Areas of specialization include but are not limited to: memory, attention, language and speech processing, concepts, knowledge representation, reasoning, problem solving, judgment and decision making, perception, motor control and action. Researchers with computational, mathematical, developmental, cross-cultural, or neuroscience perspectives, among others, are encouraged to apply. The position will begin in August, 1995. Review of applications will begin December 1, 1994. Cornell University is an Equal Opportunity/Affirmative Action Employer. Interested applicants should submit a letter of application that includes one or more key words indicating their specific area(s) of interest or specialization, curriculum vitae, reprints or preprints of completed research, and letters of recommendation sent directly from three referees to: Secretary, Cognitive Psychology Search Committee Department of Psychology, Uris Hall, Cornell University Ithaca, NY 14853-7601, USA. email: kas10 at cornell.edu FAX: 607-255-8433 Voice: 607-255-6364  From hinton at cs.toronto.edu Mon Sep 19 11:32:11 1994 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Mon, 19 Sep 1994 11:32:11 -0400 Subject: Send us your data Message-ID: <94Sep19.113216edt.797@neuron.ai.toronto.edu> We are planning to create a database of tasks for evaluating supervised neural network learning procedures (both classification and regression). The main aim of the enterprise is to make it as easy as possible for neural net researchers to compare the performance of their latest algorithm with the performance of many other techniques on a wide variety of tasks. A subsidiary aim is to encourage neural net researchers to use systematic ways of setting "free parameters" in their algorithms (such as the number of hidden units, the weight-decay etc. etc.). It's easy to fudge these parameters on a single dataset, but these fudges become far more evident when the same algorithm is applied to many different tasks. If you have a real-world dataset with 500 or more input-output cases that fits the criteria below, we would really like to get it from you. You will be helping the research community, and by including it in this database you will ensure that lots of different methods get tried on your data. WHAT'S THE POINT OF YET ANOTHER DATABASE 1. Since some neural network learning procedures are quite slow, we want a database in which there is a designated training and test set for each task.
We don't want to train many different times on different subsets of the data, testing on the remainder. To avoid excessive sampling error we want the designated test set to be quite large, so even though the aim is to evaluate performance on smallish training sets, we will avoid tasks where there is only a small amount of data available for testing. The justification for only using a single way of splitting the data into training and test sets is this: For a given amount of computer time, it's better to evaluate performance on many different tasks once than to evaluate performance on one task many times, since this cuts down on the noise caused by the random choice of task. 2. To make life easy we want to focus on tasks in which there are no missing values and all of the inputs are numerical. This could be viewed as tailoring the database to make life easy for algorithms that are limited in certain ways. That is precisely our intention. 3. We want all the tasks to be in the same format (which they are not if a researcher gets different tasks from different databases). 4. We want the database to include results from many different algorithms with an email guarantee that NONE of the free parameters of the algorithms were tuned by looking at the results on the test data. So for a result to be entered the researcher will have to specify how all the free parameters were set, and the same recipe should preferably be used for all the tasks in the database. It's fine to say "I always use 80 hidden units because that worked nicely on the first example I ever tried". Just so long as there is a reason. We plan to run quite a few of the standard methods ourselves, so other researchers will be able to just run their favorite method and get a fair comparison with other methods.

WHAT KINDS OF TASKS WILL WE INCLUDE

In addition to excluding missing values and nominal attributes we will initially exclude time series tasks, so the order of the examples will be unimportant. Each task will have a description that includes known limits on input and output variables. We will include both real and synthetic tasks. For the synthetic tasks the description will specify the correct generative model (i.e. exactly how the data was generated), but researchers will only be allowed to use the training data for learning. They will have to pretend that they do not know the correct generative model when they are setting the free parameters of their algorithm. Tasks will vary in the following ways:
- Dimensionality of input.
- Dimensionality of output.
- Degree of non-linearity.
- Noise level and type of noise in both input and output.
- Number of irrelevant variables.
- The existence of topology on the input space and known invariances in the input-output mapping.

WHERE WILL THE TASKS COME FROM?

Many of them will come from existing databases (e.g. the UC Irvine machine learning database). Hopefully other connectionists will provide us with data or with pointers to data or databases. To be useful, the data should have few missing values (we'll simply leave out those cases), no nominal attributes, and at least 500 cases (preferably many more) so that the test set can be large. In addition to pointers to suitable datasets, now would be a good time to comment on the design of this database since it will soon be too late, and we would like this effort to be of use to as many fellow researchers as possible.
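
As a concrete illustration of the protocol proposed above -- one designated training set and one designated test set per task, free parameters fixed by a stated rule before the test data is ever touched, and the test error reported once -- here is a minimal sketch in modern Python/NumPy. The file layout (whitespace-separated rows with the target in the last column), the linear ridge-regression model, and the fixed ridge value are illustrative assumptions of this sketch, not part of the proposed database.

    import numpy as np

    def evaluate_task(train_file, test_file, ridge=0.1):
        # Each task ships with a designated train/test split; we never resplit.
        train = np.loadtxt(train_file)   # one row per case, target in last column
        test = np.loadtxt(test_file)
        X, y = train[:, :-1], train[:, -1]
        Xt, yt = test[:, :-1], test[:, -1]
        # The single free parameter ('ridge') is fixed a priori by a stated
        # rule, never tuned by looking at results on the test set.
        d = X.shape[1]
        w = np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ y)
        # Report the test-set error exactly once.
        return float(np.mean((Xt @ w - yt) ** 2))

    # e.g. mse = evaluate_task("task1.train", "task1.test")

The point of the sketch is the discipline rather than the model: evaluate_task is run once per task, with the same parameter-setting rule applied across every task in the database.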
From UBJTP69 at CCS.BBK.AC.UK Mon Sep 19 11:20:00 1994 From: UBJTP69 at CCS.BBK.AC.UK (Gareth) Date: Mon, 19 Sep 94 11:20 BST Subject: Jobs Available Message-ID: CENTRE FOR SPEECH AND LANGUAGE Birkbeck College University of London RESEARCH ASSISTANTS AND SYSTEMS MANAGER Applications are invited for four research and research-support positions in the Center for Speech and Language at Birkbeck College, to work on experimental and computational research into spoken language comprehension in normal and language-impaired populations, with Professors William Marslen-Wilson and Lorraine K. Tyler. All of these positions have a starting date of January 1st 1995, and three positions, funded by an MRC Programme Grant, are potentially five year appointments, to December 1999. The fourth position, funded by an ESRC project grant, is available for 3 years, until December 1997. Position 1: Research Assistant This position is to support current experimental and computational research into the representation and processing of lexical form, focussing on the phonological aspects of speech comprehension, and working with William Marslen-Wilson and Gareth Gaskell. Candidates should have a background in experimental psycholinguistics and linguistics. Experience in computational modelling using connectionist techniques would be an advantage. This MRC-funded position is available for five years from January 1995. Salary will be on the 1A scale (16075-19141 English pounds, inclusive of London Allowance). Position 2: Research Assistant This position is to support experimental research into language comprehension in normal and language-impaired populations, ranging from lexical access to syntactic parsing and semantic interpretation, and working primarily with Lorraine K. Tyler. Candidates should have a background in cognitive psychology and/or psycholinguistics, and clinical experience would be an advantage. This MRC-funded position is available for two years from January 1995, with a strong possibility of extension for a further three years. Salary will be on the 1A scale (16075-19141 English pounds, inclusive of London Allowance). Position 3: Research Assistant This position is to support computational and experimental research into the mental representation and processing of English inflectional and derivational morphology, working with William Marslen-Wilson and Mary Hare (at UCSD). Candidates should have a background in language and connectionism, with experience both of experimentation and modelling. This ESRC-funded post is available for three years from January 1995. Salary is expected to be on the 1A scale (16075-23087 English pounds, inclusive of London Allowance). Position 4: Half-time Systems Manager Applications are invited for a half-time post supporting the computing needs of the Centre for Speech and Language. Applicants should have experience working with UNIX/C systems and PC/Mac-based networks. As well as supporting the research centre's computer network, the successful candidate will be responsible for the maintenance and possibly development of the experimental software used by the research group. An interest in Experimental Psychology would therefore be an advantage. The post would be suitable for someone wishing to pursue a part-time research degree within the research group or elsewhere in London. This MRC-funded position is tenable from January 1995, for up to five years. Salary is on the AR2 scale at 9570 English pounds, including London Allowance. 
To apply for any of these posts, send three copies of your CV, plus the names and addresses of 3 referees, to: Professor William Marslen-Wilson, Department of Psychology, Birkbeck College, Malet St., London WC1E 7HX. Fax: (44)-(0)71-631-6312. Email: ubjta38 at cu.bbk.ac.uk. Please make clear which post you are applying for. Closing date for applications is October 14th, 1994  From prechelt at ira.uka.de Tue Sep 20 10:51:46 1994 From: prechelt at ira.uka.de (Lutz Prechelt) Date: Tue, 20 Sep 1994 16:51:46 +0200 Subject: Send us your data In-Reply-To: Your message of "Mon, 19 Sep 1994 11:32:11 EDT." <94Sep19.113216edt.797@neuron.ai.toronto.edu> Message-ID: <"irafs2.ira.733:20.09.94.14.51.41"@ira.uka.de>
> We are planning to create a database of tasks for evaluating supervised neural
> network learning procedures (both classification and regression). The main
You may be interested to know that I have started a similar project earlier this year, which will be finished in at most a few weeks. My benchmark collection, called Proben1, contains 45 datasets for 15 different learning problems from 12 different domains. All but one of these problems stem from the UCI machine learning databases archive. I chose an approach that differs from yours in a few points:
- Small datasets, too.
- I use a smaller part of the dataset as test set (25%) but use three different partitionings instead.
- All data partitionings also include an exactly specified validation set (if one is needed; otherwise this, too, is part of the training set).
- Problems with nominal attributes.
- Problems with missing values.
- Canonical input and output representation (range 0...1).
Nevertheless, you may want to have a look at my collection. I believe it would be good if you would for instance use the same (very simple) file format for the data in your collection, so that researchers can read the data from both collections using the same input procedure. My collection will be installed for anonymous ftp in the neural bench archive at CMU. The technical report describing it will be announced on this mailing list and will be available from neuroprose. [ Geoffrey, I'll send the draft version of my report to you by personal mail. ] Lutz Lutz Prechelt (email: prechelt at ira.uka.de) | Whenever you Institut fuer Programmstrukturen und Datenorganisation | complicate things, Universitaet Karlsruhe; 76128 Karlsruhe; Germany | they get (Voice: ++49/721/608-4068, FAX: ++49/721/694092) | less simple.  From IDROR at miavx1.acs.muohio.edu Tue Sep 20 15:18:26 1994 From: IDROR at miavx1.acs.muohio.edu (IDROR@miavx1.acs.muohio.edu) Date: Tue, 20 Sep 1994 15:18:26 -0400 (EDT) Subject: Position available Message-ID: <01HHC8U527RY9JGYAZ@miavx1.acs.muohio.edu> ASSISTANT PROFESSOR OF COGNITIVE PSYCHOLOGY - MIAMI UNIVERSITY. The Department of Psychology at Miami University anticipates up to two tenure track positions in cognitive psychology, beginning August 1995. Areas of specialization are open, but applicants with a strong background in cognitive science and experience in computational modelling/cognitive simulations will be given special attention. Responsibilities include graduate and undergraduate teaching in the areas of cognitive science and quantitative methods, supervision of doctoral research, and continuing research in the applicant's area of interest. Women and minorities are especially encouraged to apply. Salary will be commensurate with training, research productivity, and experience.
Applicants should submit a letter describing research and teaching interests and experience, a vita, representative reprints, and at least three letters of recommendation to Richard C. Sherman, Cognitive Search Committee Chair, Department of Psychology, Miami University, Oxford, Ohio 45056. Review of applications will begin January 2, 1995. Miami University is an affirmative action equal opportunity employer.  From kehagias at eng.auth.gr Wed Sep 21 10:29:05 1994 From: kehagias at eng.auth.gr (Thanos Kehagias) Date: Wed, 21 Sep 94 17:29:05 +0300 Subject: machine learning databases Message-ID: <9409211429.AA22262@vergina.eng.auth.gr> on the subject of machine learning databases, here is a request. if one has a pointer to the kind of data i need (see next paragraph), or if the people setting up databases now would like to consider including this kind of data, i will be most grateful. i am looking at the problem of Time Series Classification. in other words, there is a number of possible sources, each producing a time series. previous instances of these time series have been observed, either labelled (supervised learning) or unlabelled (unsupervised learning). now a new time series is observed and one wants to decide which source generated it. there is a lot of algorithms in the literature that do this kind of thing (i have some of my own, even) and everyone seems to be using their own example problem and/or dataset. a classical example of this problem is, of course, phoneme recognition. what i have not been able to find is some standard datasets to be used for benchmarks (e.g. sonar, radar signals, EEG, ECG data and so on). i mean the raw time series, not some kind of preprocessed data where the whole time series is reduced to a static feature vector. does anyone know of any such data in the public domain (except speech data)? i think this would be a useful benchmark, and the kind of thing that i have not seen in, for instance, the uci collection. Thanks a lot, Thanasis  From petsche at scr.siemens.com Wed Sep 21 16:02:34 1994 From: petsche at scr.siemens.com (Thomas Petsche) Date: Wed, 21 Sep 1994 16:02:34 -0400 Subject: CALL FOR PARTICIPATION - NIPS workshop on novelty detection Message-ID: <199409212002.QAA02008@puffin.scr.siemens.com> There will be a NIPS workshop on Novelty detection and adaptive system monitoring which will focus on novelty detection, unsupervised learning, and algorithms designed to monitor a system to detect failures or incorrect behavior. A more detailed description is attached below. If you would be interested in making a presentation at this workshop, please send email to petsche at scr.siemens.com We are interested in presentations on
* novelty detection and unsupervised learning algorithms;
* models of biological novelty detection and unsupervised learning systems;
* real-world examples of monitoring or novelty detection problems -- whether you have a final solution yet or not.
TITLE Novelty detection and adaptive system monitoring DESCRIPTION The purpose of the discussion is to bring together researchers working on different real world system monitoring tasks and those working on novelty detection algorithms and models in order to hasten the development of broadly applicable adaptive monitoring algorithms. Unexpected failure of a machine or system can have severe and expensive consequences. One of the most infamous examples is the sudden failure of military helicopter rotor gearboxes, which can lead to a complete loss of the helicopter and all aboard.
There are many, more mundane, similar examples. The unexpected failure of a motor in a paper mill causes a loss of the product in production as well as lost production time while the motor is replaced. A computer or network overload, due to normal traffic or a virus invasion, can lead to a system crash that can cause loss of data and downtime. In these examples and others, it can be cost effective to ``monitor'' the system of interest and signal an operator when the monitored conditions indicate an imminent failure. Thus, one might assign a technician to listen to all the fire pumps on a ship and replace any that starts to sound like it is in danger of failing. This is analogous to periodically glancing at the fuel gauge in your car to make sure you do not run out of gas. An adaptive system monitor is an adaptive system that estimates the condition of the monitored system from a set of periodic measurements. This task is typically complicated by the fact that the measurements are complex and high dimensional. Adaptation is necessary since the measurements will depend on the peculiarities of the system being monitored and its environment. This workshop will focus on the use of novelty detection for the problem of system monitoring. A novelty detector is a device or algorithm which is trained on a set of examples and learns to recognize or reproduce those examples. Any new example that is significantly different from the training set is identified as ``novel'' because it is unlike any example in the training set. We will discuss various approaches to novelty detection; how it differs from multiple class supervised learning and purely unsupervised learning; biological relevance and how to use what is known about biological systems; complexity issues due to single class data; how to detect only certain types of novelty; and the use of novelty detection algorithms on real world devices such as helicopter rotor gears, electric motors, computers, networks, automobiles etc. FORMAT We aim to have presentations about real world monitoring problems, novelty detection and monitoring algorithms, and biological and psychological models that exhibit novelty detection, all aimed at stirring up questions and discussions. WORKSHOP CHAIRS Thomas Petsche and Stephen J. Hanson Siemens Corporate Research, Inc. Mark Gluck Rutgers University VERY BRIEF RESUMES Thomas Petsche leads a 2 year-old effort to develop an electric motor monitoring system. Stephen J. Hanson is the head of the Learning Systems Department at SCR and a frequent contributor to the motor monitoring project. Mark Gluck is a professor of neurobiology at Rutgers University and has authored several papers on a model of the hippocampus based on a neural network auto-associator which functions as a novelty detector.  From wahba at stat.wisc.edu Wed Sep 21 16:30:01 1994 From: wahba at stat.wisc.edu (Grace Wahba) Date: Wed, 21 Sep 94 15:30:01 -0500 Subject: gacv-paper available-deg.fdm.sig Message-ID: <9409212030.AA08578@hera.stat.wisc.edu> The following paper is available by ftp in the ftp directory ftp.stat.wisc.edu/pub/wahba in the file gacv.ps.gz: A Generalized Approximate Cross Validation for Smoothing Splines with Non-Gaussian Data by Dong Xiang and Grace Wahba Abstract We consider the model Prob {Y_i = 1} = exp{f(t_i)}/(1+exp{f(t_i)}), Prob {Y_i = 0} = 1/(1+exp{f(t_i)}), where t is a vector of predictor variables, t_i is the vector of predictor variables for the i th subject/patient/instance and Y_i is the outcome (classification) for the i th subject.
f(\cdot) is supposed to be a `smooth' function of t, and the goal is to estimate f by choosing f in an appropriate class of functions to minimize -Log likelihood {Y_1, ..., Y_n|f} + \lambda J(f), where J(f) is an appropriate penalty functional which restricts the degrees of freedom for signal attributed to f. Our results concentrate on J(f) as a `smoothness' penalty which results in spline and related (e.g. rbf) estimates. We propose a Generalized Approximate Cross Validation score (GACV) for estimating $\lambda$ (internally) from a relatively small data set. The GACV score is derived by first obtaining an approximation to the leaving-out-one cross validation function and then, in a step reminiscent of that used to get from leaving-out-one cross validation to GCV in the Gaussian data case, we replace diagonal elements of certain matrices by $\frac{1}{n}$ times the trace. A numerical simulation with `data' Y_i, i = 1,2..., n generated from an hypothesized `true' f is used to compare the $\lambda$ chosen by minimizing this GACV score with the $\lambda$ chosen from two often used algorithms based on the generalized cross validation procedure (O'Sullivan {\em et al} 1986, Gu, 1990, 1992). In the examples here, the GACV estimate produces a better fit to the true f in terms of minimizing the Kullback-Leibler distance of the estimate of f from the true f. Figures suggest that the GACV may be an approximately unbiased estimate of the Kullback-Leibler distance of the estimate to the true f; however, a theoretical proof is yet to be found. The work of Wong (1992) suggests that an exact unbiased estimate does not exist in the {0,1} data case. The present work is related to Moody(1991), The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems, and Liu(199), Unbiased estimate of generalization error and model selection in neural network. University of Wisconsin-Madison Statistics Department TR 930 September, 1994 Keywords: Generalized Approximate Cross Validation, smoothing spline, penalized likelihood, generalized cross validation, Kullback-Leibler distance. Other papers of potential interest for supervised machine learning in the directory ftp.stat.wisc.edu/pub/wahba are in the files: (some previously announced) nonlin-learn.ps.gz ml-bib.ps.gz soft-class.ps.gz ssanova.ps.gz theses/ywang.thesis.README nips6.ps.gz tuning-nwp.ps.gz Department of Statistics, University of Wisconsin-Madison wahba at stat.wisc.edu xiang at stat.wisc.edu PS to Geoff Hinton- The database is a great idea!!  From risto at cs.utexas.edu Thu Sep 22 01:26:07 1994 From: risto at cs.utexas.edu (Risto Miikkulainen) Date: Thu, 22 Sep 94 00:26:07 -0500 Subject: Connectionist NLP software available Message-ID: <9409220526.AA08371@cascais.cs.utexas.edu> The code and data for the DISCERN story processing model and the SPEC sentence understanding model are now available from our ftp/WWW site. These software packages are not general-purpose neural network simulators, but cleaned-up code for specific connectionist NLP models. I am making them available because they contain implementations of general ideas for debugging complex neural network systems through an X11 graphics interface, for analyzing the performance of the models, and for running experiments with such models. I've tried to pay special attention to making the code portable across platforms (it is based on ANSI/K&R C and X11R5 with Athena Widgets), and to making the software easy to modify and build on.
I hope the software can serve as a starting point for other experiments in connectionist NLP --- where building simulation programs from scratch turned out to be a heck of a lot of work :-) To get a quick feel of what these programs are like (without having to port them), take a look at the DISCERN demo under WWW at http://www.cs.utexas.edu/~nn/discern.html or by "telnet cascais.cs.utexas.edu 30000". The demo runs remotely on cascais.cs.utexas.edu, with a display on your X11 screen. -- Risto Miikkulainen Here's a short description of the software:

DISCERN
-------
DISCERN is a large modular system for processing script-based stories. It includes component models for lexical processing, episodic memory, and parsing, paraphrasing and question answering. The main reference is Miikkulainen (1993): "Subsymbolic Natural Language Processing: An integrated Model of Scripts, Lexicon and Memory", Cambridge, MA: MIT Press (a precis of this book was recently posted in the connectionists list). The DISCERN software consists of four components: (1) the full DISCERN performance system (i.e. the "demo" program), (2) training the simple recurrent and feedforward backprop networks for parsing, generating, and question answering, (3) training the lexicon feature maps and Hebbian associative connections, and (4) training the hierarchical feature maps of the episodic memory. All these are available by anonymous ftp from cs.utexas.edu:pub/neural-nets/discern, or in WWW, from http://www.cs.utexas.edu/~nn.

SPEC
----
SPEC is a model of parsing sentences with embedded relative clauses. It consists of the parser (a simple recurrent network), the stack (a RAAM network) and the segmenter (feedforward) networks that are trained together and generalize to novel sentence structures. For a quick description of the model, see our paper in AAAI-94, or a longer tech. report version from our ftp/www site. The SPEC software and papers are available by anonymous ftp from cs.utexas.edu:pub/neural-nets/spec or in WWW, from http://www.cs.utexas.edu/~nn.  From mbrown at aero.soton.ac.uk Thu Sep 22 17:31:33 1994 From: mbrown at aero.soton.ac.uk (Martin Brown) Date: Thu, 22 Sep 94 17:31:33 BST Subject: Post Doc and Post Grad jobs Message-ID: <23530.9409221631@aero.soton.ac.uk> Could you please post the following announcement on your list. Department of Aeronautics and Astronautics Post-doctoral Research Fellow and PhD Research Studentship in Intelligent (Neurofuzzy based) State Estimation for Dynamic Processes Applications for the Post-doctoral Research Fellowship are invited from researchers nearing or having completed PhDs in NeuroFuzzy Systems, Probability and Stochastic Processes, Advanced Control Theory or Approximation Theory, for a 4 year EPSRC research grant on the development of a new theory of self-organising neuro-fuzzy state estimators. Salary will be within the ACRA range 13941-20953 UK pounds. Applications are also invited for a Research Studentship, tenable over 3 years, supported by a DRA funded research grant with particular reference to intelligent estimation and guidance problems. Support will be at the EPSRC studentship level with additional allowances (all educational fees are paid and a living allowance is provided). Informal enquiries for both posts and applications for the Research Studentship only should be directed to Professor C.J.
Harris, Advanced Systems Research Group, Department of Aeronautics and Astronautics, University of Southampton, England, Tel (0703) 592353, Fax (0703) 593058, email cjh at aero.soton.ac.uk Application forms for the Post-doctoral Research Assistant may be obtained from the Personnel Department (R/78), University of Southampton, Highfield, Southampton, SO17 1BJ, UK. Telephone (0703) 592421. The closing date for the RETURN of completed application forms is 31st October, quoting reference number R/78.  From prechelt at ira.uka.de Thu Sep 22 12:50:54 1994 From: prechelt at ira.uka.de (Lutz Prechelt) Date: Thu, 22 Sep 1994 18:50:54 +0200 Subject: machine learning databases In-Reply-To: Your message of "Wed, 21 Sep 1994 17:29:05 +0300." <9409211429.AA22262@vergina.eng.auth.gr> Message-ID: <"irafs2.ira.791:22.09.94.16.51.25"@ira.uka.de>
> what i have not been able to find
> is some standard datasets to be used for benchmarks (e.g. sonar,
> radar signals, EEG, ECG data and so on). i mean the raw time series,
> not some kind of preprocessed data where the whole time series is
> reduced to a static features vector. does anyone know of any such data
> in the public domain (except speech data)?
From andy at twinearth.wustl.edu Thu Sep 22 20:14:54 1994 From: andy at twinearth.wustl.edu (Andy Clark) Date: Thu, 22 Sep 94 19:14:54 CDT Subject: Philosophy/Neuroscience/Psychology technical reports Message-ID: <9409230014.AA07897@twinearth.wustl.edu> This is to announce a new archive of technical reports for the Philosophy/Neuroscience/Psychology program at Washington University. Reports are available in a number of areas of cognitive science and philosophy of mind. Reports are stored in various formats -- most are in ASCII or compressed Postscript. The former (files ending in .ascii) can be retrieved and read or printed directly; the latter (files ending in .ps.Z) must be retrieved, uncompressed (with "uncompress "), and printed on a laser printer. Some papers are stored in both formats for convenience. To retrieve a report -- e.g., clark.folk-psychology.ascii: 1. ftp thalamus.wustl.edu 2. Login as "anonymous" or "ftp" 3. Password: 4. cd pub/pnp/papers 5. get clark.folk-psychology.ascii An index of papers in the archive so far is included below. The list will be expanding frequently; an updated index can be found in the file INDEX in pub/pnp/papers. Andy Clark (andy at twinearth.wustl.edu) Philosophy/Neuroscience/Psychology Program Washington University St Louis, MO 63130. ---------------------------------------------------------------------------- Archive of Philosophy/Neuroscience/Psychology technical reports (Washington University), on thalamus.wustl.edu in pub/pnp/papers. 94-01 kwasny.sraam.ps Tail-Recursive Distributed Representations and Simple Recurrent Networks Stan C. Kwasny & Barry L. Kalman, Department of Computer Science Representation poses important challenges to connectionism. The ability to structurally compose representations is critical in achieving the capability considered necessary for cognition. We provide a technique for mapping any ordered collection (forest) of hierarchical structures (trees) into a set of training patterns which can be used effectively in training a simple recurrent network (SRN) to develop RAAM-style distributed representations. The advantages of our technique are three-fold: first, the fixed-valence restriction on RAAM structures is removed; second, representations correspond to ordered forests of labeled trees, thereby extending what can be represented; third, training can be accomplished with an auto-associative SRN, making training much more straightforward.
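For readers who have not met the architecture named in report 94-01: the training loop of a simple recurrent network is short enough to sketch. The Python/NumPy fragment below is a minimal illustrative SRN -- it is not the authors' code, and the sequence, layer sizes, and learning rate are all invented -- trained Elman-style, i.e. the context layer is a frozen copy of the previous hidden state, so no gradient flows back through time.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                       # one-hot symbols; hidden/context size (invented)
Wx = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden weights
Wc = rng.normal(0, 0.5, (n_hid, n_hid))  # context -> hidden weights
V  = rng.normal(0, 0.5, (n_in, n_hid))   # hidden -> output weights
lr = 0.1

seq = [0, 1, 2, 3] * 50                  # toy periodic symbol sequence

def one_hot(i):
    v = np.zeros(n_in)
    v[i] = 1.0
    return v

for epoch in range(200):
    h = np.zeros(n_hid)                  # context starts empty on each pass
    for t in range(len(seq) - 1):
        x, target = one_hot(seq[t]), one_hot(seq[t + 1])
        c = h                            # copy hidden -> context (no BPTT)
        h = np.tanh(Wx @ x + Wc @ c)
        e = V @ h - target               # squared-error gradient at the output
        dh = (V.T @ e) * (1 - h * h)     # backprop one step through the tanh
        V  -= lr * np.outer(e, h)
        Wx -= lr * np.outer(dh, x)
        Wc -= lr * np.outer(dh, c)

h = np.zeros(n_hid)
for s in (0, 1, 2):                      # feed 0,1,2; the net should predict 3
    h = np.tanh(Wx @ one_hot(s) + Wc @ h)
print((V @ h).argmax())

The report's technique wraps an auto-associative loop of this general kind around tree traversals to build RAAM-style representations; the sketch shows only the underlying SRN machinery.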
94-02 kalman.trainrec.ps TRAINREC: A System for Training Feedforward & Simple Recurrent Networks Efficiently and Correctly Barry L. Kalman & Stan C. Kwasny, Department of Computer Science TRAINREC is a system for training feedforward and recurrent neural networks that incorporates several ideas. It uses the more efficient conjugate gradient method; we derive a new error function with several desirable properties; we argue for skip (shortcut) connections where appropriate, and for a sigmoidal activation function yielding values in the [-1,1] interval; and we use singular value decomposition to avoid overanalyzing the input feature space. We have made an effort to discover methods that work in both theory and practice, motivated by considerations ranging from efficiency of training to accuracy of the result. 94-03 chalmers.computation.{ps,ascii} A Computational Foundation for the Study of Cognition David J. Chalmers, Department of Philosophy Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? This article develops a systematic framework that addresses all of these questions. A careful analysis of computation and its relation to cognition suggests that the ambitions of artificial intelligence and the centrality of computation in cognitive science are justified. 94-04 chalmers.content.{ps,ascii} The Components of Content David J. Chalmers, Department of Philosophy. Are the contents of thought in the head of the thinker, in the environment, or in a combination of the two? In this paper I develop a two-dimensional intensional account of content, decomposing a thought's content into its notional content -- which is internal to the thinker -- and its relational content. Notional content is fully semantic, having truth-conditions of its own; and notional content is what governs the dynamics and rationality of thought. I apply this two-dimensional picture to dissolve a number of problems in the philosophy of mind and language. 94-05 chalmers.bibliography.{intro,1,2,3,4,5} Contemporary Philosophy of Mind: An Annotated Bibliography David J. Chalmers, Department of Philosophy This is an annotated bibliography of work in the philosophy of mind from the last thirty years. There are about 1700 entries, divided into five parts: (1) Consciousness and Qualia; (2) Mental Content; (3) Psychophysical Relations and Psychological Explanation; (4) Philosophy of Artificial Intelligence; (5) Miscellaneous Topics. 94-06 clark.trading-spaces.ascii Trading Spaces: Computation, Representation, and the Limits of Learning Andy Clark (Dept. of Philosophy) and Chris Thornton (U. of Sussex) We argue that existing learning algorithms are often poorly equipped to solve problems involving a certain type of (important and widespread) statistical regularity, which we call `type-2 regularity'.
The solution is to trade achieved representation against computational search. We investigate several ways in which such a trade-off may be pursued. The upshot is that various kinds of incremental learning (e.g. Elman 1991) emerge not as peripheral but as absolutely central and essential features of successful cognition. 94-07 clark.folk-psychology.ascii Dealing in Futures: Folk Psychology and the Role of Representations in Cognitive Science. Andy Clark, Department of Philosophy. The paper investigates the Churchlands' long-standing critique of folk psychology. I argue that the scientific advances upon which the Churchlands so ably draw will have their most profound impact NOT upon our assessment of the folk discourse but upon our conception of the role of representations in the explanatory projects of cognitive science. Representation, I suggest, will indeed be reconceived, somewhat marginalized, and will emerge as at best one of the objects of cognitive scientific explanation rather than as its foundation. 94-08 clark.autonomous-agents.ascii Autonomous Agents and Real-Time Success: Some Foundational Issues. Andy Clark, Department of Philosophy Recent developments in situated robotics and related fields claim to challenge the pervasive role of internal representations in the production of intelligent behavior. Such arguments, I show, are both suggestive and misguided. The true lesson, I argue, lies in forcing a much-needed re-evaluation of the notion of internal representation itself. The present paper begins the task of developing such a notion by pursuing two concrete examples of fully situated yet representation-dependent cognition: animate vision and motor emulation. 94-09 mccann.gold-market.ps A Neural Network Model for the Gold Market Peter J. McCann and Barry L. Kalman, Department of Computer Science A neural network trend predictor for the gold bullion market is presented. A simple recurrent neural network was trained to recognize turning points in the gold market based on a to-date history of ten market indices. The network was tested on data that was held back from training, and a significant amount of predictive power was observed. The turning point predictions can be used to time transactions in the gold bullion and gold mining company stock index markets to obtain a significant paper profit during the test period. 94-10 chalmers.consciousness.{ps,ascii} Facing Up to the Problem of Consciousness David Chalmers, Department of Philosophy The problems of consciousness fall into two classes: the easy problems and the hard problems. The easy problems include reportability, accessibility, the difference between wakefulness and sleep, and the like; the hard problem is subjective experience. Most recent work attacks only the easy problems. I illustrate this with a critique, and argue that reductive approaches to the hard problem must inevitably fail. I outline a new framework for the nonreductive explanation of consciousness, in terms of basic principles connecting physical processes to experience. Using this framework, I sketch a candidate theory of conscious experience, revolving around principles of structural coherence and organizational invariance, and a double-aspect theory of information. 
94-11 chalmers.qualia.{ps,ascii} Absent Qualia, Fading Qualia, Dancing Qualia David Chalmers, Department of Philosophy In this paper I use thought-experiments to argue that systems with the same fine-grained functional organization will have the same conscious experiences, no matter what they are made out of. These thought-experiments appeal to scenarios involving gradual replacement of neurons by silicon chips. I argue against the "absent qualia" hypothesis by using a "fading qualia" scenario, and against the "inverted qualia" hypothesis by using a "dancing qualia" scenario. The conclusion is that absent qualia and inverted qualia are logically possible but empirically impossible, leading to a kind of nonreductive functionalism. 94-12 christiansen.language-learning.ps Language Learning in the Full or, Why the Stimulus Might Not be So Poor, After All Morten Christiansen, Department of Philosophy Language acquisition is often said to require a massive innate body of language-specific knowledge in order to overcome the poverty of the stimulus. In this picture, language learning merely implies setting a number of parameters in an internal Universal Grammar. But is the primary linguistic evidence really so poor that it warrants such an extreme nativism? Is there no room for a more empiricist approach to language acquisition? In this paper, I argue against the extreme nativist position, discussing recent results from psycholinguistics and connectionist research on natural language. 94-13 christiansen.nlp-recursion.ps Natural Language Recursion and Recurrent Neural Networks Morten Christiansen (Dept. of Philosophy) and Nick Chater (U. of Oxford) The recursive structure of natural language was one of the principal sources of difficulty for associationist models of linguistic behaviour. More recently, it has become a focus in the debate on neural network models of language, which many regard as the natural heirs of the associationist legacy. Can neural networks learn to handle recursive structures? If not, many would argue, neural networks can be ruled out as viable models of language processing. In this paper, we reconsider the implications of natural language recursion for neural network models, and present simulations in which recurrent neural networks are trained on simple recursive structures. We suggest implications for theories of human language processing. 94-14 bechtel.embodied.ps Embodied Connectionism William Bechtel, Department of Philosophy Classical approaches to modeling cognition have treated the cognitive system as disembodied. This, I argue, is a consequence of a common strategy of theory development in which researchers attempt to decompose functions into component functions and assign these component functions to parts of systems. But one might question the decomposition that segregates a cognitive system from its environment. I suggest how connectionist modeling may facilitate the development of cognitive models that do not so isolate cognitive systems from their environment. While such an approach may seem natural for lower cognitive activities, such as navigating an environment, I suggest that the approach be pursued with higher cognitive functions as well, using natural deduction as the example. 94-15 bechtel.consciousness.ps Consciousness: Perspectives from Symbolic and Connectionist AI William Bechtel, Department of Philosophy While consciousness has not been a major concern of most AI researchers, some have tried to explore how computational models might explain it.
I explore how far computational models might go in explaining consciousness, focusing on three aspects of conscious mental states: their intrinsic intentionality, a subject's awareness of the contents of these intentional states, and the distinctive qualitative character of these states. I describe and evaluate strategies for developing connectionist systems that satisfy these aspects of consciousness. My assessment is that connectionist models can do quite well with regard to the first two components, but face far greater difficulties in explaining the qualitative character of conscious states. 94-16 bechtel.language.ps What Knowledge Must be in the Head in Order to Acquire Language? William Bechtel, Department of Philosophy A common strategy in theorizing about the linguistic capacity has localized it within the mind of the language user. A result has been that the mind itself is often taken to operate according to linguistic principles. I propose an approach to modeling linguistic capacity which distributes that capacity over a cognitive system and external symbols. This lowers the requirements that must be satisfied by the cognitive system itself. For example, productivity and systematicity might not result from processing characteristics of the cognitive system, but from the system's interaction with external symbols which themselves adhere to syntactic principles. To indicate how a relatively weak processing system can exhibit linguistic competence, I describe a recent model by St. John and McClelland. 94-17 bechtel.deduction.ps Natural Deduction in Connectionist Systems William Bechtel, Department of Philosophy I have argued elsewhere that the systematicity of human thought might be explained as a result of the fact that we have learned natural languages which are themselves syntactically structured. According to this view, linguistic symbols are external to the cognitive system and what the system must learn to do is produce and comprehend such symbols. In this paper I pursue that idea by arguing that ability in natural deduction itself may rely on pattern recognition abilities that enable us to operate on external symbols rather than encodings of rules that might be applied to internal representations. To support this suggestion, I present a series of experiments with connectionist networks that have been trained to construct simple natural deductions in sentential logic.  From jfj at limsi.fr Fri Sep 23 13:04:29 1994 From: jfj at limsi.fr (Jean-Francois Jodouin) Date: Fri, 23 Sep 94 19:04:29 +0200 Subject: Book Announcement Message-ID: <9409231704.AA04189@m79.limsi.fr> This is a book announcement for two French-language books on neural networks. Though there is a large and ever-growing number of such texts in English, connectionists required to teach neural networks in French have difficulty finding the equivalent in their language. Because there is to my knowledge no French connectionist forum, I have permitted myself to post this message here, despite the fact that only a small proportion of you are francophone. The following two books are now available : Reseaux de neurones : Principes et definitions, Jean-Francois Jodouin editions Hermes, Paris, 140p, 1994. Reseaux neuromimetiques : Modeles et applications, Jean-Francois Jodouin editions Hermes, Paris, 260p, 1994. The first is a short presentation intended for the general public. The second is an introductory textbook intended for graduate students. An abstract and table of contents for both books follow.
===========================================
Reseaux de neurones : Principes et definitions
Jean-Francois Jodouin
editions Hermes, Paris, 140p, 1994
------------------------------------------------------------------------------
Abstract
Neural networks take their inspiration from neurobiology. They have since absorbed notions from several other disciplines: their original designers would be hard pressed to recognize them, given the great diversity of forms they take today. At once objects of study and practical tools, networks have a role to play in a rapidly growing number of fields, in research as much as in industry. This book is addressed to students, teachers and professionals with a general scientific background. Its aim is to present, in didactic form, the general principles underlying work on neural networks, principles which are often only sketched in more specialized texts. It is thus a useful first overview of the study of neural networks, and an introduction to more thorough presentations, in particular to "Reseaux neuromimetiques : Modeles et applications" (Hermes, 1994).
------------------------------------------------------------------------------
Table of Contents
Foreword. Introduction.
1. The neural network 1.1. General overview 1.1.1. The building blocks of a neural network 1.1.2. The architecture of a neural network 1.1.3. Choosing the synaptic weights 1.1.4. The network's inputs and outputs 1.1.5. Discussion: local rules and emergent behavior 1.2. Applications of neural networks 1.3. Bibliography
2. Neurons and activation 2.1. A formal model of the neuron 2.1.1. The McCulloch and Pitts model 2.1.2. A more general model 2.1.3. Formal neuron and biological neuron 2.2. The activation function of a formal neuron 2.2.1. Characteristics of activation functions 2.2.2. Some example functions 2.3. The propagation of activation 2.3.1. Known phenomena of activation propagation 2.3.1.1. Feature detection 2.3.1.2. Associative memory 2.3.1.3. Constraint satisfaction 2.3.2. Dynamical behaviors of networks 2.3.2.1. Convergence to a fixed point 2.3.2.2. More complex dynamical behaviors 2.3.3. Example: a small network 2.4. Computational capabilities of a neural network 2.4.1. Some preliminary simplifications 2.4.2. The Perceptron and its limits 2.4.3. The importance of hidden neurons 2.4.4. Nonlinear networks 2.4.5. Formal languages and neural networks 2.5. Example: the McClelland and Rumelhart model 2.5.1. Recognizing letters in context 2.5.2. A model of reading 2.5.3. Behavior of the model 2.5.4. Competition and cooperation 2.6. Bibliography
3. Learning and error 3.1. The learning protocol 3.1.1. The training procedure 3.1.2. The cross-validation procedure 3.2. Three types of learning 3.2.1. Unsupervised learning 3.2.2. Supervised learning 3.2.3. Semi-supervised learning 3.2.4. Learning problems 3.3. Example: learning the "Impair-5" function 3.4. Bibliography
4. Environment and coding 4.1. General considerations 4.2. Some example codings 4.2.1. Local coding 4.2.2. Semi-distributed coding 4.2.2.1. Coding meaning: micro-features 4.2.2.2. Coding position: receptive fields and coarse coding 4.2.2.3. Coding numerical values: the thermometer 4.2.3. Distributed coding and internal representations
4.3. A network's internal coding 4.3.1. The two-families experiment 4.3.2. Active and passive sentences 4.3.3. Discussion 4.4. Some examples 4.5. Bibliography
General bibliography. Index.
===========================================
Reseaux neuromimetiques : Modeles et applications
Jean-Francois Jodouin
editions Hermes, Paris, 260p, 1994
------------------------------------------------------------------------------
Abstract
Neural networks are mathematical and computational models: assemblies of computing units, called formal neurons, whose original inspiration was a model of the human nerve cell. This neurobiological heritage forms an important component of the subject, and the concern to maintain a certain correspondence with the human nervous system has driven, and continues to drive, a substantial part of the research in this field. Despite this heritage, most of today's work takes as its object the network of formal neurons itself rather than its neurobiological correlate. Viewed as computing systems, neural networks possess several properties that make them interesting from a theoretical point of view and very useful in practice. It is this approach - more technical than biological - that is taken in the present text. This book is not a popularization; it is intended to serve as a starting point and reference text for anyone who wishes to use or study neural networks. It is the logical sequel to "Reseaux de neurones : Principes et definitions" (Hermes, 1994).
------------------------------------------------------------------------------
Table of Contents
Foreword. Introduction.
Part I: Layered networks
1. Layered networks: a first look 1.1. Two-layer networks 1.1.1. The Perceptron 1.1.2. The Adaline 1.2. The Multi-Layer Perceptron 1.2.1. Structure of the network 1.2.2. Learning 1.2.3. A worked example 1.2.4. Example: NET-talk 1.3. Bibliography
2. From theory to practice 2.1. Building the corpus 2.2. The architecture of the network 2.2.1. Cascade correlation 2.2.2. The neurosurgeon 2.3. Local minima 2.4. The parameters of the network 2.4.1. The activation function 2.4.2. The error function 2.4.3. The learning rate 2.4.3.1. Adding a momentum term 2.4.3.2. Quickprop 2.4.3.3. Delta-bar-delta 2.5. Discussion 2.6. Bibliography
3. The Multi-Layer Perceptron and statistics 3.1. Supervised learning and statistical estimation 3.1.1. Error measures and the quality of the estimate 3.1.2. Bias and variance 3.2. Generalization, corpus size and number of neurons 3.2.1. Uniform convergence of MLPs 3.2.2. Generalization and the Vapnik-Chervonenkis dimension 3.3. Bibliography
4. RBF networks 4.1. The RBF method 4.2. Architecture and operation of the RBF network 4.3. Learning 4.4. Discussion 4.5. Bibliography
5. Computer code: Adaline and MLP 5.1. A first definition 5.1.1. Modes of use of a network 5.1.2. Network, neuron and link 5.2. On computational efficiency 5.2.1. Representing the network architecture 5.2.2. Homogeneous arrays 5.3. Main definitions 5.3.1. Network, neuron and link 5.3.2. Layers of neurons 5.3.3. Patterns and corpora 5.4. The Adaline 5.4.1. Using the network 5.4.1.1. Computing the activation 5.4.1.2. Main function: the query 5.4.2. Learning and error 5.4.2.1. Computing the error and correcting the links 5.4.2.2. Main function 5.5. The Multi-Layer Perceptron 5.5.1. Mode of use 5.5.1.1. Propagating activation 5.5.1.2. Main function 5.5.2. Learning mode 5.5.2.1. Backpropagation
Part II: Recurrent networks
6. Competitive networks 6.1. A simple competitive network 6.1.1. Competitive architecture 6.1.1.1. Fixed points and connectivity constraints 6.1.1.2. Operation of the network 6.1.2. Competitive learning 6.1.2.1. A competitive learning rule 6.1.2.2. A small example 6.1.2.3. Discussion 6.2. Learning Vector Quantization 6.2.1. Vector Quantization 6.2.2. Learning Vector Quantization 6.2.3. Discussion 6.3. Topological maps 6.3.1. Architecture 6.3.2. Learning 6.3.3. Discussion 6.4. Adaptive Resonance Theory 6.4.1. Preliminary: a study of neuron activation functions 6.4.1.1. Additive functions 6.4.1.2. Multiplicative functions 6.4.2. The dynamics of ART-1 6.4.2.1. The noise-saturation dilemma 6.4.2.2. The coding problem 6.4.2.3. Detecting errors 6.4.2.4. The two-thirds rule 6.4.2.5. The orienting system 6.4.3. Learning 6.4.3.1. The stability-plasticity dilemma 6.4.3.2. The learning mechanism of ART-1 6.4.4. Using ART-1 6.4.5. Example 6.4.6. Discussion 6.5. Bibliography
7. Networks with symmetric connections 7.1. The Hopfield network 7.1.1. The network and its behavior 7.1.1.1. The Ising spin-glass model 7.1.1.2. Example: the two teams 7.1.1.3. Architecture of the Hopfield network 7.1.1.4. Energy and convergence to a stable state 7.1.2. Learning 7.1.2.1. Memory capacity and catastrophic forgetting 7.1.3. Discussion 7.2. The Boltzmann machine 7.2.1. Architecture of the Boltzmann machine 7.2.2. Simulated annealing 7.2.3. Learning 7.2.4. Discussion 7.3. Bibliography
8. Networks and time 8.1. Non-recurrent solutions 8.1.1. Temporal windows 8.1.2. Time-Delay Neural Networks 8.2. Layered recurrent networks 8.2.1. The Jordan network 8.2.2. The "Simple Recurrent Network" 8.3. Backpropagation in recurrent networks 8.3.1. Backpropagation through time 8.3.2. Real-time learning 8.4. Bibliography
9. Computer code: recurrent networks 9.1. The Kohonen topological map 9.1.1. Use 9.1.2. Learning 9.2. The Hopfield network 9.2.1. Use 9.2.2. Learning 9.3. Backpropagation through time 9.3.1. Use 9.3.2. Learning
General bibliography. Index.
===========================================
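Chapter 5 of the second volume is devoted to computer code for the Adaline and the MLP. Jodouin's code is not reproduced in this announcement; purely to give a flavor of what such a chapter covers, here is a minimal illustrative Adaline in Python/NumPy, trained with the Widrow-Hoff (LMS) rule on an invented linearly separable problem.

import numpy as np

rng = np.random.default_rng(1)

# Invented 2-class problem: label points by which side of x1 + x2 = 1 they fall on.
X = rng.uniform(-1, 2, (200, 2))
t = np.where(X.sum(axis=1) > 1.0, 1.0, -1.0)   # targets in {-1, +1}

w, b, lr = np.zeros(2), 0.0, 0.01

# Widrow-Hoff (LMS): follow the gradient of the squared error between
# the *linear* output and the target; thresholding is applied only
# afterwards, when the trained unit is used as a classifier.
for epoch in range(50):
    for x, target in zip(X, t):
        err = target - (w @ x + b)
        w += lr * err * x
        b += lr * err

pred = np.where(X @ w + b > 0, 1.0, -1.0)
print("training accuracy:", (pred == t).mean())

The design choice this makes visible -- learning on the linear output, deciding on the thresholded one -- is precisely what separates the Adaline from the Perceptron of section 1.1.1.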
From back at elec.uq.oz.au Tue Sep 27 14:17:17 1994 From: back at elec.uq.oz.au (Andrew Back) Date: Tue, 27 Sep 94 13:17:17 EST Subject: NIPS*94 Workshop CFP Message-ID: <9409270317.AA16221@s1.elec.uq.oz.au> We are organizing the following workshop for NIPS*94. The aim of this one-day workshop will be to discuss issues of nonlinear signal processing using neural network models, specifically those which are in-between the usual MLP and fully recurrent network architectures. The intended audience is those applying neural networks to signal processing problems, and active signal processing (but not necessarily neural network) researchers. There is room for contributed talks. If you would like to give a short paper or a brief presentation of your work, please send a few details to the organizers. Presenters of short papers will be allocated 15-20 mins. For those who would like to contribute on an informal basis, yet be able to have the opportunity to speak, 5 mins `soap-box' sessions will be available.
Ample time will be allocated for informal discussion. We would also like to hear from others interested in attending the workshop. Andrew Back Department of Electrical and Computer Engineering University of Queensland Brisbane. 4072 AUSTRALIA Ph: +61 7 365 3965 Fax: +61 7 365 4999 email: back at elec.uq.oz.au ============================================================================= C A L L F O R P A P E R S NIPS*94 Workshop: "Neural Network Architectures with Time Delay Connections for Nonlinear Signal Processing: Theory and Applications" Organizers: Andrew D. Back and Eric A. Wan Nonlinear signal processing methods using neural network models are a topic of recent interest in various application areas. Recurrent networks offer a potentially rich and powerful modelling capability, though they may suffer from some problems in training. On the other hand, simpler network structures have been proposed which are feedforward overall but draw more strongly on linear signal processing approaches. The resulting structures can be viewed as nonlinear generalizations of linear filters (a toy sketch of one such structure appears after the list of issues below). It is clear that relatively little is known about how to understand the various architectures in a signal processing context. For the most part we are able to do simulations, but proving the capabilities of the network architectures is much more difficult. It appears that they offer a convenient NARMA modelling framework, but many aspects of the models are yet to be considered. This workshop is aimed at addressing some of the issues that arise when adopting a nonlinear signal processing methodology, in particular, those employing some form of time delay connections and limited recurrent connections. Issues that may be of interest are: * Representational capabilities for various network structures * Methods of analysis - what methods from linear signal processing theory can be extended to these neural network architectures ? What methods from analysis of nonlinear systems can be used for these networks ? * What advantages are there in using locally recurrent connections within networks as opposed to globally recurrent connections ? (e.g. Frasconi-Gori-Soda networks vs Williams-Zipser/Robinson networks). * Learning algorithms - what difficulties are encountered and what methods can be applied to overcome them? * What types of problems or data are the different models best suited for? * Given a set of time series data, what model should be selected on the basis of the observed data ? What tests can be applied to a particular data set to determine what type of model should be used ? * What issues need to be resolved in order for these models to be confidently applied to a given problem/data-set ? * Successes and failures of networks on practical problems and data sets. * Comparisons between the methods and results that have been established by various researchers. * Theoretical issues which still need to be addressed, (e.g. approximation capabilities, convergence, stability, and computational complexity) * New network architectures
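The toy sketch promised above: a tapped delay line feeding a small one-hidden-layer net, i.e. a nonlinear FIR filter, which is about the simplest member of the family of structures the workshop addresses. This is illustrative Python/NumPy only, not any particular published architecture; the series, the sizes, and the learning rate are invented.

import numpy as np

rng = np.random.default_rng(2)
K, H = 5, 10                         # delay-line length and hidden units (invented)
W1 = rng.normal(0, 0.3, (H, K))
b1 = np.zeros(H)
w2 = rng.normal(0, 0.3, H)
lr = 0.01

# Toy series: a noisy sine wave. The net maps the K previous samples
# to a prediction of the next one -- a nonlinear FIR filter.
s = np.sin(np.arange(500) * 0.2) + 0.05 * rng.normal(size=500)

for epoch in range(30):
    for t in range(K, len(s)):
        x = s[t - K:t]               # tapped delay line
        h = np.tanh(W1 @ x + b1)
        err = w2 @ h - s[t]
        dh = (w2 * err) * (1 - h * h)
        w2 -= lr * err * h           # one gradient step on squared error
        W1 -= lr * np.outer(dh, x)
        b1 -= lr * dh

print("next-sample prediction:", w2 @ np.tanh(W1 @ s[-K:] + b1))

Moving the delay lines from the input into or around the network itself is what generates the locally and globally recurrent variants the issues above refer to.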
Aim: --- At the workshop we intend to consolidate some of the theoretical and practical results of current research. We also hope to identify open issues which should be addressed in on-going work. Format: ------ This will be a one-day workshop, with a number of short presentations of either 15 mins or 5 mins (`soap-box' sessions). In this way a number of people will be able to speak in some detail, while others can simply raise issues they feel are important. As an outcome of the workshop it is intended that there should be a report summarizing where we are at in this research area, and goals for future work. Contributions will be welcomed and details of proposed talks should be sent to Andrew Back as soon as possible. Andrew D. Back*, Eric A. Wan** *Department of Electrical and Computer Engineering, University of Queensland, St. Lucia, Queensland 4072. Australia. Ph: +61 7 365 3965 Fax: +61 7 365 4999 back at elec.uq.oz.au **Department of Electrical Engineering and Applied Physics Oregon Graduate Institute of Science and Technology P.O. Box 91000, Portland, Oregon, 97291. USA. Ph: (503) 690 1164 Fax: (503) 690 1406 ericwan at eeap.ogi.edu  From mpp at watson.ibm.com Tue Sep 27 18:15:37 1994 From: mpp at watson.ibm.com (Michael Perrone) Date: Tue, 27 Sep 1994 18:15:37 -0400 (EDT) Subject: CFP: NIPS*94 Postconference Workshop Message-ID: <9409272215.AA21936@austen.watson.ibm.com> CALL FOR PARTICIPANTS --------------------- Below is the preliminary summary of the NIPS*94 Postconference Workshop on algorithms for high dimensional spaces. If you would like to contribute a talk, please send a title and abstract to the organizer. ======================================================================== Title ----- Algorithms for High Dimensional Space: What Works and Why Description ----------- The performance of certain regression algorithms is robust as the dimensionality of the data and parameter spaces is increased. Even in cases where the number of parameters is much larger than the number of data, performance is often robust. The central question of the workshop will be: What makes these techniques robust in high dimensions? High dimensional spaces have (asymptotic) properties that are nonintuitive when considered from the perspective of the two- and three-dimensional cases generally used for visual examples (a small numerical illustration follows this announcement). Because of this fact, algorithm design in high dimensional spaces cannot always be done by simple analogy with low dimensional problems. For example, a radial basis network is intuitively appealing for a one dimensional regression task, but it must be used with care for a 100 dimensional space and it may not work at all in 1000. Thus having a familiarity with the nonintuitive properties of high dimensional space may lead to the development of better algorithms. We will discuss the issues that surround successful nonlinear regression estimation in high dimensional spaces and what we can do to incorporate these techniques into other algorithms and apply them in real-world tasks. The workshop will cover topics including the Curse of Dimensionality, Projection Pursuit, techniques for dimensionality reduction, feature extraction techniques, statistical properties of high dimensional spaces, clustering in high dimensions and all of the tricks that go along with these techniques to make them work. The workshop is targeted at researchers interested in both theoretical and practical aspects of improving network performance. Length ------ One day. Format ------ Morning: 3 half hour talks each followed by 10 minutes of questions. Afternoon: 3 half hour talks each followed by 10 minutes of questions. Organizer --------- Michael P. Perrone mpp at watson.ibm.com IBM - Thomas J. Watson Research Center P.O. Box 704 / Rm J1-K08 Yorktown Heights, NY 10598
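A quick numerical check of the "nonintuitive properties" remark in the announcement above (a toy sketch; the sample sizes are arbitrary): the Python/NumPy snippet estimates, for growing dimension d, the share of a unit hypercube's volume lying inside its inscribed ball, and the relative spread of pairwise distances between random points.

import numpy as np

rng = np.random.default_rng(3)

for d in (2, 10, 100):
    pts = rng.uniform(-0.5, 0.5, (20000, d))
    # Monte Carlo estimate: fraction of the cube inside the inscribed ball.
    inside = (np.linalg.norm(pts, axis=1) <= 0.5).mean()
    # Relative spread (std/mean) of pairwise distances among 200 points.
    sample = pts[:200]
    dist = np.linalg.norm(sample[:, None, :] - sample[None, :, :], axis=-1)
    dist = dist[np.triu_indices(len(sample), k=1)]
    print(d, round(inside, 4), round(dist.std() / dist.mean(), 3))

In 2 dimensions the ball holds about 78% of the cube; by d = 10 its share is roughly a quarter of one percent, and by d = 100 it is essentially zero, while pairwise distances concentrate ever more tightly around their mean -- the regime in which low-dimensional intuitions about neighborhoods and radial basis placement break down.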
From iehava at ie.technion.ac.il Wed Sep 28 12:49:27 1994 From: iehava at ie.technion.ac.il (Hava Siegelmann) Date: Wed, 28 Sep 1994 18:49:27 +0200 Subject: nips recurrent nets workshop Message-ID: <199409281649.SAA29154@ie.technion.ac.il> Announcing: The NIPS Workshop on Recurrent Neural Networks =========================================================== Unlike feedforward-acyclic networks, recurrent nets contain feedback loops, and thus give rise to dynamical systems. Theoretically, recurrent networks are computationally very strong. However, their dynamics introduces difficulties for learning and convergence. This 2-day workshop will feature formal sessions, discussions, and a panel discussion aimed at understanding the dynamics, theoretical capabilities, and practical applicability of recurrent networks. The panel will focus on future directions of recurrent networks research. The schedule is almost finalized. If you feel you have something to contribute to the workshop - by talking, discussing, asking, or suggesting - please feel free to contact me. I will wait a few days for responses before announcing the final program. Sincerely, Hava (Eva) Siegelmann Assistant Professor Department of Information Systems Engineering School of Industrial Engineering Technion (Israel Institute of Technology)  From ethem at psyche.mit.edu Wed Sep 28 16:31:40 1994 From: ethem at psyche.mit.edu (Ethem Alpaydin) Date: Wed, 28 Sep 94 16:31:40 EDT Subject: Paper: Estimating Road Travel Distances Message-ID: <9409282031.AA01457@psyche.mit.edu> FTP-host : archive.cis.ohio-state.edu FTP-file : pub/neuroprose/alpaydin.road-distance.ps.Z (16 pages, 582,257 bytes compressed, 1,357,862 bytes uncompressed) Parametric Distance Metrics vs. Nonparametric Neural Networks for Estimating Road Travel Distances Ethem Alpaydin*, I. Kuban Altinel+, Necati Aras+ {alpaydin,altinel,arasn}@boun.edu.tr * Dept of Computer Engineering + Dept of Industrial Engineering Bogazici University TR-80815 Istanbul Turkey The actual distance between two cities is the length of the shortest road connecting them. Measuring and storing the actual distance between any two points of a region is often not feasible, and it is common practice to estimate it. The usual approach is to use theoretical distance metrics, which are parameterized functions of the coordinates of the points. We propose to use nonparametric approaches using neural networks for estimating actual distances. We consider multi-layer perceptrons trained with the back-propagation rule and regression neural networks implementing nonparametric regression using Gaussian kernels. We also consider training multiple estimators and combining them in a hybrid architecture using voting and stacking. In a real-world study using cities drawn from Turkey, we found that these approaches improve performance considerably. Estimating actual distances has many applications in location and distribution theory.  From jbarnden at crl.nmsu.edu Wed Sep 28 11:44:42 1994 From: jbarnden at crl.nmsu.edu (John Barnden) Date: Wed, 28 Sep 1994 09:44:42 -0600 Subject: 2 books on CONNECTIONISM & ANALOGY Message-ID: <199409281544.JAA10922@NMSU.Edu> *************************************************************** * This is to announce TWO NEW VOLUMES in the book series * * * * ``ADVANCES IN CONNECTIONIST AND NEURAL COMPUTATION THEORY'' * * * * published by Ablex (Norwood, NJ).
* *************************************************************** The new volumes, numbered 2 and 3, appeared this summer. They were originally to have been just one book, and they are to be regarded as companion volumes. They have the same introductory chapter (not listed below). They are about the application of connectionist and hybrid connectionist/symbolic techniques to analogy, reminding, case-based reasoning and metaphor. Please note that the volumes don't have the editors in the same order. VOLUME TWO: ``Analogical Connections'' -------------------------------------- Edited by: Keith J. Holyoak, University of California, Los Angeles John A. Barnden, New Mexico State University MAIN CONTENTS Part I: INTEGRATED MODELS OF ANALOGICAL THINKING 1. The Copycat Project: A Model of Mental Fluidity and Analogy-Making Douglas R. Hofstadter & Melanie Mitchell 2. Component Processes in Analogical Transfer: Mapping, Pattern Completion, and Adaptation Keith J. Holyoak, Laura R. Novick & Eric R. Melz 3. Integrating Analogy with Rules and Explanations Greg Nelson, Paul Thagard & Susan Hardy 4. A Hybrid Model of Continuous Analogical Reasoning Thomas C. Eskridge 5. A Hybrid Model of Reasoning by Analogy Boicho N. Kokinov Part II: SIMILARITY AND ANALOGICAL MAPPING 6. Similarity, Interactive Activation, and Mapping: An Overview Robert Goldstone & Douglas Medin 7. Connectionist Implications for Processing Capacity Limitations in Analogies Graeme S. Halford, William H. Wilson, Jian Guo, Ross W. Gayler, Janet Wiles & J.E.M. Stewart 8. Analogical Mapping by Dynamic Binding: Preliminary Investigations John E. Hummel, Bruce Burns & Keith J. Holyoak 9. Spatial Inclusion and Set Membership: A Case Study of Analogy at Work Keith Stenning & Jon Oberlander Published 1994/504 pages Cloth: 1-56750-039-0/$69.50 (*** $35.00 prepaid *** / no further discount applies) VOLUME THREE: ``ANALOGY, METAPHOR, AND REMINDING'' -------------------------------------------------- Edited by: John A. Barnden, New Mexico State University Keith J. Holyoak, University of California, Los Angeles MAIN CONTENTS 1. REMIND: Retrieval from Episodic Memory by Inferencing and Disambiguation Trent E. Lange & Charles M. Wharton 2. The Role of Goals in Retrieving Analogical Cases Colleen Seifert 3. A Case Study of Case Indexing: Designing Index Feature Sets to Suit Task Demands and Support Parallelism Eric A. Domeshek 4. The Case for Nonconnectionist Associative Retrieval in Case-Based Reasoning Systems Piero P. Bonissone, Lisa F. Rau & George Berg 5. What is Metaphor? George Lakoff 6. A Structured Connectionist Model of Figurative Adjective-Noun Combinations Susan H. Weber 7. Back-Propagation Representations for the Rule-Analogy Continuum: Pros and Cons Catherine Harris 8. On the Connectionist Implementation of Analogy and Working Memory Matching John A. Barnden Published 1994/392 pages Cloth: 1-56750-101-X/$69.50 (*** $35.00 prepaid *** / no further discount applies) ABLEX ORDER FORM Please enter my order for Advances in Connectionist and Neural Computation Theory _____ Volume Three: Analogy, Metaphor and Reminding Cloth: 1-56750-101-X/$69.50 ($35.00 prepaid) _____ Volume Two: Analogical Connections Cloth: 1-56750-039-0/$69.50 ($35.00 prepaid) _____ Volume One: High-Level Connectionist Models Cloth: 0-89391-687-0/$67.50 ($34.50 prepaid) *** If you're not prepaying, you can buy volumes 2 and 3 together for $105 instead of $139. *** (There is no additional discount for buying both volumes at the prepaid rate.) 
PAYMENT METHOD: [] Payment Enclosed [] VISA [] MasterCard Card #_______________________________ Exp. Date__________ Signature________________________________________________ Name_____________________________________________________ Address__________________________________________________ __________________________________________________________ City_______________________________State______ZIP_________ ORDERING INFORMATION All individual orders must be prepaid. Ablex will pay postage and handling charges for all prepaid orders placed within US and Canada. Payments must be made by check, money order, VISA or Mastercard in US currency only. Orders placed by libraries and universities with Ablex accounts will be invoiced. Initial orders must be prepaid, at which time an account will be established. Titles being considered by faculty members for course adoption may be ordered as examination copies for a 30-day period. Request must be made on letterhead stationery citing course name and enrollment. An invoice will accompany the shipment and will be cancelled upon adoption of at least ten copies or return of examination copies. ======================================================================== The two volumes described above form a natural sequel to the first volume of the series, namely: VOLUME ONE: ``HIGH-LEVEL CONNECTIONIST MODELS'' ----------------------------------------------- Edited by John A. Barnden Jordan B. Pollack MAIN CONTENTS 1. Introduction: Problems for High-Level Connectionism John A. Barnden & Jordan B. Pollack 2. Connectionism and Compositional Semantics David S. Touretzky 3. Symbolic NeuroEngineering for Natural Language Processing: A Multilevel Research Approach Michael G. Dyer 4. Schema Recognition for Text Understanding: An Analog Semantic Feature Approach Lawrence A. Bookman & Richard Alterman 5. A Context-Free Connectionist Parser Which Is Not Connectionist, But Then It Is Not Really Context-Free Either Eugene Charniak & Eugene Santos, Jr. 6. Symbolic/Subsymbolic Sentence Analysis: Exploiting the Best of Two Worlds Wendy G. Lehnert 7. Developing Hybrid Symbolic/Connectionist Models James Hendler 8. Encoding Complex Symbolic Data Structures with Some Unusual Connectionist Techniques John A. Barnden 9. Finding a Maximally Plausible Model of an Inconsistent Theory Mark Derthick 10. The Relevance of Connectionism to AI: A Representation and Reasoning Perspective Lokendra Shastri 11. Steps Toward Knowledge-Intensive Connectionist Learning Joachim Diederich 12. Learning Simple Arithmetic Procedures Garrison W. Cottrell & Fu-Sheng Tsung 13. The Similarity Between Connectionist and Other Parallel Computation Models Jiawei Hong & Xiaonan Tan 14. Complex Features in Planning and Understanding: Problems and Opportunities for Connectionism Lawrence Birnbaum 15. Conclusion Jordan B. Pollack & John A. Barnden Published 1991/400 pages Cloth: 0-89391-687-0/$67.50 (*** $34.50 prepaid ***/no further discount applies)  From degaris at hip.atr.co.jp Wed Sep 28 19:14:57 1994 From: degaris at hip.atr.co.jp (Hugo de Garis) Date: Wed, 28 Sep 94 19:14:57 JST Subject: PerAc94 Conference Report, Hugo de Garis, ATR, Kyoto Message-ID: <9409281014.AA18563@gauss> PerAc94 Conference Report, Hugo de Garis, ATR, Kyoto Tom Ray and I were invited by the Federal Polytechnic of Lausanne, Switzerland, to give talks on our ATR work at an ALifey-type tutorial, which lasted two days (Monday and Tuesday, Sept 5 and 6). This was followed by a 3 day conference called "PerAc94", i.e.
from Perception to Action, whose organiser asked me to write up and distribute a conference report over the relevant email networks. Following the PerAc conference was a 20-man brainstorming session of invited experts on the future of the subject. I was originally invited to this, so I had expected to be able to report to you on the main issues, but when I asked the organiser (a rather prickly person) if it would be ok if I attended only half of it (for reasons explained below), I got disinvited. A formal write-up of this brainstorming session will appear in the "Robotics and Autonomous Systems Journal" (Elsevier) in the next few months. The following Monday was a seminar on the work of Professor Mange and his group on adaptable hardware using FPGAs (field programmable gate arrays), which I believe to be of great importance. This report will focus on the second and fourth of these events. PerAc94 Conference This conference was a Swiss equivalent of the PPSN (Parallel Problem Solving from Nature), SAB (Simulation of Adaptive Behavior), and ALife type of conferences, where the key words characterizing the conference can be taken from the session headings, namely, "collective intelligence, simple behavioral robots, genetic algorithms, active perception, building blocks and architectures for designing intelligent systems, complex architectures to control autonomous robots, and cognition". When I agreed to speak at the tutorial it was for two reasons. One was to talk with Professor Mange and his group, who are pioneering a new field that I will talk about later in this report, and the other was to be in Switzerland, for me the most beautiful country in the world. When I had to leave it was with a real sigh. The PerAc conference itself I expected to be rather "small beer", but in fact the world was there. Admittedly only about 100 people were present the day I counted, but they were a specialised audience. If you are working in the field of autonomous robots and their related problems of control, and in particular the problems involved in converting perception into action, then getting hold of the proceedings of this conference is a must. Happily, that will not be difficult, because, thanks to characteristic Swiss efficiency, an IEEE (Computer Society Press) book was ready and distributed to conference attendees. The book is entitled "Proceedings From Perception to Action Conference, Lausanne, Switzerland, September 7-9, 1994", IEEE Computer Society Press, 1994, ISBN 0-8186-6482-7, edited by P. Gaussier and J-D. Nicoud. The 3 day conference was small enough not to need split sessions, so some 30-odd talks and a similar number of posters were available for all to hear and see. Unfortunately that was not true in my case. The great beauty of Switzerland attracted my wife to take a train down from Brussels (where she was catching up on old friends), and she was on a tight schedule. Similarly, a close Japanese friend of mine came up from Geneva on an equally tight schedule, and both of them wanted to yodel in the mountains with me. I ended up attending only 1 of the 3 conference days. Hence this report will be more limited and objective (based on the book) than subjective (based on impressions from someone who attended all the sessions).
Conference Highlights The hero of the conference in my view was not a human being, but an (ice hockey) puck-shaped robot called "KHEPERA", developed by the Swiss, which I found myself in conversation referring to as the "puck robot". This flat, round, battery- or mains-driven, wheeled robot (containing infra-red and other sensors) measures about 5 cm in diameter and about 2 cm in height, and is a very versatile little tool to perform "evolutionary robotics" experiments on your desk. Many of the papers at the conference used this "puck robot" to obtain their results. These researchers were performing real-time (i.e. mechanical) fitness measurements of the robot's motions controlled by the neural net systems they were evolving. It is far more practical to do such a thing on a puck robot than on a Unimate (of car-assembling size). If you are interested in buying one of these puck robots, then try emailing Mr. Gomi, the manufacturer and distributor of Brooks' insect robots and others. He told me he has already sold some 40 of these little robots. Gomi is a Japanese Canadian, who speaks excellent English. His email address is 71021.2755 at CompuServe.Com. These little puck robots, crammed with electronics and sensors, cannot be too expensive. At the conference itself there was a fair sprinkling of big names from around the world, e.g. Pfeifer (Zurich Univ, Switzerland), the keynote speaker, made the claim that too much research effort is going into the unidirectional approach, i.e. from perception to action, whereas in the biological world, the reverse is also true, namely that what one perceives also depends on one's actions. He hoped future research in the field would be more bidirectional. Deneubourg (Brussels Univ, Belgium) spoke about the self-organization of transport systems in ants and robots. Fukuda (Nagoya Univ, Japan) showed how his cellular robots could connect together to show group behavior. McFarland (Oxford Univ, England) used his expertise in the ethological field to advise roboticists what qualities autonomous robots need, particularly in regard to motivation and cognition. Cruse (Bielefeld Univ, Germany) presented a new neural net controller for a 6-legged walking system which reacts adaptively to the environment. Ferrell (MIT, USA) is Brooks' chief grad student who is helping him coordinate research on MIT's "COG" (torso) robot. She introduced her "Hannibal" hexapod robot (similar to Genghis) and put it through its paces. Steels (Brussels Univ, Belgium) spoke on a mathematical analysis of behavior systems where the main idea is that a behaving system is in fact a dynamical system whose state reaches equilibrium once the behavior it controls is attained. The 3 Musketeers (i.e. Husbands, Harvey, Cliff, Sussex Univ, England) presented their latest results in automated mechanical fitness measurement in neural net based evolutionary robotics. Evolving a single neural net module takes them about a day or so. Interestingly they had an M.Sc. student who simulated the puck robot and evolved the simulation at much higher speeds. The resulting simulated elite chromosome was then downloaded to the KHEPERA, which then performed in the real world as predicted by the simulation. Interesting. This "fast simulation vs. slow reality" issue I will pursue later. Nolfi et al (NRC, Rome, Italy) spoke on plasticity (i.e. learning) in phenotypic neural nets.
The new idea is that the mapping from GA chromosome to neural net does not occur instantaneously, but takes place over the lifetime of the individual and is sensitive to the environment. Taylor (Kings College, England) (ab?)used his brow-beating style and actor's resonating voice to explain to his audience his theories of the relational mind, i.e. "consciousness arises due to the active comparison of ongoing brain activity stemming from external inputs in the various modalities, with somewhat similar past activity stored in semantic and episodic memory". Taylor, an ex-theoretical physicist, with already an encyclopaedic knowledge of his new field, was probably the smartest man at the conference. Unfortunately, he succumbs too easily to the temptation to let everyone know it. The considerable respect for his abilities that everyone has would only be enhanced if he were more low-key. His constant, rather condescending interjections got on one's nerves after a while. Comments The conference felt like a mini SAB conference, especially judging by the overlap of the conference committees of SAB94 and this one. As mentioned above, the KHEPERA puck robot featured strongly, but to my mind, I found myself becoming irritated (beyond the usual 7-hour jet lag) as speaker after speaker presented his (nearly always his) results using the puck. My gut feeling is that this is not the way to go. My dream is to build artificial brains. Dave Cliff wants to build artificial brains. Lots of people want to build artificial brains, but to do so will probably require the evolution of millions of neural modules. Simple math says that with a one-day evolution per neural net module (e.g. using the Sussex "gantry robot"), one will die before building even the brain's retina. Somehow, the evolutionary process so vital to this field needs to be speeded up significantly. I believe the way to go is to evolve neural circuits in hardware at electronic speeds. I hope this was one of the issues discussed at the post-conference brainstorming session by the 20-odd invited experts. If not, I would be surprised. It seems to me to be a critical issue. Keep an eye out for the Robotics and Autonomous Systems Journal for a write-up of the conclusions coming out of this brainstorming session. *** Professor Mange's "Embryological Electronics" The first I heard about Professor Mange's work (pronounced as in the French word "mange(r)" = to eat) was when my colleague at ATR, Tom Ray, passed me a copy of a paper he had to review for ECAL93 (Euro Conf on ALife). Tom was really impressed. So was I. Mange's dream is to achieve von Neumann's universal calculator and universal constructor, using his specially designed FPGAs (field programmable gate arrays). These FPGA "cells" are relatively simple and can be made in large numbers on wafer-scale silicon slices. They have the ability to reproduce the circuit of any computable function, and of self-repair. In its present design, when a cell becomes dysfunctional, its row and column in the grid of cells are switched off. I asked Professor Mange if it would be possible to switch off only the faulty cell. Yes, he said. This blew my mind, because the consequences of this alone are profound, let alone those following from the achievement of Mange's dream. One of the really big problems in VLSI is "yield", i.e. the percentage of chips that are fault-free. This factor limits the size of chips made from wafers.
If a faulty piece of a circuit can be switched off and its function somehow "moved" elsewhere on the wafer using reproduction, then circuits of wafer size become possible. The yield becomes irrelevant. Wow! Mange et al's papers are now starting to appear regularly in the GA/NN/ALife/... conference circuit. In the few pages of these papers, it is difficult for a non-specialist in digital electronic hardware to follow his work easily. I made this point to him, saying that John Holland had only himself to blame that his invention of genetic algorithms took 20 years to become hot, because his book was so unreadable. GAs really only became popular after Goldberg wrote a text that everyone could understand. Mange got the point and will either write a solid (100+ page) technical report spelling out all the details so people can understand and copy, or he will write a book based on a course he will be teaching at the EPFL. Mange's colleague, Sanchez, wants to use these FPGA chips to perform evolution in hardware, perhaps by using Koza's Genetic Programming approach, and binary decision trees. Any Boolean circuit can be expressed in terms of binary decision diagrams and hence binary trees, which in turn can be implemented on Mange's FPGAs; since binary trees can be evolved with Koza's GP, it follows that any Boolean circuit should be evolvable on Mange's FPGAs. If so, then this will be the first example (as far as I know) of what I call "intrinsic evolvable hardware". Maybe I should explain. Evolvable Hardware (EHW) Since 1992, I have been pushing the idea of evolvable hardware. Evolution and (what I call) "evolutionary engineering" (or applied evolution) is one of the major research themes in computer science in the 90s, but it is virtually all software-based. My dream is to see a machine (i.e. hardware) which evolves. The basic idea is to conceive of the software bit string that is used to configure PLDs (programmable logic devices) as a genetic algorithm chromosome, and thus evolve the configuration (architecture) of the circuit at electronic speeds (a toy software illustration of this loop appears just below, before the list of groups). Unfortunately, PLDs are not designed with evolution in mind. If they are not RAM-based, they are not "infinitely" rewritable, so they need to be RAM-based. Personally I use cellular automata machine hardware to evolve CA-based neural nets, but this is cheating in a way, because strictly speaking, the hardware of the CA machine remains fixed in its architecture. So far, no one has evolved circuits directly in hardware. If I am wrong here, please email me. I would love to be shown up on this point. There are, however, several groups of people around the world who claim to be doing evolvable hardware. I will list them and their email addresses below. These groups are doing what I call "extrinsic evolvable hardware", i.e. they take a software-simulated description of a hardware circuit, evolve it in software, take the elite chromosome (circuit description) and download it into the rewritable hardware. That is, the hardware gets written to just once. The evolution occurs outside (extrinsic to) the circuit, by using the software-simulated description. Intrinsic evolvable hardware would rewrite (reconfigure) a hardware circuit for each chromosome for each generation of the genetic algorithm. The evolution occurs inside (intrinsic to) the circuit. The Mange team in Switzerland hopes to do this, and if so, they will open up a new era in electronics and evolutionary engineering. Circuits will be grown/evolved rather than designed. Hence they can become more complex and hopefully more performant.
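To make the extrinsic loop concrete, here is a deliberately toy sketch in Python -- mine, not de Garis's or any listed group's code, with every detail invented for illustration. The chromosome is the configuration bit string of a simulated 3-input lookup table, fitness is agreement with a target Boolean function over all inputs, and only the final elite chromosome would be "downloaded" into real hardware.

import random

random.seed(4)

N_IN = 3                       # simulated logic block: a 3-input lookup table
TABLE = 2 ** N_IN              # 8 configuration bits = the GA chromosome

def target(a, b, c):           # the function we want the block to implement: parity
    return a ^ b ^ c

def fitness(chrom):
    # Simulate the configured circuit over all 8 input combinations.
    score = 0
    for i in range(TABLE):
        a, b, c = (i >> 2) & 1, (i >> 1) & 1, i & 1
        score += chrom[i] == target(a, b, c)
    return score

pop = [[random.randint(0, 1) for _ in range(TABLE)] for _ in range(20)]

for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TABLE:
        break
    parents = pop[:10]                       # truncation selection
    children = []
    while len(children) < 10:
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, TABLE)     # one-point crossover
        child = p1[:cut] + p2[cut:]
        child[random.randrange(TABLE)] ^= 1  # one-bit mutation
        children.append(child)
    pop = parents + children

elite = max(pop, key=fitness)
print("elite configuration:", elite, "fitness:", fitness(elite), "/", TABLE)
# Extrinsic EHW: this bit string would now be written once into a
# reconfigurable device. Intrinsic EHW would instead evaluate every
# chromosome of every generation in the hardware itself.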
The following groups of people around the world that I know of are doing (extrinsic) evolvable hardware.

Mange and Sanchez (EPFL, Lausanne, Switzerland) - described above. mange at di.epfl.ch sanchez at di.epfl.ch

Hemmi (ATR, Kyoto, Japan) - Hemmi is a colleague in the same group as Tom Ray and myself. He uses an HDL (hardware description language) in the form of trees, to which he applies Koza's GP approach. He has evolved digital counter circuits and the like. hemmi at hip.atr.co.jp

Higuchi (ETL, Tsukuba, Japan) - Higuchi is an ex-colleague of mine from my postdoc days at ETL (Electro Technical Lab). He used a simulated GAL PLD chip description which he evolved in software to perform various digital functions. Lately he has been working on a hardware design to perform genetic algorithms - a GA machine. higuchi at etl.go.jp

Cliff (Sussex Univ, England) - Dave Cliff has a new grad student, a VLSI designer, who wants to evolve hardware. I met Dave at this conference and asked him whether his student would be doing intrinsic or extrinsic EHW. Dave said that the student is trying to make a deal with a Silicon Valley company in California. If the deal comes through, then he wants to do intrinsic EHW; otherwise he will do extrinsic EHW like everybody else. davec at cogs.susx.ac.uk

If there are other people doing EHW, please let me know. I believe the era of EHW and "Darwin Machines" is upon us, and should be vigorously supported. We will never have truly performant artificial nervous systems and artificial brains until we can overcome the "slowness of evolution" problem. Evolution at electronic speeds, i.e. EHW, is the key breakthrough here.

Cheers, Hugo de Garis.

Dr. Hugo de Garis, Brain Builder Group, Evolutionary Systems Department. ATR Human Information Processing Research Laboratories, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kansai Science City, Kyoto-fu, 619-02, Japan. tel. (+81) (0)7749 5 1079, fax. (+81) (0)7749 5 1008, email. degaris at hip.atr.co.jp

From gluck at pavlov.rutgers.edu Fri Sep 30 09:50:20 1994 From: gluck at pavlov.rutgers.edu (Mark Gluck) Date: Fri, 30 Sep 94 09:50:20 EDT Subject: Graduate Study at Rutgers/NJ in Cognitive & Computational Neuroscience Message-ID: <9409301350.AA09658@james.rutgers.edu> To: Students considering graduate study in Cognitive Neuroscience and/or Computational Neuroscience. The Center for Molecular & Behavioral Neuroscience at Rutgers University (the State Univ. of New Jersey) is a leading center for research in Cognitive Neuroscience and Computational Neuroscience. The Center offers a Ph.D. in "Behavioral & Neural Sciences" and emphasizes integration between levels of analysis (i.e., behavioral & biological) and across traditional interdisciplinary boundaries. The Center is one of the leading places in this country for the study of the neural bases of higher-cortical function (cognition) in humans, including labs devoted to memory, language, speech, motor-control, and vision. We also have a strong computational program for students interested in pursuing neural-network models as a tool for understanding psychological and biological issues. The Rutgers-Newark campus (as distinct from the New Brunswick campus) is 20 minutes outside New York City, and close to other major university research centers at NYU, Columbia, and Princeton, as well as major industrial research labs in Northern NJ, including ATT, Bellcore, Siemens, and NEC.
My own research concerns the neural and behavioral bases of learning and memory. Our work is focused on the development of computational models of cortico-hippocampal function, and the testing of these models in both animal and human amnesic patient populations. If you are interested in applying to our graduate program, or possibly applying to one of the labs as a paid research assistant or programmer, please email me at gluck at pavlov.rutgers.edu and I will be happy to send you info on our research and graduate program, as well as set up a possible visit to the Neuroscience Center here at Rutgers-Newark. - Mark Gluck ______________________________________________________________________ Dr. Mark A. Gluck Center for Molecular & Behavioral Neuroscience Rutgers University 197 University Ave. Newark, New Jersey 07102 Phone: (201) 648-1080 (Ext. 3221) Fax: (201) 648-1272 Email: gluck at pavlov.rutgers.edu
From bernabe at cnm.us.es Fri Sep 2 06:44:45 1994 From: bernabe at cnm.us.es (Bernabe Linares B.) Date: Fri, 2 Sep 94 12:44:45 +0200 Subject: DISCUSSION: applications for real-time clustering chips Message-ID: <9409021044.AA08951@sparc1.cnm.us.es> This message is in hopes of starting a discussion about the usefulness of real-time clustering chips. In our institution we are developing a real-time clustering chip for possible application in speech recognition, where, as the speaker changes, a certain adaptation needs to be performed. I have identified in the literature several ways of doing this (see refs. [1]-[7] below). Our chip is able to cluster 100-binary-pixel input patterns into up to 18 different categories. By assembling an NxM array of these chips, Nx100-binary-pixel patterns can be clustered into up to Mx18 categories. Patterns are classified (and the corresponding clusters re-adapted) in less than 1 microsecond.
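Since ref. [8] below describes the chip as an ART1 architecture, a minimal software sketch of ART1-style clustering of binary patterns may help fix ideas. This is the textbook serial algorithm with illustrative parameter values, not the chip's analog current-mode circuit:

def overlap(a, b):
    # Size of the intersection of two binary vectors.
    return sum(x & y for x, y in zip(a, b))

def art1(patterns, max_categories=18, rho=0.7, beta=0.5):
    # Cluster binary vectors ART1-style; rho is the vigilance parameter.
    templates, labels = [], []
    for p in patterns:
        chosen = None
        # Rank committed categories by the ART1 choice function.
        order = sorted(range(len(templates)), reverse=True,
                       key=lambda j: overlap(p, templates[j])
                                     / (beta + sum(templates[j])))
        for j in order:
            if overlap(p, templates[j]) / max(sum(p), 1) >= rho:
                # Vigilance passed: learn by intersecting the template.
                templates[j] = [a & b for a, b in zip(p, templates[j])]
                chosen = j
                break
        if chosen is None and len(templates) < max_categories:
            templates.append(list(p))   # commit a new category
            chosen = len(templates) - 1
        labels.append(chosen)           # None = category capacity exceeded
    return templates, labels

The chip performs the analogous category search and template update in parallel analog hardware, which is what makes the sub-microsecond classification time possible.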
The discussion I would like to start is whether or not this kind of chip is useful for this or other applications. If you have experience with any application in which it would be desirable to have a real-time clustering chip, please enter the discussion. If possible, indicate the type of patterns that would need to be real-time-clustered (binary, digital, or analog), the speed at which this clustering would be desirable, and any other requirements you would have. Please describe briefly your application and provide some typical references (if possible). I am not aware of other chips of this nature reported in the literature. For a brief description of our chip please see ref. [8] (a copy is available in the neuroprose archive as pub/neuroprose/bernabe.art1chip.ps.Z). Dr. Bernabe Linares-Barranco National Microelectronics Center (CNM) Ed. CICA, Av. Reina Mercedes s/n 41012 Sevilla, SPAIN Phone: 34-5-4239923 FAX: 34-5-4624506 E-mail: bernabe at cnm.us.es

References:
[1] J. B. Hampshire and A. H. Waibel, "The Meta-Pi network: connectionist rapid adaptation for high-performance multi-speaker phoneme recognition," ICASSP'90, pp. 165-168, vol. 1, 1990.
[2] Y. Gang and J. P. Haton, "Signal-to-string conversion based on high likelihood regions using embedded dynamic processing," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 13, no. 3, pp. 297-302, March 1991.
[3] G. Rigoll, "Baseform adaptation for large vocabulary hidden Markov model based speech recognition systems," ICASSP'90, pp. 141-144, vol. 1.
[4] S. Cox, "Speaker adaptation in speech recognition using linear regression techniques," Electronics Letters, vol. 28, no. 22, pp. 2093-2094, 22 Oct. 1992.
[5] P. G. Bamberg and M. A. Mandel, "Adaptable phoneme-based models for large-vocabulary speech recognition," Speech Communication, vol. 10, no. 5-6, pp. 437-451, Dec. 1991.
[6] X. Huang and K. F. Lee, "On speaker-independent, speaker-dependent, and speaker-adaptive speech recognition," IEEE Trans. on Speech and Audio Processing, vol. 1, no. 2, pp. 150-157, April 1993.
[7] M. Witbrock and P. Haffner, "Rapid connectionist speaker adaptation," ICASSP'92, pp. 453-456, vol. 1.
[8] T. Serrano, B. Linares-Barranco, and J. L. Huertas, "A CMOS VLSI Analog Current-Mode High-Speed ART1 Chip," Proc. of the 1994 IEEE Int. Conference on Neural Networks, Orlando, Florida, July 1994, pp. 1912-1916, vol. 3.

From ersoy at ecn.purdue.edu Fri Sep 2 13:51:55 1994 From: ersoy at ecn.purdue.edu (Okan K Ersoy) Date: Fri, 2 Sep 1994 12:51:55 -0500 Subject: Call for papers: European Conference on Circuit Theory and Design Message-ID: <199409021751.MAA08841@dynamo.ecn.purdue.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 6630 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/98e78b94/attachment-0001.ksh

From henders at linc.cis.upenn.edu Sat Sep 3 11:03:53 1994 From: henders at linc.cis.upenn.edu (Jamie Henderson) Date: Sat, 3 Sep 1994 11:03:53 -0400 Subject: dissertation available on connectionist NLP/temporal synchrony Message-ID: <199409031503.LAA04825@linc.cis.upenn.edu> FTP-host: linc.cis.upenn.edu FTP-filename: pub/henderson/thesis.ps.Z FTP-filename: pub/henderson/chapter1.ps.Z FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/Thesis/henderson.thesis.ps.Z The following dissertation on the feasibility and implications of using temporal synchrony variable binding to do syntactic parsing is available by anonymous ftp or as a technical report.
The complete dissertation (196 pages) can be ftp'ed from archive.cis.ohio-state.edu in pub/neuroprose/Thesis/ as henderson.thesis.ps.Z, or from linc.cis.upenn.edu in pub/henderson/ as thesis.ps.Z. The first chapter of this document is a summary (42 pages), and it can be ftp'ed from linc.cis.upenn.edu in pub/henderson/ as chapter1.ps.Z. The complete dissertation is also available as a technical report (IRCS Report #94-12) by contacting Jodi Kerper (jbkerper at central.cis.upenn.edu, (215) 898-0362). Comments welcome! - Jamie Henderson University of Pennsylvania

--------

Description Based Parsing in a Connectionist Network

James Henderson Mitchell Marcus (Supervisor)

Recent developments in connectionist architectures for symbolic computation have made it possible to investigate parsing in a connectionist network while still taking advantage of the large body of work on parsing in symbolic frameworks. This dissertation investigates syntactic parsing in the temporal synchrony variable binding model of symbolic computation in a connectionist network. This computational architecture solves the basic problem with previous connectionist architectures, while keeping their advantages. However, the architecture does have some limitations, which impose computational constraints on parsing in this architecture. This dissertation argues that, despite these constraints, the architecture is computationally adequate for syntactic parsing, and that these constraints make significant linguistic predictions. To make these arguments, the nature of the architecture's limitations is first characterized as a set of constraints on symbolic computation. This allows the feasibility and implications of parsing in the architecture to be investigated at the same level of abstraction as virtually all other investigations of syntactic parsing. Then a specific parsing model is developed and implemented in the architecture. The extensive use of partial descriptions of phrase structure trees is crucial to the ability of this model to recover the syntactic structure of sentences within the constraints. Finally, this parsing model is tested on those phenomena which are of particular concern given the constraints, and on an approximately unbiased sample of sentences to check for unforeseen difficulties. The results show that this connectionist architecture is powerful enough for syntactic parsing. They also show that some linguistic phenomena are predicted by the limitations of this architecture. In particular, explanations are given for many cases of unacceptable center embedding, and for several significant constraints on long distance dependencies. These results give evidence for the cognitive significance of this computational architecture and parsing model. This work also shows how the advantages of both connectionist and symbolic techniques can be unified in natural language processing applications. By analyzing how low level biological and computational considerations influence higher level processing, this work has furthered our understanding of the nature of language and how it can be efficiently and effectively processed.
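For readers who have not seen temporal synchrony variable binding, here is a toy sketch of the binding mechanism itself (in the spirit of the Shastri and Ajjanagadde style of architecture this model builds on, not of Henderson's parser; all names are invented). Each entity owns a phase within an oscillatory cycle, a role is bound to an entity by making the role's unit fire in that entity's phase, and the small fixed number of phases per cycle is one source of the computational constraints discussed in the abstract:

PHASES_PER_CYCLE = 5   # bound on distinct entities active at once

class SynchronyNet:
    def __init__(self):
        self.entity_phase = {}   # entity -> phase slot in the cycle
        self.role_phase = {}     # role unit -> phase it fires in

    def introduce(self, entity):
        if len(self.entity_phase) >= PHASES_PER_CYCLE:
            raise RuntimeError("out of phases: too many active entities")
        self.entity_phase[entity] = len(self.entity_phase)

    def bind(self, role, entity):
        # The role unit is driven to fire in synchrony with the entity.
        self.role_phase[role] = self.entity_phase[entity]

    def bound_entity(self, role):
        # A binding is read off as shared phase, not a stored pointer.
        phase = self.role_phase[role]
        return next(e for e, p in self.entity_phase.items() if p == phase)

net = SynchronyNet()
net.introduce("John"); net.introduce("book")
net.bind("give-agent", "John"); net.bind("give-object", "book")
print(net.bound_entity("give-agent"))   # -> John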
From crg at ida.his.se Mon Sep 5 06:22:51 1994 From: crg at ida.his.se (Connectionist) Date: Mon, 5 Sep 94 12:22:51 +0200 Subject: SCC-95 call for papers: Deadline Oct 1st! Message-ID: <9409051022.AA13464@mhost.ida.his.se> ------------------------------------------------------------- DEADLINE EXTENDED TO OCTOBER 1ST, 1994! ------------------------------------------------------------- THE SECOND SWEDISH CONFERENCE ON CONNECTIONISM The Connectionist Research Group University of Skovde, SWEDEN March 2-4, 1995 in Skovde, Sweden CALL FOR PAPERS SPEAKERS Michael Mozer University of Colorado, USA Ronan Reilly University College Dublin, Ireland Paul Smolensky University of Colorado, USA David Touretzky Carnegie Mellon University, USA This list is under completion. PROGRAM COMMITTEE Jim Bower California Inst. of Technology, USA Harald Brandt Ellemtel, Sweden Ron Chrisley University of Sussex, UK Gary Cottrell University of California, San Diego, USA Georg Dorffner University of Vienna, Austria Tim van Gelder Australian National University, Australia Agneta Gulz University of Skovde, Sweden Olle Gallmo Uppsala University, Sweden Tommy Garling Goteborg University, Sweden Dan Hammerstrom Adaptive Solutions Inc., USA Jim Hendler University of Maryland, USA Erland Hjelmquist Goteborg University, Sweden Anders Lansner Royal Inst. of Techn., Stockholm, Sweden Reiner Lenz Linkoping University, Sweden Ajit Narayanan University of Exeter, UK Jordan Pollack Ohio State University, USA Noel Sharkey University of Sheffield, UK Bertil Svensson Chalmers Inst. of Technology, Sweden Tere Vaden University of Tampere, Finland SCOPE OF THE CONFERENCE Understanding neural information processing properties characterizes the field of connectionism, also known as Artificial Neural Networks (ANN). The rapid growth, expansion and great popularity of connectionism are motivated by this new way of approaching and understanding the problems of artificial intelligence, and by its applicability in many real-world applications. There are a number of subfields of connectionism, among which we distinguish the following. The importance of a "Theory of connectionism" cannot be overstressed. The interest in theoretical analysis of neuronal models, and in the complex dynamics of network architectures, grows rapidly. It is often argued that abstract neural network models are best understood by analysing their computational properties with respect to their biological counterparts. A clear theoretical approach to developing neural models also provides insight into the dynamics, learning, functionality and probabilities of different connectionist networks. "Cognitive connectionism" bridges the gap between the theory of connectionism and cognitive science by modelling higher-order brain functions from psychology using methods offered by connectionist models. The findings of this field are often evaluated by their neuropsychological validity and not by their functional applicability. Sometimes the field of connectionism is referred to as the "new AI". Its applicability in AI has spawned a belief that AI will benefit from a good understanding of neural information processing capabilities. The subfield "Connectionism and artificial intelligence" is also concerned with the distinction between connectionist and symbolic representations. The wide applicability and problem-solving abilities of neural networks are exposed in "Real-world computing". Robotics, vision, speech and neural hardware are some of the topics in this field. "The philosophy of connectionism" is concerned with such diverse questions as the mind-body problem and the relations between distributed representations, their semantics and their implications for intelligent behaviour. Experimental studies in "Neurobiology" have implications for the validity and design of new, artificial neural architectures.
This branch of connectionism addresses topics such as self-organisation, modelling of cortex, and associative memory models. A number of internationally renowned keynote speakers will be invited to give plenary talks on the subjects listed above. GUIDELINES FOR PAPER SUBMISSIONS Instructions for submission of manuscripts: Papers may be submitted, in three (3) copies, to one of the following sessions. ~ Theory of connectionism ~ Cognitive connectionism ~ Connectionism and artificial intelligence ~ Real-world computing ~ The philosophy of connectionism ~ Neurobiology A note should state the principal author and email address (if any). It should also indicate which session the paper is submitted to. Length: Papers must be a maximum of ten (10) pages long (including figures and references); the text area should be 6.5 inches by 9 inches (including footnotes but excluding page numbers), in a 12-point font. Template and style files conforming to these specifications, for several text formatting programs, will be available to authors of accepted papers. Deadline: Papers must be received by Saturday, October 1st, 1994 to ensure reviewing. All submitted papers will be reviewed by members of the program committee on the basis of technical quality, research significance, novelty and clarity. The principal author will be notified of acceptance no later than Tuesday, November 1st, 1994. Proceedings: All accepted papers will appear in the conference proceedings. CONFERENCE CHAIRS Lars Niklasson, Mikael Boden lars.niklasson at ida.his.se mikael.boden at ida.his.se PLEASE ADDRESS ALL CORRESPONDENCE TO: "SCC-95" The Connectionist Research Group University of Skovde P.O. Box 408 541 28 Skovde, SWEDEN E-mail: crg at ida.his.se Tel. +46 (0)500 464600 Fax. +46 (0)500 464725 From adriaan at phil.ruu.nl Tue Sep 6 09:25:27 1994 From: adriaan at phil.ruu.nl (Adriaan Tijsseling) Date: Tue, 6 Sep 1994 15:25:27 +0200 Subject: mailing list for connectionist cognitive psychology Message-ID: <199409061325.PAA09157@laurel.stud.phil.ruu.nl> A new mailing list has been started. The main aim of this mailing list is to inform about connectionist research in cognitive psychology. The mailing list is primarily intended for discussion of issues relating to cognitive psychology and connectionism, and the dissemination of information directly relevant to researchers in the fields of cognitive science. Examples of such information are announcements of new tech reports, dissertations, theses, conferences, or seminars, and information about articles, research, or bibliographic issues. Requests to have a name added to the list, and similar administrative matters, should be sent to the following address: cogpsy-request at phil.ruu.nl Contributions can be sent to: cogpsy at phil.ruu.nl -- Adriaan Tijsseling, Department of Cognitive Artificial Intelligence and Department of Psychonomy, Utrecht University, The Netherlands.
From shultz at hebb.psych.mcgill.ca Tue Sep 6 09:46:10 1994 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Tue, 6 Sep 94 09:46:10 EDT Subject: No subject Message-ID: <9409061346.AA27883@hebb.psych.mcgill.ca> Subject: Paper available: A connectionist model of the development of velocity, time, and distance concepts Date: 6 Sept '94 FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/buckingham.velocity.ps.Z ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: A connectionist model of the development of velocity, time, and distance concepts (6 pages) David Buckingham & Thomas R. Shultz Department of Psychology McGill University 1205 Penfield Avenue Montreal, Quebec, Canada H3A 1B1 Abstract Connectionist simulations of children's acquisition of velocity (v), time (t), and distance (d) concepts were conducted using a generative algorithm, cascade-correlation (Fahlman & Lebiere, 1990). Diagnosis of network rules was consistent with the developmental course of children's concepts (Wilkening, 1981, 1982) and predicted some new stages as well. Networks integrated the defining dimensions of the concepts first by identity rules (e.g., v = d), then additive rules (e.g., v = d-t), and finally multiplicative rules (e.g., v = d/t). Psychological effects of differential memory demands were also simulated. It is argued that cascade-correlation implements an explicit mechanism of developmental change involving incremental learning and qualitative increases in representational power. The paper has been published in the 1994 Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society (pp. 72-77). Hillsdale, NJ: Lawrence Erlbaum. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get buckingham.velocity.ps.Z ftp> quit unix> uncompress buckingham.velocity.ps.Z Thanks to Jordan Pollack for maintaining this archive. Tom Shultz From paolo at mcculloch.ing.unifi.it Tue Sep 6 11:23:12 1994 From: paolo at mcculloch.ing.unifi.it (Paolo Frasconi) Date: Tue, 6 Sep 94 17:23:12 +0200 Subject: Announce: WWW Neural Networks site Message-ID: <9409061523.AA28857@mcculloch.ing.unifi.it> The following page has been established at Dipartimento di Sistemi e Informatica (University of Florence, Italy): URL: http://www-dsi.ing.unifi.it/neural/home.html The page contains a description of research currently undertaken by our group and links to a collection of our papers that can be retrieved as postscript files. Links to other similar services around the world are also provided. --- Paolo Frasconi Universita' di Firenze Dipartimento di Sistemi tel: +39 (55) 479-6361 e Informatica fax: +39 (55) 479-6363 Via di Santa Marta 3 50139 Firenze (Italy) http://www-dsi.ing.unifi.it/~paolo/ From shawn_mikiten at biad23.uthscsa.edu Wed Sep 7 15:32:23 1994 From: shawn_mikiten at biad23.uthscsa.edu (shawn mikiten) Date: 7 Sep 94 15:32:23 U Subject: Brain Mapping Conference po Message-ID: Brain Mapping Conference post The upcoming Brain Mapping Database Conference on December 4 & 5 will be in San Antonio, TX.
Anyone involved in, or interested in, developing databases in brain mapping and/or behavior is welcome to apply. If you have access to WWW, the URL is: http://biad38.uthscsa.edu/brainmap/brainmap94.html ============================================== BrainMap '94 Conference Database Development in Brain and Behavior Session Topics - Community Databases - Anatomical Spaces - Local and Emerging Databases - Database Federation Featured Databases - BrainMap - Genesis - Brain Browser - NeuroDatabase - CHILDES - SP Map Speakers - Floyd Bloom The Scripps Research Institute La Jolla, CA - Fred Bookstein University of Michigan Ann Arbor, MI - James Bower Caltech Pasadena, CA - George Carman Salk Institute, Vision Center Lab San Diego, CA - Verne Caviness Harvard University, Massachusetts General Hospital Boston, MA - Anders Dale University of California at San Diego La Jolla, CA - Hunter Downs University of Texas Health Science Center at San Antonio Research Imaging Center San Antonio, TX - Heather Drury Washington University School of Medicine St. Louis, MO - Alan Evans Montreal Neurological Institute Montreal Quebec, Canada - Chris Fields Institute for Genomic Research Gaithersburg, MD - Peter Fox University of Texas Health Science Center at San Antonio Research Imaging Center San Antonio, TX - Jack Lancaster University of Texas Health Science Center at San Antonio Research Imaging Center San Antonio, TX - Brian MacWhinney Carnegie Mellon University Pittsburgh, PA - John Mazziotta University of California at Los Angeles Los Angeles, CA - Mark Rapport Medical College of Ohio Toledo, OH - Robert Robbins National Institute of Medicine Maryland, VA - Per Roland Karolinska Institute Stockholm, Sweden - James Schwaber Neurobiology Lab Wilmington, DE - Martin Sereno University of California at San Diego La Jolla, CA - David Van Essen Washington University School of Medicine St. Louis, MO - Steven Wertheim New England Regional Primate Resource Center Southborough, MA - Roger Wood University of California at Los Angeles Los Angeles, CA _____________________________________ Attendance will be limited. Scientists developing databases in brain and/or behavior will receive preference. Applications must be received by September 15, 1994. Fill out the following application form to apply. An Advisory Board was recruited to provide external critique of the BrainMap concept, its implementation, and the overall organization of the project. To apply, contact Sally Faulk - faulk at uthscsa.edu BrainMap '94 Research Imaging Center The University of Texas Health Science Center 7703 Floyd Curl Drive San Antonio, TX 78284-6240 (210) 567-8070 (voice) (210) 567-8074 (fax) ===================================== BrainMap '94 Application Form Database Development in Brain and Behavior ________________________________________________ December 4 and 5, 1994, San Antonio, Texas Co-Organizers: Peter T. Fox M.D. and Jack L. Lancaster, Ph.D.
To: Sally Faulk, BrainMap '94 Research Imaging Center-UTHSCSA 7703 Floyd Curl Drive San Antonio, TX 78284-6240 210-567-8070 (voice) 210-567-8074 (fax) faulk at uthscsa.edu (e-mail) Applications must be received by September 15, 1994 Name/Title:_________________________________________________________ Affiliation:________________________________________________________ Address:____________________________________________________________ ____________________________________________________________________ ____________________________________________________________________ FAX:_______________________ Phone:_______________________ E-Mail:_____________________________________________________________ Do you do functional brain mapping? yes________ no________ Modalities: PET________ MRI________ ERP________ MEG________ OTHER________ Have you used a database of brain anatomy or function? yes________ no________ Describe:___________________________________________________________ ____________________________________________________________________ Are you on the Internet?____________________________________________ What computer do you use?___________________________________________ What operating system?______________________________________________ Do you use TCP/IP? yes________ no________ Comments:___________________________________________________________ ____________________________________________________________________ ____________________________________________________________________ ============================================== BrainMap '94 Database Development in Brain and Behavior December 4 & 5, 1994 The Plaza San Antonio Hotel ______________________________________ This is a preliminary draft of the agenda and may change without notice. December 4, 1994, Sunday - a.m. 7:30 Continental Breakfast Foyer Hildalgo Ballroom Session I Hildalgo Ballroom Community Databases 8:00 Welcoming Comments Dean James Young University of Texas Health Science Center at San Antonio 8:10 General Introduction Peter Fox University of Texas Health Science Center at San Antonio 8:20 BrainMap Peter Fox University of Texas Health Science Center at San Antonio 9:10 BrainMap - A User's Perspective TBA 9:20 Genesis James Bower Caltech 10:10 Genesis - A User's Perspective TBA 10:20 Break 10:35 Development of the CHILDES Database Brian MacWhinney Carnegie Mellon University 11:25 The CHILDES Database - A User's Perspective TBA 11:35 Discussion 12:00 Lunch Session II Hildalgo Ballroom Local and Emerging Databases 1:30 Brain Browser Floyd Bloom The Scripps Research Institute 2:00 NeuroDatabase Steven Wertheim New England Regional Primate Resource Center 2:30 Rat Atlas James Schwaber Computational Neurobiology Lab 3:00 Structural Probability Database John Mazziotta University of California at Los Angeles 3:30 Break 3:45 Connectivity Database David Van Essen Washington University School of Medicine 4:15 Atlas Derived Database Per Roland Karolinska Institute 4:45 Discussion General Reception 6:30 - 8:00 PM December 5, 1994, Monday - a.m.
7:30 Continental Breakfast Session III Hildalgo Ballroom Theoretical Underpinnings: Anatomical Spaces 8:00 History of Talairach Atlases Mark Rapport Medical College of Ohio 8:30 Twelve-Parameter Affine Spatial Normalization Roger Wood University of California at Los Angeles 8:50 Convex Hull Spatial Normalization Hunter Downs University of Texas Health Science Center at San Antonio 9:10 Thin-Plate Spline Spatial Normalization Fred Bookstein University of Michigan 9:30 Spherical Projections Alan Evans Montreal Neurological Institute 9:50 Cortical Surface Relaxation Martin Sereno & Anders Dale University of California at San Diego 10:10 Cortical Surface Flattening Heather Drury Washington University School of Medicine 10:30 Break 10:45 Cortical Surface Metric Unfolding George Carman Salk Institute, Vision Center Lab 11:25 Sulcal Functional Correspondence Verne Caviness Harvard University, Massachusetts General Hospital 11:45 Discussion 12:15 Lunch Session IV Hildalgo Ballroom Database Integration 1:45 Introductory Statement: A Database Federation Peter Fox & Jack Lancaster University of Texas Health Science Center at San Antonio 2:00 The Genome Experience: Is Retroactive Integration Possible? Robert Robbins National Institute of Medicine 2:30 Models for Proactive Integration Chris Fields Institute for Genomic Research 3:30 BrainMap: A Potential 'Hub' for a Federation of Neuroscience Databases? Jack Lancaster University of Texas Health Science Center at San Antonio 5:00 Discussion 6:00 End of Workshop III =============================================== BrainMap '94 Advisory Board ______________________________ Peter T. Fox, M.D. Research Imaging Center UTHSCSA Jack L. Lancaster, Ph.D. Research Imaging Center UTHSCSA Christian Burks, Ph.D. Los Alamos National Laboratory George Carman, Ph.D. The Salk Institute Alan Evans, Ph.D. Montreal Neurological Institute Richard Frackowiak, M.D. Hammersmith Hospital, London Karl Friston, M.D. Hammersmith Hospital, London Patricia Goldman-Rakic, Ph.D. Yale University Balazs Gulyas, M.D., Ph.D. The Karolinska Institute Paul C. Lauterbur, Ph.D. University of Illinois Brian MacWhinney, Ph.D. Carnegie Mellon University John Mazziotta, M.D., Ph.D. University of California M-Marsel Mesulam, M.D. Northwestern University Medical Center Lawrence Parsons, Ph.D. The University of Texas Austin Steven E. Petersen, Ph.D. Washington University Michael Posner, Ph.D. The University of Oregon Marcus E. Raichle, M.D. Washington University Robert J. Robbins, M.D. Johns Hopkins University Per Roland, M.D. The Karolinska Institute Bruce Rosen, M.D. Harvard University Arthur Toga, Ph.D. University of California Leslie G. Ungerleider, Ph.D. National Institute of Mental Health David Van Essen, Ph.D. Washington University C.C. Wood, Ph.D. Los Alamos National Laboratory From mav at psych.psy.uq.oz.au Wed Sep 7 21:51:44 1994 From: mav at psych.psy.uq.oz.au (Simon Dennis) Date: Thu, 8 Sep 1994 11:51:44 +1000 (EST) Subject: Australasian Cognitive Science Conference Preliminary Announcement Message-ID: - Preliminary Announcement - 3rd Conference of the Australasian Cognitive Science Society at The University of Queensland Brisbane, Australia April 18-20, 1995 (Preceding the 22nd Australian Experimental Psychology Conference) Abstracts Due: December 2, 1994 This meeting of the Australasian Cognitive Science Society follows the second conference, which was hosted by the University of Melbourne in 1993.
The 1995 Conference precedes the Experimental Psychology Conference, which will also be held at the University of Queensland (April 21-23, 1995). VENUE The venue for the Conference is Emmanuel College, which is located on the campus of the University of Queensland. The campus, only 4 km from the centre of Brisbane, is spacious and leafy with exotic and colourful subtropical vegetation, and is surrounded on three sides by a sweeping bend of the Brisbane River. Brisbane, with a population of 1.4 million, is Australia's third largest city, and is the capital of the state of Queensland - the "Sunshine State". Although it is Australia's fastest growing city, it retains and cherishes an enviable lifestyle influenced by its sunny, subtropical climate. It is a scenic and cosmopolitan city of palm-studded parks, colourful gardens, shady verandahs, riverside walks and cafes, and al fresco dining. The Brisbane River snakes lazily through the city from the forest-clad foothills of the Great Dividing Range, which frame the city to the west, to the Pacific Ocean, which frames it to the east. Within 90 minutes' drive of Brisbane are rainforested mountains, pristine Pacific beaches, tranquil sand islands, and buzzing coastal resorts. Brisbane is the gateway to Queensland which, at one-fifth the size of the USA, encompasses the Simpson Desert in the west, the tropical rainforests of the north, and of course the Great Barrier Reef along its beautiful Pacific coast. ACCOMMODATION Accommodation has been booked at King's College, University of Queensland, which is adjacent to the conference venue. Rooms will be allocated on a first-registered, first-served basis. The approximate cost per person per night is $40. Hotel accommodation will also be arranged. SCIENTIFIC PROGRAMME The aim of the Conference is to promote the interests of the multidisciplinary field of Cognitive Science. The participation of scholars from all areas of Cognitive Science is invited, including: - Computer Science - Linguistics - Neuroscience - Philosophy - Psychology Additionally, the Conference aims to promote applications of Cognitive Science and encourage participation from researchers in the Asia-Pacific region. The Scientific Programme will include oral and poster presentations, together with symposia. SYMPOSIA Suggestions for symposia are invited and should be forwarded to the Conference Chair with an abstract and list of speakers. EXHIBITS The Conference will include displays of recent publications and applied projects, and a symposium or workshop on applications. SOCIAL PROGRAMME The proposed Social Programme includes opening and closing receptions, poster-session refreshments, and a Conference dinner. SPONSORSHIP The University of Queensland has agreed to contribute toward Conference costs. Additional sponsors are being sought to expand Conference activities, encourage attendance of grant holders, fund awards, and enable invitation of a keynote speaker. REGISTRATION There will be a reduced registration fee for those who wish to attend both CogSci'95 and the Experimental Psychology Conference (EPC'95). Joint registration forms will be distributed with the Call for Papers in late September. A discount will apply for registration fees received by February 28, 1995. SUBMISSION OF ABSTRACTS/PAPERS All abstracts/papers will be refereed. It is intended that selected papers will be published. The submission of full papers will be required if they are to be considered for publication, and these will be reviewed.
If publication is not desired, abstracts only are required and will not be reviewed externally. The deadline for submission of papers or abstracts is Friday December 2, 1994. Notification of acceptance for the conference as poster, oral presentation or symposium paper will be forwarded February 15, 1995. The deadline for submission of revised papers for publication is May 15, 1995. TIMETABLE & CLOSING DATES: Call for Papers Sept. '94 Submission of papers/abstracts 2/12/94 Notification of acceptance for Conference/book 15/2/95 Registration with discount 28/2/95 Submission of revised paper for publication 15/5/95 CogSci'95 ORGANISING COMMITTEE CONFERENCE CHAIR: Graeme Halford, Psychology, UQ SECRETARIAT: Kerry Chalmers (Sec), Psychology, UQ Glenda Andrews (Treas), Psychology, UQ Rebecca Farley (Admin), Psychology, UQ PROGRAMME & AWARDS: Doug Saddy, Psychology & English, UQ Janet Wiles, Psychology & Comp. Sci, UQ Simon Dennis, Psychology, UQ Terry Dartnall, Computing & IT, Griffith Marilyn Ford, Computing & IT, Griffith Ottmar Lipp, Psychology, UQ PUBLICATIONS & EDITING: Ellen Watson, Philosophy, UQ Terry Dartnall, Computing & IT, Griffith Peter Slezak, Philosophy, UNSW Doug Saddy, Psychology & English, UQ NOTICES & MEMBERSHIP: Kate Stevens, Psychology, UQ Rebecca Farley, Psychology, UQ SOCIAL PROGRAMME: Helen Purchase, Computer Science, UQ VENUE & ACCOMMODATION: Len Dalgleish, Psychology, UQ Helen Purchase, Computer Science, UQ SPONSORSHIP: Joachim Diederich, Computer Science, QUT INCORPORATION: Alan Hayes, Education, UQ Helen Purchase, Computer Science, UQ EXPRESSION OF INTEREST: Please send me the Call for Papers and Registration form for CogSci'95. Name: ___________________________________ Address: _________________________________ _________________________________ _________________________________ Phone: _________________________________ E-mail: _________________________________ E-mail or detach slip and post to:- Rebecca Farley CogSci'95 - Department of Psychology University of Queensland Brisbane QLD Australia 4072 Phone: (+617) 365 6230 Fax: (+617) 365 4466 E-mail: cogsci95 at psy.uq.oz.au *****************************  From smagt at fwi.uva.nl Thu Sep 8 06:04:46 1994 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Thu, 8 Sep 1994 12:04:46 +0200 (MET DST) Subject: neurobotics WWW at U of Amsterdam Message-ID: <199409081004.AA06105@carol.fwi.uva.nl> The department of Autonomous Systems at the University of Amsterdam has been working on a WWW page describing our work in neural networks and robotics. Though we may change its look, its contents are well enough established to announce it. You can find it at http://carol.fwi.uva.nl/~smagt/neuro Please send reactions and comments to smagt at fwi.uva.nl, or krose at fwi.uva.nl. Patrick van der Smagt Department of Computer Systems University of Amsterdam  From moody at chianti.cse.ogi.edu Thu Sep 8 14:01:28 1994 From: moody at chianti.cse.ogi.edu (John Moody) Date: Thu, 8 Sep 94 11:01:28 -0700 Subject: Neural Networks in the Capital Markets Workshop Message-ID: <9409081801.AA14003@chianti.cse.ogi.edu> ******************************************************************* --- Registration Package and Preliminary Program --- NNCM-94 Second International Workshop NEURAL NETWORKS in the CAPITAL MARKETS Thursday-Friday, November 17-18, 1994 with tutorials on Wednesday, November 16, 1994 The Ritz-Carlton Hotel, Pasadena, California, U.S.A. 
Sponsored by Caltech and London Business School Neural networks have now been applied to a number of live systems in the capital markets, and in many cases have demonstrated better performance than competing approaches. Because of the overwhelming interest in the first NNCM workshop held in London in November 1993, the second annual NNCM workshop will be held November 17-18, 1994, in Pasadena, California. This is a research meeting where original, high-quality contributions to the field are presented and discussed. In addition, a day of introductory tutorials (Wednesday, November 16) will be included to familiarize audiences of different backgrounds with the financial aspects, and the mathematical aspects, of the field. --Invited Speakers: The workshop will feature invited talks by four internationally recognized researchers: Dr. Andrew Lo, MIT Sloan School Dr. Paul Refenes, London Business School Dr. Robert Shiller, Yale University Dr. Hal White, UC San Diego --Contributed Papers: NNCM-94 will have 4 oral sessions and 2 poster sessions with more than 40 contributed papers presented by academicians and practitioners from both the neural networks side and the capital markets side. Each paper has been refereed by 4 experts in the field. The areas of the accepted papers include: Stock and bond valuation and trading, asset allocation and portfolio management, real trading using neural networks, foreign exchange rate prediction, option pricing, univariate time series analysis, neural network methodology, statistical analysis and hints, theory of forecasting, and neural network modeling. --Tutorials: Before the main program, there will be a day of tutorials on Wednesday, November 16, 1994. The morning session will focus on the financial side and the afternoon session will focus on the mathematical side. -Morning Session- Dynamics of Trading and Market Microstructure Dr. Larry Harris, University of Southern California Empirical Research on Market Inefficiencies Dr. Blake LeBaron, University of Wisconsin -Afternoon Session- Neural Networks, Time Series, and Finance Dr. John Moody, Oregon Graduate Institute Statistical Inference for Neural Networks Dr. Brian Ripley, Oxford University We are very pleased to have tutors of such caliber help bring new audiences from different backgrounds up to speed in this hybrid area. --Schedule Outline: Wednesday, November 16: 8:00-12:15 Tutorials I & II 1:30-5:45 Tutorials III & IV Thursday, November 17: 8:30-11:30 Oral Session I 11:30-2:00 Luncheon & Poster Session I 2:00-5:00 Oral Session II Friday, November 18: 8:30-11:30 Oral Session III 11:30-2:00 Luncheon & Poster Session II 2:00-5:00 Oral Session IV --Location: The workshop will be held at the Ritz-Carlton Huntington Hotel in Pasadena, within two miles from the Caltech campus. One of the most beautiful hotels in the U.S., the Ritz is a 35-minute drive from Los Angeles International Airport (LAX) with nonstop flights from most major cities in North America, Europe, the Far East, Australia, and South America. Home of Caltech, Pasadena has recently become a major dining/hangout center for Southern California with the growth of its `Old Town', built along the styles of the 1950's. Among the cultural attractions of Pasadena are the Norton Simon Museum, the Huntington Library/ Gallery/Gardens, and a number of theaters including the Ambassador Theater. --Organizing Committee: Dr. Y. Abu-Mostafa, California Institute of Technology Dr. A. Atiya, Cairo University Dr. N. Biggs, London School of Economics Dr. D. 
Bunn, London Business School Dr. B. LeBaron, University of Wisconsin Dr. A. Lo, MIT Sloan School Dr. J. Moody, Oregon Graduate Institute Dr. A. Refenes, London Business School Dr. M. Steiner, Universitaet Munster Dr. A. Timmermann, Birkbeck College, London Dr. A. Weigend, University of Colorado Dr. H. White, University of California, San Diego --Registration and Hotel Reservation: Registration is done by mail on a first-come, first-served basis (last year we had to return the checks to more than 50 people for lack of space). To ensure your place at the workshop, please send the enclosed registration form and payment as soon as possible to Ms. Lucinda Acosta, Caltech 116-81, Pasadena, CA 91125, U.S.A. Please make checks payable to Caltech. Hotel reservations are made by contacting the Ritz-Carlton Hotel directly. Their phone number is (818) 568-3900 and fax number is (818) 792-6613. Please mention that you are with NNCM-94 in order to get the (very) special rate that we negotiated. The rate is $79+taxes ($99 with $20 credited by NNCM-94 upon registration) per room (single or double occupancy) per night. Please make the hotel reservation IMMEDIATELY as the rate is based on availability. --Secretariat: For further information, please contact the NNCM-94 secretariat Ms. Lucinda Acosta, Caltech 116-81, Pasadena, CA 91125, U.S.A. e-mail: lucinda at sunoptics.caltech.edu , phone (818) 395-4843, fax (818) 568-8437 ******************************************************************* -- NNCM-94 Registration Form -- Title:--------- Name:------------------------------------ Mailing Address:------------------------------------------- ----------------------------------------------------------- e-mail:---------------------------- fax:------------------- ********Please circle the applicable fees and write the total******** --Main Conference (November 17-18): Registration fee $500 Discounted fee for academicians $250 (letter on university letterhead required) Discounted fee for full-time students $125 (letter from registrar or faculty advisor required) --Tutorials (November 16): You must be registered for the main conference in order to register for the tutorials. Morning Session Only $100 Afternoon Session Only $100 Both Sessions $150 Full-time students $50 (letter from registrar or faculty advisor required) TOTAL: $-------- Please include payment (check or money order in US currency). PLEASE MAKE CHECK PAYABLE TO CALTECH. --Hotel Reservation: Please contact the Ritz-Carlton Huntington Hotel in Pasadena directly. The phone number is (818) 568-3900 and the fax number is (818) 792-6613. Ask for the NNCM-94 rate. We have negotiated an (incredible) rate of $79+taxes ($99 with $20 credited by NNCM-94 upon registration) per room (single or double occupancy) per night, based on availability. ********Please mail your completed registration form and payment to******** Ms. Lucinda Acosta, Caltech 116-81, Pasadena, CA 91125, U.S.A.  From duch at phys.uni.torun.pl Fri Sep 9 03:50:26 1994 From: duch at phys.uni.torun.pl (Wlodzislaw Duch) Date: Fri, 9 Sep 1994 09:50:26 +0200 (MET DST) Subject: Neuroprose paper announcement Message-ID: <9409090750.AA14398@class1.phys.uni.torun.pl> A non-text attachment was scrubbed... 
[A non-text attachment (1866 bytes) was scrubbed from this message; it is archived at https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/cbfc0bd5/attachment-0001.ksh]

From duch at phys.uni.torun.pl Fri Sep 9 03:51:34 1994
From: duch at phys.uni.torun.pl (Wlodzislaw Duch)
Date: Fri, 9 Sep 1994 09:51:34 +0200 (MET DST)
Subject: paper announcement: Solution to fundamental problems of cognitive science
Message-ID: <9409090751.AA14409@class1.phys.uni.torun.pl>

[A non-text attachment (926 bytes) was scrubbed from this message; it is archived at https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/086d35ef/attachment-0001.ksh]

From brunak at cbs.dth.dk Fri Sep 9 13:43:19 1994
From: brunak at cbs.dth.dk (Soren Brunak)
Date: Fri, 9 Sep 94 13:43:19 METDST
Subject: Neural network model of the genetic code
Message-ID: 

Neural network model of the genetic code is strongly correlated to the GES scale of amino acid transfer free energies

N. Tolstrup, J. Toftgaard, J. Engelbrecht and S. Brunak
Centre for Biological Sequence Analysis
Department of Physical Chemistry
The Technical University of Denmark
DK-2800 Lyngby, Denmark

Journal of Molecular Biology, to appear.

Abstract: A neural network trained to classify the 61 nucleotide triplets of the genetic code into twenty amino acid categories develops in its internal representation a pattern matching the relative cost of transferring amino acids with satisfied backbone hydrogen bonds from water to an environment with a dielectric constant of roughly 2.0. Such environments are typically found in lipid membranes or in the interior of proteins. In learning the mapping between the codons and the categories, the network groups the amino acids according to the scale of transfer free energies developed by Engelman, Goldman and Steitz. Several other scales based on internal preference statistics also agree reasonably well with the network grouping. The network is able to relate the structure of the genetic code to quantifications of amino acid hydrophobicity-philicity more systematically than the numerous attempts made earlier. Due to its inherent non-linearity, the code is also shown to impose decisive constraints on algorithmic analysis of the protein coding potential of DNA.

To obtain a copy, do:
unix> ftp 129.142.74.40 (or ftp virus.fki.dth.dk)
Name: anonymous
Password: (your email address, please)
ftp> binary
ftp> cd pub
ftp> get gcode.ps.gz
ftp> bye
unix> gunzip gcode.ps.gz
unix> lpr gcode.ps

URL ftp://virus.fki.dth.dk/pub/gcode.ps.gz

From terry at salk.edu Fri Sep 9 23:29:35 1994
From: terry at salk.edu (Terry Sejnowski)
Date: Fri, 9 Sep 94 20:29:35 PDT
Subject: ** Walter Heiligenberg **
Message-ID: <9409100329.AA12544@salk.edu>

Walter Heiligenberg died in the USAir crash on September 8 near Pittsburgh. He was returning from Germany and planning to attend a neuroscience retreat. Walter was a pioneer in modeling neural systems and made major contributions to our understanding of the jamming avoidance response in electric fish. He wrote about this system in his book "Neural Nets in Electric Fish", published by the MIT Press. His tragic death is a great personal loss to his many friends and colleagues. We will all miss him greatly.
Terry

-----

From yeh at harlequin.co.uk Sat Sep 10 12:27:57 1994
From: yeh at harlequin.co.uk (Yehouda Harpaz)
Date: Sat, 10 Sep 94 12:27:57 BST
Subject: the implementation of cognition in the human brain
Message-ID: <14527.9409101127@wratting.cam.harlequin.co.uk>

I have put at the addresses below my ideas about the way the cognition system is actually implemented in the brain. The text is not publication-ready, but readable. I would appreciate comments. The text is oriented towards psychology and neurobiology, but input from a connectionist point of view would also be useful, and I think connectionists will find it interesting to read. The main points, from the point of view of connectionism, are: 1) concentrating on thinking, as opposed to feature recognition; 2) the suggestion that learning in the brain is directed by a global mechanism (which is described in the text), and is largely independent of local features.

www: http://www.mrc-cpe.cam.ac.uk/yh1/cognition.html
anonymous ftp: ftp.mrc-cpe.cam.ac.uk => pub/yh1/cognition.ps

Thanks
Yehouda Harpaz

From arbib at pollux.usc.edu Mon Sep 12 12:37:06 1994
From: arbib at pollux.usc.edu (Michael A. Arbib)
Date: Mon, 12 Sep 1994 09:37:06 -0700
Subject: John Szentagothai
Message-ID: <199409121637.JAA02879@pollux.usc.edu>

John Szentagothai, the neuroanatomist, died at his home in Budapest on the morning of Thursday, September 8th. He had arisen early to work on a book, taken breakfast, and then sat down before going in to the Institute - and died immediately. He was almost 82. Professor Szentagothai played a leading role in neuroanatomy for many decades, having already established a strong reputation prior to World War II. In the years since then, he was active in neuroscience in general, and in Hungarian science in particular, where he created a strong, and international, school of Hungarian neuroanatomists, as well as serving as a vigorous president of the Hungarian Academy of Sciences. His concern for his country continued with a recent term as a member of the Hungarian parliament. Of his many contributions to neuroscience, perhaps two are best known to modelers: his 1969 book on "The Cerebellum as a Neuronal Machine" (with Eccles and Ito) inspired Marr and Albus and many other cerebellar modelers; his 1974/5 book on "Conceptual Models of Neural Organization", and related articles, did much to extend our view of the modular and columnar organization of the brain. His enthusiasm for exposition and his quest to understand the brain continued undiminished until the day he died. I am grateful that his voice was heard for so long, but saddened indeed that I shall not hear it again.

Michael Arbib

From Patrik.Floreen at cs.Helsinki.FI Tue Sep 13 04:34:35 1994
From: Patrik.Floreen at cs.Helsinki.FI (Patrik Floreen)
Date: Tue, 13 Sep 1994 11:34:35 +0300
Subject: Paper on Neuroprose: Complexity Issues in Discrete Hopfield Networks
Message-ID: <199409130834.LAA02415@skiathos.Helsinki.FI>

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/floreen.hopfield.ps.Z

The following paper has been placed in the Neuroprose archive at Ohio State University:
P. Floreen and P. Orponen: Complexity Issues in Discrete Hopfield Networks (55 pages)

Patrik Floreen
Department of Computer Science
P.O. Box 26 (Teollisuuskatu 23)
FIN-00014 University of Helsinki
floreen at cs.helsinki.fi

Abstract: We survey some aspects of the computational complexity theory of discrete-time and discrete-state Hopfield networks. The emphasis is on topics that are not adequately covered by the existing survey literature, most significantly: 1. the known upper and lower bounds for the convergence times of Hopfield nets (here we consider mainly worst-case results); 2. the power of Hopfield nets as general computing devices (as opposed to their applications to associative memory and optimization); 3. the complexity of the synthesis ("learning") and analysis problems related to Hopfield nets as associative memories.

The text is a draft chapter for the forthcoming book "The Computational and Learning Complexity of Neural Networks: Advanced Topics" (ed. Ian Parberry). Instructions for ftp retrieval of this paper are given below. Please do not reply directly to this message.

FTP INSTRUCTIONS:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get floreen.hopfield.ps.Z
ftp> quit
unix> uncompress floreen.hopfield.ps.Z

Thanks to Jordan Pollack for maintaining this archive.

Patrik Floreen
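For readers who have not worked with these models, here is a minimal sketch (not from the paper) of the object behind the survey's first topic: a discrete Hopfield net under asynchronous updates. With symmetric weights and a zero diagonal, the standard energy function never increases, so the dynamics must reach a fixed point; the bounds surveyed above ask how many updates that can take. The random weights and initial state below are illustrative assumptions only.

import numpy as np

def energy(W, theta, s):
    # Standard Hopfield energy; non-increasing under the updates below.
    return -0.5 * s @ W @ s + theta @ s

def run_hopfield(W, theta, s, max_sweeps=100):
    # Asynchronous threshold updates over +/-1 states. With W symmetric
    # and zero-diagonal, each state flip lowers the energy, so the net
    # must reach a fixed point; the surveyed bounds ask how fast.
    n = len(s)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            new_si = 1.0 if W[i] @ s >= theta[i] else -1.0
            if new_si != s[i]:
                s[i] = new_si
                changed = True
        if not changed:
            return s  # converged to a stable state
    return s

# Illustrative random symmetric network (an assumption, not from the paper).
rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2.0
np.fill_diagonal(W, 0.0)
theta = np.zeros(n)
s0 = rng.choice([-1.0, 1.0], size=n)
s1 = run_hopfield(W, theta, s0.copy())
print(energy(W, theta, s0), ">=", energy(W, theta, s1))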
From bradtke at picard.gteds.gte.com Tue Sep 13 08:43:00 1994
From: bradtke at picard.gteds.gte.com (Steve Bradtke)
Date: Tue, 13 Sep 1994 08:43:00 -0400
Subject: Paper on neuroprose archives
Message-ID: <199409131245.IAA12909@harvey.gte.com>

ftp://archive.cis.ohio-state.edu/pub/neuroprose/bradtke.RLforLQ.ps.Z
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/bradtke.rlforlq.ps.Z

Adaptive Linear Quadratic Control Using Policy Iteration (19 pages)
CMPSCI Technical Report 94-49

Steven J. Bradtke (1), B. Erik Ydstie (2), and Andrew G. Barto (1)
(1) Computer Science Department, University of Massachusetts, Amherst, MA 01003
(2) Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
bradtke at cs.umass.edu, ydstie at andrew.cmu.edu, barto at cs.umass.edu

Abstract: In this paper we present stability and convergence results for Dynamic Programming-based reinforcement learning applied to Linear Quadratic Regulation (LQR). The specific algorithm we analyze is based on Q-learning, and it is proven to converge to the optimal controller provided that the underlying system is controllable and a particular signal vector is persistently excited. The performance of the algorithm is illustrated by applying it to a model of a flexible beam.

Instructions for ftp retrieval of this paper are given below. Please do not reply directly to this message.

FTP INSTRUCTIONS:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get bradtke.rlforlq.ps.Z
ftp> quit
unix> uncompress bradtke.rlforlq.ps.Z

Thanks to Jordan Pollack for maintaining this archive.

Steve Bradtke
=======================================================================
Steve Bradtke                  (813) 978-6285
GTE Data Services DC F4M       Internet:
One E. Telecom Parkway         bradtke@[138.83.42.66]@gte.com
Temple Terrace, FL 33637       bradtke at cs.umass.edu
=======================================================================
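The policy-iteration structure behind this kind of algorithm can be sketched compactly in the model-based setting. The sketch below is the classical iteration (a Lyapunov-equation policy evaluation alternating with a greedy policy improvement) that a Q-learning procedure of this sort estimates from on-line data rather than from a known (A, B); the example system, cost matrices, and initial stabilizing gain are illustrative assumptions, not taken from the paper.

import numpy as np

def lqr_policy_iteration(A, B, Q, R, K, iters=30):
    # Policy evaluation: the cost matrix P of the linear policy u = -K x
    # solves the Lyapunov equation  P = Q + K'RK + (A-BK)' P (A-BK).
    # Policy improvement: the greedy gain for that P.
    # A Q-learning variant estimates these same quantities from observed
    # states and controls instead of from known system matrices.
    n = A.shape[0]
    for _ in range(iters):
        Ak = A - B @ K
        # Solve the Lyapunov equation by vectorization:
        # (I - kron(Ak', Ak')) vec(P) = vec(Q + K'RK)
        rhs = (Q + K.T @ R @ K).reshape(-1)
        M = np.eye(n * n) - np.kron(Ak.T, Ak.T)
        P = np.linalg.solve(M, rhs).reshape(n, n)           # evaluation
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # improvement
    return K, P

# Illustrative system (assumed for the sketch): a discretized double
# integrator, with an initial gain assumed to be stabilizing.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[1.0, 1.0]])
K_opt, P_opt = lqr_policy_iteration(A, B, Q, R, K0)

As in the paper's setting, convergence of such an iteration depends on starting from a stabilizing policy and, in the data-driven case, on persistent excitation of the regression signal.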
From bradtke at picard.gteds.gte.com Tue Sep 13 08:44:40 1994
From: bradtke at picard.gteds.gte.com (Steve Bradtke)
Date: Tue, 13 Sep 1994 08:44:40 -0400
Subject: Thesis on neuroprose archives
Message-ID: <199409131247.IAA12992@harvey.gte.com>

ftp://archive.cis.ohio-state.edu/pub/neuroprose/thesis/bradtke.thesis.ps.Z
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/thesis/bradtke.thesis.Z

Incremental Dynamic Programming for On-Line Adaptive Optimal Control (133 pages)
CMPSCI Technical Report 94-62

Steven J. Bradtke
Computer Science Department, University of Massachusetts, Amherst, MA 01003
bradtke at cs.umass.edu

Abstract: Reinforcement learning algorithms based on the principles of Dynamic Programming (DP) have enjoyed a great deal of recent attention, both empirically and theoretically. These algorithms have been referred to generically as Incremental Dynamic Programming (IDP) algorithms. IDP algorithms are intended for use in situations where the information or computational resources needed by traditional dynamic programming algorithms are not available. IDP algorithms attempt to find a global solution to a DP problem by incrementally improving local constraint satisfaction properties as experience is gained through interaction with the environment. This class of algorithms is not new, going back at least as far as Samuel's adaptive checkers-playing programs, but the links to DP have only been noted and understood very recently. This dissertation expands the theoretical and empirical understanding of IDP algorithms and increases their domain of practical application. We address a number of issues concerning the use of IDP algorithms for on-line adaptive optimal control. We present a new algorithm, Real-Time Dynamic Programming, that generalizes Korf's Learning Real-Time A* to a stochastic domain, and show that it has computational advantages over conventional DP approaches to such problems. We then describe several new IDP algorithms based on the theory of Least Squares function approximation. Finally, we begin the extension of IDP theory to continuous domains by considering the problem of Linear Quadratic Regulation. We present an algorithm based on Policy Iteration and Watkins' Q-functions and prove convergence of the algorithm (under the appropriate conditions) to the optimal policy. This is the first result proving convergence of a DP-based reinforcement learning algorithm to the optimal policy for any continuous domain. We also demonstrate that IDP algorithms cannot be applied blindly to problems from continuous domains, even such simple domains as Linear Quadratic Regulation.

Instructions for ftp retrieval of this paper are given below. Please do not reply directly to this message.

FTP INSTRUCTIONS:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose/thesis
ftp> binary
ftp> get bradtke.thesis.Z
ftp> quit
unix> uncompress bradtke.thesis.Z

Thanks to Jordan Pollack for maintaining this archive.

Steve Bradtke
=======================================================================
Steve Bradtke                  (813) 978-6285
GTE Data Services DC F4M       Internet:
One E. Telecom Parkway         bradtke@[138.83.42.66]@gte.com
Temple Terrace, FL 33637       bradtke at cs.umass.edu
=======================================================================

From rsun at cs.ua.edu Tue Sep 13 16:23:03 1994
From: rsun at cs.ua.edu (Ron Sun)
Date: Tue, 13 Sep 1994 15:23:03 -0500
Subject: No subject
Message-ID: <9409132023.AA32546@athos.cs.ua.edu>

*** Announcing a new book *** available from Kluwer Academic Publishers:

COMPUTATIONAL ARCHITECTURES INTEGRATING NEURAL AND SYMBOLIC PROCESSES: A PERSPECTIVE ON THE STATE OF THE ART
Edited by Ron Sun and Larry Bookman
ISBN 0-7923-9517-4
(Order information is at the end of this message)
-------------------------------------------
The focus of this book is on a currently emerging body of research --- computational architectures integrating neural and symbolic processes. There has been a great deal of work in integrating neural and symbolic processes, from both cognitive and applied viewpoints. The editors of this book intend to address the underlying architectural aspects of this integration. In order to provide a basis for a deeper understanding of existing divergent approaches and provide insight for further developments in this field, the book presents (1) an examination of specific architectures (grouped together according to their approaches), their strengths and weaknesses, why they work, and what they predict, and (2) a critique/comparison of these approaches. The book will be of use to researchers, graduate students, and interested laymen, in areas such as cognitive science, artificial intelligence, computer science, cognitive psychology, and neurocomputing, in keeping up to date with the newest research trends. It can also serve as a comprehensive, in-depth introduction to this new emerging field. A unique feature of the book is a comprehensive bibliography at the end of the book.
--------------------------------------------
TABLE OF CONTENTS

Foreword by Michael Arbib
Preface by Ron Sun and Larry Bookman

Chapter 1: An Introduction: On Symbolic Processing in Neural Networks, by Ron Sun
(Introduction; Brief Review; Existing Approaches; Issues and Difficulties; Future Directions, Or Where Should We Go From Here?; Overview of the Chapters; Summary)

Part I: Localist Architectures

Chapter 2: Complex Symbol-Processing in Conposit, A Transiently Localist Connectionist Architecture, by John A. Barnden
(Introduction; The Johnson-Laird Theory and Its Challenges; Mental Models in Conposit; Connectionist Realization of Conposit; Coping with the Johnson-Laird Challenge; Simulation Runs; Discussion; Summary)

Chapter 3: A Structured Connectionist Approach to Inferencing and Retrieval, by Trent E. Lange
(Introduction; Language Understanding and Memory Retrieval Models; Inferencing in ROBIN; Episodic Retrieval in REMIND; Future Work; Summary)

Chapter 4: Hierarchical Architectures for Reasoning, by R.C. Lacher and K.D. Nguyen
(Introduction; Computational Networks: A General Setting for Distributed Computations; Type x00 Computational Networks; Expert Systems; Expert Networks; Neural Networks; Summary)

Part II: Distributed Architectures

Chapter 5: Subsymbolic Parsing of Embedded Structures, by Risto Miikkulainen
(Introduction; Overview of Subsymbolic Sentence Processing; The SPEC Architecture; Experiments; Discussion; Summary)

Chapter 6: Towards Instructable Connectionist Systems, by David C. Noelle and Garrison W. Cottrell
(Introduction; Systematic Action; Linguistic Interaction; Learning By Instruction; Summary)

Chapter 7: An Internal Report for Connectionists, by Noel E. Sharkey and Stuart A. Jackson
(Introduction; The Origins of Connectionist Representation; Representation and Decision Space; Discussion; Summary)

Part III: Combined Architectures

Chapter 8: A Two-Level Hybrid Architecture for Structuring Knowledge for Commonsense Reasoning, by Ron Sun
(Introduction; Developing A Two-Level Architecture; Fine-Tuning the Structure; Experiments; Comparisons with Other Approaches; Summary)

Chapter 9: A Framework for Integrating Relational and Associational Knowledge for Comprehension, by Lawrence A. Bookman
(Introduction; Overview of LeMICON; Text Comprehension; Encoding Semantic Memory; Representation of Semantic Constraints; Experiments and Results; Algorithm Summary)

Chapter 10: Examining a Hybrid Connectionist/Symbolic System for the Analysis of Ballistic Signals, by Charles Lin and James Hendler
(Introduction; Related Work in Hybrid Systems; Description of the SCRuFFY Architecture; Analysis of Ballistic Signals; Future Work; Conclusion)

Part IV: Commentaries

Chapter 11: Symbolic Artificial Intelligence and Numeric Artificial Neural Networks: Towards a Resolution of the Dichotomy, by Vasant Honavar
(Introduction; Shared Foundations of SAI and NANN; Knowledge Representation Revisited; A Closer Look at SAI and NANN; Integration of SAI and NANN; Summary)

Chapter 12: Connectionist Natural Language Processing: A Status Report, by Michael G. Dyer
(Introduction; Dynamic Bindings; Functional Bindings and Structured Pattern Matching; Encoding and Accessing Recursive Structures; Forming Lexical Memories; Forming Semantic and Episodic Memories; Role of Working Memory; Routing and Control; Grounding Language in Perception; Future Directions; Conclusions)

Appendix: Bibliography of Connectionist Models with Symbolic Processing
Author Index
Subject Index
---------------------------------------------
To order: ISBN 0-7923-9517-4
Kluwer, Order Dept.
P.O.B. 358 Accord Station, Hingham, MA 02018-0358
(617) 871-6600; FAX: (617) 871-6528; e-mail: Kluwer at world.std.com
---------------------------------------------

From arbib at pollux.usc.edu Thu Sep 15 19:32:28 1994
From: arbib at pollux.usc.edu (Michael A. Arbib)
Date: Thu, 15 Sep 1994 16:32:28 -0700
Subject: Two Positions Available: Data Bases, Visualization, and Simulation for Brain Research
Message-ID: <199409152332.QAA25280@pollux.usc.edu>

Professors Michael Arbib (Director), Michel Baudry, Theodore Berger, Peter Danzig, Shahram Ghandeharizadeh, Scott Grafton, Dennis McLeod, Thomas McNeill, Larry Swanson, and Richard Thompson have secured a Program Project grant from the Human Brain Project (a consortium of federal agencies led by the National Institute of Mental Health) for a 5-year project, "Neural Plasticity: Data and Computational Structures", to be conducted at the University of Southern California. The Project will combine research on databases with the development of tools for database construction and data recovery from multiple databases, simulation tools, and visualization tools for both rat neuroanatomy and human brain imaging. These tools will be used to construct databases for research at USC and elsewhere on mechanisms of neural plasticity in basal ganglia, cerebellum, and hippocampus. The grant will also support a core of neuroscience research (both experimental and computational) linked to several ongoing research programs to explore how experiments can be enhanced when coupled to databases enriched with powerful tools for modeling and visualization.
The project is a major expression of USC's approach to the study of the brain, which locates neuroscience in the context of a broad interdisciplinary program in Neural, Informational, and Behavioral Sciences (NIBS). The grant provides funding for two computer professionals to help us develop a system integrating databases, discovery tools, visualization and simulation for neuroscience.

The DATA DEVELOPER is to function as a "knowledge engineer" helping neuroscientists explicate data and system needs. Experience is required with WWW's httpd and Mosaic, UNIX, and Macintosh software. A background in neuroscience, while welcome, is not required. We do require proven communication skills and the ability to analyze scientific data, with at least three years' professional experience.

The SYSTEMS PROGRAMMER must have at least three years' experience programming and developing object-oriented databases, including UNIX, C++, and DBMS experience. Experience with graphics, simulation tools and Internet protocols would be welcome. We require demonstrated ability to package software for public distribution, using multiple platforms.

Send CV, references, and a letter addressing the above qualifications to Paulina Tagle, Center for Neural Engineering, USC, Los Angeles, CA 90089-2520; Fax (213) 740-5687; paulina at pollux.usc.edu. USC is an equal opportunity employer.

From goodman at unr.edu Fri Sep 16 14:09:41 1994
From: goodman at unr.edu (Phil Goodman)
Date: Fri, 16 Sep 1994 11:09:41 -0700 (PDT)
Subject: Position Announcement
Message-ID: <9409161809.AA15685@equinox.unr.edu>

******* Preliminary Position Announcement *******

NEURAL NETWORK METHODOLOGIST -- VISITING or RESEARCH FACULTY MEMBER
(Basic and Applied Research; 100% of Time Protected for Project-Related and Independent Research)
Center for Biomedical Modeling Research, University of Nevada, Reno

The University of Nevada Center for Biomedical Modeling Research (CBMR), located at the base of the Sierra Nevada Mountains near Lake Tahoe, is an interdisciplinary research project involving the Departments of Medicine, Electrical Engineering, and Computer Science. Under federal funding, CBMR faculty and collaborators apply neural network and advanced probabilistic/statistical concepts to large health care databases. In particular, they are developing methods to: (1) improve the accuracy of predicting surgical mortality, (2) interpret nonlinearities and interactions among predictors, and (3) manage missing data.

The CBMR seeks a PhD (or equivalent) methodologist trained in advanced artificial neural network theory, model generalization, probability and statistical theory, and C programming. This person will have major responsibility for the design of techniques that improve the ability of nonlinear models to generalize, and will supervise several C programmers to implement concepts into software (much of the resulting software will be freely distributed for use in many fields). Working knowledge of decision theory, Bayesian statistics, bootstrap, ROC analysis, or imputation of missing data is desirable. The starting date is November 15, with an expected duration of at least 2 years.

Appointment possibilities include:
* Research Assistant Professor (non-tenure track)
* Visiting Professor (Assistant/Associate/Full) (salary could be added to available sabbatical or other partial funding)

Funding is also available for a graduate student to work under the faculty member, and possibly a post-doctoral position. The position will remain open until filled.
The University of Nevada employs only U.S. citizens and aliens lawfully authorized to work in the United States. AA-EOE. If interested, please send (by written, faxed, or plain-text electronic mail) a cover letter detailing your qualifications, and a resume that includes the names and phone numbers of three references.
______________________________________________________________________________
Philip H. Goodman, MD, MS      E-mail: goodman at unr.edu
Associate Professor of Medicine, Electrical Engineering, & Computer Science
University of Nevada Center for Biomedical Modeling Research
World-Wide Web: http://www.scs.unr.edu/~cbmr/
Washoe Medical Center, 77 Pringle Way, Reno, Nevada 89520 USA
Voice: +1 702 328-4869    FAX: +1 702 328-4871
______________________________________________________________________________

From esann at dice.ucl.ac.be Fri Sep 16 13:58:00 1994
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Fri, 16 Sep 1994 19:58:00 +0200
Subject: Neural Processing Letters: WWW and FTP servers available
Message-ID: <9409161747.AA02377@ns1.dice.ucl.ac.be>

*****************************
* Neural Processing Letters *
*****************************

Neural Processing Letters is a new rapid-publication journal in the field of neural networks. The journal gives authors the possibility to publish, with very short delays (less than 3 months), new ideas, original developments and work in progress in all aspects of the artificial neural networks field, including, but not restricted to, theoretical developments, biological models, new formal models, learning, applications, software and hardware developments, and prospective research. Because of the short publication delays, the journal informs readers about the LATEST developments and results in the field, and should be used by all researchers concerned with neural networks to know the CURRENT status of research in their area.

Information on Neural Processing Letters is now available through WWW (Mosaic server) and anonymous FTP. If you do not have access to FTP or WWW, please don't hesitate to contact the publisher directly for more information.

FTP server: ftp.dice.ucl.ac.be
directory: /pub/neural-nets/NPL
WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html

Publisher: D facto publications
45 rue Masui
B-1210 Brussels
Belgium
Phone: + 32 2 245 43 63
Fax: + 32 2 245 46 94
_____________________________
D facto publications - conference services
45 rue Masui, 1210 Brussels, Belgium
tel: +32 2 245 43 63    fax: +32 2 245 46 94
_____________________________

From bradtke at picard.gteds.gte.com Fri Sep 16 12:22:47 1994
From: bradtke at picard.gteds.gte.com (Steve Bradtke)
Date: Fri, 16 Sep 1994 12:22:47 -0400
Subject: Thesis on neuroprose archives (corrected repost)
Message-ID: <199409161625.MAA17435@harvey.gte.com>

This repost corrects the directory path to the document, and the document name. I apologize for any problems that the last posting may have caused.

Steve

ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/bradtke.thesis.ps.Z
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/Thesis/bradtke.thesis.ps.Z

Incremental Dynamic Programming for On-Line Adaptive Optimal Control (133 pages)
CMPSCI Technical Report 94-62
Steven J. Bradtke
Computer Science Department, University of Massachusetts, Amherst, MA 01003
bradtke at cs.umass.edu

Abstract: Reinforcement learning algorithms based on the principles of Dynamic Programming (DP) have enjoyed a great deal of recent attention, both empirically and theoretically. These algorithms have been referred to generically as Incremental Dynamic Programming (IDP) algorithms. IDP algorithms are intended for use in situations where the information or computational resources needed by traditional dynamic programming algorithms are not available. IDP algorithms attempt to find a global solution to a DP problem by incrementally improving local constraint satisfaction properties as experience is gained through interaction with the environment. This class of algorithms is not new, going back at least as far as Samuel's adaptive checkers-playing programs, but the links to DP have only been noted and understood very recently. This dissertation expands the theoretical and empirical understanding of IDP algorithms and increases their domain of practical application. We address a number of issues concerning the use of IDP algorithms for on-line adaptive optimal control. We present a new algorithm, Real-Time Dynamic Programming, that generalizes Korf's Learning Real-Time A* to a stochastic domain, and show that it has computational advantages over conventional DP approaches to such problems. We then describe several new IDP algorithms based on the theory of Least Squares function approximation. Finally, we begin the extension of IDP theory to continuous domains by considering the problem of Linear Quadratic Regulation. We present an algorithm based on Policy Iteration and Watkins' Q-functions and prove convergence of the algorithm (under the appropriate conditions) to the optimal policy. This is the first result proving convergence of a DP-based reinforcement learning algorithm to the optimal policy for any continuous domain. We also demonstrate that IDP algorithms cannot be applied blindly to problems from continuous domains, even such simple domains as Linear Quadratic Regulation.

Instructions for ftp retrieval of this paper are given below. Please do not reply directly to this message.

FTP INSTRUCTIONS:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose/Thesis
ftp> binary
ftp> get bradtke.thesis.ps.Z
ftp> quit
unix> uncompress bradtke.thesis.ps.Z

Thanks to Jordan Pollack for maintaining this archive.

Steve Bradtke
=======================================================================
Steve Bradtke                  (813) 978-6285
GTE Data Services DC F4M       Internet:
One E. Telecom Parkway         bradtke@[138.83.42.66]@gte.com
Temple Terrace, FL 33637       bradtke at cs.umass.edu
=======================================================================

From massone at mimosa.eecs.nwu.edu Fri Sep 16 14:59:58 1994
From: massone at mimosa.eecs.nwu.edu (Lina Massone)
Date: Fri, 16 Sep 94 13:59:58 CDT
Subject: Paper on dynamics of superior colliculus & recurrent backprop.
Message-ID: <9409161859.AA16551@mimosa.eecs.nwu.edu>

ftp-host: archive.cis.ohio-state.edu
ftp-file: massone.colliculus.ps.Z

The following paper has been placed in the connectionist archive. The paper has been accepted for publication in "Network". (Note: the paper takes a LONG time to print because of the figures.)

Local Dynamic Interactions in the Collicular Motor Map: A Neural Network Model
Lina L.E. Massone and Tony Khoshaba

In this paper we explore the possibility that some of the dynamic properties of the neural activity in the gaze-related motor map (located in the intermediate layers of the superior colliculus) might be mediated by local interactions between movement-related neurons and fixation neurons. More specifically, the goal of this research is to demonstrate, from a computational standpoint, which classes of dynamic behaviors of the collicular neurons can be obtained without the intervention of feedback signals, and hence to begin exploring to what extent the gaze system needs feedback in order to operate. We modeled: (a) the collicular motor map as a dynamical system realized with a recurrent neural network; (b) the dynamics of the neural activity in the map as the transients of that system towards an equilibrium configuration that the network learned with a recurrent learning algorithm (recurrent backpropagation). The results of our simulations demonstrate: (1) that the transients of the trained network are hill-flattening patterns, as observed by some experimenters in the burst-neuron layer of the superior colliculus of rhesus monkeys; this result was obtained despite the fact that the learning algorithm did not specify what the network's transients should be; (2) that the connections in the trained network are excitatory within the fixation zone of the motor map and inhibitory elsewhere; (3) that the results of the learning are robust in the face of changes in the connectivity pattern and in the initialization of the weights, but that a local connectivity pattern favors the network's stability; (4) that nonlinearity is required in order to obtain meaningful dynamic behaviors; (5) that the trained network is robust to abnormal stimulation patterns such as noisy and multiple stimuli, and that when multiple stimuli are utilized the response of the network remains a stereotyped flattening one. The results of the learning point out the possibility that the dynamics of the burst-neuron layer of the superior colliculus might be locally regulated rather than feedback-driven, and that the action of the feedback may be confined to the layer of the buildup neurons. The results of the multiple-stimulation experiment support the hypothesis, already put forward by one of the authors in a previous work (Massone, in press), that the averaging of the direction of movement following double stimulation of the motor map (Robinson 1972) does not occur at the level of the motor map. This paper also constitutes a study of the properties and responses of recurrent backpropagation under various choices for the network's and algorithm's parameters.

From arantza at cogs.susx.ac.uk Sat Sep 17 17:00:00 1994
From: arantza at cogs.susx.ac.uk (Arantza Etxeberria)
Date: Sat, 17 Sep 94 17:00 BST
Subject: Workshop Announcement
Message-ID: 

[A non-text attachment (22451 bytes, x-sun-attachment) was scrubbed from this message; it is archived at https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/9d84a4dc/attachment-0001.ksh]

From marshall at cs.unc.edu Sun Sep 18 19:51:29 1994
From: marshall at cs.unc.edu (Jonathan A. Marshall)
Date: Sun, 18 Sep 1994 19:51:29 -0400
Subject: Cognitive Science faculty job at Duke University
Message-ID: <199409182351.TAA13811@marshall.cs.unc.edu>

[Please reply to the address below, not to the poster.]
----------------------------------------------------------------------
From: greg at psych.duke.edu (Gregory Lockhead)
Subject: Cognitive Science faculty job at Duke University

Duke University announces a tenure-track assistant professorship in Cognitive Science. Specialty areas to be considered include but are not limited to: attention, imagery, memory, motor control, and vision in humans. Some combination of computational, developmental, experimental, mathematical, or neuroscience perspectives is preferred. Send a letter of application and at least three letters of recommendation to:

Cognitive Science Search Committee
Department of Psychology: Experimental
Duke University
Durham, NC 27708, USA.

Applications received by December 1, 1994 will be guaranteed consideration. Duke University is an Equal Opportunity/Affirmative Action Employer.
----------------------------------------------------------------------

From Adriaan.Tijsseling at phil.ruu.nl Mon Sep 19 07:13:26 1994
From: Adriaan.Tijsseling at phil.ruu.nl (Adriaan Tijsseling)
Date: Mon, 19 Sep 1994 13:13:26 +0200 (MET DST)
Subject: Thesis available on Categorization
Message-ID: <199409191113.NAA15756@laurel.stud.phil.ruu.nl>

The following thesis is available by anonymous ftp from ftp.phil.ruu.nl:

A Hybrid Framework for Categorization
a Master's Thesis by: Adriaan Tijsseling, Dept. of Cognitive Artificial Intelligence, Faculty of Philosophy, Utrecht University.

The thesis is in the directory /pub/papers/tijsseling-Cat94.ps.Z. Login as "anonymous" or "ftp" and use your email address as password. The file is 900 Kb compressed, 131 pages long.

ABSTRACT: The thesis proposes a hybrid framework for categorization based on the framework by Stevan Harnad. First, an extensive review is given of the state of the art in categorization and categorical perception research. The theory approach to categorization is argued to be the most promising paradigm. Second, Harnad's framework, in which categorical perception and higher-order categorization are combined in one explanatory model, is treated. I suggest a way to incorporate the theory approach. Third, this framework is related to the symbolic/connectionist paradigms. I argue that a hybrid approach together with a sensori-motor component is the best option so far for implementing the framework. A second part of the thesis investigates the connectionist part of the hybrid framework. Here, neural networks consisting of several CALM Map modules (based on the Categorization and Learning Module) are trained to learn to topologically order lines of various orientations. It is shown that this network exhibits categorical perception.

Adriaan Tijsseling

From david_field at qmrelay.mail.cornell.edu Mon Sep 19 11:34:23 1994
From: david_field at qmrelay.mail.cornell.edu (david field)
Date: 19 Sep 1994 11:34:23 -0400
Subject: Position available
Message-ID: 

Subject: Time: 3:39 PM
OFFICE MEMO  Position available  Date: 9/18/94

The following position will be available in 1995. Candidates with connectionist and computational approaches to cognitive phenomena are especially encouraged to apply. There is a possibility that a second position in this area will also become available.
______________________________

Cognitive Psychologist, Cornell University

The Department of Psychology at Cornell University is considering candidates for a tenure-track assistant professorship in cognition.
Areas of specialization include but are not limited to: memory, attention, language and speech processing, concepts, knowledge representation, reasoning, problem solving, judgment and decision making, perception, motor control and action. Researchers with computational, mathematical, developmental, cross-cultural, or neuroscience perspectives, among others, are encouraged to apply. The position will begin in August, 1995. Review of applications will begin December 1, 1994. Cornell University is an Equal Opportunity/Affirmative Action Employer. Interested applicants should submit a letter of application that includes one or more key words indicating their specific area(s) of interest or specialization, a curriculum vitae, reprints or preprints of completed research, and letters of recommendation sent directly from three referees to:

Secretary, Cognitive Psychology Search Committee
Department of Psychology, Uris Hall, Cornell University
Ithaca, NY 14853-7601, USA.
email: kas10 at cornell.edu
FAX: 607-255-8433
Voice: 607-255-6364

From hinton at cs.toronto.edu Mon Sep 19 11:32:11 1994
From: hinton at cs.toronto.edu (Geoffrey Hinton)
Date: Mon, 19 Sep 1994 11:32:11 -0400
Subject: Send us your data
Message-ID: <94Sep19.113216edt.797@neuron.ai.toronto.edu>

We are planning to create a database of tasks for evaluating supervised neural network learning procedures (both classification and regression). The main aim of the enterprise is to make it as easy as possible for neural net researchers to compare the performance of their latest algorithm with the performance of many other techniques on a wide variety of tasks. A subsidiary aim is to encourage neural net researchers to use systematic ways of setting "free parameters" in their algorithms (such as the number of hidden units, the weight decay, etc.). It's easy to fudge these parameters on a single dataset, but these fudges become far more evident when the same algorithm is applied to many different tasks. If you have a real-world dataset with 500 or more input-output cases that fits the criteria below, we would really like to get it from you. You will be helping the research community, and by including it in this database you will ensure that lots of different methods get tried on your data.

WHAT'S THE POINT OF YET ANOTHER DATABASE

1. Since some neural network learning procedures are quite slow, we want a database in which there is a designated training and test set for each task. We don't want to train many different times on different subsets of the data, testing on the remainder. To avoid excessive sampling error we want the designated test set to be quite large, so even though the aim is to evaluate performance on smallish training sets, we will avoid tasks where there is only a small amount of data available for testing. The justification for only using a single way of splitting the data into training and test sets is this: for a given amount of computer time, it's better to evaluate performance on many different tasks once than to evaluate performance on one task many times, since this cuts down on the noise caused by the random choice of task.

2. To make life easy we want to focus on tasks in which there are no missing values and all of the inputs are numerical. This could be viewed as tailoring the database to make life easy for algorithms that are limited in certain ways. That is precisely our intention.

3. We want all the tasks to be in the same format (which they are not if a researcher gets different tasks from different databases).

4. We want the database to include results from many different algorithms, with an email guarantee that NONE of the free parameters of the algorithms were tuned by looking at the results on the test data. So for a result to be entered, the researcher will have to specify how all the free parameters were set, and the same recipe should preferably be used for all the tasks in the database. It's fine to say "I always use 80 hidden units because that worked nicely on the first example I ever tried", just so long as there is a reason. We plan to run quite a few of the standard methods ourselves, so other researchers will be able to just run their favorite method and get a fair comparison with other methods (a minimal sketch of this train-once, test-once discipline follows this list).
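Concretely, and purely as an illustration of the discipline in points 1 and 4 (the file names, file format, and the ridge-regression stand-in for a learner are placeholders, not part of the proposal): hyperparameters are fixed in advance, there is one training run on the designated training set, and one error figure comes from the designated test set.

import numpy as np

# Free parameter fixed in advance and reused for every task,
# never tuned on any test set (an illustrative choice).
RIDGE_LAMBDA = 1.0

def fit_ridge(X, y, lam=RIDGE_LAMBDA):
    # Ridge regression stand-in for "your favorite method":
    # w = (X'X + lam*I)^{-1} X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def evaluate_task(train_file, test_file):
    # One designated split per task: train once, test once.
    # Hypothetical file format: one case per row, inputs then target.
    train, test = np.loadtxt(train_file), np.loadtxt(test_file)
    Xtr, ytr = train[:, :-1], train[:, -1]
    Xte, yte = test[:, :-1], test[:, -1]
    w = fit_ridge(Xtr, ytr)
    return float(np.mean((Xte @ w - yte) ** 2))  # single reported test error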
WHAT KINDS OF TASKS WILL WE INCLUDE

In addition to excluding missing values and nominal attributes, we will initially exclude time series tasks, so the order of the examples will be unimportant. Each task will have a description that includes known limits on input and output variables. We will include both real and synthetic tasks. For the synthetic tasks the description will specify the correct generative model (i.e. exactly how the data was generated), but researchers will only be allowed to use the training data for learning. They will have to pretend that they do not know the correct generative model when they are setting the free parameters of their algorithm. Tasks will vary in the following ways:
- dimensionality of input;
- dimensionality of output;
- degree of non-linearity;
- noise level and type of noise in both input and output;
- number of irrelevant variables;
- the existence of topology on the input space and known invariances in the input-output mapping.

WHERE WILL THE TASKS COME FROM?

Many of them will come from existing databases (e.g. the UC Irvine machine learning database). Hopefully other connectionists will provide us with data or with pointers to data or databases. To be useful, the data should have few missing values (we'll simply leave out those cases), no nominal attributes, and at least 500 cases (preferably many more) so that the test set can be large. In addition to pointers to suitable datasets, now would be a good time to comment on the design of this database, since it will soon be too late, and we would like this effort to be of use to as many fellow researchers as possible.

From UBJTP69 at CCS.BBK.AC.UK Mon Sep 19 11:20:00 1994
From: UBJTP69 at CCS.BBK.AC.UK (Gareth)
Date: Mon, 19 Sep 94 11:20 BST
Subject: Jobs Available
Message-ID: 

CENTRE FOR SPEECH AND LANGUAGE
Birkbeck College, University of London
RESEARCH ASSISTANTS AND SYSTEMS MANAGER

Applications are invited for four research and research-support positions in the Centre for Speech and Language at Birkbeck College, to work on experimental and computational research into spoken language comprehension in normal and language-impaired populations, with Professors William Marslen-Wilson and Lorraine K. Tyler. All of these positions have a starting date of January 1st 1995, and three positions, funded by an MRC Programme Grant, are potentially five-year appointments, to December 1999. The fourth position, funded by an ESRC project grant, is available for 3 years, until December 1997.

Position 1: Research Assistant
This position is to support current experimental and computational research into the representation and processing of lexical form, focussing on the phonological aspects of speech comprehension, and working with William Marslen-Wilson and Gareth Gaskell.
Candidates should have a background in experimental psycholinguistics and linguistics. Experience in computational modelling using connectionist techniques would be an advantage. This MRC-funded position is available for five years from January 1995. Salary will be on the 1A scale (16075-19141 English pounds, inclusive of London Allowance).

Position 2: Research Assistant
This position is to support experimental research into language comprehension in normal and language-impaired populations, ranging from lexical access to syntactic parsing and semantic interpretation, and working primarily with Lorraine K. Tyler. Candidates should have a background in cognitive psychology and/or psycholinguistics, and clinical experience would be an advantage. This MRC-funded position is available for two years from January 1995, with a strong possibility of extension for a further three years. Salary will be on the 1A scale (16075-19141 English pounds, inclusive of London Allowance).

Position 3: Research Assistant
This position is to support computational and experimental research into the mental representation and processing of English inflectional and derivational morphology, working with William Marslen-Wilson and Mary Hare (at UCSD). Candidates should have a background in language and connectionism, with experience both of experimentation and modelling. This ESRC-funded post is available for three years from January 1995. Salary is expected to be on the 1A scale (16075-23087 English pounds, inclusive of London Allowance).

Position 4: Half-time Systems Manager
Applications are invited for a half-time post supporting the computing needs of the Centre for Speech and Language. Applicants should have experience working with UNIX/C systems and PC/Mac-based networks. As well as supporting the research centre's computer network, the successful candidate will be responsible for the maintenance and possibly development of the experimental software used by the research group. An interest in Experimental Psychology would therefore be an advantage. The post would be suitable for someone wishing to pursue a part-time research degree within the research group or elsewhere in London. This MRC-funded position is tenable from January 1995, for up to five years. Salary is on the AR2 scale at 9570 English pounds, including London Allowance.

To apply for any of these posts, send three copies of your CV, plus the names and addresses of 3 referees, to: Professor William Marslen-Wilson, Department of Psychology, Birkbeck College, Malet St., London WC1E 7HX. Fax: (44)-(0)71-631-6312. Email: ubjta38 at cu.bbk.ac.uk. Please make clear which post you are applying for. Closing date for applications is October 14th, 1994.

From prechelt at ira.uka.de Tue Sep 20 10:51:46 1994
From: prechelt at ira.uka.de (Lutz Prechelt)
Date: Tue, 20 Sep 1994 16:51:46 +0200
Subject: Send us your data
In-Reply-To: Your message of "Mon, 19 Sep 1994 11:32:11 EDT." <94Sep19.113216edt.797@neuron.ai.toronto.edu>
Message-ID: <"irafs2.ira.733:20.09.94.14.51.41"@ira.uka.de>

> We are planning to create a database of tasks for evaluating supervised neural
> network learning procedures (both classification and regression). The main

You may be interested to know that I started a similar project earlier this year, which will be finished in at most a few weeks. My benchmark collection, called Proben1, contains 45 datasets for 15 different learning problems from 12 different domains. All but one of these problems stem from the UCI machine learning databases archive. I chose an approach that differs from yours in a few points:
- small datasets, too;
- I use a smaller part of the dataset as test set (25%) but use three different partitionings instead;
- all data partitionings also include an exactly specified validation set (if one is needed; otherwise this, too, is part of the training set);
- problems with nominal attributes;
- problems with missing values;
- canonical input and output representation (range 0...1; one simple way to produce such a representation is sketched below).
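The canonical 0...1 representation mentioned in the last point can be produced, for example, by a per-attribute linear rescaling; whether Proben1 uses exactly this rule is not stated in the message, so treat the following as an illustrative assumption rather than the collection's definition.

import numpy as np

def rescale_01(data):
    # Map each attribute (column) linearly onto the range 0...1.
    # Constant columns are mapped to 0 to avoid division by zero.
    lo, hi = data.min(axis=0), data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (data - lo) / span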
Nevertheless, you may want to have a look at my collection. I believe it would be good if you would, for instance, use the same (very simple) file format for the data in your collection, so that researchers can read the data from both collections using the same input procedure. My collection will be installed for anonymous ftp in the neural bench archive at CMU. The technical report describing it will be announced on this mailing list and will be available from neuroprose. [ Geoffrey, I'll send the draft version of my report to you by personal mail. ]

Lutz

Lutz Prechelt (email: prechelt at ira.uka.de)             | Whenever you
Institut fuer Programmstrukturen und Datenorganisation   | complicate things,
Universitaet Karlsruhe; 76128 Karlsruhe; Germany         | they get
(Voice: ++49/721/608-4068, FAX: ++49/721/694092)         | less simple.

From IDROR at miavx1.acs.muohio.edu Tue Sep 20 15:18:26 1994
From: IDROR at miavx1.acs.muohio.edu (IDROR@miavx1.acs.muohio.edu)
Date: Tue, 20 Sep 1994 15:18:26 -0400 (EDT)
Subject: Position available
Message-ID: <01HHC8U527RY9JGYAZ@miavx1.acs.muohio.edu>

ASSISTANT PROFESSOR OF COGNITIVE PSYCHOLOGY - MIAMI UNIVERSITY. The Department of Psychology at Miami University anticipates up to two tenure-track positions in cognitive psychology, beginning August 1995. The area of specialization is open, but applicants with a strong background in cognitive science and experience in computational modelling/cognitive simulations will be given special attention. Responsibilities include graduate and undergraduate teaching in the areas of cognitive science and quantitative methods, supervision of doctoral research, and continuing research in the applicant's area of interest. Women and minorities are especially encouraged to apply. Salary will be commensurate with training, research productivity, and experience. Applicants should submit a letter describing research and teaching interests and experience, a vita, representative reprints, and at least three letters of recommendation to Richard C. Sherman, Cognitive Search Committee Chair, Department of Psychology, Miami University, Oxford, Ohio 45056. Review of applications will begin January 2, 1995. Miami University is an affirmative action, equal opportunity employer.

From kehagias at eng.auth.gr Wed Sep 21 10:29:05 1994
From: kehagias at eng.auth.gr (Thanos Kehagias)
Date: Wed, 21 Sep 94 17:29:05 +0300
Subject: machine learning databases
Message-ID: <9409211429.AA22262@vergina.eng.auth.gr>

on the subject of machine learning databases, here is a request. if one has a pointer to the kind of data i need (see next paragraph), or if the people setting up databases now would like to consider including this kind of data, i will be most grateful.

i am looking at the problem of Time Series Classification. in other words, there is a number of possible sources, each producing a time series. previous instances of these time series have been observed, either labelled (supervised learning) or unlabelled (unsupervised learning).
now a new time series is observed and one wants to decide which source generated it. there is a lot of algorithms in the literature that do this kind of thing (i have some of my own, even) and everyone seems to be using their own example problem and/or dataset. a classical example of this problem is, of course, phoneme recognition. what i have not been able to find is some standard datasets to be used for benchmarks (e.g. sonar, radar signals, EEG, ECG data and so on). i mean the raw time series, not some kind of preprocessed data where the whole time series is reduced to a static feature vector. does anyone know of any such data in the public domain (except speech data)? i think this would be a useful benchmark, and the kind of thing that i have not seen in, for instance, the uci collection.

Thanks a lot,
Thanasis

From petsche at scr.siemens.com Wed Sep 21 16:02:34 1994
From: petsche at scr.siemens.com (Thomas Petsche)
Date: Wed, 21 Sep 1994 16:02:34 -0400
Subject: CALL FOR PARTICIPATION - NIPS workshop on novelty detection
Message-ID: <199409212002.QAA02008@puffin.scr.siemens.com>

There will be a NIPS workshop on novelty detection and adaptive system monitoring which will focus on novelty detection, unsupervised learning, and algorithms designed to monitor a system to detect failures or incorrect behavior. A more detailed description is attached below. If you would be interested in making a presentation at this workshop, please send email to petsche at scr.siemens.com. We are interested in presentations on
* novelty detection and unsupervised learning algorithms;
* models of biological novelty detection and unsupervised learning systems;
* real-world examples of monitoring or novelty detection problems -- whether you have a final solution yet or not.

TITLE
Novelty detection and adaptive system monitoring

DESCRIPTION
The purpose of the discussion is to bring together researchers working on different real-world system monitoring tasks and those working on novelty detection algorithms and models, in order to hasten the development of broadly applicable adaptive monitoring algorithms.

Unexpected failure of a machine or system can have severe and expensive consequences. One of the most infamous examples is the sudden failure of military helicopter rotor gearboxes, which has led to the complete loss of the helicopter and all aboard. There are many similar, more mundane examples. The unexpected failure of a motor in a paper mill causes a loss of the product in production as well as lost production time while the motor is replaced. A computer or network overload, due to normal traffic or a virus invasion, can lead to a system crash that can cause loss of data and downtime. In these examples and others, it can be cost-effective to ``monitor'' the system of interest and signal an operator when the monitored conditions indicate an imminent failure. Thus, one might assign a technician to listen to all the fire pumps on a ship and replace any that starts to sound like it is in danger of failing. This is analogous to periodically glancing at the fuel gauge in your car to make sure you do not run out of gas.

An adaptive system monitor is an adaptive system that estimates the condition of the monitored system from a set of periodic measurements. This task is typically complicated by the fact that the measurements are complex and high-dimensional. Adaptation is necessary since the measurements will depend on the peculiarities of the system being monitored and its environment.
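As one concrete illustration of such a monitor (an illustration only; no presenter's method is being described), a simple approach is to fit an autoassociator to measurements collected during normal operation and raise an alarm whenever a new measurement is reconstructed poorly. The linear (PCA) autoassociator, the subspace dimension, and the three-sigma alarm rule below are all assumptions made for the sketch.

import numpy as np

def fit_monitor(X_normal, k=3):
    # Fit a linear autoassociator (principal subspace) to measurement
    # vectors gathered during normal operation; rows are measurements.
    mu = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    V = Vt[:k].T                       # basis of the 'normal' subspace
    resid = (X_normal - mu) - (X_normal - mu) @ V @ V.T
    errs = np.sum(resid ** 2, axis=1)  # reconstruction errors on normal data
    thresh = errs.mean() + 3.0 * errs.std()  # assumed alarm threshold
    return mu, V, thresh

def is_novel(x, mu, V, thresh):
    # High reconstruction error: unlike anything seen during training.
    r = (x - mu) - V @ (V.T @ (x - mu))
    return float(r @ r) > thresh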
This workshop will focus on the use of novelty detection for the problem of system monitoring. A novelty detector is a device or algorithm which is trained on a set of examples and learns to recognize or reproduce those examples. Any new example that is significantly different from the training set is identified as ``novel'' because it is unlike any example in the training set. We will discuss various approaches to novelty detection; how it differs from multiple-class supervised learning and purely unsupervised learning; biological relevance and how to use what is known about biological systems; complexity issues due to single-class data; how to detect only certain types of novelty; and the use of novelty detection algorithms on real-world devices such as helicopter rotor gears, electric motors, computers, networks, automobiles, etc.

FORMAT
We aim to have presentations about real-world monitoring problems, novelty detection and monitoring algorithms, and biological and psychological models that exhibit novelty detection, all aimed to stir up questions and discussions.

WORKSHOP CHAIRS
Thomas Petsche and Stephen J. Hanson, Siemens Corporate Research, Inc.
Mark Gluck, Rutgers University

VERY BRIEF RESUMES
Thomas Petsche leads a two-year-old effort to develop an electric motor monitoring system. Stephen J. Hanson is the head of the Learning Systems Department at SCR and a frequent contributor to the motor monitoring project. Mark Gluck is a professor of neurobiology at Rutgers University and has authored several papers on a model of the hippocampus based on a neural network auto-associator which functions as a novelty detector.

From wahba at stat.wisc.edu Wed Sep 21 16:30:01 1994
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Wed, 21 Sep 94 15:30:01 -0500
Subject: gacv-paper available-deg.fdm.sig
Message-ID: <9409212030.AA08578@hera.stat.wisc.edu>

The following paper is available by ftp in the ftp directory ftp.stat.wisc.edu/pub/wahba in the file gacv.ps.gz:

A Generalized Approximate Cross Validation for Smoothing Splines with Non-Gaussian Data
by Dong Xiang and Grace Wahba

Abstract: We consider the model

Prob {Y_i = 1} = exp{f(t_i)}/(1 + exp{f(t_i)}),
Prob {Y_i = 0} = 1/(1 + exp{f(t_i)}),

where t is a vector of predictor variables, t_i is the vector of predictor variables for the i-th subject/patient/instance, and Y_i is the outcome (classification) for the i-th subject. f(\cdot) is supposed to be a `smooth' function of t, and the goal is to estimate f by choosing f in an appropriate class of functions to minimize

-log likelihood {Y_1, ..., Y_n | f} + \lambda J(f),

where J(f) is an appropriate penalty functional which restricts the degrees of freedom for signal attributed to f. Our results concentrate on J(f) a `smoothness' penalty, which results in spline and related (e.g. RBF) estimates. We propose a Generalized Approximate Cross Validation score (GACV) for estimating $\lambda$ (internally) from a relatively small data set. The GACV score is derived by first obtaining an approximation to the leaving-out-one cross validation function and then, in a step reminiscent of that used to get from leaving-out-one cross validation to GCV in the Gaussian data case, we replace diagonal elements of certain matrices by $\frac{1}{n}$ times the trace.
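(In display form, and purely as a standard restatement of the estimation problem just described using the Bernoulli log-likelihood, not text from the paper:)

\[
  \Pr\{Y_i = 1\} = \frac{e^{f(t_i)}}{1 + e^{f(t_i)}}, \qquad
  f_\lambda = \arg\min_{f} \; -\sum_{i=1}^{n}
  \Bigl[ Y_i f(t_i) - \log\bigl(1 + e^{f(t_i)}\bigr) \Bigr]
  + \lambda J(f).
\]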
Other papers of potential interest for supervised machine learning in
the directory ftp.stat.wisc.edu/pub/wahba are in the files (some
previously announced):

  nonlin-learn.ps.gz
  ml-bib.ps.gz
  soft-class.ps.gz
  ssanova.ps.gz
  theses/ywang.thesis.README
  nips6.ps.gz
  tuning-nwp.ps.gz

Department of Statistics, University of Wisconsin-Madison
wahba at stat.wisc.edu
xiang at stat.wisc.edu

PS to Geoff Hinton - The database is a great idea!!

From risto at cs.utexas.edu Thu Sep 22 01:26:07 1994
From: risto at cs.utexas.edu (Risto Miikkulainen)
Date: Thu, 22 Sep 94 00:26:07 -0500
Subject: Connectionist NLP software available
Message-ID: <9409220526.AA08371@cascais.cs.utexas.edu>

The code and data for the DISCERN story processing model and the SPEC
sentence understanding model are now available from our ftp/WWW site.
These software packages are not general-purpose neural network
simulators, but cleaned-up code for specific connectionist NLP models.
I am making them available because they contain implementations of
general ideas for debugging complex neural network systems through an
X11 graphics interface, for analyzing the performance of the models,
and for running experiments with such models. I have tried to pay
special attention to making the code portable across platforms (it is
based on ANSI/K&R C and X11R5 with Athena Widgets), and to making the
software easy to modify and build on. I hope the software can serve as
a starting point for other experiments in connectionist NLP --- where
building simulation programs from scratch turned out to be a heck of a
lot of work :-)

To get a quick feel for what these programs are like (without having
to port them), take a look at the DISCERN demo under WWW at
http://www.cs.utexas.edu/~nn/discern.html or by "telnet
cascais.cs.utexas.edu 30000". The demo runs remotely on
cascais.cs.utexas.edu, with a display on your X11 screen.

-- Risto Miikkulainen

Here's a short description of the software:

DISCERN
-------

DISCERN is a large modular system for processing script-based stories.
It includes component models for lexical processing, episodic memory,
and parsing, paraphrasing and question answering.
The main reference is Miikkulainen (1993): "Subsymbolic Natural
Language Processing: An Integrated Model of Scripts, Lexicon and
Memory", Cambridge, MA: MIT Press (a precis of this book was recently
posted to the connectionists list).

The DISCERN software consists of four components: (1) the full DISCERN
performance system (i.e. the "demo" program), (2) training the simple
recurrent and feedforward backprop networks for parsing, generating,
and question answering, (3) training the lexicon feature maps and
Hebbian associative connections, and (4) training the hierarchical
feature maps of the episodic memory. All these are available by
anonymous ftp from cs.utexas.edu:pub/neural-nets/discern, or in WWW,
from http://www.cs.utexas.edu/~nn.

SPEC
----

SPEC is a model of parsing sentences with embedded relative clauses.
It consists of the parser (a simple recurrent network), the stack (a
RAAM network) and the segmenter (feedforward) networks that are
trained together and generalize to novel sentence structures. For a
quick description of the model, see our paper in AAAI-94, or a longer
tech report version from our ftp/www site. The SPEC software and
papers are available by anonymous ftp from
cs.utexas.edu:pub/neural-nets/spec or in WWW, from
http://www.cs.utexas.edu/~nn.
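[Both DISCERN and SPEC are built around simple recurrent (Elman-style)
networks. For readers who have not used one, here is a minimal sketch
of the SRN forward pass -- a Python illustration by the editor, not
the released C code; the layer sizes and random weights are arbitrary.
The previous hidden state is fed back as a context input, giving the
network a memory of the sequence.]

  import numpy as np

  # Minimal Elman-style simple recurrent network (SRN) forward pass:
  # the hidden state from the previous time step acts as a "context"
  # layer and is fed back as additional input at the next step.

  rng = np.random.default_rng(0)
  n_in, n_hid, n_out = 5, 8, 5
  W_xh = 0.5 * rng.standard_normal((n_in, n_hid))    # input -> hidden
  W_hh = 0.5 * rng.standard_normal((n_hid, n_hid))   # context -> hidden
  W_hy = 0.5 * rng.standard_normal((n_hid, n_out))   # hidden -> output

  def srn_forward(sequence):
      h = np.zeros(n_hid)                            # initial context
      outputs = []
      for x in sequence:
          h = np.tanh(x @ W_xh + h @ W_hh)           # new hidden state
          outputs.append(1 / (1 + np.exp(-(h @ W_hy))))  # sigmoid output
      return outputs

  # Toy usage: a sequence of five one-hot "word" vectors.
  seq = [np.eye(n_in)[i] for i in (0, 2, 1, 4, 3)]
  print(srn_forward(seq)[-1])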
From mbrown at aero.soton.ac.uk Thu Sep 22 17:31:33 1994
From: mbrown at aero.soton.ac.uk (Martin Brown)
Date: Thu, 22 Sep 94 17:31:33 BST
Subject: Post Doc and Post Grad jobs
Message-ID: <23530.9409221631@aero.soton.ac.uk>

Could you please post the following announcement on your list.

Department of Aeronautics and Astronautics

Post-doctoral Research Fellow and PhD Research Studentship in
Intelligent (Neurofuzzy based) State Estimation for Dynamic Processes

Applications are invited for a Post-doctoral Research Fellow from
researchers nearing completion of, or having completed, a PhD in
NeuroFuzzy Systems, Probability and Stochastic Processes, Advanced
Control Theory or Approximation Theory, for a 4-year EPSRC research
grant on the development of a new theory of self-organising
neuro-fuzzy state estimators. Salary will be within the ACRA range
13941-20953 UK pounds.

Applications are also invited for a Research Studentship, tenable over
3 years, supported by a DRA-funded research grant with particular
reference to intelligent estimation and guidance problems. Support
will be at the EPSRC studentship level with additional allowances (all
educational fees are paid and a living allowance is provided).

Informal enquiries for both posts and applications for the Research
Studentship only should be directed to Professor C.J. Harris, Advanced
Systems Research Group, Department of Aeronautics and Astronautics,
University of Southampton, England, Tel (0703) 592353, Fax (0703)
593058, email cjh at aero.soton.ac.uk

Application forms for the Post-doctoral Research Assistant may be
obtained from the Personnel Department (R/78), University of
Southampton, Highfield, Southampton, SO17 1BJ, UK. Telephone (0703)
592421. The closing date for the RETURN of completed application
forms, quoting reference number R/78, is 31st October.

From prechelt at ira.uka.de Thu Sep 22 12:50:54 1994
From: prechelt at ira.uka.de (Lutz Prechelt)
Date: Thu, 22 Sep 1994 18:50:54 +0200
Subject: machine learning databases
In-Reply-To: Your message of "Wed, 21 Sep 1994 17:29:05 +0300."
             <9409211429.AA22262@vergina.eng.auth.gr>
Message-ID: <"irafs2.ira.791:22.09.94.16.51.25"@ira.uka.de>

> what i have not been able to find
> is some standard datasets to be used for benchmarks (e.g. sonar,
> radar signals, EEG, ECG data and so on). i mean the raw time series,
> not some kind of preprocessed data where the whole time series is
> reduced to a static feature vector. does anyone know of any such data
> in the public domain (except speech data)?

From andy at twinearth.wustl.edu Thu Sep 22 20:14:54 1994
From: andy at twinearth.wustl.edu (Andy Clark)
Date: Thu, 22 Sep 94 19:14:54 CDT
Subject: Philosophy/Neuroscience/Psychology technical reports
Message-ID: <9409230014.AA07897@twinearth.wustl.edu>

This is to announce a new archive for technical reports for the
Philosophy/Neuroscience/Psychology program at Washington University.
Reports are available in a number of areas of cognitive science and
philosophy of mind.

Reports are stored in various formats -- most are in ASCII or
compressed Postscript. The former (files with .ascii) can be retrieved
and read or printed directly; the latter (files with .ps.Z) must be
retrieved, uncompressed (with "uncompress ") and printed on a laser
printer. Some papers are stored in both formats for convenience.

To retrieve a report -- e.g., clark.folk-psychology.ascii:

1. ftp thalamus.wustl.edu
2. Login as "anonymous" or "ftp"
3. Password:
4. cd pub/pnp/papers
5. get clark.folk-psychology.ascii

An index of papers in the archive so far is included below. The list
will be expanding frequently; an updated index can be found in the
file INDEX in pub/pnp/papers.

Andy Clark (andy at twinearth.wustl.edu)
Philosophy/Neuroscience/Psychology Program
Washington University
St Louis, MO 63130.

----------------------------------------------------------------------------

Archive of Philosophy/Neuroscience/Psychology technical reports
(Washington University), on thalamus.wustl.edu in pub/pnp/papers.

94-01 kwasny.sraam.ps
Tail-Recursive Distributed Representations and Simple Recurrent
Networks
Stan C. Kwasny & Barry L. Kalman, Department of Computer Science

Representation poses important challenges to connectionism. The
ability to structurally compose representations is critical in
achieving the capability considered necessary for cognition. We
provide a technique for mapping any ordered collection (forest) of
hierarchical structures (trees) into a set of training patterns which
can be used effectively in training a simple recurrent network (SRN)
to develop RAAM-style distributed representations. The advantages of
our technique are three-fold: first, the fixed-valence restriction on
RAAM structures is removed; second, representations correspond to
ordered forests of labeled trees, thereby extending what can be
represented; third, training can be accomplished with an
auto-associative SRN, making training much more straightforward.

94-02 kalman.trainrec.ps
TRAINREC: A System for Training Feedforward & Simple Recurrent
Networks Efficiently and Correctly
Barry L. Kalman & Stan C. Kwasny, Department of Computer Science

TRAINREC is a system for training feedforward and recurrent neural
networks that incorporates several ideas. It uses the more efficient
conjugate gradient method; we derive a new error function with several
desirable properties; we argue for skip (shortcut) connections where
appropriate, and for a sigmoidal function yielding values in the
[-1,1] interval; we use singular value decomposition to avoid
overanalyzing the input feature space. We have made an effort to
discover methods that work in both theory and practice, motivated by
considerations ranging from efficiency of training to accuracy of the
result.
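[Since 94-02 is about the combination of conjugate-gradient training,
skip connections, and a [-1,1] sigmoid, here is a minimal independent
sketch of that combination -- not the TRAINREC code. The toy task, the
architecture sizes, and the use of SciPy's general-purpose CG
optimizer are editorial assumptions.]

  import numpy as np
  from scipy.optimize import minimize

  # One-hidden-layer net with tanh units (values in [-1,1]) plus direct
  # input-to-output "skip" connections, trained by conjugate gradients
  # (numerical gradients; slow but fine for a sketch).

  rng = np.random.default_rng(0)
  X = rng.uniform(-1, 1, size=(200, 2))
  y = np.sin(np.pi * X[:, 0]) * X[:, 1]          # toy regression target

  n_in, n_hid = 2, 8
  shapes = [(n_in, n_hid), (n_hid,), (n_hid, 1), (n_in, 1), (1,)]
  sizes = [int(np.prod(s)) for s in shapes]

  def unpack(w):
      parts, i = [], 0
      for s, n in zip(shapes, sizes):
          parts.append(w[i:i + n].reshape(s)); i += n
      return parts  # W1, b1, W2, Wskip, b2

  def predict(w, X):
      W1, b1, W2, Wskip, b2 = unpack(w)
      h = np.tanh(X @ W1 + b1)                   # hidden layer in [-1,1]
      return (h @ W2 + X @ Wskip + b2).ravel()   # skip connections added

  def loss(w):
      return 0.5 * np.mean((predict(w, X) - y) ** 2)

  w0 = 0.1 * rng.standard_normal(sum(sizes))
  res = minimize(loss, w0, method="CG")          # conjugate-gradient fit
  print("final MSE:", 2 * res.fun)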
94-03 chalmers.computation.{ps,ascii} A Computational Foundation for the Study of Cognition David J. Chalmers, Department of Philosophy Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? This article develops a systematic framework that addresses all of these questions. A careful analysis of computation and its relation to cognition suggests that the ambitions of artificial intelligence and the centrality of computation in cognitive science are justified. 94-04 chalmers.content.{ps,ascii} The Components of Content David J. Chalmers, Department of Philosophy. Are the contents of thought in the head of the thinker, in the environment, or in a combination of the two? In this paper I develop a two-dimensional intensional account of content, decomposing a thought's content into its notional content -- which is internal to the thinker -- and its relational content. Notional content is fully semantic, having truth-conditions of its own; and notional content is what governs the dynamics and rationality of thought. I apply this two-dimensional picture to dissolve a number of problems in the philosophy of mind and language. 94-05 chalmers.bibliography.{intro,1,2,3,4,5} Contemporary Philosophy of Mind: An Annotated Bibliography David J. Chalmers, Department of Philosophy This is an annotated bibliography of work in the philosophy of mind from the last thirty years. There are about 1700 entries, divided into five parts: (1) Consciousness and Qualia; (2) Mental Content; (3) Psychophysical Relations and Psychological Explanation; (4) Philosophy of Artificial Intelligence; (5) Miscellaneous Topics. 94-06 clark.trading-spaces.ascii Trading Spaces: Computation, Representation, and the Limits of Learning Andy Clark (Dept. of Philosophy) and Chris Thornton (U. of Sussex) We argue that existing learning algorithms are often poorly equipped to solve problems involving a certain type of (important and widespread) statistical regularity, which we call `type-2 regularity'. The solution is to trade achieved representation against computational search. We investigate several ways in which such a trade-off may be pursued. The upshot is that various kinds of incremental learning (e.g. Elman 1991) emerge not as peripheral but as absolutely central and essential features of successful cognition. 94-07 clark.folk-psychology.ascii Dealing in Futures: Folk Psychology and the Role of Representations in Cognitive Science. Andy Clark, Department of Philosophy. The paper investigates the Churchlands' long-standing critique of folk psychology. I argue that the scientific advances upon which the Churchlands so ably draw will have their most profound impact NOT upon our assessment of the folk discourse but upon our conception of the role of representations in the explanatory projects of cognitive science. Representation, I suggest, will indeed be reconceived, somewhat marginalized, and will emerge as at best one of the objects of cognitive scientific explanation rather than as its foundation. 94-08 clark.autonomous-agents.ascii Autonomous Agents and Real-Time Success: Some Foundational Issues. 
Andy Clark, Department of Philosophy

Recent developments in situated robotics and related fields claim to
challenge the pervasive role of internal representations in the
production of intelligent behavior. Such arguments, I show, are both
suggestive and misguided. The true lesson, I argue, lies in forcing a
much-needed re-evaluation of the notion of internal representation
itself. The present paper begins the task of developing such a notion
by pursuing two concrete examples of fully situated yet
representation-dependent cognition: animate vision and motor
emulation.

94-09 mccann.gold-market.ps
A Neural Network Model for the Gold Market
Peter J. McCann and Barry L. Kalman, Department of Computer Science

A neural network trend predictor for the gold bullion market is
presented. A simple recurrent neural network was trained to recognize
turning points in the gold market based on a to-date history of ten
market indices. The network was tested on data that was held back from
training, and a significant amount of predictive power was observed.
The turning point predictions can be used to time transactions in the
gold bullion and gold mining company stock index markets to obtain a
significant paper profit during the test period.

94-10 chalmers.consciousness.{ps,ascii}
Facing Up to the Problem of Consciousness
David Chalmers, Department of Philosophy

The problems of consciousness fall into two classes: the easy problems
and the hard problems. The easy problems include reportability,
accessibility, the difference between wakefulness and sleep, and the
like; the hard problem is subjective experience. Most recent work
attacks only the easy problems. I illustrate this with a critique, and
argue that reductive approaches to the hard problem must inevitably
fail. I outline a new framework for the nonreductive explanation of
consciousness, in terms of basic principles connecting physical
processes to experience. Using this framework, I sketch a candidate
theory of conscious experience, revolving around principles of
structural coherence and organizational invariance, and a
double-aspect theory of information.

94-11 chalmers.qualia.{ps,ascii}
Absent Qualia, Fading Qualia, Dancing Qualia
David Chalmers, Department of Philosophy

In this paper I use thought-experiments to argue that systems with the
same fine-grained functional organization will have the same conscious
experiences, no matter what they are made out of. These
thought-experiments appeal to scenarios involving gradual replacement
of neurons by silicon chips. I argue against the "absent qualia"
hypothesis by using a "fading qualia" scenario, and against the
"inverted qualia" hypothesis by using a "dancing qualia" scenario. The
conclusion is that absent qualia and inverted qualia are logically
possible but empirically impossible, leading to a kind of nonreductive
functionalism.

94-12 christiansen.language-learning.ps
Language Learning in the Full or, Why the Stimulus Might Not be So
Poor, After All
Morten Christiansen, Department of Philosophy

Language acquisition is often said to require a massive innate body of
language-specific knowledge in order to overcome the poverty of the
stimulus. In this picture, language learning merely implies setting a
number of parameters in an internal Universal Grammar. But is the
primary linguistic evidence really so poor that it warrants such an
extreme nativism? Is there no room for a more empiricist approach to
language acquisition? In this paper, I argue against the extreme
nativist position, discussing recent results from psycholinguistics
and connectionist research on natural language.

94-13 christiansen.nlp-recursion.ps
Natural Language Recursion and Recurrent Neural Networks
Morten Christiansen (Dept. of Philosophy) and Nick Chater (U. of
Oxford)

The recursive structure of natural language was one of the principal
sources of difficulty for associationist models of linguistic
behaviour. More recently, it has become a focus in the debate on
neural network models of language, which many regard as the natural
heirs of the associationist legacy. Can neural networks learn to
handle recursive structures? If not, many would argue, neural networks
can be ruled out as viable models of language processing. In this
paper, we reconsider the implications of natural language recursion
for neural network models, and present simulations in which recurrent
neural networks are trained on simple recursive structures. We suggest
implications for theories of human language processing.

94-14 bechtel.embodied.ps
Embodied Connectionism
William Bechtel, Department of Philosophy

Classical approaches to modeling cognition have treated the cognitive
system as disembodied. This, I argue, is a consequence of a common
strategy of theory development in which researchers attempt to
decompose functions into component functions and assign these
component functions to parts of systems. But one might question the
decomposition that segregates a cognitive system from its environment.
I suggest how connectionist modeling may facilitate the development of
cognitive models that do not so isolate cognitive systems from their
environment. While such an approach may seem natural for lower
cognitive activities, such as navigating an environment, I suggest
that the approach be pursued with higher cognitive functions as well,
using natural deduction as the example.

94-15 bechtel.consciousness.ps
Consciousness: Perspectives from Symbolic and Connectionist AI
William Bechtel, Department of Philosophy

While consciousness has not been a major concern of most AI
researchers, some have tried to explore how computational models might
explain it. I explore how far computational models might go in
explaining consciousness, focusing on three aspects of conscious
mental states: their intrinsic intentionality, a subject's awareness
of the contents of these intentional states, and the distinctive
qualitative character of these states. I describe and evaluate
strategies for developing connectionist systems that satisfy these
aspects of consciousness. My assessment is that connectionist models
can do quite well with regard to the first two components, but face
far greater difficulties in explaining the qualitative character of
conscious states.

94-16 bechtel.language.ps
What Knowledge Must be in the Head in Order to Acquire Language?
William Bechtel, Department of Philosophy

A common strategy in theorizing about the linguistic capacity has
localized it within the mind of the language user. A result has been
that the mind itself is often taken to operate according to linguistic
principles. I propose an approach to modeling linguistic capacity
which distributes that capacity over a cognitive system and external
symbols. This lowers the requirements that must be satisfied by the
cognitive system itself.
For example, productivity and systematicity might not result from
processing characteristics of the cognitive system, but from the
system's interaction with external symbols which themselves adhere to
syntactic principles. To indicate how a relatively weak processing
system can exhibit linguistic competence, I describe a recent model by
St. John and McClelland.

94-17 bechtel.deduction.ps
Natural Deduction in Connectionist Systems
William Bechtel, Department of Philosophy

I have argued elsewhere that the systematicity of human thought might
be explained as a result of the fact that we have learned natural
languages which are themselves syntactically structured. According to
this view, linguistic symbols are external to the cognitive system,
and what the system must learn to do is produce and comprehend such
symbols. In this paper I pursue that idea by arguing that ability in
natural deduction itself may rely on pattern recognition abilities
that enable us to operate on external symbols, rather than on
encodings of rules that might be applied to internal representations.
To support this suggestion, I present a series of experiments with
connectionist networks that have been trained to construct simple
natural deductions in sentential logic.

From jfj at limsi.fr Fri Sep 23 13:04:29 1994
From: jfj at limsi.fr (Jean-Francois Jodouin)
Date: Fri, 23 Sep 94 19:04:29 +0200
Subject: Book Announcement
Message-ID: <9409231704.AA04189@m79.limsi.fr>

This is a book announcement for two French-language books on neural
networks. Though there is a large and ever-growing number of such
texts in English, connectionists required to teach neural networks in
French have difficulty finding the equivalent in their language.
Because there is to my knowledge no French connectionist forum, I have
permitted myself to post this message here, despite the fact that only
a small proportion of you are francophone.

The following two books are now available:

  Reseaux de neurones : Principes et definitions, Jean-Francois
  Jodouin, editions Hermes, Paris, 140p, 1994.

  Reseaux neuromimetiques : Modeles et applications, Jean-Francois
  Jodouin, editions Hermes, Paris, 260p, 1994.

The first is a short presentation intended for the general public. The
second is an introductory textbook intended for graduate students. The
abstract and table of contents of each book follow (translated here
from the French).

===========================================

Reseaux de neurones : Principes et definitions
Jean-Francois Jodouin
editions Hermes, Paris, 140p, 1994

------------------------------------------------------------------------------

Abstract

Neural networks were originally inspired by neurobiology. They have
since drawn on notions from several disciplines, so much so that their
inventors would be hard pressed to recognize them, given the great
diversity of forms they take today. Both objects of study and
practical tools, networks have a role to play in a rapidly growing
number of fields, in research as well as in industry. This book is
aimed at students, teachers and professionals with a general
scientific background. Its goal is to present, in didactic form, the
general principles underlying work on neural networks, principles that
are often only sketched in more specialized texts. It thus provides a
useful first overview of the study of neural networks, and serves as
an introduction to more thorough presentations, in particular to
"Reseaux neuromimetiques : Modeles et applications" (Hermes, 1994).
------------------------------------------------------------------------------

Table of Contents

Foreword
Introduction
1. The neural network
1.1. General overview
1.1.1. The building blocks of a neural network
1.1.2. The architecture of a neural network
1.1.3. The choice of synaptic weights
1.1.4. The network's inputs and outputs
1.1.5. Discussion: local rules and emergent behavior
1.2. Applications of neural networks
1.3. Bibliography
2. Neurons and activation
2.1. A formal model of the neuron
2.1.1. The McCulloch and Pitts model
2.1.2. A more general model
2.1.3. Formal neurons and biological neurons
2.2. The activation function of a formal neuron
2.2.1. Characteristics of activation functions
2.2.2. Some example functions
2.3. The propagation of activation
2.3.1. Known phenomena of activation propagation
2.3.1.1. Feature detection
2.3.1.2. Associative memory
2.3.1.3. Constraint satisfaction
2.3.2. Dynamical behaviors of networks
2.3.2.1. Convergence to a fixed point
2.3.2.2. More complex dynamical behaviors
2.3.3. Example: a small network
2.4. Computational capabilities of a neural network
2.4.1. Some preliminary simplifications
2.4.2. The Perceptron and its limits
2.4.3. The importance of hidden neurons
2.4.4. Nonlinear networks
2.4.5. Formal languages and neural networks
2.5. Example: the McClelland and Rumelhart model
2.5.1. Recognizing letters in context
2.5.2. A model of reading
2.5.3. Behavior of the model
2.5.4. Competition and cooperation
2.6. Bibliography
3. Learning and error
3.1. The learning protocol
3.1.1. The learning procedure
3.1.2. The cross-validation procedure
3.2. Three types of learning
3.2.1. Unsupervised learning
3.2.2. Supervised learning
3.2.3. Semi-supervised learning
3.2.4. Learning problems
3.3. Example: learning the "Odd-5" function
3.4. Bibliography
4. Environment and coding
4.1. General considerations
4.2. Some example codings
4.2.1. Local coding
4.2.2. Semi-distributed coding
4.2.2.1. Coding meaning: micro-features
4.2.2.2. Coding position: receptive fields and coarse coding
4.2.2.3. Coding numerical values: the thermometer
4.2.3. Distributed coding and internal representations
4.3. A network's internal coding
4.3.1. The two-families experiment
4.3.2. Active and passive sentences
4.3.3. Discussion
4.4. Some examples
4.5. Bibliography
General bibliography
Index

===========================================

Reseaux neuromimetiques : Modeles et applications
Jean-Francois Jodouin
editions Hermes, Paris, 260p, 1994

------------------------------------------------------------------------------

Abstract

Neural networks are mathematical and computational models, assemblies
of computing units called formal neurons, whose original inspiration
was a model of the human nerve cell. This neurobiological heritage
forms an important component of the subject, and the concern to
maintain a certain correspondence with the human nervous system has
driven, and continues to drive, a substantial part of research in the
field. Despite this heritage, most of today's work takes as its object
the formal neural network itself rather than its neurobiological
correlate.
Viewed as computing systems, neural networks possess several
properties that make them interesting from a theoretical point of view
and very useful in practice. It is this approach - more technical than
biological - that is taken in the present text. This book is not a
popular-science text. Its purpose is to serve as a starting point and
reference text for those who wish to use or study neural networks. It
is the logical sequel to "Reseaux de neurones : Principes et
definitions" (Hermes, 1994).

------------------------------------------------------------------------------

Table of Contents

Foreword
Introduction
Part I: Layered networks
1. Layered networks: a first look
1.1. Two-layer networks
1.1.1. The Perceptron
1.1.2. The Adaline
1.2. The Multi-Layer Perceptron
1.2.1. Structure of the network
1.2.2. Learning
1.2.3. A worked example
1.2.4. Example: NET-talk
1.3. Bibliography
2. From theory to practice
2.1. Building the corpora
2.2. The architecture of the network
2.2.1. Cascade correlation
2.2.2. The brain surgeon
2.3. Local minima
2.4. The parameters of the network
2.4.1. The activation function
2.4.2. The error function
2.4.3. The learning rate
2.4.3.1. Adding a momentum term
2.4.3.2. Quickprop
2.4.3.3. Delta-bar-delta
2.5. Discussion
2.6. Bibliography
3. The Multi-Layer Perceptron and statistics
3.1. Supervised learning and statistical estimation
3.1.1. Error measures and the quality of the estimate
3.1.2. Bias and variance
3.2. Generalization, corpus size and number of neurons
3.2.1. Uniform convergence of MLPs
3.2.2. Generalization and the Vapnik-Chervonenkis dimension
3.3. Bibliography
4. RBF networks
4.1. The RBF method
4.2. Architecture and operation of the RBF network
4.3. Learning
4.4. Discussion
4.5. Bibliography
5. Computer code: Adaline and MLP
5.1. A first definition
5.1.1. Modes of use of a network
5.1.2. Network, neuron and link
5.2. On computational efficiency
5.2.1. Representing the architecture of the network
5.2.2. Homogeneous arrays
5.3. Main definitions
5.3.1. Network, neuron and link
5.3.2. Layers of neurons
5.3.3. Patterns and corpora
5.4. The Adaline
5.4.1. Using the network
5.4.1.1. Computing the activation
5.4.1.2. Main function: the query
5.4.2. Learning and error
5.4.2.1. Error computation and link correction
5.4.2.2. Main function
5.5. The Multi-Layer Perceptron
5.5.1. Usage mode
5.5.1.1. Activation propagation
5.5.1.2. Main function
5.5.2. Learning mode
5.5.2.1. Backpropagation
Part II: Recurrent networks
6. Competitive networks
6.1. A simple competitive network
6.1.1. Competitive architecture
6.1.1.1. Fixed points and connectivity constraints
6.1.1.2. Operation of the network
6.1.2. Competitive learning
6.1.2.1. A competitive learning rule
6.1.2.2. A small example
6.1.2.3. Discussion
6.2. Learning Vector Quantization
6.2.1. Vector Quantization
6.2.2. Learning Vector Quantization
6.2.3. Discussion
6.3. Topological maps
6.3.1. Architecture
6.3.2. Learning
6.3.3. Discussion
6.4. Adaptive Resonance Theory
6.4.1. Preliminary: a study of neuron activation functions
6.4.1.1. Additive functions
6.4.1.2. Multiplicative functions
6.4.2. The dynamics of ART-1
6.4.2.1. The noise and saturation dilemma
6.4.2.2. The coding problem
6.4.2.3. Detecting errors
6.4.2.4. The two-thirds rule
6.4.2.5. The orienting system
6.4.3. Learning
6.4.3.1. The stability-plasticity dilemma
6.4.3.2. The learning mechanism of ART-1
6.4.4. Using ART-1
6.4.5. Example
6.4.6. Discussion
6.5. Bibliography
7. Networks with symmetric connections
7.1. The Hopfield network
7.1.1. The network and its behavior
7.1.1.1. The Ising spin-glass model
7.1.1.2. Example: the two teams
7.1.1.3. Architecture of the Hopfield network
7.1.1.4. Energy and convergence to a stable state
7.1.2. Learning
7.1.2.1. Memory capacity and catastrophic forgetting
7.1.3. Discussion
7.2. The Boltzmann machine
7.2.1. Architecture of the Boltzmann machine
7.2.2. Simulated annealing
7.2.3. Learning
7.2.4. Discussion
7.3. Bibliography
8. Networks and time
8.1. Non-recurrent solutions
8.1.1. Time windows
8.1.2. Time-Delay Neural Networks
8.2. Layered recurrent networks
8.2.1. The Jordan network
8.2.2. The "Simple Recurrent Network"
8.3. Backpropagation in recurrent networks
8.3.1. Backpropagation through time
8.3.2. Real-time learning
8.4. Bibliography
9. Computer code: recurrent networks
9.1. The Kohonen topological map
9.1.1. Usage
9.1.2. Learning
9.2. The Hopfield network
9.2.1. Usage
9.2.2. Learning
9.3. Backpropagation through time
9.3.1. Usage
9.3.2. Learning
General bibliography
Index

===========================================

From back at elec.uq.oz.au Tue Sep 27 14:17:17 1994
From: back at elec.uq.oz.au (Andrew Back)
Date: Tue, 27 Sep 94 13:17:17 EST
Subject: NIPS*94 Workshop CFP
Message-ID: <9409270317.AA16221@s1.elec.uq.oz.au>

We are organizing the following workshop for NIPS*94. The aim of this
one-day workshop will be to discuss issues of nonlinear signal
processing using neural network models, specifically those which are
in-between the usual MLP and fully recurrent network architectures.
The intended audience is those applying neural networks to signal
processing problems, and active signal processing (but not necessarily
neural network) researchers.

There is room for contributed talks. If you would like to give a short
paper or a brief presentation of your work, please send a few details
to the organizers. Presenters of short papers will be allocated 15-20
mins. For those who would like to contribute on an informal basis, yet
be able to have the opportunity to speak, 5-min `soap-box' sessions
will be available. Ample time will be allocated for informal
discussion. We would also like to hear from others interested in
attending the workshop.

Andrew Back
Department of Electrical and Computer Engineering
University of Queensland
Brisbane. 4072 AUSTRALIA
Ph: +61 7 365 3965 Fax: +61 7 365 4999
email: back at elec.uq.oz.au

=============================================================================

                      C A L L   F O R   P A P E R S

NIPS*94 Workshop: "Neural Network Architectures with Time Delay
Connections for Nonlinear Signal Processing: Theory and Applications"

Organizers: Andrew D. Back and Eric A. Wan

Nonlinear signal processing methods using neural network models are a
topic of recent interest in various application areas. Recurrent
networks offer a potentially rich and powerful modelling capability,
though they may suffer from some problems in training. On the other
hand, simpler network structures, which are feedforward overall but
draw more strongly on linear signal processing approaches, have been
proposed. The resulting structures can be viewed as nonlinear
generalizations of linear filters.
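[To make "time delay connections" concrete for readers outside signal
processing, here is a minimal sketch of an FIR-style neuron, in which
each connection is a short tapped delay line rather than a single
weight. This is an editorial illustration of the general idea, not
code from the organizers; the tap values and toy signal are
assumptions.]

  import numpy as np

  # Minimal time-delay ("FIR") neuron: the unit's activation at time t
  # depends on x[t], x[t-1], ..., x[t-n_taps+1] -- a nonlinear FIR
  # filter, i.e. a nonlinear generalization of a linear filter.

  def fir_neuron(x, taps, bias=0.0):
      """x: 1-D input signal; taps: weights on the delayed inputs."""
      n = len(taps)
      y = np.zeros_like(x)
      for t in range(len(x)):
          window = [x[t - k] if t - k >= 0 else 0.0 for k in range(n)]
          y[t] = np.tanh(np.dot(taps, window) + bias)  # squashed FIR sum
      return y

  # Toy usage: a smoothed, squashed version of a noisy sine wave.
  rng = np.random.default_rng(1)
  x = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.3 * rng.standard_normal(200)
  y = fir_neuron(x, taps=np.full(5, 0.2))            # 5-tap delay line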
It is clear that relatively little is known about how to understand
the various architectures in a signal processing context. For the most
part we are able to do simulations, but proving the capabilities of
the network architectures is much more difficult. It appears that they
offer a convenient NARMA modelling framework, but many aspects of the
models are yet to be considered.

This workshop is aimed at addressing some of the issues that arise
when adopting a nonlinear signal processing methodology, in
particular, those employing some form of time delay connections and
limited recurrent connections. Issues that may be of interest are:

 * Representational capabilities for various network structures.

 * Methods of analysis - what methods from linear signal processing
   theory can be extended to these neural network architectures? What
   methods from the analysis of nonlinear systems can be used for
   these networks?

 * What advantages are there in using locally recurrent connections
   within networks as opposed to globally recurrent connections?
   (e.g. Frasconi-Gori-Soda networks vs Williams-Zipser/Robinson
   networks).

 * Learning algorithms - what difficulties are encountered and what
   methods can be applied to overcome them?

 * What types of problems or data are the different models best
   suited for?

 * Given a set of time series data, what model should be selected on
   the basis of the observed data? What tests can be applied to a
   particular data set to determine what type of model should be used?

 * What issues need to be resolved in order for these models to be
   confidently applied to a given problem/data-set?

 * Successes and failures of networks on practical problems and data
   sets.

 * Comparisons between the methods and results that have been
   established by various researchers.

 * Theoretical issues which still need to be addressed (e.g.
   approximation capabilities, convergence, stability, and
   computational complexity).

 * New network architectures.

Aim:
---

At the workshop we intend to consolidate some of the theoretical and
practical results of current research. We also hope to identify open
issues which should be addressed in ongoing work.

Format:
------

The workshop will be a one-day workshop, and it is planned to have a
number of short presentations of either 15 mins or 5 mins (`soap-box'
sessions). In this way a number of people will be able to speak in
some detail, while others can simply raise issues they feel are
important. As an outcome of the workshop it is intended that there
should be a report summarizing where we are in this research area, and
goals for future work.

Contributions will be welcomed, and details of proposed talks should
be sent to Andrew Back as soon as possible.

Andrew D. Back*, Eric A. Wan**

*Department of Electrical and Computer Engineering,
University of Queensland, St. Lucia, Queensland 4072. Australia.
Ph: +61 7 365 3965 Fax: +61 7 365 4999
back at elec.uq.oz.au

**Department of Electrical Engineering and Applied Physics
Oregon Graduate Institute of Science and Technology
P.O. Box 91000, Portland, Oregon, 97291. USA.
Ph: (503) 690 1164 Fax: (503) 690 1406
ericwan at eeap.ogi.edu

From mpp at watson.ibm.com Tue Sep 27 18:15:37 1994
From: mpp at watson.ibm.com (Michael Perrone)
Date: Tue, 27 Sep 1994 18:15:37 -0400 (EDT)
Subject: CFP: NIPS*94 Postconference Workshop
Message-ID: <9409272215.AA21936@austen.watson.ibm.com>

CALL FOR PARTICIPANTS
---------------------

Below is the preliminary summary of the NIPS*94 Postconference
Workshop on algorithms for high dimensional spaces. If you would like
to contribute a talk, please send a title and abstract to the
organizer.

========================================================================

Title
-----

Algorithms for High Dimensional Space: What Works and Why

Description
-----------

The performance of certain regression algorithms is robust as the
dimensionality of the data and parameter spaces is increased. Even in
cases where the number of parameters is much larger than the number of
data points, performance is often robust. The central question of the
workshop will be: What makes these techniques robust in high
dimensions?

High dimensional spaces have (asymptotic) properties that are
nonintuitive when considered from the perspective of the two- and
three-dimensional cases generally used for visual examples. Because of
this fact, algorithm design in high dimensional spaces cannot always
be done by simple analogy with low dimensional problems. For example,
a radial basis network is intuitively appealing for a one-dimensional
regression task; but it must be used with care for a 100-dimensional
space, and it may not work at all in 1000 dimensions. Thus a
familiarity with the nonintuitive properties of high dimensional space
may lead to the development of better algorithms.

We will discuss the issues that surround successful nonlinear
regression estimation in high dimensional spaces and what we can do to
incorporate these techniques into other algorithms and apply them in
real-world tasks. The workshop will cover topics including the Curse
of Dimensionality, Projection Pursuit, techniques for dimensionality
reduction, feature extraction techniques, statistical properties of
high dimensional spaces, clustering in high dimensions, and all of the
tricks that go along with these techniques to make them work.

The workshop is targeted at researchers interested in both theoretical
and practical aspects of improving network performance.

Length
------

One day.

Format
------

Morning: 3 half-hour talks, each followed by 10 minutes of questions.
Afternoon: 3 half-hour talks, each followed by 10 minutes of
questions.

Organizer
---------

Michael P. Perrone
mpp at watson.ibm.com
IBM - Thomas J. Watson Research Center
P.O. Box 704 / Rm J1-K08
Yorktown Heights, NY 10598
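[As a quick numerical illustration of how nonintuitive high dimensions
are -- an editorial aside, not workshop material -- two standard
effects can be checked in a few lines: in d dimensions nearly all of a
unit ball's volume sits in a thin shell near its surface, and
distances between random points concentrate. The sample sizes below
are arbitrary.]

  import numpy as np

  # 1) fraction of a unit ball's volume within 5% of its surface -> 1,
  #    since volume scales as r^d, so the inner 95% ball holds 0.95^d;
  # 2) pairwise distances between uniform random points concentrate
  #    (relative spread shrinks) as the dimension d grows.

  rng = np.random.default_rng(0)
  for d in (2, 10, 100, 1000):
      shell = 1.0 - 0.95 ** d          # volume fraction in outer shell
      x = rng.random((500, d))         # uniform points in the unit cube
      dist = np.linalg.norm(x[:250] - x[250:], axis=1)
      print(f"d={d:5d}  shell fraction={shell:.4f}  "
            f"dist spread={dist.std() / dist.mean():.4f}")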
From iehava at ie.technion.ac.il Wed Sep 28 12:49:27 1994
From: iehava at ie.technion.ac.il (Hava Siegelmann)
Date: Wed, 28 Sep 1994 18:49:27 +0200
Subject: nips recurrent nets workshop
Message-ID: <199409281649.SAA29154@ie.technion.ac.il>

Announcing: The NIPS Workshop on Recurrent Neural Networks
===========================================================

Unlike feedforward, acyclic networks, recurrent nets contain feedback
loops, and thus give rise to dynamical systems. Theoretically,
recurrent networks are very strong computationally. However, their
dynamics introduces difficulties for learning and convergence.

This 2-day workshop will feature formal sessions, discussions, and a
panel discussion aimed at understanding the dynamics, theoretical
capabilities, and practical applicability of recurrent networks. The
panel will focus on future directions of recurrent networks research.

The schedule is almost finalized. If you feel you have something to
contribute to the workshop by talking, discussing, asking, or
suggesting, please feel free to contact me. I will wait a few days for
responses before announcing the final program.

Sincerely,

Hava (Eva) Siegelmann
Assistant Professor
Department of Information Systems Engineering
School of Industrial Engineering
Technion (Israel Institute of Technology)

From ethem at psyche.mit.edu Wed Sep 28 16:31:40 1994
From: ethem at psyche.mit.edu (Ethem Alpaydin)
Date: Wed, 28 Sep 94 16:31:40 EDT
Subject: Paper: Estimating Road Travel Distances
Message-ID: <9409282031.AA01457@psyche.mit.edu>

FTP-host : archive.cis.ohio-state.edu
FTP-file : pub/neuroprose/alpaydin.road-distance.ps.Z
(16 pages, 582,257 bytes compressed, 1,357,862 bytes uncompressed)

Parametric Distance Metrics vs. Nonparametric Neural Networks for
Estimating Road Travel Distances

Ethem Alpaydin*, I. Kuban Altinel+, Necati Aras+
{alpaydin,altinel,arasn}@boun.edu.tr
* Dept of Computer Engineering
+ Dept of Industrial Engineering
Bogazici University TR-80815 Istanbul Turkey

The actual distance between two cities is the length of the shortest
road connecting them. Measuring and storing the actual distance
between any two points of a region is often not feasible, and it is a
common practice to estimate it. The usual approach is to use
theoretical distance metrics which are parameterized functions of the
coordinates of the points. We propose to use nonparametric approaches
using neural networks for estimating actual distances. We consider
multi-layer perceptrons trained with the back-propagation rule and
regression neural networks implementing nonparametric regression using
Gaussian kernels. We also consider training multiple estimators and
combining them in a hybrid architecture using voting and stacking. In
a real-world study using cities drawn from Turkey, we found that these
approaches improve performance considerably. Estimating actual
distances has many applications in location and distribution theory.
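[The "regression neural network" in this abstract is essentially
Nadaraya-Watson kernel regression with Gaussian kernels. Here is a
minimal editorial sketch of that estimator applied to a road-distance
toy task -- the features, bandwidth, and synthetic data are
illustrative assumptions, not the authors' setup.]

  import numpy as np

  # Nadaraya-Watson kernel regression: the estimate for a query is a
  # weighted average of training targets, with Gaussian weights that
  # fall off with distance in feature space (here, city-pair coords).

  def nw_predict(Xq, Xtr, ytr, bandwidth=0.5):
      # squared distances between each query row and each training row
      d2 = ((Xq[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
      w = np.exp(-d2 / (2.0 * bandwidth ** 2))    # Gaussian kernel weights
      return (w @ ytr) / w.sum(axis=1)

  # Toy data: "road" distance = Euclidean distance times a noisy
  # winding factor, standing in for real inter-city measurements.
  rng = np.random.default_rng(0)
  pairs = rng.random((300, 4))                    # (x1, y1, x2, y2) rows
  euclid = np.linalg.norm(pairs[:, :2] - pairs[:, 2:], axis=1)
  road = euclid * (1.2 + 0.1 * rng.standard_normal(300))

  train, test = pairs[:250], pairs[250:]
  pred = nw_predict(test, train, road[:250], bandwidth=0.2)
  print("mean abs error:", np.abs(pred - road[250:]).mean())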
From jbarnden at crl.nmsu.edu Wed Sep 28 11:44:42 1994
From: jbarnden at crl.nmsu.edu (John Barnden)
Date: Wed, 28 Sep 1994 09:44:42 -0600
Subject: 2 books on CONNECTIONISM & ANALOGY
Message-ID: <199409281544.JAA10922@NMSU.Edu>

***************************************************************
* This is to announce TWO NEW VOLUMES in the book series      *
*                                                             *
* ``ADVANCES IN CONNECTIONIST AND NEURAL COMPUTATION THEORY'' *
*                                                             *
* published by Ablex (Norwood, NJ).                           *
***************************************************************

The new volumes, numbered 2 and 3, appeared this summer. They were
originally to have been just one book, and they are to be regarded as
companion volumes. They have the same introductory chapter (not listed
below). They are about the application of connectionist and hybrid
connectionist/symbolic techniques to analogy, reminding, case-based
reasoning and metaphor. Please note that the volumes don't have the
editors in the same order.

VOLUME TWO: ``Analogical Connections''
--------------------------------------

Edited by:
Keith J. Holyoak, University of California, Los Angeles
John A. Barnden, New Mexico State University

MAIN CONTENTS

Part I: INTEGRATED MODELS OF ANALOGICAL THINKING

1. The Copycat Project: A Model of Mental Fluidity and Analogy-Making
   Douglas R. Hofstadter & Melanie Mitchell
2. Component Processes in Analogical Transfer: Mapping, Pattern
   Completion, and Adaptation
   Keith J. Holyoak, Laura R. Novick & Eric R. Melz
3. Integrating Analogy with Rules and Explanations
   Greg Nelson, Paul Thagard & Susan Hardy
4. A Hybrid Model of Continuous Analogical Reasoning
   Thomas C. Eskridge
5. A Hybrid Model of Reasoning by Analogy
   Boicho N. Kokinov

Part II: SIMILARITY AND ANALOGICAL MAPPING

6. Similarity, Interactive Activation, and Mapping: An Overview
   Robert Goldstone & Douglas Medin
7. Connectionist Implications for Processing Capacity Limitations in
   Analogies
   Graeme S. Halford, William H. Wilson, Jian Guo, Ross W. Gayler,
   Janet Wiles & J.E.M. Stewart
8. Analogical Mapping by Dynamic Binding: Preliminary Investigations
   John E. Hummel, Bruce Burns & Keith J. Holyoak
9. Spatial Inclusion and Set Membership: A Case Study of Analogy at
   Work
   Keith Stenning & Jon Oberlander

Published 1994/504 pages
Cloth: 1-56750-039-0/$69.50
(*** $35.00 prepaid *** / no further discount applies)

VOLUME THREE: ``ANALOGY, METAPHOR, AND REMINDING''
--------------------------------------------------

Edited by:
John A. Barnden, New Mexico State University
Keith J. Holyoak, University of California, Los Angeles

MAIN CONTENTS

1. REMIND: Retrieval from Episodic Memory by Inferencing and
   Disambiguation
   Trent E. Lange & Charles M. Wharton
2. The Role of Goals in Retrieving Analogical Cases
   Colleen Seifert
3. A Case Study of Case Indexing: Designing Index Feature Sets to Suit
   Task Demands and Support Parallelism
   Eric A. Domeshek
4. The Case for Nonconnectionist Associative Retrieval in Case-Based
   Reasoning Systems
   Piero P. Bonissone, Lisa F. Rau & George Berg
5. What is Metaphor?
   George Lakoff
6. A Structured Connectionist Model of Figurative Adjective-Noun
   Combinations
   Susan H. Weber
7. Back-Propagation Representations for the Rule-Analogy Continuum:
   Pros and Cons
   Catherine Harris
8. On the Connectionist Implementation of Analogy and Working Memory
   Matching
   John A. Barnden

Published 1994/392 pages
Cloth: 1-56750-101-X/$69.50
(*** $35.00 prepaid *** / no further discount applies)

ABLEX ORDER FORM

Please enter my order for Advances in Connectionist and Neural
Computation Theory

_____ Volume Three: Analogy, Metaphor and Reminding
      Cloth: 1-56750-101-X/$69.50 ($35.00 prepaid)
_____ Volume Two: Analogical Connections
      Cloth: 1-56750-039-0/$69.50 ($35.00 prepaid)
_____ Volume One: High-Level Connectionist Models
      Cloth: 0-89391-687-0/$67.50 ($34.50 prepaid)

*** If you're not prepaying, you can buy volumes 2 and 3 together for
$105 instead of $139. ***
(There is no additional discount for buying both volumes at the
prepaid rate.)

PAYMENT METHOD: [] Payment Enclosed [] VISA [] MasterCard

Card #_______________________________ Exp. Date__________
Signature________________________________________________
Name_____________________________________________________
Address__________________________________________________
__________________________________________________________
City_______________________________State______ZIP_________

ORDERING INFORMATION

All individual orders must be prepaid. Ablex will pay postage and
handling charges for all prepaid orders placed within US and Canada.
Payments must be made by check, money order, VISA or Mastercard in US
currency only. Orders placed by libraries and universities with Ablex
accounts will be invoiced. Initial orders must be prepaid, at which
time an account will be established.
Titles being considered by faculty members for course adoption may be
ordered as examination copies for a 30-day period. Requests must be
made on letterhead stationery citing course name and enrollment. An
invoice will accompany the shipment and will be cancelled upon
adoption of at least ten copies or return of the examination copies.

========================================================================

The two volumes described above form a natural sequel to the first
volume of the series, namely:

VOLUME ONE: ``HIGH-LEVEL CONNECTIONIST MODELS''
-----------------------------------------------

Edited by
John A. Barnden
Jordan B. Pollack

MAIN CONTENTS

1. Introduction: Problems for High-Level Connectionism
   John A. Barnden & Jordan B. Pollack
2. Connectionism and Compositional Semantics
   David S. Touretzky
3. Symbolic NeuroEngineering for Natural Language Processing: A
   Multilevel Research Approach
   Michael G. Dyer
4. Schema Recognition for Text Understanding: An Analog Semantic
   Feature Approach
   Lawrence A. Bookman & Richard Alterman
5. A Context-Free Connectionist Parser Which Is Not Connectionist, But
   Then It Is Not Really Context-Free Either
   Eugene Charniak & Eugene Santos, Jr.
6. Symbolic/Subsymbolic Sentence Analysis: Exploiting the Best of Two
   Worlds
   Wendy G. Lehnert
7. Developing Hybrid Symbolic/Connectionist Models
   James Hendler
8. Encoding Complex Symbolic Data Structures with Some Unusual
   Connectionist Techniques
   John A. Barnden
9. Finding a Maximally Plausible Model of an Inconsistent Theory
   Mark Derthick
10. The Relevance of Connectionism to AI: A Representation and
    Reasoning Perspective
    Lokendra Shastri
11. Steps Toward Knowledge-Intensive Connectionist Learning
    Joachim Diederich
12. Learning Simple Arithmetic Procedures
    Garrison W. Cottrell & Fu-Sheng Tsung
13. The Similarity Between Connectionist and Other Parallel
    Computation Models
    Jiawei Hong & Xiaonan Tan
14. Complex Features in Planning and Understanding: Problems and
    Opportunities for Connectionism
    Lawrence Birnbaum
15. Conclusion
    Jordan B. Pollack & John A. Barnden

Published 1991/400 pages
Cloth: 0-89391-687-0/$67.50
(*** $34.50 prepaid ***/no further discount applies)

From degaris at hip.atr.co.jp Wed Sep 28 19:14:57 1994
From: degaris at hip.atr.co.jp (Hugo de Garis)
Date: Wed, 28 Sep 94 19:14:57 JST
Subject: PerAc94 Conference Report, Hugo de Garis, ATR, Kyoto
Message-ID: <9409281014.AA18563@gauss>

PerAc94 Conference Report, Hugo de Garis, ATR, Kyoto

Tom Ray and I were invited by the Federal Polytechnic of Lausanne,
Switzerland, to give talks on our ATR work to an ALifey-type tutorial,
which lasted two days (Monday Sept 5 and 6). This was followed by a
3-day conference called "PerAc94", i.e. from Perception to Action,
whose organiser asked me to write up and distribute a conference
report over the relevant email networks. Following the PerAc
conference was a 20-man brainstorming session of invited experts on
the future of the subject. I was originally invited to this, so I had
expected to be able to report to you on the main issues, but when I
asked the organizer (a rather prickly person) if it would be ok if I
attended only half of it (for reasons explained below), I got
disinvited. A formal write-up of this brainstorming session will
appear in the "Robotics and Autonomous Systems Journal" (Elsevier) in
the next few months. The following Monday was a seminar on the work of
Professor Mange and his group on adaptable hardware using FPGAs (field
programmable gate arrays), which I believe to be of great importance.
This report will focus on the second and fourth of these events.

PerAc94 Conference

This conference was a Swiss equivalent of the PPSN (Parallel Problem
Solving from Nature), SAB (Simulation of Adaptive Behavior), and ALife
type of conferences, where the key words characterizing the conference
can be taken from the session headings, namely, "collective
intelligence, simple behavioral robots, genetic algorithms, active
perception, building blocks and architectures for designing
intelligent systems, complex architectures to control autonomous
robots, cognition, collective intelligence, simple behavioral robots,
active perception".

When I agreed to speak at the tutorial it was for two reasons. One was
to talk with Professor Mange and his group, who are pioneering a new
field that I will talk about later in this report, and the other was
to be in Switzerland, for me the most beautiful country in the world.
When I had to leave it was with a real sigh.

The PerAc conference itself I expected to be rather "small beer", but
in fact the world was there. Admittedly only about 100 people were
present the day I counted, but they were a specialised audience. If
you are working in the field of autonomous robots and their related
problems of control, and in particular the problems involved in
converting perception into action, then getting hold of the
proceedings of this conference is a must for you. Happily, that will
not be difficult, because, thanks to characteristic Swiss efficiency,
an IEEE (Computer Society Press) book was ready and distributed to
conference attendees. The book is entitled "Proceedings From
Perception to Action Conference, Lausanne, Switzerland, September 7-9,
1994", IEEE Computer Society Press, 1994, ISBN 0-8186-6482-7, edited
by P. Gaussier and J-D. Nicoud.

The 3-day conference was small enough not to need split sessions, so
some 30-odd talks and a similar number of posters were available for
all to hear and see. Unfortunately that was not true in my case. The
great beauty of Switzerland attracted my wife to take a train down
from Brussels (where she was catching up on old friends), and she was
on a tight schedule. Similarly, a close Japanese friend of mine came
up from Geneva on an equally tight schedule, and both of them wanted
to yodel in the mountains with me. I ended up attending only one of
the three conference days. Hence this report will be more limited and
objective (based on the book) than subjective (based on impressions
from someone who attended all the sessions).

Conference Highlights

The hero of the conference in my view was not a human being, but an
(ice hockey) puck-shaped robot called "KHEPERA", developed by the
Swiss, which I found myself in conversation referring to as the "puck
robot". This flat, round, battery- or mains-driven, wheeled robot
(containing infrared and other sensors) measures about 5 cm in
diameter and about 2 cm high, and is a very versatile little tool for
performing "evolutionary robotics" experiments on your desk. Many of
the papers at the conference used this "puck robot" to obtain their
results. These researchers were performing real-time (i.e. mechanical)
fitness measurements of the robot's motions controlled by the neural
net systems they were evolving. It is far more practical to do such a
thing on a puck robot than on a Unimate (of car-assembling size). If
you are interested in buying one of these puck robots, then try
emailing Mr. Gomi, the manufacturer and distributor of Brooks' insect
robots and others.
He told me he has already sold some 40 of these little robots. Gomi is
a Japanese Canadian who speaks excellent English. His email address is
71021.2755 at CompuServe.Com. These little puck robots, crammed with
electronics and sensors, cannot be too expensive.

At the conference itself there was a fair sprinkling of big names from
around the world. Pfeifer (Zurich Univ, Switzerland), the keynote
speaker, made the claim that too much research effort is going into
the unidirectional approach, i.e. from perception to action, whereas
in the biological world the reverse is also true, namely that what one
perceives also depends on one's actions. He hoped future research in
the field would be more bidirectional. Deneubourg (Brussels Univ,
Belgium) spoke about the self-organization of transport systems in
ants and robots. Fukuda (Nagoya Univ, Japan) showed how his cellular
robots could connect together to show group behavior. McFarland
(Oxford Univ, England) used his expertise in the ethological field to
advise roboticists what qualities autonomous robots need, particularly
in regard to motivation and cognition. Cruse (Bielefeld Univ, Germany)
presented a new neural net controller for a 6-legged walking system
which reacts adaptively to the environment. Ferrell (MIT, USA) is
Brooks' chief grad student, who is helping him coordinate research on
MIT's "COG" (torso) robot. She introduced her "Hannibal" hexapod robot
(similar to Genghis) and put it through its paces. Steels (Brussels
Univ, Belgium) spoke on a mathematical analysis of behavior systems,
where the main idea is that a behaving system is in fact a dynamical
system whose state reaches equilibrium once the behavior it controls
is attained. The 3 Musketeers (i.e. Husbands, Harvey, Cliff, Sussex
Univ, England) presented their latest results in automated mechanical
fitness measurement in neural net based evolutionary robotics.
Evolving a single neural net module takes them about a day or so.
Interestingly, they had an M.Sc. student who simulated the puck robot
and evolved the simulation at much higher speeds. The resulting
simulated elite chromosome was then downloaded to the KHEPERA, which
then performed in the real world as predicted by the simulation.
Interesting. This "fast simulation vs. slow reality" issue I will
pursue later. Nolfi et al (NRC, Rome, Italy) spoke on plasticity (i.e.
learning) in phenotypic neural nets. The new idea is that the mapping
from GA chromosome to neural net does not occur instantaneously, but
takes place over the lifetime of the individual and is sensitive to
the environment. Taylor (Kings College, England) (ab?)used his
browbeating style and actor's resonating voice to explain to his
audience his theories of the relational mind, i.e. "consciousness
arises due to the active comparison of ongoing brain activity stemming
from external inputs in the various modalities, with somewhat similar
past activity stored in semantic and episodic memory". Taylor, an
ex-theoretical physicist with already an encyclopaedic knowledge of
his new field, was probably the smartest man at the conference.
Unfortunately, he succumbs too easily to the temptation to let
everyone know it. The considerable respect for his abilities that
everyone has would only be enhanced if he were more low key. His
constant, rather condescending interjections got on one's nerves after
a while.

Comments

The conference felt like a mini SAB conference, especially judging by
the overlap of the conference committees of SAB94 and this one.
Comments

The conference felt like a mini SAB conference, especially judging by the overlap between the conference committees of SAB94 and this one. As mentioned above, the KHEPERA puck robot featured strongly, but I found myself becoming irritated (beyond the usual 7-hour jet lag) as speaker after speaker presented his (nearly always his) results using the puck. My gut feeling is that this is not the way to go. My dream is to build artificial brains. Dave Cliff wants to build artificial brains. Lots of people want to build artificial brains, but doing so will probably require evolving millions of neural modules. Simple arithmetic says that at one day of evolution per neural net module (e.g. using the Sussex "gantry robot"), one will die before building even the brain's retina: a million modules at a day each is nearly three thousand years. Somehow, the evolutionary process so vital to this field needs to be sped up significantly. I believe the way to go is to evolve neural circuits in hardware at electronic speeds. I hope this was one of the issues discussed at the post-conference brainstorming session by the 20-odd invited experts. If not, I would be surprised; it seems to me to be a critical issue. Keep an eye on the Robotics and Autonomous Systems journal for a write-up of the conclusions coming out of this brainstorming session.

*** Professor Mange's "Embryological Electronics"

The first I heard about Professor Mange's work (his name is pronounced as in the French word "mange(r)", to eat) was when my colleague at ATR, Tom Ray, passed me a copy of a paper he had to review for ECAL93 (Euro Conf on ALife). Tom was really impressed. So was I. Mange's dream is to achieve von Neumann's universal calculator and universal constructor using his specially designed FPGAs (field programmable gate arrays). These FPGA "cells" are relatively simple and can be made in large numbers on wafer-scale silicon slices. They have the ability to reproduce the circuit of any computable function, and to self-repair. In the present design, when a cell becomes dysfunctional, its row and column in the grid of cells are switched off. I asked Professor Mange if it would be possible to switch off only the faulty cell. Yes, he said. This blew my mind, because the consequences of this alone are profound, let alone those following from the achievement of Mange's dream. One of the really big problems in VLSI is "yield", i.e. the percentage of chips that are fault-free. This factor limits the size of chips made from wafers. If a faulty piece of a circuit can be switched off and its function somehow "moved" elsewhere on the wafer using reproduction, then circuits of wafer size become possible. The yield becomes irrelevant. Wow!

Papers by Mange et al are now starting to appear regularly on the GA/NN/ALife/... conference circuit. In the few pages of these papers it is difficult for anyone who is not a digital hardware specialist to follow the work easily. I made this point to him, saying that John Holland had only himself to blame that his invention of genetic algorithms took 20 years to become hot, because his book was so unreadable; GAs really only became popular after Goldberg wrote a text that everyone could understand. Mange got the point, and will either write a solid (100+ page) technical report spelling out all the details so that people can understand and copy his work, or write a book based on a course he will be teaching at the EPFL.

Mange's colleague, Sanchez, wants to use these FPGA chips to perform evolution in hardware, perhaps by applying Koza's Genetic Programming (GP) approach to binary decision trees.
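Since the binary-decision-tree idea carries the whole argument of the next paragraph, a small sketch may help. A leaf holds 0 or 1; an inner node tests one input variable and branches to one of two subtrees; any such tree therefore computes a Boolean function, and because it is a tree, it is exactly the kind of structure Koza's GP can mutate and cross over. The nested-tuple encoding below is my own illustration, not Sanchez's actual representation.

    # A binary decision tree over inputs x0..xn: a leaf is 0 or 1, an
    # inner node is a triple (variable_index, low_subtree, high_subtree).
    def bdt_eval(tree, inputs):
        while not isinstance(tree, int):   # descend until we reach a leaf
            var, low, high = tree
            tree = high if inputs[var] else low
        return tree

    # Example: XOR of two inputs as a depth-2 decision tree.
    XOR = (0, (1, 0, 1),    # if x0 = 0, the answer is x1
              (1, 1, 0))    # if x0 = 1, the answer is NOT x1

    for a in (0, 1):
        for b in (0, 1):
            assert bdt_eval(XOR, (a, b)) == a ^ b

The evolvability comes for free: GP-style crossover swaps one subtree for another, and any such swap yields another well-formed decision tree, i.e. another Boolean function.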
The chain of reasoning goes as follows: any Boolean circuit can be expressed in terms of binary decision diagrams, and hence binary trees; binary trees can be implemented on Mange's FPGAs; and binary trees can be evolved with Koza's GP. It follows that any Boolean circuit should be evolvable on Mange's FPGAs. If so, this will be the first example (as far as I know) of what I call "intrinsic evolvable hardware". Maybe I should explain.

Evolvable Hardware (EHW)

Since 1992, I have been pushing the idea of evolvable hardware. Evolution and (what I call) "evolutionary engineering" (or applied evolution) is one of the major research themes in computer science in the 90s, but it is virtually all software based. My dream is to see a machine (i.e. hardware) which evolves. The basic idea is to treat the software bit string that is used to configure PLDs (programmable logic devices) as a genetic algorithm chromosome, and thus to evolve the configuration (architecture) of the circuit at electronic speeds. Unfortunately, PLDs are not designed with evolution in mind: if they are not RAM based, they are not "infinitely" rewritable, so for evolution they need to be RAM based. Personally, I use cellular automata machine hardware to evolve CA-based neural nets, but this is cheating in a way, because strictly speaking the hardware of the CA machine remains fixed in its architecture. So far, no one has evolved circuits directly in hardware. If I am wrong here, please email me; I would love to be shown up on this point.

There are, however, several groups of people around the world who claim to be doing evolvable hardware. I will list them and their email addresses below. These groups are doing what I call "extrinsic evolvable hardware", i.e. they take a software-simulated description of a hardware circuit, evolve it in software, take the elite chromosome (circuit description) and download it into the rewritable hardware. That is, the hardware gets written to just once; the evolution occurs outside (extrinsic to) the circuit, using the software-simulated description. Intrinsic evolvable hardware would rewrite (reconfigure) a hardware circuit for each chromosome, for each generation of the genetic algorithm; the evolution occurs inside (intrinsic to) the circuit. (A code sketch of the distinction follows the list below.) The Mange team in Switzerland hopes to do this, and if they succeed, they will open up a new era in electronics and evolutionary engineering. Circuits will be grown/evolved rather than designed, and hence can become more complex and hopefully more performant.

The following groups around the world, the ones I know of, are doing (extrinsic) evolvable hardware.

Mange and Sanchez (EPFL, Lausanne, Switzerland) - described above. mange at di.epfl.ch sanchez at di.epfl.ch

Hemmi (ATR, Kyoto, Japan) - Hemmi is a colleague in the same group as Tom Ray and myself. He uses an HDL (hardware description language) in the form of trees, to which he applies Koza's GP approach. He has evolved digital counter circuits and the like. hemmi at hip.atr.co.jp

Higuchi (ETL, Tsukuba, Japan) - Higuchi is an ex-colleague of mine from my postdoc days at ETL (Electro Technical Lab). He used a simulated GAL PLD chip description, which he evolved in software to perform various digital functions. Lately he has been working on a hardware design to perform genetic algorithms - a GA machine. higuchi at etl.go.jp

Cliff (Sussex Univ, England) - Dave Cliff has a new grad student, a VLSI designer, who wants to evolve hardware. I met Dave at this conference and asked him whether his student would be doing intrinsic or extrinsic EHW. Dave said that the student is trying to make a deal with a Silicon Valley company in California. If the deal comes through, he wants to do intrinsic EHW; otherwise he will do extrinsic EHW like everybody else. davec at cogs.susx.ac.uk

If there are other people doing EHW, please let me know.
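Here is the promised sketch of the extrinsic/intrinsic distinction. The "device" is an imaginary one of my own choosing, the simplest configurable element there is: a 4-input lookup table whose 16 configuration bits are simply its truth table. The chromosome is the raw configuration bitstring, fitness is computed against a software model of the device, and real hardware would be written exactly once, at the very end. Moving that final write (and the fitness test) inside the loop is precisely what intrinsic EHW would do.

    import random

    def target(a, b, c, d):
        # The Boolean function we want the "hardware" to compute.
        return (a ^ b) & (c | d)

    def lut_output(config, a, b, c, d):
        # Software model of the device: the four inputs form an address
        # into the 16 configuration bits (the truth table).
        return config[a * 8 + b * 4 + c * 2 + d]

    def fitness(config):
        # EXTRINSIC evaluation: score the configuration entirely in
        # software, never touching silicon.
        return sum(lut_output(config, a, b, c, d) == target(a, b, c, d)
                   for a in (0, 1) for b in (0, 1)
                   for c in (0, 1) for d in (0, 1))

    def evolve_config(pop_size=20, generations=100, rate=0.05):
        pop = [[random.randint(0, 1) for _ in range(16)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:pop_size // 2]
            pop = elite + [[b ^ (random.random() < rate)   # bit-flip mutation
                            for b in random.choice(elite)]
                           for _ in elite]
        return pop[0]

    best = evolve_config()
    # device.write_configuration(best)   # the one and only hardware write

Because a 4-input lookup table can realise any 4-input Boolean function, a perfect configuration (fitness 16) always exists, which echoes the universality claim made for the decision-tree scheme above.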
I believe the era of EHW and "Darwin Machines" is upon us, and it should be vigorously supported. We will never have truly performant artificial nervous systems and artificial brains until we can overcome the "slowness of evolution" problem. Evolution at electronic speeds, i.e. EHW, is the key breakthrough here.

Cheers, Hugo de Garis.

Dr. Hugo de Garis, Brain Builder Group, Evolutionary Systems Department, ATR Human Information Processing Research Laboratories, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kansai Science City, Kyoto-fu, 619-02, Japan. tel. (+81) (0)7749 5 1079, fax. (+81) (0)7749 5 1008, email. degaris at hip.atr.co.jp

From gluck at pavlov.rutgers.edu Fri Sep 30 09:50:20 1994
From: gluck at pavlov.rutgers.edu (Mark Gluck)
Date: Fri, 30 Sep 94 09:50:20 EDT
Subject: Graduate Study at Rutgers/NJ in Cognitive & Computational Neuroscience
Message-ID: <9409301350.AA09658@james.rutgers.edu>

To: Students considering graduate study in Cognitive Neuroscience and/or Computational Neuroscience.

The Center for Molecular & Behavioral Neuroscience at Rutgers University (the State Univ. of New Jersey) is a leading center for research in Cognitive Neuroscience and Computational Neuroscience. The Center offers a Ph.D. in "Behavioral & Neural Sciences" and emphasizes integration between levels of analysis (i.e., behavioral & biological) and across traditional disciplinary boundaries. The Center is one of the leading places in this country for the study of the neural bases of higher cortical function (cognition) in humans, including labs devoted to memory, language, speech, motor control, and vision. We also have a strong computational program for students interested in pursuing neural-network models as a tool for understanding psychological and biological issues.

The Rutgers-Newark campus (as distinct from the New Brunswick campus) is 20 minutes outside New York City, and close to other major university research centers at NYU, Columbia, and Princeton, as well as major industrial research labs in Northern NJ, including ATT, Bellcore, Siemens, and NEC.

My own research concerns the neural and behavioral bases of learning and memory. Our work is focussed on the development of computational models of cortico-hippocampal function, and the testing of these models in both animals and human amnesic patient populations.

If you are interested in applying to our graduate program, or possibly in joining one of the labs as a paid research assistant or programmer, please email me at gluck at pavlov.rutgers.edu and I will be happy to send you info on our research and graduate program, as well as set up a possible visit to the Neuroscience Center here at Rutgers-Newark.

- Mark Gluck
______________________________________________________________________
Dr. Mark A. Gluck
Center for Molecular & Behavioral Neuroscience
Rutgers University
197 University Ave.
Newark, New Jersey 07102

Phone: (201) 648-1080 (Ext. 3221)
Fax: (201) 648-1272
Email: gluck at pavlov.rutgers.edu