From Connectionists-Request at cs.cmu.edu Fri May 1 00:05:12 1992 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Fri, 01 May 92 00:05:12 -0400 Subject: Bi-monthly Reminder Message-ID: <19805.704693112@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. Happy hacking. -- Dave Touretzky & Hank Wan --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject lately. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, and found the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new text books related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster. 
- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net.

- Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe.

-------------------------------------------------------------------------------

The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

    arch.yymm

where yymm stands for the obvious thing. Thus the earliest available data are in the file:

    arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine.

-------------------------------------------------------------------------------

How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU (Internet address 128.2.242.8).
2. Login as user anonymous with password your username.
3. 'cd' directly to one of the following directories:
     /usr/connect/connectionists/archives
     /usr/connect/connectionists/bibliographies
4. The archives and bibliographies directories are the ONLY ones you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into one of these two directories. Access will be denied to any others, including their parent directory.
5. The archives subdirectory contains back issues of the mailing list. Some bibliographies are in the bibliographies subdirectory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".

-------------------------------------------------------------------------------

How to FTP Files from the Neuroprose Archive
--------------------------------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52), pub/neuroprose directory.

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu).

Researchers may place electronic versions of their preprints or articles in this directory, announce availability, and other interested researchers can rapidly retrieve and print the PostScript files. This saves copying, postage and handling by having the interested reader supply the paper. (Along this line, single-spaced versions, if possible, will help!)

To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. If you do offer hard copies, be prepared for an onslaught.
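As an aside on the retrieval side: fetching a file from either archive is easy to script. Here is a minimal sketch using Python's standard ftplib, with host and directory taken from the Neuroprose instructions above; the file name and e-mail address are placeholders, and this is only an illustration in the spirit of the Getps script mentioned below, not an official archive tool.

    from ftplib import FTP

    FILENAME = "some_author.some_title.ps.Z"  # placeholder file name
    EMAIL = "you@your.site"                   # convention: your address as password

    ftp = FTP("archive.cis.ohio-state.edu")   # 128.146.8.52
    ftp.login("anonymous", EMAIL)
    ftp.cwd("pub/neuroprose")
    with open(FILENAME, "wb") as f:           # binary transfer matters for .Z files
        ftp.retrbinary("RETR " + FILENAME, f.write)
    ftp.quit()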
One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! Experience dictates that the preferred paradigm is to announce an FTP-only version with a prominent "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your announcement to the connectionist mailing list.

The current naming convention is author.title.filetype[.Z], where title is enough to discriminate among the files of the same author. The filetype is usually "ps" for PostScript, our desired universal printing format, but may be tex, which requires more local software than a spooler. Very large files (e.g. over 200k) must be squashed (with either a sigmoid function :) or the standard unix "compress" utility, which results in the .Z suffix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z suffix. An example of placing a file is attached as an appendix, and a shell script called Getps in the directory can perform the necessary retrieval operations.

For further questions contact:

    Jordan Pollack
    Assistant Professor
    CIS Dept/OSU, Laboratory for AI Research
    2036 Neil Ave, Columbus, OH 43210
    Email: pollack at cis.ohio-state.edu
    Phone: (614) 292-4890

Here is an example of naming and placing a file:

gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps
gvax> compress rosenblatt.reborn.ps
gvax> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put rosenblatt.reborn.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z
226 Transfer complete.
100000 bytes sent in 3.14159 seconds
ftp> quit
221 Goodbye.
gvax> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. The INDEX sentence is "Boastful statements by the deceased leader of the neurocomputing field." Please let me know when it is ready to announce to Connectionists at cmu. BTW, I enjoyed reading your review of the new edition of Perceptrons!

Frank

------------------------------------------------------------------------

How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "nn-bench-request at cs.cmu.edu".

From mozer at dendrite.cs.colorado.edu Sun May 3 13:50:02 1992
From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer)
Date: Sun, 3 May 1992 11:50:02 -0600
Subject: 1993 Connectionist Summer School preliminary announcement
Message-ID: <199205031750.AA04386@neuron.cs.colorado.edu>

PRELIMINARY ANNOUNCEMENT

CONNECTIONIST MODELS SUMMER SCHOOL
JUNE 1993
University of Colorado, Boulder, CO

The next Connectionist Models Summer School will be held at the University of Colorado, Boulder in the summer of 1993, tentatively June 23-July 6. This will be the fourth session in the series; earlier sessions were held at Carnegie-Mellon in 1986 and 1988, and at UCSD in 1990. The Summer School will offer guest lectures and workshops in a variety of areas of connectionism, with emphasis on theoretical foundations, computational neuroscience, cognitive science, and hardware implementation. Proceedings of the Summer School will be published the following fall. As in the past, participation will be limited to graduate students enrolled in PhD programs (full or part time). We hope to have sufficient funding to subsidize tuition and housing. This is a preliminary announcement. Information about how to apply will be posted to connectionists. As a point of contact, address electronic correspondence to "cmss at cs.colorado.edu".

Jeff Elman, University of California, San Diego
Mike Mozer, University of Colorado, Boulder
Paul Smolensky, University of Colorado, Boulder
Dave Touretzky, Carnegie-Mellon University
Andreas Weigend, Xerox PARC and University of Colorado, Boulder

From jfeldman at ICSI.Berkeley.EDU Sat May 2 14:29:57 1992
From: jfeldman at ICSI.Berkeley.EDU (Jerry Feldman)
Date: Sat, 2 May 92 11:29:57 PDT
Subject: choices
Message-ID: <9205021829.AA18784@icsib8.ICSI.Berkeley.EDU>

The U.S. NIST has just made available three databases: fingerprints, tax forms and handwritten segmented characters. The expected applications of results in these three areas are quite different, and anyone choosing to work on the first task should, in my opinion, seriously consider this.

Jerry Feldman

From mpadgett at eng.auburn.edu Mon May 4 11:44:46 1992
From: mpadgett at eng.auburn.edu (Mary Lou Padgett)
Date: Mon, 4 May 92 10:44:46 CDT
Subject: SimTec,WNN92/Houston,FNN Call
Message-ID: <9205041544.AA15932@eng.auburn.edu>

C A L L   F O R   P A P E R S

* SimTec 92 * WNN92/Houston *

1992 INTERNATIONAL SIMULATION TECHNOLOGY CONFERENCE SimTec92
WNN92/Houston: A Neural Networks Conference held with SimTec
NETS Users Group Meeting: NASA/JSC Neural Network Simulation
FNN Symposium: Fuzzy Logic, Neural Networks Overview & Applications (Lotfi Zadeh, UC Berkeley)

NOVEMBER 4-7, 1992
SOUTH SHORE HARBOUR / JOHNSON SPACE CENTER
CLEAR LAKE, TEXAS (NEAR HOUSTON AND GALVESTON)

SimTec Sponsor: [SCS] The Society for Computer Simulation, Int.
Co-sponsor: NASA/JSC; Cooperating: SPIE
WNN Additional Sponsors: Cooperating: INNS; Participating: IEEE NNC

Computer and Simulation Technology / Aerospace / Life & Physical Sciences / Intelligent Systems
Panels, Exhibits, Standards
Paper Contest: Academic, Industrial, Government
Tour: NASA/JSC Simulation Facilities
Keynote Speaker: Story Musgrave, M.D., NASA Astronaut
========================================================================
WNN92/Houston: Neural Networks
Paper Contest, Performance Measure Methodology Contest
Software Exchange, Panels, Demonstrations, Exhibits
NETS USERS GROUP MEETING
Sponsor: [SCS]; Co-sponsor: NASA/JSC
Cooperating: SPIE and INNS; Participating: IEEE-NNC
========================================================================
FNN Symposium: Overview & Applications
FUZZY LOGIC: Lotfi Zadeh
NEURAL NETWORKS: Mary Lou Padgett
Sat., Nov. 7, 1992: Supplemental Fee Covers Tutorial Handouts and NETS Executable and Examples
Sponsor: SCS; Co-sponsor: NASA/JSC
========================================================================

SimTec 92 Topics of Interest include, but are not limited to:

Computer and Simulation Technology
* Automatic Control Systems
  C. F. Chen, Boston U (617) 353-2567
* Education for Simulation Professionals
  Troy Henson, IBM (713) 282-7476
* Electronics/VLSI
  Joseph R. Cavallaro, Rice U (713) 527-8101 x3589
* High Performance Computing / Computers
  Mohammad Obaidat, U Missouri (816) 235-1276
* Mathematical Modeling
  Richard Greechie, Louisiana Tech U (318) 257-2538
* Massively Parallel & Distributed Systems, Transputers, Languages
  Enrique Kortwright, Nichols St U (504) 448-4406
  Stephen Seidman, Auburn University (205) 844-4330
* Software Modeling, Reliability & Quality Assurance
  Norm Schneidewind, Naval Postgrad. Sch. (408) 646-2719
  John Munson, U West Florida (904) 474-2989
* Virtual Environments
  John Murphy, Westinghouse (412) 256-2693
* Multimedia Just-in-Time Training
* Process Simulation & Process Control Standards
* Simulation in ADA

Aerospace
* Pilot-in-the-Loop Flight Simulation
  Richard Cox, General Dynamics (817) 777-3744
* Real-Time Simulation and Engineering Simulators
  Pat Brown, The MITRE Corporation (713) 333-0926
* Robotics & Control
  Lloyd Wihl, CAE Electronics (514) 341-6780
* Satellite Simulators
  Juan Miro, European Space Agency (+49) 6151 902717
* Space Avionics / Display and Control Systems
  Rita Schindeler, Lockheed Engr. & Sci. (713) 333-7091
* Astronaut Training
* Guidance, Navigation and Control (GN&C)

Life & Physical Sciences
* Biomedical Modeling & Simulation
  John Clark, Rice University (713) 527-8101 x3597
  Lou Sheppard, UT Medical Branch (409) 772-3088
* Geophysical Modeling & Simulation
  David Norton, Houston Area Res. Ctr. (713) 363-7944
* High Energy Physics / SSC Modeling & Simulation
  David Adams, Rice University (713) 285-5316
* Computational Biology
* Petrochemical Modeling & Simulation [PB]

Intelligent Systems
* Automation & Robotics
  Ian Walker, Rice University (713) 527-8101 x2359
* Expert Systems / KBS
  David Hamilton, IBM Corporation (713) 282-8357
* Fuzzy Logic Applications in Space
  Robert Lea, NASA/JSC (713) 483-8105
* Fuzzy Logic Applications
  Joe Mica, NASA/Goddard (301) 286-1343
* Intelligent Computer Aided Training (ICAT)
  Bowen Loftin, U of Houston & NASA/JSC (713) 483-8070
* Intelligent Computer Environments
  Michele Izygon, NASA/JSC (713) 483-8110
* Virtual Reality
  Lui Wang, NASA/JSC (713) 483-8074
* Genetic Algorithms
* Object Oriented Programming
* Simulation & AI

SimTec ORGANIZING COMMITTEE:
General Chair: Tony Sava, IBM
Associate General Chair: Roberta Kirkham, Lockheed
Program Chair: Troy Henson, IBM
Technical Editor & SCS Representative: Mary Lou Padgett, Auburn U.
Local Arrangements & Associate Technical Editor: Ankur Hajare, MITRE
Exhibits Chair: Wade Webster, Lockheed
NASA Representative: Robert Savely, NASA/JSC
Juan Miro, ESA; Joe Mica, NASA/Goddard
========================================================================
WNN92/Houston: Neural Networks
* Advances / Applications / Architectures / Hybrid Systems
  Mary Lou Padgett, Auburn U (205) 821-2472/3488
* Controls / Neurocontrols / Fuzzy-Neurocontrols
  Robert Shelton, NASA/JSC (713) 483-5901
* Computer Vision / Coupled Oscillation
  Steve Anderson, KAFB (505) 256-7799
* Electronics/VLSI INNS-SIG
  Ralph Castain, Los Alamos National Labs (505) 667-3283
* Neural Networks and Fuzzy Logic
  Robert Shelton, NASA/JSC (713) 483-5901
* Signal Processing and Analysis / Pattern Recognition
  Michael Hinman, Rome Labs (315) 330-3175
* Sensor Fusion
  Robert Pap, Accurate Automation (615) 622-4642

NETS USERS GROUP MEETING
Robert Shelton, NASA/JSC (713) 483-5901

WNN Program: Mary Lou Padgett, Auburn U.; Robert Shelton, NASA/JSC; Walter J. Karplus, UCLA; Bart Kosko, USC; Paul Werbos, NSF

DEADLINES:
June 15: Abstracts and/or Draft Papers for Review
August 1 (firm): Camera-Ready Full Papers

SUBMIT TO: Mary Lou Padgett, Auburn U., 1165 Owens Road, Auburn, AL 36830
(205) 821-2472/3488  Fax: (619) 277-3930  email: mpadgett at eng.auburn.edu
========================================================================
SimTec 92: 1992 International Simulation Technology Conference
WNN92/Houston & FNN Symposium
November 4-7, 1992
South Shore Harbour / Johnson Space Center
Clear Lake, Texas (near Houston and Galveston)

If you wish to receive further information about SimTec, WNN and FNN, please return (preferably by email) the form printed below:

NAME:
AFFILIATION:
ADDRESS:
PHONE:
FAX:
EMAIL:

Please send more information on registration ( ) optional tours ( ). I intend to submit a paper ( ), a tutorial ( ), an abstract only ( ). I may give a demonstration ( ) or exhibit ( ).

Return to: Mary Lou Padgett, 1165 Owens Rd., Auburn, AL 36830. email: mpadgett at eng.auburn.edu

LOCAL ATTRACTIONS: Sailing, Tennis, Swimming, Golf, Evening Dinner Cruise, Day Tour of Galveston Island, Space Center Houston

EXHIBITOR INFORMATION: Wade Webster P: (713) 282-6589 FAX: (713) 282-6423

======= SCS OFFICE: (619) 277-3888 =======
SimTec 92  WNN92/Houston
Society for Computer Simulation, International
P.O. Box 17900, San Diego, CA 92177

From pratt at cs.rutgers.edu Mon May 4 18:01:05 1992
From: pratt at cs.rutgers.edu (pratt@cs.rutgers.edu)
Date: Mon, 4 May 92 18:01:05 EDT
Subject: Announcing the availability of a hyperplane animator
Message-ID: <9205042201.AA02052@klein.rutgers.edu>

-----------------------------------
Announcing the availability of an X-based neural network hyperplane animator
-----------------------------------

Lori Pratt and Paul Hoeper
Computer Science Dept, Rutgers University

Understanding neural network behavior is an important goal of many research efforts. Although several projects have sought to translate neural network weights into symbolic representations, an alternative approach is to understand trained networks graphically. Many researchers have used a display of hyperplanes defined by the weights in a single layer of a back-propagation neural network. In contrast to some network visualization schemes, this approach shows both the training data and the network parameters that attempt to fit those data. At NIPS 1990, Paul Munro presented a video which demonstrated the dynamics of hyperplanes as a network changes during learning.
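To make the geometry concrete before the details given below: each hidden node with incoming weights w1, ..., wn and bias b divides the input space along the plane w1*x1 + ... + wn*xn + b = 0, and the animator draws these planes together with the training data. A minimal sketch for the 2-input case, assuming numpy and matplotlib; the weight values are illustrative stand-ins, not the animator's data or code:

    import numpy as np
    import matplotlib.pyplot as plt

    W = np.array([[1.5, -0.8],    # one row of input-to-hidden weights per
                  [-0.4, 2.1]])   # hidden node: the coefficients of its plane
    b = np.array([0.3, -1.0])     # biases: the planes' offsets

    x1 = np.linspace(-2.0, 2.0, 50)
    for w, bias in zip(W, b):
        # Solve w[0]*x1 + w[1]*x2 + bias = 0 for x2 (assumes w[1] != 0).
        plt.plot(x1, -(w[0] * x1 + bias) / w[1])

    # The animator also scatter-plots the training data on the same axes and
    # redraws the lines after each weight update, so their movement is visible.
    plt.show()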
Munro's video was based on a program implemented for SGI workstations. At NIPS 1991, we presented an X-based hyperplane animator, similar in appearance to Paul Munro's, but with extensions to allow for interaction during training. The user may speed up, slow down, or freeze animation, and set various other parameters. Also, since it runs under X, this program should be more generally usable. This program is now being made publicly available. The remainder of this message contains more details of the hyperplane animator and ftp information.

------------------------------------------------------------------------------

1. What is the Hyperplane Animator?

The Hyperplane Animator is a program that allows easy graphical display of Back-Propagation training data and weights in a Back-Propagation neural network. Back-Propagation neural networks consist of processing nodes interconnected by adjustable, or ``weighted'' connections. Neural network learning consists of adjusting weights in response to a set of training data. The weights w1,w2,...wn on the connections into any one node can be viewed as the coefficients in the equation of an (n-1)-dimensional plane. Each non-input node in the neural net is thus associated with its own plane. These hyperplanes are graphically portrayed by the hyperplane animator. On the same graph it also shows the training data.

2. Why use it?

As learning progresses and the weights in a neural net alter, hyperplane positions move. At the end of the training they are in positions that roughly divide training data into partitions, each of which contains only one class of data. Observations of hyperplane movement can yield valuable insights into neural network learning.

3. How to install the Animator.

Although we've successfully compiled and run the hyperplane animator on several platforms, it is still not a stable program. It also only implements some of the functionality that we eventually hope to include. In particular, it only animates hyperplanes representing input-to-hidden weights. It does, however, allow the user to change some aspects of hyperplane display (color, line width, aspects of point labels, speed of movement, etc.), and allows the user to freeze hyperplane movement for examination at any point during training.

How to install the hyperplane animator:

1. Copy the file animator.tar.Z to your machine via ftp as follows:

   ftp cs.rutgers.edu (128.6.25.2)
   Name: anonymous
   Password: (your ID)
   ftp> cd pub/hyperplane.animator
   ftp> binary
   ftp> get animator.tar.Z
   ftp> quit

2. Uncompress animator.tar.Z
3. Extract files from animator.tar with: tar -xvf animator.tar
4. Read the README file there. It includes instructions for running a number of demonstration networks that are included with this distribution.

DISCLAIMER: This software is distributed as shareware, and comes with no warranties whatsoever for the software itself or systems that include it. The authors deny responsibility for errors, misstatements, or omissions that may or may not lead to injuries or loss of property. This code may not be sold for profit, but may be distributed and copied free of charge as long as the credits window, copyright statement in the ha.c program, and this notice remain intact.

-------------------------------------------------------------------------------

From kak at max.ee.lsu.edu Tue May 5 18:32:37 1992
From: kak at max.ee.lsu.edu (Dr. S. Kak)
Date: Tue, 5 May 92 17:32:37 CDT
Subject: Paper abstract
Message-ID: <9205052232.AA01231@max.ee.lsu.edu>

The following paper has just been published.

PRAMANA - J. PHYS., Vol. 38, March 1992, pp. 271-278
------------------------------------------------------------------
State Generators and Complex Neural Memories

Subhash C. Kak
Department of Electrical and Computer Engineering
Louisiana State University, Baton Rouge, LA 70803, USA

Abstract: The mechanism of self-indexing for feedback neural networks that generates memories from short subsequences is generalized so that a single bit together with an appropriate update order suffices for each memory. This mechanism can explain how stimulating an appropriate neuron can recall a memory. Although the information is distributed in this model, our self-indexing mechanism makes it appear localized. Also, a new complex-valued neuron model is presented to generalize McCulloch-Pitts neurons.

There are aspects to biological memory that are distributed and others that are localized. In the currently popular artificial neural network models the synaptic weights reflect the stored memories, which are thus distributed over the network. The question then arises whether these models can explain Penfield's observations on memory localization. This paper shows that such a localization does occur in these models if self-indexing is used. It is also shown how a generalization of the McCulloch-Pitts model of neurons appears essential in order to account for certain aspects of distributed information processing. One particular generalization, described in the paper, allows one to deal with some recent findings of Optican & Richmond (1987).

From kak at max.ee.lsu.edu Tue May 5 18:51:52 1992
From: kak at max.ee.lsu.edu (Dr. S. Kak)
Date: Tue, 5 May 92 17:51:52 CDT
Subject: No subject
Message-ID: <9205052251.AA08618@max.ee.lsu.edu>

Call for Papers

Two Sessions On NEURAL NETWORKS at the "First Int. Conf. on FUZZY THEORY AND TECHNOLOGY"
October 14-18, 1992; Durham, North Carolina
--------------------------------------------------------------
Deadline for Submission of Summaries: June 15, 1992
Authors informed about Acceptance Decision: July 15, 1992
Full Length Paper Due At the Conf.: Oct 15, 1992

Accepted papers will appear in hard-cover proceedings published by a major publisher. Revised papers for inclusion in the book would be due on April 15, 1993. The book will appear by October 1993.
_______________________________________________________________

Summaries should not exceed 6 single-spaced pages inclusive of figures. 3 copies of the summary (only hard copies) should be mailed to the organizer of the sessions:

Subhash Kak
Department of Electrical and Computer Engineering
Louisiana State University
Baton Rouge, LA 70803-5901, USA
Tel: (504) 388-5552
E-mail: kak at max.ee.lsu.edu

From omlinc at cs.rpi.edu Fri May 8 10:39:56 1992
From: omlinc at cs.rpi.edu (Christian Omlin)
Date: Fri, 8 May 92 10:39:56 EDT
Subject: TR available from the neuroprose archive
Message-ID: <9205081439.AA24477@cs.rpi.edu>

The following paper has been placed in the Neuroprose archive. Comments and questions are encouraged.

*******************************************************************
--------------------------------------------
TRAINING SECOND-ORDER RECURRENT NEURAL NETWORKS USING HINTS
--------------------------------------------

C.W. Omlin*                           C.L. Giles
Computer Science Department           *NEC Research Institute
Rensselaer Polytechnic Institute      4 Independence Way
Troy, N.Y. 12180                      Princeton, N.J. 08540
omlinc at turing.cs.rpi.edu          giles at research.nj.nec.com

Abstract
--------
We investigate a method for inserting rules into discrete-time second-order recurrent neural networks which are trained to recognize regular languages. The rules defining regular languages can be expressed in the form of transitions in the corresponding deterministic finite-state automaton. Inserting these rules as hints into networks with second-order connections is straightforward. Our simulation results show that even weak hints seem to improve the convergence time by an order of magnitude.

(To be published in Machine Learning: Proceedings of the Ninth International Conference (ML92), D. Sleeman and P. Edwards (eds.), Morgan Kaufmann, San Mateo, CA, 1992.)
********************************************************************

Filename: omlin.hints.ps.Z
----------------------------------------------------------------
FTP INSTRUCTIONS

unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get omlin.hints.ps.Z
ftp> bye
unix% zcat omlin.hints.ps.Z | lpr
(or whatever *you* do to print a compressed PostScript file)
----------------------------------------------------------------

Christian W. Omlin
Computer Science Department, Amos Eaton 119
Rensselaer Polytechnic Institute, Troy, NY 12180 USA
Phone: (518) 276-2930   Fax: (518) 276-4033
E-mail: omlinc at turing.cs.rpi.edu, omlinc at research.nj.nec.com
----------------------------------------------------------------------------

From "MVUB::BLACK%hermes.mod.uk" at relay.MOD.UK Fri May 8 07:41:00 1992
From: "MVUB::BLACK%hermes.mod.uk" at relay.MOD.UK (John V. Black @ DRA Malvern)
Date: Fri, 8 May 92 11:41 GMT
Subject: 1st CFP: Third IEE International Conference on Artificial Neural Networks
Message-ID:

IEE 3rd INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS
FIRST CALL FOR PAPERS

Date: 25-27 May 1993
Venue: Conference Centre, Brighton, United Kingdom

Contributions & Conference format: Oral & poster presentations in single, non-parallel sessions

Scope: 3 principal areas of interest
-----
Architecture & Learning Algorithms: Theory & design of neural networks, modular systems, comparison with classical techniques

Applications & industrial systems: Vision and image processing, speech and language processing, biomedical systems, robotics & control, AI applications, expert systems, financial and business systems

Implementations: parallel simulation/architecture, hardware implementations (analogue & digital), VLSI devices or systems, optoelectronics

Travel: Frequent trains from London (journey time 60 mins) and from Gatwick airport (30 mins)

Deadlines:
---------
1st September 1992: Receipt of synopsis by secretariat. The synopsis should not exceed 1 A4 page.
October 1992: Notification of provisional acceptance
25th January 1993: Receipt of full typescript for final review by secretariat. This should be a maximum of 5 A4 pages - approximately 5,000 words, less if illustrations are included.
Further details and contributions to:

Sheila Griffiths, ANN 93 Secretariat
IEE Conference Services
London WC2R 0BL, United Kingdom
Telephone (+44) 71 240 1871 Ext 222
Fax (+44) 71 497 3633
Telex 261176 IEE LDN G

David Lowe
Janet: lowe at uk.mod.hermes
Internet: lowe%hermes.mod.uk at relay.mod.uk

From ml92 at computing-science.aberdeen.ac.uk Fri May 8 14:02:30 1992
From: ml92 at computing-science.aberdeen.ac.uk (ML92 Aberdeen)
Date: Fri, 08 May 92 14:02:30
Subject: Ninth International Machine Learning Conference (ML92)
Message-ID: <9205081302.AA29997@kite>

MMM MMM LL 999 2222
MM MM MM MM LL 99 99 22 22
MM MM MM MM LL ==== 99 99 22
MM M MM LL 99999 222
MM MM LL 99 222
MM MM LLLLLL 999999 22222222

NINTH INTERNATIONAL MACHINE LEARNING CONFERENCE
UNIVERSITY OF ABERDEEN, SCOTLAND
1 - 3 JULY 1992

Registration Information
========================

On behalf of the organizing committee, we are pleased to announce that ML92, the Ninth International Machine Learning Conference, will be held at the University of Aberdeen, Scotland, July 1-3, 1992. Informal workshop sessions will be held immediately after the conference on Saturday, July 4, 1992. The conference will feature invited speakers, technical and poster sessions. The invited speakers for ML92 are:

* David Klahr, Carnegie-Mellon University, USA
* Ivan Bratko, Jozef Stefan Institute, Ljubljana, Slovenia
* Jude Shavlik, University of Wisconsin, USA

REGISTRATION FEE

The registration fees for ML92 are as follows: Normal - Pound Sterling 115, Student - Pound Sterling 75. These fees cover conference participation, proceedings, teas and coffees during breaks and a number of evening receptions. The deadline for registration is May 29, 1992. After this date, a late fee of Pound Sterling 20 will be charged. Cancellation fees are as follows: before May 29 - Pound Sterling 10, May 30 to June 30 - Pound Sterling 20, after June 30 - at the discretion of the General Chairman.

ACCOMMODATION

A large number of rooms in university halls of residence (student dorms) have been made available for ML92 delegates. Delegates requiring on-campus accommodation must return completed forms by May 29, 1992 - after that date accommodation cannot be guaranteed. In addition, block bookings have been made with three city centre hotels (all approx. 30 minutes walk, or a short bus ride, from the conference venue). Rates are as follows:

University Hall (Student Dorm): Single: Pound Sterling 18.50
Copthorne Hotel: Single: Pound Sterling 72.50; Double: Pound Sterling 87.50
Brentwood Hotel: Tues June 30 - Thurs July 2: Single: Pound Sterling 48.00, Double: Pound Sterling 58.00; Fri July 3 & Sat July 4: Single: Pound Sterling 28.00, Double: Pound Sterling 38.00
Caledonian Thistle Hotel: Tues June 30 - Thurs July 2: Single: Pound Sterling 88.20, Double: Pound Sterling 103.50; Fri July 3 & Sat July 4: Single: Pound Sterling 38.00, Double: Pound Sterling 76.00

Notes: All double room prices are based on 2 people occupying the room. All prices include breakfast except for Caledonian Thistle on June 30, July 1 & July 2. University accommodation should be booked using the conference registration form. Delegates requiring hotel accommodation should contact the hotels directly using the contact information below. When booking hotel accommodation delegates must mention ML92 to qualify for conference rates and should contact hotels before May 29, 1992, after which accommodation cannot be guaranteed.
Copthorne Hotel
122 Huntly Street
Aberdeen, AB1 1SU, SCOTLAND
Tel. +44 224 630404
Fax +44 224 640573
Telex 739707

Brentwood Hotel
101 Crown Street
Aberdeen, AB1 2HH, SCOTLAND
Tel. +44 224 595440
Fax +44 224 571593
Telex 739316

Caledonian Thistle Hotel
Union Terrace
Aberdeen, AB9 1HE, SCOTLAND
Tel. +44 224 640233
Fax +44 224 641627
Telex 73758

LUNCH

Lunch will be provided on campus, close to the conference venue. Cost: Pound Sterling 5.00 per day. All delegates are recommended to select conference lunches, as there are no alternatives close to the campus. Please indicate on the registration form the days on which you require lunch. (The cost of lunch on Saturday July 4 is included in the workshop fee.)

CONFERENCE DINNER

The conference dinner will be held on the evening of Friday, July 3 at the Pittodrie House Hotel, a country hotel outside Aberdeen. Places are limited and will be assigned on a first come, first served basis. Cost of the meal plus transport: Pound Sterling 27.00.

WORKSHOPS - SATURDAY JULY 4

A number of informal workshops are to be held on Saturday July 4, immediately after the conference. The five workshops are:

1. Biases in Inductive Learning
   Coordinator: Diana Gordon (Naval Research Lab, Washington DC, USA)
2. Computational Architectures for Supporting Knowledge Acquisition & Machine Learning
   Coordinator: Mike Weintraub (GTE Labs., USA)
3. Integrated Learning in Real-World Domains
   Coordinator: Patricia Riddle (Boeing, Seattle, USA)
4. Knowledge Compilation & Speedup Learning
   Coordinator: Prasad Tadepalli (Oregon State University, USA)
5. Machine Discovery
   Coordinator: Jan Zytkow (Wichita State University, USA)

ACCOMPANYING PERSONS PROGRAM

An accompanying persons program will be organized if numbers merit it. The area around Aberdeen has a high concentration of historic sites, castles, etc. and some of the most beautiful countryside in Scotland.

TRAVELLING TO ABERDEEN

Delegates travelling from the USA can now fly direct to Scotland. American Airlines fly Chicago to Glasgow, while British Airways fly New York to Glasgow. A regular rail service links Glasgow's Queen Street station to Aberdeen. A number of airlines fly direct to Aberdeen from European cities: Air France (Paris), SAS (Copenhagen) & AirUK (Amsterdam). Aberdeen is also served by regular flights from several UK airports: Manchester (British Airways, DanAir), London Gatwick (DanAir), London Heathrow (British Airways), Stansted (AirUK). A frequent high speed rail service links Aberdeen to London - journey time approx. 8 hours (sleeper services are available).

PRE- OR POST-CONFERENCE HOLIDAYS IN SCOTLAND

Delegates wishing to arrange holidays in Scotland for the period immediately before or after the conference may wish to contact Chieftain Tours Ltd, who can offer help with car hire, hotel bookings, etc. Please mention ML92 when contacting Chieftain Tours. Please note that all correspondence regarding holidays must be done direct with the tour company and not through the ML92 organizers. Chieftain Tours can also provide information about low-cost flights to Scotland from the USA and Canada. Contact details for Chieftain Tours:

Chieftain Tours Ltd.
A8 Whitecrook Centre
Whitecrook Street, Clydebank
Glasgow, Scotland G81 1QF
Tel +44 41 9511470
Fax +44 41 9511467
Telex 778169

Delegates from the USA may wish to make use of the following toll free fax number: 1 800 352 4350

PAYMENT

Payment may be by cheque or credit card (Visa or Mastercard).
Cheques must be in pounds sterling and must be made payable to "University of Aberdeen". Completed registration forms should be returned by mail or fax to: ML92 Registrations Department of Computing Science King's College University of Aberdeen Aberdeen, AB9 2UB Tel +44 224 272296 SCOTLAND Fax +44 224 487048 (Please note - registration by email is not acceptable.) CUT HERE ________________________________________________________________________ ML92 REGISTRATION FORM Please complete this form in typescript or BLOCK capitals and send to: ML92 Registrations, Department of Computing Science, King's College, University of Aberdeen, Aberdeen, AB9 2UB, SCOTLAND. PERSONAL DETAILS Name ___________________________________________ Address ___________________________________________ ___________________________________________ ___________________________________________ ___________________________________________ ___________________________________________ ___________________________________________ Email ___________________________________________ Telephone No. ___________________________________________ Fax No. ___________________________________________ REGISTRATION FEE Delegates requiring reduced (student) fee must provide proof of status (such as xerox of student ID). Please tick the appropriate boxes below. ---- Normal Registration Pound Sterling 115.00 | | ---- ---- Student Registration Pound Sterling 75.00 | | ---- ---- Late Fee (after May 29) Pound Sterling 20.00 | | ---- Registration Total ----------- (Pound Sterling): | | ----------- ACCOMMODATION Use this section to book university hall (student dorm) accommodation. (Pound Sterling 18.50 per night) University Hall (Single room) ---- Tuesday June 30 | | ---- ---- Wednesday July 1 | | ---- ---- Thursday July 2 | | ---- ---- Friday July 3 | | ---- ---- Saturday July 4 | | ---- Delegates requiring on-campus accommodation must return forms by May 29, 1992. Delegates whose completed forms arrive after this date cannot be guaranteed accommodation. Accommodation Total ----------- (Pound Sterling): | | ----------- LUNCH Please indicate the days on which you require lunch (Cost: Pound Sterling 5.00 per day) ---- Wednesday July 1 | | ---- ---- Thursday July 2 | | ---- ---- Friday July 3 | | ---- Lunch Total ----------- (Pound Sterling): | | ----------- CONFERENCE DINNER Yes - I wish to attend the ML92 Conference Dinner ---- (Pound Sterling 27.00) | | ---- Dinner Total ----------- (Pound Sterling): | | ----------- SPECIAL DIETARY REQUIREMENTS Please indicate below if you require a special diet. ---- Vegetarian | | ---- ---- Vegan | | ---- ---------------- Other (please specify) | | ---------------- ACCOMPANYING PERSONS PROGRAM Please indicate the number of persons ---------- likely to accompany you to the conference. | | ---------- WORKSHOPS - SATURDAY JULY 4 ---- Yes - I will be attending the ML92 workshops | | ---- ---- No - I will not be attending the ML92 workshops | | ---- If yes, please indicate which workshop you wish to attend: ---- Biases in Inductive Learning | | ---- ---- Computational Architectures for Supporting | | Knowledge Acquisition & Machine Learning ---- ---- Integrated Learning in Real-World Domains | | ---- ---- Knowledge Compilation & Speedup Learning | | ---- ---- Machine Discovery | | ---- The workshop fee includes the cost of the workshop proceedings, lunch on July 4 plus teas/coffees during breaks. 
---- Normal Pound Sterling 20 | | ---- ---- Student Pound Sterling 15 | | ---- Workshop Total ----------- (Pound Sterling): | | ----------- FINAL TOTAL ----------- (Pound Sterling): | | ----------- PAYMENT CREDIT CARD ----------- Please charge my VISA / MASTERCARD (delete as appropriate) Name (as it appears on card) ---------------------------------------------------------------------------- | | | | | | | | | | | | | | | | | | | | | | | | | | ---------------------------------------------------------------------------- Card Number ------------------------------------------------- | | | | | | | | | | | | | | | | | ------------------------------------------------- ----------------- Expiry Date | | | / | | | ----------------- Amount -------------------- (Pound Sterling): | | | | . | | | -------------------- Signature _________________________________________________ ** The credit card payment facility for ML92 is available ** ** as a result of the generous support of Bank of Scotland plc.** CHEQUE ------ I enclose a pounds sterling cheque made payable to "University of Aberdeen" for : ----------- | | ----------- ________________________________________________________________________ From dhw at santafe.edu Mon May 11 17:15:17 1992 From: dhw at santafe.edu (David Wolpert) Date: Mon, 11 May 92 15:15:17 MDT Subject: New posting Message-ID: <9205112115.AA17949@sfi.santafe.edu> ****************** DO NOT FORWARD TO OTHER LISTS ********** The following article has been placed in neuroprose under the name "wolpert.stack_gen.ps.Z". It appears in the current issue of Neural Networks. It is a rather major rewrite of a preprint of the same name. STACKED GENERALIZATION by David H. Wolpert Abstract: This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When used with multiple generalizers, stacked generalization can be seen as a more sophisticated version of cross-validation, exploiting a strategy more sophisticated than cross-validation's crude winner-takes-all for combining the individual generalizers. When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question. After introducing stacked generalization and justifying its use, this paper presents two numerical experiments. The first demonstrates how stacked generalization improves upon a set of separate generalizers for the NETtalk task of translating text to phonemes. The second demonstrates how stacked generalization improves the performance of a single surface-fitter. With the other experimental evidence in the literature, the usual arguments supporting cross-validation, and the abstract justifications presented in this paper, the conclusion is that for almost any real-world generalization problem one should use *some* version of stacked generalization to minimize the generalization error rate. This paper ends by discussing some of the variations of stacked generalization, and how it touches on other fields like chaos theory. 
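Before the retrieval instructions, a schematic sketch of the scheme may help readers decide whether to fetch the paper. This is an illustration written for this digest, not Wolpert's code; the two level-0 generalizers and all names are made up for the example. The essential move is that the level-1 generalizer is trained on the level-0 generalizers' held-out guesses rather than on the raw inputs.

    import numpy as np

    def fit_linear(X, y):
        # Least-squares fit with a bias column; returns a predict function.
        w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        return lambda Xq: np.c_[Xq, np.ones(len(Xq))] @ w

    def fit_nearest(X, y):
        # A 1-nearest-neighbour "generalizer".
        return lambda Xq: np.array(
            [y[np.argmin(((X - q) ** 2).sum(axis=1))] for q in Xq])

    def stacked_fit(X, y, level0=(fit_linear, fit_nearest), k=5):
        # Level-1 inputs: each level-0 generalizer's guess on held-out
        # points, obtained by partitioning the learning set k ways. Unlike
        # cross-validation's winner-takes-all, the guesses are *combined*.
        folds = np.array_split(np.random.permutation(len(X)), k)
        Z = np.zeros((len(X), len(level0)))
        for fold in folds:
            train = np.setdiff1d(np.arange(len(X)), fold)
            for j, fit in enumerate(level0):
                Z[fold, j] = fit(X[train], y[train])(X[fold])
        level1 = fit_linear(Z, y)               # generalize in the new space
        models = [fit(X, y) for fit in level0]  # refit level-0 on everything
        return lambda Xq: level1(np.column_stack([m(Xq) for m in models]))

On this reading, plain cross-validation is the degenerate case in which the level-1 generalizer merely selects the single column of Z with the lowest held-out error.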
To retrieve this article, do the following:

unix> ftp archive.cis.ohio-state.edu
login: anonymous
password: {your e-mail address}
ftp> binary
ftp> cd pub/neuroprose
ftp> get wolpert.stack_gen.ps.Z
ftp> quit
unix> uncompress wolpert.stack_gen.ps.Z
unix> lpr wolpert.stack_gen.ps {or however you get postscript printout}

From jose at tractatus.siemens.com Mon May 11 15:32:19 1992
From: jose at tractatus.siemens.com (Steve Hanson)
Date: Mon, 11 May 1992 15:32:19 -0400 (EDT)
Subject: NIPS*92 Deadline
Message-ID:

Don't forget... if you haven't yet started your papers... start now! Deadline is MAY 22 (POSTMARKED). Only 12 more shopping days left.

Stephen J. Hanson
Learning Systems Department
SIEMENS Research
755 College Rd. East
Princeton, NJ 08540

From cabestan at eel.upc.es Tue May 12 13:32:20 1992
From: cabestan at eel.upc.es (JOAN CABESTANY)
Date: Tue, 12 May 1992 13:32:20 UTC+0100
Subject: IWANN93 Workshop
Message-ID: <355*/S=cabestan/OU=eel/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS>

Please find herewith the First Announcement and Call for Papers of IWANN93 (International Workshop on Artificial Neural Networks), to be held in Spain (near Barcelona) in June 1993. Thanks.
J. Cabestany, UPC
cabestan at eel.upc.es

***************************************************************************
INTERNATIONAL WORKSHOP ON ARTIFICIAL NEURAL NETWORKS IWANN'93
First Announcement and Call for Papers
Sitges (Barcelona), Spain
June 9-11, 1993

SPONSORED BY
IFIP (Working Group in Neural Computer Systems, WG10.6)
IEEE Neural Networks Council
UK&RI communication chapter of IEEE
Spanish Computer Society chapter of IEEE
AEIA (IEEE Affiliate society)

ORGANISED BY
Universidad Politecnica de Catalunya
Universidad Autonoma de Barcelona
Universidad de Barcelona
UNED (Madrid)

IWANN'91 (International Workshop on Artificial Neural Networks) was held in Granada (Spain) in September 1991. People from over 10 countries attended the Workshop, and over 50 oral presentations were given. IWANN'93 will be organised in June 1993 in Sitges (Spain), with the following Scope and Topics.

SCOPE

Artificial Neural Networks (ANN) were first developed as structural or functional models of biological systems in an attempt to emulate their unique problem-solving abilities. The main interest in neural topics stems from their advantages in plasticity, speed and autonomy over conventional hardware and software, which have traditionally proven inadequate for handling certain tasks such as perception, learning, planning, knowledge acquisition and natural language processing. IWANN's main objective is to offer a forum for achieving a global, innovative and advanced perspective on ANN. In addition to conventional Neural Networks aspects, such as algorithms, architectures, software development tools, learning, implementations and applications, IWANN'93 will also be concerned with other complementary topics such as neural computation theory and methodology, physiological and anatomical basis, local computation models, and organization and structures resembling biological systems. Contributions on the following aspects are welcome:

* New models for biological networks.
* New algorithms and architectures for autonomy and self-programmability using local learning strategies.
* Relationship with symbolic and knowledge-based systems.
* New implementation proposals using general or specific processors. Implementations with embedded learning are especially invited.
* Applications.
Finally, it is expected that IWANN'93 will also serve as a meeting point for engineers and scientists to establish professional contacts and relationships.

TOPICS

1 - BIOLOGICAL PERSPECTIVES: anatomical and physiological basis, local circuits, biophysics and natural computation.
2 - THEORETICAL MODELS: analog, logic, inferential, statistical and fuzzy models. Statistical mechanics.
3 - ORGANIZATIONAL PRINCIPLES: network dynamics, self-organization, competition, recurrency, evolutionary optimization and genetic algorithms.
4 - LEARNING: supervised and unsupervised strategies, local self-programming, continuous learning, evolutionary algorithms.
5 - COGNITIVE SCIENCE AND AI: perception and psychophysics, symbolic reasoning and memory.
6 - NEURAL SOFTWARE: languages, tools, simulation and benchmarks.
7 - HARDWARE IMPLEMENTATION: VLSI, parallel architectures, neurochips, preprocessing networks, neurodevices, benchmarks, optical and other technologies.
8 - NEURAL NETWORKS FOR SIGNAL PROCESSING: preprocessing, vision, speech recognition, adaptive filtering, noise reduction.
9 - NEURAL NETWORKS FOR COMMUNICATIONS SYSTEMS: modems and codecs, network management, digital communications.
10 - NEURAL NETWORKS FOR CONTROL AND ROBOTICS: system identification, motion, adaptive control, navigation, real time applications.

LOCATION

SITGES (BARCELONA), JUNE 9-11, 1993. Sitges is located 35 km south of Barcelona. The city is well known for its beaches and its promenade facing the Mediterranean sea. Sitges is also known for its cultural events and history (the Maricel museum; painters like Santiago Rusinol lived there and left part of their heritage). Sitges can be easily reached by car or by train (about 30 minutes from Barcelona).

LANGUAGE

English will be the official language of IWANN'93. Simultaneous translation will not be provided.

CALL FOR PAPERS

The Programme Committee seeks original papers on the above mentioned Topics. Authors should pay special attention to the explanation of the theoretical and technical choices involved, point out possible limitations and describe the current state of their work. Authors must take into account the following:

INSTRUCTIONS TO AUTHORS

Authors must submit four copies of full papers, not exceeding 6 pages in DIN-A4 format. The heading should be centered and include:
. Title in capitals.
. Name(s) of author(s).
. Address(es) of author(s).
. A 10 line abstract.
Three blank lines should be left between each of the above items, and four between the heading and the body of the paper. Use 1.6 cm left, right, top and bottom margins, single-spaced, and do not exceed the 6 page limit.

In addition, one sheet should be attached including the following information:
. Title and author(s) name(s).
. A list of five keywords.
. A reference to the Topics the paper relates to.
. Postal address, phone and fax numbers and E-mail (if available).

All received papers will be reviewed by the Programme Committee. Accepted papers may be presented orally or as poster panels; however, all accepted contributions will be published in full length. (Springer-Verlag Proceedings are expected.)

DATES

Second Call for Papers: September 1, 1992
Final date for submission: November 30, 1992
Committee's decision: March 15, 1993
Workshop: June 9-11, 1993

CONTRIBUTIONS MUST BE SENT TO:

Prof. Jose Mira
Dpto. Informatica y Automatica, UNED
Senda del Rey, s/n
28040 MADRID (Spain)
Phone: +34.1.544.60.00
Fax: +34.1.544.67.37
E-mail: jose.mira at human.uned.es

ORGANIZATION COMMITTEE
Jose Mira, UNED, Madrid (E) **Chairman**
Senen Barro, Unv. de Santiago (E)
Joan Cabestany, Unv. Pltca. de Catalunya (E)
Trevor Clarkson, King's College London (UK)
Ana Delgado, UNED, Madrid (E)
Federico Moran, Unv. Complutense, Madrid (E)
Conrad Perez, Unv. de Barcelona (E)
Francisco Sandoval, Unv. de Malaga (E)
Elena Valderrama, CNM - Unv. Autonoma de Barcelona (E)

LOCAL COMMITTEE

Joan Cabestany, Unv. Pltca. de Catalunya (E) **Chairman**
Jordi Carrabina, CNM - Unv. Autonoma de Barcelona (E)
Francisco Castillo, Unv. Pltca. de Catalunya (E)
Andreu Catala, Unv. Pltca. de Catalunya (E)
Gabriela Cembrano, Instituto de Cibernetica, CSIC, Barcelona (E)
Conrad Perez, Unv. de Barcelona (E)
Elena Valderrama, CNM - Unv. Autonoma de Barcelona (E)

GENERAL CHAIRMAN

Alberto Prieto, Unv. Granada, Spain

PROGRAMME COMMITTEE

Jose Mira, UNED, Madrid (E) **Chairman**
Sanjeev B. Ahuja, Nielsen A.I. Research & Development, Bannockburn (USA)
Igor Aleksander, Imperial College, London (UK)
Luis B. Almeida, INESC, Lisboa (P)
Shun-ichi Amari, Faculty of Engineering, Unv. Tokyo (Jp)
Xavier Arreguit, CSEM SA (CH)
Francois Blayo, Ecole Polytechnique Federale de Lausanne (CH)
Colin Campbell, University of Bristol (UK)
Leon Chua, University of California (USA)
Trevor Clarkson, King's College London (UK)
Michael Cosnard, Ecole Normale Superieure de Lyon (F)
Marie Cottrell, Unv. Paris I (F)
Dante Del Corso, Politecnico di Torino (I)
Gerard Dreyfus, ESPCI Paris (F)
J. Simoes da Fonseca, Unv. de Lisboa (P)
Kunihiko Fukushima, Faculty of Engineering Science, Osaka University (Jp)
Karl Goser, Unv. Dortmund (D)
Francesco Gregoretti, Politecnico di Torino (I)
Karl E. Grosspietsch, Mathematik und Datenverarbeitung (GMD), St. Augustin (D)
Mohamad Hassoun, Wayne State University (USA)
Jeanny Herault, INPG Grenoble (F)
Jaap Hoekstra, Delft University of Technology (N)
P.T.W. Hudson, Faculteit der Sociale Wetenschappen, Leiden University (N)
Jose Luis Huertas, CNM - Universidad de Sevilla (E)
Simon Jones, Unv. Nottingham (UK)
Christian Jutten, INPG Grenoble (F)
H. Klar, Institut fur Mikroelektronik, Technische Universitat Berlin (D)
Michael D. Lemmon, University of Notre Dame, Notre Dame (USA)
Panos Ligomenides, Unv. of Maryland (USA)
Javier Lopez Aligue, Unv. de Extremadura (E)
Robert J. Marks II, University of Washington (USA)
Anthony N. Michel, University of Notre Dame, Notre Dame (USA)
Roberto Moreno, Unv. Las Palmas Gran Canaria (E)
Josef A. Nossek, Inst. of Network Theory and Circuit Design, Tech. Univ. of Munich (D)
Francisco J. Pelayo, Unv. de Granada (E)
Franz Pichler, Johannes Kepler Univ. (A)
Ulrich Ramacher, Siemens AG, Munich (D)
Tamas Roska, Comp. & Aut. Res. Inst., Hungarian Academy of Science, Budapest (H)
Leonardo Reyneri, Universita di Pisa (I)
Peter A. Rounce, Dept. Computer Science, University College London (UK)
V.B. David Sanchez, German Aerospace Research Establishment, Wessling (G)
E. Sanchez-Sinencio, Texas A&M University (USA)
Renato Stefanelli, Politecnico di Milano (I)
T.J. Stonham, Brunel - University of West London (UK)
John G. Taylor, Centre for Neural Networks, King's College London (UK)
Carme Torras, Instituto de Cibernetica, CSIC, Barcelona (E)
Philip Treleaven, Dept. Computer Science, University College London (UK)
Marley Vellasco, Pontificia Universidade Catolica do Rio de Janeiro (Br)
Michel Verleysen, Unv. Catholique de Louvain (B)
Michel Weinfeld, Ecole Polytechnique Paris (F)

INFORMATION FORM to be returned as soon as possible to:
Prof. J. Cabestany, IWANN'93
Dep. Ingenieria Electronica, UPC
P.O. Box 30.002
08080 Barcelona, SPAIN
Phone: +34.3.401.67.42
Fax: +34.3.401.68.01
E-mail: cabestan at eel.upc.es

(cut here) .........................................................................

IWANN'93 INTERNATIONAL WORKSHOP ON ARTIFICIAL NEURAL NETWORKS
Sitges (Barcelona), Spain
June 9-11, 1993

Name: ____________________________________________________________
Company/Organization: ____________________________________________
Address: _________________________________________________________
State/Country: ___________________________________________________
Phone: _____________________ Fax: _______________________
E-mail: ____________________________________

I intend to attend the Workshop: ______
I intend to submit a paper: _____
Tentative title:
Authors:
Related topics:

From ajr at eng.cam.ac.uk Tue May 12 11:17:24 1992
From: ajr at eng.cam.ac.uk (Tony Robinson)
Date: Tue, 12 May 92 11:17:24 BST
Subject: JOB AD: Hybrid Hidden Markov/Connectionist Models for Speech Recognition
Message-ID: <17495.9205121017@dsl.eng.cam.ac.uk>

[ My apologies if this appears on your screen more than once ]

Cambridge University Engineering Department
Research Assistant in Speech Recognition

The Speech Group expects to appoint an RA for an ESPRIT project on large vocabulary speech recognition using hybrid HMM/ANN structures. Candidates will have PhD experience in one of these techniques and an interest in C programming in a parallel environment, joining a multinational research collaboration. It is expected that the project will start during the summer of 1992, and will last for 15 months with a possible extension to 36 months. The RA starting date is negotiable. Salary is age-related on RA1A rates. Further information and application forms can be obtained from Mavis Barber, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ. Closing date for applications is 1 June 1992.

Further Particulars

The Speech Group expects to appoint an RA for an ESPRIT project on large vocabulary speech recognition using hybrid HMM/ANN structures. The consortium for this Basic Research Project is: Lernout & Hauspie Speech Products, Brussels; INESC, Lisbon; CUED, with ICSI (International Computer Science Institute), Berkeley, as sub-contractor. The baseline structures are the HMM/MLP pioneered by Bourlard (LHS) and Morgan (ICSI) and the HMM/recurrent NN (RNN) studied by Robinson (CUED). These will be extended in several ways: by incorporating improvements analogous to those used in state-of-the-art HMM recognisers; by further development of the theoretical bases of hybrid classification; by the development of better acoustic features with enhanced speaker and communication channel robustness; by the development of better training procedures; and by the investigation of fast speaker adaptation in hybrids. The project also aims to demonstrate real-time recognisers and their evaluation against international databases such as those used by the DARPA community. A major feature of the project will be the use of the RAP, the ring array processor developed by ICSI, at each site, with further development of hardware and software tools by ICSI for the project. The RAP has 16 digital signal processors with a peak performance of 0.5 GFLOPS. Applicants should have PhD or equivalent experience in HMMs or ANNs in speech recognition.
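For readers unfamiliar with the baseline structures named above: in the Bourlard/Morgan-style hybrid, the network estimates state posteriors, which are turned into scaled likelihoods and decoded with an ordinary Viterbi pass. A minimal sketch of that glue, with illustrative arrays and names rather than project code:

    import numpy as np

    def scaled_log_likelihoods(posteriors, state_priors):
        # posteriors: (T, S) network outputs P(state | acoustic frame).
        # By Bayes' rule, up to a term constant in the state:
        #   p(frame | state) is proportional to P(state | frame) / P(state).
        return np.log(posteriors) - np.log(state_priors)

    def viterbi(log_emit, log_trans, log_init):
        # Standard Viterbi over the scaled likelihoods.
        # log_emit: (T, S); log_trans: (S, S); log_init: (S,).
        T, S = log_emit.shape
        delta = log_init + log_emit[0]
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_trans   # predecessor x successor
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_emit[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]                          # best state sequence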
The RAP uses C & C++ in a UNIX environment and applicants should have relevant experience and an interest in extending their skills in a parallel environment. The RA at Cambridge will be mainly responsible for the development of HMM/RNN structures. It is expected that the project will start during the summer of 1992, initially for a period of 15 months but with a possible extension to 36 months. The starting date for the RA is negotiable. Further details and application forms from Mavis Barber at Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, mavis at eng.cam.ac.uk

F. Fallside
A.J. Robinson
May 1992

From shultz at hebb.psych.mcgill.ca Tue May 12 08:42:23 1992
From: shultz at hebb.psych.mcgill.ca (Tom Shultz)
Date: Tue, 12 May 92 08:42:23 EDT
Subject: No subject
Message-ID: <9205121242.AA10422@hebb.psych.mcgill.ca>

Subject: Abstract
Date: 12 May '92

Please do not forward this announcement to other boards. Thank you.
-------------------------------------------------------------
The following paper has been placed in the Neuroprose archive at Ohio State University:

A Constraint Satisfaction Model of Cognitive Dissonance Phenomena

Thomas R. Shultz
Department of Psychology
McGill University

Mark R. Lepper
Department of Psychology
Stanford University

Abstract

A constraint satisfaction network model simulated cognitive dissonance data from the insufficient justification and free choice paradigms. The networks captured the psychological regularities in both paradigms. In the case of free choice, the model fit the human data better than did cognitive dissonance theory.

This paper will be presented at the Fourteenth Annual Conference of the Cognitive Science Society, Indiana University, 1992. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message.

FTP INSTRUCTIONS:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get shultz.dissonance.ps.Z
ftp> quit
unix> uncompress shultz.dissonance.ps.Z

Tom Shultz
Department of Psychology
McGill University
1205 Penfield Avenue
Montreal, Quebec H3A 1B1
Canada
shultz at psych.mcgill.ca

From SCHNEIDER at vms.cis.pitt.edu Tue May 12 17:17:00 1992
From: SCHNEIDER at vms.cis.pitt.edu (SCHNEIDER@vms.cis.pitt.edu)
Date: Tue, 12 May 1992 16:17 EST
Subject: Post-Doc in Neural Processes in Cognition for Fall 92
Message-ID: <01GJXHY8CO40PF8MJI@vms.cis.pitt.edu>

Postdoctoral Training in NEURAL PROCESSES IN COGNITION
For Fall 1992

The Pittsburgh Neural Processes in Cognition program, now in its second year, is providing interdisciplinary training in brain sciences. The National Science Foundation has established an interdisciplinary program investigating the neurobiology of cognition utilizing neuroanatomical, neurophysiological, and computer simulation procedures. Students perform original research investigating cortical function at multiple levels of analysis. State of the art facilities include: computerized microscopy, human and animal electrophysiological instrumentation, behavioral assessment laboratories, MRI and PET brain scanners, the Pittsburgh Supercomputing Center, and access to human clinical populations. This is a joint program between the University of Pittsburgh, its School of Medicine, and Carnegie Mellon University. Postdoctoral applicants MUST HAVE UNITED STATES RESIDENT'S STATUS.
Applications are requested by June 20, 1992. Applicants must have a sponsor from the training faculty for the Neural Processes in Cognition Program. Application materials will be sent upon request. Each student receives full financial support, travel allowances, and computer workstation access. Last year's class included mathematicians, psychologists, and neuroscience researchers. Applications are encouraged from students with interest in biology, psychology, engineering, physics, mathematics, or computer science. For information contact: Professor Walter Schneider, Program Director, Neural Processes in Cognition, University of Pittsburgh, 3939 O'Hara St, Pittsburgh, PA 15260, call 412-624-7064 or Email to NEUROCOG at VMS.CIS.PITT.BITNET

From joe at cogsci.edinburgh.ac.uk Wed May 13 06:10:26 1992
From: joe at cogsci.edinburgh.ac.uk (Joe Levy)
Date: Wed, 13 May 92 11:10:26 +0100
Subject: Research Position
Message-ID: <10229.9205131010@muriel.cogsci.ed.ac.uk>

Research Position
University of Edinburgh, UK
Centre for Cognitive Science

Applications are invited for an RA position, at pre- or post-doctoral level (up to 16,755 pounds), working on ESRC-funded research into connectionist models of human language processing. The research involves using recurrent neural networks trained on speech corpora to model psycholinguistic data. Experience in C programming, neural networks and human language processing would be valuable. An early start date is preferred and the grant will run until 31 March 1994. Applications consisting of a CV and the names and addresses of 2 referees should be sent by Tuesday 2 June to Dr Richard Shillcock, Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, UK, to whom informal enquiries should also be addressed (email rcs%cogsci.ed.ac.uk at nsfnet-relay.ac.uk; tel +44 31 650 4425).

From rosanna at cns.edinburgh.ac.uk Thu May 14 16:30:55 1992
From: rosanna at cns.edinburgh.ac.uk (Rosanna Maccagnano)
Date: Thu, 14 May 92 16:30:55 BST
Subject: Email address for NETWORK
Message-ID: <12931.9205141530@subnode.cns.ed.ac.uk>

The correct email address for IOP Publishing is:
In the UK: On Janet IOPPL at UK.AC.RL.GEC-B
Outside the UK: IOPPL at GEC-B.RL.AC.UK
Rosanna

From 75008378%dcu.ie at BITNET.CC.CMU.EDU Thu May 14 12:02:00 1992
From: 75008378%dcu.ie at BITNET.CC.CMU.EDU (Barry McMullin, DCU (Dublin, Ireland) )
Date: Thu, 14 May 1992 16:02 GMT
Subject: Workshop: "Autopoiesis and Perception" - Call for Participation.
Message-ID: <01GK0A1NYOSG985B1A@dcu.ie>

[The workshop announced below addresses an essentially cross-disciplinary subject area, potentially involving philosophy, computer science, engineering and biology - to name but a few. It is therefore being posted across a variety of forums (fora?): so my apologies for the noise if you see it more than once! All flames directly to me, please. In case you wish to print out the plain ascii text, it has been structured with 72 columns, 66 lines per page. Please pass on the notice to anyone else who may be interested. If you require further information, or wish to register, please follow the instructions below; but note that, due to other commitments over the next fortnight, no acknowledgements will be issued before May 27th. - Barry.]
------------------------------- CUT HERE -------------------------------

AUTOPOIESIS AND PERCEPTION
A Workshop within ESPRIT BRA 3352
DUBLIN CITY UNIVERSITY: 25-26 August 1992

************ CALL FOR PARTICIPATION ************

A common sense idea of perception is that, through the information processing capabilities of our sensory/brain system, we come to know "the" objectively real, external, world. However, this "spectator" paradigm has not proved very effective (so far) in attempts to build artificial perceptual systems. It therefore seems appropriate to critically examine this concept of perception. One alternative idea is to take a participatory rather than a spectator view of the relationship between "us" and "the external world". To perceive is not to process sensory data, but to apprehend meaning through interaction. Autopoiesis is an organizational paradigm which can support such a participatory view of perception. The concept of autopoiesis (lit. "self-producing") was introduced to characterise the organisation which makes living systems autonomous. An autopoietic organisation is one which is self-renewing (in a suitable environment); autopoietic systems maintain their organisation through a network of component-producing processes such that the interacting components generate the same network of processes which produced them. In the autopoietic paradigm, perception is an emergent phenomenon characteristic of the interaction between an autopoietic system and its environment: the system responds to perturbations in just such a way as to maintain its (autopoietic) identity.

Structure:
----------
The key objective of the workshop is to allow for extensive, open discussion, and it has been structured accordingly. It will consist of a small number of prepared papers by invited keynote speakers, punctuated with extended discussion periods; it will run over one and a half days (from 9.30 AM on 25th August to 1.00 PM on 26th August). To maximize the benefit of the discussion, the workshop will be limited to 30 participants.

Invited Speakers (Confirmed):
-----------------------------
Prof. Francisco Varela, C.R.E.A., Ecole Polytechnique, Paris.
Dr. David Vernon, DG XIII, EC Commission, Brussels, and Computer Science, Trinity College Dublin.
Dr. Dermot Furlong, Department of Microelectronic and Electrical Engineering, Trinity College Dublin.

Further Information: Barry McMullin, Electronic Engineering,
-------------------- Dublin City University, Dublin 9, IRELAND.
E-mail: Phone: +353-1-7045432 Fax: +353-1-7045508

[Page 1 of 2]
------------------------------------------------------------------------

AUTOPOIESIS AND PERCEPTION
A Workshop within ESPRIT BRA 3352
DUBLIN CITY UNIVERSITY: 25-26 August 1992

************* REGISTRATION FORM *************

The deadline for receipt of registration information is Friday, 31st July 1992. Due to the limit of 30 participants, early registration is advisable. However, postal services to Dublin are currently severely affected by an industrial dispute. Therefore, if you wish to register, it is recommended that you return this form by E-mail or FAX as soon as possible, paying the registration fee by Bank Transfer. Please advise if you require information on hotel accommodation; campus accommodation will be available at a rate of IRP 20 per night (approx.) - a separate booking form will be provided on request.
The DCU campus is situated in the north Dublin suburb of Glasnevin, is less than 10 minutes from Dublin International Airport, and has easy access to the city centre. All correspondence should be directed to:

Barry McMullin, Electronic Engineering, Dublin City University, Dublin 9, IRELAND.
E-mail: Phone: +353-1-7045432 Fax: +353-1-7045508

Name:...................................................................
Organisation:...........................................................
Address:................................................................
City:........................... Country:..........................
Phone:............... FAX:................. E-mail:...................

Is your organisation a member of the BRA 3352 Working Group on Vision? YES___ NO___
If YES, which consortium? ...................

Registration Fee: Irish Pounds 60 (or equivalent)

Payment Form: (Check One)
1) Internal Accounting (working group members only) ____
   Requires signature of partner representative listed in BRA 3352 Technical Annex:
   Partner Representative:................... Signature................
2) Bank Transfer: ____
   Account Name: Dublin City University Conference a/c
   Bank: AIB Bank, 7-12 Dame St., Dublin 2, IRELAND.
   Account Number: 91765-215
   Bank Sorting Code: 93 20 86
   (IMPORTANT: Quote your NAME *and* "Ref: 421/01/121 (Autopoiesis)" in all bank transfer documents.)
3) Bank Draft (made payable to "Dublin City University"): ____
   Equivalent of Irish Pounds amount in any EC currency drawn on a local bank -OR- DM, US$, or Sterling draft drawn on a UK bank. All charges to be borne by the remitter.

[Page 2 of 2]
------------------------------- CUT HERE -------------------------------

From thildebr at aragorn.csee.lehigh.edu Wed May 13 14:28:30 1992
From: thildebr at aragorn.csee.lehigh.edu (Thomas H. Hildebrandt )
Date: Wed, 13 May 92 14:28:30 -0400
Subject: Preprints available
Message-ID: <9205131828.AA05436@aragorn.csee.lehigh.edu>

My translation of the following paper is soon to appear in Electronics, Information, and Communication in Japan (Scripta Technica, Silver Spring, MD). To obtain a copy in PostScript format (sans figure), e-mail your request directly to me. I cannot supply hardcopy, due to budgetary constraints, so please don't ask; also, the one figure is not necessary for understanding the paper.

Thomas H Hildebrandt
Visiting Researcher
EE and CS Department
Room 304 Packard Lab 19
Lehigh University
Bethlehem, PA 18015

\title{Analysis of Neural Network Energy Functions Using Standard Forms \thanks{Translated from the Japanese by Thomas H. Hildebrandt. The original appeared in Denshi Joohoo Tsuushin Gakkai Ronbunshi (Transactions of the Institute of Electronics, Information, and Communication Engineers (of Japan)), V.J74-D-II, N.6, pp.804--811, June 1991}}
\author{Masanori IZUMIDA\thanks{Information Engineering Department, Faculty of Engineering, Ehime University, Matsuyama-shi 790 Japan}, Kenji MURAKAMI$^{\dagger}$ and Tsunehiro AIBARA$^{\dagger}$}

\begin{abstract}
In this paper, we discuss a method for analyzing the energy function of a Hopfield-type neural network. In order to analyze the energy function which solves the given minimization problem (or simply, the problem), we define the standard form of the energy function. In general, a multidimensional energy function is complex, and it is difficult to investigate the energy functions arising in practice, but when placed in the standard form, it is possible to compare and contrast the forms of the energy functions themselves.
Since an orthonormal transformation does not change the form of an energy function, we can stipulate that energy functions which have the same form be represented identically by the standard form. Further, according to the theory associated with standard forms, it is possible to partition a general energy function according to the eigenvalues of the connection weight matrix; by analyzing each component energy function, we can investigate the properties of the actual energy function. Using this method, we analyze the energy function given by Hopfield for the Travelling Salesman Problem, and study how the minimization problem is realized in the energy function. Also, we study the mutual effects of a linear combination of energy functions and discuss the results.
\end{abstract}

KEYWORDS: Neural network, energy function, eigenvalue decomposition, travelling salesman problem, standard form.

From gmk%idacrd at uunet.UU.NET Thu May 14 23:04:05 1992
From: gmk%idacrd at uunet.UU.NET (Gary M. Kuhn)
Date: Thu, 14 May 92 23:04:05 EDT
Subject: IEEE NNSP'92 program & registration form
Message-ID: <9205150304.AA17400@>

Dear Connectionists,

Attached, please find the e-mail version of the advance program for IEEE NNSP'92, including registration form. This Workshop will be limited to about 200 persons. Registration is on a first-come, first-served basis. Thank you.

Gary Kuhn - Publicity

Gary M. Kuhn               Internet: gmk%idacrd.uucp at princeton.edu
CCRP - IDA                 UUCP: princeton!idacrd!gmk
Thanet Road                PHONE: (609) 924-4600
Princeton, NJ 08540 USA    FAX: (609) 924-3061

***************************************************************************

ADVANCE PROGRAM
IEEE Workshop on Neural Networks for Signal Processing
August 31 - September 2, 1992
Copenhagen

The Danish Computational Neural Network Center CONNECT and
The Electronics Institute, The Technical University of Denmark
In cooperation with the IEEE Signal Processing Society

Invitation to Participate in the 1992 IEEE Workshop on Neural Networks for Signal Processing.

The members of the Workshop Organizing Committee welcome you to the 1992 IEEE Workshop on Neural Networks for Signal Processing. The 1992 Workshop is the second workshop held in this area. The first took place in 1991 in Princeton, NJ, USA. The Workshop is organized by the IEEE Technical Committee for Neural Networks and Signal Processing. The purpose of the Workshop is to foster informal technical interaction on topics related to the application of neural networks to signal processing problems.

Workshop Location

The 1992 Workshop will be held at Hotel Marienlyst, Ndr. Strandvej 2, DK-3000 Helsingoer, Denmark, tel: +45 49202020, fax: +45 49262626. Helsingoer is a small town a little north of Copenhagen. The Workshop banquet will be held on Tuesday evening, September 1, at Kronborg Castle, which is situated close to the workshop hotel.

Workshop Proceedings

The proceedings of the Workshop, entitled "Neural Networks for Signal Processing - Proceedings of the 1992 IEEE Workshop", will be distributed at the Workshop. The registration fee covers one copy of the proceedings.

Registration Information

The Workshop registration information is given at the end of this program. It is possible to apply for a limited number of partial travel and registration grants via the Program Chair. The address is given at the end of this program.
Program Overview

Time     | Monday 31/8-92     | Tuesday 1/9-92        | Wednesday 2/9-92
==========================================================================
8:15 AM  | Opening Remarks    |                       |
--------------------------------------------------------------------------
8:30 AM  | Opening Keynote    | Keynote Address       | Keynote Address
         | Address            |                       |
--------------------------------------------------------------------------
9:30 AM  | Learning & Models  | Speech 2              | Nonlinear Filtering
         | (Lecture)          | (Lecture)             | by Neural Networks
         |                    |                       | (Lecture)
--------------------------------------------------------------------------
11:00 AM | Break              | Break                 | Break
--------------------------------------------------------------------------
11:30 AM | Speech 1           | Learning, System-     | Image Processing and
         | (Poster preview)   | identification and    | Pattern Recognition
         |                    | Spectral Estimation   | (Poster preview)
         |                    | (Poster preview)      |
--------------------------------------------------------------------------
12:30 PM | Lunch              | Lunch                 | Lunch
--------------------------------------------------------------------------
1:30 PM  | Speech 1           | Learning, System-     | Image Processing and
         | (Poster)           | identification and    | Pattern Recognition
         |                    | Spectral Estimation   | (Poster)
         |                    | (Poster)              |
--------------------------------------------------------------------------
2:45 PM  | Break              | Break                 | Break
--------------------------------------------------------------------------
3:15 PM  | System             | Image Processing and  | Application Driven
         | Implementations    | Analysis              | Neural Models
         | (Lecture)          | (Lecture)             | (Lecture)
--------------------------------------------------------------------------
Evening  | Panel Discussion   | Visit and Banquet at  |
         | (8 PM)             | Kronborg Castle       |
==========================================================================

Evening Events

A Pre-Workshop reception will be held at Hotel Marienlyst at 7:00 PM on Sunday, August 30, 1992. Tuesday, September 1, 1992, 5:00 PM visit to Kronborg Castle and 7:00 PM banquet at the Castle.

TECHNICAL PROGRAM

Monday, August 31, 1992

8:15 AM; Opening Remarks: S.Y. Kung, F. Fallside, Workshop Chairs, Benny Lautrup, connect, Denmark, John Aa. Sorensen, Workshop Program Chair.

8:30 AM; Opening Keynote: System Identification Perspective of Neural Networks, Professor Lennart Ljung, Department of Electrical Engineering, Linkoping University, Sweden.

9:30 AM; Learning & Models (Lecture Session)
Chair: Jenq-Neng Hwang, Department of Electrical Engineering, University of Washington, Seattle, WA, USA.
1. "Towards Faster Stochastic Gradient Search", Christian Darken, John Moody, Yale University, New Haven, CT, USA.
2. "Inserting Rules into Recurrent Neural Networks", C.L. Giles, NEC Research Inst., Princeton, C.W. Omlin, Rensselaer Polytechnic Institute, Troy, NY, USA.
3. "On the Complexity of Neural Networks with Sigmoidal Units", Kai-Yeung Siu, University of California, Irvine, CA, Vwani Roychowdhury, Purdue University, West Lafayette, IN, Thomas Kailath, Stanford University, Stanford, CA, USA.
4. "A Generalization Error Estimate for Nonlinear Systems", Jan Larsen, Technical University of Denmark, Lyngby, Denmark.

11:00 AM; Coffee break

11:30 AM; Speech 1 (Oral previews of the afternoon poster session)
Chair: Paul Dalsgaard, Institute of Electronic Systems, Aalborg University, Denmark.
1. "Interactive Query Learning for Isolated Speech Recognition", Jenq-Neng Hwang, Hang Li, University of Washington, Seattle, WA, USA.
2. "Adaptive Template Method for Speech Recognition", Yadong Liu, Yee-Chun Lee, Hsing-Hen Chen, Guo-Zheng Sun, University of Maryland, College Park, MD, USA.
3. "Fuzzy Partition Models and Their Effect in Continuous Speech Recognition", Yoshinaga Kato, Masahide Sugiyama, ATR Interpreting Telephony Research Laboratories, Kyoto, Japan.
4. "Empirical Risk Optimization: Neural Networks and Dynamic Programming", Xavier Driancourt, Patrick Gallinari, Universite' de Paris Sud, Orsay, France.
5. "Text-Independent Talker Identification System Combining Connectionist and Conventional Models", Thierry Artieres, Younes Bennani, Patrick Gallinari, Universite' de Paris Sud, Orsay, France.
6. "A Two Layer Kohonen Neural Network using a Cochlear Model as a Front-End Processor for a Speech Recognition System", S. Lennon, E. Ambikairajah, Regional Technical College, Athlone, Ireland.
7. "Self-Structuring Hidden Control Neural Models", Helge B.D. Sorensen, Uwe Hartmann, Institute of Electronic Systems, Aalborg University, Aalborg, Denmark.
8. "Connectionist-Based Acoustic Word Models", Chuck Wooters, Nelson Morgan, International Computer Science Institute, Berkeley, CA, USA.
9. "Maximum Mutual Information Training of a Neural Predictive-Based HMM Speech Recognition System", K. Hassanein, L. Deng, M. Elmasry, University of Waterloo, Waterloo, Ontario, Canada.
10. "Training Continuous Density Hidden Markov Models in Association with Self-Organizing Maps and LVQ", Mikko Kurimo, Kari Torkkola, Helsinki University of Technology, Finland.
11. "Unsupervised Sequence Classification", Jorg Kindermann, GMD-FIT.KI, Schloss Birlinghoven, Germany, Christoph Windheuser, Carnegie Mellon University, Pittsburgh, PA, USA.
12. "A Mathematical Model for Speech Processing", Anna Esposito, Universita di Salerno, Salvatore Rampone, I.R.S.I.P-C.N.R, Napoli, Cesare Stanzione, International Institute for Advanced Scientific Studies, Salerno, Roberto Tagliaferri, Universita di Salerno, Italy.
13. "A New Voice and Pitch Estimator based on the Neocognitron", J.R.E. Moxham, P.A. Jones, H. McDermott, G.M. Clark, Australian Bionic Ear and Hearing Research Institute, East Melbourne 3002, Victoria, Australia.

12:30 PM; Lunch

1:30 PM; Speech 1 (Poster Session)

2:45 PM; Break

3:15 PM; System Implementations (Lecture session)
Chair: Yu Hen Hu, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI, USA.
1. "CCD's for Pattern Recognition", Alice M. Chiang, Lincoln Laboratory, Massachusetts Institute of Technology, MA, USA.
2. "An Electronic Parallel Neural CAM for Decoding", Joshua Alspector, Bellcore, Morristown, NJ, Anthony Jayakumar Bon Ngo, Cornell, Ithaca, NY, USA.
3. "Netmap - Software Tool for Mapping Neural Networks onto Parallel Computers", K. Wojtek Przytula, Hughes Research Labs, Malibu, CA, Viktor Prasanna, University of California, Wei-Ming Lin, Mississippi State University, USA.
4. "A Fast Simulator for Neural Networks on DSPs or FPGAs", M. Ade, R. Lauwereins, J. Peperstraete, ESAT-Elektrotechniek, Heverlee, Belgium.

Tuesday, September 1, 1992

8:30 AM; Keynote Address: "Capacity Control in Classifiers for Pattern Recognition", Dr. Sara Solla, AT&T Bell Laboratories, Murray Hill, NJ, USA.

9:30 AM; Speech 2 (Lecture Session)
Chair: S. Katagiri, ATR Auditory and Visual Perception Research Laboratories, Kyoto, Japan.
1. "Speech Representations for Recognition by Neural Networks", B.H. Juang, AT&T Bell Laboratories, Murray Hill, NJ, USA.
2. "Classification with a Codebook-Excited Neural Network", Lizhong Wu, Frank Fallside, Cambridge University, UK.
3. "Minimal Classification Error Optimization for a Speaker Mapping Neural Network", Masahide Sugiyama, Kentaro Kurinami, ATR Interpreting Telephony Research Laboratories, Kyoto, Japan.
4. "On the Identification of Phonemes Using Acoustic-Phonetic Features Derived by a Self-Organising Neural Network", Paul Dalsgaard, Ove Andersen, Rene Jorgensen, Institute of Electronic Systems, Aalborg University, Denmark.

11:00 AM; Coffee break

11:30 AM; Learning, System Identification and Spectral Estimation (Oral previews of afternoon poster session)
Chair: Lars Kai Hansen, connect, Electronics Institute, Technical University of Denmark, Lyngby, Denmark.
1. "Prediction of Chaotic Time Series Using Recurrent Neural Networks", Jyh-Ming Kuo, Jose C. Principe, Bert deVries, University of Florida, FL, USA.
2. "Nonlinear System Identification using Multilayer Perceptrons with Locally Recurrent Synaptic Structure", Andrew Back, Ah Chung Tsoi, University of Queensland, Australia.
3. "Chaotic Signal Emulation using a Recurrent Time Delay Neural Network", Michael R. Davenport, Department of Physics, U.B.C., Shawn P. Day, Department of Electrical Engineering, U.B.C., Vancouver, Canada.
4. "Prediction with Recurrent Networks", Niels Holger Wulff, Niels Bohr Institute, John A. Hertz, Nordita, Copenhagen, Denmark.
5. "Learning of Sinusoidal Frequencies by Nonlinear Constrained Hebbian Algorithms", Juha Karhunen, Jyrki Joutsensalo, Helsinki University of Technology, Finland.
6. "A Neural Feed-Forward Network with a Polynomial Nonlinearity", Nils Hoffmann, Technical University of Denmark, Lyngby, Denmark.
7. "Application of Frequency-Domain Neural Networks to the Active Control of Harmonic Vibrations in Nonlinear Structural Systems", T.J. Sutton, S.J. Elliott, University of Southampton, England.
8. "Generalization in Cascade-Correlation Networks", Steen Sjoegaard, Aarhus University, Denmark.
9. "Noise Density Estimation Using Neural Networks", M.T. Musavi, D.M. Hummels, A.J. Laffely, S.P. Kennedy, University of Maine, Maine, USA.
10. "An Efficient Model for Systems with Complex Responses", Volker Tresp, Ira Leuthausser, Ralph Neuneier, Martin Schlang, Siemens AG, Munchen, Klaus Abraham-Fuchs, Wolfgang Harer, Siemens, Erlangen, Germany.
11. "Generalized Feedforward Filters with Complex Poles", T. Oliveira e Silva, P. Guedes de Oliveira, Universidade de Aveiro, Aveiro, Portugal, J.C. Principe, University of Florida, Gainesville, FL, B. De Vries, David Sarnoff Research Center, Princeton, NJ, USA.
12. "A Simple Genetic Algorithm Applied to Discontinuous Regularization", John Bach Jensen, Mads Nielsen, DIKU, University of Copenhagen, Denmark.

12:30 PM; Lunch

1:30 PM; Learning, System Identification and Spectral Estimation (Poster Session)

2:45 PM; Break

3:15 PM; Image Processing and Analysis (Lecture session)
Chair: K. Wojtek Przytula, Hughes Research Labs, Malibu, CA, USA.
1. "Decision-Based Hierarchical Perceptron (HiPer) Networks with Signal/Image Classification Applications", S.Y. Kung, J.S. Taur, Princeton University, NJ, USA.
2. "Lateral Inhibition Neural Networks for Classification of Simulated Radar Imagery", Charles M. Bachmann, Scott A. Musman, Abraham Schultz, Naval Research Laboratory, Washington, USA.
3. "Globally Trained Neural Network Architecture for Image Compression", L. Schweizer, G. Parladori, Alcatel Italia, Milano, G.L. Sicuranza, Universita' di Trieste, Italy.
4. "Robust Identification of Human-Faces Using Mosaic Pattern and BPN", Makoto Kosugi, NTT Human Interface Laboratories, Take Yokosukashi Kanagawaken, Japan.

5:00 PM; Departure to Kronborg Castle
7:00 PM; Banquet at Kronborg Castle

Wednesday, September 2, 1992

8:30 AM; Keynote Address: "Application Perspectives of the DARPA Artificial Neural Network Technology Program", Dr. Barbara Yoon, DARPA/MTO, Arlington, VA, USA.

9:30 AM; Nonlinear Filtering by Neural Networks (Lecture Session)
Chair: Gary M. Kuhn, CCRP-IDA, Princeton, NJ, USA.
1. "Neural Networks and Nonparametric Regression", Vladimir Cherkassky, University of Minnesota, Minneapolis, USA.
2. "A Partial Analysis of Stochastic Convergence for a Generalized Two-Layer Perceptron using Backpropagation", Jeffrey L. Vaughn, Neil J. Bershad, University of California, Irvine, John J. Shynk, University of California, Santa Barbara, CA, USA.
3. "A Recurrent Neural Network for Nonlinear Time Series Prediction - A Comparative Study", S.S. Rao, S. Sethuraman, V. Ramamurti, Villanova University, Villanova, PA, USA.
4. "Dispersive Networks for Nonlinear Adaptive Filtering", Shawn P. Day, Michael R. Davenport, University of British Columbia, Vancouver, Canada.

11:00 AM; Coffee break

11:30 AM; Image Processing and Pattern Recognition (Oral previews of afternoon poster sessions)
Chair: Benny Lautrup, connect, Niels Bohr Institute, Copenhagen, Denmark.
1. "Unsupervised Multi-Level Segmentation of Multispectral Images", R.A. Fernandes, Institute for Space and Terrestrial Science, Richmond Hill, M.E. Jernigan, University of Waterloo, Waterloo, Ontario, Canada.
2. "Autoassociative Neural Networks for Image Compression: A Massively Parallel Implementation", Andrea Basso, Ecole Polytechnique Federale de Lausanne, Switzerland.
3. "Compression of Subband-Filtered Images via Neural Networks", S. Carrato, S. Marsi, University of Trieste, Trieste, Italy.
4. "An Adaptive Neural Network Model for Distinguishing Line- and Edge Detection from Texture Segregation", M.M. Van Hulle, T. Tollenaere, Katholieke Universiteit Leuven, Leuven, Belgium.
5. "Adaptive Segmentation of Textured Images using Linear Prediction and Neural Networks", Stefanos Kollias, Levon Sukissian, National Technical University of Athens, Athens, Greece.
6. "Neural Networks for Segmentation and Clustering of Biomedical Signals", Martin F. Schlang, Volker Tresp, Siemens, Munich, Klaus Abraham-Fuchs, Wolfgang Harer, Siemens, Erlangen, Germany.
7. "Some New Results in Nonlinear Predictive Image Coding Using Neural Networks", Haibo Li, Linkoping University, Linkoping, Sweden.
8. "A Neural Network Approach to Multi-Sensor Point-of-Interest Detection", Ajay N. Jain, Alliant Techsystems Inc., Hopkins, MN, USA.
9. "Supervised Learning on Large Redundant Training Sets", Martin F. Moller, Aarhus University, Aarhus, Denmark.
10. "Neural Network Detection of Small Moving Radar Targets in an Ocean Environment", Jane Cunningham, Simon Haykin, McMaster University, Hamilton, Ontario, Canada.
11. "Discrete Neural Networks and Fingerprint Identification", Steen Sjoegaard, Aarhus University, Aarhus, Denmark.
12. "Image Recognition using a Neural Network", Keng-Chung Ho, Bin-Chang Chieu, National Taiwan Institute of Technology, Taipei, Taiwan, Republic of China.
13. "Adaptive Training of Feedback Neural Networks for Non-Linear Filtering and Control: I - A General Approach." O. Nerrand, P. Roussel-Ragot, L. Personnaz, G. Dreyfus, Ecole Superieure de Physique et de Chimie Industrielles, Paris, S. Marcos, Ecole Superieure d'Electricite, Gif Sur Yvette, France.

12:30 PM; Lunch

1:30 PM; Image Processing and Pattern Recognition (Poster Session)

2:45 PM; Break

3:15 PM; Application Driven Neural Models (Lecture session)
Chair: Sathyanarayan S. Rao, Department of Electrical Engineering, Villanova University, Villanova, PA, USA.
1. "Artificial Neural Network for ECG Arrhythmia Monitoring", Y.H. Hu, W.J. Tompkins, Q. Xue, University of Wisconsin-Madison, WI, USA.
2. "Constructing Neural Networks for Contact Tracking", Christopher M. DeAngelis, Naval Underwater Warfare Center, Rhode Island, Robert W. Green, University of Massachusetts Dartmouth, North Dartmouth, MA, USA.
3. "Adaptive Decision-Feedback Equalizer Using Forward-Only Counterpropagation Networks for Rayleigh Fading Channels", Ryuji Kaneda, Takeshi Manabe, Satoshi Fujii, ATR Optical and Radio Communications Research Laboratories, Kyoto, Japan.
4. "Ensemble Methods for Handwritten Digit Recognition", Lars Kai Hansen, Technical University of Denmark, Christian Liisberg, Risoe National Laboratory, Denmark, Peter Salamon, San Diego State University, San Diego, CA, USA.

Workshop Committee

General Chairs:
S.Y. Kung, Dept. of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA, email: kung at princeton.edu
F. Fallside, Engineering Department, Cambridge University, Cambridge CB2 1PZ, UK, email: fallside at eng.cam.ac.uk

Program Chair:
John Aa. Sorensen, Electronics Institute, Build 349, Technical University of Denmark, DK-2800 Lyngby, Denmark, email: jaas at dthei.ei.dth.dk

Proceedings Chair:
Candace Kamm, Box 1910, Bellcore, 445 South Street, Morristown, NJ 07960, USA, email: cak at thumper.bellcore.com

Publicity Chair:
Gary M. Kuhn, CCRP - IDA, Thanet Road, Princeton, NJ 08540, USA, email: gmk%idacrd.uucp at princeton.edu

Program Committee:
Ronald de Beer, John Bridle, Erik Bruun, Paul Dalsgaard, Lee Giles, Lars Kai Hansen, Steffen Duus Hansen, John Hertz, Jenq-Neng Hwang, Yu Hen Hu, B.H. Juang, S. Katagiri, T. Kohonen, Gary M. Kuhn, Benny Lautrup, Peter Koefoed Moeller, John E. Moody, Carsten Peterson, Sathyanarayan S. Rao, Peter Salamon, Christian J. Wellekens, Barbara Yoon

----------------------------------------------------------------------------

REGISTRATION FORM: 1992 IEEE Workshop on Neural Networks for Signal Processing. August 31 - September 2, 1992.

Registration fee including single room and meals at Hotel Marienlyst:
Before July 15, 1992, Danish Kr. 5200. After July 15, 1992, Danish Kr. 5350.
Companion fee at Hotel Marienlyst: Danish Kr. 1160.
Registration fee without hotel room:
Before July 15, 1992, Danish Kr. 2800. After July 15, 1992, Danish Kr. 2950.

The registration fee of Danish Kr. 5200 (5350) covers:
. Attendance at all workshop sessions.
. Workshop Proceedings.
. Pre-Workshop reception on Sunday evening, August 30, 1992.
. Hotel single room from August 30 to September 2, 1992 (3 nights).
. 3 breakfasts, 3 lunches, 1 dinner, 1 banquet.
. Coffee breaks and refreshments.
. A Companion fee of additional Danish Kr. 1160 covers double room at Hotel Marienlyst and the Pre-Workshop reception, breakfasts and the banquet for 2 persons.

The registration fee without hotel room: Danish Kr. 2800 (2950) covers:
. Attendance at all workshop sessions.
. Workshop Proceedings.
. 3 lunches, 1 dinner, 1 banquet.
. Coffee breaks and refreshments.

Further information on registration: Ms. Anette Moeller-Uhl, The Niels Bohr Institute, tel: +45 3142 1616 ext.
388, fax: +45 3142 1016, email: uhl at connect.nbi.dk

Please complete this form (type or print clearly) and mail with payment (by check, do not include cash) to: NNSP-92, CONNECT, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen, Denmark.

Name ------------------------------------------------------------------
     Last               First              Middle
Firm or University -----------------------------------------------------
Mailing Address --------------------------------------------------------
------------------------------------------------------------------------
Country            Phone             Fax

From nwanahs at cs.keele.ac.uk Fri May 15 16:14:13 1992
From: nwanahs at cs.keele.ac.uk (Hyacinth Nwana)
Date: Fri, 15 May 92 16:14:13 BST
Subject: AISB Call for Tutorial/Workshop Proposals
Message-ID: <638.9205151514@des.cs.keele.ac.uk>

Call for Tutorial & Workshop Proposals: AISB-93
9th Biennial Conference on Artificial Intelligence
University of Birmingham, England
29th March -- 2nd April 1993

Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB)

The AISB-93 Programme Committee invites proposals for the Tutorial & Workshop Programme of the 9th Biennial Conference on Artificial Intelligence (AISB-93) to be held at the University of Birmingham, England, during 29th March - 2nd April 1993. The first day and a half of the Conference is allocated to workshops and tutorials. Proposals for full-day or half-day tutorials/workshops will be considered. They may be offered both on standard topics and on new and more advanced aspects of Artificial Intelligence or Simulation of Behaviour.

The Technical Programme of AISB-93 (Programme Chairman: Aaron Sloman) will include a special theme on:

* Prospects for AI as the general science of intelligence

Tutorials/Workshops related to this theme would be particularly welcome. Proposals from an individual or pair of presenters will be considered. Anyone interested in presenting a tutorial should submit a proposal to the AISB-93 Tutorial/Workshop Organiser, Dr Hyacinth Nwana, at the address below.

Submission:
----------
A tutorial proposal should contain the following information:
1. Tutorial/Workshop Title
2. A brief description of the tutorial/workshop, suitable for inclusion in the conference brochure.
3. A detailed outline of the tutorial/workshop. This should include the necessary background and the potential target audience for the tutorial/workshop.
4. A brief resume of the presenter(s). This should include: background in the tutorial/workshop area, references to published work in the topic area (ideally, a published tutorial-level article on the subject), and teaching experience, including previous conference tutorials or short-courses presented.
5. Administrative information. This should include: name, mailing address, phone number, Fax, and email address if available. In the case of multiple presenters, information for each presenter should be provided, but one presenter should be identified as the principal contact.

Dates:
------
Proposals must be received by September 17th, 1992. Decisions about topics and speakers will be made by November 5th, 1992. Speakers should be prepared to submit completed course materials by February 4th, 1993.

Proposals should be sent to: Dr. Hyacinth S.
Nwana
Department of Computer Science
University of Keele
Keele, Staffordshire ST5 5BG
UK

Email: JANET: nwanahs at uk.ac.keele.cs
       BITNET: nwanahs%cs.kl.ac.uk at ukacrl
       UUCP: ...!ukc!kl-cs!nwanahs
       OTHER: nwanahs at cs.keele.ac.uk
Tel: (+44) (0) 782 583413
Fax: (+44) (0) 782 713082

All other correspondence and queries regarding the conference should be sent to the Local Organiser, Donald Peterson.

Dr. Donald Peterson
School of Computer Science
The University of Birmingham
Edgbaston
Birmingham B15 2TT
UK

Email: aisb93-prog at cs.bham.ac.uk (for communications relating to submission of papers)
       aisb93-delegates at cs.bham.ac.uk (for info. on accommodation, meals, programme, etc.)
Tel: (+44) (0) 21 414 3711
Fax: (+44) (0) 21 414 4281

From codelab at psych.purdue.edu Fri May 15 18:14:24 1992
From: codelab at psych.purdue.edu (Coding Lab Wasserman)
Date: Fri, 15 May 92 17:14:24 EST
Subject: preprint
Message-ID: <9205152214.AA25275@psych.purdue.edu>

A preprint has been deposited in the connectionists archive. Its abstract and retrieval information are given here:
------------------------------------------------------------------
ISOMORPHISM, TASK DEPENDENCE, AND THE MULTIPLE MEANING THEORY OF NEURAL CODING

Gerald S. Wasserman
Purdue University
e-mail: codelab at psych.purdue.edu

The neural coding problem is defined and several possible answers to it are reviewed. A widely accepted answer descends from early suggestions that neural activity, in general, is isomorphic with sensation and that the biological signal resident in the axon of a neuron, in particular, is given by its frequency of firing. More recent data are reviewed which indicate that the pattern of neural responses may also be informative. Such data led to the formulation of the multiple meaning theory, which suggests that neural pattern may encode different information features in single responses. After a period in which attention turned elsewhere, the multiple meaning theory has quite recently been revived and has stimulated novel and careful experimental investigations. A corollary theory, the task dependence hypothesis, suggests that these information-bearing multiple response features are accessed differentially in different behavioral tasks. These theories place stringent temporal requirements on the generation and analysis of neural responses. Recent data are examined indicating that both requirements may indeed be satisfied by the nervous system. Finally, several methods of experimentally testing such coding theories are described; they involve manipulating the biological signals of neurons and observing the effect of these manipulations on behavior.

KEY WORDS: biological signals, code, neuron, receptor, retina, cortex, brain, neural coding, sensory coding, frequency coding, impulse coding, bio-electric potentials, action potentials, synaptic potentials, receptor potentials, anesthesia
------------------------------------------------------------------
ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: your e-mail address
ftp> cd pub/neuroprose
ftp> binary
ftp> get wasserman.mult_mean.ps.Z
ftp> quit
uncompress wasserman.mult_mean.ps.Z
lpr -s wasserman.mult_mean.ps

If the printer for this job resides on a remote machine, this large (i.e., graphics-intensive) file may require that an operator issue the print command directly from the remote console.
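[Editor's note: the anonymous-FTP recipe above recurs in most of the announcements in this digest. For convenience, here is a minimal scripted equivalent; it is not part of the original posting, and it assumes a Python environment with the standard ftplib module. The host, directory, and file name are taken from the posting above; the final uncompress step is left to the shell, as the posting describes.]

    # Sketch only: scripted version of the anonymous-FTP recipe given in the
    # posting above (host, directory and file name taken from that posting).
    from ftplib import FTP

    HOST = "archive.cis.ohio-state.edu"   # or 128.146.8.52
    FILENAME = "wasserman.mult_mean.ps.Z"

    ftp = FTP(HOST)
    ftp.login("anonymous", "your-email@your-site")  # password: your e-mail address
    ftp.cwd("pub/neuroprose")
    with open(FILENAME, "wb") as f:
        # retrbinary performs the transfer in binary mode, matching the
        # "ftp> binary" step of the manual transcript.
        ftp.retrbinary("RETR " + FILENAME, f.write)
    ftp.quit()
    # Then, at the shell:
    #   uncompress wasserman.mult_mean.ps.Z
    #   lpr -s wasserman.mult_mean.ps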
From stucki at cis.ohio-state.edu Sun May 17 00:25:47 1992
From: stucki at cis.ohio-state.edu (David J Stucki)
Date: Sun, 17 May 92 00:25:47 -0400
Subject: preprint available from the neuroprose archive
Message-ID: <9205170425.AA17489@retina.cis.ohio-state.edu>

The following paper has been placed in the Neuroprose archive.
*******************************************************************
Fractal (Reconstructive Analogue) Memory

David J. Stucki and Jordan B. Pollack
Laboratory for Artificial Intelligence Research
Department of Computer and Information Science
The Ohio State University
Columbus, OH 43210

Abstract

This paper proposes a new approach to mental imagery that has the potential for resolving an old debate. We show that the methods by which fractals emerge from dynamical systems provide a natural computational framework for the relationship between the "deep" representations of long-term visual memory and the "surface" representations of the visual array, a distinction which was proposed by Kosslyn (1980). The concept of an iterated function system (IFS) as a highly compressed representation for a complex topological set of points in a metric space (Barnsley, 1988) is embedded in a connectionist model for mental imagery tasks. Two advantages of this approach over previous models are the capability for topological transformations of the images, and the continuity of the deep representations with respect to the surface representations.

To be published in the Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, Bloomington, Indiana, July/August, 1992.
********************************************************************
Filename: stucki.frame.ps.Z
----------------------------------------------------------------
FTP INSTRUCTIONS
unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get stucki.frame.ps.Z
ftp> bye
unix% zcat stucki.frame.ps.Z | lpr
(or whatever *you* do to print a compressed PostScript file)
----------------------------------------------------------------
dave...

David J Stucki                     c/o Dept. Computer and Information Science
537 Harley Dr. #6                  2036 Neil Ave.
Columbus, OH 43202                 Columbus, OH 43210
stucki at cis.ohio-state.edu

There is no place in science for ideas, there is no place in epistemology for knowledge, and there is no place in semantics for meanings. -- W.V. Quine

From BRUNAK at nbivax.nbi.dk Sun May 17 07:31:00 1992
From: BRUNAK at nbivax.nbi.dk (BRUNAK@nbivax.nbi.dk)
Date: Sun, 17 May 1992 13:31 +0200
Subject: NetGene
Message-ID: <01GK4BMAE640CIR5LI@nbivax.nbi.dk>

******** Announcement of the NetGene Mail-server: *********

DESCRIPTION:

The NetGene mail server is a service producing neural network predictions of splice sites in vertebrate genes as described in: Brunak, S., Engelbrecht, J., and Knudsen, S. (1991) Prediction of Human mRNA Donor and Acceptor Sites from the DNA Sequence. Journal of Molecular Biology, 220, 49-65.

ABSTRACT OF JMB ARTICLE:

Artificial neural networks have been applied to the prediction of splice site location in human pre-mRNA. A joint prediction scheme, where prediction of transition regions between introns and exons regulates a cutoff level for splice site assignment, was able to predict splice site locations with confidence levels far better than previously reported in the literature.
The problem of predicting donor and acceptor sites in human genes is hampered by the presence of numerous false positives - in the paper the distribution of these false splice sites is examined and linked to a possible scenario for the splicing mechanism in vivo. When the presented method detects 95% of the true donor and acceptor sites, it makes less than 0.1% false donor site assignments and less than 0.4% false acceptor site assignments. For the large data set used in this study this means that on average there are one and a half false donor sites per true donor site and six false acceptor sites per true acceptor site. With the joint assignment method, more than a fifth of the true donor sites and around one fourth of the true acceptor sites could be detected without any accompanying false positive predictions. Highly confident splice sites could not be isolated with a widely used weight matrix method or by separate splice site networks. A complementary relation between the confidence levels of the coding/non-coding and the separate splice site networks was observed, with many weak splice sites having sharp transitions in the coding/non-coding signal and many stronger splice sites having more ill-defined transitions between coding and non-coding.

INSTRUCTIONS:

In order to use the NetGene mail-server:

1) Prepare a file with the sequence in a format similar to the fasta format: the first line must start with the symbol '>', and the next word on that line is used as the sequence identifier. The following lines should contain the actual sequence, consisting of the symbols A, T, U, G, C and N. U is converted to T, and letters not mentioned are converted to N. All letters are converted to upper case. Numbers, blanks and other non-letter symbols are skipped. The lines should not be longer than 80 characters. The minimum length analyzed is 451 nucleotides, and the maximum is 100000 nucleotides (your mail system may have a lower limit for the maximum size of a message). Due to the non-local nature of the algorithm, sites closer than 225 nucleotides to the ends of the sequence will not be assigned.

2) Mail the file to netgene at virus.fki.dth.dk. The response time will depend on system load. If nothing else is running on the machine the speed is about 1000 nucleotides/min. It may take several hours before you get the answer, so please do not resubmit a job if you get no answer within a short while.

REFERENCING AND FURTHER INFORMATION

Publication of output from NetGene must be referenced as follows: Brunak, S., Engelbrecht, J., and Knudsen, S. (1991) Prediction of Human mRNA Donor and Acceptor Sites from the DNA Sequence. Journal of Molecular Biology, 220, 49-65.

CONFIDENTIALITY

Your submitted sequence will be deleted automatically immediately after processing by NetGene.

PROBLEMS AND SUGGESTIONS:

Should be addressed to:
Jacob Engelbrecht e-mail: engel at virus.fki.dth.dk
Department of Physical Chemistry
The Technical University of Denmark
Building 206
DK-2800 Lyngby
Denmark
phone: +45 4288 2222 ext. 2478 (operator)
phone: +45 4593 1222 ext. 2478 (tone)
fax: +45 4288 0977

EXAMPLE:

A file test.seq is prepared with an editor with the following contents:

>HUMOPS
GGATCCTGAGTACCTCTCCTCCCTGACCTCAGGCTTCCTCCTAGTGTCACCTTGGCCCCTCTTAGAAGC
CAATTAGGCCCTCAGTTTCTGCAGCGGGGATTAATATGATTATGAACACCCCCAATCTCCCAGATGCTG
.
.  Here come more lines with sequence.
.
.
This is sent to the NetGene mail-server, on a Unix system, like this:

mail netgene at virus.fki.dth.dk < test.seq

In return an answer similar to this is produced:

From plaut+ at CMU.EDU Mon May 18 09:46:23 1992
From: plaut+ at CMU.EDU (David Plaut)
Date: Mon, 18 May 92 09:46:23 -0400
Subject: preprint available in neuroprose archive
Message-ID: <2034.706196783@K.GP.CS.CMU.EDU>

******************* PLEASE DO NOT FORWARD TO OTHER BBOARDS *******************

Relearning after Damage in Connectionist Networks: Implications for Patient Rehabilitation

David C. Plaut
Department of Psychology
Carnegie Mellon University

To appear in the Proceedings of the 14th Annual Conference of the Cognitive Science Society, Bloomington, IN, August, 1992.

Abstract

Connectionist modeling is applied to issues in cognitive rehabilitation, concerning the degree and speed of recovery through retraining, the extent of generalization to untreated items, and how treated items are selected to maximize this generalization. A network previously used to model impairments in mapping orthography to semantics is retrained after damage. The degree of relearning and generalization varies considerably for different lesion locations, and has interesting implications for understanding the nature and variability of recovery in patients. In a second simulation, retraining on words whose semantics are atypical of their category yields more generalization than retraining on more prototypical words, suggesting a surprising strategy for selecting items in patient therapy to maximize recovery.

To retrieve:
unix> ftp 128.146.8.52 # archive.cis.ohio-state.edu
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get plaut.relearning.ps.Z
ftp> quit
unix> zcat plaut.relearning.ps.Z | lpr

Thanks again to Jordan Pollack for maintaining the archive....

David Plaut                    plaut+ at cmu.edu
Department of Psychology       412/268-5145
Carnegie Mellon University
Pittsburgh, PA 15213-3890

From rupa at dendrite.cs.colorado.edu Mon May 18 16:08:22 1992
From: rupa at dendrite.cs.colorado.edu (Sreerupa Das)
Date: Mon, 18 May 1992 14:08:22 -0600
Subject: Preprint available
Message-ID: <199205182008.AA22031@dendrite.cs.Colorado.EDU>

The following paper has been placed in Neuroprose archive.
---------------------------------------------------------------------------
Learning Context-free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory
-----------------------------------------------------------------

Sreerupa Das, University of Colorado, Boulder, CO 80309-430, rupa at cs.colorado.edu
C. Lee Giles, NEC Research Institute, Princeton, NJ 08540, giles at research.nec.nj.com
Guo-Zheng Sun, University of Maryland, College Park, MD 20742, sun at sunext.umiacs.umd.edu

ABSTRACT

This work describes an approach for inferring Deterministic Context-free (DCF) Grammars in a Connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying pushdown automaton from examples of grammatical and non-grammatical strings. Not only does the network learn the state transitions in the automaton, it also learns the actions required to control the stack.
In order to use continuous optimization methods, we develop an analog stack which reverts to a discrete stack by quantization of all activations, after the network has learned the transition rules and stack actions. We further show an enhancement of the network's learning capabilities by providing hints. In addition, an initial comparative study of simulations with first, second and third order recurrent networks has shown that the increased degrees of freedom in higher order networks improve generalization but not necessarily learning speed.

(To be published in the Proceedings of The Fourteenth Annual Conference of The Cognitive Science Society, July 29 -- August 1, 1992, Indiana University.)

Questions and comments will be appreciated.
=====================================================================
File name: das.cfg_induction.ps.Z
=====================================================================
To retrieve the papers by anonymous ftp:
unix> ftp archive.cis.ohio-state.edu # (128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get das.cfg_induction.ps.Z
ftp> quit
unix> uncompress das.cfg_induction.ps.Z
unix> lpr das.cfg_induction.ps
=====================================================================
Sreerupa Das
Department of Computer Science
University of Colorado at Boulder
Boulder, CO 80309-430
email: rupa at cs.colorado.edu
----------------------------------------------------------------------

From shultz at hebb.psych.mcgill.ca Tue May 19 09:35:53 1992
From: shultz at hebb.psych.mcgill.ca (Tom Shultz)
Date: Tue, 19 May 92 09:35:53 EDT
Subject: No subject
Message-ID: <9205191335.AA22593@hebb.psych.mcgill.ca>

Subject: Abstract
Date: 19 May '92

Please do not forward this announcement to other boards. Thank you.
-------------------------------------------------------------
The following paper has been placed in the Neuroprose archive at Ohio State University:

An Investigation of Balance Scale Success

William C. Schmidt and Thomas R. Shultz
Department of Psychology
McGill University

Abstract

The success of a connectionist model of cognitive development on the balance scale task is due to manipulations which impede convergence of the back-propagation learning algorithm. The model was trained at different levels of a biased training environment with exposure to a varied number of training instances. The effects of weight updating method and modifying the network topology were also examined. In all cases in which these manipulations caused a decrease in convergence rate, there was an increase in the proportion of psychologically realistic runs. We conclude that incremental connectionist learning is not sufficient for producing psychologically successful connectionist balance scale models, but must be accompanied by a slowing of convergence.

This paper will be presented at the Fourteenth Annual Conference of the Cognitive Science Society, Indiana University, 1992. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to schmidt at lima.psych.mcgill.ca Please do not reply directly to this message.
FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get schmidt.balance.ps.Z ftp> quit unix> uncompress schmidt.balance.ps.Z Tom Shultz Department of Psychology McGill University 1205 Penfield Avenue Montreal, Quebec H3A 1B1 Canada shultz at psych.mcgill.ca From jbower at cns.caltech.edu Tue May 19 11:59:31 1992 From: jbower at cns.caltech.edu (Jim Bower) Date: Tue, 19 May 92 08:59:31 PDT Subject: summer course at MBL Message-ID: <9205191559.AA00938@cns.caltech.edu> Methods in Computational Neuroscience Marine Biological Laboratory Woods Hole, MA. August 2 - August 29, 1992 For advanced graduate students and postdoctoral fellows in neurobiology, physics, electrical engineering, computer science, and psychology with an interest in Computational Neuroscience. A background in programming (preferably in C and UNIX) is highly desirable. Limited to 20 students. This four-week course presents the basic techniques necessary to study single cells and neural networks from a computational point of view, emphasizing their possible function in information processing. The aim is to enable participants to simulate the functional properties of their particular system of study and to appreciate the advantages and pitfalls of this approach to understanding the nervous system. The first section will focus on simulating the electrical properties of single neurons (compartmental models, active currents). The second part of the course will deal with the numerical and mathematical (e.g. theory of dynamical systems, information theory) techniques necessary for modeling single cells and neuronal networks. Examples of such simulations will be drawn from the invertebrate and vertebrate literature (central pattern generators, visual system of the fly, mammalian olfactory and visual cortex). In the final section, algorithms and connectionist neural networks relevant to visual perception, development in the mammalian cortex, as well as plasticity and learning algorithms will be analyzed and discussed from a neurobiological point of view. The course includes daily lectures, tutorials, and laboratories. The laboratory section is organized around GENESIS, the Neuronal Network simulator developed at the California Institute of Technology, running on 20 state-of-the-art, single-user, UNIX-based graphic color workstations. Other simulation programs, such as NEURON, will also be available to students. Students are expected to work on a simulation project of their own choosing. A small subset of students can remain for up to an additional week (until September 5) at the MBL to finish their computer projects. TUITION: $1,000 (includes room and board); partial financial aid is available to qualified applicants. APPLICATION DEADLINE: May 27, 1992 Directors: James M. Bower and Christof Koch, Computation and Neural System Program, California Institute of Technology. 
Faculty: Paul Adams, SUNY, Stony Brook; Richard Andersen, MIT; Joseph Atick, Rockefeller University; William Bialek, NEC Research Institute; Avis Cohen, University of Maryland; Rodney Douglas, MRC, U.K.; Nancy Kopell, Boston University; Rodolfo Llinas, New York University Medical Center; Eve Marder, Brandeis University; Michael Mascagni, Supercomputing Research Center; Kenneth Miller, Caltech; John Rinzel, NIH; Sylvie Ryckebusch, Caltech; Idan Segev, Hebrew University, Israel; Terrence Sejnowski, UCSD/Salk Institute; David Van Essen, Caltech; Matthew Wilson, University of Arizona

Teaching Assistants: David Beeman, University of Colorado; David Berkovitz, Yale University; Ojvind Bernander, Caltech; Maurice Lee, Caltech

Computer Manager: John Uhley, Caltech

-------------------------------------------------------------

APPLICATION
"METHODS IN COMPUTATIONAL NEUROSCIENCE"
August 2 - August 29, 1992

Name:
Social Security Number:
Citizenship:

Institutional mailing address, e-mail address, telephone and fax numbers:

Best mailing address, e-mail address, telephone and fax numbers, if different from above:

Professional status:
  Graduate
  Postdoctoral
  Faculty
  Other

How did you learn about this course?
  Advertisement (give name)
  Flyer
  Individual
  Email

State your reasons for wanting to take this course:

Outline your background, if any, in biological science, including courses taken.

What experience, if any, have you had with experimental neurobiology?

Outline your background, if any, in applied mathematics (e.g. differential equations, linear algebra, Fourier transforms, dynamical systems, probability and statistics), including relevant courses in math, physics, engineering, etc.

Which computer languages (e.g. C, PASCAL), machines (e.g. PDP, SUN) and operating systems (e.g. UNIX) have you used in the past? Indicate whether you are an Expert (E), Proficient (P) or a Novice (N).

What experience, if any, have you had in using neural simulation programs, including the GENESIS simulator?

Given your experience, what particular questions would you like to address as course problems? (For instance, modelling retinal amacrine cells, computing motion in MT, learning in hippocampus, etc.)

Education:
  Institution    Highest Degree and year

Professional Experience:

If possible please have two letters of recommendation sent to: jbower at smaug.cns.caltech.edu

Financial Aid: If you are requesting financial aid, please provide a short statement of your needs.

Applications are evaluated by an admissions committee, and individuals are notified of acceptance or non-acceptance within two weeks of the committee's decisions. A non-refundable $200 deposit is required of all accepted students by June 28, 1992.

Return applications to: jbower at smaug.cns.caltech.edu

APPLICATION DEADLINE: May 27, 1992

- MBL IS AN EQUAL OPPORTUNITY/AFFIRMATIVE ACTION INSTITUTION -

From tesauro at watson.ibm.com Tue May 19 11:35:57 1992
From: tesauro at watson.ibm.com (Gerald Tesauro)
Date: Tue, 19 May 92 11:35:57 EDT
Subject: Reminder-- NIPS workshops deadline is May 22
Message-ID:

This is to remind everyone that the deadline for submission of NIPS workshop proposals is this Friday, May 22. Submissions should be sent to "tesauro at watson.ibm.com" by e-mail, or by physical mail to the address given below.
CALL FOR WORKSHOPS

NIPS*92 Post-Conference Workshops
December 4 and 5, 1992
Vail, Colorado

Request for Proposals

Following the regular NIPS program, workshops on current topics in Neural Information Processing will be held on December 4 and 5, 1992, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited.

Past topics have included: Computational Neuroscience; Sensory Biophysics; Recurrent Nets; Self-Organization; Speech; Vision; Rules and Connectionist Models; Neural Network Dynamics; Computational Complexity Issues; Benchmarking Neural Network Applications; Architectural Issues; Fast Training Techniques; Active Learning and Control; Optimization; Bayesian Analysis; Genetic Algorithms; VLSI and Optical Implementations; Integration of Neural Networks with Conventional Software.

The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Specific open and/or controversial issues are encouraged and preferred as workshop topics. Individuals proposing to chair a workshop will have responsibilities including: arranging brief informal presentations by experts working on the topic, moderating or leading the discussion, and reporting its high points, findings and conclusions to the group during evening plenary sessions and in a short (2 page) written summary.

Submission Procedure: Interested parties should submit a short proposal for a workshop of interest postmarked by May 22, 1992. (Express mail is *not* necessary. Submissions by electronic mail will also be acceptable.) Proposals should include a title, a description of what the workshop is to address and accomplish, and the proposed length of the workshop (one day or two days). It should state why the topic is of interest or controversial, why it should be discussed and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications and evidence of scholarship in the field of interest.

Mail submissions to:

Dr. Gerald Tesauro
NIPS*92 Workshops Chair
IBM T. J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598 USA
(e-mail: tesauro at watson.ibm.com)

Name, mailing address, phone number, and e-mail net address (if applicable) must be on all submissions.

PROPOSALS MUST BE POSTMARKED BY MAY 22, 1992

Please Post

From arun at hertz.njit.edu Tue May 19 12:35:01 1992
From: arun at hertz.njit.edu (arun maskara spec lec cis)
Date: Tue, 19 May 92 12:35:01 -0400
Subject: paper available in neuroprose
Message-ID: <9205191635.AA22978@hertz.njit.edu>

Paper available:

Forced Simple Recurrent Neural Networks and Grammatical Inference

Arun Maskara
New Jersey Institute of Technology
Department of Computer and Information Sciences
University Heights, Newark, NJ 07102
arun at hertz.njit.edu

Andrew Noetzel
The William Paterson College
Department of Computer Science
Wayne, NJ 07470

ABSTRACT

A simple recurrent neural network (SRN) introduced by Elman can be trained to infer a regular grammar from the positive examples of symbol sequences generated by the grammar. The network is trained, through the back-propagation of error, to predict the next symbol in each sequence, as the symbols are presented successively as inputs to the network. The modes of prediction failure of the SRN architecture are investigated.
The SRN's internal encoding of the context (the previous symbols of the sequence) is found to be insufficiently developed when a particular aspect of context is not required for the immediate prediction at some point in the input sequence, but is required later. It is shown that this mode of failure can be avoided by using the auto-associative recurrent network (AARN). The AARN architecture contains additional output units, which are trained to show the current input and the current context.

The effect of the size of the training set for grammatical inference is also considered. The SRN has been shown to be effective when trained on an infinite (very large) set of positive examples. When a finite (small) set of positive training data is used, the SRN architectures demonstrate a lack of generalization capability. This problem is solved through a new training algorithm that uses both positive and negative examples of the sequences. Simulation results show that when there is a restriction on the number of nodes in the hidden layers, the AARN succeeds in the cases where the SRN fails.

----------------------------------------------------------------------------

This paper will appear in the Proceedings of the 14th Annual Cognitive Science Conference, 1992.

It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack)

Sorry, no hardcopy available.

FTP procedure:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get maskara.cogsci92.ps.Z
ftp> quit
unix> uncompress maskara.cogsci92.ps.Z
unix> lpr -Pxxx maskara.cogsci92.ps (or however you print postscript)

Arun Maskara
arun at hertz.njit.edu
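A minimal sketch of the AARN idea as described in the abstract, assuming an Elman-style network with two extra auto-associative output banks; the sizes and names below are illustrative, not from the paper.

import numpy as np

# Hypothetical AARN forward pass: besides predicting the next symbol, the
# network must reconstruct the current input and the current context,
# which forces the hidden layer to retain context that is not needed for
# the immediate prediction.
rng = np.random.default_rng(0)
n_sym, n_hid = 5, 16                          # alphabet size, hidden units

W_in   = rng.normal(0, 0.1, (n_hid, n_sym))   # input -> hidden
W_ctx  = rng.normal(0, 0.1, (n_hid, n_hid))   # previous hidden -> hidden
W_pred = rng.normal(0, 0.1, (n_sym, n_hid))   # hidden -> next-symbol output
W_ain  = rng.normal(0, 0.1, (n_sym, n_hid))   # hidden -> input reconstruction
W_actx = rng.normal(0, 0.1, (n_hid, n_hid))   # hidden -> context reconstruction

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, context):
    # One time step; during training all three output banks contribute
    # error terms (prediction error plus the two reconstruction errors).
    h = sigmoid(W_in @ x + W_ctx @ context)
    return sigmoid(W_pred @ h), sigmoid(W_ain @ h), sigmoid(W_actx @ h), h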
From thildebr at lion.csee.lehigh.edu Wed May 20 11:35:21 1992
From: thildebr at lion.csee.lehigh.edu (Thomas H. Hildebrandt)
Date: Wed, 20 May 92 11:35:21 -0400
Subject: Izumida translation
Message-ID: <9205201535.AA03119@lion.csee.lehigh.edu>

Regarding preprints of my translation of the paper by Izumida et al., "Analysis of Neural Network Energy Functions Using Standard Forms": Some mail systems refuse messages longer than 150000 bytes -- if yours is one, then I was unable to fulfill your request. It has been suggested that I place the PostScript on the Ohio State archive, but I do not wish to do this, as that might be in violation of certain copyrights. The alternative is to send out LaTeX source, which I will do if you renew your request.

Thomas H. Hildebrandt

From jbower at cns.caltech.edu Wed May 20 21:04:56 1992
From: jbower at cns.caltech.edu (Jim Bower)
Date: Wed, 20 May 92 18:04:56 PDT
Subject: CNS*92 registration
Message-ID: <9205210104.AA04439@cns.caltech.edu>

Registration announcement for:

First Annual Computation and Neural Systems Meeting "CNS*92"

Sunday, July 26 through Friday, July 31, 1992
San Francisco, California

This is the first annual meeting of an inter-disciplinary conference intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience. CNS*92 will bring together experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute.

-------------------------------------------------------------

Program

The meeting itself is divided into three sections. The first day is devoted to tutorial presentations, including basic tutorials for the uninitiated as well as advanced tutorials on new methods of data acquisition and analysis. The main meeting will take place on July 27 - 29 and will consist of the presentation of 106 papers accepted by peer review. The 24 most highly rated papers will be presented in a single oral session running over these three days. An additional 82 papers have been accepted for poster presentations. The last two days of the meeting (July 30-31) will be devoted to workshop sessions held at the Marconi Conference Center on the Pacific Coast, north of San Francisco.

-------------------------------------------------------------

Registration

Registration for the meeting is strictly limited to 350 on a first come, first served basis. Workshop registration is limited to 85.

-------------------------------------------------------------

Housing and Travel Grants

Housing: Relatively inexpensive housing will be available close to the meeting site for participants.

Travel Grants: Some funds will also be available for travel reimbursements. The intention of the organizing committee is to use these funds to provide support for students and postdoctoral fellows attending the meeting. Information on how to apply for travel support is provided in the registration materials.

-------------------------------------------------------------

Online Information

Information about the meeting is available via FTP over the internet (address: 131.215.135.69). To obtain registration forms or information about the agenda, currently registered attendees, and/or paper abstracts use the following sequence (things you type are in quotes):

> yourhost% "ftp 131.215.135.69"
> 220 mordor FTP server (SunOS 4.1) ready.
Name (131.215.135.69:): "ftp"
> 331 Guest login ok, send ident as password.
Password: "yourname at yourhost.yoursite.yourdomain"
> 230 Guest login ok, access restrictions apply.
ftp> "cd pub"
> 250 CWD command successful.
ftp>

At this point relevant commands are:
- "ls" to see the contents of the directory.
- "get (filename)" to obtain (filename)
- "mget *" to obtain everything listed

Note that the abstracts are contained in a separate directory called "abstracts". To change to this directory type "cd abstracts".

-------------------------------------------------------------

For additional information contact: cns92 at cns.caltech.edu

-------------------------------------------------------------

CNS*92 Organizing Committee:

Program Chair, James M. Bower, Caltech.
Publicity Chair, Frank Eeckman, Lawrence Livermore Labs.
Finances, John Miller, UC Berkeley and Nora Smiriga, Institute of Scientific Computing Res.
Local Arrangements, Ted Lewis, UC Berkeley and Muriel Ross, NASA Ames.

Program Committee:

William Bialek, NEC Research Institute.
James M. Bower, Caltech.
Frank Eeckman, Lawrence Livermore Labs.
Bard Ermentrout, Univ. of Pittsburgh.
Scott Fraser, Caltech.
Christof Koch, Caltech.
Ted Lewis, UC Berkeley.
Gerald Loeb, Queen's University.
Eve Marder, Brandeis.
Bruce McNaughton, University of Arizona.
John Miller, UC Berkeley.
Idan Segev, Hebrew University, Jerusalem.
Shihab Shamma, University of Maryland.
Josef Skrzypek, UCLA.

From thgoh at iss.nus.sg Wed May 20 16:54:53 1992
From: thgoh at iss.nus.sg (Goh Tiong Hwee)
Date: Wed, 20 May 92 16:54:53 SST
Subject: No subject
Message-ID: <9205200854.AA06973@iss.nus.sg>

I have placed the following paper in the neuroprose archive. Hardcopy requests by snailmail to me at the institute.
Thanks to Jordan Pollack for making the archive service available.

Neural Networks And Genetic Algorithm For Economic Forecasting

Francis Wong, PanYong Tan
Institute of Systems Science
National University of Singapore

Abstract: This paper describes the application of an enhanced neural network and genetic algorithm to economic forecasting. Our proposed approach has several significant advantages over conventional forecasting methods such as regression and the Box-Jenkins methods. Apart from being simple and fast in learning, a major advantage is that no assumptions need to be made about the underlying function or model, since the neural network is able to extract hidden information from the historical data. In addition, the enhanced neural network offers selective activation and training of neurons based on the instantaneous causal relationship between the current set of input training data and the output target. This causal relationship is represented by the Accumulated Input Error (AIE) indices, which are computed from the accumulated errors back-propagated to the input layer during training. The AIE indices are used in the selection of neurons for activation and training. Training time can be reduced significantly, especially for large networks designed to capture temporal information. Although neural networks represent a promising alternative for forecasting, the problem of network design remains a bottleneck that could impair widespread application in practice. The genetic algorithm is used to evolve optimal neural network architectures automatically, thus eliminating the many pitfalls associated with human-engineering approaches. The proposed concepts and design paradigm were tested on several real applications (please email thgoh at iss.nus.sg for a copy of the software), including the forecast of GDP, air passenger arrivals and currency exchange rates.

ftp Instructions:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get wong.nnga.ps.Z
ftp> quit

__________________________________________________________

Tiong Hwee Goh
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace
Kent Ridge
Singapore 0511
Telephone: (65)7726214
Fax: (65)7782571
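One plausible reading of the AIE mechanism, sketched below; the accumulation rule and the top-k selection are our assumptions, since the abstract gives no formulas.

import numpy as np

# Hypothetical Accumulated Input Error (AIE) bookkeeping: keep a running
# sum of the magnitude of the error signal that back-propagation delivers
# to each input unit, then activate and train only the inputs whose
# accumulated error is currently largest.
n_in, n_hid = 10, 8
aie = np.zeros(n_in)                         # one AIE index per input unit

def accumulate_aie(W1, hidden_delta):
    # W1 is the (n_hid, n_in) input-to-hidden weight matrix and
    # hidden_delta the back-propagated error at the hidden layer.
    input_error = W1.T @ hidden_delta        # error reaching each input
    aie[:] = aie + np.abs(input_error)

def selected_inputs(k):
    # Indices of the k inputs with the largest accumulated error; only
    # these (and their fan-out weights) would be trained on this pass.
    return np.argsort(aie)[-k:]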
From U53076%UICVM.BITNET at BITNET.CC.CMU.EDU Thu May 21 23:46:15 1992
From: U53076%UICVM.BITNET at BITNET.CC.CMU.EDU (Bruce Lambert)
Date: Thu, 21 May 92 22:46:15 CDT
Subject: Thesis placed on Neuroprose archive
Message-ID: <01GKAIF3KVB4FOD29C@BITNET.CC.CMU.EDU>

****Please do not forward to other lists ***********

The following paper has been placed on the neuroprose archive as lambert.thesis.ps.Z

A CONNECTIONIST MODEL OF MESSAGE DESIGN

Bruce L. Lambert
Department of Speech Communication
University of Illinois at Urbana-Champaign, 1992

Complexity in plan-based models of message production has typically been managed by the systematic exploitation of abstractions. Most significantly, abstract goals replace concrete situations, and act types replace instantiated linguistic contents. Consequently, plan-based models involve three key transformations: (a) the mapping of concrete situations onto abstract goals, (b) the mapping of abstract goals onto act types, and (c) the mapping of act types onto linguistic tokens. Transformations are made possible by the prior functional indexing of situations, acts and utterances. However, such functional indexing schemes are based on untenable assumptions about language and communication. For example, communicative means-ends relations are not fixed, and the assignment of functional significance to decontextualized linguistic forms contradicts a substantial body of evidence about the contextual dependency of language function.

A connectionist model of message production is proposed to avoid these difficulties. The model uses Quickprop, a modified version of the backpropagation learning algorithm, to learn a mapping from thoughts to the elements of a phrasal lexicon for a hypothetical broken date situation. Goals were represented as distributed patterns of context-specific thoughts. Messages were represented as distributed patterns of phrases.

Three studies were conducted to develop and test the model. The first study used protocol analysis to validate a checklist method for thought elicitation. The second study described a prototype system for automatically coding independent clauses into phrase categories (called message frames), and the system's ability to classify new examples was found to be limited. The final study analyzed the performance of the connectionist model. The generalization performance of the model was limited by the small number of examples, but an analysis of the receptive fields of several message frames was consistent with previous research about individual differences in reasoning about communication and supported the assertion that individual linguistic forms are multifunctional. The model provided preliminary support for a situated, phrasal lexical model of message design that does not rely on decontextualized, functionally indexed abstractions.

Retrieve the file in the normal manner from archive.cis.ohio-state.edu. Unfortunately, several of the figures are not included. Hardcopy of the figures will be sent to individuals who contact me directly by email. And thanks of course to Jordan Pollack for providing what has become a tremendously valuable resource to the connectionist community.

Bruce Lambert
Department of Pharmacy Administration (M/C 871)
College of Pharmacy
University of Illinois at Chicago
833 S. Wood St.
Chicago, IL 60612
email: u53076 at uicvm.uic.edu
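Since the thesis leans on Quickprop, here is a minimal sketch of the Quickprop weight update (after Fahlman, 1988), assuming the usual parabola-jump formulation; the parameter names and fallback rule are ours.

import numpy as np

# Quickprop treats the error curve for each weight as a parabola through
# the current and previous gradients and jumps toward its minimum.
def quickprop_step(grad, prev_grad, prev_step, lr=0.01, mu=1.75):
    denom = prev_grad - grad
    denom = np.where(np.abs(denom) > 1e-12, denom, 1e-12)  # avoid divide-by-zero
    step = grad / denom * prev_step                        # parabola jump
    step = np.where(prev_step == 0.0, -lr * grad, step)    # plain step if no history
    limit = mu * np.abs(prev_step) + lr                    # cap growth per epoch
    return np.clip(step, -limit, limit)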
From PYG0572 at VAX2.QUEENS-BELFAST.AC.UK Fri May 22 08:33:00 1992
From: PYG0572 at VAX2.QUEENS-BELFAST.AC.UK (GERRY ORCHARD)
Date: Fri, 22 May 92 12:33 GMT
Subject: Could you post this on the bulletin board please?
Message-ID: <01GKAYMIBD80FOD5QD@BITNET.CC.CMU.EDU>

The Second Irish Neural Networks Conference: June 25th and 26th 1992

Guest Speakers:
Prof. John Taylor, Kings College, London
Prof. Dan Amit, INFN Rome and Racah Institute of Physics
Prof. Vicki Bruce, University of Nottingham
Prof. George Irwin, Queen's Belfast

Presentations (alphabetical order):

A NEURAL NETWORK MODEL OF A HUMAN ATTENTION SWITCHING ABILITY
John Andrews and Mark Keane, Department of Computer Science, O'Reilly Institute, Trinity College, Dublin 2

GENERATING OBJECT-ORIENTED CODE THROUGH ARTIFICIAL NEURAL NETWORKS
J Brant Arseneau, Gillian F Sleith**, C Tim Spracklen, Gary Whittington, John MacRae*, Electronic Research Group, Department of Engineering, University of Aberdeen
*Institute of Software Engineering, Island Street, Belfast
**Dept. of Information Systems, Faculty of Informatics, UU at Jordanstown

LEARNING TO LEARN: THE CONTRIBUTION OF BEHAVIOURISM TO CONNECTIONIST MODELS OF INFERENTIAL SKILLS IN HUMANS
Dermot Barnes and Peter Hampson, Department of Applied Psychology, University College, Cork

NEURAL NETWORK TASK IDENTIFICATION FOR DISTRIBUTED WORKING SUPPORT
Russel Beale, Alan Dix* and Janet Finlay*, School of Computer Science, University of Birmingham
*HCI Group, Dept of Computer Science, University of York

A NEURAL CONTROLLER FOR NAVIGATION OF NON-HOLONOMIC MOBILE ROBOTS USING SENSORY INFORMATION
Rene Biewald, Control System Centre, U.M.I.S.T.

USING CHAOS TO PREDICT COMMODITY MARKET PRICE FLUCTUATIONS IN NEURAL NETWORKS
Christopher Burdorf and John Fitch, School of Mathematical Sciences, University of Bath

APPLICATION OF NEURAL NETWORKS TO MOTION PLANNING FOR A ROBOT ARM TO GRASP MOVING OBJECTS
Conor Doherty, Educational Research Centre, St Patrick's College, Dublin 9

SEMANTIC INTERACTION: A CONNECTIONIST MODEL OF LEXICAL COMBINATION
George Dunbar*, Masja Kempen**, Noel Maessen**
*Department of Psychology, University of Warwick
**Department of Psychology, University of Leiden

GENERALISATION AND CONVERGENCE IN THE MULTI-DIMENSIONAL ALBUS PERCEPTRON (CMAC)
D. Ellison, Dundee Institute of Technology, Dundee, Scotland

ARTIFICIAL NEURAL NETWORK BASED ELECTRONIC NOSE
E L Hines and J W Gardner, Dept of Engineering, University of Warwick, Coventry

A CONNECTIONIST MODEL OF HUMAN MUSICAL SCORE PROCESSING
James Hynan and Sean O Nuallian, Dublin City University, Glasnevin, Dublin 9

A NONLINEAR SYSTOLIC FILTER WITH RADIAL BASIS FUNCTION ESTIMATION
J. Kadlec, F. M. F. Gaston, G.W. Irwin, Control Research Group, Dept. of Electrical and Electronic Engineering, The Queen's University of Belfast

HYPERCUBE CUTS AND KARNAUGH MAPS: TOWARDS AN UNDERSTANDING OF BINARY FEEDFORWARD NEURAL NETWORKS
Brendan Kiernan, Dept. of Computer Science, Trinity College Dublin

A NEURAL NETWORK BASED DIAGNOSTIC TOOL FOR LIVER DISORDERS
L Kilmartin, E Ambikairajah and *S M Lavelle, Department of Electronic Engineering, Regional Technical College, Athlone
*Department of Experimental Medicine, University College Galway

MODELLING MEMBRANE POTENTIALS IS MORE FLEXIBLE THAN SPIKES
Peter Laming, Dept of Biology and Biochemistry, The Queen's University of Belfast

A NEURAL MECHANISM FOR DIVERSE BEHAVIOUR
R Linggard, School of Information Systems, University of East Anglia, Norwich

A NEURAL NETWORK APPROACH TO EQUALIZATION OF A NON-LINEAR CHANNEL
E Luk and A D Fagan, Department of Electronic and Electrical Engineering, University College, Dublin

HANDWRITTEN SIGNATURE VERIFICATION USING THE BACKPROPAGATION NEURAL NETWORK
D K R McCormack, Department of Computing Mathematics, University of Wales College of Cardiff

USING A 2-STAGE ARTIFICIAL NEURAL NETWORK TO DETECT ABNORMAL CERVICAL CELLS FROM THEIR FREQUENCY DOMAIN IMAGE
McKenna S, Ricketts IV, Cairns AY, Hussein KA*, MicroCentre, Department of Mathematics and Computer Science, The University, Dundee
*Dept. Pathology, Ninewells Hospital, Dundee

COMPARING FEEDFORWARD NEURAL NETWORK MODELS FOR TIME SERIES PREDICTION
John Mitchell, Hitachi Dublin Laboratory, O'Reilly Institute, Trinity College, Dublin

HAND-WRITTEN DIGIT RECOGNITION EXPLORATIONS IN CONNECTIONISM
Michal Morciniec, Computer Science Department, University College, Dublin

THE EFFECT OF ALL-CONNECTIVE BACK-PROPAGATION ALGORITHM ON THE LEARNING CHARACTERISTIC OF A NEURAL NETWORK
S Namasivayam and J T McMullen*, Applied Physical Science, University of Ulster at Coleraine
*Centre for Energy Research, University of Ulster at Coleraine

INFORMATION THEORY AND NEURAL NETWORK LEARNING ALGORITHMS: AN OVERVIEW
M. D. Plumbley, Centre for Neural Networks, King's College London

AN EXPLORATION OF CLAUSE BOUNDARY EFFECTS IN SIMPLE RECURRENT NETWORK REPRESENTATIONS
Ronan Reilly, Department of Computer Science, University College Dublin

A MODEL FOR THE ORGANISATION OF OCULAR DOMINANCE STRIPES
Craig R Renfrew, Dept of Computer Science, University of Strathclyde

APPLICATION OF NEURAL NETWORKS TO ATMOSPHERIC CHERENKOV IMAGING DATA FROM THE CRAB NEBULA
Paul T Reynolds, University College Dublin, Belfield, Dublin 4

ARTIFICIAL REWARDS
Tony Savage, School of Psychology, The Queen's University of Belfast

A MODULAR NETWORK MODEL FOR SEGMENTING VISUAL TEXTURES BASED ON ORIENTATION CONTRAST
Andrew J Schofield and David H Foster, Dept of Communication and Neuroscience, University of Keele

PRESTRUCTURED NEURAL NETS AND THE TRANSFER OF KNOWLEDGE
Amanda J.C. Sharkey and Noel E. Sharkey, Centre for Connection Science, Department of Computer Science, University of Exeter, Exeter, Devon

HYBRID SYSTEMS - NEURAL NETWORKS IN PROBLEM-SOLVING ENVIRONMENTS
P. Sims, D.A. Bell, Dept. of Information Systems, University of Ulster at Jordanstown

DESIGN OF AN INTEGRATED-CIRCUIT ANALOGUE NEURAL NETWORK
Winand G van Sloten, School of Electrical Engineering and Computer Science, The Queen's University of Belfast

SYNAPTIC CORRELATES OF SHORT- AND LONG-TERM MEMORY FORMATION IN THE CHICK FOREBRAIN FOLLOWING ONE-TRIAL PASSIVE AVOIDANCE LEARNING
M G Stewart, Brain and Behaviour Research Group, Dept of Biology, Open University, Milton Keynes

WHY CONNECTIONIST NETWORKS PROVIDE NATURAL MODELS OF THE WAY SENTENCE CONTEXT AFFECTS IDENTIFICATION OF A WORD
Eamonn Strain, Department of Psychology, University of Nottingham
Roddy Cowie, School of Psychology, Queen's University, Belfast

A NEW ALGORITHM FOR CORRECTING SLANT IN HANDWRITTEN NUMERALS
S Sunthankar, School of Computer Science & Electronic Systems, Kingston Polytechnic

AN ALGORITHM FOR SEGMENTING HANDWRITTEN NUMERAL STRINGS
S Sunthankar, School of Computer Science & Electronic Systems, Kingston Polytechnic

USING NEURAL NETWORKS TO FIND GOLD
Peter M Williams, School of Cognitive and Computing Sciences, University of Sussex

NEURAL LEARNING ROBOT CONTROL: A NEW APPROACH VIA THE THEORY OF COGNITION
A M S Zalzala, Control Engineering Research Group, Department of Electrical & Electronic Engineering, The Queen's University of Belfast

Registration: 65 pounds sterling, including lunches and coffee/tea.

FURTHER INFORMATION FROM:
Dr. Gerry Orchard
Cognitive and Computational Modelling Group
School of Psychology
Queen's University Belfast
Tel 0232 245133 Ext 4354/4360
Fax 0232 664144
Email g.orchard at uk.ac.qub.v2

From jagota at cs.Buffalo.EDU Sun May 24 16:37:17 1992
From: jagota at cs.Buffalo.EDU (Arun Jagota)
Date: Sun, 24 May 92 16:37:17 EDT
Subject: (P)reprints by ftp
Message-ID: <9205242037.AA13921@sybil.cs.Buffalo.EDU>

Dear Connectionists:

Latex sources of the following and other (p)reprints are available via ftp:

ftp ftp.cs.buffalo.edu (or 128.205.32.3, subject to change)
Name: anonymous
> cd users/jagota
> get

Efficiently Approximating Max-Clique in a Hopfield-style Network
Oral presentation at IJCNN'92 Baltimore. File: ijcnn92.tex

Representing Discrete Structures in a Hopfield-style Network
Book chapter (to appear). File: chapter.tex

A Hopfield-style Network with a Graph-theoretic Characterization
Journal article (to appear). File: JANN92.tex

Problems? Contact jagota at cs.buffalo.edu

Arun Jagota

From davies at Athena.MIT.EDU Mon May 25 18:26:05 1992
From: davies at Athena.MIT.EDU (davies@Athena.MIT.EDU)
Date: Mon, 25 May 92 18:26:05 EDT
Subject: EUROPEAN SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY
Message-ID: <9205252226.AA27004@e40-008-9.MIT.EDU>

************************************************************************
******     EUROPEAN SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY        ******
***********             INAUGURAL CONFERENCE              ***********
****                    17 - 19 JULY, 1992                        ****
******   SECOND ANNOUNCEMENT: REGISTRATION AND ACCOMMODATION     ******

The Inaugural Conference of the European Society for Philosophy and Psychology will be held in the Philosophy Institute, University of Louvain, Belgium, from Friday 17 July to Sunday 19 July, 1992. The goal of the Society is 'to promote interaction between philosophers and psychologists on issues of common concern'.

***** REGISTRATION *****

In order to register for the conference, please send your NAME, ACADEMIC AFFILIATION, POSTAL MAIL ADDRESS, EMAIL ADDRESS, and TELEPHONE NUMBER to:

Beatrice de Gelder, Psychology Department, Tilburg University, P.O.Box 90153, 5000 LE Tilburg, Netherlands

or provide the same information by email to: beadegelder at kub.nl

In either case, please state whether you are paying the Registration Fee by cheque, banker's draft, or electronic transfer.

THE REGISTRATION FEE is Bfrs. 2400,- (approx. 40 pounds sterling) or Bfrs. 1200,- for students (approx. 20 pounds sterling). If you would like to attend the conference dinner on Friday 17 July, then the extra charge is Bfrs. 1500,- including wine (approx. 25 pounds sterling). REGISTRATION IS NOT COMPLETE UNTIL PAYMENT HAS BEEN RECEIVED. Payment MUST be in Belgian francs, by cheque or banker's draft made payable to: B. de Gelder-Euro-SPP, OR by electronic transfer to the following: Bank: AN-HYP Brussels, Belgium, Account number: 750-9345993-07, for the attention of: B. de Gelder-Euro-SPP.

When registration is complete, you will be sent an information pack including maps and other touristic information along with a detailed programme.

************************************************************************

***** ACCOMMODATION *****

Rooms have been reserved in several hotels (all within walking distance of the Philosophy Institute) at special reduced rates. In order to book accommodation, please contact one of the hotels directly, and mention the Euro-SPP Conference.
The hotels and rates are:

Hotel Binnenhof
Maria-Theresiastraat 65
B-3000 Leuven (Louvain)
Tel: +32-16-20.55.92
Fax: +32-16-23.69.26
Rate: Bfrs. 2450,-

Hotel Industrie
Martelarenplein 7
B-3000 Leuven (Louvain)
Tel: +32-16-22.13.49
Fax: +32-16-20.82.85
Rate: Bfrs. 1050,-

Begijnhof Congreshotel
Tervuursevest 70
B-3000 Leuven (Louvain)
Tel: +32-16-29.10.10
Fax: +32-16-29.10.22
Rate: Bfrs. 3500,-

Hotel Arcade
Brusselsestraat 52
B-3000 Leuven (Louvain)
Tel: +32-16-29.31.11
Fax: +32-16-23.87.92
Rate: Bfrs. 2000,-

Student accommodation
Contact: Stefan Cuypers, Centre for Logic and Philosophy of Science, B-3000 Leuven (Louvain)
Tel: +32-16-28.63.15
Fax: +32-16-28.63.11
Rate: Bfrs. 585,-

TO MAKE SURE YOU WILL OBTAIN HOTEL ACCOMMODATION YOU MUST CONTACT THE HOTEL OF YOUR CHOICE BEFORE 17 JUNE 1992.

************************************************************************

***** PROGRAMME *****

FRIDAY 17 JULY
Conference desk open from 11 am
2.00 pm Coffee
3.00 - 5.00 pm SYMPOSIUM 1: Consciousness
5.30 pm INVITED LECTURE: Larry Weiskrantz
7.00 pm RECEPTION at the kind invitation of the Philosophy Institute
8.00 pm CONFERENCE DINNER

SATURDAY 18 JULY
9.00 - 10.30 am SYMPOSIUM 2: Probabilistic Reasoning
10.30 am Coffee
11.00 am - 1.00 pm SYMPOSIUM 3: Intentionality
1.00 - 2.30 pm Lunch
2.30 - 4.00 pm SYMPOSIUM 4: Theory of Mind
4.30 - 6.00 pm SYMPOSIUM 5: Philosophical Issues from Linguistics
6.15 pm INAUGURAL BUSINESS MEETING OF THE EURO-SPP
7.00 pm RECEPTION at the kind invitation of Blackwell Publishers

SUNDAY 19 JULY
9.00 - 11.00 am SYMPOSIUM 6: Connectionist Models
11.00 am Coffee
11.30 am INVITED LECTURE: Dan Sperber
1.00 pm Lunch

Symposium speakers include: Peter Carruthers, Andy Clark, Anthony Dickinson, Gerd Gigerenzer, Stevan Harnad, Nigel Harvey, Nick Humphrey, Pierre Jacob, Giuseppe Longobardi, Gabriele Miceli, Odmar Neumann, David Over, Josef Perner, Kim Plunkett, David Premack, Andrew Woodfield, Andy Young.

************************************************************************

For further information contact:

Daniel Andler
CREA, 1 rue Descartes, 75005 Paris, France
Email: azra at poly.polytechnique.fr

Martin Davies
Philosophy Department, Birkbeck College, Malet Street, London WC1E 7HX, England
Email: ubty003 at cu.bbk.ac.uk

Beatrice de Gelder
Psychology Department, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, Netherlands
Email: beadegelder at kub.nl

Tony Marcel
MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge CB2 2EF, England
Email: tonym at mrc-apu.cam.ac.uk

************************************************************************

From burrow at grad1.cis.upenn.edu Mon May 25 16:27:26 1992
From: burrow at grad1.cis.upenn.edu (Thomas Fontaine)
Date: Mon, 25 May 92 16:27:26 EDT
Subject: TR available in neuroprose
Message-ID: <9205252027.AA17328@gradient.cis.upenn.edu>

************** PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS ****************

The following technical report has been placed in the neuroprose archives at Ohio State University:

CHARACTER RECOGNITION USING A MODULAR SPATIOTEMPORAL CONNECTIONIST MODEL

Thomas Fontaine and Lokendra Shastri
Technical Report MS-CIS-92-24/LINC LAB 219
Computer and Information Science Department
200 South 33rd Street
University of Pennsylvania
Philadelphia, PA 19104-6389

We describe a connectionist model for recognizing handprinted characters. Instead of treating the input as a static signal, the image is scanned over time and converted into a time-varying signal.
The temporalized image is processed by a spatiotemporal connectionist network suitable for dealing with time-varying signals. The resulting system offers several attractive features, including shift-invariance and inherent retention of local spatial relationships along the temporalized axis, a reduction in the number of free parameters, and the ability to process images of arbitrary length. Connectionist networks were chosen as they offer learnability, rapid recognition, and attractive commercial possibilities. A modular and structured approach was taken in order to simplify network construction, optimization and analysis.

Results on the task of handprinted digit recognition are among the best reported to date on a set of real-world ZIP code digit images provided by the United States Postal Service. The system achieved a 99.1% recognition rate on the training set and a 96.0% recognition rate on the test set with no rejections. A 99.0% recognition rate on the test set was achieved when 14.6% of the images were rejected.

************************ How to obtain a copy ************************

I'm sorry, but hardcopies are not available.

To obtain via anonymous ftp:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get fontaine.charrec.ps.Z
ftp> quit
unix> uncompress fontaine.charrec.ps.Z
unix> lpr fontaine.charrec.ps (or however you print Postscript)

[Please note that some of the figures were produced with a Macintosh and the resulting Postscript may not print on all printers. People using an Apple LaserWriter should have no problems, though.]
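A minimal sketch of the temporalization step as we read the abstract: the static image is scanned over time so that the network sees a sequence of fixed-height frames. The scan axis and ordering below are our assumptions.

import numpy as np

# Convert an H x W image into a length-W sequence of H-dimensional
# frames, one column per time step; a spatiotemporal network then
# processes the frames in order, which buys shift-invariance along the
# scanned (temporalized) axis and lets images of arbitrary width be
# handled.
def temporalize(image):
    return [image[:, t] for t in range(image.shape[1])]

frames = temporalize(np.zeros((16, 12)))     # 12 time steps, 16 values each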
From bernard%arti1.vub.ac.be at BITNET.CC.CMU.EDU Mon May 25 13:28:42 1992
From: bernard%arti1.vub.ac.be at BITNET.CC.CMU.EDU (Bernard Manderick)
Date: Mon, 25 May 92 19:28:42 +0200
Subject: Announcement
Message-ID: <9205251728.AA12164@arti1.vub.ac.be>

Dear moderator,

I would be pleased if you could publish the announcement below in the next issue of your electronic newsletter. I apologize for any inconvenience and many thanks in advance,

Bernard Manderick
Artificial Intelligence Lab
VUB
Pleinlaan 2
B-1050 Brussels
BELGIUM
Phone +32 2 641 35 75
Fax +32 2 641 35 82
Email ppsn at arti.vub.ac.be

-------------------------------------------------------------------------------

PPSN92

PARALLEL PROBLEM SOLVING FROM NATURE CONFERENCE
ARTIFICIAL INTELLIGENCE LAB
FREE UNIVERSITY OF BRUSSELS
BELGIUM
28 - 30 SEPTEMBER 1992

General Information
===================

The second Parallel Problem Solving from Nature conference (PPSN92) will be held at the Free University of Brussels, September 28-30, 1992.

The unifying theme of the PPSN conference is natural computation, i.e. the design, the theoretical and empirical understanding, and the comparison of algorithms gleaned from nature and their application to real-world problems in science and technology. Examples are genetic algorithms, evolution strategies, and algorithms based on neural networks and immune systems.

Since last year there has been a collaboration with the International Conference on Genetic Algorithms (ICGA), resulting in an alternation of both conferences. The ICGA conferences will be held in the US during the odd years while the PPSN conferences will be held in Europe during the even years.

The major objective of this conference is to provide a biannual international forum for scientists from all over the world where they can discuss new theoretical, empirical and implementational developments in natural computation in general and evolutionary computation in particular, together with their applications in science, technology and administration.

The conference will feature invited speakers, technical and poster sessions.

Registration Information
========================

REGISTRATION FEE

The registration fees for PPSN92 are as follows:

Before August 15: Normal 8 500 BEF, Student 5 000 BEF
After August 15:  Normal 10 500 BEF, Student 7 000 BEF

These fees cover conference participation, conference proceedings, refreshment breaks, lunches, welcome reception and transport from the hotels to the conference site and back.

The deadline for registration is August 15, 1992. After this date, an additional fee of 2 000 BEF will be charged. (1 US dollar is about 35 BEF and 1 ECU is about 42 BEF.)

ACCOMMODATION

The conference will take place in the Aula (building Q) of the Free University of Brussels (VUB), Pleinlaan 2, B-1050 Brussels. There is no on-campus housing available. We have made arrangements with a number of hotels. The following hotels have agreed to reserve rooms until July 15, 1992 on a first come, first served basis. All hotels are located in the center of the city. Transport from the hotels to the conference site and back will be available. The indicated rates are in Belgian francs (BEF) and include breakfast.

NOTE THAT RESERVATIONS HAVE TO BE MADE BY THE ATTENDEES, AND THIS BEFORE JULY 15 IF THEY WANT TO BENEFIT FROM RESERVED ROOMS.

Hotel Albert I
Place Rogier 20, B-1210 Brussels
tel. +32/2/217 21 25, fax +32/2/217 93 31
Single: 3 000, double: 3 500, triple room: 4 000 (50 rooms)

Hotel Arcade
Place Sainte Catherine, B-1000 Brussels
tel. +32/2/513 76 20, fax +32/2/514 22 14
Single: 3 600, double: 3 600, triple room: 4 150 (50 rooms)

Hotel Arenberg
Rue d'Assaut 15, B-1000 Brussels
tel. +32/2/511 07 70, fax +32/2/514 19 76
Single: 4 500, double room: 5 100 (30 rooms)

Hotel Delta
Chaussee de Charleroi 17, B-1060 Brussels
tel. +32/2/539 01 60, fax +32/2/537 90 11
Single: 4 500, double room: 5 100 (50 rooms)

Hotel Ibis
Grasmarkt 100, B-1000 Brussels
tel. +32/2/514 40 40, fax +32/2/514 50 67
Single: 3 650, double room: 4 150, triple room: 5 150 (50 rooms)

Hotel Palace
Rue Gineste 3, B-1210 Brussels
tel. +32/2/217 79 94, fax +32/2/218 76 51
Single: 4 500, double room: 5 100 (80 rooms)

CONFERENCE BANQUET

The conference dinner will be held on the evening of Tuesday, September 29 in one of the well-known restaurants in the famous Ilot Sacre, which is close to the Grande Place. The banquet costs 1 500 BEF - see registration form. Places are limited and will be assigned on a first come, first served basis.

ACCOMPANYING PERSONS PROGRAM

There is no accompanying persons program. All hotels will be happy to inform you about all kinds of attractions (shopping, tourism, gastronomy, museums and the like) in Brussels. Most of these attractions are within walking distance of your hotel.

TRAVELING TO BRUSSELS

Brussels has three international railway stations. Brussels Central ("Brussel Centraal" in Dutch/"Gare Centrale" in French) lies in the center of the city; Brussels South ("Brussel Zuid"/"Gare du Midi") and Brussels North ("Brussel Noord"/"Gare du Nord") are less than 2 kilometers to the south and to the north of the center, respectively.
The Brussels airport is located 10 km from the center of the city, where most hotels are situated. There is a special train between the airport and Brussels Central which runs at half-hour intervals. It also stops at Brussels North. All major airlines fly to Brussels.

PAYMENT

* Cheques (to be sent to PPSN92 Registrations). Please note that all charges, if any, must be at the participants' expense. Eurocheques are preferred.

* Bankers draft to the order of PPSN: ASLK-CGER Bank, Belgium: Account 001-2361627-44, mentioning your name. Please ask your bank to arrange the transfer at no cost to the beneficiary. Bank charges, if any, will be at the participants' expense.

CANCELLATION

Refunds of 50% will be made if a written request is received before September 15. No refunds will be made for cancellations received after this date.

SPONSORS

The conference is sponsored by the Commission of the European Communities - DG XIII - Esprit, Siemens-Nixdorf, Parsytec, the National Fund for Scientific Research and the Research Council of the Free University of Brussels.

REGISTRATION

Attached you will find the registration form. Completed registration forms should be returned by mail, email or fax to:

PPSN92 Registrations
Artificial Intelligence Lab
VUB
Pleinlaan 2
B-1050 Brussels
BELGIUM
Phone +32 2 641 35 75
Fax +32 2 641 35 82
Email ppsn at arti.vub.ac.be

-------------------------------------------------------------------------------
REGISTRATION FORM - CUT HERE
-------------------------------------------------------------------------------

PPSN92 REGISTRATION FORM

PERSONAL DETAILS

Name, First Name ___________________________________________
Address          ___________________________________________
                 ___________________________________________
Zip code ____________ City _______________________
Country          ___________________________________________
Telephone No.    ___________________________________________
Fax No.          ___________________________________________
Email            ___________________________________________

REGISTRATION FEE

Students requiring the reduced fee must provide proof of status (such as a copy of a student ID). Please tick the appropriate boxes below.

Normal Registration            8 500 BEF  [ ]
Student Registration           5 000 BEF  [ ]
Late Fee (after August 15)     2 000 BEF  [ ]

Registration Total (BEF): ___________

LUNCHES

Vegetarian [ ]

CONFERENCE DINNER

Yes - I wish to attend the PPSN92 Conference Dinner (1 500 BEF) [ ]
Nr. of additional tickets (Nr. * 1 500 BEF): ___________

Dinner Total (BEF): ___________

FINAL TOTAL (BEF): ___________

From rich at gte.com Tue May 26 16:14:08 1992
From: rich at gte.com (Rich Sutton)
Date: Tue, 26 May 92 16:14:08 -0400
Subject: Announcement of papers extending Delta-Bar-Delta
Message-ID: <9205262014.AA22222@bunny.gte.com>

Dear Learning Researchers:

I have recently done some work extending that of Jacobs and others on learning-rate-adaptation methods. The three papers announced below extend it in the directions of machine learning, optimal linear estimation, and psychological modeling, respectively. Information on how to obtain copies is given at the end of the message.

-Rich Sutton

----------------------------------------------------------------------

To appear in the Proceedings of the Tenth National Conference on Artificial Intelligence, July 1992:

ADAPTING BIAS BY GRADIENT DESCENT: AN INCREMENTAL VERSION OF DELTA-BAR-DELTA
Richard S. Sutton
GTE Laboratories Incorporated

Appropriate bias is widely viewed as the key to efficient learning and generalization. I present a new algorithm, the Incremental Delta-Bar-Delta (IDBD) algorithm, for the learning of appropriate biases based on previous learning experience. The IDBD algorithm is developed for the case of a simple, linear learning system---the LMS or delta rule with a separate learning-rate parameter for each input. The IDBD algorithm adjusts the learning-rate parameters, which are an important form of bias for this system. Because bias in this approach is adapted based on previous learning experience, the appropriate testbeds are drifting or non-stationary learning tasks. For particular tasks of this type, I show that the IDBD algorithm performs better than ordinary LMS and in fact finds the optimal learning rates. The IDBD algorithm extends and improves over prior work by Jacobs and by me in that it is fully incremental and has only a single free parameter. This paper also extends previous work by presenting a derivation of the IDBD algorithm as gradient descent in the space of learning-rate parameters. Finally, I offer a novel interpretation of the IDBD algorithm as an incremental form of hold-one-out cross validation.

--------------------------------------------------------------------

Appeared in the Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems, May 1992, pages 161-166:

GAIN ADAPTATION BEATS LEAST SQUARES?

Richard S. Sutton
GTE Laboratories Incorporated

I present computational results suggesting that gain-adaptation algorithms based in part on connectionist learning methods may improve over least squares and other classical parameter-estimation methods for stochastic time-varying linear systems. The new algorithms are evaluated with respect to classical methods along three dimensions: asymptotic error, computational complexity, and required prior knowledge about the system. The new algorithms are all of the same order of complexity as LMS methods, O(n), where n is the dimensionality of the system, whereas least-squares methods and the Kalman filter are O(n^2). The new methods also improve over the Kalman filter in that they do not require a complete statistical model of how the system varies over time. In a simple computational experiment, the new methods are shown to produce asymptotic error levels near that of the optimal Kalman filter and significantly below those of least-squares and LMS methods. The new methods may perform better even than the Kalman filter if there is any error in the filter's model of how the system varies over time.

------------------------------------------------------------------------

To appear in the Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, July 1992:

ADAPTATION OF CUE-SPECIFIC LEARNING RATES IN NETWORK MODELS OF HUMAN CATEGORY LEARNING

Mark A. Gluck, Paul T. Glauthier
Center for Molecular and Behavioral Neuroscience, Rutgers
and
Richard S. Sutton
GTE Laboratories Incorporated

Recent engineering considerations have prompted an improvement to the least mean squares (LMS) learning rule for training one-layer adaptive networks: incorporating a dynamically modifiable learning rate for each associative weight accelerates overall learning and provides a mechanism for adjusting the salience of individual cues (Sutton, 1992). Prior research has established that the standard LMS rule can characterize aspects of animal learning (Rescorla & Wagner, 1972) and human category learning (Gluck and Bower, 1988). We illustrate how this enhanced LMS rule is analogous to adding a cue-salience or attentional component to the psychological model, giving the network model a means of distinguishing between relevant and irrelevant cues. We then demonstrate the effectiveness of this enhanced LMS rule for modelling human performance in two non-stationary learning tasks for which the standard LMS network model fails to account for the data (Hurwitz, 1990; Gluck, Glauthier & Sutton, in preparation).

------------------------------------------------------------------------

To obtain copies of these papers, please send an email request to jpierce at gte.com. Be sure to include your physical mailing address.
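For readers who want the flavor of the IDBD update before retrieving the papers, here is a minimal sketch assuming the usual formulation (per-input log learning rates beta_i, a single meta learning rate theta); the variable names are ours, not code from the papers.

import numpy as np

# One IDBD step for the LMS rule on example (x, target): adapt the
# per-input learning rates by gradient descent, then take the LMS step.
def idbd_update(w, beta, h, x, target, theta=0.01):
    delta = target - w @ x                    # LMS prediction error
    beta = beta + theta * delta * x * h       # meta step on log learning rates
    alpha = np.exp(beta)                      # per-input learning rates
    w = w + alpha * delta * x                 # LMS step with adapted rates
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, beta, h                         # h traces each weight's recent updates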
From wahba at stat.wisc.edu Wed May 27 21:43:00 1992
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Wed, 27 May 92 20:43:00 -0500
Subject: Book Advert-CV,GCV, et al
Message-ID: <9205280143.AA26382@hera.stat.wisc.edu>

BOOK ADVERT - CV, GCV, DF SIGNAL, THE BIAS-VARIANCE TRADEOFF AND ALL THAT ....

Spline Models for Observational Data by G. Wahba
v 59 in the SIAM NSF/CBMS Series in Applied Mathematics

Although this book is written in the language of statistics, it covers a number of topics that are increasingly recognized as being of importance to the computational learning community. It is well known that models such as neural nets, radial basis functions, splines and other Bayesian models that are adapted to fit the data very well may in fact overfit the data, leading to large generalization error. In particular, minimizing generalization error, aka the bias-variance tradeoff, is discussed in the context of smooth multivariate function estimation with noisy data. Here, reducing the bias (fitting the data well) increases the variance (a proxy for the generalization error) and vice versa. Included is an in-depth discussion of ordinary cross validation, generalized cross validation and unbiased risk as criteria for optimizing the bias-variance tradeoff. The role of "degrees of freedom for signal" as well as the relationships between Bayes estimation, regularization, optimization in (reproducing kernel) Hilbert spaces, splines, and certain radial basis functions are covered, as well as a discussion of the relationship between generalized cross validation and maximum likelihood estimates of the main parameter(s) controlling the bias-variance tradeoff, both in the context of a well-known prior for the unknown smooth function, and in the general context of (smooth) regularization.

....................

Spline Models for Observational Data, by Grace Wahba
v. 59 in the CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, PA, March 1990.
Softcover, 169 pages, bibliography, author index.
ISBN 0-89871-244-0
List Price $24.75, SIAM or CBMS* Member Price $19.80
(Domestic 4th class postage free, UPS or Air extra)

May be ordered from SIAM by mail, electronic mail, or phone:

e-mail (internet): service at siam.org

SIAM
P. O. Box 7260
Philadelphia, PA 19101-7260
USA

Toll-Free 1-800-447-7426 (8:30-4:45 Eastern Standard Time, USA)
Regular phone: (215)382-9800
FAX (215)386-7999

May be ordered on American Express, Visa or Mastercard, or paid by check or money order in US dollars, or may be billed (extra charge).

*CBMS member organizations include AMATC, AMS, ASA, ASL, ASSM, IMS, MAA, NAM, NCSM, ORSA, SOA and TIMS.
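Since generalized cross validation is one of the book's central criteria, a minimal sketch of GCV for ridge regression may help fix ideas; this is our formulation of the standard definition, not code from the book.

import numpy as np

# GCV(lambda) = n * ||(I - A(lambda)) y||^2 / tr(I - A(lambda))^2,
# where A(lambda) is the influence ("hat") matrix of the regularized fit.
def gcv_score(X, y, lam):
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - A @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - A) ** 2

# Choose the regularization parameter minimizing the GCV score over a grid:
# lam_best = min(lams, key=lambda l: gcv_score(X, y, l))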
From jose at tractatus.siemens.com Wed May 27 22:38:42 1992
From: jose at tractatus.siemens.com (Steve Hanson)
Date: Wed, 27 May 1992 22:38:42 -0400 (EDT)
Subject: Paper Available.
Message-ID:

The following paper (NOT posted on neuroprose) can be obtained by sending a note with your address to kic at learning.siemens.com.

To appear in Cognitive Science Conference, July 1992, Indiana University.

DEVELOPMENT OF SCHEMATA DURING EVENT PARSING: Neisser's Perceptual Cycle as a Recurrent Connectionist Network

Catherine Hanson
Department of Psychology
Temple University
Philadelphia, PA 19122
Phone: 215-787-1279
EMAIL: cat at astro.ocis.temple.edu

Stephen José Hanson
Learning Systems Department
SIEMENS Research
Princeton, NJ 08540
Phone: 609-734-3360
EMAIL: jose at tractatus.siemens.com

Abstract

Event boundary judgements depend on schema activation and subsequently affect encoding of perceptual action sequences. Past work has either focused on process level descriptions (Neisser) without computational implications or on knowledge structure level descriptions (Schank's "scripts") without also providing process level descriptions at a computational level. The present work combines both process level descriptions and learned knowledge structures in a simple recurrent connectionist network. The recurrent connectionist network is used to model humans' event parsing judgements of two kinds of video-taped event sequences. The network can accommodate the complex event boundary judgement time-series and makes predictions about the basis of how schemata are activated, what role they play during encoding and how they develop during learning.

Areas: Cognitive Psychology, Connectionist Models, AI

Stephen J. Hanson
Learning Systems Department
SIEMENS Research
755 College Rd. East
Princeton, NJ 08540

From ngoddard at carrot.psc.edu Thu May 28 11:39:29 1992
From: ngoddard at carrot.psc.edu (ngoddard@carrot.psc.edu)
Date: Thu, 28 May 92 11:39:29 -0400
Subject: Do you need faster/bigger simulations?
Message-ID: <23931.707067569@carrot.psc.edu>

The Pittsburgh Supercomputing Center (PSC) encourages applications for grants of supercomputing resources from researchers using neural networks. Our Cray YMP is running the NeuralShell, Aspirin and PlaNet simulators. The CM-2 (32k nodes) currently runs two neural network simulators, one being a data-parallel version of the McClelland and Rumelhart PDP simulator. These simulators are also available on the 256-node CM-5 installed at the PSC (currently without vector units). Users can run their own simulation code or modify our simulators; there is some support for porting code.

PSC is primarily funded by the National Science Foundation and there is no explicit charge to U.S.-based academic researchers for use of its facilities. International collaboration is encouraged, but each proposal must include a co-principal investigator from a U.S. institution. Both academic and industry researchers are encouraged to apply.

The following numbers give an idea of the scale of experiment that can be run using PSC resources. The bottom line is that in a day one can obtain results that would have required months on a workstation. For a large backpropagation network (200-200-200) the Cray simulators reach approximately 20 million online connection-weight updates per second (MCUPS) on a single CPU. This is about 100 times the speed of a DecStation 5000/120 and about 30 times the speed of an IBM 730 on the same problem.
It could be increased by a factor of 8 if all of the Cray YMP's 8 CPUs were dedicated to the task. The McClelland & Rumelhart simulator on the CM-2 achieves about 20 MCUPS (batch) using 8k processors or about 80 MCUPS using 32k processors. The Zhang CM-2 backpropagation simulator has been benchmarked at about 150 MCUPS using all 32k processors. Current CM-5 performance is around 35 MCUPS (batch) per 128-node partition for the McClelland & Rumelhart simulator. CM-5 performance should improve dramatically once vector units are installed.

A service unit on the Cray YMP corresponds to approximately 40 minutes of CPU time using 2 MW of memory; on the CM-2 it is one hour's exclusive use of 8k processors. Grants of up to 100 service units are awarded every two weeks; larger grants are awarded quarterly. Descriptions of the types of grants available and the application form can be obtained as described below.

NeuralShell, Aspirin and PlaNet run on various platforms including workstations and are available by anonymous ftp as described below. The McClelland & Rumelhart simulator is available with their book "Explorations in Parallel Distributed Processing", 1988. Documentation outlining the facilities provided by each simulator is included in the sources. The suitability of the supercomputer version of each simulator for different types of network sizes and topologies is discussed briefly in PSC documentation that can be obtained by anonymous ftp as described below. It should be possible to develop a neural network model on a workstation and later conduct full-scale testing on the PSC's supercomputers without substantial changes.

Instructions for how to use the anonymous ftp facility appear at the end of this message. Further inquiries concerning the Neural and Connectionist Modeling program should be sent to Nigel Goddard at ngoddard at psc.edu (Internet) or ngoddard at cpwpsca (Bitnet) or the address below.

How to get PSC grant information and application materials
----------------------------------------------------------

A shortened form of the application materials in printer-ready postscript can be obtained via anonymous ftp from ftp.psc.edu (128.182.62.148). The files are in "pub/grants". The file INDEX describes what is in each of the other files. More detailed descriptions of PSC facilities and services are only available in hardcopy. The basic document is the Facilities and Services Guide. Hardcopy materials can be requested from:

grants at psc.edu
or (412) 268 4960 - ask for the Allocations Coordinator
or
Allocations Coordinator
Pittsburgh Supercomputing Center
4400 Fifth Avenue
Pittsburgh, PA 15213

How to get Aspirin/MIGRAINES
----------------------------

The software is available from two FTP sites, CMU's simulator collection (pt.cs.cmu.edu or 128.2.254.155 in directory /afs/cs/project/connect/code) and from UCLA's cognitive science machines (polaris.cognet.ucla.edu or 128.97.50.3 in directory "alexis"). The compressed tar file is a little less than 2 megabytes and is called "am5.tar.Z".

How to get PlaNet
-----------------

The software is available from FTP site boulder.colorado.edu (128.138.240.1) in directory "pub/generic-sources", filename PlaNet.5.6.tar.Z

How to get NeuralShell
----------------------

The software is available from FTP site quanta.eng.ohio-state.edu (128.146.35.1) in directory "pub/NeuralShell", filename "NeuralShell.tar".

Generic Anonymous FTP instructions
----------------------------------

1. Create an FTP connection to the ftp server.
For example, you would connect to the PSC ftp server "ftp.psc.edu" (128.182.62.148) with the command "ftp ftp.psc.edu" or "ftp 128.182.62.148".
2. Log in as user "anonymous" with password your username at your-site.
3. Change to the requisite directory, usually /pub/somedir, by typing the command "cd pub/somedir".
4. Set binary mode by typing the command "binary". ** THIS IS IMPORTANT **
5. Optionally look around by typing the command "ls".
6. Get the files you want using the command "get filename", or "mget *" if you want to get all the files.
7. Terminate the ftp connection using the command "quit".
8. If the file ends in .Z, uncompress it with the command "uncompress filename.Z" or "zcat filename.Z > filename". This uncompresses the file and removes the .Z from the filename.
9. If the files end in .tar, extract the tar'ed files with the command "tar xvf filename.tar".
10. If a file ends in .ps, you can print it on a PostScript printer using the command "lpr -s -Pprintername filename.ps".

From berenji at ptolemy.arc.nasa.gov Thu May 28 20:48:53 1992
From: berenji at ptolemy.arc.nasa.gov (Hamid Berenji)
Date: Thu, 28 May 92 17:48:53 PDT
Subject: ICNN'93 Call for papers
Message-ID:

1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS
San Francisco, California, March 28 - April 1, 1993

The IEEE Neural Networks Council is pleased to announce its 1993 International Conference on Neural Networks (ICNN'93) to be held in San Francisco, California from March 28 to April 1, 1993. ICNN'93 will be held concurrently with the Second IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'93). Participants will be able to attend the technical events of both meetings. ICNN '93 will be devoted to the discussion of basic advances and applications of neurobiological systems, neural networks, and neural computers. Topics of interest include:

* Neurodynamics
* Associative Memories
* Intelligent Neural Networks
* Invertebrate Neural Networks
* Neural Fuzzy Systems
* Evolutionary Programming
* Optical Neurocomputers
* Supervised Learning
* Unsupervised Learning
* Sensation and Perception
* Genetic Algorithms
* Virtual Reality & Neural Networks
* Applications to:
  - Image Processing and Understanding
  - Optimization
  - Control
  - Robotics and Automation
  - Signal Processing

ORGANIZATION:
General Chair: Enrique H. Ruspini
Program Chairs: Hamid R. Berenji, Elie Sanchez, Shiro Usui

ADVISORY BOARD: S.-i. Amari, J. A. Anderson, J. C. Bezdek, Y. Burnod, L. Cooper, R. C. Eberhart, R. Eckmiller, J. Feldman, M. Feldman, F. Fukushima, R. Hecht-Nielsen, J. Holland, C. Jorgensen, T. Kohonen, C. Lau, C. Mead, N. Packard, D. Rumelhart, B. Skyrms, L. Stark, A. Stubberud, H. Takagi, P. Treleaven, B. Widrow

PROGRAM COMMITTEE: K. Aihara, I. Aleksander, L.B. Almeida, G. Andeen, C. Anderson, J. A. Anderson, A. Andreou, P. Antsaklis, J. Barhen, B. Bavarian, H. R. Berenji, A. Bergman, J. C. Bezdek, H. Bourlard, D. E. Brown, J. Cabestany, D. Casasent, S. Colombano, R. de Figueiredo, M. Dufosse, R. C. Eberhart, R. M. Farber, J. Farrell, J. Feldman, W. Fisher, W. Fontana, A.A. Frolov, T. Fukuda, C. Glover, K. Goser, D. Hammerstrom, M. H. Hassoun, J. Herault, J. Hertz, D. Hislop, A. Iwata, M. Jordan, C. Jorgensen, L. P. Kaelbling, P. Khedkar, S. Kitamura, B. Kosko, J. Koza, C. Lau, C. Lucas, R. J. Marks, J. Mendel, W.T. Miller, M. Mitchell, S. Miyake, A.F. Murray, J.-P. Nadal, T. Nagano, K. S. Narendra, R. Newcomb, E. Oja, N. Packard, A. Pellionisz, P. Peretto, L. Personnaz, A. Prieto, D. Psaltis, H. Rauch, T. Ray, M. B. Reid, E. Sanchez, J. Shavlik, B. Sheu, S. Shinomoto, J. Shynk, P. K. Simpson, N. Sonehara, D. F. Specht,
A. Stubberud, N. Sugie, H. Takagi, S. Usui, D. White, H. White, R. Williams, E. Yodogawa, S. Yoshizawa, S. W. Zucker

ORGANIZING COMMITTEE:
PUBLICITY: H.R. Berenji
EXHIBITS: W. Xu
TUTORIALS: J.C. Bezdek
VIDEO PROCEEDINGS: A. Bergman
FINANCE: R. Tong

SPONSORING SOCIETIES: ICNN'93 is sponsored by the Neural Networks Council. Constituent Societies:
* IEEE Circuits and Systems Society
* IEEE Communications Society
* IEEE Control Systems Society
* IEEE Engineering in Medicine & Biology Society
* IEEE Industrial Electronics Society
* IEEE Industry Applications Society
* IEEE Information Theory Society
* IEEE Lasers and Electro-Optics Society
* IEEE Oceanic Engineering Society
* IEEE Robotics and Automation Society
* IEEE Signal Processing Society
* IEEE Systems, Man, and Cybernetics Society

CALL FOR PAPERS

The program committee cordially invites interested authors to submit papers dealing with any aspect of research and applications related to the use of neural models. Papers must be written in English and must be received by SEPTEMBER 21, 1992. Six copies of the paper must be submitted, and the paper should not exceed 8 pages including figures, tables, and references. Papers should be prepared on 8.5" x 11" white paper with 1" margins on all sides, using a typewriter or letter-quality printer, in one-column format, in Times or similar style, 10 points or larger, and printed on one side of the paper only. Please include the title, author name(s) and affiliation(s) at the top of the first page, followed by an abstract. FAX submissions are not acceptable. Please send submissions prior to the deadline to:

Dr. Hamid Berenji, AI Research Branch, MS 269-2, NASA Ames Research Center, Moffett Field, California 94035

CALL FOR VIDEOS: The IEEE Neural Networks Council is pleased to announce its first Video Proceedings program, intended to present new and significant experimental work in the fields of artificial neural networks and fuzzy systems, so as to enhance and complement results presented in the Conference Proceedings. Interested researchers should submit a 2 to 3 minute video segment (preferred formats: 3/4" Betacam or Super VHS) and a one-page information sheet (including title, author, affiliation, address, a 200-word abstract, 2 to 3 references, and a short acknowledgment, if needed), prior to September 21, 1992, to Meeting Management, 5665 Oberlin Drive, Suite 110, San Diego, CA 92121. We encourage those interested in participating in this program to write to this address for important suggestions to help in the preparation of their submission.

TUTORIALS:
The Computational Brain: Biological Neural Networks - Terrence J. Sejnowski, The Salk Institute
Evolutionary Programming - David Fogel, Orincon Corporation
Expert Systems and Neural Networks - George Lendaris, Portland State University
Genetic Algorithms and Neural Networks - Darrell Whitley, Colorado State University
Introduction to Biological and Artificial Neural Networks - Steven Rogers, Air Force Institute of Technology
Suggestions from Cognitive Science for Neural Network Applications - James A. Anderson, Department of Cognitive and Linguistic Sciences, Brown University

EXHIBITS: ICNN '93 will be held concurrently with the Second IEEE International Conference on Fuzzy Systems (FUZZ-IEEE '93). ICNN '93 and FUZZ-IEEE '93 are the largest conferences and trade shows in their fields. Participants in either conference will be able to attend the combined exhibit program.
We anticipate an extraordinary trade show offering a unique opportunity to become acquainted with the latest developments in products based on neural-network and fuzzy-system techniques. Interested exhibitors are requested to contact the Chairman, Exhibits, ICNN '93 and FUZZ-IEEE '93, Wei Xu, at telephone (408) 428-1888 or FAX (408) 428-1884.

FOR ADDITIONAL INFORMATION, CONTACT: Meeting Management, 5665 Oberlin Drive, Suite 110, San Diego, CA 92121. Tel. (619) 453-6222, FAX (619) 535-3880

-------

From haussler at cse.ucsc.edu Fri May 29 15:37:38 1992
From: haussler at cse.ucsc.edu (David Haussler)
Date: Fri, 29 May 1992 12:37:38 -0700
Subject: COLT `92 conference program
Message-ID: <199205291937.AA28632@arapaho.ucsc.edu>

COLT '92 Workshop on Computational Learning Theory
Sponsored by ACM SIGACT and SIGART
July 27 - 29, 1992
University of Pittsburgh, Pittsburgh, Pennsylvania

GENERAL INFORMATION

Registration & Reception: Sunday, 7:00 - 10:00 pm, 2M56-2P56 Forbes Quadrangle
Conference Banquet: Monday, 7:00 pm
The conference sessions will be held in the William Pitt Union.
Late Registration, etc.: Kurtzman Room (during technical sessions)
Lectures & Impromptu Talks: Ballroom
Poster Sessions: Assembly Room

SCHEDULE OF TALKS

Sunday, July 26
RECEPTION: 7:00 - 10:00 pm

Monday, July 27

SESSION 1: 8:45 - 10:05 am
8:45 - 9:05 Learning boolean read-once formulas with arbitrary symmetric and constant fan-in gates, by Nader H. Bshouty, Thomas Hancock, and Lisa Hellerstein
9:05 - 9:25 On-line Learning of Rectangles, by Zhixiang Chen and Wolfgang Maass
9:25 - 9:45 Cryptographic lower bounds on learnability of AC^1 functions on the uniform distribution, by Michael Kharitonov
9:45 - 9:55 Learning hierarchical rule sets, by Jyrki Kivinen, Heikki Mannila and Esko Ukkonen
9:55 - 10:05 Random DFA's can be approximately learned from sparse uniform examples, by Kevin Lang

SESSION 2: 10:30 - 11:50 am
10:30 - 10:50 An O(n^loglog n) Learning Algorithm for DNF, by Yishay Mansour
10:50 - 11:10 A technique for upper bounding the spectral norm with applications to learning, by Mihir Bellare
11:10 - 11:30 Exact learning of read-k disjoint DNF and not-so-disjoint DNF, by Howard Aizenstein and Leonard Pitt
11:30 - 11:40 Learning k-term DNF formulas with an incomplete membership oracle, by Sally A. Goldman and H. David Mathias
11:40 - 11:50 Learning DNF formulae under classes of probability distributions, by Michele Flammini, Alberto Marchetti-Spaccamela and Ludek Kucera

SESSION 3: 1:45 - 3:05 pm
1:45 - 2:05 Bellman strikes again -- the rate of growth of sample complexity with dimension for the nearest neighbor classifier, by Santosh S. Venkatesh, Robert R. Snapp, and Demetri Psaltis
2:05 - 2:25 A theory for memory-based learning, by Jyh-Han Lin and Jeffrey Scott Vitter
2:25 - 2:45 Learnability of description logics, by William W. Cohen and Haym Hirsh
2:45 - 2:55 PAC-learnability of determinate logic programs, by Sašo Džeroski, Stephen Muggleton and Stuart Russell
2:55 - 3:05 Polynomial time inference of a subclass of context-free transformations, by Hiroki Arimura, Hiroki Ishizaka, and Takeshi Shinohara

SESSION 4: 3:30 - 4:40 pm
3:30 - 3:50 A training algorithm for optimal margin classifiers, by Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik
3:50 - 4:10 The learning complexity of smooth functions of a single variable, by Don Kimber and Philip M. Long
4:10 - 4:20 Absolute error bounds for learning linear functions online, by Ethan Bernstein
4:20 - 4:30 Probably almost discriminative learning, by Kenji Yamanishi
4:30 - 4:40 PAC Learning with generalized samples and an application to stochastic geometry, by S.R. Kulkarni, S.K. Mitter, J.N. Tsitsiklis and O. Zeitouni

POSTER SESSION #1 & IMPROMPTU TALKS: 5:00 - 6:30 pm
BANQUET: 7:00 pm

Tuesday, July 28

SESSION 5: 8:45 - 10:05 am
8:45 - 9:05 Degrees of inferability, by P. Cholak, R. Downey, L. Fortnow, W. Gasarch, E. Kinber, M. Kummer, S. Kurtz, and T. Slaman
9:05 - 9:25 On learning limiting programs, by John Case, Sanjay Jain, and Arun Sharma
9:25 - 9:45 Breaking the probability 1/2 barrier in FIN-type learning, by Robert Daley, Bala Kalyanasundaram, and Mahendran Velauthapillai
9:45 - 9:55 Case based learning in inductive inference, by Klaus P. Jantke
9:55 - 10:05 Generalization versus classification, by Rolf Wiehagen and Carl Smith

SESSION 6: 10:30 - 11:50 am
10:30 - 10:50 Learning switching concepts, by Avrim Blum and Prasad Chalasani
10:50 - 11:10 Learning with a slowly changing distribution, by Peter L. Bartlett
11:10 - 11:30 Dominating distributions and learnability, by Gyora M. Benedek and Alon Itai
11:30 - 11:40 Polynomial uniform convergence and polynomial-sample learnability, by Alberto Bertoni, Paola Campadelli, Anna Morpurgo, and Sandra Panizza
11:40 - 11:50 Learning functions by simultaneously estimating errors, by Kevin Buescher and P.R. Kumar

INVITED TALK: 1:45 - 2:45 pm: Reinforcement learning, by Andy Barto, University of Massachusetts

SESSION 7: 3:10 - 4:40 pm
3:10 - 3:30 On learning noisy threshold functions with finite precision weights, by R. Meir and J.F. Fontanari
3:30 - 3:50 Query by committee, by H.S. Seung, M. Opper, H. Sompolinsky
3:50 - 4:00 A noise model on learning sets of strings, by Yasubumi Sakakibara and Rani Siromoney
4:00 - 4:10 Language learning from stochastic input, by Shyam Kapur and Gianfranco Bilardi
4:10 - 4:20 On exact specification by examples, by Martin Anthony, Graham Brightwell, Dave Cohen and John Shawe-Taylor
4:20 - 4:30 A computational model of teaching, by Jeffrey Jackson and Andrew Tomkins
4:30 - 4:40 Approximate testing and learnability, by Kathleen Romanik

IMPROMPTU TALKS: 5:00 - 6:00 pm
BUSINESS MEETING: 8:00 pm
POSTER SESSION #2: 9:00 - 10:30 pm

Wednesday, July 29

SESSION 8: 8:45 - 9:45 am
8:45 - 9:05 Characterizations of learnability for classes of 0,...,n-valued functions, by Shai Ben-David, Nicolò Cesa-Bianchi and Philip M. Long
9:05 - 9:25 Toward efficient agnostic learning, by Michael J. Kearns, Robert E. Schapire, and Linda Sellie
9:25 - 9:45 Approximating Bayes decisions by additive estimations, by Svetlana Anoulova, Paul Fischer, Stefan Polt, and Hans Ulrich Simon

SESSION 9: 10:10 - 10:50 am
10:10 - 10:30 On the role of procrastination for machine learning, by Rusins Freivalds and Carl Smith
10:30 - 10:50 Types of monotonic language learning and their characterization, by Steffen Lange and Thomas Zeugmann

SESSION 10: 11:10 - 11:50 am
11:10 - 11:30 An improved boosting algorithm and its implications on learning complexity, by Yoav Freund
11:30 - 11:50 Some weak learning results, by David P. Helmbold and Manfred K. Warmuth

SESSION 11: 1:45 - 2:45 pm
1:45 - 2:05 Universal sequential learning and decision from individual data sequences, by Neri Merhav and Meir Feder
2:05 - 2:25 Robust trainability of single neurons, by Klaus-U. Hoffgen and Hans-U. Simon
2:25 - 2:45 On the computational power of neural nets, by Hava T. Siegelmann and Eduardo D. Sontag
===============================================================================

ADDITIONAL INFORMATION

To receive complete information regarding conference registration and accommodations contact Betty Brannick: E-mail: brannick at cs.pitt.edu, PHONE: (412) 624-8493, FAX: (412) 624-8854. Please specify whether you want the information sent in PLAIN text or LATEX format.

NOTE: Attendees must register BY JUNE 19 TO AVOID THE LATE REGISTRATION FEE.

From rich at gte.com Thu May 28 13:03:55 1992
From: rich at gte.com (Rich Sutton)
Date: Thu, 28 May 92 13:03:55 -0400
Subject: Reinforcement Learning Special Issue of Machine Learning
Message-ID: <9205281703.AA06399@bunny.gte.com>

Those of you interested in reinforcement learning may want to get a copy of the special issue on this topic of the journal Machine Learning. It just appeared this week. Here's the table of contents:

Vol. 8, No. 3/4 of MACHINE LEARNING (May, 1992)

Introduction: The Challenge of Reinforcement Learning ----- Richard S. Sutton (Guest Editor)
Q-Learning ----- Christopher J. C. H. Watkins and Peter Dayan
Practical Issues in Temporal Difference Learning ----- Gerald Tesauro
Transfer of Learning by Composing Solutions for Elemental Sequential Tasks ----- Satinder Pal Singh
Simple Gradient-Estimating Algorithms for Connectionist Reinforcement Learning ----- Ronald J. Williams
Temporal Differences: TD(lambda) for general Lambda ----- Peter Dayan
Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching ----- Long-ji Lin
A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-Like Environments ----- Jose del R. Millan and Carme Torras

Copies can be ordered from:

  North America:                      Outside North America:
  Kluwer Academic Publishers          Kluwer Academic Publishers
  Order Department                    Order Department
  P.O. Box 358, Accord Station        P.O. Box 322
  Hingham, MA 02018-0358              3300 AH Dordrecht
  tel. 617-871-6600                   The Netherlands
  fax. 617-871-6528
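For readers new to the area, the core of the issue's first paper fits in a few lines. Here is a minimal tabular Q-learning sketch; the five-state chain world, the learning constants, and all names are invented for illustration and this is not code from the journal:

    import random

    # Tabular Q-learning on an invented 5-state chain: move left/right,
    # reward 1 at the right end. Illustrates the Watkins-style update rule.
    N_STATES, ACTIONS = 5, (-1, +1)
    alpha, gamma, epsilon = 0.5, 0.9, 0.1
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(s, a):
        s2 = min(max(s + a, 0), N_STATES - 1)
        return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection
            a = (random.choice(ACTIONS) if random.random() < epsilon
                 else max(ACTIONS, key=lambda a: Q[(s, a)]))
            s2, r = step(s, a)
            target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])   # the Q-learning update
            s = s2

    print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # learned action at start: +1

The single line marked as the Q-learning update is the whole of the rule; everything else is scaffolding for the toy environment.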
From mjolsness-eric at CS.YALE.EDU Fri May 29 12:51:56 1992
From: mjolsness-eric at CS.YALE.EDU (Eric Mjolsness)
Date: Fri, 29 May 92 12:51:56 EDT
Subject: neural programmer/analyst job opening
Message-ID: <199205291651.AA07885@EXPONENTIAL.SYSTEMSZ.CS.YALE.EDU>

Programmer/Analyst Position in Artificial Neural Networks

The Yale Center for Theoretical and Applied Neuroscience (CTAN) and the Computer Science Department (Yale University, New Haven CT) are offering a challenging position in software engineering in support of new mathematical approaches to artificial neural networks, as described below. (The official job description is very close to this and is posted at Human Resources.)

1. Basic Function: Designer, programmer, and consultant for artificial neural network software at Yale's Center for Theoretical and Applied Neuroscience (CTAN) and the Computer Science Department.

2. Major duties:
(a) To extend and implement a design for a new neural net compiler and simulator based on mathematical optimization and computer algebra, using serial and parallel computers.
(b) To run and analyse computer experiments using this and other software tools in a variety of application projects, including image processing and computer vision.
(c) To support other work in artificial neural networks at CTAN, including the preparation of software demonstrations.

3. Position Specifications:
(a) Education: BA, including calculus, linear algebra, differential equations. Helpful: mathematical optimization.
(b) Experience: programming experience in C under UNIX. Also helpful: some of the following: C++ or other object-oriented language, symbolic programming, parallel programming, scientific computing, workstation graphics, circuit simulation, neural nets, UNIX system administration.
(c) Skills: high-level programming languages; medium to large-scale software engineering; good mathematical literacy.

Preferred starting date: July 1, 1992.

For information or to submit an application (your resume and any supporting materials), please write:

Eric Mjolsness
Yale Computer Science Department
P.O. Box 2158 Yale Station
New Haven CT 06520
(mjolsness at cs.yale.edu)

Any application must also be sent to Human Resources, with the position identification "C 7-20073", at this address:

Jeffrey Drexler
Department of Human Resources
Yale University
155 Whitney Ave.
New Haven, CT 06520

-------

From ahg at eng.cam.ac.uk Sat May 30 11:09:39 1992
From: ahg at eng.cam.ac.uk (A. H. Gee)
Date: Sat, 30 May 92 11:09:39 BST
Subject: Paper in neuroprose
Message-ID: <1457.9205301009@dsl.eng.cam.ac.uk>

************** PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS ****************

The following technical report has been placed in the neuroprose archives at Ohio State University:

POLYHEDRAL COMBINATORICS AND NEURAL NETWORKS

Andrew Gee and Richard Prager
Technical Report CUED/F-INFENG/TR 100
Cambridge University Engineering Department
Trumpington Street
Cambridge CB2 1PZ
England

Abstract

The often disappointing performance of optimizing neural networks can be partly attributed to the rather ad hoc manner in which problems are mapped onto them for solution. In this paper a rigorous mapping is described for quadratic 0-1 programming problems with linear equality and inequality constraints, this being the most general class of problem such networks can solve. The problem's constraints define a polyhedron P containing all the valid solution points, and the mapping guarantees strict confinement of the network's state vector to P. However, forcing convergence to a 0-1 point within P is shown to be generally intractable, rendering the Hopfield and similar models inapplicable to the vast majority of problems. Some alternative dynamics, based on the principle of tabu search, are presented as a more coherent approach to general problem solving. When tested on a large database of solved problems, the alternative dynamics produced some very encouraging results.

************************ How to obtain a copy ************************

a) Via FTP:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (type your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get gee.poly.ps.Z
ftp> quit
unix> uncompress gee.poly.ps.Z
unix> lpr gee.poly.ps (or however you print PostScript)

Please note that a couple of the figures in the paper were produced on an Apple Mac, and the resulting PostScript is not quite standard. People using an Apple LaserWriter should have no problems though.

b) Via postal mail:

Request a hardcopy from Andrew Gee, Speech Laboratory, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, England,
or email me: ahg at eng.cam.ac.uk

From D.M.Peterson at computer-science.birmingham.ac.uk Sat May 30 18:19:35 1992
From: D.M.Peterson at computer-science.birmingham.ac.uk (Donald Peterson)
Date: Sat, 30 May 92 18:19:35 BST
Subject: Philosophy and the Cognitive Sciences
Message-ID: <682.9205301719@christopher-robin.cs.bham.ac.uk>

________________________________________________________________________

CONFERENCE REGISTRATION INFORMATION

Royal Institute of Philosophy
PHILOSOPHY and the COGNITIVE SCIENCES
The University of Birmingham
September 11-14 1992
________________________________________________________________________

The conference will address a variety of questions concerning the foundations of cognitive science and the philosophical importance of the models of mind which it produces. Topics will include: connectionism and classical AI, rules and representation, reasoning, concepts, rationality, multiple personality, the mind as a control system, blindsight, etc.

Speakers will include: Stephen Stich, Terry Horgan, Michael Tye, Margaret Boden, Aaron Sloman, Brian McLaughlin, Andrew Woodfield, Martin Davies, Antony Galton, Stephen Mills, Niels Ole Bernsen.

Papers given at the conference will be published in a volume produced by the Cambridge University Press, as a supplement to the journal _Philosophy_. Copies of this document, together with titles as they become available, can be obtained by emailing the auto-reply service: rip92 at bham.ac.uk

REGISTRATION

To attend the conference, please fill in the form below and return it by post (not email) together with payment by cheque in pounds sterling to: Royal Institute of Philosophy Conference, Department of Philosophy, The University of Birmingham, Birmingham, B15 2TT, U.K. Delegates will be considered registered when cheques have been cleared.

The total for Registration, Bed and Breakfast and all meals is 122.50 pounds. For registered students and the unwaged, the Registration Fee will be waived if evidence of status is sent. For a limited number of postgraduate students, all other charges will be at half-price: if you wish to apply for this reduction, please write to the organisers indicating your research topic and reason for attending the conference. For bookings received after 7th August we cannot guarantee accommodation, and for bookings received after 10th August an additional charge of 10 pounds has to be made.

Organisers: Chris Hookway (Philosophy), Donald Peterson (Cognitive Science). 30 May 1992.
______________________________________________________________________

REGISTRATION FORM

Royal Institute of Philosophy
PHILOSOPHY and the COGNITIVE SCIENCES
The University of Birmingham
September 11-14 1992
______________________________________________________________________

REQUIREMENTS
Registration Fee               25.00  _______
Late Registration Fee          10.00  _______
Bed and Breakfast              64.00  _______
Dinner 11 September             7.50  _______
Lunch 12 September              5.50  _______
Dinner 12 September             7.50  _______
Lunch 13 September              5.50  _______
Dinner 13 September             7.50  _______
All Meals                      33.50  _______
Total                                 _______
Vegetarian meals (please tick)        _______

PERSONAL DETAILS
Name          ___________________________________________
Address       ___________________________________________
              ___________________________________________
              ___________________________________________
              ___________________________________________
Telephone No. ___________________________________________
Fax No.       ___________________________________________
Email         ___________________________________________

REGISTRATION
I wish to register for the Royal Institute of Philosophy Conference Philosophy and the Cognitive Sciences, and enclose a cheque in pounds sterling payable to C.J. Hookway (Royal Institute of Philosophy Conference) for ________

Signed ___________________________________________

From belew%FRULM63.BITNET at pucc.Princeton.EDU Fri May 15 08:55:04 1992
From: belew%FRULM63.BITNET at pucc.Princeton.EDU (Rick BELEW)
Date: Fri, 15 May 92 14:55:04 +0200
Subject: Paradigmatic over-fitting
Message-ID:

There has been a great deal of discussion in the GA community concerning the use of particular functions as a "test suite" against which different methods (e.g., of performing cross-over) might be compared. The GA is perhaps particularly well-suited to this mode of analysis, given the way arbitrary "evaluation" functions are neatly cleaved from characteristics of the algorithm itself. We have argued before that dichotomizing the GA/evaluation function relationship in this fashion is inappropriate [W. Hart & R. K. Belew, ICGA'91]. This note, however, is intended to focus on the general use of test sets, in any fashion.

Ken DeJong set an ambitious precedent with his thesis [K. DeJong, U. Michigan, 1975]. As part of a careful empirical investigation of several major dimensions of the GA, DeJong identified a set of five functions that seemed to him (at that time) to provide a wide variety of test conditions affecting the algorithm's performance. Since then, the resulting "De Jong functions F1-F5" have assumed almost mythic status, and continue to be one of the primary ways in which new GA techniques are evaluated.

Within the last several years, a number of researchers have re-evaluated the DeJong test suite and found it wanting in one respect or another. Some have felt that it does not provide a real test for cross-over, some that the set does not accurately characterize the general space of evaluation functions, and some that naturally occurring problems provide a much better evaluation than anything synthesized on theoretical grounds. There is merit to each of these criticisms, and all of this discussion has furthered our understanding of the GA. But it seems to me that somewhere along the line the original DeJong suite became vilified. It isn't just that "hind-sight is always 20-20."
I want to argue that DeJong's functions WERE excellent, so good that they now ARE a victim of their own success. My argument goes beyond any particular features of these functions or the GA, and therefore won't make historical references beyond those just sketched. \footnote{If I'm right though, it would be an interesting exercise in history of science to confirm it, with a careful analysis of just which test functions were used, when, by whom, with citation counting, etc.} It will rely instead on some fundamental facts from machine learning.

Paradigmatic Over-fitting

"Over-fitting" is a widely recognized phenomenon in machine learning (and before that, statistics). It refers to a tendency by learning algorithms to force the rule induced from a training corpus to agree with this data set too closely, at the expense of generalization to other instances. We have all probably seen the example of the same data set fit with two polynomials, one that is correct and a second, higher-order one that also attempts to fit the data's noise. A more recent example is provided by some neural networks, which generalize much better to unseen data if their training is stopped a bit early, even though further epochs of training would continue to reduce the observed error on the training set.

I suggest entire scientific disciplines can suffer a similar fate. Many groups of scientists have found it useful to identify a particular data set, test suite or "model animal" (i.e., particular species or even genetic strains that become {\em de rigueur} for certain groups of biologists). In fact, collective agreement as to the validity and utility of scientific artifacts like this is critically involved in defining the "paradigms" (a la Kuhn) in which scientists work. Scientifically, there are obvious benefits to coordinated use of common test sets. For example, a wide variety of techniques can be applied to common data and the results of these various experiments can be compared directly. But if science is also seen as an inductive process, over-fitting suggests there may also be dangers inherent in this practice.

Initially, standardized test sets are almost certain to help any field evaluate alternative methods; suppose they show that technique A1 is superior to B1 and C1. But as soon as the results of these experiments are used to guide the development ("skew the sampling") of new methods (to A2, A3 and A4, for example), our confidence in the results of this second set of experiments as accurate reflections of what will be found generally true must diminish. Over time, then, the same data set that initially served the field well can come to actually impede progress by creating a false characterization of the real problems to be solved. The problem is that the time-scale of scientific induction is so much slower than that of our computational methods that the biases resulting from "paradigmatic over-fitting" may be very difficult to recognize.

Machine learning also offers some remedies to the dilemma of over-training. The general idea is to use more than one data set for training, or more accurately, to partition available training data into subsets. Then, portions of the training set can be methodically held back in order to compare the result induced from one subset with that induced from another (via cross-validation, jack-knifing, etc.). How might this procedure be applied to science? It would be somewhat artificial to purposefully identify but then hold back some data sets, perhaps for years.
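For reference, the machine-learning version of this remedy is routine. Here is a minimal k-fold cross-validation sketch; the data and the trivial "model" (predict the training-fold mean) are invented, since the point is only the held-back-subset protocol:

    import random

    # Invented noisy linear data; shuffle so folds are comparable.
    random.seed(0)
    data = [(x, 2.0 * x + random.gauss(0, 0.5))
            for x in [i / 20 for i in range(100)]]
    random.shuffle(data)

    k = 5
    folds = [data[i::k] for i in range(k)]   # k disjoint held-back subsets
    errors = []
    for i in range(k):
        test = folds[i]
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        mean_y = sum(y for _, y in train) / len(train)   # "train" the model
        mse = sum((y - mean_y) ** 2 for _, y in test) / len(test)
        errors.append(mse)

    print(sum(errors) / k)   # average error on data the model never saw

Each fold plays the role of a data set the field agreed not to touch while methods were being developed against the others.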
More natural strategies with about the same effect seem workable, however. First, a field should maintain MULTIPLE data sets, to minimize aberrations due to any one. Second, each of these can only be USED FOR A LIMITED TIME, to be replaced by new ones. The problem is that even these modest conventions require significant "discipline discipline." Accomplishing any coordination across independent-minded scientists is difficult, and the use of shared data sets is a fairly effortless way to accomplish useful coordination. Data sets are difficult to obtain in the first place, and convincing others to become familiar with them even harder; these become inertial forces that will make scientists reluctant to part with the classic data sets they know well. Evaluating results across multiple data sets also makes new problems for reviewers and editors. And, because the time-scale of scientific induction is so long relative to the careers of the scientists involved, the costs associated with all these concrete problems, relative to theoretical ones due to paradigmatic over-fitting, will likely seem huge: "Why should I give up familiar data sets when we previously agreed to their validity, especially since my methods seem to be working better and better on them?!"

Back to the GA

The GA community, at present, seems fairly healthy according to this analysis. In addition to De Jong's, people like Dave Ackley have generated very useful sets of test functions. There are now test suites that have many desirable properties, like being "GA-hard," "GA-deceptive," "royal road," "practical" and "real-world." So there are clearly plenty of tests. For this group, my main point is that this plurality is very desirable. I too am dissatisfied with De Jong's test suite, but I am equally dissatisfied with any ONE of the more recently proposed alternatives. I suggest it's time we move beyond debates about whose tests are most illuminating. If we ever did pick just one set to use for testing GAs it would --- like De Jong's --- soon come to warp the development of GAs according to ITS inevitable biases. What we need are more sophisticated analyses and methodologies that allow a wide variety of testing procedures, each showing something different.

Flame off,
Rik Belew

[I owe the basic insight --- that an entire discipline can be seen to over-fit to a limited training corpus --- to conversations with Richard Palmer and Rich Sutton, at the Santa Fe Institute in March, 1992. Of course, all blame for damage occurring as the neat, little insight was stretched into this epistle concerning GA research is mine alone.]

Richard K. Belew
Computer Science & Engr. Dept (0014)
Univ. California - San Diego
La Jolla, CA 92093
rik at cs.ucsd.edu

From now until about 20 July I will be working in Paris:
c/o J-A. Meyer
Groupe de BioInformatique
URA686, Ecole Normale Superieure
46 rue d'Ulm
75230 PARIS Cedex05
France
Tel: 44 32 36 23
Fax: 44 32 39 01
belew at wotan.ens.fr
-------------------------------------------------------------------------------
How to FTP Files from the Neuroprose Archive
--------------------------------------------

One author reported that when they allowed both automatic forwarding of their announcement to other groups and an offer of free hard copies, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! Experience dictates the preferred paradigm is to announce an FTP-only version with a prominent "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your announcement to the connectionist mailing list.

Current naming convention is author.title.filetype[.Z] where title is enough to discriminate among the files of the same author.
The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. Very large files (e.g. over 200k) must be squashed (with either a sigmoid function :) or the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is attached as an appendix, and a shell script called Getps in the directory can perform the necessary retrieval operations.

For further questions contact:

Jordan Pollack
Assistant Professor
CIS Dept/OSU, Laboratory for AI Research
2036 Neil Ave, Columbus, OH 43210
Email: pollack at cis.ohio-state.edu
Phone: (614) 292-4890

Here is an example of naming and placing a file:

gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps
gvax> compress rosenblatt.reborn.ps
gvax> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put rosenblatt.reborn.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z
226 Transfer complete.
100000 bytes sent in 3.14159 seconds
ftp> quit
221 Goodbye.
gvax> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.
Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. The INDEX sentence is "Boastful statements by the deceased leader of the neurocomputing field." Please let me know when it is ready to announce to Connectionists at cmu. BTW, I enjoyed reading your review of the new edition of Perceptrons!
Frank

------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "nn-bench-request at cs.cmu.edu".

From mozer at dendrite.cs.colorado.edu Sun May 3 13:50:02 1992
From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer)
Date: Sun, 3 May 1992 11:50:02 -0600
Subject: 1993 Connectionist Summer School preliminary announcement
Message-ID: <199205031750.AA04386@neuron.cs.colorado.edu>

PRELIMINARY ANNOUNCEMENT

CONNECTIONIST MODELS SUMMER SCHOOL
JUNE 1993
University of Colorado
Boulder, CO

The next Connectionist Models Summer School will be held at the University of Colorado, Boulder in the summer of 1993, tentatively June 23-July 6. This will be the fourth session in the series that was held at Carnegie-Mellon in 1986 and 1988, and at UCSD in 1990. The Summer School will offer guest lectures and workshops in a variety of areas of connectionism, with emphasis on theoretical foundations, computational neuroscience, cognitive science, and hardware implementation. Proceedings of the Summer School will be published the following fall.
As in the past, participation will be limited to graduate students enrolled in PhD programs (full or part time). We hope to have sufficient funding to subsidize tuition and housing. This is a preliminary announcement. Information about how to apply will be posted over connectionists. As a point of contact, address electronic correspondence to "cmss at cs.colorado.edu".

Jeff Elman, University of California, San Diego
Mike Mozer, University of Colorado, Boulder
Paul Smolensky, University of Colorado, Boulder
Dave Touretzky, Carnegie-Mellon University
Andreas Weigend, Xerox PARC and University of Colorado, Boulder

From jfeldman at ICSI.Berkeley.EDU Sat May 2 14:29:57 1992
From: jfeldman at ICSI.Berkeley.EDU (Jerry Feldman)
Date: Sat, 2 May 92 11:29:57 PDT
Subject: choices
Message-ID: <9205021829.AA18784@icsib8.ICSI.Berkeley.EDU>

The U.S. NIST has just made available three databases: fingerprints, tax forms and handwritten segmented characters. The expected applications of results in these three areas are quite different, and anyone choosing to work on the first task should, in my opinion, seriously consider this.

Jerry Feldman

From mpadgett at eng.auburn.edu Mon May 4 11:44:46 1992
From: mpadgett at eng.auburn.edu (Mary Lou Padgett)
Date: Mon, 4 May 92 10:44:46 CDT
Subject: SimTec,WNN92/Houston,FNN Call
Message-ID: <9205041544.AA15932@eng.auburn.edu>

C A L L  F O R  P A P E R S

* SimTec 92 * WNN92/Houston *

1992 INTERNATIONAL SIMULATION TECHNOLOGY CONFERENCE (SimTec92)
WNN92/Houston: A Neural Networks Conference held with SimTec
NETS Users Group Meeting: NASA/JSC Neural Network Simulation
FNN Symposium: Fuzzy Logic, Neural Networks Overview & Applications - Lotfi Zadeh, UC Berkeley

NOVEMBER 4-7, 1992
SOUTH SHORE HARBOUR / JOHNSON SPACE CENTER
CLEAR LAKE, TEXAS (NEAR HOUSTON AND GALVESTON)

SimTec Sponsor: [SCS] The Society for Computer Simulation, Int.; Co-sponsor: NASA/JSC; Cooperating: SPIE
WNN Additional Sponsors: Cooperating: INNS; Participating: IEEE NNC

Tracks: Computer and Simulation Technology; Aerospace; Life & Physical Sciences; Intelligent Systems; Panels, Exhibits, Standards
Paper Contest: Academic, Industrial, Government
Tour: NASA/JSC Simulation Facilities
Keynote Speaker: Story Musgrave, M.D., NASA Astronaut

========================================================================
WNN92/Houston: Neural Networks
Paper Contest, Performance Measure Methodology Contest, Software Exchange, Panels, Demonstrations, Exhibits
NETS USERS GROUP MEETING
Sponsor: [SCS]; Co-sponsor: NASA/JSC; Cooperating: SPIE and INNS; Participating: IEEE-NNC
========================================================================
FNN Symposium: Overview & Applications
FUZZY LOGIC: Lotfi Zadeh; NEURAL NETWORKS: Mary Lou Padgett
Sat., Nov. 7, 1992: Supplemental fee covers tutorial handouts and NETS executable and examples
Sponsor: SCS; Co-sponsor: NASA/JSC
========================================================================

SimTec 92 Topics of Interest include, but are not limited to:

Computer and Simulation Technology
* Automatic Control Systems - C. F. Chen, Boston U (617) 353-2567
* Education for Simulation Professionals - Troy Henson, IBM (713) 282-7476
* Electronics/VLSI - Joseph R. Cavallaro, Rice U (713) 527-8101 x3589
* High Performance Computing / Computers - Mohammad Obaidat, U Missouri (816) 235-1276
* Mathematical Modeling - Richard Greechie, Louisiana Tech U (318) 257-2538
* Massively Parallel & Distributed Systems, Transputers, Languages - Enrique Kortwright, Nichols St U (504) 448-4406; Stephen Seidman, Auburn University (205) 844-4330
* Software Modeling, Reliability & Quality Assurance - Norm Schneidewind, Naval Postgrad. Sch. (408) 646-2719; John Munson, U West Florida (904) 474-2989
* Virtual Environments - John Murphy, Westinghouse (412) 256-2693
* Multimedia Just-in-Time Training
* Process Simulation & Process Control Standards
* Simulation in ADA

Aerospace
* Pilot-in-the-Loop Flight Simulation - Richard Cox, General Dynamics (817) 777-3744
* Real-Time Simulation and Engineering Simulators - Pat Brown, The MITRE Corporation (713) 333-0926
* Robotics & Control - Lloyd Wihl, CAE Electronics (514) 341-6780
* Satellite Simulators - Juan Miro, European Space Agency (+49) 6151 902717
* Space Avionics / Display and Control Systems - Rita Schindeler, Lockheed Engr. & Sci. (713) 333-7091
* Astronaut Training
* Guidance, Navigation and Control (GN&C)

Life & Physical Sciences
* Biomedical Modeling & Simulation - John Clark, Rice University (713) 527-8101 x3597; Lou Sheppard, UT Medical Branch (409) 772-3088
* Geophysical Modeling & Simulation - David Norton, Houston Area Res. Ctr. (713) 363-7944
* High Energy Physics / SSC Modeling & Simulation - David Adams, Rice University (713) 285-5316
* Computational Biology
* Petrochemical Modeling & Simulation [PB]

Intelligent Systems
* Automation & Robotics - Ian Walker, Rice University (713) 527-8101 x2359
* Expert Systems / KBS - David Hamilton, IBM Corporation (713) 282-8357
* Fuzzy Logic Applications in Space - Robert Lea, NASA/JSC (713) 483-8105
* Fuzzy Logic Applications - Joe Mica, NASA/Goddard (301) 286-1343
* Intelligent Computer Aided Training (ICAT) - Bowen Loftin, U of Houston & NASA/JSC (713) 483-8070
* Intelligent Computer Environments - Michele Izygon, NASA/JSC (713) 483-8110
* Virtual Reality - Lui Wang, NASA/JSC (713) 483-8074
* Genetic Algorithms
* Object Oriented Programming
* Simulation & AI

SimTec ORGANIZING COMMITTEE:
General Chair: Tony Sava, IBM
Associate General Chair: Roberta Kirkham, Lockheed
Program Chair: Troy Henson, IBM
Technical Editor & SCS Representative: Mary Lou Padgett, Auburn U.
Local Arrangements & Associate Technical Editor: Ankur Hajare, MITRE
Exhibits Chair: Wade Webster, Lockheed
NASA Representative: Robert Savely, NASA/JSC
Juan Miro, ESA; Joe Mica, NASA/Goddard

========================================================================
WNN92/Houston: Neural Networks
* Advances / Applications / Architectures / Hybrid Systems - Mary Lou Padgett, Auburn U (205) 821-2472/3488
* Controls / Neurocontrols / Fuzzy-Neurocontrols - Robert Shelton, NASA/JSC (713) 483-5901
* Computer Vision / Coupled Oscillation - Steve Anderson, KAFB (505) 256-7799
* Electronics/VLSI INNS-SIG - Ralph Castain, Los Alamos National Labs (505) 667-3283
* Neural Networks and Fuzzy Logic - Robert Shelton, NASA/JSC (713) 483-5901
* Signal Processing and Analysis / Pattern Recognition - Michael Hinman, Rome Labs (315) 330-3175
* Sensor Fusion - Robert Pap, Accurate Automation (615) 622-4642

NETS USERS GROUP MEETING - Robert Shelton, NASA/JSC (713) 483-5901

WNN Program: Mary Lou Padgett, Auburn U.; Robert Shelton, NASA/JSC;
Walter J. Karplus, UCLA; Bart Kosko, USC; Paul Werbos, NSF

DEADLINES:
June 15: Abstracts and/or Draft Papers for Review
August 1 (firm): Camera-Ready Full Papers

SUBMIT TO: Mary Lou Padgett, Auburn U., 1165 Owens Road, Auburn, AL 36830. (205) 821-2472/3488; Fax: (619) 277-3930; email: mpadgett at eng.auburn.edu

========================================================================
SimTec 92 - 1992 International Simulation Technology Conference
WNN92/Houston & FNN Symposium
November 4-7, 1992
South Shore Harbour / Johnson Space Center
Clear Lake, Texas (near Houston and Galveston)

If you wish to receive further information about SimTec, WNN and FNN, please return (preferably by email) the form printed below:

NAME:
AFFILIATION:
ADDRESS:
PHONE:
FAX:
EMAIL:

Please send more information on registration ( ) optional tours ( ).
I intend to submit a paper ( ), a tutorial ( ), an abstract only ( ).
I may give a demonstration ( ) or exhibit ( ).

Return to: Mary Lou Padgett, 1165 Owens Rd., Auburn, AL 36830. email: mpadgett at eng.auburn.edu

LOCAL ATTRACTIONS: Sailing, Tennis, Swimming, Golf, Evening Dinner Cruise, Day Tour of Galveston Island, Space Center Houston

EXHIBITOR INFORMATION: Wade Webster, P: (713) 282-6589, FAX: (713) 282-6423

======= SCS OFFICE: (619) 277-3888 =======
SimTec 92 / WNN92/Houston
Society for Computer Simulation, International
P.O. Box 17900, San Diego, CA 92177

From pratt at cs.rutgers.edu Mon May 4 18:01:05 1992
From: pratt at cs.rutgers.edu (pratt@cs.rutgers.edu)
Date: Mon, 4 May 92 18:01:05 EDT
Subject: Announcing the availability of a hyperplane animator
Message-ID: <9205042201.AA02052@klein.rutgers.edu>

-----------------------------------
Announcing the availability of an X-based neural network hyperplane animator
-----------------------------------

Lori Pratt and Paul Hoeper
Computer Science Dept, Rutgers University

Understanding neural network behavior is an important goal of many research efforts. Although several projects have sought to translate neural network weights into symbolic representations, an alternative approach is to understand trained networks graphically. Many researchers have used a display of hyperplanes defined by the weights in a single layer of a back-propagation neural network. In contrast to some network visualization schemes, this approach shows both the training data and the network parameters that attempt to fit those data.

At NIPS 1990, Paul Munro presented a video which demonstrated the dynamics of hyperplanes as a network changes during learning. This video was based on a program implemented for SGI workstations. At NIPS 1991, we presented an X-based hyperplane animator, similar in appearance to Paul Munro's, but with extensions to allow for interaction during training. The user may speed up, slow down, or freeze animation, and set various other parameters. Also, since it runs under X, this program should be more generally usable. This program is now being made available to the public domain. The remainder of this message contains more details of the hyperplane animator and ftp information.

------------------------------------------------------------------------------

1. What is the Hyperplane Animator?

The Hyperplane Animator is a program that allows easy graphical display of Back-Propagation training data and weights in a Back-Propagation neural network. Back-Propagation neural networks consist of processing nodes interconnected by adjustable, or "weighted", connections. Neural network learning consists of adjusting weights in response to a set of training data. The weights w1,w2,...wn on the connections into any one node can be viewed as the coefficients in the equation of an (n-1)-dimensional plane. Each non-input node in the neural net is thus associated with its own plane. These hyperplanes are graphically portrayed by the hyperplane animator. On the same graph it also shows the training data.
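To make the weights-as-hyperplane reading concrete: a two-input unit with weights w1, w2 and bias b has its decision boundary where w1*x1 + w2*x2 + b = 0, and points on that line are exactly what such a display draws. A tiny sketch (invented numbers; not part of the animator's code):

    # The weights into one hidden unit define a line (a hyperplane in 2-D):
    # w1*x1 + w2*x2 + bias = 0. Invented example values follow.
    w1, w2, bias = 1.5, -2.0, 0.25

    def boundary_x2(x1):
        # Solve w1*x1 + w2*x2 + bias = 0 for x2 (assumes w2 != 0).
        return -(w1 * x1 + bias) / w2

    for x1 in (0.0, 1.0):
        print(f"x1={x1:.1f} -> boundary at x2={boundary_x2(x1):.3f}")

As training changes w1, w2 and bias, the computed line moves, which is the motion the animator displays against the fixed training points.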
2. Why use it?

As learning progresses and the weights in a neural net alter, hyperplane positions move. At the end of the training they are in positions that roughly divide training data into partitions, each of which contains only one class of data. Observations of hyperplane movement can yield valuable insights into neural network learning.

3. How to install the Animator.

Although we've successfully compiled and run the hyperplane animator on several platforms, it is still not a stable program. It also only implements some of the functionality that we eventually hope to include. In particular, it only animates hyperplanes representing input-to-hidden weights. It does, however, allow the user to change some aspects of hyperplane display (color, line width, aspects of point labels, speed of movement, etc.), and allows the user to freeze hyperplane movement for examination at any point during training.

How to install the hyperplane animator:

1. Copy the file animator.tar.Z to your machine via ftp as follows:
   ftp cs.rutgers.edu (128.6.25.2)
   Name: anonymous
   Password: (your ID)
   ftp> cd pub/hyperplane.animator
   ftp> binary
   ftp> get animator.tar.Z
   ftp> quit
2. Uncompress animator.tar.Z
3. Extract files from animator.tar with: tar -xvf animator.tar
4. Read the README file there. It includes instructions for running a number of demonstration networks that are included with this distribution.

DISCLAIMER: This software is distributed as shareware, and comes with no warranties whatsoever for the software itself or systems that include it. The authors deny responsibility for errors, misstatements, or omissions that may or may not lead to injuries or loss of property. This code may not be sold for profit, but may be distributed and copied free of charge as long as the credits window, copyright statement in the ha.c program, and this notice remain intact.

-------------------------------------------------------------------------------

From kak at max.ee.lsu.edu Tue May 5 18:32:37 1992
From: kak at max.ee.lsu.edu (Dr. S. Kak)
Date: Tue, 5 May 92 17:32:37 CDT
Subject: Paper abstract
Message-ID: <9205052232.AA01231@max.ee.lsu.edu>

The following paper has just been published:

PRAMANA - J. PHYS., Vol. 38, March 1992, pp. 271-278

State Generators and Complex Neural Memories

Subhash C. Kak
Department of Electrical and Computer Engineering
Louisiana State University, Baton Rouge, LA 70803, USA

Abstract: The mechanism of self-indexing for feedback neural networks that generates memories from short subsequences is generalized so that a single bit together with an appropriate update order suffices for each memory. This mechanism can explain how stimulating an appropriate neuron can then recall a memory. Although the information is distributed in this model, our self-indexing mechanism makes it appear localized. Also, a new complex-valued neuron model is presented to generalize McCulloch-Pitts neurons.

There are aspects of biological memory that are distributed and others that are localized. In the currently popular artificial neural network models the synaptic weights reflect the stored memories, which are thus distributed over the network. The question then arises whether these models can explain Penfield's observations on memory localization. This paper shows that such a localization does occur in these models if self-indexing is used. It is also shown how a generalization of the McCulloch-Pitts model of neurons appears essential in order to account for certain aspects of distributed information processing. One particular generalization, described in the paper, allows one to deal with some recent findings of Optican & Richmond (1987).
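Kak's complex-valued generalization is defined precisely in the paper; purely as a generic illustration of what it can mean to let a McCulloch-Pitts-style unit carry complex weights, one might threshold on the phase of a complex activation, as in this invented sketch (none of these names or choices come from the paper):

    import cmath

    # Generic illustration only: a threshold unit whose weights, inputs and
    # activation are complex numbers, firing when the activation's phase
    # exceeds a threshold. This is NOT Kak's specific model.
    def complex_unit(weights, inputs, phase_threshold=0.0):
        activation = sum(w * x for w, x in zip(weights, inputs))
        return 1 if cmath.phase(activation) >= phase_threshold else 0

    print(complex_unit([1 + 1j, 0.5 - 2j], [0.8, 1j]))   # -> 1

The point of any such generalization is that a complex activation carries two quantities (magnitude and phase) where a real-valued McCulloch-Pitts unit carries one.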
3. How to install the Animator. Although we've successfully compiled and run the hyperplane animator on several platforms, it is still not a stable program. It also only implements some of the functionality that we eventually hope to include. In particular, it only animates hyperplanes representing input-to-hidden weights. It does, however, allow the user to change some aspects of hyperplane display (color, line width, aspects of point labels, speed of movement, etc.), and allows the user to freeze hyperplane movement for examination at any point during training. How to install the hyperplane animator: 1. copy the file animator.tar.Z to your machine via ftp as follows: ftp cs.rutgers.edu (128.6.25.2) Name: anonymous Password: (your ID) ftp> cd pub/hyperplane.animator ftp> binary ftp> get animator.tar.Z ftp> quit 2. Uncompress animator.tar.Z 3. Extract files from animator.tar with: tar -xvf animator.tar 4. Read the README file there. It includes instructions for running a number of demonstration networks that are included with this distribution. DISCLAIMER: This software is distributed as shareware, and comes with no warranties whatsoever for the software itself or systems that include it. The authors deny responsibility for errors, misstatements, or omissions that may or may not lead to injuries or loss of property. This code may not be sold for profit, but may be distributed and copied free of charge as long as the credits window, copyright statement in the ha.c program, and this notice remain intact. ------------------------------------------------------------------------------- From kak at max.ee.lsu.edu Tue May 5 18:32:37 1992 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Tue, 5 May 92 17:32:37 CDT Subject: Paper abstract Message-ID: <9205052232.AA01231@max.ee.lsu.edu> The following paper has just been published. PRAMANA - J. PHYS., Vol. 38, March 1992, pp. 271-278 ------------------------------------------------------------------ State Generators and Complex Neural Memories Subhash C. Kak Department of Electrical and Computer Engineering Louisiana State University, Baton Rouge, LA 70803, USA Abstract: The mechanism of self-indexing for feedback neural networks that generates memories from short subsequences is generalized so that a single bit together with an appropriate update order suffices for each memory. This mechanism can explain how stimulating an appropriate neuron can then recall a memory. Although the information is distributed in this model, our self-indexing mechanism makes it appear localized. Also a new complex-valued neuron model is presented to generalize McCulloch-Pitts neurons. There are aspects to biological memory that are distributed and others that are localized. In the currently popular artificial neural network models the synaptic weights reflect the stored memories, which are thus distributed over the network. The question then arises whether these models can explain Penfield's observations on memory localization. This paper shows that such a localization does occur in these models if self-indexing is used. It is also shown how a generalization of the McCulloch-Pitts model of neurons appears essential in order to account for certain aspects of distributed information processing. One particular generalization, described in the paper, allows one to deal with some recent findings of Optican & Richmond (1987). From kak at max.ee.lsu.edu Tue May 5 18:51:52 1992 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Tue, 5 May 92 17:51:52 CDT Subject: No subject Message-ID: <9205052251.AA08618@max.ee.lsu.edu> Call for Papers Two Sessions On NEURAL NETWORKS at the "First Int. Conf. on FUZZY THEORY AND TECHNOLOGY" October 14-18, 1992; Durham, North Carolina -------------------------------------------------------------- Deadline for Submission of Summaries: June 15, 1992 Authors informed about Acceptance Decision: July 15, 1992 Full Length Paper Due At the Conf.: Oct 15, 1992 Accepted papers will appear in hard-cover proceedings published by a major publisher. Revised papers for inclusion in the book would be due on April 15, 1993. The book will appear by October 1993. _______________________________________________________________ Summaries should not exceed 6 single-spaced pages inclusive of figures. 3 copies of the summary (hard copies only) should be mailed to the organizer of the sessions: Subhash Kak Department of Electrical and Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA Tel: (504) 388-5552 E-mail: kak at max.ee.lsu.edu From omlinc at cs.rpi.edu Fri May 8 10:39:56 1992 From: omlinc at cs.rpi.edu (Christian Omlin) Date: Fri, 8 May 92 10:39:56 EDT Subject: TR available from the neuroprose archive Message-ID: <9205081439.AA24477@cs.rpi.edu> The following paper has been placed in the Neuroprose archive. Comments and questions are encouraged. ******************************************************************* -------------------------------------------- TRAINING SECOND-ORDER RECURRENT NEURAL NETWORKS USING HINTS -------------------------------------------- C.W. Omlin* C.L. Giles Computer Science Department *NEC Research Institute Rensselaer Polytechnic Institute 4 Independence Way Troy, N.Y. 12180 Princeton, N.J. 08540 omlinc at turing.cs.rpi.edu giles at research.nj.nec.com Abstract -------- We investigate a method for inserting rules into discrete-time second-order recurrent neural networks which are trained to recognize regular languages. The rules defining regular languages can be expressed in the form of transitions in the corresponding deterministic finite-state automaton. Inserting these rules as hints into networks with second-order connections is straightforward. Our simulation results show that even weak hints seem to improve the convergence time by an order of magnitude. (To be published in Machine Learning: Proceedings of the Ninth International Conference (ML92), D. Sleeman and P. Edwards (eds.), Morgan Kaufmann, San Mateo, CA 1992). 
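The abstract does not spell out the insertion procedure, so the following is only a rough sketch of the general idea (in Python for brevity; the tensor layout, the +/-H encoding and all names are illustrative assumptions, not taken from the paper): a known DFA transition "state i on symbol k goes to state j" can be programmed into the second-order weights W[j,i,k] before training begins.

import numpy as np

def insert_hints(n_states, n_symbols, transitions, H=1.0):
    # Start from small random second-order weights W[j, i, k].
    rng = np.random.default_rng(0)
    W = rng.uniform(-0.1, 0.1, size=(n_states, n_states, n_symbols))
    for i, k, j in transitions:        # rule: state i --symbol k--> state j
        W[j, i, k] += H                # excite the unit for the target state
        W[i, i, k] -= H                # inhibit the unit for the source state
    return W

def step(W, s, x):
    # One second-order update: s'_j = sigmoid(sum_{i,k} W[j,i,k] * s_i * x_k).
    return 1.0 / (1.0 + np.exp(-np.einsum('jik,i,k->j', W, s, x)))

With a large H the programmed transitions dominate the dynamics from the start; with a small H they act only as weak hints, which the paper reports is already enough to speed convergence considerably.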
******************************************************************** Filename: omlin.hints.ps.Z ---------------------------------------------------------------- FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: anything ftp> cd pub/neuroprose ftp> binary ftp> get omlin.hints.ps.Z ftp> bye unix% zcat omlin.hints.ps.Z | lpr (or whatever *you* do to print a compressed PostScript file) ---------------------------------------------------------------- ---------------------------------------------------------------------------- Christian W. Omlin, Computer Science Department, Amos Eaton 119, Rensselaer Polytechnic Institute, Troy, NY 12180 USA. Phone: (518) 276-2930 Fax: (518) 276-4033 E-mail: omlinc at turing.cs.rpi.edu, omlinc at research.nj.nec.com ---------------------------------------------------------------------------- From "MVUB::BLACK%hermes.mod.uk" at relay.MOD.UK Fri May 8 07:41:00 1992 From: "MVUB::BLACK%hermes.mod.uk" at relay.MOD.UK (John V. Black @ DRA Malvern) Date: Fri, 8 May 92 11:41 GMT Subject: 1st CFP: Third IEE International Conference on Artificial Neural Networks Message-ID: IEE 3rd INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS FIRST CALL FOR PAPERS Date : 25-27 May 1993 Venue : Conference Centre, Brighton, United Kingdom Contributions & Conference format: Oral & poster presentations in single, non-parallel sessions Scope: 3 principal areas of interest ----- Architecture & Learning Algorithms : Theory & design of neural networks, modular systems, comparison with classical techniques Applications & industrial systems : Vision and image processing, speech and language processing, biomedical systems, robotics & control, AI applications, expert systems, financial and business systems Implementations : parallel simulation/architecture, hardware implementations (analogue & digital), VLSI devices or systems, optoelectronics Travel: Frequent trains from London (journey time 60 mins) and from Gatwick airport (30 mins) Deadlines: --------- 1st September 1992 : Receipt of synopsis by secretariat. The synopsis should not exceed 1 A4 page October 1992 : Notification of provisional acceptance 25th January 1993 : Receipt of full typescript for final review by secretariat. This should be a maximum of 5 A4 pages - approximately 5,000 words, less if illustrations are included. Further details and contributions to: Sheila Griffiths ANN 93 Secretariat IEE Conference Services London WC2R 0BL United Kingdom Telephone (+44) 71 240 1871 Ext 222 Fax (+44) 71 497 3633 Telex 261176 IEE LDN G David Lowe Janet: lowe at uk.mod.hermes Internet: lowe%hermes.mod.uk at relay.mod.uk From ml92 at computing-science.aberdeen.ac.uk Fri May 8 14:02:30 1992 From: ml92 at computing-science.aberdeen.ac.uk (ML92 Aberdeen) Date: Fri, 08 May 92 14:02:30 Subject: Ninth International Machine Learning Conference (ML92) Message-ID: <9205081302.AA29997@kite> MMM MMM LL 999 2222 MM MM MM MM LL 99 99 22 22 MM MM MM MM LL ==== 99 99 22 MM M MM LL 99999 222 MM MM LL 99 222 MM MM LLLLLL 999999 22222222 NINTH INTERNATIONAL MACHINE LEARNING CONFERENCE UNIVERSITY OF ABERDEEN, SCOTLAND 1 - 3 JULY 1992 Registration Information ======================== On behalf of the organizing committee, we are pleased to announce that ML92, the Ninth International Machine Learning Conference, will be held at the University of Aberdeen, Scotland, July 1-3, 1992. Informal workshop sessions will be held immediately after the conference on Saturday, July 4, 1992. 
The conference will feature invited speakers, technical and poster sessions. The invited speakers for ML92 are: * David Klahr Carnegie-Mellon University, USA * Ivan Bratko Jozef Stefan Institute, Ljubljana, Slovenia * Jude Shavlik University of Wisconsin, USA REGISTRATION FEE The registration fees for ML92 are as follows: Normal - Pound Sterling 115, Student - Pound Sterling 75. These fees cover conference participation, proceedings, teas and coffees during breaks and a number of evening receptions. The deadline for registration is May 29, 1992. After this date, a late fee of Pound Sterling 20 will be charged. Cancellation fees are as follows: before May 29 - Pound Sterling 10, May 30 to June 30 - Pound Sterling 20, after June 30 - at the discretion of the General Chairman. ACCOMMODATION A large number of rooms in university halls of residence (student dorms) have been made available for ML92 delegates. Delegates requiring on-campus accommodation must return completed forms by May 29, 1992 - after that date accommodation cannot be guaranteed. In addition, block bookings have been made with three city centre hotels (all approx. 30 minutes' walk, or a short bus ride, from the conference venue). Rates are as follows: University Hall (Student Dorm) Single: Pound Sterling 18.50 Copthorne Hotel Single: Pound Sterling 72.50 Double: Pound Sterling 87.50 Brentwood Hotel Tues June 30 - Thurs July 2: Single: Pound Sterling 48.00 Double: Pound Sterling 58.00 Fri July 3 & Sat July 4: Single: Pound Sterling 28.00 Double: Pound Sterling 38.00 Caledonian Thistle Hotel Tues June 30 - Thurs July 2: Single: Pound Sterling 88.20 Double: Pound Sterling 103.50 Fri July 3 & Sat July 4: Single: Pound Sterling 38.00 Double: Pound Sterling 76.00 Notes All double room prices are based on 2 people occupying the room. All prices include breakfast except for the Caledonian Thistle on June 30, July 1 & July 2. University accommodation should be booked using the conference registration form. Delegates requiring hotel accommodation should contact the hotels directly using the contact information below. When booking hotel accommodation delegates must mention ML92 to qualify for conference rates and should contact hotels before May 29, 1992, after which accommodation cannot be guaranteed. Copthorne Hotel, 122 Huntly Street, Aberdeen, AB1 1SU, SCOTLAND. Tel. +44 224 630404 Fax +44 224 640573 Telex 739707 Brentwood Hotel, 101 Crown Street, Aberdeen, AB1 2HH, SCOTLAND. Tel. +44 224 595440 Fax +44 224 571593 Telex 739316 Caledonian Thistle Hotel, Union Terrace, Aberdeen, AB9 1HE, SCOTLAND. Tel. +44 224 640233 Fax +44 224 641627 Telex 73758 LUNCH Lunch will be provided on campus, close to the conference venue. Cost: Pound Sterling 5.00 per day. All delegates are recommended to select conference lunches, as there are no alternatives close to the campus. Please indicate on the registration form the days on which you require lunch. (The cost of lunch on Saturday July 4 is included in the workshop fee.) CONFERENCE DINNER The conference dinner will be held on the evening of Friday, July 3 at the Pittodrie House Hotel, a country hotel outside Aberdeen. Places are limited and will be assigned on a first come, first served basis. Cost of the meal plus transport: Pound Sterling 27.00. WORKSHOPS - SATURDAY JULY 4 A number of informal workshops are to be held on Saturday July 4, immediately after the conference. The five workshops are: 1. Biases in Inductive Learning Coordinator: Diana Gordon (Naval Research Lab, Washington DC, USA) 2. 
Computational Architectures for Supporting Knowledge Acquisition & Machine Learning Coordinator: Mike Weintraub (GTE Labs., USA) 3. Integrated Learning in Real-World Domains Coordinator: Patricia Riddle (Boeing, Seattle, USA) 4. Knowledge Compilation & Speedup Learning Coordinator: Prasad Tadepalli (Oregon State University, USA) 5. Machine Discovery Coordinator: Jan Zytkow (Wichita State University, USA) ACCOMPANYING PERSONS PROGRAM An accompanying persons program will be organized if numbers merit it. The area around Aberdeen has a high concentration of historic sites, castles, etc. and some of the most beautiful countryside in Scotland. TRAVELLING TO ABERDEEN Delegates travelling from the USA can now fly direct to Scotland. American Airlines fly Chicago to Glasgow, while British Airways fly New York to Glasgow. A regular rail service links Glasgow's Queen Street station to Aberdeen. A number of airlines fly direct to Aberdeen from European cities: Air France (Paris), SAS (Copenhagen) & AirUK (Amsterdam). Aberdeen is also served by regular flights from several UK airports: Manchester (British Airways, DanAir), London Gatwick (DanAir), London Heathrow (British Airways), Stansted (AirUK). A frequent high-speed rail service links Aberdeen to London - journey time approx. 8 hours (sleeper services are available). PRE- OR POST-CONFERENCE HOLIDAYS IN SCOTLAND Delegates wishing to arrange holidays in Scotland for the period immediately before or after the conference may wish to contact Chieftain Tours Ltd, who can offer help with car hire, hotel bookings, etc. Please mention ML92 when contacting Chieftain Tours. Please note that all correspondence regarding holidays must be conducted directly with the tour company and not through the ML92 organizers. Chieftain Tours can also provide information about low-cost flights to Scotland from the USA and Canada. Contact details for Chieftain Tours: Chieftain Tours Ltd., A8 Whitecrook Centre, Whitecrook Street, Clydebank, Glasgow, Scotland G81 1QF. Tel +44 41 9511470 Fax +44 41 9511467 Telex 778169 Delegates from the USA may wish to make use of the following toll-free fax number: 1 800 352 4350 PAYMENT Payment may be by cheque or credit card (Visa or Mastercard). Cheques must be in pounds sterling and must be made payable to "University of Aberdeen". Completed registration forms should be returned by mail or fax to: ML92 Registrations, Department of Computing Science, King's College, University of Aberdeen, Aberdeen, AB9 2UB, SCOTLAND. Tel +44 224 272296 Fax +44 224 487048 (Please note - registration by email is not acceptable.) CUT HERE ________________________________________________________________________ ML92 REGISTRATION FORM Please complete this form in typescript or BLOCK capitals and send to: ML92 Registrations, Department of Computing Science, King's College, University of Aberdeen, Aberdeen, AB9 2UB, SCOTLAND. PERSONAL DETAILS Name ___________________________________________ Address ___________________________________________ ___________________________________________ ___________________________________________ ___________________________________________ ___________________________________________ ___________________________________________ Email ___________________________________________ Telephone No. ___________________________________________ Fax No. ___________________________________________ REGISTRATION FEE Delegates requiring reduced (student) fee must provide proof of status (such as xerox of student ID). Please tick the appropriate boxes below. 
---- Normal Registration Pound Sterling 115.00 | | ---- ---- Student Registration Pound Sterling 75.00 | | ---- ---- Late Fee (after May 29) Pound Sterling 20.00 | | ---- Registration Total ----------- (Pound Sterling): | | ----------- ACCOMMODATION Use this section to book university hall (student dorm) accommodation. (Pound Sterling 18.50 per night) University Hall (Single room) ---- Tuesday June 30 | | ---- ---- Wednesday July 1 | | ---- ---- Thursday July 2 | | ---- ---- Friday July 3 | | ---- ---- Saturday July 4 | | ---- Delegates requiring on-campus accommodation must return forms by May 29, 1992. Delegates whose completed forms arrive after this date cannot be guaranteed accommodation. Accommodation Total ----------- (Pound Sterling): | | ----------- LUNCH Please indicate the days on which you require lunch (Cost: Pound Sterling 5.00 per day) ---- Wednesday July 1 | | ---- ---- Thursday July 2 | | ---- ---- Friday July 3 | | ---- Lunch Total ----------- (Pound Sterling): | | ----------- CONFERENCE DINNER Yes - I wish to attend the ML92 Conference Dinner ---- (Pound Sterling 27.00) | | ---- Dinner Total ----------- (Pound Sterling): | | ----------- SPECIAL DIETARY REQUIREMENTS Please indicate below if you require a special diet. ---- Vegetarian | | ---- ---- Vegan | | ---- ---------------- Other (please specify) | | ---------------- ACCOMPANYING PERSONS PROGRAM Please indicate the number of persons ---------- likely to accompany you to the conference. | | ---------- WORKSHOPS - SATURDAY JULY 4 ---- Yes - I will be attending the ML92 workshops | | ---- ---- No - I will not be attending the ML92 workshops | | ---- If yes, please indicate which workshop you wish to attend: ---- Biases in Inductive Learning | | ---- ---- Computational Architectures for Supporting | | Knowledge Acquisition & Machine Learning ---- ---- Integrated Learning in Real-World Domains | | ---- ---- Knowledge Compilation & Speedup Learning | | ---- ---- Machine Discovery | | ---- The workshop fee includes the cost of the workshop proceedings, lunch on July 4 plus teas/coffees during breaks. ---- Normal Pound Sterling 20 | | ---- ---- Student Pound Sterling 15 | | ---- Workshop Total ----------- (Pound Sterling): | | ----------- FINAL TOTAL ----------- (Pound Sterling): | | ----------- PAYMENT CREDIT CARD ----------- Please charge my VISA / MASTERCARD (delete as appropriate) Name (as it appears on card) ---------------------------------------------------------------------------- | | | | | | | | | | | | | | | | | | | | | | | | | | ---------------------------------------------------------------------------- Card Number ------------------------------------------------- | | | | | | | | | | | | | | | | | ------------------------------------------------- ----------------- Expiry Date | | | / | | | ----------------- Amount -------------------- (Pound Sterling): | | | | . 
| | | -------------------- Signature _________________________________________________ ** The credit card payment facility for ML92 is available ** ** as a result of the generous support of Bank of Scotland plc.** CHEQUE ------ I enclose a pounds sterling cheque made payable to "University of Aberdeen" for: ----------- | | ----------- ________________________________________________________________________ From dhw at santafe.edu Mon May 11 17:15:17 1992 From: dhw at santafe.edu (David Wolpert) Date: Mon, 11 May 92 15:15:17 MDT Subject: New posting Message-ID: <9205112115.AA17949@sfi.santafe.edu> ****************** DO NOT FORWARD TO OTHER LISTS ********** The following article has been placed in neuroprose under the name "wolpert.stack_gen.ps.Z". It appears in the current issue of Neural Networks. It is a rather major rewrite of a preprint of the same name. STACKED GENERALIZATION by David H. Wolpert Abstract: This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When used with multiple generalizers, stacked generalization can be seen as a more sophisticated version of cross-validation, exploiting a strategy more sophisticated than cross-validation's crude winner-takes-all for combining the individual generalizers. When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question. After introducing stacked generalization and justifying its use, this paper presents two numerical experiments. The first demonstrates how stacked generalization improves upon a set of separate generalizers for the NETtalk task of translating text to phonemes. The second demonstrates how stacked generalization improves the performance of a single surface-fitter. With the other experimental evidence in the literature, the usual arguments supporting cross-validation, and the abstract justifications presented in this paper, the conclusion is that for almost any real-world generalization problem one should use *some* version of stacked generalization to minimize the generalization error rate. This paper ends by discussing some of the variations of stacked generalization, and how it touches on other fields like chaos theory. To retrieve this article, do the following: unix> ftp archive.cis.ohio-state.edu login: anonymous password: {your e-mail address} ftp> binary ftp> cd pub/neuroprose ftp> get wolpert.stack_gen.ps.Z ftp> quit unix> uncompress wolpert.stack_gen.ps.Z unix> lpr wolpert.stack_gen.ps {or however you get postscript printout}
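For readers who want the two-level scheme in concrete terms, here is a minimal sketch of the idea the abstract describes (written with today's Python/scikit-learn conventions purely for brevity; the fold count, the function names and the choice of level-1 generalizer are illustrative assumptions, not Wolpert's):

import numpy as np
from sklearn.model_selection import KFold

def stacked_predict(level0_makers, level1, X, y, X_test):
    # Level-1 inputs are the out-of-fold guesses of the level-0 generalizers.
    n, k = len(X), len(level0_makers)
    Z = np.zeros((n, k))
    for train, hold in KFold(n_splits=5).split(X):
        for j, make in enumerate(level0_makers):
            model = make().fit(X[train], y[train])
            Z[hold, j] = model.predict(X[hold])   # guesses on the held-out part
    level1.fit(Z, y)                              # learn to correct the level-0 biases
    # For a new question, combine the guesses of level-0 models trained on all data.
    full = [make().fit(X, y) for make in level0_makers]
    Z_test = np.column_stack([m.predict(X_test) for m in full])
    return level1.predict(Z_test)

With a single level-0 generalizer this reduces to estimating, and then correcting for, that generalizer's own error, as the abstract notes.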
From jose at tractatus.siemens.com Mon May 11 15:32:19 1992 From: jose at tractatus.siemens.com (Steve Hanson) Date: Mon, 11 May 1992 15:32:19 -0400 (EDT) Subject: NIPS*92 Deadline Message-ID: Don't forget... if you haven't yet started your papers... start now! Deadline is MAY 22 (POSTMARKED) only 12 more shopping days left. Stephen J. Hanson Learning Systems Department SIEMENS Research 755 College Rd. East, Princeton, NJ 08540 From cabestan at eel.upc.es Tue May 12 13:32:20 1992 From: cabestan at eel.upc.es (JOAN CABESTANY) Date: Tue, 12 May 1992 13:32:20 UTC+0100 Subject: IWANN93 Workshop Message-ID: <355*/S=cabestan/OU=eel/O=upc/PRMD=iris/ADMD=mensatex/C=es/@MHS> Please find herewith the First Announcement and Call for Papers of IWANN93 (International Workshop on Artificial Neural Networks) to be held in Spain (near Barcelona) in June 1993. Thanks J.Cabestany UPC cabestan at eel.upc.es *************************************************************************** INTERNATIONAL WORKSHOP ON ARTIFICIAL NEURAL NETWORKS IWANN'93 First Announcement and Call for Papers Sitges (Barcelona), Spain June 9 - 11, 1993 SPONSORED BY IFIP (Working Group in Neural Computer Systems, WG10.6) IEEE Neural Networks Council UK&RI communication chapter of IEEE Spanish Computer Society chapter of IEEE AEIA (IEEE Affiliate society) ORGANISED BY Universidad Politecnica de Catalunya Universidad Autonoma de Barcelona Universidad de Barcelona UNED (Madrid) IWANN'91 (International Workshop on Artificial Neural Networks) was held in Granada (Spain) in September 1991. People from over 10 countries attended the Workshop, and over 50 oral presentations were given. IWANN'93 will be organised in June 1993 in Sitges (Spain) with the following Scope and Topics. SCOPE Artificial Neural Networks (ANN) were first developed as structural or functional models of biological systems in an attempt to emulate their unique problem-solving abilities. The main interest in neural topics stems from their advantages in plasticity, speed and autonomy over conventional hardware and software, which have traditionally proven inadequate for handling certain tasks such as perception, learning, planning, knowledge acquisition and natural language processing. IWANN's main objective is to offer a forum for achieving a global, innovative and advanced perspective on ANN. In addition to conventional Neural Networks aspects, such as algorithms, architectures, software development tools, learning, implementations and applications, IWANN'93 will also be concerned with other complementary topics such as neural computation theory and methodology, physiological and anatomical basis, local computation models, organization and structures resembling biological systems. Contributions on the following aspects are welcome: * New models for biological networks. * New algorithms and architectures for autonomy and self-programmability using local learning strategies. * Relationship with symbolic and knowledge-based systems. * New implementation proposals using general or specific processors. Implementations with embedded learning are especially invited. * Applications. Finally, it is expected that IWANN'93 will also serve as a meeting point for engineers and scientists to establish professional contacts and relationships. TOPICS 1 - BIOLOGICAL PERSPECTIVES: anatomical and physiological basis, local circuits, biophysics and natural computation. 2 - THEORETICAL MODELS: analog, logic, inferential, statistical and fuzzy models. Statistical mechanics. 3 - ORGANIZATIONAL PRINCIPLES: network dynamics, self-organization, competition, recurrency, evolutive optimization and genetic algorithms. 4 - LEARNING: supervised and unsupervised strategies, local self-programming, continuous learning, evolutive algorithms 5 - COGNITIVE SCIENCE AND AI: perception and psychophysics, symbolic reasoning and memory. 6 - NEURAL SOFTWARE: languages, tools, simulation and benchmarks. 
7 - HARDWARE IMPLEMENTATION: VLSI, parallel architectures, neurochips, preprocessing networks, neurodevices, benchmarks, optical and other technologies. 8 - NEURAL NETWORKS FOR SIGNAL PROCESSING: preprocessing, vision, speech recognition, adaptive filtering, noise reduction. 9 - NEURAL NETWORKS FOR COMMUNICATIONS SYSTEMS: modems and codecs, network management, digital communications. 10 - NEURAL NETWORKS FOR CONTROL AND ROBOTICS: system identification, motion, adaptive control, navigation, real time applications. LOCATION SITGES (BARCELONA), JUNE 9 - 11, 1993. Sitges is located 35 km. south of Barcelona. The city is well known for its beaches and its promenade facing the Mediterranean sea. Sitges is also known for its cultural events and history (Maricel museum, painters like Santiago Rusinol lived there and left part of their heritage). Sitges can be easily reached by car or by train (about 30 minutes from Barcelona). LANGUAGE English will be the official language of IWANN'93. Simultaneous translation will not be provided. CALL FOR PAPERS The Programme Committee seeks original papers on the above mentioned Topics. Authors should pay special attention to the explanation of theoretical and technical choices involved, point out possible limitations and describe the current state of their work. Authors must take into account the following: INSTRUCTIONS TO AUTHORS Authors must submit four copies of full papers, not exceeding 6 pages in DIN-A4 format. The heading should be centered and include: . Title in capitals. . Name(s) of author(s). . Address(es) of author(s). . A 10 line abstract. Three blank lines should be left between each of the above items, and four between the heading and the body of the paper; use 1.6 cm left, right, top and bottom margins, single-spaced and not exceeding the 6 page limit. In addition, one sheet should be attached including the following information: . Title and author(s) name(s). . A list of five keywords. . A reference to the Topics the paper relates to. . Postal address, phone and fax numbers and E-mail (if available). All received papers will be reviewed by the Programme Committee. Accepted papers may be presented orally or as poster panels, however all accepted contributions will be published in full length. (Springer-Verlag Proceedings are expected). DATES Second Call for Papers September 1, 1992 Final date for submission November 30, 1992 Committee's decision March 15, 1993 Workshop June 9-11, 1993 CONTRIBUTIONS MUST BE SENT TO: Prof. Jose Mira Dpto. Informatica y Automatica UNED Senda del Rey, s/n 28040 MADRID (Spain) Phone: +34.1.544.60.00 Fax: +34.1.544.67.37 E-mail: jose.mira at human.uned.es ORGANIZATION COMMITTEE Jose Mira UNED. Madrid (E) **Chairman** Senen Barro Unv. de Santiago (E) Joan Cabestany Unv. Pltca. de Catalunya (E) Trevor Clarkson King's College London (UK) Ana Delgado UNED. Madrid (E) Federico Moran Unv. Complutense. Madrid (E) Conrad Perez Unv. de Barcelona (E) Francisco Sandoval Unv. de Malaga (E) Elena Valderrama CNM- Unv. Autonoma de Barcelona (E) LOCAL COMMITTEE Joan Cabestany Unv. Pltca. de Catalunya (E) **Chairman** Jordi Carrabina CNM- Unv. Autonoma de Barcelona (E) Francisco Castillo Unv. Pltca. de Catalunya (E) Andreu Catala Unv. Pltca. de Catalunya (E) Gabriela Cembrano Instituto de Cibernetica. CSIC. Barcelona (E) Conrad Perez Unv. de Barcelona (E) Elena Valderrama CNM- Unv. Autonoma de Barcelona (E) GENERAL CHAIRMAN Alberto Prieto Unv. Granada. Spain PROGRAMME COMMITTEE Jose Mira UNED. Madrid (E) **Chairman** Sanjeev B. 
Ahuja Nielsen A.I. Research & Development. Bannokburn (USA) Igor Aleksander Imperial College. London (UK) Luis B. Almeida INESC. Lisboa (P) Shun-ichi Amari Faculty of Engineering. Unv. Tokyo (Jp) Xavier Arreguit CSEM SA (CH) Francois Blayo Ecole Polytechnique Federale de Lausanne (CH) Colin Campbell University of Bristol (UK) Leon Chua University of California (USA) Trevor Clarkson King's College London (UK) Michael Cosnard Ecole Normale Superieure de Lyon (F) Marie Cottrell Unv. Paris I (F) Dante Del Corso Politecnico di Torino (I) Gerard Dreyfus ESPCI Paris (F) J. Simoes da Fonseca Unv. de Lisboa (P) Kunihiko Fukushima Faculty of Engineering Science. Osaka University (Jp) Karl Goser Unv. Dortmund (D) Francesco Gregoretti Politecnico di Torino (I) Karl E. Grosspietsch Mathematik und Datenverarbeitung (GMD). St. Austin (D) Mohamad Hassoun Wayne State University (USA) Jeanny Herault INPG Grenoble (F) Jaap Hoekstra Delft University of Technology (N) P.T.W. Hudson Faculteit der Sociale Wetenschappen. Leiden University (N) Jose Luis Huertas CNM- Universidad de Sevilla (E) Simon Jones Unv. Nottingham (UK) Christian Jutten INPG Grenoble (F) H. Klar Institut fur Mikroelektronik. Technische Universitat Berlin (D) Michael D. Lemmon University of Notre Dame. Notre Dame (USA) Panos Ligomenides Unv. of Maryland (USA) Javier Lopez Aligue Unv. de Extremadura. (E) Robert J. Marks II University of Washington (USA) Anthony N. Michel University of Notre Dame. Notre Dame (USA) Roberto Moreno Unv. Las Palmas Gran Canaria (E) Josef A. Nossek Inst. of Network Theory and Circuit Design. Tech. Univ. of Munich (D) Francisco J. Pelayo Unv. de Granada (E) Franz Pichler Johannes Kepler Univ. (A) Ulrich Ramacher Siemens AG. Munich (D) Tamas Roska Comp. & Aut. Res. Inst. Hungarian Academy of Science. Budapest (H) Leonardo Reyneri Universita di Pisa (I) Peter A. Rounce Dept. Computer Science. University College London (UK) V.B. David Sanchez German Aerospace Research Establishment. Wessling (G) E. Sanchez-Sinencio Texas A&M University (USA) Renato Stefanelli Politecnico di Milano (I) T.J. Stonham Brunel-University of West London (UK) John G. Taylor Centre for Neural Networks. King's College London (UK) Carme Torras Instituto de Cibernetica. CSIC. Barcelona (E) Philip Treleaven Dept. Computer Science. University College London (UK) Marley Vellasco Pontificia Universidade Catolica do Rio de Janeiro (Br) Michel Verleysen Unv. Catholique de Louvain (B) Michel Weinfeld Ecole Polytechnique Paris (F) INFORMATION FORM to be returned as soon as possible to: Prof. J.Cabestany IWANN'93 Dep.Ingenieria Electronica UPC P.O.Box 30.002 08080 Barcelona SPAIN Phone:+34.3.401.67.42 Fax: +34.3.401.68.01 E-mail: cabestan at eel.upc.es (cut here) ......................................................................... 
IWANN'93 INTERNATIONAL WORKSHOP ON ARTIFICIAL NEURAL NETWORKS Sitges (Barcelona), Spain June 9 - 11, 1993 Name:____________________________________________________________________ Company/Organization:____________________________________________________ Address:_________________________________________________________________ State/Country:___________________________________________________________ Phone:_____________________ Fax:_______________________ E-mail:____________________________________ I intend to attend the Workshop: ______ I intend to submit a paper: _____ Tentative title: Authors: Related topics: From ajr at eng.cam.ac.uk Tue May 12 11:17:24 1992 From: ajr at eng.cam.ac.uk (Tony Robinson) Date: Tue, 12 May 92 11:17:24 BST Subject: JOB AD: Hybrid Hidden Markov/Connectionist Models for Speech Recognition Message-ID: <17495.9205121017@dsl.eng.cam.ac.uk> [ My apologies if this appears on your screen more than once ] Cambridge University Engineering Department Research Assistant in Speech Recognition The Speech Group expects to appoint an RA for an ESPRIT project on large vocabulary speech recognition using hybrid HMM/ANN structures. Candidates will have PhD experience in one of these techniques and an interest in C programming in a parallel environment, and will join a multinational research collaboration. It is expected that the project will start during the summer of 1992, and will last for 15 months with a possible extension to 36 months. The RA starting date is negotiable. Salary is age-related, on RA1A rates. Further information and application forms can be obtained from Mavis Barber, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ. Closing date for applications is 1 June 1992. Further Particulars The Speech Group expects to appoint an RA for an ESPRIT project on large vocabulary speech recognition using hybrid HMM/ANN structures. The consortium for this Basic Research Project is: Lernout & Hauspie Speech Products, Brussels; INESC, Lisbon; CUED, with ICSI (International Computer Science Institute) Berkeley as sub-contractor. The baseline structures are the HMM/MLP pioneered by Bourlard (LHS) and Morgan (ICSI) and the HMM/recurrent NN (RNN) studied by Robinson (CUED). These will be extended in several ways: by incorporating improvements analogous to those used in state-of-the-art HMM recognisers; by further development of the theoretical bases of hybrid classification; by the development of better acoustic features with enhanced speaker and communication channel robustness; by the development of better training procedures; and by the investigation of fast speaker adaptation in hybrids. The project also aims to demonstrate real-time recognisers and their evaluation against international databases such as those used by the DARPA community. A major feature of the project will be the use of the RAP, a ring array processor developed by ICSI, at each site, with further development of hardware and software tools by ICSI for the project. The RAP has 16 digital signal processors with a peak performance of 0.5 GFLOPS. Applicants should have PhD or equivalent experience in HMMs or ANNs in speech recognition. The RAP uses C & C++ in a UNIX environment and applicants should have relevant experience and an interest in extending their skills in a parallel environment. The RA at Cambridge will be mainly responsible for the development of HMM/RNN structures. 
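As background for the curious: in such hybrids the network typically replaces the HMM's emission model by converting its state posteriors into "scaled likelihoods", the standard recipe of the Bourlard-Morgan HMM/MLP approach named above. The sketch below shows that recipe in modern Python; it is an illustration only, not the project's code, and all array names are assumptions.

import numpy as np

def viterbi_hybrid(posteriors, priors, logA, log_init):
    # posteriors: (T, Q) network outputs P(state | frame).
    # priors: (Q,) state priors; logA: (Q, Q) log transition probs;
    # log_init: (Q,) initial log state probs.
    log_emit = np.log(posteriors) - np.log(priors)  # log P(q|x) - log P(q)
    T, Q = posteriors.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, Q), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA              # predecessor scores per state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                               # most likely state sequence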
It is expected that the project will start during the summer of 1992, initially for a period of 15 months but with an extension to 36 months. The starting date for the RA is negotiable. Further details and application forms from Mavis Barber at Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, mavis at eng.cam.ac.uk F. Fallside A.J. Robinson May 1992 From shultz at hebb.psych.mcgill.ca Tue May 12 08:42:23 1992 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Tue, 12 May 92 08:42:23 EDT Subject: No subject Message-ID: <9205121242.AA10422@hebb.psych.mcgill.ca> Subject: Abstract Date: 12 May '92 Please do not forward this announcement to other boards. Thank you. ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: A Constraint Satisfaction Model of Cognitive Dissonance Phenomena Thomas R. Shultz Department of Psychology McGill University Mark R. Lepper Department of Psychology Stanford University Abstract A constraint satisfaction network model simulated cognitive dissonance data from the insufficient justification and free choice paradigms. The networks captured the psychological regularities in both paradigms. In the case of free choice, the model fit the human data better than did cognitive dissonance theory. This paper will be presented at the Fourteenth Annual Conference of the Cognitive Science Society, Indiana University, 1992. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get shultz.dissonance.ps.Z ftp> quit unix> uncompress shultz.dissonance.ps.Z Tom Shultz Department of Psychology McGill University 1205 Penfield Avenue Montreal, Quebec H3A 1B1 Canada shultz at psych.mcgill.ca From SCHNEIDER at vms.cis.pitt.edu Tue May 12 17:17:00 1992 From: SCHNEIDER at vms.cis.pitt.edu (SCHNEIDER@vms.cis.pitt.edu) Date: Tue, 12 May 1992 16:17 EST Subject: Post-Doc in Neural Processes in Cognition for Fall 92 Message-ID: <01GJXHY8CO40PF8MJI@vms.cis.pitt.edu> Postdoctoral Training in NEURAL PROCESSES IN COGNITION For Fall 1992 The Pittsburgh Neural Processes in Cognition program, in its second year, is providing interdisciplinary training in brain sciences. The National Science Foundation has established an interdisciplinary program investigating the neurobiology of cognition utilizing neuroanatomical, neurophysiological, and computer simulation procedures. Students perform original research investigating cortical function at multiple levels of analysis. State of the art facilities include: computerized microscopy, human and animal electrophysiological instrumentation, behavioral assessment laboratories, MRI and PET brain scanners, the Pittsburgh Supercomputing Center, and access to human clinical populations. This is a joint program between the University of Pittsburgh, its School of Medicine, and Carnegie Mellon University. Postdoctoral applicants MUST HAVE UNITED STATES RESIDENT'S STATUS. Applications are requested by June 20, 1992. Applicants must have a sponsor from the training faculty for the Neural Processes in Cognition Program. Application materials will be sent upon request. 
Each student receives full financial support, travel allowances, and computer workstation access. Last year's class included mathematicians, psychologists, and neuroscience researchers. Applications are encouraged from students with interest in biology, psychology, engineering, physics, mathematics, or computer science. For information contact: Professor Walter Schneider, Program Director, Neural Processes in Cognition, University of Pittsburgh, 3939 O'Hara St, Pittsburgh, PA 15260, call 412-624-7064 or Email to NEUROCOG at VMS.CIS.PITT.BITNET From joe at cogsci.edinburgh.ac.uk Wed May 13 06:10:26 1992 From: joe at cogsci.edinburgh.ac.uk (Joe Levy) Date: Wed, 13 May 92 11:10:26 +0100 Subject: Research Position Message-ID: <10229.9205131010@muriel.cogsci.ed.ac.uk> Research Position University of Edinburgh, UK Centre for Cognitive Science Applications are invited for an RA position, at pre- or post-doctoral level (up to 16,755 pounds), working on ESRC-funded research into connectionist models of human language processing. The research involves using recurrent neural networks trained on speech corpora to model psycholinguistic data. Experience in C programming, neural networks and human language processing would be valuable. An early start date is preferred and the grant will run until 31 March 1994. Applications consisting of a cv and the names and addresses of 2 referees should be sent by Tuesday 2 June to Dr Richard Shillcock, Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, UK to whom informal enquiries should also be addressed (email rcs%cogsci.ed.ac.uk at nsfnet-relay.ac.uk; tel +44 31 650 4425). From rosanna at cns.edinburgh.ac.uk Thu May 14 16:30:55 1992 From: rosanna at cns.edinburgh.ac.uk (Rosanna Maccagnano) Date: Thu, 14 May 92 16:30:55 BST Subject: Email address for NETWORK Message-ID: <12931.9205141530@subnode.cns.ed.ac.uk> The correct email address for IOP Publishing is: In the UK: On Janet IOPPL at UK.AC.RL.GEC-B Outside the UK: IOPPL at GEC-B.RL.AC.UK Rosanna From 75008378%dcu.ie at BITNET.CC.CMU.EDU Thu May 14 12:02:00 1992 From: 75008378%dcu.ie at BITNET.CC.CMU.EDU (Barry McMullin, DCU (Dublin, Ireland) ) Date: Thu, 14 May 1992 16:02 GMT Subject: Workshop: "Autopoiesis and Perception" - Call for Participation. Message-ID: <01GK0A1NYOSG985B1A@dcu.ie> [The workshop announced below addresses an essentially cross-disciplinary subject area, potentially involving philosophy, computer science, engineering and biology - to name but a few. It is therefore being posted across a variety of forums (fora?): so my apologies for the noise if you see it more than once! All flames directly to me, please. In case you wish to print out the plain ascii text, it has been structured with 72 columns, 66 lines per page. Please pass on the notice to anyone else who may be interested. If you require further information, or wish to register, please follow the instructions below; but note that, due to other commitments over the next fortnight, no acknowledgements will be issued before May 27th. - Barry.] ------------------------------- CUT HERE ------------------------------- AUTOPOIESIS AND PERCEPTION A Workshop within ESPRIT BRA 3352 DUBLIN CITY UNIVERSITY: 25-26 August 1992 ************ CALL FOR PARTICIPATION ************ A common sense idea of perception is that, through the information processing capabilities of our sensory/brain system, we come to know "the" objectively real, external world. 
However, this "spectator" paradigm has not proved very effective (so far) in attempts to build artificial perceptual systems. It therefore seems appropriate to critically examine this concept of perception. One alternative idea is to take a participatory rather than a spectator view of the relationship between "us" and "the external world". To perceive is not to process sensory data, but to apprehend meaning through interaction. Autopoiesis is an organizational paradigm which can support such a participatory view of perception. The concept of autopoiesis (lit. "self-producing") was introduced to characterise the organisation which makes living systems autonomous. An autopoietic organisation is one which is self-renewing (in a suitable environment); autopoietic systems maintain their organisation through a network of component-producing processes such that the interacting components generate the same network of processes which produced them. In the autopoietic paradigm, perception is an emergent phenomenon characteristic of the interaction between an autopoietic system and its environment: the system responds to perturbations in just such a way as to maintain its (autopoietic) identity. Structure: ---------- The key objective of the workshop is to allow for extensive, open discussion, and it has been structured accordingly. It will consist of a small number of prepared papers by invited keynote speakers, punctuated with extended discussion periods; it will run over one and a half days (from 9.30 AM on 25th August, to 1.00 PM on 26th August). To maximize the benefit of the discussion, the workshop will be limited to 30 participants. Invited Speakers (Confirmed): ----------------------------- Prof. Francisco Varela C.R.E.A., Ecole Polytechnique, Paris. Dr. David Vernon DG XIII, EC Commission, Brussels, and Computer Science, Trinity College Dublin. Dr. Dermot Furlong Department of Microelectronic and Electrical Engineering, Trinity College Dublin. Further Information: -------------------- Barry McMullin, Electronic Engineering, Dublin City University, Dublin 9, IRELAND. E-mail: Phone: +353-1-7045432 Fax: +353-1-7045508 [Page 1 of 2] ------------------------------------------------------------------------ AUTOPOIESIS AND PERCEPTION A Workshop within ESPRIT BRA 3352 DUBLIN CITY UNIVERSITY: 25-26 August 1992 ************* REGISTRATION FORM ************* The deadline for receipt of registration information is Friday, 31st July 1992. Due to the limit of 30 participants, early registration is advisable. However, postal services to Dublin are currently severely affected by an industrial dispute. Therefore, if you wish to register, it is recommended that you return this form by E-mail or FAX as soon as possible, paying the registration fee by Bank Transfer. Please advise if you require information on hotel accommodation; campus accommodation will be available at a rate of IRP 20 per night (approx.) - a separate booking form will be provided on request. The DCU campus is situated in the north Dublin suburb of Glasnevin, is less than 10 minutes from Dublin International Airport, and has easy access to the city centre. All correspondence should be directed to: Barry McMullin, Electronic Engineering, Dublin City University, Dublin 9, IRELAND. E-mail: Phone: +353-1-7045432 Fax: +353-1-7045508 Name:................................................................... Organisation:........................................................... 
Address:................................................................ City:........................... Country:.......................... Phone:............... FAX:................. E-mail:................... Is your organisation a member of the BRA 3352 Working Group on Vision? YES___ NO___ If YES, which consortium? ................... Registration Fee: Irish Pounds 60 (or equivalent) Payment Form: (Check One) 1) Internal Accounting (working group members only) ____ Requires signature of partner representative listed in BRA 3352 Technical Annex: Partner Representative:................... Signature................ 2) Bank Transfer: ____ Account Name: Dublin City University Conference a/c Bank: AIB Bank, 7-12 Dame St., Dublin 2, IRELAND. Account Number: 91765-215 Bank Sorting Code: 93 20 86 (IMPORTANT: Quote your NAME *and* "Ref: 421/01/121 (Autopoiesis)" in all bank transfer documents.) 3) Bank Draft (made payable to "Dublin City University"): ____ Equivalent of Irish Pounds amount in any EC currency drawn on a local bank -OR- DM, US$, or Sterling draft drawn on a UK bank. All charges to be borne by the remitter. [Page 2 of 2] ------------------------------- CUT HERE ------------------------------- From thildebr at aragorn.csee.lehigh.edu Wed May 13 14:28:30 1992 From: thildebr at aragorn.csee.lehigh.edu (Thomas H. Hildebrandt ) Date: Wed, 13 May 92 14:28:30 -0400 Subject: Preprints available Message-ID: <9205131828.AA05436@aragorn.csee.lehigh.edu> My translation of the following paper is soon to appear in Electronics, Information, and Communication in Japan (Scripta Technica, Silver Spring, MD). To obtain a copy in PostScript format (sans figure), e-mail your request directly to me. I cannot supply hardcopy, due to budgetary constraints, so please don't ask; also, the one figure is not necessary for understanding the paper. Thomas H Hildebrandt Visiting Researcher EE and CS Department Room 304 Packard Lab 19 Lehigh University Bethlehem, PA 18015 \title{Analysis of Neural Network Energy Functions Using Standard Forms \thanks{Translated from the Japanese by Thomas H. Hildebrandt. The original appeared in Denshi Joohoo Tsuushin Gakkai Ronbunshi (Transactions of the Institute of Electronics, Information, and Communication Engineers (of Japan)), V.J74-D-II, N.6, pp.804--811, June 1991}} \author{Masanori IZUMIDA\thanks{Information Engineering Department, Faculty of Engineering, Ehime University, Matsuyama-shi 790 Japan}, Kenji MURAKAMI$^{\dagger}$ and Tsunehiro AIBARA$^{\dagger}$} \begin{abstract} In this paper, we discuss a method for analyzing the energy function of a Hopfield-type neural network. In order to analyze the energy function which solves the given minimization problem, or simply, the problem, we define the standard form of the energy function. In general, a multidimensional energy function is complex, and it is difficult to investigate the energy functions arising in practice, but when placed in the standard form, it is possible to compare and contrast the forms of the energy functions themselves. Since an orthonormal transformation will not change the form of an energy function, we can stipulate that the standard form represent identically energy functions which have the same form. Further, according to the theory associated with standard forms, it is possible to partition a general energy function according to the eigenvalues of the connection weight matrix, and by analyzing each of these partial energy functions, we can investigate the properties of the actual energy function. Using this method, we analyze the energy function given by Hopfield for the Travelling Salesman Problem, and study how the minimization problem is realized in the energy function. Also, we study the mutual effects of a linear combination of energy functions and discuss the results. \end{abstract} KEYWORDS: Neural network, energy function, eigenvalue decomposition, travelling salesman problem, standard form.
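To unpack the eigenvalue partition the abstract refers to, in the familiar quadratic case (a sketch under the assumption of a symmetric connection matrix, with bias terms omitted; the paper's own standard form may differ in detail): a Hopfield-type network has energy $E(x) = -\frac{1}{2} x^{T} W x$, and writing $W = \sum_{i} \lambda_{i} v_{i} v_{i}^{T}$ in terms of eigenvalues $\lambda_{i}$ and orthonormal eigenvectors $v_{i}$ gives $E(x) = -\frac{1}{2} \sum_{i} \lambda_{i} (v_{i}^{T} x)^{2}$. The energy thus splits into independent one-dimensional terms, one per eigenvalue, each of which can be studied on its own.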
From gmk%idacrd at uunet.UU.NET Thu May 14 23:04:05 1992 From: gmk%idacrd at uunet.UU.NET (Gary M. Kuhn) Date: Thu, 14 May 92 23:04:05 EDT Subject: IEEE NNSP'92 program & registration form Message-ID: <9205150304.AA17400@> Dear Connectionists, Attached, please find the e-mail version of the advance program for IEEE NNSP'92, including registration form. This Workshop will be limited to about 200 persons. Registration is on a first-come, first-served basis. Thank you. Gary Kuhn - Publicity Gary M. Kuhn, CCRP - IDA, Thanet Road, Princeton, NJ 08540 USA. Internet: gmk%idacrd.uucp at princeton.edu UUCP: princeton!idacrd!gmk PHONE: (609) 924-4600 FAX: (609) 924-3061 *************************************************************************** ADVANCE PROGRAM IEEE Workshop on Neural Networks for Signal Processing August 31 - September 2, 1992 Copenhagen The Danish Computational Neural Network Center CONNECT and The Electronics Institute, The Technical University of Denmark In cooperation with the IEEE Signal Processing Society Invitation to Participate in the 1992 IEEE Workshop on Neural Networks for Signal Processing. The members of the Workshop Organizing Committee welcome you to the 1992 IEEE Workshop on Neural Networks for Signal Processing. The 1992 Workshop is the second workshop held in this area. The first took place in 1991 in Princeton, NJ, USA. The Workshop is organized by the IEEE Technical Committee for Neural Networks and Signal Processing. The purpose of the Workshop is to foster informal technical interaction on topics related to the application of neural networks to signal processing problems. Workshop Location The 1992 Workshop will be held at Hotel Marienlyst, Ndr. Strandvej 2, DK-3000 Helsingoer, Denmark, tel: +45 49202020, fax: +45 49262626. Helsingoer is a small town a little north of Copenhagen. The Workshop banquet will be held on Tuesday evening, September 1, at Kronborg Castle, which is situated close to the workshop hotel. Workshop Proceedings The proceedings of the Workshop, entitled "Neural Networks for Signal Processing - Proceedings of the 1992 IEEE Workshop", will be distributed at the Workshop. The registration fee covers one copy of the proceedings. Registration Information The Workshop registration information is given at the end of this program. It is possible to apply for a limited number of partial travel and registration grants via the Program Chair. The address is given at the end of this program. 
Program Overview

Time     | Monday 31/8-92      | Tuesday 1/9-92        | Wednesday 2/9-92
==========================================================================
8:15 AM  | Opening Remarks     |                       |
--------------------------------------------------------------------------
8:30 AM  | Opening Keynote     | Keynote Address       | Keynote Address
         | Address             |                       |
--------------------------------------------------------------------------
9:30 AM  | Learning & Models   | Speech 2              | Nonlinear Filtering
         | (Lecture)           | (Lecture)             | by Neural Networks
         |                     |                       | (Lecture)
--------------------------------------------------------------------------
11:00 AM | Break               | Break                 | Break
--------------------------------------------------------------------------
11:30 AM | Speech 1            | Learning, System-     | Image Processing and
         | (Poster preview)    | identification and    | Pattern Recognition
         |                     | Spectral Estimation   | (Poster preview)
         |                     | (Poster preview)      |
--------------------------------------------------------------------------
12:30 PM | Lunch               | Lunch                 | Lunch
--------------------------------------------------------------------------
1:30 PM  | Speech 1            | Learning, System-     | Image Processing and
         | (Poster)            | identification and    | Pattern Recognition
         |                     | Spectral Estimation   | (Poster)
         |                     | (Poster)              |
--------------------------------------------------------------------------
2:45 PM  | Break               | Break                 | Break
--------------------------------------------------------------------------
3:15 PM  | System              | Image Processing and  | Application Driven
         | Implementations     | Analysis              | Neural Models
         | (Lecture)           | (Lecture)             | (Lecture)
--------------------------------------------------------------------------
Evening  | Panel Discussion    | Visit and Banquet at  |
         | (8 PM)              | Kronborg Castle       |
==========================================================================

Evening Events A Pre-Workshop reception will be held at Hotel Marienlyst at 7:00 PM on Sunday, August 30, 1992. Tuesday, September 1, 1992, 5:00 PM visit to Kronborg Castle and 7:00 PM banquet at the Castle. TECHNICAL PROGRAM Monday, August 31, 1992 8:15 AM; Opening Remarks: S.Y. Kung, F. Fallside, Workshop Chairs, Benny Lautrup, connect, Denmark, John Aa. Sorensen, Workshop Program Chair. 8:30 AM; Opening Keynote: System Identification Perspective of Neural Networks, Professor Lennart Ljung, Department of Electrical Engineering, Linkoping University, Sweden. 9:30 AM; Learning & Models (Lecture Session) Chair: Jenq-Neng Hwang, Department of Electrical Engineering, University of Washington, Seattle, WA, USA. 1. "Towards Faster Stochastic Gradient Search", Christian Darken, John Moody, Yale University, New Haven, CT, USA. 2. "Inserting Rules into Recurrent Neural Networks", C.L. Giles, NEC Research Inst., Princeton, C.W. Omlin, Rensselaer Polytechnic Institute, Troy, NY, USA. 3. "On the Complexity of Neural Networks with Sigmoidal Units", Kai-Yeung Siu, University of California, Irvine, CA, Vwani Roychowdhury, Purdue University, West Lafayette, IN, Thomas Kailath, Stanford University, Stanford, CA, USA. 4. "A Generalization Error Estimate for Nonlinear Systems", Jan Larsen, Technical University of Denmark, Lyngby, Denmark. 11:00 AM; Coffee break 11:30 AM; Speech 1, (Oral previews of the afternoon poster session) Chair: Paul Dalsgaard, Institute of Electronic Systems, Aalborg University, Denmark. 1. "Interactive Query Learning for Isolated Speech Recognition", Jenq-Neng Hwang, Hang Li, University of Washington, Seattle, WA, USA. 2. 
"Adaptive Template Method for Speech Recognition", Yadong Liu, Yee-Chun Lee, Hsing-Hen Chen, Guo-Zheng Sun, University of Maryland, College Park, MD, USA. 3. "Fuzzy Partition Models and Their Effect in Continous Speech Recognition", Yoshinaga Kato, Masahide Sugiyama, ATR Interpreting Telephony Research Laboratories, Kyoto, Japan. 4. "Empirical Risk Optimization: Neural Networks and Dynamic Programming", Xavier Driancourt, Patrick Gallinari, Universite' de Paris Sud, Orsay, France. 5. "Text-Independent Talker Identification System Combining Connectionist and Conventional Models", Thierry Artieres, Younes Bennani, Patrick Gallinari, Universite' de Paris Sud, Orsay, France. 6. "A Two Layer Kohonen Neural Network using a Cochlear Model as a Front-End Processor for a Speech Recognition System", S. Lennon, E. Ambikairajah, Regional Technical College, Athlone, Ireland. 7. "Self-Structuring Hidden Control Neural Models", Helge B.D. Sorensen, Uwe Hartmann, Institute of Electronic Systems, Aalborg University, Aalborg, Denmark. 8. "Connectionist-Based Acoustic Word Models", Chuck Wooters, Nelson Morgan, International Computer Science Institute, Berkeley, CA, USA. 9. "Maximum Mutual Information Training of a Neural Predictive-Based HMM Speech Recognition System", K. Hassanein, L. Deng, M. Elmasry, University of Waterloo, Waterloo, Ontario, Canada. 10. "Training Continous Density Hidden Markov Models in Association with Self-Organizing Maps and LVQ", Mikko Kurimo, Kari Torkkola, Helsinki University of Technology, Finland. 11. "Unsupervised Sequence Classification", Jorg Kindermann, GMD-FIT.KI, Schloss Birlinghoven, Germany, Christoph Windheuser, Carnegie Mellon University, Pittsburg, PA, USA. 12. "A Mathematical Model for Speech Processing", Anna Esposito, Universita di Salerno, Salvatore Rampone, I.R.S.I.P-C.N.R, Napoli, Cesare Stanzione, International Institute for Advanced Scientific Studies, Salerno, Roberto Tagliaferri, Universita di Salerno, Italy. 13. "A New Voice and Pitch Estimator based on the Neocognitron", J.R.E. Moxham, P.A. Jones, H. McDermott, G.M. Clark, Australian Bionic Ear and Hearing Research Institute, East Melbourne 3002, Victoria, Australia. 12:30 PM; Lunch 1:30 PM; Speech 1, (Poster Session) 2:45 PM; Break 3:15 PM; System Implementations (Lecture session) Chair: Yu Hen Hu, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI, USA. 1. "CCD's for Pattern Recognition", Alice M. Chiang, Lincoln Laboratory, Massachusetts Institute of Technology, MA, USA. 2. "An Electronic Parallel Neural CAM for Decoding", Joshua Alspector, Bellcore, Morristown, NJ, Anthony Jayakumar Bon Ngo, Cornell, Ithaca, NY, USA. 3. "Netmap-Software Tool for Mapping Neural Networks onto Parallel Computers", K. Wojtek Przytula, Hughes Research Labs, Malibu, CA, Viktor Prasanna, University of California, Wei-Ming Lin, Mississippi State University, USA. 4. "A Fast Simulator for Neural Networks on DSPs of FPGAs", M. Ade, R. Lauwereins, J. Peperstraete, ESAT-Elektrotechniek, Heverlee, Belgium. Tuesday, September 1, 1992 8:30 AM; Keynote Address: "Capacity Control in Classifiers for Pattern Recognition" Dr. Sara Solla, AT\&T Bell Laboratories, Murray Hill, NJ, USA. 9:30 AM; Speech 2, (Lecture Session) Chair: S. Katagiri, ATR Auditory and Visual Perception Research Laboratories, Kyoto, Japan. 1. "Speech Representations for Recognition by Neural Networks", B.H. Juang, AT&T Bell Laboratories, Murray Hill, NJ, USA. 2. 
"Classification with a Codebook-Excited Neural Network" Lizhong Wu, Frank Fallside, Cambridge University, UK. 3. "Minimal Classification Error Optimization for a Speaker Mapping Neural Network", Masahide Sugiyama, Kentaro Kurinami, ATR Interpreting Telephony Research Laboratories, Kyoto, Japan. 4. "On the Identification of Phonemes Using Acoustic-Phonetic Features Derived by a Self-Organising Neural Network", Paul Dalsgaard, Ove Andersen, Rene Jorgensen, Institute of Electronic Systems, Aalborg University, Denmark. 11:00 AM; Coffe break 11:30 AM; Learning, System Identification and Spectral Estimation. (Oral previews of afternoon poster session) Chair: Lars Kai Hansen, connect, Electronics Institute, Technical University of Denmark, Lyngby, Denmark. 1. "Prediction of Chaotic Time Series Using Recurrent Neural Networks", Jyh-Ming Kuo, Jose C. Principe, Bert deVries, University of Florida, FL, USA. 2. "Nonlinear System Identification using Multilayer Perceptrons with Locally Recurrent Synaptic Structure", Andrew Back, Ah Chung Tsoi, University of Queensland, Australia. 3. "Chaotic Signal Emulation using a Recurrent Time Delay Neural Network", Michael R. Davenport, Department of Physics, U.B.C., Shawn P. Day, Department of Electrical Engineering, U.B.C., Vancouver, Canada. 4. "Prediction with Recurrent Networks", Niels Holger Wulff, Niels Bohr Institute, John A. Hertz, Nordita, Copenhagen, Denmark. 5. "Learning of Sinusoidal Frequencies by Nonlinear Constrained Hebbian Algorithms", Juha Karhunen, Jyrki Joutsensalo, Helsinki University of Technology, Finland. 6. "A Neural Feed-Forward Network with a Polynomial Nonlinearity", Nils Hoffmann, Technical University of Denmark, Lyngby, Denmark. 7. "Application of Frequency-Domain Neural Networks to the Active Control of Harmonic Vibrations in Nonlinear Structural Systems", T.J. Sutton, S.J. Elliott University of Southampton, England. 8. "Generalization in Cascade-Correlation Networks", Steen Sjoegaard, Aarhus University, Denmark. 9. "Noise Density Estimation Using Neural Networks", M.T. Musavi, D.M. Hummels, A.J. Laffely, S.P. Kennedy, University of Maine, Maine, USA. 10. "An Efficient Model for Systems with Complex Responses", Volker Tresp, Ira Leuthausser, Ralph Neuneier, Martin Schlang, Siemens AG, Munchen, Klaus Abraham-Fuchs, Wolfgang Harer, Siemens, Erlangen, Germany. 11. "Generalized Feedforward Filters with Complex Poles", T. Oliveira e Silva, P. Guedes de Oliveira, Universidade de Aveiro, Aveiro, Portugal, J.C. Principe, University of Florida, Gainsville, FL, B. De Vries, David Sarnoff Research Center, Princeton, NJ, USA. 12. "A Simple Genetic Algorithm Applied to Discontinous Regularization", John Bach Jensen, Mads Nielsen, DIKU, University of Copenhagen, Denmark. 12:30 PM; Lunch 1:30 PM; Learning, System Identification and Spectral Estimation. (Poster Session) 2:45 PM; Break 3:15 PM; Image Processing and Analysis. (Lecture session) Chair: K. Wojtek Przytula, Huges Research Labs, Malibu, CA, USA. 1. "Decision-Based Hierarchical Perceptron (HiPer) Networks with Signal/Image Classification Applications", S.Y. Kung, J.S. Taur, Princeton University, NJ, USA. 2. "Lateral Inhibition Neural Networks for Classification of Simulated Radar Imagery", Charles M. Bachmann, Scott A. Musman, Abraham Schultz, Naval Research Laboratory, Washington, USA. 3. "Globally Trained Neural Network Architecture for Image Compression", L. Schweizer, G. Parladori, Alcatel Italia, Milano, G.L. Sicuranza, Universita'di Trieste, Italy. 4. 
"Robust Identification of Human-Faces Using Mosaic Pattern and BPN", Makoto Kosugi, NTT Human Interface Laboratories, Take Yokosukashi Kanagawaken, Japan. 5:00 PM; Departure to Kronborg Castle 7:00 PM; Banquet at Kronborg Castle Wednesday, September 2, 1992 8:30 AM; Keynote Address: "Application Perspectives of the DARPA Artificial Neural Network Technology Program" Dr. Barbara Yoon, DARPA/MTO, Arlington, VA, USA. 9:30 AM; "Nonlinear Filtering by Neural Networks (Lecture Session)" Chair: Gary M. Kuhn, CCRP-IDA, Princeton, NJ, USA. 1. "Neural Networks and Nonparametric Regression", Vladimir Cherkassky, University of Minnesota, Minneapolis, USA. 2. "A Partial Analysis of Stochastic Convergence for a Generalized Two-Layer Perceptron using Backpropagation", Jeffrey L. Vaughn, Neil J. Bershad, University of California, Irvine, John J. Shynk, University of California, Santa Barbara, CA, USA. 3. "A Recurrent Neural Network for Nonlinear Time Series Prediction - A Comparative Study", S.S. Rao, S. Sethuraman, V. Ramamurti, Villanova University, Villanova, PA, USA. 4. "Dispersive Networks for Nonlinear Adaptive Filtering", Shawn P. Day, Michael R. Davenport, University of British Columbia, Vancouver, Canada. 11:00 AM; Coffe break 11:30 AM; Image Processing and Pattern Recognition (Oral previews of afternoon poster sessions) Chair: Benny Lautrup, connect, Niels Bohr Institute, Copenhagen, Denmark. 1. "Unsupervised Multi-Level Segmentation of Multispectral Images", R.A. Fernandes, Institute for Space and Terrestrial Science, Richmond Hill, M.E. Jernigan, University of Waterloo, Waterloo, Ontario, Canada. 2. "Autoassociative Neural Networks for Image Compression: A Massively Parallel Implementation", Andrea Basso, Ecole Polytechnique Federale de Lausanne, Switzerland. 3. "Compression of Subband-Filtered Images via Neural Networks", S. Carrato, S. Marsi, University of Trieste, Trieste, Italy. 4. "An Adaptive Neural Network Model for Distinguishing Line- and Edge Detection from Texture Segregation", M.M. Van Hulle, T.Tollenaere, Katholieke Universiteit Leuven, Leuven, Belgium. 5. "Adaptive Segmentation of Textured Images using Linear Prediction and Neural Networks", Stefanos Kollias, Levon Sukissian, National Technical University of Athens, Athen, Greece. 6. "Neural Networks for Segmentation and Clustering of Biomedical Signals", Martin F. Schlang, Volker Tresp, Siemens, Munich, Klaus Abraham-Fuchs, Wolfgang Harer, Siemens, Erlangen, Germany. 7. "Some New Results in Nonlinear Predictive Image Coding Using Neural Networks", Haibo Li, Linkoping University, Linkoping, Sweden. 8. "A Neural Network Approach to Multi-Sensor Point-of-Interest Detection", Ajay N. Jain, Alliant Techsystems Inc., Hopkins, MN, USA. 9. "Supervised Learning on Large Redundant Trainingsets", Martin F. Moller, Aarhus University, Aarhus, Denmark. 10. "Neural Network Detection of Small Moving Radar Targets in an Ocean Environment", Jane Cunningham, Simon Haykin, McMaster University, Hamilton, Ontario, Canada. 11. "Discrete Neural Networks and Fingerprint Identification", Steen Sjoegaard, Aarhus University, Aarhus, Denmark. 12. "Image Recognition using a Neural Network", Keng-Chung Ho, Bin-Chang Chieu, National Taiwan Institute of Technology, Taipei, Taiwan, Republic of China. 13. "Adaptive Training of Feedback Neural Networks for Non-Linear Filtering and Control: I - A General Approach." O. Nerrand, P. Roussel-Ragot, L. Personnaz, G. Dreyfus, Ecole Superieure de Physique et de Chimie Industrielles, Paris, S. 
S. Marcos, Ecole Superieure d'Electricite, Gif Sur Yvette, France.

12:30 PM; Lunch

1:30 PM; Image Processing and Pattern Recognition (Poster Session)

2:45 PM; Break

3:15 PM; Application Driven Neural Models (Lecture Session)
Chair: Sathyanarayan S. Rao, Department of Electrical Engineering, Villanova University, Villanova, PA, USA.
1. "Artificial Neural Network for ECG Arrhythmia Monitoring", Y.H. Hu, W.J. Tompkins, Q. Xue, University of Wisconsin-Madison, WI, USA.
2. "Constructing Neural Networks for Contact Tracking", Christopher M. DeAngelis, Naval Underwater Warfare Center, Rhode Island, Robert W. Green, University of Massachusetts Dartmouth, North Dartmouth, MA, USA.
3. "Adaptive Decision-Feedback Equalizer Using Forward-Only Counterpropagation Networks for Rayleigh Fading Channels", Ryuji Kaneda, Takeshi Manabe, Satoshi Fujii, ATR Optical and Radio Communications Research Laboratories, Kyoto, Japan.
4. "Ensemble Methods for Handwritten Digit Recognition", Lars Kai Hansen, Technical University of Denmark, Christian Liisberg, Risoe National Laboratory, Denmark, Peter Salamon, San Diego State University, San Diego, CA, USA.

Workshop Committee

General Chairs:
S.Y. Kung, Dept. of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA. email: kung at princeton.edu
F. Fallside, Engineering Department, Cambridge University, Cambridge CB2 1PZ, UK. email: fallside at eng.cam.ac.uk

Program Chair:
John Aa. Sorensen, Electronics Institute, Building 349, Technical University of Denmark, DK-2800 Lyngby, Denmark. email: jaas at dthei.ei.dth.dk

Proceedings Chair:
Candace Kamm, Bellcore, Box 1910, 445 South Street, Morristown, NJ 07960, USA. email: cak at thumper.bellcore.com

Publicity Chair:
Gary M. Kuhn, CCRP - IDA, Thanet Road, Princeton, NJ 08540, USA. email: gmk%idacrd.uucp at princeton.edu

Program Committee: Ronald de Beer, Jenq-Neng Hwang, John E. Moody, John Bridle, Yu Hen Hu, Carsten Peterson, Erik Bruun, B.H. Juang, Sathyanarayan S. Rao, Paul Dalsgaard, S. Katagiri, Peter Salamon, Lee Giles, T. Kohonen, Christian J. Wellekens, Lars Kai Hansen, Gary M. Kuhn, Barbara Yoon, Steffen Duus Hansen, Benny Lautrup, John Hertz, Peter Koefoed Moeller.

----------------------------------------------------------------------------

REGISTRATION FORM: 1992 IEEE Workshop on Neural Networks for Signal Processing. August 31 - September 2, 1992.

Registration fee including single room and meals at Hotel Marienlyst: Before July 15, 1992, Danish Kr. 5200. After July 15, 1992, Danish Kr. 5350. Companion fee at Hotel Marienlyst: Danish Kr. 1160. Registration fee without hotel room: Before July 15, 1992, Danish Kr. 2800. After July 15, 1992, Danish Kr. 2950.

The registration fee of Danish Kr. 5200 (5350) covers:
. Attendance at all workshop sessions.
. Workshop Proceedings.
. Pre-Workshop reception on Sunday evening, August 30, 1992.
. Hotel single room from August 30 to September 2, 1992 (3 nights).
. 3 breakfasts, 3 lunches, 1 dinner, 1 banquet.
. Coffee breaks and refreshments.
. A Companion fee of additional Danish Kr. 1160 covers double room at Hotel Marienlyst and the Pre-Workshop reception, breakfasts and the banquet for 2 persons.

The registration fee without hotel room, Danish Kr. 2800 (2950), covers:
. Attendance at all workshop sessions.
. Workshop Proceedings.
. 3 lunches, 1 dinner, 1 banquet.
. Coffee breaks and refreshments.

Further information on registration: Ms. Anette Moeller-Uhl, The Niels Bohr Institute, tel: +45 3142 1616 ext. 388, fax: +45 3142 1016, email: uhl at connect.nbi.dk
Please complete this form (type or print clearly) and mail with payment (by check, do not include cash) to: NNSP-92, CONNECT, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen, Denmark.

Name ------------------------------------------------------------------
     Last                    First                   Middle

Firm or University ----------------------------------------------------

Mailing Address -------------------------------------------------------

-----------------------------------------------------------------------
     Country                 Phone                   Fax

From nwanahs at cs.keele.ac.uk Fri May 15 16:14:13 1992
From: nwanahs at cs.keele.ac.uk (Hyacinth Nwana)
Date: Fri, 15 May 92 16:14:13 BST
Subject: AISB Call for Tutorial/Workshop Proposals
Message-ID: <638.9205151514@des.cs.keele.ac.uk>

Call for Tutorial & Workshop Proposals: AISB-93
9th Biennial Conference on Artificial Intelligence
University of Birmingham, England
29th March -- 2nd April 1993
Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB)

The AISB-93 Programme Committee invites proposals for the Tutorial & Workshop Programme of the 9th Biennial Conference on Artificial Intelligence (AISB-93) to be held at the University of Birmingham, England, during 29th March - 2nd April 1993. The first day and a half of the Conference are allocated to workshops and tutorials. Proposals for full-day or half-day tutorials/workshops will be considered. They may be offered both on standard topics and on new and more advanced aspects of Artificial Intelligence or Simulation of Behaviour.

The Technical Programme of AISB-93 (Programme Chairman: Aaron Sloman) will include a special theme on:

* Prospects for AI as the general science of intelligence

Tutorials/Workshops related to this theme would be particularly welcome. Proposals from an individual or pair of presenters will be considered. Anyone interested in presenting a tutorial should submit a proposal to the AISB-93 Tutorial/Workshop Organiser, Dr Hyacinth Nwana, at the address below.

Submission:
----------
A tutorial proposal should contain the following information:
1. Tutorial/Workshop Title.
2. A brief description of the tutorial/workshop, suitable for inclusion in the conference brochure.
3. A detailed outline of the tutorial/workshop. This should include the necessary background and the potential target audience for the tutorial/workshop.
4. A brief resume of the presenter(s). This should include: background in the tutorial/workshop area, references to published work in the topic area (ideally, a published tutorial-level article on the subject), and teaching experience, including previous conference tutorials or short courses presented.
5. Administrative information. This should include: name, mailing address, phone number, Fax, and email address if available. In the case of multiple presenters, information for each presenter should be provided, but one presenter should be identified as the principal contact.

Dates:
------
Proposals must be received by September 17th, 1992. Decisions about topics and speakers will be made by November 5th, 1992. Speakers should be prepared to submit completed course materials by February 4th, 1993.

Proposals should be sent to:
Dr. Hyacinth S. Nwana
Department of Computer Science
University of Keele
Keele, Staffordshire ST5 5BG, UK
Email: JANET: nwanahs at uk.ac.keele.cs
       BITNET: nwanahs%cs.kl.ac.uk at ukacrl
       UUCP: ...!ukc!kl-cs!nwanahs
       OTHER: nwanahs at cs.keele.ac.uk
Tel: (+44) (0) 782 583413
Fax: (+44) (0) 782 713082

All other correspondence and queries regarding the conference should be sent to the Local Organiser, Donald Peterson.

Dr. Donald Peterson
School of Computer Science
The University of Birmingham
Edgbaston
Birmingham B15 2TT, UK
Email: aisb93-prog at cs.bham.ac.uk (for communications relating to submission of papers)
       aisb93-delegates at cs.bham.ac.uk (for info. on accommodation, meals, programme, etc.)
Tel: (+44) (0) 21 414 3711
Fax: (+44) (0) 21 414 4281

From codelab at psych.purdue.edu Fri May 15 18:14:24 1992
From: codelab at psych.purdue.edu (Coding Lab Wasserman)
Date: Fri, 15 May 92 17:14:24 EST
Subject: preprint
Message-ID: <9205152214.AA25275@psych.purdue.edu>

A preprint has been deposited in the connectionists archive. Its abstract and retrieval information are given here:

------------------------------------------------------------------
ISOMORPHISM, TASK DEPENDENCE, AND THE MULTIPLE MEANING THEORY OF NEURAL CODING

Gerald S. Wasserman
Purdue University
e-mail: codelab at psych.purdue.edu

The neural coding problem is defined and several possible answers to it are reviewed. A widely accepted answer descends from early suggestions that neural activity, in general, is isomorphic with sensation and that the biological signal resident in the axon of a neuron, in particular, is given by its frequency of firing. More recent data are reviewed which indicate that the pattern of neural responses may also be informative. Such data led to the formulation of the multiple meaning theory, which suggests that neural pattern may encode different information features in single responses. After a period in which attention turned elsewhere, the multiple meaning theory has quite recently been revived and has stimulated novel and careful experimental investigations. A corollary theory, the task dependence hypothesis, suggests that these information-bearing multiple response features are accessed differentially in different behavioral tasks. These theories place stringent temporal requirements on the generation and analysis of neural responses. Recent data are examined indicating that both requirements may indeed be satisfied by the nervous system. Finally, several methods of experimentally testing such coding theories are described; they involve manipulating the biological signals of neurons and observing the effect of these manipulations on behavior.

KEY WORDS: biological signals, code, neuron, receptor, retina, cortex, brain, neural coding, sensory coding, frequency coding, impulse coding, bio-electric potentials, action potentials, synaptic potentials, receptor potentials, anesthesia
------------------------------------------------------------------

ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: your e-mail address
ftp> cd pub/neuroprose
ftp> binary
ftp> get wasserman.mult_mean.ps.Z
ftp> quit
uncompress wasserman.mult_mean.ps.Z
lpr -s wasserman.mult_mean.ps

If the printer for this job resides on a remote machine, this large (i.e., graphics-intensive) file may require that an operator issue the print command directly from the remote console.
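An aside for regular archive users: the interactive session above is easy to script. Below is a minimal sh sketch, not part of the original announcement; the script name, the use of the BSD ftp client's -n flag (which suppresses auto-login so the "user" command can supply the anonymous login), and the $USER@`hostname` password are all illustrative assumptions.

#!/bin/sh
# getprint.sh (hypothetical name) -- fetch a compressed PostScript
# preprint from the neuroprose archive, uncompress it, and print it.
# Usage: getprint.sh wasserman.mult_mean.ps.Z
FILE="$1"
ftp -n archive.cis.ohio-state.edu << EOF
user anonymous $USER@`hostname`
binary
cd pub/neuroprose
get $FILE
quit
EOF
uncompress "$FILE"
lpr -s `basename "$FILE" .Z`

The same pattern works for any file announced on this list; only the argument changes, and the remote-printer caveat above still applies.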
From stucki at cis.ohio-state.edu Sun May 17 00:25:47 1992
From: stucki at cis.ohio-state.edu (David J Stucki)
Date: Sun, 17 May 92 00:25:47 -0400
Subject: preprint available from the neuroprose archive
Message-ID: <9205170425.AA17489@retina.cis.ohio-state.edu>

The following paper has been placed in the Neuroprose archive.

*******************************************************************
Fractal (Reconstructive Analogue) Memory

David J. Stucki and Jordan B. Pollack
Laboratory for Artificial Intelligence Research
Department of Computer and Information Science
The Ohio State University
Columbus, OH 43210

Abstract

This paper proposes a new approach to mental imagery that has the potential for resolving an old debate. We show that the methods by which fractals emerge from dynamical systems provide a natural computational framework for the relationship between the "deep" representations of long-term visual memory and the "surface" representations of the visual array, a distinction proposed by Kosslyn (1980). The concept of an iterated function system (IFS) as a highly compressed representation for a complex topological set of points in a metric space (Barnsley, 1988) is embedded in a connectionist model for mental imagery tasks. Two advantages of this approach over previous models are the capability for topological transformations of the images, and the continuity of the deep representations with respect to the surface representations.

To be published in the Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, Bloomington, Indiana, July/August, 1992.
********************************************************************

Filename: stucki.frame.ps.Z

----------------------------------------------------------------
FTP INSTRUCTIONS

unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get stucki.frame.ps.Z
ftp> bye
unix% zcat stucki.frame.ps.Z | lpr
(or whatever *you* do to print a compressed PostScript file)
----------------------------------------------------------------

dave...

David J Stucki
537 Harley Dr. #6, Columbus, OH 43202
c/o Dept. Computer and Information Science, 2036 Neil Ave., Columbus, OH 43210
stucki at cis.ohio-state.edu

There is no place in science for ideas, there is no place in epistemology for knowledge, and there is no place in semantics for meanings. -- W.V. Quine

From BRUNAK at nbivax.nbi.dk Sun May 17 07:31:00 1992
From: BRUNAK at nbivax.nbi.dk (BRUNAK@nbivax.nbi.dk)
Date: Sun, 17 May 1992 13:31 +0200
Subject: NetGene
Message-ID: <01GK4BMAE640CIR5LI@nbivax.nbi.dk>

******** Announcement of the NetGene Mail-server: *********

DESCRIPTION:

The NetGene mail server is a service producing neural network predictions of splice sites in vertebrate genes, as described in: Brunak, S., Engelbrecht, J., and Knudsen, S. (1991) Prediction of Human mRNA Donor and Acceptor Sites from the DNA Sequence. Journal of Molecular Biology, 220, 49-65.

ABSTRACT OF JMB ARTICLE:

Artificial neural networks have been applied to the prediction of splice site location in human pre-mRNA. A joint prediction scheme, where prediction of transition regions between introns and exons regulates a cutoff level for splice site assignment, was able to predict splice site locations with confidence levels far better than previously reported in the literature.
The problem of predicting donor and acceptor sites in human genes is hampered by the presence of numerous false positives; the paper examines the distribution of these false splice sites and links it to a possible scenario for the splicing mechanism in vivo. When the presented method detects 95% of the true donor and acceptor sites, it makes less than 0.1% false donor site assignments and less than 0.4% false acceptor site assignments. For the large data set used in this study, this means that on the average there are one and a half false donor sites per true donor site and six false acceptor sites per true acceptor site. With the joint assignment method, more than a fifth of the true donor sites and around one fourth of the true acceptor sites could be detected without accompaniment of any false positive predictions. Highly confident splice sites could not be isolated with a widely used weight matrix method or by separate splice site networks. A complementary relation between the confidence levels of the coding/non-coding and the separate splice site networks was observed, with many weak splice sites having sharp transitions in the coding/non-coding signal and many stronger splice sites having more ill-defined transitions between coding and non-coding.

INSTRUCTIONS:

In order to use the NetGene mail-server:

1) Prepare a file with the sequence in a format similar to the fasta format: the first line must start with the symbol '>', and the next word on that line is used as the sequence identifier. The following lines should contain the actual sequence, consisting of the symbols A, T, U, G, C and N. U is converted to T, and letters not mentioned are converted to N. All letters are converted to upper case. Numbers, blanks and other non-letter symbols are skipped. The lines should not be longer than 80 characters. The minimum length analyzed is 451 nucleotides, and the maximum is 100000 nucleotides (your mail system may have a lower limit for the maximum size of a message). Due to the non-local nature of the algorithm, sites closer than 225 nucleotides to the ends of the sequence will not be assigned.

2) Mail the file to netgene at virus.fki.dth.dk. The response time will depend on system load. If nothing else is running on the machine, the speed is about 1000 nucleotides/min. It may take several hours before you get the answer, so please do not resubmit a job if you get no answer within a short while.
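As an illustration of the format rules in step 1, here is a minimal sh sketch for preparing a submission file from a raw sequence dump; it is not part of the announcement, and "raw.txt" and the "MYSEQ" identifier are purely illustrative. Since the server itself upper-cases letters and maps U to T and unknown letters to N, the client only has to supply the '>' header line, strip blanks and position numbers, and keep the lines under the 80-character limit.

# Prepare myseq.seq: a '>' header line plus the sequence, with blanks
# and position numbers stripped and lines wrapped under the 80-char limit.
( echo ">MYSEQ"
  tr -d ' \t0-9' < raw.txt | fold -w 70 ) > myseq.seq
# Submit to the server (address as given in step 2):
mail netgene@virus.fki.dth.dk < myseq.seq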
REFERENCING AND FURTHER INFORMATION

Publication of output from NetGene must be referenced as follows: Brunak, S., Engelbrecht, J., and Knudsen, S. (1991) Prediction of Human mRNA Donor and Acceptor Sites from the DNA Sequence. Journal of Molecular Biology, 220, 49-65.

CONFIDENTIALITY

Your submitted sequence will be deleted automatically immediately after processing by NetGene.

PROBLEMS AND SUGGESTIONS:

Should be addressed to:
Jacob Engelbrecht
e-mail: engel at virus.fki.dth.dk
Department of Physical Chemistry
The Technical University of Denmark
Building 206
DK-2800 Lyngby
Denmark
phone: +45 4288 2222 ext. 2478 (operator)
phone: +45 4593 1222 ext. 2478 (tone)
fax: +45 4288 0977

EXAMPLE:

A file test.seq is prepared with an editor with the following contents:

>HUMOPS
GGATCCTGAGTACCTCTCCTCCCTGACCTCAGGCTTCCTCCTAGTGTCACCTTGGCCCCTCTTAGAAGC
CAATTAGGCCCTCAGTTTCTGCAGCGGGGATTAATATGATTATGAACACCCCCAATCTCCCAGATGCTG
.
Here come more lines with sequence.
.
.

This is sent to the NetGene mail-server, on a Unix system, like this:

mail netgene at virus.fki.dth.dk < test.seq

In return an answer similar to this is produced:

From plaut+ at CMU.EDU Mon May 18 09:46:23 1992
From: plaut+ at CMU.EDU (David Plaut)
Date: Mon, 18 May 92 09:46:23 -0400
Subject: preprint available in neuroprose archive
Message-ID: <2034.706196783@K.GP.CS.CMU.EDU>

******************* PLEASE DO NOT FORWARD TO OTHER BBOARDS *******************

Relearning after Damage in Connectionist Networks: Implications for Patient Rehabilitation

David C. Plaut
Department of Psychology
Carnegie Mellon University

To appear in the Proceedings of the 14th Annual Conference of the Cognitive Science Society, Bloomington, IN, August, 1992.

Abstract

Connectionist modeling is applied to issues in cognitive rehabilitation, concerning the degree and speed of recovery through retraining, the extent of generalization to untreated items, and how treated items are selected to maximize this generalization. A network previously used to model impairments in mapping orthography to semantics is retrained after damage. The degree of relearning and generalization varies considerably for different lesion locations, and has interesting implications for understanding the nature and variability of recovery in patients. In a second simulation, retraining on words whose semantics are atypical of their category yields more generalization than retraining on more prototypical words, suggesting a surprising strategy for selecting items in patient therapy to maximize recovery.

To retrieve:
unix> ftp 128.146.8.52 # archive.cis.ohio-state.edu
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get plaut.relearning.ps.Z
ftp> quit
unix> zcat plaut.relearning.ps.Z | lpr

Thanks again to Jordan Pollack for maintaining the archive....

David Plaut
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA 15213-3890
plaut+ at cmu.edu
412/268-5145

From rupa at dendrite.cs.colorado.edu Mon May 18 16:08:22 1992
From: rupa at dendrite.cs.colorado.edu (Sreerupa Das)
Date: Mon, 18 May 1992 14:08:22 -0600
Subject: Preprint available
Message-ID: <199205182008.AA22031@dendrite.cs.Colorado.EDU>

The following paper has been placed in the Neuroprose archive.

---------------------------------------------------------------------------
Learning Context-free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory
---------------------------------------------------------------------------

Sreerupa Das, University of Colorado, Boulder, CO 80309-430, rupa at cs.colorado.edu
C. Lee Giles, NEC Research Institute, Princeton, NJ 08540, giles at research.nec.nj.com
Guo-Zheng Sun, University of Maryland, College Park, MD 20742, sun at sunext.umiacs.umd.edu

ABSTRACT

This work describes an approach for inferring Deterministic Context-free (DCF) Grammars in a Connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying pushdown automaton from examples of grammatical and non-grammatical strings. Not only does the network learn the state transitions in the automaton, it also learns the actions required to control the stack.
In order to use continuous optimization methods, we develop an analog stack which reverts to a discrete stack by quantization of all activations, after the network has learned the transition rules and stack actions. We further show an enhancement of the network's learning capabilities by providing hints. In addition, an initial comparative study of simulations with first, second and third order recurrent networks has shown that the increased degrees of freedom in higher order networks improve generalization but not necessarily learning speed.

(To be published in the Proceedings of The Fourteenth Annual Conference of The Cognitive Science Society, July 29 -- August 1, 1992, Indiana University.)

Questions and comments will be appreciated.

=====================================================================
File name: das.cfg_induction.ps.Z
=====================================================================
To retrieve the paper by anonymous ftp:
unix> ftp archive.cis.ohio-state.edu # (128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get das.cfg_induction.ps.Z
ftp> quit
unix> uncompress das.cfg_induction.ps.Z
unix> lpr das.cfg_induction.ps
=====================================================================

Sreerupa Das
Department of Computer Science
University of Colorado at Boulder
Boulder, CO 80309-430
email: rupa at cs.colorado.edu

----------------------------------------------------------------------

From shultz at hebb.psych.mcgill.ca Tue May 19 09:35:53 1992
From: shultz at hebb.psych.mcgill.ca (Tom Shultz)
Date: Tue, 19 May 92 09:35:53 EDT
Subject: No subject
Message-ID: <9205191335.AA22593@hebb.psych.mcgill.ca>

Subject: Abstract
Date: 19 May '92

Please do not forward this announcement to other boards. Thank you.

-------------------------------------------------------------
The following paper has been placed in the Neuroprose archive at Ohio State University:

An Investigation of Balance Scale Success

William C. Schmidt and Thomas R. Shultz
Department of Psychology
McGill University

Abstract

The success of a connectionist model of cognitive development on the balance scale task is due to manipulations which impede convergence of the back-propagation learning algorithm. The model was trained at different levels of a biased training environment with exposure to a varied number of training instances. The effects of weight updating method and modifying the network topology were also examined. In all cases in which these manipulations caused a decrease in convergence rate, there was an increase in the proportion of psychologically realistic runs. We conclude that incremental connectionist learning is not sufficient for producing psychologically successful connectionist balance scale models, but must be accompanied by a slowing of convergence.

This paper will be presented at the Fourteenth Annual Conference of the Cognitive Science Society, Indiana University, 1992.

Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to schmidt at lima.psych.mcgill.ca

Please do not reply directly to this message.
FTP INSTRUCTIONS:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get schmidt.balance.ps.Z
ftp> quit
unix> uncompress schmidt.balance.ps.Z

Tom Shultz
Department of Psychology
McGill University
1205 Penfield Avenue
Montreal, Quebec H3A 1B1
Canada
shultz at psych.mcgill.ca

From jbower at cns.caltech.edu Tue May 19 11:59:31 1992
From: jbower at cns.caltech.edu (Jim Bower)
Date: Tue, 19 May 92 08:59:31 PDT
Subject: summer course at MBL
Message-ID: <9205191559.AA00938@cns.caltech.edu>

Methods in Computational Neuroscience
Marine Biological Laboratory
Woods Hole, MA.
August 2 - August 29, 1992

For advanced graduate students and postdoctoral fellows in neurobiology, physics, electrical engineering, computer science, and psychology with an interest in Computational Neuroscience. A background in programming (preferably in C and UNIX) is highly desirable. Limited to 20 students.

This four-week course presents the basic techniques necessary to study single cells and neural networks from a computational point of view, emphasizing their possible function in information processing. The aim is to enable participants to simulate the functional properties of their particular system of study and to appreciate the advantages and pitfalls of this approach to understanding the nervous system.

The first section will focus on simulating the electrical properties of single neurons (compartmental models, active currents). The second part of the course will deal with the numerical and mathematical (e.g. theory of dynamical systems, information theory) techniques necessary for modeling single cells and neuronal networks. Examples of such simulations will be drawn from the invertebrate and vertebrate literature (central pattern generators, visual system of the fly, mammalian olfactory and visual cortex). In the final section, algorithms and connectionist neural networks relevant to visual perception, development in the mammalian cortex, as well as plasticity and learning algorithms will be analyzed and discussed from a neurobiological point of view.

The course includes daily lectures, tutorials, and laboratories. The laboratory section is organized around GENESIS, the Neuronal Network simulator developed at the California Institute of Technology, running on 20 state-of-the-art, single-user, UNIX-based graphic color workstations. Other simulation programs, such as NEURON, will also be available to students. Students are expected to work on a simulation project of their own choosing. A small subset of students can remain for up to an additional week (until September 5) at the MBL to finish their computer projects.

TUITION: $1,000 (includes room and board); partial financial aid is available to qualified applicants.

APPLICATION DEADLINE: May 27, 1992

Directors: James M. Bower and Christof Koch, Computation and Neural System Program, California Institute of Technology.
Faculty: Paul Adams, SUNY, Stony Brook; Richard Andersen, MIT; Joseph Atick, Rockefeller University; William Bialek, NEC Research Institute; Avis Cohen, University of Maryland; Rodney Douglas, MRC, U.K.; Nancy Kopell, Boston University; Rodolfo Llinas, New York University Medical Center; Eve Marder, Brandeis University; Michael Mascagni, Supercomputing Research Center; Kenneth Miller, Caltech; John Rinzel, NIH; Sylvie Ryckebusch, Caltech; Idan Segev, Hebrew University, Israel; Terrence Sejnowski, UCSD/Salk Institute; David Van Essen, Caltech; Matthew Wilson, University of Arizona

Teaching Assistants: David Beeman, University of Colorado; David Berkovitz, Yale University; Ojvind Bernander, Caltech; Maurice Lee, Caltech

Computer Manager: John Uhley, Caltech

-------------------------------------------------------------

APPLICATION
"METHODS IN COMPUTATIONAL NEUROSCIENCE"
August 2 - August 29, 1992

Name:
Social Security Number:
Citizenship:
Institutional mailing address, e-mail address, telephone and fax numbers:
Best mailing address, e-mail address, telephone and fax numbers, if different from above:
Professional status: Graduate / Postdoctoral / Faculty / Other
How did you learn about this course? Advertisement (give name) / Flyer / Individual / Email

State your reasons for wanting to take this course:

Outline your background, if any, in biological science, including courses taken. What experience, if any, have you had with experimental Neurobiology?

Outline your background, if any, in applied mathematics (e.g. differential equations, linear algebra, Fourier transforms, dynamical systems, probability and statistics), including relevant courses in math, physics, engineering, etc.

Which computer languages (e.g. C, PASCAL), machines (e.g. PDP, SUN) and operating systems (e.g. UNIX) have you used in the past? Indicate whether you are an Expert (E), Proficient (P) or a Novice (N).

What experience, if any, have you had in using neural simulation programs, including the Genesis simulator?

Given your experience, what particular questions would you like to address as course problems? (For instance, modelling retinal amacrine cells, computing motion in MT, learning in hippocampus, etc.)

Education: Institution / Highest Degree and year

Professional Experience:

If possible please have two letters of recommendation sent to: jbower at smaug.cns.caltech.edu

Financial Aid: If you are requesting financial aid, please provide a short statement of your needs.

Applications are evaluated by an admissions committee and individuals are notified of acceptance or non-acceptance within two weeks of those decisions. A non-refundable $200 deposit is required of all accepted students by June 28, 1992.

Return applications to: jbower at smaug.cns.caltech.edu

APPLICATION DEADLINE: May 27, 1992

- MBL IS AN EQUAL OPPORTUNITY/AFFIRMATIVE ACTION INSTITUTION -

From tesauro at watson.ibm.com Tue May 19 11:35:57 1992
From: tesauro at watson.ibm.com (Gerald Tesauro)
Date: Tue, 19 May 92 11:35:57 EDT
Subject: Reminder-- NIPS workshops deadline is May 22
Message-ID:

This is to remind everyone that the deadline for submission of NIPS workshop proposals is this Friday, May 22. Submissions should be sent to "tesauro at watson.ibm.com" by e-mail, or by physical mail to the address given below.
CALL FOR WORKSHOPS

NIPS*92 Post-Conference Workshops
December 4 and 5, 1992
Vail, Colorado

Request for Proposals

Following the regular NIPS program, workshops on current topics in Neural Information Processing will be held on December 4 and 5, 1992, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited.

Past topics have included: Computational Neuroscience; Sensory Biophysics; Recurrent Nets; Self-Organization; Speech; Vision; Rules and Connectionist Models; Neural Network Dynamics; Computational Complexity Issues; Benchmarking Neural Network Applications; Architectural Issues; Fast Training Techniques; Active Learning and Control; Optimization; Bayesian Analysis; Genetic Algorithms; VLSI and Optical Implementations; Integration of Neural Networks with Conventional Software.

The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Specific open and/or controversial issues are encouraged and preferred as workshop topics. Individuals proposing to chair a workshop will have responsibilities including: arranging brief informal presentations by experts working on the topic, moderating or leading the discussion, and reporting its high points, findings and conclusions to the group during evening plenary sessions, and in a short (2 page) written summary.

Submission Procedure: Interested parties should submit a short proposal for a workshop of interest postmarked by May 22, 1992. (Express mail is *not* necessary. Submissions by electronic mail will also be acceptable.) Proposals should include a title, a description of what the workshop is to address and accomplish, and the proposed length of the workshop (one day or two days). It should state why the topic is of interest or controversial, why it should be discussed and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications and evidence of scholarship in the field of interest.

Mail submissions to:
Dr. Gerald Tesauro
NIPS*92 Workshops Chair
IBM T. J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598 USA
(e-mail: tesauro at watson.ibm.com)

Name, mailing address, phone number, and e-mail net address (if applicable) must be on all submissions.

PROPOSALS MUST BE POSTMARKED BY MAY 22, 1992

Please Post

From arun at hertz.njit.edu Tue May 19 12:35:01 1992
From: arun at hertz.njit.edu (arun maskara spec lec cis)
Date: Tue, 19 May 92 12:35:01 -0400
Subject: paper available in neuroprose
Message-ID: <9205191635.AA22978@hertz.njit.edu>

Paper available:

Forced Simple Recurrent Neural Networks and Grammatical Inference

Arun Maskara
New Jersey Institute of Technology
Department of Computer and Information Sciences
University Heights, Newark, NJ 07102
arun at hertz.njit.edu

Andrew Noetzel
The William Paterson College
Department of Computer Science
Wayne, NJ 07470

ABSTRACT

A simple recurrent neural network (SRN) introduced by Elman can be trained to infer a regular grammar from the positive examples of symbol sequences generated by the grammar. The network is trained, through the back-propagation of error, to predict the next symbol in each sequence, as the symbols are presented successively as inputs to the network. The modes of prediction failure of the SRN architecture are investigated.
The SRN's internal encoding of the context (the previous symbols of the sequence) is found to be insufficiently developed when a particular aspect of context is not required for the immediate prediction at some point in the input sequence, but is required later. It is shown that this mode of failure can be avoided by using the auto-associative recurrent network (AARN). The AARN architecture contains additional output units, which are trained to show the current input and the current context.

The effect of the size of the training set for grammatical inference is also considered. The SRN has been shown to be effective when trained on an infinite (very large) set of positive examples. When a finite (small) set of positive training data is used, the SRN architectures demonstrate a lack of generalization capability. This problem is solved through a new training algorithm that uses both positive and negative examples of the sequences. Simulation results show that when there is restriction on the number of nodes in the hidden layers, the AARN succeeds in the cases where the SRN fails.

----------------------------------------------------------------------------
This paper will appear in the Proceedings of the 14th Annual Cognitive Science Conference, 1992.

It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack)

Sorry, no hardcopy available.

FTP procedure:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get maskara.cogsci92.ps.Z
ftp> quit
unix> uncompress maskara.cogsci92.ps.Z
unix> lpr -Pxxx maskara.cogsci92.ps (or however you print postscript)

Arun Maskara
arun at hertz.njit.edu

From thildebr at lion.csee.lehigh.edu Wed May 20 11:35:21 1992
From: thildebr at lion.csee.lehigh.edu (Thomas H. Hildebrandt )
Date: Wed, 20 May 92 11:35:21 -0400
Subject: Izumida translation
Message-ID: <9205201535.AA03119@lion.csee.lehigh.edu>

Regarding preprints of my translation of the paper by Izumida et al., "Analysis of Neural Network Energy Functions Using Standard Forms": Some mail systems refuse messages longer than 150000 bytes -- if yours is one, then I was unable to fulfill your request. It has been suggested that I place the PostScript on the Ohio State archive, but I do not wish to do this, as that might be in violation of certain copyrights. The alternative is to send out LaTeX source, which I will do if you renew your request.

Thomas H. Hildebrandt

From jbower at cns.caltech.edu Wed May 20 21:04:56 1992
From: jbower at cns.caltech.edu (Jim Bower)
Date: Wed, 20 May 92 18:04:56 PDT
Subject: CNS*92 registration
Message-ID: <9205210104.AA04439@cns.caltech.edu>

Registration announcement for:

First Annual Computation and Neural Systems Meeting "CNS*92"
Sunday, July 26 through Friday, July 31, 1992
San Francisco, California

This is the first annual meeting of an inter-disciplinary conference intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience. CNS*92 will bring together experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute.

-------------------------------------------------------------
Program

The meeting itself is divided into three sections.
The first day is devoted to tutorial presentations, including basic tutorials for the uninitiated as well as advanced tutorials on new methods of data acquisition and analysis. The main meeting will take place on July 27 - 29 and will consist of the presentation of 106 papers accepted by peer review. The 24 most highly rated papers will be presented in a single oral session running over these three days. An additional 82 papers have been accepted for poster presentations. The last two days of the meeting (July 30-31) will be devoted to workshop sessions held at the Marconi Conference Center on the Pacific Coast, north of San Francisco.

-------------------------------------------------------------
Registration

Registration for the meeting is strictly limited to 350 on a first come, first serve basis. Workshop registration is limited to 85.

-------------------------------------------------------------
Housing and Travel Grants

Housing: Relatively inexpensive housing will be available close to the meeting site for participants.

Travel Grants: Some funds will also be available for travel reimbursements. The intention of the organizing committee is to use these funds to provide support for students and postdoctoral fellows attending the meeting. Information on how to apply for travel support is provided in the registration materials.

-------------------------------------------------------------
Online Information

Information about the meeting is available via FTP over the internet (address: 131.215.135.69). To obtain registration forms or information about the agenda, currently registered attendees, and/or paper abstracts use the following sequence (things you type are in quotes):

> yourhost% "ftp 131.215.135.69"
> 220 mordor FTP server (SunOS 4.1) ready.
Name (131.215.135.69:): "ftp"
> 331 Guest login ok, send ident as password.
Password: "yourname at yourhost.yoursite.yourdomain"
> 230 Guest login ok, access restrictions apply.
ftp> "cd pub"
> 250 CWD command successful.
ftp>

At this point relevant commands are:
- "ls" to see the contents of the directory.
- "get (filename)" to obtain (filename)
- "mget *" to obtain everything listed

Note that the abstracts are contained in a separate directory called "abstracts". To change to this directory type "cd abstracts".

-------------------------------------------------------------
For additional information contact: cns92 at cns.caltech.edu
-------------------------------------------------------------

CNS*92 Organizing Committee:
Program Chair, James M. Bower, Caltech.
Publicity Chair, Frank Eeckman, Lawrence Livermore Labs.
Finances, John Miller, UC Berkeley and Nora Smiriga, Institute of Scientific Computing Res.
Local Arrangements, Ted Lewis, UC Berkeley and Muriel Ross, NASA Ames.

Program Committee: William Bialek, NEC Research Institute. James M. Bower, Caltech. Frank Eeckman, Lawrence Livermore Labs. Bard Ermentrout, Univ. Pittsburgh. Scott Fraser, Caltech. Christof Koch, Caltech. Ted Lewis, UC Berkeley. Gerald Loeb, Queen's University. Eve Marder, Brandeis. Bruce McNaughton, University of Arizona. John Miller, UC Berkeley. Idan Segev, Hebrew University, Jerusalem. Shihab Shamma, University of Maryland. Josef Skrzypek, UCLA.

From thgoh at iss.nus.sg Wed May 20 16:54:53 1992
From: thgoh at iss.nus.sg (Goh Tiong Hwee)
Date: Wed, 20 May 92 16:54:53 SST
Subject: No subject
Message-ID: <9205200854.AA06973@iss.nus.sg>

I have placed the following paper in the neuroprose archive. Hardcopy requests by snailmail to me at the institute.
Thanks to Jordan Pollack for making the archive service available.

Neural Networks And Genetic Algorithm For Economic Forecasting

Francis Wong, PanYong Tan
Institute of Systems Science
National University of Singapore

Abstract: This paper describes the application of an enhanced neural network and genetic algorithm to economic forecasting. Our proposed approach has several significant advantages over conventional forecasting methods such as regression and the Box-Jenkins methods. Apart from being simple and fast in learning, a major advantage is that no assumptions need to be made about the underlying function or model, since the neural network is able to extract hidden information from the historical data. In addition, the enhanced neural network offers selective activation and training of neurons based on the instantaneous causal relationship between the current set of input training data and the output target. This causal relationship is represented by the Accumulated Input Error (AIE) indices, which are computed based on the accumulated errors back-propagated to the input layers during training. The AIE indices are used in the selection of neurons for activation and training. Training time can be reduced significantly, especially for large networks designed to capture temporal information. Although neural networks represent a promising alternative for forecasting, the problem of network design remains a bottleneck that could impair widespread applications in practice. The genetic algorithm is used to evolve optimal neural network architectures automatically, thus eliminating the many pitfalls associated with human-engineering approaches. The proposed concepts and design paradigm were tested on several real applications (please email thgoh at iss.nus.sg for a copy of the software), including the forecast of GDP, air passenger arrivals and currency exchange rates.

ftp Instructions:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get wong.nnga.ps.Z
ftp> quit

__________________________________________________________
Tiong Hwee Goh
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace
Kent Ridge
Singapore 0511.
Telephone: (65)7726214
Fax: (65)7782571

From U53076%UICVM.BITNET at BITNET.CC.CMU.EDU Thu May 21 23:46:15 1992
From: U53076%UICVM.BITNET at BITNET.CC.CMU.EDU (Bruce Lambert)
Date: Thu, 21 May 92 22:46:15 CDT
Subject: Thesis placed on Neuroprose archive
Message-ID: <01GKAIF3KVB4FOD29C@BITNET.CC.CMU.EDU>

****Please do not forward to other lists ***********

The following paper has been placed on the neuroprose archive as lambert.thesis.ps.Z

A CONNECTIONIST MODEL OF MESSAGE DESIGN

Bruce L. Lambert
Department of Speech Communication
University of Illinois at Urbana-Champaign, 1992

Complexity in plan-based models of message production has typically been managed by the systematic exploitation of abstractions. Most significantly, abstract goals replace concrete situations, and act types replace instantiated linguistic contents. Consequently, plan-based models involve three key transformations: (a) the mapping of concrete situations onto abstract goals, (b) the mapping of abstract goals onto act types, and (c) the mapping of act types onto linguistic tokens. Transformations are made possible by the prior functional indexing of situations, acts and utterances. However, such functional indexing schemes are based on untenable assumptions about language and communication.
For example, communicative means-ends relations are not fixed, and the assignment of functional significance to decontextualized linguistic forms contradicts a substantial body of evidence about the contextual dependency of language function. A connectionist model of message production is proposed to avoid these difficulties. The model uses Quickprop, a modified version of the backpropagation learning algorithm, to learn a mapping from thoughts to the elements of a phrasal lexicon for a hypothetical broken date situation. Goals were represented as distributed patterns of context-specific thoughts. Messages were represented as distributed patterns of phrases. Three studies were conducted to develop and test the model. The first study used protocol analysis to validate a checklist method for thought elicitation. The second study described a prototype system for automatically coding independent clauses into phrase categories (called message frames), and the system's ability to classify new examples was found to be limited. The final study analyzed the performance of the connectionist model. The generalization performance of the model was limited by the small number of examples, but an analysis of the receptive fields of several message frames was consistent with previous research about individual differences in reasoning about communication and supported the assertion that individual linguistic forms are multi-functional. The model provided preliminary support for a situated, phrasal lexical model of message design that does not rely on decontextualized, functionally indexed abstractions.

Retrieve the file in the normal manner from archive.cis.ohio-state.edu. Unfortunately, several of the figures are not included. Hardcopy of the figures will be sent to individuals who contact me directly by email. And thanks of course to Jordan Pollack for providing what has become a tremendously valuable resource to the connectionist community.

Bruce Lambert
Department of Pharmacy Administration (M/C 871)
College of Pharmacy
University of Illinois at Chicago
833 S. Wood St.
Chicago, IL 60612
email: u53076 at uicvm.uic.edu

From PYG0572 at VAX2.QUEENS-BELFAST.AC.UK Fri May 22 08:33:00 1992
From: PYG0572 at VAX2.QUEENS-BELFAST.AC.UK (GERRY ORCHARD)
Date: Fri, 22 May 92 12:33 GMT
Subject: Could you post this on the bulletin board please?
Message-ID: <01GKAYMIBD80FOD5QD@BITNET.CC.CMU.EDU>

The Second Irish Neural Networks Conference: June 25th and 26th 1992

Guest Speakers:
Prof. John Taylor, Kings College, London
Prof. Dan Amit, INFN Rome and Racah Institute of Physics
Prof. Vicki Bruce, University of Nottingham
Prof. George Irwin, Queen's Belfast

Presentations: (alphabetical order)

A NEURAL NETWORK MODEL OF A HUMAN ATTENTION SWITCHING ABILITY. John Andrews and Mark Keane, Department of Computer Science, O'Reilly Institute, Trinity College, Dublin 2

GENERATING OBJECT-ORIENTED CODE THROUGH ARTIFICIAL NEURAL NETWORKS. J Brant Arseneau, Gillian F Sleith**, C Tim Spracklen, Gary Whittington, John MacRae*, Electronic Research Group, Department of Engineering, University of Aberdeen; *Institute of Software Engineering, Island Street, Belfast;
**Dept. of Information Systems, Faculty of Informatics, UU at Jordanstown

LEARNING TO LEARN: THE CONTRIBUTION OF BEHAVIOURISM TO CONNECTIONIST MODELS OF INFERENTIAL SKILLS IN HUMANS. Dermot Barnes and Peter Hampson, Department of Applied Psychology, University College, Cork

NEURAL NETWORK TASK IDENTIFICATION FOR DISTRIBUTED WORKING SUPPORT. Russel Beale, Alan Dix* and Janet Finlay*, School of Computer Science, University of Birmingham; *HCI Group, Dept of Computer Science, University of York

A NEURAL CONTROLLER FOR NAVIGATION OF NON-HOLONOMIC MOBILE ROBOTS USING SENSORY INFORMATION. Rene Biewald, Control System Centre, U.M.I.S.T.

USING CHAOS TO PREDICT COMMODITY MARKET PRICE FLUCTUATIONS IN NEURAL NETWORKS. Christopher Burdorf and John Fitch, School of Mathematical Sciences, University of Bath

APPLICATION OF NEURAL NETWORKS TO MOTION PLANNING FOR A ROBOT ARM TO GRASP MOVING OBJECTS. Conor Doherty, Educational Research Centre, St Patrick's College, Dublin 9

SEMANTIC INTERACTION: A CONNECTIONIST MODEL OF LEXICAL COMBINATION. George Dunbar*, Masja Kempen**, Noel Maessen**; *Department of Psychology, University of Warwick; **Department of Psychology, University of Leiden

GENERALISATION AND CONVERGENCE IN THE MULTI-DIMENSIONAL ALBUS PERCEPTRON (CMAC). D. Ellison, Dundee Institute of Technology, Dundee, Scotland

ARTIFICIAL NEURAL NETWORK BASED ELECTRONIC NOSE. E L Hines and J W Gardner, Dept of Engineering, University of Warwick, Coventry

A CONNECTIONIST MODEL OF HUMAN MUSICAL SCORE PROCESSING. James Hynan and Sean O Nuallian, Dublin City University, Glasnevin, Dublin 9

A NONLINEAR SYSTOLIC FILTER WITH RADIAL BASIS FUNCTION ESTIMATION. J. Kadlec, F.M.F. Gaston, G.W. Irwin, Control Research Group, Dept. of Electrical and Electronic Engineering, The Queen's University of Belfast

HYPERCUBE CUTS AND KARNAUGH MAPS: TOWARDS AN UNDERSTANDING OF BINARY FEEDFORWARD NEURAL NETWORKS. Brendan Kiernan, Dept. of Computer Science, Trinity College Dublin.

A NEURAL NETWORK BASED DIAGNOSTIC TOOL FOR LIVER DISORDERS. L Kilmartin, E Ambikairajah and *S M Lavelle, Department of Electronic Engineering, Regional Technical College, Athlone; *Department of Experimental Medicine, University College Galway

MODELLING MEMBRANE POTENTIALS IS MORE FLEXIBLE THAN SPIKES. Peter Laming, Dept of Biology and Biochemistry, The Queen's University of Belfast

A NEURAL MECHANISM FOR DIVERSE BEHAVIOUR. R Linggard, School of Information Systems, University of East Anglia, Norwich

A NEURAL NETWORK APPROACH TO EQUALIZATION OF A NON-LINEAR CHANNEL. E Luk and A D Fagan, Department of Electronic and Electrical Engineering, University College, Dublin.

HANDWRITTEN SIGNATURE VERIFICATION USING THE BACKPROPAGATION NEURAL NETWORK. D K R McCormack, Department of Computing Mathematics, University of Wales College of Cardiff

USING A 2-STAGE ARTIFICIAL NEURAL NETWORK TO DETECT ABNORMAL CERVICAL CELLS FROM THEIR FREQUENCY DOMAIN IMAGE. McKenna S, Ricketts IV, Cairns AY, Hussein KA*, MicroCentre, Department of Mathematics and Computer Science, The University, DUNDEE; *Dept. Pathology, Ninewells Hospital, Dundee.

COMPARING FEEDFORWARD NEURAL NETWORK MODELS FOR TIME SERIES PREDICTION.
 John Mitchell, Hitachi Dublin Laboratory, O'Reilly Institute, Trinity College, Dublin
HAND-WRITTEN DIGIT RECOGNITION EXPLORATIONS IN CONNECTIONISM
 Michal Morciniec, Computer Science Department, University College, Dublin
THE EFFECT OF ALL-CONNECTIVE BACK-PROPAGATION ALGORITHM ON THE LEARNING CHARACTERISTIC OF A NEURAL NETWORK
 S Namasivayam and J T McMullen*, Applied Physical Science, University of Ulster at Coleraine
 *Centre for Energy Research, University of Ulster at Coleraine
INFORMATION THEORY AND NEURAL NETWORK LEARNING ALGORITHMS: AN OVERVIEW
 M. D. Plumbley, Centre for Neural Networks, King's College London
AN EXPLORATION OF CLAUSE BOUNDARY EFFECTS IN SIMPLE RECURRENT NETWORK REPRESENTATIONS
 Ronan Reilly, Department of Computer Science, University College Dublin.
A MODEL FOR THE ORGANISATION OF OCULAR DOMINANCE STRIPES
 Craig R Renfrew, Dept of Computer Science, University of Strathclyde
APPLICATION OF NEURAL NETWORKS TO ATMOSPHERIC CHERENKOV IMAGING DATA FROM THE CRAB NEBULA
 Paul T Reynolds, University College Dublin, Belfield, Dublin 4
ARTIFICIAL REWARDS
 Tony Savage, School of Psychology, The Queen's University of Belfast
A MODULAR NETWORK MODEL FOR SEGMENTING VISUAL TEXTURES BASED ON ORIENTATION CONTRAST
 Andrew J Schofield and David H Foster, Dept of Communication and Neuroscience, University of Keele
PRESTRUCTURED NEURAL NETS AND THE TRANSFER OF KNOWLEDGE
 Amanda J.C. Sharkey and Noel E. Sharkey, Centre for Connection Science, Department of Computer Science, University of Exeter, Exeter, Devon
HYBRID SYSTEMS - NEURAL NETWORKS IN PROBLEM-SOLVING ENVIRONMENTS
 P. Sims, D.A. Bell, Dept. of Information Systems, University of Ulster at Jordanstown
DESIGN OF AN INTEGRATED-CIRCUIT ANALOGUE NEURAL NETWORK
 Winand G van Sloten, School of Electrical Engineering and Computer Science, The Queen's University of Belfast
SYNAPTIC CORRELATES OF SHORT- AND LONG-TERM MEMORY FORMATION IN THE CHICK FOREBRAIN FOLLOWING ONE-TRIAL PASSIVE AVOIDANCE LEARNING
 M G Stewart, Brain and Behaviour Research Group, Dept of Biology, Open University, Milton Keynes
WHY CONNECTIONIST NETWORKS PROVIDE NATURAL MODELS OF THE WAY SENTENCE CONTEXT AFFECTS IDENTIFICATION OF A WORD.
 Eamonn Strain, Department of Psychology, University of Nottingham
 Roddy Cowie, School of Psychology, Queen's University, Belfast
A NEW ALGORITHM FOR CORRECTING SLANT IN HANDWRITTEN NUMERALS
 S Sunthankar, School of Computer Science & Electronic Systems, Kingston Polytechnic
AN ALGORITHM FOR SEGMENTING HANDWRITTEN NUMERAL STRINGS
 S Sunthankar, School of Computer Science & Electronic Systems, Kingston Polytechnic
USING NEURAL NETWORKS TO FIND GOLD
 Peter M Williams, School of Cognitive and Computing Sciences, University of Sussex
NEURAL LEARNING ROBOT CONTROL: A NEW APPROACH VIA THE THEORY OF COGNITION
 A M S Zalzala, Control Engineering Research Group, Department of Electrical & Electronic Engineering, The Queen's University of Belfast

Registration: 65 pounds sterling, including lunches and coffee/tea.

FURTHER INFORMATION FROM: Dr.
Gerry Orchard
Cognitive and Computational Modelling Group
School of Psychology
Queen's University Belfast
Tel 0232 245133 Ext 4354/4360
Fax 0232 664144
Email: g.orchard at uk.ac.qub.v2

From jagota at cs.Buffalo.EDU Sun May 24 16:37:17 1992
From: jagota at cs.Buffalo.EDU (Arun Jagota)
Date: Sun, 24 May 92 16:37:17 EDT
Subject: (P)reprints by ftp
Message-ID: <9205242037.AA13921@sybil.cs.Buffalo.EDU>

Dear Connectionists: LaTeX sources of the following and other (p)reprints are available via ftp:

    ftp ftp.cs.buffalo.edu (or 128.205.32.3, subject to change)
    Name: anonymous
    > cd users/jagota
    > get

Efficiently Approximating Max-Clique in a Hopfield-style Network
 Oral presentation at IJCNN'92 Baltimore. File: ijcnn92.tex
Representing Discrete Structures in a Hopfield-style Network
 Book chapter (to appear). File: chapter.tex
A Hopfield-style Network with a Graph-theoretic Characterization
 Journal article (to appear). File: JANN92.tex

Problems? Contact jagota at cs.buffalo.edu

Arun Jagota

From davies at Athena.MIT.EDU Mon May 25 18:26:05 1992
From: davies at Athena.MIT.EDU (davies@Athena.MIT.EDU)
Date: Mon, 25 May 92 18:26:05 EDT
Subject: EUROPEAN SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY
Message-ID: <9205252226.AA27004@e40-008-9.MIT.EDU>

************************************************************************
****** EUROPEAN SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY ******
*********** INAUGURAL CONFERENCE ***********
**** 17 - 19 JULY, 1992 ****
****** SECOND ANNOUNCEMENT: REGISTRATION AND ACCOMMODATION ******

The Inaugural Conference of the European Society for Philosophy and Psychology will be held in the Philosophy Institute, University of Louvain, Belgium, from Friday 17 July to Sunday 19 July, 1992. The goal of the Society is 'to promote interaction between philosophers and psychologists on issues of common concern'.

***** REGISTRATION *****

In order to register for the conference, please send your NAME, ACADEMIC AFFILIATION, POSTAL MAIL ADDRESS, EMAIL ADDRESS, and TELEPHONE NUMBER to: Beatrice de Gelder, Psychology Department, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, Netherlands, or provide the same information by email to: beadegelder at kub.nl. In either case, please state whether you are paying the Registration Fee by cheque, banker's draft, or electronic transfer.

THE REGISTRATION FEE is Bfrs. 2400,- (approx. 40 pounds sterling) or Bfrs. 1200,- for students (approx. 20 pounds sterling). If you would like to attend the conference dinner on Friday 17 July, then the extra charge is Bfrs. 1500,- including wine (approx. 25 pounds sterling). REGISTRATION IS NOT COMPLETE UNTIL PAYMENT HAS BEEN RECEIVED. Payment MUST be in Belgian francs, by cheque or banker's draft made payable to: B. de Gelder-Euro-SPP, OR by electronic transfer to the following: Bank: AN-HYP Brussels, Belgium, Account number: 750-9345993-07, for the attention of: B. de Gelder-Euro-SPP. When registration is complete, you will be sent an information pack including maps and other tourist information along with a detailed programme.

************************************************************************

***** ACCOMMODATION *****

Rooms have been reserved in several hotels (all within walking distance of the Philosophy Institute) at special reduced rates. In order to book accommodation, please contact one of the hotels directly, and mention the Euro-SPP Conference.
The hotels and rates are:

 Hotel Binnenhof, Maria-Theresiastraat 65, B-3000 Leuven (Louvain). Tel: +32-16-20.55.92, Fax: +32-16-23.69.26. Rate: Bfrs. 2450,-
 Hotel Industrie, Martelarenplein 7, B-3000 Leuven (Louvain). Tel: +32-16-22.13.49, Fax: +32-16-20.82.85. Rate: Bfrs. 1050,-
 Begijnhof Congreshotel, Tervuursevest 70, B-3000 Leuven (Louvain). Tel: +32-16-29.10.10, Fax: +32-16-29.10.22. Rate: Bfrs. 3500,-
 Hotel Arcade, Brusselsestraat 52, B-3000 Leuven (Louvain). Tel: +32-16-29.31.11, Fax: +32-16-23.87.92. Rate: Bfrs. 2000,-

 Student accommodation: Contact Stefan Cuypers, Centre for Logic and Philosophy of Science, B-3000 Leuven (Louvain). Tel: +32-16-28.63.15, Fax: +32-16-28.63.11. Rate: Bfrs. 585,-

TO MAKE SURE YOU WILL OBTAIN HOTEL ACCOMMODATION YOU MUST CONTACT THE HOTEL OF YOUR CHOICE BEFORE 17 JUNE 1992.

************************************************************************

***** PROGRAMME *****

FRIDAY 17 JULY
 Conference desk open from 11 am
 2.00 pm Coffee
 3.00 - 5.00 pm SYMPOSIUM 1: Consciousness
 5.30 pm INVITED LECTURE: Larry Weiskrantz
 7.00 pm RECEPTION at the kind invitation of the Philosophy Institute
 8.00 pm CONFERENCE DINNER

SATURDAY 18 JULY
 9.00 - 10.30 am SYMPOSIUM 2: Probabilistic Reasoning
 10.30 am Coffee
 11.00 am - 1.00 pm SYMPOSIUM 3: Intentionality
 1.00 - 2.30 pm Lunch
 2.30 - 4.00 pm SYMPOSIUM 4: Theory of Mind
 4.30 - 6.00 pm SYMPOSIUM 5: Philosophical Issues from Linguistics
 6.15 pm INAUGURAL BUSINESS MEETING OF THE EURO-SPP
 7.00 pm RECEPTION at the kind invitation of Blackwell Publishers

SUNDAY 19 JULY
 9.00 - 11.00 am SYMPOSIUM 6: Connectionist Models
 11.00 am Coffee
 11.30 am INVITED LECTURE: Dan Sperber
 1.00 pm Lunch

Symposium speakers include: Peter Carruthers, Andy Clark, Anthony Dickinson, Gerd Gigerenzer, Stevan Harnad, Nigel Harvey, Nick Humphrey, Pierre Jacob, Giuseppe Longobardi, Gabriele Miceli, Odmar Neumann, David Over, Josef Perner, Kim Plunkett, David Premack, Andrew Woodfield, Andy Young.

************************************************************************

For further information contact:

 Daniel Andler, CREA, 1 rue Descartes, 75005 Paris, France. Email: azra at poly.polytechnique.fr
 Martin Davies, Philosophy Department, Birkbeck College, Malet Street, London WC1E 7HX, England. Email: ubty003 at cu.bbk.ac.uk
 Beatrice de Gelder, Psychology Department, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, Netherlands. Email: beadegelder at kub.nl
 Tony Marcel, MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge CB2 2EF, England. Email: tonym at mrc-apu.cam.ac.uk

************************************************************************

From burrow at grad1.cis.upenn.edu Mon May 25 16:27:26 1992
From: burrow at grad1.cis.upenn.edu (Thomas Fontaine)
Date: Mon, 25 May 92 16:27:26 EDT
Subject: TR available in neuroprose
Message-ID: <9205252027.AA17328@gradient.cis.upenn.edu>

************** PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS ****************

The following technical report has been placed in the neuroprose archives at Ohio State University:

CHARACTER RECOGNITION USING A MODULAR SPATIOTEMPORAL CONNECTIONIST MODEL
 Thomas Fontaine and Lokendra Shastri
 Technical Report MS-CIS-92-24/LINC LAB 219
 Computer and Information Science Department
 200 South 33rd Street
 University of Pennsylvania
 Philadelphia, PA 19104-6389

We describe a connectionist model for recognizing handprinted characters. Instead of treating the input as a static signal, the image is scanned over time and converted into a time-varying signal.
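(To make the idea concrete: one plausible way to "temporalize" a static image is to present it as a sequence of column vectors, one per time step. The sketch below illustrates the idea only; the report's actual scanning scheme may differ:)

    import numpy as np

    def temporalize(image):
        # image: 2-D array (height x width); yields one column per tick,
        # so a width-W image becomes a W-step time-varying input.
        for t in range(image.shape[1]):
            yield image[:, t]

    # A 16x24 digit image becomes a 24-step sequence of 16-dimensional
    # inputs; shifting the digit sideways merely delays the sequence,
    # which is one source of the shift-invariance claimed below.
    sequence = list(temporalize(np.zeros((16, 24))))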
The temporalized image is processed by a spatiotemporal connectionist network suitable for dealing with time-varying signals. The resulting system offers several attractive features, including shift-invariance and inherent retention of local spatial relationships along the temporalized axis, a reduction in the number of free parameters, and the ability to process images of arbitrary length. Connectionist networks were chosen as they offer learnability, rapid recognition, and attractive commercial possibilities. A modular and structured approach was taken in order to simplify network construction, optimization and analysis. Results on the task of handprinted digit recognition are among the best reported to date on a set of real-world ZIP code digit images, provided by the United States Postal Service. The system achieved a 99.1% recognition rate on the training set and a 96.0% recognition rate on the test set with no rejections. A 99.0% recognition rate on the test set was achieved when 14.6% of the images were rejected.

************************ How to obtain a copy ************************

I'm sorry, but hardcopies are not available. To obtain via anonymous ftp:

    unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
    Name: anonymous
    Password: neuron
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get fontaine.charrec.ps.Z
    ftp> quit
    unix> uncompress fontaine.charrec.ps.Z
    unix> lpr fontaine.charrec.ps (or however you print PostScript)

[Please note that some of the figures were produced with a Macintosh and the resulting PostScript may not print on all printers. People using an Apple LaserWriter should have no problems, though.]

From bernard%arti1.vub.ac.be at BITNET.CC.CMU.EDU Mon May 25 13:28:42 1992
From: bernard%arti1.vub.ac.be at BITNET.CC.CMU.EDU (Bernard Manderick)
Date: Mon, 25 May 92 19:28:42 +0200
Subject: Announcement
Message-ID: <9205251728.AA12164@arti1.vub.ac.be>

Dear moderator, I would be pleased if you could publish the announcement below in the next issue of your electronic newsletter. I apologize for any inconvenience and many thanks in advance,

Bernard Manderick
Artificial Intelligence Lab VUB
Pleinlaan 2
B-1050 Brussels
BELGIUM
Phone +32 2 641 35 75
Fax +32 2 641 35 82
Email ppsn at arti.vub.ac.be

-------------------------------------------------------------------------------

PPSN92
PARALLEL PROBLEM SOLVING FROM NATURE CONFERENCE
ARTIFICIAL INTELLIGENCE LAB
FREE UNIVERSITY OF BRUSSELS
BELGIUM
28 - 30 SEPTEMBER 1992

General Information
===================

The second Parallel Problem Solving from Nature conference (PPSN92) will be held at the Free University of Brussels, September 28-30, 1992. The unifying theme of the PPSN conference is natural computation, i.e. the design, the theoretical and empirical understanding, and the comparison of algorithms gleaned from nature, and their application to real-world problems in science and technology. Examples are genetic algorithms, evolution strategies, and algorithms based on neural networks and immune systems. Since last year there has been a collaboration with the International Conference on Genetic Algorithms (ICGA), with the two conferences alternating: the ICGA conferences will be held in the US during the odd years, while the PPSN conferences will be held in Europe during the even years.
The major objective of this conference is to provide a biennial international forum where scientists from all over the world can discuss new theoretical, empirical and implementational developments in natural computation in general, and evolutionary computation in particular, together with their applications in science, technology and administration. The conference will feature invited speakers, technical and poster sessions.

Registration Information
========================

REGISTRATION FEE
The registration fees for PPSN92 are as follows:
 Before August 15: Normal 8 500 BEF, Student 5 000 BEF
 After August 15: Normal 10 500 BEF, Student 7 000 BEF

These fees cover conference participation, conference proceedings, refreshment breaks, lunches, welcome reception, and transport from the hotels to the conference site and back. The deadline for registration is August 15, 1992. After this date, an additional fee of 2000 BEF will be charged. (1 US dollar is about 35 BEF and 1 ECU is about 42 BEF.)

ACCOMMODATION
The conference will take place in the Aula (building Q) of the Free University of Brussels (VUB), Pleinlaan 2, B-1050 Brussels. There is no on-campus housing available. We have made arrangements with a number of hotels. The following hotels have agreed to reserve rooms until July 15, 1992 on a first come, first served basis. All hotels are located in the center of the city. Transport from the hotels to the conference site and back will be available. The indicated rates are in Belgian francs (BEF) and include breakfast. NOTE THAT RESERVATIONS HAVE TO BE MADE BY THE ATTENDEES, AND BEFORE JULY 15, IF THEY WANT TO BENEFIT FROM THE RESERVED ROOMS.

 Hotel Albert I, Place Rogier 20, B-1210 Brussel. tel. +32/2/217 21 25, fax +32/2/217 93 31. Single: 3 000, double: 3 500, triple room: 4 000 (50 rooms)
 Hotel Arcade, Place Sainte Catherine, B-1000 Brussel. tel. +32/2/513 76 20, fax +32/2/514 22 14. Single: 3 600, double: 3 600, triple room: 4 150 (50 rooms)
 Hotel Arenberg, Rue d'Assaut 15, B-1000 Brussel. tel. +32/2/511 07 70, fax +32/2/514 19 76. Single: 4 500, double room: 5 100 (30 rooms)
 Hotel Delta, Chaussee de Charleroi 17, B-1060 Brussel. tel. +32/2/539 01 60, fax +32/2/537 90 11. Single: 4 500, double room: 5 100 (50 rooms)
 Hotel Ibis, Grasmarkt 100, B-1000 Brussel. tel. +32/2/514 40 40, fax +32/2/514 50 67. Single: 3 650, double room: 4 150, triple room: 5 150 (50 rooms)
 Hotel Palace, Rue Gineste 3, B-1210 Brussel. tel. +32/2/217 79 94, fax +32/2/218 76 51. Single: 4 500, double room: 5 100 (80 rooms)

CONFERENCE BANQUET
The conference dinner will be held on the evening of Tuesday, September 29 in one of the well-known restaurants in the famous Ilot Sacre, which is close to the Grande Place. The banquet costs 1 500 BEF - see registration form. Places are limited and will be assigned on a first come, first served basis.

ACCOMPANYING PERSONS PROGRAM
There is no accompanying persons program. All hotels will be happy to inform you about all kinds of attractions (shopping, tourism, gastronomy, museums and the like) in Brussels. Most of these attractions are within walking distance of your hotel.

TRAVELING TO BRUSSELS
Brussels has three international railway stations. Brussels Central ("Brussel Centraal" in Dutch/"Gare Centrale" in French) lies in the center of the city; Brussels South ("Brussel Zuid"/"Gare du Midi") and Brussels North ("Brussel Noord"/"Gare du Nord") are less than 2 kilometers to the south and to the north of the center, respectively.
The Brussels airport is located 10 km from the center of the city, where most hotels are situated. There is a special train between the airport and Brussels Central which runs at half-hour intervals. It also stops at Brussels North. All major airlines fly to Brussels.

PAYMENT
 * Cheques (to be sent to PPSN92 Registrations). Please note that all charges, if any, must be at the participants' expense. Eurocheques are preferred.
 * Banker's draft to the order of PPSN: ASLK-CGER Bank, Belgium, Account 001-2361627-44, mentioning your name. Please ask your bank to arrange the transfer at no cost to the beneficiary. Bank charges, if any, will be at the participants' expense.

CANCELLATION
Refunds of 50% will be made if a written request is received before September 15. No refunds will be made for cancellations received after this date.

SPONSORS
The conference is sponsored by the Commission of the European Communities - DG XIII - Esprit, Siemens-Nixdorf, Parsytec, the National Fund for Scientific Research, and the Research Council of the Free University of Brussels.

REGISTRATION
Attached you will find the registration form. Completed registration forms should be returned by mail, email or fax to:

 PPSN92 Registrations
 Artificial Intelligence Lab VUB
 Pleinlaan 2
 B-1050 Brussels
 BELGIUM
 Phone +32 2 641 35 75
 Fax +32 2 641 35 82
 Email ppsn at arti.vub.ac.be

------------------------- REGISTRATION FORM - CUT HERE -------------------------

PPSN92 REGISTRATION FORM

PERSONAL DETAILS
 Name, First Name ___________________________________________
 Address ___________________________________________
         ___________________________________________
 Zip code ____________ City _______________________
 Country ___________________________________________
 Telephone No. ___________________________________________
 Fax No. ___________________________________________
 Email ___________________________________________

REGISTRATION FEE
Students requiring the reduced fee must provide proof of status (such as a copy of a student ID). Please tick the appropriate boxes below.
 [ ] Normal Registration 8 500 BEF
 [ ] Student Registration 5 000 BEF
 [ ] Late Fee (after August 15) 2 000 BEF
 Registration Total (BEF): ___________

LUNCHES
 [ ] Vegetarian

CONFERENCE DINNER
 [ ] Yes - I wish to attend the PPSN92 Conference Dinner (1 500 BEF)
 Nr. of additional tickets (Nr. * 1 500 BEF): ___________
 Dinner Total (BEF): ___________

FINAL TOTAL (BEF): ___________

From rich at gte.com Tue May 26 16:14:08 1992
From: rich at gte.com (Rich Sutton)
Date: Tue, 26 May 92 16:14:08 -0400
Subject: Annoucement of papers extending Delta-Bar-Delta
Message-ID: <9205262014.AA22222@bunny.gte.com>

Dear Learning Researchers: I have recently done some work extending that of Jacobs and others on learning-rate-adaptation methods. The three papers announced below extend it in the directions of machine learning, optimal linear estimation, and psychological modeling, respectively. Information on how to obtain copies is given at the end of the message. -Rich Sutton

----------------------------------------------------------------------

To appear in the Proceedings of the Tenth National Conference on Artificial Intelligence, July 1992:

ADAPTING BIAS BY GRADIENT DESCENT: AN INCREMENTAL VERSION OF DELTA-BAR-DELTA
Richard S.
Sutton, GTE Laboratories Incorporated

Appropriate bias is widely viewed as the key to efficient learning and generalization. I present a new algorithm, the Incremental Delta-Bar-Delta (IDBD) algorithm, for the learning of appropriate biases based on previous learning experience. The IDBD algorithm is developed for the case of a simple, linear learning system---the LMS or delta rule with a separate learning-rate parameter for each input. The IDBD algorithm adjusts the learning-rate parameters, which are an important form of bias for this system. Because bias in this approach is adapted based on previous learning experience, the appropriate testbeds are drifting or non-stationary learning tasks. For particular tasks of this type, I show that the IDBD algorithm performs better than ordinary LMS and in fact finds the optimal learning rates. The IDBD algorithm extends and improves over prior work by Jacobs and by me in that it is fully incremental and has only a single free parameter. This paper also extends previous work by presenting a derivation of the IDBD algorithm as gradient descent in the space of learning-rate parameters. Finally, I offer a novel interpretation of the IDBD algorithm as an incremental form of hold-one-out cross validation.
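(Since only the abstract appears here, a small sketch may help convey the rule. The Python fragment below is a paraphrase of the IDBD update for the linear/LMS case described above -- an illustration, not code from the paper; theta is the single free meta-step-size parameter:)

    import numpy as np

    def idbd_step(w, beta, h, x, target, theta=0.01):
        # w: weights; beta: per-input log learning rates;
        # h: per-input memory traces; x: input vector.
        delta = target - np.dot(w, x)            # prediction error
        beta += theta * delta * x * h            # adapt the biases
        alpha = np.exp(beta)                     # per-input learning rates
        w += alpha * delta * x                   # ordinary LMS step
        h = h * np.maximum(1.0 - alpha * x * x, 0.0) + alpha * delta * x
        return w, beta, h

(Each input keeps its own rate exp(beta_i); a rate rises when that input's past updates correlate with the current error, which is what lets the algorithm track drifting tasks.)

--------------------------------------------------------------------

Appeared in the Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems, May 1992, pages 161-166:

GAIN ADAPTATION BEATS LEAST SQUARES?
Richard S. Sutton, GTE Laboratories Incorporated

I present computational results suggesting that gain-adaptation algorithms based in part on connectionist learning methods may improve over least squares and other classical parameter-estimation methods for stochastic time-varying linear systems. The new algorithms are evaluated with respect to classical methods along three dimensions: asymptotic error, computational complexity, and required prior knowledge about the system. The new algorithms are all of the same order of complexity as LMS methods, O(n), where n is the dimensionality of the system, whereas least-squares methods and the Kalman filter are O(n^2). The new methods also improve over the Kalman filter in that they do not require a complete statistical model of how the system varies over time. In a simple computational experiment, the new methods are shown to produce asymptotic error levels near that of the optimal Kalman filter and significantly below those of least-squares and LMS methods. The new methods may perform better even than the Kalman filter if there is any error in the filter's model of how the system varies over time.

------------------------------------------------------------------------

To appear in the Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, July 1992:

ADAPTATION OF CUE-SPECIFIC LEARNING RATES IN NETWORK MODELS OF HUMAN CATEGORY LEARNING
Mark A. Gluck, Paul T. Glauthier, Center for Molecular and Behavioral Neuroscience, Rutgers
and Richard S. Sutton, GTE Laboratories Incorporated

Recent engineering considerations have prompted an improvement to the least mean squares (LMS) learning rule for training one-layer adaptive networks: incorporating a dynamically modifiable learning rate for each associative weight accelerates overall learning and provides a mechanism for adjusting the salience of individual cues (Sutton, 1992).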
Prior research has established that the standard LMS rule can characterize aspects of animal learning (Rescorla & Wagner, 1972) and human category learning (Gluck and Bower, 1988). We illustrate how this enhanced LMS rule is analogous to adding a cue-salience or attentional component to the psychological model, giving the network model a means of distinguishing between relevant and irrelevant cues. We then demonstrate the effectiveness of this enhanced LMS rule for modelling human performance in two non-stationary learning tasks for which the standard LMS network model fails to account for the data (Hurwitz, 1990; Gluck, Glauthier & Sutton, in preparation).

------------------------------------------------------------------------

To obtain copies of these papers, please send an email request to jpierce at gte.com. Be sure to include your physical mailing address.

From wahba at stat.wisc.edu Wed May 27 21:43:00 1992
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Wed, 27 May 92 20:43:00 -0500
Subject: Book Advert-CV,GCV, et al
Message-ID: <9205280143.AA26382@hera.stat.wisc.edu>

BOOK ADVERT - CV, GCV, DF SIGNAL, THE BIAS-VARIANCE TRADEOFF AND ALL THAT ....

Spline Models for Observational Data by G. Wahba
v 59 in the SIAM NSF/CBMS Series in Applied Mathematics

Although this book is written in the language of statistics, it covers a number of topics that are increasingly recognized as being of importance to the computational learning community. It is well known that models such as neural nets, radial basis functions, splines and other Bayesian models that are adapted to fit the data very well may in fact overfit the data, leading to large generalization error. In particular, minimizing generalization error, aka the bias-variance tradeoff, is discussed in the context of smooth multivariate function estimation with noisy data. Here, reducing the bias (fitting the data well) increases the variance (a proxy for the generalization error) and vice versa. Included is an in-depth discussion of ordinary cross validation, generalized cross validation and unbiased risk as criteria for optimizing the bias-variance tradeoff. The role of "degrees of freedom for signal" as well as the relationships between Bayes estimation, regularization, optimization in (reproducing kernel) Hilbert spaces, splines, and certain radial basis functions are covered, as well as a discussion of the relationship between generalized cross validation and maximum likelihood estimates of the main parameter(s) controlling the bias-variance tradeoff, both in the context of a well-known prior for the unknown smooth function, and in the general context of (smooth) regularization.
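(For readers who have not met it: for a linear method whose fit can be written $\hat{y} = A(\lambda)y$, with $A(\lambda)$ the influence matrix of the smoothing parameter $\lambda$, the generalized cross validation criterion has the standard textbook form

    V(\lambda) = \frac{\frac{1}{n}\|(I - A(\lambda))y\|^2}
                      {\left[\frac{1}{n}\,\mathrm{tr}(I - A(\lambda))\right]^2}

and $\mathrm{tr}\,A(\lambda)$ plays the role of the "degrees of freedom for signal". This is generic notation rather than necessarily the book's; see the book for the precise definitions.)

....................

Spline Models for Observational Data, by Grace Wahba
v. 59 in the CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, PA, March 1990.
Softcover, 169 pages, bibliography, author index. ISBN 0-89871-244-0
List Price $24.75, SIAM or CBMS* Member Price $19.80 (Domestic 4th class postage free, UPS or Air extra)

May be ordered from SIAM by mail, electronic mail, or phone:
 e-mail (internet): service at siam.org
 SIAM, P. O. Box 7260, Philadelphia, PA 19101-7260 USA
 Toll-Free 1-800-447-7426 (8:30-4:45 Eastern Standard Time, USA)
 Regular phone: (215) 382-9800
 FAX: (215) 386-7999

May be ordered on American Express, Visa or Mastercard, or paid by check or money order in US dollars, or may be billed (extra charge).

*CBMS member organizations include AMATC, AMS, ASA, ASL, ASSM, IMS, MAA, NAM, NCSM, ORSA, SOA and TIMS.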
From jose at tractatus.siemens.com Wed May 27 22:38:42 1992
From: jose at tractatus.siemens.com (Steve Hanson)
Date: Wed, 27 May 1992 22:38:42 -0400 (EDT)
Subject: Paper Available.
Message-ID:

The following paper (NOT posted on neuroprose) can be obtained by sending a note, with your address, to kic at learning.siemens.com.

To appear in Cognitive Science Conference, July 1992, Indiana University.

DEVELOPMENT OF SCHEMATA DURING EVENT PARSING: Neisser's Perceptual Cycle as a Recurrent Connectionist Network

 Catherine Hanson, Department of Psychology, Temple University, Philadelphia, PA 19122. Phone: 215-787-1279. Email: cat at astro.ocis.temple.edu
 Stephen José Hanson, Learning Systems Department, SIEMENS Research, Princeton, NJ 08540. Phone: 609-734-3360. Email: jose at tractatus.siemens.com

Abstract

Event boundary judgements depend on schema activation and subsequently affect encoding of perceptual action sequences. Past work has focused either on process-level descriptions (Neisser) without computational implications, or on knowledge-structure level descriptions (Schank's "scripts") without also providing process-level descriptions at a computational level. The present work combines both process-level descriptions and learned knowledge structures in a simple recurrent connectionist network. The recurrent connectionist network is used to model humans' event-parsing judgements of two kinds of video-taped event sequences. The network can accommodate the complex event boundary judgement time-series and makes predictions about how schemata are activated, what role they play during encoding, and how they develop during learning.

Areas: Cognitive Psychology, Connectionist Models, AI

Stephen J. Hanson
Learning Systems Department
SIEMENS Research
755 College Rd. East
Princeton, NJ 08540

From ngoddard at carrot.psc.edu Thu May 28 11:39:29 1992
From: ngoddard at carrot.psc.edu (ngoddard@carrot.psc.edu)
Date: Thu, 28 May 92 11:39:29 -0400
Subject: Do you need faster/bigger simulations?
Message-ID: <23931.707067569@carrot.psc.edu>

The Pittsburgh Supercomputing Center (PSC) encourages applications for grants of supercomputing resources from researchers using neural networks. Our Cray YMP is running the NeuralShell, Aspirin and PlaNet simulators. The CM-2 (32k nodes) currently runs two neural network simulators, one being a data-parallel version of the McClelland and Rumelhart PDP simulator. These simulators are also available on the 256-node CM-5 installed at the PSC (currently without vector units). Users can run their own simulation code or modify our simulators; there is some support for porting code. PSC is primarily funded by the National Science Foundation and there is no explicit charge to U.S.-based academic researchers for use of its facilities. International collaboration is encouraged, but each proposal must include a co-principal investigator from a U.S. institution. Both academic and industry researchers are encouraged to apply. The following numbers give an idea of the scale of experiment that can be run using PSC resources. The bottom line is that in a day one can obtain results that would have required months on a workstation. For a large backpropagation network (200-200-200) the Cray simulators reach approximately 20 million online connection-weight updates per second (MCUPS) on a single CPU. This is about 100 times the speed of a DecStation 5000/120 and about 30 times the speed of an IBM 730 on the same problem.
It could be increased by a factor of 8 if all of the Cray YMP's 8 CPUs were dedicated to the task. The McClelland & Rumelhart simulator on the CM-2 achieves about 20 MCUPS (batch) using 8k processors, or about 80 MCUPS using 32k processors. The Zhang CM-2 backpropagation simulator has been benchmarked at about 150 MCUPS using all 32k processors. Current CM-5 performance is around 35 MCUPS (batch) per 128-node partition for the McClelland & Rumelhart simulator. CM-5 performance should improve dramatically once vector units are installed.
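(To translate MCUPS figures into wall-clock time, a back-of-the-envelope estimate can help. The sketch below is purely illustrative: the 200-200-200 topology is the one quoted above, but the training-set size and epoch count are hypothetical numbers chosen for the example:)

    # Rough wall-clock estimate from MCUPS (illustrative numbers only).
    weights = 200 * 200 + 200 * 200        # connections in a 200-200-200 net
    patterns = 10000                       # hypothetical training-set size
    epochs = 100                           # hypothetical number of epochs
    updates = weights * patterns * epochs  # total connection-weight updates

    for name, mcups in [("Cray YMP, 1 CPU", 20.0), ("DecStation 5000/120", 0.2)]:
        hours = updates / (mcups * 1e6) / 3600.0
        print("%s: about %.1f hours" % (name, hours))

(With these assumptions the Cray finishes in about an hour what occupies the workstation for days, which is the point of the paragraph above.)

A service unit on the Cray YMP corresponds to approximately 40 minutes of CPU time using 2 MW of memory; on the CM2 it is one hour's exclusive use of 8k processors. Grants of up to 100 service units are awarded every two weeks; larger grants are awarded quarterly. Descriptions of the types of grants available and the application form can be obtained as described below. NeuralShell, Aspirin and PlaNet run on various platforms including workstations and are available by anonymous ftp as described below. The McClelland & Rumelhart simulator is available with their book "Explorations in Parallel Distributed Processing", 1988. Documentation outlining the facilities provided by each simulator is included in the sources. The suitability of the supercomputer version of each simulator for different types of network sizes and topologies is discussed briefly in PSC documentation that can be obtained by anonymous ftp as described below. It should be possible to develop a neural network model on a workstation and later conduct full-scale testing on the PSC's supercomputers without substantial changes. Instructions for how to use the anonymous ftp facility appear at the end of this message. Further inquiries concerning the Neural and Connectionist Modeling program should be sent to Nigel Goddard at ngoddard at psc.edu (Internet) or ngoddard at cpwpsca (Bitnet) or the address below.

How to get PSC grant information and application materials
----------------------------------------------------------
A shortened form of the application materials in printer-ready PostScript can be obtained via anonymous ftp from ftp.psc.edu (128.182.62.148). The files are in "pub/grants". The file INDEX describes what is in each of the other files. More detailed descriptions of PSC facilities and services are only available in hardcopy. The basic document is the Facilities and Services Guide. Hardcopy materials can be requested from grants at psc.edu, or (412) 268 4960 - ask for the Allocations Coordinator - or:

 Allocations Coordinator
 Pittsburgh Supercomputing Center
 4400 Fifth Avenue
 Pittsburgh, PA 15213

How to get Aspirin/MIGRAINES
----------------------------
The software is available from two FTP sites: CMU's simulator collection (pt.cs.cmu.edu or 128.2.254.155, in directory /afs/cs/project/connect/code) and UCLA's cognitive science machines (polaris.cognet.ucla.edu or 128.97.50.3, in directory "alexis"). The compressed tar file is a little less than 2 megabytes and is called "am5.tar.Z".

How to get PlaNet
-----------------
The software is available from FTP site boulder.colorado.edu (128.138.240.1) in directory "pub/generic-sources", filename PlaNet.5.6.tar.Z

How to get NeuralShell
----------------------
The software is available from FTP site quanta.eng.ohio-state.edu (128.146.35.1) in directory "pub/NeuralShell", filename "NeuralShell.tar".

Generic Anonymous FTP instructions.
------------------------------------
1. Create an FTP connection to the ftp server.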
For example, you would connect to the PSC ftp server "ftp.psc.edu" (128.182.62.148) with the command "ftp ftp.psc.edu" or "ftp 128.182.62.148".
2. Log in as user "anonymous" with password your-username at your-site.
3. Change to the requisite directory, usually /pub/somedir, by typing the command "cd pub/somedir".
4. Set binary mode by typing the command "binary". ** THIS IS IMPORTANT **
5. Optionally look around by typing the command "ls".
6. Get the files you want using the command "get filename", or "mget *" if you want to get all the files.
7. Terminate the ftp connection using the command "quit".
8. If the file ends in .Z, uncompress it with the command "uncompress filename.Z" or "zcat filename.Z > filename". This uncompresses the file and removes the .Z from the filename.
9. If the files end in .tar, extract the tar'ed files with the command "tar xvf filename.tar".
10. If a file ends in .ps, you can print it on a PostScript printer using the command "lpr -s -Pprintername filename.ps".
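(As a concrete illustration of steps 1-10, a session that fetches the PSC grant-information index mentioned above would look roughly like this; the pub/grants directory and INDEX file are the ones described earlier, and prompts are abbreviated:)

    unix> ftp ftp.psc.edu
    Name: anonymous
    Password: yourname at your.site
    ftp> cd pub/grants
    ftp> binary
    ftp> get INDEX
    ftp> quit

From berenji at ptolemy.arc.nasa.gov Thu May 28 20:48:53 1992
From: berenji at ptolemy.arc.nasa.gov (Hamid Berenji)
Date: Thu, 28 May 92 17:48:53 PDT
Subject: ICNN'93 Call for papers
Message-ID:

1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS
San Francisco, California, March 28 - April 1, 1993

The IEEE Neural Networks Council is pleased to announce its 1993 International Conference on Neural Networks (ICNN'93) to be held in San Francisco, California from March 28 to April 1, 1993. ICNN'93 will be held concurrently with the Second IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'93). Participants will be able to attend the technical events of both meetings. ICNN '93 will be devoted to the discussion of basic advances and applications of neurobiological systems, neural networks, and neural computers. Topics of interest include:
 * Neurodynamics
 * Associative Memories
 * Intelligent Neural Networks
 * Invertebrate Neural Networks
 * Neural Fuzzy Systems
 * Evolutionary Programming
 * Optical Neurocomputers
 * Supervised Learning
 * Unsupervised Learning
 * Sensation and Perception
 * Genetic Algorithms
 * Virtual Reality & Neural Networks
 * Applications to:
   - Image Processing and Understanding
   - Optimization
   - Control
   - Robotics and Automation
   - Signal Processing

ORGANIZATION:
 General Chair: Enrique H. Ruspini
 Program Chairs: Hamid R. Berenji, Elie Sanchez, Shiro Usui

ADVISORY BOARD: S.-i. Amari J. A. Anderson J. C. Bezdek Y. Burnod L. Cooper R. C. Eberhart R. Eckmiller J. Feldman M. Feldman F. Fukushima R. Hecht-Nielsen J. Holland C. Jorgensen T. Kohonen C. Lau C. Mead N. Packard D. Rumelhart B. Skyrms L. Stark A. Stubberud H. Takagi P. Treleaven B. Widrow

PROGRAM COMMITTEE: K. Aihara I. Aleksander L.B. Almeida G. Andeen C. Anderson J. A. Anderson A. Andreou P. Antsaklis J. Barhen B. Bavarian H. R. Berenji A. Bergman J. C. Bezdek H. Bourlard D. E. Brown J. Cabestany D. Casasent S. Colombano R. de Figueiredo M. Dufosse R. C. Eberhart R. M. Farber J. Farrell J. Feldman W. Fisher W. Fontana A.A. Frolov T. Fukuda C. Glover K. Goser D. Hammerstrom M. H. Hassoun J. Herault J. Hertz D. Hislop A. Iwata M. Jordan C. Jorgensen L. P. Kaelbling P. Khedkar S. Kitamura B. Kosko J. Koza C. Lau C. Lucas R. J. Marks J. Mendel W.T. Miller M. Mitchell S. Miyake A.F. Murray J.-P. Nadal T. Nagano K. S. Narendra R. Newcomb E. Oja N. Packard A. Pellionisz P. Peretto L. Personnaz A. Prieto D. Psaltis H. Rauch T. Ray M. B. Reid E. Sanchez J. Shavlik B. Sheu S. Shinomoto J. Shynk P. K. Simpson N. Sonehara D. F. Specht A.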
Stubberud N. Sugie H. Takagi S. Usui D. White H. White R. Williams E. Yodogawa S. Yoshizawa S. W. Zucker

ORGANIZING COMMITTEE:
 PUBLICITY: H.R. Berenji
 EXHIBITS: W. Xu
 TUTORIALS: J.C. Bezdek
 VIDEO PROCEEDINGS: A. Bergman
 FINANCE: R. Tong

SPONSORING SOCIETIES: ICNN'93 is sponsored by the IEEE Neural Networks Council. Constituent Societies:
 * IEEE Circuits and Systems Society
 * IEEE Communications Society
 * IEEE Control Systems Society
 * IEEE Engineering in Medicine & Biology Society
 * IEEE Industrial Electronics Society
 * IEEE Industry Applications Society
 * IEEE Information Theory Society
 * IEEE Lasers and Electro-Optics Society
 * IEEE Oceanic Engineering Society
 * IEEE Robotics and Automation Society
 * IEEE Signal Processing Society
 * IEEE Systems, Man, and Cybernetics Society

CALL FOR PAPERS
The program committee cordially invites interested authors to submit papers dealing with any aspect of research and applications related to the use of neural models. Papers must be written in English and must be received by SEPTEMBER 21, 1992. Six copies of the paper must be submitted, and the paper should not exceed 8 pages including figures, tables, and references. Papers should be prepared on 8.5" x 11" white paper with 1" margins on all sides, using a typewriter or letter-quality printer, in one-column format, in Times or similar style, 10 points or larger, and printed on one side of the paper only. Please include the title, authors' name(s) and affiliation(s) at the top of the first page, followed by an abstract. FAX submissions are not acceptable. Please send submissions prior to the deadline to: Dr. Hamid Berenji, AI Research Branch, MS 269-2, NASA Ames Research Center, Moffett Field, California 94035.

CALL FOR VIDEOS: The IEEE Neural Networks Council is pleased to announce its first Video Proceedings program, intended to present new and significant experimental work in the fields of artificial neural networks and fuzzy systems, so as to enhance and complement results presented in the Conference Proceedings. Interested researchers should submit a 2 to 3 minute video segment (preferred formats: 3/4" Betacam or Super VHS) and a one-page information sheet (including title, author, affiliation, address, a 200-word abstract, 2 to 3 references, and a short acknowledgment, if needed), prior to September 21, 1992, to Meeting Management, 5665 Oberlin Drive, Suite 110, San Diego, CA 92121. We encourage those interested in participating in this program to write to this address for important suggestions to help in the preparation of their submission.

TUTORIALS:
 The Computational Brain: Biological Neural Networks - Terrence J. Sejnowski, The Salk Institute
 Evolutionary Programming - David Fogel, Orincon Corporation
 Expert Systems and Neural Networks - George Lendaris, Portland State University
 Genetic Algorithms and Neural Networks - Darrell Whitley, Colorado State University
 Introduction to Biological and Artificial Neural Networks - Steven Rogers, Air Force Institute of Technology
 Suggestions from Cognitive Science for Neural Network Applications - James A. Anderson, Department of Cognitive and Linguistic Sciences, Brown University

EXHIBITS: ICNN '93 will be held concurrently with the Second IEEE International Conference on Fuzzy Systems (FUZZ-IEEE '93). ICNN '93 and FUZZ-IEEE '93 are the largest conferences and trade shows in their fields. Participants in either conference will be able to attend the combined exhibit program.
We anticipate an extraordinary trade show offering a unique opportunity to become acquainted with the latest developments in products based on neural-network and fuzzy-system techniques. Interested exhibitors are requested to contact the Chairman, Exhibits, ICNN '93 and FUZZ-IEEE '93, Wei Xu, at Telephone (408) 428-1888, FAX (408) 428-1884.

FOR ADDITIONAL INFORMATION, CONTACT:
 Meeting Management
 5665 Oberlin Drive, Suite 110
 San Diego, CA 92121
 Tel. (619) 453-6222, FAX (619) 535-3880

-------

From haussler at cse.ucsc.edu Fri May 29 15:37:38 1992
From: haussler at cse.ucsc.edu (David Haussler)
Date: Fri, 29 May 1992 12:37:38 -0700
Subject: COLT `92 conference program
Message-ID: <199205291937.AA28632@arapaho.ucsc.edu>

COLT '92
Workshop on Computational Learning Theory
Sponsored by ACM SIGACT and SIGART
July 27 - 29, 1992
University of Pittsburgh, Pittsburgh, Pennsylvania

GENERAL INFORMATION
 Registration & Reception: Sunday, 7:00 - 10:00 pm, 2M56-2P56 Forbes Quadrangle
 Conference Banquet: Monday, 7:00 pm
 The conference sessions will be held in the William Pitt Union.
 Late Registration, etc.: Kurtzman Room (during technical sessions)
 Lectures & Impromptu Talks: Ballroom
 Poster Sessions: Assembly Room

SCHEDULE OF TALKS

Sunday, July 26
 RECEPTION: 7:00 - 10:00 pm

Monday, July 27

SESSION 1: 8:45 - 10:05 am
 8:45 - 9:05 Learning Boolean read-once formulas with arbitrary symmetric and constant fan-in gates, by Nader H. Bshouty, Thomas Hancock, and Lisa Hellerstein
 9:05 - 9:25 On-line Learning of Rectangles, by Zhixiang Chen and Wolfgang Maass
 9:25 - 9:45 Cryptographic lower bounds on learnability of AC^1 functions on the uniform distribution, by Michael Kharitonov
 9:45 - 9:55 Learning hierarchical rule sets, by Jyrki Kivinen, Heikki Mannila and Esko Ukkonen
 9:55 - 10:05 Random DFA's can be approximately learned from sparse uniform examples, by Kevin Lang

SESSION 2: 10:30 - 11:50 am
 10:30 - 10:50 An O(n^loglog n) Learning Algorithm for DNF, by Yishay Mansour
 10:50 - 11:10 A technique for upper bounding the spectral norm with applications to learning, by Mihir Bellare
 11:10 - 11:30 Exact learning of read-k disjoint DNF and not-so-disjoint DNF, by Howard Aizenstein and Leonard Pitt
 11:30 - 11:40 Learning k-term DNF formulas with an incomplete membership oracle, by Sally A. Goldman and H. David Mathias
 11:40 - 11:50 Learning DNF formulae under classes of probability distributions, by Michele Flammini, Alberto Marchetti-Spaccamela and Ludek Kucera

SESSION 3: 1:45 - 3:05 pm
 1:45 - 2:05 Bellman strikes again -- the rate of growth of sample complexity with dimension for the nearest neighbor classifier, by Santosh S. Venkatesh, Robert R. Snapp, and Demetri Psaltis
 2:05 - 2:25 A theory for memory-based learning, by Jyh-Han Lin and Jeffrey Scott Vitter
 2:25 - 2:45 Learnability of description logics, by William W. Cohen and Haym Hirsh
 2:45 - 2:55 PAC-learnability of determinate logic programs, by Sašo Džeroski, Stephen Muggleton and Stuart Russell
 2:55 - 3:05 Polynomial time inference of a subclass of context-free transformations, by Hiroki Arimura, Hiroki Ishizaka, and Takeshi Shinohara

SESSION 4: 3:30 - 4:40 pm
 3:30 - 3:50 A training algorithm for optimal margin classifiers, by Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik
 3:50 - 4:10 The learning complexity of smooth functions of a single variable, by Don Kimber and Philip M.
Long
 4:10 - 4:20 Absolute error bounds for learning linear functions online, by Ethan Bernstein
 4:20 - 4:30 Probably almost discriminative learning, by Kenji Yamanishi
 4:30 - 4:40 PAC Learning with generalized samples and an application to stochastic geometry, by S.R. Kulkarni, S.K. Mitter, J.N. Tsitsiklis and O. Zeitouni

POSTER SESSION #1 & IMPROMPTU TALKS: 5:00 - 6:30 pm
BANQUET: 7:00 pm

Tuesday, July 28

SESSION 5: 8:45 - 10:05 am
 8:45 - 9:05 Degrees of inferability, by P. Cholak, R. Downey, L. Fortnow, W. Gasarch, E. Kinber, M. Kummer, S. Kurtz, and T. Slaman
 9:05 - 9:25 On learning limiting programs, by John Case, Sanjay Jain, and Arun Sharma
 9:25 - 9:45 Breaking the probability 1/2 barrier in FIN-type learning, by Robert Daley, Bala Kalyanasundaram, and Mahendran Velauthapillai
 9:45 - 9:55 Case based learning in inductive inference, by Klaus P. Jantke
 9:55 - 10:05 Generalization versus classification, by Rolf Wiehagen and Carl Smith

SESSION 6: 10:30 - 11:50 am
 10:30 - 10:50 Learning switching concepts, by Avrim Blum and Prasad Chalasani
 10:50 - 11:10 Learning with a slowly changing distribution, by Peter L. Bartlett
 11:10 - 11:30 Dominating distributions and learnability, by Gyora M. Benedek and Alon Itai
 11:30 - 11:40 Polynomial uniform convergence and polynomial-sample learnability, by Alberto Bertoni, Paola Campadelli, Anna Morpurgo, and Sandra Panizza
 11:40 - 11:50 Learning functions by simultaneously estimating errors, by Kevin Buescher and P.R. Kumar

INVITED TALK: 1:45 - 2:45 pm: Reinforcement learning, by Andy Barto, University of Massachusetts

SESSION 7: 3:10 - 4:40 pm
 3:10 - 3:30 On learning noisy threshold functions with finite precision weights, by R. Meir and J.F. Fontanari
 3:30 - 3:50 Query by committee, by H.S. Seung, M. Opper, H. Sompolinsky
 3:50 - 4:00 A noise model on learning sets of strings, by Yasubumi Sakakibara and Rani Siromoney
 4:00 - 4:10 Language learning from stochastic input, by Shyam Kapur and Gianfranco Bilardi
 4:10 - 4:20 On exact specification by examples, by Martin Anthony, Graham Brightwell, Dave Cohen and John Shawe-Taylor
 4:20 - 4:30 A computational model of teaching, by Jeffrey Jackson and Andrew Tomkins
 4:30 - 4:40 Approximate testing and learnability, by Kathleen Romanik

IMPROMPTU TALKS: 5:00 - 6:00 pm
BUSINESS MEETING: 8:00 pm
POSTER SESSION #2: 9:00 - 10:30 pm

Wednesday, July 29

SESSION 8: 8:45 - 9:45 am
 8:45 - 9:05 Characterizations of learnability for classes of 0,...,n-valued functions, by Shai Ben-David, Nicolò Cesa-Bianchi and Philip M. Long
 9:05 - 9:25 Toward efficient agnostic learning, by Michael J. Kearns, Robert E. Schapire, and Linda Sellie
 9:25 - 9:45 Approximating Bayes decisions by additive estimations, by Svetlana Anoulova, Paul Fischer, Stefan Polt, and Hans Ulrich Simon

SESSION 9: 10:10 - 10:50 am
 10:10 - 10:30 On the role of procrastination for machine learning, by Rusins Freivalds and Carl Smith
 10:30 - 10:50 Types of monotonic language learning and their characterization, by Steffen Lange and Thomas Zeugmann

SESSION 10: 11:10 - 11:50 am
 11:10 - 11:30 An improved boosting algorithm and its implications on learning complexity, by Yoav Freund
 11:30 - 11:50 Some weak learning results, by David P. Helmbold and Manfred K. Warmuth

SESSION 11: 1:45 - 2:45 pm
 1:45 - 2:05 Universal sequential learning and decision from individual data sequences, by Neri Merhav and Meir Feder
 2:05 - 2:25 Robust trainability of single neurons, by Klaus-U. Hoffgen and Hans-U. Simon
 2:25 - 2:45 On the computational power of neural nets, by Hava T.
Siegelmann and Eduardo D. Sontag

===============================================================================

ADDITIONAL INFORMATION
To receive complete information regarding conference registration and accommodations, contact Betty Brannick: E-mail: brannick at cs.pitt.edu, PHONE: (412) 624-8493, FAX: (412) 624-8854. Please specify whether you want the information sent in PLAIN text or LATEX format. NOTE: Attendees must register BY JUNE 19 TO AVOID THE LATE REGISTRATION FEE.

From rich at gte.com Thu May 28 13:03:55 1992
From: rich at gte.com (Rich Sutton)
Date: Thu, 28 May 92 13:03:55 -0400
Subject: Reinforcement Learning Special Issue of Machine Learning
Message-ID: <9205281703.AA06399@bunny.gte.com>

Those of you interested in reinforcement learning may want to get a copy of the special issue on this topic of the journal Machine Learning. It just appeared this week. Here's the table of contents:

Vol. 8, No. 3/4 of MACHINE LEARNING (May, 1992)

Introduction: The Challenge of Reinforcement Learning ----- Richard S. Sutton (Guest Editor)
Q-Learning ----- Christopher J. C. H. Watkins and Peter Dayan
Practical Issues in Temporal Difference Learning ----- Gerald Tesauro
Transfer of Learning by Composing Solutions for Elemental Sequential Tasks ----- Satinder Pal Singh
Simple Gradient-Estimating Algorithms for Connectionist Reinforcement Learning ----- Ronald J. Williams
Temporal Differences: TD(lambda) for General Lambda ----- Peter Dayan
Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching ----- Long-ji Lin
A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-Like Environments ----- Jose del R. Millan and Carme Torras

Copies can be ordered from:

 North America:
  Kluwer Academic Publishers, Order Department
  P.O. Box 358, Accord Station
  Hingham, MA 02018-0358
  tel. 617-871-6600, fax 617-871-6528

 Outside North America:
  Kluwer Academic Publishers, Order Department
  P.O. Box 322
  3300 AH Dordrecht, The Netherlands

From mjolsness-eric at CS.YALE.EDU Fri May 29 12:51:56 1992
From: mjolsness-eric at CS.YALE.EDU (Eric Mjolsness)
Date: Fri, 29 May 92 12:51:56 EDT
Subject: neural programmer/analyst job opening
Message-ID: <199205291651.AA07885@EXPONENTIAL.SYSTEMSZ.CS.YALE.EDU>

Programmer/Analyst Position in Artificial Neural Networks

The Yale Center for Theoretical and Applied Neuroscience (CTAN) and the Computer Science Department (Yale University, New Haven CT) are offering a challenging position in software engineering in support of new mathematical approaches to artificial neural networks, as described below. (The official job description is very close to this and is posted at Human Resources.)

1. Basic Function: Designer, programmer, and consultant for artificial neural network software at Yale's Center for Theoretical and Applied Neuroscience (CTAN) and the Computer Science Department.

2. Major duties:
 (a) To extend and implement a design for a new neural net compiler and simulator based on mathematical optimization and computer algebra, using serial and parallel computers.
 (b) To run and analyse computer experiments using this and other software tools in a variety of application projects, including image processing and computer vision.
 (c) To support other work in artificial neural networks at CTAN, including the preparation of software demonstrations.

3. Position Specifications:
 (a) Education: BA, including calculus, linear algebra, differential equations. Helpful: mathematical optimization.
 (b) Experience: programming experience in C under UNIX.
 Some of the following: C++ or other object-oriented language, symbolic programming, parallel programming, scientific computing, workstation graphics, circuit simulation, neural nets, UNIX system administration.
 (c) Skills: high-level programming languages; medium to large-scale software engineering; good mathematical literacy.

Preferred starting date: July 1, 1992.

For information or to submit an application (your resume and any supporting materials), please write:

 Eric Mjolsness
 Yale Computer Science Department
 P.O. Box 2158 Yale Station
 New Haven CT 06520
 (mjolsness at cs.yale.edu)

Any application must also be sent to Human Resources, with the position identification "C 7-20073", at this address:

 Jeffrey Drexler
 Department of Human Resources
 Yale University
 155 Whitney Ave.
 New Haven, CT 06520

-------

From ahg at eng.cam.ac.uk Sat May 30 11:09:39 1992
From: ahg at eng.cam.ac.uk (A. H. Gee)
Date: Sat, 30 May 92 11:09:39 BST
Subject: Paper in neuroprose
Message-ID: <1457.9205301009@dsl.eng.cam.ac.uk>

************** PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS ****************

The following technical report has been placed in the neuroprose archives at Ohio State University:

POLYHEDRAL COMBINATORICS AND NEURAL NETWORKS
 Andrew Gee and Richard Prager
 Technical Report CUED/F-INFENG/TR 100
 Cambridge University Engineering Department
 Trumpington Street
 Cambridge CB2 1PZ
 England

Abstract

The often disappointing performance of optimizing neural networks can be partly attributed to the rather ad hoc manner in which problems are mapped onto them for solution. In this paper a rigorous mapping is described for quadratic 0-1 programming problems with linear equality and inequality constraints, this being the most general class of problem such networks can solve. The problem's constraints define a polyhedron P containing all the valid solution points, and the mapping guarantees strict confinement of the network's state vector to P. However, forcing convergence to a 0-1 point within P is shown to be generally intractable, rendering the Hopfield and similar models inapplicable to the vast majority of problems. Some alternative dynamics, based on the principle of tabu search, are presented as a more coherent approach to general problem solving. When tested on a large database of solved problems, the alternative dynamics produced some very encouraging results.
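(To make the problem class concrete: quadratic 0-1 programming with linear equality and inequality constraints can be written, in generic notation rather than necessarily the report's, as

    \min_{x \in \{0,1\}^n} \; x^T Q x + c^T x
    \quad \mbox{subject to} \quad A x = b, \;\; D x \le e

and the polyhedron P mentioned in the abstract is the feasible region of the continuous relaxation, P = { x in [0,1]^n : Ax = b, Dx <= e }.)

************************ How to obtain a copy ************************

a) Via FTP:

    unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
    Name: anonymous
    Password: (type your email address)
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get gee.poly.ps.Z
    ftp> quit
    unix> uncompress gee.poly.ps.Z
    unix> lpr gee.poly.ps (or however you print PostScript)

Please note that a couple of the figures in the paper were produced on an Apple Mac, and the resulting PostScript is not quite standard. People using an Apple LaserWriter should have no problems though.

b) Via postal mail: Request a hardcopy from Andrew Gee, Speech Laboratory, Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, England.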
or email me: ahg at eng.cam.ac.uk

From D.M.Peterson at computer-science.birmingham.ac.uk Sat May 30 18:19:35 1992
From: D.M.Peterson at computer-science.birmingham.ac.uk (Donald Peterson)
Date: Sat, 30 May 92 18:19:35 BST
Subject: Philosophy and the Cognitive Sciences
Message-ID: <682.9205301719@christopher-robin.cs.bham.ac.uk>

________________________________________________________________________
________________________________________________________________________

CONFERENCE REGISTRATION INFORMATION

Royal Institute of Philosophy
PHILOSOPHY and the COGNITIVE SCIENCES
The University of Birmingham
September 11-14 1992

________________________________________________________________________
________________________________________________________________________

The conference will address a variety of questions concerning the foundations of cognitive science and the philosophical importance of the models of mind which it produces. Topics will include: connectionism and classical AI, rules and representation, reasoning, concepts, rationality, multiple personality, the mind as a control system, blindsight, etc.

Speakers will include: Stephen Stich, Terry Horgan, Michael Tye, Margaret Boden, Aaron Sloman, Brian McLaughlin, Andrew Woodfield, Martin Davies, Antony Galton, Stephen Mills, Niels Ole Bernsen.

Papers given at the conference will be published in a volume produced by Cambridge University Press, as a supplement to the journal _Philosophy_.

Copies of this document, together with titles as they become available, can be obtained by emailing the auto-reply service: rip92 at bham.ac.uk

REGISTRATION

To attend the conference, please fill in the form below and return it by post (not email), together with payment by cheque in pounds sterling, to: Royal Institute of Philosophy Conference, Department of Philosophy, The University of Birmingham, Birmingham, B15 2TT, U.K. Delegates will be considered registered when cheques have been cleared.

The total for Registration, Bed and Breakfast and all meals is 122.50 pounds. For registered students and the unwaged, the Registration Fee will be waived if evidence of status is sent. For a limited number of postgraduate students, all other charges will be at half price: if you wish to apply for this reduction, please write to the organisers indicating your research topic and reason for attending the conference. For bookings received after 7th August we cannot guarantee accommodation, and for bookings received after 10th August an additional charge of 10 pounds has to be made.

Organisers: Chris Hookway (Philosophy), Donald Peterson (Cognitive Science). 30 May 1992.
______________________________________________________________________

			 REGISTRATION FORM

		   Royal Institute of Philosophy

	       PHILOSOPHY and the COGNITIVE SCIENCES

		  The University of Birmingham

		     September 11-14 1992
______________________________________________________________________

REQUIREMENTS

Registration Fee		25.00	_______
Late Registration Fee		10.00	_______
Bed and Breakfast		64.00	_______
Dinner 11 September		 7.50	_______
Lunch 12 September		 5.50	_______
Dinner 12 September		 7.50	_______
Lunch 13 September		 5.50	_______
Dinner 13 September		 7.50	_______
All Meals			33.50	_______

Total					_______

Vegetarian meals (please tick)		_______

PERSONAL DETAILS

Name		___________________________________________

Address		___________________________________________

		___________________________________________

		___________________________________________

Telephone No.	___________________________________________

Fax No.		___________________________________________

Email		___________________________________________

REGISTRATION

I wish to register for the Royal Institute of Philosophy Conference
Philosophy and the Cognitive Sciences, and enclose a cheque in pounds
sterling payable to C.J. Hookway (Royal Institute of Philosophy
Conference) for ________

Signed		___________________________________________


From belew%FRULM63.BITNET at pucc.Princeton.EDU Fri May 15 08:55:04 1992
From: belew%FRULM63.BITNET at pucc.Princeton.EDU (Rick BELEW)
Date: Fri, 15 May 92 14:55:04 +0200
Subject: Paradigmatic over-fitting
Message-ID: 

There has been a great deal of discussion in the GA community
concerning the use of particular functions as a "test suite" against
which different methods (e.g., of performing cross-over) might be
compared.  The GA is perhaps particularly well-suited to this mode of
analysis, given the way arbitrary "evaluation" functions are neatly
cleaved from characteristics of the algorithm itself.  We have argued
before that dichotomizing the GA/evaluation-function relationship in
this fashion is inappropriate [W. Hart & R. K. Belew, ICGA'91].  This
note, however, is intended to focus on the general use of test sets,
in any fashion.

Ken De Jong set an ambitious precedent with his thesis [K. De Jong,
U. Michigan, 1975].  As part of a careful empirical investigation of
several major dimensions of the GA, De Jong identified a set of five
functions that seemed to him (at that time) to provide a wide variety
of test conditions affecting the algorithm's performance.  Since
then, the resulting "De Jong functions F1-F5" have assumed almost
mythic status, and continue to be one of the primary ways in which
new GA techniques are evaluated.

Within the last several years, a number of researchers have
re-evaluated the De Jong test suite and found it wanting in one
respect or another.  Some have felt that it does not provide a real
test for cross-over, some that the set does not accurately
characterize the general space of evaluation functions, and some that
naturally occurring problems provide a much better evaluation than
anything synthesized on theoretical grounds.  There is merit to each
of these criticisms, and all of this discussion has furthered our
understanding of the GA.  But it seems to me that somewhere along the
line the original De Jong suite became vilified.  It isn't just that
"hindsight is always 20-20."
I want to argue that De Jong's functions WERE excellent, so good that
they now ARE a victim of their own success.  My argument goes beyond
any particular features of these functions or the GA, and therefore
won't make historical references beyond those just sketched.
\footnote{If I'm right, though, it would be an interesting exercise
in the history of science to confirm it, with a careful analysis of
just which test functions were used, when, by whom, with citation
counting, etc.}  It will rely instead on some fundamental facts from
machine learning.

Paradigmatic Over-fitting

"Over-fitting" is a widely recognized phenomenon in machine learning
(and before that, statistics).  It refers to the tendency of learning
algorithms to force the rule induced from a training corpus to agree
with this data set too closely, at the expense of generalization to
other instances.  We have all probably seen the example of the same
data set fit with two polynomials, one that is correct and a second,
higher-order one that also attempts to fit the data's noise.  A more
recent example is provided by some neural networks, which generalize
much better to unseen data if their training is stopped a bit early,
even though further epochs of training would continue to reduce the
observed error on the training set.

I suggest entire scientific disciplines can suffer a similar fate.
Many groups of scientists have found it useful to identify a
particular data set, test suite or "model animal" (i.e., particular
species or even genetic strains that become {\em de rigueur} for
certain groups of biologists).  In fact, collective agreement as to
the validity and utility of scientific artifacts like these is
critically involved in defining the "paradigms" (a la Kuhn) in which
scientists work.

Scientifically, there are obvious benefits to coordinated use of
common test sets.  For example, a wide variety of techniques can be
applied to common data and the results of these various experiments
can be compared directly.  But if science is also seen as an
inductive process, over-fitting suggests there may also be dangers
inherent in this practice.  Initially, standardized test sets are
almost certain to help any field evaluate alternative methods;
suppose they show that technique A1 is superior to B1 and C1.  But as
soon as the results of these experiments are used to guide the
development ("skew the sampling") of new methods (to A2, A3 and A4,
for example), our confidence in the results of this second set of
experiments as accurate reflections of what will be found generally
true must diminish.  Over time, then, the same data set that
initially served the field well can come to actually impede progress
by creating a false characterization of the real problems to be
solved.  The problem is that the time-scale of scientific induction
is so much slower than that of our computational methods that the
biases resulting from "paradigmatic over-fitting" may be very
difficult to recognize.

Machine learning also offers some remedies to the dilemma of
over-training.  The general idea is to use more than one data set for
training or, more accurately, to partition the available training
data into subsets.  Then, portions of the training set can be
methodically held back in order to compare the result induced from
one subset with that induced from another (via cross-validation,
jack-knifing, etc. --- see the sketch below).  How might this
procedure be applied to science?  It would be somewhat artificial to
purposefully identify but then hold back some data sets, perhaps for
years.
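To make the hold-back idea concrete, here is a minimal sketch ---
hypothetical data and a single held-out split, with Python and numpy
assumed; full cross-validation would simply rotate the held-out
portion through the data:

    # Minimal sketch: over-fitting exposed by a held-out subset.
    # Hypothetical data; no real test suite is being modeled.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Training corpus": noisy samples of a simple quadratic.
    x = np.linspace(-1.0, 1.0, 30)
    y = x**2 + 0.1 * rng.standard_normal(x.size)

    # Hold back every third point instead of fitting to everything.
    held_out = np.arange(x.size) % 3 == 0
    train = ~held_out

    for degree in (2, 12):
        coeffs = np.polyfit(x[train], y[train], degree)
        fit = np.polyval(coeffs, x)
        train_err = np.mean((fit[train] - y[train]) ** 2)
        test_err = np.mean((fit[held_out] - y[held_out]) ** 2)
        # The high-order fit typically wins on the points it has
        # seen, but loses on the points held back from it.
        print(degree, train_err, test_err)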
More natural strategies with about the same effect seem workable,
however.  First, a field should maintain MULTIPLE data sets, to
minimize aberrations due to any one.  Second, each of these can only
be USED FOR A LIMITED TIME, to be replaced by new ones.

The problem is that even these modest conventions require significant
"discipline discipline."  Accomplishing any coordination across
independent-minded scientists is difficult, and the use of shared
data sets is a fairly effortless way to accomplish useful
coordination.  Data sets are difficult to obtain in the first place,
and convincing others to become familiar with them even harder; these
become inertial forces that make scientists reluctant to part with
the classic data sets they know well.  Evaluating results across
multiple data sets also creates new problems for reviewers and
editors.  And, because the time-scale of scientific induction is so
long relative to the careers of the scientists involved, the costs
associated with all these concrete problems, relative to the
theoretical ones due to paradigmatic over-fitting, will likely seem
huge: "Why should I give up familiar data sets when we previously
agreed to their validity, especially since my methods seem to be
working better and better on them?!"

Back to the GA

The GA community, at present, seems fairly healthy according to this
analysis.  In addition to De Jong's, people like Dave Ackley have
generated very useful sets of test functions.  There are now test
suites that have many desirable properties, like being "GA-hard,"
"GA-deceptive," "royal road," "practical" and "real-world."  So there
are clearly plenty of tests.  For this group, my main point is that
this plurality is very desirable.  I too am dissatisfied with De
Jong's test suite, but I am equally dissatisfied with any ONE of the
more recently proposed alternatives.  I suggest it's time we moved
beyond debates about whose tests are most illuminating.  If we ever
did pick just one set to use for testing GAs, it would --- like De
Jong's --- soon come to warp the development of GAs according to ITS
inevitable biases.  What we need are more sophisticated analyses and
methodologies that allow a wide variety of testing procedures, each
showing something different.

Flame off,
Rik Belew

[I owe the basic insight --- that an entire discipline can be seen to
over-fit to a limited training corpus --- to conversations with
Richard Palmer and Rich Sutton at the Santa Fe Institute in March,
1992.  Of course, all blame for damage occurring as the neat little
insight was stretched into this epistle concerning GA research is
mine alone.]

Richard K. Belew
Computer Science & Engr. Dept. (0014)
Univ. California - San Diego
La Jolla, CA 92093
rik at cs.ucsd.edu

From now until about 20 July I will be working in Paris:

	c/o J-A. Meyer
	Groupe de BioInformatique
	URA686, Ecole Normale Superieure
	46 rue d'Ulm
	75230 PARIS Cedex 05
	France
	Tel: 44 32 36 23
	Fax: 44 32 39 01
	belew at wotan.ens.fr